mr2.operators.SymmetrizedGradientOp
- class mr2.operators.SymmetrizedGradientOp[source]
Bases: LinearOperator

Symmetrized gradient operator.
This operator computes the symmetric part of the discrete gradient. The first axis of the input tensor indexes components and must satisfy v.shape[0] == len(dim). Directional finite differences are computed with FiniteDifferenceOp along the axes in dim and then symmetrized over the first two axes:
\[E(v) = \tfrac{1}{2}\,(\nabla v + (\nabla v)^{\mathsf T}), \qquad E(v)_{i,j} = \tfrac{1}{2}\,((\nabla v)_{i,j} + (\nabla v)_{j,i}).\]
For input shape (len(dim), ...), the output shape is (len(dim), len(dim), ...).
- __init__(dim: Sequence[int], mode: Literal['central', 'forward', 'backward'] = 'backward', pad_mode: Literal['zeros', 'circular'] = 'zeros') → None[source]
Symmetrized gradient operator.
- Parameters:
  - dim (Sequence[int]) – Axes along which finite differences are computed. Axis 0 is reserved for vector components and must not be part of dim.
  - mode (Literal['central', 'forward', 'backward'], default: 'backward') – Finite-difference scheme ('forward', 'backward', or 'central').
  - pad_mode (Literal['zeros', 'circular'], default: 'zeros') – Boundary handling used by finite differences ('zeros' or 'circular').
- Raises:
  - ValueError – If dim contains axis 0.
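The shape convention above can be illustrated with a plain NumPy sketch. The helper names `forward_diff` and `symmetrized_gradient` are hypothetical, and zero-padded forward differences are assumed; the actual mr2 implementation operates on torch tensors via FiniteDifferenceOp.

```python
import numpy as np

def forward_diff(u, axis):
    """Forward finite difference with zero boundary (illustrative, not the mr2 API)."""
    d = np.roll(u, -1, axis=axis) - u
    idx = [slice(None)] * u.ndim
    idx[axis] = -1
    d[tuple(idx)] = -u[tuple(idx)]  # values outside the grid are treated as 0
    return d

def symmetrized_gradient(v, dims):
    """E(v)[i, j] = 0.5 * (d_j v_i + d_i v_j) for a vector field v of shape (len(dims), *S)."""
    n = len(dims)
    # grad[i, j] = finite difference of component i along dims[j]
    grad = np.stack([np.stack([forward_diff(v[i], dims[j]) for j in range(n)])
                     for i in range(n)])
    return 0.5 * (grad + grad.swapaxes(0, 1))  # symmetrize over the first two axes

v = np.random.default_rng(0).standard_normal((2, 8, 8))  # 2-component field on an 8x8 grid
E = symmetrized_gradient(v, dims=(-2, -1))
print(E.shape)  # (2, 2, 8, 8): input (len(dim), ...) maps to (len(dim), len(dim), ...)
```

Negative entries in `dims` are used here so the same axis indices apply after the component axis is dropped.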
- property H: LinearOperator[source]
Adjoint operator.
Obtains the adjoint of an instance of this operator as an AdjointLinearOperator, which is itself a LinearOperator that can be applied to tensors.
Note:
linear_operator.H.H == linear_operator
- property gram: LinearOperator[source]
Gram operator.
For a LinearOperator \(A\), the self-adjoint Gram operator is defined as \(A^H A\).
Note
This is the inherited default implementation.
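For intuition, with a matrix A standing in for the operator, gram applies \(A^H A\), i.e. the adjoint after the forward operator. A NumPy analogy, not the mr2 API:

```python
import numpy as np

# For an operator represented by a matrix A, the Gram operator is A^H A:
# gram(x) == adjoint(forward(x)); it is self-adjoint and positive semi-definite.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))

def forward(x):
    return A @ x

def adjoint(y):
    return A.conj().T @ y

def gram(x):
    return adjoint(forward(x))

G = A.conj().T @ A          # explicit Gram matrix for comparison
x = rng.standard_normal(3)
print(np.allclose(gram(x), G @ x))   # True: gram applies A^H A
print(np.allclose(G, G.conj().T))    # True: the Gram operator is self-adjoint
```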
- __call__(v: Tensor) → tuple[Tensor][source]
Apply the symmetrized gradient.
- Parameters:
  - v (Tensor) – Input tensor with shape (len(dim), ...).
- Returns:
  Symmetrized gradient with shape (len(dim), len(dim), ...).
- adjoint(w: Tensor) → tuple[Tensor][source]
Apply the adjoint of the symmetrized gradient.
- Parameters:
  - w (Tensor) – Symmetrized-gradient tensor with shape (len(dim), len(dim), *S).
- Returns:
  Tensor with shape (len(dim), ...).
- Raises:
  - ValueError – If the first two dimensions of w do not equal len(dim).
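A standard way to verify an adjoint is the dot-product test \(\langle E(v), w\rangle = \langle v, E^H(w)\rangle\). A self-contained NumPy sketch under assumed zero-padded forward differences (the helper names are illustrative, not the mr2 API):

```python
import numpy as np

def forward_diff(u, axis):
    d = np.roll(u, -1, axis=axis) - u
    idx = [slice(None)] * u.ndim
    idx[axis] = -1
    d[tuple(idx)] = -u[tuple(idx)]  # zero boundary
    return d

def forward_diff_adjoint(w, axis):
    # adjoint of the zero-padded forward difference: a negated backward difference
    d = np.roll(w, 1, axis=axis) - w
    idx = [slice(None)] * w.ndim
    idx[axis] = 0
    d[tuple(idx)] = -w[tuple(idx)]  # zero boundary
    return d

def sym_grad(v, dims):
    n = len(dims)
    g = np.stack([np.stack([forward_diff(v[i], dims[j]) for j in range(n)])
                  for i in range(n)])
    return 0.5 * (g + g.swapaxes(0, 1))

def sym_grad_adjoint(w, dims):
    n = len(dims)
    ws = 0.5 * (w + w.swapaxes(0, 1))  # symmetrization is a self-adjoint projection
    return np.stack([sum(forward_diff_adjoint(ws[i, j], dims[j]) for j in range(n))
                     for i in range(n)])

rng = np.random.default_rng(0)
v = rng.standard_normal((2, 6, 6))
w = rng.standard_normal((2, 2, 6, 6))
lhs = np.vdot(sym_grad(v, (-2, -1)), w)
rhs = np.vdot(v, sym_grad_adjoint(w, (-2, -1)))
print(np.allclose(lhs, rhs))  # True: <E(v), w> == <v, E^H(w)>
```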
- forward(v: Tensor) → tuple[Tensor][source]
Apply forward of SymmetrizedGradientOp.
Note
Prefer calling the instance of the SymmetrizedGradientOp operator as operator(x) over directly calling this method. See this PyTorch discussion.
- operator_norm(initial_value: Tensor, dim: Sequence[int] | None, max_iterations: int = 20, relative_tolerance: float = 1e-4, absolute_tolerance: float = 1e-5, callback: Callable[[Tensor], None] | None = None) → Tensor[source]
Power iteration for computing the operator norm of the operator.
- Parameters:
  - initial_value (Tensor) – Initial value to start the iteration; must be an element of the domain. If the initial value contains a zero vector for one of the considered problems, a ValueError is raised.
  - dim (Sequence[int] | None) – The dimensions of the tensors on which the operator operates. The choice of dim determines how the operator norm is interpreted. For example, for a matrix-vector multiplication with a batched matrix tensor of shape (batch1, batch2, row, column) and a batched input tensor of shape (batch1, batch2, row):
    - If dim=None, the operator is considered as a block-diagonal matrix with batch1*batch2 blocks and the result is a tensor containing a single norm value (shape (1, 1, 1)).
    - If dim=(-1,), batch1*batch2 matrices are considered, and for each a separate operator norm is computed.
    - If dim=(-2, -1), batch1 matrices with batch2 blocks are considered, and for each matrix a separate operator norm is computed.
    Thus, the choice of dim implicitly determines the domain of the operator.
  - max_iterations (int, default: 20) – Maximum number of iterations.
  - relative_tolerance (float, default: 1e-4) – Relative tolerance for the change of the operator norm at each iteration; if set to zero, the maximum number of iterations is the only stopping criterion used to stop the power iteration.
  - absolute_tolerance (float, default: 1e-5) – Absolute tolerance for the change of the operator norm at each iteration; if set to zero, the maximum number of iterations is the only stopping criterion used to stop the power iteration.
  - callback (Callable[[Tensor], None] | None, default: None) – User-provided function to be called at each iteration.
- Returns:
  An estimate of the operator norm. Shape corresponds to the shape of the input tensor initial_value with the dimensions specified in dim reduced to a single value. The pointwise multiplication of initial_value with the result of the operator norm will always be well-defined.
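The idea behind power iteration can be sketched for a plain matrix, where the operator norm is the largest singular value. This is an illustration only, not the mr2 implementation; the tolerance handling mirrors the parameters above in spirit.

```python
import numpy as np

# Power iteration on the Gram operator A^H A of a small matrix A.
# The Rayleigh quotient sqrt(x^H A^H A x / x^H x) converges to sigma_max(A) = ||A||.
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
x = np.array([1.0, 0.0])  # initial value; must not be orthogonal to the top singular vector

norm = 0.0
for _ in range(50):  # max_iterations
    y = A.T @ (A @ x)                                        # apply the Gram operator
    new_norm = float(np.sqrt(np.vdot(x, y) / np.vdot(x, x)))  # Rayleigh-quotient estimate
    if abs(new_norm - norm) <= 1e-8 + 1e-8 * norm:           # absolute + relative tolerance
        break
    norm = new_norm
    x = y / np.linalg.norm(y)                                # renormalize the iterate

print(norm)  # converges to the largest singular value of A
```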
- __add__(other: LinearOperator | Tensor | complex) LinearOperator[source]
- __add__(other: Operator[Tensor, tuple[Tensor]]) Operator[Tensor, tuple[Tensor]]
Operator addition.
Returns lambda x: self(x) + other(x) if other is an operator, or lambda x: self(x) + other if other is a tensor or scalar.
- __matmul__(other: LinearOperator) LinearOperator[source]
- __matmul__(other: Operator[Unpack[Tin2], tuple[Tensor]] | Operator[Unpack[Tin2], tuple[Tensor, ...]]) Operator[Unpack[Tin2], tuple[Tensor]]
Operator composition.
Returns
lambda x: self(other(x))
- __mul__(other: Tensor | complex) LinearOperator[source]
Operator elementwise left multiplication with tensor/scalar.
Returns
lambda x: self(x*other)
- __or__(other: LinearOperator) LinearOperatorMatrix[source]
Horizontal stacking of two LinearOperators.
A | B is a LinearOperatorMatrix with two columns, with (A|B)(x1, x2) == A(x1) + B(x2). See mr2.operators.LinearOperatorMatrix for more information.
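With plain matrices standing in for the operators, the horizontal stack behaves as follows (illustrative sketch, not the mr2 API):

```python
import numpy as np

# (A | B)(x1, x2) == A(x1) + B(x2): a 1x2 block row of operators,
# equivalent to the horizontally concatenated matrix applied to (x1, x2) stacked.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0], [6.0]])

def stacked(x1, x2):
    return A @ x1 + B @ x2

x1 = np.array([1.0, 1.0])
x2 = np.array([2.0])
print(stacked(x1, x2))  # [13. 19.], same as np.hstack([A, B]) @ np.concatenate([x1, x2])
```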
- __radd__(other: Tensor | complex) LinearOperator[source]
Operator addition.
Returns
lambda x: self(x) + other*x
- __rmul__(other: Tensor | complex) LinearOperator[source]
Operator elementwise right multiplication with tensor/scalar.
Returns
lambda x: other*self(x)