mr2.operators.SymmetrizedGradientOp

class mr2.operators.SymmetrizedGradientOp[source]

Bases: LinearOperator

Symmetrized gradient operator.

This operator computes the symmetric part of the discrete gradient. The first axis of the input tensor indexes components and must satisfy v.shape[0] == len(dim).

Directional finite differences are computed with FiniteDifferenceOp along the axes in dim and then symmetrized over the first two axes:

\[E(v) = \tfrac{1}{2}\,(\nabla v + (\nabla v)^{\mathsf T}), \qquad E(v)_{i,j} = \tfrac{1}{2}\,((\nabla v)_{i,j} + (\nabla v)_{j,i}).\]

For input shape (len(dim), ...), the output shape is (len(dim), len(dim), ...).
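The construction can be sketched in plain NumPy, independently of mr2 (the function names here are illustrative, not part of the mr2 API): backward differences with zero padding are stacked along a new leading axis and then symmetrized over the first two axes.

```python
import numpy as np

def backward_diff(x, axis):
    """Backward difference (x[n] - x[n-1]) with zero padding ('zeros' boundary)."""
    shifted = np.roll(x, 1, axis=axis)
    idx = [slice(None)] * x.ndim
    idx[axis] = 0                 # zero out the wrapped-around boundary entry
    shifted[tuple(idx)] = 0
    return x - shifted

def symmetrized_gradient(v, dim):
    """E(v) for v of shape (len(dim), ...); result has shape (len(dim), len(dim), ...)."""
    grad = np.stack([backward_diff(v, axis=d) for d in dim])  # (grad)[j, i] = D_j v[i]
    return 0.5 * (grad + np.swapaxes(grad, 0, 1))             # symmetrize over axes 0, 1

v = np.arange(32, dtype=float).reshape(2, 4, 4)   # 2 vector components on a 4x4 grid
E = symmetrized_gradient(v, dim=(-2, -1))
print(E.shape)                                    # (2, 2, 4, 4)
print(np.allclose(E, np.swapaxes(E, 0, 1)))       # True: symmetric in the first two axes
```

The symmetry in the first two axes holds by construction, which is what distinguishes this operator from the plain stacked gradient.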

__init__(dim: Sequence[int], mode: Literal['central', 'forward', 'backward'] = 'backward', pad_mode: Literal['zeros', 'circular'] = 'zeros') None[source]

Symmetrized gradient operator.

Parameters:
  • dim (Sequence[int]) – Axes along which finite differences are computed. Axis 0 is reserved for vector components and must not be part of dim.

  • mode (Literal['central', 'forward', 'backward'], default: 'backward') – Finite-difference scheme ('forward', 'backward', or 'central').

  • pad_mode (Literal['zeros', 'circular'], default: 'zeros') – Boundary handling used by finite differences ('zeros' or 'circular').

Raises:

ValueError – If dim contains axis 0.

property H: LinearOperator[source]

Adjoint operator.

Obtains the adjoint of this operator as an AdjointLinearOperator, which is itself a LinearOperator that can be applied to tensors.

Note: linear_operator.H.H == linear_operator

property gram: LinearOperator[source]

Gram operator.

For a LinearOperator \(A\), the self-adjoint Gram operator is defined as \(A^H A\).

Note

This is the inherited default implementation.
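As a concrete illustration with a plain matrix (NumPy, independent of mr2), the Gram operator of a linear map \(A\) applies \(A\) and then its adjoint; the result is always self-adjoint and positive semi-definite:

```python
import numpy as np

A = np.array([[1.0, 2.0], [0.0, 1.0], [3.0, -1.0]])   # any linear map
gram = A.conj().T @ A                                  # A^H A: apply A, then its adjoint

print(np.allclose(gram, gram.conj().T))                # True: self-adjoint
print(np.min(np.linalg.eigvalsh(gram)) >= 0)           # True: positive semi-definite
```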

__call__(v: Tensor) tuple[Tensor][source]

Apply the symmetrized gradient.

Parameters:

v (Tensor) – Input tensor with shape (len(dim), ...).

Returns:

Symmetrized gradient with shape (len(dim), len(dim), ...).

adjoint(w: Tensor) tuple[Tensor][source]

Apply the adjoint of the symmetrized gradient.

Parameters:

w (Tensor) – Symmetrized-gradient tensor with shape (len(dim), len(dim), ...).

Returns:

Tensor with shape (len(dim), ...).

Raises:

ValueError – If the first two dimensions of w do not equal len(dim).
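The adjoint is characterized by the identity \(\langle E(v), w\rangle = \langle v, E^H(w)\rangle\). A minimal NumPy check of this identity for a single backward-difference factor, the building block of the symmetrized gradient (names illustrative, independent of mr2):

```python
import numpy as np

def backward_diff(x):
    """(D x)[n] = x[n] - x[n-1] with zero padding."""
    s = np.roll(x, 1)
    s[0] = 0
    return x - s

def backward_diff_adjoint(y):
    """(D^H y)[n] = y[n] - y[n+1] with zero padding: the negated forward difference."""
    s = np.roll(y, -1)
    s[-1] = 0
    return y - s

rng = np.random.default_rng(0)
x, y = rng.standard_normal(16), rng.standard_normal(16)
print(np.isclose(np.dot(backward_diff(x), y),
                 np.dot(x, backward_diff_adjoint(y))))   # True
```

The same identity, applied per axis and combined with the symmetrization (which is its own adjoint), yields the adjoint of the full operator.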

forward(v: Tensor) tuple[Tensor][source]

Apply forward of SymmetrizedGradientOp.

Note

Prefer calling the instance of the SymmetrizedGradientOp operator as operator(x) over directly calling this method. See this PyTorch discussion.

operator_norm(initial_value: Tensor, dim: Sequence[int] | None, max_iterations: int = 20, relative_tolerance: float = 1e-4, absolute_tolerance: float = 1e-5, callback: Callable[[Tensor], None] | None = None) Tensor[source]

Power iteration for computing the operator norm of the operator.

Parameters:
  • initial_value (Tensor) – Initial value for the iteration; must be an element of the operator's domain. If the initial value contains a zero vector for one of the considered sub-problems, a ValueError is raised.

  • dim (Sequence[int] | None) –

    The dimensions of the tensors on which the operator operates. The choice of dim determines how the operator norm is interpreted. For example, for a matrix-vector multiplication with a batched matrix tensor of shape (batch1, batch2, row, column) and a batched input tensor of shape (batch1, batch2, column):

    • If dim=None, the operator is considered as a block diagonal matrix with batch1*batch2 blocks and the result is a tensor containing a single norm value (shape (1, 1, 1)).

    • If dim=(-1,), batch1*batch2 matrices are considered, and for each a separate operator norm is computed.

    • If dim=(-2,-1), batch1 matrices with batch2 blocks are considered, and for each matrix a separate operator norm is computed.

    Thus, the choice of dim implicitly determines the domain of the operator.

  • max_iterations (int, default: 20) – maximum number of iterations

  • relative_tolerance (float, default: 1e-4) – relative tolerance for the change of the operator norm between iterations; if set to zero, the maximum number of iterations is the only stopping criterion used to stop the power iteration.

  • absolute_tolerance (float, default: 1e-5) – absolute tolerance for the change of the operator norm between iterations; if set to zero, the maximum number of iterations is the only stopping criterion used to stop the power iteration.

  • callback (Callable[[Tensor], None] | None, default: None) – user-provided function to be called at each iteration

Returns:

An estimation of the operator norm. The shape corresponds to the shape of the input tensor initial_value with the dimensions specified in dim reduced to a single value, so the pointwise multiplication of initial_value with the result of the operator norm is always well-defined.
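The essence of the method can be sketched in NumPy for a plain matrix. This is an illustrative re-implementation, not the mr2 code; the parameter names mirror this docstring, and power iteration is run on the Gram operator \(A^H A\) so that the estimate converges to the largest singular value:

```python
import numpy as np

def operator_norm(A, initial_value, max_iterations=20,
                  relative_tolerance=1e-4, absolute_tolerance=1e-5):
    """Power iteration on the Gram operator A^H A; estimates ||A||_2."""
    x = initial_value / np.linalg.norm(initial_value)
    norm = 0.0
    for _ in range(max_iterations):
        y = A.conj().T @ (A @ x)              # one application of the Gram operator
        new_norm = np.linalg.norm(y) ** 0.5   # ||A^H A x||^(1/2) -> sigma_max
        if abs(new_norm - norm) < absolute_tolerance + relative_tolerance * new_norm:
            return new_norm
        x = y / np.linalg.norm(y)
        norm = new_norm
    return norm

A = np.diag([3.0, 1.0])
print(operator_norm(A, np.array([1.0, 1.0])))   # ~3.0, the largest singular value
```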

__add__(other: LinearOperator | Tensor | complex) LinearOperator[source]
__add__(other: Operator[Tensor, tuple[Tensor]]) Operator[Tensor, tuple[Tensor]]

Operator addition.

Returns lambda x: self(x) + other(x) if other is an operator, and lambda x: self(x) + other*x if other is a tensor or scalar.

__matmul__(other: LinearOperator) LinearOperator[source]
__matmul__(other: Operator[Unpack[Tin2], tuple[Tensor]] | Operator[Unpack[Tin2], tuple[Tensor, ...]]) Operator[Unpack[Tin2], tuple[Tensor]]

Operator composition.

Returns lambda x: self(other(x))

__mul__(other: Tensor | complex) LinearOperator[source]

Operator elementwise left multiplication with tensor/scalar.

Returns lambda x: self(x*other)

__or__(other: LinearOperator) LinearOperatorMatrix[source]

Horizontal stacking of two LinearOperators.

A|B is a LinearOperatorMatrix with two columns, with (A|B)(x1,x2) == A(x1) + B(x2). See mr2.operators.LinearOperatorMatrix for more information.
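A matrix analogy of the stacking identity (NumPy, independent of mr2): (A|B) behaves like the block matrix [A B] applied to the concatenation of the two inputs.

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0], [6.0]])
x1 = np.array([1.0, -1.0])
x2 = np.array([2.0])

# (A|B)(x1, x2) == A(x1) + B(x2) == [A B] @ concat(x1, x2)
stacked = np.hstack([A, B]) @ np.concatenate([x1, x2])
print(np.allclose(stacked, A @ x1 + B @ x2))   # True
```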

__radd__(other: Tensor | complex) LinearOperator[source]

Operator addition.

Returns lambda x: self(x) + other*x

__rmul__(other: Tensor | complex) LinearOperator[source]

Operator elementwise right multiplication with tensor/scalar.

Returns lambda x: other*self(x)