Abstract
Calculating the variance of a family of tensors, each represented by a symmetric positive semi-definite second order tensor/matrix, involves the formation of a fourth order tensor \(R_{abcd}\). To form this tensor, the tensor product of each second order tensor with itself is formed, and these products are then summed, giving the tensor \(R_{abcd}\) the same symmetry properties as the elasticity tensor in continuum mechanics. This tensor has been studied with respect to many properties: representations, invariants, decomposition, the equivalence problem et cetera. In this paper we focus on the two-dimensional case where we give a set of invariants which ensures equivalence of two such fourth order tensors \(R_{abcd}\) and \(\widetilde{R}_{abcd}\). In terms of components, such an equivalence means that components \(R_{ijkl}\) of the first tensor will transform into the components \(\widetilde{R}_{ijkl}\) of the second tensor for some change of the coordinate system.
1 Introduction
Positive semi-definite second order tensors arise in several applications. For instance, in image processing, a structure tensor, which captures the local orientation of the image intensity variations [10, 17], is computed from greyscale images and employed to address a broad range of challenges. Diffusion tensor magnetic resonance imaging (DT-MRI) [1, 5] characterizes anisotropic water diffusion by enabling the measurement of the apparent diffusion tensor, which makes it possible to delineate the fibrous structure of the tissue. Recent work has shown that diffusion MR measurements of restricted diffusion obscure the fine details of the pore shape under certain experimental conditions [11], and all remaining features can be encoded accurately by a confinement tensor [19].
All such second order tensors share the same mathematical properties, namely, they are real-valued, symmetric, and positive semi-definite. Moreover, in these disciplines, one encounters a collection of such tensors, e.g., at different locations of the image. Populations of such tensors have also been key to some studies aiming to model the underlying structure of the medium under investigation [8, 12, 18].
Irrespective of the particular application, let \(R_{ab}\) denote such tensors, and we shall refer to the set of n tensors as \(\{R_{ab}^{(i)}\}_{i}\). We wish to find relevant descriptors or models of such a family. One relevant statistical measure of this family is the (population) variance
where \(\widehat{R}_{ab}=\frac{1}{n}\sum _{i=1}^n R^{(i)}_{ab}\) is the mean. (For another approach, see e.g., [8]). In this paper, we are interested in the first term, i.e., we study the fourth order tensor (skipping the normalization)
where \(R^{(i)}_{ab}\ge 0\) stands for \(R^{(i)}_{ab}\) being positive semi-definite. It is obvious that \(R_{abcd}\) has the symmetries \(R_{abcd}=R_{bacd}=R_{abdc}\) and \(R_{abcd}=R_{cdab}\), i.e., \(R_{abcd}\) has the same symmetries as the elasticity tensor [14] from continuum mechanics. The elasticity tensor is well studied [13], e.g. with respect to classification, decompositions, and invariants. In most cases this is done in three dimensions. The same (w.r.t. symmetries) tensor has also been studied in the context of diffusion MR [2].
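For concreteness, the construction (1) is straightforward to carry out numerically. The following sketch (NumPy; the two-member family is a made-up example) forms \(R_{abcd}\) as a sum of self outer products and verifies the elasticity-type symmetries:

```python
import numpy as np

def fourth_order_sum(family):
    # R_abcd = sum_i R^(i)_ab R^(i)_cd  (normalization skipped, as in the text)
    return sum(np.einsum('ab,cd->abcd', R, R) for R in family)

# A made-up family of symmetric positive semi-definite 2x2 tensors
family = [np.array([[2.0, 1.0], [1.0, 3.0]]),
          np.array([[1.0, 0.0], [0.0, 1.0]])]
R4 = fourth_order_sum(family)

# Symmetries of the elasticity tensor: R_abcd = R_bacd = R_abdc = R_cdab
assert np.allclose(R4, R4.transpose(1, 0, 2, 3))
assert np.allclose(R4, R4.transpose(0, 1, 3, 2))
assert np.allclose(R4, R4.transpose(2, 3, 0, 1))
```

The pairwise symmetries hold term by term for each outer product \(R^{(i)}_{ab}R^{(i)}_{cd}\), so they survive the summation.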
In this paper we will focus on the corresponding tensor \(R_{abcd}\) in two dimensions. First, there are direct applications in image processing; second, the problems posed are more accessible in two dimensions than in three. In particular we study the equivalence problem, namely, we ask the question: given the components \(R_{ijkl}\) and \(\widetilde{R}_{ijkl}\) of two such tensors, do they represent the same tensor in different coordinate systems (see Sects. 2.1.2 and 4)?
1.1 Outline
Section 2 covers tensorial preliminaries. We will assume some basic knowledge of tensors, although some definitions are given for completeness. The notation used is commented on, and in particular the three-dimensional Euclidean vector space \(V_{(ab)}\) is introduced.
In Sect. 2.1.2, we make some general remarks concerning the tensor \(R_{abcd}\) and specify the problem we focus on. Section 2.1 is concluded with some remarks on the Voigt/Kelvin notation and the corresponding visualisation in \(\mathbb {R}^3\).
Section 2.2 gives examples of invariants, especially invariants which are easily accessible from \(R_{abcd}\). Also, more general invariant/canonical decompositions of \(R_{abcd}\) are given.
In Sect. 3, we discuss how the tensor \(R_{abcd}\) can (given a careful choice of basis) be expressed in terms of a \(3 \times 3\) matrix, and how this matrix is affected by a rotation of the coordinate system in the underlying two-dimensional space on which \(R_{abcd}\) is defined.
In Sect. 4 we return to the equivalence problem and give the main result of this work. In Sect. 4.1.1 we provide a geometric condition for equivalence, while in Sect. 4.1.2, we present the equivalence in terms of a \(3 \times 3\) matrix. Both these characterisations rely on the choice of particular basis elements for the vector spaces employed. In Sect. 4.1.3 the same equivalence conditions are given in a form which does not assume a particular basis.
2 Preliminaries
In this section we clarify the notation and some concepts which we need. Section 2.1 deals with the (alternatives of) tensor notation and some representations. The equivalence (and related) problems are also briefly addressed. Section 2.2 accounts for some natural invariants, traces and decompositions of \(R_{abcd}\).
We will assume some familiarity with tensors, but to clarify the view on tensors we recall some facts. We start with a (finite dimensional) vector space V with dual \(V^*\). A tensor of order (p,q) is then a multi-linear mapping \(\underbrace{V \times V \cdots \times V}_{q} \times \underbrace{V^* \times \cdots \times V^*}_{p} \rightarrow \mathbb {R}\). Moreover, a (non-degenerate) metric/scalar product \(g: V \times V \rightarrow \mathbb {R}\) gives an isomorphism from V to \(V^*\) through \(v \rightarrow g(v,\cdot )\), and it is this isomorphism which is used to ‘raise and lower indices’, see below. Indeed, for a fixed \(v \in V\), \(g(v,\cdot )\) is a linear mapping \(V \rightarrow \mathbb {R}\), i.e., an element of \(V^*\).
2.1 Tensor Notation and Representations
There is a plethora of notations for tensors. Here, we follow the well-adopted convention [16] that early lower case Latin letters (\({T^a}_{bc}\)) refer to the tensor as a geometric object, its type being inferred from the indices and their positions (the abstract index notation). \(g_{ab}\) denotes the metric tensor. When the indices are lower case Latin letters from the middle of the alphabet, \({T^i}_{jk}\), they refer to components of \({T^a}_{bc}\) in a certain frame. The super-index i denotes a contravariant index while the sub-indices j, k are covariant. For instance, a typical vector (tensor of type (1, 0)) will be written \(v^a\) with components \(v^i\), while the metric \(g_{ab}\) (tensor of type (0, 2)) has components \(g_{ij}\). On a number of occasions, it will also be useful to express quantities in terms of components with respect to orthonormal frames, i.e., Cartesian coordinates. This is sometimes referred to as ‘Cartesian tensors’, and the distinction between contra- and covariant indices is obscured. In these situations, it is possible (but not necessary) to write all indices as sub-indices, and sometimes the symbol \(\overset{\varvec{\cdot }}{=}\) is used to indicate that an equation is only valid in Cartesian coordinates. For example \(T_i\overset{\varvec{\cdot }}{=}T_{ijk}\delta _{jk}\) instead of \(T^i={T^i}_{jk}g^{jk}={T^{ik}}_k\). Often this is clear from the context, but we will sometimes use \(\overset{\varvec{\cdot }}{=}\) to remind the reader that a Cartesian assumption is made. Here, the Einstein summation convention is implied, i.e., repeated indices are to be summed over, so that for instance \(T^i={T^i}_{jk}g^{jk}={T^{ik}}_k= \sum \limits _{j=1}^n\sum \limits _{k=1}^n {T^i}_{jk}g^{jk}=\sum \limits _{k=1}^n {T^{ik}}_k\) if each index ranges from 1 to n. We have also used the metric \(g_{ij}\) and its inverse \(g^{ij}\) to raise and lower indices. 
For instance, since \(g_{ij} v^i\) is an element of \(V^*\), we write \(g_{ij} v^i=v_j\).
We also recall the notation for symmetrisation. For a two-tensor, \(T_{(ab)}=\frac{1}{2}(T_{ab}+T_{ba})\), while more generally for a tensor \(T_{a_1 a_2 \cdots a_n}\) of order (0, n) we have
where the sum is taken over all permutations \(\pi \) of \(1,2, \ldots , n\). Naturally, this convention can also be applied to subsets of indices. For instance, \(H_{a(bc)}=\tfrac{1}{2}(H_{abc}+H_{acb})\).
2.1.1 The Vector Space of Symmetric Two-Tensors
In any coordinate frame a symmetric tensor \(R_{ab}\) (i.e., \(R_{ab}=R_{ba}\)) is represented by a symmetric matrix \(R_{ij}\) (\( 2 \times 2\) or \(3 \times 3\) depending on the dimension of the underlying space). In the two-dimensional case, with the underlying vector space \(V^{a}\sim \mathbb {R}^2\), this means that \(R_{ab}\) lives in a three-dimensional vector space, which we denote by \(V_{(ab)}\). \(V_{(ab)}\) is equipped with a natural scalar product: \(<A_{ab},B_{ab}> = A_{ab}B^{ab}\), making it into a three-dimensional Euclidean space. Here \(A_{ab}B^{ab}=A_{ab}B_{cd}g^{ac}g^{bd}\), i.e., the contraction of \(A_{ab}B_{cd}\) over the indices a, c and b, d, and the tensor product \(A_{ab}B_{cd}\) itself is the tensor of order (0, 4) given by \((A_{ab}B_{cd})v^a u^b w^c m^d= (A_{ab}v^a u^b)(B_{cd} w^c m^d)\) together with multi-linearity.
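In Cartesian components, where the index position is immaterial, this scalar product is a plain double contraction. A minimal sketch (NumPy; the matrices are made-up examples):

```python
import numpy as np

def inner(A, B):
    # <A_ab, B_ab> = A_ab B^ab; in a Cartesian frame this reduces to
    # an entry-wise double sum over the two symmetric matrices
    return float(np.einsum('ab,ab->', A, B))

A = np.array([[1.0, 2.0], [2.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
print(inner(A, B))   # 4.0: only the off-diagonal entries contribute here
print(inner(A, A))   # 25.0: the squared norm of A in V_(ab)
```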
2.1.2 The Tensor \(R_{abcd}\) and the Equivalence Problem
As noted above, \(R_{abcd}\) given by (1) has the symmetries \(R_{abcd}=R_{(ab)cd}=R_{ab(cd)}\) and \(R_{abcd}=R_{cdab}\), and it is not hard to see that this gives \(R_{abcd}\) six degrees of freedom in two dimensions. (See also Sect. 2.1.3.) It is also interesting to note that \(R_{abcd}\) provides a mapping \(V_{(ab)} \rightarrow V_{(ab)}\) through
and that this mapping is symmetric (due to the symmetry \(R_{abcd}=R_{cdab}\)). Given \(R_{abcd}\) there are a number of questions one can ask, e.g.,
-
Feasibility—given a tensor \(R_{abcd}\) with the correct symmetries, can it be written in the form (1)?
-
Canonical decomposition—given \(R_{abcd}\) of the form (1), can you write \(R_{abcd}\) as a canonical sum of the form (1), but with a fixed number of terms (cf. eigenvector decomposition of symmetric matrices)?
-
Visualisation—since fourth order tensors are a bit involved, how can one visualise them in ordinary space?
-
Characterisation/relevant sets of invariants—what invariants are relevant from an application point of view?
-
The equivalence problem—in terms of components, how do we know if \(R_{ijkl}\) and \(\widetilde{R}_{ijkl}\) represent the same tensor when they are in different coordinate systems?
We will now focus on the equivalence problem in two dimensions. This problem can be formulated as above: given, in terms of components, two tensors (with the symmetries we consider) \(R_{ijkl}\) and \(\widetilde{R}_{ijkl}\), do they represent the same tensor in the sense that there is a coordinate transformation taking the components \(R_{ijkl}\) into the components \(\widetilde{R}_{ijkl}\)? In other words, does there exist an (invertible) matrix \({P^m}_i\) so that
This problem can also be formulated when \(R_{ijkl}\) and \(\widetilde{R}_{ijkl}\) are expressed in Cartesian frames. Then the coordinate transformation must be a rotation, i.e., given by a rotation matrix \({Q^i}_j \in \) SO(2). Hence, the problem of (unitary) equivalence is: Given \(R_{ijkl}\) and \(\widetilde{R}_{ijkl}\), both expressed in Cartesian frames, is there a matrix (applying the ‘Cartesian convention’) \(Q_{ij} \in \) SO(2) so that
2.1.3 The Voigt/Kelvin Notation
Since (in two dimensions) the space \(V_{(ab)}\) is three-dimensional, one can introduce coordinates, for example \(\sim \) and use vector algebra on \(\mathbb {R}^3\). This is used in the Voigt notation [15] and the related Kelvin notation [6]. As always, one must be careful to specify with respect to which basis in \(V_{(ab)}\) the coordinates are taken. For instance, in the correspondence \(\sim \) , the understood basis for \(V_{(ab)}\) (in the understood/induced coordinate system) is . These elements are orthogonal (viewed as vectors in \(V_{(ab)}\)) to each other, but not (all of them) of unit length.
Since the unit matrix plays a special role, we make the following choice. Starting with an orthonormal basis \(\{\hat{\varvec{\xi }} , \hat{\eta }\}\) for V, (i.e., \(\{\hat{\varvec{\xi }^a} , \hat{\eta }^a \}\) for \(V^a\)) a suitable orthonormal basis for \(V_{(ab)}\) is \(\{e^{(1)}_{ab}, e^{(2)}_{ab}, e^{(3)}_{ab} \}\) where \(e^{(1)}_{ab}=\frac{1}{\sqrt{2}}(\xi _a \xi _b-\eta _a \eta _b)\), \(e^{(2)}_{ab}=\frac{1}{\sqrt{2}}(\xi _a \eta _b+\eta _a \xi _b)\), \(e^{(3)}_{ab}=\frac{1}{\sqrt{2}}(\xi _a \xi _b+\eta _a \eta _b)\), i.e., in the induced basis we have
In this basis, we write an arbitrary element \(M_{ab} \in V_{(ab)}\) as , which means that \(M_{ab}\) gets the coordinates . Note that \(M_{ij}\) is positive semi-definite if and only if \(z^2-x^2-y^2 \ge 0\) and \(z \ge 0\). In terms of the coordinates of the Voigt notation, the tensor \(R_{abcd}\) corresponds to a symmetric mapping \(\mathbb {R}^3 \rightarrow \mathbb {R}^3\), given by a symmetric \(3\times 3\) matrix, which again shows that \(R_{abcd}\) has six degrees of freedom.
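Written out in the induced coordinates, the basis (2) and the resulting coordinates of a symmetric matrix can be sketched as follows (NumPy; the example matrix is made up, and the final line restates the positive semi-definiteness condition on x, y, z):

```python
import numpy as np

s = 1.0 / np.sqrt(2.0)
E = [s * np.array([[1.0, 0.0], [0.0, -1.0]]),   # e^(1): trace-free, indefinite
     s * np.array([[0.0, 1.0], [1.0, 0.0]]),    # e^(2): trace-free, indefinite
     s * np.array([[1.0, 0.0], [0.0, 1.0]])]    # e^(3): the isotropic direction

# Orthonormality under <A,B> = A_ab B_ab
for i in range(3):
    for j in range(3):
        assert np.isclose(np.tensordot(E[i], E[j]), float(i == j))

def coords(M):
    # Coordinates (x, y, z) of a symmetric 2x2 matrix in the basis above
    return np.array([np.tensordot(M, e) for e in E])

M = np.array([[3.0, 1.0], [1.0, 2.0]])
x, y, z = coords(M)
# Positive semi-definiteness of M corresponds to z^2 - x^2 - y^2 >= 0, z >= 0
print(z**2 - x**2 - y**2 >= 0 and z >= 0, np.all(np.linalg.eigvalsh(M) >= 0))
```

For this M the two tests agree, as they must: both read True.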
2.1.4 Visualization in \(\mathbb {R}^3\)
Through the Voigt notation, any symmetric two-tensor (in two dimensions) can be visualised as a vector in \(\mathbb {R}^3\). Using the basis vector given by (2), we note that \(e^{(1)}_{ij}\) and \(e^{(2)}_{ij}\) correspond to indefinite quadratic forms, while \(e^{(3)}_{ij}\) is positive definite. We also see that \(e^{(1)}_{ij}+e^{(3)}_{ij}\) and \(e^{(2)}_{ij}+e^{(3)}_{ij}\) are positive semi-definite.
In Fig. 1 (left) these matrices are illustrated as vectors in \(\mathbb {R}^3\). The set of positive semi-definite matrices corresponds to a cone, cf. [4], indicated in blue. When the symmetric \(2\times 2\) matrices are viewed as vectors in \(\mathbb {R}^3\), the outer product of such a vector with itself gives a symmetric \(3 \times 3\) matrix. Hence we get a positive semi-definite quadratic form on \(\mathbb {R}^3\), which can be illustrated by a (degenerate) ellipsoid in \(\mathbb {R}^3\). In Fig. 1 (right) \((e^{(1)}_{ab}+e^{(3)}_{ab})(e^{(1)}_{cd}+e^{(3)}_{cd})\), \((e^{(2)}_{ab}+e^{(3)}_{ab})(e^{(2)}_{cd}+e^{(3)}_{cd})\) and \(e^{(3)}_{ab}e^{(3)}_{cd}\) are visualised in this manner. Note that all these quadratic forms correspond to matrices which are rank one. (Cf. the ellipsoids in Fig. 2.)
2.2 Invariants, Traces and Decompositions
By an invariant, we mean a quantity that can be calculated from measurements, and which is independent of the frame/coordinate system with respect to which the measurements are performed, despite the fact that components, e.g., \({T^i}_{jk}\) themselves depend on the coordinate system. It is this property that makes invariants important, and typically they are formed via tensor products and contractions, e.g., \({T^i}_{jk}{T^k}_{il}g^{jl}\). Sometimes, the invariants have a direct geometrical meaning. For instance, for a vector \(v^i\), the most natural invariant is its squared length \(v^i v_i\). For a tensor \({T^i}_j\) of order (1,1) in three dimensions, viewed as a linear mapping \(\mathbb {R}^3 \rightarrow \mathbb {R}^3\), the most well known invariants are perhaps the trace \({T^i}_i\) and the determinant \(\det ({T^i}_j)\). The modulus of the determinant gives the volume scaling under the mapping given by \({T^i}_j\), while the trace equals the sum of the eigenvalues. If \({T^i}_j\) represents a rotation matrix, then its trace is \(1+2\cos \phi \), where \(\phi \) is the rotation angle. In general, however, the interpretation of a given invariant may be obscure. (For an account relevant to image processing, see e.g., [9]. A different, but relevant, approach in the field of diffusion MRI is found in [20].)
2.2.1 Natural Traces and Invariants
From (1), and considering the symmetries of \(R_{abcd}\), two (and only two) natural traces arise. For a tensor of order (1, 1), e.g., \({R_i}^j\), it is natural to consider this as an ordinary matrix, and consequently use stem letters without any indices at all. To indicate this slight deviation from the standard tensor notation, we denote e.g., \({R_i}^j\) by \(\bar{\bar{R}}\). Using \([\cdot ]\) for the trace, so that \([\bar{\bar{R}}]={{\,\mathrm{Tr}\,}}(\bar{\bar{R}})={R_a}^a\), we then have
and
Hence, in a Cartesian frame, where the index position is unimportant, we have for the matrices \(\bar{\bar{T}}=T_{ij}, \bar{\bar{S}}=S_{ij}\)
To proceed there are two double traces (i.e., contracting \(R_{abcd}\) twice):
and
In two dimensions, the difference \(T_{ab}-S_{ab}\) is proportional to the metric \(g_{ab}\). Namely,
Lemma 1
With \(T_{ab}\) and \(S_{ab}\) given by (3) and (4), it holds that (in two dimensions)
Proof
By linearity, it is enough to prove the statement when \(n=1\), i.e., when the sum has just one term. Raising the second index, and using components, the statement then is \({T_i}^j-{S_i}^j=\det (\bar{\bar{R}}^{(1)}){\delta _i}^j\). Putting \(\bar{\bar{R}}^{(1)}=A\), we see that \({T_i}^j-{S_i}^j=A[A]-A^2\) while \(\det (\bar{\bar{R}}^{(1)}){\delta _i}^j=\det (A)I\), and by the Cayley-Hamilton theorem in two dimensions, \(A[A]-A^2\) is indeed \(\det (A)I\). \(\square \)
From Lemma 1, it follows that \(T-S=2\sum _{i=1}^n \det (\bar{\bar{R}}^{(i)})\ge 0\). In fact the following inequalities hold.
Lemma 2
With T and S defined as above, it holds that \(S \le T \le 2S\). If \(T=S\), all tensors \(R^{(i)}_{ab}\) have rank 1. If \(T=2S\), all tensors \(R^{(i)}_{ab}\) are isotropic, i.e., proportional to the metric \(g_{ab}\).
Proof
Again, by linearity it is enough to consider one tensor \(\bar{\bar{R}}^{(1)}=A\). In an orthonormal frame which diagonalises A, we have \(A=\begin{pmatrix} a &{} 0 \\ 0 &{} c \end{pmatrix}\) (with \(a\ge 0, c\ge 0\), \(a+c>0\)). Hence \(S=a^2+c^2\), while \(T=(a+c)^2=S+2ac\) and \(2S-T=(a-c)^2\).
The first inequality becomes equality when \(ac=0\), i.e., when A has rank one. The second inequality becomes equality when \(a=c\), i.e., when A is isotropic. \(\square \)
Definition 1
We define the mean rank, \(r_m\), by \(r_m=T/S\), with T and S as above.
Hence, in two dimensions, \(1 \le r_m \le 2\).
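The traces T and S, the traced form of Lemma 1, and the bounds on the mean rank are easy to confirm numerically. A sketch (NumPy; Cartesian frame, so the metric is the identity; the family is a made-up example whose second member has rank 1):

```python
import numpy as np

family = [np.array([[2.0, 1.0], [1.0, 3.0]]),
          np.array([[1.0, 0.0], [0.0, 0.0]])]   # second tensor has rank 1
R4 = sum(np.einsum('ab,cd->abcd', R, R) for R in family)

g = np.eye(2)                             # Cartesian metric
T = np.einsum('abcd,ab,cd->', R4, g, g)   # T = R_abcd g^{ab} g^{cd}
S = np.einsum('abcd,ac,bd->', R4, g, g)   # S = R_abcd g^{ac} g^{bd}

# Traced Lemma 1: T - S = 2 * sum_i det(R^(i))
assert np.isclose(T - S, 2.0 * sum(np.linalg.det(R) for R in family))

# Lemma 2 and Definition 1: 1 <= r_m = T/S <= 2
r_m = T / S
print(r_m)   # 1.625 for this family
```

Here T = 26 and S = 16, so the difference 10 equals twice the sum of determinants (5 and 0), and the mean rank lies strictly between the rank-1 and isotropic extremes.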
2.2.2 A Canonical Decomposition
It is customary [3, 7] to decompose a tensor with the symmetries of \(R_{abcd}\) into a sum where one term is the completely symmetric part:
It is also customary to split \(H_{abcd}\) into a trace-free part and ‘trace part’. We start by defining \(H_{ab}={H_{abc}}^c\), \(H={H_a}^a\) and then the trace-free part of \(H_{ab}\): \(\mathring{H}_{ab}=H_{ab}-\frac{1}{2}Hg_{ab}\) so that \(H_{ab}=\mathring{H}_{ab}+\frac{1}{2}Hg_{ab}\). (These decompositions can be made in any dimension, but the actual coefficients, e.g., \(\frac{1}{2}\) above and \(\frac{1}{8}\) and \(\frac{3}{8}\) et cetera below depend on the underlying dimension.) It is straightforward to check that
is also trace-free. Hence we have the decomposition
Moreover, due to the symmetry of \(R_{abcd}\), we find that
and therefore that
which implies that \(H_{ab}={H_{abc}}^{c}=\tfrac{1}{3}(T_{ab}+2 S_{ab})\) and \(W_{ab}={W_{abc}}^{c}=\tfrac{2}{3}(T_{ab}-S_{ab})\).
The six degrees of freedom of \(R_{abcd}\) are distributed over the parts, where \(R_{abcd} \sim \{\mathring{H}_{abcd}, H_{ab},W_{abcd}\}\), as
For \(H_{ab}\) (or the pair \(\mathring{H}_{ab}, H\)) this is clear. The total symmetry of \(\mathring{H}_{abcd}\) leaves only five components (in a basis), \(\mathring{H}_{1111}, \mathring{H}_{1112},\mathring{H}_{1122},\mathring{H}_{1222},\mathring{H}_{2222}\). However, the trace-free condition \(\mathring{H}_{abcd}g^{cd}=0\) imposes three conditions. (In an orthonormal frame, \(\mathring{H}_{1122}=-\mathring{H}_{1111}\), \(\mathring{H}_{2222}=-\mathring{H}_{1122}\) and \(\mathring{H}_{1112}=-\mathring{H}_{1222}\).) That \(W_{abcd}\) has only one degree of freedom follows from the following lemma.
Lemma 3
Suppose that \(W_{abcd}\) is given by (7), and put \(W_{ab}=W_{abcd}g^{cd}\), \(W=W_{ab}g^{ab}\). Then (in two dimensions)
Proof
By linearity, it is enough to consider the case when \(R_{abcd}=A_{ab}A_{cd}\) for some (symmetric) \(A_{ab}\). In terms of eigenvectors (to \({A^a}_b\)) we can write \(A_{ab}=\alpha x_a x_b+\beta y_a y_b\), where \(x_a x^a=y_a y^a=1, x_a y^a=0\). In particular \(g_{ab}=x_a x_b +y_a y_b\). From (7) we then get
Expanding the parentheses, the terms \(x_a x_b x_c x_d\) and \(y_a y_b y_c y_d\) vanish, leaving
where the last equality can be seen by inserting \(g_{ab}=x_a x_b +y_a y_b\) (for all indices) and expanding. Taking one trace, i.e., contracting with \(g^{cd}\) gives \(W_{ab}=\frac{2\alpha \beta }{3}g_{ab}\), and another trace gives \(W=\frac{4\alpha \beta }{3}\), which proves the lemma. \(\square \)
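The split into the completely symmetric part \(H_{abcd}\) and the remainder \(W_{abcd}\), together with the traced relations above and (twice-traced) Lemma 3, can be verified numerically. A sketch under the same Cartesian conventions (NumPy; made-up family):

```python
import numpy as np
from itertools import permutations

family = [np.array([[2.0, 1.0], [1.0, 3.0]]),
          np.array([[1.0, 0.5], [0.5, 1.0]])]
R4 = sum(np.einsum('ab,cd->abcd', R, R) for R in family)

# Completely symmetric part H_abcd = R_(abcd); W_abcd is the remainder
H4 = sum(R4.transpose(p) for p in permutations(range(4))) / 24.0
W4 = R4 - H4

g = np.eye(2)
T_ab = np.einsum('abcd,cd->ab', R4, g)    # T_ab = R_abcd g^{cd}
S_ab = np.einsum('acbd,cd->ab', R4, g)    # S_ab = R_acbd g^{cd}
W_ab = np.einsum('abcd,cd->ab', W4, g)
W = np.trace(W_ab)

# Traced relation stated in the text: W_ab = (2/3)(T_ab - S_ab)
assert np.allclose(W_ab, (2.0 / 3.0) * (T_ab - S_ab))
# Lemma 3, traced twice: W = (4/3) * sum_i det(R^(i))
assert np.isclose(W, (4.0 / 3.0) * sum(np.linalg.det(R) for R in family))
```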
3 \(R_{abcd}\) as a Quadratic Form on \(\mathbb {R}^3\)
Through the orthonormal basis for the space of symmetric two-tensors (in two dimensions) given by (2), the tensor \(R_{abcd}\) viewed as a quadratic form can be represented by a \(3 \times 3\)-matrix. Here, we will restrict ourselves to an orthonormal basis for \(V_{(ab)}\), namely the basis \(\{e^{(1)}_{ab}, e^{(2)}_{ab}, e^{(3)}_{ab} \}\) from Sect. 2.1.3, defined in terms of the orthonormal basis \(\{\xi ^a , \eta ^a \}\) for \(V^a\). Thus, given \(R_{abcd}\), we associate the symmetric matrix \(M_{ij}\), where (the choice of an orthonormal basis justifies the mismatch of the indices i, j)
It is instructive to see how the various derived tensors show up in \(M_{ij}\). In terms of the basis (2) it is natural to look at the various parts of \(M_{ij}\) as follows
This splitting is natural for reasons which will become apparent in the next sections. Note, however, that with this representation it is tempting to consider coordinate changes in \(\mathbb {R}^3\), which is not natural in this case. Rather, of interest is the change of basis in \(V^a\) and the related induced change of coordinates in the representation (10). See Sect. 3.2.
3.1 Representation of the Canonically Derived Parts of \(R_{abcd}\)
It is helpful to see how the components of the various tensors \(T_{ab}\), \(S_{ab}\), T, S, \(\mathring{H}_{abcd}\), \(\mathring{H}_{ab}\), H and W show up as components of \(M_{ij}\). As with \(\mathring{H}_{ab}\), \(\mathring{T}_{ab}\) denotes the trace-free part of \(T_{ab}\). The component \(M_{33}\) is immediate:
Similarly, for \(i=1,2\) we have
where the last equality follows from the trace-freeness of \(e^{(1)}_{ab}\) and \(e^{(2)}_{ab}\). This means that the components of \(\mathring{T}_{ab}\) (properly rescaled) go into \(M_{ij}\) as the components of \(\overline{v}\) (and \(\overline{v}^t\)) in (10). The same holds for \(\mathring{S}_{ab}\) and \(\mathring{H}_{ab}\), as \(\mathring{S}_{ab}=\mathring{T}_{ab}\) by Lemma 1, which then implies that also \(\mathring{H}_{ab}=\mathring{T}_{ab}=\mathring{S}_{ab}\). This latter relation follows from the trace-free part of the relation \(H_{ab}=\tfrac{1}{3}(T_{ab}+2 S_{ab})\). Hence
where \(\overrightarrow{\mathring{T}}=\overrightarrow{\mathring{S}}=\overrightarrow{\mathring{H}}\) encodes the two degrees of freedom in \(\mathring{T}_{ab}=\mathring{S}_{ab}=\mathring{H}_{ab}\). The matrix A is decomposed as \(A=\frac{\sigma }{2}I+\mathring{A}\) where I is the (\(2 \times 2\)) identity matrix and \(\mathring{A}\) is the trace-free part of A. In particular, \([A]=\sigma \).
To investigate \([M_{ij}]=M_{11}+M_{22}+M_{33}\), i.e., the trace of \(M_{ij}\), we note that for a general symmetric matrix \(R_{ij}\overset{\varvec{\cdot }}{=}\begin{pmatrix} a &{} b \\ b &{} c \end{pmatrix}\) we have \(R_{ij}e^{(1)}_{ij} \overset{\varvec{\cdot }}{=}\frac{a-c}{\sqrt{2}}\), \(R_{ij}e^{(2)}_{ij} \overset{\varvec{\cdot }}{=}\frac{2b}{\sqrt{2}}\), \(R_{ij}e^{(3)}_{ij} \overset{\varvec{\cdot }}{=}\frac{a+c}{\sqrt{2}}\). When \(M_{ij}\) is constructed from \(R_{abcd}\) which is an outer product \(R_{ab}R_{cd}\), the trace is given by \(M_{11}+M_{22}+M_{33}= (\frac{a-c}{\sqrt{2}})^2+(\frac{2b}{\sqrt{2}})^2+(\frac{a+c}{\sqrt{2}})^2=a^2+2b^2+c^2\) and from (6) this is S. Together with linearity, this shows that \( [M]=M_{11}+M_{22}+M_{33}=S \) also when \(R_{abcd}\) is formed as in (1). Taking the trace in (13) then gives
In addition, the relations below Eq. (7) show that
The two degrees of freedom in \(\mathring{A}\) correspond to the two degrees of freedom in \(\mathring{H}_{abcd}\).
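The construction of \(M_{ij}\) from \(R_{abcd}\) and the identity \([M]=S\) can be checked directly. A sketch with the basis (2) in induced coordinates (NumPy; made-up family):

```python
import numpy as np

s = 1.0 / np.sqrt(2.0)
E = [s * np.array([[1.0, 0.0], [0.0, -1.0]]),
     s * np.array([[0.0, 1.0], [1.0, 0.0]]),
     s * np.array([[1.0, 0.0], [0.0, 1.0]])]

family = [np.array([[2.0, 1.0], [1.0, 3.0]]),
          np.array([[1.0, 0.0], [0.0, 0.5]])]
R4 = sum(np.einsum('ab,cd->abcd', R, R) for R in family)

# M_ij = R_abcd e(i)^{ab} e(j)^{cd} in the orthonormal basis above
M = np.array([[np.einsum('abcd,ab,cd->', R4, E[i], E[j]) for j in range(3)]
              for i in range(3)])

S = np.einsum('abcd,ac,bd->', R4, np.eye(2), np.eye(2))
assert np.allclose(M, M.T)         # a symmetric quadratic form on R^3
assert np.isclose(np.trace(M), S)  # [M] = S
```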
3.2 The Behaviour of \(M_{ij}\) Under a Rotation of the Coordinate System in \(V^a\)
The components of \(M_{ij}\) are expressed in terms of the orthonormal basis tensors given by (2), and these in turn are based on the ON basis \(\{\hat{\varvec{\xi }} , \hat{\eta }\}\) for V. Putting the basis vectors in a row matrix \(\begin{pmatrix}\hat{\xi }&\hat{ \eta }\end{pmatrix}\) and the coordinates in a column matrix so that a vector \(\mathbf {u}=\begin{pmatrix}\hat{\xi }&\hat{\eta }\end{pmatrix} \begin{pmatrix} \xi \\ \eta \end{pmatrix}\), and considering only orthonormal frames, the relevant change of basis is given by a rotation matrix \(Q(v)=Q_v=\begin{pmatrix} \cos v &{} -\sin v \\ \sin v &{} \cos v \end{pmatrix}\), i.e., we consider the change of basis
This means that for a vector \(\mathbf {u}=\begin{pmatrix}\hat{\tilde{\xi }}&\hat{\tilde{\eta }}\end{pmatrix} \begin{pmatrix}\tilde{\xi }\\ \tilde{\eta }\end{pmatrix}=\begin{pmatrix}\hat{\xi }&\hat{\eta }\end{pmatrix} \begin{pmatrix} \xi \\ \eta \end{pmatrix}\), the coordinates transform as
For the components of the basis vectors \(e^{(1)}_{ab}, e^{(2)}_{ab}, e^{(3)}_{ab}\) we find (omitting the factor \(1/\sqrt{2}\))
and this means that the components \(M_{ij}\) transform as
But this latter expression is just
hence we have the following important remark/observation:
Remark 1
Viewing the matrix \(M_{ij}\) as an ellipsoid in \(\mathbb {R}^3\), the effect of a rotation by an angle v in \(V^a\) corresponds to a rotation of the ellipsoid by an angle 2v around the z-axis in \(\mathbb {R}^3\) (where the z-axis corresponds to the ‘isotropic direction’ given by \(g_{ab}\)).
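Remark 1 can be confirmed numerically: rotating every \(R^{(i)}_{ab}\) by \(Q_v\) rotates the associated \(3\times 3\) matrix by 2v about the z-axis. A sketch (NumPy; active rotation of a made-up family):

```python
import numpy as np

def M_of(family):
    # The 3x3 representation M_ij of R_abcd in the basis (2)
    s = 1.0 / np.sqrt(2.0)
    E = [s * np.array([[1.0, 0.0], [0.0, -1.0]]),
         s * np.array([[0.0, 1.0], [1.0, 0.0]]),
         s * np.array([[1.0, 0.0], [0.0, 1.0]])]
    R4 = sum(np.einsum('ab,cd->abcd', R, R) for R in family)
    return np.array([[np.einsum('abcd,ab,cd->', R4, E[i], E[j])
                      for j in range(3)] for i in range(3)])

v = 0.3
Q = np.array([[np.cos(v), -np.sin(v)], [np.sin(v), np.cos(v)]])
family = [np.array([[2.0, 1.0], [1.0, 3.0]]),
          np.array([[1.0, 0.0], [0.0, 0.5]])]
rotated = [Q @ R @ Q.T for R in family]

# Rotation by 2v around the z-axis in R^3
c, s2 = np.cos(2 * v), np.sin(2 * v)
U = np.array([[c, -s2, 0.0], [s2, c, 0.0], [0.0, 0.0, 1.0]])

assert np.allclose(M_of(rotated), U @ M_of(family) @ U.T)
```

The doubling of the angle reflects that the trace-free matrices \(e^{(1)}_{ab}\) and \(e^{(2)}_{ab}\) pick up the second harmonics \(\cos 2v\), \(\sin 2v\) under a rotation by v.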
4 The Equivalence Problem for \(R_{abcd}\)
The equivalence problem for \(R_{abcd}\) can be formulated in different ways (for an account in three dimensions, we refer to [3]). Given two tensors \(R_{abcd}\) and \(\widetilde{R}_{abcd}\), both with the symmetries implied by (1), the question whether they are the same or not is straightforward as one can compare the components in any basis. However, \(R_{abcd}\) and \(\widetilde{R}_{abcd}\) could live in different (but isomorphic) vector spaces, e.g. two tangent spaces at different points, and the concept of equality becomes less clear. On the other hand, in terms of components \(R_{ijkl}\) and \(\widetilde{R}_{ijkl}\), one could ask whether there is a change of coordinates which takes one set of components into the other. If so, one can find an (invertible) matrix \({P^i}_j\) so that
and the tensors are then said to be equivalent. As already mentioned, it is convenient to restrict the coordinate systems to orthonormal coordinates. This means that two different coordinate systems differ only by their orientation, i.e., the change of coordinates is given by a rotation matrix \(Q \in \) SO(2). Under the ‘Cartesian convention’ that all indices are written as subscripts, \(R_{abcd}\) and \(\widetilde{R}_{abcd}\) are equivalent if there is a matrix \(Q \in \) SO(2) so that (their Cartesian components satisfy)
4.1 Different Ways to Characterize the Equivalence of \(R_{abcd}\) and \(\widetilde{R}_{abcd}\)
In this section, we will discuss three ways to determine whether two tensors \(R_{abcd}\) and \(\widetilde{R}_{abcd}\) are equivalent or not. In Sects. 4.1.1 and 4.1.2 we present two such methods briefly, while Sect. 4.1.3, which is more complete, contains the main result of this work.
As mentioned in Sect. 1.1, the results of Sects. 4.1.1 and 4.1.2, which may be used in their own rights, rely on particular choices of basis matrices for \(V_{(ab)}\). The formulation in Sect. 4.1.3 on the other hand, is expressed in the components of \(R_{abcd}\) (in any coordinate system) directly.
4.1.1 Orientation of the Ellipsoid in \(\mathbb {R}^3\)
A necessary condition for \(R_{abcd}\) and \(\widetilde{R}_{abcd}\) to be equivalent is that their corresponding \(3 \times 3\)-matrices \(M_{ij}\) and \({\widetilde{M}}_{ij}\) have the same eigenvalues. On the other hand, this is not sufficient since the representation in \(\mathbb {R}^3\) should reflect the freedom in rotating the coordinate system in \(V^a \sim \mathbb {R}^2\). With the coordinates adopted, this corresponds to a rotation of the associated ellipsoid around the z-axis in \(\mathbb {R}^3\) (see Remark 1 in Sect. 3.2). This is illustrated in Fig. 2 where three ellipsoids, all representing positive definite symmetric mappings having identical eigenvalues, are shown. The first two ellipsoids can be rotated into each other by a rotation around the z-axis. This implies that the corresponding tensors \(R_{abcd}\) and \(\widetilde{R}_{abcd}\) are equivalent. The third ellipsoid can also be rotated into the two others, but these rotations are around directions other than the z-axis, which means that this ellipsoid represents a different tensor.
In the generic case, with all eigenvalues different, it is easy to test whether two different ellipsoids can be transferred into each other through a rotation around the z-axis. This will be the case if the corresponding eigenvectors (of \(M_{ij}\) and \(\widetilde{M}_{ij}\)) make the same angle with the z-axis. Hence it is just a matter of checking the z-components of the three normalized eigenvectors and seeing whether they are equal up to sign.
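A sketch of this generic-case test (NumPy; `equivalent_generic` is a hypothetical helper name, and the example matrix is made up with three distinct eigenvalues):

```python
import numpy as np

def equivalent_generic(M, Mt, tol=1e-8):
    # Generic case (distinct eigenvalues): same spectrum, and corresponding
    # normalized eigenvectors make the same angle with the z-axis
    w, V = np.linalg.eigh(M)     # eigenvalues ascending in both calls
    wt, Vt = np.linalg.eigh(Mt)
    if not np.allclose(w, wt, atol=tol):
        return False
    # equal angle with the z-axis <=> equal z-components up to sign
    return np.allclose(np.abs(V[2, :]), np.abs(Vt[2, :]), atol=tol)

M = np.array([[3.0, 1.0, 0.0], [1.0, 2.0, 0.0], [0.0, 0.0, 1.0]])
th = 0.7
U = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0, 0.0, 1.0]])
print(equivalent_generic(M, U @ M @ U.T))   # True: related by a z-rotation
print(equivalent_generic(M, 2.0 * M))       # False: different eigenvalues
```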
4.1.2 Components in a Canonical Coordinate System
In a sense, this is the most straightforward method. In a coordinate system which respects \(e^{(3)}_{ab}\) as the z-axis in \(V_{(ab)} \sim \mathbb {R}^3\), two tensors \(R_{abcd}\) and \(\widetilde{R}_{abcd}\) are equivalent if there is a rotation matrix (in two dimensions) Q such that
Hence, equivalence can be easily tested by first checking that \(T=\widetilde{T}\) and that \(|| \overrightarrow{\mathring{T}} ||=||\overrightarrow{\mathring{\widetilde{T}}} ||\). If this is the case (and if \(|| \overrightarrow{\mathring{T}} ||>0\)), one determines the rotation matrix Q which gives \(\overrightarrow{\mathring{T}} =Q^t\overrightarrow{\mathring{\widetilde{T}}} \), and equivalence is then determined by whether \(A=Q^t\widetilde{A} Q\) holds or not. If \(|| \overrightarrow{\mathring{T}} ||= || \overrightarrow{\mathring{\widetilde{T}}} ||=0\), the equivalence of A and \(\widetilde{A}\) can be determined directly, i.e., by checking whether \([A]=[\widetilde{A}]\) and \([A^2]=[\widetilde{A}^2]\) or not.
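A sketch of this procedure on the \(3\times 3\) representations, assuming the block layout of (10) with a \(2\times 2\) block A, a vector \(\overline{v}\) and the corner entry \(M_{33}\) (NumPy; `equivalent_canonical` is a hypothetical helper, and the tolerance handling is simplistic):

```python
import numpy as np

def rot2(th):
    return np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])

def equivalent_canonical(M, Mt, tol=1e-8):
    # Split both matrices into the blocks A (2x2), v (2-vector) and M33
    A, v, m33 = M[:2, :2], M[:2, 2], M[2, 2]
    At, vt, mt33 = Mt[:2, :2], Mt[:2, 2], Mt[2, 2]
    # Necessary: equal corner entries and equal norms ||v||
    if not (np.isclose(m33, mt33, atol=tol)
            and np.isclose(np.linalg.norm(v), np.linalg.norm(vt), atol=tol)):
        return False
    if np.linalg.norm(v) > tol:
        # Unique planar rotation taking v to vt; then compare the A blocks
        th = np.arctan2(vt[1], vt[0]) - np.arctan2(v[1], v[0])
        Q = rot2(th)
        return np.allclose(Q @ A @ Q.T, At, atol=tol)
    # v = vt = 0: A and At only need to share the invariants [A] and [A^2]
    return (np.isclose(np.trace(A), np.trace(At), atol=tol)
            and np.isclose(np.trace(A @ A), np.trace(At @ At), atol=tol))

M = np.array([[2.0, 0.5, 1.0], [0.5, 1.0, 0.3], [1.0, 0.3, 4.0]])
U = np.zeros((3, 3)); U[:2, :2] = rot2(1.1); U[2, 2] = 1.0
print(equivalent_canonical(M, U @ M @ U.T))   # True: a z-rotation relates them
M2 = M.copy(); M2[0, 0] += 1.0
print(equivalent_canonical(M, M2))            # False: the A blocks differ
```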
4.1.3 Equivalence Through (algebraic) Invariants of \(R_{abcd}\)
If a solution is found, this is perhaps the most satisfactory way to establish equivalence, in particular if the invariants are constructed by simple algebraic operations only. (For instance, for a symmetric \(3 \times 3\)-matrix A one can take the three eigenvalues as invariants, or else, for instance, the traces of \(A, A^2\) and \(A^3\). The former set requires some calculations, but the latter is immediate.)
Examples of invariants are \(T=R_{abcd}g^{ab}g^{cd}\), \(S=R_{abcd}g^{ac}g^{bd}\) and the invariants \(H=H_{ab}g^{ab}, W=W_{ab}g^{ab}\). To produce the invariants, we use the tensor \(R_{abcd}\) and the metric \(g_{ab}\). However, if we regard \(V^a \sim \mathbb {R}^2\) as oriented, so that the orthonormal basis \(\{\hat{\varvec{\xi }} , \hat{\eta }\}\) for \(V^a\) also is oriented, then invariants can also be formed in another way. Namely, since the space of symmetric \(2 \times 2\) matrices is 3-dimensional, and since the metric \(g_{ab}\) singles out a 1-dimensional subspace, it also determines a 2-dimensional subspace L, consisting of all elements orthogonal to \(g_{ab}\). This subspace is the set of all symmetric \(2 \times 2\) matrices which are also trace-free. L can be given an orientation through an area form, which in turn inherits the orientation from \(V^{a}\).
In general, with right-handed Cartesian coordinates \(x^1, x^2\), the area form \(\epsilon \) is given by \(\epsilon = dx^1 \wedge dx^2\) where \((\omega \wedge \mu )_{ab}=\omega _a \mu _b -\omega _b \mu _a\). With the orthonormal basis \(\{\hat{\varvec{\xi }} , \hat{\varvec{\eta }}\}\) (for \(V^a\)) also right-handed, we define, cf. (2),
The area form on L is then \(\epsilon \sim e^{(1)} \wedge e^{(2)}\), or
It is not hard to see that this definition is independent of the choice of right-handed orthonormal basis \(\{\hat{\varvec{\xi }} , \hat{\varvec{\eta }} \}\). We observe that \(2E_{abcd}=(\hat{\varvec{\xi }}_a \hat{\varvec{\xi }}_b - \hat{\varvec{\eta }}_a \hat{\varvec{\eta }}_b)(\hat{\varvec{\xi }}_c \hat{\varvec{\eta }}_d + \hat{\varvec{\eta }}_c \hat{\varvec{\xi }}_d)-(\hat{\varvec{\xi }}_a \hat{\varvec{\eta }}_b + \hat{\varvec{\eta }}_a \hat{\varvec{\xi }}_b)(\hat{\varvec{\xi }}_c \hat{\varvec{\xi }}_d - \hat{\varvec{\eta }}_c \hat{\varvec{\eta }}_d)\). By replacing \(\hat{\varvec{\xi }}\) by \(\hat{\varvec{\omega }}=\cos v\, \hat{\varvec{\xi }}+\sin v \, \hat{\varvec{\eta }}\) and \(\hat{\varvec{\eta }}\) by \(\hat{\varvec{\mu }}=-\sin v\, \hat{\varvec{\xi }}+\cos v \, \hat{\varvec{\eta }}\), i.e., a rotated orthonormal basis, it is straightforward to check that
so that \(E_{abcd}\) is well defined. We recall that the area form \(E_{abcd}\) is defined, through the induced metric, on the plane L (which is itself defined through the metric \(g_{ab}\)) and through the orientation on \(V^a\). Hence \(E_{abcd}\) can be used when forming invariants.
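The rotation check above is easy to carry out numerically. The sketch below (an illustration, not from the paper; it assumes NumPy) builds \(E_{abcd}\) from the displayed formula for \(2E_{abcd}\) and confirms that a rotated orthonormal basis yields the same tensor, as well as the antisymmetry \(E_{abcd}=-E_{cdab}\) expected of an area form:

```python
import numpy as np

def E_tensor(xi, eta):
    """2 E_{abcd} = (xi_a xi_b - eta_a eta_b)(xi_c eta_d + eta_c xi_d)
                  - (xi_a eta_b + eta_a xi_b)(xi_c xi_d - eta_c eta_d)."""
    e1 = np.outer(xi, xi) - np.outer(eta, eta)
    e2 = np.outer(xi, eta) + np.outer(eta, xi)
    return (np.einsum('ab,cd->abcd', e1, e2)
            - np.einsum('ab,cd->abcd', e2, e1)) / 2

xi, eta = np.array([1.0, 0.0]), np.array([0.0, 1.0])
E = E_tensor(xi, eta)

# Rotate the orthonormal basis by an angle v: E must be unchanged.
v = 0.4
om = np.cos(v) * xi + np.sin(v) * eta
mu = -np.sin(v) * xi + np.cos(v) * eta
print(np.allclose(E, E_tensor(om, mu)))                  # True
print(np.allclose(E, -np.transpose(E, (2, 3, 0, 1))))    # antisymmetric in the index pairs
```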
We will now state the result of this work, namely the existence of six invariants which can be used to investigate equivalence of two tensors \(R_{abcd}\) and \(\widetilde{R}_{abcd}\). We start by defining
where \(E_{abcd}\) is defined by (17) and (18). Similarly, we define \(\widetilde{S}, \widetilde{T}, \widetilde{J}_0, \widetilde{J}_1, \widetilde{J}_2\) and \( \widetilde{J}_3\) as the corresponding invariants formed from \(\widetilde{R}_{abcd}\). We remark that for most of these invariants, immediate interpretations still remain to be found. Rather, their value lies in the fact that they form a set which can be used to establish the equivalence in Theorem 1 below. On the other hand, some interpretations are possible. In particular, the quotient T/S (see Definition 1) lies in the interval [1, 2] and has the meaning given by Lemma 2.
Theorem 1
Suppose that \(R_{abcd}=\sum _{i=1}^n R^{(i)}_{ab}R^{(i)}_{cd}\), with \(R^{(i)}_{ab}\ge 0\) and that \(R_{ijkl}\) are the components of \(R_{abcd}\) in some basis. Suppose also that \(\widetilde{R}_{abcd}=\sum _{i=1}^{\widetilde{n}} \widetilde{R}^{(i)}_{ab}{\widetilde{R}}^{(i)}_{cd}\), with \({\widetilde{R}}^{(i)}_{ab}\ge 0\) and that \(\widetilde{R}_{ijkl}\) are the components of \(\widetilde{R}_{abcd}\) in some, possibly unrelated, basis. If (and only if) \(S=\widetilde{S}, T=\widetilde{T}, J_0=\widetilde{J}_0, \ J_1=\widetilde{J}_1, J_2=\widetilde{J}_2, J_3=\widetilde{J}_3\), then there is a transformation matrix \({P^i}_j\) such that
Proof
Since the invariants are defined without reference to any basis, it is sufficient to consider the components expressed in an orthonormal frame, and in that case we must prove the existence of a rotation matrix \(Q \in \) SO(2) so that
Since
we can consider the invariants formed from the components of
and we must demonstrate the existence of a rotation matrix \(Q=Q_{2v}\) such that
We make the ansatz
Through (21) it is straightforward to see that
so if \(S=\widetilde{S}, T=\widetilde{T}, J_0=\widetilde{J}_0, J_1=\widetilde{J}_1\), it follows that \(\sigma =\widetilde{\sigma }, c=\widetilde{c}\), \(a^2+b^2=\widetilde{a}^2+\widetilde{b}^2\) and \(x^2+y^2=\widetilde{x}^2+\widetilde{y}^2\). Since the isotropic part of A, i.e., \(\frac{\sigma }{2}I\), is unaffected by a rotation of the coordinate system, we consider the traceless parts, and the task is to find a rotation matrix Q such that
if also \(J_2=\widetilde{J}_2, J_3=\widetilde{J}_3\). Again it is straightforward to calculate the remaining invariants, and we find
and similarly for \(\widetilde{J}_2, \widetilde{J}_3\). Hence, (since \(\sigma =\widetilde{\sigma }, c=\widetilde{c}\))
Suppose first that \(x^2+y^2>0\). The equality \(x^2+y^2=\widetilde{x}^2+\widetilde{y}^2\) then guarantees the existence of the rotation matrix Q, which is determined via the relation. This can also be expressed as for some rotation matrices \(Q_1, Q_2\), where \(Q=Q_2 Q_1^t\). We now choose the rotation matrix \(Q_1\) so that in the untilded coordinates, \(y=0\). Similarly, we choose \(Q_2\) so that for the tilded coordinates, we get a frame where \(\widetilde{y}=0\). The equalities between the invariants in (25) then become
so that \(a=\widetilde{a}\), \(b=\widetilde{b}\). This proves the theorem when \(x^2+y^2>0\). When \(x^2+y^2=\widetilde{x}^2+\widetilde{y}^2=0\), i.e., \(x=y=\widetilde{x}=\widetilde{y}=0\), the remaining equality \(a^2+b^2=\widetilde{a}^2+\widetilde{b}^2\) is sufficient since we can again choose frames in which \(b=\widetilde{b}=0\) and \(a>0, \widetilde{a}>0\). It then follows that \(a=\widetilde{a}\). \(\square \)
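A numerical sanity check of the "only if" direction is straightforward: invariants built by contracting \(R_{abcd}\) against the metric must agree for a tensor and any rotated copy of it. The sketch below (an illustration, not from the paper; it assumes NumPy and a Euclidean metric, and checks only the contractions \(T\) and \(S\) from the text, not the full set \(J_0,\dots,J_3\)):

```python
import numpy as np

def rotate4(R, Q):
    """Components of a fourth order tensor after the change of basis Q."""
    return np.einsum('ia,jb,kc,ld,abcd->ijkl', Q, Q, Q, Q, R)

rng = np.random.default_rng(2)
tensors = [m @ m.T for m in rng.standard_normal((4, 2, 2))]   # PSD family
R = sum(np.einsum('ab,cd->abcd', Ri, Ri) for Ri in tensors)

v = 1.1
Q = np.array([[np.cos(v), -np.sin(v)], [np.sin(v), np.cos(v)]])
Rt = rotate4(R, Q)

g = np.eye(2)
T, Tt = (np.einsum('abcd,ab,cd->', X, g, g) for X in (R, Rt))
S, St = (np.einsum('abcd,ac,bd->', X, g, g) for X in (R, Rt))
print(np.isclose(T, Tt) and np.isclose(S, St))   # True: invariants agree
```

The converse direction, that agreement of all six invariants forces the existence of such a rotation, is exactly the content of the theorem and cannot be read off from \(T\) and \(S\) alone.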
5 Discussion
In this work, we started with a family of symmetric positive (semi-)definite tensors in two dimensions and considered its variance. This led us to a fourth order tensor \(R_{abcd}\) with the same symmetries as the elasticity tensor in continuum mechanics. After listing a number of possible issues to address, we focused on the equivalence problem. Namely, given the components of two such tensors \(R_{abcd}\) and \(\widetilde{R}_{abcd}\), how can one determine whether they represent the same tensor (but in different coordinate systems)? In Sect. 4, we saw that this could be investigated in different ways. The result of Theorem 1 is the most satisfactory in the sense that it is expressible in terms of the components of the fourth order tensors directly.
There are two natural extensions and/or ways to continue this work. The first is to apply the result to realistic families of, e.g., diffusion tensors in two dimensions. The objective is then, apart from establishing possible equivalences, to investigate the geometric meaning of the invariants. The other natural continuation is to investigate the corresponding problem in three dimensions. The degrees of freedom of \(R_{abcd}\) will then increase from 6 to 21, leaving us with a substantially harder, but also perhaps more interesting, problem.
Notes
- 1.
For the notation of tensors used here, see Sect. 2.1.
References
Basser, P.J., Mattiello, J., LeBihan, D.: MR diffusion tensor spectroscopy and imaging. Biophys. J. 66(1), 259–267 (1994)
Basser, P.J., Pajevic, S.: A normal distribution for tensor-valued random variables: applications to diffusion tensor MRI. IEEE Trans. Med. Imaging 22(7), 785–94 (2003). https://doi.org/10.1109/TMI.2003.815059
Boehler, J.P., Kirillov Jr., A.A., Onat, E.T.: On the polynomial invariants of the elasticity tensor. J. Elast. 34(2), 97–110 (1994)
Burgeth, B., Didas, S., Florack, L., Weickert, J.: A generic approach to diffusion filtering of matrix-fields. Computing 81, 179–197 (2007). https://doi.org/10.1007/s00607-007-0248-9
Callaghan, P.T.: Translational Dynamics and Magnetic Resonance: Principles of Pulsed Gradient Spin Echo NMR. Oxford University Press, New York (2011)
Helbig, K.: Review paper: What Kelvin might have written about elasticity. Geophys. Prospect. 61, 1–20 (2013). https://doi.org/10.1111/j.1365-2478.2011.01049.x
Itin, Y., Hehl, F.W.: Irreducible decompositions of the elasticity tensor under the linear and orthogonal groups and their physical consequences. J. Phys.: Conf. Ser. 597, 012046 (2015)
Jian, B., Vemuri, B.C., Özarslan, E., Carney, P.R., Mareci, T.H.: A novel tensor distribution model for the diffusion-weighted MR signal. NeuroImage 37(1), 164–176 (2007). https://doi.org/10.1016/j.neuroimage.2007.03.074
Kanatani, K.: Group-Theoretical Methods in Image Understanding. Springer, Berlin (1990)
Knutsson, H.: Representing local structure using tensors. In: Proceedings of the 6th Scandinavian Conference on Image Analysis, pp. 244–251. Oulu University, Oulu (1989)
Özarslan, E., Yolcu, C., Herberthson, M., Westin, C.F., Knutsson, H.: Effective potential for magnetic resonance measurements of restricted diffusion. Front. Phys. 5, 68 (2017)
Shakya, S., Batool, N., Özarslan, E., Knutsson, H.: Multi-fiber reconstruction using probabilistic mixture models for diffusion MRI examinations of the brain. In: Schultz, T., Özarslan, E., Hotz, I. (eds.) Modeling, Analysis, and Visualization of Anisotropy, pp. 283–308. Springer International Publishing, Cham (2017)
Slaughter, W.S.: The Linearized Theory of Elasticity. Birkhäuser, Basel (2002)
Thomson, W.: XXI. Elements of a mathematical theory of elasticity. Philos. Trans. R. Soc. Lond. 146, 481–498 (1856)
Voigt, W.: Lehrbuch der Kristallphysik. Vieweg+Teubner Verlag (1928)
Wald, R.M.: General Relativity. University of Chicago Press, Chicago (1984)
Weickert, J.: Anisotropic Diffusion in Image Processing. Teubner-Verlag, Stuttgart (1998)
Westin, C.F., Knutsson, H., Pasternak, O., Szczepankiewicz, F., Özarslan, E., van Westen, D., Mattisson, C., Bogren, M., O’Donnell, L.J., Kubicki, M., Topgaard, D., Nilsson, M.: Q-space trajectory imaging for multidimensional diffusion MRI of the human brain. NeuroImage 135, 345–62 (2016). https://doi.org/10.1016/j.neuroimage.2016.02.039
Yolcu, C., Memiç, M., Şimşek, K., Westin, C.F., Özarslan, E.: NMR signal for particles diffusing under potentials: from path integrals and numerical methods to a model of diffusion anisotropy. Phys. Rev. E 93, 052602 (2016)
Zucchelli, M., Deslauriers-Gauthier, S., Deriche, R.: A closed-form solution of rotation invariant spherical harmonic features in diffusion MRI, pp. 77–89. Springer, Cham (2019)
Acknowledgements
The authors acknowledge the following sources for funding: Swedish Foundation for Strategic Research AM13-0090, the Swedish Research Council 2015-05356 and 2016-04482, Linköping University Center for Industrial Information Technology (CENIIT), VINNOVA/ITEA3 17021 IMPACT, Analytic Imaging Diagnostics Arena (AIDA), and National Institutes of Health P41EB015902.
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2021 The Author(s)
Cite this paper
Herberthson, M., Özarslan, E., Westin, CF. (2021). Variance Measures for Symmetric Positive (Semi-) Definite Tensors in Two Dimensions. In: Özarslan, E., Schultz, T., Zhang, E., Fuster, A. (eds) Anisotropy Across Fields and Scales. Mathematics and Visualization. Springer, Cham. https://doi.org/10.1007/978-3-030-56215-1_1
Print ISBN: 978-3-030-56214-4
Online ISBN: 978-3-030-56215-1