1 Introduction

Positive semi-definite second order tensors arise in several applications. For instance, in image processing, a structure tensor is computed from greyscale images that captures the local orientation of the image intensity variations [10, 17] and is employed in a broad range of tasks. Diffusion tensor magnetic resonance imaging (DT-MRI) [1, 5] characterizes anisotropic water diffusion by enabling the measurement of the apparent diffusion tensor, which makes it possible to delineate the fibrous structure of the tissue. Recent work has shown that diffusion MR measurements of restricted diffusion obscure the fine details of the pore shape under certain experimental conditions [11], and the remaining features can be encoded accurately by a confinement tensor [19].

All such second order tensors share the same mathematical properties, namely, they are real-valued, symmetric, and positive semi-definite. Moreover, in these disciplines, one encounters a collection of such tensors, e.g., at different locations of the image. Populations of such tensors have also been key to some studies aiming to model the underlying structure of the medium under investigation [8, 12, 18].

Irrespective of the particular application, let \(R_{ab}\) denote such tensors, and we shall refer to the set of n tensors as \(\{R_{ab}^{(i)}\}_{i}\). We wish to find relevant descriptors or models of such a family. One relevant statistical measure of this family is the (population) variance

$$\begin{aligned} \frac{1}{n}\sum _{i=1}^n (R^{(i)}_{ab}-\widehat{R}_{ab})(R^{(i)}_{cd}-\widehat{R}_{cd}) =\left( \frac{1}{n}\sum _{i=1}^n R^{(i)}_{ab}R^{(i)}_{cd}\right) -\widehat{R}_{ab}\widehat{R}_{cd} \ , \end{aligned}$$

where \(\widehat{R}_{ab}=\frac{1}{n}\sum _{i=1}^n R^{(i)}_{ab}\) is the mean. (For another approach, see e.g., [8]). In this paper, we are interested in the first term, i.e., we study the fourth order tensor (skipping the normalization)

$$\begin{aligned} R_{abcd}=\sum _{i=1}^n R^{(i)}_{ab}R^{(i)}_{cd}, \quad R^{(i)}_{ab}\ge 0, \end{aligned}$$
(1)

where \(R^{(i)}_{ab}\ge 0\) stands for \(R^{(i)}_{ab}\) being positive semi-definite. It is obvious that \(R_{abcd}\) has the symmetries \(R_{abcd}=R_{bacd}=R_{abdc}\) and \(R_{abcd}=R_{cdab}\), i.e., \(R_{abcd}\) has the same symmetries as the elasticity tensor [14] from continuum mechanics. The elasticity tensor is well studied [13], e.g., with respect to classification, decompositions, and invariants, although in most cases in three dimensions. The same (w.r.t. symmetries) tensor has also been studied in the context of diffusion MR [2].
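As a purely illustrative sketch (not part of the original text), the construction (1) and the stated symmetries can be checked numerically with NumPy; the helper name `random_psd_2x2` is ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_psd_2x2(rng):
    # B B^T is always symmetric and positive semi-definite
    B = rng.standard_normal((2, 2))
    return B @ B.T

tensors = [random_psd_2x2(rng) for _ in range(5)]

# R_abcd = sum_i R^(i)_ab R^(i)_cd, Eq. (1) (normalisation skipped)
R = sum(np.einsum('ab,cd->abcd', Ri, Ri) for Ri in tensors)

# elasticity-type symmetries: R_abcd = R_bacd = R_abdc = R_cdab
assert np.allclose(R, R.transpose(1, 0, 2, 3))
assert np.allclose(R, R.transpose(0, 1, 3, 2))
assert np.allclose(R, R.transpose(2, 3, 0, 1))
```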

In this paper we will focus on the corresponding tensor \(R_{abcd}\) in two dimensions. First, there are direct applications in image processing, and second, the problems posed are more accessible in two dimensions than in three. In particular, we study the equivalence problem, namely, we ask the question: given the components \(R_{ijkl}\) and \(\widetilde{R}_{ijkl}\) of two such tensors, do they represent the same tensor in different coordinate systems (see Sects. 2.1.2 and 4)?

1.1 Outline

Section 2 contains tensorial preliminaries. We assume some basic knowledge of tensors, although some definitions are given for completeness. The notation used is commented on, and in particular the three-dimensional Euclidean vector space \(V_{(ab)}\) is introduced.

In Sect. 2.1.2, we make some general remarks concerning the tensor \(R_{abcd}\) and specify the problem we focus on. Section 2.1 is concluded with some remarks on the Voigt/Kelvin notation and the corresponding visualisation in \(\mathbb {R}^3\).

Section 2.2 gives examples of invariants, especially invariants which are easily accessible from \(R_{abcd}\). Also, more general invariant/canonical decompositions of \(R_{abcd}\) are given.

In Sect. 3, we discuss how the tensor \(R_{abcd}\) can (given a careful choice of basis) be expressed in terms of a \(3 \times 3\) matrix, and how this matrix is affected by a rotation of the coordinate system in the underlying two-dimensional space on which \(R_{abcd}\) is defined.

In Sect. 4 we return to the equivalence problem and give the main result of this work. In Sect. 4.1.1 we provide a geometric condition for equivalence, while in Sect. 4.1.2, we present the equivalence in terms of a \(3 \times 3\) matrix. Both these characterisations rely on the choice of particular basis elements for the vector spaces employed. In Sect. 4.1.3 the same equivalence conditions are given in a form which does not assume a particular basis.

2 Preliminaries

In this section we clarify the notation and some concepts which we need. Section 2.1 deals with the (alternatives of) tensor notation and some representations. The equivalence (and related) problems are also briefly addressed. Section 2.2 accounts for some natural invariants, traces and decompositions of \(R_{abcd}\).

We will assume some familiarity with tensors, but to clarify the view on tensors we recall some facts. We start with a (finite dimensional) vector space V with dual \(V^*\). A tensor of order (p,q) is then a multi-linear mapping \(\underbrace{V \times V \cdots \times V}_{q} \times \underbrace{V^* \times \cdots \times V^*}_{p} \rightarrow \mathbb {R}\). Moreover, a (non-degenerate) metric/scalar product \(g: V \times V \rightarrow \mathbb {R}\) gives an isomorphism from V to \(V^*\) through \(v \rightarrow g(v,\cdot )\), and it is this isomorphism which is used to ‘raise and lower indices’, see below. Indeed, for a fixed \(v \in V\), \(g(v,\cdot )\) is a linear mapping \(V \rightarrow \mathbb {R}\), i.e., an element of \(V^*\).

2.1 Tensor Notation and Representations

There is a plethora of notations for tensors. Here, we follow the well-adopted convention [16] that early lower case Latin letters (\({T^a}_{bc}\)) refer to the tensor as a geometric object, its type being inferred from the indices and their positions (the abstract index notation). \(g_{ab}\) denotes the metric tensor. When the indices are lower case Latin letters from the middle of the alphabet, \({T^i}_{jk}\), they refer to components of \({T^a}_{bc}\) in a certain frame. The super-index i denotes a contravariant index while the sub-indices jk are covariant. For instance, a typical vector (tensor of type (1, 0)) will be written \(v^a\) with components \(v^i\), while the metric \(g_{ab}\) (tensor of type (0, 2)) has components \(g_{ij}\). On a number of occasions, it will also be useful to express quantities in terms of components with respect to orthonormal frames, i.e., Cartesian coordinates. This is sometimes referred to as ‘Cartesian tensors’, and the distinction between contra- and covariant indices is obscured. In these situations, it is possible (but not necessary) to write all indices as sub-indices, and sometimes the symbol \(\overset{\varvec{\cdot }}{=}\) is used to indicate that an equation is only valid in Cartesian coordinates. For example \(T_i\overset{\varvec{\cdot }}{=}T_{ijk}\delta _{jk}\) instead of \(T^i={T^i}_{jk}g^{jk}={T^{ik}}_k\). Often this is clear from the context, but we will sometimes use \(\overset{\varvec{\cdot }}{=}\) to remind the reader that a Cartesian assumption is made. Here, the Einstein summation convention is implied, i.e., repeated indices are to be summed over, so that for instance \(T^i={T^i}_{jk}g^{jk}={T^{ik}}_k= \sum \limits _{j=1}^n\sum \limits _{k=1}^n {T^i}_{jk}g^{jk}=\sum \limits _{k=1}^n {T^{ik}}_k\) if each index ranges from 1 to n. We have also used the metric \(g_{ij}\) and its inverse \(g^{ij}\) to raise and lower indices. For instance, since \(g_{ij} v^i\) is an element of \(V^*\), we write \(g_{ij} v^i=v_j\).

We also recall the notation for symmetrisation. For a two-tensor, \(T_{(ab)}=\frac{1}{2}(T_{ab}+T_{ba})\), while more generally for a tensor \(T_{a_1 a_2 \cdots a_n}\) of order (0, n) we have

$$ T_{(a_1 a_2 \cdots a_n)}=\frac{1}{n!}\sum _\pi T_{a_{\pi (1)}a_{\pi (2)}\cdots a_{\pi (n)}} $$

where the sum is taken over all permutations \(\pi \) of \(1,2, \ldots , n\). Naturally, this convention can also be applied to subsets of indices. For instance, \(H_{a(bc)}=\tfrac{1}{2}(H_{abc}+H_{acb})\).
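The symmetrisation convention can be illustrated with a short NumPy sketch (ours, not from the paper; the function name `symmetrise` is hypothetical):

```python
import numpy as np
from itertools import permutations

def symmetrise(T):
    # T_(a1...an) = (1/n!) * sum over all index permutations pi
    perms = list(permutations(range(T.ndim)))
    return sum(T.transpose(p) for p in perms) / len(perms)

rng = np.random.default_rng(1)
H = rng.standard_normal((2, 2, 2))
Hsym = symmetrise(H)

# the result is invariant under any index permutation, e.g. swapping a and b
assert np.allclose(Hsym, Hsym.transpose(1, 0, 2))

# symmetrisation over a subset of indices: H_a(bc) = (H_abc + H_acb)/2
H_abc = 0.5 * (H + H.transpose(0, 2, 1))
assert np.allclose(H_abc, H_abc.transpose(0, 2, 1))
```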

2.1.1 The Vector Space of Symmetric Two-Tensors

In any coordinate frame a symmetric tensor \(R_{ab}\) (i.e., \(R_{ab}=R_{ba}\)) is represented by a symmetric matrix \(R_{ij}\) (\( 2 \times 2\) or \(3 \times 3\) depending on the dimension of the underlying space). In the two-dimensional case, with the underlying vector space \(V^{a}\sim \mathbb {R}^2\), this means that \(R_{ab}\) lives in a three-dimensional vector space, which we denote by \(V_{(ab)}\). \(V_{(ab)}\) is equipped with a natural scalar product: \(\langle A_{ab},B_{ab}\rangle = A_{ab}B^{ab}\), making it into a three-dimensional Euclidean space. Here \(A_{ab}B^{ab}=A_{ab}B_{cd}g^{ac}g^{bd}\), i.e., the contraction of \(A_{ab}B_{cd}\) over the indices ac and bd, and the tensor product \(A_{ab}B_{cd}\) itself is the tensor of order (0, 4) given by \((A_{ab}B_{cd})v^a u^b w^c m^d= (A_{ab}v^a u^b)(B_{cd} w^c m^d)\) together with multi-linearity.

2.1.2 The Tensor \(R_{abcd}\) and the Equivalence Problem

As noted above, \(R_{abcd}\) given by (1) has the symmetries \(R_{abcd}=R_{(ab)cd}=R_{ab(cd)}\) and \(R_{abcd}=R_{cdab}\), and it is not hard to see that this gives \(R_{abcd}\) six degrees of freedom in two dimensions. (See also Sect. 2.1.3.) It is also interesting to note that \(R_{abcd}\) provides a mapping \(V_{(ab)} \rightarrow V_{(ab)}\) through

$$ R_{ab} \mapsto R_{abcd}R^{cd}, $$

and that this mapping is symmetric (due to the symmetry \(R_{abcd}=R_{cdab}\)). Given \(R_{abcd}\) there are a number of questions one can ask, e.g.,

  • Feasibility—given a tensor \(R_{abcd}\) with the correct symmetries, can it be written in the form (1)?

  • Canonical decomposition—given \(R_{abcd}\) of the form (1), can one write \(R_{abcd}\) as a canonical sum of the form (1), but with a fixed number of terms (cf. eigenvector decomposition of symmetric matrices)?

  • Visualisation—since fourth order tensors are a bit involved, how can one visualise them in ordinary space?

  • Characterisation/relevant sets of invariants—what invariants are relevant from an application point of view?

  • The equivalence problem—in terms of components, how do we know if \(R_{ijkl}\) and \(\widetilde{R}_{ijkl}\) represent the same tensor when they are in different coordinate systems?

We will now focus on the equivalence problem in two dimensions. This problem can be formulated as above: given, in terms of components, two tensors (with the symmetries we consider) \(R_{ijkl}\) and \(\widetilde{R}_{ijkl}\), do they represent the same tensor in the sense that there is a coordinate transformation taking the components \(R_{ijkl}\) into the components \(\widetilde{R}_{ijkl}\)? In other words, does there exist an (invertible) matrix \({P^m}_i\) so that

$$ R_{ijkl}=\widetilde{R}_{mnop}{P^m}_i {P^n}_j {P^o}_k {P^p}_l ? $$

This problem can also be formulated when \(R_{ijkl}\) and \(\widetilde{R}_{ijkl}\) are expressed in Cartesian frames. Then the coordinate transformation must be a rotation, i.e., given by a rotation matrix \({Q^i}_j \in \) SO(2). Hence, the problem of (unitary) equivalence is: Given \(R_{ijkl}\) and \(\widetilde{R}_{ijkl}\), both expressed in Cartesian frames, is there a matrix (applying the ‘Cartesian convention’) \(Q_{ij} \in \) SO(2) so that

$$ R_{ijkl}=\widetilde{R}_{mnop}Q_{mi} Q_{nj} Q_{ok} Q_{pl} ? $$
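As a numerical illustration of the unitary equivalence problem (a sketch of ours, not part of the paper; the helper `rotate` is hypothetical), one can generate components, rotate them with a known element of SO(2), and recover equivalence by a brute-force scan over angles:

```python
import numpy as np

rng = np.random.default_rng(2)
tensors = [(lambda B: B @ B.T)(rng.standard_normal((2, 2))) for _ in range(4)]
R = sum(np.einsum('ab,cd->abcd', A, A) for A in tensors)

def rotate(R4, v):
    # R_ijkl = R~_mnop Q_mi Q_nj Q_ok Q_pl (Cartesian convention)
    Q = np.array([[np.cos(v), -np.sin(v)], [np.sin(v), np.cos(v)]])
    return np.einsum('mnop,mi,nj,ok,pl->ijkl', R4, Q, Q, Q, Q)

Rtilde = rotate(R, 0.7)          # the same tensor, expressed in a rotated frame

# rotating back with the inverse angle recovers the original components
assert np.allclose(rotate(Rtilde, -0.7), R)

# brute-force scan over SO(2): the residual vanishes at a matching angle
angles = np.linspace(0.0, 2 * np.pi, 720, endpoint=False)
residuals = np.array([np.abs(rotate(Rtilde, v) - R).max() for v in angles])
best = angles[residuals.argmin()]

# Q and -Q act identically (Q enters an even number of times),
# so the matching angle is only determined modulo pi
d = (best + 0.7) % np.pi
assert min(d, np.pi - d) < 0.02
```

The scan is of course only a numerical test of equivalence; the paper's goal is a closed-form characterisation.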

2.1.3 The Voigt/Kelvin Notation

Since (in two dimensions) the space \(V_{(ab)}\) is three-dimensional, one can introduce coordinates, for example \(\begin{pmatrix} a &{} b \\ b &{} c \end{pmatrix} \sim (a,c,b)^t\), and use vector algebra on \(\mathbb {R}^3\). This is used in the Voigt notation [15] and the related Kelvin notation [6]. As always, one must be careful to specify with respect to which basis in \(V_{(ab)}\) the coordinates are taken. For instance, in the correspondence \(\begin{pmatrix} a &{} b \\ b &{} c \end{pmatrix} \sim (a,c,b)^t\), the understood basis for \(V_{(ab)}\) (in the understood/induced coordinate system) is \(\begin{pmatrix} 1 &{} 0 \\ 0 &{} 0 \end{pmatrix}, \begin{pmatrix} 0 &{} 0 \\ 0 &{} 1 \end{pmatrix}, \begin{pmatrix} 0 &{} 1 \\ 1 &{} 0 \end{pmatrix}\). These elements are orthogonal (viewed as vectors in \(V_{(ab)}\)) to each other, but not (all of them) of unit length.

Since the unit matrix plays a special role, we make the following choice. Starting with an orthonormal basis \(\{\hat{\xi } , \hat{\eta }\}\) for V (i.e., \(\{\hat{\xi }^a , \hat{\eta }^a \}\) for \(V^a\)), a suitable orthonormal basis for \(V_{(ab)}\) is \(\{e^{(1)}_{ab}, e^{(2)}_{ab}, e^{(3)}_{ab} \}\) where \(e^{(1)}_{ab}=\frac{1}{\sqrt{2}}(\xi _a \xi _b-\eta _a \eta _b)\), \(e^{(2)}_{ab}=\frac{1}{\sqrt{2}}(\xi _a \eta _b+\eta _a \xi _b)\), \(e^{(3)}_{ab}=\frac{1}{\sqrt{2}}(\xi _a \xi _b+\eta _a \eta _b)\), i.e., in the induced basis we have

$$\begin{aligned} e^{(1)}_{ij} =\frac{1}{\sqrt{2}}\begin{pmatrix} 1 &{} 0 \\ 0 &{} -1 \end{pmatrix} \sim \hat{x}, \quad e^{(2)}_{ij} =\frac{1}{\sqrt{2}}\begin{pmatrix} 0 &{} 1 \\ 1 &{} 0 \end{pmatrix} \sim \hat{y}, \quad e^{(3)}_{ij} =\frac{1}{\sqrt{2}}\begin{pmatrix} 1 &{} 0 \\ 0 &{} 1 \end{pmatrix} \sim \hat{z}. \end{aligned}$$
(2)

In this basis, we write an arbitrary element \(M_{ab} \in V_{(ab)}\) as \(M_{ab}=x\,e^{(1)}_{ab}+y\,e^{(2)}_{ab}+z\,e^{(3)}_{ab}\), which means that \(M_{ab}\) gets the coordinates \((x,y,z)\). Note that \(M_{ij}\) is positive semi-definite precisely when \(z^2-x^2-y^2 \ge 0\) and \(z \ge 0\). In terms of the coordinates of the Voigt notation, the tensor \(R_{abcd}\) corresponds to a symmetric mapping \(\mathbb {R}^3 \rightarrow \mathbb {R}^3\), given by a symmetric \(3\times 3\) matrix, which also shows that the number of degrees of freedom of \(R_{abcd}\) is six.
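A small NumPy sketch (ours) of the coordinates in the basis (2), including the positive semi-definiteness condition on \((x,y,z)\):

```python
import numpy as np

s = 1 / np.sqrt(2)
e1 = s * np.array([[1.0, 0.0], [0.0, -1.0]])
e2 = s * np.array([[0.0, 1.0], [1.0, 0.0]])
e3 = s * np.array([[1.0, 0.0], [0.0, 1.0]])
basis = [e1, e2, e3]

def coords(M):
    # <A, B> = A_ab B_ab in a Cartesian frame
    return np.array([np.sum(M * e) for e in basis])

def from_coords(x, y, z):
    return x * e1 + y * e2 + z * e3

M = np.array([[3.0, 1.0], [1.0, 2.0]])
x, y, z = coords(M)
assert np.allclose(from_coords(x, y, z), M)      # round trip

# positive semi-definite iff z >= 0 and z^2 - x^2 - y^2 >= 0
psd_by_cone = (z >= 0) and (z**2 - x**2 - y**2 >= 0)
psd_by_eigs = bool(np.all(np.linalg.eigvalsh(M) >= 0))
assert psd_by_cone == psd_by_eigs
```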

2.1.4 Visualization in \(\mathbb {R}^3\)

Through the Voigt notation, any symmetric two-tensor (in two dimensions) can be visualised as a vector in \(\mathbb {R}^3\). Using the basis vector given by (2), we note that \(e^{(1)}_{ij}\) and \(e^{(2)}_{ij}\) correspond to indefinite quadratic forms, while \(e^{(3)}_{ij}\) is positive definite. We also see that \(e^{(1)}_{ij}+e^{(3)}_{ij}\) and \(e^{(2)}_{ij}+e^{(3)}_{ij}\) are positive semi-definite.

Fig. 1

Left: the symmetric matrices \(e^{(1)}_{ab}, e^{(2)}_{ab}, e^{(3)}_{ab}\) (red) and \(e^{(1)}_{ab}+e^{(3)}_{ab}, e^{(2)}_{ab}+e^{(3)}_{ab}\) (blue) as vectors in \(\mathbb {R}^3\). The positive semi-definite matrices correspond to vectors which are inside/above the indicated cone (including the boundary). Right: the fourth order tensors \((e^{(1)}_{ab}+e^{(3)}_{ab})(e^{(1)}_{cd}+e^{(3)}_{cd})\) and \((e^{(2)}_{ab}+e^{(3)}_{ab})(e^{(2)}_{cd}+e^{(3)}_{cd})\) depicted in blue, and \(e^{(3)}_{ab}e^{(3)}_{cd}\) shown in red are viewed as quadratic forms and illustrated as ellipsoids (made a bit ‘fatter’ than they should be for the sake of clarity)

In Fig. 1 (left) these matrices are illustrated as vectors in \(\mathbb {R}^3\). The set of positive semi-definite matrices corresponds to a cone, cf. [4], indicated in blue. When the symmetric \(2\times 2\) matrices are viewed as vectors in \(\mathbb {R}^3\), the outer product of such a vector with itself gives a symmetric \(3 \times 3\) matrix. Hence we get a positive semi-definite quadratic form on \(\mathbb {R}^3\), which can be illustrated by a (degenerate) ellipsoid in \(\mathbb {R}^3\). In Fig. 1 (right) \((e^{(1)}_{ab}+e^{(3)}_{ab})(e^{(1)}_{cd}+e^{(3)}_{cd})\), \((e^{(2)}_{ab}+e^{(3)}_{ab})(e^{(2)}_{cd}+e^{(3)}_{cd})\) and \(e^{(3)}_{ab}e^{(3)}_{cd}\) are visualised in this manner. Note that all these quadratic forms correspond to matrices of rank one. (Cf. the ellipsoids in Fig. 2.)

2.2 Invariants, Traces and Decompositions

By an invariant, we mean a quantity that can be calculated from measurements, and which is independent of the frame/coordinate system with respect to which the measurements are performed, despite the fact that components, e.g., \({T^i}_{jk}\) themselves depend on the coordinate system. It is this property that makes invariants important, and typically they are formed via tensor products and contractions, e.g., \({T^i}_{jk}{T^k}_{il}g^{jl}\). Sometimes, the invariants have a direct geometrical meaning. For instance, for a vector \(v^i\), the most natural invariant is its squared length \(v^i v_i\). For a tensor \({T^i}_j\) of order (1,1) in three dimensions, viewed as a linear mapping \(\mathbb {R}^3 \rightarrow \mathbb {R}^3\), the most well known invariants are perhaps the trace \({T^i}_i\) and the determinant \(\det ({T^i}_j)\). The modulus of the determinant gives the volume scaling under the mapping given by \({T^i}_j\), while the trace equals the sum of the eigenvalues. If \({T^i}_j\) represents a rotation matrix, then its trace is \(1+2\cos \phi \), where \(\phi \) is the rotation angle. In general, however, the interpretation of a given invariant may be obscure. (For an account relevant to image processing, see e.g., [9]. A different, but relevant, approach in the field of diffusion MRI is found in [20].)

2.2.1 Natural Traces and Invariants

From (1), and considering the symmetries of \(R_{abcd}\), two (and only two) natural traces arise. For a tensor of order (1, 1), e.g., \({R_i}^j\), it is natural to consider this as an ordinary matrix, and consequently use stem letters without any indices at all. To indicate this slight deviation from the standard tensor notation, we denote e.g., \({R_i}^j\) by \(\bar{\bar{R}}\). Using \([\cdot ]\) for the trace, so that \([\bar{\bar{R}}]={{\,\mathrm{Tr}\,}}(\bar{\bar{R}})={R_a}^a\), we then have

$$\begin{aligned} T_{ab}={R_{abc}}^c=\sum _{i=1}^n R^{(i)}_{ab}{R^{(i)}_{c}}^{c}= \sum _{i=1}^n R^{(i)}_{ab}[\bar{\bar{R}}^{(i)}], \end{aligned}$$
(3)

and

$$\begin{aligned} S_{ab}={R_{acb}}^c=\sum _{i=1}^n R^{(i)}_{ac}{R^{(i)}_{b}}^{c}. \end{aligned}$$
(4)

Hence, in a Cartesian frame, where the index position is unimportant, we have for the matrices \(\bar{\bar{T}}=T_{ij}, \bar{\bar{S}}=S_{ij}\)

$$ \bar{\bar{T}}=\sum _{i=1}^n \bar{\bar{R}}^{(i)}[\bar{\bar{R}}^{(i)}], \quad \bar{\bar{S}}=\sum _{i=1}^n \bar{\bar{R}}^{(i)}\bar{\bar{R}}^{(i)}. $$

To proceed there are two double traces (i.e., contracting \(R_{abcd}\) twice):

$$\begin{aligned} T={T_a}^a={{{{R_{a}}^a}_c}}^c=\sum _{i=1}^n {R^{(i)}_{a}}^{a}{R^{(i)}_{c}}^{c}=\sum _{i=1}^n [\bar{\bar{R}}^{(i)}]^2 \end{aligned}$$
(5)

and

$$\begin{aligned} S={S_a}^a={R_{ac}}^{ac}=\sum _{i=1}^n R^{(i)}_{ac}{R^{(i)}}^{ac}=\sum _{i=1}^n [(\bar{\bar{R}}^{(i)})^2]. \end{aligned}$$
(6)

In two dimensions, the difference \(T_{ab}-S_{ab}\) is proportional to the metric \(g_{ab}\). Namely,

Lemma 1

With \(T_{ab}\) and \(S_{ab}\) given by (3) and (4), it holds that (in two dimensions)

$$ T_{ab}-S_{ab} =\sum _{i=1}^n \det (\bar{\bar{R}}^{(i)})g_{ab}. $$

Proof

By linearity, it is enough to prove the statement when \(n=1\), i.e., when the sum has just one term. Raising the second index, and using components, the statement then is \({T_i}^j-{S_i}^j=\det (\bar{\bar{R}}^{(1)}){\delta _i}^j\). Putting \(\bar{\bar{R}}^{(1)}=A\), we see that \({T_i}^j-{S_i}^j=A[A]-A^2\) while \(\det (\bar{\bar{R}}^{(1)}){\delta _i}^j=\det (A)I\), and by the Cayley-Hamilton theorem in two dimensions, \(A[A]-A^2\) is indeed \(\det (A)I\). \(\square \)

From Lemma 1, it follows that \(T-S=2\sum _{i=1}^n \det (\bar{\bar{R}}^{(i)})\ge 0\). In fact, the following inequalities hold.

Lemma 2

With T and S defined as above, it holds that \(S \le T \le 2S\). If \(T=S\), all tensors \(R^{(i)}_{ab}\) have rank 1. If \(T=2S\), all tensors \(R^{(i)}_{ab}\) are isotropic, i.e., proportional to the metric \(g_{ab}\).

Proof

Again, by linearity it is enough to consider one tensor \(\bar{\bar{R}}^{(1)}=A\). In an orthonormal frame which diagonalises A, we have \(A=\begin{pmatrix} a &{} 0 \\ 0 &{} c \end{pmatrix}\) (with \(a\ge 0, c\ge 0\), \(a+c>0\)). Hence

$$ S=a^2+c^2 \le a^2+c^2+2ac=(a+c)^2=T = 2(a^2+c^2)-(a-c)^2 \le 2S. $$

The first inequality becomes equality when \(ac=0\), i.e., when A has rank one. The second inequality becomes equality when \(a=c\), i.e., when A is isotropic. \(\square \)

Definition 1

We define the mean rank, \(r_m\), by \(r_m=T/S\), with T and S as above.

Hence, in two dimensions, \(1 \le r_m \le 2\).
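The content of Lemmas 1–2 and the mean rank can be verified numerically; this is an illustrative sketch of ours (the helper `traces` is hypothetical):

```python
import numpy as np

def traces(tensors):
    # T = sum [R]^2,  S = sum [(R)^2]  (Eqs. (5), (6))
    T = sum(np.trace(A) ** 2 for A in tensors)
    S = sum(np.trace(A @ A) for A in tensors)
    return T, S

# Lemma 1 via Cayley-Hamilton: A[A] - A^2 = det(A) I in two dimensions
A = np.array([[2.0, 1.0], [1.0, 3.0]])
assert np.allclose(A * np.trace(A) - A @ A, np.linalg.det(A) * np.eye(2))

# Lemma 2 / Definition 1: rank-1 tensors give r_m = T/S = 1
rank1 = [np.outer(v, v) for v in np.random.default_rng(4).standard_normal((3, 2))]
T1, S1 = traces(rank1)
assert np.isclose(T1 / S1, 1.0)

# isotropic tensors (proportional to the metric) give r_m = 2
iso = [c * np.eye(2) for c in (1.0, 2.0, 0.5)]
T2, S2 = traces(iso)
assert np.isclose(T2 / S2, 2.0)
```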

2.2.2 A Canonical Decomposition

It is customary [3, 7] to decompose a tensor with the symmetries of \(R_{abcd}\) into a sum where one term is the completely symmetric part:

$$ R_{abcd}=H_{abcd}+W_{abcd}, \text{ where } H_{abcd}=R_{(abcd)}, W_{abcd}=R_{abcd}-H_{abcd}. $$

It is also customary to split \(H_{abcd}\) into a trace-free part and ‘trace part’. We start by defining \(H_{ab}={H_{abc}}^c\), \(H={H_a}^a\) and then the trace-free part of \(H_{ab}\): \(\mathring{H}_{ab}=H_{ab}-\frac{1}{2}Hg_{ab}\) so that \(H_{ab}=\mathring{H}_{ab}+\frac{1}{2}Hg_{ab}\). (These decompositions can be made in any dimension, but the actual coefficients, e.g., \(\frac{1}{2}\) above and \(\frac{1}{8}\) and \(\frac{3}{8}\) et cetera below depend on the underlying dimension.) It is straightforward to check that

$$\mathring{H}_{abcd}=H_{abcd}-g_{(ab}H_{cd)}+\tfrac{1}{8}Hg_{(ab}g_{cd)}=H_{abcd}-g_{(ab}\mathring{H}_{cd)}-\tfrac{3}{8}Hg_{(ab}g_{cd)}$$

is also trace-free. Hence we have the decomposition

$$H_{abcd}=\mathring{H}_{abcd}+g_{(ab}H_{cd)}-\tfrac{1}{8}Hg_{(ab}g_{cd)}=\mathring{H}_{abcd}+g_{(ab}\mathring{H}_{cd)}+\tfrac{3}{8}Hg_{(ab}g_{cd)}.$$

Moreover, due to the symmetry of \(R_{abcd}\), we find that

$$ H_{abcd}=\tfrac{1}{3}\left( R_{abcd}+R_{acbd}+R_{adbc} \right) $$

and therefore that

$$\begin{aligned} W_{abcd}=\tfrac{1}{3}\left( 2 R_{abcd}-R_{acbd}-R_{adbc} \right) \end{aligned}$$
(7)

which implies that \(H_{ab}={H_{abc}}^{c}=\tfrac{1}{3}(T_{ab}+2 S_{ab})\) and \(W_{ab}={W_{abc}}^{c}=\tfrac{2}{3}(T_{ab}-S_{ab})\).
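The decomposition and the trace identities above, together with Lemma 3 below, can be checked numerically; the following is an illustrative sketch of ours (Cartesian frame, \(g_{ij}=\delta_{ij}\)):

```python
import numpy as np

rng = np.random.default_rng(5)
tensors = [(lambda B: B @ B.T)(rng.standard_normal((2, 2))) for _ in range(4)]
R = sum(np.einsum('ab,cd->abcd', A, A) for A in tensors)

# H_abcd = (1/3)(R_abcd + R_acbd + R_adbc), W_abcd = R_abcd - H_abcd
H = (R + np.einsum('acbd->abcd', R) + np.einsum('adbc->abcd', R)) / 3
W = R - H

T_ab = np.einsum('abcc->ab', R)
S_ab = np.einsum('acbc->ab', R)

# H_ab = (1/3)(T_ab + 2 S_ab),  W_ab = (2/3)(T_ab - S_ab)
assert np.allclose(np.einsum('abcc->ab', H), (T_ab + 2 * S_ab) / 3)
assert np.allclose(np.einsum('abcc->ab', W), 2 * (T_ab - S_ab) / 3)

# Lemma 3: W_abcd = (W/4)(2 g_ab g_cd - g_ac g_bd - g_ad g_bc)
g = np.eye(2)
Wtr = np.einsum('aabb->', W)
W_model = (Wtr / 4) * (2 * np.einsum('ab,cd->abcd', g, g)
                       - np.einsum('ac,bd->abcd', g, g)
                       - np.einsum('ad,bc->abcd', g, g))
assert np.allclose(W, W_model)
```

This confirms, in particular, that \(W_{abcd}\) carries a single degree of freedom.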

The six degrees of freedom of \(R_{abcd}\) are distributed over the parts \(R_{abcd} \sim \{\mathring{H}_{abcd}, H_{ab},W_{abcd}\}\) as

$$ \underset{(6)}{R_{abcd}} \sim \{\underset{(2)}{\mathring{H}_{abcd}}, \underset{(3)}{H_{ab}}, \underset{(1)}{W_{abcd}} \} \sim \{\underset{(2)}{\mathring{H}_{abcd}}, \underset{(2)}{\mathring{H}_{ab}}, \underset{(1)}{H_{}},\underset{(1)}{W_{abcd}} \}. $$

For \(H_{ab}\) (or the pair \(\mathring{H}_{ab}, H\)) this is clear. The total symmetry of \(\mathring{H}_{abcd}\) leaves only five components (in a basis), \(\mathring{H}_{1111}, \mathring{H}_{1112},\mathring{H}_{1122},\mathring{H}_{1222},\mathring{H}_{2222}\). However, the trace-free condition \(\mathring{H}_{abcd}g^{cd}=0\) imposes three conditions. (In an orthonormal frame, \(\mathring{H}_{1122}=-\mathring{H}_{1111}\), \(\mathring{H}_{2222}=-\mathring{H}_{1122}\) and \(\mathring{H}_{1112}=-\mathring{H}_{1222}\).) That \(W_{abcd}\) has only one degree of freedom follows from the following lemma.

Lemma 3

Suppose that \(W_{abcd}\) is given by (7), and put \(W_{ab}=W_{abcd}g^{cd}\), \(W=W_{ab}g^{ab}\). Then (in two dimensions)

$$ W_{abcd}=\tfrac{W}{4}\left( 2 g_{ab}g_{cd}-g_{ac}g_{bd}-g_{ad}g_{bc} \right) $$

Proof

By linearity, it is enough to consider the case when \(R_{abcd}=A_{ab}A_{cd}\) for some (symmetric) \(A_{ab}\). In terms of eigenvectors (to \({A^a}_b\)) we can write \(A_{ab}=\alpha x_a x_b+\beta y_a y_b\), where \(x_a x^a=y_a y^a=1, x_a y^a=0\). In particular \(g_{ab}=x_a x_b +y_a y_b\). From (7) we then get

$$\begin{aligned} \begin{aligned} W_{abcd}=&\tfrac{1}{3}\left( 2 R_{abcd}-R_{acbd}-R_{adbc} \right) \\ =&\tfrac{1}{3}\left( 2 A_{ab}A_{cd}-A_{ac}A_{bd}-A_{ad}A_{bc} \right) \\ =&\tfrac{1}{3}\left( 2(\alpha x_a x_b+\beta y_a y_b)(\alpha x_c x_d+\beta y_c y_d)\right. \\&-(\alpha x_a x_c+\beta y_a y_c)(\alpha x_b x_d+\beta y_b y_d)\\&\left. -(\alpha x_a x_d+\beta y_a y_d)(\alpha x_b x_c+\beta y_b y_c) \right) . \end{aligned} \end{aligned}$$
(8)

Expanding the parentheses, the components \(x_a x_b x_c x_d\) and \(y_a y_b y_c y_d\) vanish, leaving

$$\begin{aligned} \begin{aligned} \frac{\alpha \beta }{3}(&2 x_a x_by_c y_d+2y_a y_bx_c x_d-x_a x_cy_b y_d\\&-y_a y_c x_b x_d -x_a x_d y_b y_c-y_a y_d x_b x_c)\\ =&\frac{\alpha \beta }{3}\left( 2 g_{ab}g_{cd}-g_{ac}g_{bd}-g_{ad}g_{bc} \right) , \end{aligned} \end{aligned}$$
(9)

where the last equality can be seen by inserting \(g_{ab}=x_a x_b +y_a y_b\) (for all indices) and expanding. Taking one trace, i.e., contracting with \(g^{cd}\) gives \(W_{ab}=\frac{2\alpha \beta }{3}g_{ab}\), and another trace gives \(W=\frac{4\alpha \beta }{3}\), which proves the lemma. \(\square \)

3 \(R_{abcd}\) as a Quadratic Form on \(\mathbb {R}^3\)

Through the orthonormal basis for the space of symmetric two-tensors (in two dimensions) given by (2), the tensor \(R_{abcd}\) viewed as a quadratic form can be represented by a \(3 \times 3\)-matrix. Here, we will restrict ourselves to an orthonormal basis for \(V_{(ab)}\), namely the basis \(\{e^{(1)}_{ab}, e^{(2)}_{ab}, e^{(3)}_{ab} \}\) from Sect. 2.1.3, defined in terms of the orthonormal basis \(\{\xi ^a , \eta ^a \}\) for \(V^a\). Thus, given \(R_{abcd}\), we associate the symmetric matrix \(M_{ij}\), where (the choice of an orthonormal basis justifies the mismatch of the indices ij)

$$ M_{ij}\overset{\varvec{\cdot }}{=}{R^{ab}}_{cd}e^{(i)}_{ab}(e^{(j)})^{cd}, \quad 1 \le i,j \le 3. $$

It is instructive to see how the various derived tensors show up in \(M_{ij}\). In terms of the basis (2) it is natural to look at the various parts of \(M_{ij}\) as follows

$$\begin{aligned} M_{ij} \overset{\varvec{\cdot }}{=} \left( \begin{array}{c|c} {\begin{matrix} \times &{} \times \\ \times &{} \times \end{matrix}} &{} {\begin{matrix} \times \\ \times \end{matrix}} \\ \hline {\begin{matrix} \times &{} \times \end{matrix}}&\times \end{array}\right) \overset{\varvec{\cdot }}{=} \left( \begin{array}{c|c} A &{} \overline{v} \\ \hline \overline{v}^t &{} a \end{array}\right) . \end{aligned}$$
(10)

This splitting is natural for reasons which will become apparent in the next sections. Note, however, that with this representation it is tempting to consider coordinate changes in \(\mathbb {R}^3\), which is not natural in this case. Rather, of interest is the change of basis in \(V^a\) and the related induced change of coordinates in the representation (10). See Sect. 3.2.

3.1 Representation of the Canonically Derived Parts of \(R_{abcd}\)

It is helpful to see how the components of the various tensors \(T_{ab}\), \(S_{ab}\), T, S, \(\mathring{H}_{abcd}\), \(\mathring{H}_{ab}\), H and W show up as components of \(M_{ij}\). As with \(\mathring{H}_{ab}\), \(\mathring{T}_{ab}\) denotes the trace-free part of \(T_{ab}\). The component \(M_{33}\) is immediate:

$$\begin{aligned} M_{33}\overset{\varvec{\cdot }}{=}{R^{ab}}_{cd}e^{(3)}_{ab}(e^{(3)})^{cd} \overset{\varvec{\cdot }}{=}\frac{1}{2}{R^{ab}}_{cd}g_{ab}g^{cd} =\frac{1}{2}T_{cd}g^{cd}=\frac{1}{2}T. \end{aligned}$$
(11)

Similarly, for \(i=1,2\) we have

$$\begin{aligned} M_{i3}\overset{\varvec{\cdot }}{=}\frac{1}{\sqrt{2}}{R^{ab}}_{cd}e^{(i)}_{ab}g^{cd} \overset{\varvec{\cdot }}{=}\frac{1}{\sqrt{2}}T^{ab}e^{(i)}_{ab} \overset{\varvec{\cdot }}{=}\frac{1}{\sqrt{2}}\mathring{T}^{ab}e^{(i)}_{ab}, \end{aligned}$$
(12)

where the last equality follows from the trace-freeness of \(e^{(1)}_{ab}\) and \(e^{(2)}_{ab}\). This means that the components of \(\mathring{T}_{ab}\) (properly rescaled) go into \(M_{ij}\) as the components of \(\overline{v}\) (and \(\overline{v}^t\)) in (10). The same holds for \(\mathring{S}_{ab}\) and \(\mathring{H}_{ab}\), as \(\mathring{S}_{ab}=\mathring{T}_{ab}\) by Lemma 1, which then implies that also \(\mathring{H}_{ab}=\mathring{T}_{ab}=\mathring{S}_{ab}\). This latter relation follows from the trace-free part of the relation \(H_{ab}=\tfrac{1}{3}(T_{ab}+2 S_{ab})\). Hence

$$\begin{aligned} M_{ij} \overset{\varvec{\cdot }}{=} \left( \begin{array}{c|c} A &{} \overrightarrow{\mathring{T}} \\ \hline {\overrightarrow{\mathring{T}}}^t &{} \frac{1}{2}T \end{array}\right) \overset{\varvec{\cdot }}{=} \left( \begin{array}{c|c} \frac{\sigma }{2}I+\mathring{A} &{} \overrightarrow{\mathring{T}} \\ \hline {\overrightarrow{\mathring{T}}}^t &{} \frac{1}{2}T \end{array}\right) , \end{aligned}$$
(13)

where \(\overrightarrow{\mathring{T}}=\overrightarrow{\mathring{S}}=\overrightarrow{\mathring{H}}\) encodes the two degrees of freedom in \(\mathring{T}_{ab}=\mathring{S}_{ab}=\mathring{H}_{ab}\). The matrix A is decomposed as \(A=\frac{\sigma }{2}I+\mathring{A}\), where I is the (\(2 \times 2\)) identity matrix and \(\mathring{A}\) is the trace-free part of A. In particular, \([A]=\sigma \).

To investigate \([M_{ij}]=M_{11}+M_{22}+M_{33}\), i.e., the trace of \(M_{ij}\), we note that for a general symmetric matrix \(R_{ij}\overset{\varvec{\cdot }}{=}\begin{pmatrix} a &{} b \\ b &{} c \end{pmatrix}\) we have \(R_{ij}e^{(1)}_{ij} \overset{\varvec{\cdot }}{=}\frac{a-c}{\sqrt{2}}\), \(R_{ij}e^{(2)}_{ij} \overset{\varvec{\cdot }}{=}\frac{2b}{\sqrt{2}}\), \(R_{ij}e^{(3)}_{ij} \overset{\varvec{\cdot }}{=}\frac{a+c}{\sqrt{2}}\). When \(M_{ij}\) is constructed from \(R_{abcd}\) which is an outer product \(R_{ab}R_{cd}\), the trace is given by \(M_{11}+M_{22}+M_{33}= (\frac{a-c}{\sqrt{2}})^2+(\frac{2b}{\sqrt{2}})^2+(\frac{a+c}{\sqrt{2}})^2=a^2+2b^2+c^2\), and from (6) this is S. Together with linearity, this shows that \( [M]=M_{11}+M_{22}+M_{33}=S \) also when \(R_{abcd}\) is formed as in (1). Taking the trace in (13), this gives

$$ S=\sigma +\tfrac{1}{2}T, \quad \text{ i.e., } \quad \sigma =S-\tfrac{1}{2}T. $$
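The matrix representation and the trace relations can be checked numerically; the following is an illustrative sketch of ours (the helper names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(6)
tensors = [(lambda B: B @ B.T)(rng.standard_normal((2, 2))) for _ in range(4)]
R = sum(np.einsum('ab,cd->abcd', A, A) for A in tensors)

# orthonormal basis (2) for the space of symmetric two-tensors
s = 1 / np.sqrt(2)
basis = [s * np.array([[1.0, 0.0], [0.0, -1.0]]),
         s * np.array([[0.0, 1.0], [1.0, 0.0]]),
         s * np.array([[1.0, 0.0], [0.0, 1.0]])]

# M_ij = R_abcd e^(i)_ab e^(j)_cd
M = np.array([[np.einsum('abcd,ab,cd->', R, ei, ej) for ej in basis]
              for ei in basis])

T = np.einsum('aacc->', R)
S = np.einsum('acac->', R)

assert np.allclose(M, M.T)                 # symmetric mapping R^3 -> R^3
assert np.isclose(M[2, 2], T / 2)          # Eq. (11)
assert np.isclose(np.trace(M), S)          # [M] = S
sigma = np.trace(M[:2, :2])                # trace of the 2x2 block A in (10)
assert np.isclose(sigma, S - T / 2)
```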

In addition, the relations below Eq. (7) show that

$$ {\left\{ \begin{array}{ll} H=\tfrac{1}{3}(T+2S) \\ W=\tfrac{2}{3}(T-S) \end{array}\right. } \quad \text{ i.e., } \quad {\left\{ \begin{array}{ll} T=H+W \\ S=H-\tfrac{1}{2}W \end{array}\right. } \quad \text{ so } \text{ that } \quad \sigma =\tfrac{1}{2}H-W. $$

The two degrees of freedom in \(\mathring{A}\) correspond to the two degrees of freedom in \(\mathring{H}_{abcd}\).

3.2 The Behaviour of \(M_{ij}\) Under a Rotation of the Coordinate System in \(V^a\)

The components of \(M_{ij}\) are expressed in terms of the orthonormal basis tensors given by (2), and these in turn are based on the ON basis \(\{\hat{\xi } , \hat{\eta }\}\) for V. Putting the basis vectors in a row matrix \(\begin{pmatrix}\hat{\xi }&\hat{ \eta }\end{pmatrix}\) and the coordinates in a column matrix, so that a vector \(\mathbf {u}=\begin{pmatrix}\hat{\xi }&\hat{ \eta }\end{pmatrix}\begin{pmatrix} \xi \\ \eta \end{pmatrix}\), and considering only orthonormal frames, the relevant change of basis is given by a rotation matrix \(Q(v)=Q_v=\begin{pmatrix} \cos v &{} -\sin v \\ \sin v &{} \cos v \end{pmatrix}\), i.e., we consider the change of basis

$$ \begin{pmatrix}\hat{\xi }&\hat{ \eta }\end{pmatrix} \rightarrow \begin{pmatrix}\hat{\tilde{\xi }}&\hat{\tilde{\eta }}\end{pmatrix}= \begin{pmatrix}\hat{\xi }&\hat{ \eta }\end{pmatrix} \begin{pmatrix} \cos v &{} -\sin v \\ \sin v &{} \cos v \end{pmatrix}=\begin{pmatrix}\hat{\xi }&\hat{ \eta }\end{pmatrix} Q(v). $$

This means that for a vector \(\mathbf {u}=\begin{pmatrix}\hat{\tilde{\xi }}&\hat{\tilde{\eta }}\end{pmatrix} \begin{pmatrix}\tilde{\xi }\\ \tilde{\eta }\end{pmatrix}=\begin{pmatrix}\hat{\xi }&\hat{\eta }\end{pmatrix} \begin{pmatrix} \xi \\ \eta \end{pmatrix}\), the coordinates transform as

$$ \begin{pmatrix} \xi \\ \eta \end{pmatrix}\rightarrow \begin{pmatrix}\tilde{\xi }\\ \tilde{\eta }\end{pmatrix}=Q^{-1}(v)\begin{pmatrix} \xi \\ \eta \end{pmatrix} =Q^{t}(v)\begin{pmatrix} \xi \\ \eta \end{pmatrix}=Q(-v)\begin{pmatrix} \xi \\ \eta \end{pmatrix}. $$

For the components of the basis vectors \(e^{(1)}_{ab}, e^{(2)}_{ab}, e^{(3)}_{ab}\) we find (omitting the factor \(1/\sqrt{2}\))

$$\begin{aligned} \begin{aligned} \begin{pmatrix} 1 &{} 0 \\ 0 &{} -1 \end{pmatrix}&\rightarrow \begin{pmatrix} \cos v &{} \sin v \\ -\sin v &{} \cos v \end{pmatrix} \begin{pmatrix} 1 &{} 0 \\ 0 &{} -1 \end{pmatrix} \begin{pmatrix} \cos v &{} -\sin v \\ \sin v &{} \cos v \end{pmatrix} =\begin{pmatrix} \cos 2v &{} -\sin 2v \\ -\sin 2v &{} -\cos 2v \end{pmatrix}\\ \begin{pmatrix} 0 &{} 1 \\ 1 &{} 0 \end{pmatrix}&\rightarrow \begin{pmatrix} \cos v &{} \sin v \\ -\sin v &{} \cos v \end{pmatrix} \begin{pmatrix} 0 &{} 1 \\ 1 &{} 0 \end{pmatrix} \begin{pmatrix} \cos v &{} -\sin v \\ \sin v &{} \cos v \end{pmatrix} =\begin{pmatrix} \sin 2v &{} \cos 2v \\ \cos 2v &{} -\sin 2v \end{pmatrix}\\ \begin{pmatrix} 1 &{} 0 \\ 0 &{} 1 \end{pmatrix}&\rightarrow \begin{pmatrix} \cos v &{} \sin v \\ -\sin v &{} \cos v \end{pmatrix} \begin{pmatrix} 1 &{} 0 \\ 0 &{} 1 \end{pmatrix} \begin{pmatrix} \cos v &{} -\sin v \\ \sin v &{} \cos v \end{pmatrix} =\begin{pmatrix} 1 &{} 0 \\ 0 &{} 1 \end{pmatrix}, \end{aligned} \end{aligned}$$
(14)

and this means that the components \(M_{ij}\) transform as

$$\begin{aligned} M_{ij} \overset{\varvec{\cdot }}{=} \left( \begin{array}{c|c} A &{} \overline{u} \\ \hline \overline{u}^t &{} a \end{array}\right) \rightarrow \widetilde{M}_{ij} \overset{\varvec{\cdot }}{=} \left( \begin{array}{c|c} Q^t_{2v} A Q_{2v} &{} Q^t_{2v}\overline{u} \\ \hline \overline{u}^t Q_{2v} &{} a \end{array}\right) . \end{aligned}$$
(15)

But this latter expression is just

$$ \left( \begin{array}{c|c} Q^t_{2v} &{} \overline{0} \\ \hline \overline{0}^t &{} 1 \end{array}\right) \left( \begin{array}{c|c} A &{} \overline{u} \\ \hline \overline{u}^t &{} a \end{array}\right) \left( \begin{array}{c|c} Q_{2v} &{} \overline{0} \\ \hline \overline{0}^t &{} 1 \end{array}\right) , $$

hence we have the following important remark:

Remark 1

Viewing the matrix \(M_{ij}\) as an ellipsoid in \(\mathbb {R}^3\), the effect of a rotation by an angle v in \(V^a\) corresponds to a rotation of the ellipsoid by an angle 2v around the z-axis in \(\mathbb {R}^3\) (where the z-axis corresponds to the ‘isotropic direction’ given by \(g_{ab}\)).
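This angle-doubling can be checked numerically. The following sketch (using numpy; the random construction of \(R_{abcd}\) and the helper names are ours, not from the text) rotates the Cartesian components of a tensor \(R_{abcd}\) by an angle v and verifies that the associated \(3 \times 3\) matrix \(M_{ij}\) rotates by 2v around the z-axis, as stated in Remark 1:

```python
import numpy as np

SQ2 = 1 / np.sqrt(2)
# Basis matrices e^(1), e^(2), e^(3) in Cartesian components, cf. (17)
BASIS = [SQ2 * np.array([[1.0, 0.0], [0.0, -1.0]]),
         SQ2 * np.array([[0.0, 1.0], [1.0, 0.0]]),
         SQ2 * np.eye(2)]

def rot(v):
    """Rotation matrix Q(v)."""
    return np.array([[np.cos(v), -np.sin(v)], [np.sin(v), np.cos(v)]])

def M_of(R):
    """Components M_ij = R_abcd e^(i)_ab e^(j)_cd in the basis above."""
    return np.array([[np.einsum('abcd,ab,cd->', R, ei, ej) for ej in BASIS]
                     for ei in BASIS])

# A random tensor R_abcd = sum_i R^(i)_ab R^(i)_cd built from PSD matrices
rng = np.random.default_rng(0)
R = np.zeros((2, 2, 2, 2))
for _ in range(4):
    B = rng.standard_normal((2, 2))
    S = B @ B.T                      # symmetric positive semi-definite
    R += np.einsum('ab,cd->abcd', S, S)

v = 0.3
Q = rot(v)
R_rot = np.einsum('mnop,mi,nj,ok,pl->ijkl', R, Q, Q, Q, Q)

# Remark 1: rotating V^a by v rotates the ellipsoid by 2v around the z-axis
Rz = np.eye(3)
Rz[:2, :2] = rot(2 * v)
assert np.allclose(M_of(R_rot), Rz.T @ M_of(R) @ Rz)
```

The final assertion is exactly (15) written out in block form, with the 2v-rotation acting on the upper-left block of \(M_{ij}\).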

4 The Equivalence Problem for \(R_{abcd}\)

The equivalence problem for \(R_{abcd}\) can be formulated in different ways (for an account in three dimensions, we refer to [3]). Given two tensors \(R_{abcd}\) and \(\widetilde{R}_{abcd}\), both with the symmetries implied by (1), the question whether they are the same or not is straightforward, as one can compare the components in any basis. However, \(R_{abcd}\) and \(\widetilde{R}_{abcd}\) could live in different (but isomorphic) vector spaces, e.g., two tangent spaces at different points, and the concept of equality then becomes less clear. On the other hand, in terms of components \(R_{ijkl}\) and \(\widetilde{R}_{ijkl}\), one could ask whether there is a change of coordinates which takes one set of components into the other. If so, one can find an (invertible) matrix \({P^i}_j\) so that

$$ R_{ijkl}=\widetilde{R}_{mnop}{P^m}_i {P^n}_j {P^o}_k {P^p}_l, $$

and the tensors are then said to be equivalent. As already mentioned, it is convenient to restrict the coordinate systems to orthonormal coordinates. This means that two different coordinate systems differ only by their orientation, i.e., the change of coordinates is given by a rotation matrix \(Q \in \) SO(2). Under the ‘Cartesian convention’ that all indices are written as subscripts, \(R_{abcd}\) and \(\widetilde{R}_{abcd}\) are equivalent if there is a matrix \(Q \in \) SO(2) so that their Cartesian components satisfy

$$ R_{ijkl}=\widetilde{R}_{mnop}{Q}_{mi} {Q}_{nj} {Q}_{ok} {Q}_{pl}. $$
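For Cartesian components stored as a numpy array, this transformation rule is a single einsum contraction. The sketch below (the helper names and the example tensor are ours) applies the rule and checks that transforming back with \(Q^t\) recovers the original components:

```python
import numpy as np

def rotate4(R, Q):
    """R'_ijkl = R_mnop Q_mi Q_nj Q_ok Q_pl (Cartesian components)."""
    return np.einsum('mnop,mi,nj,ok,pl->ijkl', R, Q, Q, Q, Q)

# A tensor with the symmetries of (1), built from two PSD 2x2 matrices
S1 = np.array([[2.0, 1.0], [1.0, 1.0]])
S2 = np.array([[1.0, 0.0], [0.0, 3.0]])
R = np.einsum('ab,cd->abcd', S1, S1) + np.einsum('ab,cd->abcd', S2, S2)

v = 0.7
Q = np.array([[np.cos(v), -np.sin(v)], [np.sin(v), np.cos(v)]])
R_tilde = rotate4(R, Q)

# Q is in SO(2), so transforming back with Q^t recovers the components
assert np.allclose(rotate4(R_tilde, Q.T), R)
```

Since all indices are subscripts under the Cartesian convention, the same einsum pattern covers both tensors and no index raising is needed.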

4.1 Different Ways to Characterize the Equivalence of \(R_{abcd}\) and \(\widetilde{R}_{abcd}\)

In this section, we will discuss three ways to determine whether two tensors \(R_{abcd}\) and \(\widetilde{R}_{abcd}\) are equivalent or not. In Sects. 4.1.1 and 4.1.2 we present two such methods briefly, while Sect. 4.1.3, which is more complete, contains the main result of this work.

As mentioned in Sect. 1.1, the results of Sects. 4.1.1 and 4.1.2, which may be used in their own right, rely on particular choices of basis matrices for \(V_{(ab)}\). The formulation in Sect. 4.1.3, on the other hand, is expressed directly in the components of \(R_{abcd}\) (in any coordinate system).

4.1.1 Orientation of the Ellipsoid in \(\mathbb {R}^3\)

A necessary condition for \(R_{abcd}\) and \(\widetilde{R}_{abcd}\) to be equivalent is that their corresponding \(3 \times 3\)-matrices \(M_{ij}\) and \({\widetilde{M}}_{ij}\) have the same eigenvalues. On the other hand, this is not sufficient, since the representation in \(\mathbb {R}^3\) should reflect the freedom in rotating the coordinate system in \(V^a \sim \mathbb {R}^2\). With the coordinates adopted, this corresponds to a rotation of the associated ellipsoid around the z-axis in \(\mathbb {R}^3\) (see Remark 1 in Sect. 3.2). This is illustrated in Fig. 2, where three ellipsoids, all representing positive definite symmetric mappings having identical eigenvalues, are shown. The first two ellipsoids can be rotated into each other by a rotation around the z-axis. This implies that the corresponding tensors \(R_{abcd}\) and \(\widetilde{R}_{abcd}\) are equivalent. The third ellipsoid can also be rotated into the two others, but these rotations are around directions other than the z-axis, which means that this ellipsoid represents a different tensor.

Fig. 2 Three identical (truncated) ellipsoids in \(\mathbb {R}^3\) with different orientations. The two leftmost ellipsoids can be carried over to each other through a rotation around the (vertical in the figure) z-axis, which implies that they represent the same tensor \(R_{abcd}\) (in the sense discussed). The rightmost ellipsoid, despite having the same eigenvalues as the other two, represents a different tensor, since the rotation which carries it to either of the others is not around the z-axis

In the generic case, with all eigenvalues different, it is easy to test whether two different ellipsoids can be transferred into each other through a rotation around the z-axis. This will be the case if the corresponding eigenvectors (of \(M_{ij}\) and \(\widetilde{M}_{ij}\)) make the same angle with the z-axis. Hence it is just a matter of checking the z-components of the three normalized eigenvectors and seeing whether they are equal up to sign.
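In code, this generic-case test amounts to comparing sorted eigenvalues together with the absolute z-components of the matched eigenvectors. A minimal numpy sketch (the function names are ours, and the test assumes all eigenvalues are distinct):

```python
import numpy as np

def z_signature(M):
    """Sorted eigenvalues of the symmetric matrix M and the absolute
    z-components of the corresponding normalized eigenvectors."""
    w, V = np.linalg.eigh(M)   # eigenvalues ascending, eigenvectors in columns
    return w, np.abs(V[2, :])  # z-components, up to sign

def same_up_to_z_rotation(M1, M2, tol=1e-9):
    """Generic-case test: equal eigenvalues and equal |z|-components."""
    w1, z1 = z_signature(M1)
    w2, z2 = z_signature(M2)
    return np.allclose(w1, w2, atol=tol) and np.allclose(z1, z2, atol=tol)

def rot_z(t):
    return np.array([[np.cos(t), -np.sin(t), 0.0],
                     [np.sin(t),  np.cos(t), 0.0],
                     [0.0, 0.0, 1.0]])

def rot_x(t):
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, np.cos(t), -np.sin(t)],
                     [0.0, np.sin(t),  np.cos(t)]])

M = np.diag([1.0, 2.0, 3.0])                  # distinct eigenvalues
# Rotation around the z-axis: the same tensor R_abcd
assert same_up_to_z_rotation(M, rot_z(0.8).T @ M @ rot_z(0.8))
# Rotation around the x-axis: a different tensor (cf. Fig. 2)
assert not same_up_to_z_rotation(M, rot_x(0.8).T @ M @ rot_x(0.8))
```

Matching eigenvectors by sorted eigenvalue is only well-defined when the eigenvalues are distinct; the degenerate case would need separate treatment.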

4.1.2 Components in a Canonical Coordinate System

In a sense, this is the most straightforward method. In a coordinate system which respects \(e^{(3)}_{ab}\) as the z-axis in \(V_{(ab)} \sim \mathbb {R}^3\), two tensors \(R_{abcd}\) and \(\widetilde{R}_{abcd}\) are equivalent if there is a rotation matrix (in two dimensions) Q such that

$$\begin{aligned} \left( \begin{array}{c|c} A &{} \overrightarrow{\mathring{T}} \\ \hline \overrightarrow{\mathring{T}}^t &{} \frac{1}{2}T \end{array}\right) = \left( \begin{array}{c|c} Q^t\widetilde{A} Q &{} Q^t\overrightarrow{\mathring{\widetilde{T}}} \\ \hline \overrightarrow{\mathring{\widetilde{T}}}^t Q &{} \frac{1}{2}\widetilde{T} \end{array}\right) . \end{aligned}$$
(16)

Hence, equivalence can easily be tested by first checking that \(T=\widetilde{T}\) and that \(|| \overrightarrow{\mathring{T}} ||=||\overrightarrow{\mathring{\widetilde{T}}} ||\). If this is the case (and if \(|| \overrightarrow{\mathring{T}} ||>0\)), one determines the rotation matrix Q which gives \(\overrightarrow{\mathring{T}} =Q^t\overrightarrow{\mathring{\widetilde{T}}} \), and the tensors are then equivalent if and only if \(A=Q^t\widetilde{A} Q\). If \(|| \overrightarrow{\mathring{T}} ||= || \overrightarrow{\mathring{\widetilde{T}}} ||=0\), the equivalence of A and \(\widetilde{A}\) can be determined directly, i.e., by checking whether \([A]=[\widetilde{A}]\) and \([A^2]=[\widetilde{A}^2]\).
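This procedure translates directly into code. The sketch below (our own function and variable names) works on the blocks \((A, \overrightarrow{\mathring{T}}, T)\) of the two matrices in (16); in the degenerate branch the bracket \([\cdot ]\) is taken to be the trace, which is our reading of the notation:

```python
import numpy as np

def equivalent_blocks(A1, t1, T1, A2, t2, T2, tol=1e-9):
    """Test (16): blocks (A1, t1, T1) and (A2, t2, T2) are equivalent iff
    T1 = T2, ||t1|| = ||t2||, and A1 = Q^t A2 Q for the rotation Q
    determined by t1 = Q^t t2."""
    if not np.isclose(T1, T2, atol=tol):
        return False
    if not np.isclose(np.linalg.norm(t1), np.linalg.norm(t2), atol=tol):
        return False
    if np.linalg.norm(t1) < tol:
        # Degenerate case: compare A via rotational invariants
        # (the bracket [.] in the text is taken to be the trace)
        return (np.isclose(np.trace(A1), np.trace(A2), atol=tol) and
                np.isclose(np.trace(A1 @ A1), np.trace(A2 @ A2), atol=tol))
    # Rotation angle taking t2 to t1, i.e. t1 = Qt @ t2 with Qt = Q^t
    phi = np.arctan2(t1[1], t1[0]) - np.arctan2(t2[1], t2[0])
    Qt = np.array([[np.cos(phi), -np.sin(phi)],
                   [np.sin(phi),  np.cos(phi)]])
    return np.allclose(A1, Qt @ A2 @ Qt.T, atol=tol)

# Example: one set of blocks is a rotated copy of the other
A2 = np.array([[3.0, 1.0], [1.0, 2.0]])
t2 = np.array([1.0, 0.5])
phi = 0.4
Qt = np.array([[np.cos(phi), -np.sin(phi)], [np.sin(phi), np.cos(phi)]])
assert equivalent_blocks(Qt @ A2 @ Qt.T, Qt @ t2, 5.0, A2, t2, 5.0)
assert not equivalent_blocks(A2 + np.eye(2), t2, 5.0, A2, t2, 5.0)
```

Note that Q is determined only up to the sign ambiguity absorbed by \(\overrightarrow{\mathring{T}}\); when the vectors are nonzero the angle is unique modulo \(2\pi \), so a single comparison of A suffices.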

4.1.3 Equivalence Through (algebraic) Invariants of \(R_{abcd}\)

If a solution is found, this is perhaps the most satisfactory way to establish equivalence, in particular if the invariants are constructed by simple algebraic operations only. (For instance, for a symmetric \(3 \times 3\)-matrix A one can take the three eigenvalues as invariants, or, alternatively, the traces of \(A, A^2\) and \(A^3\). The former set requires some calculations, but the latter is immediate.)

Examples of invariants are \(T=R_{abcd}g^{ab}g^{cd}\), \(S=R_{abcd}g^{ac}g^{bd}\) and the invariants \(H=H_{ab}g^{ab}, W=W_{ab}g^{ab}\). To produce the invariants, we use the tensor \(R_{abcd}\) and the metric \(g_{ab}\). However, if we regard \(V^a \sim \mathbb {R}^2\) as oriented, so that the orthonormal basis \(\{\hat{\varvec{\xi }}, \hat{\varvec{\eta }}\}\) for \(V^a\) also is oriented, then invariants can also be formed in another way. Namely, since the space of symmetric \(2 \times 2\) matrices is 3-dimensional, and since the metric \(g_{ab}\) singles out a 1-dimensional subspace, it also determines a 2-dimensional subspace L, consisting of all elements orthogonal to \(g_{ab}\). This subspace is the set of all symmetric \(2 \times 2\) matrices which are also trace-free. L can be given an orientation through an area form, which in turn inherits the orientation from \(V^{a}\).

In general, with right-handed Cartesian coordinates \(x^1, x^2\), the area form \(\epsilon \) is given by \(\epsilon = dx^1 \wedge dx^2\), where \((\omega \wedge \mu )_{ab}=\omega _a \mu _b -\omega _b \mu _a\). With the orthonormal basis \(\{\hat{\varvec{\xi }}, \hat{\varvec{\eta }}\}\) (for \(V^a\)) also right-handed, we define, cf. (2),

$$\begin{aligned} e^{(1)}_{ab}=\tfrac{1}{\sqrt{2}}(\hat{\varvec{\xi }}_a \hat{\varvec{\xi }}_b - \hat{\varvec{\eta }}_a \hat{\varvec{\eta }}_b), \quad e^{(2)}_{ab}=\tfrac{1}{\sqrt{2}} (\hat{\varvec{\xi }}_a \hat{\varvec{\eta }}_b + \hat{\varvec{\eta }}_a \hat{\varvec{\xi }}_b). \end{aligned}$$
(17)

The area form on L is then \(\epsilon \sim e^{(1)} \wedge e^{(2)}\), or

$$\begin{aligned} \epsilon \sim E_{abcd}=e^{(1)}_{ab}e^{(2)}_{cd}-e^{(2)}_{ab}e^{(1)}_{cd}. \end{aligned}$$
(18)

It is not hard to see that this definition is independent of the choice of right-handed orthonormal basis \(\{\hat{\varvec{\xi }} , \hat{\varvec{\eta }} \}\). We observe that \(2E_{abcd}=(\hat{\varvec{\xi }}_a \hat{\varvec{\xi }}_b - \hat{\varvec{\eta }}_a \hat{\varvec{\eta }}_b)(\hat{\varvec{\xi }}_c \hat{\varvec{\eta }}_d + \hat{\varvec{\eta }}_c \hat{\varvec{\xi }}_d)-(\hat{\varvec{\xi }}_a \hat{\varvec{\eta }}_b + \hat{\varvec{\eta }}_a \hat{\varvec{\xi }}_b)(\hat{\varvec{\xi }}_c \hat{\varvec{\xi }}_d - \hat{\varvec{\eta }}_c \hat{\varvec{\eta }}_d)\). By replacing \(\hat{\varvec{\xi }}\) with \(\hat{\varvec{\omega }}=\cos v\, \hat{\varvec{\xi }}+\sin v \, \hat{\varvec{\eta }}\) and \(\hat{\varvec{\eta }}\) with \(\hat{\varvec{\mu }}=-\sin v\, \hat{\varvec{\xi }}+\cos v \, \hat{\varvec{\eta }}\), i.e., a rotated orthonormal basis, it is straightforward to check that

$$\begin{aligned} \begin{aligned}&(\hat{\varvec{\omega }}_a \hat{\varvec{\omega }}_b - \hat{\varvec{\mu }}_a \hat{\varvec{\mu }}_b)(\hat{\varvec{\omega }}_c \hat{\varvec{\mu }}_d + \hat{\varvec{\mu }}_c \hat{\varvec{\omega }}_d)-(\hat{\varvec{\omega }}_a \hat{\varvec{\mu }}_b + \hat{\varvec{\mu }}_a \hat{\varvec{\omega }}_b)(\hat{\varvec{\omega }}_c \hat{\varvec{\omega }}_d - \hat{\varvec{\mu }}_c \hat{\varvec{\mu }}_d)\\ =&(\hat{\varvec{\xi }}_a \hat{\varvec{\xi }}_b - \hat{\varvec{\eta }}_a \hat{\varvec{\eta }}_b)(\hat{\varvec{\xi }}_c \hat{\varvec{\eta }}_d + \hat{\varvec{\eta }}_c \hat{\varvec{\xi }}_d)-(\hat{\varvec{\xi }}_a \hat{\varvec{\eta }}_b + \hat{\varvec{\eta }}_a \hat{\varvec{\xi }}_b)(\hat{\varvec{\xi }}_c \hat{\varvec{\xi }}_d - \hat{\varvec{\eta }}_c \hat{\varvec{\eta }}_d) \end{aligned} \end{aligned}$$
(19)

so that \(E_{abcd}\) is well defined. We recall that the area form \(E_{abcd}\) is defined through the induced metric on the plane L (which in turn is determined by the metric \(g_{ab}\)) and through the orientation of \(V^{a}\). Hence \(E_{abcd}\) can be used when forming invariants.

We will now state the result of this work, namely the existence of six invariants which can be used to investigate equivalence of two tensors \(R_{abcd}\) and \(\widetilde{R}_{abcd}\). We start by defining

$$\begin{aligned} \begin{aligned} S =&R_{abcd}g^{ac}g^{bd}\\ T =&R_{abcd}g^{ab}g^{cd}\\ J_0=&R_{abcd}R^{abcd}\\ J_1=&T^{ab}T_{ab}\\ J_2=&R_{abcd}T^{ab}T^{cd}\\ J_3 =&T^{ab}R_{abcd}E^{cdef}T_{ef}. \end{aligned} \end{aligned}$$
(20)

where \(E_{abcd}\) is defined by (17) and (18). Similarly, we define \(\widetilde{S}, \widetilde{T}, \widetilde{J}_0, \widetilde{J}_1, \widetilde{J}_2\) and \( \widetilde{J}_3\) as the corresponding invariants formed from \(\widetilde{R}_{abcd}\). We remark that for most of these invariants, immediate interpretations still remain to be found. Rather, their value lies in the fact that they form a set which can be used to establish the equivalence in Theorem 1 below. On the other hand, some interpretations are possible. In particular, the quotient T/S (see Definition 1) lies in the interval [1, 2] and has the meaning given by Lemma 2.
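In Cartesian components (where \(g_{ab}\) is the identity, so all indices can be kept as subscripts), the six invariants in (20) reduce to a few einsum contractions. The following numpy sketch (function names are ours) also verifies numerically that they are unchanged by a rotation of the coordinate system:

```python
import numpy as np

SQ2 = 1 / np.sqrt(2)
E1 = SQ2 * np.array([[1.0, 0.0], [0.0, -1.0]])   # e^(1), cf. (17)
E2 = SQ2 * np.array([[0.0, 1.0], [1.0, 0.0]])    # e^(2)
# Area form (18): E_abcd = e^(1)_ab e^(2)_cd - e^(2)_ab e^(1)_cd
E = np.einsum('ab,cd->abcd', E1, E2) - np.einsum('ab,cd->abcd', E2, E1)

def invariants(R):
    """The invariants S, T, J0, J1, J2, J3 of (20) in Cartesian components."""
    T2 = np.einsum('abcc->ab', R)                      # T_ab = R_abcd g^cd
    return np.array([np.einsum('abab->', R),           # S
                     np.einsum('aabb->', R),           # T
                     np.einsum('abcd,abcd->', R, R),   # J0
                     np.einsum('ab,ab->', T2, T2),     # J1
                     np.einsum('abcd,ab,cd->', R, T2, T2),           # J2
                     np.einsum('ab,abcd,cdef,ef->', T2, R, E, T2)])  # J3

# Random R_abcd = sum_i R^(i)_ab R^(i)_cd and a rotated copy
rng = np.random.default_rng(2)
R = np.zeros((2, 2, 2, 2))
for _ in range(3):
    B = rng.standard_normal((2, 2))
    S = B @ B.T                      # symmetric positive semi-definite
    R += np.einsum('ab,cd->abcd', S, S)

v = 1.1
Q = np.array([[np.cos(v), -np.sin(v)], [np.sin(v), np.cos(v)]])
R_rot = np.einsum('mnop,mi,nj,ok,pl->ijkl', R, Q, Q, Q, Q)

# All six invariants agree for the rotated components
assert np.allclose(invariants(R), invariants(R_rot))
```

Note that the component array of \(E_{abcd}\) is the same in every right-handed orthonormal frame (this is the content of (19)), which is why a fixed array E can be used for both sets of components; under an orientation-reversing Q, \(J_3\) would change sign.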

Theorem 1

Suppose that \(R_{abcd}=\sum _{i=1}^n R^{(i)}_{ab}R^{(i)}_{cd}\), with \(R^{(i)}_{ab}\ge 0\) and that \(R_{ijkl}\) are the components of \(R_{abcd}\) in some basis. Suppose also that \(\widetilde{R}_{abcd}=\sum _{i=1}^{\widetilde{n}} \widetilde{R}^{(i)}_{ab}{\widetilde{R}}^{(i)}_{cd}\), with \({\widetilde{R}}^{(i)}_{ab}\ge 0\) and that \(\widetilde{R}_{ijkl}\) are the components of \(\widetilde{R}_{abcd}\) in some, possibly unrelated, basis. If (and only if) \(S=\widetilde{S}, T=\widetilde{T}, J_0=\widetilde{J}_0, \ J_1=\widetilde{J}_1, J_2=\widetilde{J}_2, J_3=\widetilde{J}_3\), then there is a transformation matrix \({P^i}_j\) such that

$$ R_{ijkl}=\widetilde{R}_{mnop}{P^m}_i {P^n}_j {P^o}_k {P^p}_l. $$

Proof

Since the invariants are defined without reference to any basis, it is sufficient to consider the components expressed in an orthonormal frame, and in that case we must prove the existence of a rotation matrix \(Q \in \) SO(2) so that

$$ R_{ijkl}=\widetilde{R}_{mnop}{Q}_{mi} {Q}_{nj} {Q}_{ok} {Q}_{pl}. $$

Since

$$\begin{aligned} R_{abcd}=M_{ij} e^{(i)}_{ab}e^{(j)}_{cd}, \end{aligned}$$
(21)

we can consider the invariants formed from the components of

$$\begin{aligned} M_{ij}= \left( \begin{array}{c|c} A &{} \overline{u} \\ \hline \overline{u}^t &{} c \end{array}\right) \text{ and } \widetilde{M}_{ij}= \left( \begin{array}{c|c} \widetilde{A} &{} \widetilde{\overline{u}} \\ \hline \widetilde{\overline{u}}^t &{} \widetilde{c} \end{array}\right) \end{aligned}$$
(22)

and we must demonstrate the existence of a rotation matrix \(Q=Q_{2v}\) such that

$$\begin{aligned} \widetilde{A}=Q^t_{2v} A Q_{2v}, \quad \widetilde{\overline{u}}=Q^t_{2v}\overline{u}, \quad \widetilde{c} = c. \end{aligned}$$
(23)

We make the ansatz

$$\begin{aligned} M_{ij} \overset{\varvec{\cdot }}{=} \left( \begin{array}{cc|c} \frac{\sigma }{2}+a &{} b &{} x \\ b &{} \frac{\sigma }{2}-a &{} y \\ \hline x &{} y &{} c \end{array}\right) , \end{aligned}$$
(24)

i.e., \(A=\frac{\sigma }{2}I+\begin{pmatrix} a &{} b \\ b &{} -a \end{pmatrix}\) and \(\overline{u}=\begin{pmatrix} x \\ y \end{pmatrix}\), and similarly for \(\widetilde{M}_{ij}\).

Through (21) it is straightforward to see that

$$ \begin{array}{l} S=\sigma +c, \quad T=2c, \quad J_0= 2(a^2 + b^2) + c^2 + \sigma ^2/2 + 2(x^2 + y^2),\\ J_1=2(c^2+x^2+y^2) \end{array} $$

so if \(S=\widetilde{S}, T=\widetilde{T}, J_0=\widetilde{J}_0, J_1=\widetilde{J}_1\), it follows that \(\sigma =\widetilde{\sigma }\), \(c=\widetilde{c}\), \(a^2+b^2=\widetilde{a}^2+\widetilde{b}^2\) and \(x^2+y^2=\widetilde{x}^2+\widetilde{y}^2\). Since the isotropic part of A, i.e., \(\frac{\sigma }{2}I\), is unaffected by a rotation of the coordinate system, we consider the traceless parts \(A-\frac{\sigma }{2}I=\begin{pmatrix} a &{} b \\ b &{} -a \end{pmatrix}\) and \(\widetilde{A}-\frac{\widetilde{\sigma }}{2}I=\begin{pmatrix} \widetilde{a} &{} \widetilde{b} \\ \widetilde{b} &{} -\widetilde{a} \end{pmatrix}\), and the task is to establish the existence of a rotation matrix Q such that

$$ \begin{pmatrix} a &{}b \\ b &{}-a \end{pmatrix} =Q^t \begin{pmatrix} \widetilde{a} &{}\widetilde{b} \\ \widetilde{b} &{}-\widetilde{a} \end{pmatrix} Q, \quad \begin{pmatrix} x \\ y \end{pmatrix} =Q^t \begin{pmatrix} \widetilde{x} \\ \widetilde{y} \end{pmatrix}, $$

provided that also \(J_2=\widetilde{J}_2\) and \(J_3=\widetilde{J}_3\) hold. Again it is straightforward to calculate the remaining invariants, and we find

$$ \begin{array}{cl} J_2=&{}4bxy+2a(x^2-y^2)+ 2c^3+(4c+\sigma )(x^2+y^2)\\ J_3=&{}4axy-2b(x^2-y^2)\ . \end{array} $$

and similarly for \(\widetilde{J}_2, \widetilde{J}_3\). Hence (since \(\sigma =\widetilde{\sigma }\) and \(c=\widetilde{c}\))

$$\begin{aligned} \begin{array}{rl} a^2+b^2=&{}\widetilde{a}^2+\widetilde{b}^2\\ x^2+y^2=&{}\widetilde{x}^2+\widetilde{y}^2\\ 2bxy+a(x^2-y^2)=&{}2\widetilde{b}\widetilde{x}\widetilde{y}+\widetilde{a}(\widetilde{x}^2-\widetilde{y}^2)\\ 2axy-b(x^2-y^2)=&{}2\widetilde{a}\widetilde{x}\widetilde{y}-\widetilde{b}(\widetilde{x}^2-\widetilde{y}^2)\ . \end{array} \end{aligned}$$
(25)

Suppose first that \(x^2+y^2>0\). The equality \(x^2+y^2=\widetilde{x}^2+\widetilde{y}^2\) then guarantees the existence of the rotation matrix Q which is determined via the relation \(\begin{pmatrix} x \\ y \end{pmatrix}=Q^t\begin{pmatrix} \widetilde{x} \\ \widetilde{y} \end{pmatrix}\). This can also be expressed as \(Q_1^t\begin{pmatrix} x \\ y \end{pmatrix}=Q_2^t\begin{pmatrix} \widetilde{x} \\ \widetilde{y} \end{pmatrix}\) for some rotation matrices \(Q_1, Q_2\), where \(Q=Q_2 Q_1^t\). We now choose the rotation matrix \(Q_1\) so that in the untilded coordinates, \(y=0\). Similarly we choose \(Q_2\) so that for the tilded coordinates, we get a frame where \(\widetilde{y}=0\). The equalities between the invariants in (25) then become

$$ \begin{array}{rl} a^2+b^2=&{}\widetilde{a}^2+\widetilde{b}^2\\ x^2=&{}\widetilde{x}^2\\ a x^2=&{}\widetilde{a} \widetilde{x}^2\\ -b x^2=&{}-\widetilde{b}\widetilde{x}^2 \ , \end{array} $$

so that \(a=\widetilde{a}\), \(b=\widetilde{b}\). This proves the theorem when \(x^2+y^2>0\). When \(x^2+y^2=\widetilde{x}^2+\widetilde{y}^2=0\), i.e., \(x=y=\widetilde{x}=\widetilde{y}=0\), the remaining equality \(a^2+b^2=\widetilde{a}^2+\widetilde{b}^2\) is sufficient since we can again choose frames in which \(b=\widetilde{b}=0\) and \(a>0, \widetilde{a}>0\). It then follows that \(a=\widetilde{a}\). \(\square \)

5 Discussion

In this work, we started with a family of symmetric positive (semi-)definite tensors in two dimensions and considered its variance. This led us to a fourth order tensor \(R_{abcd}\) with the same symmetries as the elasticity tensor in continuum mechanics. After listing a number of possible issues to address, we focused on the equivalence problem. Namely, given the components of two such tensors \(R_{abcd}\) and \(\widetilde{R}_{abcd}\), how can one determine whether they represent the same tensor (but in different coordinate systems) or not? In Sect. 4, we saw that this could be investigated in different ways. The result of Theorem 1 is most satisfactory in the sense that it is expressible in terms of the components of the fourth order tensors directly.

There are two natural ways to extend this work. The first is to apply the result to realistic families of, e.g., diffusion tensors in two dimensions. The objective is then, apart from establishing possible equivalences, to investigate the geometric meaning of the invariants. The other natural continuation is to investigate the corresponding problem in three dimensions. The degrees of freedom of \(R_{abcd}\) will then increase from 6 to 21, leaving us with a substantially harder, but perhaps also more interesting, problem.