Mat\'{e}rn Class Tensor-Valued Random Fields and Beyond

We construct classes of homogeneous random fields on a three-dimensional Euclidean space that take values in linear spaces of tensors of a fixed rank and are isotropic with respect to a fixed orthogonal representation of the group of $3\times 3$ orthogonal matrices. The constructed classes depend on finitely many isotropic spectral densities. We say that such a field belongs to either the Mat\'{e}rn or the dual Mat\'{e}rn class if all of the above densities are Mat\'{e}rn or dual Mat\'{e}rn, respectively. Several examples are considered.


Introduction
Random functions of more than one variable, or random fields, were introduced in the 1920s as mathematical models of physical phenomena like turbulence, see, e.g., Friedmann and Keller (1924), von Kármán and Howarth (1938), Robertson (1940). To explain how random fields appear in continuum physics, consider the following example.
Example 1. Let E = E 3 be a three-dimensional Euclidean point space, and let V be the translation space of E with an inner product (·, ·). Following Truesdell (1991), the elements A of E are called the places in E. The symbol B − A is the vector in V that translates A into B.
Let B ⊂ E be a subset of E occupied by a material, e.g., a turbulent fluid or a deformable body. The temperature is a rank 0 tensor-valued function T : B → R 1 . The velocity of a fluid is a rank 1 tensor-valued function v : B → V . The strain tensor is a rank 2 tensor-valued function ε : B → S 2 (V ), where S 2 (V ) is the linear space of symmetric rank 2 tensors over V . The piezoelectricity tensor is a rank 3 tensor-valued function D : B → S 2 (V ) ⊗ V . The elastic modulus is a rank 4 tensor-valued function C : B → S 2 (S 2 (V )). Denote the range of any of the above functions by V. Physicists call V the constitutive tensor space. It is a subspace of the tensor power V ⊗r , where r is a nonnegative integer. The form (x 1 ⊗ · · · ⊗ x r , y 1 ⊗ · · · ⊗ y r ) = (x 1 , y 1 ) · · · (x r , y r ) can be extended by linearity to the inner product on V ⊗r and then restricted to V.
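The inner product on the tensor power V^{⊗r} described above can be checked numerically. The following sketch (not from the paper; it assumes NumPy and realises rank 2 tensors over V = R³ by Kronecker products) verifies that the inner product of two decomposable tensors equals the product of the inner products of their factors.

```python
import numpy as np

# Numerical check of the inner product on the tensor power V^{(x) r}:
# (x1 (x) ... (x) xr, y1 (x) ... (x) yr) = (x1, y1) ... (xr, yr).
# Rank 2 tensors over V = R^3 are realised as Kronecker products.

rng = np.random.default_rng(0)
x1, x2, y1, y2 = rng.standard_normal((4, 3))

lhs = np.dot(np.kron(x1, x2), np.kron(y1, y2))  # inner product of rank 2 tensors
rhs = np.dot(x1, y1) * np.dot(x2, y2)           # product of inner products of factors

assert np.allclose(lhs, rhs)
```

The same identity extends by linearity to arbitrary (non-decomposable) tensors, which is why it defines the inner product on the whole of V^{⊗r}.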
At microscopic length scales, spatial randomness of the material needs to be taken into account. Mathematically, there is a probability space (Ω, F, P) and a function T(A, ω) : B × Ω → V such that for any fixed A 0 ∈ B and for any Borel set S ⊆ V the inverse image T −1 (A 0 , S) is an event. The map T(A, ω) is a random field.
Translate the whole body B by a vector x ∈ V . The random fields T(A + x) and T(A) have the same finite-dimensional distributions. It is therefore convenient to assume that there is a random field defined on all of E such that its restriction to B is equal to T(A). For brevity, denote the new field by the same symbol T(A) (but this time A ∈ E). The random field T(A) is strictly homogeneous, that is, the random fields T(A + x) and T(A) have the same finite-dimensional distributions. In other words, for each positive integer n, for each x ∈ V , and for all distinct places A 1 , . . . , A n ∈ E the random elements T(A 1 ) ⊕ · · · ⊕ T(A n ) and T(A 1 + x) ⊕ · · · ⊕ T(A n + x) of the direct sum of n copies of the space V have the same probability distribution.
Let K be the material symmetry group of the material body B acting in V . The group K is a subgroup of the orthogonal group O(V ). For simplicity, we assume that the material is fully symmetric, that is, K = O(V ). Fix a place O ∈ B and identify E with V by the map f that maps A ∈ E to A − O ∈ V . Then K acts in E and rotates the body B by g · A = f −1 (g(A − O)). Let A 0 ∈ B. Under the above action of K the point A 0 becomes g · A 0 . The random tensor T(A 0 ) becomes U(g)T(A 0 ), where U is the restriction of the orthogonal representation g → g ⊗r of the group O(V ) to the subspace V of the space V ⊗r . The random fields T(g · A) and U(g)T(A) must have the same finite-dimensional distributions, because g · A 0 is the same material point in a different place. Note that this property does not depend on a particular choice of the place O, because the field is strictly homogeneous. We call such a field strictly isotropic.
Assume that the random field T(A) is second-order, that is, E‖T(A)‖² < ∞ for each A ∈ E. Define the one-point correlation tensor of the field T(A) by

⟨T(A)⟩ = E[T(A)]
and its two-point correlation tensor by

⟨T(A), T(B)⟩ = E[(T(A) − ⟨T(A)⟩) ⊗ (T(B) − ⟨T(B)⟩)].
Assume that the field T(A) is mean-square continuous, that is, its two-point correlation tensor ⟨T(A), T(B)⟩ : E × E → V ⊗ V is a continuous function. Note that Marinucci and Peccati (2013) showed that any finite-variance isotropic random field on a compact group is necessarily mean-square continuous under standard measurability assumptions, and hence its covariance function is continuous. In a related setting, the characterisation of the covariance function of a real homogeneous and isotropic random field on d-dimensional Euclidean space was given in the classical paper by Schoenberg (1938), where it was conjectured that the only discontinuity such a function can have occurs at the origin. This conjecture was proved by Crum (1956) for d ≥ 2. This result was widely used in geostatistics (see, e.g., Gneiting and Sasvári (1999), among others), where it was argued that a homogeneous and isotropic random field can be expressed as the sum of a mean-square continuous component and a so-called "nugget effect", i.e., a purely discontinuous component. In fact, this latter component is necessarily non-measurable (see, e.g., Kallianpur (1980, Example 1.2.5)). The relation between measurability and mean-square continuity in the non-compact situation is still unclear even for scalar random fields. That is why we assume in this paper that our random fields are mean-square continuous, and hence their covariance functions are continuous.
If the field T(A) is strictly homogeneous, then its one-point correlation tensor is a constant tensor in V, while its two-point correlation tensor is a function of the vector B−A, i.e., a function on V . Call such a field wide-sense homogeneous.
Similarly, if the field T(A) is strictly isotropic, then

⟨T(g · A)⟩ = U(g)⟨T(A)⟩, ⟨T(g · A), T(g · B)⟩ = (U ⊗ U)(g)⟨T(A), T(B)⟩. (1)

Definition 1. A random field T(A) is called wide-sense isotropic if its one-point and two-point correlation tensors satisfy (1).
There is another definition of isotropy.
Definition 2 (Yaglom, 1957). A random field T(A) is called multidimensional scalar wide-sense isotropic if its one-point correlation tensor is constant, while the two-point correlation tensor ⟨T(x), T(y)⟩ depends only on ‖y − x‖.
It is easy to see that Definition 2 is a particular case of Definition 1 when the representation U is trivial, that is, maps all elements g ∈ K to the identity operator.
In the case of r = 0, the complete description of the two-point correlation functions of scalar homogeneous and isotropic random fields is as follows.
Recall that a measure µ defined on the Borel σ-field of a Hausdorff topological space X is called a Borel measure.
Theorem 1. The formula

B(ρ) = ∫₀^∞ (sin(λρ)/(λρ)) dµ(λ) (2)

establishes a one-to-one correspondence between the set of two-point correlation functions of homogeneous and isotropic random fields T (x) on the space domain R 3 and the set of all finite Borel measures µ on the interval [0, ∞).
Theorem 1 is a translation of the result proved by Schoenberg (1938) into the language of random fields. This translation is performed as follows. Assume that B(x) is a two-point correlation function of a homogeneous and isotropic random field T (x). Let n be a positive integer, let x 1 , . . . , x n be n distinct points in R 3 , and let c 1 , . . . , c n be n complex numbers. Consider the random variable X = Σ_{j=1}^{n} c_j [T (x_j) − ⟨T (x_j)⟩]. Its variance is nonnegative:

0 ≤ E[|X|²] = Σ_{j=1}^{n} Σ_{k=1}^{n} c_j c̄_k ⟨T (x_j), T (x_k)⟩.

In other words, the two-point correlation function ⟨T (x), T (y)⟩ is a nonnegative-definite function. Moreover, it is continuous, because the random field T (x) is mean-square continuous, and depends only on the distance ‖y − x‖ between the points x and y, because the field is homogeneous and isotropic. Schoenberg (1938) proved that Equation (2) describes all such functions. Conversely, assume that the function ⟨T (x), T (y)⟩ is described by Equation (2). The centred Gaussian random field with the two-point correlation function (2) is homogeneous and isotropic. In other words, there is a link between the theory of random fields and the theory of positive-definite functions.
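The nonnegative definiteness underlying Theorem 1 is easy to illustrate numerically. The sketch below (an illustration under stated assumptions, not part of the paper; NumPy assumed) builds the covariance B(ρ) = Σ_k w_k sin(λ_k ρ)/(λ_k ρ) for a discrete measure µ and checks that the resulting Gram matrix over random points of R³ has no negative eigenvalues.

```python
import numpy as np

def schoenberg_cov(rho, lambdas, weights):
    """B(rho) = sum_k w_k sin(lambda_k rho)/(lambda_k rho),
    i.e. the form (2) for a discrete measure mu = sum_k w_k delta_{lambda_k}.
    np.sinc(z/pi) = sin(z)/z, with the value 1 at z = 0."""
    rho = np.asarray(rho, dtype=float)
    out = np.zeros_like(rho)
    for lam, w in zip(lambdas, weights):
        out += w * np.sinc(lam * rho / np.pi)
    return out

rng = np.random.default_rng(1)
pts = rng.standard_normal((40, 3))                      # random points in R^3
dist = np.linalg.norm(pts[:, None] - pts[None], axis=-1)
gram = schoenberg_cov(dist, lambdas=[0.5, 2.0, 7.0], weights=[1.0, 0.5, 0.25])

# nonnegative definiteness: no (significantly) negative eigenvalues
assert np.linalg.eigvalsh(gram).min() > -1e-9
```

The kernel sin(λρ)/(λρ) is the characteristic function of the uniform measure on the sphere of radius λ, which is why each summand, and hence the mixture, is positive definite on R³.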
In what follows, we consider the fields with absolutely continuous spectrum.
Definition 3 (Ivanov and Leonenko (1989)). A homogeneous and isotropic random field T (x) has an absolutely continuous spectrum if the measure µ is absolutely continuous with respect to the measure 4πλ² dλ, i.e., there exists a nonnegative measurable function f (λ) such that dµ(λ) = 4πλ² f (λ) dλ. The function f (λ) is called the isotropic spectral density of the random field T (x).
Example 2 (The Matérn two-point correlation function). Consider a two-point correlation function of a scalar random field T (x) of the form

B(ρ) = σ² (2^{1−ν}/Γ(ν)) (aρ)^ν K_ν(aρ),

where σ² > 0, a > 0, ν > 0, and K_ν(z) is the modified Bessel function of the second kind (Macdonald function) of order ν. Here, the parameter ν measures the differentiability of the random field, σ² is its variance, and the parameter a measures how quickly the correlation function of the random field decays with distance. The corresponding isotropic spectral density is

f(λ) = σ² Γ(ν + 3/2) a^{2ν} / (π^{3/2} Γ(ν) (a² + λ²)^{ν+3/2}).

Note that Example 2 demonstrates another link, this time between the theory of random fields and the theory of special functions.
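As a quick numerical check of Example 2 (NumPy and SciPy assumed; the function name is mine), the following sketch evaluates the Matérn correlation and verifies the classical special case ν = 1/2, where it reduces to the exponential correlation σ² e^{−aρ}.

```python
import numpy as np
from scipy.special import gamma, kv

def matern_cov(r, sigma2, a, nu):
    """Matern two-point correlation function
    B(r) = sigma^2 * 2^{1-nu}/Gamma(nu) * (a r)^nu * K_nu(a r)."""
    r = np.asarray(r, dtype=float)
    z = a * r
    out = np.full_like(r, sigma2)          # B(0) = sigma^2 (the variance)
    nz = z > 0
    out[nz] = sigma2 * 2 ** (1 - nu) / gamma(nu) * z[nz] ** nu * kv(nu, z[nz])
    return out

r = np.linspace(0.01, 5.0, 200)
# nu = 1/2: K_{1/2}(z) = sqrt(pi/(2z)) e^{-z}, so B(r) = sigma^2 exp(-a r)
assert np.allclose(matern_cov(r, 1.0, 2.0, 0.5), np.exp(-2.0 * r), atol=1e-8)
```

Larger ν gives a smoother field; a controls the correlation length, in agreement with the roles of the parameters described above.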
In this paper, we consider the following problem: how can the Matérn two-point correlation tensor be defined in the case r > 0? A particular answer to this question can be formulated as follows.
Example 3 (Parsimonious Matérn model, Gneiting et al. (2010)). Assume that a vector random field has the two-point correlation tensor B(x, y) = (B_ij(x, y))_{1≤i,j≤m}. It is not straightforward to specify the cross-covariance functions B_ij(x), 1 ≤ i, j ≤ m, i ≠ j, as non-trivial, valid parametric models because of the requirement of their nonnegative definiteness. In the multivariate Matérn model, each marginal covariance function is a Matérn function with colocation correlation coefficient b_ij, smoothness parameter ν_ij, and scale parameter a_ij; the spectral densities are the corresponding Matérn spectral densities. The question then is to determine the values of ν_ij, a_ij, and b_ij so that the nonnegative definiteness condition is satisfied. Let m ≥ 2. Suppose that ν_ij = (ν_i + ν_j)/2, and that there is a common scale parameter in the sense that there exists an a > 0 such that a_1 = · · · = a_m = a and a_ij = a for 1 ≤ i, j ≤ m, i ≠ j.
Then the multivariate Matérn model provides a valid second-order structure if and only if

b_ij = β_ij (Γ(ν_i + 3/2)/Γ(ν_i))^{1/2} (Γ(ν_j + 3/2)/Γ(ν_j))^{1/2} Γ(ν_ij)/Γ(ν_ij + 3/2), 1 ≤ i, j ≤ m, i ≠ j,

where the matrix (β_ij)_{i,j=1,...,m} has diagonal elements β_ii = 1 for i = 1, ..., m, and off-diagonal elements β_ij, 1 ≤ i, j ≤ m, i ≠ j, such that it is symmetric and nonnegative definite.
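The validity condition above can be probed numerically. The sketch below (an illustration assuming NumPy/SciPy; `matern_sdf` is my name for the standard 3-D Matérn spectral density) builds the 2 × 2 spectral density matrix of a parsimonious bivariate Matérn model with a colocation coefficient strictly inside the admissible range and checks its nonnegative definiteness over a grid of wavenumbers.

```python
import numpy as np
from scipy.special import gamma

def matern_sdf(lam, sigma2, a, nu):
    """Isotropic Matern spectral density in R^3 (d = 3):
    f(lam) = sigma^2 Gamma(nu + 3/2) a^{2 nu}
             / (pi^{3/2} Gamma(nu) (a^2 + lam^2)^{nu + 3/2})."""
    return (sigma2 * gamma(nu + 1.5) * a ** (2 * nu)
            / (np.pi ** 1.5 * gamma(nu) * (a ** 2 + lam ** 2) ** (nu + 1.5)))

nu1, nu2, a = 0.5, 1.5, 1.0
nu12 = 0.5 * (nu1 + nu2)                 # parsimonious smoothness
# bound on the colocation coefficient of the form stated above (d = 3)
bound = (gamma(nu12) / gamma(nu12 + 1.5)
         * np.sqrt(gamma(nu1 + 1.5) * gamma(nu2 + 1.5)
                   / (gamma(nu1) * gamma(nu2))))
b12 = 0.5 * bound                        # strictly inside the valid range

for lam in np.linspace(0.0, 20.0, 200):
    F = np.array([[matern_sdf(lam, 1.0, a, nu1), matern_sdf(lam, b12, a, nu12)],
                  [matern_sdf(lam, b12, a, nu12), matern_sdf(lam, 1.0, a, nu2)]])
    assert np.linalg.eigvalsh(F).min() >= -1e-12
```

With the common scale parameter, the ratio f₁₂²/(f₁₁ f₂₂) is constant in λ, so the nonnegative definiteness check reduces to exactly the gamma-function inequality behind the bound.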
Example 4 (Flexible Matérn model). Consider the vector random field T(x) ∈ R^m, x ∈ R³, whose two-point covariance tensor has the entries B_ij(x, y) = σ_ij M(‖y − x‖ | ν_ij, a_ij), where M(ρ | ν, a) = (2^{1−ν}/Γ(ν)) (aρ)^ν K_ν(aρ) is the normalised Matérn correlation function. We assume that the matrix Σ = (σ_ij)_{1≤i,j≤m} ≥ 0 (nonnegative definite), and we denote σ_i² = σ_ii, i = 1, . . . , m. Then the spectral density F = (f_ij)_{1≤i,j≤m} has the entries

f_ij(λ) = σ_ij Γ(ν_ij + 3/2) a_ij^{2ν_ij} / (π^{3/2} Γ(ν_ij) (a_ij² + λ²)^{ν_ij + 3/2}).

We need to find conditions on the parameters a_ij > 0, ν_ij > 0, under which F ≥ 0 (nonnegative definite). The general conditions can be found in Du et al. (2012) and Apanasovich et al. (2012).
Recall that a symmetric, real m × m matrix Θ = (θ_ij)_{1≤i,j≤m} is said to be conditionally negative definite (Bapat and Raghavan, 1997) if the inequality

Σ_{i=1}^{m} Σ_{j=1}^{m} c_i c_j θ_ij ≤ 0

holds for every c ∈ R^m with Σ_{i=1}^{m} c_i = 0. A necessary condition for the above inequality is θ_ij ≥ (θ_ii + θ_jj)/2, which implies that all entries of a conditionally negative definite matrix are nonnegative whenever its diagonal entries are nonnegative. If all its diagonal entries vanish, a conditionally negative definite matrix is also called a Euclidean distance matrix. It is known that Θ = (θ_ij)_{1≤i,j≤m} is conditionally negative definite if and only if the m × m matrix S with entries exp{−θ_ij u} is positive definite for every fixed u ≥ 0 (cf. Bapat and Raghavan (1997, Theorem 4.1.3)), i.e., S = e^{−uΘ}, where e^Λ denotes the Hadamard exponential of a matrix Λ.
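Both characterisations of conditional negative definiteness recalled above can be tested on a concrete example. The sketch below (not from the paper; NumPy assumed) uses the matrix of squared Euclidean distances, checks the quadratic form over vectors whose coordinates sum to zero, and then checks the criterion via the Hadamard exponential.

```python
import numpy as np

# theta_ij = |x_i - x_j|^2 is conditionally negative definite (with zero
# diagonal it is a Euclidean distance matrix), so the Hadamard exponential
# S = exp(-u * Theta) (entrywise) must be nonnegative definite for u >= 0.

rng = np.random.default_rng(2)
x = rng.standard_normal((30, 3))
theta = np.sum((x[:, None] - x[None]) ** 2, axis=-1)

# direct check: c^T Theta c <= 0 whenever sum(c) = 0
for _ in range(100):
    c = rng.standard_normal(30)
    c -= c.mean()                       # enforce sum(c) = 0
    assert c @ theta @ c <= 1e-9

# criterion via the Hadamard exponential: a Gaussian kernel matrix
for u in (0.1, 1.0, 10.0):
    S = np.exp(-u * theta)
    assert np.linalg.eigvalsh(S).min() > -1e-9
```

Expanding |x_i − x_j|² shows the quadratic form equals −2|Σᵢ cᵢ xᵢ|² when Σᵢ cᵢ = 0, which makes the first check an identity rather than a numerical accident.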
Some simple examples of conditionally negative definite matrices are easy to construct, for instance the Euclidean distance matrices mentioned above. Recall that the Hadamard product of two matrices A and B is the matrix of their elementwise products; one needs to find conditions under which the matrix of spectral densities is nonnegative definite. Consider first case 1, in which the matrix of smoothness parameters (ν_ij) is conditionally negative definite. This class is not empty, since it includes the case of the so-called parsimonious model: ν_ij = (ν_i + ν_j)/2 (see Example 3). Thus, for case 1, multivariate Matérn models are valid under conditions (see Du et al. (2012); Apanasovich et al. (2012)) guaranteeing that the corresponding matrices of parameters form nonnegative definite matrices. Consider case 2. Then the multivariate Matérn models are valid under the following conditions (Apanasovich et al., 2012): the smoothness parameters form a conditionally negative definite matrix, and σ_ij/a_ij³, 1 ≤ i, j ≤ m, form nonnegative definite matrices. These classes of Matérn models are not empty, since in the case of the parsimonious model they are consistent with Gneiting et al. (2010, Theorem 1). For the parsimonious model from this paper (ν_ij = (ν_ii + ν_jj)/2, 1 ≤ i, j ≤ m), the multivariate Matérn models are valid under further conditions given in these papers. The most general conditions and new examples can be found in Du et al. (2012) and Apanasovich et al. (2012). The paper by Genton and Kleiber (2015) reviews the main approaches to building multivariate correlation and covariance structures, including the multivariate Matérn models.

Note that for the Matérn models the two-point correlation function decays exponentially fast, so that it is integrable over R³. This condition is known as short range dependence, while for the dual Matérn model, long range dependence is possible: the two-point correlation function may decay so slowly that the above integral diverges. When m = 3, the random field of Example 3 is multidimensional scalar wide-sense isotropic in the sense of Definition 2, but not isotropic in the sense of Definition 1. How can one construct examples of homogeneous and isotropic vector and tensor random fields with Matérn two-point correlation tensors?
To solve this problem, we develop a sketch of a general theory of homogeneous and isotropic tensor-valued random fields in Section 2. This theory was developed by Ostoja-Starzewski (2014, 2016a). In particular, we explain another two links: one leads from the theory of random fields to classical invariant theory, another one was established recently and leads from the theory of random fields to the theory of convex compacta.
In Section 3, we give examples of Matérn homogeneous and isotropic tensor-valued random fields. Finally, in the Appendices we briefly describe mathematical terminology that is not always familiar to specialists in probability: tensors, group representations, and classical invariant theory. For various aspects of the theory of random fields see also Leonenko (1999) and Leonenko and Sakhno (2012).

A sketch of a general theory
Let r be a nonnegative integer, let V be an invariant subspace of the representation g → g ⊗r of the group O(3), and let U be the restriction of the above representation to V. Consider a homogeneous V-valued random field T(x), x ∈ R 3 . Assume it is isotropic, that is, it satisfies (1). It is easy to see that its one-point correlation tensor ⟨T(x)⟩ is an arbitrary element of the isotypic subspace of the space V that corresponds to the trivial representation. In particular, in the case of r = 0 the representation U is trivial, and ⟨T(x)⟩ is an arbitrary real number. In the case of r = 1 we have U(g) = g. This representation does not contain a trivial component, therefore ⟨T(x)⟩ = 0. In the case of r = 2 and U(g) = S 2 (g), the isotypic subspace that corresponds to the trivial representation is described in Example 13: we have ⟨T(x)⟩ = CI, where C is an arbitrary real number and I is the identity operator in R 3 , and so on.
Can we describe the two-point correlation tensor in the same way? The answer is positive. Indeed, the second equation in (1) means that ⟨T(x), T(y)⟩ is a measurable covariant of the pair (g, U). The integrity basis for polynomial invariants of the defining representation contains one element, I₁ = (x, x). By the Wineman-Pipkin theorem (Appendix A, Theorem 6), we obtain

⟨T(x), T(y)⟩ = Σ_m ϕ_m((y − x, y − x)) T^m(y − x),

where the T^m(y − x) are the basic covariant tensors of the representation U and the ϕ_m are measurable functions.
For example, when r = 1, the basic covariant tensors of the defining representation are δ_ij and x_i x_j, by the result of Weyl (1997) mentioned in Appendix C. We obtain the result of Robertson (1940):

⟨v(x), v(y)⟩_ij = A(ρ) r_i r_j + B(ρ) δ_ij, r = y − x, ρ = ‖r‖.

When r = 2 and U(g) = S 2 (g), the three rank 4 isotropic tensors are δ_ij δ_kl, δ_ik δ_jl, and δ_il δ_jk. Consider the group Σ of order 8 of permutations of the symbols i, j, k, and l generated by the transpositions (ij), (kl), and the product (ik)(jl). The group Σ acts on the set of rank 4 isotropic tensors and has two orbits. The sums of the elements of each orbit are the basic isotropic tensors. Consider the case of degree 2 and order 4. For the pair of representations (g^⊗4, (R 3)^⊗4) and (g, R 3) we have 6 covariant tensors. The action of the group Σ has 2 orbits, and the symmetric covariant tensors are the corresponding orbit sums. In the case of degree 4 and order 4 we have only one covariant, and the result easily follows. The case of r = 3 will be considered in detail elsewhere. When r = 4 and U(g) = S 2 (S 2 (g)), the situation is more delicate. There are 8 symmetric isotropic tensors connected by 1 syzygy, 13 basic covariant tensors of degree 2 and order 8 connected by 3 syzygies, 10 basic covariant tensors of degree 4 and order 8 connected by 2 syzygies, 3 basic covariant tensors of degree 6 and order 8, and 1 basic covariant tensor of degree 8 and order 8; see Malyarenko and Ostoja-Starzewski (2016c) and Malyarenko and Ostoja-Starzewski (2016b) for details. It follows that there are 29 independent basic covariant tensors. The result of Lomakin (1965) includes only 15 of them and is therefore incomplete.
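The invariance of the three rank 4 isotropic tensors mentioned above is easy to confirm numerically. The following sketch (an illustration, not from the paper; NumPy assumed) rotates each tensor by a random orthogonal matrix and checks that it is unchanged.

```python
import numpy as np

# The rank 4 isotropic tensors delta_ij delta_kl, delta_ik delta_jl,
# delta_il delta_jk satisfy T'_{abcd} = Q_ai Q_bj Q_ck Q_dl T_{ijkl} = T_{abcd}
# for every orthogonal matrix Q.

d = np.eye(3)
T1 = np.einsum('ij,kl->ijkl', d, d)
T2 = np.einsum('ik,jl->ijkl', d, d)
T3 = np.einsum('il,jk->ijkl', d, d)

rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # a random orthogonal matrix

for T in (T1, T2, T3):
    Tr = np.einsum('ai,bj,ck,dl,ijkl->abcd', Q, Q, Q, Q, T)
    assert np.allclose(Tr, T)
```

For T1, for instance, the contraction produces (QQᵀ)_{ab}(QQᵀ)_{cd} = δ_{ab} δ_{cd}, and the other two cases work the same way.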
How can we find the functions ϕ_m? In the case of r = 0, the answer is given by Theorem 1. In the case of r = 1, the answer was found by Yaglom (1957), in terms of two finite measures Φ₁ and Φ₂ on [0, ∞) and the spherical Bessel functions j_n, where ρ = ‖y − x‖. In the general case, we proceed in steps. The main idea is simple: we describe all homogeneous random fields and discard those that are not isotropic. The homogeneous random fields are described by the following result.
Theorem 2. The formula

⟨T(x), T(y)⟩ = ∫_{R̂³} e^{i(p, y−x)} dµ(p) (5)

establishes a one-to-one correspondence between the set of two-point correlation tensors of homogeneous random fields T(x) on the space domain R 3 with values in a complex finite-dimensional space V C and the set of all measures µ on the Borel σ-field B(R̂ 3) of the wavenumber domain R̂ 3 with values in the cone of nonnegative-definite Hermitian operators in V C .
We would like to write as many formulae as possible in a coordinate-free form, like (5). To do that, let J be a real structure in the space V C , that is, an antilinear map J : V C → V C with J² = id. Any tensor x ∈ V C can be written as x = x⁺ + x⁻ with x^± ∈ V^± = { x ∈ V C : Jx = ±x }. Both sets V⁺ and V⁻ are real vector spaces. If the values of the random field T(x) lie in V⁺, then the measure µ satisfies the symmetry condition (6). Next, the following lemma can be proved. Let p = (λ, ϕ_p, θ_p) be the spherical coordinates in the wavenumber domain.
Lemma 1. A homogeneous random field described by (5) and (6) is isotropic if and only if its two-point correlation tensor has the form (7), where ν is a finite measure on the interval [0, ∞) and f is a measurable function taking values in the set of all symmetric nonnegative-definite operators on V⁺ with unit trace, satisfying the condition (8). When λ = 0, condition (8) gives f(0) = S²(U)(g)f(0) for all g ∈ O(3). In other words, the tensor f(0) lies in the isotypic subspace of the space S²(V⁺) that corresponds to the trivial representation of the group O(3); call it H₁. The intersection of H₁ with the set of all symmetric nonnegative-definite operators on V⁺ with unit trace is a convex compact set; call it C₁.
When λ > 0, condition (8) determines another convex compact set in a similar way; call it C₀. Fix an orthonormal basis T^{0,1,0}, . . . , T^{0,n₀,0} of the space H₁. Assume that the space H₀ ⊖ H₁ has nonzero intersection with the spaces of n₁ copies of the irreducible representation U^{2g}, n₂ copies of the irreducible representation U^{4g}, . . . , n_r copies of the irreducible representation U^{2rg} of the group O(3), and let T^{2ℓ,n,m}, −2ℓ ≤ m ≤ 2ℓ, be the tensors of the Gordienko basis of the nth copy of the representation U^{2ℓg}. We have

f(λ, 0, 0) = Σ_{ℓ=0}^{r} Σ_{n=1}^{n_ℓ} f_{ℓn}(λ) T^{2ℓ,n,0}

with f_{ℓn}(0) = 0 for ℓ > 0 and 1 ≤ n ≤ n_ℓ. By (8) we obtain an expression (9) for f(p), and Equation (7) takes the form (10). Substitute the Rayleigh expansion of the plane wave into (10); here r = y − x. Returning to the matrix entries U^{2ℓg}_{m0}(ϕ_r, θ_r), we obtain an expansion (11) whose angular part is given by functions M^{2ℓ,n}(ϕ_r, θ_r). It is easy to check that the function M^{2ℓ,n}(ϕ_r, θ_r) is a covariant of degree 2ℓ and order 2r. Therefore, the M-function is a linear combination of basic symmetric covariant tensors, or L-functions, where q_{kr} denotes the number of linearly independent symmetric covariant tensors of degree 2k and order 2r. The right-hand side is indeed a polynomial in sines and cosines of the angles ϕ_r and θ_r. Recall that the f_{ℓn}(λ) are measurable functions such that the tensor (9) lies in C₁ for λ = 0 and in C₀ for λ > 0. The final form of the two-point correlation tensor of the random field T(x) is determined by the geometry of the convex compacta C₀ and C₁. For example, in the case of r = 1 the set C₀ is an interval (see Malyarenko and Ostoja-Starzewski (2016a)), while C₁ is a one-point set inside this interval. The set C₀ has two extreme points, and the corresponding random field is a sum of two uncorrelated components given by Equation (12) below. The one-point set C₁ lies in the middle of the interval, and the condition Φ₁({0}) = Φ₂({0}) follows.
In the case of r = 2, the set of extreme points of the set C 0 has three connected components: two one-point sets and an ellipse, see Malyarenko and Ostoja-Starzewski (2016a), and the corresponding random field is a sum of three uncorrelated components.
In general, the two-point correlation tensor of the field has the simplest form when the set C 0 is a simplex. We use this idea in Examples 6 and 8 below.
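The Rayleigh expansion invoked in the derivation above can be validated numerically. The sketch below (an illustration assuming SciPy; the truncation order 60 is my choice) compares the truncated expansion of a plane wave with a direct evaluation.

```python
import numpy as np
from scipy.special import spherical_jn, eval_legendre

# Rayleigh (plane wave) expansion:
# e^{i(p, x)} = sum_{n>=0} (2n + 1) i^n j_n(|p||x|) P_n(cos gamma),
# where gamma is the angle between p and x.

rng = np.random.default_rng(4)
p = rng.standard_normal(3)
x = rng.standard_normal(3)
pr, xr = np.linalg.norm(p), np.linalg.norm(x)
cosg = np.dot(p, x) / (pr * xr)

series = sum((2 * n + 1) * 1j ** n * spherical_jn(n, pr * xr)
             * eval_legendre(n, cosg)
             for n in range(60))

assert np.allclose(series, np.exp(1j * np.dot(p, x)), atol=1e-10)
```

The terms decay superexponentially once n exceeds |p||x|, which is why a modest truncation already reproduces the plane wave to machine precision.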
Examples of Matérn homogeneous and isotropic random fields

Example 6. Consider a centred homogeneous scalar isotropic random field T(x) on the space R 3 with values in the two-dimensional space R 2 . It is easy to see that both C₀ and C₁ are equal to the set of all symmetric nonnegative-definite 2 × 2 matrices with unit trace. Every such matrix has the form

( x    y
  y  1 − x )

with x ∈ [0, 1] and y² ≤ x(1 − x). Geometrically, C₀ and C₁ are the balls (x − 1/2)² + y² ≤ 1/4. Inscribe an equilateral triangle into the above ball. The function f(p) takes the form of a barycentric combination of the three vertices, where a_m(‖p‖) are the barycentric coordinates of the point f(p) inside the triangle. The two-point correlation tensor of the field then takes the corresponding form, where dΦ_m(λ) = a_m(λ) dν(λ) are three finite measures on [0, ∞), and ν is the measure of Equation (7). Define the dΦ_m(λ) by Matérn spectral densities of Example 2 (resp. dual Matérn spectral densities of Example 5). We obtain a scalar homogeneous and isotropic Matérn (resp. dual Matérn) random field.
Example 7. Using (4) and well-known formulae for the spherical Bessel functions, we write the two-point correlation tensor of a rank 1 homogeneous and isotropic random field in the form (12), where r = y − x. Now assume that the measures Φ₁ and Φ₂ are described by Matérn densities. It is possible to substitute these densities into (12) and calculate the integrals using Prudnikov et al. (1986, Equation 2.5.9.1). We obtain rather long expressions that include the generalised hypergeometric function ₁F₂. The situation is different for the dual model: using Prudnikov et al. (1988, Equations 2.16.14.3, 2.16.14.4), we obtain closed-form expressions.

Example 8. Consider the case when r = 2 and U(g) = S 2 (g). In order to write down symmetric rank 4 tensors in a compressed matrix form, consider an orthogonal operator τ acting from S 2 (S 2 (R 3 )) to S 2 (R 6 ); see Helnwein (2001, Equation (44)). It is possible to prove the following.
The matrix τ f ijkl (0) lies in the interval C 1 with extreme points C 1 and C 2 , where the nonzero elements of the symmetric matrix C 1 lying on and over the main diagonal are as follows. The matrix τ f ijkl (λ, 0, 0) with λ > 0 lies in the convex compact set C₀.

The third component is the ellipse
Choose three points D 3 , D 4 , D 5 lying on the above ellipse. If we allow the matrix τ f ijkl (λ, 0, 0) with λ > 0 to take values in the simplex with vertices D i , 1 ≤ i ≤ 5, then the two-point correlation tensor of the random field ε(x) is the sum of five integrals. The larger the four-dimensional Lebesgue measure of the simplex in comparison with that of C₀, the wider the class of random fields described.
Note that the simplex should contain the set C₁. The matrix C 1 lies on the ellipse and corresponds to the value θ = 2 arcsin(√(2/3)). It follows that one of the above points, say D 3 , must be equal to C 1 . If we choose D 4 to correspond to the value θ = 2(π − arcsin(√(2/3))), then C 2 lies in the simplex. Finally, choose D 5 to correspond to the value θ = π. The constructed simplex is not the one with maximal possible Lebesgue measure, but the coefficients in the formulas are simple.
Theorem 3. Let ε(x) be a random field that describes the stress tensor of a deformable body. The following conditions are equivalent.
Theorem 4. The following conditions are equivalent.
2. The field ε(x) admits a spectral expansion in which Z n ℓ ′ m ′ kl is a sequence of uncorrelated scattered random measures on [0, ∞) with control measures Φ n .
The idea of the proof is as follows. Write down the Rayleigh expansion for e^{i(p,x)} and for e^{−i(p,y)} separately, substitute both expansions into (10), and use the result known as the Gaunt integral. This theorem can be proved in exactly the same way as its complex counterpart; see, for example, Marinucci and Peccati (2011). Then apply Karhunen's theorem; see Karhunen (1947).

A Tensors
There are several equivalent definitions of tensors. Surprisingly, the most abstract of them is useful in the theory of random fields. Let r be a nonnegative integer, and let V 1 , . . . , V r be linear spaces over the same field K. When r = 0, define the tensor product of the empty family of spaces as K 1 , the one-dimensional linear space over K.
Theorem 5 (The universal mapping property). There exist a unique linear space V 1 ⊗ · · · ⊗ V r and a unique multilinear map τ : V 1 × V 2 × · · · × V r → V 1 ⊗ · · · ⊗ V r that satisfy the universal mapping property: for any linear space W and for any multilinear map β : V 1 × V 2 × · · · × V r → W , there exists a unique linear operator B : V 1 ⊗ · · · ⊗ V r → W such that β = B • τ . In other words: the construction of the tensor product of linear spaces reduces the study of multilinear mappings to the study of linear ones.
The tensor product v 1 ⊗ · · · ⊗ v r of the vectors v i ∈ V i , 1 ≤ i ≤ r, is defined as τ (v 1 , . . . , v r ). Let V 1 , . . . , V r , W 1 , . . . , W r be finite-dimensional linear spaces, and let A i ∈ L(V i , W i ), 1 ≤ i ≤ r. The tensor product of linear operators, A 1 ⊗ · · · ⊗ A r , is the unique element of the space L(V 1 ⊗ · · · ⊗ V r , W 1 ⊗ · · · ⊗ W r ) such that

(A 1 ⊗ · · · ⊗ A r )(v 1 ⊗ · · · ⊗ v r ) = (A 1 v 1 ) ⊗ · · · ⊗ (A r v r ).

If all the spaces V i , 1 ≤ i ≤ r, are copies of the same space V , then we write V ⊗r for the r-fold tensor product of V with itself, and v ⊗r for the tensor product of r copies of a vector v ∈ V . Similarly, for A ∈ L(V, V ) we write A ⊗r for the r-fold tensor product of A with itself. Note that A ⊗0 is the identity operator in the space K 1 .
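The defining property of the tensor product of operators can be verified in coordinates, where it becomes the Kronecker product. A minimal sketch, assuming NumPy:

```python
import numpy as np

# Defining property of the tensor product of linear operators:
# (A1 (x) A2)(v1 (x) v2) = (A1 v1) (x) (A2 v2).
# In coordinates, the tensor product of operators is the Kronecker product.

rng = np.random.default_rng(5)
A1 = rng.standard_normal((3, 3))
A2 = rng.standard_normal((4, 4))
v1 = rng.standard_normal(3)
v2 = rng.standard_normal(4)

lhs = np.kron(A1, A2) @ np.kron(v1, v2)
rhs = np.kron(A1 @ v1, A2 @ v2)
assert np.allclose(lhs, rhs)
```

This is the mixed-product property of the Kronecker product, and it is exactly the uniqueness clause in the definition above: an operator on V₁ ⊗ V₂ is determined by its values on decomposable tensors.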

B Group representations
Let G be a topological group. A finite-dimensional representation of G is a pair (ρ, V ), where V is a finite-dimensional linear space, and ρ : G → GL(V ) is a continuous group homomorphism. Here GL(V ) is the general linear group of the space V , i.e., the group of all invertible linear operators on V ; if dim V = n, it may be identified with the group of invertible n × n matrices. In what follows, we omit the word "finite-dimensional" unless infinite-dimensional representations are under consideration.
In a coordinate form, a representation of G is a continuous group homomorphism ρ : G → GL(n, K) and the space K n .
Let W ⊆ V be a linear subspace of the space V . W is called an invariant subspace of the representation (ρ, V ) if ρ(g)w ∈ W for all g ∈ G and w ∈ W . The restriction σ(g) = ρ(g)|_W then defines a representation (σ, W ) of G. In a coordinate form, take a basis for W and complete it to a basis for V . The matrix of ρ(g) relative to the above basis has the block upper triangular form (13), with the matrix of σ(g) in the upper left corner. Let (ρ, V ) and (τ, W ) be representations of G. An operator A ∈ L(V, W ) is called an intertwining operator if

Aρ(g) = τ (g)A, g ∈ G. (14)

The intertwining operators form a linear space L G (V, W ) over K. The representations (ρ, V ) and (τ, W ) are called equivalent if the space L G (V, W ) contains an invertible operator. Let A be such an operator. Multiplying (14) by A −1 from the right, we obtain τ (g) = Aρ(g)A −1 . In a coordinate form, τ (g) and ρ(g) are matrices of the same representation, written in two different bases, and A is the transition matrix between the bases.
A representation is called reducible if it has an invariant subspace W with W ≠ {0} and W ≠ V ; in a coordinate form, all blocks of the matrix (13) are then nonempty. Otherwise, the representation is called irreducible.
Example 9. Let G = O(3). The mapping g → g ⊗r is a representation of the group G in the space (R 3 ) ⊗r . When r = 0, this representation is called trivial, when r = 1, it is called defining. When r ≥ 2, this representation is reducible.
From now on we suppose that the topological group G is compact. There exists an inner product (·, ·) on V such that (ρ(g)u, ρ(g)v) = (u, v) for all g ∈ G and u, v ∈ V . In a coordinate form, we can choose an orthonormal basis in V . If V is a complex linear space, then the representation (ρ, V ) takes values in U(n), the group of n × n unitary matrices, and we speak of a unitary representation. If V is a real linear space, then the representation (ρ, V ) takes values in O(n), and we speak of an orthogonal representation. Let (π, V ) and (ρ, W ) be representations of G. The direct sum of representations is the representation (π ⊕ ρ, V ⊕ W ) acting by (π ⊕ ρ)(g)(v ⊕ w) = π(g)v ⊕ ρ(g)w. In a coordinate form, (π ⊕ ρ)(g) is the block diagonal matrix with blocks π(g) and ρ(g). Consider the action π ⊗ ρ of the group G on the set of tensor products v ⊗ w defined by (π ⊗ ρ)(g)(v ⊗ w) = π(g)v ⊗ ρ(g)w. This action may be extended by linearity to the tensor product of representations (π ⊗ ρ, V ⊗ W ). In a coordinate form, (π ⊗ ρ)(g) is a rank 4 tensor with components ((π ⊗ ρ)(g))_{ik,jl} = π(g)_{ij} ρ(g)_{kl}. A representation (σ, V ) of a group G is called completely reducible if for every invariant subspace W ⊂ V there exists an invariant subspace U ⊂ V such that V = W ⊕ U. In a coordinate form, any basis {w 1 , . . . , w p } for W can be completed to a basis {w 1 , . . . , w p , u 1 , . . . , u q } for V such that the span of the vectors u 1 ,. . . ,u q is invariant. The matrix σ(g) in the above basis has the form (15). Any representation of a compact group is completely reducible.
Let (ρ, V ) be an irreducible representation of a group G. Denote by [ρ] the equivalence class of all representations of G equivalent to (ρ, V ) and by Ĝ the set of all equivalence classes of irreducible representations of G. For any finite-dimensional representation (σ, V ) of G, there exist finitely many equivalence classes [ρ 1 ], . . . , [ρ k ] ∈ Ĝ and uniquely determined positive integers m 1 , . . . , m k such that (σ, V ) is equivalent to the direct sum of m 1 copies of the representation (ρ 1 , V 1 ), . . . , m k copies of the representation (ρ k , V k ). The direct sum m i V i of m i copies of the linear space V i is called the isotypic subspace of the space V that corresponds to the representation (ρ i , V i ). The numbers m i are called the multiplicities of the irreducible representation (ρ i , V i ) in (σ, V ). Assume that a compact group G is easy reducible. This means that for any three irreducible representations (ρ, V ), (σ, W ), and (τ, U) of G the multiplicity m τ of τ in ρ ⊗ σ is equal to either 0 or 1. For example, the group O(3) is easy reducible. Assume m τ = 1. Let { e ρ i : 1 ≤ i ≤ dim ρ } be an orthonormal basis in V , and similarly for σ and τ . There are two natural bases in the space V ⊗ W : the coupled basis { e ρ i ⊗ e σ j } and the uncoupled basis, formed by the bases of the irreducible components of V ⊗ W . In a coordinate form, the elements of the space V ⊗ W are matrices with dim ρ rows and dim σ columns. The coupled basis consists of matrices having 1 in the ith row and jth column, and all other entries equal to 0. Denote by c^{k[i,j]}_{τ[ρ,σ]} the coefficients of the expansion of the vectors of the uncoupled basis in the coupled basis. The numbers c^{k[i,j]}_{τ[ρ,σ]} are called the Clebsch-Gordan coefficients of the group G. In the coupled basis, the vectors of the uncoupled basis are matrices c^k_{τ[ρ,σ]} with matrix entries c^{k[i,j]}_{τ[ρ,σ]}, the Clebsch-Gordan matrices.
The representation ρ_ℓ of SU(2) acts in the space P_{2ℓ}(V) of homogeneous polynomials of degree 2ℓ. Note that ρ_ℓ(−E) = E if and only if ℓ is an integer. The Wigner orthonormal basis in the space P_{2ℓ}(V) consists of the normalised monomials e_m(ξ, η), −ℓ ≤ m ≤ ℓ. The matrix entries of the operators ρ_ℓ(g) in the above basis are called Wigner D functions and are denoted by D^ℓ_{mn}(g). The tensor product ρ_{ℓ_1} ⊗ ρ_{ℓ_2} expands as follows:

ρ_{ℓ_1} ⊗ ρ_{ℓ_2} = ⊕_{ℓ=|ℓ_1−ℓ_2|}^{ℓ_1+ℓ_2} ρ_ℓ.

Example 11 (Irreducible unitary representations of SO(3) and O(3)). Realise the linear space R^3 with coordinates x_{−1}, x_0, and x_1 as the set of traceless Hermitian matrices over C^2 with entries linear in these coordinates. The matrix (17) acts on the so realised R^3 by conjugation; denote this action by π. The mapping π is a homomorphism of SU(2) onto SO(3). The kernel of π is ±E. Assume that (ρ, V) is an irreducible unitary representation of SO(3). Then (ρ ∘ π, V) is an irreducible unitary representation of SU(2) with kernel ±E, hence ρ ∘ π = ρ_ℓ for some integer ℓ. In other words, every irreducible unitary representation (ρ_ℓ, V) of SU(2) with integer ℓ gives rise to an irreducible unitary representation of SO(3), and no other irreducible unitary representations exist. We denote the above representation of SO(3) again by (ρ_ℓ, V). Let SO(2) be the subgroup of SO(3) that leaves the vector (0, 0, 1)^⊤ fixed. The restriction of ρ_ℓ to SO(2) is equivalent to the direct sum of the irreducible unitary representations (e^{imϕ}, C^1), −ℓ ≤ m ≤ ℓ, of SO(2). Moreover, the space of the representation (e^{imϕ}, C^1) is spanned by the vector e_m(ξ, η) of the Wigner basis (18). This is where their enumeration comes from.
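Since dim ρ_ℓ = 2ℓ + 1, a quick sanity check on the Clebsch–Gordan decomposition ρ_{ℓ_1} ⊗ ρ_{ℓ_2} = ⊕_{ℓ=|ℓ_1−ℓ_2|}^{ℓ_1+ℓ_2} ρ_ℓ is that both sides have the same dimension. The sketch below (pure Python; function names are ours) verifies this bookkeeping for integer and half-integer ℓ.

```python
# Sketch: dimension count for the Clebsch-Gordan decomposition of SU(2),
# (2*l1 + 1)(2*l2 + 1) = sum of (2*l + 1) over l = |l1 - l2|, ..., l1 + l2.
from fractions import Fraction  # exact arithmetic for half-integer l

def dim(l):
    """Dimension of the irreducible representation rho_l of SU(2)."""
    return int(2 * l + 1)

def check(l1, l2):
    """Compare dim(rho_l1 x rho_l2) with the sum over the decomposition."""
    lhs = dim(l1) * dim(l2)
    l, rhs = abs(l1 - l2), 0
    while l <= l1 + l2:
        rhs += dim(l)
        l += 1          # l runs in integer steps
    return lhs == rhs

results = [check(Fraction(1, 2), Fraction(1, 2)),  # 2*2 = 1 + 3
           check(1, 2),                            # 3*5 = 3 + 5 + 7
           check(Fraction(3, 2), 1)]               # 4*3 = 2 + 4 + 6
```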
The orthonormal basis of eigenvectors of J with eigenvalue 1 was proposed by Gordienko (2002). In this basis, the representations ρ_{ℓ,+} and ρ_{ℓ,−} become orthogonal and will be denoted by U^{ℓg} and U^{ℓu} (g from the German gerade, even, and u from ungerade, odd).

C Classical invariant theory
Let V and W be two finite-dimensional linear spaces over the same field K. Let (ρ, V) and (σ, W) be two representations of a group G. A mapping h : W → V is called a covariant (or form-invariant, or a covariant tensor) of the pair of representations (ρ, V) and (σ, W) if h(σ(g)w) = ρ(g)h(w), g ∈ G.
In other words, the diagram built from the mappings σ(g) : W → W, ρ(g) : V → V, and h : W → V is commutative. If V = K^1 and ρ is the trivial representation of G, then the corresponding covariant scalars are called absolute invariants (or just invariants) of the representation (σ, W), hence the name Invariant Theory. Note that the set K[W]^G of invariants is an algebra over the field K, that is, a linear space over K with a bilinear multiplication operation and a multiplicative identity 1. The product of a covariant h : W → V and an invariant f ∈ K[W]^G is again a covariant. In other words, the covariant tensors of the pair of representations (ρ, V) and (σ, W) form a module over the algebra of invariants of the representation (σ, W).
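A familiar example of a covariant is the cross product: for the group SO(3), with (σ, W) the direct sum of two copies of the defining representation and (ρ, V) the defining representation itself, h(w_1, w_2) = w_1 × w_2 satisfies h(Rw_1, Rw_2) = R h(w_1, w_2). (For the full group O(3) the cross product is only a pseudotensor, so we restrict to proper rotations.) The sketch below (pure Python; helper names are ours) checks this identity.

```python
# Sketch: the cross product is a covariant of SO(3),
# cross(R w1, R w2) = R cross(w1, w2) for every proper rotation R.
import math

def rot_x(theta):
    """Rotation about the x axis -- an element of SO(3)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def apply(r, v):
    return [sum(r[i][j] * v[j] for j in range(3)) for i in range(3)]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

r = rot_x(0.9)
w1, w2 = [1.0, 2.0, 3.0], [-1.0, 0.5, 2.0]
lhs = cross(apply(r, w1), apply(r, w2))   # h(sigma(g) w)
rhs = apply(r, cross(w1, w2))             # rho(g) h(w)
```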
A mapping h : W → V is called a homogeneous polynomial mapping of degree d if for any v ∈ V the mapping w ↦ (h(w), v) is a homogeneous polynomial of degree d in dim W variables. The mapping h is called a polynomial covariant of degree d if it is a homogeneous polynomial mapping of degree d and a covariant.
Let (σ, W) be the defining representation of G, and let (ρ, V) be the rth tensor power of the defining representation. The corresponding covariant tensors are said to have order r. The covariant tensors of degree 0 and order r of the group O(n) are known as isotropic tensors.
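The simplest isotropic tensor of order 2 is Kronecker's delta: under the rank 2 tensor power of any orthogonal matrix it maps to δ_{ij} ↦ R_{ik} R_{jl} δ_{kl} = (R R^⊤)_{ij} = δ_{ij}. The sketch below (pure Python; helper names are ours) verifies this for both a rotation and a reflection, i.e., for elements of O(3) of both determinants.

```python
# Sketch: Kronecker's delta is an isotropic tensor of O(3) -- it is
# fixed by the rank 2 tensor power of every orthogonal matrix.
import math

def transform(r, t):
    """Rank 2 tensor power of r applied to a rank 2 tensor t."""
    n = len(r)
    return [[sum(r[i][k] * r[j][l] * t[k][l]
                 for k in range(n) for l in range(n))
             for j in range(n)] for i in range(n)]

c, s = math.cos(1.3), math.sin(1.3)
rotation = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]     # det = +1
reflection = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, -1.0]]  # det = -1
delta = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
images = [transform(r, delta) for r in (rotation, reflection)]
```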
The algebra of invariants and the module of covariant tensors have been objects of intensive research. The first general result was obtained by Gordan (1868), who proved that for any finite-dimensional complex representation of the group G = SL(2, C) the algebra of invariants and the module of covariant tensors are finitely generated. In other words, there exists an integrity basis: a finite set of invariant homogeneous polynomials I_1, ..., I_N such that every polynomial invariant can be written as a polynomial in I_1, ..., I_N. An integrity basis is called minimal if none of its elements can be expressed as a polynomial in the others. A minimal integrity basis is not necessarily unique, but all minimal integrity bases contain the same number of elements of each degree.
The algebra of invariants is not necessarily free: some polynomial relations between the generators, called syzygies, may exist.
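As a concrete illustration of an integrity basis, for the action of O(3) on a pair of vectors x, y every polynomial invariant is a polynomial in I_1 = (x, x), I_2 = (x, y), I_3 = (y, y) (a standard instance of the first fundamental theorem for O(n)). The sketch below (pure Python; names are ours) expresses one such invariant, the squared area |x × y|^2 of the parallelogram spanned by x and y, through the basis elements via Lagrange's identity.

```python
# Sketch: the invariant |x cross y|^2 of O(3), written as the polynomial
# I1*I3 - I2**2 in the integrity basis I1 = x.x, I2 = x.y, I3 = y.y.
def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

x, y = [1.0, -2.0, 0.5], [3.0, 0.0, 4.0]
i1, i2, i3 = dot(x, x), dot(x, y), dot(y, y)
squared_area = dot(cross(x, y), cross(x, y))  # |x cross y|^2
via_basis = i1 * i3 - i2 ** 2                 # Lagrange's identity
# both equal 106.25 for the vectors above
```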
The importance of polynomial invariants can be explained by the following result. Let G be a closed subgroup of the group O(3), the group of symmetries of a material. Let (ρ, V), (ρ_1, V_1), ..., (ρ_N, V_N) be finitely many orthogonal representations of G in real finite-dimensional spaces. Let T : V_1 ⊕ · · · ⊕ V_N → V be an arbitrary (say, measurable) covariant of the pair ρ and ρ_1 ⊕ · · · ⊕ ρ_N. Let { I_k : 1 ≤ k ≤ K } be an integrity basis for polynomial invariants of the representation ρ_1 ⊕ · · · ⊕ ρ_N, and let { T_l : 1 ≤ l ≤ L } be an integrity basis for polynomial covariant tensors of the pair ρ and ρ_1 ⊕ · · · ⊕ ρ_N. Then T can be written in the form T = Σ_{l=1}^{L} φ_l(I_1, ..., I_K) T_l, where the φ_l are real-valued measurable functions. Following Wineman and Pipkin (1964), we call the T_l basic covariant tensors.
In 1939, in the first edition of Weyl (1997), Hermann Weyl proved that any polynomial covariant of degree d and order r of the group O(n) is a linear combination of products of Kronecker deltas δ_{ij} and second degree homogeneous polynomials x_i x_j.
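Weyl's theorem can be illustrated numerically: any combination a (x, x) δ_{ij} + b x_i x_j with constant coefficients is a degree 2, order 2 polynomial covariant of O(n). The sketch below (pure Python; the coefficients a, b and all helper names are ours) checks the covariance h(Rx) = (rank 2 tensor power of R) h(x) for an improper orthogonal matrix.

```python
# Sketch: h(x) = a*(x.x)*delta + b*(x outer x) is a polynomial covariant
# of O(3); covariance holds for orthogonal matrices of either determinant.
import math

def apply(r, v):
    n = len(r)
    return [sum(r[i][j] * v[j] for j in range(n)) for i in range(n)]

def transform2(r, t):
    """Rank 2 tensor power of r applied to a rank 2 tensor t."""
    n = len(r)
    return [[sum(r[i][k] * r[j][l] * t[k][l]
                 for k in range(n) for l in range(n))
             for j in range(n)] for i in range(n)]

def h(x, a=2.0, b=-1.0):
    """A degree 2, order 2 covariant built from delta and x_i x_j."""
    n = len(x)
    xx = sum(c * c for c in x)
    return [[a * xx * (1.0 if i == j else 0.0) + b * x[i] * x[j]
             for j in range(n)] for i in range(n)]

c, s = math.cos(0.4), math.sin(0.4)
r = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, -1.0]]  # orthogonal, det = -1
x = [1.0, 2.0, -0.5]
lhs = h(apply(r, x))          # h(sigma(g) x)
rhs = transform2(r, h(x))     # rho(g) h(x)
```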