1 Introduction

Random functions of more than one variable, or random fields, were introduced in the 1920s as mathematical models of physical phenomena like turbulence, see, e.g., [9, 20, 39]. To explain how random fields appear in continuum physics, consider the following example.

Example 1

Let \(E=E^3\) be a three-dimensional Euclidean point space, and let V be the translation space of E with an inner product \((\varvec{\cdot }, \varvec{\cdot })\). Following [43], the elements A of E are called the places in E. The symbol \(B-A\) is the vector in V that translates A into B.

Let \(\mathcal {B}\subset E\) be a subset of E occupied by a material, e.g., a turbulent fluid or a deformable body. The temperature is a rank 0 tensor-valued function \(T:\mathcal {B}\rightarrow \mathbb {R}^1\). The velocity of a fluid is a rank 1 tensor-valued function \(\mathbf {v}:\mathcal {B}\rightarrow V\). The strain tensor is a rank 2 tensor-valued function \(\varepsilon :\mathcal {B}\rightarrow \mathsf {S}^2(V)\), where \(\mathsf {S}^2(V)\) is the linear space of symmetric rank 2 tensors over V. The piezoelectricity tensor is a rank 3 tensor-valued function \(\mathsf {D}:\mathcal {B}\rightarrow \mathsf {S}^2(V)\otimes V\). The elastic modulus is a rank 4 tensor-valued function \(\mathsf {C}:\mathcal {B}\rightarrow \mathsf {S}^2(\mathsf {S}^2(V))\). Denote the range of any of the above functions by \(\mathsf {V}\). For ranks 2, 3, and 4, physicists call \(\mathsf {V}\) the constitutive tensor space. It is a subspace of the tensor power \(V^{\otimes r}\), where r is a nonnegative integer. The form

$$\begin{aligned} (\mathbf {x}_1\otimes \cdots \otimes \mathbf {x}_r,\mathbf {y}_1\otimes \cdots \otimes \mathbf {y}_r) =(\mathbf {x}_1,\mathbf {y}_1)\cdots (\mathbf {x}_r,\mathbf { y}_r) \end{aligned}$$

can be extended by linearity to the inner product on \(V^{\otimes r}\) and then restricted to \(\mathsf {V}\).
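
As a quick numerical illustration (our own sketch, not part of the original argument), the identity above can be checked on decomposable tensors with numpy; all names below are ad hoc.

```python
import numpy as np

rng = np.random.default_rng(0)
r = 3
xs = [rng.standard_normal(3) for _ in range(r)]   # x_1, ..., x_r in V = R^3
ys = [rng.standard_normal(3) for _ in range(r)]

def outer(vectors):
    """Decomposable tensor v_1 ⊗ ... ⊗ v_r as an r-dimensional array."""
    t = vectors[0]
    for v in vectors[1:]:
        t = np.tensordot(t, v, axes=0)
    return t

lhs = np.sum(outer(xs) * outer(ys))               # inner product on V^{⊗ r}
rhs = np.prod([x @ y for x, y in zip(xs, ys)])    # product of inner products on V
assert np.isclose(lhs, rhs)
```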

At microscopic length scales, spatial randomness of the material needs to be taken into account. Mathematically, there is a probability space \((\Omega ,\mathfrak {F},\mathsf {P})\) and a function \(\mathsf {T}(A,\omega ):\mathcal {B}\times \Omega \rightarrow \mathsf {V}\) such that for any fixed \(A_0\in \mathcal {B}\) and for any Borel set \(B\subseteq \mathsf {V}\) the inverse image \(\mathsf {T}^{-1}(A_0,B)\) is an event. The map \(\mathsf {T}(A,\omega )\) is a random field.

Translate the whole body \(\mathcal {B}\) by a vector \(\mathbf {x}\in V\). On physical grounds, the random fields \(\mathsf {T}(A+\mathbf {x})\) and \(\mathsf {T}(A)\) must have the same finite-dimensional distributions. It is therefore convenient to assume that there is a random field defined on all of E such that its restriction to \(\mathcal {B}\) is equal to \(\mathsf {T}(A)\). For brevity, denote the new field by the same symbol \(\mathsf {T}(A)\) (but this time \(A\in E\)). The random field \(\mathsf {T}(A)\) is strictly homogeneous, that is, the random fields \(\mathsf {T}(A+\mathbf {x})\) and \(\mathsf {T}(A)\) have the same finite-dimensional distributions. In other words, for each positive integer n, for each \(\mathbf {x}\in V\), and for all distinct places \(A_1\), ..., \(A_n\in E\) the random elements \(\mathsf {T}(A_1)\oplus \cdots \oplus \mathsf {T}(A_n)\) and \(\mathsf {T}(A_1+\mathbf {x})\oplus \cdots \oplus \mathsf {T}(A_n+\mathbf {x})\) of the direct sum of n copies of the space \(\mathsf {V}\) have the same probability distribution.

Let K be the material symmetry group of the material body \(\mathcal {B}\) acting in V. The group K is a subgroup of the orthogonal group \(\mathrm{O }(V)\). For simplicity, we assume that the material is fully symmetric, that is, \(K=\mathrm{O}(V)\). Fix a place \(O\in \mathcal {B}\) and identify E with V by the map f that maps \(A\in E\) to \(A-O\in V\). Then K acts in E and rotates the body \(\mathcal {B}\) by

$$\begin{aligned} g\cdot A=f^{-1}gfA,\quad g\in \mathrm{O}(V),\quad A\in \mathcal {B}. \end{aligned}$$

Let U be the restriction of the orthogonal representation \(g\mapsto g^{\otimes r}\) of the group \(\mathrm{O}(V)\) to the subspace \(\mathsf {V}\) of the space \(V^{\otimes r}\). The group K acts in \(\mathsf {V}\) by \(\mathsf {v}\mapsto U(g)\mathsf {v}\), \(g\in K\). Under the action of K in E, the point \(A_0\) becomes \(g\cdot A_0\). Under the action of K in \(\mathsf {V}\), the random tensor \(\mathsf {T}(A_0)\) becomes \(U(g)\mathsf {T}(A_0)\). The random fields \( \mathsf {T}(g\cdot A)\) and \(U(g)\mathsf {T}(A)\) must have the same finite-dimensional distributions, because \(g\cdot A_0\) is the same material point in a different place. Note that this property does not depend on a particular choice of the place O, because the field is strictly homogeneous. We call such a field strictly isotropic.

Assume that the random field \(\mathsf {T}(A)\) is second-order, that is

$$\begin{aligned} \mathsf {E}[\Vert \mathsf {T}(A)\Vert ^2]<\infty ,\quad A\in E. \end{aligned}$$

Define the one-point correlation tensor of the field \(\mathsf {T}(A)\) by

$$\begin{aligned} \langle \mathsf {T}(A)\rangle =\mathsf {E}[\mathsf {T}(A)] \end{aligned}$$

and its two-point correlation tensor by

$$\begin{aligned} \langle \mathsf {T}(A),\mathsf {T}(B)\rangle =\mathsf {E}[(\mathsf {T}(A) -\langle \mathsf {T}(A)\rangle )\otimes (\mathsf {T}(B) -\langle \mathsf {T}(B)\rangle )]. \end{aligned}$$

Assume that the field \(\mathsf {T}(A)\) is mean-square continuous, that is, its two-point correlation tensor \(\langle \mathsf {T}(A),\mathsf {T} (B)\rangle :E\times E\rightarrow \mathsf {V}\otimes \mathsf {V}\) is a continuous function.
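
For readers who prefer an operational view, the following minimal Monte Carlo sketch (ours; the synthetic field and all names are arbitrary) estimates the one-point and two-point correlation tensors from independent realisations.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([0.0, 0.0, 0.0])
B = np.array([1.0, 2.0, 0.5])
n = 10000

def one_realisation(points):
    """One synthetic realisation of a V-valued (V = R^3) random field at the
    given places; the model is irrelevant, it only feeds the estimators."""
    z = rng.standard_normal(3)                       # randomness shared by all places
    return [np.cross(z, p) + 0.1 * rng.standard_normal(3) for p in points]

samples = np.array([one_realisation([A, B]) for _ in range(n)])   # shape (n, 2, 3)
TA, TB = samples[:, 0, :], samples[:, 1, :]

one_point = TA.mean(axis=0)                                  # <T(A)> = E[T(A)]
two_point = np.einsum('ki,kj->ij',
                      TA - TA.mean(axis=0),
                      TB - TB.mean(axis=0)) / n              # <T(A), T(B)>
print(one_point)
print(two_point)
```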

Note that [35] showed that any finite-variance isotropic random field on a compact group is necessarily mean-square continuous under standard measurability assumptions, and hence its covariance function is continuous. In a related setting, the characterisation of the covariance function of a real homogeneous and isotropic random field in d-dimensional Euclidean space was given in the classical paper [40], where it was conjectured that the only discontinuity which could be allowed for such a function is a discontinuity at the origin. This conjecture was proved by [7] for \(d\ge 2\). This result was widely used in geostatistics (see, e.g., [13], among others), where it was argued that a homogeneous and isotropic random field can be expressed as the sum of a mean-square continuous component and what is called the “nugget effect”, i.e., a purely discontinuous component. In fact, this latter component is necessarily non-measurable (see, e.g., [18, Example 1.2.5]). The relation between measurability and mean-square continuity in the non-compact situation is still unclear even for scalar random fields. That is why we assume in this paper that our random fields are mean-square continuous, and hence their covariance functions are continuous.

If the field \(\mathsf {T}(A)\) is strictly homogeneous, then its one-point correlation tensor is a constant tensor in \(\mathsf {V}\), while its two-point correlation tensor is a function of the vector \(B-A\), i.e., a function on V . Call such a field wide-sense homogeneous.

Similarly, if the field \(\mathsf {T}(A)\) is strictly isotropic, then we have

$$\begin{aligned}&\langle \mathsf {T}(g\cdot A)\rangle =U(g)\langle \mathsf {T}(A)\rangle ,\nonumber \\&\langle \mathsf {T}(g\cdot A),\mathsf {T}(g\cdot B)\rangle =(U\otimes U)(g)\langle \mathsf {T}(A),\mathsf {T}(B)\rangle . \end{aligned}$$
(1)

Definition 1

A random field \(\mathsf {T}(A)\) is called wide-sense isotropic if its one-point and two-point correlation tensors satisfy (1).

For simplicity, identify the field \(\{\,\mathsf {T}(A):A\in E\,\}\) defined on E with the field \(\{\,\mathsf {T}^{\prime }(\mathbf {x}):\mathbf {x}\in V\,\}\) defined by \(\mathsf {T}^{\prime }(\mathbf {x})=\mathsf {T}(O+\mathbf {x})\). Introduce a Cartesian coordinate system (x, y, z) in V. Use the introduced system to identify V with the coordinate space \(\mathbb {R}^3\) and \(\mathrm{O}(V)\) with \(\mathrm{O}(3)\). Call \(\mathbb {R}^3\) the space domain. The action of \(\mathrm{O}(3)\) on \(\mathbb {R}^3\) is the matrix-vector multiplication.

Definition 1 was used by many authors including [36, 42, 46].

There is another definition of isotropy.

Definition 2

[46] A random field \(\mathsf {T}(A)\) is called multidimensional scalar wide-sense isotropic if its one-point correlation tensor is a constant, while the two-point correlation tensor \(\langle \mathsf {T}(\mathbf {x}),\mathsf {T}(\mathbf {y})\rangle \) depends only on \(\Vert \mathbf {y}-\mathbf {x}\Vert \).

It is easy to see that Definition 2 is a particular case of Definition 1 when the representation U is trivial, that is, maps all elements \(g\in K\) to the identity operator.

In the case of \(r=0\), the complete description of the two-point correlation functions of scalar homogeneous and isotropic random fields is as follows. Recall that a measure \(\mu \) defined on the Borel \(\sigma \)-field of a Hausdorff topological space X is called a Borel measure.

Theorem 1

Formula

$$\begin{aligned} \langle T(\mathbf {x}),T(\mathbf {y})\rangle =\int ^{\infty }_0\frac{ \sin (\lambda \Vert \mathbf {y}-\mathbf {x}\Vert )}{\lambda \Vert \mathbf {y}-\mathbf {x}\Vert }\, \mathrm{d}\mu (\lambda ) \end{aligned}$$
(2)

establishes a one-to-one correspondence between the set of two-point correlation functions of homogeneous and isotropic random fields \(T(\mathbf {x })\) on the space domain \(\mathbb {R}^3\) and the set of all finite Borel measures \(\mu \) on the interval \([0,\infty )\).

Theorem 1 is a translation of the result proved by [40] to the language of random fields. This translation is performed as follows. Assume that \(B(\mathbf {x})\) is a two-point correlation function of a homogeneous and isotropic random field \(T(\mathbf {x})\). Let n be a positive integer, let \(\mathbf {x}_1, \dots , \mathbf {x}_n\) be n distinct points in \(\mathbb {R}^3\), and let \(c_1, \dots , c_n\) be n complex numbers. Consider the random variable \(X=\sum ^n_{j=1}c_j[T(\mathbf {x} _j)-\langle T(\mathbf {x}_j)\rangle ]\). Its variance is non-negative:

$$\begin{aligned} \mathsf {E}[|X|^2]=\sum _{j,k=1}^{n}c_j\overline{c_k}\langle T(\mathbf {x}_j),T(\mathbf {x}_k)\rangle \ge 0. \end{aligned}$$

In other words, the two-point correlation function \(\langle T(\mathbf {x}),T(\mathbf {y})\rangle \) is a non-negative-definite function. Moreover, it is continuous, because the random field \(T(\mathbf {x})\) is mean-square continuous, and depends only on the distance \(\Vert \mathbf {y}-\mathbf {x}\Vert \) between the points \(\mathbf {x}\) and \(\mathbf {y}\), because the field is homogeneous and isotropic. [40] proved that Eq. (2) describes all such functions.

Conversely, assume that the function \(\langle T(\mathbf {x}),T(\mathbf {y} )\rangle \) is described by Equation (2). The centred Gaussian random field with the two-point correlation function (2) is homogeneous and isotropic. In other words, there is a link between the theory of random fields and the theory of positive-definite functions.

In what follows, we consider the fields with absolutely continuous spectrum.

Definition 3

([17]) A homogeneous and isotropic random field \(T(\mathbf {x})\) has an absolutely continuous spectrum if the measure \(\mu \) is absolutely continuous with respect to the measure \(4\pi \lambda ^2\,\mathrm{d}\lambda \), i.e., there exists a nonnegative measurable function \(f(\lambda )\) such that

$$\begin{aligned} \int ^{\infty }_0\lambda ^2f(\lambda )\,\mathrm{d}\lambda <\infty \end{aligned}$$

and \(d\mu (\lambda )=4\pi \lambda ^2f(\lambda )\,\mathrm{d}\lambda \). The function \(f(\lambda )\) is called the isotropic spectral density of the random field \(T(\mathbf {x})\).

Example 2

(The Matérn two-point correlation function) Consider a two-point correlation function of a scalar random field \(T(\mathbf {x})\) of the form

$$\begin{aligned} \left\langle T(\mathbf {x}),T(\mathbf {y})\right\rangle =M_{\nu ,a}\left( \mathbf {x},\mathbf {y}\right) =\frac{2^{1-\nu }\sigma ^{2}}{\varGamma \left( \nu \right) }\left( a\left\| \mathbf {x}-\mathbf {y}\right\| \right) ^{{}\nu }K_{{}\nu }\left( a\left\| \mathbf {x}-\mathbf {y}\right\| \right) ,\quad \end{aligned}$$
(3)

where \(\sigma ^{2}>0,a>0,\nu >0\) and \(K_{\nu }\left( z\right) \) is the modified Bessel function of the third kind (Macdonald function) of order \(\nu \). Here, the parameter \(\nu \) measures the differentiability of the random field, the parameter \(\sigma ^{2}\) is its variance, and the parameter a measures how quickly the correlation function of the random field decays with distance. The corresponding isotropic spectral density is

$$\begin{aligned} f\left( \lambda \right) =f_{\nu ,a,\sigma ^{2}}\left( \lambda \right) =\frac{\sigma ^{2}\varGamma \left( \nu +\frac{3}{2}\right) a^{2\nu }}{\pi ^{3/2}\varGamma \left( \nu \right) \left( a^{2}+\lambda ^{2}\right) ^{\nu +\frac{3}{2}}},\quad \lambda \ge 0. \end{aligned}$$

Note that Example 2 demonstrates another link, this time between the theory of random fields and the theory of special functions.
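
As a numerical sanity check (a sketch of ours, assuming scipy is available), one can verify that the spectral density above reproduces the Matérn function (3) through formula (2), i.e. that \(M_{\nu ,a}(r)=\int _0^{\infty }4\pi \lambda ^2f(\lambda )\frac{\sin (\lambda r)}{\lambda r}\,\mathrm{d}\lambda \); all parameter values below are arbitrary.

```python
import numpy as np
from scipy.special import kv, gamma
from scipy.integrate import quad

def matern(r, nu=1.5, a=2.0, sigma2=1.0):
    """Matérn two-point correlation function (3) at distances r >= 0."""
    r = np.asarray(r, dtype=float)
    out = np.full_like(r, sigma2)                         # value at r = 0
    z = a * r[r > 0]
    out[r > 0] = sigma2 * 2.0**(1 - nu) / gamma(nu) * z**nu * kv(nu, z)
    return out

def spectral_density(lam, nu=1.5, a=2.0, sigma2=1.0):
    """Isotropic spectral density of the Matérn field in R^3."""
    return (sigma2 * gamma(nu + 1.5) * a**(2 * nu)
            / (np.pi**1.5 * gamma(nu) * (a**2 + lam**2)**(nu + 1.5)))

r = 0.7
integrand = lambda lam: (4.0 * np.pi * lam**2 * spectral_density(lam)
                         * np.sinc(lam * r / np.pi))      # np.sinc(x) = sin(pi x)/(pi x)
lhs, _ = quad(integrand, 0.0, np.inf)
print(lhs, matern(np.array([r]))[0])                      # the two values agree
```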

In this paper, we consider the following problem. How to define the Matérn two-point correlation tensor for the case of \(r>0\)? A particular answer to this question can be formulated as follows.

Example 3

(Parsimonious Matérn model, [12]) We assume that the vector random field

$$\begin{aligned} T\left( \mathbf {x}\right) =\left( T_{1}\left( \mathbf {x}\right) ,\dots ,T_{m}\left( \mathbf {x}\right) \right) ^{\top }, \quad \mathbf {x}\in \mathbb {R}^{3}, \end{aligned}$$

has the two-point correlation tensor \(B\left( \mathbf {x},\mathbf {y}\right) =(B_{ij}(\mathbf {x},\mathbf {y}))_{1\le i,j\le m}.\) It is not straightforward to specify the cross-covariance functions \(B_{ij}\left( \mathbf {x},\mathbf {y}\right) ,1\le i,j\le m,i\ne j\), as non-trivial, valid parametric models because of the requirement of their non-negative definiteness. In the multivariate Matérn model, each marginal covariance function

$$\begin{aligned} B_{ii}\left( \mathbf {x},\mathbf {y}\right) =\sigma _{i}^{2}M_{\nu _{i},a_{i}}\left( \mathbf {x},\mathbf {y}\right) ,\quad i=1,\ldots ,m, \end{aligned}$$

is of the type (3), where \(M_{\nu _{i},a_{i}}\) denotes the function (3) with \(\sigma ^{2}=1\), and has the isotropic spectral density \(f_{ii}(\lambda )=f_{\nu _{i},a_{i},\sigma _{i}^{2}}\left( \lambda \right) .\)

Each cross-covariance function

$$\begin{aligned} B_{ij}\left( \mathbf {x},\mathbf {y}\right) =B_{ji}\left( \mathbf {x},\mathbf {y} \right) =b_{ij}\sigma _{i}\sigma _{j}M_{\nu _{ij},a_{ij}}\left( \mathbf {x}, \mathbf {y}\right) ,\quad 1\le i,j\le m,\quad i\ne j \end{aligned}$$

is also a Matérn function with co-location correlation coefficient \(b_{ij}\), smoothness parameter \(\nu _{ij}\), and scale parameter \(a_{ij}\). The spectral densities are

$$\begin{aligned} f_{ij}\left( \lambda \right) =f_{\nu _{ij},a_{ij},b_{ij}\sigma _{i}\sigma _{j}}\left( \lambda \right) ,\quad 1\le i,j\le m,\quad i\ne j. \end{aligned}$$

The question then is to determine the values of \(\nu _{ij},a_{ij}\) and \( b_{ij}\) so that the non-negative definiteness condition is satisfied. Let \( m\ge 2\). Suppose that

$$\begin{aligned} \nu _{ij}=\frac{1}{2}\left( \nu _{i}+\nu _{j}\right) ,\quad 1\le i, j\le m,\quad i\ne j, \end{aligned}$$

and that there is a common scale parameter in the sense that there exists an \(a>0\) such that

$$\begin{aligned} a_{1}=\cdots =a_{m}=a,\text { and }a_{ij}=a\text { for }\quad 1\le i, j\le m,\quad i\ne j. \end{aligned}$$

Then the multivariate Matérn model provides a valid second-order structure in \(\mathbb {R}^{3}\) if

$$\begin{aligned} b_{ij}=\beta _{ij}\left[ \frac{\varGamma \left( \nu _{i}+\frac{3}{2}\right) }{ \varGamma \left( \nu _{i}\right) }\frac{\varGamma \left( \nu _{j}+\frac{3}{2} \right) }{\varGamma \left( \nu _{j}\right) }\right] ^{1/2}\frac{\varGamma \left( \frac{1}{2}\left( \nu _{i}+\nu _{j}\right) \right) }{\varGamma \left( \frac{1}{2 }\left( \nu _{i}+\nu _{j}\right) +\frac{3}{2}\right) } \end{aligned}$$

for \(1\le i,j\le m,i\ne j,\) where the matrix \(\left( \beta _{ij}\right) _{i,j=1,\ldots ,m}\) has diagonal elements \(\beta _{ii}=1\) for \(i=1,\ldots ,m,\) and off-diagonal elements \(\beta _{ij},1\le i,j\le m,i\ne j\) so that it is symmetric and non-negative definite.
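
The constraint on \(b_{ij}\) is straightforward to evaluate numerically. The sketch below (ours, not code from [12]) computes the co-location coefficients from given smoothness parameters and a chosen matrix \(\beta \); the parameter values are arbitrary.

```python
import numpy as np
from scipy.special import gamma

def parsimonious_b(nu, beta):
    """Co-location coefficients b_ij of the parsimonious Matérn model in R^3
    for smoothness parameters nu_1, ..., nu_m and a symmetric non-negative
    definite matrix beta with unit diagonal."""
    nu = np.asarray(nu, dtype=float)
    m = len(nu)
    b = np.empty((m, m))
    for i in range(m):
        for j in range(m):
            nij = 0.5 * (nu[i] + nu[j])
            b[i, j] = beta[i, j] * np.sqrt(
                gamma(nu[i] + 1.5) / gamma(nu[i]) * gamma(nu[j] + 1.5) / gamma(nu[j])
            ) * gamma(nij) / gamma(nij + 1.5)
    return b

nu = [0.5, 1.0, 2.5]                           # arbitrary smoothness parameters
beta = np.array([[1.0, 0.4, 0.2],
                 [0.4, 1.0, 0.3],
                 [0.2, 0.3, 1.0]])             # symmetric, non-negative definite
print(np.linalg.eigvalsh(beta))                # all eigenvalues are non-negative
print(parsimonious_b(nu, beta))                # b_ii = 1 on the diagonal
```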

Example 4

(Flexible Matérn model) Consider the vector random field \(\mathbf {T}(\mathbf {x})\in \mathbb {R}^{m}, \mathbf {x}\in \mathbb {R}^{3}\) with the two-point covariance tensor

$$\begin{aligned} \left\langle T_{i}(x),T_{j}(\mathbf {y})\right\rangle =B_{ij}(\mathbf {x}, \mathbf {y})=\bar{B}_{ij}(\mathbf {y}-\mathbf {x})=\sigma _{ij}M_{\nu _{ij},a_{ij}}\left( \mathbf {x},\mathbf {y}\right) ,1\le i,j\le m, \end{aligned}$$

where \(M_{\nu ,a}\) again denotes the Matérn function (3) with \(\sigma ^{2}=1\),

$$\begin{aligned} M_{\nu ,a}\left( \mathbf {x},\mathbf {y}\right) =\frac{2^{1-\nu }}{\varGamma \left( \nu \right) }\left( a\left\| \mathbf {y}-\mathbf {x}\right\| \right) ^{\nu }K_{\nu }\left( a\left\| \mathbf {y}-\mathbf {x}\right\| \right) . \end{aligned}$$

Denote by \(\mathcal {N}\) the set of all nonnegative-definite matrices. Assume that the matrix \(\Sigma =(\sigma _{ij})_{1\le i,j\le m}=(\sigma _{ij})\in \mathcal {N}\), and denote \(\sigma _{i}^{2}=\sigma _{ii} \), \(i=1, \dots , m\).

Then the spectral density \(F=(f_{ij})_{1\le i,j\le m}\) has the entries

$$\begin{aligned} f_{ij}({\varvec{\lambda }} )= & {} \frac{1}{(2\pi )^{3}}\int _{\mathbb {R}^{3}}e^{-\mathrm{i}({\varvec{\lambda }} ,\mathbf {h})}\bar{B}_{ij}(\mathbf {h})\,\mathrm{d}\mathbf {h}\\= & {} \sigma _{ij}a_{ij}^{2\nu _{ij}}\frac{1}{(a_{ij}^{2}+\left\| {\varvec{\lambda }} \right\| ^{2})^{\nu _{ij}+\frac{3}{2}}}\frac{\varGamma (\nu _{ij}+\frac{3}{2})}{\varGamma (\nu _{ij})},\quad 1\le i, j\le m,\quad {\varvec{\lambda }}\in \mathbb {R}^{3}. \end{aligned}$$

We need to find conditions on the parameters \(a_{ij}>0,\nu _{ij}>0,\) under which \(F({\varvec{\lambda }})\in \mathcal {N}\) for all \({\varvec{\lambda }}\in \mathbb {R}^{3}\). The general conditions can be found in [2, 8].
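
Before appealing to the general conditions of [2, 8], a candidate parameter set can be probed numerically: the sketch below (ours) evaluates the spectral density matrix on a grid of wavenumbers and reports its smallest eigenvalue; a clearly negative value flags an invalid choice. Constant factors are dropped, since they do not affect non-negative definiteness, and all parameter values are arbitrary.

```python
import numpy as np
from scipy.special import gamma

def spectral_matrix(lam, sigma, a, nu):
    """Entries f_ij(lambda) of the flexible Matérn model, up to a positive
    constant factor that does not affect non-negative definiteness."""
    return (sigma * a**(2.0 * nu) * gamma(nu + 1.5)
            / (gamma(nu) * (a**2 + lam**2)**(nu + 1.5)))

sigma = np.array([[1.0, 0.5, 0.3],
                  [0.5, 1.0, 0.4],
                  [0.3, 0.4, 1.0]])                     # candidate sigma_ij
a = np.full((3, 3), 2.0)                                # common scale parameter
nu_marg = np.array([0.5, 1.0, 1.5])
nu = 0.5 * (nu_marg[:, None] + nu_marg[None, :])        # parsimonious smoothness

worst = min(np.linalg.eigvalsh(spectral_matrix(lam, sigma, a, nu)).min()
            for lam in np.linspace(0.0, 50.0, 501))
print('smallest eigenvalue of F over the grid:', worst)
```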

Recall that a symmetric, real \(m\times m\) matrix \(\Theta =(\theta _{ij})_{1\le i,j\le m}\) is said to be conditionally negative definite [3] if the inequality

$$\begin{aligned} \sum _{i=1}^{m}\sum _{j=1}^{m}c_{i}c_{j}\theta _{ij}\le 0 \end{aligned}$$

holds for any real numbers \(c_{1},\ldots ,c_{m}\) subject to

$$\begin{aligned} \sum _{i=1}^{m}c_{i}=0. \end{aligned}$$

In general, a necessary condition for the above inequality is

$$\begin{aligned} \theta _{ii}+\theta _{jj}\le 2\theta _{ij},\quad i,j=1,\ldots ,m, \end{aligned}$$

which implies that all entries of a conditionally negative definite matrix are nonnegative whenever its diagonal entries are non-negative. If all its diagonal entries vanish, a conditionally negative definite matrix is also called a Euclidean distance matrix. It is known that \(\Theta =(\theta _{ij})_{1\le i,j\le m}\) is conditionally negative definite if and only if the \(m\times m\) matrix S with entries \(\exp \{-\theta _{ij}u\}\) is non-negative definite for every fixed \(u\ge 0\) (cf. [3, Theorem 4.1.3]), that is, \(S=e^{-u\Theta }\), where \(e^{\Lambda }\) is the Hadamard exponential of a matrix \(\Lambda \).

Some simple examples of conditionally negative definite matrices are

  1. (i)

    \(\theta _{ij}=\theta _{i}+\theta _{j};\)

  2. (ii)

    \(\theta _{ij}=\mathrm{const};\)

  3. (iii)

    \(\theta _{ij}=\left| \theta _{i}-\theta _{j}\right| ;\)

  4. (iv)

    \(\theta _{ij}=\left| \theta _{i}-\theta _{j}\right| ^{2}\)

  5. (v)

    \(\theta _{ij}=\max \{\theta _{i},\theta _{j}\};\)

  6. (vi)

    \(\theta _{ij}=-\theta _{i}\theta _{j}.\)
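
These properties are easy to test numerically. The sketch below (ours) checks conditional negative definiteness by restricting the quadratic form to the hyperplane \(\sum _ic_i=0\) and illustrates the Hadamard-exponential criterion quoted above, using example (iii) as a test case; the numbers are arbitrary.

```python
import numpy as np

def is_conditionally_negative_definite(theta, tol=1e-10):
    """True if sum_{ij} c_i c_j theta_ij <= 0 whenever sum_i c_i = 0, checked by
    projecting the quadratic form onto the hyperplane {c : sum_i c_i = 0}."""
    theta = np.asarray(theta, dtype=float)
    m = theta.shape[0]
    # Rows 1..m-1 of Vh form an orthonormal basis of {c : sum_i c_i = 0}.
    B = np.linalg.svd(np.ones((1, m)))[2][1:].T
    return np.linalg.eigvalsh(B.T @ theta @ B).max() <= tol

t = np.array([0.3, 1.0, 2.5, 4.0])
theta = np.abs(t[:, None] - t[None, :])                  # example (iii)
print(is_conditionally_negative_definite(theta))         # True

# Hadamard-exponential criterion: exp(-u * theta) is non-negative definite
# for every fixed u >= 0.
for u in (0.1, 1.0, 5.0):
    print(np.linalg.eigvalsh(np.exp(-u * theta)).min() >= -1e-10)
```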

Recall that the Hadamard product of two matrices A and B is the matrix \(A\circ B=(A_{ij}\cdot B_{ij})_{1\le i,j\le m}.\) By the Schur product theorem, if \(A\in \mathcal {N}\) and \(B\in \mathcal {N}\), then \(A\circ B\in \mathcal {N}\).

Then

$$\begin{aligned} F=\Sigma \circ A\circ B\circ C, \end{aligned}$$

where one needs to find conditions under which

$$\begin{aligned}&A=\left( \frac{1}{(1+\left\| {\varvec{\lambda }} \right\| ^{2}/a_{ij}^{2})^{\nu _{ij}+\frac{3}{2}}}\right) _{1\le i,j\le m}\ge 0,\quad B=\left( \frac{1}{a_{ij}^{3}}\right) _{1\le i,j\le m}\ge 0,\\&C=\left( \frac{\varGamma (\nu _{ij}+\frac{3}{2})}{\varGamma (\nu _{ij})}\right) _{1\le i,j\le m}\ge 0. \end{aligned}$$

We first consider Case 1, in which we assume that

$$\begin{aligned} a_{ij}=a,\quad 1\le i, j\le m. \end{aligned}$$

Then

$$\begin{aligned} A=\left( 1+\frac{\left\| {\varvec{\lambda }}\right\| ^{2}}{a^{2}}\right) ^{-3/2}\left( \exp \left\{ -\nu _{ij}\log \left( 1+\frac{\left\| {\varvec{\lambda }}\right\| ^{2}}{a^{2}}\right) \right\} \right) _{1\le i,j\le m}\ge 0, \end{aligned}$$

if and only if the matrix

$$\begin{aligned} Y=\left( \nu _{ij}\right) _{1\le i,j\le m} \end{aligned}$$

is conditionally negative definite (see examples (i)–(vi) above). For such \((\nu _{ij})_{1\le i,j\le m}\), it remains to check that the matrix \(C=(\varGamma (\nu _{ij}+\frac{3}{2})/\varGamma (\nu _{ij}))_{1\le i,j\le m}\ge 0.\) This class is not empty, since it includes the case of the so-called parsimonious model: \(\nu _{ij}=\frac{\nu _{i}+\nu _{j}}{2}\) (see Example 3).

Recall that a Hermitian matrix \(A=(a_{ij})_{i,j=1,\ldots ,p}\) is conditionally non-negative if \(\mathbf {x}^{\top }A\mathbf {x}^{*}\ge 0\) for all \(\mathbf {x}\in \mathbb {C}^{p}\) such that \(\displaystyle \sum \nolimits _{i=1}^{p}x_{i}=0,\) where \(\mathbf {x}^{*}\) is the complex conjugate of \(\mathbf {x}.\)

Thus, in Case 1, the multivariate Matérn model is valid under the following conditions (see [2, 8]):

(A1) Assume that

  1. (i)

    \(a_{ij}=a,\) \(1\le i,j\le m;\)

  2. (ii)

    the numbers \(-\nu _{ij}\), \(1\le i,j\le m,\) form a conditionally non-negative matrix;

  3. (iii)

    the numbers \(\sigma _{ij}\frac{\varGamma (\nu _{ij}+\frac{3}{2})}{\varGamma (\nu _{ij})}\), \(1\le i,j\le m,\) form a non-negative definite matrix.

Consider Case 2:

$$\begin{aligned} \nu _{ij}=\nu >0,\quad 1\le i, j\le m. \end{aligned}$$

Then the multivariate Matérn model is valid under the following conditions [2]:

(A2) either

  1. (a)

    \(-a_{ij}^{2}\), \(1\le i,j\le m,\) form a conditionally non-negative matrix, and \(\sigma _{ij}a_{ij}^{2\nu }\), \(1\le i,j\le m,\) form a non-negative definite matrix; or

  2. (b)

    \(-a_{ij}^{-2}\), \(1\le i,j\le m,\) form a conditionally non-negative matrix, and \(\sigma _{ij}/a_{ij}^{3}\), \(1\le i,j\le m,\) form a non-negative definite matrix.

These classes of Matérn models are not empty since, in the case of the parsimonious model, they are consistent with [12, Theorem 1]. For the parsimonious model from that paper (\(\nu _{ij}=\frac{\nu _{ii}+\nu _{jj}}{2},1\le i,j\le m\)), the following multivariate Matérn models are valid under the following conditions:

(A3) either

  1. (a)

    \(\nu _{ij}=\frac{\nu _{ii}+\nu _{jj}}{2},\) \(a_{ij}^{2}=\frac{a_{ii}^{2}+a_{jj}^{2}}{2},\) \(1\le i,j\le m,\) and \(\sigma _{ij}a_{ij}^{2\nu _{ij}}/\varGamma (\nu _{ij}),\) \(1\le i,j\le m,\) form a non-negative definite matrix; or

  2. (b)

    \(\nu _{ij}=\frac{\nu _{ii}+\nu _{jj}}{2},\) \(a_{ij}^{-2}=\frac{a_{ii}^{-2}+a_{jj}^{-2}}{2},\) \(1\le i,j\le m,\) and \(\sigma _{ij}/(a_{ij}^{3}\varGamma (\nu _{ij})),\) \(1\le i,j\le m,\) form a non-negative definite matrix.

The most general conditions and new examples can be found in [2, 8]. The paper by [11] reviews the main approaches to building multivariate correlation and covariance structures, including the multivariate Matérn models.

Example 5

(Dual Matérn models) Adapting the so-called duality theorem (see, e.g., [10]), one can show that under the conditions (A1), (A2) or (A3)

$$\begin{aligned} \frac{1}{(1+\left\| \mathbf {h}\right\| ^{2})^{\nu _{ij}+\frac{3}{2}}} =\int _{\mathbb {R}^{3}}e^{\mathrm{i}({\varvec{\lambda } },\mathbf {h})}s_{ij}( {\varvec{\lambda } })d{\varvec{\lambda } },\quad 1\le i, j\le m, \end{aligned}$$

where

$$\begin{aligned} s_{ij}({\varvec{\lambda }})=\frac{1}{(2\pi )^{3}2^{\nu _{ij}-1}\varGamma (\nu _{ij}+\frac{3}{2})}\left\| {\varvec{\lambda }}\right\| ^{\nu _{ij}}K_{\nu _{ij}}(\left\| {\varvec{\lambda }}\right\| ),\quad {\varvec{\lambda }}\in \mathbb {R}^{3},\quad 1\le i, j\le m, \end{aligned}$$

is a valid spectral density of the vector random field with correlation structure \(((1+\left\| \mathbf {h}\right\| ^{2})^{-(\nu _{ij}+\frac{3}{2})})_{1\le i,j\le m}=(D_{ij}(\mathbf {h}))_{1\le i,j\le m}\). We will call it the dual Matérn model.

Note that for the Matérn models

$$\begin{aligned} \int _{\mathbb {R}^{3}}\bar{B}_{ij}(\mathbf {x})d\mathbf {x}<\infty . \end{aligned}$$

This condition is known as short-range dependence, while for the dual Matérn model long-range dependence is possible:

$$\begin{aligned} \int _{\mathbb {R}^{3}}D_{ij}(\mathbf {h})d\mathbf {h}=\infty ,\quad \text { if }0<\nu _{ij}<\frac{3}{2}. \end{aligned}$$

When \(m=3\), the random field of Example 3 is multidimensional scalar wide-sense isotropic in the sense of Definition 2, but not isotropic in the sense of Definition 1. How to construct examples of homogeneous and isotropic vector and tensor random fields with Matérn two-point correlation tensors?

To solve this problem, we develop a sketch of a general theory of homogeneous and isotropic tensor-valued random fields in Sect. 2. This theory was developed by [30, 33]. In particular, we explain another two links: one leads from the theory of random fields to classical invariant theory, the other one was established recently and leads from the theory of random fields to the theory of convex compacta.

In Sect. 3, we give examples of Matérn homogeneous and isotropic tensor-valued random fields. Finally, in the Appendices we briefly describe the mathematical terminology which is not always familiar to specialists in probability: tensors, group representations, and classical invariant theory. For different aspects of the theory of random fields, see also [24, 25].

2 A Sketch of a General Theory

Let r be a nonnegative integer, let \(\mathsf {V}\) be an invariant subspace of the representation \(g\mapsto g^{\otimes r}\) of the group \(\mathrm{O}(3)\), and let U be the restriction of the above representation to \(\mathsf {V}\). Consider a homogeneous \(\mathsf {V}\)-valued random field \(\mathsf {T}(\mathbf {x})\), \(\mathbf {x}\in \mathbb {R}^3\). Assume it is isotropic, that is, satisfies (1). It is very easy to see that its one-point correlation tensor \(\langle \mathsf {T}(\mathbf {x})\rangle \) is an arbitrary element of the isotypic subspace of the space \(\mathsf {V}\) that corresponds to the trivial representation. In particular, in the case of \(r=0\) the representation U is trivial, and \(\langle \mathsf {T}(\mathbf {x})\rangle \) is an arbitrary real number. In the case of \(r=1\) we have \(U(g)=g\). This representation does not contain a trivial component, therefore \(\langle \mathsf {T}(\mathbf {x})\rangle =\mathbf {0}\). In the case of \(r=2\) and \(U(g)=\mathsf {S}^2(g)\), the isotypic subspace that corresponds to the trivial representation is described in Example 12; we have \(\langle \mathsf {T}(\mathbf {x})\rangle =CI\), where C is an arbitrary real number and I is the identity operator in \(\mathbb {R}^3\), and so on.

Can we quickly describe the two-point correlation tensor in the same way? The answer is positive. Indeed, the second equation in (1) means that \(\langle \mathsf {T}(\mathbf {x}),\mathsf {T}(\mathbf {y})\rangle \) is a measurable covariant of the pair of representations \((g,U)\). The integrity basis for polynomial invariants of the defining representation contains one element \(I_1=\Vert \mathbf {x}\Vert ^2\). By the Wineman–Pipkin theorem (Appendix A, Theorem 6), we obtain

$$\begin{aligned} \langle \mathsf {T}(\mathbf {x}),\mathsf {T}(\mathbf {y})\rangle =\sum _{l=1}^{L} \varphi _l(\Vert \mathbf {y}-\mathbf {x}\Vert ^2)\mathsf {T}_l(\mathbf {y}-\mathbf {x}), \end{aligned}$$

where \(\mathsf {T}_l(\mathbf {y}-\mathbf {x})\) are the basic covariant tensors of the representation U.

For example, when \(r=1\), the basic covariant tensors of the defining representation are \(\delta _{ij}\) and \(x_ix_j\) by the result of [44] mentioned in Appendix C. We obtain the result by [39]:

$$\begin{aligned} \langle \mathsf {T}(\mathbf {x}),\mathsf {T}(\mathbf {y})\rangle = \varphi _1(\Vert \mathbf {y}-\mathbf {x}\Vert ^2)\delta _{ij} +\varphi _2(\Vert \mathbf {y}-\mathbf {x}\Vert ^2) \frac{(y_i-x_i)(y_j-x_j)}{\Vert \mathbf {y}-\mathbf {x}\Vert ^2}. \end{aligned}$$

When \(r=2\) and \(U(g)=\mathsf {S}^2(g)\), the three rank 4 isotropic tensors are \(\delta _{ij}\delta _{kl}\), \(\delta _{ik}\delta _{jl}\), and \(\delta _{il}\delta _{jk}\). Consider the group \(\Sigma \) of order 8 of permutations of the symbols i, j, k, and l, generated by the transpositions (ij), (kl), and the product (ik)(jl). The group \(\Sigma \) acts on the set of rank 4 isotropic tensors and has two orbits. The sums of the elements of each orbit are the basic isotropic tensors:

$$\begin{aligned} L^1_{ijkl}=\delta _{ij}\delta _{kl},\quad L^2_{ijkl}=\delta _{ik}\delta _{jl} +\delta _{il}\delta _{jk}. \end{aligned}$$

Consider the case of degree 2 and of order 4. For the pair of representations \( (g^{\otimes 4},(\mathbb {R}^3)^{\otimes 4})\) and \((g,\mathbb {R}^3)\) we have 6 covariant tensors:

$$\begin{aligned} \delta _{il}x_jx_k,\delta _{jk}x_ix_{l},\delta _{jl}x_ix_k, \delta _{ik}x_jx_{l},\delta _{kl}x_ix_j,\delta _{ij}x_kx_{l}. \end{aligned}$$

The action of the group \(\Sigma \) has 2 orbits, and the symmetric covariant tensors are

$$\begin{aligned}&\Vert \mathbf {x}\Vert ^2L^3_{ijkl}(\mathbf {x})=\delta _{il}x_jx_k +\delta _{jk}x_ix_{l}+\delta _{jl}x_ix_k+\delta _{ik}x_jx_{l},\\&\Vert \mathbf {x}\Vert ^2L^4_{ijkl}(\mathbf {x})=\delta _{kl}x_ix_j +\delta _{ij}x_kx_{l}. \end{aligned}$$

In the case of degree 4 and of order 4 we have only one covariant:

$$\begin{aligned} \Vert \mathbf {x}\Vert ^4L^5_{ijkl}(\mathbf {x})=x_ix_jx_kx_{l}. \end{aligned}$$

The result by [28]

$$\begin{aligned} \langle \mathsf {T}(\mathbf {x}),\mathsf {T}(\mathbf {y})\rangle =\sum _{m=1}^{5}\varphi _m(\Vert \mathbf {y}-\mathbf {x}\Vert ^2) L^m_{ijkl}(\mathbf {y}- \mathbf {x}) \end{aligned}$$

easily follows.
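
The basic covariant tensors \(L^1,\dots ,L^5\) are easy to tabulate. The sketch below (ours) builds them as numpy arrays from the formulas above and checks numerically that \(L^m(Q\mathbf {x})=Q^{\otimes 4}L^m(\mathbf {x})\) for a random orthogonal matrix Q, which is the covariance property used here.

```python
import numpy as np

rng = np.random.default_rng(2)
d = np.eye(3)

def L_tensors(x):
    """The five basic symmetric covariant tensors for r = 2; L[m-1] is L^m."""
    n = x / np.linalg.norm(x)
    L = np.empty((5, 3, 3, 3, 3))
    L[0] = np.einsum('ij,kl->ijkl', d, d)
    L[1] = np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d)
    L[2] = (np.einsum('il,j,k->ijkl', d, n, n) + np.einsum('jk,i,l->ijkl', d, n, n)
            + np.einsum('jl,i,k->ijkl', d, n, n) + np.einsum('ik,j,l->ijkl', d, n, n))
    L[3] = np.einsum('kl,i,j->ijkl', d, n, n) + np.einsum('ij,k,l->ijkl', d, n, n)
    L[4] = np.einsum('i,j,k,l->ijkl', n, n, n, n)
    return L

Q = np.linalg.qr(rng.standard_normal((3, 3)))[0]     # a random orthogonal matrix
x = rng.standard_normal(3)
lhs = L_tensors(Q @ x)
rhs = np.einsum('ia,jb,kc,ld,mabcd->mijkl', Q, Q, Q, Q, L_tensors(x))
assert np.allclose(lhs, rhs)                         # L^m(Qx) = Q^{⊗4} L^m(x)
```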

The case of \(r=3\) will be considered in detail elsewhere.

When \(r=4\) and \(U(g)=\mathsf {S}^2(\mathsf {S}^2(g))\), the situation is more delicate. Linear relations between isotropic tensors, called syzygies, appear. There are 8 symmetric isotropic tensors connected by 1 syzygy, 13 basic covariant tensors of degree 2 and of order 8 connected by 3 syzygies, 10 basic covariant tensors of degree 4 and of order 8 connected by 2 syzygies, 3 basic covariant tensors of degree 6 and of order 8, and 1 basic covariant tensor of degree 8 and of order 8, see [31, 32] for details. It follows that there are

$$\begin{aligned} (8-1)+(13-3)+(10-2)+3+1=29 \end{aligned}$$

linearly independent basic covariant tensors. The result by [29] includes only 15 of them and is therefore incomplete.

How to find the functions \(\varphi _m\)? In the case of \(r=0\), the answer is given by Theorem 1:

$$\begin{aligned} \varphi _1(\Vert \mathbf {y}-\mathbf {x}\Vert ^2)=\int ^{\infty }_0 \frac{\sin (\lambda \Vert \mathbf {y}-\mathbf {x}\Vert )}{\lambda \Vert \mathbf {y}-\mathbf {x}\Vert }\,\mathrm{d} \mu (\lambda ). \end{aligned}$$

In the case of \(r=1\), the answer has been found by [46]:

$$\begin{aligned}&\varphi _1(\Vert \mathbf {y}-\mathbf {x}\Vert ^2)=\frac{1}{\rho ^2}\left( \int ^{ \infty }_0j_2(\lambda \rho ) \,\mathrm{d}\varPhi _2(\lambda )-\int ^{\infty }_0j_1(\lambda \rho ) \,\mathrm{d}\varPhi _1(\lambda )\right) ,\nonumber \\&\varphi _2(\Vert \mathbf {y}-\mathbf {x}\Vert ^2)=\int ^{\infty }_0\frac{j_1(\lambda \rho )}{\lambda \rho } \,\mathrm{d}\varPhi _1(\lambda ) +\int ^{\infty }_0\left( j_0(\lambda \rho )-\frac{j_1(\lambda \rho )}{\lambda \rho } \right) \,\mathrm{d}\varPhi _2(\lambda ), \end{aligned}$$
(4)

where \(\rho =\Vert \mathbf {y}-\mathbf {x}\Vert \), \(j_n\) are the spherical Bessel functions, and \(\varPhi _1\) and \(\varPhi _2\) are two finite measures on \([0,\infty )\) with \(\varPhi _1(\{0\})=\varPhi _2(\{0\})\).

In the general case, we proceed in steps. The main idea is simple. We describe all homogeneous random fields and throw away those that are not isotropic. The homogeneous random fields are described by the following result.

Theorem 2

Formula

$$\begin{aligned} \langle \mathsf {T}(\mathbf {x}),\mathsf {T}(\mathbf {y})\rangle =\int _{\hat{ \mathbb {R}}^3}e^{\mathrm{i}(\mathbf {p},\mathbf {y}-\mathbf {x})} \,\mathrm{d} \mu (\mathbf {p}) \end{aligned}$$
(5)

establishes a one-to-one correspondence between the set of the two-point correlation tensors of homogeneous random fields \(\mathsf {T}(\mathbf {x})\) on the space domain \(\mathbb {R}^3\) with values in a complex finite-dimensional space \(\mathsf {V}_{\mathbb {C}}\) and the set of all measures \(\mu \) on the Borel \(\sigma \)-field \(\mathfrak {B}(\hat{\mathbb {R}}^3)\) of the wavenumber domain \(\hat{\mathbb {R}}^3\) with values in the cone of nonnegative-definite Hermitian operators in \(\mathsf {V}_{\mathbb {C}}\).

This theorem was proved by [22, 23] for one-dimensional stochastic processes. Kolmogorov’s results have been further developed by [4,5,6, 27] among others.

We would like to write as many formulae as possible in a coordinate-free form, like (5). To do that, let J be a real structure in the space \(\mathsf {V}_{\mathbb {C}}\), that is, a map \(J:\mathsf {V}_{\mathbb {C}}\rightarrow \mathsf {V}_{\mathbb {C}}\) with

  • \(J(\mathsf {x}+\mathsf {y})=J(\mathsf {x})+J(\mathsf {y})\), \(\mathsf {x}\), \( \mathsf {y}\in \mathsf {V}_{\mathbb {C}}\).

  • \(J(\alpha \mathsf {x})=\overline{\alpha }J(\mathsf {x})\), \(\mathsf {x}\in \mathsf {V}_{\mathbb {C}}\), \(\alpha \in \mathbb {C}\).

  • \(J(J(\mathsf {x}))=\mathsf {x}\), \(\mathsf {x}\in \mathsf {V}_{\mathbb {C}}\).

Any tensor \(\mathsf {x}\in \mathsf {V}_{\mathbb {C}}\) can be written as \(\mathsf { x}=\mathsf {x}^++\mathsf {x}^-\), where

$$\begin{aligned} \mathsf {x}^+=\frac{1}{2}(\mathsf {x}+J\mathsf {x}),\quad \mathsf {x}^-=\frac{1 }{2}(\mathsf {x}-J\mathsf {x}). \end{aligned}$$

Denote

$$\begin{aligned} \mathsf {V}^+=\{\,\mathsf {x}\in \mathsf {V}_{\mathbb {C}}:J\mathsf {x}= \mathsf {x}\,\},\quad \mathsf {V}^-=\{\,\mathsf {x}\in \mathsf {V}_{\mathbb {C} }:J\mathsf {x}=-\mathsf {x}\,\}. \end{aligned}$$

Both sets \(\mathsf {V}^+\) and \(\mathsf {V}^-\) are real vector spaces. If the values of the random field \(\mathsf {T}(\mathbf {x})\) lie in \(\mathsf {V}^+\), then the measure \(\mu \) satisfies the condition

$$\begin{aligned} \mu (-A)=\mu ^{\top }(A) \end{aligned}$$
(6)

for all Borel subsets \(A\subseteq \hat{\mathbb {R}}^3\), where \(-A=\{\,-\mathbf { p}:\mathbf {p}\in A\,\}\).

Next, the following Lemma can be proved. Let \(\mathbf {p}=(\lambda ,\varphi _{ \mathbf {p}},\theta _{\mathbf {p}})\) be the spherical coordinates in the wavenumber domain.

Lemma 1

A homogeneous random field described by (5) and (6) is isotropic if and only if its two-point correlation tensor has the form

$$\begin{aligned} \langle \mathsf {T}(\mathbf {x}),\mathsf {T}(\mathbf {y})\rangle =\frac{1}{4\pi }\int _{0}^{\infty }\int _{S^2}e^{\mathrm{i}(\mathbf {p},\mathbf {y}-\mathbf {x})} f(\lambda ,\varphi _{\mathbf {p}},\theta _{\mathbf {p}})\sin \theta _{\mathbf {p}}\,\mathrm{d}\varphi _{\mathbf {p}}\,\mathrm{d}\theta _{\mathbf {p}}\,\mathrm{d}\nu (\lambda ), \end{aligned}$$
(7)

where \(\nu \) is a finite measure on the interval \([0,\infty )\), and where f is a measurable function taking values in the set of all symmetric nonnegative-definite operators on \(\mathsf {V}^+\) with unit trace and satisfying the condition

$$\begin{aligned} f(g\mathbf {p})=\mathsf {S}^2(U)(g)f(\mathbf {p}),\quad \mathbf {p}\in \hat{ \mathbb {R}}^3, \quad g\in \mathrm{O}(3). \end{aligned}$$
(8)

When \(\lambda =0\), condition (8) gives \(f(\mathbf {0})=\mathsf {S} ^2(U)(g)f(\mathbf {0})\) for all \(g\in \mathrm{O}(3)\). In other words, the tensor \(f(\mathbf {0})\) lies in the isotypic subspace of the space \(\mathsf {S} ^2(\mathsf {V^+})\) that corresponds to the trivial representation of the group \(\mathrm{O}(3)\), call it \(\mathsf {H}_1\). The intersection of \(\mathsf {H }_1\) with the set of all symmetric nonnegative-definite operators on \( \mathsf {V}^+\) with unit trace is a convex compact set, call it \(\mathcal {C} _1 \).

When \(\lambda >0\), condition (8) gives \(f(\lambda ,0,0)=\mathsf {S} ^2(U)(g)f(\lambda ,0,0)\) for all \(g\in \mathrm{O}(2)\), because \(\mathrm{O}(2)\) is the subgroup of \(\mathrm{O}(3)\) that fixes the point \((\lambda ,0,0)\). In other words, consider the restriction of the representation \(\mathsf {S}^2(U)\) to the subgroup \(\mathrm{O}(2)\). The tensor \(f(\lambda ,0,0)\) lies in the isotypic subspace of the space \(\mathsf {S}^2(\mathsf {V^+})\) that corresponds to the trivial representation of the group \(\mathrm{O}(2)\), call it \(\mathsf { H}_0\). We have \(\mathsf {H}_1\subset \mathsf {H}_0\), because \(\mathrm{O}(2)\) is a subgroup of \(\mathrm{O}(3)\). The intersection of \(\mathsf {H}_0\) with the set of all symmetric nonnegative-definite operators on \(\mathsf {V}^+\) with unit trace is a convex compact set, call it \(\mathcal {C}_0\).

Fix an orthonormal basis \(\mathsf {T}^{0,1,0}\), ..., \(\mathsf {T}^{0,n_0,0}\) of the space \(\mathsf {H}_1\). Assume that the space \(\mathsf {H}_0\ominus \mathsf {H}_1\) has the non-zero intersection with the spaces of \(n_1\) copies of the irreducible representation \(U^{2g}\), \(n_2\) copies of the irreducible representation \(U^{4g}\), ..., \(n_r\) copies of the irreducible representation \(U^{2rg}\) of the group \(\mathrm{O}(3)\), and let \(\mathsf {T} ^{2\ell ,n,m}\), \(-2\ell \le m\le 2\ell \), be the tensors of the Gordienko basis of the nth copy of the representation \(U^{2\ell g}\). We have

$$\begin{aligned} f(\lambda ,0,0)=\sum _{\ell =0}^{r}\sum _{n=1}^{n_{\ell }}f_{\ell n}(\lambda ) \mathsf {T}^{2\ell ,n,0} \end{aligned}$$
(9)

with \(f_{\ell n}(0)=0\) for \(\ell >0\) and \(1\le n\le n_{\ell }\). By (8) we obtain

$$\begin{aligned} f(\lambda ,\varphi _{\mathbf {p}},\theta _{\mathbf {p}})=\sum _{\ell =0}^{r} \sum _{n=1}^{n_{\ell }}f_{\ell n}(\lambda )\sum _{m=-2\ell }^{2\ell } U^{2\ell g}_{m0}(\varphi _{\mathbf {p}},\theta _{\mathbf {p}})\mathsf {T}^{2\ell ,n,m}. \end{aligned}$$

Equation (7) takes the form

$$\begin{aligned} \langle \mathsf {T}(\mathbf {x}),\mathsf {T}(\mathbf {y})\rangle= & {} \frac{1}{2 \sqrt{\pi }} \sum _{\ell =0}^{r}\sum _{n=1}^{n_{\ell }}\sum _{m=-2\ell }^{2\ell }\int _{0}^{ \infty }\int _{S^2} e^{\mathrm{i}(\mathbf {p},\mathbf {y}-\mathbf {x})}f_{\ell n}(\lambda ) \frac{1}{\sqrt{4\ell +1}}\nonumber \\&\times S^m_{2\ell }(\varphi _{\mathbf {p}},\theta _{\mathbf {p}}) \mathsf {T}^{2\ell ,n,m}\sin \theta _{\mathbf {p}}\,\mathrm{d}\varphi _{ \mathbf {p}}\, \mathrm{d}\theta _{\mathbf {p}}\,\mathrm{d}\nu (\lambda ), \end{aligned}$$
(10)

where we used the relation

$$\begin{aligned} U^{2\ell g}_{m0}(\varphi _{\mathbf {p}},\theta _{\mathbf {p}})=\sqrt{\frac{4\pi }{ 4\ell +1}} S^m_{2\ell }(\varphi _{\mathbf {p}},\theta _{\mathbf {p}}). \end{aligned}$$

Substitute the Rayleigh expansion

$$\begin{aligned} \mathrm{e}^{\mathrm{i}(\mathbf {p},\mathbf {r})}=4\pi \sum ^{\infty }_{\ell =0} \sum ^{\ell }_{m=-\ell }\mathrm{i}^{\ell }j_{\ell }(\Vert \mathbf {p}\Vert \cdot \Vert \mathbf {r }\Vert ) S^m_{\ell }(\theta _{\mathbf {p}},\varphi _{\mathbf {p}}) S^m_{\ell }(\theta _{ \mathbf {r}},\varphi _{\mathbf {r}}) \end{aligned}$$

into (10). We obtain

$$\begin{aligned} \langle \mathsf {T}(\mathbf {x}),\mathsf {T}(\mathbf {y})\rangle= & {} 2\sqrt{\pi } \sum _{\ell =0}^{r}\sum _{n=1}^{n_{\ell }}\sum _{m=-2\ell }^{2\ell }\int _{0}^{ \infty } (-1)^{\ell }j_{2\ell }(\lambda \Vert \mathbf {r}\Vert )f_{\ell n}(\lambda )\frac{1}{\sqrt{4\ell +1}}\\&\quad \times \, S^m_{2\ell }(\varphi _{\mathbf {r}},\theta _{\mathbf {r}}) \mathsf {T}^{2\ell ,n,m}\,\mathrm{d}\nu (\lambda ), \end{aligned}$$

where \(\mathbf {r}=\mathbf {y}-\mathbf {x}\). Returning to the matrix entries \(U^{2\ell g}_{m0}(\varphi _{\mathbf {r}},\theta _{\mathbf {r}})\), we have

$$\begin{aligned} \langle \mathsf {T}(\mathbf {x}),\mathsf {T}(\mathbf {y})\rangle = \int _{0}^{\infty }\sum _{\ell =0}^{r}(-1)^{\ell }j_{2\ell }(\lambda \Vert \mathbf {r} \Vert ) \sum _{n=1}^{n_{\ell }}f_{\ell n}(\lambda ) M^{2\ell ,n}(\varphi _{\mathbf {r} },\theta _{\mathbf {r}})\,\mathrm{d}\nu (\lambda ), \end{aligned}$$
(11)

where

$$\begin{aligned} M^{2\ell ,n}(\varphi _{\mathbf {r}},\theta _{\mathbf {r}})=\sum _{m=-2\ell }^{2 \ell } U^{2\ell g}_{m0}(\varphi _{\mathbf {r}},\theta _{\mathbf {r}})\mathsf {T} ^{2\ell ,n,m}. \end{aligned}$$

It is easy to check that the function \(M^{2\ell ,n}(\varphi _{\mathbf {r} },\theta _{\mathbf {r}})\) is a covariant of degree \(2\ell \) and of order 2r. Therefore, the M-function is a linear combination of basic symmetric covariant tensors, or L-functions:

$$\begin{aligned} M^{2\ell ,n}(\varphi _{\mathbf {r}},\theta _{\mathbf {r}})=\sum _{k=0}^{\ell } \sum _{q=1}^{q_{kr}}c_{nkq}\frac{L^{2k,q}(\mathbf {y}-\mathbf {x})}{\Vert \mathbf {y }-\mathbf {x}\Vert ^{2k}}, \end{aligned}$$

where \(q_{kr}\) is the number of linearly independent symmetric covariant tensors of degree 2k and of order 2r. The right hand side is indeed a polynomial in sines and cosines of the angles \(\varphi _{\mathbf {r}}\) and \( \theta _{\mathbf {r}}\). Equation (11) takes the form

$$\begin{aligned} \langle \mathsf {T}(\mathbf {x}),\mathsf {T}(\mathbf {y})\rangle= & {} \int _{0}^{\infty }\sum _{\ell =0}^{r}(-1)^{\ell }j_{2\ell }(\lambda \Vert \mathbf {r} \Vert ) \sum _{n=1}^{n_{\ell }}f_{\ell n}(\lambda )\\&\times \,\sum _{k=0}^{\ell } \sum _{q=1}^{q_{kr}}c_{nkq}\frac{L^{2k,q}(\mathbf {y}-\mathbf {x})}{\Vert \mathbf {y }-\mathbf {x}\Vert ^{2k}}\,\mathrm{d}\nu (\lambda ). \end{aligned}$$

Recall that \(f_{\ell n}(\lambda )\) are measurable functions such that the tensor (9) lies in \(\mathcal {C}_1\) for \(\lambda =0\) and in \(\mathcal {C}_0\) for \(\lambda >0\). The final form of the two-point correlation tensor of the random field \(\mathsf {T}(\mathbf {x})\) is determined by the geometry of the convex compacta \(\mathcal {C}_0\) and \(\mathcal {C}_1\). For example, in the case of \(r=1\) the set \(\mathcal {C}_0\) is an interval (see [33]), while \(\mathcal {C}_1\) is a one-point set inside this interval. The set \(\mathcal {C}_0\) has two extreme points, and the corresponding random field is a sum of two uncorrelated components given by Eq. (12) below. The one-point set \(\mathcal {C}_1\) lies in the middle of the interval, and the condition \(\varPhi _1(\{0\})=\varPhi _2(\{0\})\) follows. In the case of \(r=2\), the set of extreme points of the set \(\mathcal {C}_0\) has three connected components: two one-point sets and an ellipse, see [33], and the corresponding random field is a sum of three uncorrelated components.

In general, the two-point correlation tensor of the field has the simplest form when the set \(\mathcal {C}_0\) is a simplex. We use this idea in Examples 6 and 8 below.

3 Examples of Matérn Homogeneous and Isotropic Random Fields

Example 6

Consider a centred homogeneous scalar isotropic random field \(T(\mathbf {x})\) on the space \(\mathbb {R}^3\) with values in the two-dimensional space \(\mathbb {R}^2\). It is easy to see that both \(\mathcal {C }_0\) and \(\mathcal {C}_1\) are equal to the set of all symmetric nonnegative-definite \(2\times 2\) matrices with unit trace. Every such matrix has the form

$$\begin{aligned} \begin{pmatrix} x &{} y \\ y &{} 1-x \end{pmatrix} \end{aligned}$$

with \(x\in [0,1]\) and \(y^2\le x(1-x)\). Geometrically, \(\mathcal {C}_0\) and \(\mathcal {C}_1\) coincide with the ball

$$\begin{aligned} \left( x-\frac{1}{2}\right) ^2+y^2\le \frac{1}{4}. \end{aligned}$$

Inscribe an equilateral triangle with vertices

$$\begin{aligned} C^1= \begin{pmatrix} 0 &{} 0 \\ 0 &{} 1 \end{pmatrix} ,\quad C^{2,3}=\frac{1}{4} \begin{pmatrix} 1 &{} \pm \sqrt{3} \\ \pm \sqrt{3} &{} 3 \end{pmatrix} \end{aligned}$$

into the above ball. The function \(f(\mathbf {p})\) takes the form

$$\begin{aligned} f(\mathbf {p})=\sum _{m=1}^{3}a_m(\Vert \mathbf {p}\Vert )C^m, \end{aligned}$$

where \(a_m(\Vert \mathbf {p}\Vert )\) are the barycentric coordinates of the point \(f( \mathbf {p})\) inside the triangle. The two-point correlation tensor of the field takes the form

$$\begin{aligned} \langle T(\mathbf {x}),T(\mathbf {y})\rangle =\sum _{m=1}^{3}\int _{0}^{\infty } \frac{\sin (\lambda \Vert \mathbf {y}-\mathbf {x}\Vert )}{\lambda \Vert \mathbf {y}-\mathbf {x} \Vert } C^m\,\mathrm{d}\varPhi _m(\lambda ), \end{aligned}$$

where \(\mathrm{d}\varPhi _m(\lambda )=a_m(\lambda )\mathrm{d}\nu (\lambda )\) are three finite measures on \([0,\infty )\), and \(\nu \) is the measure of Eq. (7). Define \(\mathrm{d}\varPhi _m(\lambda )\) via the Matérn spectral densities of Example 2 (resp. the dual Matérn spectral densities of Example 5). We obtain a scalar homogeneous and isotropic Matérn (resp. dual Matérn) random field.
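
A concrete numerical instance of this construction (ours; all parameter values are arbitrary): take \(\mathrm{d}\varPhi _m(\lambda )=4\pi \lambda ^2f_m(\lambda )\,\mathrm{d}\lambda \) with \(f_m\) Matérn isotropic spectral densities and evaluate the matrix-valued two-point correlation function by quadrature.

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

C = [np.array([[0.0, 0.0], [0.0, 1.0]]),
     0.25 * np.array([[1.0, np.sqrt(3.0)], [np.sqrt(3.0), 3.0]]),
     0.25 * np.array([[1.0, -np.sqrt(3.0)], [-np.sqrt(3.0), 3.0]])]

def matern_sd(lam, nu, a):
    """Matérn isotropic spectral density in R^3 with sigma^2 = 1."""
    return (gamma(nu + 1.5) * a**(2.0 * nu)
            / (np.pi**1.5 * gamma(nu) * (a**2 + lam**2)**(nu + 1.5)))

params = [(0.5, 1.0), (1.5, 2.0), (2.5, 0.5)]        # (nu_m, a_m), arbitrary values

def covariance(r):
    """Two-point correlation 2x2 matrix of the field of Example 6 at distance r."""
    B = np.zeros((2, 2))
    for Cm, (nu, a) in zip(C, params):
        kernel = lambda lam: (4.0 * np.pi * lam**2 * matern_sd(lam, nu, a)
                              * np.sinc(lam * r / np.pi))
        B += quad(kernel, 0.0, np.inf)[0] * Cm
    return B

print(covariance(0.0))     # at r = 0 this is C^1 + C^2 + C^3 (unit masses)
print(covariance(1.3))
```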

Example 7

Using (4) and the well-known formulae

$$\begin{aligned} j_0(t)=\frac{\sin t}{t},\quad j_1(t)=\frac{\sin t}{t^2}-\frac{\cos t}{t}, \quad j_2(t)=\left( \frac{3}{t^2}-1\right) \frac{\sin t}{t}-\frac{3\cos t}{t^2 }, \end{aligned}$$

we write the two-point correlation tensor of a rank 1 homogeneous and isotropic random field in the form

$$\begin{aligned} \langle \mathbf {v}(\mathbf {x}),\mathbf {v}(\mathbf {y})\rangle =B^{(1)}_{ij}( \mathbf {r})+B^{(2)}_{ij}(\mathbf {r}), \end{aligned}$$

where \(\mathbf {r}=\mathbf {y}-\mathbf {x}\), and

$$\begin{aligned} B^{(1)}_{ij}(\mathbf {x},\mathbf {y})= & {} \int _{0}^{\infty }\left[ \left( -\frac{3\sin (\lambda \Vert \mathbf {r}\Vert )}{(\lambda \Vert \mathbf {r}\Vert )^3} +\frac{\sin (\lambda \Vert \mathbf {r}\Vert )}{\lambda \Vert \mathbf {r}\Vert } +\frac{3\cos (\lambda \Vert \mathbf {r}\Vert )}{(\lambda \Vert \mathbf {r}\Vert )^2} \right) \frac{r_ir_j}{\Vert \mathbf {r}\Vert ^2}\right. \nonumber \\&+\,\left. \left( \frac{\sin (\lambda \Vert \mathbf {r}\Vert )}{(\lambda \Vert \mathbf {r} \Vert )^3} -\frac{\cos (\lambda \Vert \mathbf {r}\Vert )}{(\lambda \Vert \mathbf {r}\Vert )^2}\right) \delta _{ij} \right] \,\mathrm{d}\varPhi _1(\lambda ),\nonumber \\ B^{(2)}_{ij}(\mathbf {x},\mathbf {y})= & {} \int _{0}^{\infty }\left[ \left( \frac{3\sin (\lambda \Vert \mathbf {r}\Vert )}{(\lambda \Vert \mathbf {r}\Vert )^3} -\frac{\sin (\lambda \Vert \mathbf {r}\Vert )}{\lambda \Vert \mathbf {r}\Vert } -\frac{3\cos (\lambda \Vert \mathbf {r}\Vert )}{(\lambda \Vert \mathbf {r}\Vert )^2} \right) \frac{r_ir_j}{\Vert \mathbf {r}\Vert ^2}\right. \nonumber \\&+\,\left. \left( \frac{\sin (\lambda \Vert \mathbf {r}\Vert )}{\lambda \Vert \mathbf {r}\Vert } -\frac{\sin (\lambda \Vert \mathbf {r}\Vert )}{(\lambda \Vert \mathbf {r}\Vert )^3} +\frac{\cos (\lambda \Vert \mathbf {r}\Vert )}{(\lambda \Vert \mathbf {r}\Vert )^2}\right) \delta _{ij} \right] \,\mathrm{d}\varPhi _2(\lambda ). \end{aligned}$$
(12)

Now assume that the measures \(\varPhi _1\) and \(\varPhi _2\) are described by Matérn densities:

$$\begin{aligned} \mathrm{d}\varPhi _i(\lambda )=2\pi \lambda ^2\frac{\sigma _i^{2}\varGamma \left( \nu _i +\frac{3}{2}\right) a_i^{2\nu _i}}{2\pi ^{3/2}\left( a_i^{2}+\lambda ^{2}\right) ^{\nu _i +\frac{3}{2}}},\quad i=1,2. \end{aligned}$$

It is possible to substitute these densities into (12) and calculate the integrals using [37, Eq. 2.5.9.1]. We obtain rather long expressions that include the generalised hypergeometric function \({}_1F_2\).
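
Alternatively, the integrals in (12) can be evaluated by numerical quadrature. The sketch below (ours) does this for the component \(B^{(2)}_{ij}\); the density mirrors the formula for \(\mathrm{d}\varPhi _i\) written above, and the parameter values are arbitrary.

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

def phi_density(lam, nu=1.0, a=1.5, sigma2=1.0):
    """Density of the measure Phi_2 chosen above (Matérn case)."""
    return (2.0 * np.pi * lam**2 * sigma2 * gamma(nu + 1.5) * a**(2.0 * nu)
            / (2.0 * np.pi**1.5 * (a**2 + lam**2)**(nu + 1.5)))

def B2(r_vec):
    """Numerical evaluation of B^{(2)}_{ij} from (12)."""
    r = np.linalg.norm(r_vec)
    n = np.asarray(r_vec, dtype=float) / r

    def radial(lam, which):
        t = lam * r
        g1 = 3.0 * np.sin(t) / t**3 - np.sin(t) / t - 3.0 * np.cos(t) / t**2
        g2 = np.sin(t) / t - np.sin(t) / t**3 + np.cos(t) / t**2
        return (g1 if which == 1 else g2) * phi_density(lam)

    c1 = quad(lambda lam: radial(lam, 1), 0.0, np.inf)[0]   # r_i r_j / ||r||^2 term
    c2 = quad(lambda lam: radial(lam, 2), 0.0, np.inf)[0]   # delta_ij term
    return c1 * np.outer(n, n) + c2 * np.eye(3)

print(B2(np.array([0.3, -0.2, 0.6])))
```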

The situation is different for the dual model:

$$\begin{aligned} \mathrm{d}\varPhi _i(\lambda )=\frac{1}{(2\pi )^22^{\nu _i-1}\varGamma (\nu _i+3/2)}\lambda ^{\nu _i+2}K_{\nu _i}(\lambda )\,\mathrm{d}\lambda ,\quad i=1, 2. \end{aligned}$$

Using [38, Eqs. 2.16.14.3, 2.16.14.4], we obtain

$$\begin{aligned} B^{(k)}_{ij}(\mathbf {x},\mathbf {y})= & {} C_k\left( -\frac{3\pi \varGamma (2\nu _k)}{4\Vert \mathbf {r}\Vert ^3 (1+\Vert \mathbf {r}\Vert ^2)^{\nu _k/2}} \left[ P^{-\nu _k}_{\nu _k-1}\left( \frac{\Vert \mathbf {r}\Vert }{\sqrt{1+\Vert \mathbf {r}\Vert ^2}}\right) \right. \right. \\&\left. \left. -\,P^{-\nu _k}_{\nu _k-1}\left( -\frac{\Vert \mathbf {r}\Vert }{\sqrt{1+\Vert \mathbf {r}\Vert ^2}}\right) \right] +\frac{2^{\nu _k}\sqrt{\pi }\varGamma ( \nu _k+3/2)}{(1+\Vert \mathbf {r}\Vert ^2)^{\nu _k+3/2}}\right. \\&+\,\left. \frac{3\cdot 2^{\nu _k-1}\sqrt{\pi }\varGamma (\nu _k+1/2)}{(1+\Vert \mathbf {r}\Vert ^2)^{\nu _k+1/2}}\right) \frac{r_ir_j}{\Vert \mathbf {r}\Vert ^2}\\&+\,C_k\left( \frac{\pi \varGamma (2\nu _k)}{4\Vert \mathbf {r}\Vert ^3 (1+\Vert \mathbf {r}\Vert ^2)^{\nu _k/2}}\left[ P^{-\nu _k}_{\nu _k-1}\left( \frac{\Vert \mathbf {r}\Vert }{\sqrt{1+\Vert \mathbf {r}\Vert ^2}}\right) \right. \right. \\&\left. \left. -\,P^{-\nu _k}_{\nu _k-1}\left( -\frac{\Vert \mathbf {r}\Vert }{\sqrt{1+\Vert \mathbf {r}\Vert ^2}}\right) \right] -\frac{2^{\nu _k}\sqrt{\pi }\varGamma ( \nu _k+1/2)}{(1+\Vert \mathbf {r}\Vert ^2)^{\nu _k+3/2}}\right) \delta _{ij}, \end{aligned}$$

where

$$\begin{aligned} C_k=\frac{1}{(2\pi )^22^{\nu _k-1}\varGamma (\nu _k+3/2)},\quad k=1, 2. \end{aligned}$$

Example 8

Consider the case when \(r=2\) and \(U(g)=\mathsf {S}^2(g)\) . In order to write down symmetric rank 4 tensors in a compressed matrix form, consider an orthogonal operator \(\tau \) acting from \(\mathsf {S}^2( \mathsf {S}^2(\mathbb {R}^3))\) to \(\mathsf {S}^2(\mathbb {R}^6)\) as follows:

$$\begin{aligned} \tau f_{ijkl}=\left( {\begin{matrix} f_{-1-1-1-1} &{}\quad f_{-1-100} &{}\quad f_{-1-111} &{}\quad \sqrt{2}f_{-1-1-10} &{}\quad \sqrt{2} f_{-1-101} &{}\quad \sqrt{2}f_{-1-11-1} \\ f_{00-1-1} &{}\quad f_{0000} &{}\quad f_{0011} &{}\quad \sqrt{2}f_{00-10} &{}\quad \sqrt{2}f_{0001} &{}\quad \sqrt{2}f_{001-1} \\ f_{11-1-1} &{}\quad f_{1100} &{}\quad f_{1111} &{}\quad \sqrt{2}f_{11-10} &{}\quad \sqrt{2}f_{1101} &{}\quad \sqrt{2}f_{111-1} \\ \sqrt{2}f_{-10-1-1} &{}\quad \sqrt{2}f_{-1000} &{}\quad \sqrt{2}f_{-1011} &{}\quad 2f_{-10-10} &{}\quad 2f_{-1001} &{}\quad 2f_{-101-1} \\ \sqrt{2}f_{01-1-1} &{}\quad \sqrt{2}f_{0100} &{}\quad \sqrt{2}f_{0111} &{}\quad 2f_{01-10} &{}\quad 2f_{0101} &{}\quad 2f_{011-1} \\ \sqrt{2}f_{1-1-1-1} &{}\quad \sqrt{2}f_{1-100} &{}\quad \sqrt{2}f_{1-111} &{}\quad 2f_{1-1-10} &{}\quad 2f_{1-101} &{}\quad 2f_{1-11-1} \end{matrix}} \right) , \end{aligned}$$

see [16, Eq. (44)]. It is possible to prove the following. The matrix \(\tau f_{ijkl}(\mathbf {0})\) lies in the interval \(\mathcal {C}_1\) with extreme points \(C^1\) and \(C^2\), where the nonzero elements of the symmetric matrix \(C^1\) lying on and over the main diagonal are as follows:

$$\begin{aligned} C^1_{11}=C^1_{12}=C^1_{13}=C^1_{22}=C^1_{23}=C^1_{33}=\frac{1}{3}, \end{aligned}$$

while those of the matrix \(C^2\) are

$$\begin{aligned}&C^2_{11}=C^2_{22}=C^2_{33}=\frac{2}{15},\quad C^2_{44}=C^2_{55}=C^2_{66}= \frac{1}{5},\\&C^2_{12}=C^2_{13}=C^2_{23}=-\frac{1}{15}. \end{aligned}$$

The matrix \(\tau f_{ijkl}(\lambda ,0,0)\) with \(\lambda >0\) lies in the convex compact set \(\mathcal {C}_0\). The set of extreme points of \(\mathcal {C}_0\) contains three connected components. The first component is the one-point set \(\{D^1\}\) with

$$\begin{aligned} D^1_{44}=D^1_{66}=\frac{1}{2}. \end{aligned}$$

The second component is the one-point set \(\{D^2\}\) with

$$\begin{aligned} D^2_{11}=D^2_{33}=\frac{1}{4},\quad D^2_{55}=\frac{1}{2},\quad D^2_{13}=- \frac{1}{4}. \end{aligned}$$

The third component is the ellipse \(\{\,D^{\theta }:0\le \theta <2\pi \,\} \) with

$$\begin{aligned}&D^{\theta }_{11}=D^{\theta }_{33}=D^{\theta }_{13}=\frac{1}{2} \sin ^2(\theta /2),\quad D^{\theta }_{22}=\cos ^2(\theta /2),\\&D^{\theta }_{12}=D^{\theta }_{23}=\frac{1}{2\sqrt{2}}\sin (\theta ). \end{aligned}$$

Choose three points \(D^3\), \(D^4\), \(D^5\) lying on the above ellipse. If we allow the matrix \(\tau f_{ijkl}(\lambda ,0,0)\) with \(\lambda >0\) to take values in the simplex with vertices \(D^i\), \(1\le i\le 5\), then the two-point correlation tensor of the random field \(\varepsilon (\mathbf {x})\) is the sum of five integrals. The larger the four-dimensional Lebesgue measure of the simplex in comparison with that of \(\mathcal {C}_0\), the wider the class of random fields described.

Note that the simplex should contain the set \(\mathcal {C}_1\). The matrix \(C^1\) lies on the ellipse and corresponds to the value of \(\theta =2\arcsin (\sqrt{2/3})\). It follows that one of the above points, say \(D^3\), must be equal to \(C^1\). If we choose \(D^4\) to correspond to the value of \(\theta =2(\pi -\arcsin (\sqrt{1/3}))\), that is,

$$\begin{aligned} D^4_{11}=D^4_{33}=D^4_{13}=\frac{1}{6},\quad D^4_{22}=\frac{2}{3},\quad D^4_{12}=D^4_{23}=-\frac{1}{3}, \end{aligned}$$

then

$$\begin{aligned} C^2=\frac{2}{5}(D^1+D^2)+\frac{1}{5}D^4, \end{aligned}$$

and \(C^2\) lies in the simplex. Finally, choose \(D^5\) to correspond to the value of \(\theta =\pi \), that is,

$$\begin{aligned} D^5_{11}=D^5_{33}=D^5_{13}=\frac{1}{2}. \end{aligned}$$

The constructed simplex is not the one with the maximal possible Lebesgue measure, but the coefficients in the formulae are simple.
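
The convex-combination claim for \(C^2\) can be verified numerically. The sketch below (ours) builds the \(6\times 6\) matrices from the entries listed above and checks the identity together with symmetry, unit trace and non-negative definiteness of the vertices.

```python
import numpy as np

def sym(entries, n=6):
    """Symmetric n x n matrix from {(i, j): value} given in 1-based indices."""
    M = np.zeros((n, n))
    for (i, j), v in entries.items():
        M[i - 1, j - 1] = M[j - 1, i - 1] = v
    return M

C2 = sym({(1, 1): 2/15, (2, 2): 2/15, (3, 3): 2/15,
          (4, 4): 1/5, (5, 5): 1/5, (6, 6): 1/5,
          (1, 2): -1/15, (1, 3): -1/15, (2, 3): -1/15})
D1 = sym({(4, 4): 1/2, (6, 6): 1/2})
D2 = sym({(1, 1): 1/4, (3, 3): 1/4, (5, 5): 1/2, (1, 3): -1/4})
D4 = sym({(1, 1): 1/6, (3, 3): 1/6, (1, 3): 1/6, (2, 2): 2/3,
          (1, 2): -1/3, (2, 3): -1/3})

assert np.allclose(C2, 0.4 * (D1 + D2) + 0.2 * D4)      # C^2 = (2/5)(D^1 + D^2) + (1/5)D^4
for M in (C2, D1, D2, D4):
    assert np.isclose(np.trace(M), 1.0)                 # unit trace
    assert np.linalg.eigvalsh(M).min() >= -1e-12        # non-negative definite
```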

Theorem 3

Let \(\varepsilon (\mathbf {x})\) be a random field that describes the strain tensor of a deformable body. The following conditions are equivalent.

  1. 1.

    The matrix \(\tau f_{ijkl}(\lambda ,0,0)\) with \(\lambda >0\) takes values in the simplex described above.

  2. 2.

    The correlation tensor of the field has the spectral expansion

$$\begin{aligned} \langle \varepsilon (\mathbf {x}),\varepsilon (\mathbf {y})\rangle = \sum ^5_{n=1}\int ^{\infty }_0\sum ^5_{q=1} \tilde{N}_{nq}(\lambda ,\Vert \mathbf {r} \Vert )L^q_{ijkl}(\mathbf {r})\,\mathrm{d}\varPhi _n(\lambda ), \end{aligned}$$

where the non-zero functions \(\tilde{N}_{nq}(\lambda ,r)\) are given in Table 1, and where \(\varPhi _n(\lambda )\) are five finite measures on \([0,\infty )\) with

$$\begin{aligned} \varPhi _1(\{0\})=\varPhi _2(\{0\})=2\varPhi _4(\{0\}),\quad \varPhi _5(\{0\})=0. \end{aligned}$$

Assume that all measures \(\varPhi _n\) are absolutely continuous and their densities are either the Matérn or the dual Matérn densities. The two-point correlation tensors of the corresponding random fields can be calculated in exactly the same way as in Example 7.

Table 1 The functions \(\tilde{N}_{nq}(\lambda ,r)\)

Introduce the following notation:

$$\begin{aligned}&\mathsf {T}^{0,1}_{ijkl}=\frac{1}{3}\delta _{ij}\delta _{kl},\\&\mathsf {T}^{0,2}_{ijkl}=\frac{1}{\sqrt{5}} \sum _{n=-2}^{2}g^{n[i,j]}_{2[1,1]} g^{n[k,l]}_{2[1,1]},\\&\mathsf {T}^{2,1,m}_{ijkl}=\frac{1}{\sqrt{6}}(\delta _{ij}g^{m[k,l]}_{2[1,1]} +\delta _{kl}g^{m[i,j]}_{2[1,1]}),\quad -2\le m\le 2,\\&\mathsf {T}^{2,2,m}_{ijkl}= \sum _{n,q=-2}^{2}g^{m[n,q]}_{2[2,2]}g^{n[i,j]}_{2[1,1]} g^{q[k,l]}_{2[1,1]},\quad -2\le m\le 2,\\&\mathsf {T}^{4,1,m}_{ijkl}= \sum _{n,q=-4}^{4}g^{m[n,q]}_{4[2,2]}g^{n[i,j]}_{2[1,1]} g^{q[k,l]}_{2[1,1]},\quad -4\le m\le 4, \end{aligned}$$

where \(g^{n[n_1,n_2]}_{N[N_1,N_2]}\) are the so-called Godunov–Gordienko coefficients described in [14]. Introduce the following notation:

$$\begin{aligned} G^{\ell ''m''m}_{\ell 'm'p}=\sqrt{(2\ell '+1)(2\ell ''+1)} g^{m[m',m'']}_{p[\ell ',\ell '']}g^{0[0,0]}_{m[\ell ',\ell '']}. \end{aligned}$$

Consider the five nonnegative-definite matrices \(A^n\), \(1\le n\le 5\), with the following matrix entries:

$$\begin{aligned} a^{\ell ''m''kl,1}_{\ell 'm'ij}= & {} \left( \frac{1}{\sqrt{5}} \mathsf {T}^{0,2}_{ijkl}G^{\ell ''m''0}_{\ell 'm'0} -\frac{1}{5\sqrt{14}}\sum _{m=-2}^{2}\mathsf {T}^{2,2,m}_{ijkl} G^{\ell ''m''m}_{\ell 'm'2} -\frac{2\sqrt{2}}{9\sqrt{35}}\sum _{m=-4}^{4}\mathsf {T}^{4,1,m}_{ijkl} G^{\ell ''m''m}_{\ell 'm'4}\right) ,\\ a^{\ell ''m''kl,2}_{\ell 'm'ij}= & {} \left( \frac{1}{\sqrt{5}} \mathsf {T}^{0,2}_{ijkl}G^{\ell ''m''0}_{\ell 'm'0} +\frac{\sqrt{2}}{5\sqrt{7}}\sum _{m=-2}^{2}\mathsf {T}^{2,2,m}_{ijkl} G^{\ell ''m''m}_{\ell 'm'2}+\frac{1}{9\sqrt{70}}\sum _{m=-4}^{4}\mathsf {T}^{4,1,m}_{ijkl} G^{\ell ''m''m}_{\ell 'm'4}\right) ,\\ a^{\ell ''m''kl,3}_{\ell 'm'ij}= & {} \mathsf {T}^{0,1}_{ijkl}G^{\ell ''m''0}_{\ell 'm'0},\\ a^{\ell ''m''kl,4}_{\ell 'm'ij}= & {} \left( \frac{1}{9\sqrt{5}} \mathsf {T}^{0,2}_{ijkl}G^{\ell ''m''0}_{\ell 'm'0} -\frac{\sqrt{2}}{5\sqrt{7}}\sum _{m=-2}^{2} \mathsf {T}^{2,2,m}_{ijkl}G^{\ell ''m''m}_{\ell 'm'2} +\frac{\sqrt{2}}{3\sqrt{35}}\sum _{m=-4}^{4}\mathsf {T}^{4,1,m}_{ijkl} G^{\ell ''m''m}_{\ell 'm'4}\right) ,\\ a^{\ell ''m''kl,5}_{\ell 'm'ij}= & {} \left( \left( \frac{2}{3} \mathsf {T}^{0,1}_{ijkl}+\frac{1}{3\sqrt{5}} \mathsf {T}^{0,2}_{ijkl}\right) G^{\ell ''m''0}_{\ell 'm'0} +\left( \frac{2}{9}\sum _{m=-2}^{2}\mathsf {T}^{2,1,m}_{ijkl} -\frac{\sqrt{2}}{9\sqrt{7}}\sum _{m=-2}^{2}\mathsf {T}^{2,2,m}_{ijkl} \right) G^{\ell ''m''m}_{\ell 'm'2}\right. \\&+\,\left. \frac{\sqrt{2}}{9\sqrt{35}}\sum _{m=-4}^{4} \mathsf {T}^{4,1,m}_{ijkl}G^{\ell ''m''m}_{\ell 'm'4}\right) , \end{aligned}$$

and let \(L^n\) be the infinite lower triangular matrices from the Cholesky factorisation of the matrices \(A^n\).

Theorem 4

The following conditions are equivalent.

  1. 1.

    The matrix \(\tau f_{ij\ell m}(\lambda ,0,0)\) with \(\lambda >0\) takes values in the simplex described above.

  2. 2.

    The field \(\varepsilon (\mathbf {x})\) has the form

    $$\begin{aligned} \varepsilon _{ij}(\rho ,\theta ,\varphi )=C\delta _{ij}+2\sqrt{\pi } \sum _{n=1}^{5}\sum _{\ell =0}^{\infty } \sum _{m=-\ell }^{\ell }\int _{0}^{\infty }j_{\ell }(\lambda \rho )\,\mathrm{d} Z^{n^{\prime }}_{\ell mij}(\lambda )S^m_{\ell }(\theta ,\varphi ), \end{aligned}$$

    where

    $$\begin{aligned} Z^{n^{\prime }}_{\ell mij}(A)=\sum _{(\ell ^{\prime },m^{\prime },k,l)\le (\ell ,m,i,j)}L^n_{(\ell ,m,i,j),(\ell ^{\prime },m^{\prime },k,l)}Z^n_{\ell ^{\prime }m^{\prime }kl}(A), \end{aligned}$$

    and where \(Z^n_{\ell ^{\prime }m^{\prime }kl}\) is the sequence of uncorrelated scattered random measures on \([0,\infty )\) with control measures \(\varPhi _n\).

The idea of proof is as follows. Write down the Rayleigh expansion for \(\mathrm{e}^{\mathrm{i}(\mathbf {p},\mathbf {x})}\) and for \(\mathrm{e}^{-\mathrm{i}(\mathbf {p},\mathbf {y})}\) separately, substitute both expansions into (10) and use the following result, known as the Gaunt integral:

$$\begin{aligned} \int _{S^2}S^{m_1}_{\ell _1}(\theta ,\varphi )S^{m_2}_{\ell _2}(\theta ,\varphi ) S^{m_3}_{\ell _3}(\theta ,\varphi )\sin \theta \,\mathrm{d}\varphi \, \mathrm{d} \theta= & {} \sqrt{\frac{(2\ell _1+1)(2\ell _2+1)}{4\pi (2\ell _3+1)}}\\&\times \, g^{m_3[m_1,m_2]}_{\ell _3[\ell _1,\ell _2]}g^{0[0,0]}_{\ell _3[\ell _1,\ell _2]}. \end{aligned}$$

This theorem can be proved in exactly the same way as its complex counterpart, see, for example, [34]. Then apply Karhunen's theorem, see [19].

In order to simulate such fields numerically, one can use simulation algorithms based on spectral expansions. One such algorithm is described in [21] and realised using MATLAB®; see also the references therein. Other software, like R, may be used as well. In comparison with [21], only one new problem appears, namely the calculation of the Godunov–Gordienko coefficients \(g^{n[n_1,n_2]}_{N[N_1,N_2]}\). An algorithm for the calculation of these coefficients is given in [41]. It was implemented by the second-named author using MATLAB and used for the calculation of the syzygies and spectral expansions.
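
As an illustration of the spectral-expansion approach to simulation, here is a generic randomised-spectral sketch (ours, not the algorithm of [21]) for a scalar homogeneous and isotropic Matérn field on \(\mathbb {R}^3\); wavenumbers are sampled from the normalised spectral measure, and all parameter values are placeholders.

```python
import numpy as np

def simulate_matern_scalar(points, nu=1.0, a=2.0, sigma2=1.0,
                           n_harmonics=2000, seed=0):
    """Randomised spectral (random cosine) approximation of a centred scalar
    homogeneous and isotropic random field on R^3 with Matérn covariance (3).

    Wave vectors are sampled from the normalised Matérn spectral measure,
    which is a multivariate Student-t law with 2*nu degrees of freedom:
    k = a * z / sqrt(w), z ~ N(0, I_3), w ~ chi^2_{2 nu}.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_harmonics, 3))
    w = rng.gamma(shape=nu, scale=2.0, size=n_harmonics)   # chi^2 with 2*nu d.o.f.
    k = a * z / np.sqrt(w)[:, None]
    phase = rng.uniform(0.0, 2.0 * np.pi, size=n_harmonics)
    pts = np.atleast_2d(points)
    return np.sqrt(2.0 * sigma2 / n_harmonics) * np.cos(pts @ k.T + phase).sum(axis=1)

# Example: one realisation on a small grid along the x-axis.
xs = np.stack([np.linspace(0, 5, 50), np.zeros(50), np.zeros(50)], axis=1)
print(simulate_matern_scalar(xs)[:5])
```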

The significance of the Matérn class of tensor-valued random fields follows from the fact that scalar random fields with such a correlation structure are solutions of fractional analogues of the stochastic Helmholtz equation, and hence they are widely used in applications of isotropic random fields on Euclidean space as well as of spherical random fields obtained as the restriction of isotropic random fields onto the sphere, see [26, Example 2]. For an application of spherical tensor random fields to the estimation of parameters of the Cosmic Microwave Background, one can also propose an analogue of the Matérn-class tensor-valued correlation structure; a paper by the authors is currently in preparation.