# Time-Varying Isotropic Vector Random Fields on Compact Two-Point Homogeneous Spaces

## Abstract

A general form of the covariance matrix function is derived in this paper for a vector random field that is isotropic and mean square continuous on a compact connected two-point homogeneous space and stationary on a temporal domain. A series representation is presented for such a vector random field which involves Jacobi polynomials and the distance defined on the compact two-point homogeneous space.

## Introduction

Consider the sphere $$\mathbb {S}^d$$ embedded into $$\mathbb {R}^{d+1}$$ as follows: $$\mathbb {S}^d=\{\,\mathbf {x}\in \mathbb {R}^{d+1}:\Vert \mathbf {x}\Vert =1\,\}$$, and define the distance between the points $$\mathbf {x}_1$$ and $$\mathbf {x}_2$$ by $$\rho (\mathbf {x}_1,\mathbf {x}_2)=\cos ^{-1}(\mathbf {x}_1^{\top }\mathbf {x}_2)$$. With this distance, for any two pairs of points lying at equal distances apart, there is an isometry of $$\mathbb {S}^d$$ mapping one pair onto the other. A metric space with this property is called two-point homogeneous. A complete classification of connected and compact two-point homogeneous spaces was given in [40]. Besides spheres, the list includes projective spaces over different algebras; see Sect. 2 for details. It turns out that any such space is a manifold. We denote it by $$\mathbb {M}^d$$, where d is the topological dimension of the manifold. Following [24], denote by $$\mathbb {T}$$ either the set $$\mathbb {R}$$ of real numbers or the set $$\mathbb {Z}$$ of integers, and call it the temporal domain.

Let $$(\varOmega ,\mathfrak {F},\mathsf {P})$$ be a probability space.

### Definition 1

An $$\mathbb {R}^m$$-valued spatio-temporal random field $$\mathbf {Z}(\omega ,\mathbf {x},t):\varOmega \times \mathbb {M}^d \times \mathbb {T}\rightarrow \mathbb {R}^m$$ is called (wide-sense) isotropic over $$\mathbb {M}^d$$ and (wide-sense) stationary over the temporal domain $$\mathbb {T}$$, if its mean function $$\mathsf {E}[\mathbf {Z}(\mathbf {x}; t)]$$ equals a constant vector, and its covariance matrix function

\begin{aligned} {{\,\mathrm{cov}\,}}(\mathbf {Z}(\mathbf {x}_1; t_1), \mathbf {Z}(\mathbf {x}_2; t_2)) = \mathsf {E}\left[ (\mathbf {Z}(\mathbf {x}_1; t_1) -\mathsf {E}[\mathbf {Z}(\mathbf {x}_1; t_1)])(\mathbf {Z}(\mathbf {x}_2; t_2) -\mathsf {E}[\mathbf {Z}(\mathbf {x}_2; t_2)])^{\top }\right] , \\ \mathbf {x}_1, \mathbf {x}_2 \in \mathbb {M}^d, \quad t_1, t_2 \in \mathbb {T}, \end{aligned}

depends only on the time lag $$t_2-t_1$$ between $$t_2$$ and $$t_1$$ and the distance $$\rho (\mathbf {x}_1,\mathbf {x}_2)$$ between $$\mathbf {x}_1$$ and $$\mathbf {x}_2$$.

As usual, we omit the argument $$\omega \in \varOmega$$ in the notation for the random field under consideration. In such a case, the covariance matrix function is denoted by $$\mathsf {C} ( \rho (\mathbf {x}_1, \mathbf {x}_2); t)$$,

\begin{aligned} \mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); t_1-t_2) = \mathsf {E}\left[ (\mathbf {Z}(\mathbf {x}_1; t_1) -\mathsf {E}[\mathbf {Z}(\mathbf {x}_1; t_1)])(\mathbf {Z}(\mathbf {x}_2; t_2) -\mathsf {E}[\mathbf {Z}(\mathbf {x}_2; t_2)])^{\top }\right] , \\ \mathbf {x}_1, \mathbf {x}_2 \in \mathbb {M}^d, \quad t_1, t_2 \in \mathbb {T}. \end{aligned}

It is an $$m \times m$$ matrix function, $$\mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); -t) = ( \mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); t) )^{\top }$$, and the inequality

\begin{aligned} \sum _{i=1}^n \sum _{j=1}^n \mathbf {a}^{\top }_i \mathsf {C} (\rho (\mathbf {x}_i, \mathbf {x}_j); t_i-t_j) \mathbf {a}_j \ge 0 \end{aligned}

holds for every $$n \in \mathbb {N}$$, any $$\mathbf {x}_i \in \mathbb {M}^d$$, $$t_i \in \mathbb {T}$$, and $$\mathbf {a}_i \in \mathbb {R}^m$$ ($$i =1, 2, \ldots , n$$), where $$\mathbb {N}$$ stands for the set of positive integers and, in what follows, $$\mathbb {N}_0$$ denotes the set of nonnegative integers. On the other hand, given an $$m \times m$$ matrix function with these properties, there exists an m-variate Gaussian or elliptically contoured random field $$\{\, \mathbf {Z} (\mathbf {x}; t):\mathbf {x} \in \mathbb {M}^d, t \in \mathbb {T}\, \}$$ with $$\mathsf {C} ( \rho (\mathbf {x}_1, \mathbf {x}_2); t)$$ as its covariance matrix function [21].

For a scalar and purely spatial random field $$\{\, Z(\mathbf {x}):\mathbf {x} \in \mathbb {M}^d\, \}$$ that is isotropic and mean square continuous, its covariance function is continuous and possesses a series representation of the form [8, 14, 37]

\begin{aligned} {{\,\mathrm{cov}\,}}( Z (\mathbf {x}_1), Z( \mathbf {x}_2)) = \sum \limits _{n=0}^\infty b_n P_n^{ (\alpha , \beta ) } \left( \cos (\rho (\mathbf {x}_1, \mathbf {x}_2)) \right) ,\quad \mathbf {x}_1, \mathbf {x}_2 \in \mathbb {M}^d, \end{aligned}
(1)

where $$\{\, b_n:n \in \mathbb {N}_0\, \}$$ is a sequence of nonnegative numbers such that $$\sum \nolimits _{n=0}^\infty b_n P_n^{ (\alpha , \beta ) } (1)$$ converges, and $$P_n^{ (\alpha , \beta )} (x)$$ is the Jacobi polynomial of degree n with the pair of parameters $$(\alpha , \beta )$$ [1, 38] shown in Table 2. A general form of the covariance matrix function and a series representation are derived in [24] for a vector random field that is isotropic and mean square continuous on a sphere and stationary on a temporal domain. They are extended to $$\mathbb {M}^d \times \mathbb {T}$$ in this paper.
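To make the series (1) concrete, the following sketch evaluates a truncation of it on $$\mathbb {S}^2$$, where $$\alpha =\beta =0$$ and the Jacobi polynomials reduce to Legendre polynomials. The coefficient choice $$b_n=q^n$$ is purely illustrative (not from the paper); for this choice the Legendre generating function supplies a closed form to compare against.

```python
from math import cos, sqrt

def legendre(n, x):
    """Legendre polynomial P_n(x) = P_n^{(0,0)}(x), by the three-term recurrence."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x
    for k in range(2, n + 1):
        p_prev, p = p, ((2*k - 1) * x * p - (k - 1) * p_prev) / k
    return p

def cov(rho, q=0.4, terms=80):
    """Truncation of series (1) on S^2 with the illustrative choice b_n = q^n."""
    return sum(q**n * legendre(n, cos(rho)) for n in range(terms))

def cov_closed(rho, q=0.4):
    """Closed form via the Legendre generating function:
    sum_n q^n P_n(cos rho) = 1 / sqrt(1 - 2 q cos(rho) + q^2)."""
    return 1.0 / sqrt(1.0 - 2.0*q*cos(rho) + q*q)
```

For $$0<q<1$$ the truncated series agrees with the closed form to machine precision, illustrating the absolute convergence guaranteed by $$\sum _n b_n P_n^{(\alpha ,\beta )}(1)<\infty$$.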

Isotropic random fields over $$\mathbb {S}^d$$ with values in $$\mathbb {R}^1$$ and $$\mathbb {C}^1$$ were introduced in [35]. Theoretical investigations and practical applications of isotropic scalar-valued random fields on spheres may be found in [7, 11, 12, 19, 43], and vector- and tensor-valued random fields on spheres have been considered in [18, 23, 24, 30], among others. Cosmological applications, in particular, studies of tiny fluctuations of the Cosmic Microwave Background, require development of the theory of random sections of vector and tensor bundles over $$\mathbb {S}^2$$ [4, 15, 25, 27]. See also surveys of the topic in the monographs [26, 31, 42, 44]. Isotropic random fields on connected compact two-point homogeneous spaces are studied in [2, 14, 28, 29, 33], among others.

Some important properties of $$\mathbb {M}^d$$, $$\rho (\mathbf {x}_1, \mathbf {x}_2)$$, and $$P_n^{(\alpha , \beta )} (x)$$ are reviewed in Sect. 2, and two lemmas are derived: one as a special case of the Funk–Hecke formula on $$\mathbb {M}^d$$ and the other as a kind of probability interpretation. A series representation is given in Sect. 3 for an isotropic and mean square continuous vector random field on $$\mathbb {M}^d$$, together with a series expression of its covariance matrix function in terms of Jacobi polynomials. Section 4 deals with a spatio-temporal vector random field on $$\mathbb {M}^d\times \mathbb {T}$$ that is isotropic and mean square continuous on $$\mathbb {M}^d$$ and stationary on $$\mathbb {T}$$, and obtains a series representation for the random field and a general form for its covariance matrix function. The lemmas and theorems are proved in Appendix A.

## Compact Two-Point Homogeneous Spaces and Jacobi Polynomials

This section starts by recalling some important properties of the compact connected two-point homogeneous space $$\mathbb {M}^d$$ and of Jacobi polynomials, and then establishes two useful lemmas: a special case of the Funk–Hecke formula on $$\mathbb {M}^d$$ and its probability interpretation, both of which were conjectured in [24]. In what follows, we consider only connected compact two-point homogeneous spaces.

The compact connected two-point homogeneous spaces are shown in the first column of Table 1. Besides spheres, there are projective spaces over the fields $$\mathbb {R}$$ and $$\mathbb {C}$$, over the skew field $$\mathbb {H}$$ of quaternions, and over the algebra $$\mathbb {O}$$ of octonions. The possible values of d are chosen in such a way that all the spaces in Table 1 are different and exhaust the list. In the lowest dimensions, we have $$\mathbb {P}^1(\mathbb {R})=\mathbb {S}^1$$, $$\mathbb {P}^2(\mathbb {C})=\mathbb {S}^2$$, $$\mathbb {P}^4(\mathbb {H})=\mathbb {S}^4$$, and $$\mathbb {P}^8(\mathbb {O})=\mathbb {S}^8$$.

All compact two-point homogeneous spaces share the property [6] that all of their geodesic lines are closed. Moreover, all geodesic lines of a given space are circles of the same length. In particular, when the sphere $$\mathbb {S}^d$$ is embedded into the space $$\mathbb {R}^{d+1}$$ as described in Sect. 1, the length of any geodesic line is equal to that of the unit circle, that is, $$2\pi$$. It is natural to normalise the distance in such a way that the length of any geodesic line is equal to $$2\pi$$, exactly as in the case of the unit sphere.

There are at least two different approaches to the subject of compact two-point homogeneous spaces in the literature. They are reviewed in the next two subsections.

### An Approach Based on Lie Algebras

This approach goes back to Cartan [10]. It has been used in both the probabilistic literature [14] and the approximation theory literature [3].

Let G be the connected component of the group of isometries of $$\mathbb {M}^d$$, and let K be the stationary subgroup of a fixed point in $$\mathbb {M}^d$$, call it $$\mathbf {o}$$. Cartan [10] defined and calculated the numbers p and q, which are dimensions of some root spaces connected with the Lie algebras of the groups G and K. The groups G and K are listed in the second and the third columns of Table 1, while the numbers p and q are listed in the fourth and fifth columns of the table.

By [17, Theorem 11], if $$\mathbb {M}^d$$ is a two-point homogeneous space, then the only differential operators on $$\mathbb {M}^d$$ that are invariant under all isometries of $$\mathbb {M}^d$$ are the polynomials in a special differential operator $$\varDelta$$ called the Laplace–Beltrami operator. Let $$\hbox {d}\nu (\mathbf {x})$$ be the measure induced on the homogeneous space $$\mathbb {M}^d=G/K$$ by the invariant probability measure on G. It is possible to define $$\varDelta$$ as a self-adjoint operator in the space $$H=L^2(\mathbb {M}^d,\hbox {d}\nu (\mathbf {x}))$$. The spectrum of $$\varDelta$$ is discrete, and the eigenvalues are

\begin{aligned} \lambda _{n}=-\varepsilon n(\varepsilon n+\alpha +\beta +1), ~~~~~~ n \in \mathbb {N}_0, \end{aligned}

where

\begin{aligned} \alpha =(p+q-1)/2,\qquad \beta =(q-1)/2, \end{aligned}
(2)

and where $$\varepsilon =2$$ if $$\mathbb {M}^d= \mathbb {P}^d(\mathbb {R})$$ and $$\varepsilon =1$$ otherwise.

Let $$H_{n}$$ be the eigenspace of $$\varDelta$$ corresponding to $$\lambda _{n}$$. The space H is the Hilbert direct sum of its subspaces $$H_{n}$$, $$n\in \mathbb {N}_0$$. The space $$H_n$$ is finite-dimensional with

\begin{aligned} \dim H_n= \frac{(2n+\alpha +\beta +1)\varGamma (\beta +1) \varGamma (n+\alpha +\beta +1)\varGamma (n+\alpha +1)}{\varGamma (\alpha +1)\varGamma (\alpha +\beta +2)\varGamma (n+1)\varGamma (n+\beta +1)}. \end{aligned}
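The dimension formula can be cross-checked against classical counts of spherical harmonics. The short sketch below (illustrative; the parameter values $$\alpha =\beta =0$$ for $$\mathbb {S}^2$$ and $$\alpha =\beta =1$$ for $$\mathbb {S}^4$$ come from the relation $$\alpha =\beta =(d-2)/2$$ for spheres) computes $$\dim H_n$$ directly:

```python
from math import gamma

def dim_H(n, a, b):
    """Dimension of the eigenspace H_n for Jacobi parameters (alpha, beta) = (a, b)."""
    return ((2*n + a + b + 1) * gamma(b + 1) * gamma(n + a + b + 1) * gamma(n + a + 1)
            / (gamma(a + 1) * gamma(a + b + 2) * gamma(n + 1) * gamma(n + b + 1)))
```

For $$\mathbb {S}^2$$ this recovers the familiar $$2n+1$$ spherical harmonics of degree n, and for $$\mathbb {S}^4$$ it recovers $$(n+1)(n+2)(2n+3)/6$$, e.g. 14 harmonics of degree 2.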

Each of the spaces $$H_{n}$$ contains a unique one-dimensional subspace whose elements are K-spherical functions; that is, functions invariant under the action of K on $$\mathbb {M}^d$$. Such a function, say $$f_{n}(\mathbf {x})$$, depends only on the distance $$r=\rho (\mathbf {x},\mathbf {o})$$, $$f_{n}(\mathbf {x})=f^*_{n}(r)$$. A spherical function is called zonal if $$f^*_{n}(0)=1$$.

The zonal spherical functions of all compact connected two-point homogeneous spaces are listed in the last column of Table 1. To explain notation, we recall that the Jacobi polynomials

\begin{aligned} P_n^{(\alpha , \beta )} (x) = \frac{\varGamma (\alpha +n+1)}{n! \varGamma (\alpha +\beta +n+1)}\sum _{k=0}^n\left( {\begin{array}{c}n\\ k\end{array}}\right) \frac{\varGamma (\alpha +\beta +n+k+1)}{\varGamma ( \alpha +k+1 )} \left( \frac{x-1}{2} \right) ^k,\quad x \in [-1,1],\quad n \in \mathbb {N}_0, \end{aligned}

are the eigenfunctions of the Jacobi operator [38, Theorem 4.2.1]

\begin{aligned} \varDelta _x=\frac{1}{(1-x)^{\alpha }(1+x)^{\beta }}\frac{\hbox {d}}{\hbox {d}x} \left( (1-x)^{\alpha +1}(1+x)^{\beta +1}\frac{\hbox {d}}{\hbox {d}x}\right) . \end{aligned}

In the last column of Table 1, the normalised Jacobi polynomials are introduced,

\begin{aligned} R^{(\alpha ,\beta )}_{n}(x)=\frac{P^{(\alpha ,\beta )}_{n}(x)}{P^{(\alpha ,\beta )}_{n}(1)}, \qquad n \in \mathbb {N}_0, \end{aligned}

where

\begin{aligned} P^{(\alpha ,\beta )}_{n}(1)=\frac{\varGamma (n+\alpha +1)}{\varGamma (n+1)\varGamma (\alpha +1)}. \end{aligned}
(3)

The reason for the exceptional behaviour of the real projective spaces is as follows; see [14, 16]. The space $$\mathbb {P}^d(\mathbb {R})$$ may be constructed by identifying antipodal points on the sphere $$\mathbb {S}^d$$. An $$\hbox {O}(d)$$-invariant function f on $$\mathbb {P}^d(\mathbb {R})$$ can be lifted to an $$\hbox {SO}(d)$$-invariant function g on $$\mathbb {S}^d$$ by $$g(\mathbf {x})=f(\pi (\mathbf {x}))$$, where $$\pi$$ maps a point $$\mathbf {x}\in \mathbb {S}^d$$ to the pair of antipodal points $$\pi (\mathbf {x})\in \mathbb {P}^d(\mathbb {R})$$. This simply means that a function on [0, 1] can be extended to an even function on $$[-1,1]$$. Only the even polynomials define functions on the manifold so constructed. By [38, Equation (4.1.3)], we have

\begin{aligned} P^{(\alpha ,\beta )}_{n}(x)=(-1)^{n}P^{(\beta ,\alpha )}_{n}(-x). \end{aligned}

For the real projective spaces, $$\alpha =\beta$$, and the corresponding normalised Jacobi polynomials are even if and only if n is even.
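The reflection formula $$P^{(\alpha ,\beta )}_{n}(x)=(-1)^{n}P^{(\beta ,\alpha )}_{n}(-x)$$ is easy to confirm numerically, e.g. with `scipy`; the parameter values below are arbitrary illustrative choices.

```python
from scipy.special import eval_jacobi

# Check P_n^{(a,b)}(x) = (-1)^n P_n^{(b,a)}(-x) on a grid of points,
# with arbitrarily chosen parameters a = 1.5, b = 0.5.
a, b = 1.5, 0.5
for n in range(8):
    for x in [-0.9, -0.3, 0.0, 0.4, 0.8]:
        lhs = eval_jacobi(n, a, b, x)
        rhs = (-1)**n * eval_jacobi(n, b, a, -x)
        assert abs(lhs - rhs) < 1e-10
```

In particular, for $$\alpha =\beta$$ the formula shows that $$P^{(\alpha ,\alpha )}_{n}$$ is an even function exactly when n is even.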

### Remark 1

If two Lie groups have the same connected component of identity, then they have the same Lie algebra. For example, the groups $$\hbox {SO}(d)$$ and $$\hbox {O}(d)$$ have the same Lie algebra $$\mathfrak {so}(d)$$. That is, the approach based on Lie algebras gives the same values of p and q for spheres and real projective spaces of equal dimensions. Only zonal spherical functions can distinguish between the two cases.

Only in the case $$\mathbb {M}^d=\mathbb {S}^1$$ do we have $$p=q=0$$. The reason is that only in this case is the Lie algebra $$\mathfrak {so}(2)$$ commutative rather than semisimple, so that it has no nonzero root spaces at all.

### A Geometric Approach

There is a trick that allows us to write down the zonal spherical functions of all compact two-point homogeneous spaces in the same form; it is used in the probabilistic literature [2, 26, 28, 29, 33] and in approximation theory [9, 13]. Denote $$y=\cos (\rho (\mathbf {x},\mathbf {o})/2)$$. Then we have $$\cos (\rho (\mathbf {x},\mathbf {o}))=2y^2-1$$. For the case of $$\mathbb {M}^d= \mathbb {P}^d(\mathbb {R})$$, $$\alpha =\beta =(d-2)/2$$. By [38, Theorem 4.1],

\begin{aligned} P^{(\alpha ,\alpha )}_{2n}(y)=\frac{\varGamma (2n+\alpha +1)\varGamma (n+1)}{\varGamma (n+\alpha +1)\varGamma (2 n+1)}P^{(\alpha ,-1/2)}_{n}(2y^2-1). \end{aligned}

In terms of the normalised Jacobi polynomials, we obtain

\begin{aligned} R^{(\alpha ,\alpha )}_{2n}(\cos (\rho (\mathbf {x},\mathbf {o})/2)) =R^{(\alpha ,-1/2)}_{n}(\cos (\rho (\mathbf {x},\mathbf {o}))). \end{aligned}

For the case of $$\mathbb {M}^d= \mathbb {P}^d(\mathbb {R})$$, if we redefine $$\alpha =(d-2)/2$$, $$\beta =-1/2$$, then all zonal spherical functions of all compact two-point homogeneous spaces are given by the same expression $$R^{(\alpha ,\beta )}_{n}(\cos (\rho (\mathbf {x},\mathbf {o})))$$.
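The identity relating the two normalisations is easy to verify numerically. The sketch below (illustrative; $$\alpha =1/2$$ is an arbitrary choice) uses `scipy` for the Jacobi polynomials and (3) for the normalising constant:

```python
from math import gamma, cos
from scipy.special import eval_jacobi

def R(n, a, b, x):
    """Normalised Jacobi polynomial R_n^{(a,b)}(x) = P_n^{(a,b)}(x) / P_n^{(a,b)}(1),
    with P_n^{(a,b)}(1) = Gamma(n+a+1) / (Gamma(n+1) Gamma(a+1)) as in (3)."""
    return eval_jacobi(n, a, b, x) / (gamma(n + a + 1) / (gamma(n + 1) * gamma(a + 1)))

# Check R_{2n}^{(a,a)}(cos(r/2)) = R_n^{(a,-1/2)}(cos r) for the arbitrary choice a = 0.5.
a = 0.5
for n in range(6):
    for r in [0.3, 1.0, 2.0, 3.0]:
        assert abs(R(2*n, a, a, cos(r / 2)) - R(n, a, -0.5, cos(r))) < 1e-10
```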

It easily follows from (2) that the new values for p and q in the case of $$\mathbb {M}^d=\mathbb {P}^d(\mathbb {R})$$ are $$p=d-1$$ and $$q=0$$. It is interesting to note that the new values of p and q for the real projective spaces, together with their old values for the rest of the spaces, still have a meaning; see [13] and Table 2. This time, the values of p and q are connected with the geometry of the space $$\mathbb {M}^d$$ rather than with Lie algebras.

Specifically, let $$\mathbb {A}=\{\,\mathbf {x}\in \mathbb {M}^d :\rho (\mathbf {x},\mathbf {o})=\pi \,\}$$. This set is called the antipodal manifold of the point $$\mathbf {o}$$. The antipodal manifolds are listed in the sixth column of Table 2. Geometrically, if $$\mathbb {M}^d=\mathbb {S}^d$$ and $$\mathbf {o}$$ is the North pole, then $$\mathbb {A}=\mathbb {S}^0$$ is the South pole. Otherwise, $$\mathbb {A}$$ is the space at infinity of the point $$\mathbf {o}$$ in terms of projective geometry. The new number p turns out to be the dimension of the antipodal manifold, while the number $$p+q+1$$ is, as before, the dimension of the space $$\mathbb {M}^d$$ itself.

In what follows, we use the geometric approach. It turns out that all the spaces $$\mathbb {M}^d$$ are Riemannian manifolds, as defined in [5]. Each Riemannian manifold carries the canonical measure $$\mu$$; see [5, pp. 10–11]. The measure $$\mu$$ is proportional to the measure $$\nu$$ constructed in Sect. 2.1. The coefficient of proportionality, that is, the total measure $$\mu (\mathbb {M}^d)$$ of the compact manifold $$\mathbb {M}^d$$, is called the volume of $$\mathbb {M}^d$$.

### Lemma 1

The volume of the space $$\mathbb {M}^d$$ is

\begin{aligned} \omega _d=\mu (\mathbb {M}^d)=\frac{(4\pi )^{\alpha +1} \varGamma (\beta +1)}{\varGamma (\alpha +\beta +2)}. \end{aligned}
(4)

In what follows, we write just $$\hbox {d}\mathbf {x}$$ instead of $$\hbox {d}\mu (\mathbf {x})$$.
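Formula (4) can be cross-checked against the classical surface areas of spheres, using $$\alpha =\beta =(d-2)/2$$ for $$\mathbb {S}^d$$ (so that, for instance, $$\omega _2=4\pi$$ and $$\omega _3=2\pi ^2$$). A short sketch:

```python
from math import gamma, pi

def volume(a, b):
    """omega_d = (4 pi)^(a+1) Gamma(b+1) / Gamma(a+b+2), formula (4)."""
    return (4*pi)**(a + 1) * gamma(b + 1) / gamma(a + b + 2)

def sphere_area(d):
    """Surface area of the unit sphere S^d embedded in R^(d+1)."""
    return 2 * pi**((d + 1) / 2) / gamma((d + 1) / 2)

# For S^d the Jacobi parameters are a = b = (d - 2) / 2.
for d in [2, 3, 4, 8]:
    a = (d - 2) / 2
    assert abs(volume(a, a) - sphere_area(d)) < 1e-9
```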

### Orthogonal Properties of Jacobi Polynomials

The set of Jacobi polynomials $$\{\, P_n^{(\alpha , \beta )} (x):n \in \mathbb {N}_0, x \in \mathbb {R}\, \}$$ possesses two types of orthogonal properties. First, for each pair of $$\alpha >-1$$ and $$\beta >-1$$, this set is a complete orthogonal system on the interval $$[-1, 1]$$ with respect to the weight function $$(1-x)^\alpha (1+x)^\beta$$, in the sense that

\begin{aligned} \int _{-1}^1 P^{(\alpha , \beta )}_i (x) P^{(\alpha , \beta )}_j (x) (1-x)^\alpha (1+x)^\beta \hbox {d}x = \left\{ \begin{array}{ll} \frac{2^{\alpha +\beta +1} }{2 j +\alpha +\beta +1} \frac{\varGamma (j+\alpha +1) \varGamma (j+\beta +1)}{ j! \varGamma ( j +\alpha +\beta +1) }, ~ &{} ~ i =j, \\ 0, ~ &{} ~ i \ne j. \end{array}\right. \end{aligned}
(5)
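As a numerical sanity check of (5) (illustrative, not part of the development), the weighted inner products can be evaluated by quadrature with `scipy`; the parameter values $$\alpha =1$$, $$\beta =1/2$$ are arbitrary.

```python
from math import gamma
from scipy.integrate import quad
from scipy.special import eval_jacobi

def inner(i, j, a, b):
    """Weighted inner product of P_i^{(a,b)} and P_j^{(a,b)} on [-1, 1]."""
    f = lambda x: (eval_jacobi(i, a, b, x) * eval_jacobi(j, a, b, x)
                   * (1 - x)**a * (1 + x)**b)
    return quad(f, -1, 1)[0]

def norm2(j, a, b):
    """Right-hand side of (5) for i = j."""
    return (2**(a + b + 1) / (2*j + a + b + 1)
            * gamma(j + a + 1) * gamma(j + b + 1)
            / (gamma(j + 1) * gamma(j + a + b + 1)))
```

The off-diagonal inner products vanish to quadrature accuracy, and the diagonal ones match the closed form in (5).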

Second, for the values of $$\alpha$$ and $$\beta$$ given by (2), with p and q listed in Table 2, the Jacobi polynomials are orthogonal over $$\mathbb {M}^d$$, as the following lemma describes; it is derived from the Funk–Hecke formula recently established in [3]. In the particular case $$\mathbb {M}^d=\mathbb {S}^d$$, the Funk–Hecke formula may be found in classical references such as [1, 34].

### Lemma 2

For $$i, j \in \mathbb {N}_0$$, and $$\mathbf {x}_1$$, $$\mathbf {x}_2 \in \mathbb {M}^d$$,

\begin{aligned} \int _{\mathbb {M}^d } P_i^{(\alpha ,\beta ) } (\cos (\rho (\mathbf {x}_1,\mathbf {x}))) P_j^{(\alpha ,\beta )} (\cos (\rho (\mathbf {x}_2,\mathbf {x})))\,\mathrm{d}\mathbf {x} =\frac{\delta _{ij}\omega _d}{a_i^2} P_i^{(\alpha ,\beta )} (\cos (\rho (\mathbf {x}_1,\mathbf {x}_2))), \end{aligned}

where

\begin{aligned} a_n=\left( \frac{\varGamma (\beta +1)(2 n +\alpha +\beta +1)\varGamma (n+\alpha +\beta +1)}{\varGamma (\alpha +\beta +2)\varGamma (n+\beta +1)}\right) ^{\frac{1}{2}},\qquad n \in \mathbb {N}_0. \end{aligned}
(6)

The probabilistic interpretation of zonal spherical functions on $$\mathbb {M}^d$$ is provided in Lemma 3. The spherical case is given in [23].

### Definition 2

A random vector $$\mathbf {U}$$ is said to be uniformly distributed on $$\mathbb {M}^d$$ if, for every Borel set $$A\subseteq \mathbb {M}^d$$ and every isometry g we have $$\mathsf {P} (\mathbf {U}\in A ) =\mathsf {P} (\mathbf {U}\in gA)$$.

To construct $$\mathbf {U}$$, we start with a measure $$\sigma$$ proportional to the invariant measure $$\nu$$ of Sect. 2.1. Let $$T_{\mathbf {o}}$$ be the tangent space to $$\mathbb {M}^d$$ at the point $$\mathbf {o}$$. Choose a Cartesian coordinate system in $$T_{\mathbf {o}}$$ and identify this space with the space $$\mathbb {R}^{d}$$. Construct a chart $$\varphi :\mathbb {M}^d\setminus \mathbb {A}\rightarrow \mathbb {R}^{d}$$ as follows. Put $$\varphi (\mathbf {o})=\mathbf {0}\in \mathbb {R}^d$$. For any other point $$\mathbf {x}\in \mathbb {M}^d\setminus \mathbb {A}$$, draw the unique geodesic line connecting $$\mathbf {o}$$ and $$\mathbf {x}$$. Let $$\mathbf {r}\in \mathbb {R}^{d}$$ be the unit tangent vector to this geodesic line at $$\mathbf {o}$$, pointing towards $$\mathbf {x}$$. Define

\begin{aligned} \varphi (\mathbf {x})= \mathbf {r} \tan (\rho (\mathbf {x},\mathbf {o})/2), \end{aligned}

and, for each Borel set $$B\subseteq \mathbb {M}^d$$,

\begin{aligned} \sigma (B)=\int _{\varphi (B\setminus \mathbb {A})}\frac{\hbox {d}\mathbf {x}}{(1+\Vert \mathbf {x}\Vert ^2)^{\alpha +\beta +2}}. \end{aligned}

This measure is indeed invariant [39, p. 113]. Finally, define a probability space $$(\varOmega ', \mathfrak {F}', \mathsf {P}')$$ as follows: $$\varOmega '=\mathbb {M}^d$$, $$\mathfrak {F}'$$ is the $$\sigma$$-field of Borel subsets of $$\varOmega '$$, and

\begin{aligned} \mathsf {P}'(B)=\frac{\sigma (B)}{\sigma (\mathbb {M}^d)},\qquad B\in \mathfrak {F}'. \end{aligned}

The random vector $$\mathbf {U}(\omega )=\omega$$ is then uniformly distributed on $$\mathbb {M}^d$$.

### Lemma 3

Let $$\mathbf {U}$$ be a random vector uniformly distributed on $$\mathbb {M}^d$$. For $$n \in \mathbb {N}$$,

\begin{aligned} Z_n(\mathbf {x})=a_n P_n^{(\alpha ,\beta )} (\cos (\rho (\mathbf {x},\mathbf {U}))), \qquad \mathbf {x}\in \mathbb {M}^d, \end{aligned}

is a centred isotropic random field with covariance function

\begin{aligned} {{\,\mathrm{cov}\,}}( Z_n (\mathbf {x}_1), Z_n (\mathbf {x}_2) ) =P_n^{ (\alpha ,\beta )} (\cos (\rho (\mathbf {x}_1, \mathbf {x}_2))), ~~~~~ \mathbf {x}_1, \mathbf {x}_2 \in \mathbb {M}^d, \end{aligned}

where $$a_n$$ is given by (6). Moreover, for $$k \ne n$$, the random fields $$\{\, Z_k (\mathbf {x}):\mathbf {x} \in \mathbb {M}^d\, \}$$ and $$\{\,Z_n(\mathbf {x}): \mathbf {x} \in \mathbb {M}^d\, \}$$ are uncorrelated:

\begin{aligned} {{\,\mathrm{cov}\,}}(Z_k (\mathbf {x}_1), Z_n (\mathbf {x}_2) ) =0, ~~~~ \mathbf {x}_1, \mathbf {x}_2 \in \mathbb {M}^d. \end{aligned}
(7)
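Lemma 3 can be illustrated by Monte Carlo simulation on $$\mathbb {S}^2$$, where $$\alpha =\beta =0$$, the Jacobi polynomials reduce to Legendre polynomials, and (6) gives $$a_n=\sqrt{2n+1}$$. The sketch below (with arbitrary test points and $$n=2$$) estimates the covariance empirically:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample uniform points U on S^2 by normalising Gaussian vectors.
N = 400_000
U = rng.normal(size=(N, 3))
U /= np.linalg.norm(U, axis=1, keepdims=True)

x1 = np.array([0.0, 0.0, 1.0])
x2 = np.array([np.sin(1.0), 0.0, np.cos(1.0)])  # rho(x1, x2) = 1

def P2(t):
    """Legendre polynomial of degree 2."""
    return 0.5 * (3*t**2 - 1)

a2 = np.sqrt(5.0)                 # a_n = sqrt(2n + 1) for n = 2 on S^2
Z1 = a2 * P2(U @ x1)              # Z_2(x1); centred since E[P_2(x . U)] = 0
Z2 = a2 * P2(U @ x2)
emp = np.mean(Z1 * Z2)            # empirical covariance, to be compared
                                  # with P_2(cos rho(x1, x2))
```

With $$4\times 10^5$$ samples the empirical covariance agrees with $$P_2(\cos 1)$$ to within Monte Carlo error.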

## Isotropic Vector Random Fields on $$\mathbb {M}^d$$

In the purely spatial case, this section presents a series representation for an m-variate isotropic and mean square continuous random field $$\{\, \mathbf {Z} (\mathbf {x}):\mathbf {x} \in \mathbb {M}^d\, \}$$ and a series expression for its covariance matrix function, in terms of Jacobi polynomials. By mean square continuous, we mean that, for $$k =1, \ldots , m$$,

\begin{aligned} \mathsf {E}\left[ | Z_k (\mathbf {x}_1) -Z_k (\mathbf {x}_2) |^2\right] \rightarrow 0, ~~ \text{ as } ~~ \rho (\mathbf {x}_1, \mathbf {x}_2 ) \rightarrow 0, ~ \mathbf {x}_1, \mathbf {x}_2 \in \mathbb {M}^d. \end{aligned}

It implies the continuity of each entry of the associated covariance matrix function in terms of $$\rho (\mathbf {x}_1, \mathbf {x}_2)$$.

In what follows, d is assumed to be greater than 1; when $$d=1$$, $$\mathbb {M}^d$$ reduces to the unit circle $$\mathbb {S}^1$$, on which the treatment of isotropic vector random fields may be found in [23, 24]. For an $$m \times m$$ symmetric and nonnegative definite matrix $$\mathsf {B}$$ with nonnegative eigenvalues $$\lambda _1, \ldots , \lambda _m$$, there is an orthogonal matrix $$\mathsf {S}$$ such that $$\mathsf {S}^{-1}\mathsf {B}\mathsf {S}=\mathsf {D}$$, where $$\mathsf {D}$$ is a diagonal matrix with diagonal entries $$\lambda _1, \ldots , \lambda _m$$. Define the square root of $$\mathsf {B}$$ by

\begin{aligned} \mathsf {B}^{\frac{1}{2}}=\mathsf {S}\mathsf {D}^{\frac{1}{2}}\mathsf {S}^{-1}, \end{aligned}

where $$\mathsf {D}^{\frac{1}{2}}$$ is a diagonal matrix with diagonal entries $$\sqrt{\lambda _1}, \ldots , \sqrt{ \lambda _m}$$. Clearly, $$\mathsf {B}^{\frac{1}{2}}$$ is symmetric, nonnegative definite, and $$(\mathsf {B}^{\frac{1}{2}})^2=\mathsf {B}$$. Denote by $$\mathsf {I}_m$$ the $$m \times m$$ identity matrix. For a sequence of $$m \times m$$ matrices $$\{\, \mathsf {B}_n:n \in \mathbb {N}_0 \,\}$$, the series $$\sum \nolimits _{n=0}^\infty \mathsf {B}_n$$ is said to converge if each of its entries converges.
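The construction of $$\mathsf {B}^{\frac{1}{2}}$$ can be written out directly via a symmetric eigendecomposition; the sketch below is an illustrative implementation with an arbitrary test matrix.

```python
import numpy as np

def sqrt_psd(B):
    """Square root B^(1/2) = S D^(1/2) S^(-1) of a symmetric
    nonnegative definite matrix B."""
    lam, S = np.linalg.eigh(B)               # S is orthogonal, so S^(-1) = S^T
    lam = np.clip(lam, 0.0, None)            # guard against tiny negative round-off
    return S @ np.diag(np.sqrt(lam)) @ S.T

B = np.array([[2.0, 1.0],
              [1.0, 2.0]])                   # eigenvalues 1 and 3, nonnegative
R = sqrt_psd(B)
```

The result is symmetric, nonnegative definite, and satisfies $$(\mathsf {B}^{\frac{1}{2}})^2=\mathsf {B}$$.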

### Theorem 1

Suppose that $$\{\, \mathbf {V}_n:n \in \mathbb {N}_0\, \}$$ is a sequence of independent m-variate random vectors with $$\mathsf {E} ( \mathbf {V}_n)= \mathbf {0}$$ and $${{\,\mathrm{cov}\,}}( \mathbf {V}_n, \mathbf {V}_n ) = a_n^2\mathsf {I}_m$$, $$\mathbf {U}$$ is a random vector uniformly distributed on $$\mathbb {M}^d$$ and is independent of $$\{\, \mathbf {V}_n:n \in \mathbb {N}_0\, \}$$, and that $$\{\, \mathsf {B}_n:n \in \mathbb {N}_0\, \}$$ is a sequence of $$m \times m$$ symmetric nonnegative definite matrices. If the series $$\sum \nolimits _{n=0}^\infty \mathsf {B}_n P_n^{ (\alpha , \beta ) } (1)$$ converges, then

\begin{aligned} \mathbf {Z} (\mathbf {x}) = \sum _{n=0}^\infty \mathsf {B}_n^{\frac{1}{2}} \mathbf {V}_n P_n^{ (\alpha , \beta ) } ( \cos \rho (\mathbf {x}, \mathbf {U} )), ~~~~~~ \mathbf {x} \in \mathbb {M}^d, \end{aligned}
(8)

is a centred m-variate isotropic random field on $$\mathbb {M}^d$$, with covariance matrix function

\begin{aligned} {{\,\mathrm{cov}\,}}( \mathbf {Z} (\mathbf {x}_1), \mathbf {Z}(\mathbf {x}_2) ) = \sum _{n=0}^\infty \mathsf {B}_n P_n^{(\alpha , \beta ) } \left( \cos \rho (\mathbf {x}_1, \mathbf {x}_2) \right) , ~~~~~~ \mathbf {x}_1, \mathbf {x}_2 \in \mathbb {M}^d. \end{aligned}
(9)

The terms of (8) are uncorrelated; more precisely,

\begin{aligned} {{\,\mathrm{cov}\,}}\left( \mathsf {B}_i^{\frac{1}{2}} \mathbf {V}_i P_i^{ (\alpha , \beta ) } ( \cos \rho (\mathbf {x}_1, \mathbf {U})), ~ \mathsf {B}_j^{\frac{1}{2}} \mathbf {V}_j P_j^{ (\alpha , \beta ) } ( \cos \rho (\mathbf {x}_2, \mathbf {U} )) \right) = \mathbf {0}, ~~~ \mathbf {x}_1, \mathbf {x}_2 \in \mathbb {M}^d, ~ i \ne j. \end{aligned}

Since $$\left| P_n^{ (\alpha , \beta ) } (\cos \vartheta ) \right| \le P_n^{ (\alpha , \beta ) } (1), n \in \mathbb {N}_0,$$ the assumed convergence of the series $$\sum \nolimits _{n=0}^\infty \mathsf {B}_n P_n^{ (\alpha , \beta ) } (1)$$ ensures not only the mean square convergence of the series on the right-hand side of (8), but also the uniform and absolute convergence of the series on the right-hand side of (9).
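The bound $$\left| P_n^{(\alpha ,\beta )}(\cos \vartheta )\right| \le P_n^{(\alpha ,\beta )}(1)$$ used here can itself be checked numerically. The sketch below scans a grid of $$\vartheta$$ values for a few illustrative parameter pairs with $$\alpha \ge \beta$$ (the range arising from (2)), using (3) for $$P_n^{(\alpha ,\beta )}(1)$$:

```python
from math import gamma, cos, pi
from scipy.special import eval_jacobi

def P1(n, a, b):
    """P_n^{(a,b)}(1) = Gamma(n+a+1) / (Gamma(n+1) Gamma(a+1)), formula (3)."""
    return gamma(n + a + 1) / (gamma(n + 1) * gamma(a + 1))

# Illustrative (alpha, beta) pairs with alpha >= beta >= -1/2; the last pair
# has beta = -1/2, as in the geometric convention for the real projective spaces.
pairs = [(0.0, 0.0), (0.5, 0.5), (1.0, 0.0), (2.5, 0.5), (1.5, -0.5)]
for a, b in pairs:
    for n in range(10):
        m = max(abs(eval_jacobi(n, a, b, cos(t * pi / 50)))  # theta grid on [0, pi]
                for t in range(51))
        assert m <= P1(n, a, b) + 1e-9
```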

When $$\mathbb {M}^d=\mathbb {S}^2$$ and $$m=1$$, we have $$\dim H_n=2n+1$$, and (9) takes the form

\begin{aligned} {{\,\mathrm{cov}\,}}( Z (\mathbf {x}_1), Z(\mathbf {x}_2) ) = \sum _{n=0}^\infty b_n P_n\left( \cos \rho (\mathbf {x}_1, \mathbf {x}_2) \right) , \end{aligned}

where $$P_n (x)$$ are Legendre polynomials. In the theory of Cosmic Microwave Background, this equation is traditionally written in the form

\begin{aligned} {{\,\mathrm{cov}\,}}( Z (\mathbf {x}_1), Z(\mathbf {x}_2) ) = \sum _{\ell =0}^\infty (2\ell +1)C_{\ell } P_{\ell }\left( \mathbf {x}_1\cdot \mathbf {x}_2\right) , \end{aligned}

and the sequence $$\{\,C_{\ell }:\ell \ge 0\,\}$$ is called the angular power spectrum. In the general case, define the angular power spectrum by

\begin{aligned} \mathsf {C}_n=\frac{1}{\dim H_n}\mathsf {B}_n. \end{aligned}

Many examples of the angular power spectrum for general compact two-point homogeneous spaces may be found in [2].

As the next theorem indicates, (9) is a general form that the covariance matrix function of an m-variate isotropic and mean square continuous random field on $$\mathbb {M}^d$$ must take.

### Theorem 2

For an m-variate isotropic and mean square continuous random field $$\{\, \mathbf {Z}(\mathbf {x}):\mathbf {x} \in \mathbb {M}^d\, \}$$, its covariance matrix function $${{\,\mathrm{cov}\,}}( \mathbf {Z}(\mathbf {x}_1), \mathbf {Z} (\mathbf {x}_2) )$$ is of the form

\begin{aligned} \mathsf {C} ( \mathbf {x}_1, \mathbf {x}_2 ) = \sum _{n=0}^\infty \mathsf {B}_n P_n^{ (\alpha , \beta ) } \left( \cos \rho (\mathbf {x}_1, \mathbf {x}_2) \right) , ~~~~~~ \mathbf {x}_1, \mathbf {x}_2 \in \mathbb {M}^d, \end{aligned}
(10)

where $$\{\,\mathsf {B}_n:n \in \mathbb {N}_0\, \}$$ is a sequence of $$m \times m$$ nonnegative definite matrices and the series $$\sum \nolimits _{n=0}^\infty \mathsf {B}_n P_n^{ (\alpha , \beta ) } (1)$$ converges.

Conversely, if an $$m \times m$$ matrix function $$\mathsf {C} (\mathbf {x}_1, \mathbf {x}_2)$$ is of the form (10), then it is the covariance matrix function of an m-variate isotropic Gaussian or elliptically contoured random field on $$\mathbb {M}^d$$.

Examples of covariance matrix functions on $$\mathbb {S}^d$$ may be found in, for instance, [23, 24]. It would be desirable to develop parametric and semi-parametric covariance matrix structures on $$\mathbb {M}^d$$.

## Time-Varying Isotropic Vector Random Fields on $$\mathbb {M}^d$$

For an m-variate random field $$\{\, \mathbf {Z} (\mathbf {x}; t):\mathbf {x} \in \mathbb {M}^d, t \in \mathbb {T}\, \}$$ that is isotropic and mean square continuous over $$\mathbb {M}^d$$ and stationary on $$\mathbb {T}$$, this section presents the general form of its covariance matrix function $$\mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); t)$$, which is a continuous function of $$\rho (\mathbf {x}_1, \mathbf {x}_2)$$ and is also a continuous function of $$t \in \mathbb {R}$$ if $$\mathbb {T} = \mathbb {R}$$. A series representation is given in the following theorem for such a random field, as an extension of that on $$\mathbb {S}^d \times \mathbb {T}$$.

### Theorem 3

If an m-variate random field $$\{ \mathbf {Z} (\mathbf {x}; t), \mathbf {x} \in \mathbb {M}^d, t \in \mathbb {T} \}$$ is isotropic and mean square continuous over $$\mathbb {M}^d$$ and stationary on $$\mathbb {T}$$, then

\begin{aligned} \mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); -t) = ( \mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); t) )^{\top }, \end{aligned}

and $$\frac{\mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); t) + \mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); -t)}{2}$$ is of the form

\begin{aligned}&\frac{\mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); t) + \mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); -t)}{2} \nonumber \\&\quad = \sum \limits _{n=0}^\infty \mathsf {B}_n (t) P_n^{ (\alpha , \beta ) } (\cos \rho (\mathbf {x}_1, \mathbf {x}_2)), \quad \mathbf {x}_1, \mathbf {x}_2\in \mathbb {M}^d, t\in \mathbb {T}, \end{aligned}
(11)

where, for each fixed $$n \in \mathbb {N}_0$$, $$\mathsf {B}_n (t)$$ is a stationary covariance matrix function on $$\mathbb {T}$$, and, for each fixed $$t \in \mathbb {T}$$, $$\mathsf {B}_n (t)$$ ($$n \in \mathbb {N}_0$$) are $$m \times m$$ symmetric matrices and $$\sum \nolimits _{n=0}^\infty \mathsf {B}_n (t) P_n^{ (\alpha , \beta ) } (1)$$ converges.

While Theorem 3 gives a general form of $$\frac{\mathsf {C} ( \rho (\mathbf {x}_1, \mathbf {x}_2); t) + \mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); -t)}{2}$$ rather than of $$\mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); t)$$ itself, a general form of $$\mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); t)$$ can be obtained in certain special cases, such as the spatio-temporally symmetric case and the purely spatial case.

### Corollary 1

If $$\mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); t)$$ is spatio-temporally symmetric in the sense that

\begin{aligned} \mathsf {C} ( \rho (\mathbf {x}_1, \mathbf {x}_2); - t ) =\mathsf {C} ( \rho (\mathbf {x}_1, \mathbf {x}_2); t ), ~~~~~~~~ \mathbf {x}_1, \mathbf {x}_2 \in \mathbb {M}^d, ~ t \in \mathbb {T}, \end{aligned}

then it takes the form

\begin{aligned} \mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); t) = \sum \limits _{n=0}^\infty \mathsf {B}_n (t) P_n^{ (\alpha , \beta ) } (\cos \rho (\mathbf {x}_1, \mathbf {x}_2)), ~~ \mathbf {x}_1, \mathbf {x}_2 \in \mathbb {M}^d, ~ t \in \mathbb {T}. \end{aligned}

In contrast to those in (11), the $$m \times m$$ matrices $$\mathsf {B}_n (t)$$ ($$n \in \mathbb {N}_0$$) in the next theorem are not necessarily symmetric. One simple such example is

\begin{aligned} \mathsf {B}_n (t) = \left\{ \begin{array}{ll} \mathsf {I}_m + \mathsf {A} \mathsf {A}^{\top }, ~ &{} ~ t=0, \\ \mathsf {A}, ~ &{} ~ t=1, \\ \mathsf {A}^{\top }, ~ &{} ~ t=-1, \\ \mathbf {0}, ~ &{} ~ |t| \ge 2, \end{array}\right. \end{aligned}

which is the covariance matrix function of an m-variate first order moving average time series $$\mathbf {Z} (t) = \varvec{\varepsilon } (t) + \mathsf {A} \varvec{\varepsilon } (t-1)$$, $$t \in \mathbb {Z}$$, where $$\{\, \varvec{\varepsilon } (t):t \in \mathbb {Z}\, \}$$ is m-variate white noise with $$\mathsf {E}[ \varvec{\varepsilon } (t)] = \mathbf {0}$$ and $${{\,\mathrm{cov}\,}}( \varvec{\varepsilon } (t_1), \varvec{\varepsilon } (t_2) ) = \delta _{t_1 t_2} \mathsf {I}_m$$, and $$\mathsf {A}$$ is an $$m \times m$$ matrix.
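For a first order moving average series $$\mathbf {Z}(t)=\varvec{\varepsilon }(t)+\mathsf {A}\varvec{\varepsilon }(t-1)$$ driven by unit white noise, the lag-0 and lag-1 covariances are $$\mathsf {I}_m+\mathsf {A}\mathsf {A}^{\top }$$ and $$\mathsf {A}$$, the latter generally not symmetric. A quick simulation (illustrative; the matrix $$\mathsf {A}$$ below is arbitrary) confirms this:

```python
import numpy as np

rng = np.random.default_rng(1)

A = np.array([[0.5, 0.2],
              [-0.3, 0.4]])
T = 400_000
eps = rng.normal(size=(T + 1, 2))          # unit white noise
Z = eps[1:] + eps[:-1] @ A.T               # Z(t) = eps(t) + A eps(t-1)

def emp_cov(lag):
    """Empirical E[Z(t + lag) Z(t)^T]."""
    return (Z[lag:].T @ Z[:T - lag]) / (T - lag)

B0, B1 = emp_cov(0), emp_cov(1)            # compare with I + A A^T and A
```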

### Theorem 4

An $$m \times m$$ matrix function

\begin{aligned} \mathsf {C} ( \rho (\mathbf {x}_1, \mathbf {x}_2); t) = \sum \limits _{n=0}^\infty \mathsf {B}_n (t) P_n^{ (\alpha , \beta ) } (\cos \rho (\mathbf {x}_1, \mathbf {x}_2)), ~ ~~ \mathbf {x}_1, \mathbf {x}_2 \in \mathbb {M}^d, ~ t \in \mathbb {T}, \end{aligned}
(12)

is the covariance matrix function of an m-variate Gaussian or elliptically contoured random field on $$\mathbb {M}^d \times \mathbb {T}$$ if and only if $$\{\, \mathsf {B}_n (t):n \in \mathbb {N}_0\, \}$$ is a sequence of stationary covariance matrix functions on $$\mathbb {T}$$ and $$\sum \nolimits _{n=0}^\infty \mathsf {B}_n (0) P_n^{ (\alpha , \beta ) } (1)$$ converges.
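A bivariate ($$m = 2$$) instance of Theorem 4 can be sketched numerically; the choices below are illustrative assumptions, again on $$\mathbb {S}^2$$ where the Jacobi polynomials reduce to Legendre polynomials: $$\mathsf {B}_n(t) = 0.5^n \, 0.7^{|t|} \mathsf {A}$$, with $$\mathsf {A}$$ a fixed nonnegative definite $$2 \times 2$$ matrix and $$0.7^{|t|}$$ an AR(1)-type stationary covariance on $$\mathbb {Z}$$.

```python
import numpy as np
from scipy.special import eval_legendre

rng = np.random.default_rng(1)
A = np.array([[2.0, 0.8],
              [0.8, 1.0]])              # nonnegative definite 2x2 matrix
npts = 25
X = rng.normal(size=(npts, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # points on S^2
t = rng.integers(0, 4, size=npts)               # integer times

def spatial(c, N=25):
    # truncated sum_n 0.5**n P_n(c), a valid isotropic kernel on S^2
    return sum(0.5**n * eval_legendre(n, c) for n in range(N))

# scalar space-time kernel at the sampled (x_i, t_i)
K = np.array([[spatial(np.clip(X[i] @ X[j], -1.0, 1.0)) * 0.7**abs(t[i] - t[j])
               for j in range(npts)] for i in range(npts)])
C = np.kron(K, A)    # bivariate block covariance matrix (2*npts x 2*npts)
assert np.linalg.eigvalsh(C).min() > -1e-8
```

Since $$\mathsf {B}_n(t)$$ here is separable, the block matrix is a Kronecker product of two nonnegative definite matrices, hence itself nonnegative definite.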

As an example of (12) on $$\mathbb {M}^d \times \mathbb {Z}$$, let

\begin{aligned} \mathsf {B}_n (t) = \left\{ \begin{array}{ll} \mathsf {A}_n, ~ &{} ~ t = 0, \\ \mathbf {0}, ~ &{} ~ t \ne 0, \end{array} \right. \qquad n \in \mathbb {N}_0, \end{aligned}

where $$\{\, \mathsf {A}_n :n \in \mathbb {N}_0 \,\}$$ is a sequence of $$m \times m$$ nonnegative definite matrices and $$\sum \nolimits _{n=0}^\infty \mathsf {A}_n P_n^{ (\alpha , \beta ) } (1)$$ converges. In this case, (12) is the covariance matrix function of an m-variate Gaussian or elliptically contoured random field on $$\mathbb {M}^d \times \mathbb {Z}$$.

Gaussian and second-order elliptically contoured random fields form one of the largest classes, if not the largest, that allows any possible correlation structure [21]. The covariance matrix functions developed in Theorem 4 can be adopted for a Gaussian or elliptically contoured vector random field. However, they may not be admissible for other non-Gaussian random fields, such as log-Gaussian [32], $$\chi ^2$$ [20], K-distributed [22], or skew-Gaussian ones, for which admissible correlation structures must be investigated on a case-by-case basis. A series representation is given in the following theorem for an m-variate spatio-temporal random field on $$\mathbb {M}^d\times \mathbb {T}$$.

### Theorem 5

An m-variate random field

\begin{aligned} \mathbf {Z} (\mathbf {x}; t) = \sum _{n=0}^\infty \mathbf {V}_n (t) P_n^{ (\alpha , \beta ) } ( \cos \rho (\mathbf {x}, \mathbf {U})), ~~~~~~ \mathbf {x} \in \mathbb {M}^d, ~ t \in \mathbb {T}, \end{aligned}
(13)

is isotropic and mean square continuous on $$\mathbb {M}^d$$, stationary on $$\mathbb {T}$$, and possesses mean $$\mathbf {0}$$ and covariance matrix function (12), where $$\{ \,\mathbf {V}_n (t):n \in \mathbb {N}_0 \, \}$$ is a sequence of independent m-variate stationary stochastic processes on $$\mathbb {T}$$ with

\begin{aligned} \mathsf {E} [ \mathbf {V}_n (t) ]= \mathbf {0}, ~~~ {{\,\mathrm{cov}\,}}( \mathbf {V}_n (t_1), \mathbf {V}_n (t_2) ) = a_n^2 \mathsf {B}_n (t_1-t_2), ~~~ n \in \mathbb {N}_0, \end{aligned}

the random vector $$\mathbf {U}$$ is uniformly distributed on $$\mathbb {M}^d$$ and independent of $$\{\, \mathbf {V}_n (t) :n \in \mathbb {N}_0\, \}$$, and $$\sum \nolimits _{n=0}^\infty \mathsf {B}_n (0) P_n^{ (\alpha , \beta ) } (1)$$ converges.
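A Monte Carlo check of this representation (a sketch under assumed ingredients, not the paper's construction) can be run on $$\mathbb {S}^2$$ with $$m = 1$$ and a single time slice, in which case each $$\mathbf {V}_n(t)$$ reduces to a Gaussian variable with variance $$a_n^2 \mathsf {B}_n(0)$$. On $$\mathbb {S}^2$$ one may take $$a_n^2 = 2n + 1$$, since averaging over a uniform $$\mathbf {U}$$ gives $$\mathsf {E}[P_n(\mathbf {x} \cdot \mathbf {U}) P_n(\mathbf {y} \cdot \mathbf {U})] = P_n(\mathbf {x} \cdot \mathbf {y})/(2n+1)$$; the empirical covariance should then match (12).

```python
import numpy as np
from scipy.special import eval_legendre

rng = np.random.default_rng(2)
N, K = 8, 200_000                    # series truncation, Monte Carlo size
b = 0.5 ** np.arange(N)              # assumed B_n(0) = b_n, scalar case m = 1

# uniform U on S^2, and independent V_n ~ N(0, (2n+1) b_n), i.e. a_n^2 = 2n+1
U = rng.normal(size=(K, 3))
U /= np.linalg.norm(U, axis=1, keepdims=True)
V = rng.normal(size=(K, N)) * np.sqrt((2 * np.arange(N) + 1) * b)

x = np.array([0.0, 0.0, 1.0])
y = np.array([1.0, 0.0, 0.0])        # rho(x, y) = pi/2
Zx = sum(V[:, n] * eval_legendre(n, U @ x) for n in range(N))
Zy = sum(V[:, n] * eval_legendre(n, U @ y) for n in range(N))

# target covariance from (12): sum_n b_n P_n(cos rho(x, y))
target = sum(b[n] * eval_legendre(n, x @ y) for n in range(N))
assert abs(np.mean(Zx * Zy) - target) < 0.05
```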

The distinct terms of (13) are uncorrelated with each other,

\begin{aligned}&{{\,\mathrm{cov}\,}}\left( \mathbf {V}_i (t) P_i^{ (\alpha , \beta ) } ( \cos \rho (\mathbf {x}, \mathbf {U}) ), ~ \mathbf {V}_j (t) P_j^{ (\alpha , \beta ) } ( \cos \rho (\mathbf {x}, \mathbf {U}) ) \right) = \mathbf {0},\\&\quad \mathbf {x} \in \mathbb {M}^d, ~ t \in \mathbb {T}, i \ne j, \end{aligned}

due to Lemma 3 and the independence assumption among $$\mathbf {U}, \mathbf {V}_i (t), \mathbf {V}_j (t)$$. The vector stochastic process $$\mathbf {V}_n (t)$$ can be expressed, in terms of $$\mathbf {Z} (\mathbf {x}; t)$$ and $$\mathbf {U}$$, as

\begin{aligned} \mathbf {V}_n (t) = \frac{a^2_n}{\omega _d P_n^{ (\alpha , \beta ) } (1)} \int _{\mathbb {M}^d} \mathbf {Z} (\mathbf {x}; t) P_n^{ (\alpha , \beta ) } (\cos \rho (\mathbf {x}, \mathbf {U})) \mathrm{d} \mathbf {x}, ~~~~~ t \in \mathbb {T}, ~ n \in \mathbb {N}_0, \end{aligned}

where the integral is understood as a Bochner integral of a function taking values in the Hilbert space of random vectors $$\mathbf {Z}\in \mathbb {R}^m$$ with $$\mathsf {E}[\Vert \mathbf {Z}\Vert ^2_{\mathbb {R}^m}]<\infty$$.

This expression is obtained by multiplying both sides of (13) by $$P_n^{ (\alpha , \beta ) } (\cos \rho (\mathbf {x}, \mathbf {U}))$$, integrating over $$\mathbb {M}^d$$, and applying Lemma 3:

\begin{aligned}&\int _{\mathbb {M}^d} \mathbf {Z} (\mathbf {x}; t) P_n^{ (\alpha , \beta ) } (\cos \rho (\mathbf {x}, \mathbf {U})) \mathrm{d} \mathbf {x}\\&\quad = \sum _{k=0}^\infty \mathbf {V}_k (t) \int _{\mathbb {M}^d} P_k^{ (\alpha , \beta ) } ( \cos \rho (\mathbf {x}, \mathbf {U}) ) P_n^{(\alpha , \beta ) } ( \cos \rho (\mathbf {x}, \mathbf {U})) \mathrm{d} \mathbf {x} \\&\quad = \frac{\omega _d}{a_n^2} P_n^{ (\alpha , \beta ) } (1) \mathbf {V}_n (t). \end{aligned}

## References

1. Andrews, G.E., Askey, R., Roy, R.: Special Functions. Encyclopedia of Mathematics and its Applications, vol. 71. Cambridge University Press, Cambridge (1999)

2. Askey, R., Bingham, N.H.: Gaussian processes on compact symmetric spaces. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 37(2), 127–143 (1976/77)

3. Azevedo, D., Barbosa, V.S.: Covering numbers of isotropic reproducing kernels on compact two-point homogeneous spaces. Math. Nachr. 290(16), 2444–2458 (2017)

4. Baldi, P., Rossi, M.: Representation of Gaussian isotropic spin random fields. Stoch. Process. Appl. 124(5), 1910–1941 (2014)

5. Berger, M., Gauduchon, P., Mazet, E.: Le spectre d'une variété riemannienne. Lecture Notes in Mathematics, vol. 194. Springer, Berlin (1971)

6. Besse, A.L.: Manifolds All of Whose Geodesics are Closed. With appendices by D.B.A. Epstein, J.-P. Bourguignon, L. Bérard-Bergery, M. Berger and J.L. Kazdan. Ergebnisse der Mathematik und ihrer Grenzgebiete [Results in Mathematics and Related Areas], vol. 93. Springer, Berlin (1978)

7. Bingham, N.H.: Positive definite functions on spheres. Proc. Cambridge Philos. Soc. 73, 145–156 (1973)

8. Bochner, S.: Hilbert distances and positive definite functions. Ann. Math. (2) 42, 647–656 (1941)

9. Brown, G., Dai, F.: Approximation of smooth functions on compact two-point homogeneous spaces. J. Funct. Anal. 220(2), 401–423 (2005)

10. Cartan, E.: Sur certaines formes Riemanniennes remarquables des géométries à groupe fondamental simple. Ann. Sci. Éc. Norm. Supér. (3) 44, 345–467 (1927)

11. Cheng, D., Xiao, Y.: Excursion probability of Gaussian random fields on sphere. Bernoulli 22(2), 1113–1130 (2016)

12. Cohen, S., Lifshits, M.A.: Stationary Gaussian random fields on hyperbolic spaces and on Euclidean spheres. ESAIM Probab. Stat. 16, 165–221 (2012)

13. Colzani, L., Tenconi, M.: Localization for Riesz means on the compact rank one symmetric spaces. In: Proceedings of the AMSI/AustMS 2014 Workshop in Harmonic Analysis and its Applications. Proc. Centre Math. Appl. Austral. Nat. Univ., vol. 47, pp. 26–49. Austral. Nat. Univ., Canberra (2017)

14. Gangolli, R.: Positive definite kernels on homogeneous spaces and certain stochastic processes related to Lévy's Brownian motion of several parameters. Ann. Inst. H. Poincaré Sect. B (N.S.) 3, 121–226 (1967)

15. Geller, D., Marinucci, D.: Spin wavelets on the sphere. J. Fourier Anal. Appl. 16(6), 840–884 (2010)

16. González Vieli, F.J.: Pointwise Fourier inversion on rank one compact symmetric spaces using Cesàro means. Acta Sci. Math. (Szeged) 68(3–4), 783–795 (2002)

17. Helgason, S.: Differential operators on homogeneous spaces. Acta Math. 102, 239–299 (1959)

18. Leonenko, N., Sakhno, L.: On spectral representations of tensor random fields on the sphere. Stoch. Anal. Appl. 30(1), 44–66 (2012)

19. Leonenko, N.N., Shieh, N.R.: Rényi function for multifractal random fields. Fractals 21(2), 1350009 (2013)

20. Ma, C.: Covariance matrix functions of vector $$\chi ^2$$ random fields in space and time. IEEE Trans. Commun. 59(9), 2554–2561 (2011). https://doi.org/10.1109/TCOMM.2011.063011.100528

21. Ma, C.: Vector random fields with second-order moments or second-order increments. Stoch. Anal. Appl. 29(2), 197–215 (2011)

22. Ma, C.: K-distributed vector random fields in space and time. Stat. Probab. Lett. 83(4), 1143–1150 (2013). https://doi.org/10.1016/j.spl.2013.01.004

23. Ma, C.: Stochastic representations of isotropic vector random fields on spheres. Stoch. Anal. Appl. 34(3), 389–403 (2016)

24. Ma, C.: Time varying isotropic vector random fields on spheres. J. Theor. Probab. 30(4), 1763–1785 (2017)

25. Malyarenko, A.: Invariant random fields in vector bundles and application to cosmology. Ann. Inst. Henri Poincaré Probab. Stat. 47(4), 1068–1095 (2011)

26. Malyarenko, A.: Invariant Random Fields on Spaces with a Group Action. Probability and its Applications (New York). Springer, Heidelberg (2013). (With a foreword by Nikolai Leonenko)

27. Malyarenko, A.: Spectral expansions of random sections of homogeneous vector bundles. Teor. $$\breve{\text{I}}$$movīr. Mat. Stat. 97, 142–156 (2017)

28. Malyarenko, A.A.: Local properties of Gaussian random fields on compact symmetric spaces, and Jackson-type and Bernstein-type theorems. Ukraïn. Mat. Zh. 51(1), 60–68 (1999)

29. Malyarenko, A.A.: Abelian and Tauberian theorems for random fields on two-point homogeneous spaces. Teor. $$\breve{\text{I}}$$movīr. Mat. Stat. 69, 106–118 (2003)

30. Malyarenko, A.A., Olenko, A.Y.: Multidimensional covariant random fields on commutative locally compact groups. Ukraïn. Mat. Zh. 44(11), 1505–1510 (1992)

31. Marinucci, D., Peccati, G.: Random Fields on the Sphere. Representation, Limit Theorems and Cosmological Applications. London Mathematical Society Lecture Note Series, vol. 389. Cambridge University Press, Cambridge (2011)

32. Matheron, G.: The internal consistency of models in geostatistics. In: Armstrong, M. (ed.) Geostatistics, pp. 21–38. Springer, Dordrecht (1989)

33. Molčan, G.M.: Homogeneous random fields on symmetric spaces of rank one. Teor. Veroyatnost. i Mat. Statist. 21, 123–148, 167 (1979)

34. Müller, C.: Analysis of Spherical Symmetries in Euclidean Spaces. Applied Mathematical Sciences, vol. 129. Springer, New York (1998)

35. Obukhov, A.M.: Statistically homogeneous fields on a sphere. Usp. Mat. Nauk 2(2), 196–198 (1947)

36. Sakamoto, K.: Helical minimal immersions of compact Riemannian manifolds into a unit sphere. Trans. Am. Math. Soc. 288(2), 765–790 (1985)

37. Schoenberg, I.J.: Positive definite functions on spheres. Duke Math. J. 9, 96–108 (1942)

38. Szegő, G.: Orthogonal Polynomials, 4th edn. American Mathematical Society Colloquium Publications, vol. XXIII. American Mathematical Society, Providence (1975)

39. Volchkov, V.V., Volchkov, V.V.: Offbeat Integral Geometry on Symmetric Spaces. Birkhäuser, Basel (2013). https://doi.org/10.1007/978-3-0348-0572-8

40. Wang, H.C.: Two-point homogeneous spaces. Ann. Math. (2) 55, 177–191 (1952)

41. Weinstein, A.: On the volume of manifolds all of whose geodesics are closed. J. Differ. Geom. 9, 513–517 (1974)

42. Yadrenko, M.$$\breve{\text{I}}$$.: Spectral Theory of Random Fields. Translation Series in Mathematics and Engineering. Optimization Software, Inc., Publications Division, New York (1983). (Translated from the Russian)

43. Yaglom, A.M.: Second-order homogeneous random fields. In: Proceedings of 4th Berkeley Symposium on Mathematical Statistics and Probability, vol. II, pp. 593–622. University of California Press, Berkeley (1961)

44. Yaglom, A.M.: Correlation Theory of Stationary and Related Random Functions, vol. I: Basic Results. Springer Series in Statistics. Springer, New York (1987)

## Acknowledgements

We are grateful to the anonymous referee for careful reading of the manuscript and useful remarks.

## Author information


### Corresponding author

Correspondence to Anatoliy Malyarenko.

### Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## A Proofs

### Proof of Lemma 1

To calculate $$\mu (\mathbb {M}^d)$$, we use the result of [41]. If all the geodesics on a d-dimensional Riemannian manifold M are closed and have length $$2\pi L$$, then the ratio

\begin{aligned} i(M)=\frac{\mu (M)}{L^d\mu (\mathbb {S}^d)} \end{aligned}

is an integer. With our convention $$L=1$$, we obtain $$\mu (\mathbb {M}^d)=i(\mathbb {M}^d)\mu (\mathbb {S}^d)$$. It is well known that

\begin{aligned} \mu (\mathbb {S}^d)=\frac{2\pi ^{(d+1)/2}}{\varGamma ((d+1)/2)} =\frac{2\pi ^{\alpha +3/2}}{\varGamma (\alpha +3/2)}. \end{aligned}
(14)

The Weinstein integers $$i(\mathbb {M}^d)$$ are shown in the last column of Table 2. Following [36], consider all the geodesics from $$\mathbf {o}$$ to a point in $$\mathbb {A}$$. Draw a tangent line to each of them, and denote by e the dimension of the linear space spanned by these lines. We have $$e=d$$ for $$\mathbb {S}^d$$, $$e=1$$ for $$P^d(\mathbb {R})$$, $$e=2$$ for $$P^d(\mathbb {C})$$, $$e=4$$ for $$P^d(\mathbb {H})$$, and $$e=8$$ for $$P^2(\mathbb {O})$$. It is proved in [36] that

\begin{aligned} i(\mathbb {M}^d)=\frac{2^{d-1}\varGamma ((d+1)/2)\varGamma (e/2)}{\sqrt{\pi }\varGamma ((d+e)/2)}. \end{aligned}

We know that $$d=2\alpha +2$$, and it is easy to check that $$e=2\beta +2$$; we then obtain

\begin{aligned} i(\mathbb {M}^d)=\frac{2^{2\alpha +1}\varGamma (\alpha +3/2) \varGamma (\beta +1)}{\sqrt{\pi }\varGamma (\alpha +\beta +2)}, \end{aligned}

and (4) easily follows. $$\square$$
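As a numerical sanity check (ours, not part of the proof), the formula of [36] can be evaluated at the $$(d, e)$$ pairs listed above; the helper name below is our own. It returns $$i = 1$$ for the spheres ($$e = d$$), $$i = 2^{d-1}$$ for $$P^d(\mathbb {R})$$ ($$e = 1$$), and integer values for the remaining pairs, e.g. $$i = 39$$ for $$(d, e) = (16, 8)$$.

```python
from math import gamma, sqrt, pi, isclose

def weinstein_i(d: int, e: int) -> float:
    # i(M^d) = 2^{d-1} Gamma((d+1)/2) Gamma(e/2) / (sqrt(pi) Gamma((d+e)/2))
    return 2**(d - 1) * gamma((d + 1) / 2) * gamma(e / 2) \
        / (sqrt(pi) * gamma((d + e) / 2))

assert all(isclose(weinstein_i(d, d), 1) for d in range(2, 9))         # S^d
assert all(isclose(weinstein_i(d, 1), 2**(d - 1)) for d in range(2, 9))  # P^d(R)
assert isclose(weinstein_i(4, 2), 3)     # P^2(C), d = 4, e = 2
assert isclose(weinstein_i(8, 4), 7)     # P^2(H), d = 8, e = 4
assert isclose(weinstein_i(16, 8), 39)   # P^2(O), d = 16, e = 8
```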

### Proof of Lemma 2

In Theorem 2.1 of [3], put $$K(x)=P_i^{ (\alpha ,\beta )} (x)$$ and $$S(\mathbf {x})=P_j^{(\alpha ,\beta )} (\cos (\rho (\mathbf {x}_2,\mathbf {x})))$$. We obtain

\begin{aligned}&\int _{\mathbb {M}^d}P_i^{ (\alpha ,\beta )} (\cos (\rho (\mathbf {x}_1,\mathbf {x}))) P_j^{ (\alpha ,\beta )} (\cos (\rho (\mathbf {x}_2,\mathbf {x})))\,\hbox {d}\mathbf {x} \\&\quad = \omega _d P_j ^{(\alpha ,\beta )} (\cos (\rho (\mathbf {x}_1,\mathbf {x}_2))) \int _{-1}^{1} \frac{P_i^{(\alpha ,\beta )} (x)}{ P_i^{(\alpha ,\beta )} (1)} P_j^{(\alpha ,\beta )} (x) \hbox {d}\nu _{\alpha ,\beta }(x) \\&\quad = \omega _d \frac{\delta _{ij}}{a_i^2} P_i^{(\alpha ,\beta )} (\cos (\rho (\mathbf {x}_1,\mathbf {x}_2))), \end{aligned}

where the last equality follows from (3), (5), and the following well-known result: the probability measure $$\nu _{\alpha ,\beta }$$ on $$[-1,1]$$ proportional to $$(1-x)^{\alpha }(1+x)^{\beta }\,\hbox {d}x$$ is

\begin{aligned} \hbox {d}\nu _{\alpha ,\beta }(x)=\frac{\varGamma (\alpha +\beta +2)}{2^{\alpha +\beta +1} \varGamma (\alpha +1)\varGamma (\beta +1)}(1-x)^{\alpha } (1+x)^{\beta }\,\hbox {d}x. \end{aligned}
(15)

$$\square$$
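The normalization in (15) and the orthogonality used in the last equality of the proof can be verified numerically; the sketch below uses the illustrative pair $$\alpha = 3$$, $$\beta = 1$$ (which arises for $$P^2(\mathbb {H})$$ under $$d = 2\alpha + 2$$, $$e = 2\beta + 2$$).

```python
from math import gamma
from scipy.integrate import quad
from scipy.special import eval_jacobi

alpha, beta = 3.0, 1.0   # illustrative pair, e.g. P^2(H): d = 8, e = 4

# density of nu_{alpha,beta} from (15)
const = gamma(alpha + beta + 2) \
    / (2**(alpha + beta + 1) * gamma(alpha + 1) * gamma(beta + 1))
w = lambda x: const * (1 - x)**alpha * (1 + x)**beta

total, _ = quad(w, -1, 1)
assert abs(total - 1.0) < 1e-10           # nu is a probability measure

# Jacobi polynomials are orthogonal with respect to nu_{alpha,beta}
inner = lambda i, j: quad(lambda x: eval_jacobi(i, alpha, beta, x)
                          * eval_jacobi(j, alpha, beta, x) * w(x), -1, 1)[0]
assert abs(inner(2, 5)) < 1e-10           # vanishes for i != j
assert inner(3, 3) > 0                    # positive squared norm
```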

### Proof of Lemma 3

The mean function of $$\{\, Z_n (\mathbf {x}):\mathbf {x} \in \mathbb {M}^d\, \}$$ is obtained by applying [3, Theorem 2.1] to $$K(x)=1$$ and $$S(\mathbf {x})=P^{(\alpha ,\beta )}_n (\cos (\rho (\mathbf {x},\mathbf {y})))$$,

\begin{aligned} \mathsf {E}[Z_n (\mathbf {x})] = a_n\omega _d \int _{\mathbb {M}^d} P_n^{(\alpha ,\beta )} (\cos (\rho (\mathbf {x},\mathbf {y}))) \,\hbox {d}\mathbf {y} = a_n \cdot 0 = 0. \end{aligned}

The covariance function is

\begin{aligned} {{\,\mathrm{cov}\,}}( Z_n (\mathbf {x}_1), Z_n(\mathbf {x}_2) )= & {} \omega _d^{-1}a_n^2\int _{\mathbb {M}^d} P_n^{ (\alpha ,\beta )} (\cos (\rho ( \mathbf {x}_1, \mathbf {z}))) P_n^{ (\alpha ,\beta )} (\cos (\rho (\mathbf {x}_2,\mathbf {z})))\,\hbox {d}\mathbf {z}\\= & {} P_n^{(\alpha ,\beta )} (\cos (\rho (\mathbf {x}_1, \mathbf {x}_2) )), \end{aligned}

by Lemma 2. Equation (7) easily follows from the same lemma. $$\square$$
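On $$\mathbb {S}^2$$, where $$P_n^{(0,0)}$$ is the Legendre polynomial and (with the normalization above) one may take $$a_n^2 = 2n + 1$$, the two identities underlying Lemmas 2 and 3 — orthogonality across degrees and the addition-theorem average — can be checked by Monte Carlo:

```python
import numpy as np
from scipy.special import eval_legendre

rng = np.random.default_rng(3)
K = 400_000
U = rng.normal(size=(K, 3))
U /= np.linalg.norm(U, axis=1, keepdims=True)   # uniform on S^2

x = np.array([0.0, 0.0, 1.0])
y = np.array([np.sin(1.0), 0.0, np.cos(1.0)])   # rho(x, y) = 1

# E[P_n(x.U) P_n(y.U)] = P_n(cos rho(x, y)) / (2n + 1)
n = 2
lhs = np.mean(eval_legendre(n, U @ x) * eval_legendre(n, U @ y))
rhs = eval_legendre(n, np.cos(1.0)) / (2 * n + 1)
assert abs(lhs - rhs) < 5e-3

# cross term of distinct degrees averages to zero
m = 4
cross = np.mean(eval_legendre(n, U @ x) * eval_legendre(m, U @ y))
assert abs(cross) < 5e-3
```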

### Proof of Theorem 1

The series on the right-hand side of (8) converges in mean square for every $$\mathbf {x} \in \mathbb {M}^d$$, since

\begin{aligned}&\mathsf {E}\left[ \left( \sum _{i=n_1}^{n_1+n_2} \mathsf {B}_i^{\frac{1}{2}} \mathbf {V}_i P_i^{ (\alpha , \beta ) } ( \cos \rho (\mathbf {x}, \mathbf {U} )) \right) \left( \sum _{j=n_1}^{n_1+n_2} \mathsf {B}_j^{\frac{1}{2}} \mathbf {V}_j P_j^{ (\alpha , \beta ) } ( \cos \rho (\mathbf {x}, \mathbf {U} )) \right) ^{\top }\right] \\&\quad = \sum _{i=n_1}^{n_1+n_2} \sum _{j=n_1}^{n_1+n_2} \mathsf {B}_i^{\frac{1}{2}} \mathsf {B}_j^{\frac{1}{2}} \mathsf {E}[ \mathbf {V}_i \mathbf {V}^{\top }_j] \mathsf {E} \left[ P_i^{ (\alpha , \beta ) } ( \cos \rho (\mathbf {x}, \mathbf {U} )) P_j^{ (\alpha , \beta ) } ( \cos \rho (\mathbf {x}, \mathbf {U} )) \right] \\&\quad = \sum _{i=n_1}^{n_1+n_2} \mathsf {B}_i \sigma _i^2 \mathsf {E} \left[ P_i^{ (\alpha , \beta ) } ( \cos \rho (\mathbf {x}, \mathbf {U} )) P_i^{ (\alpha , \beta ) } ( \cos \rho (\mathbf {x}, \mathbf {U} )) \right] \\&\quad = \sum _{i=n_1}^{n_1+n_2} \mathsf {B}_i P_i^{ (\alpha , \beta ) } ( 1) \\&\quad \rightarrow \mathbf {0}, ~~~~ \text{ as } ~ n_1, n_2 \rightarrow \infty , \end{aligned}

where the second equality follows from the independence assumption between $$\{\, \mathbf {V}_n:n \in \mathbb {N}_0\, \}$$ and $$\mathbf {U}$$, and the third from Lemma 3. Thus, (8) defines an m-variate second-order random field. Its mean function is clearly identical to $$\mathbf {0}$$, and its covariance matrix function is

\begin{aligned}&{{\,\mathrm{cov}\,}}\left( \sum _{i=0}^\infty \mathsf {B}_i^{\frac{1}{2}} \mathbf {V}_i P_i^{ (\alpha , \beta ) } ( \cos \rho (\mathbf {x}_1, \mathbf {U} )), ~ \sum _{j=0}^\infty \mathsf {B}_j^{\frac{1}{2}} \mathbf {V}_j P_j^{ (\alpha , \beta ) } ( \cos \rho (\mathbf {x}_2, \mathbf {U} )) \right) \\&\quad = \sum _{i=0}^\infty \sum _{j=0}^\infty \mathsf {B}_i^{\frac{1}{2}} \mathsf {B}_j^{\frac{1}{2}} \mathsf {E}[ \mathbf {V}_i \mathbf {V}^{\top }_j] \mathsf {E}\left[ P_i^{ (\alpha , \beta )} ( \cos \rho (\mathbf {x}_1, \mathbf {U} )) P_j^{ (\alpha , \beta ) } ( \cos \rho (\mathbf {x}_2, \mathbf {U} )) \right] \\&\quad = \sum _{i=0}^\infty \mathsf {B}_i \sigma _i^2 \mathsf {E} \left[ P_i^{ (\alpha , \beta ) } ( \cos \rho (\mathbf {x}_1, \mathbf {U} )) P_i^{ (\alpha , \beta ) } ( \cos \rho (\mathbf {x}_2, \mathbf {U} )) \right] \\&\quad = \sum _{i=0}^\infty \mathsf {B}_i P_i^{ (\alpha , \beta ) } ( \cos \rho (\mathbf {x}_1, \mathbf {x}_2)), ~~~~ \mathbf {x}_1, \mathbf {x}_2 \in \mathbb {M}^d. \end{aligned}

Two distinct terms of (8) are obviously uncorrelated with each other. $$\square$$

### Proof of Theorem 2

It suffices to verify that (10) is the general form, since Theorem 1 already constructs an m-variate isotropic random field on $$\mathbb {M}^d$$ whose covariance matrix function is (10). To this end, suppose that $$\{\, \mathbf {Z}(\mathbf {x}):\mathbf {x} \in \mathbb {M}^d\, \}$$ is an m-variate isotropic and mean square continuous random field. Then, for an arbitrary $$\mathbf {a} \in \mathbb {R}^m$$, $$\{\, \mathbf {a}^{\top } \mathbf {Z}(\mathbf {x}):\mathbf {x} \in \mathbb {M}^d\, \}$$ is a scalar isotropic and mean square continuous random field, so that its covariance function has to be of the form (1),

\begin{aligned} {{\,\mathrm{cov}\,}}\left( \mathbf {a}^{\top } \mathbf {Z}(\mathbf {x}_1), \mathbf {a}^{\top } \mathbf {Z}(\mathbf {x}_2) \right) = \sum _{n=0}^\infty b_n (\mathbf {a}) P_n^{ (\alpha , \beta ) } ( \cos \rho (\mathbf {x}_1, \mathbf {x}_2)), ~~~~ \mathbf {x}_1, \mathbf {x}_2 \in \mathbb {M}^d, \end{aligned}
(16)

where $$\{\, b_n (\mathbf {a}):n \in \mathbb {N}_0\, \}$$ is a sequence of nonnegative constants and $$\sum \nolimits _{n=0}^\infty b_n (\mathbf {a}) P_n^{ (\alpha , \beta ) } ( 1 )$$ converges. Similarly, for $$\mathbf {b} \in \mathbb {R}^m$$, we obtain

\begin{aligned}&\frac{1}{4} {{\,\mathrm{cov}\,}}( (\mathbf {a} +\mathbf {b})^{\top } \mathbf {Z}(\mathbf {x}_1), (\mathbf {a}+\mathbf {b})^{\top } \mathbf {Z}(\mathbf {x}_2) )\\&\quad = \sum _{n=0}^\infty b_n (\mathbf {a}+\mathbf {b}) P_n^{ (\alpha , \beta ) } ( \cos \rho (\mathbf {x}_1, \mathbf {x}_2)), \\&\frac{1}{4} {{\,\mathrm{cov}\,}}( (\mathbf {a} -\mathbf {b})^{\top } \mathbf {Z}(\mathbf {x}_1), (\mathbf {a}-\mathbf {b})^{\top } \mathbf {Z}(\mathbf {x}_2) )\\&\quad = \sum _{n=0}^\infty b_n (\mathbf {a}-\mathbf {b}) P_n^{ (\alpha , \beta ) } ( \cos \rho (\mathbf {x}_1, \mathbf {x}_2)), ~~ \mathbf {x}_1, \mathbf {x}_2 \in \mathbb {M}^d. \end{aligned}

Taking the difference between the last two equations yields

\begin{aligned}&\frac{1}{2} \left( \mathbf {a}^{\top } {{\,\mathrm{cov}\,}}( \mathbf {Z}(\mathbf {x}_1), \mathbf {Z}(\mathbf {x}_2) ) \mathbf {b}+ \mathbf {b}^{\top } {{\,\mathrm{cov}\,}}( \mathbf {Z}(\mathbf {x}_1), \mathbf {Z}(\mathbf {x}_2) )\mathbf {a} \right) \\&\quad =\frac{1}{2} \left( {{\,\mathrm{cov}\,}}( \mathbf {a}^{\top } \mathbf {Z}(\mathbf {x}_1), \mathbf {b}^{\top } \mathbf {Z}(\mathbf {x}_2) ) +{{\,\mathrm{cov}\,}}( \mathbf {b}^{\top } \mathbf {Z}(\mathbf {x}_1), \mathbf {a}^{\top } \mathbf {Z}(\mathbf {x}_2) )\right) \\&\quad =\sum \limits _{n=0}^\infty \left( b_n (\mathbf {a}+\mathbf {b}) -b_n (\mathbf {a}-\mathbf {b})\right) P_n^{ (\alpha , \beta ) } ( \cos \rho (\mathbf {x}_1, \mathbf {x}_2)), ~~ \mathbf {x}_1, \mathbf {x}_2 \in \mathbb {M}^d, \end{aligned}

or

\begin{aligned} \mathbf {a}^{\top } {{\,\mathrm{cov}\,}}( \mathbf {Z}(\mathbf {x}_1), \mathbf {Z}(\mathbf {x}_2) ) \mathbf {b} = \sum \limits _{n=0}^\infty (b_n (\mathbf {a}+\mathbf {b}) -b_n (\mathbf {a}-\mathbf {b})) P_n^{ (\alpha , \beta ) } ( \cos \rho (\mathbf {x}_1, \mathbf {x}_2)), ~~ \mathbf {x}_1, \mathbf {x}_2 \in \mathbb {M}^d, \end{aligned}
(17)

noticing that $${{\,\mathrm{cov}\,}}( \mathbf {Z}(\mathbf {x}_1), \mathbf {Z}(\mathbf {x}_2) )$$ is a symmetric matrix. The form (10) of $${{\,\mathrm{cov}\,}}( \mathbf {Z}(\mathbf {x}_1), \mathbf {Z}(\mathbf {x}_2) )$$ is now confirmed by letting, in (17), the ith entry of $$\mathbf {a}$$ and the jth entry of $$\mathbf {b}$$ equal 1 and all other entries vanish. It remains to verify the nonnegative definiteness of each $$\mathsf {B}_n$$ in (10). To do so, we multiply both sides of (10) by $$\mathbf {a}^{\top }$$ from the left and by $$\mathbf {a}$$ from the right, obtaining

\begin{aligned} \mathbf {a}^{\top } \mathsf {C} ( \mathbf {x}_1, \mathbf {x}_2 ) \mathbf {a} = \sum _{n=0}^\infty \mathbf {a}^{\top } \mathsf {B}_n \mathbf {a} P_n^{ (\alpha , \beta ) } \left( \cos \rho (\mathbf {x}_1, \mathbf {x}_2) \right) , ~~ \mathbf {x}_1, \mathbf {x}_2 \in \mathbb {M}^d, \end{aligned}

Comparing this with (16) shows that $$\mathbf {a}^{\top } \mathsf {B}_n \mathbf {a} \ge 0$$, that is, each $$\mathsf {B}_n$$ ($$n \in \mathbb {N}_0$$) is nonnegative definite, and that $$\sum \nolimits _{n=0}^\infty \mathbf {a}^{\top } \mathsf {B}_n \mathbf {a} P_n^{ (\alpha , \beta )} (1)$$ converges, and hence so does each entry of the matrix $$\sum \nolimits _{n=0}^\infty \mathsf {B}_n P_n^{ (\alpha , \beta ) } (1)$$. $$\square$$
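The polarization step leading from (16) to (17) rests on the elementary identity $$\mathbf {a}^{\top } \mathsf {C} \mathbf {b} = \frac{1}{4}\left[ (\mathbf {a}+\mathbf {b})^{\top } \mathsf {C} (\mathbf {a}+\mathbf {b}) - (\mathbf {a}-\mathbf {b})^{\top } \mathsf {C} (\mathbf {a}-\mathbf {b}) \right]$$ for a symmetric matrix $$\mathsf {C}$$, which a two-line numerical check confirms:

```python
import numpy as np

rng = np.random.default_rng(4)
m = 4
C = rng.normal(size=(m, m))
C = C + C.T                      # symmetric, like cov(Z(x_1), Z(x_2))
a, b = rng.normal(size=m), rng.normal(size=m)

lhs = a @ C @ b
rhs = 0.25 * ((a + b) @ C @ (a + b) - (a - b) @ C @ (a - b))
assert np.isclose(lhs, rhs)      # polarization identity
```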

### Proof of Theorem 3

For a fixed $$t \in \mathbb {T}$$, consider a random field $$\left\{ \, \mathbf {Z} (\mathbf {x}; 0) + \mathbf {Z} (\mathbf {x}; t):\right. \left. \mathbf {x} \in \mathbb {M}^d \,\right\}$$. It is isotropic and mean square continuous on $$\mathbb {M}^d$$, with covariance matrix function

\begin{aligned}&{{\,\mathrm{cov}\,}}\left( \mathbf {Z} (\mathbf {x}_1; 0) + \mathbf {Z} (\mathbf {x}_1; t), ~ \mathbf {Z} (\mathbf {x}_2; 0) + \mathbf {Z} (\mathbf {x}_2; t) \right) \\&\quad =2\mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); 0) + \mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); t) + \mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); -t)\\&\quad =\sum _{n=0}^\infty \mathsf {B}_{n+} (t) P_n^{(\alpha , \beta ) } (\cos \rho (\mathbf {x}_1, \mathbf {x}_2)), ~~ \mathbf {x}_1, \mathbf {x}_2 \in \mathbb {M}^d, \end{aligned}

where the last equality follows from Theorem 2, $$\{\, \mathsf {B}_{n+} (t):n \in \mathbb {N}_0\, \}$$ is a sequence of nonnegative definite matrices, and $$\sum \nolimits _{n=0}^\infty \mathsf {B}_{n+} (t) P_n^{(\alpha , \beta )} (1)$$ converges. Similarly, we have

\begin{aligned}&{{\,\mathrm{cov}\,}}\left( \mathbf {Z} (\mathbf {x}_1; 0) - \mathbf {Z} (\mathbf {x}_1; t), ~ \mathbf {Z} (\mathbf {x}_2; 0) - \mathbf {Z} (\mathbf {x}_2; t) \right) \\&\quad =2 \mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); 0) - \mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); t) - \mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); -t)\\&\quad =\sum _{n=0}^\infty \mathsf {B}_{n-} (t) P_n^{ (\alpha , \beta ) } (\cos \rho (\mathbf {x}_1, \mathbf {x}_2)), \end{aligned}

and thus,

\begin{aligned}&\frac{\mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); t) + \mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); -t)}{2}\\&\quad =\frac{1}{4} [ 2 \mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); 0) + \mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); t) + \mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); -t)]\\&\qquad - \frac{1}{4} [ 2 \mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); 0) - \mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); t) - \mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); -t) ] \\&\quad =\sum _{n=0}^\infty \mathsf {B}_n (t) P_n^{(\alpha , \beta )} (\cos \rho (\mathbf {x}_1, \mathbf {x}_2)), ~~ \mathbf {x}_1, \mathbf {x}_2 \in \mathbb {M}^d, \end{aligned}

which confirms the form (11) for $$\frac{\mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); t) + \mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); -t)}{2}$$, with $$\mathsf {B}_n (t) =\frac{ \mathsf {B}_{n+} (t) - \mathsf {B}_{n-} (t)}{4}$$, $$n \in \mathbb {N}_0$$. Obviously, each $$\mathsf {B}_n(t)$$ is symmetric, and $$\sum \nolimits _{n=0}^\infty \mathsf {B}_n (t) P_n^{ (\alpha , \beta ) } (1)$$ converges. Moreover, (11) is the covariance matrix function of an m-variate isotropic random field $$\left\{ \, \frac{ \mathbf {Z} (\mathbf {x}; t)+\tilde{\mathbf {Z}} (\mathbf {x}; -t)}{\sqrt{2}}:\mathbf {x} \in \mathbb {M}^d, t \in \mathbb {T}\, \right\}$$, where $$\{ \,\tilde{\mathbf {Z}} (\mathbf {x}; t):\mathbf {x} \in \mathbb {M}^d, t \in \mathbb {T} \,\}$$ is an independent copy of $$\{\, \mathbf {Z} (\mathbf {x}; t):\mathbf {x} \in \mathbb {M}^d, t \in \mathbb {T}\, \}$$. In fact,

\begin{aligned}&{{\,\mathrm{cov}\,}}\left( \frac{ \mathbf {Z} (\mathbf {x}_1; t_1)+\tilde{\mathbf {Z}} (\mathbf {x}_1; -t_1)}{\sqrt{2}}, ~ \frac{ \mathbf {Z} (\mathbf {x}_2; t_2)+\tilde{\mathbf {Z}} (\mathbf {x}_2; -t_2)}{\sqrt{2}} \right) \\&\quad =\frac{\mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); t_1-t_2) +\mathsf {C} ( \rho (\mathbf {x}_1, \mathbf {x}_2); t_2-t_1)}{2} \\&\quad =\sum _{k=0}^\infty \mathsf {B}_{k} (t_1-t_2) P_k^{ (\alpha , \beta )} (\cos \rho (\mathbf {x}_1, \mathbf {x}_2)) \end{aligned}

with $$\mathbf {x}_1$$, $$\mathbf {x}_2 \in \mathbb {M}^d$$, $$t_1$$, $$t_2 \in \mathbb {T}$$.

For each fixed $$n \in \mathbb {N}_0$$, in order to verify that $$\mathsf {B}_n (t)$$ is a stationary covariance matrix function on $$\mathbb {T}$$, we consider an m-variate stochastic process

\begin{aligned} \mathbf {W}_n (t) = \int _{\mathbb {M}^d} \frac{ \mathbf {Z} (\mathbf {x}; t)+\tilde{\mathbf {Z}} (\mathbf {x}; -t)}{\sqrt{2}} P_n^{ (\alpha , \beta ) } ( \cos \rho (\mathbf {x}, \mathbf {U}) ) \mathrm{d} \mathbf {x}, \quad t \in \mathbb {T}, \end{aligned}

where $$\{\, \tilde{\mathbf {Z}} (\mathbf {x}; t):\mathbf {x} \in \mathbb {M}^d, t \in \mathbb {T}\, \}$$ is an independent copy of $$\{\, \mathbf {Z} (\mathbf {x}; t):\mathbf {x} \in \mathbb {M}^d, t \in \mathbb {T}\, \}$$, $$\mathbf {U}$$ is a random vector uniformly distributed on $$\mathbb {M}^d$$, and $$\mathbf {U}$$, $$\{\, \mathbf {Z} (\mathbf {x}; t):\mathbf {x} \in \mathbb {M}^d, t \in \mathbb {T}\, \}$$ and $$\{\, \tilde{\mathbf {Z}} (\mathbf {x}; t):\mathbf {x} \in \mathbb {M}^d, t \in \mathbb {T}\, \}$$ are independent. By Lemma 2, the mean function of $$\{\, \mathbf {W}_n (t):t \in \mathbb {T}\, \}$$ is

\begin{aligned} \mathsf {E} [ \mathbf {W}_n (t)] = \left\{ \begin{array}{ll} \sqrt{2} P^{(\alpha ,\beta )}_0(1)\omega _d \mathsf {E} [\mathbf {Z} (\mathbf {x}; t)], ~ &{} ~ n= 0, \\ 0, ~ &{} ~ n \in \mathbb {N}, \end{array} \right. \end{aligned}

and, by Lemmas 2 and 3, its covariance matrix function is

\begin{aligned}&{{\,\mathrm{cov}\,}}( \mathbf {W}_n(t_1), ~ \mathbf {W}_n (t_2) ) \\&\quad =\frac{1}{\omega _d} \int _{\mathbb {M}^d} {{\,\mathrm{cov}\,}}\left( \int _{\mathbb {M}^d} \frac{ \mathbf {Z} (\mathbf {x}; t_1)+\tilde{\mathbf {Z}} (\mathbf {x}; -t_1)}{\sqrt{2}} P_n^{ (\alpha , \beta ) } ( \cos \rho ( \mathbf {x}, \mathbf {u}) ) \mathrm{d} \mathbf {x}, ~ \right. \\&\qquad \left. ~~~~~~~~~~ \int _{\mathbb {M}^d} \frac{ \mathbf {Z} (\mathbf {y}; t_2)+\tilde{\mathbf {Z}} (\mathbf {y}; -t_2)}{\sqrt{2}} P_n^{ (\alpha , \beta ) } (\cos \rho ( \mathbf {y}, \mathbf {u}) ) \mathrm{d} \mathbf {y} \right) \mathrm{d} \mathbf {u} \\&\quad =\frac{1}{\omega _d} \int _{\mathbb {M}^d} \int _{\mathbb {M}^d} \int _{\mathbb {M}^d} {{\,\mathrm{cov}\,}}\left( \frac{ \mathbf {Z} (\mathbf {x}; t_1)+\tilde{\mathbf {Z}} (\mathbf {x}; -t_1)}{\sqrt{2}}, ~ \frac{ \mathbf {Z} (\mathbf {y}; t_2)+\tilde{\mathbf {Z}} (\mathbf {y}; -t_2)}{\sqrt{2}} \right) \\&\qquad \times P_n^{ (\alpha , \beta ) } (\cos \rho ( \mathbf {x}, \mathbf {u}) ) P_n^{ (\alpha , \beta ) } (\cos \rho ( \mathbf {y}, \mathbf {u}) ) \mathrm{d} \mathbf {x} \mathrm{d} \mathbf {y} \mathrm{d} \mathbf {u} \\&\quad =\frac{1}{\omega _d} \int _{\mathbb {M}^d} \int _{\mathbb {M}^d} \int _{\mathbb {M}^d} \frac{ \mathsf {C} (\rho (\mathbf {x}, \mathbf {y}); t_1-t_2)+\mathsf {C} ( \rho (\mathbf {x}, \mathbf {y}); t_2-t_1)}{2}\\&\qquad \times P_n^{ (\alpha , \beta ) } ( \cos \rho ( \mathbf {x}, \mathbf {u}) ) P_n^{ (\alpha , \beta ) } ( \cos \rho ( \mathbf {y}, \mathbf {u}) ) \mathrm{d} \mathbf {x} \mathrm{d} \mathbf {y} \mathrm{d} \mathbf {u} \\&\quad =\frac{1}{\omega _d} \int _{\mathbb {M}^d} \int _{\mathbb {M}^d} \int _{\mathbb {M}^d} \sum _{k=0}^\infty \mathsf {B}_{k} (t_1-t_2) P_k^{ (\alpha , \beta ) } ( \cos \rho ( \mathbf {x}, \mathbf {y}))\\&\qquad \times P_n^{ (\alpha , \beta ) } (\cos \rho ( \mathbf {x}, \mathbf {u})) P_n^{ (\alpha , \beta ) } (\cos \rho ( \mathbf {y}, \mathbf {u})) \mathrm{d} \mathbf {x} \mathrm{d} \mathbf {y} \mathrm{d} \mathbf {u} \\&\quad =\frac{1}{\omega _d} \sum _{k=0}^\infty \mathsf {B}_{k} (t_1-t_2) \int _{\mathbb {M}^d} \int _{\mathbb {M}^d} \int _{\mathbb {M}^d} P_k^{ (\alpha , \beta ) } (\cos \rho ( \mathbf {x}, \mathbf {y}))\\&\qquad \times P_n^{ (\alpha , \beta ) } ( \cos \rho ( \mathbf {x}, \mathbf {u}) ) \mathrm{d} \mathbf {x} P_n^{ (\alpha , \beta ) } (\cos \rho ( \mathbf {y}, \mathbf {u}) ) \mathrm{d} \mathbf {y} \mathrm{d} \mathbf {u} \\&\quad = \frac{1}{\omega _d} \mathsf {B}_{n} (t_1-t_2) \int _{\mathbb {M}^d} \frac{\omega _d}{a_n^2} \int _{\mathbb {M}^d} P_n^{ (\alpha , \beta ) } (\cos \rho ( \mathbf {y}, \mathbf {u})) P_n^{ (\alpha , \beta ) } (\cos \rho ( \mathbf {y}, \mathbf {u})) \mathrm{d} \mathbf {y} \mathrm{d} \mathbf {u} \\&\quad = \frac{1}{\omega _d} \mathsf {B}_{n} (t_1-t_2) \int _{\mathbb {M}^d} \left( \frac{\omega _d}{a_n^2} \right) ^2 P_n^{(\alpha , \beta ) } (1) \mathrm{d} \mathbf {u} \\&\quad =\mathsf {B}_{n} (t_1-t_2) \left( \frac{\omega _d}{a_n^2} \right) ^2 P_n^{ (\alpha , \beta ) } (1), ~~~~~~ t_1, t_2 \in \mathbb {T}, \end{aligned}

which implies that $$\mathsf {B}_n (t)$$ is a stationary covariance matrix function on $$\mathbb {T}$$. $$\square$$

### Proof of Theorem 4

The convergence assumption on $$\sum \nolimits _{n=0}^\infty \mathsf {B}_n (0) P_n^{ (\alpha , \beta ) } (1)$$ ensures the uniform and absolute convergence of the series on the right-hand side of (12). If $$\{\, \mathsf {B}_n (t):n \in \mathbb {N}_0\, \}$$ is a sequence of stationary covariance matrix functions on $$\mathbb {T}$$, then each term of the series on the right-hand side of (12) is the product of a stationary covariance matrix function $$\mathsf {B}_n (t)$$ on $$\mathbb {T}$$ and an isotropic covariance function $$P_n^{ (\alpha , \beta ) } (\cos \rho (\mathbf {x}_1, \mathbf {x}_2))$$ on $$\mathbb {M}^d$$; thus, (12) can be treated [21] as the covariance matrix function of an m-variate random field on $$\mathbb {M}^d \times \mathbb {T}$$.

On the other hand, assume that (12) is the covariance matrix function of an m-variate random field $$\{\, \mathbf {Z} (\mathbf {x}; t):\mathbf {x} \in \mathbb {M}^d, t \in \mathbb {T}\, \}$$. The convergence of $$\sum \nolimits _{n=0}^\infty \mathsf {B}_n (0) P_n^{ (\alpha , \beta ) } (1)$$ follows from the existence of $$\mathsf {C} (0; 0) = {{\,\mathrm{\mathsf {Var}}\,}}[\mathbf {Z} (\mathbf {x}; t)]$$. To show that $$\mathsf {B}_n (t)$$ is a stationary covariance matrix function on $$\mathbb {T}$$ for each fixed $$n \in \mathbb {N}_0$$, consider the m-variate stochastic process

\begin{aligned} \mathbf {W}_n (t) = \int _{\mathbb {M}^d} \mathbf {Z} (\mathbf {x}; t) P_n^{ (\alpha , \beta ) } ( \cos \rho (\mathbf {x}, \mathbf {U})) \mathrm{d} \mathbf {x},\quad t \in \mathbb {T}, \end{aligned}

where $$\mathbf {U}$$ is a random vector uniformly distributed on $$\mathbb {M}^d$$ and independent of $$\{\, \mathbf {Z} (\mathbf {x}; t):\mathbf {x} \in \mathbb {M}^d, t \in \mathbb {T} \,\}$$. As in the proof of Theorem 3, applying Lemmas 2 and 3 we obtain that the covariance matrix function of $$\{\, \mathbf {W}_n (t):t \in \mathbb {T}\, \}$$ is positively proportional to $$\mathsf {B}_n (t)$$; more precisely,

\begin{aligned} {{\,\mathrm{cov}\,}}( \mathbf {W}_n (t_1), \mathbf {W}_n (t_2) ) = \mathsf {B}_{n} (t_1-t_2) \left( \frac{\omega _d}{a_n^2} \right) ^2 P_n^{ (\alpha , \beta ) } (1), ~~~~~~ t_1, t_2 \in \mathbb {T}, \end{aligned}

which implies that $$\mathsf {B}_n (t)$$ is a stationary covariance matrix function on $$\mathbb {T}$$. $$\square$$
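The sufficiency direction of Theorem 4 can also be checked numerically. The sketch below takes the scalar case $$m = 1$$ on the sphere $$\mathbb {S}^2$$ (so $$\alpha = \beta = 0$$ and the Jacobi polynomials reduce to Legendre polynomials) with hypothetical coefficients $$\mathsf {B}_n (t) = 2^{-n} \mathrm {e}^{-|t|}$$, each a valid stationary covariance on $$\mathbb {R}$$, and confirms that a truncation of the series (12) yields a positive semidefinite matrix on a random space-time point set:

```python
import numpy as np
from scipy.special import eval_jacobi

rng = np.random.default_rng(0)

# Scalar (m = 1) sketch of the sufficiency part of Theorem 4 on the sphere S^2
# (alpha = beta = 0, so eval_jacobi gives Legendre polynomials).  The choice
# B_n(t) = 2^{-n} exp(-|t|) is a hypothetical example: each term is a
# stationary covariance on R, and the coefficients are summable.
npts, nterms = 40, 25
xyz = rng.normal(size=(npts, 3))
xyz /= np.linalg.norm(xyz, axis=1, keepdims=True)   # random points on S^2
t = rng.uniform(0.0, 5.0, size=npts)                # random time stamps

cos_rho = np.clip(xyz @ xyz.T, -1.0, 1.0)           # cosine of geodesic distance
lag = np.abs(t[:, None] - t[None, :])               # temporal lags |t_i - t_j|

# Truncated series (12): C = sum_n B_n(t_i - t_j) P_n(cos rho(x_i, x_j))
C = np.zeros((npts, npts))
for n in range(nterms):
    C += 0.5**n * np.exp(-lag) * eval_jacobi(n, 0.0, 0.0, cos_rho)

min_eig = np.linalg.eigvalsh(C).min()
print(min_eig >= -1e-8)   # True: the truncated series is positive semidefinite
```

Each summand is a Schur (entrywise) product of two positive semidefinite matrices, a temporal one and a spatial one, so the sum is positive semidefinite; the code merely confirms this up to floating-point roundoff.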

### Proof of Theorem 5

The assumed convergence of $$\sum \nolimits _{n=0}^\infty \mathsf {B}_n (0) P_n^{ (\alpha , \beta ) } (1)$$ ensures the mean square convergence of the series on the right-hand side of (13), since

\begin{aligned}&\mathsf {E}\left[ \left( \sum _{i=n_1}^{n_1+n_2} \mathbf {V}_i (t) P_i^{ (\alpha , \beta ) } ( \cos \rho ( \mathbf {x}, \mathbf {U}) ) \right) \left( \sum _{j=n_1}^{n_1+n_2} \mathbf {V}_j (t) P_j^{ (\alpha , \beta )} ( \cos \rho ( \mathbf {x}, \mathbf {U})) \right) ^{\top }\right] \\&\quad =\mathsf {E} \left[ \sum _{i=n_1}^{n_1+n_2} \sum _{j=n_1}^{n_1+n_2} \mathbf {V}_i (t) \mathbf {V}^{\top }_j (t) P_i^{ (\alpha , \beta ) } (\cos \rho ( \mathbf {x}, \mathbf {U})) P_j^{ (\alpha , \beta ) } (\cos \rho ( \mathbf {x}, \mathbf {U})) \right] \\&\quad = \sum _{i=n_1}^{n_1+n_2} \sum _{j=n_1}^{n_1+n_2} \mathsf {E} [\mathbf {V}_i (t) \mathbf {V}^{\top }_j (t) ] \mathsf {E} \left[ P_i^{ (\alpha , \beta ) } (\cos \rho ( \mathbf {x}, \mathbf {U})) P_j^{ (\alpha , \beta ) } (\cos \rho ( \mathbf {x}, \mathbf {U})) \right] \\&\quad = \omega _d \sum _{i=n_1}^{n_1+n_2} \mathsf {B}_i (0) P_i^{ (\alpha , \beta ) } (1) \\&\quad \rightarrow 0, ~~~~~~~ \text{ as } ~ n_1, n_2 \rightarrow \infty , \end{aligned}

where the second equality follows from the assumed independence between $$\mathbf {U}$$ and $$\{\, \mathbf {V}_n (t):n \in \mathbb {N}_0\, \}$$, and the third from Lemma 3. Applying Lemma 3 under the same independence assumption, we obtain the mean and covariance matrix functions of $$\{\, \mathbf {Z} (\mathbf {x}; t):\mathbf {x} \in \mathbb {M}^d, t \in \mathbb {T}\, \}$$,

\begin{aligned} \mathsf {E} [\mathbf {Z} (\mathbf {x}; t )] = \sum _{n=0}^\infty \mathsf {E} [\mathbf {V}_n (t)] \mathsf {E} \left[ P_n^{ (\alpha , \beta ) } (\cos \rho ( \mathbf {x}, \mathbf {U}))\right] = \mathbf {0}, ~~~ \mathbf {x} \in \mathbb {M}^d, t \in \mathbb {T}, \end{aligned}

and

\begin{aligned}&{{\,\mathrm{cov}\,}}( \mathbf {Z} (\mathbf {x}_1; t_1), \mathbf {Z} ( \mathbf {x}_2; t_2) )\\&\quad = {{\,\mathrm{cov}\,}}\left( \sum _{i=0}^\infty \mathbf {V}_i (t_1) P_i^{ (\alpha , \beta ) } (\cos \rho ( \mathbf {x}_1, \mathbf {U})), ~ \sum _{j=0}^\infty \mathbf {V}_j (t_2) P_j^{ (\alpha , \beta ) } (\cos \rho ( \mathbf {x}_2, \mathbf {U})) \right) \\&\quad =\sum _{i=0}^\infty \sum _{j=0}^\infty \mathsf {E} [ \mathbf {V}_i (t_1) \mathbf {V}^{\top }_j (t_2) ] \mathsf {E} \left[ P_i^{ (\alpha , \beta ) } ( \cos \rho ( \mathbf {x}_1, \mathbf {U})) P_j^{ (\alpha , \beta ) } (\cos \rho ( \mathbf {x}_2, \mathbf {U})) \right] \\&\quad =\sum _{n=0}^\infty \mathsf {B}_n (t_1-t_2)\frac{1}{a^2_n} P_n^{ (\alpha , \beta ) } ( \cos \rho (\mathbf {x}_1, \mathbf {x}_2) ), ~~~~~~~~~~ \mathbf {x}_1, \mathbf {x}_2 \in \mathbb {M}^d, ~ t_1, t_2 \in \mathbb {T}. \end{aligned}

The latter is clearly isotropic and mean square continuous on $$\mathbb {M}^d$$ and stationary on $$\mathbb {T}$$. $$\square$$
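The series construction behind Theorem 5 can be illustrated by simulation. The sketch below works on the circle $$\mathbb {M}^1 = \mathbb {S}^1$$, where $$P_n^{ (-1/2, -1/2) } (\cos t)$$ is proportional to $$\cos (nt)$$; the coefficients $$\sigma _n^2 = 2^{-n}$$ and the temporal model $$\mathbf {V}_n (t) = \sigma _n (\xi _n \cos t + \eta _n \sin t)$$, which has stationary covariance $$\sigma _n^2 \cos (t_1 - t_2)$$, are hypothetical choices for illustration only. The empirical covariance of the simulated field matches the series formula, depending on the points only through $$\rho (\mathbf {x}_1, \mathbf {x}_2)$$ and $$t_1 - t_2$$:

```python
import numpy as np

rng = np.random.default_rng(42)

# Monte Carlo sketch of the Theorem-5-type construction on the circle S^1,
# with Z(x; t) = sum_n V_n(t) cos(n(x - U)), U uniform on [0, 2*pi) and
# V_n(t) = sigma_n (xi_n cos t + eta_n sin t) a stationary process on R.
M, N = 400_000, 3
sigma2 = 0.5 ** np.arange(1, N + 1)            # sigma_n^2 = 2^{-n}, summable

U = rng.uniform(0.0, 2.0 * np.pi, size=M)      # uniform location on the circle
xi = rng.normal(size=(M, N))
eta = rng.normal(size=(M, N))
n = np.arange(1, N + 1)

def Z(x, t):
    """One realisation per Monte Carlo sample of Z(x; t)."""
    V = np.sqrt(sigma2) * (xi * np.cos(t) + eta * np.sin(t))   # shape (M, N)
    return (V * np.cos(n * (x - U)[:, None])).sum(axis=1)

x1, t1, x2, t2 = 0.3, 0.0, 1.5, 0.9
emp = np.mean(Z(x1, t1) * Z(x2, t2))           # empirical covariance (mean is 0)

# Series formula: cov = cos(t1 - t2) * sum_n sigma_n^2 * (1/2) cos(n(x1 - x2)),
# since E[cos(i(x1 - U)) cos(j(x2 - U))] = delta_{ij} (1/2) cos(i(x1 - x2)).
theory = np.cos(t1 - t2) * np.sum(sigma2 * 0.5 * np.cos(n * (x1 - x2)))
print(abs(emp - theory) < 0.02)                # True: matches the series formula
```

Here the factor $$1/2$$ plays the role of $$1/a_n^2$$ in the last display of the proof.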


Ma, C., Malyarenko, A. Time-Varying Isotropic Vector Random Fields on Compact Two-Point Homogeneous Spaces. J Theor Probab 33, 319–339 (2020). https://doi.org/10.1007/s10959-018-0872-7


### Keywords

• Covariance matrix function
• Elliptically contoured random field
• Gaussian random field
• Isotropy
• Stationarity
• Jacobi polynomials

### Mathematics Subject Classification

• 60G60
• 62M10
• 62M30