Time-Varying Isotropic Vector Random Fields on Compact Two-Point Homogeneous Spaces

  • Chunsheng Ma
  • Anatoliy Malyarenko
Open Access

Abstract

A general form of the covariance matrix function is derived in this paper for a vector random field that is isotropic and mean square continuous on a compact connected two-point homogeneous space and stationary on a temporal domain. A series representation is presented for such a vector random field which involves Jacobi polynomials and the distance defined on the compact two-point homogeneous space.

Keywords

Covariance matrix function · Elliptically contoured random field · Gaussian random field · Isotropy · Stationarity · Jacobi polynomials

Mathematics Subject Classification (2010)

60G60 · 62M10 · 62M30

1 Introduction

Consider the sphere \(\mathbb {S}^d\) embedded into \(\mathbb {R}^{d+1}\) as follows: \(\mathbb {S}^d=\{\,\mathbf {x}\in \mathbb {R}^{d+1}:\Vert \mathbf {x}\Vert =1\,\}\), and define the distance between the points \(\mathbf {x}_1\) and \(\mathbf {x}_2\) by \(\rho (\mathbf {x}_1,\mathbf {x}_2)=\cos ^{-1}(\mathbf {x}_1^{\top }\mathbf {x}_2)\). With this distance, any isometry between two pairs of points can be extended to an isometry of \(\mathbb {S}^d\). A metric space with such a property is called two-point homogeneous. A complete classification of connected and compact two-point homogeneous spaces is performed in [40]. Besides spheres, the list includes projective spaces over different algebras; see Sect. 2 for details. It turns out that any such space is a manifold. We denote it by \(\mathbb {M}^d\), where d is the topological dimension of the manifold. Following [24], denote by \(\mathbb {T}\) either the set \(\mathbb {R}\) of real numbers or the set \(\mathbb {Z}\) of integers, and call it the temporal domain.

Let \((\varOmega ,\mathfrak {F},\mathsf {P})\) be a probability space.

Definition 1

An \(\mathbb {R}^m\)-valued spatio-temporal random field \(\mathbf {Z}(\omega ,\mathbf {x},t):\varOmega \times \mathbb {M}^d \times \mathbb {T}\rightarrow \mathbb {R}^m\) is called (wide-sense) isotropic over \(\mathbb {M}^d\) and (wide-sense) stationary over the temporal domain \(\mathbb {T}\), if its mean function \(\mathsf {E}[\mathbf {Z}(\mathbf {x}; t)]\) equals a constant vector, and its covariance matrix function
$$\begin{aligned} {{\,\mathrm{cov}\,}}(\mathbf {Z}(\mathbf {x}_1; t_1), \mathbf {Z}(\mathbf {x}_2; t_2))= & {} \mathsf {E}\left[ (\mathbf {Z}(\mathbf {x}_1; t_1) -\mathsf {E}[\mathbf {Z}(\mathbf {x}_1; t_1)])(\mathbf {Z}(\mathbf {x}_2; t_2) -\mathsf {E}[\mathbf {Z}(\mathbf {x}_2; t_2)])^{\top }\right] , \\&\mathbf {x}_1, \mathbf {x}_2 \in \mathbb {M}^d, t_1, t_2 \in \mathbb {T}, \end{aligned}$$
depends only on the time lag \(t_2-t_1\) between \(t_2\) and \(t_1\) and the distance \(\rho (\mathbf {x}_1,\mathbf {x}_2)\) between \(\mathbf {x}_1\) and \(\mathbf {x}_2\).
As usual, we omit the argument \(\omega \in \varOmega \) in the notation for the random field under consideration. In such a case, the covariance matrix function is denoted by \(\mathsf {C} ( \rho (\mathbf {x}_1, \mathbf {x}_2); t)\),
$$\begin{aligned} \mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); t_1-t_2)= & {} \mathsf {E}\left[ (\mathbf {Z}(\mathbf {x}_1; t_1) -\mathsf {E}[\mathbf {Z}(\mathbf {x}_1; t_1)])(\mathbf {Z}(\mathbf {x}_2; t_2) -\mathsf {E}[\mathbf {Z}(\mathbf {x}_2; t_2)])^{\top }\right] , \\&\mathbf {x}_1, \mathbf {x}_2 \in \mathbb {M}^d, t_1, t_2 \in \mathbb {T}. \end{aligned}$$
It is an \(m \times m\) matrix function, \(\mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); -t) = ( \mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); t) )^{\top }\), and the inequality
$$\begin{aligned} \sum _{i=1}^n \sum _{j=1}^n \mathbf {a}^{\top }_i \mathsf {C} (\rho (\mathbf {x}_i, \mathbf {x}_j); t_i-t_j) \mathbf {a}_j \ge 0 \end{aligned}$$
holds for every \(n \in \mathbb {N}\), any \(\mathbf {x}_i \in \mathbb {M}^d\), \(t_i \in \mathbb {T}\), and \(\mathbf {a}_i \in \mathbb {R}^m\) (\( i =1, 2, \ldots , n\)), where \(\mathbb {N}\) stands for the set of positive integers and \(\mathbb {N}_0\) denotes the set of nonnegative integers in what follows. On the other hand, given an \(m \times m\) matrix function with these properties, there exists an m-variate Gaussian or elliptically contoured random field \(\{\, \mathbf {Z} (\mathbf {x}; t):\mathbf {x} \in \mathbb {M}^d, t \in \mathbb {T}\, \}\) with \(\mathsf {C} ( \rho (\mathbf {x}_1, \mathbf {x}_2); t)\) as its covariance matrix function [21].
For a scalar and purely spatial random field \(\{\, Z(\mathbf {x}):\mathbf {x} \in \mathbb {M}^d\, \}\) that is isotropic and mean square continuous, its covariance function is continuous and possesses a series representation of the form [8, 14, 37]
$$\begin{aligned} {{\,\mathrm{cov}\,}}( Z (\mathbf {x}_1), Z( \mathbf {x}_2)) = \sum \limits _{n=0}^\infty b_n P_n^{ (\alpha , \beta ) } \left( \cos (\rho (\mathbf {x}_1, \mathbf {x}_2)) \right) ,\quad \mathbf {x}_1, \mathbf {x}_2 \in \mathbb {M}^d, \end{aligned}$$
(1)
where \(\{\, b_n:n \in \mathbb {N}_0\, \}\) is a sequence of nonnegative numbers such that \(\sum \nolimits _{n=0}^\infty b_n P_n^{ (\alpha , \beta ) } (1)\) converges, and \(P_n^{ (\alpha , \beta )} (x)\) is the Jacobi polynomial of degree n with parameters \((\alpha , \beta )\) [1, 38] listed in Table 2. A general form of the covariance matrix function and a series representation are derived in [24] for a vector random field that is isotropic and mean square continuous on a sphere and stationary on a temporal domain. They are extended to \(\mathbb {M}^d \times \mathbb {T}\) in this paper.
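For illustration, the series (1) is straightforward to evaluate numerically. The sketch below assumes SciPy's eval_jacobi routine, takes \(\mathbb {M}^d=\mathbb {S}^2\) (so \(\alpha =\beta =0\); see Table 2), and uses the purely illustrative coefficients \(b_n=2^{-n}\).

    # Truncated evaluation of the covariance series (1) on S^2 (alpha = beta = 0).
    # The coefficients b_n = 2^{-n} are an arbitrary summable choice.
    import numpy as np
    from scipy.special import eval_jacobi

    alpha, beta = 0.0, 0.0
    N = 50                                   # truncation level
    b = 0.5 ** np.arange(N)

    def cov(rho):
        """Series (1) at geodesic distance rho."""
        x = np.cos(rho)
        return sum(b[n] * eval_jacobi(n, alpha, beta, x) for n in range(N))

    print(cov(0.0))          # variance, approximately sum of b_n = 2
    print(cov(np.pi / 3))    # covariance at distance pi/3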

Isotropic random fields over \(\mathbb {S}^d\) with values in \(\mathbb {R}^1\) and \(\mathbb {C}^1\) were introduced in [35]. Theoretical investigations and practical applications of isotropic scalar-valued random fields on spheres may be found in [7, 11, 12, 19, 43], and vector- and tensor-valued random fields on spheres have been considered in [18, 23, 24, 30], among others. Cosmological applications, in particular, studies of tiny fluctuations of the Cosmic Microwave Background, require development of the theory of random sections of vector and tensor bundles over \(\mathbb {S}^2\) [4, 15, 25, 27]. See also surveys of the topic in the monographs [26, 31, 42, 44]. Isotropic random fields on connected compact two-point homogeneous spaces are studied in [2, 14, 28, 29, 33], among others.

Some important properties of \(\mathbb {M}^d\), \(\rho (\mathbf {x}_1, \mathbf {x}_2)\), and \(P_n^{(\alpha , \beta )} (x)\) are reviewed in Sect. 2, and two lemmas are derived: one as a special case of the Funk–Hecke formula on \(\mathbb {M}^d\) and the other as a kind of probability interpretation. A series representation is given in Sect. 3 for an isotropic and mean square continuous vector random field on \(\mathbb {M}^d\), together with a series expression of its covariance matrix function in terms of Jacobi polynomials. Section 4 deals with a spatio-temporal vector random field on \(\mathbb {M}^d\times \mathbb {T}\) that is isotropic and mean square continuous on \(\mathbb {M}^d\) and stationary on \(\mathbb {T}\), and obtains a series representation for the random field and a general form for its covariance matrix function. The lemmas and theorems are proved in Appendix A.

2 Compact Two-Point Homogeneous Spaces and Jacobi Polynomials

This section starts by recalling some important properties of the compact connected two-point homogeneous space \(\mathbb {M}^d\) and those of Jacobi polynomials and then establishes two useful lemmas on a special case of the Funk–Hecke formula on \(\mathbb {M}^d\) and its probability interpretation, which are conjectured in [24]. In what follows, we consider only connected compact two-point homogeneous spaces.

The compact connected two-point homogeneous spaces are shown in the first column of Table 1. Besides spheres, there are projective spaces over the fields \(\mathbb {R}\) and \(\mathbb {C}\), over the skew field \(\mathbb {H}\) of quaternions, and over the algebra \(\mathbb {O}\) of octonions. The possible values of d are chosen in such a way that all the spaces in Table 1 are different and exhaust the list. In the lowest dimensions, we have \(\mathbb {P}^1(\mathbb {R})=\mathbb {S}^1\), \(\mathbb {P}^2(\mathbb {C})=\mathbb {S}^2\), \(\mathbb {P}^4(\mathbb {H})=\mathbb {S}^4\), and \(\mathbb {P}^8(\mathbb {O})=\mathbb {S}^8\).
Table 1  An approach based on Lie algebras

\(\mathbb {M}^d\) | G | K | p | q | Zonal function
\(\mathbb {S}^d\), \(d=1, 2, \dots \) | \(\hbox {SO}(d+1)\) | \(\hbox {SO}(d)\) | 0 | \(d-1\) | \(R^{(\alpha ,\beta )}_{n}(\cos (\rho (\mathbf {x},\mathbf {o})))\)
\(\mathbb {P}^d(\mathbb {R})\), \(d=2, 3, \dots \) | \(\hbox {SO}(d+1)\) | \(\hbox {O}(d)\) | 0 | \(d-1\) | \(R^{(\alpha ,\beta )}_{2n}(\cos (\rho (\mathbf {x},\mathbf {o})/2))\)
\(\mathbb {P}^d(\mathbb {C})\), \(d=4, 6, \dots \) | \(\hbox {SU}(\frac{d}{2}+1)\) | \(\hbox {S}(\hbox {U}(\frac{d}{2})\times \hbox {U}(1))\) | \(d-2\) | 1 | \(R^{(\alpha ,\beta )}_{n}(\cos (\rho (\mathbf {x},\mathbf {o})))\)
\(\mathbb {P}^d(\mathbb {H})\), \(d=8, 12, \dots \) | \(\hbox {Sp}(\frac{d}{4}+1)\) | \(\hbox {Sp}(\frac{d}{4})\times \hbox {Sp}(1)\) | \(d-4\) | 3 | \(R^{(\alpha ,\beta )}_{n}(\cos (\rho (\mathbf {x},\mathbf {o})))\)
\(\mathbb {P}^{16}(\mathbb {O})\) | \(\hbox {F}_{4(-52)}\) | \(\hbox {Spin}(9)\) | 8 | 7 | \(R^{(\alpha ,\beta )}_{n}(\cos (\rho (\mathbf {x},\mathbf {o})))\)

All compact two-point homogeneous spaces share the property [6] that all of their geodesic lines are closed. Moreover, all geodesics are circles of the same length. In particular, when the sphere \(\mathbb {S}^d\) is embedded into the space \(\mathbb {R}^{d+1}\) as described in Sect. 1, the length of any geodesic line is equal to that of the unit circle, that is, \(2\pi \). It is natural to normalise the distance in such a way that the length of any geodesic line is equal to \(2\pi \), exactly as in the case of the unit sphere.

There are at least two different approaches to the subject of compact two-point homogeneous spaces in the literature. They are reviewed in the next two subsections.

2.1 An Approach Based on Lie Algebras

This approach goes back to Cartan [10]. It has been used in both the probabilistic literature [14] and the approximation theory literature [3].

Let G be the connected component of the group of isometries of \(\mathbb {M}^d\), and let K be the stationary subgroup of a fixed point in \(\mathbb {M}^d\), call it \(\mathbf {o}\). Cartan [10] defined and calculated the numbers p and q, which are dimensions of some root spaces connected with the Lie algebras of the groups G and K. The groups G and K are listed in the second and the third columns of Table 1, while the numbers p and q are listed in the fourth and fifth columns of the table.

By [17, Theorem 11], if \(\mathbb {M}^d\) is a two-point homogeneous space, then the only differential operators on \(\mathbb {M}^d\) that are invariant under all isometries of \(\mathbb {M}^d\) are the polynomials in a special differential operator \(\varDelta \) called the Laplace–Beltrami operator. Let \(\hbox {d}\nu (\mathbf {x})\) be the measure which is induced on the homogeneous space \(\mathbb {M}^d=G/K\) by the probabilistic invariant measure on G. It is possible to define \(\varDelta \) as a self-adjoint operator in the space \(H=L^2(\mathbb {M}^d,\hbox {d}\nu (\mathbf {x}))\). The spectrum of \(\varDelta \) is discrete, and the eigenvalues are
$$\begin{aligned} \lambda _{n}=-\varepsilon n(\varepsilon n+\alpha +\beta +1), ~~~~~~ n \in \mathbb {N}_0, \end{aligned}$$
where
$$\begin{aligned} \alpha =(p+q-1)/2,\qquad \beta =(q-1)/2, \end{aligned}$$
(2)
and where \(\varepsilon =2\) if \(\mathbb {M}^d= \mathbb {P}^d(\mathbb {R})\) and \(\varepsilon =1\) otherwise.
Let \(H_{n}\) be the eigenspace of \(\varDelta \) corresponding to \(\lambda _{n}\). The space H is the Hilbert direct sum of its subspaces \(H_{n}\), \(n\in \mathbb {N}_0\). The space \(H_n\) is finite-dimensional with
$$\begin{aligned} \dim H_n= \frac{(2n+\alpha +\beta +1)\varGamma (\beta +1) \varGamma (n+\alpha +\beta +1)\varGamma (n+\alpha +1)}{\varGamma (\alpha +1)\varGamma (\alpha +\beta +2)\varGamma (n+1)\varGamma (n+\beta +1)}. \end{aligned}$$
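For instance, for \(\mathbb {M}^d=\mathbb {S}^2\) we have \(p=0\) and \(q=1\) (Table 1), hence \(\alpha =\beta =0\) by (2), and
$$\begin{aligned} \dim H_n= \frac{(2n+1)\varGamma (1) \varGamma (n+1)\varGamma (n+1)}{\varGamma (1)\varGamma (2)\varGamma (n+1)\varGamma (n+1)}=2n+1, \end{aligned}$$
the familiar dimension of the space of spherical harmonics of degree n on \(\mathbb {S}^2\).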
Each of the spaces \(H_{n}\) contains a unique one-dimensional subspace whose elements are K-spherical functions; that is, functions invariant under the action of K on \(\mathbb {M}^d\). Such a function, say \(f_{n}(\mathbf {x})\), depends only on the distance \(r=\rho (\mathbf {x},\mathbf {o})\), \(f_{n}(\mathbf {x})=f^*_{n}(r)\). A spherical function is called zonal if \(f^*_{n}(0)=1\).
The zonal spherical functions of all compact connected two-point homogeneous spaces are listed in the last column of Table 1. To explain notation, we recall that the Jacobi polynomials
$$\begin{aligned} P_n^{(\alpha , \beta )} (x)= & {} \frac{\varGamma (\alpha +n+1)}{n! \varGamma (\alpha +\beta +n+1)}\sum _{k=0}^n\left( {\begin{array}{c}n\\ k\end{array}}\right) \frac{\varGamma (\alpha +\beta +n+k+1)}{\varGamma ( \alpha +k+1 )} \left( \frac{x-1}{2} \right) ^k,\\ x\in & {} [-1,1],\quad n \in \mathbb {N}_0, \end{aligned}$$
are the eigenfunctions of the Jacobi operator [38, Theorem 4.2.1]
$$\begin{aligned} \varDelta _x=\frac{1}{(1-x)^{\alpha }(1+x)^{\beta }}\frac{\hbox {d}}{\hbox {d}x} \left( (1-x)^{\alpha +1}(1+x)^{\beta +1}\frac{\hbox {d}}{\hbox {d}x}\right) . \end{aligned}$$
In the last column of Table 1, the normalised Jacobi polynomials are introduced,
$$\begin{aligned} R^{(\alpha ,\beta )}_{n}(x)=\frac{P^{(\alpha ,\beta )}_{n}(x)}{P^{(\alpha ,\beta )}_{n}(1)}, \qquad n \in \mathbb {N}_0, \end{aligned}$$
where
$$\begin{aligned} P^{(\alpha ,\beta )}_{n}(1)=\frac{\varGamma (n+\alpha +1)}{\varGamma (n+1)\varGamma (\alpha +1)}. \end{aligned}$$
(3)
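For example, for the complex projective space \(\mathbb {P}^4(\mathbb {C})\) one has \(p=2\) and \(q=1\), hence \(\alpha =1\) and \(\beta =0\) by (2), and (3) gives
$$\begin{aligned} P^{(1,0)}_{n}(1)=\frac{\varGamma (n+2)}{\varGamma (n+1)\varGamma (2)}=n+1, \end{aligned}$$
so that \(R^{(1,0)}_{n}(x)=P^{(1,0)}_{n}(x)/(n+1)\).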
The reason for the exceptional behaviour of the real projective spaces is as follows; see [14, 16]. The space \(\mathbb {P}^d(\mathbb {R})\) may be constructed by identification of antipodal points on the sphere \(\mathbb {S}^d\). An \(\hbox {O}(d)\)-invariant function f on \(\mathbb {P}^d(\mathbb {R})\) can be lifted to an \(\hbox {SO}(d)\)-invariant function g on \(\mathbb {S}^d\) by \(g(\mathbf {x})=f(\pi (\mathbf {x}))\), where \(\pi \) maps a point \(\mathbf {x}\in \mathbb {S}^d\) to the pair of antipodal points \(\pi (\mathbf {x})\in \mathbb {P}^d(\mathbb {R})\). This simply means that a function on [0, 1] can be extended to an even function on \([-1,1]\). Only even polynomials descend to functions on the manifold constructed in this way. By [38, Equation (4.1.3)], we have
$$\begin{aligned} P^{(\alpha ,\beta )}_{n}(x)=(-1)^{n}P^{(\beta ,\alpha )}_{n}(-x). \end{aligned}$$
For the real projective spaces \(\alpha =\beta \), and the corresponding normalised Jacobi polynomials are even if and only if n is even.

Remark 1

If two Lie groups have the same connected component of identity, then they have the same Lie algebra. For example, the groups \(\hbox {SO}(d)\) and \(\hbox {O}(d)\) have the same Lie algebra \(\mathfrak {so}(d)\). That is, the approach based on Lie algebras gives the same values of p and q for spheres and real projective spaces of equal dimensions. Only zonal spherical functions can distinguish between the two cases.

Only in the case \(\mathbb {M}^d=\mathbb {S}^1\) do we have \(p=q=0\). The reason is that only in this case is the Lie algebra \(\mathfrak {so}(2)\) commutative rather than semisimple, so it has no nonzero root spaces at all.

2.2 A Geometric Approach

There is a trick that allows us to write down all zonal spherical functions of all compact two-point homogeneous spaces in the same form, which is used in probabilistic literature [2, 26, 28, 29, 33] and in approximation theory [9, 13]. Denote \(y=\cos (\rho (\mathbf {x},\mathbf {o})/2)\). Then we have \(\cos (\rho (\mathbf {x},\mathbf {o}))=2y^2-1\). For the case of \(\mathbb {M}^d= \mathbb {P}^d(\mathbb {R})\), \(\alpha =\beta =(d-2)/2\). By [38, Theorem 4.1],
$$\begin{aligned} P^{(\alpha ,\alpha )}_{2n}(y)=\frac{\varGamma (2n+\alpha +1)\varGamma (n+1)}{\varGamma (n+\alpha +1)\varGamma (2 n+1)}P^{(\alpha ,-1/2)}_{n}(2y^2-1). \end{aligned}$$
In terms of the normalised Jacobi polynomials, we obtain
$$\begin{aligned} R^{(\alpha ,\alpha )}_{2n}(\cos (\rho (\mathbf {x},\mathbf {o})/2)) =R^{(\alpha ,-1/2)}_{n}(\cos (\rho (\mathbf {x},\mathbf {o}))). \end{aligned}$$
For the case of \(\mathbb {M}^d= \mathbb {P}^d(\mathbb {R})\), if we redefine \(\alpha =(d-2)/2\), \(\beta =-1/2\), then all zonal spherical functions of all compact two-point homogeneous spaces are given by the same expression \(R^{(\alpha ,\beta )}_{n}(\cos (\rho (\mathbf {x},\mathbf {o})))\).
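This identity is easy to check numerically. The following sketch (assuming SciPy's eval_jacobi) verifies it for \(\alpha =1/2\), the value corresponding to \(\mathbb {P}^3(\mathbb {R})\) in Table 1; the choice of d and of the test points is made only for illustration.

    # Numerical check of R^{(a,a)}_{2n}(cos(rho/2)) = R^{(a,-1/2)}_n(cos(rho)).
    import numpy as np
    from scipy.special import eval_jacobi

    def R(n, a, b, x):
        """Normalised Jacobi polynomial P_n^{(a,b)}(x) / P_n^{(a,b)}(1)."""
        return eval_jacobi(n, a, b, x) / eval_jacobi(n, a, b, 1.0)

    a = 0.5                                  # alpha = beta = (d-2)/2 for P^3(R)
    rho = np.linspace(0.0, np.pi, 7)
    for n in range(5):
        lhs = R(2 * n, a, a, np.cos(rho / 2))
        rhs = R(n, a, -0.5, np.cos(rho))
        assert np.allclose(lhs, rhs)
    print("identity verified numerically")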
It easily follows from (2) that the new values for p and q in the case of \(\mathbb {M}^d=\mathbb {P}^d(\mathbb {R})\) are \(p=d-1\) and \(q=0\). It is interesting to note that the new values of p and q for the real projective spaces together with their old values for the rest of spaces still have a meaning; see [13] and Table 2. This time, the values of p and q are connected with the geometry of the space \(\mathbb {M}^d\) rather than with Lie algebras.
Table 2  A geometric approach

\(\mathbb {M}^d\) | p | q | \(\alpha \) | \(\beta \) | \(\mathbb {A}\) | \(i(\mathbb {M}^d)\)
\(\mathbb {S}^d\), \(d=1, 2, \dots \) | 0 | \(d-1\) | \(\frac{d-2}{2}\) | \(\frac{d-2}{2}\) | \(\mathbb {S}^0\) | 1
\(\mathbb {P}^d(\mathbb {R})\), \(d=2, 3, \dots \) | \(d-1\) | 0 | \(\frac{d-2}{2}\) | \(-\frac{1}{2}\) | \( \mathbb {P}^{d-1}(\mathbb {R})\) | \(2^{d-1}\)
\(\mathbb {P}^d(\mathbb {C})\), \(d=4, 6, \dots \) | \(d-2\) | 1 | \(\frac{d-2}{2}\) | 0 | \( \mathbb {P}^{d-2}(\mathbb {C})\) | \(\left( {\begin{array}{c}d-1\\ d/2-1\end{array}}\right) \)
\(\mathbb {P}^d(\mathbb {H})\), \(d=8, 12, \dots \) | \(d-4\) | 3 | \(\frac{d-2}{2}\) | 1 | \( \mathbb {P}^{d-4}(\mathbb {H})\) | \(\frac{1}{d/2+1}\left( {\begin{array}{c}d-1\\ d/2-1\end{array}}\right) \)
\(\mathbb {P}^{16}(\mathbb {O})\) | 8 | 7 | 7 | 3 | \( \mathbb {P}^{8}(\mathbb {O})\) | 39

Specifically, let \(\mathbb {A}=\{\,\mathbf {x}\in \mathbb {M}^d :\rho (\mathbf {x},\mathbf {o})=\pi \,\}\). This set is called the antipodal manifold of the point \(\mathbf {o}\). The antipodal manifolds are listed in the sixth column of Table 2. Geometrically, if \(\mathbb {M}^d=\mathbb {S}^d\) and \(\mathbf {o}\) is the North pole, then \(\mathbb {A}=\mathbb {S}^0\) is the South pole. Otherwise, \(\mathbb {A}\) is the space at infinity of the point \(\mathbf {o}\) in terms of projective geometry. The new number p turns out to be the dimension of the antipodal manifold, while the number \(p+q+1\) is, as before, the dimension of the space \(\mathbb {M}^d\) itself.

In what follows, we use the geometric approach. It turns out that all the spaces \(\mathbb {M}^d\) are Riemannian manifolds, as defined in [5]. Each Riemannian manifold carries the canonical measure \(\mu \); see [5, pp. 10–11]. The measure \(\mu \) is proportional to the measure \(\nu \) constructed in Sect. 2.1. The coefficient of proportionality, that is, the total measure \(\mu (\mathbb {M}^d)\) of the compact manifold \(\mathbb {M}^d\), is called the volume of \(\mathbb {M}^d\).

Lemma 1

The volume of the space \(\mathbb {M}^d\) is
$$\begin{aligned} \omega _d=\mu (\mathbb {M}^d)=\frac{(4\pi )^{\alpha +1} \varGamma (\beta +1)}{\varGamma (\alpha +\beta +2)}. \end{aligned}$$
(4)
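As a consistency check of (4), for \(\mathbb {M}^d=\mathbb {S}^d\) Table 2 gives \(\alpha =\beta =(d-2)/2\), so that \(\alpha +1=\beta +1=d/2\) and \(\alpha +\beta +2=d\); hence
$$\begin{aligned} \omega _d=\frac{(4\pi )^{d/2}\varGamma (d/2)}{\varGamma (d)} =\frac{2\pi ^{(d+1)/2}}{\varGamma (\frac{d+1}{2})}, \end{aligned}$$
by Legendre's duplication formula \(\varGamma (d)=2^{d-1}\varGamma (d/2)\varGamma (\frac{d+1}{2})/\sqrt{\pi }\). This is the usual surface area of the unit sphere in \(\mathbb {R}^{d+1}\); in particular, \(\omega _2=4\pi \).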

In what follows, we write just \(\hbox {d}\mathbf {x}\) instead of \(\hbox {d}\mu (\mathbf {x})\).

2.3 Orthogonal Properties of Jacobi Polynomials

The set of Jacobi polynomials \(\{\, P_n^{(\alpha , \beta )} (x):n \in \mathbb {N}_0, x \in \mathbb {R}\, \}\) possesses two types of orthogonal properties. First, for each pair of \(\alpha >-1\) and \(\beta >-1\), this set is a complete orthogonal system on the interval \([-1, 1]\) with respect to the weight function \((1-x)^\alpha (1+x)^\beta \), in the sense that
$$\begin{aligned} \int _{-1}^1 P^{(\alpha , \beta )}_i (x) P^{(\alpha , \beta )}_j (x) (1-x)^\alpha (1+x)^\beta \hbox {d}x = \left\{ \begin{array}{ll} \frac{2^{\alpha +\beta +1} }{2 j +\alpha +\beta +1} \frac{\varGamma (j+\alpha +1) \varGamma (j+\beta +1)}{ j! \varGamma ( j +\alpha +\beta +1) }, ~ &{} ~ i =j, \\ 0, ~ &{} ~ i \ne j. \end{array}\right. \end{aligned}$$
(5)
Second, for the selected values of \(\alpha \) and \(\beta \) given by (2) with p and q from Table 2, they are orthogonal over \(\mathbb {M}^d\), as described in the following lemma, which is derived from the Funk–Hecke formula recently established in [3]. In the particular case \(\mathbb {M}^d=\mathbb {S}^d\), the Funk–Hecke formula may be found in classical references such as [1, 34].

Lemma 2

For \(i, j \in \mathbb {N}_0\), and \(\mathbf {x}_1\), \(\mathbf {x}_2 \in \mathbb {M}^d\),
$$\begin{aligned} \int _{\mathbb {M}^d } P_i^{(\alpha ,\beta ) } (\cos (\rho (\mathbf {x}_1,\mathbf {x}))) P_j^{(\alpha ,\beta )} (\cos (\rho (\mathbf {x}_2,\mathbf {x})))\,\mathrm{d}\mathbf {x} =\frac{\delta _{ij}\omega _d}{a_i^2} P_i^{(\alpha ,\beta )} (\cos (\rho (\mathbf {x}_1,\mathbf {x}_2))), \end{aligned}$$
where
$$\begin{aligned} a_n=\left( \frac{\varGamma (\beta +1)(2 n +\alpha +\beta +1)\varGamma (n+\alpha +\beta +1)}{\varGamma (\alpha +\beta +2)\varGamma (n+\beta +1)}\right) ^{\frac{1}{2}},\qquad n \in \mathbb {N}_0. \end{aligned}$$
(6)
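As a sanity check, Lemma 2 can be verified by Monte Carlo integration on \(\mathbb {S}^2\), where \(\alpha =\beta =0\), \(\omega _2=4\pi \) and, by (6), \(a_n^2=2n+1\). The sketch below assumes SciPy; the estimate agrees with the right-hand side up to Monte Carlo error.

    # Monte Carlo check of Lemma 2 on S^2.  Uniform points on S^2 are obtained
    # by normalising standard Gaussian vectors in R^3.
    import numpy as np
    from scipy.special import eval_jacobi

    rng = np.random.default_rng(0)
    N = 200_000
    X = rng.standard_normal((N, 3))
    X /= np.linalg.norm(X, axis=1, keepdims=True)

    x1 = np.array([0.0, 0.0, 1.0])
    x2 = np.array([np.sin(1.0), 0.0, np.cos(1.0)])    # rho(x1, x2) = 1

    omega = 4 * np.pi
    for i, j in [(2, 2), (2, 3)]:
        f = eval_jacobi(i, 0, 0, X @ x1) * eval_jacobi(j, 0, 0, X @ x2)
        lhs = omega * f.mean()                         # Monte Carlo estimate of the integral
        rhs = (i == j) * omega / (2 * i + 1) * eval_jacobi(i, 0, 0, x1 @ x2)
        print(i, j, round(float(lhs), 3), round(float(rhs), 3))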

The probabilistic interpretation of zonal spherical functions on \(\mathbb {M}^d\) is provided in Lemma 3. The spherical case is given in [23].

Definition 2

A random vector \(\mathbf {U}\) is said to be uniformly distributed on \(\mathbb {M}^d\) if, for every Borel set \(A\subseteq \mathbb {M}^d\) and every isometry g we have \(\mathsf {P} (\mathbf {U}\in A ) =\mathsf {P} (\mathbf {U}\in gA)\).

To construct \(\mathbf {U}\), we start with a measure \(\sigma \) proportional to the invariant measure \(\nu \) of Sect. 2.1. Let \(T_{\mathbf {o}}\) be the tangent space to \(\mathbb {M}^d\) at the point \(\mathbf {o}\). Choose a Cartesian coordinate system in \(T_{\mathbf {o}}\) and identify this space with the space \(\mathbb {R}^{d}\). Construct a chart \(\varphi :\mathbb {M}^d\setminus \mathbb {A}\rightarrow \mathbb {R}^{d}\) as follows. Put \(\varphi (\mathbf {o})=\mathbf {0}\in \mathbb {R}^d\). For any other point \(\mathbf {x}\in \mathbb {M}^d\setminus \mathbb {A}\), draw the unique geodesic line connecting \(\mathbf {o}\) and \(\mathbf {x}\). Let \(\mathbf {r}\in \mathbb {R}^{d}\) be the unit tangent vector to this geodesic line at the point \(\mathbf {o}\). Define
$$\begin{aligned} \varphi (\mathbf {x})= \mathbf {r} \tan (\rho (\mathbf {x},\mathbf {o})/2), \end{aligned}$$
and, for each Borel set \(B\subseteq \mathbb {M}^d\),
$$\begin{aligned} \sigma (B)=\int _{\varphi (B\setminus \mathbb {A})}\frac{\hbox {d}\mathbf {x}}{(1+\Vert \mathbf {x}\Vert ^2)^{\alpha +\beta +2}}. \end{aligned}$$
This measure is indeed invariant [39, p. 113]. Finally, define a probability space \((\varOmega ', \) \( \mathfrak {F}', \) \(\mathsf {P}')\) as follows: \(\varOmega '=\mathbb {M}^d\), \(\mathfrak {F}'\) is the \(\sigma \)-field of Borel subsets of \(\varOmega '\), and
$$\begin{aligned} \mathsf {P}'(B)=\frac{\sigma (B)}{\sigma (\mathbb {M}^d)},\qquad B\in \mathfrak {F}'. \end{aligned}$$
The random variable \(\mathbf {U}(\omega )=\omega \) is then uniformly distributed on \(\mathbb {M}^d\).

Lemma 3

Let \(\mathbf {U}\) be a random vector uniformly distributed on \(\mathbb {M}^d\). For \(n \in \mathbb {N}\),
$$\begin{aligned} Z_n(\mathbf {x})=a_n P_n^{(\alpha ,\beta )} (\cos (\rho (\mathbf {x},\mathbf {U}))), \qquad \mathbf {x}\in \mathbb {M}^d, \end{aligned}$$
is a centred isotropic random field with covariance function
$$\begin{aligned} {{\,\mathrm{cov}\,}}( Z_n (\mathbf {x}_1), Z_n (\mathbf {x}_2) ) =P_n^{ (\alpha ,\beta )} (\cos (\rho (\mathbf {x}_1, \mathbf {x}_2))), ~~~~~ \mathbf {x}_1, \mathbf {x}_2 \in \mathbb {M}^d, \end{aligned}$$
where \(a_n\) is given by (6). Moreover, for \(k \ne n\), the random fields \( \{\, Z_k (\mathbf {x}):\mathbf {x} \in \mathbb {M}^d\, \}\) and \( \{\,Z_n(\mathbf {x}): \mathbf {x} \in \mathbb {M}^d\, \}\) are uncorrelated:
$$\begin{aligned} {{\,\mathrm{cov}\,}}(Z_k (\mathbf {x}_1), Z_n (\mathbf {x}_2) ) =0, ~~~~ \mathbf {x}_1, \mathbf {x}_2 \in \mathbb {M}^d. \end{aligned}$$
(7)

3 Isotropic Vector Random Fields on \(\mathbb {M}^d\)

In the purely spatial case, this section presents a series representation for an m-variate isotropic and mean square continuous random field \(\{\, \mathbf {Z} (\mathbf {x}):\mathbf {x} \in \mathbb {M}^d\, \}\) and a series expression for its covariance matrix function, in terms of Jacobi polynomials. By mean square continuous, we mean that, for \(k =1, \ldots , m\),
$$\begin{aligned} \mathsf {E}\left[ | Z_k (\mathbf {x}_1) -Z_k (\mathbf {x}_2) |^2\right] \rightarrow 0, ~~ \text{ as } ~~ \rho (\mathbf {x}_1, \mathbf {x}_2 ) \rightarrow 0, ~ \mathbf {x}_1, \mathbf {x}_2 \in \mathbb {M}^d. \end{aligned}$$
It implies the continuity of each entry of the associated covariance matrix function in terms of \(\rho (\mathbf {x}_1, \mathbf {x}_2)\).
In what follows, d is assumed to be greater than 1; when \(d=1\), \(\mathbb {M}^d\) reduces to the unit circle \(\mathbb {S}^1\), for which the treatment of isotropic vector random fields may be found in [23, 24]. For an \(m \times m\) symmetric nonnegative definite matrix \(\mathsf {B}\) with (nonnegative) eigenvalues \(\lambda _1, \ldots , \lambda _m\), there is an orthogonal matrix \(\mathsf {S}\) such that \(\mathsf {S}^{-1}\mathsf {B}\mathsf {S}=\mathsf {D}\), where \(\mathsf {D}\) is a diagonal matrix with diagonal entries \(\lambda _1, \ldots , \lambda _m\). Define the square root of \( \mathsf {B}\) by
$$\begin{aligned} \mathsf {B}^{\frac{1}{2}}=\mathsf {S}\mathsf {D}^{\frac{1}{2}}\mathsf {S}^{-1}, \end{aligned}$$
where \(\mathsf {D}^{\frac{1}{2}}\) is a diagonal matrix with diagonal entries \(\sqrt{\lambda _1}, \ldots , \sqrt{ \lambda _m}\). Clearly, \(\mathsf {B}^{\frac{1}{2}}\) is symmetric, nonnegative definite, and \((\mathsf {B}^{\frac{1}{2}})^2=\mathsf {B}\). Denote by \(\mathsf {I}_m\) an \(m \times m\) identity matrix. For a sequence of \(m \times m\) matrices \(\{\, \mathsf {B}_n:n \in \mathbb {N}_0 \,\}\), the series \(\sum \nolimits _{n=0}^\infty \mathsf {B}_n\) is said to be convergent, if each of its entries is convergent.
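A minimal sketch of this matrix square root via the symmetric eigendecomposition (using NumPy; the small test matrix is arbitrary):

    # B^{1/2} = S D^{1/2} S^{-1} for a symmetric nonnegative definite B.
    import numpy as np

    def sqrt_psd(B):
        w, S = np.linalg.eigh(B)                 # B = S diag(w) S^T
        w = np.clip(w, 0.0, None)                # guard against round-off below zero
        return (S * np.sqrt(w)) @ S.T

    B = np.array([[2.0, 1.0], [1.0, 2.0]])
    R = sqrt_psd(B)
    print(np.allclose(R @ R, B))                 # True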

Theorem 1

Suppose that \(\{\, \mathbf {V}_n:n \in \mathbb {N}_0\, \}\) is a sequence of independent m-variate random vectors with \(\mathsf {E} ( \mathbf {V}_n)= \mathbf {0}\) and \({{\,\mathrm{cov}\,}}( \mathbf {V}_n, \mathbf {V}_n ) = a_n^2\mathsf {I}_m\), \(\mathbf {U}\) is a random vector uniformly distributed on \(\mathbb {M}^d\) and is independent of \(\{\, \mathbf {V}_n:n \in \mathbb {N}_0\, \}\), and that \(\{\, \mathsf {B}_n:n \in \mathbb {N}_0\, \}\) is a sequence of \(m \times m\) symmetric nonnegative definite matrices. If the series \(\sum \nolimits _{n=0}^\infty \mathsf {B}_n P_n^{ (\alpha , \beta ) } (1)\) converges, then
$$\begin{aligned} \mathbf {Z} (\mathbf {x}) = \sum _{n=0}^\infty \mathsf {B}_n^{\frac{1}{2}} \mathbf {V}_n P_n^{ (\alpha , \beta ) } ( \cos \rho (\mathbf {x}, \mathbf {U} )), ~~~~~~ \mathbf {x} \in \mathbb {M}^d, \end{aligned}$$
(8)
is a centred m-variate isotropic random field on \(\mathbb {M}^d\), with covariance matrix function
$$\begin{aligned} {{\,\mathrm{cov}\,}}( \mathbf {Z} (\mathbf {x}_1), \mathbf {Z}(\mathbf {x}_2) ) = \sum _{n=0}^\infty \mathsf {B}_n P_n^{(\alpha , \beta ) } \left( \cos \rho (\mathbf {x}_1, \mathbf {x}_2) \right) , ~~~~~~ \mathbf {x}_1, \mathbf {x}_2 \in \mathbb {M}^d. \end{aligned}$$
(9)
The terms of (8) are uncorrelated; more precisely,
$$\begin{aligned} {{\,\mathrm{cov}\,}}\left( \mathsf {B}_i^{\frac{1}{2}} \mathbf {V}_i P_i^{ (\alpha , \beta ) } ( \cos \rho (\mathbf {x}_1, \mathbf {U})), ~ \mathsf {B}_j^{\frac{1}{2}} \mathbf {V}_j P_j^{ (\alpha , \beta ) } ( \cos \rho (\mathbf {x}_2, \mathbf {U} )) \right) = \mathbf {0}, ~~~ \mathbf {x}_1, \mathbf {x}_2 \in \mathbb {M}^d, ~ i \ne j. \end{aligned}$$

Since \(\left| P_n^{ (\alpha , \beta ) } (\cos \vartheta ) \right| \le P_n^{ (\alpha , \beta ) } (1)\), \(n \in \mathbb {N}_0\), the convergence assumption on the series \(\sum \nolimits _{n=0}^\infty \mathsf {B}_n P_n^{ (\alpha , \beta ) } (1)\) ensures not only the mean square convergence of the series on the right-hand side of (8), but also the uniform and absolute convergence of the series on the right-hand side of (9).
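The construction of Theorem 1 is easy to simulate. The following sketch draws a truncation of (8) on \(\mathbb {S}^2\) with \(m=2\); the matrices \(\mathsf {B}_n=2^{-n}\mathsf {A}\), the Gaussian choice of \(\mathbf {V}_n\), the truncation level, and the sampling of \(\mathbf {U}\) by normalising a Gaussian vector are all illustrative assumptions.

    # Truncated simulation of the series (8) on S^2 (alpha = beta = 0, a_n^2 = 2n + 1).
    import numpy as np
    from scipy.special import eval_jacobi

    rng = np.random.default_rng(1)
    m, N = 2, 30
    A = np.array([[1.0, 0.5], [0.5, 1.0]])
    B = [0.5 ** n * A for n in range(N)]            # B_n = 2^{-n} A, summable

    def sqrt_psd(M):
        w, S = np.linalg.eigh(M)
        return (S * np.sqrt(np.clip(w, 0.0, None))) @ S.T

    a2 = 2 * np.arange(N) + 1                       # a_n^2 on S^2
    U = rng.standard_normal(3); U /= np.linalg.norm(U)                # uniform on S^2
    V = [np.sqrt(a2[n]) * rng.standard_normal(m) for n in range(N)]   # cov(V_n) = a_n^2 I_m

    def Z(x):
        """Truncated field (8) at a unit vector x on S^2."""
        c = float(np.dot(x, U))                     # cos(rho(x, U))
        return sum(sqrt_psd(B[n]) @ V[n] * eval_jacobi(n, 0, 0, c) for n in range(N))

    print(Z(np.array([0.0, 0.0, 1.0])))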

When \(\mathbb {M}^d=\mathbb {S}^2\) and \(m=1\), we have \(\dim H_n=2n+1\), and (9) takes the form
$$\begin{aligned} {{\,\mathrm{cov}\,}}( Z (\mathbf {x}_1), Z(\mathbf {x}_2) ) = \sum _{n=0}^\infty b_n P_n\left( \cos \rho (\mathbf {x}_1, \mathbf {x}_2) \right) , \end{aligned}$$
where \(P_n (x) \) are Legendre polynomials. In the theory of Cosmic Microwave Background, this equation is traditionally written in the form
$$\begin{aligned} {{\,\mathrm{cov}\,}}( Z (\mathbf {x}_1), Z(\mathbf {x}_2) ) = \sum _{\ell =0}^\infty (2\ell +1)C_{\ell } P_{\ell }\left( \mathbf {x}_1\cdot \mathbf {x}_2\right) , \end{aligned}$$
and the sequence \(\{\,C_{\ell }:\ell \ge 0\,\}\) is called the angular power spectrum. In the general case, define the angular power spectrum by
$$\begin{aligned} \mathsf {C}_n=\frac{1}{\dim H_n}\mathsf {B}_n. \end{aligned}$$
Many examples of the angular power spectrum for general compact two-point homogeneous spaces may be found in [2].

As the next theorem indicates, (9) is a general form that the covariance matrix function of an m-variate isotropic and mean square continuous random field on \(\mathbb {M}^d\) must take.

Theorem 2

For an m-variate isotropic and mean square continuous random field \( \{\, \mathbf {Z}(\mathbf {x}):\mathbf {x} \in \mathbb {M}^d\, \}\), its covariance matrix function \({{\,\mathrm{cov}\,}}( \mathbf {Z}(\mathbf {x}_1), \mathbf {Z} (\mathbf {x}_2) ) \) is of the form
$$\begin{aligned} \mathsf {C} ( \mathbf {x}_1, \mathbf {x}_2 ) = \sum _{n=0}^\infty \mathsf {B}_n P_n^{ (\alpha , \beta ) } \left( \cos \rho (\mathbf {x}_1, \mathbf {x}_2) \right) , ~~~~~~ \mathbf {x}_1, \mathbf {x}_2 \in \mathbb {M}^d, \end{aligned}$$
(10)
where \(\{\,\mathsf {B}_n:n \in \mathbb {N}_0\, \}\) is a sequence of \(m \times m\) nonnegative definite matrices and the series \(\sum \nolimits _{n=0}^\infty \mathsf {B}_n P_n^{ (\alpha , \beta ) } (1)\) converges.

Conversely, if an \(m \times m \) matrix function \(\mathsf {C} (\mathbf {x}_1, \mathbf {x}_2)\) is of the form (10), then it is the covariance matrix function of an m-variate isotropic Gaussian or elliptically contoured random field on \(\mathbb {M}^d\).

Examples of covariance matrix functions on \(\mathbb {S}^d\) may be found in, for instance, [23, 24]. We call for the development of parametric and semi-parametric covariance matrix structures on \(\mathbb {M}^d\).

4 Time-Varying Isotropic Vector Random Fields on \(\mathbb {M}^d\)

For an m-variate random field \(\{\, \mathbf {Z} (\mathbf {x}; t):\mathbf {x} \in \mathbb {M}^d, t \in \mathbb {T}\, \}\) that is isotropic and mean square continuous over \(\mathbb {M}^d\) and stationary on \(\mathbb {T}\), this section presents the general form of its covariance matrix function \(\mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); t)\), which is a continuous function of \(\rho (\mathbf {x}_1, \mathbf {x}_2)\) and is also a continuous function of \(t \in \mathbb {R}\) if \(\mathbb {T} = \mathbb {R}\). A series representation is given in the following theorem for such a random field, as an extension of that on \(\mathbb {S}^d \times \mathbb {T}\).

Theorem 3

If an m-variate random field \(\{ \mathbf {Z} (\mathbf {x}; t), \mathbf {x} \in \mathbb {M}^d, t \in \mathbb {T} \}\) is isotropic and mean square continuous over \(\mathbb {M}^d\) and stationary on \(\mathbb {T}\), then
$$\begin{aligned} \mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); -t) = ( \mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); t) )^{\top }, \end{aligned}$$
and \( \frac{\mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); t) + \mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); -t)}{2} \) is of the form
$$\begin{aligned}&\frac{\mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); t) + \mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); -t)}{2} \nonumber \\&\quad = \sum \limits _{n=0}^\infty \mathsf {B}_n (t) P_n^{ (\alpha , \beta ) } (\cos \rho (\mathbf {x}_1, \mathbf {x}_2)), \quad \mathbf {x}_1, \mathbf {x}_2\in \mathbb {M}^d, t\in \mathbb {T}, \end{aligned}$$
(11)
where, for each fixed \(n \in \mathbb {N}_0\), \( \mathsf {B}_n (t)\) is a stationary covariance matrix function on \(\mathbb {T}\), and, for each fixed \(t \in \mathbb {T}\), \( \mathsf {B}_n (t)\) (\( n \in \mathbb {N}_0\)) are \(m \times m \) symmetric matrices and \(\sum \nolimits _{n=0}^\infty \mathsf {B}_n (t) P_n^{ (\alpha , \beta ) } (1)\) converges.

While Theorem 3 gives a general form of \( \frac{\mathsf {C} ( \rho (\mathbf {x}_1, \mathbf {x}_2); t) + \mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); -t)}{2} \) rather than of \(\mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); t)\) itself, the latter can be obtained in certain special cases, such as the spatio-temporally symmetric case and the purely spatial case.

Corollary 1

If \(\mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); t)\) is spatio-temporal symmetric in the sense that
$$\begin{aligned} \mathsf {C} ( \rho (\mathbf {x}_1, \mathbf {x}_2); - t ) =\mathsf {C} ( \rho (\mathbf {x}_1, \mathbf {x}_2); t ), ~~~~~~~~ \mathbf {x}_1, \mathbf {x}_2 \in \mathbb {M}^d, ~ t \in \mathbb {T}, \end{aligned}$$
then it takes the form
$$\begin{aligned} \mathsf {C} (\rho (\mathbf {x}_1, \mathbf {x}_2); t) = \sum \limits _{n=0}^\infty \mathsf {B}_n (t) P_n^{ (\alpha , \beta ) } (\cos \rho (\mathbf {x}_1, \mathbf {x}_2)), ~~ \mathbf {x}_1, \mathbf {x}_2 \in \mathbb {M}^d, ~ t \in \mathbb {T}. \end{aligned}$$
In contrast to those in (11), the \(m \times m\) matrices \( \mathsf {B}_n (t)\) (\( n \in \mathbb {N}_0\)) in the next theorem are not necessarily symmetric. One simple such example is
$$\begin{aligned} \mathsf {B} (t) = \left\{ \begin{array}{ll} \mathsf {I}_m+\mathsf {A}\mathsf {A}^{\top }, ~ &{} ~ t=0, \\ \mathsf {A}, ~ &{} ~ t=1, \\ \mathsf {A}^{\top }, ~ &{} ~ t=-1, \\ \mathbf {0}, ~ &{} ~ \text{ otherwise }, \end{array}\right. \qquad t \in \mathbb {Z}, \end{aligned}$$
which is the covariance matrix function of an m-variate first-order moving average time series \(\mathbf {X} (t) = \varvec{\varepsilon } (t) + \mathsf {A} \varvec{\varepsilon } (t-1)\), \(t \in \mathbb {Z}\), where \(\{\, \varvec{\varepsilon } (t):t \in \mathbb {Z}\, \}\) is m-variate white noise with \(\mathsf {E}[ \varvec{\varepsilon } (t)] = \mathbf {0}\) and \({{\,\mathrm{cov}\,}}( \varvec{\varepsilon } (t_1), \varvec{\varepsilon } (t_2)) = \delta _{t_1 t_2} \mathsf {I}_m\), and \(\mathsf {A}\) is an \(m \times m\) matrix.

Theorem 4

An \(m \times m\) matrix function
$$\begin{aligned} \mathsf {C} ( \rho (\mathbf {x}_1, \mathbf {x}_2); t) = \sum \limits _{n=0}^\infty \mathsf {B}_n (t) P_n^{ (\alpha , \beta ) } (\cos \rho (\mathbf {x}_1, \mathbf {x}_2)), ~ ~~ \mathbf {x}_1, \mathbf {x}_2 \in \mathbb {M}^d, ~ t \in \mathbb {T}, \end{aligned}$$
(12)
is the covariance matrix function of an m-variate Gaussian or elliptically contoured random field on \( \mathbb {M}^d \times \mathbb {T} \) if and only if \(\{\, \mathsf {B}_n (t):n \in \mathbb {N}_0\, \}\) is a sequence of stationary covariance matrix functions on \(\mathbb {T}\) and \(\sum \nolimits _{n=0}^\infty \mathsf {B}_n (0) P_n^{ (\alpha , \beta ) } (1)\) converges.
As an example of (12), let \(\mathsf {B}_n (t) = 2^{-|t|} \mathsf {A}_n\), \(t \in \mathbb {Z}\), where \(\{\, \mathsf {A}_n:n \in \mathbb {N}_0\, \}\) is a sequence of \(m \times m\) nonnegative definite matrices and \(\sum \nolimits _{n=0}^\infty \mathsf {A}_n\) \(P_n^{ (\alpha , \beta ) } (1)\) converges; each \(\mathsf {B}_n (t)\) is a stationary covariance matrix function on \(\mathbb {Z}\), since \(2^{-|t|}\) is the autocorrelation function of a first-order autoregression. In this case, (12) is the covariance matrix function of an m-variate Gaussian or elliptically contoured random field on \( \mathbb {M}^d \times \mathbb {Z}\).
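The following sketch evaluates a truncation of (12) numerically on \(\mathbb {S}^2\) under the illustrative choice above, taking \(\mathsf {A}_n=2^{-n}\mathsf {A}\) for a fixed nonnegative definite matrix \(\mathsf {A}\); the choice of \(\mathsf {A}\), the truncation level, and the use of SciPy's eval_jacobi are assumptions made only for illustration.

    # Truncated evaluation of (12) on S^2 (alpha = beta = 0) with the
    # illustrative choice B_n(t) = 2^{-|t|} A_n, A_n = 2^{-n} A.
    import numpy as np
    from scipy.special import eval_jacobi

    A = np.array([[1.0, 0.3], [0.3, 1.0]])   # any m x m nonnegative definite matrix
    N = 40                                    # truncation level

    def C(rho, t):
        """Truncated covariance matrix function (12) at distance rho and time lag t."""
        x = np.cos(rho)
        return sum(2.0 ** (-n - abs(t)) * eval_jacobi(n, 0, 0, x) * A for n in range(N))

    print(C(0.0, 0))         # approximately 2 * A
    print(C(np.pi / 4, 3))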

Gaussian and second-order elliptically contoured random fields form one of the largest families, if not the largest, that allow any possible correlation structure [21]. The covariance matrix functions developed in Theorem 4 can be adopted for a Gaussian or elliptically contoured vector random field. However, they may not be admissible for other non-Gaussian random fields, such as log-Gaussian [32], \(\chi ^2\) [20], K-distributed [22], or skew-Gaussian ones, for which admissible correlation structures must be investigated on a case-by-case basis. A series representation is given in the following theorem for an m-variate spatio-temporal random field on \(\mathbb {M}^d\times \mathbb {T}\).

Theorem 5

An m-variate random field
$$\begin{aligned} \mathbf {Z} (\mathbf {x}; t) = \sum _{n=0}^\infty \mathbf {V}_n (t) P_n^{ (\alpha , \beta ) } ( \cos \rho (\mathbf {x}, \mathbf {U})), ~~~~~~ \mathbf {x} \in \mathbb {M}^d, ~ t \in \mathbb {T}, \end{aligned}$$
(13)
is isotropic and mean square continuous on \(\mathbb {M}^d\), stationary on \(\mathbb {T}\), and possesses mean \(\mathbf {0}\) and covariance matrix function (12), where \(\{ \,\mathbf {V}_n (t):n \in \mathbb {N}_0 \, \}\) is a sequence of independent m-variate stationary stochastic processes on \(\mathbb {T}\) with
$$\begin{aligned} \mathsf {E} ( \mathbf {V}_n )= \mathbf {0}, ~~~ {{\,\mathrm{cov}\,}}( \mathbf {V}_n (t_1), \mathbf {V}_n (t_2) ) = a_n^2 \mathsf {B}_n (t_1-t_2), ~~~ n \in \mathbb {N}_0, \end{aligned}$$
the random vector \(\mathbf {U}\) is uniformly distributed on \(\mathbb {M}^d\) and is independent of \(\{\, \mathbf {V}_n (t) :\) \( n \in \mathbb {N}_0\, \}\), and \(\sum \nolimits _{n=0}^\infty \mathsf {B}_n (0) P_n^{ (\alpha , \beta ) } (1)\) converges.
The distinct terms of (13) are uncorrelated with each other,
$$\begin{aligned}&{{\,\mathrm{cov}\,}}\left( \mathbf {V}_i (t) P_i^{ (\alpha , \beta ) } ( \cos \rho (\mathbf {x}, \mathbf {U}) ), ~ \mathbf {V}_j (t) P_j^{ (\alpha , \beta ) } ( \cos \rho (\mathbf {x}, \mathbf {U}) ) \right) = \mathbf {0},\\&\quad \mathbf {x} \in \mathbb {M}^d, ~ t \in \mathbb {T}, i \ne j, \end{aligned}$$
due to Lemma 3 and the independence assumption among \(\mathbf {U}\), \(\mathbf {V}_i (t)\), and \(\mathbf {V}_j (t)\). The vector stochastic process \(\mathbf {V}_n (t)\) can be expressed, in terms of \(\mathbf {Z} (\mathbf {x}; t)\) and \(\mathbf {U}\), as
$$\begin{aligned} \mathbf {V}_n (t) = \frac{a^2_n}{\omega _d P_n^{ (\alpha , \beta ) } (1)} \int _{\mathbb {M}^d} \mathbf {Z} (\mathbf {x}; t) P_n^{ (\alpha , \beta ) } (\cos \rho (\mathbf {x}, \mathbf {U})) \mathrm{d} \mathbf {x}, ~~~~~ t \in \mathbb {T}, ~ n \in \mathbb {N}_0, \end{aligned}$$
where the integral is understood as a Bochner integral of a function taking values in the Hilbert space of random vectors \(\mathbf {Z}\in \mathbb {R}^m\) with \(\mathsf {E}[\Vert \mathbf {Z}\Vert ^2_{\mathbb {R}^m}]<\infty \).
It is obtained after we multiply both sides of (13) by \(P_n^{ (\alpha , \beta ) } (\cos \rho (\mathbf {x}, \mathbf {U}))\), integrate over \(\mathbb {M}^d\), and apply Lemma 2,
$$\begin{aligned}&\int _{\mathbb {M}^d} \mathbf {Z} (\mathbf {x}; t) P_n^{ (\alpha , \beta ) } (\cos \rho (\mathbf {x}, \mathbf {U})) \mathrm{d} \mathbf {x}\\&\quad = \sum _{k=0}^\infty \mathbf {V}_k (t) \int _{\mathbb {M}^d} P_k^{ (\alpha , \beta ) } ( \cos \rho (\mathbf {x}, \mathbf {U}) ) P_n^{(\alpha , \beta ) } ( \cos \rho (\mathbf {x}, \mathbf {U})) \mathrm{d} \mathbf {x} \\&\quad = \frac{\omega _d}{a_n^2} P_n^{ (\alpha , \beta ) } (1) \mathbf {V}_n (t). \end{aligned}$$
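A truncation of (13) can be simulated along the same lines as (8). The sketch below takes \(\mathbb {M}^d=\mathbb {S}^2\), \(\mathbb {T}=\mathbb {Z}\), \(m=2\), and the illustrative choice \(\mathsf {B}_n(t)=2^{-n}\varphi ^{|t|}\mathsf {A}\); the processes \(\mathbf {V}_n(t)\) are built from unit-variance Gaussian AR(1) components so that \({{\,\mathrm{cov}\,}}( \mathbf {V}_n (t_1), \mathbf {V}_n (t_2) ) = a_n^2 \mathsf {B}_n (t_1-t_2)\). All specific choices are assumptions made only for illustration.

    # Truncated simulation of the time-varying series (13) on S^2 x Z with m = 2.
    import numpy as np
    from scipy.special import eval_jacobi

    rng = np.random.default_rng(2)
    m, N, T, phi = 2, 20, 100, 0.6
    A = np.array([[1.0, 0.3], [0.3, 1.0]])

    def sqrt_psd(M):
        w, S = np.linalg.eigh(M)
        return (S * np.sqrt(np.clip(w, 0.0, None))) @ S.T

    Ah = sqrt_psd(A)
    a2 = 2 * np.arange(N) + 1                        # a_n^2 on S^2

    def ar1_path(size):
        """Unit-variance Gaussian AR(1) path with autocorrelation phi^|t|."""
        w = np.empty(size)
        w[0] = rng.standard_normal()
        for t in range(1, size):
            w[t] = phi * w[t - 1] + np.sqrt(1 - phi ** 2) * rng.standard_normal()
        return w

    V = []                                           # V[n][t] is the m-vector V_n(t)
    for n in range(N):
        W = np.stack([ar1_path(T) for _ in range(m)], axis=1)
        V.append(np.sqrt(a2[n] * 0.5 ** n) * (W @ Ah))   # cov(V_n(t1), V_n(t2)) = a_n^2 B_n(t1 - t2)

    U = rng.standard_normal(3); U /= np.linalg.norm(U)   # uniform on S^2

    def Z(x, t):
        """Truncated field (13) at a unit vector x and integer time t."""
        c = float(np.dot(x, U))
        return sum(V[n][t] * eval_jacobi(n, 0, 0, c) for n in range(N))

    print(Z(np.array([0.0, 0.0, 1.0]), 10))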

Acknowledgements

We are grateful to the anonymous referee for careful reading of the manuscript and useful remarks.

References

  1. Andrews, G.E., Askey, R., Roy, R.: Special Functions. Encyclopedia of Mathematics and its Applications, vol. 71. Cambridge University Press, Cambridge (1999)
  2. Askey, R., Bingham, N.H.: Gaussian processes on compact symmetric spaces. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 37(2), 127–143 (1976/77)
  3. Azevedo, D., Barbosa, V.S.: Covering numbers of isotropic reproducing kernels on compact two-point homogeneous spaces. Math. Nachr. 290(16), 2444–2458 (2017)
  4. Baldi, P., Rossi, M.: Representation of Gaussian isotropic spin random fields. Stoch. Process. Appl. 124(5), 1910–1941 (2014)
  5. Berger, M., Gauduchon, P., Mazet, E.: Le spectre d'une variété riemannienne. Lecture Notes in Mathematics, vol. 194. Springer, Berlin (1971)
  6. Besse, A.L.: Manifolds All of Whose Geodesics are Closed. With appendices by D.B.A. Epstein, J.-P. Bourguignon, L. Bérard-Bergery, M. Berger and J.L. Kazdan. Ergebnisse der Mathematik und ihrer Grenzgebiete, vol. 93. Springer, Berlin (1978)
  7. Bingham, N.H.: Positive definite functions on spheres. Proc. Cambridge Philos. Soc. 73, 145–156 (1973)
  8. Bochner, S.: Hilbert distances and positive definite functions. Ann. Math. 2(42), 647–656 (1941)
  9. Brown, G., Dai, F.: Approximation of smooth functions on compact two-point homogeneous spaces. J. Funct. Anal. 220(2), 401–423 (2005)
  10. Cartan, E.: Sur certaines formes Riemanniennes remarquables des géométries à groupe fondamental simple. Ann. Sci. Éc. Norm. Supér. 3(44), 345–467 (1927)
  11. Cheng, D., Xiao, Y.: Excursion probability of Gaussian random fields on sphere. Bernoulli 22(2), 1113–1130 (2016)
  12. Cohen, S., Lifshits, M.A.: Stationary Gaussian random fields on hyperbolic spaces and on Euclidean spheres. ESAIM Probab. Stat. 16, 165–221 (2012)
  13. Colzani, L., Tenconi, M.: Localization for Riesz means on the compact rank one symmetric spaces. In: Proceedings of the AMSI/AustMS 2014 Workshop in Harmonic Analysis and its Applications, Proc. Centre Math. Appl. Austral. Nat. Univ., vol. 47, pp. 26–49. Austral. Nat. Univ., Canberra (2017)
  14. Gangolli, R.: Positive definite kernels on homogeneous spaces and certain stochastic processes related to Lévy's Brownian motion of several parameters. Ann. Inst. H. Poincaré Sect. B (N.S.) 3, 121–226 (1967)
  15. Geller, D., Marinucci, D.: Spin wavelets on the sphere. J. Fourier Anal. Appl. 16(6), 840–884 (2010)
  16. González Vieli, F.J.: Pointwise Fourier inversion on rank one compact symmetric spaces using Cesàro means. Acta Sci. Math. (Szeged) 68(3–4), 783–795 (2002)
  17. Helgason, S.: Differential operators on homogeneous spaces. Acta Math. 102, 239–299 (1959)
  18. Leonenko, N., Sakhno, L.: On spectral representations of tensor random fields on the sphere. Stoch. Anal. Appl. 30(1), 44–66 (2012)
  19. Leonenko, N.N., Shieh, N.R.: Rényi function for multifractal random fields. Fractals 21(2), 1350009 (2013)
  20. Ma, C.: Covariance matrix functions of vector \(\chi ^2\) random fields in space and time. IEEE Trans. Commun. 59(9), 2554–2561 (2011). https://doi.org/10.1109/TCOMM.2011.063011.100528
  21. Ma, C.: Vector random fields with second-order moments or second-order increments. Stoch. Anal. Appl. 29(2), 197–215 (2011)
  22. Ma, C.: K-distributed vector random fields in space and time. Stat. Probab. Lett. 83(4), 1143–1150 (2013). https://doi.org/10.1016/j.spl.2013.01.004
  23. Ma, C.: Stochastic representations of isotropic vector random fields on spheres. Stoch. Anal. Appl. 34(3), 389–403 (2016)
  24. Ma, C.: Time varying isotropic vector random fields on spheres. J. Theor. Probab. 30(4), 1763–1785 (2017)
  25. Malyarenko, A.: Invariant random fields in vector bundles and application to cosmology. Ann. Inst. Henri Poincaré Probab. Stat. 47(4), 1068–1095 (2011)
  26. Malyarenko, A.: Invariant Random Fields on Spaces with a Group Action. Probability and its Applications. Springer, Heidelberg (2013). With a foreword by Nikolai Leonenko
  27. Malyarenko, A.: Spectral expansions of random sections of homogeneous vector bundles. Teor. Ĭmovīr. Mat. Stat. 97, 142–156 (2017)
  28. Malyarenko, A.A.: Local properties of Gaussian random fields on compact symmetric spaces, and Jackson-type and Bernstein-type theorems. Ukraïn. Mat. Zh. 51(1), 60–68 (1999)
  29. Malyarenko, A.A.: Abelian and Tauberian theorems for random fields on two-point homogeneous spaces. Teor. Ĭmovīr. Mat. Stat. 69, 106–118 (2003)
  30. Malyarenko, A.A., Olenko, A.Y.: Multidimensional covariant random fields on commutative locally compact groups. Ukraïn. Mat. Zh. 44(11), 1505–1510 (1992)
  31. Marinucci, D., Peccati, G.: Random Fields on the Sphere. Representation, Limit Theorems and Cosmological Applications. London Mathematical Society Lecture Note Series, vol. 389. Cambridge University Press, Cambridge (2011)
  32. Matheron, G.: The internal consistency of models in geostatistics. In: Armstrong, M. (ed.) Geostatistics, pp. 21–38. Springer, Dordrecht (1989)
  33. Molčan, G.M.: Homogeneous random fields on symmetric spaces of rank one. Teor. Veroyatnost. i Mat. Statist. 21, 123–148, 167 (1979)
  34. Müller, C.: Analysis of Spherical Symmetries in Euclidean Spaces. Applied Mathematical Sciences, vol. 129. Springer, New York (1998)
  35. Obukhov, A.M.: Statistically homogeneous fields on a sphere. Usp. Mat. Nauk 2(2), 196–198 (1947)
  36. Sakamoto, K.: Helical minimal immersions of compact Riemannian manifolds into a unit sphere. Trans. Am. Math. Soc. 288(2), 765–790 (1985)
  37. Schoenberg, I.J.: Positive definite functions on spheres. Duke Math. J. 9, 96–108 (1942)
  38. Szegő, G.: Orthogonal Polynomials. American Mathematical Society Colloquium Publications, vol. XXIII, 4th edn. American Mathematical Society, Providence (1975)
  39. Volchkov, V.V., Volchkov, V.V.: Offbeat Integral Geometry on Symmetric Spaces. Birkhäuser, Basel (2013). https://doi.org/10.1007/978-3-0348-0572-8
  40. Wang, H.C.: Two-point homogeneous spaces. Ann. Math. 2(55), 177–191 (1952)
  41. Weinstein, A.: On the volume of manifolds all of whose geodesics are closed. J. Differ. Geom. 9, 513–517 (1974)
  42. Yadrenko, M.Ĭ.: Spectral Theory of Random Fields. Translation Series in Mathematics and Engineering. Optimization Software, Inc., Publications Division, New York (1983). Translated from the Russian
  43. Yaglom, A.M.: Second-order homogeneous random fields. In: Proceedings of the 4th Berkeley Symposium on Mathematical Statistics and Probability, vol. II, pp. 593–622. University of California Press, Berkeley (1961)
  44. Yaglom, A.M.: Correlation Theory of Stationary and Related Random Functions, vol. I: Basic Results. Springer Series in Statistics. Springer, New York (1987)

Copyright information

© The Author(s) 2018

Open Access  This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. Department of Mathematics, Statistics, and Physics, Wichita State University, Wichita, USA
  2. Division of Applied Mathematics, Mälardalen University, Västerås, Sweden
