Abstract
Geostatistical modeling is often based on covariance functions, i.e., positive definite functions. However, when interpolation problems must be solved, it is advisable to restrict attention to the subset of strictly positive definite functions. Indeed, it will be argued that requiring strict positive definiteness of a covariance function is convenient from both a theoretical and a practical point of view. In this paper, an extensive analysis of strictly positive definite covariance functions is given. The closure of the set of strictly positive definite functions under sums and products of covariance functions defined on the same Euclidean space, on factor spaces, or on partially overlapped lower dimensional spaces is analyzed. These results are particularly useful (a) to extend strict positive definiteness to higher dimensional spaces starting from covariance functions which are defined only on lower dimensional spaces and/or are strictly positive definite only on lower dimensional spaces, (b) to construct strictly positive definite covariance functions in space–time, and (c) to obtain new asymmetric and strictly positive definite covariance functions.
References
Bernstein S (1928) Sur les fonctions absolument monotones. Acta Math 52(1):1–66
Bochner S (1959) Lectures on Fourier integrals. Princeton University Press, New Jersey
Chang K (1996) Strictly positive definite functions. J Approx Theory 87(2):148–158
Chen D, Menegatto V, Sun X (2003) A necessary and sufficient condition for strictly positive definite functions on spheres. Proc Am Math Soc 131(9):2733–2740
Cressie N, Huang H (1999) Classes of nonseparable, spatio-temporal stationary covariance functions. J Am Stat Assoc 94(448):1330–1340
Cressie N, Majure J (1997) Spatio-temporal statistical modeling of livestock waste in streams. J Agric Biol Environ Stat 2(1):24–47
De Iaco S, Posa D (2013) Positive and negative non-separability for space–time covariance models. J Stat Plan Inf 143(2):378–391
De Iaco S, Myers D, Posa D (2001) Space–time analysis using a general product–sum model. Stat Probab Lett 52(1):21–28
De Iaco S, Myers D, Posa D (2011) On strict positive definiteness of product and product–sum covariance models. J Stat Plan Inf 141:1132–1140
Gneiting T (2002) Nonseparable, stationary covariance functions for space–time data. J Am Stat Assoc 97(458):590–600
Gneiting T (2013) Strictly and non-strictly positive definite functions on spheres. Bernoulli 19(4):1327–1349
Horn RA, Johnson CR (1991) Topics in matrix analysis. Cambridge University Press, New York
Horn RA, Johnson CR (1996) Matrix analysis. Cambridge University Press, New York
Khinchin A (1934) Korrelationstheorie der stationären stochastischen Prozesse. Math Ann 109:604–615
Kolovos A, Christakos G, Hristopulos D, Serre M (2004) Methods for generating non-separable spatiotemporal covariance models with potential environmental applications. Adv Water Resour 27(8):815–830
Ma C (2002) Spatio-temporal covariance functions generated by mixtures. Math Geol 34(8):965–975
Ma C (2003) Families of spatio-temporal stationary covariance models. J Stat Plan Inf 116(2):489–501
Ma C (2005) Linear combinations of space–time covariance functions and variograms. IEEE Trans Signal Process 53(3):857–864
Martinez-Ruiz F, Mateu J, Montes F, Porcu E (2010) Mortality risk assessment through stationary space–time covariance functions. Stoch Environ Res Risk Assess 24(4):519–526
Mathias M (1923) Über positive Fourier-Integrale. Math Z 16:103–125
Menegatto VA (1994) Strictly positive definite kernels on the Hilbert sphere. Appl Anal 55:91–101
Miller K, Samko S (2001) Completely monotonic functions. Integr Transforms Spec Funct 12(4):389–402
Montero J, Fernández-Avilés G, Mateu J (2015) Spatial and spatio-temporal geostatistical modeling and kriging. Wiley, Hoboken
Myers DE (1988) Interpolation with positive definite functions. Sciences de la Terre 28:251–265
Myers DE, Journel AG (1990) Variograms with zonal anisotropies and non-invertible kriging systems. Math Geol 22(7):779–785
Pinkus A (2004a) Strictly Hermitian positive definite functions. J Anal Math 94:293–318
Pinkus A (2004b) Strictly positive definite functions on a real inner product space. Adv Comput Math 20(4):263–271
Porcu E, Schilling L (2011) From Schoenberg to Pick–Nevanlinna: toward a complete picture of the variogram class. Bernoulli 17(1):441–455
Porcu E, Gregori P, Mateu J (2006) Nonseparable stationary anisotropic space–time covariance functions. Stoch Environ Res Risk Assess 21(2):113–122
Rodrigues A, Diggle P (2010) A class of convolution-based models for spatio-temporal processes with non-separable covariance structure. Scand J Stat 37(4):553–567
Ron A, Sun X (1996) Strictly positive definite functions on spheres in Euclidean spaces. Math Comput 65(216):1513–1530
Schoenberg I (1938a) Metric spaces and completely monotone functions. Ann Math 39(4):811–841
Schoenberg I (1938b) Metric spaces and positive definite functions. Trans Am Math Soc 44(3):522–536
Schreiner M (1997) On a new condition for strictly positive definite functions on spheres. Proc Am Math Soc 125:531–539
Stein ML (2005) Space-time covariance functions. J Am Stat Assoc 100(469):310–320
Strauss H (1997) On interpolation with products of positive definite functions. Numer Algorithms 15(2):153–165
Wendland H (2005) Scattered data approximation. Cambridge University Press, New York
Xu Y, Cheney E (1992) Strictly positive definite functions on spheres. Proc Am Math Soc 116(4):977–981
Yaglom A (1962) An introduction to the theory of stationary random functions (translated and edited by RA Silverman). Dover Publications, New York, p 235
zu Castell W, Filbir F, Szwarc R (2005) Strictly positive definite functions in \({{\mathbb{R}}}^d\). J Approx Theory 137(2):277–280
Acknowledgements
The authors would like to thank the associate editor, the reviewers and Prof. Giorgio Metafune for their interest in the paper and for their useful comments and suggestions.
Appendix
Proof of Theorem 2, point 1
Given \(n=2\), let \(C_1\) and \(C_2\) be covariance functions, both defined on \({\mathbb {R}}^m\), and

\(C(\mathbf{h})=C_1(\mathbf{h})\,C_2(\mathbf{h}) \qquad (48)\)

a covariance function defined on \({\mathbb {R}}^m\). Assume that \(C_1\) is a SPD covariance function and \(C_2\) is a not identically zero (i.e., \(C_2(\mathbf{0})>0\)) positive definite covariance function. For any \(k\in N_+\), any choice of distinct points \(\mathbf{s}_i \in {\mathbb {R}}^{m}, i=1,\ldots ,k\), and any \({\varvec{\lambda }}=[\lambda _1,\ldots ,\lambda _k]^T\in {\mathbb {R}}^k\), not zero, the quadratic form can be written as \({\varvec{\lambda }}^T{\mathbf{C}}{\varvec{\lambda }}\), where the generic element of \({\mathbf{C}}\) is

\({\mathbf{C}}[ij]=C_1({\mathbf{s}}_i-{\mathbf{s}}_j)\,C_2({\mathbf{s}}_i-{\mathbf{s}}_j). \qquad (49)\)
The matrix \({\mathbf{C}}\) is the Hadamard product of the matrices \({\mathbf{C}} _1\), whose generic element is \({\mathbf{C}} _1[ij]=C_1({\mathbf{s }}_i-{\mathbf{s }}_j)\), and \({\mathbf{C}} _2\), whose generic element is \({\mathbf{C}} _2[ij]=C_2({\mathbf{s }}_i-{\mathbf{s }}_j)\). Since \({\mathbf{C}} _1\) is SPD and \({\mathbf{C}} _2\) is positive semidefinite with no diagonal entry equal to 0, by the Schur product theorem (Horn and Johnson 1991), the matrix \({\mathbf{C}}\) defined in (49) is SPD. Thus the model in (48) is SPD. For \(n=3\), assume that \(C_1\) is a SPD covariance function and that \(C_2\), \(C_3\) are not identically zero positive definite covariance functions. Since \(C_2C_3\) is still a not identically zero positive definite covariance function, the covariance function \(C(\mathbf{h})=C_1(\mathbf{h})\,[C_2(\mathbf{h})\,C_3(\mathbf{h})]\) is SPD. The proof can be completed by induction for any \(n\). \(\square\)
Proof of Theorem 2, point 2
Part 1 (only if)
The proof of this implication (from the right to the left) follows from the quadratic form in (2).
Part 2 (if)
The proof of this implication (from the left to the right) follows from the Bochner characterization, as clarified hereafter. However, before proving this part of the theorem, it is worth introducing the following Lemma.
Lemma 2
C is SPD if and only if any trigonometric polynomial \(P(\omega )=\displaystyle \sum _{j=1}^{n}\lambda _{j}\exp ({i{\mathbf{s }}_{j}^{T}\omega })\), with distinct \({\mathbf{s }}_{j}\in {\mathbb {R}}^m\), such that

\(\displaystyle \int _{{\mathbb {R}}^m}|P(\omega )|^2\, dF(\omega )=0,\)

is identically zero, that is \(P(\omega )=0\quad \forall \omega \in {\mathbb {R}}^m.\)
Proof of Lemma 2
Lemma-Part 1 (if)
Let C be a SPD covariance function and let \(P\) be a trigonometric polynomial with \(\displaystyle \int _{{\mathbb {R}}^m}|P(\omega )|^2\, dF(\omega )=0.\)
Let \({\mathbf{s }}_{i}, \; i=1,2,\ldots , n,\) be the distinct points in \({\mathbb {R}}^m\) appearing in \(P\).
Then \(\displaystyle 0=\int _{{\mathbb {R}}^m}|P(\omega )|^2\, dF(\omega )=\sum _{i=1}^{n}\sum _{j=1}^{n}\lambda _i\lambda _j\, C({\mathbf{s }}_i-{\mathbf{s }}_j);\) since C is SPD, this forces \(\lambda _1=\cdots =\lambda _n=0\), hence \(P\) is identically zero.
Lemma-Part 2 (only if)
Let \({\mathbf{s }}_{1}, {\mathbf{s }}_{2},\ldots , {\mathbf{s }}_{n}\) be distinct points in \({\mathbb {R}}^m\) and let \(\lambda _1,\ldots ,\lambda _n\in {\mathbb {R}}\) be not all zero. The trigonometric polynomial \(P(\omega )=\displaystyle \sum _{j=1}^{n}\lambda _{j}\exp ({i{\mathbf{s }}_{j}^{T}\omega })\) is not identically zero on \({\mathbb {R}}^m\), because the exponential functions \(\exp ({i{\mathbf{s }}_{j}^{T}\omega }), j=1,\ldots ,n,\) are linearly independent on \({\mathbb {R}}^m\).
Then, by hypothesis, \(\displaystyle 0<\int _{{\mathbb {R}}^m}|P(\omega )|^2\, dF(\omega )=\sum _{i=1}^{n}\sum _{j=1}^{n}\lambda _i\lambda _j\, C({\mathbf{s }}_i-{\mathbf{s }}_j).\) \(\square\)
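The spectral identity used in this proof, \(\int _{{\mathbb {R}}^m}|P(\omega )|^2\, dF(\omega )=\sum _{i}\sum _{j}\lambda _i\lambda _j\, C({\mathbf{s }}_i-{\mathbf{s }}_j)\), can be checked numerically. The sketch below (points, weights, and the Gaussian covariance are arbitrary choices, not from the paper) compares both sides by quadrature in one dimension, where \(C(h)=\exp(-h^2)\) has spectral density \(f(\omega)=\exp(-\omega^2/4)/(2\sqrt{\pi})\).

```python
import numpy as np

# Gaussian covariance C(h) = exp(-h^2); its spectral density is
# f(w) = exp(-w^2/4) / (2 sqrt(pi)), so that C(h) = ∫ e^{ihw} f(w) dw.
s = np.array([0.0, 0.6, 1.3, 2.1])       # distinct points
lam = np.array([1.0, -2.0, 0.5, 1.5])    # weights, not all zero

# Quadratic-form side: sum_i sum_j lam_i lam_j C(s_i - s_j)
H = s[:, None] - s[None, :]
qform = lam @ np.exp(-H**2) @ lam

# Spectral side: ∫ |P(w)|^2 f(w) dw with P(w) = sum_j lam_j e^{i s_j w}
w = np.linspace(-30, 30, 300001)
P = (lam[None, :] * np.exp(1j * s[None, :] * w[:, None])).sum(axis=1)
f = np.exp(-w**2 / 4) / (2 * np.sqrt(np.pi))
spectral = np.sum(np.abs(P)**2 * f) * (w[1] - w[0])

print(abs(spectral - qform) < 1e-4)  # the two sides agree
print(qform > 0)                     # positive: C is SPD
```

Since \(f\) is strictly positive on all of \({\mathbb{R}}\) and \(P\) is not identically zero, the spectral side is necessarily positive, exactly as in the last display of the Lemma's proof.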
Thus, the proof of the second part of Theorem 2 follows. Given \(n=2\), assume by contradiction that

\(C(\mathbf{h})=\displaystyle \int _{{\mathbb {R}}^m}\exp ({i\mathbf{h}^T\omega })\, dF(\omega ),\)

with \(C(\mathbf{h})=C_1(\mathbf{h})+C_2(\mathbf{h})\) and \(F(\omega )=F_1(\omega )+F_2(\omega )\), where C is a SPD covariance function, while \(C_1\) and \(C_2\) are only positive definite covariance functions. Because of Lemma 2, there exist trigonometric polynomials \(P_1\) and \(P_2\) on \({\mathbb {R}}^m\), both not identically zero, such that \(\displaystyle \int _{{\mathbb {R}}^m}|P_1(\omega )|^2\, dF_1(\omega )=0\) and \(\displaystyle \int _{{\mathbb {R}}^m}|P_2(\omega )|^2\, dF_2(\omega )=0.\)
Let \(P(\omega )=P_1(\omega )\cdot P_2(\omega )\); then P is a trigonometric polynomial, not identically zero. Moreover, \(|P(\omega )|^2=|P_1(\omega )|^2\,|P_2(\omega )|^2\), where \(|P_1|^2\) and \(|P_2|^2\) are bounded on \({\mathbb {R}}^m\). Hence \(\displaystyle \int _{{\mathbb {R}}^m}|P(\omega )|^2\, dF_1(\omega )=0\) and \(\displaystyle \int _{{\mathbb {R}}^m}|P(\omega )|^2\, dF_2(\omega )=0;\) then, by taking the sum, \(\displaystyle \int _{{\mathbb {R}}^m}|P(\omega )|^2\, dF(\omega )=0;\) thus, because of Lemma 2, C could not be a SPD covariance function, which is a contradiction.
For \(n=3\), assume by contradiction that

\(C(\mathbf{h})=\displaystyle \int _{{\mathbb {R}}^m}\exp ({i\mathbf{h}^T\omega })\, dF(\omega ),\)

with \(C(\mathbf{h})=C_1(\mathbf{h})+C_2(\mathbf{h})+C_3(\mathbf{h})\) and \(F(\omega )=F_1(\omega )+F_2(\omega )+F_3(\omega )\), where C is a SPD covariance function, while \(C_k\), \(k=1,2,3,\) are only positive definite covariance functions. Since \(C_1(\mathbf{h})+C_2(\mathbf{h})\) is still only positive definite, again by the argument for \(n=2\) based on Lemma 2, C could not be a SPD covariance function, which is a contradiction. The proof can be completed by induction for any \(n\). \(\square\)
Proof of Theorem 3
Part 1 (only if)
If the covariance function C in (19) is SPD, then for any \(n\in N_+\) and any choice of \(\mathbf{u}_1=({\mathbf{x}} _{_1},{\mathbf{y}} _{_1},{\mathbf{z}} _{_1}), \ldots , \mathbf{u}_n=({\mathbf{x}} _{_n},{\mathbf{y}} _{_n},{\mathbf{z}} _{_n})\in {\mathbb {R}}^{m_1}\times {\mathbb {R}}^{m_2}\times {\mathbb {R}}^{m_3}\) and \(\lambda _1, \ldots , \lambda _n\in {\mathbb {R}}\) not all zero,
This implies that for any \(n\in N_+\) and any choice of \(\mathbf{u}_1=({\mathbf{x}} _{_1},{\mathbf{y}} _{_1},{\mathbf{z}} _{_1}), \ldots , \mathbf{u}_n=({\mathbf{x}} _{_n},{\mathbf{y}} _{_n},{\mathbf{z}} _{_n})\in {\mathbb {R}}^{m_1}\times {\mathbb {R}}^{m_2}\times {\mathbb {R}}^{m_3}\), where \({\mathbf{y}} _{_1}=\cdots ={\mathbf{y}} _{_n}\) and \({\mathbf{z}} _{_1}=\cdots ={\mathbf{z}} _{_n}\), and \(\lambda _1, \ldots , \lambda _n\in {\mathbb {R}}\) not all zero,
Hence
thus \(C_1\) is SPD on \({\mathbb {R}}^{m_1}\).
Similarly, for any \(n\in N_+\) and any choice of \(\mathbf{u}_1=({\mathbf{x}} _{_1},{\mathbf{y}} _{_1},{\mathbf{z}} _{_1}), \ldots , \mathbf{u}_n=({\mathbf{x}} _{_n},{\mathbf{y}} _{_n},{\mathbf{z}} _{_n})\in {\mathbb {R}}^{m_1}\times {\mathbb {R}}^{m_2}\times {\mathbb {R}}^{m_3},\) where \({\mathbf{x}} _{_1}=\cdots ={\mathbf{x}} _{_n}\), \({\mathbf{y}} _{_1}=\cdots ={\mathbf{y}} _{_n}\) , and \(\lambda _1, \ldots , \lambda _n\in {\mathbb {R}}\) not all zero,
Thus \(C_2\) is SPD on \({\mathbb {R}}^{m_3}\).
Finally, for any \(n\in N_+\) and any choice of \(\mathbf{u}_1=({\mathbf{x}} _{_1},{\mathbf{y}} _{_1},{\mathbf{z}} _{_1}), \ldots , \mathbf{u}_n=({\mathbf{x}} _{_n},{\mathbf{y}} _{_n},{\mathbf{z}} _{_n})\in {\mathbb {R}}^{m_1}\times {\mathbb {R}}^{m_2}\times {\mathbb {R}}^{m_3},\) where \({\mathbf{x}} _{_1}=\cdots ={\mathbf{x}} _{_n}\), \({\mathbf{z}} _{_1}=\cdots ={\mathbf{z}} _{_n}\), and \(\lambda _1, \ldots , \lambda _n\in {\mathbb {R}}\) not all zero,
Thus, by Theorem 2, at least one of \(C_1\) and \(C_2\) must be SPD on \({\mathbb {R}}^{m_2}\); consequently, since the covariance function C in (19) is SPD on \({\mathbb {R}}^m\), either \(C_2\) is SPD on \({\mathbb {R}}^{m_2}\) and on \({\mathbb {R}}^{m_3}\), and hence on \({\mathbb {R}}^{m_2}\times {\mathbb {R}}^{m_3}\), or \(C_1\) is SPD on \({\mathbb {R}}^{m_1}\) and on \({\mathbb {R}}^{m_2}\), and hence on \({\mathbb {R}}^{m_1}\times {\mathbb {R}}^{m_2}\).
Part 2 (if)
Let \(\mathbf{u}_1=({\mathbf{x}} _{_1},{\mathbf{y}} _{_1},{\mathbf{z}} _{_1}), \ldots , \mathbf{u}_n=({\mathbf{x}} _{_n},{\mathbf{y}} _{_n},{\mathbf{z}} _{_n})\) be distinct points in \({\mathbb {R}}^{m_1}\times {\mathbb {R}}^{m_2}\times {\mathbb {R}}^{m_3}\), and let \({\mathbf{x}} _{_1}, \ldots ,{\mathbf{x}} _{n_1}\), \({\mathbf{y}} _{_1}, \ldots ,{\mathbf{y}} _{n_2}\) and \({\mathbf{z}} _{_1}, \ldots ,{\mathbf{z}} _{n_3}\) be the corresponding distinct coordinates in \({\mathbb {R}}^{m_1}\), \({\mathbb {R}}^{m_2}\) and \({\mathbb {R}}^{m_3}\), respectively. Then \(\{\mathbf{u }_{1}, \ldots , \mathbf{u }_{n}\}\) is a subset of \(\{({\mathbf{x}} _i,{\mathbf{y}} _k,{\mathbf{z}} _g)\in {\mathbb {R}}^{m_1}\times {\mathbb {R}}^{m_2}\times {\mathbb {R}}^{m_3}: i=1,\ldots ,n_1,\; k=1,\ldots , n_2,\; g=1,\ldots , n_3\}.\) The latter set is an \((n_1\times n_2\times n_3)\) regular grid.
Let \(\Sigma\) be the \(N\times N\) matrix, where \(N=n_1\times n_2\times n_3\), whose entries are \(C({\mathbf{x }}_i-{\mathbf{x }}_{j},{\mathbf{y }}_k-{\mathbf{y }}_{l},{\mathbf{z }}_g-{\mathbf{z }}_{h})\), \(i,j= 1,2, \ldots , n_1\), \(k,l= 1,2, \ldots , n_2\), \(g,h= 1,2, \ldots , n_3\). Note that
where
Then the generic element \({\mathbf{C}} _{kl}\) of the block covariance matrix \(\Sigma\) can be written as the Kronecker product (denoted as \(\otimes\)) of two matrices \({\mathbf{C}} _{1(n_1\times n_1)}({\mathbf{y }}_k-{\mathbf{y }}_{l})\) and \({\mathbf{C}} _{2(n_3\times n_3)}({\mathbf{y }}_k-{\mathbf{y }}_{l})\).
For \(n_2=1\), the covariance function C reduces to the product of two covariance functions defined on disjoint domains; hence the covariance matrix \(\Sigma _1\) (where \(\Sigma _1\) denotes the covariance matrix for C when \(n_2=1\)) can be written as the Kronecker product of the two matrices \({\mathbf{C}} _{1(n_1\times n_1)}({\mathbf{0}} )\) and \({\mathbf{C}} _{2(n_3\times n_3)}({\mathbf{0}} )\). In this case, the strict positive definiteness of the covariance function C, for any choice of \({\mathbf{x}} _{_1}, \ldots ,{\mathbf{x}} _{n_1}\) and \({\mathbf{z}} _{_1}, \ldots ,{\mathbf{z}} _{n_3}\), is ensured if \(C_1\) and \(C_2\) are SPD covariance functions on \({\mathbb {R}}^{m_1}\) and \({\mathbb {R}}^{m_3}\), respectively. Recalling that a symmetric matrix is SPD if and only if its leading principal minors are positive, it can be shown that the leading principal minors of the \((n\times n)\) matrix of a product covariance function, corresponding to the subset \({\mathbf{u}} _{_1}, {\mathbf{u}} _{_2}, \ldots ,{\mathbf{u}} _{n}\) in \({\mathbb {R}}^{m_1}\times {\mathbb {R}}^{m_3}\), are also positive.
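For the \(n_2=1\) case, the Kronecker structure of the covariance matrix can be verified directly. A minimal numerical sketch (grids and covariance models chosen arbitrarily, not from the paper):

```python
import numpy as np

# Distinct coordinates on a regular grid in R^{m1} and R^{m3}
x = np.array([0.0, 1.0, 2.5])
z = np.array([0.0, 0.8, 1.7, 3.0])

# Exponential covariance (SPD on R) and Gaussian covariance (SPD on R)
C1 = np.exp(-np.abs(x[:, None] - x[None, :]))
C2 = np.exp(-(z[:, None] - z[None, :])**2)

# For the product model C(x, z) = C1(x) * C2(z) on the grid, the
# covariance matrix is the Kronecker product of the two factor matrices.
Sigma = np.kron(C1, C2)

# Each eigenvalue of Sigma is the product of one eigenvalue of C1 and
# one of C2; all such products are positive, so Sigma is SPD.
print(Sigma.shape)                            # (12, 12)
print(np.min(np.linalg.eigvalsh(Sigma)) > 0)  # True
```

On an irregular (non-grid) set of points the matrix is a submatrix of a grid covariance matrix, which is the case handled via the leading principal minors in the text.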
For \(n_2=2\), the block covariance matrix \(\Sigma _{2}\) is the following:
This matrix is positive definite if and only if \({\mathbf{C}} _{11}\) is SPD and the Schur complement \({\mathbf{C}} _{22}-{\mathbf{C}} _{21}{\mathbf{C}} _{11}^{-1}{\mathbf{C}} _{12}\) of \({\mathbf{C}} _{11}\) is SPD; this condition is equivalent to having \(\rho ({\mathbf{C}} _{21}{\mathbf{C}} _{11}^{-1}{\mathbf{C}} _{12}{\mathbf{C}} _{22}^{-1})<1\), where \(\rho (\cdot )\) denotes the spectral radius of a matrix, i.e., its maximum eigenvalue in modulus. In this case, it is worth pointing out that
- \({\mathbf{C}} _{11}^{-1}={\mathbf{C}} _{22}^{-1}={\mathbf{C}} _1({\mathbf{0}} )^{-1}\otimes \,{\mathbf{C}} _2({\mathbf{0}} )^{-1}\);
- \({\mathbf{C}} _{12}={\mathbf{C}} _{12}^T={\mathbf{C}} _1({\mathbf{y }}_1-{\mathbf{y }}_{2})\otimes \,{\mathbf{C}} _2({\mathbf{y }}_1-{\mathbf{y }}_{2}).\)
Thus
If \(C_2\) is a SPD covariance function on \({\mathbb {R}}^{m_2}\times {\mathbb {R}}^{m_3}\) and \(C_1\) is only a positive definite covariance function on \({\mathbb {R}}^{m_1}\times {\mathbb {R}}^{m_2}\), then \(\rho [{\mathbf{C}} _2({\mathbf{y }}_1-{\mathbf{y }}_{2}){\mathbf{C}} _2({\mathbf{0}} )^{-1}{\mathbf{C}} _2({\mathbf{y }}_1-{\mathbf{y }}_{2}){\mathbf{C}} _2({\mathbf{0}} )^{-1}]<1\) and \(\rho [{\mathbf{C}} _1({\mathbf{y }}_1-{\mathbf{y }}_{2}){\mathbf{C}} _1({\mathbf{0}} )^{-1}{\mathbf{C}} _1({\mathbf{y }}_1-{\mathbf{y }}_{2}){\mathbf{C}} _1({\mathbf{0}} )^{-1}]\le 1\).
Taking into account that
- the eigenvalues of the Kronecker product of two matrices are the products of the eigenvalues of the two matrices (Horn and Johnson 1991, p. 245), and
- the spectral radius of the Kronecker product of two matrices is, according to its definition (Horn and Johnson 1996, p. 35), the maximum eigenvalue in modulus of the Kronecker product,
it follows that the above spectral radius is less than 1, if \(C_2\) is a SPD covariance function on \({\mathbb {R}}^{m_2}\times {\mathbb {R}}^{m_3}\) or alternatively \(C_1\) is a SPD covariance function on \({\mathbb {R}}^{m_1}\times {\mathbb {R}}^{m_2}\).
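The two facts invoked here combine into the identity \(\rho(A\otimes B)=\rho(A)\,\rho(B)\), which the following sketch checks on arbitrary (hypothetical) matrices:

```python
import numpy as np

def spectral_radius(M):
    """Maximum eigenvalue in modulus."""
    return np.max(np.abs(np.linalg.eigvals(M)))

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((4, 4))

# The eigenvalues of A ⊗ B are the pairwise products of the eigenvalues
# of A and B, so the spectral radii multiply.
lhs = spectral_radius(np.kron(A, B))
rhs = spectral_radius(A) * spectral_radius(B)
print(abs(lhs - rhs) < 1e-8)  # True
```

In the proof, this identity is applied with \(A\) and \(B\) being the normalized factor correlation blocks, one of which has spectral radius strictly below 1 when the corresponding factor is SPD.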
For \(n_2=3\), the block covariance matrix \(\Sigma _3\) can be partitioned as follows
where
and \({\mathbf{B}} =[{\mathbf{C}} _{31} \;\; {\mathbf{C}} _{32} \;\; {\mathbf{C}} _{33}]\). If \(C_2\) is SPD, then the row spaces \(C({\mathbf{A }})\) and \(C({\mathbf{B}} )\) are essentially disjoint. Thus, \(\mathrm{rank}(\Sigma _3)=\mathrm{rank}({\mathbf{A }})+\mathrm{rank}({\mathbf{B}} )=n_1\times 2\times n_3+n_1\times n_3=n_1\times 3\times n_3\); that is, the covariance matrix \(\Sigma _3\) has full rank, i.e., it is SPD. Similarly if \(C_1\) is SPD.
For any additional point in \({\mathbb {R}}^{m_2}\), the proof can be completed by induction. Recalling again that a symmetric matrix is SPD if and only if its leading principal minors are positive, it can be shown that the leading principal minors of the \((n\times n)\) matrix of a product covariance function, corresponding to the subset \({\mathbf{u}} _{_1}, {\mathbf{u}} _{_2}, \ldots ,{\mathbf{u}} _{n}\), are also positive. This implies that the covariance function C is SPD for any choice of points. \(\square\)
Proof of Theorem 4
If \(C_1\) and \(C_2\) are, for all \(x\in V\subseteq U\), two SPD covariance functions, defined on \({\mathbb {R}}^{n_1}\times {\mathbb {R}}\) and \({\mathbb {R}}^{n_2}\times {\mathbb {R}}\), respectively, then
for any \(n\in \mathbb {N}_+\) and any choice of distinct points \(\mathbf{u}_1=({\mathbf{x}} _{_1},{\mathbf{y}} _{_1},t_{_1}), \ldots ,\mathbf{u}_n=({\mathbf{x}} _{_n},{\mathbf{y}} _{_n},t_{_n})\in {\mathbb {R}}^{n_1}\times {\mathbb {R}}^{n_2}\times {\mathbb {R}}\) and \(\lambda _1, \ldots , \lambda _n\in {\mathbb {R}}\) not all zero. Consequently,
This implies that the covariance function C in (36) is SPD. \(\square\)
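The conclusion of Theorem 4 can be probed numerically. In the sketch below (a hypothetical example, not from the paper), \(C_1(h_x,h_t)=\exp(-|h_x|-h_t^2)\) and \(C_2(h_y,h_t)=\exp(-h_y^2-|h_t|)\) are separable SPD space–time covariances sharing the temporal coordinate, and the smallest eigenvalue of the product covariance matrix at randomly chosen distinct points is checked to be positive.

```python
import numpy as np

# Hypothetical separable SPD space-time covariances sharing time t:
# C1 on R x R (space x, time t) and C2 on R x R (space y, time t).
def C1(hx, ht):
    return np.exp(-np.abs(hx) - ht**2)

def C2(hy, ht):
    return np.exp(-hy**2 - np.abs(ht))

# Random distinct points (x, y, t) in R x R x R
rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 3.0, size=(8, 3))

# Covariance matrix of the product model C(x, y, t) = C1(x, t) * C2(y, t)
n = len(pts)
Cmat = np.empty((n, n))
for i in range(n):
    for j in range(n):
        hx, hy, ht = pts[i] - pts[j]
        Cmat[i, j] = C1(hx, ht) * C2(hy, ht)

# Both factors are SPD on their own (partially overlapped) domains, so
# the quadratic form lam^T Cmat lam is positive for every lam != 0:
print(np.min(np.linalg.eigvalsh(Cmat)) > 0)  # True
```

This is only a spot check at one point configuration, whereas the theorem guarantees positivity of the quadratic form for every finite set of distinct points.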
Proof of Theorem 5
If \(C\in L_1({\mathbb {R}}^m\times {\mathbb {R}})\), then its Fourier transform

\(f(\omega ,\tau )=\displaystyle \int _{{\mathbb {R}}^m\times {\mathbb {R}}} \exp [{-i(\omega ,\tau )^T ({\mathbf{h}} ,t)}]\,C({\mathbf{h}} ,t)\,d{\mathbf{h}}\, dt\)

is continuous. Indeed, for any \(\varphi =(\omega ,\tau ) \in {\mathbb {R}}^m\times {\mathbb {R}}\) and \(\epsilon \in {\mathbb {R}}^m\times {\mathbb {R}},\) we have \(|f(\varphi + \epsilon )- f(\varphi )|= \left| \int _{{\mathbb {R}}^m\times {\mathbb {R}}} \exp [{-i\varphi ^T ({\mathbf{h}} ,t)}]\,[\exp [{-i\epsilon ^T ({\mathbf{h}} ,t)}]-1]\,C({\mathbf{h}} ,t)\,d{\mathbf{h}}\, dt\right| \le \displaystyle \int _{{\mathbb {R}}^m\times {\mathbb {R}}} \left| \exp [{-i\epsilon ^T ({\mathbf{h}} ,t)}]-1\right| \,|C({\mathbf{h}} ,t)|\,d{\mathbf{h}}\, dt.\)
Since \(\left| \exp [{-i\epsilon ^T ({\mathbf{h}} ,t)}]-1\right| \,|C({\mathbf{h}} ,t)|\le 2\,|C({\mathbf{h}} ,t)|\) and \(\displaystyle \lim _{\epsilon \rightarrow \mathbf{0}}\left| \exp [{-i\epsilon ^T ({\mathbf{h}} ,t)}]-1\right| =0\) for any \(({\mathbf{h}} ,t) \in {\mathbb {R}}^m\times {\mathbb {R}}\), then, by the dominated convergence theorem, \(\displaystyle \lim _{\epsilon \rightarrow \mathbf{0}}|f(\varphi +\epsilon )-f(\varphi )|=0.\)
This proves the continuity of f. Moreover, if C is not identically zero, then its Fourier transform is not identically zero on \({\mathbb {R}}^m\times {\mathbb {R}}\), and there exists \((\omega _0,\tau _0)\in {\mathbb {R}}^m\times {\mathbb {R}}\) such that \(f(\omega _0,\tau _0)>0\). Hence, by the continuity of \(f(\cdot ,\cdot )\), there exists an open set \(I\subset {\mathbb {R}}^m\times {\mathbb {R}}\) such that \(f(\omega ,\tau )>0, \, \forall (\omega ,\tau )\in I\). Thus, since a covariance function C is SPD if the support of the density f, associated with the measure F in (4), contains an open set, C is SPD on \({\mathbb {R}}^m\times {\mathbb {R}}\). \(\square\)
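As a one-dimensional illustration of this argument (a sketch, simplified from the space–time setting of the theorem), the exponential covariance \(C(h)=\exp(-|h|)\) is integrable and its Fourier transform \(f(\omega)=2/(1+\omega^2)\) is continuous and strictly positive on all of \({\mathbb{R}}\), so the support of the spectral density certainly contains an open set; the quadrature check below uses a few arbitrarily chosen frequencies.

```python
import numpy as np

# Integrable covariance C(h) = exp(-|h|); its (unnormalized) Fourier
# transform f(w) = ∫ e^{-iwh} C(h) dh = 2 / (1 + w^2) is continuous
# and strictly positive, so its support contains an open set.
h = np.linspace(-40, 40, 800001)
C = np.exp(-np.abs(h))
dh = h[1] - h[0]

checks = []
for w0 in (0.0, 1.3, 4.0):   # a few arbitrary frequencies
    numeric = np.sum(np.exp(-1j * w0 * h) * C).real * dh
    analytic = 2.0 / (1.0 + w0**2)
    checks.append(abs(numeric - analytic) < 1e-4)

print(all(checks))  # True: quadrature matches the closed form
```

Positivity of \(f\) on an open set is exactly the sufficient condition invoked at the end of the proof.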
Proof of Theorem 6
If \(C_1 \in L_1({\mathbb {R}}^{m}\times {\mathbb {R}})\) is a continuous, not identically zero covariance function, i.e., it is SPD, and \(C_2\) is a not identically zero (i.e., \(C_2(\mathbf{0})>0\)) positive definite covariance function on \({\mathbb {R}}^m\), then C is integrable and not identically zero.
The Fourier transform
is continuous and, because of Theorem 5, C is SPD. Similarly for the result in point 2: since it is easy to show that C is integrable, because of Theorem 5, it is SPD. \(\square\)
Cite this article
De Iaco, S., Posa, D. Strict positive definiteness in geostatistics. Stoch Environ Res Risk Assess 32, 577–590 (2018). https://doi.org/10.1007/s00477-017-1432-x