Abstract
This article provides P values for two new tests on the mean direction of the von Mises–Fisher distribution. The test statistics are obtained from the exponent of the saddlepoint approximation to the density of M-estimators, as suggested by Robinson et al. (Ann Stat 31:1154–1169, 2003). These test statistics are asymptotically chi-square distributed with small relative error. Despite the high dimensionality of the problem, the proposed P values are accurate and simple to compute. The numerical precision of the P values of the new tests is illustrated by some simulation studies.
References
Abramowitz M, Stegun IE (1972) Handbook of mathematical functions with formulas, graphs, and mathematical tables. Dover, Mineola (reprint)
Barndorff-Nielsen OE (1983) A formula for the distribution of the maximum likelihood estimator. Biometrika 70:343–365
Chan YM, He X (1993) On median-type estimators of direction for the von Mises–Fisher distribution. Biometrika 80:869–875
Christie D (2015) Efficient von Mises–Fisher concentration parameter estimation using Taylor series. J Stat Comput Simul 85:3259–3265
Field C (1982) Small sample asymptotic expansions for multivariate M-estimates. Ann Stat 10:672–689
Gatto R (2000) Multivariate saddlepoint test for the wrapped normal model. J Stat Comput Simul 65:271–285
Gatto R (2006) A bootstrap test for circular data. Commun Stat Theory Methods 35:281–291
Gatto R (2017) Saddlepoint approximations to the distribution of the total distance of the multivariate isotropic and von Mises–Fisher random walks. Math Methods Stat 26:20–36
He X, Simpson DG (1992) Robust direction estimation. Ann Stat 20:351–369
Ma Y, Ronchetti E (2011) Saddlepoint test in measurement error models. J Am Stat Assoc 106:147–156
Mardia KV, Jupp PE (2000) Directional statistics. Wiley, New York
Reid N (1988) Saddlepoint methods and statistical inference. Stat Sci 3:213–238
Robinson J, Ronchetti E, Young GA (2003) Saddlepoint approximations and tests based on multivariate M-estimates. Ann Stat 31:1154–1169
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
The author states that there is no conflict of interest.
Additional information
The author is grateful to the Editor-in-Chief, the Associate Editor and two Referees for various constructive suggestions and corrections.
Appendix
This appendix provides the detailed proofs of Results 2.1 and 2.2.
Proof of Result 2.1
We need the specific form of the c.g.f. (8) for the score \(\varvec{\psi }\) given in (10). For \(\varvec{v} \in \mathbb {R}^p\), let \(M(\varvec{v};\varvec{\zeta },\varvec{\zeta }_{0}) = \mathsf{E}[ \exp \{ \langle \varvec{v} , \varvec{\psi }(\varvec{X},\varvec{\eta }(\varvec{\zeta }))\rangle \}]\); then
from (1) and (4), provided \(|| \varvec{\zeta }_0 + \varvec{v} || \ne 0\). The last equality uses (3) in order to obtain \(\varvec{\eta } (\varvec{\zeta }) = A_p(|| \varvec{\zeta } ||)/ || \varvec{\zeta } || \, \varvec{\zeta }\). The desired c.g.f. is \(K = \log M\).
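The rescaling \(\varvec{\eta }(\varvec{\zeta }) = A_p(\Vert \varvec{\zeta }\Vert )/\Vert \varvec{\zeta }\Vert \, \varvec{\zeta }\) can be evaluated directly. A minimal numerical sketch follows, assuming the standard von Mises–Fisher convention \(A_p(\kappa ) = I_{p/2}(\kappa )/I_{p/2-1}(\kappa )\), the ratio of modified Bessel functions of the first kind (this definition is taken from the general vMF literature, not restated in the excerpt above):

```python
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind

def A_p(kappa, p):
    """Mean resultant length function of the vMF distribution,
    assumed here to be A_p(kappa) = I_{p/2}(kappa) / I_{p/2-1}(kappa)."""
    return iv(p / 2.0, kappa) / iv(p / 2.0 - 1.0, kappa)

def eta(zeta):
    """The mapping eta(zeta) = A_p(||zeta||) / ||zeta|| * zeta:
    it shrinks zeta toward the origin (since 0 < A_p < 1)
    while preserving its direction."""
    norm = np.linalg.norm(zeta)
    p = zeta.size
    return A_p(norm, p) / norm * zeta
```

Since \(A_p\) is strictly increasing from 0 to 1, `eta` always returns a vector with the same direction as its argument but strictly smaller norm.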
From the penultimate line of (16) we compute
where the second equality follows from the re-expression of (7) as
By equating this gradient to zero and by solving w.r.t. \(\varvec{v}\) one finds the saddlepoint \(\varvec{v}_0 = \varvec{\zeta } - \varvec{\zeta }_0\).
According to (9), define \(h_{\zeta _{0}} ( \varvec{\zeta } ) = \sup _{\varvec{v} \in \mathbb {R}^p} \{ - K(\varvec{v} ; \varvec{\zeta },\varvec{\zeta }_{0}) \}\). It follows from strict convexity that \( h_{\zeta _{0}} ( \varvec{\zeta } ) = - K(\varvec{v}_0 ; \varvec{\zeta },\varvec{\zeta }_{0}) = - K(\varvec{\zeta } - \varvec{\zeta }_0 ; \varvec{\zeta },\varvec{\zeta }_{0})\), which yields (11).
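The structure of this step — \(h\) is the supremum of \(-K\), attained where the gradient of the convex function \(K\) vanishes — can be checked numerically on any smooth strictly convex c.g.f. The sketch below uses an illustrative quadratic c.g.f. \(K(\varvec{v}) = \langle \varvec{v}, \varvec{m}\rangle + \varvec{v}^\top S \varvec{v}/2\) (the c.g.f. of a Gaussian score, chosen only because its saddlepoint \(\varvec{v}_0 = -S^{-1}\varvec{m}\) and \(h = \varvec{m}^\top S^{-1}\varvec{m}/2\) are available in closed form; it is not the vMF c.g.f. derived above):

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative quadratic c.g.f. K(v) = <v, m> + v' S v / 2
# (a Gaussian score, NOT the vMF c.g.f. of the paper).
m = np.array([0.5, -0.2, 0.3])
S = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.5, 0.2],
              [0.0, 0.2, 1.0]])

def K(v):
    return v @ m + 0.5 * v @ S @ v

# sup_v { -K(v) } is computed as -min_v K(v), since K is strictly convex.
res = minimize(K, x0=np.zeros(3))
h_numeric = -K(res.x)

# Closed-form saddlepoint and supremum for the quadratic case.
v0_closed = -np.linalg.solve(S, m)
h_closed = 0.5 * m @ np.linalg.solve(S, m)
```

Here `res.x` recovers the closed-form saddlepoint `v0_closed`, and `h_numeric` agrees with `h_closed`, mirroring how \(h_{\zeta _0}(\varvec{\zeta })\) is obtained from \(\varvec{v}_0 = \varvec{\zeta } - \varvec{\zeta }_0\) above.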
The validity of the chi-square asymptotic approximation with relative error \(\mathrm{O}(n^{-1})\) uniformly over the normal deviations region follows from Theorem 1 of Robinson et al. (2003). \(\square \)
Proof of Result 2.2
For the score \(\varvec{\psi }\) given in (14) and \(\varvec{v} \in \mathbb {R}^p\), we define \(M(\varvec{v};\varvec{\mu },\varvec{\mu }_{0},\kappa _0) = \mathsf{E}[ \exp \{ \langle \varvec{v} , \varvec{\psi }(\varvec{X},\varvec{\mu }) \rangle \}]\) and obtain
from (1), provided \(|| \kappa _0 \varvec{\mu }_0 + P_\mu \varvec{v} || \ne 0\). The desired c.g.f. is \(K = \log M\).
From the penultimate line of (18) we obtain
where the second equality follows from (17). By equating this gradient to zero and by solving w.r.t. \(\varvec{v}\) one finds \(\varvec{v}_0 = - \kappa _0 P_\mu \varvec{\mu }_0\) as a possible solution. By noting that \(K(\varvec{v};\varvec{\mu },\varvec{\mu }_{0},\kappa _0)\) depends on \(\varvec{v}\) only through \(P_\mu \varvec{v}\), we deduce that \(\varvec{v}_0\) is the unique solution over the \((p-1)\)-dimensional hyperplane tangent to \(\mathbb {S}^{p-1}\) at \(\varvec{\mu }\).
Following (9), define \(h_{\mu _{0},\kappa _0} ( \varvec{\mu } ) = \sup _{\varvec{v} \in \mathbb {R}^p} \{ - K(\varvec{v} ; \varvec{\mu },\varvec{\mu }_{0},\kappa _0) \}\). Strict convexity implies \(h_{\mu _{0},\kappa _0} ( \varvec{\mu } ) = - K(\varvec{v}_0 ; \varvec{\mu },\varvec{\mu }_{0},\kappa _0) = - K( - \kappa _0 P_\mu \varvec{\mu }_0; \varvec{\mu },\varvec{\mu }_{0},\kappa _0)\), which yields (15).
The validity of the chi-square approximation with relative error \(\mathrm{O}(n^{-1})\) uniformly over the normal deviations region follows from Theorem 1 of Robinson et al. (2003). \(\square \)
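For both results, once \(h\) has been evaluated at the estimate, the P value reduces to a chi-square tail probability. The sketch below assumes the usual form of the tests of Robinson et al. (2003), where the statistic is \(2 n h\) and the degrees of freedom equal the dimension of the parameter tested; both the statistic and the degrees of freedom are generic placeholders here, not read off from equations (11) and (15):

```python
from scipy.stats import chi2

def saddlepoint_p_value(h_hat, n, df):
    """P value of the saddlepoint test: refer the statistic
    2 * n * h_hat to its asymptotic chi-square distribution with
    df degrees of freedom (df is a generic placeholder; the df of
    each test follows from the dimension of the tested parameter)."""
    return chi2.sf(2.0 * n * h_hat, df)

# Example: h_hat = 0.04 with n = 100 observations and df = 2
# refers the value 8.0 to the upper tail of chi-square(2).
p = saddlepoint_p_value(0.04, 100, 2)
```

Because the chi-square approximation has relative error \(\mathrm{O}(n^{-1})\) uniformly over the normal deviations region, such tail probabilities remain accurate even when the P value itself is small.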
Cite this article
Gatto, R. Multivariate saddlepoint tests on the mean direction of the von Mises–Fisher distribution. Metrika 80, 733–747 (2017). https://doi.org/10.1007/s00184-017-0625-0
Keywords
- Cumulant generating function
- Directional distribution
- M-functional
- Minimum \(L_2\)-distance estimator
- P value
- Relative error