
Novel sampling method for the von Mises–Fisher distribution


Abstract

The von Mises–Fisher distribution is a widely used probability model in directional statistics. An algorithm for generating pseudo-random vectors from this distribution was suggested by Wood (Commun Stat Simul Comput 23(1):157–164, 1994), which is based on a rejection sampling scheme. This paper proposes an alternative to this rejection sampling approach for drawing pseudo-random vectors from arbitrary von Mises–Fisher distributions. A useful mixture representation is derived, which is a mixture of beta distributions with mixing weights that follow a confluent hypergeometric distribution. A condensed table-lookup method is adopted for sampling from the confluent hypergeometric distribution. A theoretical analysis investigates the amount of computation required to construct the condensed lookup table. Through numerical experiments, we demonstrate that the proposed algorithm outperforms the rejection-based method when generating a large number of pseudo-random vectors from the same distribution.


References

  • Abramowitz, M., Stegun, I.A. (eds.): Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. National Bureau of Standards Applied Mathematics Series, vol. 55. U.S. Government Printing Office, Washington, D.C. (1964)

  • Amrhein, L., Harsha, K., Fuchs, C.: scModels: Fitting Discrete Distribution Models to Count Data (2023). https://CRAN.R-project.org/package=scModels. R package version 1.0.4

  • Cheng, R.C.: Generating beta variates with nonintegral shape parameters. Commun. ACM 21(4), 317–322 (1978)


  • Eddelbuettel, D., Balamuta, J.J.: Extending R with C++: A brief introduction to Rcpp. Am. Stat. 72(1), 28–36 (2018)


  • García-Portugués, E., Paindaveine, D., Verdebout, T.: rotasym: Tests for Rotational Symmetry on the Hypersphere (2023). https://CRAN.R-project.org/package=rotasym. R package version 1.1.5

  • Hoff, P.D.: Simulation of the matrix Bingham–von Mises–Fisher distribution, with applications to multivariate and relational data. J. Comput. Graph. Stat. 18(2), 438–456 (2009)


  • Kang, S., Oh, H.-S.: rvMF: Fast Generation of von Mises–Fisher Distributed Pseudo-Random Vectors (2023). https://CRAN.R-project.org/package=rvMF. R package version 0.0.8

  • Kato, S., Eguchi, S.: Robust estimation of location and concentration parameters for the von Mises–Fisher distribution. Stat. Pap. 57(1), 205–234 (2016)


  • Kent, J.T.: The Fisher–Bingham distribution on the sphere. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 44(1), 71–80 (1982)


  • Kume, A., Walker, S.G.: On the Fisher–Bingham distribution. Stat. Comput. 19(2), 167–172 (2009)


  • Kurz, G., Hanebeck, U.D.: Stochastic sampling of the hyperspherical von Mises–Fisher distribution without rejection methods. In: 2015 Sensor Data Fusion: Trends, Solutions, Applications (SDF), pp. 1–6. IEEE (2015)

  • Ley, C., Verdebout, T.: Skew-rotationally-symmetric distributions and related efficient inferential procedures. J. Multivar. Anal. 159, 67–81 (2017)


  • Mardia, K.V., Jupp, P.E.: Directional Statistics, vol. 494. Wiley, Chichester (1999)

  • Marsaglia, G.: Generating discrete random variables in a computer. Commun. ACM 6(1), 37–38 (1963)


  • Marsaglia, G., Tsang, W.W., Wang, J.: Fast generation of discrete random variables. J. Stat. Softw. 11(3), 1–11 (2004)


  • Muller, M.E.: A note on a method for generating points uniformly on \(n\)-dimensional spheres. Commun. ACM 2(4), 19–20 (1959)


  • Núñez-Antonio, G., Gutiérrez-Peña, E.: A Bayesian analysis of directional data using the von Mises–Fisher distribution. Commun. Stat. Simul. Comput. 34(4), 989–999 (2005)


  • Papadakis, M., Tsagris, M., Dimitriadis, M., Fafalios, S., Tsamardinos, I., Fasiolo, M., Borboudakis, G., Burkardt, J., Zou, C., Lakiotaki, K., Chatzipantsiou, C.: Rfast: A Collection of Efficient and Extremely Fast R Functions (2023). https://CRAN.R-project.org/package=Rfast. R package version 2.0.7

  • Pewsey, A., García-Portugués, E.: Recent advances in directional statistics. TEST 30(1), 1–58 (2021)


  • Porwal, S., Kumar, S.: Confluent hypergeometric distribution and its applications on certain classes of univalent functions. Afrika Matematika 28(1), 1–8 (2017)

  • Temme, N.M.: Asymptotic expansions of Kummer hypergeometric functions for large values of the parameters. Integral Transforms Spec. Funct. 33(1), 16–31 (2022)


  • Ulrich, G.: Computer generation of distributions on the m-sphere. J. R. Stat. Soc. Ser. C (Appl. Stat.) 33(2), 158–163 (1984)


  • Wood, A.T.: Simulation of the von Mises Fisher distribution. Commun. Stat. Simul. Comput. 23(1), 157–164 (1994)


  • Zhang, F., Hancock, E.R., Goodlett, C., Gerig, G.: Probabilistic white matter fiber tracking using particle filtering and von Mises–Fisher sampling. Med. Image Anal. 13(1), 5–18 (2009)



Acknowledgements

This research was supported by the National Research Foundation of Korea (NRF) funded by the Korea government (2021R1A2C1091357).

Author information

Contributions

SK and H-SO wrote the main manuscript text. SK prepared all figures and tables. All authors reviewed the manuscript.

Corresponding author

Correspondence to Hee-Seok Oh.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix

Review of the rejection algorithm

In this section, we briefly summarize the vMF pseudo-random vector generating algorithm of Wood (1994), which is based on a rejection sampling method. We note that Wood (1994) is a correction of Ulrich (1984), which had some errors in describing the algorithm.

Algorithm

(Sampling from the vMF distribution by Wood) To generate n samples from \({\text {vMF}}_p(\varvec{\mu }, \kappa )\), we use the following algorithm:

Step 0: Put \(b = [-2\kappa +\{4\kappa ^2+(p-1)^2\}^\frac{1}{2}]/(p-1)\), \(x_0 = (1-b)/(1+b)\), and \(c=\kappa x_0 + (p-1)\log (1-x_0^2)\).

for \(i = 1,\dots ,n\) do

Step 1: Generate \(Z\sim {\text {Beta}}\left( \frac{p-1}{2},\frac{p-1}{2}\right) \), \(U\sim {\text {Unif}}[0,1]\), and let \(W = \{1-(1+b)Z\}/\{1-(1-b)Z\}\).

Step 2: If \(\kappa W + (p-1)\log (1-x_0W)-c < \log (U)\), go to Step 1.

Step 3: Generate \(\varvec{\xi }\sim {\text {Unif}}(S^{p-2})\), and let \(\textbf{X}_i = (\sqrt{1-W^2}\varvec{\xi }^\top , W)^\top \).

end for

Step 4: Choose any \(p\times p\) rotation matrix \(\textbf{U}\) such that \(\textbf{U}: (0,\dots ,0,1)^\top \mapsto \varvec{\mu }\). Return \(\textbf{U}[\textbf{X}_1,\dots ,\textbf{X}_n]\).
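To make the steps above concrete, the following sketch (not part of Wood (1994) or of this paper) implements the algorithm in Python/NumPy. The helper names sample_vmf_wood and rotation_to are ours; any orthogonal matrix sending \((0,\dots ,0,1)^\top \) to \(\varvec{\mu }\) may be used in Step 4, and the QR-based construction below is merely one convenient choice.

```python
# Illustrative Python/NumPy reading of Steps 0-4 of Wood's rejection sampler.
# Not from the paper; function names are our own.
import numpy as np

def rotation_to(mu):
    """Orthogonal matrix whose last column is the unit vector mu, so that it
    maps (0, ..., 0, 1) to mu (any such matrix works for Step 4)."""
    p = mu.size
    A = np.column_stack([mu, np.eye(p)[:, : p - 1]])
    Q, _ = np.linalg.qr(A)              # first column of Q is +/- mu
    Q = Q * np.sign(Q[:, 0] @ mu)       # fix the sign so the first column is +mu
    return np.roll(Q, -1, axis=1)       # move mu to the last column

def sample_vmf_wood(n, mu, kappa, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    mu = np.asarray(mu, dtype=float)
    p = mu.size
    # Step 0: constants b, x0, c
    b = (-2 * kappa + np.sqrt(4 * kappa**2 + (p - 1) ** 2)) / (p - 1)
    x0 = (1 - b) / (1 + b)
    c = kappa * x0 + (p - 1) * np.log(1 - x0**2)
    X = np.empty((n, p))
    for i in range(n):
        while True:
            # Step 1: propose W via a symmetric beta draw
            Z = rng.beta((p - 1) / 2, (p - 1) / 2)
            U = rng.uniform()
            W = (1 - (1 + b) * Z) / (1 - (1 - b) * Z)
            # Step 2: rejection test; otherwise return to Step 1
            if kappa * W + (p - 1) * np.log(1 - x0 * W) - c >= np.log(U):
                break
        # Step 3: uniform direction on S^{p-2} for the tangent component
        xi = rng.normal(size=p - 1)
        xi /= np.linalg.norm(xi)
        X[i] = np.concatenate([np.sqrt(1 - W**2) * xi, [W]])
    # Step 4: rotate the pole (0, ..., 0, 1) onto the mean direction mu
    return X @ rotation_to(mu).T

# Example: five draws from vMF_3(mu, kappa = 10)
print(sample_vmf_wood(5, np.array([0.0, 0.0, 1.0]), 10.0))
```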

In Theorem 2 of Ulrich (1984), the acceptance probability of their rejection sampling algorithm is derived to be

$$\begin{aligned} \left( \frac{\kappa }{2}\right) ^{1-\frac{p}{2}}\Gamma \left( \frac{p}{2}\right) I_{\frac{p}{2}-1} (\kappa )\exp \left\{ \frac{p-1-\sqrt{(p-1)^2+4\kappa ^2}}{2}\right\} \nonumber \\ \times \left( \frac{-(p-1)^2+(p-1)\sqrt{(p-1)^2+4\kappa ^2}}{2\kappa ^2}\right) ^{\frac{1-p}{2}}. \end{aligned}$$
(A.1)

This probability is the same for the algorithm in Wood (1994).

The acceptance probability (A.1) is increasing in \(p\) and decreasing in \(\kappa \). Hence, it is smallest in the limiting case \(p=2\), \(\kappa \rightarrow \infty \). In particular, by (9.7.1) of Abramowitz and Stegun (1964), the acceptance probability for \(p=2\) is

$$\begin{aligned}{} & {} I_0(\kappa )\exp \left\{ \frac{1-\sqrt{1+4\kappa ^2}}{2}\right\} \left( \frac{-1+\sqrt{1+4\kappa ^2}}{2\kappa ^2}\right) ^{-\frac{1}{2}} \\{} & {} \quad = \frac{1}{\sqrt{2\pi }}\exp \left\{ \kappa + \frac{1-\sqrt{1+4\kappa ^2}}{2}\right\} \left( \frac{-1+\sqrt{1+4\kappa ^2}}{2\kappa }\right) ^{-\frac{1}{2}}\\{} & {} \qquad \times [1+O(\kappa ^{-1})], \end{aligned}$$

as \(\kappa \rightarrow \infty \). The right-hand side converges to \(\sqrt{\frac{e}{2\pi }}\approx 0.6577446\) as \(\kappa \rightarrow \infty \). Hence, the worst-case acceptance probability for Wood's method is roughly two-thirds.
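As a purely illustrative check (ours, not from the paper), (A.1) can be evaluated in log scale for \(p=2\) and increasing \(\kappa \) using SciPy's exponentially scaled Bessel function ive; the values approach \(\sqrt{e/(2\pi )}\approx 0.6577\), matching the limit above.

```python
# Numerical check (ours) that the p = 2 acceptance probability (A.1)
# approaches sqrt(e / (2 pi)) as kappa -> infinity.
import numpy as np
from scipy.special import ive, gammaln

def acceptance_prob(p, kappa):
    """Acceptance probability (A.1), computed in log scale;
    ive(v, x) = exp(-x) * I_v(x) avoids overflow for large kappa."""
    root = np.sqrt((p - 1) ** 2 + 4 * kappa**2)
    log_acc = (
        (1 - p / 2) * np.log(kappa / 2)
        + gammaln(p / 2)
        + np.log(ive(p / 2 - 1, kappa)) + kappa        # log I_{p/2-1}(kappa)
        + (p - 1 - root) / 2
        + (1 - p) / 2 * np.log((-(p - 1) ** 2 + (p - 1) * root) / (2 * kappa**2))
    )
    return np.exp(log_acc)

for kappa in [1.0, 10.0, 100.0, 1000.0, 10000.0]:
    print(kappa, acceptance_prob(2, kappa))
print("limit:", np.sqrt(np.e / (2 * np.pi)))           # ~ 0.6577446
```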

Proofs for Section 2

1.1 Proof of Eq. (2.2)

Proof

From (2.1), we see that \(\pi _L(l)\) is proportional to \((2\kappa )^l B\left( \frac{p-1}{2}+l,\frac{p-1}{2}\right) /l\,!\). The reciprocal of the normalizing constant is obtained as

$$\begin{aligned}&\sum _{l=0}^\infty \frac{(2\kappa )^l}{l\,!} B\left( \frac{p-1}{2}+l,\frac{p-1}{2}\right) \\&\quad = \sum _{l=0}^\infty \frac{(2\kappa )^l}{l\,!} \times \frac{\Gamma \left( \frac{p-1}{2}+l\right) \Gamma \left( \frac{p-1}{2}\right) }{\Gamma (p-1+l)}\\&\quad = \frac{\left\{ \Gamma \left( \frac{p-1}{2}\right) \right\} ^2}{\Gamma (p-1)} \sum _{l=0}^\infty \frac{(2\kappa )^l}{l\,!} \times \frac{\left( \frac{p-1}{2}\right) _l}{(p-1)_l}\\&\quad = \frac{\left\{ \Gamma \left( \frac{p-1}{2}\right) \right\} ^2}{\Gamma (p-1)} M\left( \frac{p-1}{2},p-1,2\kappa \right) . \end{aligned}$$

The last equality follows from the definition of the confluent hypergeometric function \(M\). Hence, (2.2) is obtained as

$$\begin{aligned} \pi _L(l)&= \frac{\Gamma (p-1)}{\left\{ \Gamma \left( \frac{p-1}{2}\right) \right\} ^2 M\left( \frac{p-1}{2},p-1,2\kappa \right) } \\&\quad \times \frac{(2\kappa )^l}{l\,!} B\left( \frac{p-1}{2}+l,\frac{p-1}{2}\right) \\&\quad = \frac{\left( \frac{p-1}{2}\right) _l(2\kappa )^l}{(p-1)_l\,l\,!\, M\left( \frac{p-1}{2},p-1,2\kappa \right) }. \end{aligned}$$

\(\square \)
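As a quick numerical sanity check (ours, not part of the proof), the weights in (2.2) can be evaluated in log scale with SciPy's Kummer function hyp1f1 and verified to sum to one; the helper log_pi_L below is our own.

```python
# Check (ours) that the weights pi_L(l) in (2.2) form a valid pmf.
import numpy as np
from scipy.special import gammaln, hyp1f1

def log_pi_L(l, p, kappa):
    """log pi_L(l) for the CH((p-1)/2, p-1, 2*kappa) pmf of Eq. (2.2),
    with the Pochhammer symbols written via log-gamma functions."""
    a, b, x = (p - 1) / 2, p - 1.0, 2.0 * kappa
    return (gammaln(a + l) - gammaln(a) + l * np.log(x)
            - gammaln(b + l) + gammaln(b) - gammaln(l + 1)
            - np.log(hyp1f1(a, b, x)))

p, kappa = 5, 3.0
ls = np.arange(0, 200)
print(np.exp(log_pi_L(ls, p, kappa)).sum())   # ~ 1.0
```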

1.2 Proof of Proposition 1

Proof

Case 1: \(\kappa \leqslant 1\). Note that \(p\geqslant 2\). Thus,

$$\begin{aligned} \frac{\pi _L(l+1)}{\pi _L(l)}&= \frac{\kappa (p-1 + 2l)}{(p-1+l)(l+1)}\\&\leqslant \frac{\kappa (p-1+pl)}{(p-1+l)(l+1)}\\&\leqslant \frac{\kappa (p-1+pl + l^2)}{(p-1+l)(l+1)} = \kappa . \end{aligned}$$

Therefore, \(\pi _L(l+1)\leqslant \pi _L(l)\) for \(l=0,1,\dots \), and the pmf is decreasing.

Case 2: \(\kappa > 1\). For \(l=0,1,\dots ,\)

$$\begin{aligned} \frac{\pi _L(l+1)}{\pi _L(l)} \leqslant 1&\Leftrightarrow l^2 + l(p-2\kappa ) - (p-1)(\kappa -1)\geqslant 0\\&\Leftrightarrow {\left\{ \begin{array}{ll} l \geqslant \frac{2\kappa -p + \sqrt{(2\kappa -p)^2+4(p-1)(\kappa -1)}}{2}\\ l \leqslant \frac{2\kappa -p - \sqrt{(2\kappa -p)^2+4(p-1)(\kappa -1)}}{2} \end{array}\right. }. \end{aligned}$$

Note that \(\frac{2\kappa -p - \sqrt{(2\kappa -p)^2+4(p-1)(\kappa -1)}}{2} < 0\). Hence, the pmf is a unimodal function with mode at \(l = \Big \lceil \frac{2\kappa -p + \sqrt{(2\kappa -p)^2+4(p-1)(\kappa -1)}}{2}\Big \rceil \geqslant 1\), where \(\lceil \cdot \rceil \) denotes the ceiling function. \(\square \)
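The mode formula admits a simple numerical check (our own verification, not from the paper): for a few \((p,\kappa )\) pairs, a brute-force argmax of \(\pi _L\) coincides with the ceiling expression above. The helper log_pi_L repeats the one from the previous sketch.

```python
# Check (ours) of the mode formula in Proposition 1 for kappa > 1.
import numpy as np
from scipy.special import gammaln, hyp1f1

def log_pi_L(l, p, kappa):
    # same CH pmf in log scale as in the previous sketch
    a, b, x = (p - 1) / 2, p - 1.0, 2.0 * kappa
    return (gammaln(a + l) - gammaln(a) + l * np.log(x)
            - gammaln(b + l) + gammaln(b) - gammaln(l + 1)
            - np.log(hyp1f1(a, b, x)))

for p, kappa in [(3, 2.0), (5, 10.0), (10, 50.0)]:
    ls = np.arange(0, 500)
    numeric_mode = int(np.argmax(log_pi_L(ls, p, kappa)))
    disc = (2 * kappa - p) ** 2 + 4 * (p - 1) * (kappa - 1)
    formula_mode = int(np.ceil((2 * kappa - p + np.sqrt(disc)) / 2))
    print(p, kappa, numeric_mode, formula_mode)   # the two modes should agree
```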

1.3 Proof of Proposition 2

Proof

Ignoring terms independent of l,

$$\begin{aligned}{} & {} \pi _L(l) \propto \frac{\Gamma \left( \frac{p-1}{2}+l\right) (2\kappa )^l}{\Gamma (p-1+l)\,l\,!}\\{} & {} \quad = \sqrt{\frac{p-1+l}{\frac{p-1}{2}+l}}\left( \frac{\frac{p-1}{2}+l}{e}\right) ^{\frac{p-1}{2}+l} \\{} & {} \quad \times \left( \frac{e}{p-1+l}\right) ^{p-1+l}\frac{(2\kappa )^l}{l!}[1+O(l^{-1})]\\{} & {} \quad = \left( \frac{\frac{p-1}{2}+l}{p-1+l}\right) ^{\frac{p-1}{2}}\left( \frac{\frac{p-1}{2}+l}{p-1+l} \right) ^{l}\\{} & {} \quad \times \left( \frac{e}{p-1+l}\right) ^{\frac{p-1}{2}}\frac{(2\kappa )^l}{l!}[1+O(l^{-1})]\\{} & {} \quad = l^{-\frac{p-1}{2}}\frac{(2\kappa )^l}{l!}[1+O(l^{-1})], \end{aligned}$$

as \(l\rightarrow \infty \). Here, the second equality follows from Stirling's approximation. Hence,

\(\pi _L(l) = O\Big (\left( 2\kappa \right) ^l\left( l^{\frac{p-1}{2}}\,l\,!\right) ^{-1}\Big )\) as \(l\rightarrow \infty \). \(\square \)

Proofs for Section 3

1.1 Proof of Proposition 3

Proof

Let \(L\sim {\text {CH}}(\frac{p-1}{2},p-1,2\kappa )\). Then,

$$\begin{aligned} \mathbb {E}L= & {} \sum _{l=0}^\infty l\times \frac{\left( \frac{p-1}{2}\right) _l(2\kappa )^l}{(p-1)_l\,l\,!\, M\left( \frac{p-1}{2},p-1,2\kappa \right) }\\= & {} \frac{2\kappa \left( \frac{p-1}{2}\right) }{(p-1)M\left( \frac{p-1}{2},p-1,2\kappa \right) }\\{} & {} \quad \times \sum _{l=1}^\infty \frac{\left( \frac{p+1}{2}\right) _{l-1}(2\kappa )^{l-1}}{(p)_{l-1}\,(l-1)\,!\, }\\{} & {} \quad = \frac{\kappa M\left( \frac{p+1}{2},p,2\kappa \right) }{M\left( \frac{p-1}{2},p-1,2\kappa \right) }. \end{aligned}$$

To derive the variance, first compute \(\mathbb {E}[L(L-1)]\),

$$\begin{aligned} \mathbb {E}[L(L-1)]&= \sum _{l=0}^\infty l(l-1)\\&\quad \times \frac{\left( \frac{p-1}{2}\right) _l(2\kappa )^l}{(p-1)_l\,l\,!\, M\left( \frac{p-1}{2},p-1,2\kappa \right) }\\&= \frac{(2\kappa )^2 \left( \frac{p-1}{2}\right) \left( \frac{p+1}{2}\right) }{(p-1)p M\left( \frac{p-1}{2},p-1,2\kappa \right) }\\&\quad \times \sum _{l=2}^\infty \frac{\left( \frac{p+3}{2}\right) _{l-2}(2\kappa )^{l-2}}{(p+1)_{l-2}\,(l-2)\,!\, }\\&= \frac{\kappa ^2 (p+1) M\left( \frac{p+3}{2},p+1,2\kappa \right) }{p M\left( \frac{p-1}{2},p-1,2\kappa \right) }. \end{aligned}$$

From the result above,

$$\begin{aligned} {\text {var}}(L)&= \mathbb {E}[L(L-1)] + \mathbb {E}L - (\mathbb {E}L)^2\\&= \frac{\kappa ^2 (p+1) M\left( \frac{p+3}{2},p+1,2\kappa \right) }{p M\left( \frac{p-1}{2},p-1,2\kappa \right) } \\&\quad + \frac{\kappa M\left( \frac{p+1}{2},p,2\kappa \right) }{M\left( \frac{p-1}{2},p-1,2\kappa \right) }\\&\quad - \left( \frac{\kappa M\left( \frac{p+1}{2},p,2\kappa \right) }{M\left( \frac{p-1}{2},p-1,2\kappa \right) }\right) ^2 \end{aligned}$$

\(\square \)
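The closed forms above can be checked numerically (our own verification, not from the paper) against direct summation of the pmf, again using SciPy's hyp1f1 for Kummer's function \(M\).

```python
# Check (ours) of Proposition 3: closed forms for E[L] and var(L) in terms of
# Kummer's function M versus direct summation of the CH pmf.
import numpy as np
from scipy.special import gammaln, hyp1f1

def log_pi_L(l, p, kappa):
    # same CH pmf in log scale as in the earlier sketches
    a, b, x = (p - 1) / 2, p - 1.0, 2.0 * kappa
    return (gammaln(a + l) - gammaln(a) + l * np.log(x)
            - gammaln(b + l) + gammaln(b) - gammaln(l + 1)
            - np.log(hyp1f1(a, b, x)))

p, kappa = 4, 5.0
ls = np.arange(0, 400)
w = np.exp(log_pi_L(ls, p, kappa))
mean_sum = np.sum(ls * w)
var_sum = np.sum((ls - mean_sum) ** 2 * w)

M0 = hyp1f1((p - 1) / 2, p - 1, 2 * kappa)
mean_M = kappa * hyp1f1((p + 1) / 2, p, 2 * kappa) / M0
var_M = (kappa**2 * (p + 1) / p * hyp1f1((p + 3) / 2, p + 1, 2 * kappa) / M0
         + mean_M - mean_M**2)
print(mean_sum, mean_M)   # should agree
print(var_sum, var_M)     # should agree
```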

1.2 Proof of Proposition 4

Proof of (i)

Let \(L\sim {\text {CH}}(\frac{p-1}{2},p-1,2\kappa )\). From Proposition 3 (i), we have that

$$\begin{aligned} \mathbb {E}L&= \frac{\kappa M\left( \frac{p+1}{2},p,2\kappa \right) }{M\left( \frac{p-1}{2},p-1,2\kappa \right) }. \end{aligned}$$

Then, from (13.1.4) of Abramowitz and Stegun (1964), we obtain

$$\begin{aligned} \mathbb {E}L&= \kappa \frac{\frac{\Gamma (p)}{\Gamma \left( \frac{p+1}{2}\right) }e^{2\kappa }(2\kappa )^{\frac{1-p}{2}} [1+O(\kappa ^{-1})]}{\frac{\Gamma (p-1)}{\Gamma \left( \frac{p-1}{2}\right) }e^{2\kappa }(2\kappa ) ^{\frac{1-p}{2}}[1+O(\kappa ^{-1})]}\\&= 2\kappa [1+O(\kappa ^{-1})], \end{aligned}$$

as \(\kappa \rightarrow \infty \). Hence, \((2\kappa )^{-1}\mathbb {E}L \rightarrow 1\) as \(\kappa \rightarrow \infty \). \(\square \)

Proof of (ii)

This proof is almost identical to that of (i). Let \(L\sim {\text {CH}}(\frac{p-1}{2},p-1,2\kappa )\). Using (13.5.1) of Abramowitz and Stegun (1964), a further expansion of \(\mathbb {E}L\) is given by

$$\begin{aligned} \mathbb {E}L&= 2\kappa \frac{1+\frac{1}{2\kappa }\left( \frac{p-1}{2}\right) \left( \frac{1-p}{2}\right) + O(\kappa ^{-2})}{1+\frac{1}{2\kappa }\left( \frac{p-1}{2}\right) \left( \frac{3-p}{2}\right) + O(\kappa ^{-2})}\\&= 2\kappa \left[ 1-\frac{p-1}{4\kappa }+O(\kappa ^{-2})\right] , \end{aligned}$$

as \(\kappa \rightarrow \infty \).

From Proposition 3 (ii), we have that

$$\begin{aligned} {\text {var}}(L)&= \frac{\kappa ^2 (p+1) M\left( \frac{p+3}{2},p+1,2\kappa \right) }{p M\left( \frac{p-1}{2},p-1,2\kappa \right) } \\&\quad + \frac{\kappa M\left( \frac{p+1}{2},p,2\kappa \right) }{M\left( \frac{p-1}{2},p-1,2\kappa \right) } - \left( \frac{\kappa M\left( \frac{p+1}{2},p,2\kappa \right) }{M\left( \frac{p-1}{2},p-1,2\kappa \right) }\right) ^2. \end{aligned}$$

Then, from (13.5.1) of Abramowitz and Stegun (1964) and using the expansion of \(\mathbb {E}L\), we obtain

$$\begin{aligned} {\text {var}}(L)&= \frac{\kappa ^2 (p+1) \frac{\Gamma (p+1)}{\Gamma (\frac{p+3}{2})}e^{2\kappa }(2\kappa )^{\frac{1-p}{2}}\left[ 1+\frac{1}{2\kappa }\left( \frac{p-1}{2}\right) \left( \frac{-1-p}{2}\right) +O(\kappa ^{-2})\right] }{p \frac{\Gamma (p-1)}{\Gamma (\frac{p-1}{2})}e^{2\kappa }(2\kappa )^{\frac{1-p}{2}}\left[ 1+\frac{1}{2\kappa }\left( \frac{p-1}{2}\right) \left( \frac{3-p}{2}\right) +O(\kappa ^{-2})\right] }\\&\quad + 2\kappa [1+O(\kappa ^{-1})] - 4\kappa ^2\left[ 1-\frac{p-1}{4\kappa }+O(\kappa ^{-2})\right] ^2 = 4\kappa ^2 \left[ 1-\frac{p-1}{2\kappa } +O(\kappa ^{-2})\right] \\&\quad + 2\kappa [1+O(\kappa ^{-1})] - 4\kappa ^2\left[ 1-\frac{p-1}{2\kappa }+O(\kappa ^{-2})\right] = 2\kappa [1+O(\kappa ^{-1})], \end{aligned}$$

as \(\kappa \rightarrow \infty \). Hence, \((2\kappa )^{-1}{\text {var}}(L) \rightarrow 1\) as \(\kappa \rightarrow \infty \). \(\square \)
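Numerically (our own illustration, not from the paper; \(\kappa \) is kept moderate because scipy.special.hyp1f1 overflows for very large arguments), both \((2\kappa )^{-1}\mathbb {E}L\) and \((2\kappa )^{-1}{\text {var}}(L)\) indeed approach one, and \(\mathbb {E}L\) tracks \(2\kappa \{1-(p-1)/(4\kappa )\}\):

```python
# Illustration (ours) of Proposition 4: E[L]/(2 kappa) and var(L)/(2 kappa)
# approach 1 as kappa grows (kappa kept below ~350 so hyp1f1 does not overflow).
import numpy as np
from scipy.special import hyp1f1

p = 6
for kappa in [10.0, 50.0, 200.0, 350.0]:
    M0 = hyp1f1((p - 1) / 2, p - 1, 2 * kappa)
    mean = kappa * hyp1f1((p + 1) / 2, p, 2 * kappa) / M0
    var = (kappa**2 * (p + 1) / p * hyp1f1((p + 3) / 2, p + 1, 2 * kappa) / M0
           + mean - mean**2)
    approx = 2 * kappa * (1 - (p - 1) / (4 * kappa))   # second-order mean expansion
    print(kappa, mean / (2 * kappa), var / (2 * kappa), mean, approx)
```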

1.3 Proof of Proposition 5

Proof of (ii)

Using (2.15) of Temme (2022) and Stirling's approximation for the gamma function, the pmf of \({\text {CH}}\left( \frac{p-1}{2},p-1,2\kappa \right) \) can be written as follows:

$$\begin{aligned} \pi _L(l)= & {} \frac{\left( \frac{p-1}{2}\right) _l(2\kappa )^l}{(p-1)_l\,l\,!\, M\left( \frac{p-1}{2},p-1,2\kappa \right) }\\= & {} \frac{\Gamma \left( \frac{p-1}{2}+l\right) \Gamma (p-1)(2\kappa )^l}{\Gamma \left( \frac{p-1}{2}\right) \Gamma (p-1+l)\,l\,!} \\{} & {} \quad \times e^{\!-\!\kappa \!-\!\frac{p-1}{2}}2^{p\!-\!\frac{3}{2}}\frac{\Gamma \left( \frac{p\!-\!1}{2}\right) }{\Gamma (p\!-\!1)} \left( \frac{p\!-\!1}{2}\right) ^{\frac{p\!-\!1}{2}}[1\!+\!O(p^{\!-\!1})]\\= & {} \sqrt{\frac{p-1+l}{\frac{p-1}{2}+l}}\times \frac{(\frac{p-1}{2}+l)^{\frac{p-1}{2}+l} (2\kappa )^l}{(p-1+l)^{p-1+l}\,l\,!}\\{} & {} \quad \times e^{-\kappa }2^{p-\frac{3}{2}} \left( \frac{p-1}{2}\right) ^{\frac{p-1}{2}}[1+O(p^{-1})]\\= & {} \sqrt{2}\times \frac{\kappa ^l e^{-\kappa }}{l\,!} 2^{p-\frac{3}{2}}\left( \frac{p-1+2l}{2(p-1+l)}\right) ^{\frac{p-1}{2}}\\{} & {} \quad \times \left( \frac{p-1}{2(p-1+l)}\right) ^{\frac{p-1}{2}}[1+O(p^{-1})]\\= & {} \frac{\kappa ^l e^{-\kappa }}{l\,!} \left( 1+\frac{l}{p-1+l}\right) ^{\frac{p-1}{2}}\\{} & {} \quad \times \left( 1+\frac{-l}{p-1+l}\right) ^{\frac{p-1}{2}}[1+O(p^{-1})]\\= & {} \frac{\kappa ^l e^{-\kappa }}{l\,!} e^{\frac{l}{2}-\frac{l}{2}} [1+O(p^{-1})]\\= & {} \frac{\kappa ^l e^{-\kappa }}{l\,!} [1+O(p^{-1})] \end{aligned}$$

as \(p\rightarrow \infty \). It is immediate from the above result that the cdf of \({\text {CH}}\left( \frac{p-1}{2},p-1,2\kappa \right) \) converges to the cdf of \({\text {Poisson}}(\kappa )\) at every continuity point of the limiting distribution as \(p\rightarrow \infty \). Hence, \({\text {CH}}\left( \frac{p-1}{2},p-1,2\kappa \right) \) converges in distribution to \({\text {Poisson}}(\kappa )\). \(\square \)
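The convergence can also be seen numerically (our own illustration, not from the paper) by comparing the CH pmf with the \({\text {Poisson}}(\kappa )\) pmf for increasing \(p\), for instance via the total variation distance:

```python
# Illustration (ours) of Proposition 5 (ii): CH((p-1)/2, p-1, 2 kappa)
# approaches Poisson(kappa) as the dimension p grows.
import numpy as np
from scipy.special import gammaln, hyp1f1
from scipy.stats import poisson

def log_pi_L(l, p, kappa):
    # same CH pmf in log scale as in the earlier sketches
    a, b, x = (p - 1) / 2, p - 1.0, 2.0 * kappa
    return (gammaln(a + l) - gammaln(a) + l * np.log(x)
            - gammaln(b + l) + gammaln(b) - gammaln(l + 1)
            - np.log(hyp1f1(a, b, x)))

kappa = 4.0
ls = np.arange(0, 60)
for p in [5, 20, 100, 1000]:
    ch_pmf = np.exp(log_pi_L(ls, p, kappa))
    tv = 0.5 * np.sum(np.abs(ch_pmf - poisson.pmf(ls, kappa)))
    print(p, tv)   # total variation distance shrinks as p grows
```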

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Kang, S., Oh, HS. Novel sampling method for the von Mises–Fisher distribution. Stat Comput 34, 106 (2024). https://doi.org/10.1007/s11222-024-10419-3

