On the circular correlation coefficients for bivariate von Mises distributions on a torus

Abstract

This paper studies circular correlations for the bivariate von Mises sine and cosine distributions. These are two simple and appealing models for bivariate angular data, each with five parameters whose interpretations are connected to those in the ordinary bivariate normal model. Unlike the bivariate normal, however, the variability and association of the angle pairs cannot be easily deduced from the model parameters, so tools from circular statistics are needed to compute such summary measures. We derive analytic expressions and study the properties of the Jammalamadaka–Sarma and Fisher–Lee circular correlation coefficients for the von Mises sine and cosine models. Likelihood-based inference for these coefficients from sample data is then presented. The correlation coefficients are illustrated with numerical and visual examples, and the maximum likelihood estimators are assessed on simulated and real data, with comparisons to their non-parametric counterparts. Implementations of these computations for practical use are provided in our R package BAMBI.
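As a concrete illustration of the sample (non-parametric) counterparts of these coefficients, the following base-R sketch computes the Jammalamadaka–Sarma and Fisher–Lee coefficients directly from their standard definitions on simulated angle pairs. The helper names and toy data are ours and are for illustration only; the vetted implementations referenced in the abstract are those in the BAMBI package.

```r
# Sample circular correlation coefficients (illustrative sketch only).
circ_mean <- function(x) atan2(mean(sin(x)), mean(cos(x)))  # sample circular mean

# Jammalamadaka-Sarma sample coefficient
rho_js <- function(theta, phi) {
  st <- sin(theta - circ_mean(theta))
  sp <- sin(phi - circ_mean(phi))
  sum(st * sp) / sqrt(sum(st^2) * sum(sp^2))
}

# Fisher-Lee sample coefficient (sums over all pairs i < j)
rho_fl <- function(theta, phi) {
  dt <- outer(theta, theta, "-")
  dp <- outer(phi, phi, "-")
  keep <- upper.tri(dt)
  sum(sin(dt[keep]) * sin(dp[keep])) /
    sqrt(sum(sin(dt[keep])^2) * sum(sin(dp[keep])^2))
}

# Toy example: positively associated angle pairs
set.seed(1)
theta <- runif(200, -pi, pi)
phi   <- (theta + rnorm(200, sd = 0.5) + pi) %% (2 * pi) - pi
c(JS = rho_js(theta, phi), FL = rho_fl(theta, phi))
```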


Notes

  1. In the literature the JS and FL correlation coefficients have been denoted by \(\rho _c\) and \(\rho _T\), respectively, following the authors’ notations. We shall, however, use \(\rho _{\text {JS}}\) and \(\rho _{\text {FL}}\) in this paper for clarity.

  2. Circular variance is defined for an angular variable \(\Theta \) as \(\text {var}(\Theta ) = 1-E(\cos (\Theta ))\) (see, e.g., Jammalamadaka and Sengupta 2001). Expressions for \(\text {var}(\Theta )\) and \(\text {var}(\Phi )\) for the von Mises sine distribution were first provided in Singh et al. (2002).

  3. Options for computing jackknife and bootstrap-based standard error estimates are provided in our R package BAMBI.
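To make the resampling idea in Note 3 concrete, here is a minimal non-parametric bootstrap sketch for the standard error of a sample circular correlation coefficient, reusing the rho_js helper from the earlier sketch. This is our own illustration of the generic bootstrap (resampling angle pairs with replacement), not the BAMBI implementation.

```r
# Bootstrap standard error for a circular correlation estimate (sketch).
# Assumes paired angles `theta`, `phi` and the rho_js() helper defined earlier.
boot_se <- function(theta, phi, stat = rho_js, B = 1000L) {
  n <- length(theta)
  reps <- replicate(B, {
    idx <- sample.int(n, n, replace = TRUE)  # resample (theta, phi) pairs jointly
    stat(theta[idx], phi[idx])
  })
  sd(reps)  # bootstrap SE = standard deviation of the replicate estimates
}
# boot_se(theta, phi)   # example call on the toy data above
```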

References

  • Abramowitz M, Stegun IA (1964) Handbook of mathematical functions: with formulas, graphs, and mathematical tables, vol 55. Courier Corporation, Chelmsford

  • Bhattacharya D, Cheng J (2015) De novo protein conformational sampling using a probabilistic graphical model. Sci Rep 5:16332

  • Chakraborty S, Wong SWK (2021) BAMBI: an R package for fitting bivariate angular mixture models. J Stat Softw 99(11):1–69. https://doi.org/10.18637/jss.v099.i11

  • Chakraborty S, Lan T, Tseng Y, Wong SW (2021) Bayesian analysis of coupled cellular and nuclear trajectories for cell migration. Biometrics

  • Davison AC, Hinkley DV (1997) Bootstrap methods and their application. Cambridge series in statistical and probabilistic mathematics. Cambridge University Press, Cambridge. https://doi.org/10.1017/CBO9780511802843

  • Fisher NI (1995) Statistical analysis of circular data. Cambridge University Press, Cambridge

  • Fisher NI, Lee A (1983) A correlation coefficient for circular data. Biometrika 70(2):327–332

  • Jammalamadaka SR, Sarma Y (1988) A correlation coefficient for angular variables. Stat Theory Data Anal II:349–364

  • Jammalamadaka SR, Sengupta A (2001) Topics in circular statistics, vol 5. World Scientific, Singapore

  • Lan T, Hung SH, Su X, Wong SW, Tseng Y (2018) Integrating transient cellular and nuclear motions to comprehensively describe cell migration patterns. Sci Rep 8:1488

  • Lehmann EL, Casella G (1998) Theory of point estimation, 2nd edn. Springer Texts in Statistics. Springer, New York

  • Lennox KP, Dahl DB, Vannucci M, Tsai JW (2009) Density estimation for protein conformation angles using a bivariate von Mises distribution and Bayesian nonparametrics. J Am Stat Assoc 104(486):586–596

  • Mardia KV (1975) Statistics of directional data. J R Stat Soc Ser B 37(3):349–393

  • Mardia KV, Jupp PE (2009) Directional statistics. Wiley series in probability and statistics. Wiley, New York

  • Mardia KV, Taylor CC, Subramaniam GK (2007) Protein bioinformatics and mixtures of bivariate von Mises distributions for angular data. Biometrics 63(2):505–512

  • Rivest LP (1988) A distribution for dependent unit vectors. Commun Stat-Theory Methods 17(2):461–483

  • Self SG, Liang KY (1987) Asymptotic properties of maximum likelihood estimators and likelihood ratio tests under nonstandard conditions. J Am Stat Assoc 82(398):605–610

  • Shieh GS, Johnson RA (2005) Inferences based on a bivariate distribution with von Mises marginals. Ann Inst Stat Math 57(4):789

  • Singh H, Hnizdo V, Demchuk E (2002) Probabilistic model for two dependent circular variables. Biometrika 89(3):719–723

Author information

Corresponding author

Correspondence to Samuel W. K. Wong.

Appendix A: Technical results required in the proofs of Theorems 2.1 and 2.2

Proposition A.0.1

Let \((\Theta , \Phi ) \sim {{\,\mathrm{{vM}_{s}}\,}}(\kappa _1, \kappa _2, \kappa _3, 0, 0)\). Then

  1. (i)

    \(E\left( \sin \Theta \sin \Phi \right) = \frac{1}{C_s} \frac{\partial C_s}{\partial \kappa _3}\).

  2. (ii)

    \({{\,\mathrm{{sgn}}\,}}( E(\sin \Phi \sin \Theta )) = {{\,\mathrm{{sgn}}\,}}(\kappa _3)\).

  3. (iii)

    \(E\left( \cos \Theta \cos \Phi \right) = \frac{1}{C_s} \frac{\partial ^2 C_s}{\partial \kappa _1 \partial \kappa _2}\).

  4. (iv)

    \(E\left( \cos \Theta \right) = \frac{1}{C_s} \frac{\partial C_s}{\partial \kappa _1}\), and \(E\left( \cos \Phi \right) = \frac{1}{C_s} \frac{\partial C_s}{\partial \kappa _2}\).

  5. (v)

    \(E\left( \cos ^2 \Theta \right) = \frac{1}{C_s} \frac{\partial ^2 C_s}{\partial \kappa _1^2}\), and \(E\left( \cos ^2 \Phi \right) = \frac{1}{C_s} \frac{\partial ^2 C_s}{\partial \kappa _2^2}\).

  6. (vi)

    \(E(\sin \Phi \cos \Theta ) = E(\sin \Theta \cos \Phi ) = 0\).

  7. (vii)

    \(E(\sin \Theta \cos \Theta ) = E(\sin \Phi \cos \Phi ) = 0\).

Proof

$$\begin{aligned} C_s = \int _{-\pi }^{\pi } \int _{-\pi }^{\pi } \exp \left( \kappa _1 \cos \theta + \kappa _2 \cos \phi + \kappa _3 \sin \theta \sin \phi \right) \, d\theta \, d\phi \end{aligned}$$
(A1)

Because the integrand in (A1) has continuous first- and second-order partial derivatives with respect to the parameters \((\kappa _1, \kappa _2, \kappa _3)\), and the limits of integration are finite and free of the parameters, the order of partial differentiation with respect to the parameters and integration can be interchanged (Leibniz’s rule).

  1. (i)

    Differentiating both sides of (A1) partially with respect to \(\kappa _3\), and then applying Leibniz’s rule, we get

    $$\begin{aligned} \frac{\partial C_s}{\partial \kappa _3}&= \int _{-\pi }^{\pi } \int _{-\pi }^{\pi } \sin \theta \sin \phi \, \exp \left( \kappa _1 \cos \theta + \kappa _2 \cos \phi + \kappa _3 \sin \theta \sin \phi \right) \, d\theta \, d\phi \\&= C_s E\left( \sin \Theta \sin \Phi \right) . \end{aligned}$$
  2. (ii)

    Let \(g(\kappa _3) = \frac{\partial C_s}{\partial \kappa _3}\). Since \(C_s > 0\), following part (i), it is enough to show that \({{\,\mathrm{{sgn}}\,}}(g(\kappa _3)) = {{\,\mathrm{{sgn}}\,}}(\kappa _3)\). From the infinite series representation (2.7) we get

    $$\begin{aligned} g(\kappa _3) = 8 \pi ^2 \sum _{m=1}^{\infty } m \left( {\begin{array}{c}2m\\ m\end{array}}\right) \frac{\kappa _3^{2m-1}}{(4\kappa _1 \kappa _2)^m} I_{m}(\kappa _1) I_{m}(\kappa _2) \lesseqgtr 0 \end{aligned}$$

    according as \(\kappa _3 \lesseqgtr 0\). This completes the proof.

  3. (iii)

    The result is obtained by partially differentiating (A1) twice, once with respect to \(\kappa _1\) and then with respect to \(\kappa _2\), and then applying Leibniz’s rule.

  4. (iv)

    The proof is given in Singh et al. (2002, Theorem 2(b)).

  5. (v)

    The first half is obtained by partially differentiating (A1) twice with respect to \(\kappa _1\), and the second half, with respect to \(\kappa _2\); followed by an application of Leibniz’s rule.

  6. (vi)

    We shall only prove the first half. The proof of the second half is similar. It follows from Singh et al. (2002) that the conditional distribution of \(\Phi \) given \(\Theta = \theta \) is univariate von Mises \({{\,\mathrm{{vM}}\,}}\left( \kappa = a(\theta ), \mu = b(\theta ) \right) \), and the marginal density of \(\Theta \) is given by:

    $$\begin{aligned} f_\Theta (\theta ) = \frac{2 \pi I_0(a(\theta ))}{C_s} \exp (\kappa _1 \cos \theta ) \mathbbm {1}_{[-\pi , \pi )} (\theta ) \end{aligned}$$

    where

    $$\begin{aligned} a(\theta ) = \left\{ \kappa _2^2 + \kappa _3^2 \sin ^2 \theta \right\} ^{1/2} \text { and } b(\theta ) = \tan ^{-1} \left( \frac{\kappa _3}{\kappa _2} \sin \theta \right) . \end{aligned}$$

    Note that \(f_\Theta \) is symmetric about \((\mu _1 = )\; 0\). Therefore, we have

    $$\begin{aligned} E\left( \sin \Phi \cos \Theta \right)&= E \left[ \cos \Theta \, E\left( \sin \Phi \mid \Theta \right) \right] \\&= E \left[ \cos \Theta \, \frac{I_1(a(\Theta ))}{I_0(a(\Theta )) } \, \sin (b(\Theta )) \right] \\&= E \left[ \cos \Theta \, \frac{I_1(a(\Theta ))}{I_0(a(\Theta )) } \, \frac{(\kappa _3/\kappa _2) \sin \Theta }{\sqrt{1 + (\kappa _3/\kappa _2)^2 \sin ^2 \Theta }} \right] \\&= 0, \end{aligned}$$

    where the second equality follows from Proposition A.0.2, and the last from the fact that the integrand is an odd function of \(\theta \).

  7. (vii)

    These results are immediate consequences of symmetry of the marginal distributions.

\(\square \)
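The moment identities above are straightforward to check numerically. The sketch below (our own illustration, not part of the proof) approximates the double integral defining \(C_s\) on a grid with \(\mu _1 = \mu _2 = 0\) and compares a finite-difference derivative in \(\kappa _3\) with \(C_s \, E(\sin \Theta \sin \Phi )\), as in part (i) of Proposition A.0.1; the parameter values are arbitrary.

```r
# Numerical check of dC_s/dkappa3 = C_s * E(sin Theta sin Phi) for the
# von Mises sine model (illustrative parameter values, mu1 = mu2 = 0).
k1 <- 1.5; k2 <- 2; k3 <- 0.8
grid <- seq(-pi, pi, length.out = 400)
h  <- grid[2] - grid[1]
th <- matrix(grid, 400, 400)   # theta varies down the rows
ph <- t(th)                    # phi varies across the columns
kern <- function(k3) exp(k1 * cos(th) + k2 * cos(ph) + k3 * sin(th) * sin(ph))

Cs   <- sum(kern(k3)) * h^2                           # normalizing constant (A1)
E_ss <- sum(sin(th) * sin(ph) * kern(k3)) * h^2 / Cs  # E(sin Theta sin Phi)
dCs  <- (sum(kern(k3 + 1e-5)) - sum(kern(k3 - 1e-5))) * h^2 / (2e-5)

c(lhs = dCs, rhs = Cs * E_ss)  # the two values agree up to numerical error
```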

Proposition A.0.2

Let X have a univariate von Mises distribution \({{\,\mathrm{{vM}}\,}}(\kappa , \mu )\). Then \(E(\sin X) = \frac{I_1(\kappa )}{I_0(\kappa )} \sin \mu \).

Proof

Because the density of X is symmetric about \(\mu \), we have,

$$\begin{aligned} E[\sin (X - \mu )] = E(\sin X) \cos \mu - E(\cos X) \sin \mu = 0. \end{aligned}$$
(A2)

Also (see, e.g., Abramowitz and Stegun 1964, §9.6.19),

$$\begin{aligned} E[\cos (X - \mu )] = E(\cos X) \cos \mu + E(\sin X) \sin \mu = \frac{I_1(\kappa )}{I_0(\kappa )}. \end{aligned}$$
(A3)

Solving for \(E(\sin X)\) from (A2) and (A3) yields \(E(\sin X) = \frac{I_1(\kappa )}{I_0(\kappa )} \sin \mu \). \(\square \)
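Proposition A.0.2 is easy to verify numerically; the following one-off check (ours, with arbitrary illustrative values of \(\kappa \) and \(\mu \)) integrates \(\sin x\) against the von Mises density and compares the result with \(\frac{I_1(\kappa )}{I_0(\kappa )} \sin \mu \).

```r
# Numerical check of E(sin X) = I_1(kappa)/I_0(kappa) * sin(mu), X ~ vM(kappa, mu).
kappa <- 2; mu <- 0.7
dvm <- function(x) exp(kappa * cos(x - mu)) / (2 * pi * besselI(kappa, 0))
lhs <- integrate(function(x) sin(x) * dvm(x), -pi, pi)$value
rhs <- besselI(kappa, 1) / besselI(kappa, 0) * sin(mu)
c(lhs = lhs, rhs = rhs)  # the two values agree up to quadrature error
```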

Proposition A.0.3

Let \((\Theta , \Phi ) \sim {{\,\mathrm{{vM}_{c}}\,}}(\kappa _1, \kappa _2, \kappa _3, 0, 0)\). Then

  1. (i)

    \(E\left( \cos \Theta \cos \Phi \right) = \frac{1}{C_c} \frac{\partial ^2 C_c}{\partial \kappa _1 \partial \kappa _2}\).

  2. (ii)

    \(E\left( \sin \Theta \sin \Phi \right) = \frac{1}{C_c} \left\{ \frac{\partial C_c}{\partial \kappa _3} - \frac{\partial ^2 C_c}{\partial \kappa _1 \partial \kappa _2} \right\} \).

  3. (iii)

    \({{\,\mathrm{{sgn}}\,}}( E(\sin \Phi \sin \Theta )) = {{\,\mathrm{{sgn}}\,}}(\kappa _3)\).

  4. (iv)

    \(E\left( \cos \Theta \right) = \frac{1}{C_c} \frac{\partial C_c}{\partial \kappa _1}\), and \(E\left( \cos \Phi \right) = \frac{1}{C_c} \frac{\partial C_c}{\partial \kappa _2}\).

  5. (v)

    \(E\left( \cos ^2 \Theta \right) = \frac{1}{C_c} \frac{\partial ^2 C_c}{\partial \kappa _1^2}\), and \(E\left( \cos ^2 \Phi \right) = \frac{1}{C_c} \frac{\partial ^2 C_c}{\partial \kappa _2^2}\).

  6. (vi)

    \(E(\sin \Phi \cos \Theta ) = E(\sin \Theta \cos \Phi ) = 0\).

  7. (vii)

    \(E(\sin \Theta \cos \Theta ) = E(\sin \Phi \cos \Phi ) = 0\).

Proof

We have

$$\begin{aligned} C_c = \int _{-\pi }^{\pi } \int _{-\pi }^{\pi } \exp \left( \kappa _1 \cos \theta + \kappa _2 \cos \phi + \kappa _3 \cos (\theta - \phi ) \right) \, d\theta \, d\phi \end{aligned}$$
(A4)

Using the same arguments as in the von Mises sine case, the order of partial differentiation with respect to the parameters and integration can be interchanged (Leibniz’s rule).

  1. (i)

    Differentiating both sides of (A4) twice, once with respect to \(\kappa _1\) and then with respect to \(\kappa _2\), and then applying Leibniz’s rule, we get

    $$\begin{aligned}&\frac{\partial ^2 C_c}{\partial \kappa _1 \partial \kappa _2} \\&\quad = \int _{-\pi }^{\pi } \int _{-\pi }^{\pi } \cos \theta \cos \phi \, \exp \left( \kappa _1 \cos \theta + \kappa _2 \cos \phi + \kappa _3 \cos (\theta - \phi ) \right) \, d\theta \, d\phi \\&\quad = C_c E \left( \cos \Theta \cos \Phi \right) . \end{aligned}$$
  2. (ii)

    Differentiating (A4) partially with respect to \(\kappa _3\), and then applying Leibniz’s rule, we get

    $$\begin{aligned} \frac{\partial C_c}{\partial \kappa _3}&= \int _{-\pi }^{\pi } \int _{-\pi }^{\pi } \cos (\theta -\phi ) \, \exp \left( \kappa _1 \cos \theta + \kappa _2 \cos \phi + \kappa _3 \cos (\theta - \phi ) \right) \, d\theta \, d\phi \\&= C_c \, E\left[ \cos \left( \Theta - \Phi \right) \right] = C_c \, E\left( \cos \Theta \cos \Phi + \sin \Theta \sin \Phi \right) . \end{aligned}$$

    This, together with part (i) yields

    $$\begin{aligned} \frac{\partial C_c}{\partial \kappa _3} - \frac{\partial ^2 C_c}{\partial \kappa _1 \partial \kappa _2} = C_c \, E (\sin \Theta \sin \Phi ) \end{aligned}$$
  3. (iii)

    Let \(g(\kappa _3) = \frac{\partial C_c}{\partial \kappa _3} - \frac{\partial ^2 C_c}{\partial \kappa _1 \partial \kappa _2}\). Since \(C_c > 0\), following part (ii), it is enough to show that \({{\,\mathrm{{sgn}}\,}}(g(\kappa _3)) = {{\,\mathrm{{sgn}}\,}}(\kappa _3)\). Straightforward algebra on the infinite series representations (2.13) and (2.16) of \(\frac{\partial C_c}{\partial \kappa _3}\) and \(\frac{\partial ^2 C_c}{\partial \kappa _1 \partial \kappa _2}\) yields

    $$\begin{aligned}&g(\kappa _3) \nonumber \\&\quad = 2 \pi ^2 \left\{ \sum _{m=1}^\infty I_{m-1}(\kappa _1) I_{m-1}(\kappa _2) I_m(\kappa _3) - \sum _{m=1}^\infty I_{m-1}(\kappa _1) I_{m+1}(\kappa _2) I_m(\kappa _3) \right. \nonumber \\&\qquad \left. - \sum _{m=1}^\infty I_{m+1}(\kappa _1) I_{m-1}(\kappa _2) I_m(\kappa _3) + \sum _{m=1}^\infty I_{m+1}(\kappa _1) I_{m+1}(\kappa _2) I_m(\kappa _3) \right\} \nonumber \\&\quad = 2 \pi ^2 \sum _{m=1}^\infty [I_{m-1}(\kappa _1) - I_{m+1}(\kappa _1)] [I_{m-1}(\kappa _2) - I_{m+1}(\kappa _2)] I_m(\kappa _3) \nonumber \\&\quad = \sum _{m=1}^\infty a_m \, I_m(\kappa _3) \end{aligned}$$
    (A5)

    where \(a_m = 2 \pi ^2 [I_{m-1}(\kappa _1) - I_{m+1}(\kappa _1)] [I_{m-1}(\kappa _2) - I_{m+1}(\kappa _2)]\). Note that \((a_m)_{m \ge 1}\) is a decreasing sequence of positive real numbers since \(I_n(x) > I_{n+1}(x)\) for \(n \ge 0\) and \(x > 0\). We consider the cases \(\kappa _3 = 0\), \(\kappa _3 > 0\) and \(\kappa _3 < 0\) separately, and note the sign of \(g(\kappa _3)\) in each case.

    1. (a)

      If \(\kappa _3 = 0\), then \(I_m(\kappa _3) = 0\) for all \(m = 1, 2, \ldots \). Consequently, the right hand side of (A5) becomes zero.

    2. (b)

      If \(\kappa _3 > 0\), then \(I_m(\kappa _3) > 0\) for all \(m = 1, 2, \ldots \). Therefore, the right hand side of (A5) is a series of positive terms, and hence is positive.

    3. (c)

      If \(\kappa _3 < 0\), then \(I_m(\kappa _3) = (-1)^m I_m(\vert \kappa _3\vert )\) for \(m = 1, 2, \ldots \), and the right hand side of (A5) is an (absolutely convergent) alternating series

      $$\begin{aligned} S = \sum _{m = 1}^\infty (-1)^m \, a_m \, I_m(\vert \kappa _3\vert ). \end{aligned}$$

      Note that

      $$\begin{aligned} S&= - \sum _{m = 1}^\infty a_{2m - 1} \, I_{2m - 1}(\vert \kappa _3\vert ) + \sum _{m = 1}^\infty a_{2m} \, I_{2m}(\vert \kappa _3\vert ) \\&< - \sum _{m = 1}^\infty a_{2m - 1} \, I_{2m - 1}(\vert \kappa _3\vert ) + \sum _{m = 1}^\infty a_{2m-1} \, I_{2m}(\vert \kappa _3\vert ) \\&= - \sum _{m = 1}^\infty a_{2m - 1} \, [I_{2m - 1}(\vert \kappa _3\vert ) - I_{2m}(\vert \kappa _3\vert )] = - S^* \\&< 0. \end{aligned}$$

      where the inequality in the second line follows from the fact that \((a_m)_{m \ge 1}\) is decreasing and positive, and that in the last line is a consequence of the fact that \(S^*\), being a series of positive terms (since \(I_{2m - 1}(\vert \kappa _3\vert ) > I_{2m}(\vert \kappa _3\vert )\) for all \(m \ge 1\)), is positive.

  4. (iv)

    The first part is proved by partially differentiating (A4) with respect to \(\kappa _1\), and the second part, with respect to \(\kappa _2\); followed by an application of Leibniz’s rule.

  5. (v)

    The first half is obtained by partially differentiating (A4) twice with respect to \(\kappa _1\), and the second half, with respect to \(\kappa _2\); followed by an application of Leibniz’s rule.

  6. (vi)

    We shall only prove the first half. The proof of the second half is similar. It follows from Mardia et al. (2007) that the conditional distribution of \(\Phi \) given \(\Theta = \theta \) is univariate von Mises \({{\,\mathrm{{vM}}\,}}\left( \kappa = \kappa _{23}(\theta ), \mu = \theta _0(\theta ) \right) \), and the marginal density of \(\Theta \) is given by:

    $$\begin{aligned} g_\Theta (\theta ) = \frac{2 \pi I_0(\kappa _{23}(\theta ))}{C_c} \, \exp (\kappa _1 \cos \theta ) \, \mathbbm {1}_{[-\pi , \pi )} (\theta ) \end{aligned}$$

    where

    $$\begin{aligned} \kappa _{23} (\theta ) = \left\{ \kappa _2^2 + \kappa _3^2 + 2 \kappa _2 \kappa _3 \cos \theta \right\} ^{1/2} \text { and } \theta _0(\theta ) = \tan ^{-1} \left( \frac{\kappa _3 \sin \theta }{\kappa _2 + \kappa _3 \cos \theta } \right) . \end{aligned}$$

    Note that \(g_\Theta \) is symmetric about \((\mu _1 = )\; 0\). Therefore, we have

    $$\begin{aligned} E\left( \sin \Phi \cos \Theta \right)&= E \left[ \cos \Theta \, E\left( \sin \Phi \mid \Theta \right) \right] \\&= E \left[ \cos \Theta \, \frac{I_1(\kappa _{23}(\Theta ))}{I_0(\kappa _{23}(\Theta )) } \, \sin \tan ^{-1} \left( \frac{\kappa _3 \sin \Theta }{\kappa _2 + \kappa _3 \cos \Theta } \right) \right] \\&= E \left[ \cos \Theta \, \frac{I_1(\kappa _{23}(\Theta ))}{I_0(\kappa _{23}(\Theta )) } \, \frac{\left( \frac{\kappa _3 \sin \Theta }{\kappa _2 + \kappa _3 \cos \Theta } \right) }{\sqrt{1 + \left( \frac{\kappa _3 \sin \Theta }{\kappa _2 + \kappa _3 \cos \Theta } \right) ^2}} \right] \\&= 0, \end{aligned}$$

    where the second equality follows from Proposition A.0.2, and the last from the fact that the integrand is an odd function of \(\theta \).

  7. (vii)

    These results are immediate consequences of symmetry of the marginal distributions.

\(\square \)
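As with the sine model, parts (ii) and (iii) of Proposition A.0.3 can be verified numerically. The sketch below (our own illustration, with arbitrary parameter values and \(\mu _1 = \mu _2 = 0\)) evaluates \(E(\sin \Theta \sin \Phi )\) for the cosine model on a grid and confirms that its sign tracks \({{\,\mathrm{{sgn}}\,}}(\kappa _3)\).

```r
# Numerical check of sgn(E(sin Theta sin Phi)) = sgn(kappa3) for the
# von Mises cosine model (illustrative values, mu1 = mu2 = 0).
k1 <- 1.2; k2 <- 0.8
grid <- seq(-pi, pi, length.out = 400)
h  <- grid[2] - grid[1]
th <- matrix(grid, 400, 400)   # theta varies down the rows
ph <- t(th)                    # phi varies across the columns
E_sin_sin <- function(k3) {
  w <- exp(k1 * cos(th) + k2 * cos(ph) + k3 * cos(th - ph))  # unnormalized density
  sum(sin(th) * sin(ph) * w) / sum(w)                        # C_c cancels in the ratio
}
sapply(c(-1, 0, 1.5), E_sin_sin)  # negative, ~zero, positive, matching sgn(kappa3)
```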

Cite this article

Chakraborty, S., Wong, S.W.K. On the circular correlation coefficients for bivariate von Mises distributions on a torus. Stat Papers 64, 643–675 (2023). https://doi.org/10.1007/s00362-022-01333-9
