1 Introduction

In many areas of statistical modelling, data are represented as directions or unit-length vectors (Mardia 1972; Jupp 1995; Mardia and Jupp 2000). The analysis of directional data has attracted much research interest in various disciplines, from hydrology (Chen et al. 2013) to biology (Boomsma et al. 2006), and from image analysis (Zhe et al. 2019) to text mining (Banerjee et al. 2005). The von Mises–Fisher (vMF) distribution is one of the most commonly used distributions for modelling data distributed on the surface of the unit hypersphere (Fisher 1953; Mardia and Jupp 2000). The vMF distribution has been applied successfully in many domains (e.g., Sinkkonen and Kaski 2002; Mcgraw et al. 2006; Bangert et al. 2010).

A mixture of vMF distributions (Banerjee et al. 2005) assumes that each observation is drawn from one of p vMF component distributions. Applications of the vMF mixture model are diverse, including image analysis (Mcgraw et al. 2006) and text mining (Banerjee et al. 2005). More recently, it has been shown that the vMF mixture model can approximate any continuous density function on the unit hypersphere to an arbitrary degree of accuracy given a sufficient number of mixture components (Ng and Kwong 2020).

Various estimation strategies have been developed to perform model estimation, including the maximum likelihood approach (Banerjee et al. 2005) and Bayesian methods (Bagchi and Kadane 1991; Taghia et al. 2014). The maximum likelihood approach, which is typically implemented using the Expectation–Maximization (EM) algorithm (Dempster et al. 1977; Banerjee et al. 2005), is among the most popular approaches to parameter estimation. However, as we show in Sect. 3, the likelihood function is unbounded from above, and consequently a global maximum likelihood estimate (MLE) fails to exist.

The unboundedness of the likelihood function occurs in various mixture modelling contexts, particularly for mixture models with location-scale family distributions, including the mixture of normal distributions (Ciuperca et al. 2003; Chen et al. 2008) and the mixture of Gamma distributions (Chen et al. 2016). Various approaches have been developed to tackle the likelihood degeneracy problem, including resorting to local estimates (Peters et al. 1978), compactification of the parameter space (Redner 1981), and constrained maximization of the likelihood function (Hathaway 1985).

An alternative solution to the likelihood degeneracy problem is to penalize the likelihood function so that the resulting penalized likelihood function is bounded and the existence of the penalized MLE is guaranteed. The penalized maximum likelihood approach has been applied to normal mixture models (Ciuperca et al. 2003; Chen et al. 2008) and to two-parameter Gamma mixture models (Chen et al. 2016). A penalized likelihood approach also has a Bayesian interpretation (Good and Gaskins 1971; Ciuperca et al. 2003), whereby the penalized likelihood function corresponds to a posterior density and the penalized maximum likelihood solution to the maximum a posteriori estimate.

Previously, the penalized maximum likelihood approach was applied to the mixture of von Mises distributions (Chen et al. 2008), where consistency results were obtained. The von Mises distribution is a special case of the von Mises–Fisher distribution defined on the circle. We generalize the results of Chen et al. (2008) to the sphere of arbitrary dimension. The consistency proof in Chen et al. (2008) relies heavily on the univariate properties of the von Mises distribution, and generalizing the result to higher dimensions is not straightforward. In this paper we prove a few useful technical lemmas before proving the main results. To handle the non-identifiability of mixture models, we use the framework of Redner (1981) to obtain consistency in the quotient space.

In this paper, we consider the penalized likelihood approach to tackle the problem of likelihood unboundedness for the mixture of vMF distributions. We incorporate a penalty term into the likelihood function and maximize the resulting penalized likelihood function. We study conditions on the penalty function that ensure consistency of the penalized maximum likelihood estimator (PMLE). We also develop an Expectation–Maximization algorithm to perform model estimation based on the penalized likelihood function. The rest of the paper is structured as follows. Section 2 introduces the background on vMF mixtures and the key notation used in the subsequent sections. The problem of likelihood degeneracy is formally presented in Sect. 3. Section 4 develops the penalized maximum likelihood approach and discusses conditions on the penalty function that ensure strong consistency of the resulting estimator. An Expectation–Maximization algorithm is also developed in Sect. 4, and its performance is examined in Sect. 5. Section 6 illustrates the proposed EM algorithm using a data application. We conclude the paper with a discussion section.

2 Background

The probability density function of a d-dimensional vMF distribution is given by:

$$\begin{aligned} f({\mathbf {x}};\varvec{\mu }, \kappa ) = c_{d}(\kappa ) e^{\kappa \varvec{\mu }^{T} {\mathbf {x}}} , \end{aligned}$$
(1)

where \({\mathbf {x}} \in {\mathbb {S}}^{d-1}\) is a d-dimensional unit vector (i.e. \( {\mathbb {S}}^{d-1} := \{ {\mathbf {x}} \in {\mathbb {R}}^d: ||{\mathbf {x}}|| = 1 \}\), where \(||\cdot ||\) is the \(L_2\) norm), \(\varvec{\mu } \in {\mathbb {S}}^{d-1}\) is the mean direction, and \(\kappa \ge 0\) is the concentration parameter. The normalizing constant \(c_d(\kappa )\) has the form

$$\begin{aligned} c_d(\kappa ) = \frac{\kappa ^{d/2 - 1}}{(2 \pi )^{d/2} I_{d/2-1}(\kappa )}, \end{aligned}$$

where \(I_r(\cdot )\) is the modified Bessel function of the first kind of order r. The vMF distribution becomes increasingly concentrated at the mean direction \(\varvec{\mu }\) as the concentration parameter \(\kappa \) increases. The case \(\kappa =0\) corresponds to the uniform distribution on \({\mathbb {S}}^{d-1}\).
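As a computational aside, the density in (1) can be evaluated stably on the log scale using the exponentially scaled Bessel function. The following is a minimal sketch, assuming NumPy and SciPy; the function name vmf_log_density is ours and is reused in later illustrative snippets.

```python
import numpy as np
from scipy.special import ive, gammaln  # ive(r, k) = I_r(k) * exp(-k)


def vmf_log_density(x, mu, kappa):
    """Log-density of the vMF distribution in Eq. (1) at a unit vector x.

    The exponentially scaled Bessel function ive avoids overflow of
    I_{d/2-1}(kappa) for large concentration parameters.
    """
    d = x.shape[-1]
    if kappa == 0.0:
        # kappa = 0: uniform distribution on S^{d-1}, density 1 / surface area
        return gammaln(d / 2) - np.log(2.0) - (d / 2) * np.log(np.pi)
    log_cd = ((d / 2 - 1) * np.log(kappa)
              - (d / 2) * np.log(2 * np.pi)
              - (np.log(ive(d / 2 - 1, kappa)) + kappa))
    return log_cd + kappa * np.dot(mu, x)
```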

The probability density function of a mixture of vMF distributions with p mixture components can be expressed as

$$\begin{aligned} g({\mathbf {x}}; \{\pi _k, \varvec{\mu }_k, \kappa _k\}_{k=1}^{p}) = \sum _{k=1}^{p} \pi _k f({\mathbf {x}}; \varvec{\mu }_k, \kappa _k) , \end{aligned}$$
(2)

where \((\pi _1, \ldots , \pi _p)\) are the mixing proportions, \((\varvec{\mu }_k, \kappa _k)\) are the parameters of the kth component of the mixture, and \(f(\cdot ; \varvec{\mu }_k, \kappa _k)\) is the vMF density function defined in (1).
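Building on the vmf_log_density sketch above, the mixture density in (2) can be evaluated on the log scale via a log-sum-exp over components; again this is a minimal, hypothetical sketch rather than a prescribed implementation.

```python
def vmf_mixture_log_density(x, pis, mus, kappas):
    """Log of the p-component mixture density in Eq. (2) via log-sum-exp.

    Assumes strictly positive mixing proportions pis summing to one.
    """
    comp = np.array([np.log(pi_k) + vmf_log_density(x, mu_k, kappa_k)
                     for pi_k, mu_k, kappa_k in zip(pis, mus, kappas)])
    m = comp.max()
    return m + np.log(np.exp(comp - m).sum())
```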

We let \(\varTheta := \{ \theta \equiv (\varvec{\mu }, \kappa ): \varvec{\mu } \in {\mathbb {S}}^{d-1}, \kappa \ge 0 \}\) be the parameter space of the vMF distribution, with the metric \(\rho ( \cdot , \cdot )\) defined as

$$\begin{aligned} \rho (\theta _1, \theta _2) = \text{ arccos }(\varvec{\mu }_1^{T} \varvec{ \mu }_2) + |\kappa _1 - \kappa _2| , \end{aligned}$$
(3)

for \(\theta _1 = (\varvec{\mu }_1, \kappa _1), \theta _2 = (\varvec{\mu }_2, \kappa _2)\). For any \(\theta = (\varvec{\mu }, \kappa ) \in \varTheta \), we write \(f_{\theta }(\cdot ) := f(\cdot ; \varvec{\mu }, \kappa )\) for the density function and \(\gamma _{\theta }\) for the corresponding measure. The space of mixing probabilities is denoted by \(\varPi := \{ (\pi _1, \ldots , \pi _p): \sum _{k=1}^{p} \pi _k = 1, \pi _k \ge 0, k = 1, \ldots , p \}\). A p-component mixture of vMF distributions can be expressed as \(\gamma = \sum _{k=1}^{p} \pi _k \gamma _{\theta _k}\) where \((\pi _1, \ldots , \pi _p) \in \varPi \) and \((\theta _1, \ldots , \theta _p) \in \varTheta ^{p}\), and where \(\varTheta ^{p} = \varTheta \times \cdots \times \varTheta \) is the product of the parameter spaces. We define the product space \(\varGamma := \varPi \times \varTheta ^{p}\), and we slightly abuse notation by letting \(\gamma \) denote both the mixing measure and the parameters in \(\varGamma \). While \(\varGamma \) is a natural parameterization of the family of mixtures of vMF distributions, elements of \(\varGamma \) are not identifiable. Thus, we let \({\tilde{\varGamma }}\) be the quotient topological space obtained from \(\varGamma \) by identifying all parameters \((\pi _1, \ldots , \pi _p, \theta _1, \ldots , \theta _p)\) whose corresponding densities are equal almost everywhere. For the rest of the paper, we assume that the number of mixture components p is known.

3 Likelihood degeneracy

We investigate the likelihood degeneracy problem of the vMF mixture model in this section. For any observations generated from a vMF mixture model with two or more mixture components, we show that the resulting likelihood function on the parameter space \(\varGamma \) is unbounded above. As discussed in Sect. 1, likelihood degeneracy is a common problem for mixture models with location-scale distributions, including normal mixtures. In the case of normal mixture distributions, one can show that by setting the mean parameter of a mixture component equal to one of the observations and letting the variance of that component converge to zero while holding the other parameters fixed, the likelihood function diverges to positive infinity (Chen et al. 2008).

For the vMF mixture distributions, the likelihood unboundedness is best understood in the special case of \({\mathbf {x}} \in {\mathbb {S}}^1\), that is, the mixture of von Mises distributions. The von Mises distribution, also known as the circular normal distribution, approaches a normal distribution as the concentration parameter \(\kappa \) becomes large:

$$\begin{aligned} f(x|\mu , \kappa ) \approx \frac{1}{\sigma \sqrt{2 \pi }} \exp \bigg [ \frac{-(x - \mu )^2}{2 \sigma ^2} \bigg ] , \end{aligned}$$

with \(\sigma ^2 = 1 / \kappa \), and the approximation converges uniformly as \(\kappa \) goes to infinity. Therefore, the likelihood function of a mixture of von Mises distributions can be made to diverge to infinity by setting the mean parameter of a mixture component equal to one of the observations and letting the corresponding concentration parameter diverge to infinity.

We now consider the general case of the vMF mixture models. Let \(\mathcal{X} = \{{\mathbf {x}}_1, \ldots , {\mathbf {x}}_n\}\) be the observations generated from a mixture of vMF distributions with density function \(\sum _{k=1}^{p} \pi _k f_{\theta _k}(\cdot )\) where \(\theta _k = (\varvec{\mu }_k, \kappa _k)\). The likelihood function can be expressed as:

$$\begin{aligned} L(\mathcal{X}; \varvec{\theta }, \varvec{\pi }) = \prod _{i=1}^{n} \sum _{k=1}^{p} \pi _k f({\mathbf {x}}_i; \varvec{\mu }_k, \kappa _k) , \end{aligned}$$

where \(\varvec{\theta } = (\theta _1, \ldots , \theta _p) = ((\varvec{\mu }_1, \kappa _1), \ldots , (\varvec{\mu }_p, \kappa _p))\) and \(\varvec{\pi } = (\pi _1, \ldots , \pi _p)\). We can show that by setting the mean direction \(\varvec{\mu }_k\) of one of the mixture components equal to an arbitrary observation and letting the corresponding concentration parameter \(\kappa _k\) go to infinity, the resulting likelihood function diverges.

Theorem 1

For any observations \(\mathcal{X} = ({\mathbf {x}}_1, \ldots , {\mathbf {x}}_n)\), there exists a sequence \((\varvec{\theta }^{(q)}, \varvec{\pi }^{(q)})\), \(q=1,2,\ldots \) such that \(L(\mathcal{X}; \varvec{\theta }^{(q)}, \varvec{\pi }^{(q)}) \uparrow \infty \) as \(q \rightarrow \infty \).

The proof of Theorem 1 is provided in the “Appendix”. The unboundedness of the likelihood function on the parameter space implies that the maximum likelihood estimator is not well defined.
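As a concrete, hypothetical numerical check of Theorem 1, one can fix the mean direction of the first component at an observation and increase the corresponding concentration parameter; using the density sketches from Sect. 2 (our own illustrative functions, not part of the formal development), the log-likelihood increases without bound.

```python
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)      # 50 observations on S^2

pis = np.array([0.5, 0.5])
mus = np.stack([X[0], -X[0]])                       # first component centred at x_1
kappas = np.array([1.0, 1.0])

for kappa_1 in [1e1, 1e2, 1e4, 1e6, 1e8]:
    kappas[0] = kappa_1
    loglik = sum(vmf_mixture_log_density(x, pis, mus, kappas) for x in X)
    print(f"kappa_1 = {kappa_1:.0e}, log-likelihood = {loglik:.1f}")
# the printed log-likelihood keeps increasing as kappa_1 grows
# (for d = 3 the dominant term grows like log(kappa_1))
```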

4 Penalized maximum likelihood estimation

4.1 Preliminary

Let \(\gamma _0 \in \varGamma \) be the true mixing measure for the mixture of vMF distributions with corresponding density function \(f_0\) on \({\mathbb {S}}^{d-1}\). We let M be the maximum of the true density \(f_0\):

$$\begin{aligned} M := \max _{{\mathbf {x}} \in {\mathbb {S}}^{d-1}} f_0({\mathbf {x}}) , \end{aligned}$$
(4)

and define the metric \(d({\mathbf {x}}, {\mathbf {y}}) = \arccos ({\mathbf {x}}^{T} {\mathbf {y}})\) on \({\mathbb {S}}^{d-1}\) as the angle between two unit vectors \({\mathbf {x}}, {\mathbf {y}} \in {\mathbb {S}}^{d-1}\). For any fixed \({\mathbf {x}} \in {\mathbb {S}}^{d-1}\) and positive number \(\epsilon \), the \(\epsilon \)-ball in \({\mathbb {S}}^{d-1}\) centered at \({\mathbf {x}}\) is defined as \(B_{\epsilon }({\mathbf {x}}) = \{{\mathbf {y}} \in {\mathbb {S}}^{d-1}: d({\mathbf {x}}, {\mathbf {y}}) < \epsilon \}\). For any measurable set \(B \subset {\mathbb {S}}^{d-1}\), the spherical measure of B is given by \( \omega (B) := \int _{B} d \omega \), where \(d \omega \) is the standard surface measure on \({\mathbb {S}}^{d-1}\).

For any \({\mathbf {x}} \in {\mathbb {S}}^{d-1}\) and small positive number \(\epsilon \), the measure of the ball \(B_{2\epsilon }({\mathbf {x}})\) in \({\mathbb {S}}^{d-1}\) is given by (Li 2011)

$$\begin{aligned} \omega (B_{2 \epsilon }({\mathbf {x}})) &= \frac{2 \pi ^{(d-1)/2}}{\varGamma (\frac{d-1}{2})} \int _{0}^{2\epsilon } \sin ^{d-2}(\theta ) \, d\theta \nonumber \\ &\le 2^{d-1} \frac{2 \pi ^{(d-1)/2}}{\varGamma (\frac{d-1}{2})} \epsilon ^{d-1} \end{aligned}$$
(5)
$$\begin{aligned} = A_2 \epsilon ^{d-1} , \end{aligned}$$
(6)

where

$$\begin{aligned} A_2 = 2^{d-1} \frac{2 \pi ^{(d-1)/2}}{\varGamma (\frac{d-1}{2})} . \end{aligned}$$
(7)

We define the function \(\delta (\cdot )\) by

$$\begin{aligned} \delta (\epsilon ) := M A_2 \epsilon ^{d-1} , \end{aligned}$$
(8)

where the constants M and \(A_2\) are defined in Eqs. (4) and (7), respectively. The function \(\delta (\cdot )\) plays a crucial role in Lemmas 1 and 2. Lemmas 1 and 2 are analogous to Lemmas 1 and 2 in Chen et al. (2008). They provide (almost sure) upper bounds on the number of observations in a small \(\epsilon \)-ball in \({\mathbb {S}}^{d-1}\). The upper bound in Lemma 1 holds for each fixed \(\epsilon \) in an interval, whereas the upper bound in Lemma 2 holds uniformly for all \(\epsilon \) in the same interval. The proof of Lemma 1 is given in the “Appendix”. The proof of Lemma 2 is similar to that of Lemma 2 in Chen et al. (2008) and is omitted. Lemmas 1 and 2 are essential for establishing consistency of the penalized maximum likelihood estimator.

We note that Lemmas 1 and 2 may be generalized by relaxing the assumption that the true density is a mixture of vMF densities. This is possible because the vMF assumption does not play a crucial role. Such a generalization has been obtained for normal mixtures (Chen et al. 2016, Lemma 3.2). However, this is not required for the proof of our main result.

Lemma 1

For any sufficiently small positive number \(\xi _0\), as \(n \rightarrow \infty \), and for each fixed \(\epsilon \) such that

$$\begin{aligned} \frac{\log n}{M n A_2} \le \epsilon ^{d-1} < \xi _0 , \end{aligned}$$

the following inequality holds except for a zero probability event:

$$\begin{aligned} \sup _{\varvec{\mu } \in S^{d-1}} \bigg \{ \frac{1}{n} \sum _{i=1}^{n} I\big (X_i \in B_{\epsilon }(\varvec{\mu })\big ) \bigg \} \le 2 \delta (\epsilon ). \end{aligned}$$
(9)

Uniformly for all \(\epsilon \) such that

$$\begin{aligned} 0< \epsilon ^{d-1} < \frac{\log n}{M n A_2}, \end{aligned}$$

the following inequality holds except for a zero probability event:

$$\begin{aligned} \sup _{\varvec{\mu } \in S^{d-1}} \bigg \{ \frac{1}{n} \sum _{i=1}^{n} I\big (X_i \in B_{\epsilon }(\varvec{\mu })\big ) \bigg \} \le 2 \frac{(\log n)^2}{n} . \end{aligned}$$
(10)

Lemma 2

For any sufficiently small positive number \(\xi _0\), as \(n \rightarrow \infty \), uniformly for all \(\epsilon \) such that

$$\begin{aligned} \frac{\log n}{M n A_2} \le \epsilon ^{d-1} < \xi _0 , \end{aligned}$$

the following inequality holds except for a zero probability event:

$$\begin{aligned} \sup _{\varvec{\mu } \in S^{d-1}} \bigg \{ \frac{1}{n} \sum _{i=1}^{n} I\big (X_i \in B_{\epsilon }(\varvec{\mu })\big ) \bigg \} \le 4 \delta (\epsilon ). \end{aligned}$$
(11)

Uniformly for all \(\epsilon \) such that

$$\begin{aligned} 0< \epsilon ^{d-1} < \frac{\log n}{M n A_2}, \end{aligned}$$

the following inequality holds except for a zero probability event:

$$\begin{aligned} \sup _{\varvec{\mu } \in S^{d-1}} \bigg \{ \frac{1}{n} \sum _{i=1}^{n} I\big (X_i \in B_{\epsilon }(\varvec{\mu })\big ) \bigg \} \le 2 \frac{(\log n)^2}{n} . \end{aligned}$$
(12)

4.2 Penalized maximum likelihood estimator

For any mixing measure of a p-component mixture \(\gamma = \sum _{l=1}^{p} \pi _l \gamma _{\theta _l} \) in \(\varGamma \), and n i.i.d. observations \(\mathcal{X}\), the penalized log-likelihood function is defined as

$$\begin{aligned} pl_n(\gamma ) = l_n(\gamma ) + p_n(\varvec{\kappa }) \end{aligned}$$
(13)

where \(l_n(\gamma )\) is the log-likelihood function:

$$\begin{aligned} l_n(\gamma ) = \sum _{i=1}^{n} \log \bigg \{ \sum _{k=1}^{p} \pi _k f({\mathbf {x}}_i; \varvec{\mu }_k, \kappa _k) \bigg \}, \end{aligned}$$

and \(p_n(\cdot )\) is a penalty function that depends on \(\varvec{\kappa } = (\kappa _1, \ldots , \kappa _p)\). Note that we slightly abuse notation and let \(p_n(\cdot )\) denote the penalty function while p denotes the number of mixture components. We impose the following conditions on the penalty function \(p_n(\cdot )\).

  1. C1

    \(p_n(\varvec{\kappa }) = \sum _{l=1}^{p} {\tilde{p}}_n(\kappa _l)\),

  2. C2

    For \(l=1,\ldots , p\), \(\sup _{\kappa _l > 0} \max \{0, {\tilde{p}}_n(\kappa _l) \} = o(n)\) and \({\tilde{p}}_n(\kappa _l) = o(n)\) for each fixed \(\kappa _l \ge 0\),

  3. C3

    For \(l=1, \ldots , p\), and for

    $$\begin{aligned} 0 < \frac{1}{\log (\kappa _l)^{2d - 2}} \le \frac{\log n}{M n A_2} , \end{aligned}$$

    \({\tilde{p}}_n(\kappa _l) \le -3 (\log n)^{2} \log \kappa _l\) for large enough n.
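As a computational aside, once a penalty of the additive form in C1 is chosen, the penalized log-likelihood in Eq. (13) can be evaluated directly. A minimal sketch, building on the vmf_mixture_log_density function sketched in Sect. 2; the argument penalty is a user-supplied function and the example choice in the docstring is illustrative only.

```python
def penalized_log_likelihood(X, pis, mus, kappas, penalty):
    """Penalized log-likelihood pl_n(gamma) of Eq. (13).

    `penalty` implements p_n(kappa_1, ..., kappa_p); for example,
    penalty = lambda k: -(1.0 / len(X)) * np.sum(k) is additive as in C1.
    """
    loglik = sum(vmf_mixture_log_density(x, pis, mus, kappas) for x in X)
    return loglik + penalty(kappas)
```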

Conditions C1–C3 on the penalty function are analogous to the three conditions proposed in Chen et al. (2008). Condition C1 assumes that the penalty function is of additive form. Condition C2 ensures that the penalty is not overly strong, while condition C3 allows the penalty to be severe when the concentration parameter is very large. Recall the true mixing measure \(\gamma _0 \in \varGamma \), and let \({\hat{\gamma }}\) denote the maximizer of the penalized log-likelihood function defined in Eq. (13). We now state the main result of this paper, which shows that the maximizer of the penalized log-likelihood function is strongly consistent.

Theorem 2

Let \({\hat{\gamma }}_n\) be the maximizer of the penalized log-likelihood \(pl_n(\gamma )\). Then \({\hat{\gamma }}_n \rightarrow \gamma _0\) almost surely in the quotient topological space \({\tilde{\varGamma }}\).

4.3 EM algorithm

We develop an Expectation–Maximization algorithm to maximize the penalized log-likelihood function defined in Eq. (13). By condition C1, the penalty function is assumed to have the form \(p_n(\varvec{\kappa }) = \sum _{l=1}^{p} {\tilde{p}}_n(\kappa _l)\). We consider \({\tilde{p}}_n(\kappa _l)\) of the form \({\tilde{p}}_n(\kappa _l) = - \psi _n \kappa _l\) for all l, where \(\psi _n \propto n^{-1}\) is a constant that depends on the sample size n. In particular, we may set \(\psi _n = \zeta / n\) for some constant \(\zeta > 0\) or \(\psi _n = S_x / n\) where \(S_x\) is the sample circular variance.

The resulting penalty function clearly satisfies condition C2. We note that condition C3 is also satisfied since for

$$\begin{aligned} 0 < \frac{1}{\log (\kappa _l)^{2d - 2}} \le \frac{\log n}{M n A_2} , \end{aligned}$$

we have

$$\begin{aligned} \kappa _l \ge \exp \big ( (M n A_2 / \log n)^{1/(2d-2)} \big ) , \end{aligned}$$

so that \(-\psi _n \kappa _l \le -3 (\log n)^{2} \log \kappa _l\) for all sufficiently large n.

The EM algorithm developed in Banerjee et al. (2005) can be easily modified to incorporate an additional penalty function. The E-Step of the penalized EM involves computing the conditional probabilities:

$$\begin{aligned} p(Z_i = h|{\mathbf {x}}_i, \varvec{\theta }) = \frac{\pi _h f({\mathbf {x}}_i; \theta _h)}{ \sum _{l=1}^{p} \pi _l f({\mathbf {x}}_i;\theta _l) }, \quad h=1,\ldots , p, \end{aligned}$$
(14)

where \(Z_i\) is the latent variable denoting the cluster membership of the ith observation; a short computational sketch of the E- and M-steps is given after Eq. (17) below. For the M-step, using the method of Lagrange multipliers to enforce the constraints \(\varvec{\mu }_l^{T} \varvec{\mu }_l = 1\), we maximize the expected complete-data penalized log-likelihood below

$$\begin{aligned}&\sum _{l=1}^{p} \bigg [ \sum _{i=1}^{n} (\log ( \pi _l ) + \log (c_d(\kappa _l)) ) p(Z_i=l| {\mathbf {x}}_i, \varvec{\theta })\\&\quad + \sum _{i=1}^{n} \kappa _l \varvec{\mu }_l^{T} {\mathbf {x}}_i \, p(Z_i=l| {\mathbf {x}}_i, \varvec{\theta }) - \psi _n \kappa _l + \lambda _l (1 - \varvec{\mu }_l^{T} \varvec{\mu }_l ) \bigg ] \end{aligned}$$

with respect to \(\varvec{\mu }_h, \kappa _h, \pi _h\) for \(h=1,\ldots ,p\), which gives:

$$\begin{aligned} {\hat{\pi }}_h = \frac{1}{n} \sum _{i=1}^{n} p(Z_i = h|{\mathbf {x}}_i, \varvec{\theta }) \end{aligned}$$
(15)
$$\begin{aligned} \hat{\varvec{\mu }}_h = \frac{ r_h }{ ||r_h||} \end{aligned}$$
(16)
$$\begin{aligned} \frac{I_{d/2}({\hat{\kappa }}_h)}{I_{d/2-1}({\hat{\kappa }}_h)} = \frac{- \psi _n + ||r_h||}{\sum _{i=1}^{n} p(Z_i=h|{\mathbf {x}}_i, \varvec{\theta })} \end{aligned}$$
(17)

where \( r_h = \sum _{i=1}^{n} {\mathbf {x}}_i p(Z_i = h|{\mathbf {x}}_i, \varvec{\theta })\). We note that the assumption on \(\psi _n\) implies that \(-\psi _n + ||r_h|| \ge 0\) almost surely as \(n \rightarrow \infty \). However, for a finite sample size, there is a positive probability that \( -\psi _n + ||r_h|| < 0\), in which case the updating equation for \(\kappa _h\) is not well defined since the left hand side of Eq. (17) is non-negative. Nevertheless, the left hand side of Eq. (17) is a strictly increasing function from \([0, \infty )\) to [0, 1) (Schou 1978; Hornik and Grün 2014), and in particular \({\hat{\kappa }}_h = 0\) whenever

$$\begin{aligned} \frac{I_{d/2}({\hat{\kappa }}_h)}{I_{d/2-1}({\hat{\kappa }}_h)} = 0. \end{aligned}$$

Thus, we can simply set \(\kappa _h = 0\) whenever \( -\psi _n + ||r_h|| < 0\). To solve Eq. (17) for \({\hat{\kappa }}_h\), various approximations have been proposed (Banerjee et al. 2005; Tanabe et al. 2007; Song et al. 2012). Section 2.2 of Hornik and Grün (2014) contains a detailed review of available approximations. We consider the approximation used in Banerjee et al. (2005):

$$\begin{aligned} {\hat{\kappa }}_h \approx \frac{\rho _h (d - \rho _h^2)}{ 1 - \rho _h^{2}} , \end{aligned}$$

with

$$\begin{aligned} \rho _h = \frac{- \psi _n + ||r_h||}{\sum _{i=1}^{n} p(Z_i=h|{\mathbf {x}}_i, \varvec{\theta })} . \end{aligned}$$
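The following is a minimal sketch of the resulting E- and M-steps, assuming NumPy and the vmf_log_density sketch from Sect. 2; the function names e_step and m_step are ours. It implements Eqs. (14)–(17) with the penalty \(-\psi _n \kappa _l\), the Banerjee et al. (2005) approximation, and the fallback \(\kappa _h = 0\) discussed above.

```python
def e_step(X, pis, mus, kappas):
    """E-step of Eq. (14): responsibilities p(Z_i = h | x_i, theta)."""
    n, p = X.shape[0], len(pis)
    log_r = np.empty((n, p))
    for h in range(p):
        log_r[:, h] = np.log(pis[h]) + np.array(
            [vmf_log_density(x, mus[h], kappas[h]) for x in X])
    log_r -= log_r.max(axis=1, keepdims=True)     # normalise on the log scale
    resp = np.exp(log_r)
    return resp / resp.sum(axis=1, keepdims=True)


def m_step(X, resp, psi_n):
    """M-step of Eqs. (15)-(17) with the penalty -psi_n * kappa_l.

    kappa_h uses the Banerjee et al. (2005) approximation to Eq. (17);
    kappa_h is set to 0 whenever -psi_n + ||r_h|| < 0.
    """
    n, d = X.shape
    p = resp.shape[1]
    pis = resp.sum(axis=0) / n                    # Eq. (15)
    mus = np.empty((p, d))
    kappas = np.empty(p)
    for h in range(p):
        r_h = (resp[:, h:h + 1] * X).sum(axis=0)  # r_h = sum_i x_i p(Z_i = h | x_i, theta)
        norm_r = np.linalg.norm(r_h)
        mus[h] = r_h / norm_r                     # Eq. (16)
        if -psi_n + norm_r < 0:
            kappas[h] = 0.0                       # fallback discussed above
            continue
        rho = (-psi_n + norm_r) / resp[:, h].sum()
        kappas[h] = rho * (d - rho ** 2) / (1 - rho ** 2)
    return pis, mus, kappas
```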

We initialize the EM algorithm by randomly assigning the observations to mixture components, and the algorithm is terminated when the change in the penalized log-likelihood falls below a small threshold, which is set at \(10^{-5}\) in the experiments.
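A sketch of the full EM loop with this initialization and stopping rule, using the e_step and m_step sketches above, might look as follows.

```python
def penalized_em(X, p, psi_n, max_iter=500, tol=1e-5, seed=0):
    """Penalized EM sketch: random initial assignment, then alternate E- and
    M-steps until the penalized log-likelihood changes by less than tol."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    resp = np.zeros((n, p))
    resp[np.arange(n), rng.integers(p, size=n)] = 1.0   # random hard assignment
    pis, mus, kappas = m_step(X, resp, psi_n)
    prev = -np.inf
    for _ in range(max_iter):
        resp = e_step(X, pis, mus, kappas)
        pis, mus, kappas = m_step(X, resp, psi_n)
        pl = (sum(vmf_mixture_log_density(x, pis, mus, kappas) for x in X)
              - psi_n * kappas.sum())   # Eq. (13) with p_n(kappa) = -psi_n * sum_l kappa_l
        if abs(pl - prev) < tol:
            break
        prev = pl
    return pis, mus, kappas
```

For instance, the penalty \({\tilde{p}}_n(\kappa _l) = -(1/n)\kappa _l\) used in Sect. 5 would correspond to calling penalized_em(X, p, psi_n=1.0 / n).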

5 Simulation studies

We perform simulation studies to investigate the performance of the proposed EM algorithm for maximizing the penalized likelihood function. We generate data from mixtures of vMF distributions with two and three mixture components and with dimensions \(d=2,3,4\). For each model, data are generated with increasing sample sizes to assess the convergence of the estimated parameters toward the true parameters. The concentration parameters \(\varvec{\kappa }\) and the mixing proportions \(\varvec{\pi }\) are pre-specified, whereas the mean directions \(\varvec{\mu }\) are drawn from the uniform distribution on the surface of the unit hypersphere.

Table 1 Simulation results for the vMF mixtures with two mixture components

For the model with two mixture components, we specify the mixing proportions as \(\varvec{\pi } = (0.5, 0.5)\) and the concentration parameters as \(\varvec{\kappa } = (10, 1)\). For the model with three mixture components, we set \(\varvec{\pi } = (0.4, 0.3, 0.3)\) and \(\varvec{\kappa } = (10, 5, 1)\). For illustrative purposes, we consider the penalty function \({\tilde{p}}_n(\kappa _l) = - (1/n) \kappa _l\). For each combination of dimension d and sample size n, we simulate 500 random samples from the model, and the EM algorithm developed in Sect. 4.3 is used to obtain the parameter estimates. We measure the distance between the estimated and true parameters for each random sample. For the mean direction parameters \(\varvec{\mu }\), the distance is measured using the metric \(d({\mathbf {x}}, {\mathbf {y}}) = \text{ arccos }({\mathbf {x}}^T {\mathbf {y}})\); a sketch of part of this setup is given below.
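The following sketch illustrates two ingredients of the simulation design: drawing mean directions uniformly on the sphere and computing the arccos error metric. The dimension and sample size shown are hypothetical, and the vMF sampler itself (e.g. a rejection sampler) is omitted.

```python
d, n = 3, 1000            # hypothetical dimension and sample size
rng = np.random.default_rng(1)

def uniform_direction(d, rng):
    """Draw a mean direction uniformly on S^{d-1} (a normalised Gaussian vector)."""
    z = rng.normal(size=d)
    return z / np.linalg.norm(z)

def direction_error(mu_hat, mu_true):
    """Estimation error arccos(mu_hat^T mu_true); the clip guards against rounding."""
    return np.arccos(np.clip(np.dot(mu_hat, mu_true), -1.0, 1.0))

mu_true = np.stack([uniform_direction(d, rng) for _ in range(2)])
```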

Simulation results for the two- and three-component cases are presented in Tables 1 and 2, respectively. The average and standard deviation of the distance between the true and estimated parameters over the 500 replications are reported. We observe that the estimated parameters converge to the true parameters as n increases. We also notice that the mean direction parameter can be estimated with higher precision when the corresponding concentration parameter is large. This is expected since observations are more tightly clustered around the mean direction when the concentration parameter is large.

Table 2 Simulation results for the vMF mixtures with three mixture components

Tables 3 and 4 show the number of degeneracies encountered when running the EM algorithm to compute the ordinary MLE for mixtures of vMF distributions. Observations are generated from a vMF mixture with a single component for Table 3 and with two components for Table 4. We vary the dimension of the data from \(d=3\) to \(d=4\) and the sample size from \(n=100\) to \(n=500\). Mixtures of vMF distributions with \(p=2,3,4,5\) components are fitted to the generated data. We compute the ordinary MLE using the EM algorithm and record the number of times the EM fails to converge out of 1000 simulation runs. The EM algorithm is considered to have failed to converge if one of the concentration parameters becomes exceedingly large (greater than \(10^{10}\)). From Tables 3 and 4, the EM algorithm fails to converge more often with smaller sample sizes. We also note that when the fitted model has a larger number of mixture components p, the EM algorithm is more likely to fail to converge.

Table 3 Number of degeneracies of the EM algorithm when computing the ordinary MLE for mixture of vMF distributions with one mixture component
Table 4 Number of degeneracies of the EM algorithm when computing the ordinary MLE for mixture of vMF distributions with two mixture components

6 Data application

We illustrate the EM algorithm for penalized maximum likelihood estimation using the household data set from the R package HSAUR3. The data set contains the household expenditures of 20 single men and 20 single women on four commodity groups. As in Hornik and Grün (2014), we focus on the three commodity groups housing, food and service. The EM algorithms for the ordinary MLE and for the penalized MLE, with 2 and 3 mixture components, are fitted to the data. The results are shown in Tables 5 and 6, respectively, where the estimated parameters \(\hat{\varvec{\pi }}, \hat{\varvec{\mu }}, \hat{\varvec{\kappa }}\) are reported for all cases. The estimated parameters for the MLE and for the penalized MLE are very similar for both \(p=2\) and \(p=3\). The log-likelihood evaluated at the MLE is slightly larger than the penalized log-likelihood evaluated at the penalized MLE. More interestingly, we observe that in each case the largest concentration parameter obtained under the penalized MLE is smaller than that obtained under the MLE. This behavior suggests that the incorporation of a penalty function pulls the estimate of the largest concentration parameter towards 0 and prevents the divergence of the likelihood function.

Table 5 Maximum likelihood estimates obtained from fitting mixtures of vMF distributions to the household expenses example
Table 6 Penalized maximum likelihood estimates obtained from fitting mixtures of vMF distributions to the household expenses example

7 Discussion

In this paper we considered a penalized maximum likelihood approach to the estimation of the mixture of vMF distributions. By incorporating a suitable penalty function, we showed that the resulting penalized MLE is strongly consistent. An EM algorithm was derived to maximize the penalized likelihood function, and its performance and behavior were examined using simulation studies and a data application. The techniques used in this work to prove consistency could be applicable to study other mixture models for spherical observations.