1 Introduction

When we use a statistical model \(p(x|\theta )\) to analyze i.i.d. samples from an unknown distribution, a common approach is to compute a single point \(\hat{\theta }\) in the parameter space. An alternative is to infer a distribution over \(\theta \), as is natural in Bayesian statistics and ensemble learning. In this paper, we focus on the following mixture distribution:

$$\begin{aligned} r(x;q)=\int p(x|\theta ) q(\theta ) d\theta \end{aligned}$$

and discuss estimating the mixing distribution \(q(\theta )\) nonparametrically, where \(q(\theta )\) may be arbitrary, including continuous distributions. Lindsay (1983) proved that, given samples \(\{x_{1},\ldots ,x_{n}\}\), the maximum likelihood estimator of the mixing distribution is a discrete distribution whose number of support points (probability mass points) is no more than the sample size. This provides a guideline for determining the number of mixture components from data. The mixture estimation algorithm developed in Nowozin and Bakir (2008) can be used to estimate such discrete distributions. However, it is vulnerable to overfitting because of the high flexibility of nonparametric estimation.

In this paper, we define an objective functional with a single parameter \(\beta \), called the entropic risk measure (Rudloff et al. 2008), and propose a nonparametric mixture estimation method based on its minimization. For specific choices of \(\beta \), the method reduces to maximum likelihood estimation (MLE) (Lindsay 1983, 1995) or to kernel vector quantization (KVQ) (Tipping and Schölkopf 2001). We generalize Lindsay’s theorem to the proposed method and prove the discreteness of the optimal mixing distribution for general \(\beta \). We then provide an algorithm, an extension of the procedure in Nowozin and Bakir (2008), that computes the optimal mixing distribution for the entropic risk measure. Numerical experiments indicate that an appropriate choice of \(\beta \) reduces the generalization error. We analyze the estimation bias and variance to show that the range of optimal \(\beta \) depends on the sample size. We also discuss the relation between the proposed mixture estimation method and the rate-distortion problem (Berger 1971).

The paper is organized as follows. Section 2 introduces the entropic risk measure as the objective functional for estimating the mixture model. Section 3 proves the discreteness of the optimal mixing distribution with an overview of Lindsay’s proof and presents a concrete estimation algorithm for the mixing distribution. Section 4 examines its properties through numerical experiments with the Gaussian mixture model. In Sect. 5, we consider the range of \(\beta \) that improves the generalization ability and describe the relation to rate-distortion theory. Section 6 discusses extensions to objective functionals other than the entropic risk measure, and Sect. 7 concludes the paper.

2 Mixture model and objective functional

We consider the problem of estimating an unknown underlying distribution \(p^{*}(x)\) from \(n\) training samples, \(\{x_{1},\ldots ,x_{n}\}\), \(x_{i}\in \mathbf {R}^{d}\). As is common in density estimation and clustering, we use the following mixture density of the model \(p(x| \theta )\) with parameter \(\theta \in \Omega \) and the mixing distribution \(q(\theta )\):

$$\begin{aligned} r(x)=r(x;q)=\int p(x|\theta )q(\theta )d\theta \end{aligned}$$
(1)

where \(q(\theta ) \ge 0\) for all \(\theta \in \Omega \) and \(\int q(\theta ) d\theta =1\). In the following, we assume that \(p(x|\theta )\) is bounded for all \(x \in \mathbf {R}^{d}\) and all \(\theta \in \Omega \).

If \(q(\theta )\) is a single point mass, estimating \(q(\theta )\) reduces to point estimation; if \(q(\theta )\) is a parametric distribution, inferring \(q(\theta )\) is known as the empirical Bayes approach. Here, instead, we treat the problem nonparametrically.

Let us start by reviewing two approaches that are closely related to our framework. One is maximum likelihood estimation (MLE) (Lindsay 1983, 1995), formulated as follows:

$$\begin{aligned} \hat{q}(\theta )=\mathop {\mathrm{argmin}}\limits _{q}\left[ -\frac{1}{n} \sum _{i=1}^{n}\log r_{i}\right] \end{aligned}$$

where \(r_{i}=r(x_{i};q)=\int p(x_{i}|\theta )q(\theta )d\theta \). The other is the Kernel Vector Quantization (KVQ) (Tipping and Schölkopf 2001),Footnote 1

$$\begin{aligned} \hat{q}(\theta )=\mathop {\mathrm{argmin}}\limits _{q} \max _{i}\left[ - \log r_{i} \right] . \end{aligned}$$

In this paper, we extend these ideas and consider the following optimization problem:

$$\begin{aligned} \hat{q}(\theta )=\mathop {\mathrm{argmin}}\limits _{q}F_{\beta }(q), \end{aligned}$$

where

$$\begin{aligned} F_{\beta }(q)=\left\{ \begin{array}{ll} \frac{1}{\beta }\log \left( \frac{1}{n}\sum _{i=1}^{n}r_{i}^{-\beta }\right) & \text{ for } \beta \ne 0,\ \beta \ge -1,\\ -\frac{1}{n}\sum _{i=1}^{n}\log r_{i} & \text{ for } \beta =0. \end{array} \right. \end{aligned}$$
(2)

This objective functional corresponds to the entropic risk measure (of \(\log r(x)\)) in the mathematical finance literature (Rudloff et al. 2008).Footnote 2 Note that \(F_{\beta }(q)\) is continuous with respect to \(\beta \in \mathbf {R}\) and convex for \(\beta \ge -1\); we discuss its convexity in Sect. 3.1. The optimization problem above reduces to the MLE for \(\beta =0\) and to the KVQ as \(\beta \rightarrow \infty \). Other choices of convex objective functionals are discussed in Sect. 6.
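To make the objective concrete, the following short Python sketch (our own illustration, not from the original text) evaluates \(F_{\beta }(q)\) for a discrete mixing distribution with the isotropic Gaussian component of Eq. (12); the function names are hypothetical.

```python
import numpy as np

def gaussian_component(x, thetas, gamma=0.5):
    """Isotropic Gaussian p(x|theta) of Eq. (12), evaluated at one x for all thetas."""
    d = x.shape[-1]
    return (gamma / np.pi) ** (d / 2) * np.exp(-gamma * ((x - thetas) ** 2).sum(-1))

def F_beta(x, support, weights, beta, gamma=0.5):
    """Entropic risk measure F_beta(q) of Eq. (2) for a discrete mixing distribution.

    x: (n, d) samples; support: (k, d) support points theta_l; weights: (k,) proportions pi_l.
    """
    r = np.array([gaussian_component(xi, support, gamma) @ weights for xi in x])  # r_i
    if beta == 0.0:
        return -np.mean(np.log(r))                  # MLE objective
    return np.log(np.mean(r ** (-beta))) / beta     # general beta >= -1, beta != 0
```

Setting \(\beta =0\) recovers the average negative log-likelihood, while a large positive \(\beta \) makes the value approach \(\max _{i}(-\log r_{i})\), the KVQ objective.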

In the rest of this section, we investigate the properties of \(F_{\beta }\). Let \(p^{*}(x)\) be the true distribution that is generating the data \(\{x_{1}, \ldots , x_{n}\}\). The law of large numbers ensures that \(F_{\beta }\) converges to

$$\begin{aligned} \frac{1}{\beta }\log \int p^{*}(x) r(x)^{-\beta }dx \end{aligned}$$

as \(n \rightarrow \infty \). Let us recall the definition of the Renyi divergence (Renyi 1961),

$$\begin{aligned} D_{\alpha }(p_{1},p_{2}) = \frac{1}{\alpha -1} \log \int p_{1}(x)^{\alpha }p_{2}(x)^{1-\alpha }dx\ge 0,\;\;\; D_{\alpha }(p,p)=0 \;\;\;(\alpha > 0). \end{aligned}$$

From the non-negativity of the Renyi divergence, the following is shown easily by setting \(p_{1}=\widetilde{p}^{*}\), \(p_{2}=r\) and \(\alpha =\beta + 1\):

$$\begin{aligned} \frac{1}{\beta }\log \int p^{*}(x) r(x)^{-\beta }dx \ge \frac{1}{\beta }\log \int p^{*}(x) \widetilde{p}^{*}(x)^{-\beta }dx \end{aligned}$$
(3)

where

$$\begin{aligned} \widetilde{p}^{*}(x) = \frac{p^{*}(x)^{\frac{1}{1+\beta }}}{\int p^{*}(x)^{\frac{1}{1+\beta }} dx}. \end{aligned}$$
(4)

The inequality in Eq. (3) implies that the estimated mixture distribution \(\hat{r}(x)\) approaches \(\widetilde{p}^{*}(x)\), which is the escort distribution of \(p^{*}\). The escort distribution arises from the nonextensive form of entropy proposed by Tsallis (2009). Equivalently, the optimal mixture distribution \(r(x)\) satisfies the following relation:

$$\begin{aligned} p^{*}(x) = \frac{r(x)^{1+\beta }}{\int r(x)^{1+\beta } dx}. \end{aligned}$$

The differential entropy, \(\widetilde{H} = -\int \widetilde{p}^{*}(x)\log \widetilde{p}^{*}(x)dx\), of the escort distribution defined in Eq. (4) increases as \(\beta \) increases since \(\frac{d\widetilde{H}}{d\beta }\ge 0\) for \(\beta \ge -1\), and the escort distribution converges to the uniform distribution as \(\beta \rightarrow \infty \).
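As a simple illustration (ours, not from the original analysis), if \(p^{*}\) is a \(d\)-dimensional Gaussian \(N(x|\mu , \sigma ^{2}I_{d})\), then \(p^{*}(x)^{\frac{1}{1+\beta }}\) is proportional to a Gaussian density with inflated variance, so

$$\begin{aligned} \widetilde{p}^{*}(x) = N\left( x|\mu , (1+\beta )\sigma ^{2}I_{d}\right) , \end{aligned}$$

which makes explicit that increasing \(\beta \) flattens the target of the estimation and raises its entropy.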

3 Optimal mixing distribution

3.1 Discreteness of the optimal mixing distribution

In this section, we generalize Lindsay’s theorem (Lindsay 1983, 1995) to prove that the optimal mixing distribution \(q(\theta )\) minimizing \(F_{\beta }\) in Eq. (2) is discrete. This result also enables us to rely on the decoupled approach of Nowozin and Bakir (2008), as we will see in Sect. 3.2.

First, we show the convexity of \(F_{\beta }\) with respect to \({\varvec{r}}=(r_{1},\ldots ,r_{n})\) for \(\beta \ge -1\). We denote \(F_{\beta }\) as a function of \({\varvec{r}}\) by \(F_{\beta }({\varvec{r}})\). The function \(F_{\beta }({\varvec{r}})\) is convex if the following inequality holds for any two points, \({\varvec{r}}_{0}=(r_{01},\ldots ,r_{0n})\) and \({\varvec{r}}_{1}=(r_{11},\ldots ,r_{1n})\), and for \(0\le \eta \le 1\):

$$\begin{aligned} F_{\beta }({\varvec{r}}_{0}+\eta ({\varvec{r}}_{1}-{\varvec{r}}_{0})) \le (1-\eta ) F_{\beta }({\varvec{r}}_{0})+\eta F_{\beta }({\varvec{r}}_{1}). \end{aligned}$$

This is equivalent to \(\frac{d^{2}}{d\eta ^{2}} F_{\beta }({\varvec{r}}_{0}+\eta ({\varvec{r}}_{1}-{\varvec{r}}_{0})) \ge 0\), \(0\le \eta \le 1\) since \(F_{\beta }\) is twice differentiable. Note here that the convexity of \(F_{\beta }\) with respect to \(\varvec{r}\) is equivalent to the convexity with respect to \(q\) because \(F_{\beta }\) depends linearly on \(q\) through \(r_{i}=r(x_{i};q)\). Let \(r_{i}(\eta )=(1-\eta )r_{0i}+\eta r_{1i}\), and we have

$$\begin{aligned} \frac{d^{2} }{d \eta ^{2}} F_{\beta }({\varvec{r}}_{0}+\eta ({\varvec{r}}_{1}-{\varvec{r}}_{0}))&= \left[ (\beta +1)\sum _{i=1}^{n}r_{i}(\eta )^{-\beta -2}(r_{1i}-r_{0i})^{2} \sum _{l=1}^{n}r_{l}(\eta )^{-\beta } \right. \\&-\left. \beta \left\{ \sum _{i=1}^{n} r_{i}(\eta )^{-\beta -1}(r_{0i}-r_{1i})\right\} ^{2}\right] \frac{1}{\left( \sum _{j=1}^{n}r_{j}(\eta )^{-\beta }\right) ^{2}}\\&= \left[ (\beta +1) \left\{ \left( \sum _{i=1}^{n}a_{i}^{2}\right) \left( \sum _{i=1}^{n}b_{i}^{2}\right) -\left( \sum _{i=1}^{n} a_{i}b_{i}\right) ^{2} \right\} \right. \\&+\left. \left( \sum _{i=1}^{n} a_{i}b_{i}\right) ^{2} \right] \frac{1}{\left( \sum _{j=1}^{n}r_{j}(\eta )^{-\beta }\right) ^{2}} \ge 0 \end{aligned}$$

where \(a_{i} = r_{i}(\eta )^{-\frac{\beta }{2}-1}(r_{1i}-r_{0i})\), \(b_{i} = r_{i}(\eta )^{-\frac{\beta }{2}}\) and the fact \(\beta \ge -1\) and the Cauchy-Schwarz inequality are used. This shows that \(F_{\beta }({\varvec{r}})\) is convex.

Next, we show the directional derivative of \(F_{\beta }({\varvec{r}})\). The directional derivative from \({\varvec{r}}_{0}\) to \({\varvec{r}}_{1}\) is defined as follows:

$$\begin{aligned} F_{\beta , {\varvec{r}}_{0}}^{\prime }({\varvec{r}}_{1}) = \lim _{\eta \rightarrow 0} \frac{ F_{\beta }({\varvec{r}}_{0}+\eta ({\varvec{r}}_{1}-{\varvec{r}}_{0}))-F_{\beta }({\varvec{r}}_{0})}{\eta } = 1-\frac{ \sum _{i=1}^{n} r_{0i}^{-\beta -1} r_{1i}}{\sum _{j=1}^{n}r_{0j}^{-\beta }}. \end{aligned}$$
(5)

Note that Eq. (5) is valid for \(\beta =0\) as well.

If \({\varvec{r}}^{*}\) minimizes \(F_{\beta }({\varvec{r}})\), then \(F_{\beta , {\varvec{r}}^{*}}^{\prime }({\varvec{r}}_{1}) \ge 0\) must hold for any \({\varvec{r}}_{1}\). For \(\beta =0\), it has been proved that the unique minimizer \({\varvec{r}}\) of \(F_{\beta }\) lies on the boundary of the convex hull of the set \(\{{\varvec{p}}_{\theta } = (p(x_{1}|\theta ),\ldots , p(x_{n}|\theta )) | \theta \in \Omega \}\), where \(\Omega \) is the parameter space (Lindsay 1983, 1995). This result generalizes to the case \(\beta \ge -1\) because of the convexity of \(F_{\beta }\). By Caratheodory’s theorem, the optimal \({\varvec{r}}\) is therefore expressed as a convex combination, \(\sum _{l=1}^{k}\pi _{l}{\varvec{p}}_{\theta _{l}}\), with \(\pi _{l}\ge 0\), \(\sum _{l=1}^{k}\pi _{l}=1\) and \(k\le n\), indicating that the optimal mixing distribution is \(q(\theta )=\sum _{l=1}^{k}\pi _{l}\delta (\theta -\theta _{l})\), \(\theta _{l}\in \Omega \), a discrete distribution whose number of support points is no more than \(n\).

3.2 Optimization of mixing distribution

In this section, we derive an estimation algorithm for \(q(\theta )\) following Nowozin and Bakir (2008). The algorithm alternates between a subproblem that adds a new point to the support of \(q(\theta )\) and the learning of the resulting finite mixture model. The minimization of \(F_{\beta }\) over finite mixture models is implemented with a simple updating rule based on the expectation-maximization (EM) algorithm (Dempster et al. 1977; Barber 2012).

3.2.1 Learning procedure

If \({\varvec{r}}^{*}\) minimizes \(F_{\beta }({\varvec{r}})\), then \(F_{\beta , {\varvec{r}}^{*}}^{\prime }({\varvec{p}}_{\theta }) \ge 0\) for any \({\varvec{p}}_{\theta }\). Furthermore, \(F_{\beta , {\varvec{r}}^{*}}^{\prime }({\varvec{p}}_{\theta }) = 0\) holds for \(\theta =\theta _{l}\), where \(\{\theta _{l}\}_{l=1}^{k}\) is the set of support points of the optimal mixing distribution. This follows from the fact that if \(F_{\beta , {\varvec{r}}^{*}}^{\prime }({\varvec{p}}_{\theta _{l}}) > 0\) held for some \(\theta _{l}\), there would exist a \(\theta _{l'}\) such that \(F_{\beta , {\varvec{r}}^{*}}^{\prime }({\varvec{p}}_{\theta _{l'}}) < 0\),Footnote 3 and \(F_{\beta }({\varvec{r}})\) could be decreased by moving more weight \(\pi _{l'}\) onto \(\theta _{l'}\). Thus, the optimality condition for the mixing distribution \(q(\theta )\) is summarized as

$$\begin{aligned} \left\{ \begin{array}{ll} \mu (\theta )=1 & (\theta =\theta _{l},\ l=1,2,\ldots ,k)\\ \mu (\theta )\le 1 & (\text{otherwise}) \end{array} \right. \end{aligned}$$

where

$$\begin{aligned} \mu (\theta )&= \sum _{i=1}^{n}\alpha _{i}p(x_{i}|\theta ), \end{aligned}$$
(6)
$$\begin{aligned} \alpha _{i}&= \frac{r_{i}^{-\beta -1}}{\sum _{j=1}^{n}r_{j}^{-\beta }}, \end{aligned}$$
(7)

and \(r_{i}=\int p(x_{i}|\theta ) q(\theta )d\theta \). This yields Algorithm 1 for the optimization of the mixing distribution \(q(\theta )\), which sequentially adds points \(\theta \) satisfying \(\mu (\theta )>1\) until the maximum of \(\mu (\theta )\) approaches \(1\) (Nowozin and Bakir 2008). In each iteration, the maximum of \(\mu (\theta )\) is calculated (Step 3). If the maximum is larger than \(1\), one point is added to the set of support points. Then the locations and weights of the support points are optimized (Step 4), that is, the parameters of a finite mixture model are estimated. We derive an EM-like algorithm for this step in Sect. 3.2.2. In the case of the KVQ (\(\beta \rightarrow \infty \)), this step was originally formulated and solved by linear programming (Tipping and Schölkopf 2001; Nowozin and Bakir 2008).

Algorithm 1 Optimization of the mixing distribution \(q(\theta )\) (pseudocode figure)

The above algorithm updates \(\{\theta _{l}\}\) as well as \(\{\pi _{l}\}\) in Step 4. This is an extension of Nowozin and Bakir (2008), where only \(\{\pi _{l}\}\) is updated. Algorithm 1 requires a constant \(\epsilon \), and its result depends strongly on this constant, especially when only \(\{\pi _{l}\}\) is updated. In the numerical experiments in the next section, we set \(\epsilon = 0.01\). From the assertion in Sect. 3.1, the learning procedure is guaranteed to stop before the support size of \(q(\theta )\) exceeds \(n\), provided that it is started with an empty support set and both \(\{\theta _{l}\}\) and \(\{\pi _{l}\}\) are updated. A sketch of the overall loop is given below.
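The following Python sketch (ours, with hypothetical helper names) outlines the loop structure of Algorithm 1; `maximize_mu` stands for Step 3 (e.g. the pre-image iteration of Sect. 3.3) and `optimize_finite_mixture` for Step 4 (Sect. 3.2.2).

```python
import numpy as np

def estimate_mixing_distribution(x, beta, maximize_mu, optimize_finite_mixture,
                                 eps=0.01, max_iter=100):
    """Sketch of Algorithm 1: sequential generation of support points of q(theta).

    maximize_mu(x, support, weights, beta) -> (theta_new, mu_max)
        maximizer of mu(theta) in Eq. (6) and its value; assumed to use uniform
        coefficients alpha_i = 1/n while the support is still empty.
    optimize_finite_mixture(x, support, weights, beta) -> (support, weights)
        optimization of the finite mixture (Sect. 3.2.2).
    """
    support = np.empty((0, x.shape[1]))     # support points theta_l
    weights = np.empty(0)                   # mixing proportions pi_l
    for _ in range(max_iter):
        # Step 3: find the theta maximizing mu(theta)
        theta_new, mu_max = maximize_mu(x, support, weights, beta)
        if mu_max <= 1.0 + eps:             # optimality condition mu(theta) <= 1
            break
        # add the new point; its initial weight is arbitrary since Step 4 re-optimizes
        if weights.size == 0:
            support, weights = theta_new[None, :], np.array([1.0])
        else:
            support = np.vstack([support, theta_new])
            weights = np.append(0.9 * weights, 0.1)
        # Step 4: re-optimize locations {theta_l} and weights {pi_l}
        support, weights = optimize_finite_mixture(x, support, weights, beta)
    return support, weights
```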

3.2.2 EM updates for finite mixtures

In this subsection, we discuss the learning algorithm for a finite mixture model that is required in Step 4 of Algorithm 1. We derive separate algorithms for the cases \(-1\le \beta <0\) and \(\beta \ge 0\). Let us start with \(-1\le \beta < 0\). First, we define the following function \(F_{\beta }({\varvec{\theta }},{\varvec{\pi }},{\varvec{w}})\) for \(\beta \ne 0\):Footnote 4

$$\begin{aligned} F_{\beta }({\varvec{\theta }},{\varvec{\pi }},{\varvec{w}}) = - \sum _{i=1}^{n}w_{i}\log r(x_{i};q)-\frac{1}{\beta }\sum _{i=1}^{n}w_{i}\log w_{i}-\frac{1}{\beta }\log n \end{aligned}$$
(8)

where \({\varvec{w}}\in \Delta = \{{\varvec{w}}=(w_{1}, w_{2}, \ldots , w_{n}) | w_{i}\ge 0, \sum _{i=1}^{n} w_{i}=1\}\), \({\varvec{\theta }}=\{\theta _{l}\}_{l=1}^{k}\) and \({\varvec{\pi }}=\{\pi _{l}\}_{l=1}^{k}\). Note that \(F_{\beta }({\varvec{\theta }},{\varvec{\pi }},{\varvec{w}})\) is convex for \({\varvec{w}}\) when \(-1\le \beta <0\) and

$$\begin{aligned} \min _{{\varvec{w}}\in \Delta }F_{\beta }({\varvec{\theta }},{\varvec{\pi }},{\varvec{w}}) = F_{\beta }({\varvec{r}}) \end{aligned}$$

where the minimum is attained when

$$\begin{aligned} w_{i} = \frac{r_{i}^{-\beta }}{\sum _{j=1}^{n}r_{j}^{-\beta }}. \end{aligned}$$
(9)

Since \(F_{\beta }({\varvec{\theta }},{\varvec{\pi }},{\varvec{w}})\) is convex with respect to \({\varvec{w}}\) when \(-1\le \beta < 0\), we can derive the following double-minimization algorithm:

$$\begin{aligned} w_{i}^{(t)}=\frac{r_{i}^{(t)-\beta }}{\sum _{j=1}^{n} r_{j}^{(t)-\beta }}, \;\;\;r_{i}^{(t)} = \sum _{l=1}^{k}\pi _{l}^{(t)}p(x_{i}|\theta _{l}^{(t)}), \end{aligned}$$
$$\begin{aligned} \pi _{l}^{(t+1)}&= \sum _{i=1}^{n}w_{i}^{(t)} \nu _{il}^{(t)},\end{aligned}$$
(10)
$$\begin{aligned} \theta _{l}^{(t+1)}&= \mathop {{\mathop {\mathrm{argmax}}\limits }}\limits _{\theta }\sum _{i=1}^{n}w_{i}^{(t)}\nu _{il}^{(t)} \log p(x_{i}|\theta ) \end{aligned}$$
(11)

where

$$\begin{aligned} \nu _{il}^{(t)}=\frac{\pi _{l}^{(t)}p(x_{i}|\theta _{l}^{(t)})}{\sum _{m=1}^{k}\pi _{m}^{(t)}p(x_{i}|\theta _{m}^{(t)})} \end{aligned}$$

is the posterior probability that the data point \(x_{i}\) is assigned to the cluster center \(\theta _{l}\). At a stationary point, either \(\mu (\theta _{l}) <1\) and \(\pi _{l}=0\), or \(\mu (\theta _{l})=1\) and \(\pi _{l}>0\), holds. This is an EM-like algorithm, as can be seen from the fact that, for fixed \({\varvec{w}}\), Eq. (8) reduces to a weighted negative log-likelihood up to terms that do not depend on \(({\varvec{\theta }},{\varvec{\pi }})\).

When \(p(x|\theta )\) is a member of the exponential family with the sufficient statistic \(T(x)\), that is, \(p(x|\theta )=h(x)\exp \{\theta ^{T}T(x) - G(\theta )\}\), Eq. (11) is simplified to

$$\begin{aligned} \theta _{l}^{(t+1)}=\left( \nabla G\right) ^{-1}\left( \sum _{i=1}^{n}w_{i}^{(t)}\nu _{il}^{(t)} T(x_{i})\right) \end{aligned}$$

where \(\left( \nabla G\right) ^{-1}\) is the link function to the natural parameter space (Banerjee et al. 2005).

We can prove that the above update monotonically decreases the objective \(F_{\beta }\) for \(\beta \le 0.\) Footnote 5
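A minimal Python sketch of this double minimization for the isotropic Gaussian component of Eq. (12), where the M-step of Eq. (11) reduces to a weighted mean as in the exponential-family form above, is given below (our own illustration; intended for \(-1\le \beta \le 0\) with the kernel width \(\gamma \) fixed).

```python
import numpy as np

def em_like_updates(x, support, weights, beta, gamma=0.5, n_iter=100):
    """Double-minimization updates of Eqs. (9)-(11) for isotropic Gaussian components.

    x: (n, d) data; support: (k, d) means theta_l; weights: (k,) proportions pi_l.
    beta = 0 gives the usual EM algorithm for the (weighted) MLE.
    """
    n, d = x.shape
    for _ in range(n_iter):
        sqdist = ((x[:, None, :] - support[None, :, :]) ** 2).sum(-1)   # (n, k)
        p = (gamma / np.pi) ** (d / 2) * np.exp(-gamma * sqdist)        # p(x_i|theta_l)
        r = p @ weights                                                 # r_i
        w = r ** (-beta)                                                # Eq. (9), unnormalized
        w = w / w.sum()
        nu = (p * weights) / r[:, None]                                 # posteriors nu_il
        resp = w[:, None] * nu                                          # w_i nu_il
        weights = resp.sum(0)                                           # Eq. (10)
        support = (resp.T @ x) / (resp.sum(0)[:, None] + 1e-12)         # Eq. (11): weighted mean
    return support, weights
```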

Let us move to the case \(\beta >0\). When \(\beta >0\), \(F_{\beta }({\varvec{\theta }},{\varvec{\pi }},{\varvec{w}})\) is concave with respect to \({\varvec{w}}\) and the previous EM-like algorithm does not work.Footnote 6 To directly minimize \(F_{\beta }(q)\) with respect to \(\{{\varvec{\theta }}, {\varvec{\pi }}\}\), we switch to the following updating rules:

$$\begin{aligned}&\pi _{l}^{(t+1)} \propto \left\{ \sum _{i=1}^{n} (\nu _{il}^{(t)})^{1+\beta }p(x_{i}|\theta _{l})^{-\beta }\right\} ^{\frac{1}{1+\beta }},\\&\theta _{l}^{(t+1)} = \theta _{l}^{(t)} +H(\theta _{l}^{(t)})^{-1} \sum _{i=1}^{n}(\nu _{il}^{(t)})^{1+\beta }p(x_{i}|\theta _{l}^{(t)})^{-\beta } \frac{\partial \log p(x_{i} | \theta _{l}^{(t)})}{\partial \theta } \end{aligned}$$

where

$$\begin{aligned} H(\theta _{l}^{(t)})&= \sum _{i=1}^{n} (\nu _{il}^{(t)})^{1+\beta } p(x_{i} | \theta _{l}^{(t)})^{-\beta } \left\{ -\frac{\partial ^{2} \log p(x_{i} | \theta _{l}^{(t)})}{\partial \theta \partial \theta ^{T}}\right. \\&+\left. \beta \left( \frac{\partial \log p(x_{i} | \theta _{l}^{(t)})}{\partial \theta }\right) \left( \frac{\partial \log p(x_{i} | \theta _{l}^{(t)})}{\partial \theta }\right) ^{T} \right\} \end{aligned}$$

is the Hessian matrix. These updates monotonically decrease \(F_{\beta }(q_{k})\) for \(\beta >0\) and are derived as the Newton-Raphson step (Boyd and Vandenberghe 2004, Sect. 9.5) to decrease the right hand side of the inequality

$$\begin{aligned} \sum _{i=1}^{n} \left( \sum _{l=1}^{k} \pi _{l} p(x_{i} | \theta _{l})\right) ^{-\beta } \le \sum _{i=1}^{n} \sum _{l=1}^{k} \nu _{il}^{1+\beta } \left( \pi _{l} p(x_{i} | \theta _{l})\right) ^{-\beta }, \end{aligned}$$

which is Jensen’s inequality for the convex function \(x^{-\beta }\) with \(\beta >0\).
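For the Gaussian location component of Eq. (12), where \(\partial \log p/\partial \theta = 2\gamma (x-\theta )\) and \(\partial ^{2}\log p/\partial \theta \partial \theta ^{T} = -2\gamma I\), one iteration of the above update might be sketched as follows (our own illustration, not code from the paper).

```python
import numpy as np

def newton_step_beta_positive(x, support, weights, beta, gamma=0.5):
    """One update of {pi_l, theta_l} for beta > 0 with isotropic Gaussian components."""
    n, d = x.shape
    sqdist = ((x[:, None, :] - support[None, :, :]) ** 2).sum(-1)
    p = (gamma / np.pi) ** (d / 2) * np.exp(-gamma * sqdist)     # p(x_i|theta_l)
    r = p @ weights
    nu = (p * weights) / r[:, None]                              # posteriors nu_il
    c = nu ** (1 + beta) * p ** (-beta)                          # coefficients nu^(1+beta) p^(-beta)
    new_weights = c.sum(0) ** (1.0 / (1 + beta))                 # pi update, then normalize
    new_weights /= new_weights.sum()
    new_support = support.copy()
    for l in range(support.shape[0]):
        diff = x - support[l]                                    # (n, d)
        grad = 2 * gamma * (c[:, l, None] * diff).sum(0)         # sum_i c_il dlog p/dtheta
        hess = 2 * gamma * c[:, l].sum() * np.eye(d) \
            + beta * (2 * gamma) ** 2 * (c[:, l, None, None]
                                         * diff[:, :, None] * diff[:, None, :]).sum(0)
        new_support[l] = support[l] + np.linalg.solve(hess, grad)   # Newton-Raphson step
    return new_support, new_weights
```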

3.3 Pre-imaging for generation of support points

We discuss the relationship between the proposed algorithm and kernel-based learning algorithms. In this subsection, we focus on the case in which the component \(p(x|\theta )\) is a location family and is represented as \(p(x|\theta )\propto f(x-\theta )\) for some function \(f\), such as the Gaussian density in Eq. (12).

As mentioned in Nowozin and Bakir (2008), the maximization of \(\mu (\theta )\) in Eq. (6) reduces to the pre-image problem (Schölkopf et al. 1999) if the likelihood function \(p(x|\theta )\propto f(x-\theta )\) is, up to a multiplicative constant, given by the kernel function \(K(x,\theta )\) associated with a reproducing kernel Hilbert space. This is because, for the feature map \(\phi (x)\) satisfying \(f(x-\theta )=K(x,\theta )=\phi (x)^{T}\phi (\theta )\), the maximization of \(\mu (\theta )\) in Eq. (6) is equivalent to the minimization of the squared norm in the Hilbert space

$$\begin{aligned} \left\| \phi (\theta )-\sum _{i=1}^{n}\alpha _{i}\phi (x_{i})\right\| ^{2}&= K(\theta , \theta ) -2\sum _{i=1}^{n}\alpha _{i}K(x_{i}, \theta )+\sum _{i, j} \alpha _{i}\alpha _{j}K(x_{i}, x_{j}) \end{aligned}$$

when \(K(\theta ,\theta )\) is constant. Note here that the coefficients \(\{\alpha _{i}\}\) depend on \(\beta \) as in Eq. (7). More specifically, the coefficient \(\alpha _{i}\) is identified by \(w_{i}\) in Eq. (9) and \(r_{i}=\int K(x_{i},\theta )q(\theta )d\theta \) from Eq. (7) as

$$\begin{aligned} \alpha _{i}=\frac{w_{i}}{r_{i}}. \end{aligned}$$

The reciprocal dependence on \(r_{i}\) means that maximizing Eq. (6) yields a new support point in a region where the finite mixture constructed so far has low density. Furthermore, if \(\beta \ne 0\), \(w_{i}\) weights each sample point according to Eq. (9).
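For the Gaussian kernel \(K(x,\theta )=\exp (-\gamma ||x-\theta ||^{2})\), for which \(K(\theta ,\theta )\) is constant, setting the gradient of \(\mu (\theta )\) to zero suggests the standard fixed-point pre-image iteration; a sketch (ours, with the initialization and stopping rule chosen for illustration) is given below. Since the iteration finds a local maximizer, restarts from several initial points (e.g. the sample points) are typically needed in practice.

```python
import numpy as np

def maximize_mu_preimage(x, alpha, gamma=0.5, theta0=None, n_iter=200, tol=1e-8):
    """Fixed-point iteration for a local maximizer of mu(theta) = sum_i alpha_i p(x_i|theta)
    with the isotropic Gaussian component of Eq. (12)."""
    theta = (x[np.argmax(alpha)] if theta0 is None else theta0).copy()
    for _ in range(n_iter):
        k = alpha * np.exp(-gamma * ((x - theta) ** 2).sum(-1))   # alpha_i K(x_i, theta)
        theta_new = (k[:, None] * x).sum(0) / k.sum()
        if np.linalg.norm(theta_new - theta) < tol:
            theta = theta_new
            break
        theta = theta_new
    d = x.shape[1]
    mu = (gamma / np.pi) ** (d / 2) * (alpha * np.exp(-gamma * ((x - theta) ** 2).sum(-1))).sum()
    return theta, mu
```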

4 Numerical experiments

In this section, we investigate the properties of the estimation method by a numerical simulation with \(2\)-dimensional Gaussian mixtures where

$$\begin{aligned} p(x|\theta )= \frac{\gamma }{\pi }\exp \left( -\gamma ||x-\theta ||^{2}\right) . \end{aligned}$$
(12)

We generated synthetic data by the following distribution:

$$\begin{aligned} p^{*}(x)= \frac{1}{2}N(x|\theta _{1}^{*}, I_{2}) +\frac{1}{2}N(x|\theta _{2}^{*},I_{2}) \end{aligned}$$
(13)

where \(\theta _{1}^{*}=(0, 0)^{T}\), \(\theta _{2}^{*}=(4, 4)^{T}\) and \(N(x|\theta , \sigma ^{2}I_{2}) = \frac{1}{2\pi \sigma ^{2}}\exp \left( -\frac{||x-\theta ||^{2}}{2\sigma ^{2}}\right) \) is the Gaussian density function. Figure 1 shows \(p^{*}(x)\) and an example of a data set (\(n=50\)).

Fig. 1 The density function of \(p^{*}(x)\) (3D plot) and an example of a data set (crosses) (\(n=50\))
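For reproducibility, the synthetic data of Eq. (13) can be drawn as in the short sketch below (the seed and function name are our choices).

```python
import numpy as np

def sample_true_mixture(n=50, seed=0):
    """Draw n samples from p*(x) in Eq. (13): an equal mixture of N(theta1*, I2) and N(theta2*, I2)."""
    rng = np.random.default_rng(seed)
    means = np.array([[0.0, 0.0], [4.0, 4.0]])
    labels = rng.integers(0, 2, size=n)            # each component chosen with probability 1/2
    return means[labels] + rng.standard_normal((n, 2))
```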

We applied Algorithm 1 to the synthetic data drawn from \(p^{*}(x)\) in Eq. (13) and estimated \(q(\theta )\). We also applied the original version of the algorithm in Nowozin and Bakir (2008), where each \(\theta _{l}\) is not updated in Step 4 of Algorithm 1 but fixed once generated in Step 3, and only \(\{\pi _{l}\}\) are updated. Results for this case will be indicated as “means fixed.”

Let \(\hat{q}(\theta )\) be an estimated mixing distribution. We calculated the negative log-likelihood as the training error

$$\begin{aligned} -\frac{1}{n}\sum _{i=1}^{n}\log r(x_{i};\hat{q}) = -\frac{1}{n}\sum _{i=1}^{n}\log \int p(x_{i}|\theta )\hat{q}(\theta )d\theta , \end{aligned}$$
(14)

the prediction error for the test data \(\{\tilde{x}_{i}\}_{i=1}^{\tilde{n}}\) drawn from the true distribution in Eq. (13)

$$\begin{aligned} -\frac{1}{\tilde{n}}\sum _{i=1}^{\tilde{n}}\log r(\tilde{x}_{i};\hat{q})=-\frac{1}{\tilde{n}}\sum _{i=1}^{\tilde{n}}\log \int p(\tilde{x}_{i}|\theta )\hat{q}(\theta )d\theta , \end{aligned}$$
(15)

and the maximum error for the training data \(\{x_{i}\}_{i=1}^{n}\)

$$\begin{aligned} \max _{1\le i \le n}\left[ -\log \int p(x_{i}|\theta )\hat{q}(\theta )d\theta \right] \end{aligned}$$
(16)

which corresponds to the objective functional of the KVQ. The number of training data is \(n=50\) and that of test data is \(\tilde{n}=200{,}000\).

Furthermore, we investigated the number of estimated components. Since this number strongly depends on \(\epsilon \), we also applied hard assignments to cluster centers for each data point and counted the number of hard clusters. Here, each point \(x_{i}\) is assigned to the cluster center \(\hat{\theta }_{l}\) that maximizes the posterior probability

$$\begin{aligned} p(\hat{\theta }_{l}|x_{i})=\nu _{il}=\frac{\hat{\pi }_{l}p(x_{i}|\hat{\theta }_{l})}{\sum _{m=1}^{\hat{k}} \hat{\pi }_{m}p(x_{i}|\hat{\theta }_{m})} \end{aligned}$$
(17)

where we have assumed \(\hat{q}(\theta )=\sum _{l=1}^{\hat{k}}\hat{\pi }_{l} \delta (\theta - \hat{\theta }_{l})\). The number of hard clusters is usually smaller than the number of mixture components; that is, some components are never selected by the hard assignment. A brief sketch of this hard assignment is shown below.
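The hard assignment of Eq. (17) and the resulting cluster count can be computed as in the sketch below (our own illustration for the isotropic Gaussian component).

```python
import numpy as np

def count_hard_clusters(x, support, weights, gamma=0.5):
    """Assign each x_i to the component maximizing the posterior of Eq. (17)
    and count how many components are actually selected."""
    d = x.shape[1]
    sqdist = ((x[:, None, :] - support[None, :, :]) ** 2).sum(-1)
    p = (gamma / np.pi) ** (d / 2) * np.exp(-gamma * sqdist)
    posterior = p * weights                         # proportional to Eq. (17)
    labels = posterior.argmax(axis=1)
    return np.unique(labels).size, labels
```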

This posterior probability and the number of components will be used in connection with the rate-distortion function in Sect. 5.2. All results were averaged over \(100\) trials with different data sets generated from Eq. (13).

4.1 Prediction with known kernel width

First, we assumed that the kernel width \(\gamma \) in Eq. (12) was known and \(p(x|\theta )\) was set to \(p(x|\theta ) = \frac{1}{2\pi } \exp \left( -\frac{||x-\theta ||^{2}}{2}\right) \). The distribution \(p^{*}(x)\) is realized in this case by the mixing distribution \(q(\theta )=\frac{1}{2}\delta (\theta - \theta _{1}^{*}) + \frac{1}{2} \delta (\theta - \theta _{2}^{*})\). An example of the estimated mixture model for \(\beta = -0.2\) and \(\gamma =0.5\) is demonstrated in Fig. 2a.

Figure 2b, c, respectively, show the training error (14) and the prediction error (15).

Fig. 2 a Example of the estimated mixture for \(\beta =-0.2\) and \(\gamma =0.5\). The corresponding mixing distribution is illustrated in the x–y plane where the location and the height of the solid lines are, respectively, the mean parameter \(\hat{\theta }_{l}\) and the weight \(\hat{\pi }_{l}\) of each component. b–d Training error, prediction error and maximum error against \(\beta \), respectively. The error bars show 95 % confidence intervals

We see that the average training error is minimized at \(\beta = 0\), as expected, while the minimum of the average prediction error is attained around \(\beta = -0.2\). Figure 2d shows the average of the maximum errors of Eq. (16). As expected, it decreases monotonically with \(\beta \), which is consistent with the fact that the estimation approaches the KVQ as \(\beta \rightarrow \infty \).

In Fig. 3, we show the average number of estimated components remaining after the elimination of components with sufficiently small mixing proportions (less than \(\frac{1}{n^{2}}\)) and the average number of hard clusters.

Fig. 3 Number of components (crosses) and number of hard clusters (asterisks) against \(\beta \)

Both the number of components \(\hat{k}\) and the number of hard clusters increase as \(\beta \) becomes larger. The discussion in Sect. 2 suggests that as \(\beta \) grows, more components are estimated so as to increase the entropy of the mixture \(r(x;q)\). This regularization reduces the average prediction error when \(\beta \) takes a slightly negative value, as observed in Fig. 2c. In Sect. 5.1, we will discuss the effective range of \(\beta \) that reduces the generalization error.

4.2 Mismatched kernel width

Next, we assumed \(\gamma \ne 0.5\), that is, the component variance is mismatched.

When \(\gamma <0.5\), the true distribution in Eq. (13) cannot be realized with the model in Eq. (1). This induces a larger objective \(F_{\beta }\) and a larger training error, of order \(O(1)\). Figure 4a–c shows examples of estimated mixtures (top row) and the average prediction error as a function of \(\beta \) (bottom row) for \(\gamma =0.05\), \(\gamma =0.2\) and \(\gamma =0.4\), respectively. We see that the prediction error is much larger for \(\gamma =0.05\) and \(\gamma =0.2\) than for \(\gamma =0.4\).

When \(\gamma \ge 0.5\), the distribution \(p^{*}(x)\) is realizable with the mixture distribution in Eq. (1) using the mixing distribution \(q(\theta )=\frac{1}{2} N(\theta |\theta _{1}^{*}, (1-1/(2\gamma ))I_{2}) + \frac{1}{2} N(\theta |\theta _{2}^{*}, (1-1/(2\gamma ))I_{2})\). Figure 4d–f shows examples of estimated mixtures (top row) and the average prediction error as a function of \(\beta \) (bottom row) for \(\gamma =0.6\), \(\gamma =1.0\) and \(\gamma =2.0\), respectively. For \(\gamma =1.0\) and \(\gamma =2.0\), the minimum is achieved when \(\beta >0\). This is expected from the fact that infinitely many components are required for \(r(x;q)\) to be identical to the true distribution.

Fig. 4 Examples of estimated mixtures (top row) and prediction errors as a function of \(\beta \) (bottom row) for a \(\gamma =0.05\) (\(\hat{\beta }=0.3\)), b \(\gamma =0.2\) (\(\hat{\beta }=0.4\)), c \(\gamma =0.4\) (\(\hat{\beta }=-0.2\)), d \(\gamma =0.6\) (\(\hat{\beta }=-0.2\)), e \(\gamma =1.0\) (\(\hat{\beta }=0.1\)) and f \(\gamma =2.0\) (\(\hat{\beta }=0.5\)). In each column, the estimated mixture and the mixing distribution are displayed in the same way as in Fig. 2a, and the prediction error is displayed as in Fig. 2c. The value of \(\beta \), indicated in parentheses as \(\hat{\beta }\), was selected so as to minimize the average prediction error when the means are updated

The results in Fig. 4c (\(\gamma =0.4\)) and Fig. 4d (\(\gamma =0.6\)) are similar to those presented in Sect. 4.1 because \(\gamma \) is close to the true value, \(0.5\). The prediction error increases when the mismatch of \(\gamma \) is large. These results imply that cross-validation could be used to select \(\beta \) and the kernel width \(\gamma \), but a practical procedure needs further exploration. The next section is devoted to the selection of these parameters.

5 Selection of parameters

This section first discusses the optimal parameter \(\beta \) that minimizes the average generalization error. The relationship between the mixture estimation and the rate-distortion problem is then described.

5.1 Effective range of \(\beta \)

Let \(\hat{r}_{\beta }(x)\) be the estimated mixture model for \(\beta \). We discuss the \(\beta \) which minimizes the average generalization error

$$\begin{aligned} E\left[ \int p^{*}(x) \log \frac{p^{*}(x)}{\hat{r}_{\beta }(x)}dx\right] \end{aligned}$$

where \(E\) denotes the expectation with respect to the distribution of the data sets, \(\prod _{i=1}^{n}p^{*}(x_{i})\). The optimal \(\beta \) is related to the sample size \(n\). As described in Sect. 2, \(\hat{r}_{\beta }(x)\) approaches the escort distribution \(\widetilde{p}^{*}(x)\propto p^{*}(x)^{\frac{1}{1+\beta }}\). This introduces a bias into the estimation. We use the Kullback-Leibler divergence \(KL(p^{*}, \widetilde{p}^{*})=\int p^{*}(x) \log \frac{p^{*}(x)}{\widetilde{p}^{*}(x)}dx\) as an indicator of the bias. It follows from the Taylor expansion of the divergence around \(\beta = 0\) that

$$\begin{aligned} KL(p^{*}, \widetilde{p}^{*}) = \frac{\text{ Var }_{p^{*}}[\log p^{*}(x)]}{2}\beta ^{2} + O(\beta ^{3}) \end{aligned}$$
(18)

where \(\text{ Var }_{p^{*}}[\log p^{*}(x)]=\int p^{*}(x)(\log p^{*}(x))^{2}dx -\left( \int p^{*}(x)\log p^{*}(x)dx\right) ^{2}\).
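A sketch of the expansion behind Eq. (18): writing \(\varepsilon = \frac{\beta }{1+\beta }\), so that \(\widetilde{p}^{*}(x)\propto p^{*}(x)e^{-\varepsilon \log p^{*}(x)}\), we have

$$\begin{aligned} KL(p^{*}, \widetilde{p}^{*})&= \varepsilon \int p^{*}(x)\log p^{*}(x)dx + \log \int p^{*}(x) e^{-\varepsilon \log p^{*}(x)}dx\\&= \varepsilon E_{p^{*}}[\log p^{*}] - \varepsilon E_{p^{*}}[\log p^{*}] + \frac{\varepsilon ^{2}}{2}\text{ Var }_{p^{*}}[\log p^{*}(x)] + O(\varepsilon ^{3})\\&= \frac{\text{ Var }_{p^{*}}[\log p^{*}(x)]}{2}\beta ^{2} + O(\beta ^{3}), \end{aligned}$$

where the last line uses \(\varepsilon = \beta + O(\beta ^{2})\).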

We focus on the Gaussian mixture used in Sect. 4 and consider the condition that the influence of the bias does not exceed the reduction in the variance of the estimation. It is conjectured that the log-likelihood ratio of finite mixture models is of the order of \(\log \log n\) when the model has redundant components (Hartigan 1985). This implies that \(\sum _{i=1}^{n}\log \frac{\hat{r}_{0}(x_{i})}{p^{*}(x_{i})} = O_{p}(\log \log n)\). We further assume symmetry between the training error, \(E\left[ \frac{1}{n}\sum _{i=1}^{n}\log \frac{p^{*}(x_{i})}{\hat{r}_{0}(x_{i})}\right] \), and the generalization error, \(E\left[ KL(p^{*}, \hat{r}_{0})\right] \). The symmetry means that the training and generalization errors converge to zero symmetrically from below and from above as \(n\rightarrow \infty \) (Amari et al. 1992; Watanabe 2005). Here, we assume the symmetry holds in the slightly weaker sense that both errors have the same order with respect to \(n\). More specifically,

$$\begin{aligned} E\left[ KL(p^{*}, \hat{r}_{0})\right] =O\left( \frac{\log \log n}{n}\right) . \end{aligned}$$
(19)

From Eq. (18) and the above result, the order of the optimal \(\beta \) that minimizes the generalization error is

$$\begin{aligned} \hat{\beta } = O\left( \sqrt{\frac{\log \log n}{n}}\right) \end{aligned}$$

since otherwise the bias in Eq. (18) has a larger order than that of the variance around \(\beta = 0\).

Figure 5a, b show the average generalization error, \(\frac{1}{\tilde{n}}\sum _{i=1}^{\tilde{n}}\log \frac{p^{*}(\tilde{x}_{i})}{r(\tilde{x}_{i};\hat{q})}\), against \(\beta \) for different sample sizes \(n\) when \(\gamma =0.5\) and \(\gamma =2.0\), respectively.Footnote 7 As can be seen from the results for \(\beta =0\) in both figures, the assumption in Eq. (19) is reasonable. We also see that the generalization error is minimized by a \(\beta \) of smaller absolute value as \(n\) increases. This tendency becomes more apparent for \(\gamma =2.0\), that is, when the variance is underestimated.

Fig. 5 Generalization errors against \(\beta \) for different \(n\) when a \(\gamma =0.5\) and b \(\gamma =2.0\). Minima are marked with circles

5.2 Connection to the rate-distortion problem

The convex clustering in Lashkari and Golland (2007) corresponds to a special case of our proposal, that is, \(\beta =0\) (MLE), and the support points of \(q(\theta )\) are fixed to the training data set \(\{x_{1},\ldots , x_{n}\}\). For this restricted version of the problem, it is pointed out that the kernel width, \(\gamma \) in Eq. (12), has a relationship to the rate-distortion (RD) function of source \(\hat{p}(x)\) (the empirical distribution) and distortion associated with \(p(x|\theta )\) (for example, the squared distortion for Gaussian) (Lashkari and Golland 2007). In this section, we investigate the relationship of our proposal with the RD theory in a general case where the support points of \(q(\theta )\) are not restricted to the sample points and \(\beta \ne 0\). The relationship of mixture modeling with the RD theory is also partly discussed in Banerjee et al. (2005) for the finite mixture of exponential family distributions under the constraint that the cardinality of the support of \(q(\theta )\) is fixed.

Let us start with a short summary of the RD theory. The source random variable \(X\) with density \(p^{*}(x)\) is reproduced as \(\Theta \) through the conditional distribution \(q(\theta | x)\), which is chosen to minimize the rate, i.e., the mutual information \(I(X;\Theta )\), under a constraint on the average distortion. This is formulated by the Lagrange multiplier method as follows and is reformulated as the optimization problem in Eq. (21) (Berger 1971):

$$\begin{aligned}&\min _{q(\theta |x)} \left[ I(X;\Theta )-sE[d(X,\Theta )]\right] \nonumber \\&= \min _{q(\theta |x)} \left[ \int p^{*}(x)q(\theta |x) \log \frac{q(\theta |x)}{ \int p^{*}(x)q(\theta |x) dx \exp (sd(x,\theta ))}dxd\theta \right] \nonumber \\&= \min _{q(\theta |x), q(\theta )} \left[ \int p^{*}(x)q(\theta |x) \log \frac{q(\theta |x)}{q(\theta ) \exp (sd(x,\theta ))}dxd\theta \right] \end{aligned}$$
(20)
$$\begin{aligned}&= \min _{q(\theta )} \left[ -\int p^{*}(x) \{\log \int q(\theta ) \exp (sd(x,\theta ))d\theta \}dx\right] \end{aligned}$$
(21)

where Eqs. (20) and (21) follow from the fact that the minimizations over \(q(\theta )\) and \(q(\theta |x)\) are attained, respectively, by

$$\begin{aligned} q(\theta ) = \int p^{*}(x)q(\theta |x) dx \end{aligned}$$

and

$$\begin{aligned} q(\theta |x) = \frac{q(\theta ) \exp (sd(x,\theta ))}{\int q(\theta ) \exp (sd(x,\theta ))d\theta }. \end{aligned}$$

Here \(d(x,\theta )\) is the distortion measure and the negative real variable \(s\) is a Lagrange multiplier. The multiplier \(s\) gives the slope of a tangent to the RD curve and hence is in one-to-one correspondence with a point on the RD curve. The problem in Eq. (21) reduces to the MLE (\(F_{\beta }(q)\) with \(\beta =0\)) with \(p(x|\theta ) \propto \exp (sd(x,\theta ))\) if the source \(p^{*}(x)\) is replaced with the empirical distribution. In the case of the Gaussian mixture with \(d(x,\theta ) = ||x-\theta ||^{2}\), \(s\) specifies the kernel width through \(\gamma =-s\).
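As a point of reference (not part of the proposed method), the Lagrangian in Eqs. (20)-(21) for a discrete source and a fixed finite reproduction grid can be computed by the classical Blahut-Arimoto iteration, sketched below in Python for the squared distortion and \(s<0\). With the empirical source and the data points themselves as the reproduction grid, this setting is essentially the restricted (\(\beta =0\)) case of Lashkari and Golland (2007) mentioned above.

```python
import numpy as np

def blahut_arimoto(px, x, theta_grid, s, n_iter=500):
    """Blahut-Arimoto iteration for min_q [I(X;Theta) - s E[d(X,Theta)]] with squared distortion.

    px: (n,) source probabilities; x: (n, d) source points;
    theta_grid: (m, d) fixed reproduction points; s < 0 the Lagrange multiplier.
    Returns (rate in nats, average distortion)."""
    d2 = ((x[:, None, :] - theta_grid[None, :, :]) ** 2).sum(-1)   # d(x_i, theta_j)
    q_theta = np.full(theta_grid.shape[0], 1.0 / theta_grid.shape[0])
    for _ in range(n_iter):
        q_cond = q_theta * np.exp(s * d2)            # q(theta|x) propto q(theta) exp(s d(x,theta))
        q_cond /= q_cond.sum(1, keepdims=True)
        q_theta = px @ q_cond                        # q(theta) = sum_x p(x) q(theta|x)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(q_cond > 0, q_cond / q_theta, 1.0)
    rate = np.sum(px[:, None] * q_cond * np.log(ratio))
    distortion = np.sum(px[:, None] * q_cond * d2)
    return rate, distortion
```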

For a general \(\beta \), the expression (8) and the optimal reconstruction distribution \(\hat{q}(\theta ) =\sum _{l=1}^{\hat{k}} \hat{\pi }_{l}\delta (\theta -\hat{\theta }_{l})\) suggest an RD problem for the source \(\sum _{i=1}^{n}w_{i}\delta (x-x_{i})\), with the rate

$$\begin{aligned} R = \sum _{i=1}^{n}\sum _{l=1}^{\hat{k}} w_{i}\nu _{il}\log \frac{\nu _{il}}{\sum _{j=1}^{n}w_{j}\nu _{jl}} \end{aligned}$$

and the average distortion

$$\begin{aligned} D = \sum _{i=1}^{n}\sum _{l=1}^{\hat{k}} w_{i}\nu _{il}d(x_{i},\hat{\theta }_{l}) \end{aligned}$$

where \(\nu _{il}\) is the posterior probability defined by Eq. (17). Since the rate is equivalent to the mutual information between \(X\) and \(\Theta \), it is bounded from above by the entropy, \(-\sum _{l=1}^{\hat{k}}\hat{\pi }_{l}\log \hat{\pi }_{l}\) and further by \(\log \hat{k}\).

Figure 6a shows the RD curve for \(p^{*}(x)\) given by the Gaussian mixture in Eq. (13) and its Shannon lower bound (Berger 1971). To draw the RD curve, we used the following facts. The optimal reconstruction distribution \(q(\theta )\) is the Dirac delta distribution centered at \(\bar{\theta } = \frac{\theta _{1}^{*}+\theta _{2}^{*}}{2} = (2, 2)^{T}\) for a distortion larger than \(D_{\max } = \min _{\theta }\int p^{*}(x)||\theta -x||^{2}dx = 10\). The optimal \(q(\theta )\) is the Gaussian mixture \( \frac{1}{2}N\left( \theta |\theta _{1}^{*}, (1+\frac{1}{2s})I_{2}\right) + \frac{1}{2}N\left( \theta |\theta _{2}^{*}, (1+\frac{1}{2s})I_{2}\right) \) for a distortion less than \(1\) (the variance of each component); the RD curve coincides with the Shannon lower bound in this range of distortion. For a distortion between \(1\) and \(D_{\max }\), the optimal \(q(\theta )\) is a two-component discrete distribution, \(\frac{1}{2}\delta (\theta -\bar{\theta }-a{\varvec{1}}) +\frac{1}{2}\delta (\theta -\bar{\theta }+a{\varvec{1}})\), where \({\varvec{1}}=(1, 1)^{T}\) and \(a\) is a real number between \(0\) and \(2\), which we identified for each \(s\) (\(s\ge -0.5\)) from the optimality condition of the minimization problem in Eq. (21). These facts agree with the result of Rose (1994), which proves that for the squared distortion, if the Shannon lower bound is not tight, then the optimal reconstruction distribution is discrete.

Fig. 6 Examples of RD curves. a RD curve for the true Gaussian mixture in Eq. (13) (solid line) and its Shannon lower bound (dotted line). b–d RD curve for an empirical distribution (dashed line) and the average of linearly interpolated RD curves over 100 empirical data sets with the upper and lower bands indicating 2 standard deviations (dotted lines) when b \(\beta =0.0\), c \(\beta =-0.2\) and d \(\beta =0.5\). The rate is scaled by \(\log 2\) to yield bits

Figure 6b overlays on Fig. 6a (without the Shannon lower bound) the RD curve for the empirical distribution given by a data set (\(n=50\)) generated from \(p^{*}(x)\). We used Algorithm 1 with \(\beta =0\) to estimate \(q(\theta )\) for each \(s\) and interpolated linearly to draw the RD curve. Figure 6c, d show RD curves for \(\beta =-0.2\) and \(\beta =0.5\), respectively, drawn by running Algorithm 1 for each \(s\). Note that, in the above interpretation of the proposed optimization as an RD problem, the source depends on \(w_{i}\), which depends on \(q(\theta )\) as in Eq. (9), whereas in the original RD problem the source does not depend on the reconstruction distribution. Hence the above pair of rate and distortion does not necessarily inherit the properties of the usual RD function, such as convexity, except for \(\beta =0\). In fact, the RD curve for \(\beta =0.5\) loses convexity, as seen in Fig. 6d.

Figure 6b–d also show the average and twice the standard deviation of the linearly interpolated RD curves for \(100\) empirical data sets. We can see that, compared to the MLE (\(\beta =0\)), the RD curves for \(\beta =-0.2\) have small variation around the point \((D, R)=(2, 1)\), and those for \(\beta =0.5\) are, on average, close to the RD curve of the true Gaussian mixture in the small-distortion region, that is, for a small variance of the Gaussian component. These observations are consistent with the finding in Sects. 4.1 and 4.2 that \(\beta \ne 0\) can reduce the generalization error relative to the MLE. For \(\beta >0\), the learning algorithm developed in Sect. 3 can be considered an algorithm for computing Renyi’s analog of the rate-distortion function, which previously appeared in Arikan and Merhav (1998) in the context of guessing.

6 Further topics

By extending Lindsay’s theorem, we proved in Sect. 3.1 that the estimated \(q(\theta )\) is a discrete distribution consisting of distinct support points no greater in number than the number of training data. If \(p(x|\theta )\) is bounded for all \(x\) and \(\theta \), this statement can be generalized to other objective functionals as long as they are convex with respect to \(q(\theta )\) and hence to \({\varvec{r}}=(r_{1},\ldots , r_{n})\). The proposed algorithm in Sect. 3.2 is based on the decoupled approach developed in Nowozin and Bakir (2008). The general objective functional considered in Nowozin and Bakir (2008) includes the MLE and the KVQ. More specifically, the following four objective functionals are demonstrated as examples in Nowozin and Bakir (2008). Here, \(\rho = \min _{i} r_{i}\) and \(C\) is a constant.

  1. MLE: \(-\sum _{i=1}^{n}\log r_{i}\)

  2. KVQ: \(-\rho \)

  3. Margin-minus-variance: \(-\rho + \frac{C}{n}\sum _{i=1}^{n}\left( r_{i}-\rho \right) ^{2}\)

  4. Mean-minus-variance: \(-\frac{1}{n}\sum _{i=1}^{n} r_{i} +\frac{C}{n}\sum _{i=1}^{n} \left( r_{i}-\frac{1}{n}\sum _{j=1}^{n}r_{j}\right) ^{2}\)

The objective functional \(F_{\beta }\) in Eq. (2) interpolates between the first two objectives through the parameter \(\beta \). The other two objectives above are also convex with respect to \({\varvec{r}}\), and hence their optimal mixing distributions \(q(\theta )\) can likewise be proven to be discrete with no more than \(n\) support points. Note that, since \({\varvec{r}}\) is a linear function of \(q(\theta )\), convexity in \({\varvec{r}}\) is equivalent to convexity in \(q(\theta )\). Furthermore, we developed in Sect. 3.2.2 a simple algorithm for finite mixture models to minimize \(F_{\beta }\). In fact, this optimization algorithm for large \(\beta \) is used for the approximate computation of the prior distribution achieving the normalized maximum likelihood in the context of universal coding (Barron et al. 2014). Note that, to apply the general framework of Sect. 3.2.1 to specific objective functionals, we need learning algorithms that optimize them over finite mixture models.

Another aspect of the choice of the objective functional is the robustness of the estimation. In Sect. 2, we demonstrated that the minimization of \(F_{\beta }\) is related to that of the Renyi divergence. We further discuss its relationship to the divergence minimization that was proposed for the purpose of robust estimation. The gamma divergence (Fujisawa and Eguchi 2008; Eguchi et al. 2011) is defined for non-negative densities \(g\) and \(h\) with a real parameter \(\gamma \ge -1\) as

$$\begin{aligned} D_{\gamma }(g,h) = d_{\gamma }(g,h)-d_{\gamma }(g,g) \end{aligned}$$
(22)

where \(d_{\gamma }\) is the gamma cross entropy

$$\begin{aligned} d_{\gamma }(g,h) =-\frac{1}{\gamma }\log \frac{\int g(x) h(x)^{\gamma }dx}{\left\{ \int h(x)^{1+\gamma }dx \right\} ^{\frac{\gamma }{1+\gamma }}}. \end{aligned}$$

The following relation holds:

$$\begin{aligned} F_{\beta }(q)=(1+\gamma )d_{\gamma }(\hat{p}, r^{\frac{1}{1+\gamma }}) \end{aligned}$$

where \(\beta =-\frac{\gamma }{1+\gamma }\) and \(\hat{p}(x)\) is the empirical distribution.

The beta divergence in Murata et al. (2004) and Eguchi and Kato (2010) is a generalization of the Kullback-Leibler divergence which, like the gamma divergence above, is built from a cross-entropy term, and it is identical to the power divergence in Basu et al. (1998).

The expression (8) of \(F_{\beta }\) can be viewed as a weighted version of the negative log-likelihood. When \(\beta < 0\), Eq. (9) downweights outlying observations. This downweighting is equivalent to what Basu et al. (1998) call relative-to-the-model downweighting. This implies that the robustness of the estimation, the main feature of minimizing these divergences, carries over to the minimization of \(F_{\beta }\) for \(\beta < 0\). We observed in Sect. 4.1 that this can alleviate overfitting, where the generalization error was minimized at a slightly negative value of \(\beta \). Exploring a class of robustness-inducing objective functionals of \(q(\theta )\) is an interesting direction for future work.

7 Conclusion

In this article, we have discussed a nonparametric estimation method for mixing distributions. We proposed an objective functional for learning the mixing distribution of a mixture model that unifies the MLE and the KVQ through the parameter \(\beta \). By extending Lindsay’s result, we proved that the optimal mixing distribution is a discrete distribution with distinct support points no greater in number than the sample size, and we provided a simple algorithm to calculate it. Numerical experiments and theoretical analysis demonstrated that the estimated distribution is less prone to overfitting for some range of \(\beta \). We further discussed the nature of the objective functional in relation to RD theory and pointed out several open problems. We believe these results open a new direction for further research.