Latent variable models play a central role in psychometrics and related fields. Commonly used latent variable models include item response theory models (Embretson & Reise, 2000; Reckase, 2009), latent class models (Clogg, 1995; Rupp et al., 2010; von Davier & Lee, 2019), structural equation models (Bollen, 1989), error-in-variable models (Carroll et al., 2006), random-effects models (Hsiao, 2014), and models for missing data (Little & Rubin, 1987), where latent variables have different interpretations, such as hypothetical constructs, ‘true’ variables measured with error, unobserved heterogeneity, and missing data. We refer readers to Rabe-Hesketh & Skrondal (2004) and Bartholomew et al. (2011) for a comprehensive review of latent variable models.

A latent variable model contains unobserved latent variables and unknown parameters. For example, an item response theory model contains individual-specific latent traits as latent variables and item-specific parameters as model parameters. Compared with models without latent variables, such as linear and generalized linear regression models, the estimation of latent variable models is typically more involved. This estimation problem can be viewed from three perspectives: (1) fixed latent variables and parameters, (2) random latent variables and fixed parameters, and (3) random latent variables and parameters.

The first perspective, i.e., fixed latent variables and parameters, leads to the joint maximum likelihood (JML) estimator. This estimator can often be computed efficiently, for example, by an alternating minimization algorithm (Birnbaum, 1968; Chen et al., 2019; 2020). Unfortunately, the JML estimator is typically statistically inconsistent (Neyman & Scott, 1948; Andersen, 1973; Haberman, 1977; Ghosh, 1995), except under some high-dimensional asymptotic regimes that are suitable for large-scale applications (Chen et al., 2019; 2020; Haberman, 1977; 2004). Treating both latent variables and parameters as random variables, the third perspective leads to a full Bayesian estimator, for which many Markov chain Monte Carlo (MCMC) algorithms have been developed (e.g., Béguin & Glas, 2001; Bolt & Lall, 2003; Dunson, 2000; 2003; Edwards, 2010).

The second perspective, i.e., random latent variables and fixed parameters, essentially follows an empirical Bayes (EB) approach (Robbins, 1956; C.-H. Zhang, 2003). This perspective is the most commonly adopted one (Rabe-Hesketh & Skrondal, 2004). Throughout the paper, we refer to estimators derived under this perspective as EB estimators. Both the full-information marginal maximum likelihood (MML) estimator (Bock & Aitkin, 1981) and the limited-information composite maximum likelihood (CML) estimator (Jöreskog & Moustaki, 2001; Vasdekis et al., 2012) can be viewed as special cases. Such estimators involve optimizing an objective function with respect to the fixed parameters, while the objective function is often intractable due to an integral with respect to the latent variables. The most commonly used algorithm for this optimization problem is the expectation-maximization (EM) algorithm (Dempster et al., 1977; Bock & Aitkin, 1981). This algorithm typically requires iteratively evaluating numerical integrals with respect to the latent variables, which is often computationally unaffordable when the dimension of the latent space is high.

A high-dimensional latent space is not the only challenge to the computation of EB estimators. Penalties and constraints on parameters may also be involved in the optimization, further complicating the computation. In fact, penalized estimators have become increasingly popular in latent variable analysis for learning sparse structure, with applications to restricted latent class analysis, exploratory item factor analysis, variable selection in structural equation models, and differential item functioning analysis, among others (Chen et al., 2015; Sun et al., 2016; Chen et al., 2018; Lindstrøm & Dahl, 2020; Tutz & Schauberger, 2015; Jacobucci et al., 2016; Magis et al., 2015). The penalty function is often non-smooth (e.g., Lasso penalty, Tibshirani, 1996), for which many standard optimization tools (e.g., gradient descent methods) are not applicable. In addition, complex inequality constraints are also commonly encountered in latent variable estimation, for example, in structural equation models (Van De Schoot et al., 2010) and restricted latent class models (e.g., de la Torre, 2011; Xu, 2017). Such complex constraints further complicate the optimization.

In this paper, we propose a quasi-Newton stochastic proximal algorithm that simultaneously tackles the computational challenges mentioned above. This algorithm can be viewed as an extension of the stochastic approximation (SA) method (Robbins & Monro, 1951). Compared with SA, the proposed method converges faster and is more robust, thanks to the use of Polyak–Ruppert averaging (Polyak & Juditsky, 1992; Ruppert, 1988). The proposed method can also be viewed as a stochastic version of a proximal gradient descent algorithm (Chapter 4, Parikh & Boyd, 2014), in which constraints and penalties are handled by a proximal update. As will be illustrated by examples later, the proximal update is easy to evaluate for many commonly used penalties and constraints, making the proposed algorithm computationally efficient. Theoretical properties of the proposed method are established, showing that the proposed method is almost optimal in its convergence speed.

The proposed method is closely related to the stochastic-EM algorithm (Celeux, 1985; Ip, 2002; Nielsen, 2000; S. Zhang et al., 2020b) and the MCMC stochastic approximation algorithms (Cai, 2010a; b; Gu & Kong, 1998), two popular methods for latent variable model estimation. Although these methods perform well in many problems, they are not as powerful as the proposed one. Specifically, the MCMC stochastic approximation algorithms cannot handle complex inequality constraints or non-smooth penalties, because they rely on stochastic gradients, which may not exist in such settings. In addition, as will be discussed later, both the stochastic-EM algorithm and the MCMC stochastic approximation algorithms are computationally less efficient than the proposed method, even for estimation problems without complex constraints or penalties.

The proposed method is also closely related to a perturbed proximal gradient algorithm proposed in Atchadé et al. (2017). The current development improves upon that of Atchadé et al. (2017) in two aspects. First, the proposed method is a quasi-Newton method, in which the second-order information (i.e., second derivatives) of the objective function is used in the update. Although this step may only change the asymptotic convergence speed by a constant factor (when the number of iterations grows to infinity), our simulation study suggests that the new method converges much faster than that of Atchadé et al. (2017) empirically. Second, the theoretical analysis of Atchadé et al. (2017) only considers a convex optimization setting, while we consider a non-convex setting, which is typically the case for latent variable model estimation. Note that the analysis is much more involved when the objective function is non-convex. Therefore, our proof of sequence convergence is different from that of Atchadé et al. (2017). Specifically, the convergence theory is established by analyzing the convergence of a set-valued generalization of an ordinary differential equation (ODE).

The rest of the paper is organized as follows. In Sect. 1, we formulate latent variable model estimation as a general optimization problem which covers many commonly used estimators as special cases. In Sect. 2, a quasi-Newton stochastic proximal algorithm is proposed. Theoretical properties of the proposed algorithm are established in Sect. 3, suggesting that the proposed algorithm achieves the optimal convergence rate. The performance of the proposed algorithm is demonstrated and compared with other estimators by simulation studies in Sect. 4. We conclude with some discussions in Sect. 5. An R package implementing the proposed algorithm can be found at https://github.com/slzhang-fd/lvmcomp2.

1 Estimation of Latent Variable Models

1.1 Problem Setup

We consider the estimation of a parametric latent variable model. We adopt a general setting, followed by concrete examples in Sects. 1.2 and 1.3. Let \({\mathbf {Y}}\) be a random object representing observable data and let \({\mathbf {y}}\) be its realization. For example, in item factor analysis (IFA), \({\mathbf {Y}}\) represents (categorical) responses to all the items from all the respondents. A latent variable model specifies the distribution of \({\mathbf {Y}}\) by introducing a set of latent variables \(\varvec{\xi } \in \Xi \), where \(\Xi \) denotes the state space of the latent vector \(\varvec{\xi }\). For example, in item factor analysis, \(\varvec{\xi }\) consists of the latent traits of all the respondents and \(\Xi \) is a Euclidean space. Let \(\varvec{\beta } = (\beta _1,\ldots , \beta _p)^\top \in {\mathcal {B}}\) be a set of parameters in the model, where \({\mathcal {B}}\) denotes the parameter space. The goal is to estimate \(\varvec{\beta }\) given observed data \({\mathbf {y}}\).

We consider an EB estimator based on an objective function of the form

$$\begin{aligned} l(\varvec{\beta }) = \log \left( \int _{\Xi } f({\mathbf {y}}, \varvec{\xi }\mid \varvec{\beta })d \varvec{\xi }\right) , \end{aligned}$$
(1)

where \(f({\mathbf {y}}, \varvec{\xi }\mid \varvec{\beta })\) is a complete-data likelihood/pseudo-likelihood function that has an analytic form. We assume that the objective function \(l(\varvec{\beta })\) is finite for any \({\varvec{\beta }}\in {\mathcal {B}}\) and is also smooth in \(\varvec{\beta }\).

The estimator is given by solving the following optimization problem

$$\begin{aligned} {\hat{{\varvec{\beta }}}} = \mathop {\hbox {arg min}}\limits _{\varvec{\beta } \in {\mathcal {B}}} -l(\varvec{\beta }) + R(\varvec{\beta }), \end{aligned}$$
(2)

where \(R(\varvec{\beta })\) is a penalty function that has an analytic form, such as the Lasso, ridge, or elastic net regularization functions. Note that the penalty function often depends on tuning parameters. Throughout this paper, we assume these tuning parameters are fixed and thus do not explicitly indicate them in the objective function (2). In practice, tuning parameters are often unknown and need to be chosen by cross-validation or a certain information criterion. We point out that many commonly used estimators take the form of (2), including the MML estimator, the CML estimator, and regularized estimators based on the MML and CML. We also point out that despite its general applicability to latent variable estimation problems, the proposed method is more useful for complex problems that cannot be easily solved by the classical EM algorithm. For certain problems, such as the estimation of linear factor models and simple latent class models, both the E- and M-steps of the EM algorithm have closed-form solutions. In that situation, the classical EM algorithm may be computationally more efficient, though the proposed method can still be used.

1.2 High-dimensional Item Factor Analysis

Item factor analysis models are commonly used in social and behavioral sciences for analyzing categorical response data. For exposition, we focus on binary response data and point out that the extension to ordinal response data is straightforward. Consider N individuals responding to J binary-scored items. Let \(Y_{ij}\in \{0, 1\}\) be a random variable denoting person i’s response to item j and let \(y_{ij}\) be its realization. Thus, we have \({\mathbf {Y}}= (Y_{ij})_{N\times J}\) and \({\mathbf {y}}= (y_{ij})_{N\times J}\), where \({\mathbf {Y}}\) and \({\mathbf {y}}\) are the generic notations introduced in Sect. 1.1 for our data. A comprehensive review of IFA models and their estimation can be found in Chen & Zhang (2020a).

It is assumed that the dependence among an individual’s responses is driven by a set of latent factors, denoted by \(\varvec{\xi } = (\xi _{ik})_{N\times K}\), where \(\xi _{ik}\) represents person i’s kth factor. Recall that \(\varvec{\xi }\) is our generic notation for the latent variables in Sect. 1.1 and here the state space \(\Xi = {\mathbb {R}}^{N\times K}\). Throughout this paper, we assume the number of factors K is known.

An IFA model makes the following assumptions:

  1.

    \(\varvec{\xi }_i = (\xi _{i1},\ldots , \xi _{iK})^\top \), \(i = 1,\ldots , N\), are independent and identically distributed (i.i.d.) random vectors, following a multivariate normal distribution \(N({\mathbf {0}}, \varvec{\Sigma })\). The diagonal terms of \(\varvec{\Sigma } = (\sigma _{kk'})_{K\times K}\) are set to one for model identification. As \(\varvec{\Sigma }\) is a positive semi-definite matrix, it is common to reparametrize \(\varvec{\Sigma }\) by Cholesky decomposition,

    $$\begin{aligned} \varvec{\Sigma } = {\mathbf {B}}{\mathbf {B}}^\top , \end{aligned}$$

    where \({\mathbf {B}} = (b_{kk'})_{K\times K}\) is a lower triangular matrix. Let \({\mathbf {b}}_k\) be the kth row of \({\mathbf {B}}\). Then \(\Vert {\mathbf {b}}_k \Vert = 1\), \(k=1,\ldots , K\), since the diagonal terms of \(\varvec{\Sigma }\) are constrained to be 1; a small numerical illustration is given after this list.

  2.

    \(Y_{ij}\) given \(\varvec{\xi }_i\) follows a Bernoulli distribution satisfying

    $$\begin{aligned} {\mathbb {P}}(Y_{ij} = 1\mid \varvec{\xi }_i) = \frac{\exp (d_j + {\mathbf {a}}_j^\top \varvec{\xi }_i)}{1+\exp (d_j + {\mathbf {a}}_j^\top \varvec{\xi }_i)}, \end{aligned}$$
    (3)

    where \(d_j\) and \({\mathbf {a}}_j = (a_{j1},\ldots , a_{jK})^\top \) are item-specific parameters. The parameters \(a_{jk}\) are often known as the loading parameters.

  3.

    \(Y_{i1}\),..., \(Y_{iJ}\) are assumed to be conditionally independent given \(\varvec{\xi }_i\), which is known as the local independence assumption.

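To make the reparametrization in assumption 1 concrete, the following R snippet (ours, for illustration; it is not part of the lvmcomp2 package) verifies \(\varvec{\Sigma } = {\mathbf {B}}{\mathbf {B}}^\top \) on a small example. In R, chol() returns an upper-triangular factor, so the lower triangular \({\mathbf {B}}\) is its transpose, and each row of \({\mathbf {B}}\) has unit norm because the diagonal of \(\varvec{\Sigma }\) is one.

```r
# Verify Sigma = B %*% t(B) for a 2 x 2 correlation matrix.
Sigma <- matrix(c(1, 0.5, 0.5, 1), 2, 2)
B <- t(chol(Sigma))   # chol() is upper triangular, so B is lower triangular
rowSums(B^2)          # both entries equal 1, i.e., ||b_k|| = 1
```
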
Note that we consider the most commonly used logistic model in (3). It is worth pointing out that the proposed algorithm also applies to the normal ogive (i.e., probit) model, which assumes that \( {\mathbb {P}}(Y_{ij} = 1\mid \varvec{\xi }_i) =\Phi (d_j + {\mathbf {a}}_j^\top \varvec{\xi }_i)\). Under the current setting and using the reparametrization for \(\varvec{\Sigma }\), our model parameters are \({\varvec{\beta }}= \{{\mathbf {B}}, d_j, {\mathbf {a}}_j, j =1,\ldots , J\}\). The marginal log-likelihood function takes the form

$$\begin{aligned} l({\varvec{\beta }}) = \sum _{i=1}^N \log \left( \int _{{\mathbf {x}}\in {\mathbb {R}}^{K}} \prod _{j=1}^J \frac{\exp [y_{ij}(d_j + {\mathbf {a}}_j^\top {\varvec{x}})]}{1+\exp (d_j + {\mathbf {a}}_j^\top {\varvec{x}})} \phi ({\mathbf {x}}\mid {\mathbf {B}}) d{\mathbf {x}}\right) , \end{aligned}$$
(4)

where \(\phi ({\mathbf {x}}\mid {\mathbf {B}})\) is the density function for the multivariate normal distribution \(N({\mathbf {0}}, {\mathbf {B}}{\mathbf {B}}^\top )\). The K-dimensional integrals involved in (4) impose a high computational burden when K is relatively large (e.g., \(K \ge 5\)).
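To make this burden concrete, the following R sketch (ours, for illustration only; the function and argument names are not from our package) approximates one respondent's contribution to (4) by plain Monte Carlo, drawing from \(N({\mathbf {0}}, {\mathbf {B}}{\mathbf {B}}^\top )\). The number of draws needed for a given accuracy grows quickly with K, which is exactly why such direct approximations become impractical.

```r
# Monte Carlo approximation of the i-th term of (4).
# y_i: J-vector of binary responses; d: J-vector of intercepts;
# A: J x K loading matrix; B: K x K lower-triangular Cholesky factor.
marginal_loglik_i <- function(y_i, d, A, B, M = 5000) {
  K <- ncol(A)
  X <- matrix(rnorm(M * K), M, K) %*% t(B)      # draws from N(0, B B^T)
  eta <- sweep(X %*% t(A), 2, d, "+")           # M x J linear predictors
  logp <- as.vector(eta %*% y_i) -              # Bernoulli log-likelihoods,
    rowSums(log1p(exp(eta)))                    # summed over items
  m <- max(logp)                                # log-mean-exp, computed stably
  m + log(mean(exp(logp - m)))
}
```
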

IFA models are commonly used for both exploratory and confirmatory analyses. In exploratory IFA, an important problem is to learn a sparse loading matrix \((a_{jk})_{J\times K}\) from data, which facilitates the interpretation of the factors. One approach is the \(L_1\)-regularized estimator (Sun et al., 2016), which takes the form

$$\begin{aligned} \begin{aligned} {\hat{{\varvec{\beta }}}} =&\mathop {\hbox {arg min}}\limits _{{\varvec{\beta }}\in {\mathcal {B}}} - l({\varvec{\beta }}) + R({\varvec{\beta }}), \end{aligned} \end{aligned}$$
(5)

where the parameter space

$$\begin{aligned} {\mathcal {B}} = \{{\varvec{\beta }}\in {\mathbb {R}}^{p}: b_{kk'} = 0, 1\le k < k' \le K, \sum _{k' = 1}^K b_{kk'}^2 = 1, k = 1,\ldots , K\}, \end{aligned}$$

and the penalty term

$$\begin{aligned} R({\varvec{\beta }}) = \lambda \sum _{j = 1}^J\sum _{k=1}^K \vert a_{jk}\vert . \end{aligned}$$
(6)

In \(R({\varvec{\beta }})\), \(\lambda > 0\) is a tuning parameter assumed to be fixed throughout this paper. This regularized estimator resolves the rotational indeterminacy issue in exploratory IFA, as the \(L_1\) penalty term is not rotationally invariant. Consequently, under mild regularity conditions, the loading matrix can be consistently estimated up to column swapping. Note that only the \({\mathbf {B}}\) matrix has constraints, as reflected by the parameter space \({\mathcal {B}}\). Here \(b_{kk'} = 0\) follows from \({\mathbf {B}}\) being a lower triangular matrix, and \(\sum _{k' = 1}^K b_{kk'}^2 = 1\) follows from the diagonal terms of \(\varvec{\Sigma } = {\mathbf {B}}{\mathbf {B}}^\top \) all being 1. We remark that it is possible to replace the \(L_1\) penalty in \(R({\varvec{\beta }})\) with other penalty functions for imposing sparsity, such as the elastic net penalty (Zou & Hastie, 2005)

$$\begin{aligned} R({\varvec{\beta }}) = \lambda _1 \sum _{j = 1}^J\sum _{k=1}^K a_{jk}^2 + \lambda _2 \sum _{j = 1}^J\sum _{k=1}^K \vert a_{jk}\vert , \end{aligned}$$
(7)

where \(\lambda _1, \lambda _2 > 0\) are two tuning parameters.

In confirmatory IFA, zero constraints are imposed on loading parameters, based on prior knowledge about the measurement design. More precisely, these zero constraints can be coded by a binary matrix \({\mathbf {Q}} = (q_{jk})_{J\times K}\). If \(q_{jk} = 0\), then item j does not load on factor k and \(a_{jk}\) is set to 0. Otherwise, \(a_{jk}\) is freely estimated. These constraints lead to parameter space \({\mathcal {B}} = \{{\varvec{\beta }}: b_{kk'} = 0, 1\le k < k' \le K; \sum _{k' = 1}^K b_{kk'}^2 = 1, a_{jk} = 0 ~\text{ for }~ q_{jk} = 0, j = 1,\ldots ,J, k = 1,\ldots , K\}\). The MML estimator for confirmatory IFA is then given by

$$\begin{aligned} \begin{aligned} {\hat{{\varvec{\beta }}}} =&\mathop {\hbox {arg min}}\limits _{{\varvec{\beta }}\in {\mathcal {B}}} - l({\varvec{\beta }}). \end{aligned} \end{aligned}$$
(8)

Besides parameter estimation, another problem of interest in confirmatory IFA is statistical inference, which requires computing the asymptotic variance of \({\hat{{\varvec{\beta }}}}\). Estimating the asymptotic variance often requires computing the Hessian matrix of \(l({\varvec{\beta }})\) at \({\hat{{\varvec{\beta }}}}\), which also involves intractable K-dimensional integrals. As we will see in Sect. 2.1, this Hessian matrix, as well as quantities taking a similar form, can be easily obtained as a by-product of the proposed algorithm.

1.3 Restricted Latent Class Model

Our second example is the restricted latent class model, which is also widely used in social and behavioral sciences. For example, such models are commonly used in education for cognitive diagnosis (von Davier & Lee, 2019). These models differ from IFA models in that they assume discrete latent variables. Here, we consider a setting for cognitive diagnosis where both the data and the latent variables are binary. Consider data taking the same form as that for IFA, denoted by \({\mathbf {Y}}= (Y_{ij})_{N\times J}\) and \({\mathbf {y}}= (y_{ij})_{N\times J}\). In this context, \(Y_{ij} = 1\) means that item j is answered correctly and \(Y_{ij} = 0\) means an incorrect answer.

The restricted latent class model assumes that each individual is characterized by a K-dimensional latent vector \(\varvec{\xi }_i = (\xi _{i1},\ldots , \xi _{iK})^\top \), \(i = 1,\ldots , N\), where \(\xi _{ik} \in \{0, 1\}\). Thus, the latent variables are \(\varvec{\xi } = (\xi _{ik})_{N\times K}\), whose state space \(\Xi = \{0,1\}^{N\times K}\) contains all \(N\times K\) binary matrices. Each dimension of \(\varvec{\xi }_i\) represents a skill, and \(\xi _{ik} = 1\) indicates that person i has mastered the kth skill and \(\xi _{ik} = 0\) otherwise.

The restricted latent class model can be parametrized as follows.

  1.

    The person-specific latent vectors \(\varvec{\xi }_i\), \(i = 1,\ldots , N\), are i.i.d., following a categorical distribution satisfying

    $$\begin{aligned} {\mathbb {P}}(\varvec{\xi }_i = \varvec{\alpha }) = \frac{\exp (\nu _{\varvec{\alpha }})}{\sum _{\varvec{\alpha }' \in \{0, 1\}^K}\exp (\nu _{\varvec{\alpha }'})}, \end{aligned}$$

    where \(\varvec{\alpha } \in \{0, 1\}^K\) is an attribute profile encoding the mastery status on all K attributes, and we set \(\nu _{\varvec{\alpha }'} = 0\) for the baseline profile \(\varvec{\alpha }' = (0,\ldots , 0)^\top \).

  2.

    \(Y_{ij}\) given \(\varvec{\xi }_i\) follows a Bernoulli distribution, satisfying

    $$\begin{aligned} {\mathbb {P}}(Y_{ij} = 1\mid \varvec{\xi }_i = \varvec{\alpha }) = \frac{\exp (\theta _{j, \varvec{\alpha }})}{1+\exp (\theta _{j, \varvec{\alpha }})}, ~\varvec{\alpha } \in \{0, 1\}^K. \end{aligned}$$
  3.

    Local independence is still assumed. That is, \(Y_{i1}\),..., \(Y_{iJ}\) are conditionally independent given \(\varvec{\xi }_i\).

The above model specification leads to the marginal log-likelihood function

$$\begin{aligned} l({\varvec{\beta }}) = \sum _{i=1}^N \log \left( \sum _{\varvec{\alpha }\in \{0,1\}^K} \frac{\exp (\nu _{\varvec{\alpha }})}{\sum _{\varvec{\alpha }' \in \{0, 1\}^K}\exp (\nu _{\varvec{\alpha }'})} \prod _{j=1}^J \frac{\exp (y_{ij}\theta _{j, \varvec{\alpha }})}{1+\exp (\theta _{j, \varvec{\alpha }})} \right) , \end{aligned}$$
(9)

where \({\varvec{\beta }}= \{\nu _{\varvec{\alpha }}, \theta _{j, \varvec{\alpha }}, \varvec{\alpha }\in \{0, 1\}^K, j = 1,\ldots , J\}\).
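Since the latent space here is finite, (9) can be evaluated exactly by enumerating all \(2^K\) attribute profiles, as in the following R sketch (ours, with illustrative names); the cost of this enumeration grows exponentially in K, which is the computational bottleneck noted at the end of this subsection.

```r
# Exact marginal log-likelihood (9).
# Y: N x J binary matrix; nu: 2^K-vector of nu_alpha (baseline entry 0);
# Theta: J x 2^K matrix with columns indexed by attribute profiles alpha.
lca_loglik <- function(Y, nu, Theta) {
  pis <- exp(nu) / sum(exp(nu))                  # P(xi_i = alpha)
  logPy <- Y %*% Theta -                         # N x 2^K: log P(y_i | alpha)
    matrix(colSums(log1p(exp(Theta))), nrow(Y), length(nu), byrow = TRUE)
  sum(log(exp(logPy) %*% pis))                   # sum_i log of the mixture
}
```
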

We consider a confirmatory setting where there exists a design matrix, similar to the \({\mathbf {Q}}\)-matrix in confirmatory IFA. With slight abuse of notation, we still denote \({\mathbf {Q}} = (q_{jk})_{J\times K}\), where \(q_{jk} \in \{0, 1\}\). Here, \(q_{jk} = 1\) indicates that solving item j requires the kth skill and \(q_{jk} = 0\) otherwise. As will be explained below, this design matrix leads to equality and inequality constraints on the model parameters.

Denote \({\mathbf {q}}_{j} = (q_{j1},\ldots , q_{jK})^\top \) as the design vector for item j. For \(\varvec{\alpha } = (\alpha _1,\ldots , \alpha _K)^\top \), we write

$$\begin{aligned} \varvec{\alpha } \succeq {\mathbf {q}}_{j}, ~\text{ if }~ \alpha _k \ge q_{jk} ~\text{ for } \text{ all }~ k \in \{1,\ldots , K\}, \end{aligned}$$

and write

$$\begin{aligned} \varvec{\alpha } \nsucceq {\mathbf {q}}_{j}, ~\text{ if } \text{ there } \text{ exists } k \text{ such } \text{ that }~ \alpha _k < q_{jk}. \end{aligned}$$

That is, \(\varvec{\alpha } \succeq {\mathbf {q}}_{j}\) if profile \(\varvec{\alpha }\) has all the skills needed for solving item j and \(\varvec{\alpha } \nsucceq {\mathbf {q}}_{j}\) if not. The design information leads to the following constraints:

  1.

    \({\mathbb {P}}(Y_{ij} = 1\mid \varvec{\xi }_i = \varvec{\alpha }) = {\mathbb {P}}(Y_{ij} = 1\mid \varvec{\xi }_i = \varvec{\alpha }')\), if both \(\varvec{\alpha }, \varvec{\alpha }' \succeq {\mathbf {q}}_{j}\). That is, individuals who have mastered all the required skills have the same chance of answering the item correctly.

  2.

    \({\mathbb {P}}(Y_{ij} = 1\mid \varvec{\xi }_i = \varvec{\alpha }) \ge {\mathbb {P}}(Y_{ij} = 1\mid \varvec{\xi }_i = \varvec{\alpha }')\) if \(\varvec{\alpha }\succeq {\mathbf {q}}_{j}\) and \(\varvec{\alpha }'\nsucceq {\mathbf {q}}_{j}\). That is, students who have mastered all the required skills have a higher chance of answering the item correctly than those who do not.

  3.

    \({\mathbb {P}}(Y_{ij} = 1\mid \varvec{\xi }_i=\varvec{\alpha })\ge {\mathbb {P}}(Y_{ij}=1\mid \varvec{\xi }_i={\varvec{0}})\) for all \(\varvec{\alpha }\). That is, students who have not mastered any skill have the lowest chance of answering correctly.

We refer the readers to Xu (2017) for more discussion of these constraints, which are key to the identification of this model. Under these constraints, the MML estimator is given by

$$\begin{aligned} \begin{aligned} {\hat{{\varvec{\beta }}}} =&\mathop {\hbox {arg min}}\limits _{{\varvec{\beta }}\in {\mathcal {B}}} - l({\varvec{\beta }}), \end{aligned} \end{aligned}$$
(10)

where

$$\begin{aligned} \begin{aligned} {\mathcal {B}} = \{{\varvec{\beta }}: \max _{\varvec{\alpha }\succeq {\mathbf {q}}_{j}} \theta _{j, \varvec{\alpha }} = \min _{\varvec{\alpha }\succeq {\mathbf {q}}_{j}} \theta _{j, \varvec{\alpha }} \ge \theta _{j, \varvec{\alpha }'} \ge \theta _{j,{\varvec{0}}}, ~\text{ if }~ \varvec{\alpha }^\prime \nsucceq \mathbf {q}_j, \,\, \nu _{{\varvec{0}}} = 0\}. \end{aligned} \end{aligned}$$

When K is relatively large, the computation for solving (10) becomes challenging, due to both the summation over \(2^K\) possible values of \(\varvec{\alpha }\) in \(l({\varvec{\beta }})\), and the large number of inequality constraints.

2 Stochastic Proximal Algorithm

In this section, we propose a quasi-Newton stochastic proximal algorithm for the computation of (2). The description in this section focuses on computational aspects, without emphasizing the regularity conditions needed for convergence. A rigorous theoretical treatment will be given in Sect. 3. In what follows, we describe the algorithm in its general form in Sect. 2.1, followed by details for two specific models in Sects. 2.2 and 2.3, and finally comparisons with related algorithms in Sect. 2.4.

2.1 General Algorithm

For ease of exposition, we introduce some new notation. We write the penalty function as the sum of two terms, \(R({\varvec{\beta }}) = R_1({\varvec{\beta }}) + R_2({\varvec{\beta }})\), where \(R_1({\varvec{\beta }})\) is a smooth function and \(R_2({\varvec{\beta }})\) is non-smooth. In the example of regularized estimation for exploratory IFA, \(R_1({\varvec{\beta }}) = 0\) and \(R_2({\varvec{\beta }}) = \lambda \sum _{j = 1}^J\sum _{k=1}^K \vert a_{jk}\vert \), when \(R({\varvec{\beta }})\) is an \(L_1\) penalty as in (6). When an elastic net penalty is used as in (7), \(R_1({\varvec{\beta }}) = \lambda _1 \sum _{j = 1}^J\sum _{k=1}^K a_{jk}^2 \) and \(R_2({\varvec{\beta }}) = \lambda _2 \sum _{j = 1}^J\sum _{k=1}^K \vert a_{jk}\vert \).

The optimization problem can be reexpressed as

$$\begin{aligned} \min _{\varvec{\beta }}~ h({\varvec{\beta }}) + g({\varvec{\beta }}), \end{aligned}$$
(11)

where \(h({\varvec{\beta }}) = -l(\varvec{\beta }) + R_1(\varvec{\beta })\) and \(g: {\mathbb {R}}^{p} \rightarrow {\mathbb {R}} \cup \{+\infty \}\) is a generalized function taking the form \(g({\varvec{\beta }}) = R_2({\varvec{\beta }}) + I_{{\mathcal {B}}}({\varvec{\beta }})\), where

$$\begin{aligned} I_{{\mathcal {B}}}({\varvec{\beta }}) = \left\{ \begin{array}{ll} 0, &{} ~\text{ if }~ {\varvec{\beta }}\in {\mathcal {B}},\\ +\infty , &{} ~\text{ otherwise. }~ \end{array}\right. \end{aligned}$$
(12)

Note that since both \(l({\varvec{\beta }})\) and \(R_1({\varvec{\beta }})\) are smooth in \({\varvec{\beta }}\), \(h({\varvec{\beta }})\) is still smooth in \({\varvec{\beta }}\). The second term \(g({\varvec{\beta }})\) is non-smooth in \({\varvec{\beta }}\), unless it is degenerate (i.e., \(g({\varvec{\beta }}) \equiv 0\)). We further write

$$\begin{aligned} H(\varvec{\xi }, {\varvec{\beta }}) = - \log f({\mathbf {y}}, \varvec{\xi }\mid {\varvec{\beta }}) + R_1({\varvec{\beta }}), \end{aligned}$$
(13)

which can be viewed as a complete-data version of \(h({\varvec{\beta }})\) that will be used in the algorithm.

The algorithm relies on a scaled proximal operator (Lee et al., 2014) for the g function, defined as

$$\begin{aligned} \text {Prox}_{\gamma , g}^{{\mathbf {D}}}({\varvec{\beta }}) = \mathop {\hbox {arg min}}\limits _{{\mathbf {x}}\in {\mathbb {R}}^{p}} \left\{ g({\mathbf {x}}) + \frac{1}{2\gamma } \Vert {\mathbf {x}}- {\varvec{\beta }}\Vert ^2_{{\mathbf {D}}} \right\} , \end{aligned}$$

where \(\gamma >0\), \({\mathbf {D}}\) is a strictly positive definite matrix, and \(\Vert \cdot \Vert _{{\mathbf {D}}}\) is a norm defined by \(\Vert {\mathbf {x}}\Vert _{{\mathbf {D}}}^2 = \langle {\varvec{x}},{\varvec{x}}\rangle _{{\mathbf {D}}}= {\mathbf {x}}^\top {\mathbf {D}} {\mathbf {x}}\). The choices of \(\gamma \), \({\mathbf {D}}\), and the intuition behind the proximal operator will be explained in the sequel.

Our general algorithm is described in Algorithm 1, followed by implementation details. The proposed algorithm is an extension of a perturbed proximal gradient algorithm (Atchadé et al., 2017). The major difference is that the proposed algorithm makes use of second-order information from the smooth part of the objective function, which can substantially speed up its convergence. See Sect. 2.4 for further comparison.

Algorithm 1

(Stochastic Proximal Algorithm)

  • Input Data \({\mathbf {y}}\), initial parameters \({\varvec{\beta }}^{(0)}\), a sequence of step sizes \(\gamma _s, s= 1, 2,\ldots \), pre-specified tuning parameters \(c_{2} \ge c_{1} > 0\), and burn-in size \(\varpi \).

  • Update At tth iteration where \(t \ge 1\), we perform the following two steps:

    1.

      Stochastic step Sample \(\varvec{\xi }\) from the conditional distribution of \(\varvec{\xi }\) given \({\mathbf {y}}\),

      $$\begin{aligned} \psi (\varvec{\xi }) \propto f({\mathbf {y}}, \varvec{\xi }\mid {\varvec{\beta }}^{(t-1)}), \end{aligned}$$

      and obtain \(\varvec{\xi }^{(t)}\). The sampling can be either exact or approximated by MCMC.

    2.

      Proximal step Update model parameters by

      $$\begin{aligned} {\varvec{\beta }}^{(t)} = \text {Prox}_{\gamma _t, g}^{{\mathbf {D}}^{(t)}}\big ({\varvec{\beta }}^{(t-1)} - \gamma _t({\mathbf {D}}^{(t)})^{-1}{\mathbf {G}}^{(t)}\big ), \end{aligned}$$
      (14)

      where

      $$\begin{aligned} {\mathbf {G}}^{(t)} = \frac{\partial H(\varvec{\xi }^{(t)}, {\varvec{\beta }})}{\partial {\varvec{\beta }}} \bigg \vert _{{\varvec{\beta }}= {\varvec{\beta }}^{(t-1)}}. \end{aligned}$$

      \({\mathbf {D}}^{(t)}\) is a diagonal matrix with diagonal entries

      $$\begin{aligned} \delta _i^{(t)} = \frac{t-1}{t} \delta _i^{(t-1)} + \frac{1}{t}T\left( {\tilde{\delta }}_i^{(t)}; c_{1}, c_{2}\right) , \end{aligned}$$

      where \(T(x;c_1,c_2)\) is a truncation function defined as

      $$\begin{aligned} T(x;c_1,c_2) = \left\{ \begin{array}{ll} c_1, &{} ~\text{ if }~ x < c_1, \\ x, &{} ~\text{ if }~ x \in [c_1,c_2],\\ c_2, &{} ~\text{ if }~ x > c_2. \end{array}\right. \end{aligned}$$
      (15)

      Here \({\tilde{\delta }}_i^{(t)} = {\tilde{\delta }}_{i,1}^{(t)} + ({\tilde{\delta }}_{i,2}^{(t)})^2,\) where

      $$\begin{aligned} {\tilde{\delta }}_{i,1}^{(t)}&= (1-\gamma _t){\tilde{\delta }}_{i,1}^{(t-1)} + \gamma _t \left( \frac{\partial ^2 H(\varvec{\xi }^{(t)}, {\varvec{\beta }})}{\partial \beta _i^2}\bigg \vert _{{\varvec{\beta }}= {\varvec{\beta }}^{(t-1)}} - \left( \frac{\partial H(\varvec{\xi }^{(t)},{\varvec{\beta }})}{\partial \beta _i}\bigg \vert _{{\varvec{\beta }}={\varvec{\beta }}^{(t-1)}}\right) ^2\right) ,\\ {\tilde{\delta }}_{i,2}^{(t)}&= (1-\gamma _t){\tilde{\delta }}_{i,2}^{(t-1)} + \gamma _t \left( \frac{\partial H(\varvec{\xi }^{(t)},{\varvec{\beta }})}{\partial \beta _i}\bigg \vert _{{\varvec{\beta }}={\varvec{\beta }}^{(t-1)}}\right) ^2. \end{aligned}$$

    Iteratively perform these two steps until a stopping criterion is satisfied and let n be the last iteration number.

  • Output \({\bar{{\varvec{\beta }}}}_n = {\sum _{t=\varpi +1}^n {\varvec{\beta }}^{(t)}}/(n-\varpi )\).
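To summarize the flow of Algorithm 1, the following R skeleton (ours; sample_xi, grad_H, hess_diag_H, and prox are user-supplied, model-specific functions, not package API) implements the stochastic step, the diagonal quasi-Newton scaling, the proximal step, and the final Polyak–Ruppert averaging, with the step size of Remark 4.

```r
# Schematic implementation of Algorithm 1 with a diagonal D^(t).
stochastic_proximal <- function(y, beta0, n_iter, burnin, sample_xi, grad_H,
                                hess_diag_H, prox, mu = 1, eps = 1e-2,
                                c1 = 1e-2, c2 = 1e2) {
  p <- length(beta0); beta <- beta0
  delta <- rep(1, p); d1 <- rep(0, p); d2 <- rep(0, p)
  beta_sum <- rep(0, p)
  for (t in 1:n_iter) {
    gamma_t <- mu * t^(-0.5 - eps)               # step size (Remark 4)
    xi <- sample_xi(y, beta)                     # stochastic step
    G  <- grad_H(xi, beta)                       # gradient of H at current beta
    h2 <- hess_diag_H(xi, beta)                  # diagonal second derivatives
    d1 <- (1 - gamma_t) * d1 + gamma_t * (h2 - G^2)
    d2 <- (1 - gamma_t) * d2 + gamma_t * G^2
    delta <- (t - 1) / t * delta +               # D^(t): running average of the
      pmin(pmax(d1 + d2^2, c1), c2) / t          # truncated delta-tilde values
    beta <- prox(beta - gamma_t * G / delta, gamma_t, delta)  # proximal step
    if (t > burnin) beta_sum <- beta_sum + beta
  }
  beta_sum / (n_iter - burnin)                   # Polyak-Ruppert average
}
```
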

In what follows, we make a few remarks to provide some intuition about the algorithm.

Remark 1

(Connection with stochastic gradient descent) To provide some intuition about the proposed method, we first make a connection between the proposed method and the stochastic gradient descent (SGD) algorithm. In fact, when the sampling of \(\varvec{\xi }\) is exact in the stochastic step, \({\mathbf {G}}^{(t)}\) is a stochastic gradient of the smooth part of our objective function, in the sense that \({\mathbb {E}}({\mathbf {G}}^{(t)}\mid {\mathbf {y}}, {\varvec{\beta }}^{(t-1)}) = \nabla h({\varvec{\beta }})\vert _{{\varvec{\beta }}= {\varvec{\beta }}^{(t-1)}}.\) If, in addition, there is no constraint or non-smooth penalty, i.e., \(g({\varvec{\beta }}) \equiv 0\), then the proximal step degenerates to an SGD update \({\varvec{\beta }}^{(t)} = {\varvec{\beta }}^{(t-1)} - \gamma _t({\mathbf {D}}^{(t)})^{-1}{\mathbf {G}}^{(t)}\). In that case, the proposed method becomes a version of SGD.

Remark 2

(Proximal step) We provide some intuition about the proximal step. We start with two special cases. First, as mentioned in Remark 1, if there is no constraint or non-smooth penalty, then the proximal step is nothing but a stochastic gradient descent step. This is because the scaled proximal operator degenerates to an identity map, i.e., \(\text {Prox}_{\gamma , g}^{{\mathbf {D}}}({\varvec{\beta }}) = {\varvec{\beta }}\). Second, when the g function involves constraints but does not contain a non-smooth penalty, the proximal step is a projected stochastic gradient descent step. That is, one first performs a stochastic gradient descent update \({\tilde{{\varvec{\beta }}}}^{(t)} = {\varvec{\beta }}^{(t-1)} - \gamma _t({\mathbf {D}}^{(t)})^{-1}{\mathbf {G}}^{(t)}\). Then \({\tilde{{\varvec{\beta }}}}^{(t)}\) is projected back to the feasible region \({\mathcal {B}}\) by the scaled proximal operator:

$$\begin{aligned} {\hat{{\varvec{\beta }}}} = \mathop {\hbox {arg min}}\limits _{{\varvec{\beta }}\in {\mathcal {B}}} \Vert {\varvec{\beta }}- {\tilde{{\varvec{\beta }}}}^{(t)}\Vert _{{\mathbf {D}}}, \end{aligned}$$

which is a projection under the norm \(\Vert \cdot \Vert _{{\mathbf {D}}}\). When \({\mathbf {D}}\) is an identity matrix as in the vanilla (i.e., non-scaled) proximal operator, then the projection is based on the Euclidean distance.

More generally, when the g function involves non-smooth penalties, the proximal step can be viewed as minimizing the sum of \(g({\varvec{\beta }})\) and a quadratic approximation of \(h({\varvec{\beta }})\) at \({\varvec{\beta }}^{(t-1)}\); see Lee et al. (2014) for more explanations. We provide an example to facilitate understanding. Suppose that

$$\begin{aligned} g({\varvec{\beta }}) = \lambda \sum _{i=1}^p \vert \beta _i\vert \end{aligned}$$

is the Lasso penalty, and \({\mathbf {D}} = diag(\delta _1,\ldots , \delta _p)\) is a diagonal matrix, where \(\lambda , \delta _i > 0\), \(i = 1,\ldots , p\). Then \(\text {Prox}_{\gamma , g}^{{\mathbf {D}}}({\tilde{{\varvec{\beta }}}}^{(t)})\) involves solving p optimization problems separately, each of which takes the form

$$\begin{aligned} {\hat{\beta }}_i = \mathop {\hbox {arg min}}\limits _{x} \frac{1}{2} (x - {\tilde{\beta }}_i^{(t)})^2 + \frac{\lambda \gamma }{\delta _i} |x|. \end{aligned}$$
(16)

It is well known that (16) has a closed-form solution given by soft-thresholding (see Chapter 3, Friedman et al., 2001):

$$\begin{aligned} {\hat{\beta }}_i = \left\{ \begin{array}{ll} {\tilde{\beta }}_i^{(t)} - \frac{\lambda \gamma }{\delta _i}, &{} ~\text{ if }~ {\tilde{\beta }}_i^{(t)} > \frac{\lambda \gamma }{\delta _i}, \\ {\tilde{\beta }}_i^{(t)} + \frac{\lambda \gamma }{\delta _i}, &{} ~\text{ if }~ {\tilde{\beta }}_i^{(t)} < - \frac{\lambda \gamma }{\delta _i}, \\ 0, &{} ~\text{ otherwise }. \end{array}\right. \end{aligned}$$
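In R, this scaled proximal update for the Lasso penalty is one line of elementwise soft-thresholding, as in the following sketch (ours, for illustration):

```r
# Soft-thresholding solution of (16), applied coordinate-wise:
# the i-th coordinate uses threshold lambda * gamma / delta_i.
prox_lasso_scaled <- function(beta_tilde, lambda, gamma, delta) {
  thr <- lambda * gamma / delta
  sign(beta_tilde) * pmax(abs(beta_tilde) - thr, 0)
}
```
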

Remark 3

(Role of \({\mathbf {D}}^{(t)}\)) Our proximal step is the quasi-Newton proximal update proposed in Lee et al. (2014) under a non-stochastic optimization setting. As shown in Lee et al. (2014), quasi-Newton proximal methods converge faster than first-order proximal methods under the non-stochastic setting. Here, the diagonal matrix \({\mathbf {D}}^{(t)}\) is used to approximate the Hessian matrix of \(h({\varvec{\beta }})\) at \({\varvec{\beta }}^{(t)}\). When \({\varvec{\beta }}^{(t)}\) converges to \({\hat{{\varvec{\beta }}}}\), \(\delta _i^{(t)}\), the ith diagonal entry of \({\mathbf {D}}^{(t)}\), converges to \(T\left( \frac{\partial ^2 h}{\partial \beta _i^2}\vert _{{\varvec{\beta }}= {\hat{{\varvec{\beta }}}}}; c_{1}, c_{2}\right) \), where T is the truncation function defined in (15); see Remark 8 for more explanations.

In the proposed update, we choose \({\mathbf {D}}^{(t)}\) to be a diagonal matrix for computational convenience. Specifically, as discussed in Remark 2, the proximal step is in a closed form when \({\mathbf {D}}^{(t)}\) is a diagonal matrix. In addition, the proximal step requires calculating the inverse of \({\mathbf {D}}^{(t)}\), whose complexity is much lower when \({\mathbf {D}}^{(t)}\) is diagonal.

We point out that using a diagonal matrix to approximate the Hessian matrix is a popular and effective trick in numerical optimization (e.g., Chapter 5, Bertsekas et al., 1992; Becker & Le Cun, 1988), especially for large-scale optimization problems. In principle, it is possible to allow \({\mathbf {D}}^{(t)}\) to be non-diagonal. In fact, it is not difficult to generalize the BFGS updating formula for \({\mathbf {D}}^{(t)}\) given in Lee et al. (2014) to a stochastic version.

Our choice of \({\mathbf {D}}^{(t)}\) guarantees that its eigenvalues lie in the interval \([c_{1}, c_{2}]\), ruling out the singular situation where \({\mathbf {D}}^{(t)}\) is not strictly positive definite. In the implementation, we set \(c_{1}\) to a sufficiently small constant and \(c_{2}\) to a sufficiently large constant. According to our simulations, the algorithm tends to be insensitive to these choices.

We further provide some remarks regarding the implementation details.

Remark 4

(Choices of step size) As will be shown in Sect. 3, the convergence of the proposed method requires the step sizes to satisfy \(\sum _{t=1}^{\infty } \gamma _t = \infty \) and \(\sum _{t=1}^{\infty } \gamma _t^2 < \infty \). This requirement is also needed in the Robbins–Monro algorithm. Here, we choose the step size \(\gamma _t = \mu t^{-\frac{1}{2} - \varepsilon }\) so that the above requirement is satisfied, where \(\mu \) is a positive constant and \(\varepsilon \) is a small positive constant. As will be shown in Sect. 3, with sufficiently small \(\varepsilon \), \({\bar{{\varvec{\beta }}}}_n\) is almost optimal in terms of its convergence speed. We point out that \(\varepsilon \) is needed to prove the convergence of \({\bar{{\varvec{\beta }}}}_n\) under our non-convex setting. It is not needed if the objective function (2) is convex; see Atchadé et al. (2017). The requirement of \(\varepsilon \) may be an artifact of our proof strategy. Simulation results show that the algorithm converges well even if we set \(\varepsilon = 0\). For the numerical analysis in this paper, we set \(\varepsilon = 10^{-2}\).

We point out that our choice of step size is very different from the step size in the Robbins–Monro algorithm, for which asymptotic results (Fabian, 1968) suggest that the optimal choice of step size satisfies \(\gamma _t = O(1/t)\).

Remark 5

(Starting point) As the objective function (2) is typically non-convex for latent variable models, the choice of the starting point \({\varvec{\beta }}^{(0)}\) matters: the algorithm is more likely to converge to the global optimum given a good starting point. One strategy is to run the proposed algorithm with multiple random starting points and choose the solution with the smallest objective function value, which also mitigates convergence to local optima. Alternatively, one may find a good starting point using less accurate but computationally faster estimators, such as the constrained joint maximum likelihood estimator (Chen et al., 2019; 2020) or spectral methods (H. Zhang et al., 2020a).

Remark 6

(Sampling in stochastic step) As mentioned in Remark 1, when the latent variables \(\varvec{\xi }\) can be sampled exactly in the stochastic step, \({\mathbf {G}}^{(t)}\) is a stochastic gradient of \(h({\varvec{\beta }})\). Unfortunately, exact sampling is only possible in some situations, such as restricted latent class analysis. In most cases, we only have approximate samples from an MCMC algorithm. For example, as discussed below, the latent variables in IFA can be sampled by a block-wise Gibbs sampler. With approximate samples, \({\mathbf {G}}^{(t)}\) is only approximately unbiased. As we show in Sect. 3, such \({\mathbf {G}}^{(t)}\) may still yield convergence of \({\bar{{\varvec{\beta }}}}_n\).

Remark 7

(Stopping criterion) In the implementation of Algorithm 1, we stop the iterative update by monitoring a window of successive differences in \({\varvec{\beta }}^{(t)}\). More precisely, we stop the iteration if all differences in the window are less than a given threshold. Unless otherwise stated, the numerical analysis in this paper uses a window of size 3. The same stopping criterion is also adopted by the Metropolis–Hastings Robbins–Monro algorithm proposed by Cai (2010a).
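A minimal R version of this stopping rule (ours; the names and the default tolerance are illustrative) is:

```r
# Stop when all successive differences in a trailing window are small.
# beta_hist: list of parameter vectors beta^(1), ..., beta^(t).
converged <- function(beta_hist, tol = 1e-4, window = 3) {
  n <- length(beta_hist)
  if (n <= window) return(FALSE)
  diffs <- sapply((n - window + 1):n,
                  function(t) max(abs(beta_hist[[t]] - beta_hist[[t - 1]])))
  all(diffs < tol)
}
```
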

Finally, as we explain in Remark 8, certain quantities, including the Hessian matrix of \(l({\varvec{\beta }})\), can be obtained as a by-product of the proposed algorithm.

Remark 8

(By-product) It is often of interest to compute quantities of the form

$$\begin{aligned} {\hat{M}} = {\mathbb {E}}\left[ m({\mathbf {y}}, \varvec{\xi }\mid {\varvec{\beta }})\mid {\mathbf {y}}, {\varvec{\beta }}\right] \big \vert _{{\varvec{\beta }}= {\hat{{\varvec{\beta }}}}}, \end{aligned}$$
(17)

where \(m({\mathbf {y}}, \varvec{\xi }\mid {\varvec{\beta }})\) is a given function with an analytic form and the conditional expectation \({\mathbb {E}}\left[ \cdot \mid {\mathbf {y}}, {\varvec{\beta }}\right] \) is with respect to the conditional distribution of \(\varvec{\xi }\) given \({\mathbf {y}}\). The quantity (17) is intractable due to the high-dimensional integral with respect to \(\varvec{\xi }\). One such example is the Hessian matrix of \(l(\varvec{\beta })\) at \({\hat{{\varvec{\beta }}}}\) discussed in Sect. 1.2, which is a key quantity for statistical inference on \({\hat{{\varvec{\beta }}}}\). In fact, by Louis’ formula (Louis, 1982),

$$\begin{aligned} \begin{aligned} \frac{\partial ^{2} l(\varvec{\beta })}{\partial \varvec{\beta } \partial \varvec{\beta }^{\top }}=&{\mathbb {E}}\left[ \left. \frac{\partial ^{2} \log f({\mathbf {y}}, \varvec{\xi }\mid \varvec{\beta })}{\partial {\varvec{\beta }}\partial {\varvec{\beta }}^{\top }} + \frac{\partial \log f({\mathbf {y}}, \varvec{\xi }\mid \varvec{\beta })}{\partial {\varvec{\beta }}}\left[ \frac{\partial \log f({\mathbf {y}}, \varvec{\xi }\mid \varvec{\beta })}{\partial {\varvec{\beta }}}\right] ^{\top } \right| {\mathbf {y}}, {\varvec{\beta }}\right] \\&- {\mathbb {E}}\left[ \left. \frac{\partial \log f({\mathbf {y}}, \varvec{\xi }\mid \varvec{\beta })}{\partial {\varvec{\beta }}} \right| {\mathbf {y}}, {\varvec{\beta }}\right] \left( {\mathbb {E}}\left[ \left. \frac{\partial \log f({\mathbf {y}}, \varvec{\xi }\mid \varvec{\beta })}{\partial {\varvec{\beta }}} \right| {\mathbf {y}}, {\varvec{\beta }}\right] \right) ^\top . \end{aligned} \end{aligned}$$

The computation of (17) is a straightforward by-product of the proposed algorithm. To approximate \({\hat{M}}\), we only need to add the following update in each iteration

$$\begin{aligned} M^{(t)} = M^{(t-1)} + \gamma _t\big (m({\mathbf {y}},\varvec{\xi }^{(t)}\mid {\varvec{\beta }}^{(t)})-M^{(t-1)}\big ), \end{aligned}$$
(18)

for \(t \ge 2\), where \(M^{(1)} = m({\mathbf {y}},\varvec{\xi }^{(1)}\mid {\varvec{\beta }}^{(1)})\). We approximate \({\hat{M}}\) by the Polyak–Ruppert average \({\bar{M}}_n = (\sum _{t=\varpi + 1}^n M^{(t)})/(n-\varpi )\). When the sequence \({\varvec{\beta }}^{(t)}\) converges to \({\hat{{\varvec{\beta }}}}\) (see Theorem 1 for the convergence analysis), under mild conditions, Theorem 3.17 of Benveniste et al. (1990) suggests the convergence of \(M^{(n)}\) to \({\hat{M}}\) with probability 1, which further implies the convergence of \({\bar{M}}_n\) to \({\hat{M}}\). Note that we use the averaged estimator \({\bar{M}}_n\) as it tends to converge faster than the pre-averaging sequence \(M^{(n)}\). We point out that the updating rule for the diagonal matrix \({\mathbf {D}}^{(t)}\) in Algorithm 1 makes use of such an averaged estimator.
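The update (18) and its averaging take only a few lines of R, sketched below under the assumption that the draws \(\varvec{\xi }^{(t)}\) and iterates \({\varvec{\beta }}^{(t)}\) have been stored; m_fun and the other names are ours.

```r
# Running approximation (18) of M-hat, followed by Polyak-Ruppert averaging.
byproduct_average <- function(y, xi_draws, beta_draws, m_fun, burnin,
                              mu = 1, eps = 1e-2) {
  n <- length(xi_draws)
  M <- m_fun(y, xi_draws[[1]], beta_draws[[1]])   # M^(1)
  M_sum <- 0 * M
  for (t in 2:n) {
    gamma_t <- mu * t^(-0.5 - eps)
    M <- M + gamma_t * (m_fun(y, xi_draws[[t]], beta_draws[[t]]) - M)
    if (t > burnin) M_sum <- M_sum + M
  }
  M_sum / (n - burnin)                            # M-bar_n
}
```
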

Remark 9

(Burn-in size) Like MCMC algorithms, the proposed method has a burn-in period, whose parameter updates are not used in the Polyak–Ruppert averaging. The choice of the burn-in size will not affect the asymptotic properties of the method, but does affect the empirical performance. This is because early parameter updates may be far from the solution due to the effect of the starting point; including them in the Polyak–Ruppert averaging may introduce a high bias. In our numerical analysis, the burn-in size \(\varpi \) is fixed to be sufficiently large in each of our examples. Adaptive choice of the burn-in size is possible; see S. Zhang et al. (2020b).

2.2 Example I: Item Factor Analysis

We now explain the details of using the proposed method to solve (5) for exploratory IFA. The computation is similar when replacing the \(L_1\) regularization by the elastic net regularization. For confirmatory IFA, the stochastic step is the same as that of exploratory IFA and the proximal update step is straightforward as no penalty is involved. Therefore, the details for the computation of confirmatory IFA are omitted here.

We first consider the stochastic step for solving (5). Note that \(\varvec{\xi }_1\),..., \(\varvec{\xi }_N\) are conditionally independent given data, and thus can be sampled separately. For each \(\varvec{\xi }_i\), we sample its entries by Gibbs sampling. More precisely, each entry is sampled by adaptive rejection sampling (Gilks & Wild, 1992; S. Zhang et al., 2020b), as the conditional distribution of \(\xi _{ik}\) given data and the other entries of \(\varvec{\xi }_i\) is log-concave. We refer the readers to S. Zhang et al. (2020b) for more explanations of this sampling procedure. If a normal ogive IFA is considered instead of the logistic model above, then we can sample \(\varvec{\xi }^{(t)}_i\) by a similar Gibbs method with a data augmentation trick; see Chen & Zhang (2020a) for a review.
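For readers who want a self-contained illustration of the stochastic step, the following R sketch substitutes a simple random-walk Metropolis-within-Gibbs kernel for the adaptive rejection sampler used in our implementation (the substitution and all names are ours); it updates one \(\varvec{\xi }_i\) coordinate by coordinate under the logistic model (3).

```r
# One Metropolis-within-Gibbs sweep for xi_i under the logistic IFA model.
# y_i: J-vector; d: J-vector; A: J x K loadings; Sigma_inv: inverse of B B^T.
sample_xi_i <- function(xi_i, y_i, d, A, Sigma_inv, step = 0.5) {
  logpost <- function(x) {                       # log posterior up to a constant
    eta <- as.vector(d + A %*% x)
    sum(y_i * eta - log1p(exp(eta))) - 0.5 * sum(x * (Sigma_inv %*% x))
  }
  for (k in seq_along(xi_i)) {
    prop <- xi_i
    prop[k] <- xi_i[k] + rnorm(1, sd = step)     # random-walk proposal
    if (log(runif(1)) < logpost(prop) - logpost(xi_i)) xi_i <- prop
  }
  xi_i
}
```
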

We now discuss the computation for the proximal step. Recall that \({\varvec{\beta }}= \{{\mathbf {B}}, d_j, {\mathbf {a}}_j, j =1,\ldots , J\}\). We denote

$$\begin{aligned} {\tilde{{\varvec{\beta }}}}^{(t)} = {\varvec{\beta }}^{(t-1)} - \gamma _t({\mathbf {D}}^{(t)})^{-1}{\mathbf {G}}^{(t)} \end{aligned}$$

as the input of the scaled proximal operator. The parameter update is given by

$$\begin{aligned} {\varvec{\beta }}^{(t)} = \text {Prox}_{\gamma _t, g}^{{\mathbf {D}}^{(t)}}({\tilde{{\varvec{\beta }}}}^{(t)}) = \mathop {\hbox {arg min}}\limits _{{\varvec{\beta }}} \left\{ g({\varvec{\beta }}) + \frac{1}{2\gamma _t} \sum _{i=1}^p \delta _i^{(t)}(\beta _i - {\tilde{\beta }}_i^{(t)} )^2\right\} , \end{aligned}$$

where the parameter space

$$\begin{aligned} {\mathcal {B}} = \{{\varvec{\beta }}\in {\mathbb {R}}^{p}: b_{kk'} = 0, 1\le k < k' \le K, \sum _{k' = 1}^K b_{kk'}^2 = 1, k = 1,\ldots , K\}, \end{aligned}$$

and \(g({\varvec{\beta }}) = \lambda \sum _{j = 1}^J\sum _{k=1}^K \vert a_{jk}\vert + I_{{\mathcal {B}}}({\varvec{\beta }})\) only involves loading parameters \(a_{jk}\) and parameters \({\mathbf {B}}\) for the covariance matrix.

We first look at the update for \(d_j\)s. As the g function does not involve \(d_j\), its update is simply \(d_j^{(t)} = {\tilde{d}}_j^{(t)}\), where \({\tilde{d}}_j^{(t)}\) is the corresponding component in \({\tilde{{\varvec{\beta }}}}^{(t)}\). We then look at the update for the loading parameters \(a_{jk}\). Suppose that \(a_{jk}\) corresponds to the \(i_{a_{jk}}\)th component of \({\varvec{\beta }}\). Then the update is given by solving the optimization

$$\begin{aligned} a_{jk}^{(t)} = \mathop {\hbox {arg min}}\limits _{a_{jk}}~ \lambda |a_{jk}| + \frac{1}{2\gamma _t} \delta _{i_{a_{jk}}}^{(t)} (a_{jk} - {\tilde{a}}_{jk}^{(t)})^2. \end{aligned}$$

As discussed in Remark 2, this optimization has a closed-form solution via soft-thresholding. We finally look at the update for \({\mathbf {B}}\). Suppose that \(b_{kl}\) corresponds to the \(i_{b_{kl}}\)th component of \({\varvec{\beta }}\). Then the update of \({\mathbf {b}}_k\), the kth row of \({\mathbf {B}}\), is given by solving the following optimization problem:

$$\begin{aligned} {\mathbf {b}}_k^{(t)} = \mathop {\hbox {arg min}}\limits _{{\mathbf {b}}_k: \Vert {\mathbf {b}}_k\Vert =1, b_{kk'} = 0, k' > k} \left\{ \sum _{l=1}^K \delta _{i_{b_{kl}}}^{(t)} (b_{kl} - {\tilde{b}}_{kl}^{(t)})^2 \right\} , \end{aligned}$$

which can be easily solved by the method of Lagrange multipliers.
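Concretely, stationarity of the Lagrangian gives \(b_{kl} = \delta _{i_{b_{kl}}}^{(t)} {\tilde{b}}_{kl}^{(t)} / (\delta _{i_{b_{kl}}}^{(t)} + \eta )\) for the free entries, where the multiplier \(\eta \) is chosen so that the row has unit norm; since the norm is decreasing in \(\eta \), a one-dimensional root search suffices. The R sketch below (our derivation, with illustrative names; it assumes the free part of the input is not identically zero) takes only the free entries of \({\mathbf {b}}_k\) as input.

```r
# Weighted projection of btilde onto the unit sphere with weights delta,
# i.e., argmin sum_l delta_l * (b_l - btilde_l)^2 subject to ||b|| = 1.
project_bk <- function(btilde, delta) {
  f <- function(eta) sqrt(sum((delta * btilde / (delta + eta))^2)) - 1
  # ||b(eta)|| decreases toward 0 on (-min(delta), Inf), so bracket and solve.
  eta <- uniroot(f, lower = -min(delta) + 1e-8, upper = 1e8)$root
  delta * btilde / (delta + eta)
}
```
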

2.3 Example II: Restricted LCA

We now provide a brief discussion of the computation for the restricted LCA model. First, the stochastic step is straightforward, as the posterior distribution of each \(\varvec{\xi }_i\) is still a categorical distribution, which can be sampled exactly. Second, the proximal step requires solving a quadratic programming problem. Again, we denote

$$\begin{aligned} {\tilde{{\varvec{\beta }}}}^{(t)} = {\varvec{\beta }}^{(t-1)} - \gamma _t({\mathbf {D}}^{(t)})^{-1}{\mathbf {G}}^{(t)}. \end{aligned}$$

The proximal step requires solving the following quadratic programming problem

$$\begin{aligned} \begin{aligned}&\min _{{\varvec{\beta }}} ({\varvec{\beta }}- {\tilde{{\varvec{\beta }}}}^{(t)})^\top {\mathbf {D}}^{(t)} ({\varvec{\beta }}- {\tilde{{\varvec{\beta }}}}^{(t)}), \\ s.t.~&\max _{\varvec{\alpha }\succeq {\mathbf {q}}_{j}} \theta _{j, \varvec{\alpha }} = \min _{\varvec{\alpha }\succeq {\mathbf {q}}_{j}} \theta _{j, \varvec{\alpha }} \ge \theta _{j, \varvec{\alpha }'}\ge \theta _{j,{\varvec{0}}}, ~\text{ for } \text{ all }~ \varvec{\alpha }'\nsucceq {\varvec{q}}_j,\\&~\text{ and }~ \nu _{{\mathbf {0}}} = 0. \end{aligned} \end{aligned}$$
(19)

Quadratic programming is among the most studied convex optimization problems (Chapter 4, Boyd & Vandenberghe, 2004), and many efficient solvers exist. In our simulation study in Sect. 4.3, we use the dual method of Goldfarb & Idnani (1983) implemented in the R package quadprog (Turlach et al., 2019).
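As an illustration of the interface (the construction of the constraint matrices from the \({\mathbf {Q}}\)-matrix is model-specific and omitted; the wrapper name is ours), the proximal step (19) can be passed to quadprog as follows. Note that minimizing \(({\varvec{\beta }}- {\tilde{{\varvec{\beta }}}}^{(t)})^\top {\mathbf {D}}^{(t)} ({\varvec{\beta }}- {\tilde{{\varvec{\beta }}}}^{(t)})\) is equivalent to minimizing \(\frac{1}{2}{\varvec{\beta }}^\top {\mathbf {D}}^{(t)}{\varvec{\beta }}- ({\mathbf {D}}^{(t)}{\tilde{{\varvec{\beta }}}}^{(t)})^\top {\varvec{\beta }}\), the form expected by the solver.

```r
# Solve the proximal quadratic program (19) with quadprog::solve.QP.
# D: diagonal matrix D^(t); Amat, bvec, meq encode the (in)equality
# constraints in (19), with the first meq constraints treated as equalities.
library(quadprog)
prox_qp <- function(btilde, D, Amat, bvec, meq) {
  solve.QP(Dmat = D, dvec = as.vector(D %*% btilde),
           Amat = Amat, bvec = bvec, meq = meq)$solution
}
```
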

2.4 Comparison with Related Algorithms

We compare Algorithm 1 with several related algorithms in more detail.

2.4.1 Robbins–Monro SA and Variants

The proposed method is closely related to the stochastic approximation approach first proposed in Robbins & Monro (1951), and its variants given in Gu & Kong (1998) and Cai (2010a) that are specially designed for latent variable model estimation. Note that the Robbins–Monro method is the first SGD method with a convergence guarantee. Both the methods of Gu & Kong (1998) and Cai (2010a) approximate the original Robbins–Monro method by using MCMC sampling to generate an approximate stochastic gradient in each iteration, when an unbiased stochastic gradient is difficult to obtain. None of these methods handles complex constraints or non-smooth objective functions.

When there is no constraint or penalty on parameters (i.e., \(g({\varvec{\beta }}) \equiv 0\)), the proximal operator degenerates to an identity map. In this case, the proposed method is essentially the same as Gu & Kong (1998) and Cai (2010a), except for the sampling method in the stochastic step, the way the Hessian matrix is approximated, the specific choices of step size, and the averaging in the last step of the proposed method. Among these differences, the step size and the trajectory averaging are key to the advantage of the proposed method.

As pointed out in Remark 4, the Robbins–Monro procedure has the same general requirement on the step size as the proposed method. Specifically, the Robbins–Monro procedure, as well as its MCMC variants (Gu & Kong, 1998; Cai, 2010a), typically lets the step size \(\gamma _t\) decay at the rate 1/t, as suggested by asymptotic theory (Fabian, 1968). However, this step size is often too small at the early stage of the algorithm, resulting in poor performance in practice (Sect. 4.5.3, Spall, 2003). In contrast, the proposed method adopts a larger step size. By further adopting Polyak–Ruppert averaging (Ruppert, 1988; Polyak & Juditsky, 1992), we show in Sect. 3 that the proposed method almost achieves the optimal convergence speed.

2.4.2 Perturbed Proximal Gradient Algorithm

The proximal gradient descent algorithm (Parikh & Boyd, 2014) is a non-stochastic algorithm for solving non-smooth and/or constrained optimization problems. For example, the widely used gradient projection algorithm for oblique rotation in factor analysis (Jennrich, 2002) is a special case. The vanilla proximal gradient descent algorithm does not use the second-order information of the objective function and thus sometimes converges slowly. To improve the convergence speed, proximal Newton-type methods have been proposed in Lee et al. (2014) that utilize the second-order information of the smooth part of the objective function.

The perturbed proximal gradient algorithm (Atchadé et al., 2017) solves a similar optimization problem as in (2) by combining the methods of stochastic approximation, proximal gradient descent, and Polyak–Ruppert averaging. The proposed method extends Atchadé et al. (2017) by adopting the Newton-type proximal update suggested in Lee et al. (2014). The method of Atchadé et al. (2017) can be viewed as a special case of the proposed one with \(c_{1} = c_{2}\). As shown by the simulation studies in the sequel, thanks to the second-order information, the proposed method converges much faster than that of Atchadé et al. (2017). We also point out that the theoretical analysis of Atchadé et al. (2017) focuses on convex optimization, while in Sect. 3 we consider a more general setting of non-convex optimization that includes a wide range of latent variable model estimation problems as special cases.

2.4.3 Stochastic EM Algorithm

The proposed method is also closely related to the stochastic-EM algorithm (Celeux, 1985; Ip, 2002; Nielsen, 2000; S. Zhang et al., 2020b). The stochastic-EM algorithm is a similar iterative algorithm, consisting of a stochastic step and a maximization step in each iteration, where the stochastic step is the same as that in the proposed algorithm. The maximization step plays a similar role as the proximal step in the proposed algorithm. More precisely, when there is no constraint or penalty, the maximization step of the stochastic-EM algorithm obtains parameter update \({\varvec{\beta }}^{(t)}\) by minimizing the negative complete data log-likelihood function \(-\log f({\mathbf {y}}, \varvec{\xi }^{(t)}\mid {\varvec{\beta }})\), instead of a stochastic gradient update. It is also recommended to perform a trajectory averaging in the stochastic-EM algorithm (Nielsen, 2000; S. Zhang et al., 2020b), like the last step of the proposed algorithm. As pointed out in S. Zhang et al. (2020b), the stochastic EM algorithm can potentially handle constraints and non-smooth penalties on parameters by incorporating them into the maximization step.

The stochastic-EM algorithm is typically not as fast as the proposed method, as revealed by the simulation studies below. This is because it requires completely solving an optimization problem in each iteration, which is time-consuming, especially when constraints and non-smooth penalties are involved. On the other hand, the proximal step of the proposed algorithm can often be performed efficiently.

3 Theoretical Properties

In what follows, we establish the asymptotic properties of the proposed algorithm under suitable technical conditions. Readers who are not interested in the asymptotic theory can skip this section without affecting the reading of the rest of the paper. Note that in this section, we view the data as fixed and the randomness comes from the sampling of the latent variables. Expectations are taken with respect to the latent variables \(\varvec{\xi }\) given data \({\varvec{y}}\) and parameters \({\varvec{\beta }}\), denoted by \({\mathbb {E}}(\cdot \mid {\varvec{\beta }})=\int \cdot \, \pi _{{\varvec{\beta }}}(\varvec{\xi })d\varvec{\xi }\), where \(\pi _{\varvec{\beta }}\) is the posterior distribution of \(\varvec{\xi }\) given \({\varvec{y}}\) and \({\varvec{\beta }}.\) Let \(\Vert \cdot \Vert \) denote the vector \(l_2\)-norm. Following the typical convergence analysis of non-convex optimization (e.g., Chapter 3, Floudas, 1995), we first discuss the convergence of the sequence \({\varvec{\beta }}^{(t)}\) to a stationary point of the objective function \(h({\varvec{\beta }}) + g({\varvec{\beta }})\) in Theorem 1, which follows the theoretical development in Duchi & Ruan (2018). Then, with some additional assumptions on the local geometry of the objective function at the stationary point being converged to, we show the convergence rate of the Polyak–Ruppert averaged sequence \({\bar{{\varvec{\beta }}}}_n\) in Theorem 2, which extends the results of Atchadé et al. (2017) to the setting of non-convex optimization.

For a function \(f:{\mathbb {R}}^d\rightarrow {\mathbb {R}}\cup \{+\infty \},\) denote the Fréchet subdifferential (Chapter 8.B, Rockafellar & Wets, 1998) of f at the point \({\varvec{x}}\) by \(\partial f({\varvec{x}}),\)

$$\begin{aligned} \partial f({\varvec{x}})=\left\{ {\varvec{z}} \in {\mathbb {R}}^{d}: f({\varvec{y}}) \ge f({\varvec{x}})+ {\varvec{z}}^\top ({\varvec{y}}-{\varvec{x}})+o(\Vert {\varvec{y}}-{\varvec{x}}\Vert ) \text{ as } {\varvec{y}} \rightarrow {\varvec{x}}\right\} . \end{aligned}$$
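As a standard one-dimensional example, for \(f(x) = |x|\) the definition gives

$$\begin{aligned} \partial f(0)=[-1,1], \qquad \partial f(x)=\{{\text {sign}}(x)\}~\text{ for }~x\ne 0, \end{aligned}$$

since \(|y| \ge zy + o(|y|)\) as \(y\rightarrow 0\) holds exactly when \(z\in [-1,1]\).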

Define the set of stationary points of the objective function as

$$\begin{aligned} {\mathcal {B}}^* = \{{\varvec{\beta }}\in {\mathcal {B}}: \exists \ {\mathbf {x}} \in \partial h({\varvec{\beta }}) + \partial g({\varvec{\beta }}) ~\text{ with }~ {\mathbf {x}}^\top ({\mathbf {y}}-{\varvec{\beta }}) \ge 0, ~\text{ for } \text{ all }~ {\mathbf {y}}\in {\mathcal {B}}\}. \end{aligned}$$

Note that the global minimum \({\hat{{\varvec{\beta }}}}\) is a stationary point, i.e., \({\hat{{\varvec{\beta }}}} \in {\mathcal {B}}^*\). In addition, when the objective function is smooth, i.e., \(g({\varvec{\beta }}) \equiv 0\), we have \({\mathcal {B}}^* = \{{\varvec{\beta }}\in {\mathcal {B}}: \nabla h({\varvec{\beta }}) = 0 \},\) which is the standard definition of the set of stationary points of a smooth function.

The following conditions are imposed on the objective function.

  1. H1.

    \({\mathcal {B}}\) is compact and contains finitely many stationary points. For stationary points \({\varvec{\beta }},{\varvec{\beta }}^\prime \in {\mathcal {B}}^*,\) \(h({\varvec{\beta }})+ g({\varvec{\beta }})= h({\varvec{\beta }}^\prime )+g({\varvec{\beta }}^\prime )\) if and only if \({\varvec{\beta }}= {\varvec{\beta }}^\prime .\)

  2. H2.

    \(H(\varvec{\xi },{\varvec{\beta }})\) is differentiable with respect to \({\varvec{\beta }}\) for each given \(\varvec{\xi }\); let \({\mathbf {G}}_{{\varvec{\beta }}}(\varvec{\xi })= \partial H(\varvec{\xi },{\varvec{\beta }})/\partial {\varvec{\beta }}.\) Define the function \(M_\epsilon : {\mathcal {B}}\times \Xi \rightarrow {\mathbb {R}}_+\) as

    $$\begin{aligned} M_\epsilon ({\varvec{\beta }};\varvec{\xi }) = \sup _{{\varvec{\beta }}'\in {\mathcal {B}},\Vert {\varvec{\beta }}'-{\varvec{\beta }}\Vert <\epsilon }\Vert {\mathbf {G}}_{{\varvec{\beta }}'}(\varvec{\xi })\Vert . \end{aligned}$$

    There exists \(\epsilon _0>0\) such that for all \(0<\epsilon <\epsilon _0,\)

    $$\begin{aligned} {\mathbb {E}}[M_\epsilon ({\varvec{\beta }};\varvec{\xi })^2\mid {\varvec{\beta }}]<\infty \text { for all }{\varvec{\beta }}\in {\mathcal {B}}. \end{aligned}$$
  3. H3.

    There exists \(\epsilon _0>0\) such that for all \({\varvec{\beta }}^\prime \in {\mathcal {B}},\) there exists \(\lambda (\varvec{\xi },{\varvec{\beta }}^\prime )\ge 0\) such that

    $$\begin{aligned} {\varvec{\beta }}\mapsto H(\varvec{\xi },{\varvec{\beta }}) + \frac{\lambda (\varvec{\xi },{\varvec{\beta }}^\prime )}{2}\Vert {\varvec{\beta }}-{\varvec{\beta }}_0\Vert ^2 \end{aligned}$$

    is convex on the set \(\{{\varvec{\beta }}: \Vert {\varvec{\beta }}-{\varvec{\beta }}^\prime \Vert \le \epsilon _0\}\) for any \({\varvec{\beta }}_0,\) and \({\mathbb {E}}[\lambda (\varvec{\xi },{\varvec{\beta }}^\prime )\mid {\varvec{\beta }}]<\infty .\)

  4. H4.

    The stochastic gradient \({\mathbf {G}}_{{\varvec{\beta }}^{(t-1)}}(\varvec{\xi }^{(t)})\) is a Monte Carlo approximation of \(\nabla h({\varvec{\beta }}^{(t-1)}).\) That is, if computationally feasible, we take \(\varvec{\xi }^{(t)}\) as an exact sample from \(\pi _{{\varvec{\beta }}^{(t-1)}}\), where, as defined earlier, \(\pi _{{\varvec{\beta }}^{(t-1)}}\) is the posterior distribution of \(\varvec{\xi }\) given \({\varvec{y}}\) and \({\varvec{\beta }}^{(t-1)}\). If not, we sample \(\varvec{\xi }^{(t)}\) from a Markov kernel \(P_{{\varvec{\beta }}^{(t-1)}}\) with invariant distribution \(\pi _{{\varvec{\beta }}^{(t-1)}}\).

  5. H5.

    Define

    $$\begin{aligned} \begin{aligned}&{\varvec{\beta }}_{\gamma }^+(\varvec{\xi }) = \underset{{\varvec{x}} \in {\mathcal {B}}}{{\text {argmin}}}\left\{ [{\varvec{G}}_{{\varvec{\beta }}}(\varvec{\xi })]^\top ({\varvec{x}}-{\varvec{\beta }})+g({\varvec{x}})+\frac{1}{2 \gamma }\left\| {\varvec{x}}-{\varvec{\beta }}\right\| ^{2}_{{\mathbf {D}}}\right\} ,\\&{\varvec{U}}_{\gamma }(\varvec{\xi };{\varvec{\beta }}) = \frac{1}{\gamma }({\varvec{\beta }}- {\varvec{\beta }}_{\gamma }^+(\varvec{\xi })), \end{aligned} \end{aligned}$$
    (20)

    where the step sizes satisfy \(\sum _{t=1}^\infty \gamma _t = \infty \) and \(\sum _{t=1}^\infty \gamma _t^2<\infty .\) Then with probability 1,

    $$\begin{aligned} \lim _{n\rightarrow \infty }\sum _{t=1}^n\gamma _t\left( {\varvec{U}}_{\gamma _t} (\varvec{\xi }^{(t)};{\varvec{\beta }}^{(t-1)}) - {\mathbb {E}}[{\varvec{U}}_{\gamma _t}(\varvec{\xi }^{(t)};{\varvec{\beta }}^{(t-1)})\mid {\varvec{\beta }}^{(t-1)}]\right) \end{aligned}$$

    exists and is finite.

We remark that conditions H1 through H5 are quite mild. Condition H1 imposes mild requirements on the compactness of the parameter space and the properties of the stationary points of the objective function. Specifically, the compactness of the parameter space is often assumed when analyzing stochastic optimization problems without assuming convexity; see, e.g., Gu & Kong (1998), Nielsen (2000), Cai (2010b), and Duchi & Ruan (2018). H1 also requires that the objective function take different values at different stationary points. Conditions H2 and H3 require that the complete-data log-likelihood function \(H(\varvec{\xi },\cdot )\) be locally Lipschitz and weakly convex, respectively. These conditions hold when the complete-data log-likelihood function \(H(\varvec{\xi },\cdot )\) is Lipschitz and convex on the entire parameter space. Requiring only local Lipschitz continuity and weak convexity makes our theory applicable to a wider range of problems. Similar conditions are imposed in Duchi & Ruan (2018). For the examples that we consider in Sects. 1.2 and 1.3, these two conditions are satisfied because \(H(\varvec{\xi },{\varvec{\beta }})\) is smooth and convex in \({\varvec{\beta }}\). Condition H4 is automatically satisfied by the way the latent variables are sampled in Algorithm 1. Finally, H5 is a key condition for the convergence of the sequence \({\varvec{\beta }}^{(t)}.\) When exact samples from the posterior distribution are used, Lemma 1 guarantees that H5 is satisfied. With approximate samples from an MCMC algorithm, H5 may still hold when the bias from the MCMC samples is small.

Lemma 1

Define the filtration of \(\sigma \)-algebras \({\mathcal {F}}_{t-1} = \sigma \left( {\varvec{\beta }}^{(0)}, \varvec{\xi }^{(k)}, 0 \le k \le t-1\right) ,\) and suppose that each \(\varvec{\xi }^{(t)}\) is an exact sample from \(\pi _{{\varvec{\beta }}^{(t-1)}}.\) For a sample \(\varvec{\xi }\) from \(\pi _{\varvec{\beta }},\) let

$$\begin{aligned} \varvec{\epsilon }_{\gamma }(\varvec{\xi };{\varvec{\beta }}) = {\varvec{U}}_{\gamma }(\varvec{\xi };{\varvec{\beta }}) - {\mathbb {E}}[{\varvec{U}}_{\gamma }(\varvec{\xi };{\varvec{\beta }})\mid {\varvec{\beta }}], \end{aligned}$$

then \(\gamma _t\varvec{\epsilon }_{\gamma _t}(\varvec{\xi }^{(t)},{\varvec{\beta }}^{(t-1)})\) is a square-integrable martingale difference sequence with respect to the filtration \(\{{\mathcal {F}}_{t}\},\) and with probability 1, \(\lim _n \sum _{t=1}^n \gamma _t \varvec{\epsilon }_{\gamma _t}(\varvec{\xi }^{(t)},{\varvec{\beta }}^{(t-1)})\) exists and is finite.

Theorem 1

Suppose that conditions H1–H5 hold. Apply Algorithm 1 to the optimization problem (11) with step size \(\gamma _t=t^{-\frac{1}{2}-\epsilon }\) for some \(\epsilon \in (0,\frac{1}{2}]\). Then, with probability 1, the sequence \({\varvec{\beta }}^{(n)}\) converges to a stationary point in \({\mathcal {B}}^*\).

We remark that the convergence guarantee for the proposed method is similar to that of the EM algorithm. In fact, for marginal maximum likelihood estimation, which is non-convex, the EM algorithm also only guarantees convergence to a stationary point (Wu, 1983). Moreover, when the objective function has a unique stationary point (e.g., when it is strictly convex), Theorem 1 guarantees convergence to the global minimum.

The convergence of \({\varvec{\beta }}^{(n)}\) guarantees the convergence of the Polyak–Ruppert averaged sequence \({\bar{{\varvec{\beta }}}}_n\). However, Theorem 1 does not provide information on the convergence speed. In what follows, we establish the convergence speed of \({\bar{{\varvec{\beta }}}}_n\). By Theorem 1, we may assume without loss of generality that \({\varvec{\beta }}^{(n)}\) converges to \({\varvec{\beta }}_* \in {\mathcal {B}}^*.\)

  1. H6.

    There exists \(\delta >0\), such that \(h({\varvec{\beta }})\) is strongly convex in \({\mathcal {B}}_1 = \{{\varvec{\beta }}\in {\mathcal {B}}: \Vert {\varvec{\beta }}- {\varvec{\beta }}_*\Vert \le \delta \}\) and \(\nabla h({\varvec{\beta }})\) is Lipschitz in \({\mathcal {B}}_1\) with Lipschitz constant L.

  2. H7.

    For \({\varvec{\beta }},{\varvec{\beta }}' \in {\mathcal {B}}_1,\) any \(\gamma >0,\) and diagonal matrix \({\mathbf {D}}\) with diagonal entries \(\delta _i\in [c_1,c_2],\) the following conditions hold.

    1. (i)

      \(g\left( {\text {Prox}}^{{\mathbf {D}}}_{\gamma , g}({\varvec{\beta }})\right) -g\left( {\varvec{\beta }}^{\prime }\right) \le -\frac{1}{\gamma }\left\langle {\text {Prox}}^{{\mathbf {D}}}_{\gamma , g}({\varvec{\beta }})-{\varvec{\beta }}^{\prime }, {\text {Prox}}^{{\mathbf {D}}}_{\gamma , g}({\varvec{\beta }})-{\varvec{\beta }}\right\rangle _{{\mathbf {D}}}\).

    2. (ii)

      \(\left\| {\text {Prox}}^{{\mathbf {D}}}_{\gamma , g}({\varvec{\beta }})-{\text {Prox}}^{{\mathbf {D}}}_{\gamma , g}\left( {\varvec{\beta }}^{\prime }\right) \right\| _{{\mathbf {D}}}^{2} +\left\| \left( {\text {Prox}}^{{\mathbf {D}}}_{\gamma , g}({\varvec{\beta }})-{\varvec{\beta }}\right) -\left( {\text {Prox}}^{{\mathbf {D}}}_{\gamma , g}\left( {\varvec{\beta }}^{\prime }\right) -{\varvec{\beta }}^{\prime }\right) \right\| _{{\mathbf {D}}}^{2} \le \left\| {\varvec{\beta }}-{\varvec{\beta }}^{\prime }\right\| _{{\mathbf {D}}}^{2}\).

    3. (iii)

      \(\sup _{\gamma \in (0,c_1 / L]} \sup _{{\varvec{\beta }}\in {\mathcal {B}}_1} \gamma ^{-1}\left\| {\text {Prox}}_{\gamma , g}^{{\mathbf {D}}}({\varvec{\beta }})-{\varvec{\beta }}\right\| <\infty \).

  3. H8.

    For a measurable function \(V: \Xi \rightarrow [1,+\infty ),\) a signed measure \(\mu \) on the \(\sigma \)-field of \(\Xi ,\) and a function \(f: \Xi \rightarrow {\mathbb {R}},\) define

    $$\begin{aligned} |f|_{V} {\mathop {=}\limits ^{ \text{ def } }} \sup _{\varvec{\xi } \in \Xi } \frac{|f(\varvec{\xi })|}{V(\varvec{\xi })}, \quad \Vert \mu \Vert _{V} {\mathop {=}\limits ^{ \text{ def } }} \sup _{f,|f|_{V} \le 1}\left| \int f \mathrm {d} \mu \right| . \end{aligned}$$

    There exist \(\lambda \in (0,1), b<\infty , m\ge 4\) and a measurable function \(W: \Xi \rightarrow [1,+\infty )\) such that

    $$\begin{aligned} \sup _{{\varvec{\beta }}\in {\mathcal {B}}_1}\left| {\mathbf {G}}_{{\varvec{\beta }}}\right| _{W}<\infty , \quad \sup _{{\varvec{\beta }}\in {\mathcal {B}}_1} P_{{\varvec{\beta }}} W^{m} \le \lambda W^{m}+b, \end{aligned}$$

    where \({\mathbf {G}}_{{\varvec{\beta }}}(\varvec{\xi })= \partial H(\varvec{\xi },{\varvec{\beta }})/\partial {\varvec{\beta }}\) and \(P_{{\varvec{\beta }}}\) is the Markov kernel defined in condition H4. In addition, for any \(\ell \in (0,m],\) there exist \(C<\infty \) and \(\rho \in (0,1)\) such that for any \(\varvec{\xi }\in \Xi ,\)

    $$\begin{aligned} \sup _{{\varvec{\beta }}\in {\mathcal {B}}_1}\left\| P_{{\varvec{\beta }}}^{n}(\varvec{\xi }, \cdot )-\pi _{{\varvec{\beta }}}\right\| _{W^{\ell }} \le C \rho ^{n} W^{\ell }(\varvec{\xi }). \end{aligned}$$
  4. H9.

    There exists a constant C such that for any \({\varvec{\beta }},{\varvec{\beta }}^{\prime } \in {\mathcal {B}}_1, \)

    $$\begin{aligned} \left| {\mathbf {G}}_{{\varvec{\beta }}}-{\mathbf {G}}_{{\varvec{\beta }}^{\prime }}\right| _{W}+\sup _{\varvec{\xi }\in \Xi } \frac{\left\| P_{{\varvec{\beta }}}(\varvec{\xi }, \cdot )-P_{{\varvec{\beta }}^{\prime }}(\varvec{\xi }, \cdot )\right\| _{W}}{W(\varvec{\xi })}+\left\| \pi _{{\varvec{\beta }}}-\pi _{{\varvec{\beta }}^{\prime }}\right\| _{W} \le C\left\| {\varvec{\beta }}-{\varvec{\beta }}^{\prime }\right\| . \end{aligned}$$

We provide a few remarks on conditions H6–H9, which are needed for establishing the convergence speed in addition to conditions H1–H5. Condition H6 requires that the smooth part of the objective function be strongly convex and have a Lipschitz continuous gradient in a small neighborhood of \({\varvec{\beta }}_*\). Specifically, \(h({\varvec{\beta }})\) being strongly convex in \({\mathcal {B}}_1\) means that there exists a positive constant C such that \((\nabla h({\varvec{\beta }})-\nabla h({\varvec{\beta }}'))^\top ({\varvec{\beta }}- {\varvec{\beta }}') \ge C\Vert {\varvec{\beta }}- {\varvec{\beta }}'\Vert ^2\) for any \({\varvec{\beta }},{\varvec{\beta }}' \in {\mathcal {B}}_1\). Condition H7 imposes requirements on the non-smooth part of the objective function through the proximal operator. As verified in Lemma C.1, H7 holds when g is an indicator function encoding constraints, or when g is locally Lipschitz continuous and convex, as is the case for an \(L_1\) regularization function. Thus, H7 holds for the examples we consider in Sects. 2.2 and 2.3. Conditions H8 and H9 impose mild regularity conditions on the stochastic gradient in a local neighborhood of \({\varvec{\beta }}_*\), especially when the stochastic gradients are generated by a Markov kernel. These conditions are used to control the bias caused by MCMC sampling: H8 is essentially a uniform-in-\({\varvec{\beta }}\) ergodicity condition, and H9 is a local Lipschitz condition on the Markov kernel. Such regularity conditions are commonly adopted in the stochastic approximation literature (Benveniste et al., 1990; Andrieu et al., 2005; Fort et al., 2016) and have been shown to hold for general families of MCMC kernels, including Metropolis–Hastings and Gibbs samplers (Andrieu & Moulines, 2006; Fort et al., 2011; Schmidt et al., 2011).
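To give a concrete instance of the constraint case in H7: when g is the indicator of a box \(\{{\varvec{l}} \le {\varvec{\beta }}\le {\varvec{u}}\}\), the scaled proximal problem separates across coordinates, so for any diagonal \({\mathbf {D}}\) the operator reduces to a coordinate-wise projection. A minimal sketch follows; the function name is ours, not the paper's.

```python
import numpy as np

def prox_box(v, lower, upper):
    """Scaled prox of the indicator of {lower <= x <= upper}: the
    D-weighted objective (1/(2*gamma)) (x - v)' D (x - v) separates
    across coordinates, so the minimizer is a coordinate-wise clip,
    regardless of the diagonal entries of D."""
    return np.clip(v, lower, upper)

# e.g., projecting a trial update onto non-negativity constraints:
# beta_new = prox_box(beta_trial, 0.0, np.inf)
```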

Theorem 2

Suppose that H1–H9 hold. Then there exists a constant C such that for the Polyak–Ruppert averaged sequence \({\bar{{\varvec{\beta }}}}_{n} = \frac{1}{n}\sum _{t=1}^n {\varvec{\beta }}^{(t)}\) from Algorithm 1,

$$\begin{aligned} {\mathbb {E}}\Vert {\bar{{\varvec{\beta }}}}_n - {\varvec{\beta }}_* \Vert ^2 \le C n^{-\frac{1}{2} + \epsilon }. \end{aligned}$$
(21)

Note that the expectation is taken with respect to \(\varvec{\xi }^{(1)},\ldots ,\varvec{\xi }^{(n)}\) given \({\varvec{\beta }}^{(0)}\) and \(\varvec{\xi }^{(0)}.\)

We now provide a few remarks regarding the convergence speed (21). First, the small positive constant \(\epsilon \) comes from the requirement on the step size that \(\sum _{t=1}^\infty \gamma _t^2 < \infty \) in H5. Since this requirement is satisfied by \(\gamma _t = \mu t^{-\frac{1}{2} - \epsilon }\) for any \(\mu , \epsilon > 0\), the convergence speed of \({\mathbb {E}}\Vert {\bar{{\varvec{\beta }}}}_n - {\varvec{\beta }}_*\Vert ^2\) can be made arbitrarily close to \(O(n^{-\frac{1}{2}})\) by choosing an arbitrarily small \(\epsilon \). Second, this \(\epsilon \) might be an artifact of our proof strategy for overcoming the non-convexity of the problem. In fact, if the objective function is convex, then similar to Atchadé et al. (2017), we can choose \(\epsilon = 0\) and prove under similar conditions that \({\mathbb {E}}\Vert {\bar{{\varvec{\beta }}}}_n - {\varvec{\beta }}_*\Vert ^2 \le C n^{-\frac{1}{2}}\). Lastly, it is well known that for non-smooth convex optimization, the minimax optimal convergence rate is \(O(n^{-\frac{1}{2}})\); see Chapter 3 of Nesterov (2004). In this sense, our algorithm is nearly minimax optimal when \(\epsilon \) is very close to zero. This is consistent with the well-known fact that Polyak–Ruppert averaging typically improves the convergence speed of a slowly convergent sequence (Ruppert, 1988; Bonnabel, 2013).

4 Simulation Study

4.1 Study I: Confirmatory IFA

Table 1 Comparison of five stochastic algorithms.

In the first study, we compare the performance of four variants of the proposed method and the stochastic EM (StEM) algorithm. The five methods, including their abbreviations, are given in Table 1. For a fair comparison, the same Gibbs sampler is used in the stochastic step of all five methods. We further explain their differences below.

  1.

    USP is the method that we recommend. It has a step size \(\gamma _t\) close to \(t^{-1/2}\), applies Polyak–Ruppert averaging, and uses a quasi-Newton update in the proximal step.

  2.

    The USP-PPG method is the perturbed proximal gradient method. It is implemented in the same way as the USP method, except that \(c_1 = c_2\), so that no quasi-Newton update is involved. \(c_1\) is set to 1 without tuning in this study.

  3.

    The USP-RM1 method is implemented in the same way as the USP method, except that \({\varvec{\beta }}^{(n)}\) from the last iteration is taken as the estimator instead of the Polyak–Ruppert average. This method is very similar to a Robbins–Monro algorithm, except for the update of the parameters \({\mathbf {B}}\) for the covariance matrix, where constraints are involved.

  4.

    The USP-RM2 method is the same as USP-RM1, except that we set the step size \(\gamma _t = 1/t\), which is the asymptotically optimal step size for the Robbins–Monro algorithm (Fabian, 1968).

  5.

    The implementation of the StEM algorithm is the same as USP, except for the proximal step. Instead of making a stochastic gradient update, StEM obtains \({\varvec{\beta }}^{(t)}\) by completely solving the optimization problem

    $$\begin{aligned} {\varvec{\beta }}^{(t)} = \mathop {\hbox {arg min}}\limits _{{\varvec{\beta }}\in {\mathcal {B}}} H(\varvec{\xi }^{(t)}, {\varvec{\beta }}) + g({\varvec{\beta }}). \end{aligned}$$

    In our implementation, this optimization problem is solved by applying the quasi-Newton proximal update (14) iteratively until convergence; a schematic comparison with the single-step USP update is sketched after this list.
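The following Python pseudocode sketches the contrast. Here `prox_step` (one quasi-Newton proximal update) and `grad_H` (the complete-data gradient) are hypothetical placeholders, not the paper's implementation:

```python
import numpy as np

def usp_update(beta, xi, gamma, prox_step, grad_H):
    """USP: a single quasi-Newton proximal step per iteration."""
    return prox_step(beta, grad_H(xi, beta), gamma)

def stem_update(beta, xi, gamma, prox_step, grad_H, tol=1e-8, max_iter=1000):
    """StEM: iterate the same proximal update to convergence, i.e.,
    fully minimize H(xi, .) + g(.) as an inner loop."""
    for _ in range(max_iter):
        beta_new = prox_step(beta, grad_H(xi, beta), gamma)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta
```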

We consider a confirmatory IFA setting with only two factors (i.e., \(K=2\)), so that an EM algorithm with sufficient numbers of quadrature points and EM steps can be used to obtain an accurate approximation of \({\hat{{\varvec{\beta }}}}\), which serves as the benchmark when comparing the five methods. We emphasize that it is important to compare the convergence speed of different algorithms based on \({\hat{{\varvec{\beta }}}}\) rather than the true model parameters. This is because, under suitable conditions, these algorithms converge to \({\hat{{\varvec{\beta }}}}\) rather than the true model parameters. If we compared the algorithms based on the true model parameters, differences in convergence speed could not be observed clearly, as the statistical error (i.e., the difference between \({\hat{{\varvec{\beta }}}}\) and the true model parameters) tends to dominate the computational error (i.e., the difference between \({\hat{{\varvec{\beta }}}}\) and the results given by the stochastic algorithms).

More precisely, we consider sample size \(N = 1000\) and \(J = 20\) items. The design matrix \({\mathbf {Q}}\) is specified by the assumptions that items 1 through 5 only measure the first factor, items 6 through 10 only measure the second factor, and items 11 through 20 measure both. The intercept parameters \(d_j\) are drawn i.i.d. from the standard normal distribution, and the non-zero loading parameters are drawn i.i.d. from a uniform distribution over the interval (0.5, 1.5). The variances of the two factors are set to 1 and the covariance is set to 0.4. Under these parameters, 100 independent datasets are generated, based on which the five methods are compared. To ensure a fair comparison, the true parameters are used as the starting point for all methods. In addition, 1000 iterations are run (i.e., \(n = 1000\)) for each method, instead of using an adaptive stopping criterion. For USP, USP-PPG, and StEM, the burn-in size \(\varpi \) is chosen to be 500. All algorithms are implemented in C++ and run on the same platform using a single core.
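For concreteness, the data-generating process just described can be sketched as follows. The logistic link and all variable names are our assumptions for illustration, corresponding to a standard two-parameter logistic IFA model:

```python
import numpy as np

rng = np.random.default_rng(0)
N, J, K = 1000, 20, 2

# Q-matrix: items 1-5 load on factor 1, items 6-10 on factor 2,
# items 11-20 on both.
Q = np.zeros((J, K), dtype=int)
Q[:5, 0] = 1
Q[5:10, 1] = 1
Q[10:, :] = 1

d = rng.standard_normal(J)                  # intercepts ~ N(0, 1)
A = Q * rng.uniform(0.5, 1.5, size=(J, K))  # non-zero loadings ~ U(0.5, 1.5)
Sigma = np.array([[1.0, 0.4],
                  [0.4, 1.0]])              # factor covariance matrix

xi = rng.multivariate_normal(np.zeros(K), Sigma, size=N)  # latent traits
p = 1.0 / (1.0 + np.exp(-(xi @ A.T + d)))                 # assumed logistic IRF
Y = rng.binomial(1, p)                                    # N x J binary responses
```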

Fig. 1 The boxplot of mean squared errors for estimated parameters from the five methods.

Fig. 2 The boxplot of mean squared errors for estimated parameters from the USP, USP-RM1, and StEM methods.

The results regarding the accuracy of the proposed methods are given in Figs. 1 and 2, which are based on the following performance metrics. Specifically, for the intercept parameters \(d_j\), the following mean squared error (MSE) is calculated for each simulated dataset and each method,

$$\begin{aligned} \frac{1}{J}\sum _{j=1}^{J} \left( {\tilde{d}}_j - {\hat{d}}_j\right) ^2, \end{aligned}$$

where \({\hat{d}}_j\), which is treated as the global optimum, is obtained by an EM algorithm with 31 Gauss–Hermite quadrature points per dimension, and \({\tilde{d}}_j\) is given by one of the five stochastic methods after 1000 iterations. Similarly, the MSEs for the loading parameters and for the correlation \(\sigma _{12}\) between the factors are calculated, where the MSE for the loading parameters is calculated over the unrestricted ones, i.e.,

$$\begin{aligned} \frac{\sum _{j,k} 1_{\{q_{jk}\ne 0\}} ({\tilde{a}}_{jk} - {\hat{a}}_{jk})^2 }{\sum _{j,k} 1_{\{q_{jk}\ne 0\}} }. \end{aligned}$$

Again, \({\hat{a}}_{jk}\) is given by the EM algorithm, and \({\tilde{a}}_{jk}\) is given by one of the five methods.
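Computed directly from these definitions, the two metrics may be evaluated as follows (a minimal sketch; the array names are ours):

```python
import numpy as np

def mse_intercepts(d_tilde, d_hat):
    """MSE for the intercepts against the EM benchmark d_hat."""
    return np.mean((d_tilde - d_hat) ** 2)

def mse_loadings(A_tilde, A_hat, Q):
    """MSE for the loadings, averaged over the unrestricted entries
    only (those with q_jk != 0)."""
    mask = Q != 0
    return np.mean((A_tilde[mask] - A_hat[mask]) ** 2)
```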

Figure 1 compares the accuracy of all five methods. As we can see, the USP, USP-RM1, and StEM methods have much smaller MSEs than the USP-PPG and USP-RM2 methods. Since the USP-PPG method differs from the USP method only in whether a quasi-Newton update is used, the inferior performance of USP-PPG implies the importance of the second-order information in the stochastic proximal gradient update. As the USP-RM2 method differs from USP-RM1 only in the step size, the inferior performance of USP-RM2 is mainly due to its rapidly decaying step size.

In Fig. 2, we zoom in to further compare the USP, USP-RM1, and StEM methods. First, we see that the USP method performs the best among the three for all the parameters. As the USP-RM1 method is the same as the USP method except for not applying Polyak–Ruppert averaging, this result suggests that averaging does improve accuracy. Moreover, the USP method and the StEM method differ only in the way the parameters are updated: the USP method takes one quasi-Newton proximal update, while the StEM method completely solves an optimization problem. It is likely that the update of the USP method yields more smoothing (i.e., averaging) than that of StEM, which explains why the USP method outperforms it.

Table 2 The elapsed time (seconds) for the five methods in confirmatory IFA.

Regarding computational efficiency, Table 2 shows the elapsed time for the five methods. USP, USP-RM1, USP-PPG, and USP-RM2 have similar computation times, since they require similar numbers of floating-point operations per iteration. StEM is the most time-consuming because an inner optimization loop is involved in each iteration. In summary, the proposed USP algorithm is computationally the most efficient among the five, in the sense that it achieves the highest accuracy (see Figs. 1 and 2) within a similar or smaller amount of time (see Table 2).

4.2 Study II: Exploratory IFA by Regularization

In the second study, we apply the proposed method to regularized estimation for exploratory IFA as discussed in Sect. 1.2. We consider increasing sample sizes \(N=1000, 2000, 4000\), eighty items, and five correlated latent factors (i.e., \(J=80, K=5\)). The true loading matrix is sparse; the items that each factor loads on are given in Table 3. Similar to Study I, the intercept parameters \(d_j\) are drawn i.i.d. from the standard normal distribution, and the non-zero loading parameters \(a_{jk}\) are drawn i.i.d. from a uniform distribution over the interval (0.5, 1.5). The elements of the covariance matrix \(\Sigma =(\sigma _{k,k'})_{5\times 5}\) are set to \(\sigma _{k,k'}=1\) for \(k=k'\) and \(\sigma _{k,k'}=0.4\) for \(k\ne k'.\)

Table 3 The sparse loading structure in the data generation IFA model.

For each sample size, 50 independent datasets are generated. In the proposed algorithm, we adopt a burn-in size \(\varpi =50\) and stop based on the criterion discussed in Sect. 2, with the stopping threshold set to \(10^{-3}\). A decreasing penalty parameter \(\lambda _N = \sqrt{\log J /N}\) is used to ensure estimation consistency (Chapter 6, Bühlmann & van de Geer, 2011). Other implementation details can be found in Sect. 2.2. The algorithm in this example is implemented in C++ and run on the same platform as in Study I. Although a regularized EM algorithm (Sun et al., 2016) can also solve this problem, it suffers from a very high computational cost: due to the five-dimensional numerical integrals involved, it takes a few hours to fit one dataset. We thus do not consider it here.

We focus on the accuracy in the estimation of the loading matrix \({\mathbf {A}} = (a_{jk})_{J\times K}\). Note that although the rotational indeterminacy issue is resolved by this regularized estimator, the loading matrix can still only be identified up to column swapping. That is, two estimates of the loading matrix have the same objective function value if one can be obtained by swapping the columns of the other. The following mean-squared-error measure is therefore used, which accounts for column swapping of the loading matrix,

$$\begin{aligned} \min _{{\mathbf {A}}'\in {\mathcal {P}}(\tilde{{\mathbf {A}}})}\left\{ \frac{1}{JK}\Vert {\mathbf {A}}' - {\mathbf {A}}\Vert _F^2\right\} , \end{aligned}$$
(22)

where \(\Vert \cdot \Vert _F\) is the Frobenius norm, \({\mathbf {A}}\) is the true loading matrix, \(\tilde{{\mathbf {A}}}\) is the output of Algorithm 1, and \({\mathcal {P}}(\tilde{{\mathbf {A}}})\) denotes the set of \(J\times K\) matrices that can be obtained by swapping the columns of \(\tilde{{\mathbf {A}}}\).
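Measure (22) can be computed by brute force over the \(K!\) column permutations, which is cheap for \(K=5\); a minimal sketch:

```python
import itertools
import numpy as np

def mse_up_to_column_swap(A_tilde, A_true):
    """Mean squared error (22): minimize the Frobenius-norm MSE over
    all column permutations of the estimated loading matrix A_tilde."""
    J, K = A_true.shape
    best = np.inf
    for perm in itertools.permutations(range(K)):  # K! candidates; 120 for K = 5
        err = np.sum((A_tilde[:, perm] - A_true) ** 2) / (J * K)
        best = min(best, err)
    return best
```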

Results are given in Tables 4 and 5. In Table 4, we see that the MSE for the loading matrix is quite small and decreases as the sample size grows, suggesting the consistency of the regularized estimator. Table 5 gives the quantiles of the elapsed time under different sample sizes, which demonstrate the computational efficiency of the proposed method.

Table 4 The mean squared errors for estimated loading parameters in exploratory IFA with \(L_1\) regularization.
Table 5 The elapsed time (seconds) for exploratory IFA with \(L_1\) regularization.

4.3 Study III: Restricted LCA

In this study, we apply the proposed method to the estimation of a restricted latent class model as discussed in Sect. 1.3, where the optimization involves complex inequality constraints. Specifically, data are generated from a Deterministic Input, Noisy ‘And’ gate (DINA) model (Junker & Sijtsma, 2001), which is a special restricted latent class model. Note that the DINA assumptions are only used in the data generation. We solve the optimization problem (10), which is based on the general restricted latent class model considered in Xu (2017) instead of the DINA model, mimicking the practical situation where the parametric form is unknown.

We consider a test consisting of twenty items (i.e., \(J = 20\)) that measure four binary attributes (i.e., \(K=4\)). Three sample sizes are considered: \(N=1000, 2000\), and 4000. The design matrix \({\mathbf {Q}}\) is given in Table 6. In addition, the slipping and guessing parameters \(s_j\) and \(g_j\) of the DINA model are drawn i.i.d. from a uniform distribution over the interval (0.05, 0.2), which gives the values of \(\theta _{j, \varvec{\alpha }}\). That is,

$$\begin{aligned} \theta _{j, \varvec{\alpha }} =\left\{ \begin{array}{cl} \log ( (1-s_j)/s_j), &{} ~\text{ if }~ \varvec{\alpha } \succeq {\mathbf {q}}_{j}, \\ \log ( g_j/(1-g_j)), &{} ~\text{ otherwise. } \end{array}\right. \end{aligned}$$

Finally, we let \(\nu _{\varvec{\alpha }} = 0,\) for all \(\varvec{\alpha } \in \{0, 1\}^K\), so that \({\mathbb {P}}(\varvec{\xi } = \varvec{\alpha }) = 1/2^K\). According to the results in Xu (2017), the model parameters are identifiable, given the \({\mathbf {Q}}\)-matrix in Table 6.
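A sketch of this data-generating process is given below. The \({\mathbf {Q}}\)-matrix here is a random placeholder rather than the one in Table 6, and all names are illustrative:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
J, K, N = 20, 4, 1000

# All 2^K attribute profiles, and a placeholder Q-matrix
# (the study uses the Q-matrix in Table 6 instead).
profiles = np.array(list(itertools.product([0, 1], repeat=K)))
Q = rng.binomial(1, 0.5, size=(J, K))

s = rng.uniform(0.05, 0.2, size=J)  # slipping parameters
g = rng.uniform(0.05, 0.2, size=J)  # guessing parameters

# theta[j, a] = log((1-s_j)/s_j) if profile a dominates q_j,
#               log(g_j/(1-g_j)) otherwise.
dominates = (profiles[None, :, :] >= Q[:, None, :]).all(axis=2)  # J x 2^K
theta = np.where(dominates,
                 np.log((1 - s) / s)[:, None],
                 np.log(g / (1 - g))[:, None])

# nu_alpha = 0 for all alpha, i.e., latent classes are uniform.
xi_idx = rng.integers(0, 2 ** K, size=N)           # latent class per person
p = 1.0 / (1.0 + np.exp(-theta[:, xi_idx].T))      # N x J response probabilities
Y = rng.binomial(1, p)                             # binary responses
```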

Table 6 The design matrix \({\mathbf {Q}}\) for the restricted LCA model.

For each sample size, 50 independent datasets are generated. The proposed algorithm adopts a burn-in size \(\varpi =50\) and stops based on the criterion discussed in Sect. 2, where the stopping threshold is set to be \(10^{-3}\). Other implementation details can be found in Sect. 2.3. The following metrics are used to evaluate the estimation accuracy. For item parameters \(\theta _{j, \varvec{\alpha }}\), the MSE is calculated as

$$\begin{aligned} \frac{1}{J\times 2^K}\sum _{j=1}^J\sum _{\varvec{\alpha }\in \{0,1\}^K}\left( {\tilde{\theta }}_{j,\varvec{\alpha }}-\theta _{j,\varvec{\alpha }}\right) ^2. \end{aligned}$$

For structural parameters \(\nu _{\varvec{\alpha }}\), the MSE is calculated as

$$\begin{aligned} \frac{1}{2^K-1}\sum _{\varvec{\alpha }\in \{0,1\}^K,\ \varvec{\alpha }\ne {\varvec{0}}}\left( {\tilde{\nu }}_{\varvec{\alpha }} - \nu _{\varvec{\alpha }}\right) ^2. \end{aligned}$$

Our results are given in Tables 7 and 8. As we can see, the estimation becomes more accurate as the sample size increases for both sets of parameters. This confirms that the current model is identifiable, as suggested by Xu (2017), and thus can be consistently estimated.

Table 7 The MSE for item parameters \(\theta _{j, \varvec{\alpha }}\) in the restricted latent class model.
Table 8 The MSE for structural parameters \(\nu _{\varvec{\alpha }}\) in the restricted latent class model.

5 Concluding Remarks

In this paper, a unified stochastic proximal optimization framework is proposed for latent variable model estimation. This framework is very general and applies to a wide range of estimators for almost all commonly used latent variable models. Compared with existing stochastic optimization methods, the proposed method not only solves a wider range of problems, including regularized and constrained estimators, but is also computationally more efficient. Theoretical properties of the proposed method are established, which suggest that its convergence speed is almost optimal in the minimax sense.

The power of the proposed method is shown via three examples, including confirmatory IFA, exploratory IFA by regularized estimation, and restricted latent class analysis. Specifically, the proposed method is compared with several stochastic optimization algorithms, including a stochastic-EM algorithm and a Robbins–Monro algorithm with MCMC sampling, in the simulation study of confirmatory IFA, where there is no complex constraint or penalty. Using the same starting point and the same number of iterations, the proposed method is consistently more accurate than its competitors. The simulation studies on exploratory IFA and restricted latent class analysis further show the power of the proposed method for handling optimization problems with non-smooth penalties and complex inequality constraints.

The implementation of the proposed algorithm involves several tuning parameters. First, we need to choose a step size \(\gamma _t\). Our theoretical results suggest setting \(\gamma _t =t^{-0.5 - \epsilon }\) for some \(\epsilon \in (0,0.5]\), with a smaller \(\epsilon \) leading to faster convergence. In practice, we suggest setting \(\gamma _t =t^{-0.51}\), which performs well in all our simulations. This choice of step size is very different from the choice \(\gamma _t =t^{-1}\) in MCMC stochastic approximation algorithms. Second, a burn-in size \(\varpi \) is needed. The burn-in in the proposed algorithm is similar to the burn-in in MCMC algorithms. It does not affect the asymptotic convergence of the algorithm but improves the finite-sample performance. In practice, the burn-in size can be decided similarly as in MCMC algorithms, by monitoring the parameter updates using trace plots. Third, two positive constants \(c_1\) and \(c_2\) are needed to regularize the second-order matrix in the scaled proximal update. Depending on the scale of each particular problem, we suggest choosing \(c_1\) to be sufficiently small and \(c_2\) to be sufficiently large; the performance of our algorithm is found to be insensitive to their choices. Finally, a stopping criterion is needed. We suggest stopping the iterative update by monitoring a window of successive differences in the parameter updates; a sketch of these defaults is given below.
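The defaults just described might look as follows in practice. The window length and the norm used in the stopping rule are our illustrative choices, not a prescription from the theory:

```python
import numpy as np

def step_size(t, epsilon=0.01):
    """Suggested schedule gamma_t = t^(-0.5 - epsilon); epsilon = 0.01
    gives the gamma_t = t^(-0.51) used in our simulations."""
    return t ** (-0.5 - epsilon)

def converged(beta_history, window=3, tol=1e-3):
    """Window-based stopping rule: stop once every successive difference
    within the last `window` parameter updates falls below tol (sup-norm)."""
    if len(beta_history) <= window:
        return False
    recent = beta_history[-(window + 1):]
    diffs = [np.max(np.abs(recent[i + 1] - recent[i])) for i in range(window)]
    return max(diffs) < tol
```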

The proposed framework may be improved in several respects, which are left for future investigation. First, the sampling strategy in the stochastic step needs further investigation. Although in theory any reasonable MCMC sampler yields convergence of the algorithm, a good sampler will lead to superior finite-sample performance. More sophisticated MCMC algorithms need to be investigated regarding their performance under the proposed framework. Second, methods for parallel and distributed computing need to be developed. As we can see, many steps of Algorithm 1 can be performed independently. This enables us to design parallel and/or distributed computing systems for solving large-scale and/or distributed versions of latent variable model estimation problems (e.g., fitting models for assessment data from online learning platforms and large-scale mental health records). Finally, the performance of the proposed method under other latent variable models needs to be investigated. For example, the proposed method can also be applied to latent stochastic process models (e.g., Chow et al., 2016; Chen & Zhang, 2020), which are useful for analyzing intensive longitudinal data. These models bring additional challenges, as stochastic processes need to be sampled in the stochastic step of our algorithm.

In summary, the proposed method is computationally efficient, theoretically solid, and applicable to a broad range of latent variable model estimation problems. Just as the EM algorithm is the standard tool for low-dimensional latent variable models, we believe that the proposed method may serve as a standard approach to the estimation of high-dimensional latent variable models.