1 Introduction

Mixed models are useful for analysing clustered data, wherein observations that come from the same cluster/group are likely to be correlated. Example datasets include records of students clustered within schools, and repeated measurements of biomarkers on patients. Mixed models account for intra-group dependencies by incorporating cluster/group-specific “random effects”. Inference with these models is made challenging by the fact that the likelihood function involves integrals over the random effects that are not usually tractable except for the few cases where the distribution of the random effects is conjugate to the distribution of the data, such as in the linear mixed model (Verbeke et al. 1997), the beta-binomial model (Crowder 1979), and Rasch’s Poisson count model (Jansen 1994). Notably, there is no closed-form expression for the likelihood function in the case of the ubiquitous logistic mixed model.

Maximum-likelihood-based approaches are often used for parameter inference in mixed models. In the case of linear mixed models, parameter inference via maximum likelihood estimation is straightforward (e.g., Wakefield 2013). For mixed models with an intractable likelihood, integrals over random effects need to be numerically approximated, for example by using Gaussian quadrature (Naylor and Smith 1982) or the Laplace approximation (Tierney and Kadane 1986). The likelihood may also be indirectly maximised using an expectation-maximisation type algorithm (Dempster et al. 1977), which treats the random effects as missing, and iteratively maximises the “expected complete-data log-likelihood” of the data and the random effects. Quasi-likelihood approaches such as penalised quasi-likelihood (PQL, Breslow and Clayton 1993) and marginal quasi-likelihood (MQL, Goldstein 1991) approximate nonlinear mixed models with linear mixed models, so that well-developed estimation routines for linear mixed models can be applied; see Tuerlinckx et al. (2006) for a detailed discussion of these methods. These maximum-likelihood-based methods provide point estimates and not full posterior distributions over the parameters.

Full posterior distributions can be obtained using Markov chain Monte Carlo (MCMC, e.g., Zhao et al. 2006; Fong et al. 2010). MCMC provides exact, sample-based posterior distributions, but at a higher computational cost than maximum-likelihood-based methods. Alternatively, variational Bayes (VB) methods (e.g., Ong et al. 2018; Tan and Nott 2018) are becoming increasingly popular for estimating parameters in complex statistical models. These methods approximate the exact posterior distribution with a member from a simple and tractable family of distributions; this family is usually chosen to balance the accuracy of the approximation against the computational cost required to obtain the approximation. VB methods are usually computationally cheaper than MCMC methods. VB approaches can either batch-process the data (e.g., Tran et al. 2016; Ong et al. 2018; Tan and Nott 2018) or sequentially process data points (e.g., Broderick et al. 2013; Gunawan et al. 2021; Lambert et al. 2022). For settings with large amounts of data, a method that targets the posterior distribution via sequential processing of the data offers several advantages. The so-called Recursive Variational Gaussian Approximation (R-VGA, Lambert et al. 2022) algorithm is a recently-developed sequential variational Bayes method that provides a fast and accurate approximation to the posterior distribution with only one pass through the data, making it computationally efficient when compared to MCMC or batch variational Bayes. Lambert et al. (2022) apply the R-VGA algorithm to linear and logistic regression models without random effects.

In this paper, we build on the R-VGA algorithm by proposing a novel recursive variational Gaussian approximation, called Recursive Variational Gaussian Approximation for Latent variable models (R-VGAL), for estimating the parameters of generalised linear mixed models (GLMMs). At each update, R-VGAL requires the gradient and Hessian of the “partial” log-likelihood evaluated at the new observation, which are often not available in closed form. To circumvent this issue, we propose an importance-sampling-based approach for estimating the gradient and Hessian that uses Fisher’s and Louis’ identities (Cappé et al. 2005). This approach was inspired by the work of Nemeth et al. (2016), who used Fisher’s and Louis’ identities to approximate the gradient and Hessian in a sequential Monte Carlo context. The efficacy of R-VGAL is illustrated using linear, logistic and Poisson mixed effect models on simulated and real datasets. The examples show that R-VGAL provides good approximations to the exact posterior distributions estimated using Hamiltonian Monte Carlo (HMC, Neal 2011; Betancourt and Girolami 2015), at a low computational cost.

The paper is organised as follows. Section 2 provides some background on the sequential variational Bayes framework and presents the R-VGAL algorithm. Section 3 applies the R-VGAL algorithm to simulated and real datasets. Section 4 concludes with a discussion of our results and an overview of future research directions. This article has an online supplement containing additional technical details, and the code to reproduce results from the simulation and real-data experiments is available on https://github.com/bao-anh-vu/R-VGAL.

2 The R-VGAL algorithm

This section reviews GLMMs (e.g., Demidenko 2013; Faraway 2016), provides some background on the R-VGA algorithm of Lambert et al. (2022), and then introduces the R-VGAL algorithm for parameter inference in GLMMs.

2.1 Generalised linear mixed models

GLMMs are statistical models that contain both fixed effects and random effects. Typically, the fixed effects are common across groups, while the random effects are group-specific, and this is the setting we focus on. We briefly discuss the potential application of R-VGAL to models with more complicated random effect structures, such as crossed or nested random effects, in Section S7 of the online supplement.

Denote by \(y_{ij}\) the jth response in the ith group, for \(i = 1, \dots , N\) groups and \(j = 1, \dots , n_i\), where \(n_i\) is the number of responses in group i. Let \(\textbf{y}\equiv ({\textbf{y}_1^\top ,\dots ,\textbf{y}_N^\top })^\top \) be a vector of observations, where \(\textbf{y}_i \equiv (y_{i1}, \dots , y_{in_i})^\top \) are the responses from the ith group. The GLMMs we consider are constructed by first assigning each \(y_{ij}\) a distribution \(y_{ij} \mid \varvec{\beta }, \varvec{\alpha }_i, \phi \sim p(\cdot )\), where \(p(\cdot )\) is a member of the exponential family with a dispersion parameter \(\phi \) that is usually related to the variance of the datum, \(\varvec{\beta }\) are the fixed effect parameters, and \(\varvec{\alpha }_i\) are the group-specific random effects for \(i = 1, \dots , N\). Then, the mean of the responses, \(\mu _{ij} \equiv \mathbb {E}(y_{ij} \mid \varvec{\beta }, \varvec{\alpha }_i, \phi )\), is modelled as

$$\begin{aligned} g(\mu _{ij}) = \textbf{x}_{ij}^\top \varvec{\beta }+ \textbf{z}_{ij}^\top \varvec{\alpha }_i, \quad i = 1, \dots , N, \quad j = 1, \dots , n_i,\nonumber \\ \end{aligned}$$
(1)

where \(\textbf{x}_{ij}\) is a vector of fixed effect covariates corresponding to the jth response in the ith group; \(\textbf{z}_{ij}\) is a vector of predictor variables corresponding to the jth response and the ith random effect; and \(g(\cdot )\) is a link function that links the response mean \(\mu _{ij}\) to the linear predictor \(\textbf{x}_{ij}^\top \varvec{\beta }+ \textbf{z}_{ij}^\top \varvec{\alpha }_i\). We further assume that \(\varvec{\alpha }_i \perp \!\!\!\!\perp \varvec{\alpha }_{i'}\) for \(i \ne i'\). The random effects \(\varvec{\alpha }_i\), for \(i = 1, \dots , N\), are assumed to follow a normal distribution with mean \(\textbf{0}\) and covariance matrix \(\varvec{\Sigma }_\alpha \), that is, each \(\varvec{\alpha }_i \mid \varvec{\Sigma }_\alpha \sim N(\textbf{0}, \varvec{\Sigma }_\alpha )\). In practice, some structure is often assumed for the random effects covariance matrix so that it is parameterised in terms of a smaller number of parameters \(\varvec{\tau }\), that is, \(\varvec{\Sigma }_\alpha = \varvec{\Sigma }_\alpha (\varvec{\tau })\). Inference is then made on the parameters \(\varvec{\theta }= (\varvec{\beta }^\top , \varvec{\tau }^\top , \phi )^\top \).
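To make the structure in (1) concrete, the following R sketch simulates data from a logistic GLMM with a single random intercept per group; the group sizes, covariates, parameter values, and the logit link are illustrative choices and do not correspond to any particular dataset in this paper.

```r
## Minimal sketch: simulate from a logistic GLMM with a random intercept,
## i.e. g(mu_ij) = logit(pi_ij) = x_ij' beta + alpha_i, with alpha_i ~ N(0, sigma_alpha^2).
set.seed(1)
N <- 50; n_i <- 10                      # number of groups and responses per group (illustrative)
beta <- c(-1.5, 1.5); sigma_alpha <- 0.9
alpha <- rnorm(N, 0, sigma_alpha)       # group-specific random intercepts
dat <- do.call(rbind, lapply(seq_len(N), function(i) {
  x <- cbind(1, rnorm(n_i))             # fixed-effect covariates (intercept + one slope)
  eta <- drop(x %*% beta) + alpha[i]    # linear predictor x_ij' beta + alpha_i
  data.frame(group = i, x1 = x[, 2],
             y = rbinom(n_i, size = 1, prob = plogis(eta)))
}))
head(dat)
```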

The main objective of Bayesian inference is to obtain the posterior distribution of the model parameters \(\varvec{\theta }\) given the observations \(\textbf{y}\) and the prior distribution \(p(\varvec{\theta })\). Through Bayes’ rule, the posterior distribution of \(\varvec{\theta }\) is

$$\begin{aligned} p(\varvec{\theta }\mid \textbf{y}) = p(\varvec{\beta }, \varvec{\tau }, \phi \mid \textbf{y}) \propto p(\textbf{y}\mid \varvec{\beta }, \varvec{\tau }, \phi ) p (\varvec{\beta }, \varvec{\tau }, \phi ). \end{aligned}$$
(2)

The likelihood function,

$$\begin{aligned} p(\textbf{y}\mid \varvec{\beta }, \varvec{\tau }, \phi ) = \prod _{i=1}^N \int p(\textbf{y}_i \mid \varvec{\alpha }_i, \varvec{\beta }, \phi ) p(\varvec{\alpha }_i \mid \varvec{\tau }) \, \textrm{d}\varvec{\alpha }_i, \end{aligned}$$
(3)

involves integrals over the random effects \(\varvec{\alpha }_i, i = 1, \dots , N\). The likelihood function can be calculated exactly for the linear mixed model with normally distributed random effects, for which

$$\begin{aligned} y_{ij}{} & {} = \textbf{x}_{ij}^\top \varvec{\beta }+ \textbf{z}_{ij}^\top \varvec{\alpha }_i + \epsilon _{ij}, \quad \nonumber \\{} & {} \quad \varvec{\alpha }_i \sim N(\textbf{0}, \varvec{\Sigma }_\alpha (\varvec{\tau })), \quad \epsilon _{ij} \sim N(0, \sigma _\epsilon ^2), \end{aligned}$$
(4)

for \(i = 1, \dots , N\) and \(j = 1, \dots , n_i\), where \(\epsilon _{ij}\) is a zero-mean, independent, normally distributed error term with variance \(\sigma _\epsilon ^2\) that is associated with the jth response from the ith group. At the group level, this model can be written as

$$\begin{aligned} \textbf{y}_i{} & {} = \textbf{X}_i \varvec{\beta }+ \textbf{Z}_i \varvec{\alpha }_i + \varvec{\varepsilon }_i, \quad \varvec{\alpha }_i \sim N(\textbf{0}, \varvec{\Sigma }_\alpha (\varvec{\tau })), \\{} & {} \quad \varvec{\varepsilon }_i \sim N(\textbf{0}, \sigma _\epsilon ^2 \textbf{I}_{n_i}), \end{aligned}$$

where \(\textbf{X}_i = (\textbf{x}_{i1}, \dots , \textbf{x}_{in_i})^\top \), \(\textbf{Z}_i = (\textbf{z}_{i1}, \dots , \textbf{z}_{in_i})^\top \), and \(\varvec{\varepsilon }_i = (\epsilon _{i1}, \dots , \epsilon _{in_i})^\top \), with \(n_i\) the number of observations in the ith group, for \(i = 1, \dots , N\), and where \(\textbf{I}_m\) denotes the \(m \times m\) identity matrix. The likelihood function for this linear mixed model is

$$\begin{aligned} p(\textbf{y}\mid \varvec{\beta }, \varvec{\tau }, \sigma _\epsilon ^2){} & {} = \prod _{i = 1}^{N} p(\textbf{y}_{i} \mid \varvec{\beta }, \varvec{\tau }, \sigma _\epsilon ^2) \nonumber \\{} & {} = \prod _{i = 1}^{N} N(\textbf{X}_i \varvec{\beta }, \textbf{Z}_i \varvec{\Sigma }_\alpha (\varvec{\tau }) \textbf{Z}_i^\top + \sigma _\epsilon ^2 \textbf{I}_{n_i}). \end{aligned}$$
(5)
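To illustrate how (5) can be evaluated in practice, the snippet below computes the log-likelihood contribution of a single group via the multivariate normal density; the group size, design matrices, and parameter values are placeholders, and we assume the mvtnorm package is available.

```r
## Sketch: log-likelihood contribution of one group under (5),
## y_i ~ N(X_i beta, Z_i Sigma_alpha Z_i' + sigma_eps^2 I).
library(mvtnorm)
set.seed(1)
n_i <- 10
X_i <- cbind(1, rnorm(n_i))                       # fixed-effect design (placeholder)
Z_i <- matrix(1, n_i, 1)                          # random-intercept design
beta <- c(-1.5, 1.5); Sigma_alpha <- matrix(0.8); sigma_eps <- 0.7
y_i <- drop(X_i %*% beta) +
  drop(Z_i %*% rnorm(1, 0, sqrt(Sigma_alpha[1, 1]))) + rnorm(n_i, 0, sigma_eps)
V_i <- Z_i %*% Sigma_alpha %*% t(Z_i) + sigma_eps^2 * diag(n_i)  # marginal covariance
dmvnorm(y_i, mean = drop(X_i %*% beta), sigma = V_i, log = TRUE)
```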

The gradient and Hessian of the log-likelihood for the linear mixed model are also available in closed form. However, the integrals in (3) cannot be computed exactly for general random effects models. One important case is the logistic mixed model given by

$$\begin{aligned}{} & {} y_{ij} \sim \text {Bernoulli}(\pi _{ij}), \quad \text {logit}(\pi _{ij}) = \textbf{x}_{ij}^\top \varvec{\beta }+ \textbf{z}_{ij}^\top \varvec{\alpha }_i, \nonumber \\{} & {} \quad i = 1, \dots , N, \quad j = 1, \dots , n_i, \end{aligned}$$
(6)

where \(\text {logit}(\pi _{ij}) = \log \left( \frac{\pi _{ij}}{1 - \pi _{ij}} \right) \). The gradient and Hessian of the log-likelihood function for this model can, however, be estimated unbiasedly, as we show in Sects. 2.3.1 and 2.3.2.

2.2 Sequential VB and R-VGA

We begin this section with a review of VB and the sequential VB framework. We then present the main steps in the derivations of the R-VGA algorithm of Lambert et al. (2022), on which our algorithm is based.

2.2.1 Sequential VB

VB is usually used for posterior inference in complex statistical models when inference using asymptotically exact methods such as MCMC is too costly; for a review see, for example, Blei et al. (2017). Let \(\varvec{\theta }\) be a vector of model parameters. Here, we consider the class of VB methods where the posterior distribution \(p(\varvec{\theta }\mid \textbf{y})\) is approximated by a tractable density \(q(\varvec{\theta }; \varvec{\lambda })\) parameterised by \(\varvec{\lambda }\). The variational parameters \(\varvec{\lambda }\) are optimised by minimising the Kullback–Leibler (KL) divergence between the variational distribution and the posterior distribution, that is, by minimising

$$\begin{aligned} \text {KL} \left( q(\varvec{\theta }; \varvec{\lambda })\,\left\| \right. \,p(\varvec{\theta }\mid \textbf{y})\right) \equiv \int q(\varvec{\theta }; \varvec{\lambda }) \log \frac{q(\varvec{\theta }; \varvec{\lambda })}{p(\varvec{\theta }\mid \textbf{y})} \, \textrm{d}\varvec{\theta }. \end{aligned}$$
(7)

Many VB algorithms require processing the data as a batch; see, for example, Ong et al. (2018) and Tan and Nott (2018). The variational parameters \(\varvec{\lambda }\) are typically updated in an iterative manner using stochastic gradient descent (SGD, Hoffman et al. 2013; Kingma and Welling 2013). In settings with large amounts of data or continuously-arriving data, it is often more practical to use online or sequential variational Bayes algorithms that update the approximation to the posterior distribution sequentially as new observations become available. These online/sequential algorithms are designed to handle data that are too large to fit in memory or that arrive in a continuous stream.

In a sequential VB framework, such as that proposed by Broderick et al. (2013), the observations \(\textbf{y}_1, \dots , \textbf{y}_N\) are incorporated sequentially so that at iteration i, \(i = 1, \dots , N\), one targets an approximation \(q_i(\varvec{\theta }) \equiv q(\varvec{\theta }; \varvec{\lambda }_i)\) that is closest in a KL sense to the “pseudo-posterior” \(p(\textbf{y}_i \mid \varvec{\theta }) q_{i-1}(\varvec{\theta })/\mathcal {Z}_i\), where

$$\begin{aligned} \mathcal {Z}_i \equiv \int p(\textbf{y}_i \mid \varvec{\theta }) q_{i-1}(\varvec{\theta }) \, \textrm{d}\varvec{\theta }. \end{aligned}$$
(8)

In this framework, \(q_{i-1}(\varvec{\theta })\) is treated as the “prior” for the next iteration i, and the KL divergence between \(q_i(\varvec{\theta })\) and the “pseudo-posterior” is minimised at each iteration. Broderick et al. (2013) use a mean field VB approach (e.g., Ormerod and Wand 2010), which assumes no posterior dependence between the elements of \(\varvec{\theta }\). The R-VGA algorithm proposed by Lambert et al. (2022) follows closely that of Broderick et al. (2013), but uses a variational distribution of the form \(q_i(\varvec{\theta }) = N(\varvec{\mu }_i, \varvec{\Sigma }_i)\), where \(\varvec{\Sigma }_i\) is a full covariance matrix, and seeks closed-form updates for \(\varvec{\lambda }_i \equiv \{\varvec{\mu }_i,\varvec{\Sigma }_i\}\) that minimise the KL divergence between \(q_i(\varvec{\theta })\) and \({p(\textbf{y}_i \mid \varvec{\theta }) q_{i-1}(\varvec{\theta }) / \mathcal {Z}_i}\) for \(i = 1, \dots , N\). Another sequential VB algorithm that is similar to that of Broderick et al. (2013) is the Updating Variational Bayes (UVB, Tomasetti et al. 2022) algorithm, which uses SGD (Bottou 2010) at every iteration, \(i = 1, \dots , N\), to minimise the KL divergence between \(q_i(\varvec{\theta })\) and \({p(\textbf{y}_i \mid \varvec{\theta }) q_{i-1}(\varvec{\theta })/\mathcal {Z}_i}\). One advantage of UVB compared to R-VGA is that it does not have to assume that the prior and variational distributions are Gaussian; see Section 5.2 of Tomasetti et al. (2022) for an example of UVB where a beta prior is used for one of the parameters and the variational distribution is a mixture of multivariate normal distributions. However, due to the lack of restrictions on the form of the variational distribution, UVB requires running a full optimisation algorithm at each iteration, whereas the R-VGA updates are available in closed form.

Detailed derivations for the R-VGA algorithm can be found in Lambert et al. (2022). We provide below a sketch of the derivations to aid the exposition of the methodology in subsequent sections.

2.2.2 The R-VGA algorithm

Denote by \(\textbf{y}_{1:i} \equiv (\textbf{y}_1^\top , \dots , \textbf{y}_i^\top )^\top \) a collection of observations from groups 1 to i, \(i = 1, \dots , N\). By assumption of conditional independence between observations \(\textbf{y}_1, \dots , \textbf{y}_i\) given the parameters \(\varvec{\theta }\), the KL divergence between the variational distribution \(q_i(\varvec{\theta })\) and the posterior distribution \(p(\varvec{\theta }\mid \textbf{y}_{1:i})\) can be expressed as

$$\begin{aligned} \text {KL} \left( q_i(\varvec{\theta })\,\left\| \right. \,p(\varvec{\theta }\mid \textbf{y}_{1:i})\right)&\equiv \int q_i(\varvec{\theta }) \log \frac{q_i(\varvec{\theta })}{p(\varvec{\theta }\mid \textbf{y}_{1:i})} \, \textrm{d}\varvec{\theta }\\&= \mathbb {E}_{q_i} \left( \log q_i(\varvec{\theta }) - \log p(\varvec{\theta }\mid \textbf{y}_{1:i-1}) \right. \\&\quad \left. - \log p(\textbf{y}_i \mid \varvec{\theta }) \right) + \log p(\textbf{y}_{1:i})\\&\quad - \log p(\textbf{y}_{1:i-1}). \end{aligned}$$

The posterior distribution after incorporating the first \(i-1\) groups of observations, \(p(\varvec{\theta }\mid \textbf{y}_{1:i-1})\), is approximated by the variational distribution \(q_{i-1} (\varvec{\theta })\) to give

$$\begin{aligned}{} & {} \text {KL} \left( q_i(\varvec{\theta })\,\left\| \right. \,p(\varvec{\theta }\mid \textbf{y}_{1:i})\right) \approx \mathbb {E}_{q_i}(\log q_i(\varvec{\theta }) - \log q_{i-1} (\varvec{\theta })\nonumber \\{} & {} - \log p(\textbf{y}_i \mid \varvec{\theta })) + \log p(\textbf{y}_{1:i}) - \log p(\textbf{y}_{1:i-1}). \end{aligned}$$
(9)

The R-VGA algorithm assumes a variational distribution of the form \(q_i(\varvec{\theta }) = N(\varvec{\mu }_i, \varvec{\Sigma }_i)\) and seeks parameters \(\varvec{\mu }_i\) and \(\varvec{\Sigma }_i\) that minimise (9). As the last two terms on the right-hand side of (9) do not depend on \(\varvec{\theta }\), the optimisation problem is equivalent to finding

$$\begin{aligned} \mathop {\mathrm {arg\,min}}\limits _{\varvec{\mu }_i, \varvec{\Sigma }_i} \, \mathbb {E}_{q_i}(\log q_i(\varvec{\theta }) - \log q_{i-1} (\varvec{\theta }) - \log p(\textbf{y}_i \mid \varvec{\theta })). \end{aligned}$$
(10)

Differentiating the expectation (10) with respect to \(\varvec{\mu }_i\) and \(\varvec{\Sigma }_i\), setting the derivatives to zero, and rearranging the resulting equations, yields the following recursive updates for the variational mean \(\varvec{\mu }_i\) and precision matrix \(\varvec{\Sigma }_i^{-1}\):

$$\begin{aligned} \varvec{\mu }_i&= \varvec{\mu }_{i-1} + \varvec{\Sigma }_{i-1} \nabla _{\varvec{\mu }_i} \mathbb {E}_{q_i}(\log p(\textbf{y}_i \mid \varvec{\theta })), \end{aligned}$$
(11)
$$\begin{aligned} \varvec{\Sigma }_i^{-1}&= \varvec{\Sigma }_{i-1}^{-1} - 2\nabla _{\varvec{\Sigma }_i} \mathbb {E}_{q_i}(\log p(\textbf{y}_i \mid \varvec{\theta })). \end{aligned}$$
(12)

Then, using Bonnet’s Theorem (Bonnet 1964) on (11) and Price’s Theorem (Price 1958) on (12), we rewrite the gradient terms as

$$\begin{aligned} \nabla _{\varvec{\mu }_i} \mathbb {E}_{q_i}(\log p(\textbf{y}_i \mid \varvec{\theta }))&= \mathbb {E}_{q_i}(\nabla _{\varvec{\theta }} \log p(\textbf{y}_i \mid \varvec{\theta })), \end{aligned}$$
(13)
$$\begin{aligned} \nabla _{\varvec{\Sigma }_i} \mathbb {E}_{q_i}(\log p(\textbf{y}_i \mid \varvec{\theta }))&= \frac{1}{2} \mathbb {E}_{q_i}(\nabla _{\varvec{\theta }}^2 \log p(\textbf{y}_i \mid \varvec{\theta })). \end{aligned}$$
(14)

Thus the updates (11) and (12) become

$$\begin{aligned} \varvec{\mu }_i&= \varvec{\mu }_{i-1} + \varvec{\Sigma }_{i-1} \mathbb {E}_{q_i}(\nabla _{\varvec{\theta }} \log p(\textbf{y}_i \mid \varvec{\theta })), \end{aligned}$$
(15)
$$\begin{aligned} \varvec{\Sigma }_i^{-1}&= \varvec{\Sigma }_{i-1}^{-1} - \mathbb {E}_{q_i}(\nabla _\theta ^2 \log p(\textbf{y}_i \mid \varvec{\theta })). \end{aligned}$$
(16)

These updates are implicit as they require the evaluation of expectations with respect to \(q_i(\varvec{\theta })\). Under the assumption that \(q_i(\varvec{\theta })\) is close to \(q_{i-1}(\varvec{\theta })\), Lambert et al. (2022) propose replacing \(q_i(\varvec{\theta })\) with \(q_{i-1}(\varvec{\theta })\) in (15) and (16), and replacing \(\varvec{\Sigma }_{i-1}\) with \(\varvec{\Sigma }_i\) on the right hand side of (15), to yield an explicit scheme

$$\begin{aligned} \varvec{\mu }_i&= \varvec{\mu }_{i-1} + \varvec{\Sigma }_i \mathbb {E}_{q_{i-1}}(\nabla _{\varvec{\theta }} \log p(\textbf{y}_i \mid \varvec{\theta })), \end{aligned}$$
(17)
$$\begin{aligned} \varvec{\Sigma }_i^{-1}&= \varvec{\Sigma }_{i-1}^{-1} - \mathbb {E}_{q_{i-1}}(\nabla _{\varvec{\theta }}^2 \log p(\textbf{y}_i \mid \varvec{\theta })). \end{aligned}$$
(18)

Equations (17) and (18) form the so-called R-VGA algorithm of Lambert et al. (2022).
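To show how (17) and (18) translate into code, the following R sketch runs the explicit R-VGA recursion for a Bayesian linear regression with known noise variance, for which the gradient and Hessian of \(\log p(\textbf{y}_i \mid \varvec{\theta })\) are available in closed form; the expectations under \(q_{i-1}\) are approximated with Monte Carlo samples of \(\varvec{\theta }\), and all dimensions, priors, and parameter values are illustrative assumptions.

```r
## Sketch of the explicit R-VGA recursion (17)-(18) for linear regression
## y_i ~ N(x_i' theta, sigma2) with sigma2 known, where
## grad_i(theta) = x_i (y_i - x_i' theta) / sigma2 and hess_i = -x_i x_i' / sigma2.
set.seed(1)
d <- 3; N <- 500; S <- 100; sigma2 <- 1
theta_true <- c(1, -2, 0.5)
X <- matrix(rnorm(N * d), N, d)
y <- drop(X %*% theta_true) + rnorm(N, 0, sqrt(sigma2))

mu <- rep(0, d); Prec <- diag(d) / 10            # prior N(0, 10 I), stored as a precision matrix
for (i in seq_len(N)) {
  Sig_prev <- solve(Prec)
  thetas <- mvtnorm::rmvnorm(S, mean = mu, sigma = Sig_prev)  # theta^(l) ~ q_{i-1}(theta)
  x_i <- X[i, ]
  grads <- (y[i] - thetas %*% x_i) %*% t(x_i) / sigma2        # S x d matrix of per-sample gradients
  E_grad <- colMeans(grads)                                   # approximates E_{q_{i-1}}(gradient)
  E_hess <- -tcrossprod(x_i) / sigma2                         # Hessian does not depend on theta here
  Prec <- Prec - E_hess                                       # update (18)
  mu <- mu + solve(Prec, E_grad)                              # update (17): mu_{i-1} + Sigma_i E_grad
}
mu                                               # approximate posterior mean after one pass
```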

We note that an “order 1 form” of the R-VGA algorithm exists, which allows the variational precision matrix to be updated using only first-order derivatives of the log-likelihood, without the need for the Hessian matrix. However, these updates are implicit and not directly implementable. Corollary 1 of Lambert et al. (2022) provides more details on this Hessian-free form.

2.3 R-VGAL

The R-VGA updates in (17) and (18) require the gradient \(\nabla _\theta \log p(\textbf{y}_i \mid \varvec{\theta })\) and Hessian \({\nabla _\theta ^2 \log p(\textbf{y}_i \mid \varvec{\theta })}\) of the “partial” log-likelihood for the ith observation. However, for the GLMMs discussed in Sect. 2.1, these quantities are usually not available in closed form, as evaluation of the partial log-likelihood involves an intractable integral over the random effects \(\varvec{\alpha }_i\). Our R-VGAL algorithm circumvents this issue by replacing \(\nabla _\theta \log p(\textbf{y}_i \mid \varvec{\theta })\) and \(\nabla ^2_\theta \log p(\textbf{y}_i \mid \varvec{\theta })\) with unbiased estimates, which we denote by \(\widehat{\nabla _\theta \log p}(\textbf{y}_i \mid \varvec{\theta })\) and \(\widehat{\nabla ^2_\theta \log p}(\textbf{y}_i \mid \varvec{\theta })\), respectively. These unbiased estimates are obtained using an importance-sampling-based approach applied to Fisher’s and Louis’ identities (Cappé et al. 2005), which we discuss in more detail in Sects. 2.3.1 and 2.3.2. We summarise the R-VGAL algorithm in Algorithm 1.

To approximate the expectations with respect to \(q_{i-1}(\varvec{\theta })\) in the updates of the variational mean and precision matrix in Algorithm 1, we generate Monte Carlo samples, \({\varvec{\theta }^{(l)} \sim q_{i-1}(\varvec{\theta })}\), \(l = 1, \dots , S\), and compute

$$\begin{aligned} \mathbb {E}_{q_{i-1}}(\nabla _{\varvec{\theta }} \log p(\textbf{y}_i \mid \varvec{\theta }))&\approx \frac{1}{S} \sum _{l=1}^{S} \widehat{\nabla _\theta \log p}(\textbf{y}_i \mid \varvec{\theta }^{(l)}), \\ \mathbb {E}_{q_{i-1}}(\nabla ^2_{\varvec{\theta }} \log p(\textbf{y}_i \mid \varvec{\theta }))&\approx \frac{1}{S} \sum _{l=1}^{S} \widehat{\nabla ^2_\theta \log p}(\textbf{y}_i \mid \varvec{\theta }^{(l)}), \end{aligned}$$

for \(i = 1, \dots , N\).

Algorithm 1: R-VGAL

The following sections discuss approaches to obtain unbiased estimates of the gradient and the Hessian of the log-likelihood with respect to the parameters.

2.3.1 Approximation of the gradient with Fisher’s identity

Consider the ith iteration. Fisher’s identity (Cappé et al. 2005) for the gradient of \(\log p(\textbf{y}_{i} \mid \varvec{\theta })\) is

$$\begin{aligned} \nabla _\theta \log p(\textbf{y}_i \mid \varvec{\theta }) = \int p(\varvec{\alpha }_i \mid \textbf{y}_i, \varvec{\theta }) \nabla _\theta \log p(\textbf{y}_i, \varvec{\alpha }_i \mid \varvec{\theta }) \, \textrm{d}\varvec{\alpha }_i.\nonumber \\ \end{aligned}$$
(19)

If it is possible to sample directly from \(p(\varvec{\alpha }_i \mid \textbf{y}_i, \varvec{\theta })\) (as it is, e.g., with the linear random effects model in Sect. 3.1), the integral in (19) can be approximated by

$$\begin{aligned}{} & {} \nabla _\theta \log p(\textbf{y}_i \mid \varvec{\theta }) \approx \frac{1}{S_\alpha } \sum _{s=1}^{S_\alpha } \nabla _\theta \log p(\textbf{y}_i, \varvec{\alpha }_i^{(s)} \mid \varvec{\theta }),\nonumber \\{} & {} \quad \varvec{\alpha }_i^{(s)} \sim p(\varvec{\alpha }_i \mid \textbf{y}_i, \varvec{\theta }). \end{aligned}$$
(20)

In the case where direct sampling from \(p(\varvec{\alpha }_i \mid \textbf{y}_i, \varvec{\theta })\) is difficult, we use importance sampling (e.g., Tokdar and Kass 2010) to estimate the gradient of the log-likelihood in (19). Specifically, we draw samples \({\{\varvec{\alpha }_i^{(s)}: s = 1, \dots , S_\alpha \}}\) from an importance distribution \(r(\varvec{\alpha }_i \mid \textbf{y}_i, \varvec{\theta })\), and then compute the weights

$$\begin{aligned} w^{(s)}_i = \frac{p(\textbf{y}_i \mid \varvec{\alpha }_i^{(s)}, \varvec{\theta }) p(\varvec{\alpha }_i^{(s)} \mid \varvec{\theta })}{r(\varvec{\alpha }_i^{(s)} \mid \textbf{y}_i, \varvec{\theta })}, \quad s = 1, \dots , S_\alpha . \end{aligned}$$

The gradient of the log-likelihood is then approximated as

$$\begin{aligned} \nabla _\theta \log p(\textbf{y}_i \mid \varvec{\theta }) \approx \sum _{s=1}^{S_\alpha } \bar{w}_i^{(s)} \nabla _\theta \log p(\textbf{y}_i, \varvec{\alpha }_i^{(s)} \mid \varvec{\theta }), \end{aligned}$$
(21)

where \(\mathcal {W}_i \equiv \{\bar{w}^{(s)}_i: s = 1, \dots , S_\alpha \}\) are the normalised weights given by

$$\begin{aligned} \bar{w}^{(s)}_i = \frac{w^{(s)}_i}{\sum _{q=1}^{S_\alpha } w^{(q)}_i}, \quad s = 1, \dots , S_\alpha . \end{aligned}$$

One possible choice for the importance distribution is the distribution of the random effects, that is, \(p(\varvec{\alpha }_i \mid \varvec{\theta })\). In this case, the weights \(\mathcal {W}_i\) reduce to

$$\begin{aligned} w^{(s)}_i = p(\textbf{y}_i \mid \varvec{\alpha }_i^{(s)}, \varvec{\theta }), \quad s = 1, \dots , S_\alpha . \end{aligned}$$

We use this importance distribution in all of the case studies illustrated in Sect. 3.
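The sketch below implements this importance-sampling estimator of the gradient for a logistic random-intercept model, with \(p(\varvec{\alpha }_i \mid \varvec{\theta })\) as the importance distribution so that the weights reduce to the conditional likelihoods; the group size, data, and parameter values are illustrative, and \(\phi _\tau = \log \tau ^2\) is the unconstrained variance parameter used later in Sect. 3.

```r
## Sketch: estimate grad_theta log p(y_i | theta) via Fisher's identity (21)
## for a logistic random-intercept model, using p(alpha_i | theta) as the proposal.
set.seed(1)
S_alpha <- 100
beta <- c(-1.5, 1.5); phi_tau <- log(0.9^2)          # theta = (beta, phi_tau)
x_i <- cbind(1, rnorm(8)); y_i <- rbinom(8, 1, 0.5)  # one group's data (illustrative)

alpha_s <- rnorm(S_alpha, 0, sqrt(exp(phi_tau)))     # alpha_i^(s) ~ p(alpha_i | theta)
w <- sapply(alpha_s, function(a) {                   # w^(s) = p(y_i | alpha_i^(s), theta)
  p <- plogis(drop(x_i %*% beta) + a)
  prod(dbinom(y_i, 1, p))
})
w_bar <- w / sum(w)                                  # normalised weights

## Gradient of the joint log-likelihood log p(y_i, alpha_i | theta) w.r.t. (beta, phi_tau)
grad_joint <- function(a) {
  p <- plogis(drop(x_i %*% beta) + a)
  c(colSums(x_i * (y_i - p)),                        # d/d beta
    -0.5 + a^2 / (2 * exp(phi_tau)))                 # d/d phi_tau of log N(a; 0, exp(phi_tau))
}
grad_hat <- Reduce(`+`, Map(function(a, wt) wt * grad_joint(a), alpha_s, w_bar))
grad_hat                                             # importance-sampling estimate of the score
```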

2.3.2 Approximation of the Hessian with Louis’ identity

Consider again the ith iteration. Louis’ identity (Cappé et al. 2005) for the Hessian \(\nabla ^2_\theta \log p(\textbf{y}_i \mid \varvec{\theta })\) is

$$\begin{aligned} - \nabla ^2_\theta \log p(\textbf{y}_i \mid \varvec{\theta })= & {} \nabla _\theta \log p(\textbf{y}_i \mid \varvec{\theta }) \nabla _\theta \log p(\textbf{y}_i \mid \varvec{\theta })^\top \nonumber \\{} & {} \quad - \frac{\nabla ^2_\theta p(\textbf{y}_i \mid \varvec{\theta })}{p(\textbf{y}_i \mid \varvec{\theta })}, \end{aligned}$$
(22)

where

$$\begin{aligned} \frac{\nabla ^2_\theta p(\textbf{y}_i \mid \varvec{\theta })}{p(\textbf{y}_i \mid \varvec{\theta })} =&\int p(\varvec{\alpha }_i \mid \textbf{y}_i, \varvec{\theta }) \nabla _\theta \log p(\textbf{y}_i, \varvec{\alpha }_i \mid \varvec{\theta }) \nabla _\theta \log p(\textbf{y}_i, \varvec{\alpha }_i \mid \varvec{\theta })^\top \, \textrm{d}\varvec{\alpha }_i \nonumber \\&+ \int p(\varvec{\alpha }_i \mid \textbf{y}_i, \varvec{\theta }) \nabla ^2_\theta \log p(\textbf{y}_i, \varvec{\alpha }_i \mid \varvec{\theta }) \, \textrm{d}\varvec{\alpha }_i . \end{aligned}$$
(23)

The first term on the right-hand side of (22) is obtained using Fisher’s identity, as discussed in Sect. 2.3.1. The second term consists of two integrals (see (23)), which can also be approximated using samples. Specifically,

$$\begin{aligned}&\frac{\nabla ^2_\theta p(\textbf{y}_i \mid \varvec{\theta })}{p(\textbf{y}_i \mid \varvec{\theta })} \\&\quad \approx \frac{1}{S_\alpha } \sum _{s = 1}^{S_\alpha }\left( \nabla _\theta \log p(\textbf{y}_i, \varvec{\alpha }_i^{(s)} \mid \varvec{\theta }) \nabla _\theta \log p(\textbf{y}_i, \varvec{\alpha }_i^{(s)} \mid \varvec{\theta })^\top \right. \\&\qquad \left. + \nabla ^2_\theta \log p(\textbf{y}_i, \varvec{\alpha }_i^{(s)} \mid \varvec{\theta }) \right) , \end{aligned}$$

where \(\varvec{\alpha }_i^{(s)} \sim p(\varvec{\alpha }_i \mid \textbf{y}_i, \varvec{\theta })\) for \(s = 1, \dots , S_\alpha \). If obtaining samples from \(p(\varvec{\alpha }_i \mid \textbf{y}_i, \varvec{\theta })\) is not straightforward, importance sampling (as in Sect. 2.3.1) can be used instead. Following Nemeth et al. (2016), for computational efficiency, we use the same samples \(\{\varvec{\alpha }_i^{(s)}: s = 1, \dots , S_\alpha \}\) that were used to approximate the score using Fisher’s identity and their corresponding normalised weights \(\mathcal {W}_i\) to obtain the estimates of the second term in Louis’ identity. Then

$$\begin{aligned}&\frac{\nabla ^2_\theta p(\textbf{y}_i \mid \varvec{\theta })}{p(\textbf{y}_i \mid \varvec{\theta })} \\&\quad \approx \sum _{s=1}^{S_\alpha } \bar{w}^{(s)}_i\left( \nabla _\theta \log p(\textbf{y}_i, \varvec{\alpha }_i^{(s)} \mid \varvec{\theta }) \nabla _\theta \log p(\textbf{y}_i, \varvec{\alpha }_i^{(s)} \mid \varvec{\theta })^\top \right. \\&\qquad \left. + \nabla ^2_\theta \log p(\textbf{y}_i, \varvec{\alpha }_i^{(s)} \mid \varvec{\theta }) \right) . \end{aligned}$$
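Continuing the sketch from Sect. 2.3.1, the Hessian estimate from Louis’ identity can be assembled from the same weighted samples; here `grad_joint` and a hypothetical `hess_joint` (returning \(\nabla ^2_\theta \log p(\textbf{y}_i, \varvec{\alpha }_i \mid \varvec{\theta })\)) are assumed to be supplied for the model at hand.

```r
## Sketch: estimate the Hessian of log p(y_i | theta) via Louis' identity (22),
## reusing the samples alpha_s and normalised weights w_bar from Fisher's identity.
## grad_joint(a) and hess_joint(a) are (hypothetical) model-specific functions returning
## the gradient and Hessian of the joint log-likelihood log p(y_i, a | theta).
louis_hessian <- function(alpha_s, w_bar, grad_joint, hess_joint) {
  d <- length(grad_joint(alpha_s[1]))
  grad_hat <- rep(0, d)                       # weighted score estimate (Fisher's identity)
  second_term <- matrix(0, d, d)              # estimate of (grad^2 p) / p in (23)
  for (s in seq_along(alpha_s)) {
    g_s <- grad_joint(alpha_s[s])
    grad_hat <- grad_hat + w_bar[s] * g_s
    second_term <- second_term +
      w_bar[s] * (tcrossprod(g_s) + hess_joint(alpha_s[s]))
  }
  ## Louis' identity: -Hessian = grad_hat grad_hat' - second_term
  -(tcrossprod(grad_hat) - second_term)
}
```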

2.4 Damped R-VGAL

A possible problem with R-VGAL is its instability over the first few observations, which makes it sensitive to the ordering of the observations. In Section S3 of the online supplement, we run the R-VGAL algorithm on a dataset in its original order and on a random reordering of the observations, and find that the R-VGAL parameter estimates from the two runs differ. Figures S13 and S14 in Section S3 show that the first few observations can heavily influence the trajectory of the variational mean. Here, we propose a damping approach to stabilise the R-VGAL algorithm during its initial steps.

In damped R-VGAL, the update of the mean and precision matrix for each observation is split into K steps, where K is selected on a case-by-case basis. In each step, we multiply the gradient and the Hessian of \(\log p(\textbf{y}_i \mid \varvec{\theta })\) by a factor \(a= \frac{1}{K}\) (which acts as a “step size”), and then update the variational parameters K times during the ith iteration. Intuitively, one observation is thus split into K “parts” that are incorporated into the updates one part at a time. Using a smaller step size helps stabilise the R-VGAL algorithm, particularly over the first few observations. Section S3 of the online supplement shows that damping the first few iterations makes the R-VGAL algorithm more robust to different orderings of the data.
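A sketch of the damped update for a single observation is given below; `E_grad_fn` and `E_hess_fn` are hypothetical model-specific functions returning the Monte Carlo estimates of \(\mathbb {E}_{q}(\nabla _{\varvec{\theta }} \log p(\textbf{y}_i \mid \varvec{\theta }))\) and \(\mathbb {E}_{q}(\nabla ^2_{\varvec{\theta }} \log p(\textbf{y}_i \mid \varvec{\theta }))\) under the current variational distribution.

```r
## Sketch: damped R-VGAL update for one observation, split into K steps of size a = 1/K.
## E_grad_fn(mu, Sigma) and E_hess_fn(mu, Sigma) are hypothetical functions returning
## Monte Carlo estimates of the expected gradient and Hessian under N(mu, Sigma).
damped_update <- function(mu, Prec, E_grad_fn, E_hess_fn, K = 4) {
  a <- 1 / K
  for (k in seq_len(K)) {
    Sigma <- solve(Prec)
    g <- E_grad_fn(mu, Sigma)
    H <- E_hess_fn(mu, Sigma)
    Prec <- Prec - a * H               # damped version of update (18)
    mu <- mu + solve(Prec, a * g)      # damped version of update (17)
  }
  list(mu = mu, Prec = Prec)
}
```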

The damped R-VGAL approach we present here is inspired by the so-called damped Newton’s method. In the case where the model is linear and the likelihood is Gaussian, the original R-VGA algorithm, upon which R-VGAL is based, can be shown to be equivalent to an online version of Newton’s method; see Appendix 8.2 of Lambert et al. (2022) for a proof. Newton’s method seeks the minimiser of a continuously differentiable function \(f: \mathbb {R}^d \rightarrow \mathbb {R}, d \in \mathbb {N}\), by beginning with some starting value \(\textbf{u}_0 \in \mathbb {R}^d\) and sequentially minimising the quadratic approximation of the function \(f(\cdot )\) around the current value in order to find the next value:

$$\begin{aligned} \textbf{u}_{k+1}{} & {} = \mathop {\mathrm {arg\,min}}\limits _{\textbf{u}} f(\textbf{u}_k) + \nabla _\textbf{u} f(\textbf{u}_k)^\top (\textbf{u}-\textbf{u}_k) \\{} & {} \quad + \frac{1}{2} (\textbf{u}- \textbf{u}_k)^\top \nabla ^2_\textbf{u} f(\textbf{u}_k)(\textbf{u}- \textbf{u}_k), \quad k = 0,1,2,\ldots . \end{aligned}$$

Provided that \(\nabla ^2_\textbf{u} f(\textbf{u}_k)\) is positive definite, this quadratic approximation has a unique minimiser, which yields the iteration

$$\begin{aligned} \textbf{u}_{k+1} = \textbf{u}_{k} - (\nabla ^2_\textbf{u} f(\textbf{u}_k))^{-1} \nabla _\textbf{u} f(\textbf{u}_k), \quad k = 0,1,2,\ldots . \end{aligned}$$
(24)

These iterations stop when \(\Vert \nabla _\textbf{u} f(\textbf{u}_{k+1})\Vert \le \epsilon _0\), where \(\epsilon _0\) is some small tolerance parameter. Often, in practice, Newton’s method is modified to include a step size \(0 < \rho \le 1\) to improve convergence:

$$\begin{aligned} \textbf{u}_{k+1} = \textbf{u}_{k} - \rho (\nabla ^2_\textbf{u} f(\textbf{u}_k))^{-1} \nabla _\textbf{u} f(\textbf{u}_k), \quad k = 0,1,2,\ldots ,\nonumber \\ \end{aligned}$$
(25)

resulting in the damped Newton’s method. This step size \(\rho \) is similar to the multiplicative factor a in our damped R-VGAL approach.

We also note that, in the case where the model is linear or when the likelihood function comes from an exponential family and the model is linearised, the R-VGA algorithm of Lambert et al. (2022) is equivalent to an online natural gradient algorithm with step size \(\frac{1}{1+t}\), where t denotes the iteration. A proof of this equivalence can be found in Appendix 8.3 of Lambert et al. (2022). Viewed from the perspective of natural gradient optimisation, the damping factor a in damped R-VGAL can be interpreted as a reduction of the step size in natural gradient updates.

We summarise the damped R-VGAL algorithm in Algorithm 2.

Algorithm 2: Damped R-VGAL

3 Applications of R-VGAL

In this section, we apply R-VGAL to estimate parameters in linear, logistic and Poisson mixed models using three simulated datasets and two real datasets: the Six City dataset from Fitzmaurice and Laird (1993), and the Polypharmacy dataset from Hosmer et al. (2013). The linear and logistic models have univariate random effects, while the Poisson model has bivariate random effects. There are two additional examples in Section S6 of the online supplement: a real data example with the Poisson model applied to the Epilepsy dataset from Thall and Vail (1990), and a synthetic data example with a high number of observations simulated from the logistic mixed model.

We validate R-VGAL against Hamiltonian Monte Carlo (HMC, Neal 2011; Betancourt and Girolami 2015), which is implemented using the Stan programming language (Stan Development Team 2023) in R (R Core Team 2022). In the examples with real data, the true parameters are unknown. We instead compute the maximum likelihood estimates of the parameters using the R package lme4 (Bates et al. 2015), and also treat results from HMC as the “ground truth”, as HMC provides samples from the exact posterior distributions. For all examples, we run 2 HMC chains for 15,000 iterations each, and discard the first 5000 iterations from each chain as burn-in. We find that the effective sample sizes are high and the \(\hat{R}\) statistics are close to 1 for all examples, indicating that the HMC chains are well-mixed and have converged; see Section S5 of the online supplement for further details. Reproducible R code for all examples is available at https://github.com/bao-anh-vu/R-VGAL.

For all applications in this paper, we use the damped R-VGAL algorithm described in Sect. 2.4. We show in Section S3 of the online supplement that damping makes the algorithm more robust to different orderings of the observations. The values of \(n_{damp}\) and K used in damping should be kept as small as possible to limit the extra computational overhead, while being sufficiently large to reduce the instability of the R-VGAL algorithm in its initial stages. In our applications, we experimented with a few different settings of \(n_{damp}\) and K and plotted the trajectories of the variational mean under those settings. We found that the trajectories were most unstable during the first 10 observations, so we chose \(n_{damp} = 10\) observations and \(K = 4\) steps to reduce the initial instability at the expense of a small additional computational cost. These values are used throughout our examples. Adaptive schemes for selecting \(n_{damp}\) and K are left for future research.

3.1 Linear mixed effect model

In this example, we generate data from a linear mixed model with \(N = 200\) groups and \(n = 10\) responses per group. The jth response from the ith group is modelled as

$$\begin{aligned} y_{ij}= & {} \textbf{x}_{ij}^\top \varvec{\beta }+ z_{ij} \alpha _i + \epsilon _{ij}, \quad \alpha _i \sim N(0, \sigma _\alpha ^2), \nonumber \\ \epsilon _{ij}\sim & {} N(0, \sigma _\epsilon ^2), \end{aligned}$$
(26)

for \(i = 1, \dots , N\) and \( j = 1, \dots , n\), where \(\textbf{x}_{ij}\) is drawn from a \(N(\textbf{0}, \textbf{I}_4)\) distribution and \(z_{ij}\) is drawn from a N(0, 1) distribution. For this example, we did not include an intercept term, but it can be added if necessary. The true parameter values are \(\varvec{\beta }= (-1.5, 1.5, 0.5, 0.25)^\top \), \(\sigma _\alpha = 0.9\), and \(\sigma _\epsilon = 0.7\). Since R-VGAL uses a multivariate normal distribution as the variational approximation, we consider the log-transformed variables \(\phi _\alpha \equiv \log (\sigma _{\alpha }^2)\) and \(\phi _\epsilon \equiv \log (\sigma _{\epsilon }^2)\) so that \(\phi _\alpha \) and \(\phi _\epsilon \) are unconstrained. We then make inference on the parameters \(\varvec{\theta }= (\varvec{\beta }^\top , \phi _\alpha , \phi _\epsilon )^\top \) using R-VGAL.
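For reference, a minimal sketch of this simulation setup (with an arbitrary seed) is:

```r
## Sketch: simulate data from the linear mixed model (26) with the stated settings.
set.seed(1)
N <- 200; n <- 10
beta <- c(-1.5, 1.5, 0.5, 0.25); sigma_alpha <- 0.9; sigma_eps <- 0.7
alpha <- rnorm(N, 0, sigma_alpha)                # random effects alpha_i ~ N(0, sigma_alpha^2)
sim <- lapply(seq_len(N), function(i) {
  X_i <- matrix(rnorm(n * 4), n, 4)              # x_ij ~ N(0, I_4)
  z_i <- rnorm(n)                                # z_ij ~ N(0, 1)
  y_i <- drop(X_i %*% beta) + z_i * alpha[i] + rnorm(n, 0, sigma_eps)
  list(X = X_i, z = z_i, y = y_i)
})
```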

At the group level, the linear mixed model is

$$\begin{aligned} \textbf{y}_i = \textbf{X}_i \varvec{\beta }+ \textbf{z}_i \alpha _i + \varvec{\varepsilon }_i, \quad i = 1, \dots , N, \end{aligned}$$
(27)

where \(\textbf{y}_i \equiv (y_{i1}, \dots , y_{in})^\top \), \(\textbf{X}_i \equiv (\textbf{x}_{i1}, \dots , \textbf{x}_{in})^\top , \textbf{z}_i \equiv (z_{i1}, \dots , z_{in})^\top \), and \(\varvec{\varepsilon }_i \equiv (\epsilon _{i1}, \dots , \epsilon _{in})^\top \). At each iteration, \(i = 1, \dots , N\), the R-VGAL algorithm makes use of the “partial” likelihood of the observations from the ith group, \(p(\textbf{y}_i \mid \varvec{\theta }) = N(\varvec{\mu }_{y \mid \theta }, \varvec{\Sigma }_{y \mid \theta })\), where \(\varvec{\mu }_{y \mid \theta } = \textbf{X}_i \varvec{\beta }\) and \(\varvec{\Sigma }_{y \mid \theta } = \sigma _\alpha ^2 \textbf{z}_i \textbf{z}_i^\top + \sigma _{\epsilon }^2 \textbf{I}_n\). For this model, the gradient and Hessian of \(\log p(\textbf{y}_i \mid \varvec{\theta })\) with respect to each of the parameters are available in closed form; see Section S1.1 of the online supplement. In this case, we are therefore able to compare the accuracy of R-VGAL implemented using approximate gradients and Hessians with that of R-VGAL implemented using exact gradients and Hessians.

The prior distribution we use, which is also the “initial” variational distribution, is

$$\begin{aligned} p(\varvec{\theta }) = q_0(\varvec{\theta }) = N \left( \begin{bmatrix} \textbf{0} \\ \log (0.5^2) \\ \log (0.5^2) \end{bmatrix}, \begin{bmatrix} 10\textbf{I}_4 &{} \textbf{0} &{} \textbf{0} \\ \textbf{0}^\top &{} 1 &{} 0 \\ \textbf{0}^\top &{} 0 &{} 1 \\ \end{bmatrix} \right) . \end{aligned}$$
(28)

A \(N(\log (0.5^2), 1)\) prior distribution for \(\phi _\alpha \) and \(\phi _\epsilon \) is equivalent to a log-normal prior distribution with mean 0.41 and variance 0.29 for both \(\sigma _\alpha ^2\) and \(\sigma _\epsilon ^2\). Using this prior distribution, the 2.5th and 97.5th percentiles for both \(\sigma _\alpha ^2\) and \(\sigma _\epsilon ^2\) are (0.035, 1.775).
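These prior summaries can be verified directly, since \(\phi _\alpha = \log (\sigma _\alpha ^2) \sim N(\log (0.5^2), 1)\) implies a log-normal prior for \(\sigma _\alpha ^2\) (and similarly for \(\sigma _\epsilon ^2\)); a quick check in R:

```r
## Check the implied log-normal prior on sigma_alpha^2 and sigma_eps^2:
## phi = log(sigma^2) ~ N(log(0.5^2), 1)  =>  sigma^2 ~ log-normal(log(0.25), 1).
mu_log <- log(0.5^2); sd_log <- 1
exp(mu_log + sd_log^2 / 2)                                 # prior mean, approx. 0.41
(exp(sd_log^2) - 1) * exp(2 * mu_log + sd_log^2)           # prior variance, approx. 0.29
qlnorm(c(0.025, 0.975), meanlog = mu_log, sdlog = sd_log)  # approx. (0.035, 1.775)
```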

At each iteration \(i = 1, \dots , 200\), we use \(S_\alpha = 100\) Monte Carlo samples (of \(\alpha _i\)) to approximate the gradient and Hessian of \(\log p(\textbf{y}_i \mid \varvec{\theta })\) using Fisher’s and Louis’ identities. We use \(S = 100\) Monte Carlo samples (of \(\varvec{\theta }\)) to approximate the expectations with respect to \(q_{i-1}(\varvec{\theta })\) in the R-VGAL updates of the mean and precision matrix. These values were chosen based on an experimental study on the effect of S and \(S_\alpha \) on the posterior estimates of R-VGAL in Section S2 of the online supplement.

We validate R-VGAL against HMC, which we implemented in Stan. Figure 1 shows the marginal posterior distributions of the parameters, along with bivariate posterior distributions as estimated using R-VGAL with approximate gradients and Hessians, R-VGAL with exact gradients and Hessians, and HMC. The posterior distributions obtained using R-VGAL are clearly very similar to those obtained using HMC, irrespective of whether exact or approximate gradients and Hessians are used.

Fig. 1: Exact posterior distributions (from HMC, in blue) and approximate posterior distributions (from R-VGAL with estimated gradients and Hessians in red, and from R-VGAL with exact gradients and Hessians in yellow) for the linear mixed model experiment. Diagonal panels: Marginal posterior distributions with true parameters denoted using dotted lines. Off-diagonal panels: Bivariate posterior distributions with true parameters denoted using the symbol \(\times \).

3.2 Logistic mixed effect model

In this example, we generate simulated data from a random effects logistic regression model with \(N = 500\) groups and \(n = 10\) responses per group. The random effect logistic regression model we use is

$$\begin{aligned} y_{ij}&\sim \text {Bernoulli} (\pi _{ij}),\nonumber \\ \pi _{ij}&= p(y_{ij} = 1 \mid \varvec{\beta }, \alpha _i) = \frac{\exp (\textbf{x}_{ij}^\top \varvec{\beta }+ \alpha _i)}{1 + \exp (\textbf{x}_{ij}^\top \varvec{\beta }+ \alpha _i)}, \nonumber \\ \alpha _i&\sim N(0, \tau ^2), \end{aligned}$$
(29)

where \(\textbf{x}_{ij}\) is drawn from a \(N(\textbf{0}, \textbf{I}_4)\) distribution, for \(i = 1, \dots , N\) and \(j = 1, \dots , n\). For this example, we did not include an intercept term, but it can be added if necessary. The true parameter values are \(\varvec{\beta }= (-1.5, 1.5, 0.5, 0.25)^\top \) and \(\tau = 0.9\).

As in the linear case, although the parameters of the model are \(\varvec{\beta }\) and \(\tau \), we work with \(\varvec{\theta }= (\varvec{\beta }^\top , \phi _\tau )^\top \) where \(\phi _\tau \equiv \log (\tau ^2)\). The gradient and Hessian of the “partial” log-likelihood \(\log p(\textbf{y}_i \mid \varvec{\theta })\) in this model are not analytically tractable, but can be estimated unbiasedly using Fisher’s and Louis’ identities as discussed in Sects. 2.3.1 and 2.3.2. These identities require the expressions for \({\nabla _\theta \log p(\textbf{y}_i, \alpha _i \mid \varvec{\theta })}\) and \({\nabla ^2_\theta \log p(\textbf{y}_i, \alpha _i \mid \varvec{\theta })}\), which are provided in Section S1.2 of the online supplement.

The prior distribution we use, which is also the “initial” variational distribution, is

$$\begin{aligned} p(\varvec{\theta }) = q_0(\varvec{\theta }) = N \left( \begin{bmatrix} \textbf{0} \\ \log (0.5^2) \end{bmatrix}, \begin{bmatrix} 10\textbf{I}_4 &{} \textbf{0}\\ \textbf{0}^\top &{} 1 \end{bmatrix} \right) . \end{aligned}$$
(30)

A \(N(\log (0.5^2), 1)\) prior distribution for \(\phi _\tau \) is equivalent to a log-normal prior distribution with mean 0.41 and variance 0.29 for \(\tau ^2\). The prior 2.5th and 97.5th percentiles for \(\tau ^2\) are (0.035, 1.775). At each iteration \(i = 1, \dots , 500\), we use \(S_\alpha = 100\) Monte Carlo samples (of \(\alpha _i\)) to approximate the gradient and Hessian of \(\log p(\textbf{y}_i \mid \varvec{\theta })\) during importance sampling, and \(S = 100\) samples (of \(\varvec{\theta }\)) to approximate the expectations with respect to \(q_{i-1}(\varvec{\theta })\) in the R-VGAL updates of the mean and precision matrix.

Figure 2 shows the marginal posterior distributions of the parameters, along with bivariate posterior distributions as estimated using R-VGAL and HMC. The posterior distributions obtained using R-VGAL are again very similar to those obtained using HMC.

Fig. 2: Exact posterior distributions from HMC (in blue) and approximate posterior distributions from R-VGAL with estimated gradients and Hessians (in red) for the logistic mixed model experiment. Diagonal panels: Marginal posterior distributions with true parameters denoted using dotted lines. Off-diagonal panels: Bivariate posterior distributions with true parameters denoted using the symbol \(\times \).

3.3 Poisson mixed model

We now apply R-VGAL to a model with bivariate random effects. For this example, we simulate data with \(N = 200\) groups and \(n = 10\) responses per group from the following Poisson mixed effect regression model:

$$\begin{aligned} y_{ij}\sim & {} \text {Poisson} (\lambda _{ij}), \nonumber \\ \lambda _{ij}= & {} \exp (\textbf{x}_{ij}^\top \varvec{\beta }+ \textbf{z}_{ij}^\top \varvec{\alpha }_i), \\{} & {} \quad \varvec{\alpha }_i \sim N(\textbf{0}, \varvec{\Sigma }_\alpha ), \end{aligned}$$

where \(\textbf{x}_{ij} \equiv (1, x_{ij,1})^\top \), with \(x_{ij, 1}\) drawn from a N(0, 1) distribution, and \(\textbf{z}_{ij} \equiv (1, z_{ij,1})^\top \), with \(z_{ij,1}\) drawn from a N(0, 1) distribution, for \(i = 1, \dots , N\) and \(j = 1, \dots , n\). We denote the fixed and random effects as \(\varvec{\beta }\equiv (\beta _0, \beta _1)^\top \) and \(\varvec{\alpha }_i \equiv (\alpha _{i, 1}, \alpha _{i, 2})^\top \), respectively. The true parameter values are

$$\begin{aligned} \varvec{\beta }= (-1.5, -0.5)^\top , \quad \varvec{\Sigma }_\alpha = \begin{bmatrix} 0.15 &{} 0.05 \\ 0.05 &{} 0.20 \end{bmatrix}. \end{aligned}$$

We parameterise \(\varvec{\Sigma }_\alpha = \textbf{L}\textbf{L}^\top \), where \(\textbf{L}\) denotes the lower Cholesky factor of \(\varvec{\Sigma }_\alpha \) and takes the form

$$\begin{aligned} \textbf{L}= \begin{bmatrix} \exp (\zeta _{11}) &{} 0 \\ \zeta _{21} &{} \exp (\zeta _{22}) \end{bmatrix}. \end{aligned}$$

In the algorithm, we consider the unconstrained parameters \(\varvec{\theta }= (\varvec{\beta }^\top , \varvec{\zeta }^\top )^\top \), where \(\varvec{\zeta }\equiv (\zeta _{11}, \zeta _{22}, \zeta _{21})^\top \). The gradient \(\nabla _\theta \log p(\textbf{y}_i, \varvec{\alpha }_i \mid \varvec{\theta })\) and Hessian \(\nabla ^2_\theta \log p(\textbf{y}_i, \varvec{\alpha }_i \mid \varvec{\theta })\), which are necessary in the computation of the gradient and Hessian of the group-specific log likelihood \(\log p(\textbf{y}_i \mid \varvec{\theta })\), are provided in Section S1.3 of the online supplement.
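As a small illustration of this parameterisation, the sketch below maps the unconstrained vector \(\varvec{\zeta }\) to \(\varvec{\Sigma }_\alpha \) and back; the numerical values are the true parameters used in this example.

```r
## Sketch: map the unconstrained parameters zeta = (zeta_11, zeta_22, zeta_21)
## to Sigma_alpha = L L' via the lower Cholesky factor L in the stated form.
zeta_to_Sigma <- function(zeta) {
  L <- matrix(c(exp(zeta[1]), zeta[3], 0, exp(zeta[2])), 2, 2)  # column-major: L is lower triangular
  L %*% t(L)
}
Sigma_alpha <- matrix(c(0.15, 0.05, 0.05, 0.20), 2, 2)          # true Sigma_alpha in this example
L <- t(chol(Sigma_alpha))                                       # lower Cholesky factor
zeta <- c(log(L[1, 1]), log(L[2, 2]), L[2, 1])
zeta_to_Sigma(zeta)                                             # reproduces Sigma_alpha
```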

We use the following prior/initial variational distribution:

$$\begin{aligned} p(\varvec{\theta }) = q_0(\varvec{\theta }) = N \left( \begin{bmatrix} \textbf{0} \\ \textbf{0} \end{bmatrix}, \begin{bmatrix} \textbf{I}_2 &{} \textbf{0}\\ \textbf{0}^\top &{} 0.1 \textbf{I}_3 \end{bmatrix} \right) . \end{aligned}$$

A N(0, 0.1) prior distribution for \(\zeta _{11}\), \(\zeta _{22}\) and \(\zeta _{21}\) leads to prior 2.5th and 97.5th percentiles of (0.290, 3.485) for \(\varvec{\Sigma }_{\alpha _{11}}\), (0.342, 3.577) for \(\varvec{\Sigma }_{\alpha _{22}}\), and \((-0.713, 0.713)\) for the off-diagonal entries \(\varvec{\Sigma }_{\alpha _{21}}\) and \(\varvec{\Sigma }_{\alpha _{12}}\).

As with the linear and logistic examples, we use \(S_\alpha = 100\) samples for the importance sampling step and \(S = 100\) samples for approximating the expectations with respect to \(q_{i-1}(\varvec{\theta })\) in the R-VGAL updates. Figure 3 shows the marginal posterior distributions of the parameters, along with bivariate posterior distributions, as estimated using R-VGAL and HMC. For all parameters, the R-VGAL and HMC posterior densities are very similar, although the posterior densities of \(\varvec{\Sigma }_{\alpha _{11}}\) from both methods appear slightly biased relative to the true value.

Fig. 3: Exact posterior distributions from HMC (in blue) and approximate posterior distributions from R-VGAL with estimated gradients and Hessians (in red) for the Poisson mixed model experiment. Diagonal panels: Marginal posterior distributions with true parameters denoted using dotted lines. Off-diagonal panels: Bivariate posterior distributions with true parameters denoted using the symbol \(\times \).

To assess the robustness of the results in these simulation studies, we also include repeated simulation studies on the linear, logistic and Poisson mixed models in Section S4 of the online supplement. For each of these models, we simulate 100 datasets using the same parameter settings, and compare the posterior estimates from R-VGAL and HMC on these simulated datasets. We find that the R-VGAL and HMC posterior estimates are very similar across simulations for the linear and logistic models, while for the Poisson model, the estimates from the two methods are close for most simulations, with only a few cases where estimates are slightly different. We also find that the posterior standard deviations from R-VGAL tend to be slightly smaller than those from HMC.

3.4 Real data examples

We now apply R-VGAL to two real datasets: the Six City dataset from Fitzmaurice and Laird (1993), and the Polypharmacy dataset from Hosmer et al. (2013).

For the Six City dataset, we follow Tran et al. (2017) and consider the random intercept logistic regression model

$$\begin{aligned} \log \left( \frac{\pi _{ij}}{1 - \pi _{ij}} \right)&= \beta _0 + \beta _{age} \texttt {Age}_{ij} + \beta _{smoke} \texttt {Smoke}_{ij}+ \alpha _i, \nonumber \\ \alpha _i&\sim N(0, \tau ^2), \end{aligned}$$
(31)

where \(\pi _{ij} \equiv p(y_{ij} = 1 \mid \varvec{\beta }, \alpha _i)\), with \(\varvec{\beta }\equiv (\beta _0, \beta _{age}, \beta _{smoke})^\top \), for \(i = 1, \dots , 537\) and \(j = 1, \dots , 4\). The binary response variable \(y_{ij} = 1\) if child i is wheezing at time point j, and 0 otherwise. The covariate \(\texttt {Age}_{ij}\) is the age of child i at time point j, centred at 9 years, while the covariate \(\texttt {Smoke}_{ij} = 1\) if the mother of child i is smoking at time point j, and 0 otherwise. Finally, \(\alpha _i\) is the random effect associated with the ith child. The parameters of the model are \(\varvec{\theta }= (\varvec{\beta }^\top , \phi _\tau )^\top \), where \(\phi _\tau \equiv \log (\tau ^2)\).
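For reference, the maximum likelihood fit used for comparison in this section could be obtained with lme4 along the following lines; the data frame and column names (`sixcity`, `wheeze`, `age`, `smoke`, `id`) are hypothetical placeholders for however the Six City data are stored.

```r
## Sketch (hypothetical column names): maximum likelihood fit of model (31) using lme4.
library(lme4)
fit <- glmer(wheeze ~ age + smoke + (1 | id),
             data = sixcity, family = binomial)
summary(fit)  # fixed-effect estimates and the random-intercept standard deviation tau
```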

For the Polypharmacy dataset, we consider the random intercept logistic regression model from Tan and Nott (2018):

$$\begin{aligned} \log \left( \frac{\pi _{ij}}{1 - \pi _{ij}} \right)&= \beta _0 + \beta _{gender} \texttt {Gender}_{i} + \beta _{race} \texttt {Race}_{i} \nonumber \\&+ \beta _{age} \texttt {Age}_{ij} + \beta _{M1} \texttt {MHV1}_{ij} + \beta _{M2} \texttt {MHV2}_{ij} \nonumber \\&+ \beta _{M3} \texttt {MHV3}_{ij} + \beta _{IM} \texttt {INPTMHV}_{ij}+ \alpha _i, \nonumber \\ \alpha _i&\sim N(0, \tau ^2), \end{aligned}$$
(32)

where \(\pi _{ij} \equiv \text {Pr}(y_{ij} = 1 \mid \varvec{\beta }, \alpha _i)\), \(\varvec{\beta }\equiv (\beta _0, \beta _{gender}, \beta _{race}, \beta _{age}, \beta _{M1}, \beta _{M2}, \beta _{M3}, \beta _{IM})^\top \), for \(i = 1, \dots , 500\) and \(j = 1, \dots , 7\). The response variable \(y_{ij}\) is 1 if subject i in year j is taking drugs from three or more different classes (of drugs), and 0 otherwise. The covariate \(\texttt {Gender}_i = 1\) if subject i is male, and 0 if female, while \(\texttt {Race}_i = 0\) if the race of subject i is white, and 1 otherwise. The covariate \(\texttt {Age}_{ij}\) is the age (in years and months, to two decimal places) of subject i in year j. The number of outpatient mental health visits (MHV) for subject i in year j is split into three dummy variables: \(\texttt {MHV1}_{ij} = 1\) if \(1 \le \texttt {MHV}_{ij} \le 5\), and 0 otherwise; \(\texttt {MHV2}_{ij} = 1\) if \(6 \le \texttt {MHV}_{ij} \le 14\), and 0 otherwise; and \(\texttt {MHV3}_{ij} = 1\) if \(\texttt {MHV}_{ij} \ge 15\), and 0 otherwise. The covariate \(\texttt {INPTMHV}_{ij} = 0\) if there were no inpatient mental health visits for subject i in year j, and 1 otherwise. Finally, \(\alpha _i\) is a subject-level random effect for subject i. The parameters of the model are \(\varvec{\theta }= (\varvec{\beta }^\top , \phi _\tau )^\top \), where \(\phi _\tau \equiv \log (\tau ^2)\).

We use similar priors/initial variational distributions for both examples. For the Six City dataset, the prior/initial variational distribution we use is

$$\begin{aligned} p(\varvec{\theta }) = q_0(\varvec{\theta }) = N \left( \begin{bmatrix} \textbf{0} \\ 1 \end{bmatrix}, \begin{bmatrix} 10\textbf{I}_3 &{} \textbf{0} \\ \textbf{0}^\top &{} 1 \end{bmatrix} \right) , \end{aligned}$$
(33)

and for the Polypharmacy dataset, we use

$$\begin{aligned} p(\varvec{\theta }) = q_0(\varvec{\theta }) = N \left( \begin{bmatrix} \textbf{0} \\ 1 \end{bmatrix}, \begin{bmatrix} 10\textbf{I}_8 &{} \textbf{0} \\ \textbf{0}^\top &{} 1 \end{bmatrix} \right) . \end{aligned}$$
(34)

A N(1, 1) prior distribution for \(\phi _\tau \) leads to a log-normal prior distribution with mean 4.48 and variance 34.51 for \(\tau ^2\). Using this prior distribution, the 2.5th and 97.5th percentiles for \(\tau ^2\) are (0.383, 19.297), which cover most values of \(\tau ^2\) in practice. At each R-VGAL iteration, the gradient and Hessian of \({\log p(\textbf{y}_i \mid \varvec{\theta })}\) are approximated using \(S_\alpha = 200\) Monte Carlo samples (of \(\alpha _i\)), and the expectations with respect to \(q_{i-1}(\varvec{\theta })\) in the R-VGAL updates are approximated using \(S = 200\) Monte Carlo samples (of \(\varvec{\theta }\)).

Fig. 4: Exact posterior distributions from HMC (in blue) and approximate posterior distributions from R-VGAL with estimated gradients and Hessians (in red) for the experiment with the Six City dataset. Diagonal panels: Marginal posterior distributions with the maximum likelihood estimates marked using dotted lines. Off-diagonal panels: Bivariate posterior distributions with the maximum likelihood estimates marked using the symbol \(\times \).

As there are no ground truths to these examples, we compare the posterior density estimates from R-VGAL to those from HMC. In addition, we also compute the maximum likelihood estimates using the lme4 package in R. Figures 4 and 5 show the marginal posterior distributions with maximum likelihood estimates of the parameters, along with bivariate posterior distributions estimated using R-VGAL and HMC for the Six City and Polypharmacy datasets, respectively. In the Six City example, there is a slight difference in the marginal and bivariate posterior densities from R-VGAL and HMC for the fixed effect \(\beta _{smoke}\), but the posterior densities for other parameters are very similar between the two methods. For the intercept \(\beta _0\) and the random effect standard deviation \(\tau \), the posterior modes of HMC are closer to the maximum likelihood estimates than the posterior modes of R-VGAL, but for the other parameters, the posterior modes from both R-VGAL and HMC are close to the maximum likelihood estimates. For the Polypharmacy example, there are slight differences between the R-VGAL and HMC marginal and bivariate posterior densities for the intercept \(\beta _0\) and the fixed effects \(\beta _{gender}\) and \(\beta _{race}\), but for other parameters, the posterior densities are comparable between the two methods. The posterior modes of both R-VGAL and HMC are close to the maximum likelihood estimates for all parameters in this example.

Fig. 5: Exact posterior distributions from HMC (in blue) and approximate posterior distributions from R-VGAL with estimated gradients and Hessians (in red) for the experiment with the Polypharmacy dataset. Diagonal panels: Marginal posterior distributions with the maximum likelihood estimates marked using dotted lines. Off-diagonal panels: Bivariate posterior distributions with the maximum likelihood estimates marked using the symbol \(\times \).

3.5 Computing time

Table 1 compares the computing time (in minutes) of R-VGAL and HMC for all simulated and real data examples that we have discussed in Sect. 3 and Section S6 of the online supplement, and includes the corresponding dataset size for each example. The last column in the table shows the average time taken (in seconds) for a single iteration of R-VGAL. For the linear example, where we run R-VGAL with both the theoretical and estimated gradients/Hessians, the displayed time is that of R-VGAL with the estimated gradients/Hessians. All experiments were carried out on the High Performance Computer system of the National Institute for Applied Statistics Research Australia, with an NVIDIA Tesla V100 PCIe 32GB graphics processing unit (GPU). The GPU was used to parallelise the computations in the importance sampling step, so that the gradient and Hessian of the joint log-likelihood \(\log p(\textbf{y}_i, \varvec{\alpha }_i^{(s)} \mid \varvec{\theta }), s = 1, \dots , S_\alpha \), and their corresponding weights \(\mathcal {W}_i\), are computed all at once. The GPU was also used to parallelise over the Monte Carlo samples used in the estimation of the expectations with respect to \(q_{i-1}(\cdot )\) in Algorithm 1. We use the R interface to Tensorflow (Abadi et al. 2015) to facilitate GPU computations.

The table shows that the R-VGAL algorithm is generally 3 to 8 times faster than HMC. This is substantial given that our code is not as highly optimised as that in Stan. The difference in computing times also becomes more pronounced with larger datasets: in the logistic example with 50,000 synthetic observations (see Section S6.2 of the online supplement), R-VGAL takes only 17 min to produce posterior estimates, while HMC takes more than 2 h. Furthermore, since R-VGAL is a sequential algorithm, posterior approximations from R-VGAL can be easily updated as new observations become available. To incorporate an additional observation, R-VGAL needs to perform only a single update, while an algorithm like HMC requires rerunning the entire sampling procedure.

Table 1 Computing time (in minutes) for the R-VGAL and HMC methods for the simulated and real datasets, with accompanying dataset sizes

4 Conclusion

In this article, we propose a sequential variational Bayes algorithm for estimating parameters in GLMMs based on an extension of the R-VGA algorithm of Lambert et al. (2022). The original R-VGA algorithm requires the gradient and Hessian of the partial log-likelihood at each observation, which are not available in closed form for most GLMMs. To overcome this, we use Fisher’s and Louis’ identities to obtain unbiased estimates of the gradient and Hessian, which are used in place of the exact gradient and Hessian in the R-VGA updates.

We apply R-VGAL to the linear, logistic and Poisson mixed effect models with simulated and real datasets. In all examples, we compare the posterior distributions of the parameters estimated using R-VGAL to those obtained using HMC (Neal 2011; Betancourt and Girolami 2015). The examples show that R-VGAL yields comparable posterior estimates to HMC while being substantially faster, and the R-VGAL posterior modes are very close to the maximum likelihood estimates for most parameters in the models we consider. R-VGAL would be especially useful in situations where new observations are being continuously collected.

In the current paper, we assume that the random effects are independent and identically distributed between subjects or groups. We discuss the potential application of R-VGAL to models with more complicated random effect structures, such as crossed or nested effects, in Section S7 of the online supplement. Future work will attempt to extend R-VGAL to cases where the random effects are temporally correlated. This will expand the set of models on which R-VGAL can be used to include time series and state space models.