Appendix
We begin this section by correcting some results presented in Xie and Wei (2008).
In Xie and Wei (2008, p. 52), the correct expression for \(E(\nu _i^{-1}|Y_o,\theta ^{(r)})\) in the case \(y_i=0\) is \(\lambda ^{(r)}\big [ 1+\sqrt{(\lambda ^{(r)})^{-1}(2\mu _i+(\lambda ^{(r)})^{-1})}\big ]\) instead of \(\lambda ^{(r)}\big [ 1+\sqrt{(\lambda ^{(r)})^{-1}}(2\mu _i+(\lambda ^{(r)})^{-1})\big ]\) .
In Xie and Wei (2008, p. 58), where one finds “\(-T\varvec{1}_n\varvec{1}_m^TS - \beta ^TS\varvec{1}_mGX\)”, the correct expression is “\(-T\varvec{1}_n\varvec{1}_m^TS - GX\beta \varvec{1}_mS\)”.
Finally, the perturbation of responses (case 3) in Xie and Wei (2008, p. 58) does not make sense, since the random variable \(Y_i\) is discrete and therefore does not admit infinitesimal perturbations (any perturbation of \(Y_i\) must be integer-valued). This perturbation scheme is thus invalid and should not be considered. What does make sense is an infinitesimal perturbation of the latent variable Z, which is continuous; we provide such a scheme in the present article (see the perturbation of the hidden variable scheme).
In what follows we present the proofs of two propositions given in the main text.
Proof of Proposition 1
We have that
$$\begin{aligned} E(Z|Y=y)= & {} \dfrac{\mu ^y}{y!p(y;\mu ,\phi )}\int _0^\infty e^{-\mu z}z^{y+1}\\&\times \,\exp \{\phi [z\xi _0-b(\xi _0)]+c(z;\phi )\}dz\\= & {} \dfrac{y+1}{\mu p(y;\mu ,\phi )}\int _0^\infty \dfrac{\mu ^{y+1}}{(y+1)!}e^{-\mu z}z^{y+1}\\&\times \,\exp \{\phi [z\xi _0-b(\xi _0)]+c(z;\phi )\}dz\\= & {} \dfrac{(y+1)p(y+1;\mu ,\phi )}{\mu p(y;\mu ,\phi )}. \end{aligned}$$
To obtain the second conditional expectation, we first obtain the conditional moment-generating function of \(g(Z)\) given \(Y=y\), that is
$$\begin{aligned} E\left( \exp \{t\,g(Z)\}|Y=y\right)&=\dfrac{\mu ^y}{y!p(y;\mu ,\phi )}\int _0^\infty e^{-\mu z}z^y\\&\quad \times \,\exp \{\phi [z\xi _0-b(\xi _0)]+d(\phi )\\&\quad +\,(\phi +t)g(z)+h(z)\}dz\\&=\exp \left\{ \phi b\left( \dfrac{\phi \xi _0}{\phi +t}\right) -d(\phi +t)\right. \\&\quad +\,\left. d(\phi )-\phi b(\xi _0)\right\} \\&\quad \times \,\dfrac{p(y;\mu ^*_t,\phi +t)}{p(y;\mu ,\phi )}, \end{aligned}$$
with \(\mu _t^*\) as defined in the proposition. Hence, computing the derivative of \(E\left( \exp \{t\,g(Z)\}|Y=y\right) \) with respect to t at \(t=0\), after some simplifications we get the desired result. \(\square \)
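As a sanity check on Proposition 1, the identity \(E(Z|Y=y)=(y+1)p(y+1;\mu ,\phi )/(\mu \, p(y;\mu ,\phi ))\) can be verified in the NB special case: with a gamma mixing density, Poisson–gamma conjugacy gives \(Z|Y=y\sim \hbox {Gamma}(y+\phi ,\mu +\phi )\), whose mean is \((y+\phi )/(\mu +\phi )\). The following Python sketch (function names and parameter values are ours, purely illustrative) confirms the two expressions agree:

```python
import math

def nb_pmf(y, mu, phi):
    # NB pmf arising from a Gamma(phi, phi) latent effect Z in the
    # mixed Poisson representation Y | Z ~ Poisson(mu * Z)
    return math.exp(
        math.lgamma(y + phi) - math.lgamma(phi) - math.lgamma(y + 1)
        + phi * math.log(phi / (mu + phi)) + y * math.log(mu / (mu + phi))
    )

def cond_mean(y, mu, phi):
    # Proposition 1: E(Z | Y = y) = (y + 1) p(y + 1) / (mu * p(y))
    return (y + 1) * nb_pmf(y + 1, mu, phi) / (mu * nb_pmf(y, mu, phi))

# Conjugacy gives Z | Y = y ~ Gamma(y + phi, mu + phi), so the
# conditional mean must equal (y + phi) / (mu + phi).
mu, phi = 2.5, 1.7
for y in range(6):
    assert abs(cond_mean(y, mu, phi) - (y + phi) / (mu + phi)) < 1e-12
```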
Proof of Proposition 2
A simple computation shows that
$$\begin{aligned} \left. \frac{\partial Q_{[i]}}{\partial {\varvec{\beta }}}({\varvec{\theta }};\widehat{{\varvec{\theta }}})\right| _{{\varvec{\theta }}=\widehat{{\varvec{\theta }}}} = a_i\mathbf x_i, \end{aligned}$$
and
$$\begin{aligned} \left. \frac{\partial Q_{[i]}}{\partial \varvec{\alpha }}({\varvec{\theta }};\widehat{{\varvec{\theta }}})\right| _{{\varvec{\theta }}=\widehat{{\varvec{\theta }}}} = b_i\mathbf w_i. \end{aligned}$$
Furthermore,
$$\begin{aligned} \left. \frac{\partial ^2 Q_{[i]}}{\partial {\varvec{\beta }}\partial {\varvec{\beta }}^\top }({\varvec{\theta }};\widehat{{\varvec{\theta }}})\right| _{{\varvec{\theta }}=\widehat{{\varvec{\theta }}}}= & {} \sum _{i=1}^n -\mu _i\lambda _i\mathbf x_i\mathbf x_i^\top \\= & {} -\mathbf X^\top \mathbf G_1\mathbf X, \end{aligned}$$
and
$$\begin{aligned} \left. \frac{\partial ^2 Q_{[i]}}{\partial \varvec{\alpha }\partial \varvec{\alpha }^\top }({\varvec{\theta }};\widehat{{\varvec{\theta }}})\right| _{{\varvec{\theta }}=\widehat{{\varvec{\theta }}}}= & {} \sum _{i=1}^n \Big (\phi _i(\xi _0\lambda _i-b(\xi _0)\\&+\kappa _i+d'(\phi _i))+d''(\phi _i)\phi _i^2\Big )\mathbf w_i\mathbf w_i^\top \\= & {} -\mathbf W^\top \mathbf G_2\mathbf W. \end{aligned}$$
Thus
$$\begin{aligned} \left. \frac{\partial ^2 Q_{[i]}}{\partial {\varvec{\theta }}\partial {\varvec{\theta }}^\top }({\varvec{\theta }};\widehat{{\varvec{\theta }}})\right| _{{\varvec{\theta }}=\widehat{{\varvec{\theta }}}} = - \begin{bmatrix} \mathbf X^\top \mathbf G_1\mathbf X&\quad \mathbf 0 \\ \mathbf 0&\quad \mathbf W^\top \mathbf G_2\mathbf W \end{bmatrix}. \end{aligned}$$
\(\square \)
The elements of the observed information matrix (6) can be obtained by using the following quantities:
$$\begin{aligned} E\left( -\dfrac{\partial ^2\ell _c}{\partial \beta _j\partial \beta _l}\big |\mathbf{Y}\right) =\sum _{i=1}^n\mu _i\lambda _ix_{ij}x_{il}, \end{aligned}$$
for \(j,l=1,\ldots ,p\),
$$\begin{aligned} E\left( -\dfrac{\partial ^2\ell _c}{\partial \alpha _j\partial \alpha _l}\big |\mathbf{Y}\right)&=\sum _{i=1}^n\phi _i\{b(\xi _0)-\xi _0\lambda _i-\kappa _i-d'(\phi _i)\\&\quad -\,\phi _id''(\phi _i)\}w_{ij}w_{il}, \end{aligned}$$
for \(j,l=1,\ldots ,q\),
$$\begin{aligned} E\left( -\dfrac{\partial ^2\ell _c}{\partial \beta _j\partial \alpha _l}\big |\mathbf{Y}\right) =0, \end{aligned}$$
for \(j=1,\ldots ,p\) and \(l=1,\ldots ,q\),
$$\begin{aligned} E\left( \dfrac{\partial \ell _c}{\partial \beta _j}\dfrac{\partial \ell _c}{\partial \beta _l}\big |\mathbf{Y}\right)&=\sum _{i=1}^n(y_i^2-2y_i\mu _i\lambda _i +\mu _i^2\gamma _i)x_{ij}x_{il}\\&\quad +\sum _{i\ne k}(y_i-\mu _i\lambda _i)(y_k-\mu _k\lambda _k)x_{ij}x_{kl}, \end{aligned}$$
for \(j,l=1,\ldots ,p\),
$$\begin{aligned}&E\left( \dfrac{\partial \ell _c}{\partial \beta _j}\dfrac{\partial \ell _c}{\partial \alpha _l}\big |\mathbf{Y}\right) \\&\quad =\sum _{i=1}^n\phi _i\{y_i[\xi _0\lambda _i+\kappa _i+d'(\phi _i)-b(\xi _0)]\\&\qquad -\,\mu _i[\xi _0\gamma _i+\rho _i+\lambda _i(d'(\phi _i)-b(\xi _0))]\} x_{ij}w_{il}\\&\qquad +\,\sum _{i\ne k}\phi _k(y_i-\mu _i\lambda _i)(\xi _0\lambda _k-b(\xi _0)\\&\qquad +\,\kappa _k+d'(\phi _k)) x_{ij}w_{kl}, \end{aligned}$$
for \(j=1,\ldots ,p\) and \(l=1,\ldots ,q\),
$$\begin{aligned} E\left( \dfrac{\partial \ell _c}{\partial \alpha _j}\dfrac{\partial \ell _c}{\partial \alpha _l}\big |\mathbf{Y}\right)&=\sum _{i=1}^n\phi _i^2w_{ij}w_{il}\{(d'(\phi _i)-b(\xi _0))^2\\&\quad +\,2(d'(\phi _i)-b(\xi _0))(\xi _0\lambda _i+\kappa _i)\\&\quad +\,\xi _0^2\gamma _i+2\xi _0\rho _i+\nu _i\}\\&\quad +\,\sum _{i\ne k}\phi _i\phi _k(\xi _0\lambda _i-b(\xi _0)\\&\quad +\,\kappa _i+d'(\phi _i))\\&\quad \times \,(\xi _0\lambda _k-b(\xi _0)+\kappa _k+d'(\phi _k)) w_{ij}w_{kl}, \end{aligned}$$
for \(j,l=1,\ldots ,q\), where \(\lambda _i\) and \(\kappa _i\) are defined as before, and we define \(\gamma _i=E(Z_i^2|\mathbf{Y})\), \(\rho _i=E(Z_i g(Z_i)|\mathbf{Y})\) and \(\nu _i=E(g(Z_i)^2|\mathbf{Y})\), for \(i=1,\ldots ,n\).
The conditional expectations that appear above are given explicitly in the following proposition.
Proposition 3
Let \(Y\sim \hbox {MP}(\mu ,\phi )\) with latent random effect Z belonging to the exponential family defined previously. Then, we have that
$$\begin{aligned}&E(Z^2|Y=y)=\dfrac{(y+1)(y+2)p(y+2;\mu ,\phi )}{\mu ^2 p(y;\mu ,\phi )},\\&E(g(Z)^2|Y=y)=(d'(\phi )+\xi _0)^2-2(d'(\phi )+\xi _0)\\&\quad \times \,\phi ^{-1}\xi _0^2b''(\xi _0)+\dfrac{d\,p(y;\mu ^*_t,\phi +t)/dt|_{t=0}}{p(y;\mu ,\phi )}\\&\quad +\,2\phi ^{-1}\xi _0-d''(\phi )+\dfrac{d^2p(y;\mu ^*_t,\phi +t)/dt^2|_{t=0}}{p(y;\mu ,\phi )} \end{aligned}$$
and
$$\begin{aligned}&E(Z g(Z)|Y=y)=\dfrac{(y+1)p(y+1;\mu ,\phi )}{\mu p(y;\mu ,\phi )}\\&\quad \times \left\{ \dfrac{d\,p(y+1;\mu ^*_t,\phi +t)/dt|_{t=0}}{p(y+1;\mu ,\phi )}-\xi _0-d'(\phi )\right\} . \end{aligned}$$
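The first display of Proposition 3 admits the same NB sanity check as Proposition 1: with a gamma mixing density, \(Z|Y=y\sim \hbox {Gamma}(y+\phi ,\mu +\phi )\), whose second moment is \((y+\phi )(y+\phi +1)/(\mu +\phi )^2\). A short Python sketch under those assumptions (parameter values illustrative):

```python
import math

def nb_pmf(y, mu, phi):
    # NB pmf from a Gamma(phi, phi) latent effect in Y | Z ~ Poisson(mu * Z)
    return math.exp(
        math.lgamma(y + phi) - math.lgamma(phi) - math.lgamma(y + 1)
        + phi * math.log(phi / (mu + phi)) + y * math.log(mu / (mu + phi))
    )

def cond_second_moment(y, mu, phi):
    # Proposition 3: E(Z^2 | Y = y) = (y+1)(y+2) p(y+2) / (mu^2 * p(y))
    return (y + 1) * (y + 2) * nb_pmf(y + 2, mu, phi) / (mu ** 2 * nb_pmf(y, mu, phi))

# Z | Y = y ~ Gamma(y + phi, mu + phi) has second moment
# (y + phi)(y + phi + 1) / (mu + phi)^2; both routes must agree.
mu, phi = 3.0, 2.0
for y in range(5):
    target = (y + phi) * (y + phi + 1) / (mu + phi) ** 2
    assert abs(cond_second_moment(y, mu, phi) - target) < 1e-12
```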
Corollary 1
For the PIG case, we have that
$$\begin{aligned} \rho= & {} -1/2,\\ \nu= & {} \left\{ \begin{array}{ll} \dfrac{1}{4\phi ^2}\{\phi (2\mu +\phi )+3\left[ 1+\sqrt{\phi (2\mu +\phi )}\right] \},&{}\quad \hbox {for}\,\, y=0,\\ \dfrac{(2\mu +\phi )^{1/2}}{4\phi ^{3/2}}[1+\sqrt{\phi (2\mu +\phi )}],&{}\quad \hbox {for}\,\, y=1,\\ \dfrac{\mu ^2}{4y(y-1)}\dfrac{p(y-2;\mu ,\phi )}{p(y;\mu ,\phi )},&{}\quad \hbox {for}\,\, y\ge 2. \\ \end{array}\right. \end{aligned}$$
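The \(y\ge 2\) entry of \(\nu \) can be checked numerically: it depends only on the Poisson kernel and on \(g(z)=-1/(2z)\) (our reading of the inverse-Gaussian case; an assumption here), since \(g(z)^2 z^y=z^{y-2}/4\) folds back into the marginal pmf two counts down. A sketch using an inverse-Gaussian mixing density with unit mean and shape \(\phi \) (an assumed parametrization; values illustrative):

```python
import math
from scipy.integrate import quad

mu, phi = 2.0, 1.5

def ig_density(z):
    # inverse-Gaussian density, unit mean, shape phi (assumed parametrization)
    if z <= 0:
        return 0.0
    return math.sqrt(phi / (2 * math.pi * z ** 3)) * math.exp(-phi * (z - 1) ** 2 / (2 * z))

def pmf(y):
    # marginal PIG pmf p(y; mu, phi) by numerical integration over Z
    val, _ = quad(lambda z: math.exp(-mu * z) * (mu * z) ** y / math.factorial(y)
                  * ig_density(z), 0, math.inf)
    return val

def nu(y):
    # nu = E(g(Z)^2 | Y = y) with g(z) = -1/(2z), computed directly
    val, _ = quad(lambda z: (0.25 / z ** 2) * math.exp(-mu * z) * (mu * z) ** y
                  / math.factorial(y) * ig_density(z), 0, math.inf)
    return val / pmf(y)

# Corollary 1, y >= 2: nu = mu^2 / (4 y (y-1)) * p(y-2) / p(y)
for y in (2, 3, 5):
    closed_form = mu ** 2 / (4 * y * (y - 1)) * pmf(y - 2) / pmf(y)
    assert abs(nu(y) - closed_form) < 1e-8
```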
In the NB case, we have that
$$\begin{aligned} \rho= & {} \dfrac{y+\phi }{\mu +\phi }[\Psi (y+\phi +1)-\log (\mu +\phi )],\\ \nu= & {} \Psi '(y+\phi )+\Psi (y+\phi )^2-2\Psi (y+\phi )\log (\mu +\phi )\\&+\log ^2(\mu +\phi ), \end{aligned}$$
where \(\Psi '(x)=d\Psi (x)/dx\).
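The NB expressions for \(\rho \) and \(\nu \) can be checked against direct numerical integration, using the conjugacy fact \(Z|Y=y\sim \hbox {Gamma}(y+\phi ,\mu +\phi )\) and taking \(g(z)=\log z\) (our reading of the gamma mixing density in exponential-family form; an assumption). A minimal sketch with illustrative parameter values:

```python
import math
from scipy.integrate import quad
from scipy.special import digamma, polygamma

mu, phi, y = 2.0, 1.5, 3
a, r = y + phi, mu + phi          # Z | Y = y ~ Gamma(shape a, rate r)

def post(z):
    # posterior density of Z given Y = y
    return r ** a / math.gamma(a) * z ** (a - 1) * math.exp(-r * z)

# direct numerical versions of rho = E(Z log Z | Y) and nu = E((log Z)^2 | Y)
rho_num, _ = quad(lambda z: z * math.log(z) * post(z), 0, math.inf)
nu_num, _ = quad(lambda z: math.log(z) ** 2 * post(z), 0, math.inf)

# closed forms from the NB case above (Psi = digamma, Psi' = trigamma)
rho = (y + phi) / (mu + phi) * (digamma(y + phi + 1) - math.log(mu + phi))
nu = (polygamma(1, y + phi) + digamma(y + phi) ** 2
      - 2 * digamma(y + phi) * math.log(mu + phi) + math.log(mu + phi) ** 2)

assert abs(rho_num - rho) < 1e-8
assert abs(nu_num - nu) < 1e-8
```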