
A General Inferential Framework for Singly-Truncated Bivariate Normal Models with Applications in Economics

Published in Computational Economics.

Abstract

To analyze singly-truncated bivariate economic data, we establish a class of singly-truncated bivariate normal distributions by stochastically representing the original bivariate normal random vector as a mixture of the singly-truncated part and its complementary components. Aided by this stochastic representation, we construct two novel, unified and simple algorithms, the expectation–maximization (EM) algorithm and the minorization–maximization (MM) algorithm, to calculate the maximum likelihood estimates of the means and covariance matrix for the model of interest. In addition, we develop a data augmentation (DA) algorithm for posterior sampling in Bayesian analysis. Both simulation results and two real-data applications in economics, corroborated by comparisons with existing methods, demonstrate the effectiveness and stability of the proposed methodologies.



References

  • Azzalini, A., & Dalla Valle, A. (1996). The multivariate skew-normal distribution. Biometrika, 83(4), 715–726.

  • Amemiya, T. (1974). Multivariate regression and simultaneous equation models when the dependent variables are truncated normal. Econometrica, 42(6), 999–1012.

  • Arismendi, J. C. (2013). Multivariate truncated moments. Journal of Multivariate Analysis, 117, 41–75.

  • Breslaw, J. A. (1994). Random sampling from a truncated multivariate normal distribution. Applied Mathematics Letters, 7(1), 1–6.

  • Cohen, A. C. (1957). Restriction and selection in multinormal distributions. The Annals of Mathematical Statistics, 28, 731–741.

  • Deb, P., & Trivedi, P. K. (1997). Demand for medical care by the elderly: A finite mixture approach. Journal of Applied Econometrics, 12, 313–336.

  • Dempster, A. P., Laird, N. M., & Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1), 1–38.

  • Dyer, D. D. (1973). On moments estimation of the parameters of a truncated bivariate normal distribution. Journal of the Royal Statistical Society, Series C, 22, 287–291.

  • Geman, S., & Geman, D. (1984). Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6, 721–741.

  • Griffiths, W. (2002). A Gibbs sampler for the parameters of a truncated multivariate normal distribution. Department of Economics Working Paper Series 856, The University of Melbourne.

  • Gupta, A. K., & Tracy, D. S. (1976). Recurrence relations for the moments of truncated multinormal distribution. Communications in Statistics-Theory and Methods, 5, 855–865.

  • Horrace, W. C. (2005). Some results on the multivariate truncated normal distribution. Journal of Multivariate Analysis, 94(1), 209–221.

  • Kan, R., & Robotti, C. (2017). On moments of folded and truncated multivariate normal distributions. Journal of Computational and Graphical Statistics, 26(4), 930–934.

  • Khatri, C. G., & Jaiswal, M. C. (1963). Estimation of parameters of a truncated bivariate normal distribution. Journal of the American Statistical Association, 58, 519–526.

  • Leroux, B. G. (1992). Consistent estimation of a mixing distribution. Annals of Statistics, 20(3), 1350–1360.

  • Leppard, P., & Tallis, G. M. (1989). Algorithm AS 249: Evaluation of the mean and covariance of the truncated multinormal distribution. Journal of the Royal Statistical Society, Series C, 38(3), 543–553.

  • Manjunath, B. G., & Wilhelm, S. (2021). Moments calculation for the doubly truncated multivariate normal density. Journal of Behavioral Data Science, 1(1), 17–33.

  • Murphy, K. P. (2007). Conjugate Bayesian analysis of the Gaussian distribution [Online]. Available: http://www.cs.ubc.ca/~murphyk/Papers/bayesGauss.pdf.

  • Muthén, B. (1990). Moments of the censored and truncated bivariate normal distribution. British Journal of Mathematical and Statistical Psychology, 43, 131–143.

  • Nurminen, H., Rui, R., Ardeshiri, T., Bazanella, A., & Gustafsson, F. (2016). Mean and covariance matrix of a multivariate normal distribution with one doubly-truncated component. Sweden: Technical Report from Automatic Control at Linköpings Universitet.

  • Okun, A. M. (1962). Potential GNP: Its measurement and significance. In Proceedings of the Business and Economics Statistics Section of the American Statistical Association (pp. 89–104).

  • Phillips, A. W. (1958). The relation between unemployment and the rate of change of money wage rates in the United Kingdom, 1861–1957. Economica, 25(100), 283–299.

  • Rosenbaum, S. (1961). Moments of a truncated bivariate normal distribution. Journal of the Royal Statistical Society, Series B, 23, 405–408.

  • SAS Institute Inc. (1998). Solving Business Problems Using SAS Enterprise Miner Software. SAS Institute White Paper, SAS Institute Inc., Cary, NC.

  • Shah, S. M., & Parikh, N. T. (1964). Moments of single and doubly truncated standard bivariate normal distribution. Vidya, 7, 82–91.

  • Singh, N. (1960). Estimation of parameters of a multivariate normal population from truncated and censored samples. Journal of the Royal Statistical Society, Series B, 22, 307–311.

  • Tallis, G. M. (1961). The moment generating function of the truncated multi-normal distribution. Journal of the Royal Statistical Society, Series B, 23, 223–229.

  • Tanner, M. A., & Wong, W. H. (1987). The calculation of posterior distributions by data augmentation (with discussion). Journal of the American Statistical Association, 82, 528–540.

  • Tian, G. L., Huang, X. F., & Xu, J. F. (2019). An assembly and decomposition (AD) approach for constructing separable minorizing functions in a class of MM algorithms. Statistica Sinica, 29, 961–982.

  • Umeki, K., Sumida, A., Seino, T., Lim, E., & Honjo, T. (2006). Fitting the truncated bivariate normal distribution to the relationship between diameter and length of current-year shoots in Betula platyphylla in Hokkaido, Northern Japan. In Proceedings of the Second International Symposium on Plant Growth Modeling, Simulation, Visualization and Applications. https://doi.org/10.1109/pma.2006.15.

  • Wilhelm, S., & Manjunath, B. G. (2010). tmvtnorm: A package for the truncated multivariate normal distribution. The R Journal, 2(1), 1–25.

  • Xiang, Y., & Zhu, Z. (2013). Comparison of two ZIP models with an application to ratemaking. Journal of Applied Statistics and Management, 32(5), 854–862.

  • Yip, K. C. H., & Yau, K. K. W. (2005). On modeling claim frequency data in general insurance with extra zeros. Insurance: Mathematics and Economics, 36, 153–163.

  • Yu, J. W., & Tian, G. L. (2011). Efficient algorithms for generating truncated multivariate normal distributions. Acta Mathematicae Applicatae Sinica (English Series), 27(4), 601–612.


Funding

Y. Liu's research was fully supported by grants (23YJC910004 & 22YJAZH038) from the Humanities and Social Science Foundation of the Ministry of Education of China and a grant (12171483) from the National Natural Science Foundation of China. G.L. Tian's research was fully supported by a grant (12171225) from the National Natural Science Foundation of China. C. Zhang's research was fully supported by a grant (11801380) from the National Natural Science Foundation of China. H. Qin's research was fully supported by a grant (12371261) from the National Natural Science Foundation of China.

Author information

Correspondence to Chi Zhang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflicts of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: Proof of Theorem 2.1

To obtain the conditional distribution of \(\textbf{x}|\textbf{y}\), we first derive the conditional distribution of \((\textbf{x},\textbf{u})|\textbf{y}\). Given both \(\textbf{y}={\varvec{y}}\) and \(U_4=1\), i.e., \(\textbf{u}=(0,0,0,1)\), the complete-data random vector \(\textbf{x}\) and the observed-data random vector \(\textbf{y}\) share the same distribution, i.e., the conditional distribution of \(\textbf{x}|(\textbf{y}={\varvec{y}},\textbf{u}=(0,0,0,1))\) degenerates to the point \({\varvec{y}}\). Therefore,

$$\begin{aligned}{} & {} \Pr (\textbf{x}={\varvec{x}},\textbf{u}=(0,0,0,1)|\textbf{y}={\varvec{y}}) \nonumber \\= & {} \Pr (\textbf{x}={\varvec{x}}|\textbf{y}={\varvec{y}},\textbf{u}=(0,0,0,1))\cdot \Pr (\textbf{u}=(0,0,0,1)) = p_{11}({\varvec{\mu }},{\varvec{\Sigma }})I({\varvec{x}}={\varvec{y}}). \end{aligned}$$
(A.1)

Note that when \(U_4=0\), \(\textbf{u}\) can be (1,0,0,0), (0,1,0,0) or (0,0,1,0). Thus, given \(\textbf{y}={\varvec{y}}\) and \(U_4=0\), i.e., \(\textbf{u}\ne (0,0,0,1)\), the conditional distribution of \(\textbf{x}|(\textbf{y}={\varvec{y}},\textbf{u}\ne (0,0,0,1))\) coincides with the distribution of one of the unobserved random vectors \(\textbf{y}_1\), \(\textbf{y}_2\) or \(\textbf{y}_3\), depending on the value of \(\textbf{u}\). Therefore,

$$\begin{aligned}{} & {} f(\textbf{x}={\varvec{x}},\textbf{u}\ne (0,0,0,1)|\textbf{y}={\varvec{y}}) \nonumber \\= & {} f(\textbf{x}={\varvec{x}}|\textbf{y}_1^{\small \rm C}={\varvec{y}}_1^{\small \rm C},\textbf{u}=(1,0,0,0))\cdot \Pr (\textbf{u}=(1,0,0,0)) \nonumber \\{} & {} + f(\textbf{x}={\varvec{x}}|\textbf{y}_2^{\small \rm C}={\varvec{y}}_2^{\small \rm C},\textbf{u}=(0,1,0,0))\cdot \Pr (\textbf{u}=(0,1,0,0)) \nonumber \\{} & {} + f(\textbf{x}={\varvec{x}}|\textbf{y}_3^{\small \rm C}={\varvec{y}}_3^{\small \rm C},\textbf{u}=(0,0,1,0))\cdot \Pr (\textbf{u}=(0,0,1,0)) \nonumber \\= & {} p_{00}({\varvec{\mu }},{\varvec{\Sigma }})f_1(\textbf{y}_1^{\small \rm C}={\varvec{x}}; {\varvec{\mu }}, {\varvec{\Sigma }}, {{\mathbb {D}}}_1)\cdot I({\varvec{x}}\ne {\varvec{y}}) \nonumber \\{} & {} + p_{01}({\varvec{\mu }},{\varvec{\Sigma }})f_2(\textbf{y}_2^{\small \rm C}={\varvec{x}}; {\varvec{\mu }}, {\varvec{\Sigma }}, {{\mathbb {D}}}_2)\cdot I({\varvec{x}}\ne {\varvec{y}}) \nonumber \\{} & {} + p_{10}({\varvec{\mu }},{\varvec{\Sigma }})f_3(\textbf{y}_3^{\small \rm C}={\varvec{x}}; {\varvec{\mu }}, {\varvec{\Sigma }}, {{\mathbb {D}}}_3)\cdot I({\varvec{x}}\ne {\varvec{y}}), \end{aligned}$$
(A.2)

where \(f_1(\cdot )\), \(f_2(\cdot )\) and \(f_3(\cdot )\) denote the density functions of the complementary singly-truncated bivariate normal vector \(\textbf{y}_1\), \(\textbf{y}_2\) and \(\textbf{y}_3\), respectively.

By combining Eq. (A.1) with Eq. (A.2), the conditional distribution of \(\textbf{x}|\textbf{y}\) is determined by the following density function:

$$\begin{aligned}{} & {} f_{\textbf{x}|\textbf{y}}(\textbf{x}={\varvec{x}}|\textbf{y}={\varvec{y}}) = \Pr (\textbf{x}={\varvec{x}},\textbf{u}=(0,0,0,1)|\textbf{y}={\varvec{y}}) + f(\textbf{x}={\varvec{x}},\textbf{u}\ne (0,0,0,1)|\textbf{y}={\varvec{y}}) \\= & {} p_{11}({\varvec{\mu }},{\varvec{\Sigma }})I({\varvec{x}}={\varvec{y}}) + p_{00}({\varvec{\mu }},{\varvec{\Sigma }})f_1(\textbf{y}_1^{\small \rm C}={\varvec{x}}; {\varvec{\mu }}, {\varvec{\Sigma }}, {{\mathbb {D}}}_1)\cdot I({\varvec{x}}\ne {\varvec{y}}) \\{} & {} + p_{01}({\varvec{\mu }},{\varvec{\Sigma }})f_2(\textbf{y}_2^{\small \rm C}={\varvec{x}}; {\varvec{\mu }}, {\varvec{\Sigma }}, {{\mathbb {D}}}_2)\cdot I({\varvec{x}}\ne {\varvec{y}}) + p_{10}({\varvec{\mu }},{\varvec{\Sigma }})f_3(\textbf{y}_3^{\small \rm C}={\varvec{x}}; {\varvec{\mu }}, {\varvec{\Sigma }}, {{\mathbb {D}}}_3)\cdot I({\varvec{x}}\ne {\varvec{y}}). \end{aligned}$$
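To make the four-cell decomposition underlying \(\textbf{u}\) concrete, here is a minimal numerical sketch (not the paper's algorithm). It draws from a standard bivariate normal and counts how many draws fall in each of the four cells induced by a rectangular corner \({\varvec{a}}\); the lower-corner region \(\{x_1>a_1,\,x_2>a_2\}\) playing the role of the observed (singly-truncated) part, the cell labels, and the standardization are illustrative assumptions.

```python
import random

def classify_cells(n, a=(0.0, 0.0), rho=0.0, seed=7):
    """Draw n points from a standard bivariate normal with correlation rho and
    count how many land in each of the four cells cut out by the corner a.
    Cell '11' is the (assumed) truncation region {x1 > a1, x2 > a2}; the
    other three cells are its complementary components."""
    rng = random.Random(seed)
    s = (1.0 - rho * rho) ** 0.5
    counts = {"00": 0, "01": 0, "10": 0, "11": 0}
    for _ in range(n):
        z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        x1 = z1
        x2 = rho * z1 + s * z2  # Cholesky factor of [[1, rho], [rho, 1]]
        key = ("1" if x1 > a[0] else "0") + ("1" if x2 > a[1] else "0")
        counts[key] += 1
    return counts
```

With \(\rho =0\) and \({\varvec{a}}=(0,0)\) the four cell frequencies each converge to 1/4, the empirical counterparts of \(p_{00},p_{01},p_{10},p_{11}\).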

Appendix B: Details for Deriving \(Q({\varvec{\mu }},{\varvec{\Sigma }}|{\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})\) specified by Eq. (3.9) in the proposed MM algorithm

The difficulty in deriving explicit expressions for the MLEs of \(({\varvec{\mu }},{\varvec{\Sigma }})\) from the observed-data log-likelihood function \(\ell ({\varvec{\mu }},{\varvec{\Sigma }}|Y_\textrm{obs})\) defined by (3.1) lies in the integral term \(-n\log [p_{11}({\varvec{\mu }},{\varvec{\Sigma }})]\). Thus, the key to finding a surrogate Q-function for \(\ell ({\varvec{\mu }},{\varvec{\Sigma }}|Y_\textrm{obs})\) is to construct an appropriate lower bound (minorizing function) for this term.
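The quantity \(p_{11}({\varvec{\mu }},{\varvec{\Sigma }})\) is itself a bivariate normal probability over the truncation region. As a minimal sketch of how such a probability can be evaluated, the snippet below assumes a standardized bivariate normal with correlation rho and a lower-corner region \(\{x_1>a_1,\,x_2>a_2\}\); the region shape, the standardization, and the function name are illustrative assumptions, not the paper's general setting.

```python
import math

def std_norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def std_norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def p11_prob(a1, a2, rho, n=4000, upper=10.0):
    """P(X1 > a1, X2 > a2) for a standard bivariate normal with correlation rho,
    by midpoint-rule integration of phi(x1) * P(X2 > a2 | X1 = x1) over
    x1 in (a1, upper); the conditional law is N(rho*x1, 1 - rho^2)."""
    s = math.sqrt(1.0 - rho * rho)
    h = (upper - a1) / n
    total = 0.0
    for i in range(n):
        x1 = a1 + (i + 0.5) * h
        cond_tail = 1.0 - std_norm_cdf((a2 - rho * x1) / s)
        total += std_norm_pdf(x1) * cond_tail * h
    return total
```

For \({\varvec{a}}=(0,0)\) this reduces to the classical orthant probability \(1/4+\arcsin (\rho )/(2\pi )\), which gives a convenient accuracy check.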

Note that \(p_{00}({\varvec{\mu }},{\varvec{\Sigma }})+p_{01}({\varvec{\mu }},{\varvec{\Sigma }})+p_{10}({\varvec{\mu }},{\varvec{\Sigma }})+p_{11}({\varvec{\mu }},{\varvec{\Sigma }})=1\) and \(\log (\cdot )\) is a concave function, thus,

$$\begin{aligned} 0&=\; \log \left[ p_{00}({\varvec{\mu }},{\varvec{\Sigma }})+p_{01}({\varvec{\mu }},{\varvec{\Sigma }})+p_{10}({\varvec{\mu }},{\varvec{\Sigma }})+p_{11}({\varvec{\mu }},{\varvec{\Sigma }})\right] \nonumber \\&=\; \log \left[ \frac{p_{11}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}{p_{11}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}\cdot p_{11}({\varvec{\mu }},{\varvec{\Sigma }}) + \frac{1-p_{11}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}{1-p_{11}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}\cdot (1-p_{11}({\varvec{\mu }},{\varvec{\Sigma }})) \right] \nonumber \\&\geqslant \; p_{11}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})\log \left[ \frac{p_{11}({\varvec{\mu }},{\varvec{\Sigma }})}{p_{11}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})} \right] \nonumber \\&\quad+ (1-p_{11}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)}))\log \left[ \frac{1-p_{11}({\varvec{\mu }},{\varvec{\Sigma }})}{1-p_{11}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})} \right] , \end{aligned}$$
(B.1)

where \(({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})\) is the t-th approximation of the MLEs \(({\varvec{{\hat{\mu }}}}, {\varvec{{\hat{\Sigma }}}})\). Based on (B.1), we have

$$\begin{aligned} {}- \log [p_{11}({\varvec{\mu }},{\varvec{\Sigma }})] \geqslant&-\log [p_{11}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})] - \frac{1-p_{11}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}{p_{11}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}\log [1-p_{11}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})] \\&+ \frac{1-p_{11}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}{p_{11}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}\log [1-p_{11}({\varvec{\mu }},{\varvec{\Sigma }})]. \end{aligned}$$

Then,

$$\begin{aligned} \ell ({\varvec{\mu }},{\varvec{\Sigma }}|Y_\textrm{obs}) \geqslant&\; c_1^{(t)} - \frac{n}{2}\log |{\varvec{\Sigma }}| - \frac{1}{2}\sum _{j=1}^n ({\varvec{y}}_j-{\varvec{\mu }})^{\top}{\varvec{\Sigma }}^{-1}({\varvec{y}}_j-{\varvec{\mu }}) \nonumber \\&+ \frac{n[1-p_{11}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})]}{p_{11}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}\log \left[ p_{00}({\varvec{\mu }},{\varvec{\Sigma }})+p_{01}({\varvec{\mu }},{\varvec{\Sigma }})+p_{10}({\varvec{\mu }},{\varvec{\Sigma }})\right] \nonumber \\ \,\hat{=}\,&\; Q_1({\varvec{\mu }},{\varvec{\Sigma }}|{\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)}), \end{aligned}$$
(B.2)

where \(c_1^{(t)}\) is a constant that does not depend on \(({\varvec{\mu }},{\varvec{\Sigma }})\).
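Inequality (B.1) is simply the concavity of the logarithm applied to a two-point mixture, and its tangency at the current iterate is what makes \(Q_1\) a valid minorizer. A quick numerical sanity check (a sketch; the anchor value \(p^{(t)}=0.3\) is arbitrary):

```python
import math

def rhs_bound(p, pt):
    """Right-hand side of (B.1), with pt standing in for p11 at the current
    iterate: pt * log(p / pt) + (1 - pt) * log((1 - p) / (1 - pt))."""
    return pt * math.log(p / pt) + (1.0 - pt) * math.log((1.0 - p) / (1.0 - pt))

# Concavity of log makes the bound nonpositive everywhere, with equality
# at p == pt -- exactly the tangency condition an MM surrogate needs.
pt = 0.3
vals = {p: rhs_bound(p, pt) for p in (0.05, 0.1, 0.3, 0.5, 0.9)}
```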

However, \(Q_1({\varvec{\mu }},{\varvec{\Sigma }}|{\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})\) still does not yield the MLEs in closed form because of the complex integral form of \(\log [ p_{00}({\varvec{\mu }},{\varvec{\Sigma }})+p_{01}({\varvec{\mu }},{\varvec{\Sigma }})+p_{10}({\varvec{\mu }},{\varvec{\Sigma }}) ]\). Next, we construct a surrogate function for this term, again using the concavity of the log function. Since

$$\begin{aligned}&\; \log \left[ p_{00}({\varvec{\mu }},{\varvec{\Sigma }})+p_{01}({\varvec{\mu }},{\varvec{\Sigma }})+p_{10}({\varvec{\mu }},{\varvec{\Sigma }})\right] \\&=\; \log \bigg [ \frac{p_{00}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}{1-p_{11}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}\cdot \frac{1-p_{11}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}{p_{00}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}\cdot p_{00}({\varvec{\mu }},{\varvec{\Sigma }}) \\&\quad+ \frac{p_{01}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}{1-p_{11}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}\cdot \frac{1-p_{11}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}{p_{01}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}\cdot p_{01}({\varvec{\mu }},{\varvec{\Sigma }}) \\&\quad+ \frac{p_{10}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}{1-p_{11}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}\cdot \frac{1-p_{11}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}{p_{10}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}\cdot p_{10}({\varvec{\mu }},{\varvec{\Sigma }})\bigg ] \\&\geqslant \; \frac{p_{00}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}{1-p_{11}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}\log \left[ \frac{1-p_{11}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}{p_{00}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}\cdot p_{00}({\varvec{\mu }},{\varvec{\Sigma }}) \right] \\&\quad+ \frac{p_{01}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}{1-p_{11}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}\log \left[ \frac{1-p_{11}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}{p_{01}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}\cdot p_{01}({\varvec{\mu }},{\varvec{\Sigma }}) \right] \\&\quad+ \frac{p_{10}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}{1-p_{11}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}\log \left[ \frac{1-p_{11}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}{p_{10}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}\cdot p_{10}({\varvec{\mu }},{\varvec{\Sigma }}) \right] , \end{aligned}$$

therefore,

$$\begin{aligned} Q_1({\varvec{\mu }},{\varvec{\Sigma }}|{\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)}) \geqslant&\; c_2^{(t)} - \frac{n}{2}\log |{\varvec{\Sigma }}| - \frac{1}{2}\sum _{j=1}^n ({\varvec{y}}_j-{\varvec{\mu }})^{\top}{\varvec{\Sigma }}^{-1}({\varvec{y}}_j-{\varvec{\mu }}) \nonumber \\&+ \frac{np_{00}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}{p_{11}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}\log [p_{00}({\varvec{\mu }},{\varvec{\Sigma }})] \nonumber \\&+ \frac{np_{01}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}{p_{11}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}\log [p_{01}({\varvec{\mu }},{\varvec{\Sigma }})] \nonumber \\&+ \frac{np_{10}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}{p_{11}({\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)})}\log [p_{10}({\varvec{\mu }},{\varvec{\Sigma }})] \nonumber \\ \,\hat{=}\,&\; Q_2({\varvec{\mu }},{\varvec{\Sigma }}|{\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)}), \end{aligned}$$
(B.3)

where \(c_2^{(t)}\) is a constant that does not depend on \(({\varvec{\mu }},{\varvec{\Sigma }})\).

Unfortunately, the Q-function given by Eq. (B.3) is still not the final one because of the complex integral forms included in \(\log [p_{00}({\varvec{\mu }},{\varvec{\Sigma }})]\), \(\log [p_{01}({\varvec{\mu }},{\varvec{\Sigma }})]\) and \(\log [p_{10}({\varvec{\mu }},{\varvec{\Sigma }})]\). To solve this problem, we need the following integral version of Jensen's inequality:

$$\begin{aligned} \log \left[ \int _{{\varvec{z}}\in {{\mathbb {Z}}}} f({\varvec{z}})\cdot g({\varvec{z}}) \ \text{ d }{\varvec{z}}\right] \geqslant \int _{{\varvec{z}}\in {{\mathbb {Z}}}} \log [f({\varvec{z}})]\cdot g({\varvec{z}}) \ \text{ d }{\varvec{z}}, \end{aligned}$$
(B.4)

where \({\varvec{z}}\) is a real vector defined on the domain \({{\mathbb {Z}}}\), \(f(\cdot )\) is a positive multivariate real function defined on \({{\mathbb {Z}}}\), and \(g(\cdot )\) is a multivariate pdf defined on \({{\mathbb {Z}}}\). By employing Eq. (B.4), we have

$$\begin{aligned} \log [p_{00}({\varvec{\mu }},{\varvec{\Sigma }})] =&\; \log \left[ \int _{{\varvec{y}}_1^{\small \rm C}\in {{\mathbb {D}}}_1} \frac{\phi ({\varvec{y}}_1^{\small \rm C}| {\varvec{\mu }},{\varvec{\Sigma }})}{f_1({\varvec{y}}_1^{\small \rm C}; {\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)}, {{\mathbb {D}}}_1)} \cdot f_1({\varvec{y}}_1^{\small \rm C}; {\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)}, {{\mathbb {D}}}_1) \ \text{ d }{\varvec{y}}_1^{\small \rm C} \right] \nonumber \\ \quad \geqslant&\; \int _{{\varvec{y}}_1^{\small \rm C}\in {{\mathbb {D}}}_1} \log \left[ \frac{\phi ({\varvec{y}}_1^{\small \rm C}| {\varvec{\mu }},{\varvec{\Sigma }})}{f_1({\varvec{y}}_1^{\small \rm C}; {\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)}, {{\mathbb {D}}}_1)} \right] \cdot f_1({\varvec{y}}_1^{\small \rm C}; {\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)}, {{\mathbb {D}}}_1) \ \text{ d }{\varvec{y}}_1^{\small \rm C} \\&\!\!\!\!\!\!\!= c_{31}^{(t)} - \frac{1}{2}\log |{\varvec{\Sigma }}| \nonumber \\&\quad - \frac{1}{2} \int _{{\varvec{y}}_1^{\small \rm C}\in {{\mathbb {D}}}_1} \left[ ({\varvec{y}}_1^{\small \rm C}-{\varvec{\mu }})^{\top}{\varvec{\Sigma }}^{-1}({\varvec{y}}_1^{\small \rm C}-{\varvec{\mu }})\right] f_1({\varvec{y}}_1^{\small \rm C}; {\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)}, {{\mathbb {D}}}_1) \ \text{ d }{\varvec{y}}_1^{\small \rm C}, \end{aligned}$$
(B.5)

where \(c_{31}^{(t)}\) is a constant free of \(({\varvec{\mu }},{\varvec{\Sigma }})\). Similarly,

$$\begin{aligned} \log [p_{01}({\varvec{\mu }},{\varvec{\Sigma }})] \geqslant&\; c_{32}^{(t)} - \frac{1}{2}\log |{\varvec{\Sigma }}| \nonumber \\&- \frac{1}{2} \int _{{\varvec{y}}_2^{\small \rm C}\in {{\mathbb {D}}}_2} \left[ ({\varvec{y}}_2^{\small \rm C}-{\varvec{\mu }})^{\top}{\varvec{\Sigma }}^{-1}({\varvec{y}}_2^{\small \rm C}-{\varvec{\mu }})\right] f_2({\varvec{y}}_2^{\small \rm C}; {\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)}, {{\mathbb {D}}}_2) \ \text{ d }{\varvec{y}}_2^{\small \rm C}, \hspace{0.5cm} \end{aligned}$$
(B.6)
$$\begin{aligned} \log [p_{10}({\varvec{\mu }},{\varvec{\Sigma }})] \geqslant&\; c_{33}^{(t)} - \frac{1}{2}\log |{\varvec{\Sigma }}| \nonumber \\&- \frac{1}{2} \int _{{\varvec{y}}_3^{\small \rm C}\in {{\mathbb {D}}}_3} \left[ ({\varvec{y}}_3^{\small \rm C}-{\varvec{\mu }})^{\top}{\varvec{\Sigma }}^{-1}({\varvec{y}}_3^{\small \rm C}-{\varvec{\mu }})\right] f_3({\varvec{y}}_3^{\small \rm C}; {\varvec{\mu }}^{(t)},{\varvec{\Sigma }}^{(t)}, {{\mathbb {D}}}_3) \ \text{ d }{\varvec{y}}_3^{\small \rm C}, \hspace{0.5cm} \end{aligned}$$
(B.7)

where \(c_{32}^{(t)}\) and \(c_{33}^{(t)}\) are constants free of \(({\varvec{\mu }},{\varvec{\Sigma }})\). By combining Eqs. (B.5)–(B.7), the result shown in Eq. (3.9) follows immediately.
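The Jensen step (B.4) can be checked numerically on a toy example. In the sketch below, \(g\) is the Uniform(0,1) density and \(f(z)=1+z\); both are illustrative choices, not the paper's \(f_i\) or \(\phi\), and the midpoint rule stands in for exact integration.

```python
import math

def jensen_check(n=10000):
    """Compare log ∫ f·g dz against ∫ (log f)·g dz on [0,1], with g the
    Uniform(0,1) density and f(z) = 1 + z, via the midpoint rule.
    Returns (left-hand side, right-hand side) of (B.4)."""
    h = 1.0 / n
    int_fg = sum((1.0 + (i + 0.5) * h) * h for i in range(n))        # ∫ f g dz
    int_logf_g = sum(math.log(1.0 + (i + 0.5) * h) * h for i in range(n))  # ∫ (log f) g dz
    return math.log(int_fg), int_logf_g
```

Here the exact values are \(\log (3/2)\approx 0.4055\) on the left and \(2\log 2-1\approx 0.3863\) on the right, so the inequality holds with a strict gap, as concavity predicts for non-constant \(f\).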

Appendix C: Numerical Simulation Results

See Tables 10, 11, 12, 13, 14, 15 and 16.

Table 10 Estimation performances under four types of single truncation with \(\sigma _{12}=\sigma _{21}=0.1\), i.e., \(\rho =0.1155\)
Table 11 Estimation performances under four types of single truncation with \(\sigma _{12}=\sigma _{21}=0.5\), i.e., \(\rho =0.5774\)
Table 12 Estimation performances under four types of single truncation with \(\sigma _{12}=\sigma _{21}=0.8\), i.e., \(\rho =0.9238\)
Table 13 Comparison of estimation performances among three methods with \(n=100\)
Table 14 Comparison of estimation performances among three methods with \(n=1000\)
Table 15 Posterior estimation comparison with Griffiths’ method when \({\varvec{r}}=(2.6000,3.3147)\)
Table 16 Posterior estimation comparison with Griffiths’ method when \({\varvec{r}}=(0,0)\)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


Cite this article

Liu, Y., Tian, GL., Zhang, C. et al. A General Inferential Framework for Singly-Truncated Bivariate Normal Models with Applications in Economics. Comput Econ (2024). https://doi.org/10.1007/s10614-023-10525-w
