Abstract
This paper considers Bayesian empirical likelihood (BEL) inference and order shrinkage for a class of sparse autoregressive models without assuming a distribution for the errors. By introducing a nonparametric likelihood, point and interval estimators of the parameters, together with some asymptotic properties of the estimators, are obtained. By introducing a spike-and-slab prior, the order and the non-zero autoregressive coefficients of the model can be determined jointly, easily and accurately, via Markov chain Monte Carlo (MCMC) techniques. Simulation studies are conducted to evaluate the proposed methods. Finally, a real data example, the US industrial production index data set, is used to show the good performance of the BEL methods.
Notes
For example, in Table 6, the “CZ” value of BELe is 2, meaning that if a method performs well, its “CZ” result will be close to 2.
References
Bahari F, Parsi S, Ganjali M (2019) Empirical likelihood inference in general linear model with missing values in response and covariates by MNAR mechanism. Stat Pap. https://doi.org/10.1007/s00362-019-01103-0
Bedoui A, Lazar NA (2020) Bayesian empirical likelihood for ridge and lasso regressions. Comput Stat Data Anal. https://doi.org/10.1016/j.csda.2020.106917
Bernardo JM, Smith AFM (1994) Bayesian theory. Wiley, New York
Billingsley P (1961) Statistical inference for Markov processes. The University of Chicago Press, Chicago
Broersen PMT (2006) Automatic autocorrelation and spectral analysis. Springer, Berlin
Chan NH, Ling S (2006) Empirical likelihood for GARCH models. Econ Theory 22(3):403–428
Chang IH, Mukerjee R (2008) Bayesian and frequentist confidence intervals arising from empirical-type likelihoods. Biometrika 95(1):139–147
Chaudhuri S, Ghosh M (2011) Empirical likelihood for small area estimation. Biometrika 98:473–480
Chaudhuri S, Mondal D, Yin T (2017) Hamiltonian Monte Carlo sampling in Bayesian empirical likelihood computation. J R Stat Soc B 79(1):293–320
Chib S, Greenberg E (1995) Understanding the Metropolis-Hastings algorithm. Am Stat 49(4):327–335
Chuang CS, Chan NH (2002) Empirical likelihood for autoregressive models, with applications to unstable time series. Stat Sin 12(2):387–407
Fan J, Li R (2001) Variable selection via nonconcave penalized likelihood and its oracle properties. J Am Stat Assoc 96(456):1348–1360
George EI, McCulloch RE (1993) Variable selection via Gibbs sampling. J Am Stat Assoc 88(423):881–889
Ishwaran H, Rao JS (2005) Spike and slab variable selection: frequentist and Bayesian strategies. Ann Stat 33(2):730–773
Ishwaran H, Rao JS (2011) Consistency of spike and slab regression. Stat Probab Lett 81(12):1920–1928
Kitamura Y (1997) Empirical likelihood method for weakly dependent processes. Ann Stat 25(5):2084–2112
Klimko LA, Nelson PI (1978) On conditional least squares estimation for stochastic processes. Ann Stat 6(3):629–642
Kolaczyk ED (1994) Empirical likelihood for generalized linear models. Stat Sin 4(1):199–218
Kwon S, Lee S, Na O (2017) Tuning parameter selection for the adaptive lasso in the autoregressive model. J Korean Stat Soc 46(2):285–297
Lazar NA (2003) Bayesian empirical likelihood. Biometrika 90(2):319–326
Liu T, Yuan X (2016) Weighted quantile regression with missing covariates using empirical likelihood. Statistics 50(1):89–113
Malsiner-Walli G, Wagner H (2011) Comparing spike and slab priors for Bayesian variable selection. Aust J Stat 40(4):241–264
Mengersen KL, Pudlo P, Robert CP (2013) Bayesian computation via empirical likelihood. Proc Natl Acad Sci USA 110(4):1321–1326
Mitchell TJ, Beauchamp JJ (1988) Bayesian variable selection in linear regression. J Am Stat Assoc 83(404):1023–1032
Monahan JF, Boos DD (1992) Proper likelihoods for Bayesian analysis. Biometrika 79(2):271–278
Monti AC (1997) Empirical likelihood confidence regions in time series models. Biometrika 84(2):395–405
Mykland PA (1995) Dual likelihood. Ann Stat 23(2):396–421
Nardi Y, Rinaldo A (2011) Autoregressive process modeling via the lasso procedure. J Multivar Anal 102(3):528–549
Narisetty NN, He X (2014) Bayesian variable selection with shrinking and diffusing priors. Ann Stat 42(2):789–817
Nordman DJ, Lahiri SN (2014) A review of empirical likelihood methods for time series. J Stat Plan Inference 155:1–18
Owen A (1988) Empirical likelihood ratio confidence intervals for a single functional. Biometrika 75(2):237–249
Owen A (1990) Empirical likelihood ratio confidence regions. Ann Stat 18(1):90–120
Owen A (1991) Empirical likelihood for linear models. Ann Stat 19(4):1725–1747
Owen A (2001) Empirical likelihood. Chapman and Hall, New York
Qin J, Lawless J (1994) Empirical likelihood and general estimating equations. Ann Stat 22(1):300–325
Rao JNK, Wu C (2010) Bayesian pseudo-empirical-likelihood intervals for complex surveys. J R Stat Soc B 72:533–544
Sang H, Sun Y (2015) Simultaneous sparse model selection and coefficient estimation for heavy-tailed autoregressive processes. Statistics 49(1):187–208
Schmidt DF, Makalic E (2013) Estimation of stationary autoregressive models with the Bayesian LASSO. J Time Ser Anal 34(5):517–531
Shi J, Lau TS (2000) Empirical likelihood for partially linear models. J Multivar Anal 72(1):132–148
Shibata R (1976) Selection of the order of an autoregressive model by Akaike’s information criterion. Biometrika 63(1):117–126
Tang CY, Leng C (2010) Penalized high-dimensional empirical likelihood. Biometrika 97(4):905–920
Tibshirani R (1996) Regression shrinkage and selection via the lasso. J R Stat Soc B 58(1):267–288
Wang H, Li G, Tsai CL (2007) Regression coefficient and autoregressive order shrinkage and selection via the lasso. J R Stat Soc B 69(1):63–78
Wei C, Luo Y, Wu X (2012) Empirical likelihood for partially linear additive errors-in-variables models. Stat Pap 53:485–496
Xi R, Li Y, Hu Y (2016) Bayesian quantile regression based on the empirical likelihood with spike and slab priors. Bayesian Anal 11(3):821–855
Yang K, Li H, Wang D (2018a) Estimation of parameters in the self-exciting threshold autoregressive processes for nonlinear time series of counts. Appl Math Model 57:226–247
Yang K, Wang D, Li H (2018b) Threshold autoregression analysis for finite-range time series of counts with an application on measles data. J Stat Comput Simul 88(3):597–614
Yang Y, He X (2012) Bayesian empirical likelihood for quantile regression. Ann Stat 40(2):1102–1131
Zhang Y, Tang N (2017) Bayesian empirical likelihood estimation of quantile structural equation models. J Syst Sci Complex 30(1):122–138
Zhang C (2010) Nearly unbiased variable selection under minimax concave penalty. Ann Stat 38(2):894–942
Zhang H, Wang D, Sun L (2017) Regularized estimation in GINAR(\(p\)) process. J Korean Stat Soc 46(4):502–517
Zhao P, Ghosh M, Rao JNK, Wu C (2019) Bayesian empirical likelihood inference with complex survey data. J R Stat Soc B 82(1):155–174
Zhong X, Ghosh M (2016) Higher-order properties of Bayesian empirical likelihood. Electron J Stat 10(2):3011–3044
Zhu L, Xue L (2006) Empirical likelihood confidence regions in a partially linear single-index model. J R Stat Soc B 68(3):549–570
Zou H (2006) The adaptive lasso and its oracle properties. J Am Stat Assoc 101(476):1418–1429
Acknowledgements
This work is supported by the National Natural Science Foundation of China (Nos. 11901053, 11871028, 11731015), the Natural Science Foundation of Jilin Province (No. 20180101216JC), and the Program for Changbaishan Scholars of Jilin Province (No. 2015010).
Appendix
Proof of Theorem 3.1
It is easy to see that (S1) and (S2) are equivalent. Therefore, we only prove (S2). Recall that \({\varvec{Y}}_{t}=\left( y_{t}, \dots , y_{t-p+1}\right) ^{\textsf {T}}\) and \({\varvec{Z}}_n=\sum _{t=p+1}^n {\varvec{Y}}_{t-1}{\varvec{Y}}_{t-1}^{\textsf {T}}\). Then, we have \( {\varvec{\Sigma }}_n=\sum _{t=p+1}^n {\varvec{m}}_t({\varvec{\beta }}){\varvec{m}}_t({\varvec{\beta }})^{\textsf {T}} =\sum _{t=p+1}^n (y_t-{\varvec{\beta }}^{\textsf {T}}{\varvec{Y}}_{t-1})^2 {\varvec{Y}}_{t-1}{\varvec{Y}}_{t-1}^{\textsf {T}} \). By Theorem 1.1 in Billingsley (1961) and the continuous mapping theorem, we have
and
where \({\varvec{C}}_1\) and \({\varvec{C}}_2\) are constant matrices. Note that as \(n\rightarrow \infty \), \( \frac{1}{n}{\varvec{\Lambda }}^{-1}\rightarrow {\varvec{0}}. \) Thus,
Thus, by the continuous mapping theorem, we have, as \(n\rightarrow \infty \),
Since \(\hat{{\varvec{\Sigma }}}\) is a consistent estimator of \({\varvec{\Sigma }}\), by Slutsky’s theorem, (S2) holds.
The proof is complete. \(\square \)
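As a quick numerical illustration of the consistency underlying (S2), the sketch below simulates a stationary AR(2) process and solves the estimating equations \(\sum_t {\varvec{m}}_t({\varvec{\beta }})={\varvec{0}}\), i.e., conditional least squares with \({\varvec{Z}}_n\) as defined above. The coefficient values, sample size, and tolerance are illustrative assumptions; this is not the paper's BEL sampler.

```python
import numpy as np

# Simulate a stationary AR(2): y_t = 0.5 y_{t-1} - 0.3 y_{t-2} + e_t
rng = np.random.default_rng(42)
n, beta_true = 5000, np.array([0.5, -0.3])
y = np.zeros(n)
for t in range(2, n):
    y[t] = beta_true @ y[t - 2:t][::-1] + rng.standard_normal()

# Rows of Y are the lag vectors Y_{t-1}^T = (y_{t-1}, y_{t-2}); solving
# Z_n beta = sum_t y_t Y_{t-1} sets the estimating equations to zero.
Y = np.column_stack([y[1:-1], y[:-2]])
Zn = Y.T @ Y
beta_hat = np.linalg.solve(Zn, Y.T @ y[2:])
print(beta_hat)  # should be close to beta_true for large n
```

Rerunning with larger `n` shrinks the error roughly at the \(n^{-1/2}\) rate, consistent with the asymptotic normality stated in (S2).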
The full conditional density of \(\beta _{i} | {\varvec{Y}}, {\varvec{\theta }}, \eta , {\varvec{\beta }}_{-i}\) is:
where \( \tilde{\theta }_{i}= \theta _{i}\left( \theta _{i}+\left( 1-\theta _{i}\right) \exp \left\{ \frac{1}{2} \frac{V_i^2\mu _i^2}{\eta +V_i}\right\} \sqrt{ \frac{\eta }{V_i+\eta } } \right) ^{-1}\).
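The mixture weight \(\tilde{\theta }_{i}\) above can be evaluated directly within a Gibbs step. The following minimal sketch implements the displayed formula verbatim; the numerical values of \(\theta _i\), \(\eta \), \(V_i\), and \(\mu _i\) are illustrative placeholders (their exact expressions follow the paper's derivation), and the unit slab variance in the final draw is an assumption for illustration only.

```python
import numpy as np

def spike_weight(theta_i, eta, V_i, mu_i):
    """Mixture weight tilde_theta_i from the full conditional:
    theta_i / (theta_i + (1 - theta_i)
               * exp(0.5 * V_i^2 * mu_i^2 / (eta + V_i))
               * sqrt(eta / (V_i + eta)))."""
    slab_mass = (1.0 - theta_i) \
        * np.exp(0.5 * V_i**2 * mu_i**2 / (eta + V_i)) \
        * np.sqrt(eta / (V_i + eta))
    return theta_i / (theta_i + slab_mass)

rng = np.random.default_rng(0)
w = spike_weight(theta_i=0.5, eta=1.0, V_i=2.0, mu_i=0.0)
# With mu_i = 0 the exponential factor is 1, so w reduces to
# theta_i / (theta_i + (1 - theta_i) * sqrt(eta / (V_i + eta))).
beta_i = 0.0 if rng.random() < w else rng.normal(0.0, 1.0)  # spike vs slab
```

With probability \(\tilde{\theta }_{i}\) the coefficient \(\beta _i\) is set to zero (the spike), which is how the sampler performs order shrinkage coefficient by coefficient.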
Cite this article
Yang, K., Ding, X. & Yuan, X. Bayesian empirical likelihood inference and order shrinkage for autoregressive models. Stat Papers 63, 97–121 (2022). https://doi.org/10.1007/s00362-021-01231-6