
Bayesian empirical likelihood inference and order shrinkage for autoregressive models

  • Regular Article
  • Published in Statistical Papers

Abstract

This paper considers Bayesian empirical likelihood (BEL) inference and order shrinkage for a class of sparse autoregressive models without assuming a distribution for the errors. By introducing a nonparametric likelihood, point and interval estimators of the parameters are obtained, together with their asymptotic properties. By introducing a spike-and-slab prior, the order and the non-zero autoregressive coefficients of the model can be determined jointly, easily and accurately, via Markov chain Monte Carlo (MCMC) techniques. Simulation studies are conducted to evaluate the proposed methods. Finally, the BEL methods are applied to the US industrial production index data set to illustrate their good performance.
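For a concrete picture of the model class, the following is a minimal sketch (not the authors' code) that simulates a sparse autoregressive series; the maximal order, the active lags, the coefficient values and the t-distributed errors are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sparse AR(6): only lags 1 and 3 are active (assumed values).
beta = np.array([0.5, 0.0, -0.3, 0.0, 0.0, 0.0])
p, n = len(beta), 500

y = np.zeros(n + p)
for t in range(p, n + p):
    # Heavy-tailed errors chosen only for illustration; the BEL approach
    # does not require a parametric assumption on the error distribution.
    y[t] = beta @ y[t - p:t][::-1] + rng.standard_t(df=5)
y = y[p:]  # observed series of length n
```

An order-shrinkage procedure applied to such data should recover both the effective order and which lags carry non-zero coefficients.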


Notes

  1. For example, in Table 6, the "CZ" value of BELe is 2, meaning that if a method performs well, its "CZ" result will be close to 2.

References

  • Bahari F, Parsi S, Ganjali M (2019) Empirical likelihood inference in general linear model with missing values in response and covariates by MNAR mechanism. Stat Pap. https://doi.org/10.1007/s00362-019-01103-0
  • Bedoui A, Lazar NA (2020) Bayesian empirical likelihood for ridge and lasso regressions. Comput Stat Data Anal. https://doi.org/10.1016/j.csda.2020.106917
  • Bernardo JM, Smith AFM (1994) Bayesian theory. Wiley, New York
  • Billingsley P (1961) Statistical inference for Markov processes. The University of Chicago Press, Chicago
  • Broersen PMT (2006) Automatic autocorrelation and spectral analysis. Springer, Berlin
  • Chan NH, Ling S (2006) Empirical likelihood for GARCH models. Econ Theory 22(3):403–428
  • Chang IH, Mukerjee R (2008) Bayesian and frequentist confidence intervals arising from empirical-type likelihoods. Biometrika 95(1):139–147
  • Chaudhuri S, Ghosh M (2011) Empirical likelihood for small area estimation. Biometrika 98:473–480
  • Chaudhuri S, Mondal D, Yin T (2017) Hamiltonian Monte Carlo sampling in Bayesian empirical likelihood computation. J R Stat Soc B 79(1):293–320
  • Chib S, Greenberg E (1995) Understanding the Metropolis-Hastings algorithm. Am Stat 49(4):327–335
  • Chuang CS, Chan NH (2002) Empirical likelihood for autoregressive models, with applications to unstable time series. Stat Sin 12(2):387–407
  • Fan J, Li R (2001) Variable selection via nonconcave penalized likelihood and its oracle properties. J Am Stat Assoc 96(456):1348–1360
  • George EI, McCulloch RE (1993) Variable selection via Gibbs sampling. J Am Stat Assoc 88(423):881–889
  • Ishwaran H, Rao JS (2005) Spike and slab variable selection: frequentist and Bayesian strategies. Ann Stat 33(2):730–773
  • Ishwaran H, Rao JS (2011) Consistency of spike and slab regression. Stat Probab Lett 81(12):1920–1928
  • Kitamura Y (1997) Empirical likelihood method for weakly dependent processes. Ann Stat 25(5):2084–2112
  • Klimko LA, Nelson PI (1978) On conditional least squares estimation for stochastic processes. Ann Stat 6(3):629–642
  • Kolaczyk ED (1994) Empirical likelihood for generalized linear models. Stat Sin 4(1):199–218
  • Kwon S, Lee S, Na O (2017) Tuning parameter selection for the adaptive lasso in the autoregressive model. J Korean Stat Soc 46(2):285–297
  • Lazar NA (2003) Bayesian empirical likelihood. Biometrika 90(2):319–326
  • Liu T, Yuan X (2016) Weighted quantile regression with missing covariates using empirical likelihood. Statistics 50(1):89–113
  • Malsiner-Walli G, Wagner H (2011) Comparing spike and slab priors for Bayesian variable selection. Aust J Stat 40(4):241–264
  • Mengersen KL, Pudlo P, Robert CP (2013) Bayesian computation via empirical likelihood. Proc Natl Acad Sci USA 110(4):1321–1326
  • Mitchell TJ, Beauchamp JJ (1988) Bayesian variable selection in linear regression. J Am Stat Assoc 83(404):1023–1032
  • Monahan JF, Boos DD (1992) Proper likelihoods for Bayesian analysis. Biometrika 79(2):271–278
  • Monti AC (1997) Empirical likelihood confidence regions in time series models. Biometrika 84(2):395–405
  • Mykland PA (1995) Dual likelihood. Ann Stat 23(2):396–421
  • Nardi Y, Rinaldo A (2011) Autoregressive process modeling via the lasso procedure. J Multivar Anal 102(3):528–549
  • Narisetty NN, He X (2014) Bayesian variable selection with shrinking and diffusing priors. Ann Stat 42(2):789–817
  • Nordman DJ, Lahiri SN (2014) A review of empirical likelihood methods for time series. J Stat Plan Inference 155:1–18
  • Owen A (1988) Empirical likelihood ratio confidence intervals for a single functional. Biometrika 75(2):237–249
  • Owen A (1990) Empirical likelihood ratio confidence regions. Ann Stat 18(1):90–120
  • Owen A (1991) Empirical likelihood for linear models. Ann Stat 19(4):1725–1747
  • Owen A (2001) Empirical likelihood. Chapman and Hall, New York
  • Qin J, Lawless J (1994) Empirical likelihood and general estimating equations. Ann Stat 22(1):300–325
  • Rao JNK, Wu C (2010) Bayesian pseudo-empirical-likelihood intervals for complex surveys. J R Stat Soc B 72:533–544
  • Sang H, Sun Y (2015) Simultaneous sparse model selection and coefficient estimation for heavy-tailed autoregressive processes. Statistics 49(1):187–208
  • Schmidt DF, Makalic E (2013) Estimation of stationary autoregressive models with the Bayesian LASSO. J Time Ser Anal 34(5):517–531
  • Shi J, Lau TS (2000) Empirical likelihood for partially linear models. J Multivar Anal 72(1):132–148
  • Shibata R (1976) Selection of the order of an autoregressive model by Akaike's information criterion. Biometrika 63(1):117–126
  • Tang CY, Leng C (2010) Penalized high-dimensional empirical likelihood. Biometrika 97(4):905–920
  • Tibshirani R (1996) Regression shrinkage and selection via the lasso. J R Stat Soc B 58(1):267–288
  • Wang H, Li G, Tsai CL (2007) Regression coefficient and autoregressive order shrinkage and selection via the lasso. J R Stat Soc B 69(1):63–78
  • Wei C, Luo Y, Wu X (2012) Empirical likelihood for partially linear additive errors-in-variables models. Stat Pap 53:485–496
  • Xi R, Li Y, Hu Y (2016) Bayesian quantile regression based on the empirical likelihood with spike and slab priors. Bayesian Anal 11(3):821–855
  • Yang K, Li H, Wang D (2018a) Estimation of parameters in the self-exciting threshold autoregressive processes for nonlinear time series of counts. Appl Math Model 57:226–247
  • Yang K, Wang D, Li H (2018b) Threshold autoregression analysis for finite-range time series of counts with an application on measles data. J Stat Comput Simul 88(3):597–614
  • Yang Y, He X (2012) Bayesian empirical likelihood for quantile regression. Ann Stat 40(2):1102–1131
  • Zhang Y, Tang N (2017) Bayesian empirical likelihood estimation of quantile structural equation models. J Syst Sci Complex 30(1):122–138
  • Zhang C (2010) Nearly unbiased variable selection under minimax concave penalty. Ann Stat 38(2):894–942
  • Zhang H, Wang D, Sun L (2017) Regularized estimation in GINAR(\(p\)) process. J Korean Stat Soc 46(4):502–517
  • Zhao P, Ghosh M, Rao JNK, Wu C (2019) Bayesian empirical likelihood inference with complex survey data. J R Stat Soc B 82(1):155–174
  • Zhong X, Ghosh M (2016) Higher-order properties of Bayesian empirical likelihood. Electron J Stat 10(2):3011–3044
  • Zhu L, Xue L (2006) Empirical likelihood confidence regions in a partially linear single-index model. J R Stat Soc B 68(3):549–570
  • Zou H (2006) The adaptive lasso and its oracle properties. J Am Stat Assoc 101(476):1418–1429


Acknowledgements

This work is supported by the National Natural Science Foundation of China (Nos. 11901053, 11871028, 11731015), the Natural Science Foundation of Jilin Province (No. 20180101216JC), and the Program for Changbaishan Scholars of Jilin Province (2015010).

Author information


Corresponding author

Correspondence to Xiaohui Yuan.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Proof of Theorem 3.1

It is easy to see that (S1) and (S2) are equivalent; therefore, we only prove (S2). Recall that \({\varvec{Y}}_{t}=\left( y_{t}, \dots , y_{t-p+1}\right) ^{\textsf {T}}\) and \({\varvec{Z}}_n=\sum _{t=p+1}^n {\varvec{Y}}_{t-1}{\varvec{Y}}_{t-1}^{\textsf {T}}\). Then, we have \( {\varvec{\Sigma }}_n=\sum _{t=p+1}^n {\varvec{m}}_t({\varvec{\beta }}){\varvec{m}}_t({\varvec{\beta }})^{\textsf {T}} =\sum _{t=p+1}^n (y_t-{\varvec{\beta }}^{\textsf {T}}{\varvec{Y}}_{t-1})^2 {\varvec{Y}}_{t-1}{\varvec{Y}}_{t-1}^{\textsf {T}} \). By Theorem 1.1 in Billingsley (1961) and the continuous mapping theorem, we have

$$\begin{aligned} \frac{1}{n}\sum _{t=p+1}^n {\varvec{Y}}_{t-1}{\varvec{Y}}_{t-1}^{\textsf {T}} \overset{P}{\longrightarrow }E({\varvec{Y}}_{p}{\varvec{Y}}_{p}^{\textsf {T}}):={\varvec{C}}_1 <\infty ,~n\rightarrow \infty , \end{aligned}$$

and

$$\begin{aligned} \frac{1}{n}\sum _{t=p+1}^n (y_t-{\varvec{\beta }}^{\textsf {T}}{\varvec{Y}}_{t-1})^2 {\varvec{Y}}_{t-1}{\varvec{Y}}_{t-1}^{\textsf {T}} \overset{P}{\longrightarrow }E((y_{p+1}-{\varvec{\beta }}^{\textsf {T}} {\varvec{Y}}_{p})^2{\varvec{Y}}_{p}{\varvec{Y}}_{p}^{\textsf {T}}):={\varvec{C}}_2 <\infty ,~n\rightarrow \infty , \end{aligned}$$

where \({\varvec{C}}_1\) and \({\varvec{C}}_2\) are constant matrices. Note that as \(n\rightarrow \infty \), \( \frac{1}{n}{\varvec{\Lambda }}^{-1}\rightarrow {\varvec{0}}. \) Thus,

$$\begin{aligned} \frac{1}{n}({\varvec{Z}}_n^{\textsf {T}}{\varvec{\Sigma }}^{-1}{\varvec{Z}}_n+{\varvec{\Lambda }}^{-1}) \overset{P}{\longrightarrow }{\varvec{C}}_1^{\textsf {T}}{\varvec{C}}_2^{-1}{\varvec{C}}_1,~n\rightarrow \infty . \end{aligned}$$

Hence, by the continuous mapping theorem and the fact that \(\frac{1}{n}{\varvec{\Lambda }}^{-1}\rightarrow {\varvec{0}}\), we have as \(n\rightarrow \infty \),

$$\begin{aligned} ({\varvec{Z}}_n^{\textsf {T}}{{\varvec{\Sigma }}}^{-1}{\varvec{Z}}_n+{\varvec{\Lambda }}^{-1})^{-1}{\varvec{\Lambda }}^{-1} =\Big [\frac{1}{n}({\varvec{Z}}_n^{\textsf {T}}{{\varvec{\Sigma }}}^{-1}{\varvec{Z}}_n+{\varvec{\Lambda }}^{-1})\Big ]^{-1}\frac{1}{n}{\varvec{\Lambda }}^{-1} \overset{P}{\longrightarrow }({\varvec{C}}_1^{\textsf {T}}{\varvec{C}}_2^{-1}{\varvec{C}}_1)^{-1}\cdot {\varvec{0}}={\varvec{0}}. \end{aligned}$$

Since \(\hat{{\varvec{\Sigma }}}\) is a consistent estimator of \({\varvec{\Sigma }}\), by Slutsky’s theorem, (S2) holds.

The proof is complete. \(\square \)
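As an informal numerical check of the two convergence results used above (not part of the proof), one can simulate a long stationary AR path and verify that the sample averages \(\frac{1}{n}\sum _{t}{\varvec{Y}}_{t-1}{\varvec{Y}}_{t-1}^{\textsf {T}}\) and \(\frac{1}{n}{\varvec{\Sigma }}_n\) stabilize at constant matrices. The AR(2) specification below is an assumption made only for this illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed stationary AR(2) with i.i.d. N(0, 1) errors, for illustration only.
beta = np.array([0.4, -0.2])
p, n = 2, 20000

y = np.zeros(n + p)  # short start-up transient is ignored in this informal check
for t in range(p, n + p):
    y[t] = beta @ y[t - p:t][::-1] + rng.normal()

# Rows of Y are the lag vectors Y_{t-1}^T = (y_{t-1}, y_{t-2}).
Y = np.column_stack([y[p - 1:n + p - 1], y[p - 2:n + p - 2]])
e = y[p:] - Y @ beta                      # model errors at the true beta

C1_hat = (Y.T @ Y) / n                    # ~ C_1 = E(Y_p Y_p^T)
C2_hat = (Y.T * e**2) @ Y / n             # ~ C_2 = E((y_{p+1} - beta^T Y_p)^2 Y_p Y_p^T)

# With i.i.d. unit-variance errors independent of the past, C_2 = C_1,
# so the two estimates should be close for large n.
print(C1_hat)
print(C2_hat)
```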

The full conditional density of \(\beta _{i} | {\varvec{Y}}, {\varvec{\theta }}, \eta , {\varvec{\beta }}_{-i}\) is:

$$\begin{aligned}&f\left( \beta _{i} | {\varvec{Y}}, {\varvec{\theta }}, \eta , {\varvec{\beta }}_{-i}\right) \\&\quad \propto \exp \left\{ -\frac{1}{2} \left( \beta _{i}-\mu _{i}\right) ^{2}V_{i} \right\} \pi \left( \beta _{i} | \theta _{i}, \eta \right) \\&\quad \propto \exp \left\{ -\frac{1}{2} \left( \beta _{i}-\mu _{i}\right) ^{2}V_{i} \right\} \left[ \theta _{i} I_{\left\{ \beta _{i}=0\right\} }+\left( 1-\theta _{i}\right) I_{\{\beta _i \ne 0\}} \frac{1}{\sqrt{2 \pi \eta ^{-1}}}\exp \left( -\frac{\beta _{i}^{2}}{2} \eta \right) \right] \\&\quad \propto \theta _{i} \exp \left\{ -\frac{1}{2} \mu _{i}^{2}V_{i} \right\} I_{\left\{ \beta _{i}=0\right\} } +\left( 1-\theta _{i}\right) \exp \left\{ -\frac{1}{2} \left( \beta _{i}-\mu _{i}\right) ^{2}V_{i} \right\} \frac{1}{\sqrt{2 \pi \eta ^{-1}}} \exp \left( -\frac{\beta _{i}^{2}}{2} \eta \right) I_{\{\beta _i \ne 0\}} \\&\quad \propto \theta _{i} I_{\left\{ \beta _{i}=0\right\} }+\left( 1-\theta _{i}\right) \exp \left\{ \frac{1}{2} \frac{V_i^2\mu _i^2}{\eta +V_i}\right\} \sqrt{ \frac{\eta }{V_i+\eta } } \sqrt{\frac{V_i+\eta }{ 2 \pi }} \exp \left\{ -\frac{1}{2}(\eta +V_i) \left( \beta _i- \frac{V_i\mu _i}{\eta +V_i} \right) ^2 \right\} I_{\{\beta _i \ne 0\}} \\&\quad \propto \tilde{\theta }_{i} I_{\left\{ \beta _{i}=0\right\} }+\left( 1-\tilde{\theta }_{i}\right) N\left( \frac{V_{i} \mu _{i}}{V_{i}+\eta }, \frac{1}{V_{i}+\eta }\right) I_{\left\{ \beta _{i}\ne 0\right\} }, \end{aligned}$$

where \( \tilde{\theta }_{i}= \theta _{i}\left( \theta _{i}+\left( 1-\theta _{i}\right) \exp \left\{ \frac{1}{2} \frac{V_i^2\mu _i^2}{\eta +V_i}\right\} \sqrt{ \frac{\eta }{V_i+\eta } } \right) ^{-1}\).
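The last proportionality says that, conditionally, \(\beta _i\) equals zero with probability \(\tilde{\theta }_i\) and is otherwise drawn from \(N\big (V_i\mu _i/(V_i+\eta ),\,1/(V_i+\eta )\big )\). A minimal sketch of this single Gibbs update is given below; it assumes that \(\mu _i\) and \(V_i\) have already been computed from the current state of the chain (their construction from the empirical likelihood is given in the main text and is not reproduced here).

```python
import numpy as np

def draw_beta_i(mu_i, V_i, theta_i, eta, rng=None):
    """One Gibbs draw of beta_i from the spike-and-slab full conditional.

    mu_i, V_i : quantities from the working normal approximation of the
                BEL posterior for beta_i (assumed already computed).
    theta_i   : prior probability of the spike at zero.
    eta       : slab precision, i.e. the slab prior is N(0, 1/eta).
    """
    rng = np.random.default_rng() if rng is None else rng

    # Posterior spike probability, theta_i-tilde in the derivation above.
    slab_mass = (1.0 - theta_i) * np.exp(0.5 * V_i**2 * mu_i**2 / (eta + V_i)) \
                * np.sqrt(eta / (V_i + eta))
    theta_tilde = theta_i / (theta_i + slab_mass)

    if rng.uniform() < theta_tilde:
        return 0.0                                    # spike: coefficient set exactly to zero
    # Slab: N(V_i * mu_i / (V_i + eta), 1 / (V_i + eta)).
    return rng.normal(V_i * mu_i / (V_i + eta), np.sqrt(1.0 / (V_i + eta)))
```

In the full sampler this update would be cycled over \(i=1,\dots ,p\) along with the other parameters, which is what produces the order shrinkage described in the abstract: lags whose draws are repeatedly zero are effectively dropped from the model.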

About this article

Cite this article

Yang, K., Ding, X. & Yuan, X. Bayesian empirical likelihood inference and order shrinkage for autoregressive models. Stat Papers 63, 97–121 (2022). https://doi.org/10.1007/s00362-021-01231-6
