
Oracle Inequality for Sparse Trace Regression Models with Exponential β-mixing Errors

Published in: Acta Mathematica Sinica, English Series

Abstract

Trace regression models are useful tools in applications involving, e.g., panel data, images, and genomic microarrays. To address the high dimensionality of such applications, it is common to assume some sparsity property. For the case where the parameter matrix is simultaneously low-rank and element-wise sparse, we estimate the parameter matrix by least squares with a composite penalty combining the nuclear norm and the ℓ1 norm. We extend the existing analysis of low-rank trace regression with i.i.d. errors to exponentially β-mixing errors. The explicit convergence rate and the asymptotic properties of the proposed estimator are established. Simulations, as well as a real data application, are carried out for illustration.
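The estimator described above minimizes a squared loss over the parameter matrix Θ plus a composite penalty λ₁‖Θ‖_* + λ₂‖Θ‖₁. A minimal numerical sketch of this idea is given below, assuming a proximal-gradient scheme in which the composite proximal step is approximated by composing element-wise soft-thresholding (for the ℓ1 norm) with singular-value thresholding (for the nuclear norm). This composition is a heuristic, not the exact proximal operator of the sum of the two norms, and the function names, step-size rule, and iteration count are our own illustration rather than the paper's algorithm.

```python
import numpy as np

def soft_threshold(M, tau):
    # Element-wise soft-thresholding: proximal operator of tau * ||.||_1
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svt(M, tau):
    # Singular-value thresholding: proximal operator of tau * ||.||_*
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def sparse_lowrank_trace_regression(X, y, lam_nuc, lam_l1, step=None, n_iter=500):
    """Proximal-gradient sketch for the trace regression objective
        min_Theta (1/2n) * sum_i (y_i - <X_i, Theta>)^2
                  + lam_nuc * ||Theta||_* + lam_l1 * ||Theta||_1,
    where <A, B> = trace(A^T B).  The composite prox is approximated by
    applying soft_threshold and then svt (an incremental-prox heuristic)."""
    n = len(y)
    d1, d2 = X[0].shape
    Theta = np.zeros((d1, d2))
    if step is None:
        # Crude upper bound on the Lipschitz constant of the gradient
        L = sum(np.sum(Xi ** 2) for Xi in X) / n
        step = 1.0 / max(L, 1e-12)
    for _ in range(n_iter):
        resid = np.array([np.sum(Xi * Theta) for Xi in X]) - y
        grad = sum(r * Xi for r, Xi in zip(resid, X)) / n
        Theta = svt(soft_threshold(Theta - step * grad, step * lam_l1),
                    step * lam_nuc)
    return Theta
```

On simulated noiseless data with a rank-one, sparse true parameter matrix, the iterates approach the truth up to the shrinkage bias induced by the penalties; the paper's theory quantifies the analogous rate when the errors form an exponentially β-mixing sequence rather than an i.i.d. one.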



Author information

Correspondence to Xiao Hui Liu.

Ethics declarations

Conflict of Interest: The authors declare no conflict of interest.

Additional information

Peng’s research was supported by the NSF of China (Grant No. 12201259), the Jiangxi Provincial NSF (Grant No. 20224BAB211008), and the Science and Technology Research Project of the Education Department of Jiangxi Province (Grant No. GJJ2200537); Liu’s research was supported by the NSF of China (Grant No. 11971208) and the NSSF of China (Grant No. 21&ZD152); Tan’s research was supported by the NSF of China (Grant No. 12201260), the NSSF of China (Grant No. 20BTJ008), the Science and Technology Research Project of the Education Department of Jiangxi Province (Grant No. GJJ200545), the Jiangxi Provincial NSF (Grant No. 20212BAB211010), and the China Postdoctoral Science Foundation (Grant No. 2022M711425).

About this article

Cite this article

Peng, L., Tan, X.Y., Xiao, P.W. et al. Oracle Inequality for Sparse Trace Regression Models with Exponential β-mixing Errors. Acta. Math. Sin.-English Ser. 39, 2031–2053 (2023). https://doi.org/10.1007/s10114-023-2153-3
