
Regularized least square regression with dependent samples


Abstract

In this paper we study the learning performance of regularized least square regression with α-mixing and ϕ-mixing inputs. Capacity-independent error bounds and learning rates are derived by means of an integral operator technique. Even for independent samples, our learning rates improve on those in the literature. The results are sharp in the sense that, when the mixing conditions are strong enough, the rates are shown to be close to, or the same as, those for learning with independent samples. They also reveal interesting phenomena of learning with dependent samples: (i) dependent samples contain less information and lead to worse error bounds than independent samples; (ii) the influence of the dependence between samples on the learning process decreases as the smoothness of the target function increases.
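For reference, the estimator in question is, in its standard formulation, the regularized least squares solution over a reproducing kernel Hilbert space. The display below is a minimal sketch using generic notation (a sample z = {(x_i, y_i)}_{i=1}^m drawn from a stationary mixing process, a Mercer kernel K with RKHS H_K and norm ‖·‖_K, and a regularization parameter λ > 0); it may differ in minor details from the paper's exact setup:

\[
f_{\mathbf{z},\lambda} \;=\; \operatorname*{arg\,min}_{f \in \mathcal{H}_K} \; \frac{1}{m} \sum_{i=1}^{m} \bigl(f(x_i) - y_i\bigr)^2 \;+\; \lambda \,\|f\|_K^2 .
\]

Here the dependence within the sample is quantified by its α-mixing or ϕ-mixing coefficients, and the learning rates describe how fast f_{z,λ} approaches the target regression function as m grows.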



Author information

Correspondence to Qiang Wu.

Additional information

Communicated by Yuesheng Xu.


Cite this article

Sun, H., Wu, Q. Regularized least square regression with dependent samples. Adv Comput Math 32, 175–189 (2010). https://doi.org/10.1007/s10444-008-9099-y

