
Quantile regression with ℓ1-regularization and Gaussian kernels

Published in: Advances in Computational Mathematics

Abstract

The quantile regression problem is studied via learning schemes based on ℓ1-regularization and Gaussian kernels. The purpose of this paper is to present concentration estimates for these algorithms. Our analysis shows that the convergence behavior of ℓ1-quantile regression with Gaussian kernels is almost the same as that of RKHS-based learning schemes. Furthermore, previous analyses of kernel-based quantile regression usually require the output sample values to be uniformly bounded, which excludes the common case of Gaussian noise. The error analysis presented in this paper yields satisfactory convergence rates even for unbounded sampling processes. Numerical experiments are also given that support the theoretical results.
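The learning scheme studied here (minimizing the pinball loss plus an ℓ1 penalty over Gaussian-kernel expansions) can be recast as a linear program, since both the pinball loss and the ℓ1 norm are piecewise linear. The sketch below is an illustration of that idea, not the authors' implementation; the quantile parameter tau, kernel width sigma, and regularization weight lam are placeholder choices, and scipy's LP solver stands in for whatever optimizer one prefers.

```python
import numpy as np
from scipy.optimize import linprog

def gaussian_kernel(X1, X2, sigma):
    # Pairwise Gaussian kernel: K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def l1_quantile_regression(X, y, tau=0.5, lam=1e-3, sigma=1.0):
    """Fit f(x) = sum_j c_j K(x, x_j) by minimizing
        (1/m) sum_i pinball_tau(y_i - f(x_i)) + lam * ||c||_1.
    LP reformulation: c = c+ - c-, residual y - Kc = u - v with
    c+, c-, u, v >= 0, so the pinball loss becomes tau*u + (1-tau)*v."""
    m = len(y)
    K = gaussian_kernel(X, X, sigma)
    # Decision variables, in order: [c+, c-, u, v], each of length m.
    cost = np.concatenate([lam * np.ones(m), lam * np.ones(m),
                           (tau / m) * np.ones(m),
                           ((1.0 - tau) / m) * np.ones(m)])
    # Equality constraint K(c+ - c-) + u - v = y.
    A_eq = np.hstack([K, -K, np.eye(m), -np.eye(m)])
    res = linprog(cost, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    z = res.x
    return z[:m] - z[m:2 * m]  # recovered coefficient vector c

# Tiny demo: median regression (tau = 0.5) on noisy samples of a sine curve.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(40, 1))
y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.normal(size=40)
c = l1_quantile_regression(X, y, tau=0.5, lam=1e-4, sigma=0.3)
pred = gaussian_kernel(X, X, 0.3) @ c
```

Because the objective is linear after the variable splitting, any LP solver applies; the ℓ1 penalty typically drives many coefficients c_j to exactly zero, giving a sparse kernel expansion.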



Author information

Correspondence to Lei Shi.

Additional information

Communicated by: Alexander Barnett


About this article

Cite this article

Shi, L., Huang, X., Tian, Z. et al. Quantile regression with ℓ1-regularization and Gaussian kernels. Adv Comput Math 40, 517–551 (2014). https://doi.org/10.1007/s10444-013-9317-0

