
Learning capability of the truncated greedy algorithm

Abstract

The pure greedy algorithm (PGA), the orthogonal greedy algorithm (OGA), and the relaxed greedy algorithm (RGA) are three greedy-type algorithms widely used in both nonlinear approximation and supervised learning. In this paper, we apply another greedy-type variant, the truncated greedy algorithm (TGA), to supervised learning and study its learning performance. We rigorously prove that TGA outperforms PGA in the sense that TGA possesses a faster learning rate. Furthermore, in some special cases, we prove that TGA also outperforms OGA and RGA. All these theoretical assertions are verified by both toy simulations and real-data experiments.
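To make the contrast between PGA and a truncated variant concrete, the following sketch implements one standard form of the pure greedy algorithm (matching pursuit over a finite dictionary) alongside a truncated version that clips the approximant at a fixed level after each greedy update. The clipping operator, the `level` parameter, and the function names `pga`/`tga` are illustrative assumptions for this sketch, not the paper's exact construction.

```python
import numpy as np

def pga(y, D, iters):
    """Pure greedy algorithm (matching pursuit): at each step, pick the
    dictionary atom most correlated with the residual and subtract its
    projection onto that atom."""
    r = y.copy()                 # residual
    f = np.zeros_like(y)         # approximant
    for _ in range(iters):
        corr = D.T @ r                        # inner products with all atoms
        k = np.argmax(np.abs(corr))           # index of best atom
        coef = corr[k] / (D[:, k] @ D[:, k])  # projection coefficient
        f += coef * D[:, k]
        r -= coef * D[:, k]
    return f

def tga(y, D, iters, level):
    """Truncated variant (illustrative): the greedy step is identical, but
    the approximant is clipped to [-level, level] after each update.
    This particular truncation operator is a hypothetical choice, not
    necessarily the one analyzed in the paper."""
    f = np.zeros_like(y)
    for _ in range(iters):
        r = y - f                             # residual w.r.t. truncated approximant
        corr = D.T @ r
        k = np.argmax(np.abs(corr))
        coef = corr[k] / (D[:, k] @ D[:, k])
        f = np.clip(f + coef * D[:, k], -level, level)
    return f
```

The truncation keeps the approximant uniformly bounded at every iteration, which is the mechanism that a learning-rate analysis can exploit when the regression function itself is known to be bounded.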


Author information

Correspondence to Shaobo Lin.

About this article

Cite this article

Xu, L., Lin, S. & Xu, Z. Learning capability of the truncated greedy algorithm. Sci. China Inf. Sci. 59, 052103 (2016). https://doi.org/10.1007/s11432-016-5536-6

Keywords

  • supervised learning
  • learning theory
  • generalization capability
  • greedy algorithm
  • truncated greedy algorithm
