Sparse Gaussian Processes Using Backward Elimination

  • Liefeng Bo
  • Ling Wang
  • Licheng Jiao
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3971)


Gaussian Processes (GPs) achieve state-of-the-art performance in regression. However, GPs require all the basis functions for prediction, so their test speed is slower than that of other learning algorithms such as support vector machines (SVMs), the relevance vector machine (RVM), and adaptive sparseness (AS). To overcome this limitation, we present a backward elimination algorithm, called GPs-BE, that recursively eliminates basis functions from GPs until a stopping criterion is satisfied. By integrating rank-1 updates, GPs-BE can be implemented at a reasonable computational cost. Extensive empirical comparisons confirm the feasibility and validity of the proposed algorithm.
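The backward elimination idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses a subset-of-regressors style fit with an assumed RBF kernel and naively refits after each candidate removal, whereas GPs-BE uses rank-1 updates to avoid this cost. The kernel width `gamma`, noise level `noise`, and all function names are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Squared-exponential kernel between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def gp_fit(X, y, basis_idx, noise=1e-2, gamma=1.0):
    # Subset-of-regressors fit: solve for the weights of the chosen basis functions.
    Knm = rbf_kernel(X, X[basis_idx], gamma)
    Kmm = rbf_kernel(X[basis_idx], X[basis_idx], gamma)
    A = Knm.T @ Knm + noise * Kmm          # regularized normal equations
    return np.linalg.solve(A, Knm.T @ y)

def backward_eliminate(X, y, n_keep, noise=1e-2, gamma=1.0):
    # Start from all basis functions; repeatedly drop the one whose
    # removal increases the training error the least.
    basis = list(range(len(X)))
    while len(basis) > n_keep:
        errs = []
        for j in range(len(basis)):
            trial = basis[:j] + basis[j + 1:]
            w = gp_fit(X, y, trial, noise, gamma)
            resid = y - rbf_kernel(X, X[trial], gamma) @ w
            errs.append(resid @ resid)
        basis.pop(int(np.argmin(errs)))    # eliminate the least useful basis function
    return basis

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (40, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(40)
basis = backward_eliminate(X, y, n_keep=8)
print(len(basis))  # 8 basis functions remain
```

The naive refit inside the candidate loop costs a full linear solve per trial removal; replacing it with a rank-1 downdate of the factorized system is exactly what makes the paper's backward elimination affordable.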


Keywords: Support Vector Machine, Basis Function, Gaussian Process, Forward Selection, Radial Basis Function Network





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Liefeng Bo (1)
  • Ling Wang (1)
  • Licheng Jiao (1)

  1. Institute of Intelligent Information Processing and National Key Laboratory for Radar Signal Processing, Xidian University, Xi’an, China
