Kernel Basis Pursuit

  • Vincent Guigue
  • Alain Rakotomamonjy
  • Stéphane Canu
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3720)

Abstract

Estimating a non-uniformly sampled function from a set of learning points is a classical regression problem. Kernel methods have been widely used in this context, but every problem raises two major tasks: choosing the kernel and setting the compromise between data fit and regularization.

This article presents a new method for estimating a function from noisy learning points in the framework of Reproducing Kernel Hilbert Spaces (RKHS). We introduce the Kernel Basis Pursuit algorithm, which builds an ℓ1-regularized, multiple-kernel estimator. The general idea is to decompose the function to be learned over a sparse, optimal set of spanning functions. Our implementation relies on the Least Absolute Shrinkage and Selection Operator (LASSO) formulation and on the Least Angle Regression (LARS) solver. Because LARS computes the full regularization path, we can propose new adaptive criteria for finding an optimal compromise between data fit and regularization. The overall aim is a fast, parameter-free method for estimating non-uniformly sampled functions.
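As a rough illustration of the pipeline the abstract describes (a sketch, not the authors' implementation), the snippet below builds a multiple-kernel dictionary from Gaussian kernels at a few bandwidths and runs LARS to obtain the full LASSO regularization path. The synthetic data, the bandwidth values, and the noise level are all assumptions for the example, and scikit-learn's lars_path is used as a stand-in for the paper's LARS solver.

    import numpy as np
    from sklearn.linear_model import lars_path

    # Noisy, non-uniformly sampled 1-D function (synthetic example).
    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(0.0, 1.0, 60))
    y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(60)

    # Multiple-kernel dictionary: one Gaussian kernel matrix per bandwidth,
    # stacked column-wise, so every training point contributes several
    # candidate spanning functions (the bandwidths are illustrative choices).
    bandwidths = [0.05, 0.1, 0.3]
    K = np.hstack([
        np.exp(-((x[:, None] - x[None, :]) ** 2) / (2.0 * s ** 2))
        for s in bandwidths
    ])

    # LARS returns the entire LASSO regularization path in one run; each
    # column of `coefs` is the sparse coefficient vector at one breakpoint,
    # so a fit/regularization criterion can be evaluated along the whole
    # path instead of re-solving for every penalty value.
    alphas, _, coefs = lars_path(K, y, method="lasso")
    print(coefs.shape)  # (n_candidate_functions, n_path_breakpoints)

Selecting a stopping point along the path (for instance by an adaptive fit/regularization criterion, as the paper proposes) then yields the final sparse estimator.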

Keywords

Regression · Multiple kernels · LASSO · Parameter-free

References

  1. Tikhonov, A., Arsénin, V.: Solutions of Ill-Posed Problems. W.H. Winston (1977)
  2. Girosi, F., Jones, M., Poggio, T.: Regularization theory and neural networks architectures. Neural Computation 7, 219–269 (1995)
  3. Wahba, G.: Spline Models for Observational Data. Series in Applied Mathematics, vol. 59. SIAM, Philadelphia (1990)
  4. Kimeldorf, G., Wahba, G.: Some results on Tchebycheffian spline functions. J. Math. Anal. Applic. 33, 82–95 (1971)
  5. Tibshirani, R.: Regression shrinkage and selection via the lasso. J. Royal Statist. Soc. B 58, 267–288 (1996)
  6. Efron, B., Hastie, T., Johnstone, I., Tibshirani, R.: Least angle regression. Annals of Statistics 32, 407–499 (2004)
  7. Bach, F., Thibaux, R., Jordan, M.: Computing regularization paths for learning multiple kernels. In: Neural Information Processing Systems, vol. 17 (2004)
  8. Mallat, S., Zhang, Z.: Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing 41, 3397–3415 (1993)
  9. Pati, Y.C., Rezaiifar, R., Krishnaprasad, P.S.: Orthogonal matching pursuits: recursive function approximation with applications to wavelet decomposition. In: 27th Asilomar Conference on Signals, Systems, and Computers (1993)
  10. Vincent, P., Bengio, Y.: Kernel matching pursuit. Machine Learning 48, 165–187 (2002)
  11. Chen, S., Donoho, D., Saunders, M.: Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing 20, 33–61 (1998)
  12. Chen, S.: Basis Pursuit. PhD thesis, Department of Statistics, Stanford University (1995)
  13. Grandvalet, Y.: Least absolute shrinkage is equivalent to quadratic penalization. In: ICANN, pp. 201–206 (1998)
  14. Loosli, G., Canu, S., Vishwanathan, S., Smola, A.J., Chattopadhyay, M.: Une boîte à outils rapide et simple pour les SVM [A fast and simple toolbox for SVMs]. In: CAp (2004)
  15. Ljung, L.: System Identification: Theory for the User. Prentice-Hall (1987)
  16. Schölkopf, B., Smola, A.: Learning with Kernels. MIT Press (2002)
  17. Bi, J., Bennett, K., Embrechts, M., Breneman, C., Song, M.: Dimensionality reduction via sparse support vector machines. Journal of Machine Learning Research 3, 1229–1243 (2003)
  18. Donoho, D., Johnstone, I.: Ideal spatial adaptation by wavelet shrinkage. Biometrika 81, 425–455 (1994)
  19. Chang, M., Lin, C.: Leave-one-out bounds for support vector regression model selection. Neural Computation (2005)
  20. Blake, C., Merz, C.: UCI repository of machine learning databases (1998)

Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Vincent Guigue (1)
  • Alain Rakotomamonjy (1)
  • Stéphane Canu (1)

  1. Lab. Perception, Systèmes, Information, CNRS FRE 2645, St Étienne du Rouvray