Nature Inspiration for Support Vector Machines

  • Davide Anguita
  • Dario Sterpi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4252)

Abstract

In this paper we propose a new kernel for Support Vector Machine learning that is inspired by the biological world. The kernel is based on Gabor filters, which are a good model of the response of cells in the primary visual cortex and have been shown to be very effective for processing natural images. Furthermore, we establish a link between energy efficiency, a driving force in biological processing systems, and the good generalization ability of learning machines. This connection can serve as a starting point for developing new kernel-based learning algorithms.
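
The abstract describes the kernel only at a high level, so the following is a minimal sketch rather than the authors' exact formulation: a generic Gabor-like kernel (a Gaussian envelope modulated by a cosine) plugged into an SVM through scikit-learn's callable-kernel interface. The parameters gamma and omega, and the toy data, are assumptions introduced purely for illustration.

    import numpy as np
    from sklearn.svm import SVC

    def gabor_kernel(X, Y, gamma=0.5, omega=None):
        """Gabor-like kernel: Gaussian envelope modulated by a cosine.

        Hypothetical illustration only -- not the exact kernel proposed
        in the paper. `gamma` and `omega` are assumed free parameters.
        """
        X = np.asarray(X, dtype=float)
        Y = np.asarray(Y, dtype=float)
        if omega is None:
            omega = np.ones(X.shape[1])        # modulation frequency vector
        diff = X[:, None, :] - Y[None, :, :]   # pairwise differences, shape (n, m, d)
        sq_dist = np.sum(diff ** 2, axis=-1)   # squared Euclidean distances
        modulation = np.cos(diff @ omega)      # cosine modulation term
        return np.exp(-gamma * sq_dist) * modulation

    # Usage: pass the callable directly to scikit-learn's SVC.
    X_train = np.random.randn(40, 2)
    y_train = (X_train[:, 0] * X_train[:, 1] > 0).astype(int)
    clf = SVC(kernel=gabor_kernel).fit(X_train, y_train)
    print(clf.predict(X_train[:5]))

Note that the product of a Gaussian kernel and the cosine term cos(omega·(x - y)) is still positive definite, so this construction yields a valid Mercer kernel.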

Keywords

Support Vector Machine · Generalization Ability · Statistical Learning Theory · Support Vector Machine Algorithm · Structural Risk Minimization

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Davide Anguita ¹
  • Dario Sterpi ¹

  1. Dept. of Biophysical and Electronic Engineering, University of Genoa, Genoa, Italy
