Discriminant Subspace Learning Based on Support Vectors Machines

  • Conference paper

Part of the Lecture Notes in Computer Science book series (LNAI, volume 7376)

Abstract

A new method for dimensionality reduction and feature extraction is proposed, based on Support Vector Machines and minimization of the within-class data dispersion. An iterative procedure successively applies Support Vector Machines on mutually perpendicular subspaces using the deflation transformation, in such a way that the within-class variance is minimized. The proposed approach is shown to be equivalent to successively training SVMs with deflation kernels. The normal vectors of the successive separating hyperplanes carry discriminant information and can be used as projection vectors for feature extraction and dimensionality reduction of the data. Experiments on various datasets highlight the superior performance of the proposed algorithm.
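The iterative procedure described above can be sketched in code. The following is a minimal illustrative implementation, not the authors' original algorithm: it trains a linear SVM, keeps the unit normal of the separating hyperplane as a discriminant direction, deflates the data onto the subspace orthogonal to that direction, and repeats. The function name `svm_deflation_subspace` and the use of scikit-learn's `LinearSVC` are assumptions for illustration; the paper's formulation additionally incorporates the within-class scatter, which this sketch omits.

```python
# Illustrative sketch (hypothetical): successive SVMs with deflation.
# Each round trains a linear SVM, stores the hyperplane's unit normal,
# and removes that direction from the data before the next round.
import numpy as np
from sklearn.svm import LinearSVC

def svm_deflation_subspace(X, y, n_components=2, C=1.0):
    """Return a (d, n_components) matrix of discriminant projection vectors."""
    Xd = X.copy()
    directions = []
    for _ in range(n_components):
        clf = LinearSVC(C=C, dual=False, max_iter=10000).fit(Xd, y)
        w = clf.coef_.ravel()
        w = w / np.linalg.norm(w)          # unit normal of the hyperplane
        directions.append(w)
        # Deflation: project the data onto the subspace orthogonal to w
        Xd = Xd - np.outer(Xd @ w, w)
    return np.stack(directions, axis=1)

# Usage: two synthetic Gaussian classes in 5 dimensions
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(2, 1, (50, 5))])
y = np.array([0] * 50 + [1] * 50)
W = svm_deflation_subspace(X, y, n_components=2)
X_proj = X @ W   # reduced-dimensionality discriminant features
```

Because each SVM is trained on data confined to the orthogonal complement of the previous normals, the columns of `W` are (approximately) mutually orthogonal projection directions, mirroring the "perpendicular subspaces" construction in the abstract.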

Keywords

  • Support Vector Machine
  • Feature Extraction
  • Linear Discriminant Analysis
  • Scatter Matrix
  • Discriminant Information

Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Pitelis, N., Tefas, A. (2012). Discriminant Subspace Learning Based on Support Vectors Machines. In: Perner, P. (eds) Machine Learning and Data Mining in Pattern Recognition. MLDM 2012. Lecture Notes in Computer Science, vol 7376. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-31537-4_16

  • DOI: https://doi.org/10.1007/978-3-642-31537-4_16

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-31536-7

  • Online ISBN: 978-3-642-31537-4

  • eBook Packages: Computer Science (R0)