
Feature Extraction of Weighted Data for Implicit Variable Selection

  • Conference paper
Computer Analysis of Images and Patterns (CAIP 2007)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 4673)


Abstract

Approaches based on extracting relevant information from overwhelmingly large sets of measures have recently been adopted as an alternative to specialized features. In this work, we address the problem of finding both a relevant subset of features and a suitable rotation (combined feature selection and feature extraction), expressed as a weighted rotation. We focus on two types of rotations: Weighted Principal Component Analysis and Weighted Regularized Discriminant Analysis. The objective function is the maximization of the J4 ratio. Tests were carried out on artificially generated classes containing several non-relevant features. Real-data tests were also performed on the segmentation of nailfold capillaroscopic images and on the NIST-38 database (prototype selection).
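The weighted-rotation idea described in the abstract can be illustrated with a minimal sketch of Weighted PCA: each feature is scaled by a relevance weight before the principal directions are computed, so features with near-zero weight are implicitly deselected while the remaining ones are rotated. The function name, the per-feature scaling scheme, and all details below are assumptions for illustration only, not the authors' actual formulation.

```python
import numpy as np

def weighted_pca(X, w, n_components=2):
    """Sketch of Weighted PCA: per-feature weights act as an implicit
    variable selector before an ordinary PCA rotation (illustrative only)."""
    Xw = (X - X.mean(axis=0)) * w          # scale each centered feature by its weight
    C = np.cov(Xw, rowvar=False)           # covariance of the weighted data
    eigvals, eigvecs = np.linalg.eigh(C)   # eigendecomposition (symmetric matrix)
    order = np.argsort(eigvals)[::-1]      # sort directions by decreasing variance
    W = eigvecs[:, order[:n_components]]   # keep the top principal directions
    return Xw @ W                          # weighted rotation of the data
```

A feature given weight zero contributes nothing to the covariance, so the resulting rotation ignores it entirely; this is the sense in which tuning the weights performs variable selection implicitly while the eigendecomposition performs extraction.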

This research was carried out under a grant for the project “Técnicas de Computación de Alto Rendimiento en la Interpretación Automatizada de Imágenes Médicas y Bioseñales”, DIMA, Universidad Nacional de Colombia Sede Manizales.





Editor information

Walter G. Kropatsch, Martin Kampel, Allan Hanbury


Copyright information

© 2007 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Sánchez, L., Martínez, F., Castellanos, G., Salazar, A. (2007). Feature Extraction of Weighted Data for Implicit Variable Selection. In: Kropatsch, W.G., Kampel, M., Hanbury, A. (eds) Computer Analysis of Images and Patterns. CAIP 2007. Lecture Notes in Computer Science, vol 4673. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-74272-2_104


  • DOI: https://doi.org/10.1007/978-3-540-74272-2_104

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-74271-5

  • Online ISBN: 978-3-540-74272-2

  • eBook Packages: Computer Science (R0)
