Feature Extraction Methods and Manifold Learning Methods

Part of the book series: Advanced Information and Knowledge Processing (AI&KP)

In the previous chapters we presented several learning algorithms for classification and regression tasks. In many applied problems the data cannot be fed directly to a learning algorithm; they first need to undergo a preliminary preprocessing. To illustrate this concept, we consider the following example. Suppose we want to build an automatic handwritten character recognizer, that is, a system able to associate with a given bitmap the correct letter or digit. We assume that all data have the same size, namely bitmaps of n × m pixels; for the sake of simplicity we take n = m = 28. The number of possible binary configurations is therefore 2^(28×28) = 2^784. This implies that a learning machine fed directly with character bitmaps will perform poorly, since a representative training set cannot be built. A common approach to overcoming this problem consists in representing each bitmap by a vector of d measures computed on the bitmap (with d ≪ nm), called features, and then feeding the learning machine with this feature vector. The feature vector aims to represent the distinctive characteristics of each letter in a concise way: the better the features capture the distinctive characteristics of each character, the higher the performance of the learning machine. In machine learning, the preprocessing stage that converts the data into feature vectors is called feature extraction. One of the main aims of feature extraction is to obtain the most representative feature vector using as few features as possible. Using more features than strictly necessary leads to several problems. One problem is the space needed to store the data: as the amount of available information increases, compression for storage purposes becomes ever more important. Another is that the speed of learning machines using the data depends on the dimension of the vectors, so reducing the dimension reduces the computational time. The most important problem, however, is the sparsity of the data when the feature dimensionality is high: it is usually hard to build learning machines with good performance when the dimensionality of the input data (that is, the feature dimensionality) is high. This phenomenon, identified by Bellman, is called the curse of dimensionality [7].
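
To make the bitmap example concrete, here is a minimal sketch of a feature extractor (in Python with NumPy; the zoning scheme and the function name are illustrative assumptions, not code from the chapter): each 28 × 28 bitmap is summarized by the mean ink density of its 4 × 4 blocks, so the learning machine sees d = 49 features instead of nm = 784 raw pixels.

```python
import numpy as np

def zoning_features(bitmap: np.ndarray, zone: int = 4) -> np.ndarray:
    """Summarize a character bitmap by the mean ink density of each
    zone x zone block, giving a feature vector with d << n*m entries.
    (Illustrative 'zoning' features, not the chapter's own method.)"""
    n, m = bitmap.shape                      # here n = m = 28
    assert n % zone == 0 and m % zone == 0
    # Group pixels into (n/zone) x (m/zone) blocks and average each block.
    blocks = bitmap.reshape(n // zone, zone, m // zone, zone)
    return blocks.mean(axis=(1, 3)).ravel()  # 7 * 7 = 49 features

# Hypothetical usage on a random binary bitmap standing in for a character.
bitmap = (np.random.rand(28, 28) > 0.8).astype(float)
features = zoning_features(bitmap)
print(bitmap.size, "raw pixels ->", features.size, "features")
```

Any of the learning machines presented in the previous chapters would then be trained on such feature vectors rather than on the raw bitmaps.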

References

1. I. T. Jolliffe. Principal Component Analysis. Springer-Verlag, 1986.
2. F. R. Bach and M. I. Jordan. Kernel independent component analysis. Journal of Machine Learning Research, 3(1):1-48, 2002.
3. P. Baldi and K. Hornik. Neural networks and principal component analysis: learning from examples without local minima. Neural Networks, 2(1):53-58, 1989.
4. A. R. Barron. Universal approximation bounds for superpositions of a sigmoidal function. IEEE Transactions on Information Theory, 39(3):930-945, 1993.
5. M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373-1396, 2003.
6. A. Bell and T. Sejnowski. An information-maximization approach to blind separation and blind deconvolution. Neural Computation, 7(6):1129-1159, 1995.
7. R. Bellman. Adaptive Control Processes: A Guided Tour. Princeton University Press, 1961.
8. C. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, 1995.
9. L. Breiman. Hinging hyperplanes for regression, classification, and function approximation. IEEE Transactions on Information Theory, 39(3):999-1013, 1993.
10. J. Bruske and G. Sommer. Intrinsic dimensionality estimation with optimally topology preserving maps. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(5):572-575, May 1998.
11. F. Camastra. Data dimensionality estimation methods: A survey. Pattern Recognition, 36(12):2945-2954, December 2003.
12. F. Camastra and A. Vinciarelli. Estimating the intrinsic dimension of data with a fractal-based method. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(10):1404-1407, October 2002.
13. J.-F. Cardoso and B. Laheld. Equivariant adaptive source separation. IEEE Transactions on Signal Processing, 44(12):3017-3030, 1996.
14. L. Cayton. Algorithms for manifold learning. Technical report, Computer Science and Engineering Department, University of California, San Diego, 2005.
15. C. L. Chang and R. C. T. Lee. A heuristic relaxation method for nonlinear mapping in cluster analysis. IEEE Transactions on Computers, C-23:178-184, February 1974.
16. P. Comon. Independent component analysis - a new concept? Signal Processing, 36:287-314, 1994.
17. T. H. Cormen, C. E. Leiserson, and R. L. Rivest. Introduction to Algorithms. MIT Press, 1990.
18. J. Costa and A. O. Hero. Geodesic entropic graphs for dimension and entropy estimation in manifold learning. IEEE Transactions on Signal Processing, 52(8):2210-2221, 2004.
19. T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley, 1991.
20. P. Demartines and J. Herault. Curvilinear component analysis: A self-organizing neural network for nonlinear mapping of data sets. IEEE Transactions on Neural Networks, 8(1):148-154, January 1997.
21. R. A. DeVore. Degree of nonlinear approximation. In Approximation Theory, Vol. VI, pages 175-201. Academic Press, 1991.
22. R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification. John Wiley, 2001.
23. J.-P. Eckmann and D. Ruelle. Ergodic theory of chaos and strange attractors. Reviews of Modern Physics, 57(3):617-659, 1985.
24. J.-P. Eckmann and D. Ruelle. Fundamental limitations for estimating dimensions and Lyapunov exponents in dynamical systems. Physica, D56:185-187, 1992.
25. B. Efron and R. J. Tibshirani. An Introduction to the Bootstrap. Chapman & Hall, 1993.
26. R. A. Fisher. The use of multiple measurements in taxonomic problems. Annals of Eugenics, 7(2):179-188, 1936.
27. D. Fotheringhame and R. J. Baddeley. Nonlinear principal component analysis of neuronal spike train data. Biological Cybernetics, 77(4):282-288, 1997.
28. J. H. Friedman. Exploratory projection pursuit. Journal of the American Statistical Association, 82(397):249-260, 1987.
29. J. H. Friedman and J. W. Tukey. A projection pursuit algorithm for exploratory data analysis. IEEE Transactions on Computers, C-23(9):881-890, 1974.
30. K. Fukunaga. Intrinsic dimensionality extraction. In Classification, Pattern Recognition and Reduction of Dimensionality, Vol. 2 of Handbook of Statistics, pages 347-362. North Holland, 1982.
31. K. Fukunaga. An Introduction to Statistical Pattern Recognition. Academic Press, 1990.
32. K. Fukunaga and D. R. Olsen. An algorithm for finding intrinsic dimensionality of data. IEEE Transactions on Computers, 20(2):165-171, 1976.
33. F. Girosi. Regularization theory, radial basis functions and networks. In From Statistics to Neural Networks, pages 166-187. Springer-Verlag, 1994.
34. F. Girosi and G. Anzellotti. Rates of convergence of approximation by translates. Technical report, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 1993.
35. P. Grassberger and I. Procaccia. Measuring the strangeness of strange attractors. Physica, D9(1-2):189-208, 1983.
36. F. Hausdorff. Dimension und äusseres Mass. Mathematische Annalen, 79(1-2):157-179, 1918.
37. A. Heyting and H. Freudenthal. Collected Works of L. E. J. Brouwer. North-Holland Elsevier, 1975.
38. P. Huber. Projection pursuit. The Annals of Statistics, 13(2):435-475, 1985.
39. U. Hübner, C. O. Weiss, N. B. Abraham, and D. Tang. Lorenz-like chaos in NH3-FIR lasers. In Time Series Prediction: Forecasting the Future and Understanding the Past, pages 73-104. Addison Wesley, 1994.
40. A. Hyvärinen. New approximations of differential entropy for independent component analysis and projection pursuit. In Advances in Neural Information Processing Systems 10, pages 273-279. MIT Press, 1998.
41. A. Hyvärinen. The fixed-point algorithm and maximum likelihood for independent component analysis. Neural Processing Letters, 10(1):1-5, 1999.
42. A. Hyvärinen and E. Oja. A fast fixed-point algorithm for independent component analysis. Neural Computation, 9(7):1483-1492, 1997.
43. A. Hyvärinen and E. Oja. Independent component analysis: Algorithms and applications. Neural Networks, 13(4-5):411-430, 2000.
44. A. K. Jain and R. C. Dubes. Algorithms for Clustering Data. Prentice-Hall, 1988.
45. L. K. Jones. A simple lemma on greedy approximation in Hilbert space and convergence rates for projection pursuit regression and neural network training. The Annals of Statistics, 20(1):608-613, March 1992.
46. C. Jutten and J. Herault. Blind separation of sources, Part I: An adaptive algorithm based on neuromimetic architecture. Signal Processing, 24(1):1-10, 1991.
47. D. Kaplan and L. Glass. Understanding Nonlinear Dynamics. Springer-Verlag, 1995.
48. J. Karhunen and J. Joutsensalo. Representations and separation of signals using nonlinear PCA type learning. Neural Networks, 7(1):113-127, 1994.
49. J. Karhunen, E. Oja, L. Wang, R. Vigario, and J. Joutsensalo. A class of neural networks for independent component analysis. IEEE Transactions on Neural Networks, 8(3):486-504, 1997.
50. B. Kégl. Intrinsic dimension estimation using packing numbers. In Advances in Neural Information Processing Systems 15, pages 681-688. MIT Press, 2003.
51. M. Kirby. Geometric Data Analysis: An Empirical Approach to Dimensionality Reduction and the Study of Patterns. John Wiley, 2001.
52. T. Kohonen. Self-Organizing Maps. Springer-Verlag, 1995.
53. G. A. Korn and T. M. Korn. Mathematical Handbook for Scientists and Engineers. Dover, 1961.
54. J. B. Kruskal. Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika, 29(1):1-27, 1964.
55. J. B. Kruskal. Comments on a nonlinear mapping for data structure analysis. IEEE Transactions on Computers, C-20:1614, December 1971.
56. J. B. Kruskal. Linear transformation of multivariate data to reveal clustering. In Multidimensional Scaling, Vol. 1, pages 101-115. Academic Press, 1972.
57. J. B. Kruskal and J. D. Carroll. Geometrical models and badness-of-fit functions. In Multivariate Analysis, Vol. 2, pages 639-671. Academic Press, 1969.
58. E. Levina and P. Bickel. Maximum likelihood estimation of intrinsic dimension. In Advances in Neural Information Processing Systems 17, pages 777-784. MIT Press, 2005.
59. Y. Linde, A. Buzo, and R. Gray. An algorithm for vector quantizer design. IEEE Transactions on Communications, 28(1):84-95, 1980.
60. G. G. Lorentz. Approximation of Functions. Chelsea Publishing, 1986.
61. E. C. Malthouse. Limitations of nonlinear PCA as performed with generic neural networks. IEEE Transactions on Neural Networks, 9(1):165-173, 1998.
62. B. Mandelbrot. Fractals: Form, Chance and Dimension. Freeman, 1977.
63. T. Martinetz and K. Schulten. Topology representing networks. Neural Networks, 7(3):507-522, 1994.
64. B. Mohar. Laplace eigenvalues of graphs: a survey. Discrete Mathematics, 109(1-3):171-183, 1992.
65. J.-P. Nadal and N. Parga. Nonlinear neurons in the low noise limit: a factorial code maximizes information transfer. Network, 5(4):565-581, 1994.
66. E. Ott. Chaos in Dynamical Systems. Cambridge University Press, 1993.
67. B. A. Pearlmutter and L. C. Parra. Maximum likelihood blind source separation: A context-sensitive generalization of ICA. In Advances in Neural Information Processing Systems 9, pages 613-619. MIT Press, 1997.
68. K. Pettis, T. Bailey, A. K. Jain, and R. Dubes. An intrinsic dimensionality estimator from near-neighbor information. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1(1):25-37, 1979.
69. D.-T. Pham, P. Garrat, and C. Jutten. Separation of a mixture of independent sources through a maximum likelihood approach. In Proceedings of EUSIPCO 92, pages 771-774, 1992.
70. W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling. Numerical Recipes: The Art of Scientific Computing. Cambridge University Press, 1989.
71. A. K. Romney, R. N. Shepard, and S. B. Nerlove. Multidimensional Scaling, Vol. 2, Applications. Seminar Press, 1972.
72. A. K. Romney, R. N. Shepard, and S. B. Nerlove. Multidimensional Scaling, Vol. 1, Theory. Seminar Press, 1972.
73. O. Samko, A. D. Marshall, and P. L. Rosin. Selection of the optimal parameter value for the Isomap algorithm. Pattern Recognition Letters, 27(9):968-979, 2006.
74. J. W. Sammon Jr. A nonlinear mapping for data structure analysis. IEEE Transactions on Computers, C-18(5):401-409, May 1969.
75. L. K. Saul and S. Roweis. Think globally, fit locally: unsupervised learning of low dimensional manifolds. Journal of Machine Learning Research, 4:119-155, June 2003.
76. R. N. Shepard. The analysis of proximities: Multidimensional scaling with an unknown distance function. Psychometrika, 27(3):219-246, June 1962.
77. R. N. Shepard. Representation of structure in similarity data: Problems and prospects. Psychometrika, 39(4):373-421, December 1974.
78. R. N. Shepard and J. D. Carroll. Parametric representation of nonlinear data structures. In Multivariate Analysis, pages 561-592. Academic Press, 1969.
79. L. A. Smith. Intrinsic limits on dimension calculations. Physics Letters, A133(6):283-288, 1988.
80. R. L. Smith. Optimal estimation of fractal dimension. In Nonlinear Modeling and Forecasting, SFI Studies in the Sciences of Complexity, Vol. XII, pages 115-135. Addison Wesley, 1992.
81. F. Takens. On the numerical determination of the dimension of an attractor. In Dynamical Systems and Bifurcations, Proceedings Groningen 1984, pages 99-106. Springer-Verlag, 1984.
82. J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319-2323, December 2000.
83. J. Theiler. Lacunarity in a best estimator of fractal dimension. Physics Letters, A133(4-5):195-200, 1988.
84. J. Theiler. Statistical precision of dimension estimators. Physical Review, A41:3038-3051, 1990.
85. J. Theiler, S. Eubank, A. Longtin, B. Galdrikian, and J. D. Farmer. Testing for nonlinearity in time series: the method of surrogate data. Physica, D58(1-4):77-94, 1992.
86. G. V. Trunk. Statistical estimation of the intrinsic dimensionality of a noisy signal collection. IEEE Transactions on Computers, 25(2):165-171, 1976.
87. P. J. Verveer and R. Duin. An evaluation of intrinsic dimensionality estimators. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(1):81-86, January 1995.
88. W. H. Wolberg and O. Mangasarian. Multisurface method of pattern separation for medical diagnosis applied to breast cytology. Proceedings of the National Academy of Sciences, U.S.A., 87:9193-9196, 1990.

Copyright information

© 2008 Springer

About this chapter

Cite this chapter

(2008). Feature Extraction Methods and Manifold Learning Methods. In: Machine Learning for Audio, Image and Video Analysis. Advanced Information and Knowledge Processing. Springer, London. https://doi.org/10.1007/978-1-84800-007-0_11

  • DOI: https://doi.org/10.1007/978-1-84800-007-0_11

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-84800-006-3

  • Online ISBN: 978-1-84800-007-0

  • eBook Packages: Computer Science, Computer Science (R0)
