
Nonlinear Principal Component Analysis: Neural Network Models and Applications

Conference paper in: Principal Manifolds for Data Visualization and Dimension Reduction

Nonlinear principal component analysis (NLPCA), a nonlinear generalisation of standard principal component analysis (PCA), extends the principal components from straight lines to curves. This chapter provides an extensive description of the autoassociative neural network approach to NLPCA. Several network architectures are discussed, including the hierarchical, the circular, and the inverse model, with special emphasis on missing data. Results are shown from applications in the field of molecular biology. These include metabolite data analysis of a cold stress experiment in the model plant Arabidopsis thaliana and gene expression analysis of the reproductive cycle of the malaria parasite Plasmodium falciparum within infected red blood cells.
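The core of the autoassociative approach is a bottleneck network that maps the data onto a low-dimensional nonlinear component and back to a reconstruction of the inputs. The following is a minimal sketch of that idea, not the authors' implementation: it uses a toy two-dimensional data set, plain batch gradient descent, and a single bottleneck unit, and it omits the hierarchical, circular, and inverse (missing-data) variants discussed in the chapter.

```python
# Minimal autoassociative (bottleneck) network sketch for NLPCA.
# Assumptions: toy data, tanh hidden layers, plain gradient descent on squared error.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: points scattered around a one-dimensional curve embedded in 2-D.
t = rng.uniform(-1.0, 1.0, size=(200, 1))
X = np.hstack([t, t ** 2]) + 0.05 * rng.standard_normal((200, 2))
X = X - X.mean(axis=0)                       # centre the data, as for linear PCA

d, h, k = X.shape[1], 8, 1                   # input dim, hidden units, bottleneck size

# Weights of the d -> h -> k -> h -> d autoassociative network.
W1 = 0.1 * rng.standard_normal((d, h)); b1 = np.zeros(h)
W2 = 0.1 * rng.standard_normal((h, k)); b2 = np.zeros(k)
W3 = 0.1 * rng.standard_normal((k, h)); b3 = np.zeros(h)
W4 = 0.1 * rng.standard_normal((h, d)); b4 = np.zeros(d)

def forward(X):
    """Map data through the extraction and generation halves of the network."""
    H1 = np.tanh(X @ W1 + b1)    # nonlinear layer of the extraction function
    Z  = H1 @ W2 + b2            # bottleneck: the nonlinear component value(s)
    H2 = np.tanh(Z @ W3 + b3)    # nonlinear layer of the generation function
    Xh = H2 @ W4 + b4            # reconstruction of the inputs
    return H1, Z, H2, Xh

lr = 0.01
for epoch in range(5000):
    H1, Z, H2, Xh = forward(X)
    err = Xh - X                 # reconstruction error
    # Backpropagate the squared-error loss through the four layers.
    g4  = err / len(X)
    gH2 = g4 @ W4.T * (1 - H2 ** 2)
    gZ  = gH2 @ W3.T
    gH1 = gZ @ W2.T * (1 - H1 ** 2)
    W4 -= lr * H2.T @ g4;  b4 -= lr * g4.sum(axis=0)
    W3 -= lr * Z.T @ gH2;  b3 -= lr * gH2.sum(axis=0)
    W2 -= lr * H1.T @ gZ;  b2 -= lr * gZ.sum(axis=0)
    W1 -= lr * X.T @ gH1;  b1 -= lr * gH1.sum(axis=0)

_, component, _, reconstruction = forward(X)
print("mean squared reconstruction error:", np.mean((reconstruction - X) ** 2))
```

After training, the bottleneck output `component` plays the role of the nonlinear principal component: projecting new samples through the extraction half gives their component values, and the generation half maps component values back onto the learned curve in data space. The chapter's variants modify this template, for example by constraining the bottleneck to a circular unit for periodic data or by optimising only the generation half in the inverse, missing-data setting.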




Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Scholz, M., Fraunholz, M., Selbig, J. (2008). Nonlinear Principal Component Analysis: Neural Network Models and Applications. In: Gorban, A.N., Kégl, B., Wunsch, D.C., Zinovyev, A.Y. (eds) Principal Manifolds for Data Visualization and Dimension Reduction. Lecture Notes in Computational Science and Engineering, vol 58. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-73750-6_2
