
Nonlinear Feature Extraction: Deterministic Neural Networks

Chapter

An Information-Theoretic Approach to Neural Computing

Part of the book series: Perspectives in Neural Computing


Abstract

Independent feature extraction, i.e., independent component analysis, was formulated in Chapter 4 as a search for an invertible, volume-preserving map which statistically decorrelates the output components for an arbitrary, possibly non-Gaussian input distribution. The same chapter discussed in detail the case where the input-output maps are linear, i.e., matrices. A first extension to nonlinear transformations was carried out in Chapter 5, where stochastic neural networks were used to perform statistical decorrelation of Boolean outputs. This chapter extends nonlinear independent feature extraction further by introducing a very general class of deterministic nonlinear input-output maps whose architecture guarantees bijectivity and volume preservation. The criteria for evaluating statistical dependence are those defined in Chapter 4: the cumulant expansion method and the minimization of the mutual information among the output components. Atick and Redlich [6.1], and especially the two papers of Redlich [6.2]–[6.3], use similar information-theoretic concepts and reversible cellular automata architectures to define how nonlinear decorrelation can be performed. Taylor and Coombes [6.4] presented an extension of Oja's learning rule for polynomial, i.e., higher-order, neural networks.
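
The volume-conserving construction described above can be made concrete with a strictly triangular additive map, y_i = x_i + f_i(x_1, ..., x_{i-1}): its Jacobian is lower triangular with unit diagonal, so its determinant is identically 1 (volume preservation), and it can be inverted component by component (bijectivity). The short Python sketch below illustrates the idea in the spirit of the volume-conserving architectures of Deco and Brauer [6.5]; it is not the book's own code, and the coupling weights W, the tanh nonlinearity, and the function names forward and inverse are assumptions made for this example.

import numpy as np

# Strictly lower-triangular coupling: y_i depends on x_j only for j < i,
# so the Jacobian dy/dx has unit diagonal and det(dy/dx) = 1.
rng = np.random.default_rng(0)
D = 3                                   # number of signal components
W = rng.normal(scale=0.5, size=(D, D))  # illustrative coupling weights
M = np.tril(np.ones((D, D)), k=-1)      # strictly lower-triangular mask

def forward(x):
    # y_i = x_i + tanh(sum_{j<i} W_ij x_j): volume preserving by construction
    return x + np.tanh((M * W) @ x)

def inverse(y):
    # Invert by forward substitution: x_1 = y_1, then x_2 from x_1, and so on.
    x = np.zeros_like(y)
    for i in range(len(y)):
        x[i] = y[i] - np.tanh((M[i] * W[i]) @ x)
    return x

x = rng.normal(size=D)
y = forward(x)
assert np.allclose(inverse(y), x)       # exact invertibility, up to round-off

Because the Jacobian determinant is identically one, the output entropy equals the input entropy, so minimizing the mutual information among the output components (via the cumulant expansion or directly, as defined in Chapter 4) cannot be trivially achieved by shrinking the volume of the output distribution.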



References

1. J. Atick and A. Redlich: What Does the Retina Know about Natural Scenes? Neural Computation, 4, 196–210, 1992.

2. A. Redlich: Redundancy Reduction as a Strategy for Unsupervised Learning. Neural Computation, 5, 289–304, 1993.

3. A. Redlich: Supervised Factorial Learning. Neural Computation, 5, 750–766, 1993.

4. J.G. Taylor and S. Coombes: Learning Higher Order Correlations. Neural Networks, 6, 423–427, 1993.

5. G. Deco and W. Brauer: Nonlinear Higher-Order Statistical Decorrelation by Volume-Conserving Neural Architectures. Neural Networks, 8, 525–535, 1995.

6. G. Deco and B. Schürmann: Learning Time Series Evolution by Unsupervised Extraction of Correlations. Physical Review E, 51, 1780–1790, 1995.

7. G. Deco, L. Parra and S. Miesbach: Redundancy Reduction with Information-Preserving Nonlinear Maps. Network: Computation in Neural Systems, 6, 61–72, 1995.

8. J. Rubner and P. Tavan: A Self-Organizing Network for Principal-Component Analysis. Europhysics Letters, 10, 693–698, 1989.

9. R. Durbin and D. Rumelhart: Product Units: A Computationally Powerful and Biologically Plausible Extension to Backpropagation Networks. Neural Computation, 1, 133–142, 1989.

10. C. Beck and F. Schlögl: Thermodynamics of Chaotic Systems. Cambridge Nonlinear Science Series, Cambridge University Press, Cambridge, 1993.

11. P. Grassberger and I. Procaccia: Characterization of Strange Attractors. Phys. Rev. Lett., 50, 346, 1983.

12. J.P. Eckmann and D. Ruelle: Ergodic Theory of Chaos and Strange Attractors. Rev. Mod. Phys., 57, 617–656, 1985.

13. J.P. Crutchfield and B.S. McNamara: Equations of Motion from a Data Series. Complex Systems, 1, 417–452, 1987.

14. H.D.I. Abarbanel, R. Brown and J.B. Kadtke: Prediction and System Identification in Chaotic Time Series with Broadband Fourier Spectra. Phys. Lett. A, 138, 401–408, 1989.

15. H.D.I. Abarbanel, R. Brown and J.B. Kadtke: Prediction in Chaotic Nonlinear Systems: Methods for Time Series with Broadband Fourier Spectra. Phys. Rev. A, 41, 1782–1807, 1990.

16. J. Farmer and J. Sidorowich: Predicting Chaotic Time Series. Phys. Rev. Lett., 59, 845, 1987.

17. M. Casdagli: Nonlinear Prediction of Chaotic Time Series. Physica D, 35, 335–356, 1989.

18. A. Lapedes and R. Farber: Nonlinear Signal Processing Using Neural Networks: Prediction and System Modeling. Tech. Rep. No. LA-UR-87-2662, Los Alamos National Laboratory, Los Alamos, NM, 1987.

19. A. Weigend, D. Rumelhart and B. Huberman: Back-Propagation, Weight Elimination and Time Series Prediction. In: Connectionist Models, Proc. 1990, Touretzky, Elman, Sejnowski and Hinton, eds., 105–116, 1990.

20. G. Deco and B. Schürmann: Recurrent Neural Networks Capture the Dynamical Invariance of Chaotic Time Series. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, 77-A (11), 1840–1845, 1994.

21. W. Liebert and H.G. Schuster: Proper Choice of the Time Delay for the Analysis of Chaotic Time Series. Phys. Lett. A, 142, 107–111, 1989.

22. W. Liebert, K. Pawelzik and H.G. Schuster: Optimal Embedding of Chaotic Attractors from Topological Considerations. Europhysics Lett., 14, 521–526, 1991.

23. K. Pawelzik and H.G. Schuster: Unstable Periodic Orbits and Prediction. Phys. Rev. A, 43, 1808–1812, 1991.

24. F. Takens: Detecting Strange Attractors in Turbulence. In: Dynamical Systems and Turbulence, Warwick 1980, D.A. Rand and L.S. Young, eds., Lecture Notes in Mathematics, 898, Springer-Verlag, New York, 366–381, 1981.

25. T. Sauer, J. Yorke and M. Casdagli: Embedology. J. Stat. Phys., 65, 579–616, 1991.

26. M. Hénon: A Two-Dimensional Mapping with a Strange Attractor. Comm. Math. Phys., 50, 69, 1976.

27. M. Mackey and L. Glass: Oscillation and Chaos in Physiological Control Systems. Science, 197, 287, 1977.

28. A.M. Fraser and H.L. Swinney: Independent Coordinates for Strange Attractors from Mutual Information. Phys. Rev. A, 33, 1134, 1986.

29. R. Abraham and J. Marsden: Foundations of Mechanics. The Benjamin-Cummings Publishing Company, Inc., London, 1978.

30. C.L. Siegel: Symplectic Geometry. Amer. Jour. Math., 65, 1–86, 1943.

31. Feng Kang and Qin Meng-zhao: The Symplectic Methods for the Computation of Hamiltonian Equations. In: Zhu You-lan and Guo Ben-yu, eds., Numerical Methods for Partial Differential Equations. Lecture Notes in Mathematics, 1297, 1–35, Springer-Verlag, New York, 1987.

32. S. Miesbach and H.J. Pesch: Symplectic Phase Flow Approximation for the Numerical Integration of Canonical Systems. Numer. Math., 61, 501–521, 1992.

33. K. Hornik, M. Stinchcombe and H. White: Multilayer Feedforward Networks are Universal Approximators. Neural Networks, 2, 359–366, 1989.

34. J. Stoer and R. Bulirsch: Introduction to Numerical Analysis. Springer-Verlag, New York, 1993.

35. A. Papoulis: Probability, Random Variables, and Stochastic Processes. Third Edition, McGraw-Hill, New York, 1991.

36. J. Atick and A. Redlich: Towards a Theory of Early Visual Processing. Neural Computation, 2, 308–320, 1990.

37. J. Atick: Could Information Theory Provide an Ecological Theory of Sensory Processing? Network, 3, 213–251, 1992.

38. D.J. Field: Relations Between the Statistics of Natural Images and the Response Properties of Cortical Cells. J. Opt. Soc. Am. A, 4, 2379–2394, 1987.

39. R. De Valois, H. Morgan and D. Snodderly: Psychophysical Studies of Monkey Vision: III. Spatial Luminance Contrast Sensitivity Test of Macaque Retina and Human Observers. Invest. Ophthalmol. Vis. Sci., 14, 75–81, 1974.



Copyright information

© 1996 Springer-Verlag New York, Inc.

About this chapter

Cite this chapter

Deco, G., Obradovic, D. (1996). Nonlinear Feature Extraction: Deterministic Neural Networks. In: An Information-Theoretic Approach to Neural Computing. Perspectives in Neural Computing. Springer, New York, NY. https://doi.org/10.1007/978-1-4612-4016-7_6


  • Publisher Name: Springer, New York, NY

  • Print ISBN: 978-1-4612-8469-7

  • Online ISBN: 978-1-4612-4016-7

