
Dataflow Learning in Coupled Lattices: An Application to Artificial Neural Networks

  • Chapter
Handbook of Global Optimization

Part of the book series: Nonconvex Optimization and Its Applications (NOIA, volume 62)


Abstract

This chapter describes artificial neural networks (ANNs) as coupled lattices of dynamic nonlinear processing elements and studies ways to adapt their parameters. This view extends the conventional paradigm of static neural networks and sheds light on the principles of parameter optimization in both the static and the dynamic case. We show how gradient descent learning can be implemented naturally with local rules in coupled lattices. We then survey the state of the art in neural network training and present recent results that exploit the distributed nature of coupled lattices for optimization.
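
To make the locality claim concrete, the following minimal sketch (Python/NumPy; all names, sizes, and learning rates are illustrative assumptions, not the chapter's implementation) realizes gradient descent as two dataflow sweeps over a small feedforward lattice: activations flow forward along the links, errors flow backward along the same links, and each processing element updates its weights from purely local quantities.

import numpy as np

# A minimal sketch of gradient descent with local rules, assuming a
# feedforward lattice of tanh processing elements. All details here are
# illustrative assumptions, not the chapter's implementation.

rng = np.random.default_rng(0)

class Element:
    """One processing element of the lattice. Its forward rule reads only
    the activations arriving on its input links; its backward rule reads
    only the error arriving on its output link. No global state is used."""

    def __init__(self, n_in):
        self.w = rng.normal(scale=0.5, size=n_in)
        self.b = 0.0

    def forward(self, x):
        self.x = np.asarray(x, dtype=float)      # cache the local input
        self.y = np.tanh(self.w @ self.x + self.b)
        return self.y

    def backward(self, err, lr):
        delta = err * (1.0 - self.y ** 2)        # tanh'(net) = 1 - y^2
        upstream = self.w * delta                # error sent back per link
        self.w -= lr * delta * self.x            # update from local data only
        self.b -= lr * delta
        return upstream

# A 2-2-1 lattice trained on XOR: data flows forward, error flows backward.
hidden = [Element(2), Element(2)]
output = Element(2)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([-1, 1, 1, -1], dtype=float)        # targets in tanh's range

for epoch in range(5000):                        # more epochs (or another
    for x, t in zip(X, T):                       # seed) may be needed
        h = np.array([u.forward(x) for u in hidden])
        y = output.forward(h)
        err_h = output.backward(y - t, lr=0.05)  # dE/dy for E = (y-t)^2 / 2
        for u, e in zip(hidden, err_h):
            u.backward(e, lr=0.05)

for x in X:
    h = np.array([u.forward(x) for u in hidden])
    print(x, "->", round(float(output.forward(h)), 2))

The point of the sketch is the locality: each element updates its weights from quantities that arrive on its own links, which is what makes gradient descent expressible as a dataflow computation over a distributed lattice.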




Copyright information

© 2002 Springer Science+Business Media Dordrecht

About this chapter

Cite this chapter

Principe, J.C., Lefebvre, C., Fancourt, C.L. (2002). Dataflow Learning in Coupled Lattices: An Application to Artificial Neural Networks. In: Pardalos, P.M., Romeijn, H.E. (eds) Handbook of Global Optimization. Nonconvex Optimization and Its Applications, vol 62. Springer, Boston, MA. https://doi.org/10.1007/978-1-4757-5362-2_10


  • DOI: https://doi.org/10.1007/978-1-4757-5362-2_10

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-1-4419-5221-9

  • Online ISBN: 978-1-4757-5362-2

  • eBook Packages: Springer Book Archive
