Abstract
This chapter provides an overview of some adaptive control methods and how artificial neural networks are being used as components of adaptive control systems. It suggests, however, that the adaptive control methods developed by control engineers can be misleading guides to thinking about control in biological systems. Furthermore, it suggests that neural networks, whether artificial or real, might be most effective when used as components of architectures that are not conservative extensions of conventional adaptive control architectures. After a brief discussion of control, several approaches to adaptive control as developed by control engineers are described, followed by presentation of a view of artificial neural networks and their potential roles in control systems. Two examples are described in which artificial neural networks have been applied successfully to difficult control problems, and a model of the cerebellum is discussed in light of conceptual schemes based on engineering control practice.
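One family of engineering methods the chapter surveys, indirect (self-tuning) adaptive control under the certainty-equivalence principle, can be sketched in a few lines. The scalar plant y[t+1] = a·y[t] + b·u[t], its parameter values, and the adaptation gain below are illustrative assumptions, not details taken from the chapter.

```python
# Minimal sketch of indirect (self-tuning) adaptive control:
# the controller maintains an adjustable model of the plant,
# updates it from prediction errors, and acts as if the current
# estimates were correct (certainty equivalence).

a_true, b_true = 0.9, 0.5   # plant parameters, unknown to the controller
a_hat, b_hat = 0.0, 1.0     # controller's adjustable internal model
gain = 0.5                  # normalized-LMS adaptation step size

y, reference = 0.0, 1.0
for t in range(200):
    # Certainty equivalence: choose u so the *model's* one-step
    # prediction lands exactly on the reference.
    u = (reference - a_hat * y) / b_hat
    y_next = a_true * y + b_true * u          # true plant response
    err = y_next - (a_hat * y + b_hat * u)    # one-step prediction error
    # Normalized least-mean-squares update of the model parameters.
    norm = y * y + u * u + 1e-8
    a_hat += gain * err * y / norm
    b_hat += gain * err * u / norm
    y = y_next
```

Note that the estimates need not converge to the true plant parameters: once tracking succeeds, the control signal stops exciting the plant (the classic lack of persistent excitation), yet the tracking error still vanishes because only the prediction error, not parameter identification, drives control quality.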
Copyright information
© 1993 Springer-Verlag Berlin Heidelberg
Cite this paper
Barto, A.G., Gullapalli, V. (1993). Neural Networks and Adaptive Control. In: Rudomin, P., Arbib, M.A., Cervantes-Pérez, F., Romo, R. (eds) Neuroscience: From Neural Networks to Artificial Intelligence. Research Notes in Neural Computing, vol 4. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-78102-5_28
Print ISBN: 978-3-540-56501-7
Online ISBN: 978-3-642-78102-5