Conclusion
I think the examples I have given raise many open questions concerning nodal function, learning to learn, and modularisation. It is clear that much work lies ahead.