Learning Structurally Analogous Tasks
A method for training overlapping feedforward networks on analogous tasks is extended and analyzed. Under simultaneous (interlaced) training on similar tasks, the learning dynamics of the two networks interact at their shared connections. The influence of one task on the other can be studied by examining the output of one network in response to a stimulus presented to the other. Using backpropagation to train networks with shared hidden layers, a "crosstraining" mechanism for specifying corresponding components between structurally similar environments is introduced. Analysis of the resulting mappings reveals the potential for analogical inference.
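The setup described above can be sketched concretely. In the minimal version below (a hypothetical illustration, not the paper's actual architecture or data), two toy tasks share a single input-to-hidden weight matrix while keeping task-specific output heads; interlaced backpropagation alternates single-pattern updates between the tasks, so both sets of gradients flow into the shared weights. The crosstalk probe at the end feeds a task-A stimulus and reads the task-B head, as in the paper's analysis. The tasks, layer sizes, and learning rate are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Two structurally analogous toy tasks (hypothetical data):
# task A maps each 2-bit pattern to itself, task B to its complement.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y_a = X.copy()      # task A: identity
Y_b = 1.0 - X       # task B: complement (same structure, roles swapped)

n_in, n_hid, n_out = 2, 4, 2
W_shared = rng.normal(0.0, 0.5, (n_in, n_hid))    # shared input-to-hidden weights
V = {"A": rng.normal(0.0, 0.5, (n_hid, n_out)),   # task-specific output heads
     "B": rng.normal(0.0, 0.5, (n_hid, n_out))}
lr = 1.0

def forward(x, head):
    """Shared hidden layer, then the requested task's output head."""
    h = sigmoid(x @ W_shared)
    return h, sigmoid(h @ V[head])

def train_step(x, y, head):
    """One backprop step; gradients reach both the head and the shared weights."""
    global W_shared
    h, out = forward(x, head)
    d_out = (out - y) * out * (1 - out)        # squared-error output delta
    d_h = (d_out @ V[head].T) * h * (1 - h)    # delta backpropagated to hidden layer
    V[head] -= lr * np.outer(h, d_out)
    W_shared -= lr * np.outer(x, d_h)

# Interlaced training: alternate single-pattern updates between the two tasks,
# so their learning dynamics interact at W_shared.
for epoch in range(2000):
    for x, ya, yb in zip(X, Y_a, Y_b):
        train_step(x, ya, "A")
        train_step(x, yb, "B")

# Crosstalk probe: stimulate with a task-A pattern, read the task-B head.
_, cross = forward(X[1], "B")
print(np.round(cross, 2))
```

Because the tasks only meet at `W_shared`, the cross-network response reflects whatever common structure the shared hidden representation has absorbed from both tasks, which is the quantity the abstract proposes to examine.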