Minds and Machines

Volume 3, Issue 2, pp. 125–153

Currents in connectionism

  • William Bechtel
Recent Work

Abstract

This paper reviews four significant advances on the feedforward architecture that has dominated discussions of connectionism. The first introduces modularity into networks through procedures whereby different networks learn to perform different components of a task and a gating network determines which network is best equipped to respond to a given input. The second is the use of recurrent inputs, whereby information from a previous cycle of processing is made available on later cycles. The third involves compressed representations of strings in which the components are no longer explicitly encoded but information about the structure of the original string can still be recovered, and so is present functionally. The final advance uses connectionist learning procedures not just to change weights in networks but to change the patterns used as inputs to the network. These advances significantly increase the usefulness of connectionist networks for modeling human cognitive performance by, among other things, providing tools for explaining the productivity and systematicity of some mental activities and for developing representations that are sensitive to the content they are to represent.
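The first advance described above, a gating network arbitrating among expert networks, can be sketched as a simple forward pass. The toy example below (illustrative weights and layer sizes, not any published model) shows a gating network assigning softmax responsibilities to two linear expert networks and blending their outputs, in the spirit of Jacobs et al. (1991b):

```python
import numpy as np

# Toy mixture-of-experts forward pass. Each expert proposes an output for
# the input; the gating network assigns each expert a responsibility
# (a softmax weight) and the final output is the weighted blend.
# All sizes and weights here are hypothetical, for illustration only.

rng = np.random.default_rng(0)
n_in, n_out, n_experts = 4, 3, 2

# One linear layer per expert network.
expert_weights = [rng.standard_normal((n_out, n_in)) for _ in range(n_experts)]
# Gating network: maps the input to one score per expert.
gate_weights = rng.standard_normal((n_experts, n_in))

def softmax(z):
    e = np.exp(z - z.max())      # subtract max for numerical stability
    return e / e.sum()

def mixture_forward(x):
    gate = softmax(gate_weights @ x)           # responsibilities, sum to 1
    outputs = [W @ x for W in expert_weights]  # each expert's proposal
    blended = sum(g * o for g, o in zip(gate, outputs))
    return blended, gate

x = rng.standard_normal(n_in)
y, gate = mixture_forward(x)
print(gate)   # the gating coefficients are non-negative and sum to 1
```

In training (not shown), the gating coefficients also scale each expert's error signal, so experts come to specialize on the inputs for which the gate holds them responsible, which is how task decomposition emerges.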

Key words

Connectionism, neural networks, expert networks, recurrent networks, RAAM networks


References

  1. Ackley, D.H., Hinton, G.E., and Sejnowski, T.J. (1985), ‘A Learning Algorithm for Boltzmann Machines’, Cognitive Science, 9, 147–69.
  2. Anderson, J.A. and Rosenfeld, E. (1988), Neurocomputing: Foundations of Research, Cambridge, MA: MIT Press.
  3. Anderson, J.A., Pellionisz, A., and Rosenfeld, E. (1991), Neurocomputing 2: Foundations of Research, Cambridge, MA: MIT Press.
  4. Bechtel, W. and Abrahamsen, A. (1991), Connectionism and the Mind: An Introduction to Parallel Processing in Networks, Oxford: Basil Blackwell.
  5. Bechtel, W. and Richardson, R.C. (1992), Discovering Complexity: Decomposition and Localization as Strategies in Scientific Research, Princeton: Princeton University Press.
  6. Blank, D.S., Meeden, L., and Marshall, J.B. (1992), ‘Exploring the Symbolic/Subsymbolic Continuum: A Case Study of RAAM’, in J. Dinsmore, ed., Closing the Gap: Symbolism vs. Connectionism, Hillsdale, NJ: Lawrence Erlbaum Associates.
  7. Chalmers, D.J. (1990), ‘Mapping Part-whole Hierarchies into Connectionist Networks’, Artificial Intelligence, 46, 47–75.
  8. Dolan, C.P. (1989), Tensor Manipulation Networks: Connectionist and Symbolic Approaches to Comprehension, Learning, and Planning, AI Lab Report, University of California, Los Angeles.
  9. Elman, J.L. (1990), ‘Finding Structure in Time’, Cognitive Science, 14, 179–212.
  10. Fodor, J.A. and Pylyshyn, Z.W. (1988), ‘Connectionism and Cognitive Architecture: A Critical Analysis’, Cognition, 28, 3–71.
  11. Hetherington, P.A. and Seidenberg, M.S. (1989), ‘Is there “Catastrophic” Interference in Connectionist Networks?’, Proceedings of the Eleventh Annual Conference of the Cognitive Science Society, pp. 26–33, Hillsdale, NJ: Lawrence Erlbaum.
  12. Hinton, G.E. (1989), ‘Connectionist Learning Procedures’, Artificial Intelligence, 40, 185–234.
  13. Holyoak, K. and Thagard, P. (1989), ‘Analogical Mapping by Constraint Satisfaction’, Cognitive Science, 13, 295–355.
  14. Hopfield, J.J. (1982), ‘Neural Networks and Physical Systems with Emergent Collective Computational Abilities’, Proceedings of the National Academy of Sciences, 79, 2554–2558.
  15. Jacobs, R.A., Jordan, M.I., and Barto, A.G. (1991a), ‘Task Decomposition Through Competition in a Modular Connectionist Architecture: The What and Where Vision Tasks’, Cognitive Science, 15, 219–250.
  16. Jacobs, R.A., Jordan, M.I., Nowlan, S.J., and Hinton, G.E. (1991b), ‘Adaptive Mixtures of Local Experts’, Neural Computation, 3, 79–87.
  17. Jordan, M. (1986), ‘Attractor Dynamics and Parallelism in a Connectionist Sequential Machine’, Proceedings of the Eighth Annual Conference of the Cognitive Science Society, Hillsdale, NJ: Lawrence Erlbaum.
  18. Kohonen, T. (1988), Self-Organization and Associative Memory, Third Edition, New York: Springer-Verlag.
  19. McCloskey, M. and Cohen, N.J. (1989), ‘Catastrophic Interference in Connectionist Networks: The Sequential Learning Problem’, in G.H. Bower, ed., The Psychology of Learning and Motivation, pp. 109–65, New York: Academic Press.
  20. Miikkulainen, R. and Dyer, M.G. (1989), A Modular Neural Network Architecture for Sequential Paraphrasing of Script-based Stories, Technical Report UCLA-AI-89-02, Artificial Intelligence Laboratory, Computer Science Department, University of California, Los Angeles.
  21. Miikkulainen, R. and Dyer, M. (1991), ‘Natural Language Processing with Modular PDP Networks and Distributed Lexicon’, Cognitive Science, 15, 343–399.
  22. Mishkin, M., Ungerleider, L.G., and Macko, K.A. (1983), ‘Object Vision and Spatial Vision: Two Cortical Pathways’, Trends in Neurosciences, 6, 414–417.
  23. Nowlan, S.J. (1990), Competing Experts: An Experimental Investigation of Associative Mixture Models, Technical Report, Department of Computer Science, University of Toronto.
  24. Nolfi, S., Elman, J.L., and Parisi, D. (1990a), Learning and Evolution in Neural Networks, Technical Report 9019, Center for Research in Language, University of California, San Diego.
  25. Nolfi, S., Parisi, D., Vallar, G., and Burani, C. (1990b), ‘Recall of Sequences of Items by a Neural Network’, in D.S. Touretzky, J.L. Elman, T.J. Sejnowski, and G.E. Hinton, eds., Proceedings of the 1990 Connectionist Models Summer School, San Mateo, CA: Morgan Kaufmann.
  26. Pinker, S. and Prince, A. (1988), ‘On Language and Connectionism: Analysis of a Parallel Distributed Processing Model of Language Acquisition’, Cognition, 28, 73–193.
  27. Plunkett, K. and Marchman, V. (1991), ‘U-shaped Learning and Frequency Effects in a Multi-layered Perceptron: Implications for Child Language Acquisition’, Cognition, 38, 43–102.
  28. Pollack, J. (1988), ‘Recursive Auto-associative Memory: Devising Compositional Distributed Representations’, Proceedings of the Tenth Annual Conference of the Cognitive Science Society, Hillsdale, NJ: Lawrence Erlbaum Associates.
  29. Pollack, J. (1990), ‘Recursive Distributed Representations’, Artificial Intelligence, 46, 77–105.
  30. Rueckl, J.G., Cave, K.R., and Kosslyn, S.M. (1989), ‘Why are “What” and “Where” Processed by Separate Cortical Visual Systems? A Computational Investigation’, Journal of Cognitive Neuroscience, 1, 171–186.
  31. Rumelhart, D.E. and McClelland, J.L. (1986a), ‘On Learning the Past Tense of English Verbs’, in J.L. McClelland, D.E. Rumelhart, and the PDP Research Group, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 2: Psychological and Biological Models, pp. 216–71, Cambridge, MA: MIT Press.
  32. Rumelhart, D.E., Hinton, G.E., and Williams, R.J. (1986b), ‘Learning Internal Representations by Error Propagation’, in D.E. Rumelhart, J.L. McClelland, and the PDP Research Group, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations, Cambridge, MA: MIT Press.
  33. Rumelhart, D.E., Smolensky, P., McClelland, J.L., and Hinton, G.E. (1986c), ‘Schemata and Sequential Thought Processes in PDP Models’, in J.L. McClelland, D.E. Rumelhart, and the PDP Research Group, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 2: Psychological and Biological Models, Cambridge, MA: MIT Press.
  34. Searle, J.R. (1980), ‘Minds, Brains, and Programs’, The Behavioral and Brain Sciences, 3, 417–424.
  35. Shallice, T. (1988), From Neuropsychology to Mental Structure, Cambridge: Cambridge University Press.
  36. Simon, H.A. (1980), The Sciences of the Artificial, Cambridge, MA: MIT Press.
  37. Smolensky, P. (1990), ‘Tensor Product Variable Binding and the Representation of Symbolic Structures in Connectionist Systems’, Artificial Intelligence, 46, 159–216.
  38. St. John, M.F. and McClelland, J.L. (1990), ‘Learning and Applying Contextual Constraints in Sentence Comprehension’, Artificial Intelligence, 46, 217–257.
  39. Thagard, P., Holyoak, K., Nelson, G., and Gochfeld, D. (1990), ‘Analog Retrieval by Constraint Satisfaction’, Artificial Intelligence, 46, 259–310.
  40. Touretzky, D.S. (1990), ‘BoltzCONS: Dynamic Symbol Structures in a Connectionist Network’, Artificial Intelligence, 46, 5–46.
  41. Touretzky, D.S. and Hinton, G.E. (1988), ‘A Distributed Connectionist Production System’, Cognitive Science, 12, 423–466.
  42. van Gelder, T. (1990), ‘Compositionality: A Connectionist Variation on a Classical Theme’, Cognitive Science, 14, 355–384.

Copyright information

© Kluwer Academic Publishers 1993

Authors and Affiliations

  • William Bechtel
  1. Department of Philosophy, Georgia State University, Atlanta, USA
