Encyclopedia of Machine Learning

2010 Edition
| Editors: Claude Sammut, Geoffrey I. Webb


  • Thomas R. Shultz
  • Scott E. Fahlman
Reference work entry
DOI: https://doi.org/10.1007/978-0-387-30164-8_92


Synonyms

Cascor; CC


Definition

Cascade-Correlation (often abbreviated “Cascor” or “CC”) is a supervised learning algorithm for artificial neural networks. It is related to the back-propagation algorithm (“backprop”), but differs in that a CC network begins with no hidden units and then adds units one by one, as needed, during learning.

Each new hidden unit is trained to correlate with the residual error of the network built so far. Once added to the network, the new unit is frozen, in the sense that its input weights are fixed from then on. The hidden units form a cascade: each new unit receives weighted input from all of the original network inputs and from the output of every previously added hidden unit. This cascading creates a network that is as deep as it has hidden units. Stated another way, the CC algorithm efficiently creates complex, higher-order nonlinear basis functions – the hidden units – which are then combined to form the desired outputs.
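The recruitment loop described above can be sketched in a few lines of NumPy. This is a simplified illustration, not the original implementation: Fahlman and Lebiere train a pool of candidate units with the Quickprop algorithm and use trainable output units, whereas this sketch trains a single candidate by plain gradient ascent on the covariance between its output and the residual error, and fits linear output weights by least squares. All names here (`cascor`, `train_candidate`, `train_outputs`) are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_outputs(F, y):
    # Fit linear output weights on the current feature set by least squares.
    # (A simplification: the original algorithm trains output weights with
    # Quickprop rather than a closed-form linear fit.)
    W, *_ = np.linalg.lstsq(F, y, rcond=None)
    return W

def train_candidate(F, err, steps=3000, lr=0.5):
    # Gradient ascent on |S|, where S is the covariance between the candidate
    # unit's output and the residual error of the network built so far.
    w = rng.normal(scale=0.5, size=F.shape[1])
    ec = err - err.mean()
    for _ in range(steps):
        v = sigmoid(F @ w)
        s = ec @ v                                   # covariance score S
        grad = np.sign(s) * (F.T @ (ec * v * (1.0 - v)))
        w += lr * grad / len(err)
    return w

def cascor(X, y, max_hidden=8, tol=1e-4):
    # Features start as the raw inputs plus a bias; each recruited candidate's
    # output is appended as a new, permanently frozen feature column, so every
    # later unit receives input from all inputs and all earlier hidden units.
    F = np.hstack([X, np.ones((len(X), 1))])
    for _ in range(max_hidden):
        W = train_outputs(F, y)
        err = y - F @ W
        if np.mean(err ** 2) < tol:
            break
        w = train_candidate(F, err)                  # candidate sees ALL prior features
        F = np.hstack([F, sigmoid(F @ w)[:, None]])  # freeze: w is never touched again
    return F, train_outputs(F, y)

# XOR is not linearly separable, so the network must recruit at least one hidden unit.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)
F, W = cascor(X, y)
pred = (F @ W > 0.5).astype(float)
```

On XOR the initial network (no hidden units) can do no better than predicting 0.5 everywhere, so the loop recruits a hidden unit whose output correlates with the residual error; once that frozen feature is added, the output layer can fit the task.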



Recommended Reading

  1. Baluja, S., & Fahlman, S. E. (1994). Reducing network depth in the cascade-correlation learning architecture. Pittsburgh, PA: School of Computer Science, Carnegie Mellon University.
  2. Buckingham, D., & Shultz, T. R. (2000). The developmental course of distance, time, and velocity concepts: A generative connectionist model. Journal of Cognition and Development, 1, 305–345.
  3. Dandurand, F., Berthiaume, V., & Shultz, T. R. (2007). A systematic comparison of flat and standard cascade-correlation using a student-teacher network approximation task. Connection Science, 19, 223–244.
  4. Fahlman, S. E. (1988). Faster-learning variations on back-propagation: An empirical study. In D. S. Touretzky, G. E. Hinton, & T. J. Sejnowski (Eds.), Proceedings of the 1988 connectionist models summer school (pp. 38–51). Los Altos, CA: Morgan Kaufmann.
  5. Fahlman, S. E. (1991). The recurrent cascade-correlation architecture. In D. S. Touretzky (Ed.), Advances in neural information processing systems (Vol. 3). Los Altos, CA: Morgan Kaufmann.
  6. Fahlman, S. E., & Lebiere, C. (1990). The cascade-correlation learning architecture. In D. S. Touretzky (Ed.), Advances in neural information processing systems (Vol. 2, pp. 524–532). Los Altos, CA: Morgan Kaufmann.
  7. Mareschal, D., & Shultz, T. R. (1999). Development of children’s seriation: A connectionist approach. Connection Science, 11, 149–186.
  8. Oshima-Takane, Y., Takane, Y., & Shultz, T. R. (1999). The learning of first and second pronouns in English: Network models and analysis. Journal of Child Language, 26, 545–575.
  9. Schlimm, D., & Shultz, T. R. (2009). Learning the structure of abstract groups. In N. A. Taatgen & H. van Rijn (Eds.), Proceedings of the 31st annual conference of the cognitive science society (pp. 2950–2955). Austin, TX: Cognitive Science Society.
  10. Shultz, T. R. (1998). A computational analysis of conservation. Developmental Science, 1, 103–126.
  11. Shultz, T. R. (2003). Computational developmental psychology. Cambridge, MA: MIT Press.
  12. Shultz, T. R. (2006). Constructive learning in the modeling of psychological development. In Y. Munakata & M. H. Johnson (Eds.), Processes of change in brain and cognitive development: Attention and performance XXI (pp. 61–86). Oxford, UK: Oxford University Press.
  13. Shultz, T. R., & Bale, A. C. (2006). Neural networks discover a near-identity relation to distinguish simple syntactic forms. Minds and Machines, 16, 107–139.
  14. Shultz, T. R., & Cohen, L. B. (2004). Modeling age differences in infant category learning. Infancy, 5, 153–171.
  15. Shultz, T. R., Mareschal, D., & Schmidt, W. C. (1994). Modeling cognitive development on balance scale phenomena. Machine Learning, 16, 57–86.
  16. Shultz, T. R., & Rivest, F. (2001). Knowledge-based cascade-correlation: Using knowledge to speed learning. Connection Science, 13, 1–30.
  17. Shultz, T. R., Rivest, F., Egri, L., Thivierge, J.-P., & Dandurand, F. (2007). Could knowledge-based neural learning be useful in developmental robotics? The case of KBCC. International Journal of Humanoid Robotics, 4, 245–279.
  18. Shultz, T. R., & Takane, Y. (2007). Rule following and rule use in simulations of the balance-scale task. Cognition, 103, 460–472.
  19. Shultz, T. R., & Vogel, A. (2004). A connectionist model of the development of transitivity. In Proceedings of the twenty-sixth annual conference of the cognitive science society (pp. 1243–1248). Mahwah, NJ: Erlbaum.
  20. Sirois, S., & Shultz, T. R. (1998). Neural network modeling of developmental effects in discrimination shifts. Journal of Experimental Child Psychology, 71, 235–274.

Copyright information

© Springer Science+Business Media, LLC 2011
