Reference Work Entry

Encyclopedia of Child Behavior and Development

pp 872-874


  • Daniel Patanella, New York City Department of Education


Synonyms

Behaviorism; Cognitive-behaviorism; Conditioning; Latent learning; Literacy; Neo-behaviorism; Social learning


Definition

Learning refers to changes in behavior and cognition as the result of experience. Traditional learning theory is closely associated with behaviorism, cognitive-behavioral research, and an empiricist-associationist philosophy.


Description

As a psychological discipline, learning refers to changes in behavior and changes in cognition as the result of experience. Traditional learning theory is closely associated with behaviorism, cognitive-behavioral research, and an empiricist-associationist philosophy. A broad distinction may be drawn between conditioning, as represented by Watson [10] and Skinner [7], and the eclectic cognitive-behavioral learning perspectives of such psychologists as Tolman [9] and Bandura [1]. Regardless of their theoretical diversity, all learning theorists subscribe to the behaviorist maxim that the proper dependent variables in psychological research should be observable, verifiable behaviors.


Conditioning

Conditioning refers to any process by which an organism acquires new behaviors through repeated experience. There are two main types. The first, classical conditioning, is represented by such researchers as Pavlov [5] and Watson [10], who popularized the term “behaviorism.” The second, known as operant or instrumental conditioning, derives from the work of Skinner [7] and his followers. Classical conditioning is a form of learning in which an organism comes to associate one stimulus with another, usually prompting a behavior previously associated only with the first stimulus; it is concerned with the re-association of a reflexive behavior to a formerly neutral stimulus. Operant conditioning is a form of learning primarily concerned with the effects of reinforcement and punishment upon behaviors, and the behaviors it addresses are much broader in scope than the reflexes of classical conditioning.

Within classical conditioning, several components of the conditioned response may be measured and serve as the dependent variable of interest. The amount of time it takes for an organism to respond to a conditioned stimulus is referred to as “latency,” the strength of the response as “magnitude,” and the likelihood that the conditioned response will occur at all as “probability.” Examination of these three aspects of learned behaviors has led in part to two important and influential elaborations of our understanding of classical conditioning: opponent-process theory and the Rescorla-Wagner theory. Opponent-process theory, largely developed by Solomon [8], provides an associationist context for explaining certain instances (such as habituation to drugs) in which the conditioned response becomes the opposite of the unconditioned response. The Rescorla-Wagner theory, created by Rescorla and Wagner [6], allows for mathematical prediction of responses based upon the given trajectory of a learning curve, offers a possible learning-based explanation of habituation to repetitive stimuli, and helps explain how organisms distinguish among multiple conditioned stimuli at any given time.
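The heart of the Rescorla-Wagner theory is a trial-by-trial update rule, ΔV = αβ(λ − ΣV): associative strength V grows in proportion to the gap between the maximum conditioning the unconditioned stimulus will support (λ) and the conditioning already accrued. A minimal sketch follows; the function name and parameter values are illustrative assumptions, not taken from this entry:

```python
def rescorla_wagner(trials, alpha=0.3, beta=1.0, lam=1.0):
    """Associative strength V after each trial, per dV = alpha * beta * (lam - V).

    alpha: salience of the conditioned stimulus (assumed value)
    beta:  learning rate tied to the unconditioned stimulus (assumed value)
    lam:   maximum associative strength the US will support (assumed value)
    """
    v = 0.0
    history = []
    for _ in range(trials):
        v += alpha * beta * (lam - v)  # prediction-error update
        history.append(v)
    return history

curve = rescorla_wagner(10)  # rises steeply at first, then levels off toward lam
```

Because each increment shrinks as V approaches λ, the rule reproduces the negatively accelerated learning curve typical of classical conditioning; with several conditioned stimuli present, the prediction error λ − ΣV is shared among them, which is how the model captures competition among multiple conditioned stimuli.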

Operant conditioning, in contrast, is not as dependent upon reflexive behavior; it focuses on the consequences of behaviors and on what makes learned behaviors stronger or weaker. Behaviors that are reinforced are likely to persist, whereas lack of reinforcement will likely result in a decrease in the behavior. In Schedules of Reinforcement (1957), Ferster and Skinner [2] distinguished among four schedules of reinforcement (fixed-ratio, variable-ratio, fixed-interval, and variable-interval), and their definitions have become part of the standard operant conditioning lexicon. These schedules have been observed in both the laboratory and the real world, and behaviors reinforced on unpredictable schedules are highly resistant to extinction, continuing long after reinforcement has ceased. Series of discrete behaviors can be combined into lengthy chains, resulting in very complex learning, and the reinforcers used can be far removed from the biologically based primary reinforcers one normally associates with laboratory research. The Skinnerian concept of “radical behaviorism” conceptualizes even thought processes as subject to the rules and laws of conditioning.
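The contrast between predictable and unpredictable schedules can be sketched as two rules deciding whether a given response earns reinforcement. The function names and the ratio of 5 below are my own illustrative choices, not drawn from Ferster and Skinner:

```python
import random

def fixed_ratio(n):
    """Fixed-ratio schedule: reinforce exactly every nth response (predictable)."""
    count = 0
    def schedule(_response):
        nonlocal count
        count += 1
        if count == n:
            count = 0
            return True
        return False
    return schedule

def variable_ratio(n, rng=None):
    """Variable-ratio schedule: reinforce on average every nth response,
    but unpredictably - the pattern associated with high resistance to extinction."""
    rng = rng or random.Random(0)
    def schedule(_response):
        return rng.random() < 1.0 / n
    return schedule

fr = fixed_ratio(5)
rewards = [fr(i) for i in range(20)]  # reinforced on the 5th, 10th, 15th, and 20th responses
```

On the fixed-ratio schedule the organism can detect exactly when reinforcement stops; on the variable-ratio schedule any given unreinforced response is indistinguishable from an ordinary gap between reinforcers, which is one intuition for why such behavior persists long into extinction.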

Neo-Behaviorism and Social Learning

In contrast to the orthodoxy of classical and operant conditioning, neo-behaviorism is a diverse set of theories that allows for the inference of nonobservable constructs such as motivation and internal states. (As the Rescorla-Wagner and opponent-process theories make evident, however, classical and operant conditioning also involve complex and abstract theorizing.) Social learning bridges the gap between behaviorism and social psychology.

The work of both Guthrie [3] and Hull [4] is primarily of historical interest, yet both neo-behaviorists introduced important concepts to learning theory. Guthrie stressed the role of contiguity in learning, elaborating upon the necessity of properly linking the unconditioned and conditioned stimuli during classical conditioning and reconceptualizing forgetting as the establishment of new contiguities that successfully compete against the old ones. Hull not only attempted to incorporate the mathematical rigor of proofs and postulates into learning theory but also stressed the importance of intervening variables such as drive, habit strength, and the incentive value of the reinforcement. The work of both researchers helps point the way toward contemporary neural-network approaches in cognitive psychology.

Tolman [9] is considered the father of cognitive behaviorism. Set apart from both classical and operant conditioning, Tolman’s system examined learning outside the confines of strict minimalist environments, preferring mazes to puzzle boxes or Skinner boxes in the laboratory. Tolman reported that his lab rats were able to learn the routes of their mazes even prior to reinforcement, as evidenced by their shorter-than-expected latencies when reinforcement was finally introduced; he referred to this non-reinforced learning as “latent learning,” a concept that is still current. Tolman also introduced the study of insight into behavioral learning. While insight had been a topic explored by Gestalt psychologists, Tolman studied insight learning within the context of rats in a maze. Latent learning and insight learning are similar in that neither seems to be strict stimulus-response learning, and both make use of what Tolman called a “cognitive map”: a broad, purely mental schematic of the immediate environment that the organism uses to behave efficiently and solve problems.

Social learning theory, as introduced by Bandura and elaborated upon by subsequent researchers, is even further removed from the behaviorist tradition. According to social learning theory, it is not necessary to participate in an activity in order to learn it; rather, by observing the actions of others, one can learn vicariously through modeling. This is largely learning through imitation, an aspect of learning previously overlooked by other theorists. The theory initially focused on the acquisition of antisocial behavior in children, but its utility has broadened considerably, and social learning theory is now not only an integral part of contemporary learning theory but also an essential component of many behavior modification programs.


This work represents the scholarship of the author and does not imply any official position of the New York City Department of Education.

Copyright information

© Springer Science+Business Media, LLC 2011