Learning & Behavior, Volume 41, Issue 1, pp 1–24

Adjunctive behaviors are operants



Adjunctive behaviors such as schedule-induced polydipsia are said to be induced by periodic delivery of incentives, but not reinforced by them. That standard treatment assumes that contingency is necessary for conditioning and that delay of reinforcement gradients are very steep. The arguments and evidence for this position are reviewed and rejected. In their place, data are presented that imply different gradients for different classes of responses. Proximity between response and reinforcer, rather than contingency or contiguity, is offered as a key principle of association. These conceptions organize a wide variety of observations and provide the rudiments for a more general theory of conditioning.


Keywords: Adjunctive · Conditioning · Contingency · Contiguity · Proximity · Schedule-induced

Response acquisition with delayed reinforcement is a robust phenomenon that may not depend on a mechanically defined response or an immediate external stimulus change to mediate the temporal gap between response and reinforcer.

Critchfield and Lattal (1993, p. 373)

Schedule-induced drinking as the prototypical adjunctive behavior: Arguments against reinforcement

Ever since the laboratory discovery of adjunctive behavior—also known as schedule-induced or interim behavior—by John Falk (1961), analysts have treated these anomalies as belonging to a separate class of behavior, induced by incentives such as periodic food, but not reinforced by them. The discovery of adjunctive behavior was a bombshell in the behavioral community, since it seemed an exception to the orderly account of all behavior subsumed under the tripartite hegemony of operant, respondent, and unconditioned responses. What were the implications for the Skinnerian project of applied behavior analysis, if so substantial a proportion of the behavior that was induced by reinforcement was adamant to control by reinforcement? In the case of schedule-induced polydipsia, Falk (1971) voiced the contemporary amazement:

It was an outright absurd [finding]. It was absurd because food deprivation in rats yields a decrease in water intake, not an increase. It was absurd because heating a large quantity of room-temperature water to body heat and expelling it as copious urine is wasteful for an animal already pressed for energy stores by food deprivation. It is absurd for an animal to drink itself into a dilutional hyponatremia bordering on water intoxication. But perhaps most absurd was not the lack of a metabolic or patho-regulatory reason for the polydipsia, but the lack of an acceptable behavioral account. (p. 577)

Falk detailed the arguments against various “acceptable behavioral accounts,” which he summarized in the following: “Polydipsia is not the result of food delivery directly or adventitiously reinforcing water intake. Nor does it serve a problem-solving mediational, or timing function. Furthermore, drinking is not an unconditioned response to eating” (Falk, 1971, p. 577). Such opinions were expanded and generalized by Staddon (1977; Staddon & Simmelhag, 1971) so effectively that the claim that adjuncts are not caused by contingent reinforcement—that they are not operant responses—is now generally accepted.

The most straightforward behavioral account, the adventitious hypothesis (Staddon, 1977), held that accidental contiguity between a protoadjunctive response and a “reinforcer” increased the response frequency to adjunctive levels through a process of adventitious, or “superstitious” (Skinner, 1948), conditioning. (“Reinforcer” is hedged because it is so entangled with a theoretical process, operant conditioning, that it predisposes that interpretation. “Unconditioned Stimulus” is no freer of theoretical implication. A new term for such behaviorally salient stimuli, phylogenetically important event [PIE; Baum, 2012] has everything to recommend it except its novelty. We will use the conventional term “reinforcer” here, warning readers that we typically mean by it only “food for a hungry organism.”) In an early report, Clark (1962) noted the development of excessive drinking by rats on variable-interval schedules of food reinforcement and described various manipulations to discourage it once established (which met with mixed success). Clark concluded that the drinking “obviously was developed and maintained by adventitious reinforcement” (p. 63).

Clark (1962) was quickly challenged by Stein (1964), who found that rats did not continue to lick a dry tube, that drinking was not sustained when the rats were switched to a liquid reinforcer, and that drinking occurred postpellet rather than prepellet. There ensued hundreds of research articles on the topic (for reviews, see Christian, Schaeffer, & King, 1977; Wallace & Singer, 1976) and a dozen hypotheses as to its nature, including displacement activity, displaced consummatory activity, activation in ethological, physiological, or behavioral senses, induction, induced variation, frustration, and reinforcement. Some of the empirical results were in conflict because of the path-dependent nature of adjunctive responses: If a response is allowed or encouraged early in conditioning, it could persist through conditions that, had they been present at the start, would have prevented its emergence (e.g., Chapman & Richardson, 1974). This important issue is revisited in the Reconsideration of the Arguments section and the Adjuncts as Operants: Functions section.

In a landmark chapter, Staddon (1977, pp. 127–128, 132) listed six reasons why adventitious reinforcement—the noncontingent association of the adjunctive response and a reinforcer—could not account for adjunctive behavior. (1) A terminal response, such as head-in-hopper, may be dominant early in conditioning, only to later be supplanted by another response, such as pecking; “adventitious reinforcement cannot account for either the decline of the first response or the appearance of the second.” (2) Terminal responses like pecking are resistant to negative contingencies (“negative automaintenance”). (3) When negative contingencies are effective in suppressing the behavior, “much of the effect is attributable to . . . changing the pattern of food delivery.” (4) “Showing that a response is sensitive to a real negative contingency does not force the conclusion that its prior occurrence was owing to an accidental positive one.” (5) Noncontingent reinforcers added to a schedule of contingent reinforcement will often decrease response rates, even though the absolute number of response–food conjunctions is thereby increased. Responses must be predictive of reinforcement to be manipulated by it (and adjunctive responses, having no contingent relation to reinforcement, are not predictive of it). (6) In the case of induced drinking, “it rarely occurs contiguously with food delivery” and is little affected by lick-contingent delays of food. This chapter became definitive of the phenomenon.

In place of the adventitious reinforcement hypothesis, disavowed for the above reasons, Staddon (1977) divided the class of adjunctive responses into three types, with associated behavioral states or “moods” (p. 137): interim, which was induced by reinforcement and occurred early in the interval between reinforcers; terminal, which occurred toward the end of the interval and which, he seemed to argue, was an example of Pavlovian induction (pp. 127, 138); and facultative, which arise to fill the time between interim and terminal responses during the middle of long intervals. By referencing the responses to their states, or moods, he was explicitly offering a motivational interpretation: Thirst was induced early in the interval, and this led to polydipsic drinking. The motivational hypothesis, when now invoked, tends to refer more to generalized activation—increasing arousal by shock (King, 1974a) or other means increases drinking—rather than to specific motivational moods. The legacy of this chapter and of articles of similar thrust (e.g., Lucas, Timberlake & Gawley, 1988; Segal, 1972; Staddon & Simmelhag, 1971) was that adjunctive behaviors, be they interim, facultative, or terminal, occur because they are induced by the conditions of stimulation, not reinforced by operant conditioning. By induction was meant the appearance of action patterns—kinds of phylogenetically specific, Pavlovian unconditioned responses—elicited by the conditions of stimulation.

Reconsideration of the arguments

After 35 years of ensuing research, do Staddon’s (1977) arguments stand?

One adjunctive response replaces another

If, as is often assumed, reinforcement acts only on the response that is most contiguous with it, once established, such a response should never be displaced. This is explicit in the report of Timberlake and Lucas (1985), who pretrained particular responses (turning and pecking) to make them predominant. When the pigeons were then exposed to a free-feeding schedule, those were quickly displaced by other, wall-oriented responses. They argued that their results ruled out an adventitious reinforcement explanation for wall-oriented responses that occurred earlier in the interval. (As will be seen below, their results might have been different had they used longer interfood intervals.) A more spectacular, if less systematic, report of such displacement was published by Breland and Breland (1961), who called the displacement of trained operant responses by phylogenetically more appropriate responses instinctive drift.

But what if this contiguity requirement for reinforcement is incorrect: What if a reinforcer’s effect can spread over multiple prior responses, to strengthen each in a kind of delay of reinforcement gradient? Catania (1971) demonstrated that this is what happens, replicating a venerable tradition of research, and went on to show that when multiple “B” responses precede an “A” response that is then reinforced, the B responses increase substantially in rate and do so as a function of the number of them preceding the terminal A response. He later generated a real-time computer simulation that delivered good renditions of basic schedule effects (Catania, 2005b), with a key premise being that reinforcement affects more than the single response that preceded it. The Adjuncts as Operants: The Constructs section and the A Model of Competing Traces section extend Catania’s argument to adjunctive behavior. There we argue, inter alia, that with extended proximity, such “B” responses can displace “A” responses, despite their imperfect contiguity with reinforcement.
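Catania’s premise—that a reinforcer strengthens every response preceding it, graded by delay—can be rendered as a minimal sketch. The exponential form of the gradient and the decay constant tau are our illustrative assumptions, not parameters taken from Catania (1971, 2005b):

```python
import math

def credit(response_times, t_food, tau=2.0):
    """Spread reinforcement credit over all prior responses via an
    exponential delay-of-reinforcement gradient, exp(-delay / tau).
    tau (s) is a hypothetical decay constant: a small tau yields the
    steep gradient of the classical account; a larger tau lets remote
    "B" responses share substantially in the strengthening."""
    return {t: math.exp(-(t_food - t) / tau)
            for t in response_times if t <= t_food}

# Three "B" responses precede the reinforced "A" response at t = 10 s.
w = credit([4.0, 6.0, 8.0, 10.0], t_food=10.0)
# The A response (t = 10) receives full credit (1.0); each B response
# is also strengthened, in proportion to its proximity to food.
```

On this sketch, a class of B responses that reliably sits in moderate proximity to every reinforcer can accumulate strength across deliveries and eventually displace the nominally contiguous A response.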

Terminal responses are resistant to negative contingencies

D. R. Williams and Williams (1969) demonstrated persistent directed keypecking despite contingent nonreward. Their report was interpreted as showing that Pavlovian contingencies that fostered auto-shaped pecking were dominant over the Skinnerian contingencies that discouraged it. Nonreinforcement has a much smaller “corrective effect” than does reinforcement (Killeen, Sanabria & Dolgov, 2009): Terminal responses can be maintained by very thin payoff ratios, with thousands of responses maintained by a single reinforcer. But it does have an effect: Under extended exposure to the Williams and Williams procedure, keypecking extinguishes (Sanabria, Sitomer & Killeen, 2006). The average time constant for extinction in those experiments was 20 min, so that by an hour of trial time, extinction was 95 % complete. Is this “resistant”? For many of the pigeons, the pecking moved off-key. How are we to interpret this? That pecking cannot be discouraged but only moved around, or that pecking the wall was adventitiously reinforced by the ensuing presentation of food?

Suppression by negative contingencies is due largely to increased intervals between food

The proper experiment to test this hypothesis is the yoked-control design, in which yoked animals receive food at the same time as master animals, with the master animals subject to a delay between licks and access to food. Such contingencies deter the development of polydipsia, with master animals showing much less drinking than their partners (Moran & Rudolph, 1980). Even in the case of established drinking, such delays discourage drinking in masters more than in controls (e.g., Pellón & Blackman, 1987). Lengthening of interfood interval could certainly lead to changes in the amount or rate of drinking; but these data show that the effect of contingencies of reinforcement on the development and maintenance of polydipsia is substantial, over and above the effects of rate of feeding.


Allowing that a response is sensitive to a negative contingency, as above, does not—pace Staddon—force the conclusion that its original occurrence was due to a positive one. Behaviors that become excessive are natural parts of animals’ repertoires. Subjects enter experimental situations with a set of action patterns, including approach to features of the environment that are correlated with reward. Reinforcement selects from among these (Stokes & Balsam, 1991). Good treatments of such repertoires are Timberlake’s (1994, 2000) and Shettleworth’s (1988).

The tenacity of established adjunctive behavior is not present ab ovo. The research of Moran and Rudolph (1980) is a paragon of scores of studies, starting with Reynierse and Spanier (1968) and Toates (1971), showing that the development of adjunctive behaviors is quite sensitive to environments that foster or discourage instances of those responses. Phylogeny may offer candidate responses—priors in Staddon’s (1983) evocative analogy—but if those are not associated with reinforcement, they extinguish. Reinforcement selects those that become parts of the animals’ repertoire, often to extreme. Once selected, they persist and are often adamant to counterevidence. Extensive training also makes them obdurate to motivational control, an effect captured in Dickinson’s distinction between actions and overlearned, autonomous habits (Dickinson, 1985; Dickinson, Balleine, Watt, Gonzalez, & Boakes, 1995; Holland, 2004). The initial default priors on candidate responses are diffuse, weighted by little evidence. With each session of conditioning, those that survive become increasingly robust posteriors, confirmed by each reinforcement, increasingly difficult to dislodge. By increasing responding, reinforcement fosters proximity, becomes a self-fulfilling prophecy; and then steps back.

Reinforcement requires a “real contingency”

Staddon (1977) argued that a schedule can be “effective in modifying behavior only to the extent that it arranges . . . a real contingency . . . between an event (a stimulus or response) and the occurrence of a reinforcer” (p. 128). By “real contingency” is meant a change in the probability of reinforcement conditional on the presence of a stimulus or a response. Egger and Miller’s (1962) seminal research on the blocking of stimulus control by a more predictive stimulus, followed by the landmark analyses of Rescorla (starting with Rescorla, 1967), revolutionized our conception of Pavlovian conditioning and underscored the importance of informativeness in conditioning (Rescorla, 1988). Recent work continues to enrich and exploit this hypothesis (Anselme, 2010; Balleine & Dickinson, 1998; Ward et al., 2012). (The discovery of robust backward conditioning (Keith-Lucas & Guttman, 1975) provides an interesting challenge to this approach, since the informativeness of a stimulus that postdicts reinforcement should be overshadowed by the immediately prior reinforcer itself.)

The argument of contingency being necessary for conditioning is typically delivered as a brief against temporal contiguity being sufficient for it. Staddon (1977), for instance, noted that noncontingent reinforcement added to a schedule of contingent presentations often decreases response rates, even though “the number of response-food conjunctions is increased by this operation” (p. 128). But without precision as to what an animal considers a conjunction, that operation might as readily be counted as decreasing the proportion of conjunctions. Reinforcers delivered when the animal may or may not be in the act of responding denature the conjunction, since asynchronies in response–outcome latencies of only scores of milliseconds can easily be detected (Killeen & Smith, 1984), and may strengthen or weaken responding, depending on context (Killeen, 1978, 1981; Madden & Perone, 2003) and species (Boakes, Halliday, & Poli, 1975). In a study of probabilistic classical conditioning of keypeck and leverpress responses (Killeen et al., 2009), every pairing of stimulus with food increased the probability of a target response; and if the trials also contained such a response, there was often a significant additional increment in the probability of responding on the next trial: Both stimulus and response conjunctions with the reinforcer mattered. Thus, Staddon’s example of the effect of noncontingent reinforcers introduced into a behavior stream provides no evidence for the necessary role of contingency.

Like probability, contingency is undefined for unique events: The perception of contingency requires replication to establish the defining conditional probability (Baum, 2012). The ubiquity of superstitious behaviors and their potential for inception after a single response–outcome pairing (Albert & Mah, 1972; Armstrong, DeVito, & Cleland, 2006; Bevins & Besheer, 2006) provide additional, if circumstantial, evidence against the contingency account. As was noted by Papini and Bitterman, “The evidence suggests that CS–US contingency is neither necessary nor sufficient for conditioning and that the concept has long outlived any usefulness that it may have once had” (1990, p. 396).

Contiguity versus proximity

Historically, reinforcement delay has been thought to impede or, at values exceeding a few seconds, prevent acquisition (Critchfield & Lattal, 1993, p. 374). Yet drinking develops in experiments where lick–food contiguities are specifically excluded. Staddon (1977) cited this as evidence against the reinforcement hypothesis. Baum (2012), commenting on two of Staddon’s figures, concurred: “Activities that disappeared before food delivery could not be reinforced” (p. 103). One recent textbook recapitulates this 20th century attitude: “Interim behaviors were never occurring at the moment a reinforcer was delivered. . . . They seem to have little to do with reinforcement in the traditional law-of-effect sense: No direct response–reinforcer pairing has ‘stamped’ them in” (Bouton, 2007, pp. 397, 399).

In retrospect, it is not at all clear why it seemed so obvious that adjunctive behavior could not be a manifestation of delayed reinforcement. Responses can be acquired and maintained with much longer delays between the response and reinforcer (D'Amato, Safarjan, & Salmon, 1981; Dickinson, Watt, & Griffiths, 1992; Spetch & Honig, 1988). Capaldi (1978) showed acquisition of running with 20-s delays, Lattal and Gleeson (1990) the acquisition of keypecking by pigeons and leverpressing by rats with delays of 30 s, and Critchfield and Lattal (1993) the acquisition and maintenance of spatial behavior with similar delays of reinforcement. Okouchi (2009) reviewed subsequent research on acquisition with delayed reinforcement, extending it to humans. In none of these cases were the responses “occurring at the moment a reinforcer was delivered,” yet the not-so-direct reinforcement successfully “stamped” them in.

Challenging Staddon’s (1977) arguments at most clears the ground for alternate accounts. The next section examines the categories “adjunctive,” “operant,” and “conditioning,” since such conceptual analysis is the necessary foundation for a new construction (Machado & Silva, 2007). The Adjuncts as Operants: Functions section comprises a functional analysis of behaviors called “operant” and behaviors called “adjunctive.” The A Model of Competing Traces section develops a mathematical model of the central new explanatory construct, competing memorial traces of different classes of behavior.

Adjuncts as operants: The constructs

When is it reasonable to claim that one thing or process is the same as another? Names are rudimentary models, picking out replicable aspects of the environment for our attention and assigning them to common or distinct classes. Like other models, they should be as powerful as possible, while being as parsimonious as possible. This means that if we can increase our predictive ability by asserting that adjuncts are operants, showing that, for example, the laws of shaping, extinction, and motivation are the same for both classes, we gain both power (extension of the laws) and parsimony (one, rather than two, classes of behavior). In this section, we argue that (1) definitions do not distinguish them and (2) operations do not distinguish them—for instance, that neither strict contiguity nor contingency is necessary for either operants or adjuncts. In the next section, we invoke a classic behavior analytic technique, functional analysis (Iwata, Kahng, Wallace, & Lindberg, 2000), to compare how adjuncts and operants change as a function of experimental operations and conclude that (3) functional relations do not distinguish them.


Operant responses, like adjunctive responses, occur for the first time “for other reasons” than an ensuing reinforcer (Skinner, 1984). What is their provenance? Many are part of the instinctive heritage of the organism; others are modifications of those basic actions (Blass, 2001; Fentress, 1983). Ethograms are inventories of such behaviors, ones that are stereotyped enough to permit identification and cataloging (e.g., Baerends, 1976) and sufficiently general in the species (Gallistel, 1980). Many are elicited by features of the context: various forms of predatory behavior in the presence of prey, antipredator behavior in a fear-inducing context (Fanselow, 1989; Fanselow & Sigmundi, 1986; Pear, Moody, & Persinger, 1972), mating behavior, displacement activities (Slater & Ollason, 1972), and so on (Alcock, 2005). Baum (2005, 2012) calls these action patterns PIE-related activities. They naturally vary with drive state and species (Campbell, Smith, Misanin, & Jaynes, 1966; Wong, 1977). Kissileff (1969) showed that around 20 % of a rat’s water intake occurs in the 5 min preceding a meal and about 40 % in the 5 min after a meal, with steep gradients as temporal remove increases. Contra Falk (1971, p. 577), drinking is an unconditioned response to eating: Dry food elicits drinking by rats. This establishes its candidacy for entrapment by reinforcement.

The set of action patterns associated with foraging in laboratory animals has been exploited by psychologists to study learning for over a century (Boakes, 1984). As Hogan noted, “Most cases of operant conditioning do not involve the shaping of a response . . . , motor mechanisms that already exist become attached to specific central mechanisms” (1994, p. 448). Bolles (1983) conceived “of behavior, which we have always thought of as highly modifiable, as consisting of a lot of fixed packages, software programs as it were. These preformed packages can be shifted around from one application, or from one object, to another” (p. 43; see also Premack, 1965). Those packages may be differentially memorable—that is, differentially associable with different reinforcers and contexts (Timberlake & Lucas, 1989) and differentially opaque to those preceding them.

Timberlake (e.g., 2001) and Shettleworth (e.g., 1988) highlighted the role of niche-specific learning in the research paradigms of general-process theorists. Timberlake noted the Lamarckian coevolution of experimental apparatus and research programs and subsequently constructed behavior systems theory as a framework for integrating organisms’ niche-appropriate behaviors into the conceptual milieu of the experimentalist (Timberlake, 1994; Timberlake & Lucas, 1985, 1989). A similar approach was mooted by Cleaveland, Jäger, Rößner and Delius (2003). It is the motor programs discussed by these researchers, these action patterns, that we argue are entrained by reinforcement (Davis & Hubbard, 1972; Palya & Zacny, 1980).

Herrnstein (1966) called the variants of a response that are consistent with delivery of contingent reinforcement style. Much of the variance in the emission and recording of instrumental and adjunctive responses arises from the stochastic drift of response styles that are loosely coupled with reinforcement. A leverpress is a member of any class of action patterns that happens to impinge on a lever; variants compete both with other classes of behavior and with other styles of “leverpressing.” The next section provides evidence that the paragon adjunctive behavior, schedule-induced polydipsia (SIP), is an action pattern (drinking water in the vicinity of a meal; e.g., Penney & Schull, 1977) that is driven to exaggerated levels by proximity to reinforcement.

Details of context and procedure elicit the candidate action patterns that are variously captured by reinforcement. This is the case for patterns that are called adjunctive, terminal, sign tracking, and goal tracking (Boakes, 1977; Silva, Silva, & Pear, 1992) and those that are called operants, such as leverpressing (Graham & John, 1989) and pecking (Neuringer, 1970). Rats display a dozen different behavioral patterns close to the bar, which get winnowed by reinforcement (Gallo, Duchatelle, Elkhessaimi, Lepape, & Desportes, 1995; see also Stokes & Balsam, 1991). Brackney (2012) has suggested that this winnowing among competing forms may be the process that, pace Dickinson (1985), transforms actions into habits. The A Model of Competing Traces section will develop this conception: Variants, both within and across nominal classes of behavior, having differential memorability and, thus, differential susceptibility to reinforcement at different temporal proximities, compete for expression. In sum, no response occurs for the first time because of a reinforcer that might follow it. Understanding provenance of any behavior requires an ethological analysis, and this is as true of operants as of adjuncts.


Reinforcement is a premier construct in modern learning theory. Definitions abound, but all approximate this: Reinforcement is “the response-produced presentation of a positive reinforcer . . . or the increase or maintenance of responding that occurs as a consequence of this operation” (Catania, 1968, p. 344). This definition, like most, stresses a contingent relation between behavior and outcome. But how do animals know when a reinforcer is contingent upon their response? Which response? How do they “assign credit”? There are many clues that can support the inference, clues that echo Hume’s cues for causal inference (Dickinson, 2001; Killeen, 1981). One of the most potent is temporal proximity. This is sometimes called contiguity, but that term can mislead us into requiring that response and reinforcer be touching in time; temporal proximity is more general. Contingent events are often temporally proximate.

Does contingency affect conditioning beyond its role in arranging temporal proximity? Hume phrased this possibility in terms of “regularity of succession” of putative cause and effect; moderns would phrase it in terms of the relative probability of the reinforcer given the response, as compared with the base probability of the reinforcer. In I. J. Good’s (1961) causal calculus, the evidence for a causal relation between C and E—the tendency of C to cause E—is \( G=\max \left\{ \log \left[ p\left( \overline{E}\mid \overline{C} \right)/p\left( \overline{E}\mid C \right) \right],\,0 \right\} \). In a response-independent procedure, with C standing for a response and E a reinforcer, the numerator (the probability of no food given no response) and denominator (the probability of no food given a response) are equal, entailing zero evidence for a causal relationship. In a response-contingent procedure, the numerator (equal to 1) is always greater than the denominator, maximally greater on continuous reinforcement schedules. In classical conditioning, with C the perception of a conditioned stimulus (CS) and E the unconditioned stimulus (US), the numerator is 1—except where unsignaled USs denature the evidence or the animal is inattentive to that CS due to the presentation of other, more salient stimuli (e.g., overshadowing or blocking stimuli). With longer CSs or with partial reinforcement, there are more epochs in which the sight or sound of the CS is not conjunctive with the US, and so evidence once again decreases. So interpreted, G predicts behavior in many conditioning situations, including the trial/intertrial interval (ITI) effect in autoshaping (Gibbon, Baldock, Locurto, Gold & Terrace, 1977; Gibbon, Farrell, Locurto, Duncan, & Terrace, 1980).
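Good’s G can be computed directly. The probabilities below are hypothetical values chosen only to contrast response-independent and response-contingent procedures:

```python
import math

def good_G(p_no_e_given_no_c, p_no_e_given_c):
    """Good's (1961) evidence that C tends to cause E:
    G = max{log[p(~E|~C) / p(~E|C)], 0}."""
    return max(math.log(p_no_e_given_no_c / p_no_e_given_c), 0.0)

# Response-independent schedule: food is equally likely whether or not
# a response occurred, so numerator = denominator and G = 0.
print(good_G(0.9, 0.9))   # 0.0

# Response-contingent schedule: no response guarantees no food
# (numerator = 1); richer schedules shrink the denominator, raising G.
print(good_G(1.0, 0.5))   # log 2 ≈ 0.693 (intermittent reinforcement)
print(good_G(1.0, 0.1))   # log 10 ≈ 2.303 (near-continuous)
```

Note that G grows as the schedule approaches continuous reinforcement, matching the text: the causal evidence is maximal when every response is followed by food.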

There are, however, serious problems with such contingency accounts. On large variable ratio schedules, hundreds of responses go unreinforced for every one reinforced. This is a weak contingency generating strong behavior. Partially reinforced responses often persist in extinction longer than continuously reinforced ones. A more serious problem with the account, however, is that the correlation between a response and reinforcer can be exactly the same in two situations, one with a 5-s delay between response and outcome, and the other with no delay; yet while this severely affects behavior, it does not affect the metric. How large should the window be for “conjunction of response and reinforcement” on which probability is calculated and contingency inferred? Because there is a continuously decreasing impact of reinforcement with delay, there can be no one window. Despite admirable efforts (Baum, 1973; Gibbon, Berryman, & Thompson, 1974), the correlations in these accounts must remain metaphorical or become much more sophisticated than typically rendered (Tonneau, 2005).
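The window problem can be made concrete with a toy calculation; the event times and the 10-s window are hypothetical:

```python
def contingency(response_times, food_times, window):
    """P(food within `window` s after a response): the kind of
    windowed 'conjunction' count on which a simple contingency
    metric must be computed."""
    hits = sum(any(0 <= f - r <= window for f in food_times)
               for r in response_times)
    return hits / len(response_times)

responses = [0, 20, 40, 60]
immediate = [r for r in responses]      # food at 0-s delay
delayed = [r + 5 for r in responses]    # food at 5-s delay

# With a 10-s window, both schedules yield an identical contingency
# of 1.0, although the 5-s delay would support much weaker behavior.
print(contingency(responses, immediate, 10))
print(contingency(responses, delayed, 10))
```

Any window wide enough to register the delayed conjunctions erases the difference between the two schedules, which is precisely the objection raised above: a graded delay gradient cannot be captured by a single cutoff.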

Baum (2012) recently addressed this issue, noting that “the presence of a contingency requires comparison across two temporally separated occasions” (p. 110). Delays, such as that in the previous paragraph, “must affect the tightness of the correlation. A measure of correlation such as the correlation coefficient (r) would decrease as the average delay increased. Delays affect the clarity of a contingency.” But exactly what are the two temporally separated occasions, when behavior changes systematically over a continuum of delays? How is the “clarity of a contingency” mapped into r? If some correlation coefficient could be found that decreased uniformly with delay—r will certainly not do so, but Killeen’s (1994) coupling coefficient ζ might—to that extent it is de facto a measure of proximity.

Although Skinner emphasized the importance of the conditional (contingent) relationship as the defining character of operant responses (cf. Donahoe, 2006), in a landmark article best remembered for other reasons, he introduced a class of responses that are operants without having any contingent relation to reinforcement (Skinner, 1948). He named this process adventitious conditioning and argued that its mechanism was delayed reinforcement: Any response occurring before a reinforcer may be strengthened by that reinforcer—may be “assigned credit” for it (Staddon & Zhang, 1991)—absent the contingent relation. This recognition of operant conditioning absent contingency was a significant shift in Skinner’s theoretical position (Timberlake, 1995). The absence of a positive contingent relation between an adjunctive response and reinforcement is thus no impediment to defining adjuncts as operants.

Temporal proximity

Operations that affect contingency typically also affect relative proximity. Unpaired presentations of the US or reinforcer are, in fact, proximate to other stimuli or responses that can compete with the target response (Stout & Miller, 2007). Few deny the importance of proximity, and fewer the existence of a delay of reinforcement gradient representing the graded importance of proximity of stimulus (or response) and reinforcer. The question is not whether imperfect proximity works but, rather, just how much proximity is required for conditioning. Garcia, Revusky and others showed that causes could be quite remote from their consequences and still be conditioned to them (Garcia, McGowan, & Green, 1972; Revusky & Garcia, 1970). In these cases, a novel taste could precede illness by hours, resulting in subsequent aversion to that taste. It was important that other candidate causes not interpose between the stimulus or response and that reinforcer (Lett, 1975; Williams, 1975), since the latter, more proximate event could interfere with the acquisition of control by the former. Contingency matters because it protects proximity: “The role of conditionality [contingency] in protecting a given response from being displaced by the reinforcement of some other response—a response perhaps more prevalent in the animal’s repertoire—may be one of the most important factors [in performance]” (Jenkins, 1970, p. 101), a comment echoed by Mackintosh (1974, pp. 156–157). Absent close proximity, other stimuli or responses can intervene and capture associative strength. Strict contiguity is neither necessary for conditioning, nor is it sufficient for it (Rescorla, 1988); strict contingency is neither necessary for conditioning, nor is it sufficient for it (see the Adjuncts as Operants: Functions section).

Proximity is typically qualified by temporal order, with cause preceding effect and with response preceding reinforcer. There is good evidence, however, for conditioning when stimuli or responses succeed the reinforcer (cf. Arcediano, Escobar, & Miller, 2003; Spetch, Wilkie, & Pinel, 1981) and some evidence and arguments that those gradients might be symmetric (Arcediano, Escobar, & Miller, 2005; Thorndike, 1933, p. 55) or close to symmetric (Jenkins, 1943a, b, c). They may be differentially effective in supporting different kinds of action patterns appropriate to their temporal relation to the reinforcer (Silva & Timberlake, 2000; Silva, Timberlake, & Ozlem Cevik, 1998). We argue that it is not contiguity, not contingency, not predictive ability, but proximity that is central to conditioning. Adjunctive and operant responses may be maintained by proximity, without contiguity or contingency.

Light theories of reinforcement versus gravity theories of reinforcement

An implicit assumption in discussions of contiguity is that a US or reinforcer can affect only one thing—condition one CS or strengthen one response. We call this the light model: If a sunbeam falls on one object, then objects behind it are left in its umbra. Gravity has a different modus operandi; the sun that throws the beam also attracts the object, and it attracts objects behind it equally. There are no gravity shadows. Is the force of a reinforcer more like that of light or gravity?

The familiar metaphors of blocking and overshadowing suggest that reinforcement functions more like light. But those interference effects are seldom complete: neither blackness nor brilliance but, rather, penumbra. Reinforcers can strengthen a pattern of behavior by increasing the probability of its constituent elements directly and the pattern as a whole at the same time (Rachlin, 1988, 2000; Rescorla, 1972; Shimp, 1981), not requiring a handing off of associative strength by a process of behavioral chaining. Proximity can be understood in a molar vein: as a proximity in time between aggregates of responses and aggregates of reinforcers (Baum, 2005; Rachlin, 1994).

Catania (1971; Catania, Sagvolden, & Keller, 1988) has emphasized that reinforcers can affect behavior preceding the few most proximal responses, and a simulation based on that assumption efficiently delivers the results of most schedules of reinforcement (Catania, 2005b). Killeen (1994) incorporated those results into a general theory of reinforcement schedules and provided a metric that integrated proximity over the relevant portions of the gradient, called coupling. Because gradients may extend for many seconds, responses in context 1 can be affected by reinforcers in a succeeding context 2. Reinforcers contingent on response 2 in context 2 may strengthen responses other than the target response in the prior context 1: Reinforcers shine through contexts. This “following schedule effect” (Williams, 1981) may be the basis for both behavioral contrast and successive negative contrast. Both adjunctive and operant responses show analogous shifts under multiple schedules of reinforcement (Haight & Killeen, 1991; Hinson & Staddon, 1978). Salient events such as the delivery of food may partially erase memory of prior stimuli and responses, as a graded function of the magnitude of the event (Killeen & Smith, 1984). Perhaps the best simile for reinforcement is light; the stimuli and responses on which and through which it shines vary in their opacity, so that a strong reinforcer can affect many prior and concurrent events and can do so even when they may be signaled by distinct contexts.

Adjuncts as operants: Functions


SIP covaries with food motivation, not with water motivation (Brush & Schaeffer, 1974), although water has ancillary reinforcing properties in making the food pellet more digestible (Keehn & Burton, 1978; Roper & Crossland, 1982). The rate of acquisition of SIP depends on factors more relevant to eating than to drinking; asymptotic levels of drinking are affected by both types of causal factors (Roper & Posadas-Andrews, 1981). Drinking followed by delayed food and leverpressing followed by delayed food are acquired faster in more highly food-motivated organisms (Lamas & Pellón, 1997; Lattal & Williams, 1997). Schedule-induced wood chewing covaries with food deprivation level (Roper & Crossland, 1982). The rate of schedule-induced activity in pigeons is positively correlated with reinforcement magnitude (Osborne, 1978), as is the rate of operant responding. The covariation of SIP and food motivation provides evidence that SIP is reinforced primarily by food, not by water. Water deprivation (e.g., Roper & Posadas-Andrews, 1981) and water preloads (e.g., Porter, Young, & Moeschl, 1978) do not consistently alter the level of SIP. SIP is quite resistant to reduction via taste aversion procedures (Riley, Hyson, Baker, & Kulkosky, 1980). On the other hand, food deprivation (Roper & Nieto, 1979) and reinforcement frequency (Roca & Bruner, 2011b) and magnitude (e.g., Flory, 1971; Roca & Bruner, 2011a) all modulate the level of SIP. In summary, schedule-induced drinking is modulated more by the incentive value of the ensuing reinforcer than by the incentive value of water. Schedule-induced drinking functions more like leverpressing reinforced by food than like drinking induced by thirst.


Shettleworth and Juergensen (1980) have shown that those action patterns of hamsters that do not increase in rate when reinforced (“inner-directed” ones such as face washing, scent marking, and scratching the body) also do not occur as adjuncts on periodic response-independent schedules. Conversely, those patterns that do show increases when reinforced (“outer-directed” ones such as rearing, scrabbling, and digging) also appear adjunctively on schedules of periodic noncontingent reinforcement. This supports our hypothesis that adjunctive behaviors may be sustained by reinforcement and complements it with the hypothesis that behaviors that cannot be reinforced cannot appear as adjuncts.


The rate at which SIP is acquired when reinforcement is delayed falls within the range of the rates at which leverpressing and keypecking are acquired at similar delays. The slow acquisition is not simply a matter of accommodation to the schedule of feeding, since even after extended pretraining with that feeding schedule, an acquisition function for SIP ensues (Reynierse & Spanier, 1968; Williams, Tang, & Falk, 1992). The left panel of Fig. 1 shows the mean rate of schedule-induced drinking by Wistar rats on schedules of periodic feeding when food was delayed by 15 or 30 s from the last touch of the drinking spout (from López-Crespo, Rodríguez, Pellón, & Flores, 2004). Similar data have been reported by Cope, Sanger and Blackman (1976). In all cases, acquisition had not reached asymptotic levels by 20 sessions. Now compare these data with the median increase in response rates of three Sprague-Dawley rats trained to leverpress with 15- and 30-s resetting delays by Lattal and Gleeson (1990, right panel of Fig. 1).
Fig. 1

Left panel: Average licking rates for 10 male Wistar rats when food pellets were delayed by 15 or 30 s from their last lick (López-Crespo et al., 2004). Right panel: Median response rates of 3 Sprague-Dawley rats trained by Lattal and Gleeson (1990) to press a lever for food pellets delayed by 10 s from the last response and 3 trained with a 30-s delay of reinforcement

Although the ordinates for these different responses are incommensurate, it is clear that none of the curves had reached asymptote by the end of the data collection. It is also clear that rate of acquisition decreases with the delay to food for both kinds of response, consummatory and instrumental. B. A. Williams (1999) also found that rats’ acquisition of leverpressing under unsignaled 30-s delay was robust but required at least 20 sessions to approach asymptote. Amsel and Work (1961) have shown similar acquisition curves for general activity in rats and demonstrated a marked increase in rate (an “FI scallop”) through the course of the interpellet interval during a long prefeeding CS (Amsel, Work, & Penick, 1962). Even when the scheduled reinforcer is not delayed, acquisition of SIP and leverpressing follows similar time courses.

Contingency control

Contingent efficacy of responses is often supposed to be necessary for operant conditioning. For the data in the left panel of Fig. 1, nothing was required of the rats but inaction; for the data on the right, a reinforcer was contingent on a target response—leverpressing. Both kinds of responses were acquired along similar time courses, showing similar parametric effects of delay. There is no evidence here that the contingency on leverpressing speeded its acquisition.

Contingencies work, when they do, because they guarantee a minimum proximity. If proximity to reinforcement is important, then when licking a waterspout is required for access to delayed reinforcers, licking should be acquired faster. The relevant experiment was conducted (Pellón, Bayeh, & Pérez-Padilla, 2006): A pellet of food was programmed to occur 30 s after the first 20 licks during each interfood interval for 8 master rats, to which 8 other rats were yoked. By the 16th session, licking had increased substantially (from 4 to 35 licks/min) for the masters and marginally (from 4 to 10 licks/min) for the controls. Schedule-induced drinking, like leverpressing, can be increased with contingent reinforcement, even at a delay. We argue that it is not the contingency but the proximity that contingencies guarantee that is important. When contingent responses are remote from their reinforcers, response latency increases uniformly with that remoteness (as in schedules of response-initiated delay; Shull, 1970) and does so despite invariant contingency. It is their engineering of temporal proximity that makes contingencies effective, not informativeness about the impending reinforcer or causal relatedness to it.

Conversely, licking decreases when it delays food, and this decrease is greater than that seen in yoked animals (Lamas & Pellón, 1995). Moran and Rudolph (1980) found that 6 rats receiving either 1- or 4-min lick-contingent delays during a periodic food schedule did not develop SIP, whereas their 6 yoked partners drank copiously. Lick-contingent delays of 10 and 30 s did not prevent the development of SIP. Pellón and Blackman (1987) showed that a 10-s delay would reduce, but not eliminate, SIP. SIP can develop de novo with moderate delays of reinforcement—delays on the same order as those demonstrated for leverpressing by Sutphin, Byrne and Poling (1998), who were able to train differential leverpressing with 8-, 16-, and 32-s, but not 64-s, delays. Thus, delays of reinforcement affect adjuncts and operants similarly, and contingencies, which guarantee that those delays are the minimal proximity the animals experience, operate similarly.

Path dependency

Behavior is path dependent. Histories of operant reinforcement for low response rates versus high rates give rise to differential effects in subsequent operant performance under fixed interval (FI) schedules (e.g., Bickel, Higgins, Kirby, & Johnson, 1988). Development of schedule-induced drinking is likewise a function of behavioral history. Tang, Williams and Falk (1988) trained a group of rats with an FI 1-min schedule of food presentation (with water unavailable in the experimental chamber) for 17 weeks. Another group of rats was maintained in their home cages during this period. Both groups were then given an FI 1-min schedule with water freely available. The rats that had the reinforcement schedule without water took longer to acquire schedule-induced drinking and failed to attain the same level of ingestion as the group without prior conditioning history. The degree of schedule-induced drinking is also lower when the previous history included access to an activity wheel (Williams et al., 1992). Johnson, Bickel, Higgins and Morris (1991) trained rats on a DRL 11-s schedule or on an FR 40 schedule. When switched to an FI 15-s schedule, those with a history of DRL developed polydipsia, and the operant response rate decreased in direct proportion to the amount of available water. Rats with FR schedule experience did not develop polydipsia. Behavior is path dependent: When a path encourages (or discourages) a pattern of actions, those actions will be more or less available to capture by reinforcement (or elusive to it). This is equally true of adjunctive responses and of operant responses. When proximity is forced by instituting a contingent relation, these histories become less important in the ensuing behavioral trajectories. That is why operant conditioning is so important.

Stimulus control: Marking

When the instrumental response is long-delayed from reinforcement, it suffers severe decrements. Making the response more memorable by marking it with a brief stimulus change can greatly enhance conditioning (Lieberman, Davidson, & Thomas, 1985; Lieberman, McIntosh, & Thomas, 1979; Schaal & Branch, 1990). Williams (1975, 1991) nicely showed both effects of suppression of conditioning (when a signal immediately preceded a delayed reinforcer) and enhancement of conditioning (when the same signal immediately succeeded the target response). This earmark of instrumental conditioning is also an earmark of schedule-induced drinking: Patterson and Boakes (2012) found substantial enhancement of drinking when a 100-ms 95-dB tone occurred after each lick. A similar marking effect on schedule-induced licking was obtained by Pellón and Blackman (1987) on an established pattern of drinking: When licks were followed by 10-s blackouts, schedule-induced licking increased, with the signal elevating already substantial levels of licking. Moran and Rudolph (1980) found strong marking effects, more for light stimuli than for tones. Their stimuli were continued until reinforcement, so this also could be interpreted as conditioned reinforcement of drinking, another earmark of instrumental conditioning.

We also have found evidence for the efficacy of marking in enhancing SIP. Eight experimental and eight control Wistar rats, food deprived and maintained at 85 % of their free-feeding weight, received a pellet of food according to an FT 60-s schedule for 60 pellet presentations. The first lick in each interfood interval was marked by a 1-s 80-dB white noise, as were all subsequent licks after a 5-s absence of licks. Control rats were yoked to experimental rats for delivery of the noise, but its timing was independent of the behavior of the yoked rats. By the 10th session, the effect size of marking on response rates exceeded 0.6 (Cohen's d) and was maintained at that level for the next and last 10 sessions (see Fig. 2). Thus, this powerful marking effect on operant responses is equally manifest on schedule-induced drinking, the paragon adjunctive response.
Fig. 2

Total number of licks by control rats and rats with licks marked by white noise, at the start and end of conditioning. Error bars represent standard errors of the means

Temporal locus

On periodic food schedules, SIP is a "postpellet" phenomenon, whereas instrumental responses are "prepellet." How can SIP be postpellet, if proximity to reinforcement increases the strength of association? Some of the difference is due to the DRO contingencies often imposed on adjuncts or to their displacement by the required operant response, as on FI schedules, or by goal-tracking responses (Costa & Boakes, 2009). Furthermore, drinking is stimulated by the consumption of dry food and may compete poorly with other responses proximal to reinforcement absent elicitation by food. When water is available only in the first 15 s of a 120-s FT, SIP develops over sessions along its typical trajectory (similar to that shown in Fig. 1); but when available only during the last 15 s of the interval, it develops more slowly and to a lower level (Álvarez, Íbias, & Pellón, 2011). Conversely, once established—but not permitted to pervade the interval—SIP increases with proximity to reinforcement, as in a fixed-interval "scallop" (Avila & Bruner, 1994). Other adjuncts take typical places in the interval (Roper, 1978), places that depend on the interval's length (Silva & Timberlake, 1998). Signals of reinforcement may function in a similar manner: A CS signaling forthcoming noncontingent food elicits a pattern of pecking that changes through the signal like other adjunctive responses (Lattal & Abreu-Rodrigues, 1997; Osborne & Killeen, 1977). These heterogeneous observations require a more explicit model of the proposed processes to draw out common themes and reduce the demand on our working memory.

A model of competing traces

A particular leverpress may be followed by food, but it cannot increase in frequency, since it is a unique event. Only other, similar responses can subsequently increase in frequency: Only a class of responses is amenable to reinforcement (Catania, 1973). That class is a proper subset of all the responses occasioned by reinforcement. They are the ones that the experimenter has engineered to be counted with the same meter. Different meters—drinkometers, activity platforms, running wheels, visual observations—capture different behaviors occasioned by reinforcement. These compete for expression and, thus, display different temporal patterns. The following model of that process borrows from Staddon’s (1977) account of behavioral states and competing causal factors, from Timberlake’s (1993, 1994) behavior systems theory, and from the ideas of many of the researchers cited above. It is complementary to the work of Baum (2012). It applies equally to adjunctive and operant behavior.

Ainslie (1992, 2001) and Livnat and Pippenger (2006) described self-control (or the lack of it) in the face of delayed rewards as the result of competition between various “interests.” Some of the interests are more future oriented—more prudent—others more impulsive. This is the scenario that we propose for classes of responses, but our “interests” are behavioral classes, not mental negotiators. Consider the possibility of different degrees of associability of responses with reinforcers, as demonstrated repeatedly in the literature of the 1970s and 1980s (e.g., Killeen, Hanson, & Osborne, 1978; Seligman, 1970; Shettleworth & Juergensen, 1980). Much of the excitement in the field, anticipatory to and contemporary with the research on adjunctive behavior, has been concerned with such differential associability, discussed under the rubric of constraints on conditioning (Breland & Breland, 1961; Seligman, 1970), with a retrospect taken by Domjan (1983). This research is part of the canon, as are demonstrations of different delay-of-aversion gradients for different aspects of the CS to different punishers (Garcia et al., 1972; Revusky & Garcia, 1970). Flavors associated with nausea have shallower gradients than do the shapes of the cups that hold those flavors (Revusky & Parker, 1976). The present model of reinforcement and competition invokes both of these facts and extends them to responses: Differential associability and differential decay of the potential for association over time.

It is no leap of the imagination to conceive of both interim responses, such as general activity or drinking, and terminal responses, such as hopper orientation, as being selected by reinforcement—not just with differential associability, but with different time courses of associability. Assume for simplicity that the probability of associating a response class—drinking, pacing, leverpressing, or focal search—with subsequent reinforcement decays at a constant rate, but at a different rate for different classes of behavior (Fig. 3).
Fig. 3

The probability density for associating three different classes of behavior with reinforcement at points in time through the interval (after Catania, 2005a)
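The scheme in Fig. 3 can be sketched numerically. A minimal illustration, assuming exponential gradients a(t) = α·exp(−λt) over the time t separating the response from reinforcement; the parameter values below are hypothetical, chosen only so that the gradients cross near 5 and 17 s, as in the figure:

```python
import math

def associability(alpha, lam, t):
    """Trace strength of a response class at delay t before reinforcement,
    assuming exponential decay: a(t) = alpha * exp(-lam * t)."""
    return alpha * math.exp(-lam * t)

# Hypothetical (alpha, lambda) values for three classes of behavior;
# terminal responses decay fastest, interim responses slowest.
classes = {
    "terminal": (1.0, 0.30),
    "facultative": (0.37, 0.10),
    "interim": (0.095, 0.02),
}

def dominant(t):
    """The class with the strongest trace at delay t."""
    return max(classes, key=lambda name: associability(*classes[name], t))

# Islands of dominance: terminal near reinforcement, interim far from it.
print([dominant(t) for t in (2, 10, 30)])
# -> ['terminal', 'facultative', 'interim']
```

Nothing hinges on the particular numbers; any set of gradients with differing slopes partitions the delay axis into the same kind of successive islands of dominance.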

The traces in Fig. 3 correspond to classical delay-of-reinforcement gradients (Kwok, Livesey, & Boakes, 2012), and are duals of complementary curves showing the decline of memory traces of the class as a function of time since their occupancy (Killeen, 2005, 2011). There is evidence for differences in associability, both in absolute terms and as a function of delay. Johansen and associates showed different delay gradients for two strains of rats (Johansen, Killeen, & Sagvolden, 2007). Avila and Bruner (1994) measured both leverpressing and drinking at a spout that was introduced for 16-s periods at different points in the interval. Figure 4 shows gradients similar to those that might result from the forces diagrammed in Fig. 3. An important difference is the decrease in drinking near the end of the interval, where leverpressing competes for available time (Reid, Vazquez, & Rico, 1985). Figure 3 shows the selective forces, Fig. 4 the results of those forces on behaviors competing for expression through the course of the interval.
Fig. 4

Median consumption of water (circles) and number of leverpresses (squares) of 3 rats over the course of a session of periodic feeding. Water was available only during the 16 s around which each circle is centered. The data for leverpressing was taken from the condition in which water occurred during the first 16 s. Data are from Avila and Bruner (1994)

The steepest gradient in Fig. 3 is dominant during the epoch within 5 s of reinforcement. Other reinforced responses that cannot be executed simultaneously must compete for expression. Interim responses are action patterns with longer time constants than terminal responses, so they dominate performance during the early and middle parts of the interval, with terminal responses out-competing them toward the end of the interval. Interim responses have long courses of associability, terminal responses shorter courses. Some actions, such as area-restricted search and SIP, are given additional help by their innate association with the just-consumed reinforcer and, therefore, are favored for an early start in the interval.

Sessional consumption

We have premised that the memories of different classes of responses may decay at different rates and, thus, have different delay-of-reinforcement gradients associated with them (Garcia et al., 1972; Kwok et al., 2012; Revusky & Garcia, 1970). At short delays—near the terminus of the interval—responses with steep gradients are able to displace the interim behaviors supported by shallower gradients. In Fig. 3, at the shortest times to reinforcement (t < 5), the activities characterized by the steepest curve dominate the intermediate class, and that dominates the earliest class out to 17 s. This is also seen in Fig. 4, where leverpressing competes with drinking during the last 16 s of the interval. If reinforcement always occurred within 5 s, the terminal behaviors depicted in Fig. 3 would always be most strongly associated with reinforcement. If reinforcement occurs at longer intervals, interim behaviors can receive some strengthening. Such a transition from feeder-directed to instrumental to more general behavior as the interval lengthens was noted by Innis, Simmelhag-Grant, and Staddon (1983), among many others.

By setting the gradients of associability equal to one another, we may solve for the time before reinforcement at which they cross. That is,
$$ t_{a_i=a_j} = \frac{\ln\left( \alpha_j / \alpha_i \right)}{\lambda_j - \lambda_i}, \qquad \lambda_i < \lambda_j $$

Class I will dominate before this boundary, Class J after, and Class K after that; in Fig. 3, the crossings are near 5 and 17 s. When a context supports a heterogeneous repertoire, islands of dominance of one behavior over others will arise along the timeline. When the time to the reinforcer is less than the smallest boundary, the response associated with the steepest gradient, some form of goal tracking (Boakes, 1977; Costa & Boakes, 2009), will dominate. If the interfood interval approximates that crossing, then such fast-course responses will dominate all others, which will compete at a disadvantage. As the interfood interval exceeds the next crossing time, the behavior associated with the slower courses (smaller lambdas) will enter its island of dominance. As the interval is further lengthened, yet other classes of behavior will enter seriatim. If the slopes of the gradients are approximately equal, the gradients may never cross within the studied context, and one behavior may always dominate or be dominated, depending on the values of associability, or they may alternate stochastically, contributing to the variability seen in repertoires.
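Eq. 1 follows from setting the two exponential gradients equal, α_i·exp(−λ_i t) = α_j·exp(−λ_j t), and solving for t. A quick numerical check, with hypothetical α and λ values:

```python
import math

def crossing_time(alpha_i, lam_i, alpha_j, lam_j):
    """Time before reinforcement at which two exponential associability
    gradients cross (Eq. 1); requires lam_i < lam_j."""
    assert lam_i < lam_j
    return math.log(alpha_j / alpha_i) / (lam_j - lam_i)

def a(alpha, lam, t):
    return alpha * math.exp(-lam * t)

# Hypothetical values: a steep "terminal" class J, a shallow "interim" class I.
alpha_i, lam_i = 0.37, 0.10
alpha_j, lam_j = 1.0, 0.30

t_x = crossing_time(alpha_i, lam_i, alpha_j, lam_j)
print(round(t_x, 2))  # -> 4.97

# Closer to reinforcement than the boundary, the steep class dominates;
# beyond it, the shallow class takes over.
assert a(alpha_j, lam_j, t_x - 1) > a(alpha_i, lam_i, t_x - 1)
assert a(alpha_j, lam_j, t_x + 1) < a(alpha_i, lam_i, t_x + 1)
```

The asserted ordering is the island structure described in the text: each boundary hands dominance from a faster-decaying class to a slower one.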

Given those islands of dominance, the extent of the interval over which a class is most associable with the reinforcer is a linearly increasing function of the time until reinforcement, t, until the time at which another behavior becomes more associable. Adopt the convention that the steepness of the gradients increases with their indexical, so that \( \lambda_h < \lambda_i < \lambda_j < \lambda_k < \lambda_l \), corresponding, say, to the behaviors called postprandial, interim, facultative, terminal, and goal tracking. Class I will be subordinate to J until the time to reinforcement exceeds the I–J boundary; its opportunity for association thereafter increases linearly with t, as \( t - t_{a_i=a_j} \). It continues to increase until the next, more remote adjunct gains supremacy at \( t_{a_h=a_i} \) and then decreases. Consistent with this hypothesis, evidence for a linear increase in SIP was reported by Falk (1966): "As FI length was increased, the degree of polydipsia increased linearly to a maximum value." Flory (1971) and others (Segal, Oden & Deadwyler, 1965) replicated the result.

The extent of the interval over which trace strength for SIP is dominant—most associable with the reinforcer—predicts the total amount of drinking. The proportion of the interval during which the response is dominant predicts the rate of drinking. For a feeding period of T, that proportion, P(i,T) is
$$ P(i,T) = \begin{cases} 0, & T < t_{a_i=a_j} \\[4pt] \dfrac{T - t_{a_i=a_j}}{T}, & t_{a_i=a_j} < T < t_{a_h=a_i} \\[4pt] \dfrac{t_{a_h=a_i} - t_{a_i=a_j}}{T}, & t_{a_h=a_i} < T \end{cases} $$
These equations state that (2.1) if the interfood interval (T) is less than the time at which the delay-of-reinforcement gradient for Class I is dominant over Class J, Class I will not occur [P(i,T) = 0] or will eventually cease to occur; (2.2) if T is greater than this value, the proportion of Class I will increase with further increases in T; (2.3) but if T exceeds the crossing with Class H, the proportion will decrease with further increases in T. The longer a class is dominant, the greater the associative strengthening with reinforcement. This is demonstrated in Fig. 5. As long as rates are below their ceiling, it is our hypothesis that rate of responding will be proportional to P(i,T). (This hypothesis holds only for stable performance; during acquisition, we expect the differences in the area under the gradients to play a role in the speed of acquisition.)
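Eq. 2 is easy to compute. A sketch, with the two boundary times passed as parameters; the values used below are hypothetical, matching the crossings at 10 and 20 described for Fig. 5:

```python
def proportion_dominant(T, t_lower, t_upper):
    """Proportion of an interfood interval T during which a response class
    is most associable with reinforcement (Eq. 2). t_lower is the boundary
    below which a faster class dominates; t_upper is the boundary beyond
    which a slower class takes over."""
    if T < t_lower:
        return 0.0
    if T < t_upper:
        return (T - t_lower) / T
    return (t_upper - t_lower) / T

# With boundaries at 10 and 20, the proportion rises from zero, peaks at
# T = 20, then declines inversely with T, as in Fig. 5.
print([round(proportion_dominant(T, 10, 20), 3) for T in (5, 15, 20, 40, 80)])
# -> [0.0, 0.333, 0.5, 0.25, 0.125]
```

The rise-then-fall shape is the bitonic function of interfood interval that the text fits to Flory's (1971) polydipsia data in Fig. 6 (there multiplied by a scale factor k).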
Fig. 5

For periods (interfood intervals, or interstimulus intervals in general) less than the point of the first gradient crossing—here, at 10—the second fastest class of responses will not occur, as stated in the first line of Eq. 2. At longer periods, the proportion of time that it is dominant grows as stated in the second line in Eq. 2. When the next class becomes dominant—here, at 20—the proportion decreases inversely with T as stated in the third line in Eq. 2

In an early parametric study of SIP, Flory (1971) increased the fixed interval (T in Eq. 2) from 2 to 480 s, measuring the consequent levels of drinking. Figure 6 shows the resulting average session intake of his 3 rats. The curve is proportional to P(i,T), Eq. 2 multiplied by a scale factor k. The conformity to the data is uninspired, possibly because Flory ran an ascending series for T with an unspecified number of sessions per condition and interleaved the one- and two-pellet conditions. Nonetheless, these figures show that the hypothesized trend is found in the data. The crossover points (at the x-intercept and mode) were substantially later for the two-pellet condition, suggesting that increased magnitude of reinforcement exerts its influence over a longer range by increasing αs or decreasing λs.
Fig. 6

The median rate of schedule-induced drinking by 3 rats receiving one (top panel) or two food pellets for leverpressing on fixed interval schedules. The curves from Eq. 2 had crossover times of 1.4 and 12 s with constant of proportionality k = 2.0 (top panel) and 7 and 43 s with k = 1.5 (bottom panel). Note the logarithmically transformed x-axis

Other adjunctive responses also show an increase up to interfood intervals of 1 or 2 min, followed by a decrease. Figure 7 gives some examples for schedule-induced chewing of wood blocks by rats and schedule-induced attack against targets by pigeons; additional studies of schedule-induced attack, all showing a form similar to those seen in Fig. 7, are reviewed by Looney and Cohen (1982).
Fig. 7

The relative incidence of schedule-induced chewing by rats (top panel) and schedule-induced attack by pigeons (bottom panel). The curves from Eq. 2 had crossover times of 20 and 55 s with constant of proportionality k = 0.23 (top panel) and 25 and 114 s with k = 0.93 (bottom panel)

If the slow-course responses such as drinking or chewing are given a relative advantage early in the interval by their temporal remove from reinforcement, aperiodic (random interval) schedules, which typically contain many more short interfood intervals, should favor the fast time course instrumental responses and support substantially less adjunctive responding. This seems to be the case (Millenson, Allen, & Pinker, 1977; Plonsky, Driscoll, Warren, & Rosellini, 1984). Consistent with this inference, Harris, Gharaei and Pincham (2011) demonstrated that response rates during a CS track the relative frequencies, not the marginal probabilities, of reinforcement. Gaps in such irregular distributions are likely to be filled with adjunctive responses. Reid and Staddon (1982) summarized their analysis of SIP: "All subjects seemed to follow a simple rule: during any stimulus signaling an increase in the local probability of food delivery within a session, engage in food-related behavior to the exclusion of drinking" (p. 1). It is during such stimuli that the faster-course general and focal search responses are most likely to be reinforced and, thus, able to dominate the slower course adjunctive responses, which fill the gaps. On the other hand, Clark (1962) noted that the shorter intervals on a random interval schedule could help to entrain postprandial drinking and, along with placement of the spout near the feeder, could foster the initiation of SIP. This effect was essentially replicated in Patterson and Boakes (2012). All response classes benefit from close proximity to reinforcers, even if slow-track interim responses eventually get displaced to gaps by faster-track ones.

The above analyses hold stochastically. The relative supremacy of memories for different actions at different removes from reinforcement will often be slight, and vicissitudes of history and attention will easily sway behavior in the particular. Once one action has commenced, it may run through an epoch normally commanded by other actions and, by happenstance proximity to reinforcement, gain momentary advantage.

Distribution in time

Any action patterns that occur within a reinforcement schedule, whether elicited by the context or feeding schedule, appearing randomly, or shaped by the experimenter, will be differentially reinforced if they fall within windows of opportunity appropriate to that class of responses, bounded by Eq. 1. Different classes of behavior may share similar constants; then which one is recorded depends on the happenstance of which, by chance, was first selected and strengthened, to the disadvantage of others in its cohort. Accordingly, Innis et al. (1983) found that through the middle of FT 12-s intervals, two pigeons would pace the back wall, while another would push into a corner and flap its wings. Anderson and Shettleworth (1977) found different action patterns emerging and receding over the course of conditioning. With the competition between traces often being close, chance and history play an important role in which members of a class prevail. Once fate smiles on one action pattern, its emission, followed (even remotely) by reinforcement, will increase its frequency. Moore (1973) calls this a “Pavlovian trapping mechanism, which leads to instrumental learning” (p. 175), a process also proposed by Bindra (1972).

Conversely, the same measured response may consist of different forms, each with different time courses of associability. Gallo and associates (1995) found that on schedules of continuous reinforcement, rats emitted 14 actions around a lever, with the target leverpressing comprising three of those disparate actions. Some of the variance in the time course of measured responding may be due to the stochastic engagement of these different forms of responding. These styles of responding depend in part on the nature of the reinforcer (Kohman, Leising, Shaffer, & Higa, 2006; LaMon & Zeigler, 1984). Because of this dependency, different reinforcers can be discounted at different rates (Smith & Renner, 1976; Staddon & Zhang, 1989), if the stylistic differences in the instrumental responses have different gradients. It is not so much that the reinforcers themselves are “discounted” but, rather, that they differentially associate with different classes of behavior with different memorability. Changes in CS or trace duration elicit different forms of the conditioned response (Holland, 1980; Silva & Timberlake, 1997), perhaps because those variants have different gradients and, thus, fall under the sway of Eq. 2.

Reinforcement schedules force arbitrary instrumental responses into a privileged position, displacing other responses, such as wall-oriented motions, that might otherwise drift in to supplant them (Davis & Platt, 1983). When an instrumental response is required for reinforcement, behaviors such as hopper approach will be conditional on it for their success and will generally be displaced by the instrumental response, even if that is not intrinsically a terminal response (i.e., not one with a fast time course, corresponding to a large value of λ). But this displacement can be precarious, as was noted by Breland and Breland (1961). Where the traditional FI contingencies are maintained in force, the temporal diffusion of the instrumental response is propagated back through the interval, as in the filled squares of Fig. 4. If the forced contiguity with food is relaxed, instrumental responses with slower time courses will drift backward to their plateau of dominance (Lattal & Abreu-Rodrigues, 1997), leaving the proximal field to fast-course responses such as goal tending. Timberlake and Lucas (1985) made analogous observations for other reinforced action patterns. When pigeons are restrained near the key, decreasing the strength of competing behaviors, autoshaped keypecking is accelerated (Locurto, Travers, Terrace, & Gibbon, 1980). The selective competitive advantage during different epochs is, in all likelihood, the mechanism that generates instinctive drift and other forms of “misbehavior” (Boakes, Poli, Lockwood, & Goodall, 1978; Breland & Breland, 1961).

Different classes of behavior are differentially affected by reinforcement at different delays and compete for dominance at those delays. Reid and Dale (1985) found an almost perfect negative correlation between rates of drinking and of hopper orientation from one moment to the next through the course of an interval, results consistent with the data of Osborne (1978) for pigeons and with the account given here. Instrumental or goal-oriented behaviors do not always out-compete adjunctive behaviors: Powell and Curley (1976) showed how adjuncts, such as scratching and biting, would displace leverpressing in gerbils as fixed-ratio requirements were increased. Under gradients such as those shown in Fig. 3 and the appropriate interfood interval, there is a longer period of time when more remote classes of behaviors are cumulatively dominant over the fast-course ones: More replicates from the same response class can catch the effect of reinforcement. Not only do motor responses compete for expression and association; perceptual responses do so as well. What mediates success in the latter case is attention: Seeing or hearing or palpating a stimulus is itself a response, it generates a trace, and those traces that are present at the same time as a reinforcer are strengthened.

As temporal learning evolves, the times at which an animal emits an operant response, versus an interim or goal response, will vary stochastically. Since competing responses are occasionally captured by a closer proximity with reinforcement or released by a series of unreinforced emissions of these responses, there will be a dynamic rhythm to the behaviors that are ultimate, penultimate, and antepenultimate in the sequence. Once engaged, a class may run through the epochs in which other action patterns might have emerged (Costa & Boakes, 2007; King, 1974b; Lucas et al., 1988; Reid & Dale, 1985).

A review of Fig. 3 shows that the difference in the associative strength of the curves is much greater when closer to reinforcement. At more remote times, the curves are much closer to each other. This suggests that behavior will be more canalized close to reinforcement and more labile remote from reinforcement, where competitive advantage is marginal. There is good evidence that this is the case (e.g., Cherot, Jones, & Neuringer, 1996; Gharib, Gade, & Roberts, 2004; Stahlman, Roberts, & Blaisdell, 2010).

Wearden and Lejeune (2006), Tonneau (2005), Catania (2005b), and Killeen (1994), among others, have suggested that the decaying traces from every response before reinforcement may be affected by the eventual reinforcer and have used that assumption to effect predictions of schedule effects. What are the shapes of these traces? One clean example is given in Fig. 8, which displays the average rates of learning of groups of rats conditioned with various delays of reinforcement, from Wilkenfield, Nickel, Blakely, and Poling’s (1992) Fig. 7. To capture these data with the above analysis, we integrate to find the area under one of the traces in Fig. 3, from the stipulated delay d, off to the left. The integration permits all instances of the response before d to be strengthened by the distant reinforcer. This area is $ce^{-\lambda d}$, where c is a constant of integration. (If “off to the left” is not very far, either because of closely spaced trials or reinforcers [Killeen & Smith, 1984] or because only a single response is permitted, then a difference of exponentials results, giving an extended version of this account [Killeen & Sitomer, 2003] that remains an exponential function of d.) Assigning values of c = 5.2 and λ = 0.080/s draws the curve through the data.
Fig. 8

Rate of acquisition of leverpressing by groups of rats learning under different fixed delays, measured as average response rate during the first session. Data are from Wilkenfield et al. (1992); the curve is an exponential gradient
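The gradient just described can be sketched numerically. The following is a minimal illustration, not the authors' code; the function name is ours, while the parameter values are those reported in the text (c = 5.2, λ = 0.080/s):

```python
import math

def acquisition_rate(d, c=5.2, lam=0.080):
    """Predicted first-session response rate after a fixed
    reinforcement delay of d seconds: the area under the
    exponential trace to the left of the delay, c * exp(-lam * d)."""
    return c * math.exp(-lam * d)

# The gradient falls exponentially with the delay of reinforcement,
# as in the curve drawn through the Wilkenfield et al. (1992) data.
for d in (0, 8, 16, 32):
    print(d, round(acquisition_rate(d), 2))
```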

These traces are exponential, as postulated, and there are good reasons why this simplest, maximum-entropy distribution should characterize memorial processes (Johansen et al., 2009). However, that particular form is not a necessary part of the theory. Kwok and associates (2012) developed a trace-decay Rescorla–Wagner account of taste aversion learning, using similar exponential gradients, with a similar demurral. Indeed, once animals become habituated to a delay, signals of that delay may act as conditioned reinforcers (Sosa, dos Santos, & Flores, 2011). Integration of the exponential trace over that delay gives an average strength that is a hyperbolic function of the delay that they signal (Killeen, 2011). Examples are shown in Fig. 9. Eight rats were trained to leverpress with water available; then an FT 30-s schedule was initiated, with no response requirement for reinforcement, except that food would not be delivered within d s of either a leverpress or a lick. The average data shown in Fig. 9 are representative of individuals: Drinking has a much shallower gradient than does leverpressing. The curves are the predicted conditioned reinforcement strength, $(1 - e^{-\lambda t})/t$, which is essentially congruent with the inverse linear function called “hyperbolic.” Hyperbolic gradients have much longer tails than do their mother exponentials. This may be part of the reason that response classes that cannot be established under a particular delay can yet be maintained at that delay once acquired.
Fig. 9

Proportional rate of responding (drinking or pressing) when not required for reinforcement and discouraged by delays corresponding to the abscissae. Note that drinking has a much shallower gradient than does leverpressing. Data are from Pellón and Pérez-Padilla (2013); curves are from a model of conditioned reinforcement
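The hyperbolic form of the curves in Fig. 9 follows from averaging the exponential trace over the signaled delay d; a one-line derivation:

$$ \bar{S}(d) = \frac{1}{d}\int_{0}^{d} e^{-\lambda t}\,dt = \frac{1 - e^{-\lambda d}}{\lambda d} $$

This matches the $(1 - e^{-\lambda t})/t$ form given above up to the constant factor $1/\lambda$, which is absorbed into the scaling of the curves.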

It is yet to be determined whether marking the response increases the associability α or decreases the rate of memory decay λ. If the latter, markers will be most effective at long delays; if the former, they will be equally effective at any delay. For a recent review of the effects of delay of reinforcement on conditioning, see Lattal (2010); for a nice procedure to measure such gradients, see Reilly and Lattal (2004).
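The two possibilities make different predictions because the trace is exponential. With trace strength $S(d) = \alpha e^{-\lambda d}$, a marker that raises α to α* confers the same proportional benefit at every delay, whereas one that lowers λ to λ* confers a benefit that grows exponentially with delay:

$$ \frac{\alpha^{*} e^{-\lambda d}}{\alpha e^{-\lambda d}} = \frac{\alpha^{*}}{\alpha}, \qquad \frac{\alpha e^{-\lambda^{*} d}}{\alpha e^{-\lambda d}} = e^{(\lambda - \lambda^{*}) d} $$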

Subtle effects of competition

In autoshaping paradigms, goal tracking will displace sign tracking for CSs of short duration (Boakes, 1977; Gibbon et al., 1980) or for CSs distant from the hopper (Silva et al., 1992), consistent with Eq. 2. In typical autoshaping paradigms, the CS offsets with food delivery (delay conditioning); if the same CS is offset sooner (trace conditioning), it may signal an epoch that is dominated by different responses than those dominant in delay conditioning and different from the ones that the experimenter is measuring (Brown, Hemmes, & Cabeza de Vaca, 1997; Brown, Hemmes, Cabeza de Vaca, & Pagano, 1993; Killeen, 1975, Fig. 6; Williams, Johns, & Brindas, 2008). Costa and Boakes (2009) showed that changes in context could differentially affect rates of sign tracking versus goal tracking. Patterson and Boakes (2012) demonstrated a reliable blocking of the acquisition of SIP when a houselight flashed briefly prior to the delivery of food. We hypothesize that all of these effects are due to the clarification that the signals or context offers to the animals concerning the time until reinforcement, permitting the dominant response classes to be focused therein. As the ITI lengthens and, with it, temporal uncertainty, the effectiveness of these signals increases apace.

The delay gradients in Figs. 3 and 8 are sketches of the potential for associative conditioning. But if responses from a disadvantaged pattern are established by intent or accident, they may be trapped and resistant to displacement by fast-course responses. Some of these enduring effects have been elegantly reported by Gottlieb (2006), who reviewed the data and arguments for displacement of sign tracking by goal tracking on continuous- versus partial-reinforcement schedules. Holland (1979) showed that the responses that emerge as conditioned responses to a CS are variants of observing responses in the early parts of the CS, but more dependent on the nature of the US in the latter parts of the CS. Again, this is consistent with different response classes being differentially advantaged at different remoteness from the US and with the nature of the US also being an important factor (shown in a nice double dissociation by Davey, Phillips, & Witty, 1989). It may be the case that species-specific defense (Bolles, 1970) and species-specific appetitive reactions are those with the fastest time courses; preparedness (Seligman, 1970) may be another name for the height of the delay gradient of an activity for a particular reinforcer over an interval of interest.

Relative associability

Equation 1 defines the boundaries of epochs of dominance, and Eq. 2 the changes in relative coupling of one corresponding response class to reinforcement as a function of the interfood interval. It is also possible to predict the strength of one class of responses relative to another as a function of time to reinforcement:
$$ \begin{array}{c} S_{j,i,t} = \dfrac{\alpha_j e^{-\lambda_j (T-t)}}{\alpha_i e^{-\lambda_i (T-t)} + \alpha_j e^{-\lambda_j (T-t)}} \\[2ex] S_{j,i,t} = \left( 1 + a' e^{\lambda' (T-t)} \right)^{-1} \\[1ex] a' = \alpha_i / \alpha_j, \quad \lambda' = \lambda_j - \lambda_i \end{array} $$
Equation 3.1 is simply the proportional height of one gradient relative to another. It can be given a simpler appearance (and fewer parameters) in 3.2 by letting a′ stand for the ratio of associative strengths and λ′ the difference of rate constants. Interim behaviors await the termination of consummatory behavior associated with the prior reinforcer. Immediately after food, there is a brief interlude where postprandial behavior, such as searching for more food (“area restricted search”; Whishaw & Gorny, 1991), dominates. That behavior is not so much reinforced by the next pellet as forced by the one just received. The duration of that interlude depends on the species and the reinforcer (Bradshaw & Killeen, 2012). If there is a constant probability of quitting such postconsummatory behavior, relative associability will change as this function of time:
$$ {S_{j,i,t }}=\frac{{1-{e^{{-{\lambda_c}t}}}}}{{1+{a^{\prime }}{e^{{-{\lambda^{\prime }}\left( {T-t} \right)}}}}}. $$
The numerator draws the release from postprandial behavior, which occurs with a rate constant \( {\lambda_{\mathrm{c}}} \). The denominator gives the relative competition from the terminal response that displaces the interim response near the end of the interval. Equation 4, the competing trace model, successfully maps the time course of general activity, as shown by the curves through the data in the left panel of Fig. 10. A rate conversion factor of five responses per second (the upper limit of the floor switches) was assigned for all conditions; it multiplies the relative strength given by Eq. 4, permitting it to trace the flow and ebb of responding. Common values of relative associability a′ = 60 and rate of consummatory termination \( {\lambda_{\mathrm{c}}} \) = 0.21/s served for all functions. The differential rate constant λ′ decreased monotonically with T.
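Equation 4 is easy to evaluate numerically. The following minimal sketch (our code, not the authors') uses the parameter values fitted to the left panel of Fig. 10 (a′ = 60, λc = 0.21/s, a scale of five responses per second); λ′ is set to an illustrative 0.5/s, since the text reports only that it decreased monotonically with T:

```python
import math

def competing_trace(t, T, a_prime=60.0, lam_c=0.21, lam_prime=0.5, scale=5.0):
    """Eq. 4: relative strength of an interim response at time t
    within an interfood interval of length T.
    Numerator: release from postprandial behavior (rate constant lam_c).
    Denominator: competition from the terminal response near T."""
    release = 1.0 - math.exp(-lam_c * t)
    competition = 1.0 + a_prime * math.exp(-lam_prime * (T - t))
    return scale * release / competition

# Interim responding rises as postprandial behavior releases its hold,
# then collapses near the end of the interval, where the terminal
# response dominates.
T = 30.0
rates = [round(competing_trace(t, T), 2) for t in range(0, 31, 3)]
```

Plotting `rates` against time reproduces the flow and ebb of activity through the interval, as in Fig. 10.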
Fig. 10

Left panel: The general activity of pigeons during periodic delivery of food at the intervals indicated in the legend (Killeen, 1975), demonstrating the effective suppression of activity by a 5-s delay contingency. Right panel: The number of licks by Wistar rats at a water spout during periodic delivery of food pellets. The top curve is from a noncontingent FT 30-s food schedule, and the diamonds from an FT 15-s schedule. The crossed squares show drinking when water was never available during the last 15 s of the interval (López-Crespo et al., 2004). Contrasting the last with the top curve shows that the reduction of proximity with reinforcement decreases the level of SIP. Contrasting them with the bottom curve shows the advantage of greater proximity (note the higher mode in the 15-s curve) and the effect of competition with fast-course terminal responses (note the suppressed right tail in the 15-s curve). All functions drawn by Eq. 4

The same model draws the curves through the data in the right panel of Fig. 10, with scale factor 1,500 licks, a′ = 136, and \( {\lambda_{\mathrm{c}}} \) = 0.13/s for all conditions. The differential rate constants λ′ were 0.50/s for the FT 30-s condition, 0.23/s for the FT 15-s condition, and 0.18/s for the FT 30-s condition with no access in the last 15 s. Equation 4 also describes the pattern of eating on periodic water schedules (Myerson & Christiansen, 1979). Similar activity patterns are seen during CSs that predict food (Hanson, 1977; Osborne & Killeen, 1977; Sheffield & Campbell, 1954). The measured response may not start directly after the reinforcer or stimulus onset, in which case the deployment of various action patterns within an interval shifts to generalized gamma distributions (Killeen, 1975; Roca & Bruner, 2011a; Roper, 1978). Exactly where they will occur depends on details of apparatus and schedule (Reid, Bachá & Morán, 1993); their prediction requires a full-scale timing model.

A proper timing model works forward from the most proximate signal, a reinforcer or CS, not backward from the sustaining reinforcer, as does Eq. 4. There are many timing models that do this (e.g., Jozefowiez, Staddon, & Cerutti, 2009). Most congenial to the present account are those that posit transitions from one behavioral state to the next (Killeen & Fetterman, 1993)—for instance, the learning-to-time model (Machado, 1997) and the stochastic counter model (Killeen, 2002; Killeen & Taylor, 2000a, b), which preserve essential features of timing, such as scalar invariance. The various states of the model correspond to classes of behavior, since they are differentially strengthened by greater or lesser proximity to reinforcement, as given by Eq. 4. In temporal production procedures, such as peak timing and free-operant psychophysical choice procedures, the state supporting the terminal instrumental response is measured more or less directly. In temporal estimation procedures, animals are asked to report imposed times by making a binary instrumental response (Killeen, Fetterman, & Bizo, 1997). The classes of behavior in which they are engaged at the time of the question serve as conditional stimuli to mediate that instrumental response (Fetterman, Killeen, & Hall, 1998). It is our hypothesis that the states purportedly underlying all of these measures of time perception reflect the islands of dominance of select response classes.

Respondent conditioning as gradient concentration

The elements for our conception of respondent conditioning are now in place. The delay gradients, prepared to grace different responses differentially as a function of their temporal distance and islands of dominance, cannot act backward in time; they act on memorial traces. Temporal uncertainty, which increases with time through the interval, spreads, or diffuses, the coupling of slow and intermediate-course behaviors through the interval to the eventual reinforcer. It does this because the location of the various classes of behavior through the interval is conditional on the state of the timing mechanism, and the error in that state grows with time. Presentation of a CS permits segregation of fast-course from competing intermediate-course behaviors. A CS signaling imminence of reinforcement establishes a sharp boundary and, like a very short interstimulus interval, permits terminal responses with steep gradients to then dominate. Thus, the CS–US relationships that characterize respondent conditioning are part and parcel of our competing traces hypothesis, with the CSs acting as lenses to focus the otherwise diffused power of the US.


An analysis based on response–reinforcer proximity is proposed to explain the emergent behavioral patterns of animals, including operant and adjunctive responses. The absence of explicitly arranged contingencies, as in FT schedules, does not impede the creation and maintenance of classes of behavior by delayed reinforcement (Lattal, 1995; Papini & Bitterman, 1990). We extend Skinner’s argument for adventitious conditioning through contiguity by transmuting “contiguity” into an exponential trace of proximity. Those traces define areas of proximity at which some response classes will tend to be dominant over others. The resulting models are parsimonious of parameters, even if the hypothetical constructs—the traces—are inferred. A similar mechanism, involving competition between temporally privileged concurrent actions, has been posited for instrumental responses by Jozefowiez et al. (2009).

In their classic analysis of adjunctive behavior, Staddon and Simmelhag (1971) noted that “the division of the field of learning into two classes—classical and instrumental conditioning—each governed by a separate set of principles, has no basis in fact” (p. 27). We agree, as do Baum (2012), Donahoe, Palmer, and Burgos (1997), and many others. The difference between operant and respondent operations is that the former force an arbitrary response into proximity with reinforcement and record what happens to it under various manipulations, including signaling stimuli. The latter arrange predictable reinforcers in a context of signaling stimuli, and the free responses to those stimuli—or the disruption or modification of concurrent operant responses—are recorded. Staddon (1977) traded those two kinds of learning for three classes of behavior, interim, facultative, and terminal, subject to separate sets of principles. We go further. We argue that there are many classes of behavior, but all are subject to the same set of principles. The same principles, but different parameters.

The theory proposed here explains three important characteristics of schedule-induced behavior: its excessiveness, temporal location, and dependency on interfood interval length. There is little new here, except perhaps our recombination of ideas that others have had before us. Patterson (2009) conducted a number of experiments, some reported in Patterson and Boakes (2012), making the case that schedule-induced drinking was superstitiously conditioned and followed the same principles as operants. Timberlake and colleagues (e.g., Timberlake, 2000) have shown how temporal and conditional aspects of schedules can foster different actions associated with different modes of a behavioral system.

The competing trace model will sustain refinement. It needs elaboration to predict the acquisition of behavior (exemplified in Kwok et al., 2012) and the dynamic evolution of competitive action patterns, perhaps developing the systems devised by Myerson and Miezin (1980) or Ferrell (2012) for this purpose. Notwithstanding, the case for adjuncts as maintained by reinforcement is parsimonious of mechanism, productive of models, and superior to alternative explanations of these important phenomena. In outline, the logic seems to us plausible: Proximity between events is manifestly important. If reinforcers can work absent contingency, the primary role of contingency is to arrange proximity. Reinforcers can work absent contingency. Therefore, contingency works, when it does, by arranging proximity. Proximity may extend over dozens of seconds. Reinforceable behaviors that are proximate to reinforcers may be increased by reinforcement. Adjunctive behaviors are proximate to reinforcers and are reinforceable. Therefore, adjunctive behaviors occur at high rates because they are reinforced. Different classes of responses may have different associabilities with reinforcers over delays. Therefore, different classes of responses may emerge at different times before reinforcement and compete for expression. Those that are expressed are reinforced. This generates an unstable dynamic system. Signals of reinforcement may favor perceptual and motor responses with steeper delay gradients over those with shallower gradients and may strengthen already-dominant ones over novel ones, leading to many of the phenomena of classical conditioning. This article constitutes an extensive grounding of these arguments, an adduction of evidence for their premises, and a validation of their conclusions.

The demand for different gradients for different classes of behavior may seem profligate; but it is no more so than nature. The existence of long-tailed gradients for some classes of behaviors has important implications for applied behavior analysis, an idea adumbrated by Catania (1971; 2005a), Madden and Perone (2003), and Kwok et al. (2012). Thirty-six years ago, Herrnstein (1977, p. 602) noted the following: “We seem destined to undertake Watsonian botanizing [of behavior], but with better prospects for success than Watson would have had 50 years ago. We now know enough about the quantitative laws of conditioning to see that we are lacking the parameters that could make behaviorism truly practical.” This article contributes two of those parameters, α and λ, or equivalent indices of their relation to other gradients, a′ and λ′.



Preparation of the manuscript was aided by support from research grants from the Spanish Government (PSI2008-03660, Ministerio de Ciencia e Innovación: Secretaría de Estado de Investigación; and PSI2011-29399, Ministerio de Economía y Competitividad: Secretaría de Estado de Investigación, Desarrollo e Innovación) to R.P. and by support from Banco de Santander and Universidad Nacional de Educación a Distancia to P.K. We thank Lucía Díez de la Riva for her help in running the animals for the marking study. We thank Francois Tonneau for critical discussions, Allen Neuringer for observations on variability, Bob Boakes and Alliston Reid for many helpful comments and corrections, and R.C.G. as skillful accoucheur. It takes a village to raise a paper.


  1. Ainslie, G. (1992). Picoeconomics. New York: Cambridge University Press.
  2. Ainslie, G. (2001). Breakdown of will. Cambridge: Cambridge University Press.
  3. Albert, D., & Mah, C. (1972). An examination of conditioned reinforcement using a one-trial learning procedure. Learning and Motivation, 3, 369–388.
  4. Alcock, J. (2005). Animal behavior: An evolutionary approach. Sunderland, MA: Sinauer Associates.
  5. Álvarez, A., Íbias, J., & Pellón, R. (2011). Facilitación de la adquisición de bebida adjuntiva tras la entrega de comida más que en anticipación a la misma [Facilitation of acquisition of adjunctive drinking after food delivery more than in anticipation of food]. In H. Martínez, J. J. Irigoyen, F. Cabrera, J. Varela, P. Covarrubias, & A. Jiménez (Eds.), Estudios sobre Comportamiento y Aplicaciones (Vol. II, pp. 55–69). Tlajomulco de Zúñiga, Jalisco, México: Segunda Generación.
  6. Amsel, A., & Work, M. S. (1961). The role of learned factors in "spontaneous" activity. Journal of Comparative and Physiological Psychology, 54, 527–532.
  7. Amsel, A., Work, M. S., & Penick, E. C. (1962). Activity during and between periods of stimulus change related to feeding. Journal of Comparative and Physiological Psychology, 55, 1114–1117.
  8. Anderson, M. C., & Shettleworth, S. J. (1977). Behavioral adaptation to fixed-interval and fixed-time food delivery in golden hamsters. Journal of the Experimental Analysis of Behavior, 25, 33–49.
  9. Anselme, P. (2010). The uncertainty processing theory of motivation. Behavioural Brain Research, 208, 291–310.
  10. Arcediano, F., Escobar, M., & Miller, R. R. (2003). Temporal integration and temporal backward associations in human and nonhuman subjects. Learning & Behavior, 31, 242–256.
  11. Arcediano, F., Escobar, M., & Miller, R. R. (2005). Bidirectional associations in humans and rats. Journal of Experimental Psychology: Animal Behavior Processes, 31, 301–318.
  12. Armstrong, C. M., DeVito, L. M., & Cleland, T. A. (2006). One-trial associative odor learning in neonatal mice. Chemical Senses, 31, 343–349.
  13. Avila, R., & Bruner, C. A. (1994). Varying the temporal placement of a drinking opportunity in a fixed-interval schedule. Journal of the Experimental Analysis of Behavior, 62, 307–314.
  14. Baerends, G. P. (1976). The functional organization of behaviour. Animal Behaviour, 24, 726–738.
  15. Balleine, B. W., & Dickinson, A. (1998). Goal-directed instrumental action: Contingency and incentive learning and their cortical substrates. Neuropharmacology, 37, 407–419.
  16. Baum, W. M. (1973). The correlation-based law of effect. Journal of the Experimental Analysis of Behavior, 20, 137–153.
  17. Baum, W. M. (2005). Understanding behaviorism: Behavior, culture, and evolution (2nd ed.). Malden, MA: Blackwell Publishing.
  18. Baum, W. M. (2012). Rethinking reinforcement: Allocation, induction, and contingency. Journal of the Experimental Analysis of Behavior, 97, 101–124.
  19. Bevins, R. A., & Besheer, J. (2006). Object recognition in rats and mice: A one-trial non-matching-to-sample learning task to study 'recognition memory'. Nature Protocols, 1, 1306–1311.
  20. Bickel, W. K., Higgins, S. T., Kirby, K., & Johnson, L. M. (1988). An inverse relationship between baseline fixed-interval response rate and the effects of a tandem response requirement. Journal of the Experimental Analysis of Behavior, 50, 211–218.
  21. Bindra, D. (1972). A unified account of classical conditioning and operant training. In A. H. Black & W. F. Prokasy (Eds.), Classical conditioning II: Current research and theory (pp. 453–481). New York: Appleton-Century-Crofts.
  22. Blass, E. (Ed.). (2001). Developmental psychobiology (Vol. 13). New York: Kluwer Academic.
  23. Boakes, R. A. (1977). Performance on learning to associate a stimulus with positive reinforcement. In H. Davis & H. M. B. Hurwitz (Eds.), Operant-Pavlovian interactions (pp. 67–101). Hillsdale, NJ: Erlbaum.
  24. Boakes, R. A. (1984). From Darwin to behaviourism: Psychology and the minds of animals. Cambridge: Cambridge University Press.
  25. Boakes, R. A., Halliday, M. S., & Poli, M. (1975). Response additivity: Effects of superimposed free reinforcement on a variable-interval baseline. Journal of the Experimental Analysis of Behavior, 23, 177–191.
  26. Boakes, R. A., Poli, M., Lockwood, M. J., & Goodall, G. (1978). A study of misbehavior: Token reinforcement in the rat. Journal of the Experimental Analysis of Behavior, 29, 115–134.
  27. Bolles, R. C. (1970). Species-specific defense reactions and avoidance learning. Psychological Review, 77, 32–48.
  28. Bolles, R. C. (1983). The explanation of behavior. Psychological Record, 33, 31–48.
  29. Bouton, M. E. (2007). Learning and behavior: A contemporary synthesis. Sunderland, MA: Sinauer Associates.
  30. Brackney, R. (2012). Habits and actions. In P. Killeen (Ed.) (Observation ed.). Tempe, AZ.
  31. Bradshaw, C. M., & Killeen, P. R. (2012). A theory of behaviour on progressive ratio schedules, with applications in behavioural pharmacology. Psychopharmacology. Advance online publication.
  32. Breland, K., & Breland, M. (1961). The misbehavior of organisms. American Psychologist, 16, 681–684.
  33. Brown, B. L., Hemmes, N. S., & Cabeza de Vaca, S. (1997). Timing of the CS-US interval by pigeons in trace and delay autoshaping. The Quarterly Journal of Experimental Psychology B, 50, 40–53.
  34. Brown, B. L., Hemmes, N. S., Cabeza de Vaca, S., & Pagano, C. (1993). Sign and goal tracking during delay and trace autoshaping in pigeons. Animal Learning & Behavior, 21, 360–368.
  35. Brush, M. E., & Schaeffer, R. W. (1974). Effects of water deprivation on schedule-induced polydipsia. Bulletin of the Psychonomic Society, 4, 69–72.
  36. Campbell, B. A., Smith, N. F., Misanin, J. R., & Jaynes, J. (1966). Species differences in activity during hunger and thirst. Journal of Comparative and Physiological Psychology, 61, 123–127.
  37. Capaldi, E. J. (1978). Effects of schedule and delay of reinforcement on acquisition speed. Animal Learning & Behavior, 6, 330–334.
  38. Catania, A. C. (Ed.). (1968). Contemporary research in operant behavior. Glenview, IL: Scott, Foresman and Company.Google Scholar
  39. Catania, A. (1971a). Elicitation, reinforcement, and stimulus control. In R. Glaser (Ed.), The nature of reinforcement (pp. 196–220). New York: Academic Press.Google Scholar
  40. Catania, A. C. (1971b). Reinforcement schedules: The role of responses preceding the one that produces the reinforcer. Journal of the Experimental Analysis of Behavior, 15, 271–287.PubMedCrossRefGoogle Scholar
  41. Catania, A. C. (1973). The concept of the operant in the analysis of behavior. Behaviorism, 1, 103–116.Google Scholar
  42. Catania, A. C. (2005a). Attention-deficit/hyperactivity disorder (ADHD): Delay-of-reinforcement gradients and other behavioral mechanisms. The Behavioral and Brain Sciences, 28, 419–424.Google Scholar
  43. Catania, A. C. (2005b). The operant reserve: A computer simulation in (accelerated) real time. Behavioural Processes, 69, 257–278.PubMedCrossRefGoogle Scholar
  44. Catania, A. C., Sagvolden, T., & Keller, K. J. (1988). Reinforcement schedules: Retroactive and proactive effects of reinforcers inserted into fixed-interval performances. Journal of the Experimental Analysis of Behavior, 49, 49–73.PubMedCrossRefGoogle Scholar
  45. Chapman, H. W., & Richardson, H. M. (1974). The role of systemic hydration in the acquisition of schedule-induced polydipsia by rats. Behavioral Biology, 12, 501–508.PubMedCrossRefGoogle Scholar
  46. Cherot, C., Jones, A., & Neuringer, A. (1996). Reinforced variability decreases with approach to reinforcers. Journal of Experimental Psychology. Animal Behavior Processes, 22, 497–508.
  47. Christian, W. P., Schaeffer, R. W., & King, G. D. (1977). Schedule-induced behavior: Research and theory. Montreal: Eden Press.
  48. Clark, F. C. (1962). Some observations on the adventitious reinforcement of drinking under food reinforcement. Journal of the Experimental Analysis of Behavior, 5, 61–63.
  49. Cleaveland, J. M., Jäger, R., Rößner, P., & Delius, J. D. (2003). Ontogeny has a phylogeny: Background to adjunctive behaviors in pigeons and budgerigars. Behavioural Processes, 61, 143–158.
  50. Cope, C. L., Sanger, D. J., & Blackman, D. E. (1976). Intragastric water and the acquisition of schedule-induced drinking. Behavioral Biology, 17, 267–270.
  51. Costa, D. S. J., & Boakes, R. A. (2007). Maintenance of responding when reinforcement becomes delayed. Learning & Behavior, 35, 95–105.
  52. Costa, D. S. J., & Boakes, R. A. (2009). Context blocking in rat autoshaping: Sign-tracking versus goal-tracking. Learning and Motivation, 40, 178–185.
  53. Critchfield, T. S., & Lattal, K. A. (1993). Acquisition of a spatially defined operant with delayed reinforcement. Journal of the Experimental Analysis of Behavior, 59, 373–387.
  54. D'Amato, M. R., Safarjan, W. R., & Salmon, D. (1981). Long-delay conditioning and instrumental learning: Some new findings. In N. E. Spear & R. R. Miller (Eds.), Information processing in animals: Memory mechanisms (pp. 113–142). Mahwah, NJ: Lawrence Erlbaum Associates.
  55. Davey, G. C. L., Phillips, J. H., & Witty, S. (1989). Signal-directed behavior in the rat: Interactions between the nature of the CS and the nature of the UCS. Learning & Behavior, 17, 447–456.
  56. Davis, H., & Hubbard, J. (1972). An analysis of superstitious behavior in the rat. Behaviour, 43, 1–12.
  57. Davis, E. R., & Platt, J. R. (1983). Contiguity and contingency in the acquisition and maintenance of an operant. Learning and Motivation, 14, 487–512.
  58. Dickinson, A. (1985). Actions and habits: The development of behavioural autonomy. Philosophical Transactions of the Royal Society of London. B, Biological Sciences, 308, 67–78.
  59. Dickinson, A. (2001). Causal learning: An associative analysis (The 28th Bartlett Memorial Lecture). The Quarterly Journal of Experimental Psychology. B, 54, 3–26.
  60. Dickinson, A., Balleine, B., Watt, A., Gonzalez, F., & Boakes, R. A. (1995). Motivational control after extended instrumental training. Animal Learning & Behavior, 23, 197–206.
  61. Dickinson, A., Watt, A., & Griffiths, W. J. H. (1992). Free-operant acquisition with delayed reinforcement. The Quarterly Journal of Experimental Psychology. B, 45, 241–258.
  62. Domjan, M. (1983). Biological constraints on instrumental and classical conditioning: Implications for general process theory. In G. H. Bower (Ed.), The psychology of learning and motivation: Advances in research and theory (Vol. 19, pp. 215–277). New York: Academic Press.
  63. Donahoe, J. W. (2006). Contingency: Its meaning in the experimental analysis of behavior. European Journal of Behavior Analysis, 7, 111–114.
  64. Donahoe, J. W., Palmer, D. C., & Burgos, J. E. (1997). The S-R issue: Its status in behavior analysis and in Donahoe and Palmer's Learning and complex behavior. Journal of the Experimental Analysis of Behavior, 67, 193.
  65. Egger, M. D., & Miller, N. E. (1962). Secondary reinforcement in rats as a function of information value and reliability of the stimulus. Journal of Experimental Psychology, 64, 97–104.
  66. Falk, J. L. (1961). Production of polydipsia in normal rats by an intermittent food schedule. Science, 133, 195–196.
  67. Falk, J. L. (1966). Schedule-induced polydipsia as a function of fixed interval length. Journal of the Experimental Analysis of Behavior, 9, 37–39.
  68. Falk, J. L. (1971). The nature and determinants of adjunctive behavior. Physiology & Behavior, 6, 577–588.
  69. Fanselow, M. S. (1989). The adaptive function of conditioned defensive behavior: An ecological approach to Pavlovian stimulus-substitution theory. NATO Advanced Study Institutes Series. Series D, Behavioural and Social Sciences, 48, 151–166.
  70. Fanselow, M. S., & Sigmundi, R. A. (1986). Species-specific danger signals, endogenous opioid analgesia, and defensive behavior. Journal of Experimental Psychology, 12, 301–309.
  71. Fentress, J. C. (1983). Ethological models of hierarchy and patterning of species-specific behavior. In P. Teitelbaum & E. Satinoff (Eds.), Handbook of behavioral neurobiology (Vol. 6, pp. 185–234). New York: Plenum Press.
  72. Ferrell, J. E. (2012). Bistability, bifurcations, and Waddington's epigenetic landscape. Current Biology, 22, R458–R466.
  73. Fetterman, J. G., Killeen, P. R., & Hall, S. (1998). Watching the clock. Behavioural Processes, 44, 211–222.
  74. Flory, R. K. (1971). The control of schedule-induced polydipsia: Frequency and amount of reinforcement. Learning and Motivation, 2, 215–227.
  75. Gallistel, C. R. (1980). The organization of action: A new synthesis (Vol. 13). New York: Erlbaum Associates.
  76. Gallo, A., Duchatelle, E., Elkhessaimi, A., Lepape, G. L., & Desportes, J. P. (1995). Topographic analysis of the rat's bar behaviour in the Skinner box. Behavioural Processes, 33, 319–327.
  77. Garcia, J., McGowan, B. K., & Green, K. F. (1972). Biological constraints on conditioning. In A. H. Black & W. F. Prokasy (Eds.), Classical conditioning II: Current research and theory (pp. 3–27). New York: Appleton-Century-Crofts.
  78. Gharib, A., Gade, C., & Roberts, S. (2004). Control of variation by reward probability. Journal of Experimental Psychology. Animal Behavior Processes, 30, 271–282.
  79. Gibbon, J., Baldock, M. D., Locurto, C. M., Gold, L., & Terrace, H. S. (1977). Trial and intertrial durations in autoshaping. Journal of Experimental Psychology. Animal Behavior Processes, 3, 264–284.
  80. Gibbon, J., Berryman, R., & Thompson, R. L. (1974). Contingency spaces and measures in classical and instrumental conditioning. Journal of the Experimental Analysis of Behavior, 21, 585–605.
  81. Gibbon, J., Farrell, L., Locurto, C. M., Duncan, H. J., & Terrace, H. S. (1980). Partial reinforcement in autoshaping with pigeons. Animal Learning & Behavior, 8, 45–59.
  82. Good, I. J. (1961). A causal calculus. The British Journal for the Philosophy of Science, 11, 305–318.
  83. Gottlieb, D. A. (2006). Effects of partial reinforcement and time between reinforced trials on terminal response rate in pigeon autoshaping. Behavioural Processes, 72, 6–13.
  84. Graham, C. L. D., & John, H. (1989). Signal-directed behavior in the rat: Interactions between the nature of the CS and the nature of the UCS. Animal Learning & Behavior, 17, 447–456.
  85. Haight, P. A., & Killeen, P. R. (1991). Adjunctive behavior in multiple schedules of reinforcement. Animal Learning & Behavior, 19, 257–263.
  86. Hanson, S. J. (1977). The Rescorla-Wagner model and the temporal control of behavior. Unpublished master's thesis, Arizona State University, Tempe, AZ.
  87. Harris, J. A., Gharaei, S., & Pincham, H. L. (2011). Response rates track the history of reinforcement times. Journal of Experimental Psychology. Animal Behavior Processes, 37, 277–286.
  88. Herrnstein, R. J. (1966). Superstition: A corollary of the principles of operant conditioning. In W. K. Honig (Ed.), Operant behavior: Areas of research and application (pp. 33–51). New York: Appleton-Century-Crofts.
  89. Herrnstein, R. J. (1977). The evolution of behaviorism. American Psychologist, 32, 593–603.
  90. Hinson, J. M., & Staddon, J. E. R. (1978). Behavioral competition: A mechanism for schedule interactions. Science, 202, 432–434.
  91. Hogan, J. A. (1994). Structure and development of behavior systems. Psychonomic Bulletin & Review, 1, 439–450.
  92. Holland, P. C. (1979). The effects of qualitative and quantitative variation in the US on individual components of Pavlovian appetitive conditioned behavior in rats. Learning & Behavior, 7, 424–432.
  93. Holland, P. C. (1980). CS-US interval as a determinant of the form of Pavlovian appetitive conditioned responses. Journal of Experimental Psychology. Animal Behavior Processes, 6, 155–174.
  94. Holland, P. C. (2004). Relations between Pavlovian-instrumental transfer and reinforcer devaluation. Journal of Experimental Psychology. Animal Behavior Processes, 30, 104–117.
  95. Innis, N. K., Simmelhag-Grant, V. L., & Staddon, J. E. R. (1983). Behavior induced by periodic food delivery: The effects of interfood interval. Journal of the Experimental Analysis of Behavior, 39, 309–322.
  96. Iwata, B. A., Kahng, S. W., Wallace, M. D., & Lindberg, J. S. (2000). The functional analysis model of behavioral assessment. In J. Austin & J. E. Carr (Eds.), Handbook of applied behavior analysis (pp. 61–89). Reno, NV: Context Press.
  97. Jenkins, W. O. (1943a). Studies in the spread of effect. I. The bi-directional gradient in the performance of white rats on a linear maze. Journal of Comparative Psychology, 35, 41–56.
  98. Jenkins, W. O. (1943b). Studies in the spread of effect. II. The effect of increased motivation upon the bi-directional gradient. Journal of Comparative Psychology, 35, 57–63.
  99. Jenkins, W. O. (1943c). Studies in the spread of effect. III. The effect of increased incentive upon the bi-directional gradient. Journal of Comparative Psychology, 35, 65–72.
  100. Jenkins, H. M. (1970). Sequential organization in schedules of reinforcement. In W. N. Schoenfeld (Ed.), The theory of reinforcement schedules (pp. 63–109). New York: Appleton-Century-Crofts.
  101. Johansen, E. B., Killeen, P. R., Russell, V. A., Tripp, G., Wickens, J. R., Tannock, R., et al. (2009). Origins of altered reinforcement effects in ADHD. Behavioral and Brain Functions, 5, 7.
  102. Johansen, E. B., Killeen, P. R., & Sagvolden, T. (2007). Behavioral variability, elimination of responses, and delay-of-reinforcement gradients in SHR and WKY rats. Behavioral and Brain Functions.
  103. Johnson, L. M., Bickel, W. K., Higgins, S. T., & Morris, E. K. (1991). The effects of schedule history and the opportunity for adjunctive responding on behavior during a fixed-interval schedule of reinforcement. Journal of the Experimental Analysis of Behavior, 55, 313–322.
  104. Jozefowiez, J., Staddon, J. E. R., & Cerutti, D. T. (2009). The behavioral economics of choice and interval timing. Psychological Review, 116, 519–539.
  105. Keehn, J. D., & Burton, M. (1978). Schedule-induced drinking: Entrainment by fixed- and random-interval schedule-controlled feeding. T.-I.-T. Journal of Life Sciences, 8, 93.
  106. Keith-Lucas, T., & Guttman, N. (1975). Robust single-trial delayed backward conditioning. Journal of Comparative and Physiological Psychology, 88, 468–476.
  107. Killeen, P. R. (1975). On the temporal control of behavior. Psychological Review, 82, 89–115.
  108. Killeen, P. R. (1978). Superstition: A matter of bias, not detectability. Science, 199, 88–90.
  109. Killeen, P. R. (1981). Learning as causal inference. In M. Commons & J. A. Nevin (Eds.), Quantitative studies of behavior (pp. 289–312). New York: Pergamon.
  110. Killeen, P. R. (1994). Mathematical principles of reinforcement. The Behavioral and Brain Sciences, 17, 105–172.
  111. Killeen, P. R. (2002). Scalar counters. Learning and Motivation, 33, 63–87.
  112. Killeen, P. R. (2005). Gradus ad parnassum: Ascending strength gradients or descending memory traces? The Behavioral and Brain Sciences, 28, 432–434.
  113. Killeen, P. R. (2011). Models of trace decay, eligibility for reinforcement, and delay of reinforcement gradients, from exponential to hyperboloid. Behavioural Processes, 87, 57–63.
  114. Killeen, P. R., & Fetterman, J. G. (1993). Behavioral theory of timing: Transition analyses. Journal of the Experimental Analysis of Behavior, 59, 411–422.
  115. Killeen, P. R., Fetterman, J. G., & Bizo, L. A. (1997). Time's causes. In C. M. Bradshaw & E. Szabadi (Eds.), Time and behaviour: Psychological and neurobiological analyses (pp. 79–131). Amsterdam: Elsevier Science Publishers BV.
  116. Killeen, P. R., Hanson, S. J., & Osborne, S. R. (1978). Arousal: Its genesis and manifestation as response rate. Psychological Review, 85, 571–581.
  117. Killeen, P. R., Sanabria, F., & Dolgov, I. (2009). The dynamics of conditioning and extinction. Journal of Experimental Psychology. Animal Behavior Processes, 35, 447–472.
  118. Killeen, P. R., & Sitomer, M. T. (2003). MPR. Behavioural Processes, 62, 49–64.
  119. Killeen, P. R., & Smith, J. P. (1984). Perception of contingency in conditioning: Scalar timing, response bias, and the erasure of memory by reinforcement. Journal of Experimental Psychology. Animal Behavior Processes, 10, 333–345.
  120. Killeen, P. R., & Taylor, T. J. (2000a). How the propagation of error through stochastic counters affects time discrimination and other psychophysical judgments. Psychological Review, 107, 430–459.
  121. Killeen, P. R., & Taylor, T. J. (2000b). Stochastic adding machines. Nonlinearity, 13, 1889–1903.
  122. King, G. D. (1974a). The enhancement of schedule-induced polydipsia by preschedule noncontingent shock. Bulletin of the Psychonomic Society, 3, 46–48.
  123. King, G. D. (1974b). Wheel running in the rat induced by a fixed-time presentation of water. Animal Learning & Behavior, 2, 325–328.
  124. Kissileff, H. R. (1969). Food-associated drinking in the rat. Journal of Comparative and Physiological Psychology, 67, 284–300.
  125. Kohman, R., Leising, K., Shaffer, M., & Higa, J. J. (2006). Effects of breaks in the interval cycle on temporal tracking in pigeons. Behavioural Processes, 71, 126–134.
  126. Kwok, D. W. S., Livesey, E. J., & Boakes, R. A. (2012). Serial overshadowing of taste aversion learning by stimuli preceding the target taste. Learning & Behavior [Epub].
  127. Lamas, E., & Pellón, R. (1995). Food-deprivation effects on punished schedule-induced drinking in rats. Journal of the Experimental Analysis of Behavior, 64, 47–60.
  128. Lamas, E., & Pellón, R. (1997). Food deprivation and food-delay effects on the development of adjunctive drinking. Physiology & Behavior, 61, 153–158.
  129. LaMon, B. C., & Zeigler, H. P. (1984). Grasping in the pigeon (Columba livia): Stimulus control during conditioned and consummatory responses. Animal Learning & Behavior, 12, 223–231.
  130. Lattal, K. A. (1995). Contingency and behavior analysis. Behavior Analyst, 18, 209–224.
  131. Lattal, K. A. (2010). Delayed reinforcement of operant behavior. Journal of the Experimental Analysis of Behavior, 93, 129–139.
  132. Lattal, K. A., & Abreu-Rodrigues, J. (1997). Response-independent events in the behavior stream. Journal of the Experimental Analysis of Behavior, 68, 375–398.
  133. Lattal, K. A., & Gleeson, S. (1990). Response acquisition with delayed reinforcement. Journal of Experimental Psychology. Animal Behavior Processes, 16, 27–39.
  134. Lattal, K. A., & Williams, A. M. (1997). Body weight and response acquisition with delayed reinforcement. Journal of the Experimental Analysis of Behavior, 67, 131–143.
  135. Lett, B. T. (1975). Long-delay learning in the T-maze. Learning and Motivation, 6, 80–90.
  136. Lieberman, D. A., Davidson, F. H., & Thomas, G. V. (1985). Marking in pigeons: The role of memory in delayed reinforcement. Journal of Experimental Psychology. Animal Behavior Processes, 11, 611–624.
  137. Lieberman, D. A., McIntosh, D. C., & Thomas, G. V. (1979). Learning when reward is delayed: A marking hypothesis. Journal of Experimental Psychology. Animal Behavior Processes, 5, 224–242.
  138. Livnat, A., & Pippenger, N. (2006). An optimal brain can be composed of conflicting agents. Proceedings of the National Academy of Sciences of the United States of America, 103, 3198–3202.
  139. Locurto, C., Travers, T., Terrace, H., & Gibbon, J. (1980). Physical restraint produces rapid acquisition of the pigeon's key peck. Journal of the Experimental Analysis of Behavior, 34, 13–21.
  140. Looney, T. A., & Cohen, P. S. (1982). Aggression induced by intermittent positive reinforcement. Neuroscience and Biobehavioral Reviews, 15–37.
  141. López-Crespo, G., Rodríguez, M., Pellón, R., & Flores, P. (2004). Acquisition of schedule-induced polydipsia by rats in proximity to upcoming food delivery. Learning & Behavior, 32, 491–499.
  142. Lucas, G. A., Timberlake, W., & Gawley, D. J. (1988). Adjunctive behavior of the rat under periodic food delivery in a 24-hour environment. Animal Learning & Behavior, 16, 19–30.
  143. Machado, A. (1997). Learning the temporal dynamics of behavior. Psychological Review, 104, 241–265.
  144. Machado, A., & Silva, F. J. (2007). Toward a richer view of the scientific method: The role of conceptual analysis. American Psychologist, 62, 671–681.
  145. Mackintosh, N. J. (1974). The psychology of animal learning. New York: Academic Press.
  146. Madden, G. J., & Perone, M. (2003). Effects of alternative reinforcement on human behavior: The source does matter. Journal of the Experimental Analysis of Behavior, 79, 193–206.
  147. Millenson, J. R., Allen, R. B., & Pinker, S. (1977). Adjunctive drinking during variable and random-interval food reinforcement schedules. Animal Learning & Behavior, 5, 285–290.
  148. Moore, B. W. (1973). The role of directed Pavlovian reactions in simple instrumental learning in the pigeon. In R. A. Hinde & J. Stevenson-Hinde (Eds.), Constraints on learning: Limitations and predispositions (pp. 159–188). New York: Academic Press.
  149. Moran, G., & Rudolph, R. (1980). Some effects of lick-contingent delays on the development of schedule-induced polydipsia. Learning and Motivation, 11, 366–385.
  150. Myerson, J., & Christiansen, B. (1979). Temporal control of eating on periodic water schedules. Physiology & Behavior, 23, 279–282.
  151. Myerson, J., & Miezin, F. M. (1980). The kinetics of choice: An operant systems analysis. Psychological Review, 87, 160–174.
  152. Neuringer, A. J. (1970). Superstitious key pecking after three peck-produced reinforcements. Journal of the Experimental Analysis of Behavior, 13, 127–134.
  153. Okouchi, H. (2009). Response acquisition by humans with delayed reinforcement. Journal of the Experimental Analysis of Behavior, 91, 377–390.
  154. Osborne, S. R. (1978). A quantitative analysis of the effects of amount of reinforcement on two response classes. Journal of Experimental Psychology. Animal Behavior Processes, 4, 297–317.
  155. Osborne, S. R., & Killeen, P. R. (1977). Temporal properties of responding during stimuli that precede response-independent food. Learning and Motivation, 8, 533–550.
  156. Palya, W. L., & Zacny, J. P. (1980). Stereotyped adjunctive pecking by caged pigeons. Animal Learning & Behavior, 8, 293–303.
  157. Papini, M. R., & Bitterman, M. E. (1990). The role of contingency in classical conditioning. Psychological Review, 97, 396–403.
  158. Patterson, A. E. (2009). Schedule-induced drinking: A re-examination of the "superstitious conditioning" hypothesis. Unpublished dissertation, University of Sydney, Sydney.
  159. Patterson, A. E., & Boakes, R. A. (2012). Interval, blocking and marking effects during the development of schedule-induced drinking. Journal of Experimental Psychology: Animal Behavior Processes [Epub], 1–12.
  160. Pear, J. J., Moody, J. E., & Persinger, M. A. (1972). Lever attacking by rats during free-operant avoidance. Journal of the Experimental Analysis of Behavior, 18, 517.
  161. Pellón, R., Bayeh, L., & Pérez-Padilla, Á. (2006). Schedule-induced polydipsia under explicit positive reinforcement. Paper presented at the Winter Conference on Animal Learning and Behavior.
  162. Pellón, R., & Blackman, D. E. (1987). Punishment of schedule-induced drinking in rats by signaled and unsignaled delays in food presentation. Journal of the Experimental Analysis of Behavior, 48, 417–434.
  163. Pellón, R., & Pérez-Padilla, Á. (2013). Response-food delay gradients for lever pressing and schedule-induced licking in rats. Learning & Behavior, accepted.
  164. Penney, J., & Schull, J. (1977). Functional differentiation of adjunctive drinking and wheel running in rats. Animal Learning & Behavior, 5, 272–280.
  165. Plonsky, M., Driscoll, C. D., Warren, D. A., & Rosellini, R. A. (1984). Do random time schedules induce polydipsia in the rat? Animal Learning & Behavior, 12, 355–362.
  166. Porter, J. H., Young, R., & Moeschl, T. P. (1978). Effects of water and saline preloads on schedule-induced polydipsia in the rat. Physiology & Behavior, 21, 333–338.
  167. Powell, R. W., & Curley, M. (1976). Instinctive drift in nondomesticated rodents. Bulletin of the Psychonomic Society, 8, 175–178.
  168. Premack, D. (1965). Reinforcement theory. In D. Levine (Ed.), Nebraska Symposium on Motivation. Lincoln: University of Nebraska Press.
  169. Rachlin, H. (1988). Molar behaviorism. In D. B. Fishman, F. Rotgers, & C. M. Franks (Eds.), Paradigms in behavior therapy: Present and promise (pp. 77–105). New York: Springer.
  170. Rachlin, H. (1994). Behavior and mind: The roots of modern psychology. New York: Oxford University Press.
  171. Rachlin, H. (2000). The science of self-control. Cambridge, MA: Harvard University Press.
  172. Reid, A. K., Bachá, G., & Morán, C. (1993). The temporal organization of behavior on periodic food schedules. Journal of the Experimental Analysis of Behavior, 59, 1–27.
  173. Reid, A. K., & Dale, R. H. I. (1985). Dynamic effects of food magnitude on interim-terminal interaction. Journal of the Experimental Analysis of Behavior, 39, 135–148.
  174. Reid, A. K., & Staddon, J. E. R. (1982). Schedule-induced drinking: Elicitation, anticipation, or behavioral interaction? Journal of the Experimental Analysis of Behavior, 38, 1–18.
  175. Reid, A. K., Vazquez, P. P., & Rico, J. A. (1985). Schedule induction and the temporal distributions of adjunctive behavior on periodic water schedules. Learning & Behavior, 13, 321–326.
  176. Reilly, M. P., & Lattal, K. A. (2004). Within-session delay-of-reinforcement gradients. Journal of the Experimental Analysis of Behavior, 82, 21–35.
  177. Rescorla, R. A. (1967). Pavlovian conditioning and its proper control procedures. Psychological Review, 74, 71–80.
  178. Rescorla, R. A. (1972). "Configural" conditioning in discrete-trial bar pressing. Journal of Comparative and Physiological Psychology, 79, 307–317.
  179. Rescorla, R. A. (1988). Pavlovian conditioning: It's not what you think it is. American Psychologist, 43, 151–160.
  180. Revusky, S., & Garcia, J. (1970). Learned associations over long delays. In G. H. Bower (Ed.), The psychology of learning and motivation: Advances in research and theory (Vol. 6, pp. 1–83). San Diego: Academic Press.
  181. Revusky, S., & Parker, L. A. (1976). Aversions to unflavored water and cup drinking produced by delayed sickness. Journal of Experimental Psychology. Animal Behavior Processes, 2, 342–353.
  182. Reynierse, J. H., & Spanier, D. (1968). Excessive drinking in rats' adaptation to the schedule of feeding. Psychonomic Science, 10, 95–96.
  183. Riley, A. L., Hyson, R. L., Baker, C. S., & Kulkosky, P. J. (1980). The interaction of conditioned taste aversions and schedule-induced polydipsia: Effects of repeated conditioning trials. Learning & Behavior, 8, 211–217.
  184. Roca, A., & Bruner, C. A. (2011a). An analysis of the origin of excessive water intake of schedule-induced drinking. Revista Mexicana de Análisis de la Conducta, 37, 177–204.
  185. Roca, A., & Bruner, C. A. (2011b). Effects of reinforcement frequency on lever pressing for water in food-deprived rats. Revista Mexicana de Análisis de la Conducta, 29, 119–130.
  186. Roper, T. J. (1978). Diversity and substitutability of adjunctive activities under fixed-interval schedules of food reinforcement. Journal of the Experimental Analysis of Behavior, 30, 83–96.
  187. Roper, T. J., & Crossland, G. (1982). Schedule-induced wood-chewing in rats and its dependence on body weight. Animal Learning & Behavior, 10, 65–71.
  188. Roper, T. J., & Nieto, J. (1979). Schedule-induced drinking and other behavior in the rat, as a function of body weight deficit. Physiology & Behavior, 23, 673–678.
  189. Roper, T. J., & Posadas-Andrews, A. (1981). Are schedule-induced drinking and displacement activities causally related? The Quarterly Journal of Experimental Psychology. B, 33, 181–193.
  190. Sanabria, F., Sitomer, M. T., & Killeen, P. R. (2006). Negative automaintenance omission training is effective. Journal of the Experimental Analysis of Behavior, 86, 1–10.
  191. Schaal, D. W., & Branch, M. N. (1990). Responding of pigeons under variable-interval schedules of signaled-delayed reinforcement: Effects of delay-signal duration. Journal of the Experimental Analysis of Behavior, 53, 103–121.
  192. Segal, E. F. (1972). Induction and the provenance of operants. In R. M. Gilbert & J. R. Millenson (Eds.), Reinforcement: Behavioral analyses (pp. 1–34). New York: Academic Press.
  193. Segal, E. F., Oden, D. L., & Deadwyler, S. A. (1965). Determinants of polydipsia: IV. Free-reinforcement schedules. Psychonomic Science, 3, 11–12.
  194. Seligman, M. E. P. (1970). On the generality of the laws of learning. Psychological Review, 77, 406–418.
  195. Sheffield, F. D., & Campbell, B. A. (1954). The role of experience in the "spontaneous" activity of hungry rats. Journal of Comparative and Physiological Psychology, 47, 97–100.
  196. Shettleworth, S. J. (1988). Foraging as operant behavior and operant behavior as foraging: What have we learned? In G. H. Bower (Ed.), The psychology of learning and motivation: Advances in research and theory (Vol. 22, pp. 1–49). New York: Academic Press.
  197. Shettleworth, S. J., & Juergensen, M. R. (1980). Reinforcement and the organization of behavior in golden hamsters: Brain stimulation reinforcement for seven action patterns. Journal of Experimental Psychology. Animal Behavior Processes, 6, 352–375.
  198. Shimp, C. P. (1981). The local organization of behavior: Discrimination of and memory for simple behavioral patterns. Journal of the Experimental Analysis of Behavior, 36, 303–315.
  199. Shull, R. L. (1970). A response-initiated fixed-interval schedule of reinforcement. Journal of the Experimental Analysis of Behavior, 13, 13–15.
  200. Silva, F. J., Silva, K. M., & Pear, J. J. (1992). Sign- versus goal-tracking: Effects of conditioned-stimulus-to-unconditioned-stimulus distance. Journal of the Experimental Analysis of Behavior, 57, 17–31.
  201. Silva, K. M., & Timberlake, W. (1997). A behavior systems view of conditioned states during long and short CS-US intervals. Learning and Motivation, 28, 465–490.
  202. Silva, K. M., & Timberlake, W. (1998). The organization and temporal properties of appetitive behavior in rats. Animal Learning & Behavior, 26, 182–195.
  203. Silva, F. J., & Timberlake, W. (2000). A clarification of the nature of backward excitatory conditioning. Learning and Motivation, 31, 67–80.
  204. Silva, F. J., Timberlake, W., & Ozlem Cevik, M. (1998). A behavior systems approach to the expression of backward associations. Learning and Motivation, 29, 1–22.
  205. Skinner, B. F. (1948). Superstition in the pigeon. Journal of Experimental Psychology, 38, 168–172.
  206. Skinner, B. F. (1984). The phylogeny and ontogeny of behavior. The Behavioral and Brain Sciences, 7, 669–711.CrossRefGoogle Scholar
  207. Slater, P. J. B., & Ollason, J. C. (1972). The temporal pattern of behaviour in isolated male zebra finches: Transition analysis. Behaviour, 42, 248–269.CrossRefGoogle Scholar
  208. Smith, S. S., & Renner, K. E. (1976). Preference for food and water in rats as a function of delay of reward. Animal Learning & Behavior, 4, 299–302.CrossRefGoogle Scholar
  209. Sosa, R., dos Santos, C. V., & Flores, C. (2011). Training a new response using conditioned reinforcement. Behavioural Processes, 87, 231–236.PubMedCrossRefGoogle Scholar
  210. Spetch, M. L., & Honig, W. K. (1988). Characteristics of pigeons' spatial working memory in an open-field task. Learning & Behavior, 16, 123–131.CrossRefGoogle Scholar
  211. Spetch, M. L., Wilkie, D. M., & Pinel, J. P. (1981). Backward conditioning: A reevaluation of the empirical evidence. Psychological Bulletin, 89, 163–175.PubMedCrossRefGoogle Scholar
  212. Staddon, J. E. R. (1977). Schedule-induced behavior. In W. K. Honig & J. E. R. Staddon (Eds.), Handbook of operant behavior (pp. 125–152). Englewood Cliffs, NJ: Prentice-Hall.
  213. Staddon, J. E. R. (1983). Adaptive behavior and learning. New York: Cambridge University Press.
  214. Staddon, J. E. R., & Simmelhag, V. (1971). The "superstition" experiment: A re-examination of its implications for the principles of adaptive behavior. Psychological Review, 78, 3–43.
  215. Staddon, J. E. R., & Zhang, Y. (1989). Response selection in operant learning. Behavioural Processes, 20, 189–197.
  216. Staddon, J. E. R., & Zhang, Y. (1991). On the assignment-of-credit problem in operant learning. In M. Commons, S. Grossberg, & J. E. R. Staddon (Eds.), Neural network models of conditioning and action (pp. 279–293). Hillsdale, NJ: Erlbaum.
  217. Stahlman, W. D., Roberts, S., & Blaisdell, A. P. (2010). Effect of reward probability on spatial and temporal variation. Journal of Experimental Psychology: Animal Behavior Processes, 36, 77–91.
  218. Stein, L. (1964). Excessive drinking in the rat: Superstition or thirst? Journal of Comparative and Physiological Psychology, 58, 237–242.
  219. Stokes, P. D., & Balsam, P. D. (1991). Effects of reinforcing preselected approximations on the topography of the rat's bar press. Journal of the Experimental Analysis of Behavior, 55, 213–231.
  220. Stout, S. C., & Miller, R. R. (2007). Sometimes-competing retrieval (SOCR): A formalization of the comparator hypothesis. Psychological Review, 114, 759–783.
  221. Sutphin, G., Byrne, T., & Poling, A. (1998). Response acquisition with delayed reinforcement: A comparison of two-lever procedures. Journal of the Experimental Analysis of Behavior, 69, 17–28.
  222. Tang, M., Williams, S. L., & Falk, J. L. (1988). Prior schedule exposure reduces the acquisition of schedule-induced polydipsia. Physiology & Behavior, 44, 817–820.
  223. Thorndike, E. L. (1933). An experimental study of rewards. Teachers College Contributions to Education.
  224. Timberlake, W. (1993). Behavior systems and reinforcement: An integrative approach. Journal of the Experimental Analysis of Behavior, 60, 105–128.
  225. Timberlake, W. (1994). Behavior systems, associationism, and Pavlovian conditioning. Psychonomic Bulletin & Review, 1, 405–420.
  226. Timberlake, W. (1995). Reconceptualizing reinforcement: A causal, system approach to reinforcement and behavior change. In W. O'Donohue & L. Krasner (Eds.), Theories of behavior therapy (pp. 59–96). Washington, DC: American Psychological Association.
  227. Timberlake, W. (2000). Motivational modes in behavior systems. In R. R. Mowrer & S. B. Klein (Eds.), Handbook of contemporary learning theories (pp. 155–209). Mahwah, NJ: Erlbaum.
  228. Timberlake, W. (2001). Integrating niche-related and general process approaches in the study of learning. Behavioural Processes, 54, 79–94.
  229. Timberlake, W., & Lucas, G. A. (1985). The basis of superstitious behavior: Chance contingency, stimulus substitution, or appetitive behavior? Journal of the Experimental Analysis of Behavior, 44, 279–299.
  230. Timberlake, W., & Lucas, G. A. (1989). Behavior systems and learning: From misbehavior to general principles. In S. B. Klein & R. R. Mowrer (Eds.), Contemporary learning theories: Instrumental conditioning theory and the impact of constraints on learning (pp. 237–275). Hillsdale, NJ: Erlbaum.
  231. Toates, F. M. (1971). The effect of pretraining on schedule induced polydipsia. Psychonomic Science, 219–220.
  232. Tonneau, F. (2005). Windows. Behavioural Processes, 69, 237–247.
  233. Wallace, M., & Singer, G. (1976). Schedule induced behavior: A review of its generality, determinants and pharmacological data. Pharmacology Biochemistry and Behavior, 5, 483–490.
  234. Ward, R. D., Gallistel, C., Jensen, G., Richards, V. L., Fairhurst, S., & Balsam, P. D. (2012). Conditioned stimulus informativeness governs conditioned stimulus-unconditioned stimulus associability. Journal of Experimental Psychology: Animal Behavior Processes, in press.
  235. Wearden, J., & Lejeune, H. (2006). "The stone which the builders rejected": Delay of reinforcement and response rate on fixed-interval and related schedules. Behavioural Processes, 71, 77–87.
  236. Whishaw, I. Q., & Gorny, B. P. (1991). Postprandial scanning by the rat (Rattus norvegicus): The importance of eating time and an application of "warm-up" movements. Journal of Comparative Psychology, 105, 39–44.
  237. Wilkenfield, J., Nickel, M., Blakely, E., & Poling, A. (1992). Acquisition of lever-press responding in rats with delayed reinforcement: A comparison of three procedures. Journal of the Experimental Analysis of Behavior, 58, 431–443.
  238. Williams, B. A. (1975). The blocking of reinforcement control. Journal of the Experimental Analysis of Behavior, 24, 215–226.
  239. Williams, B. A. (1981). The following schedule of reinforcement as a fundamental determinant of steady state contrast in multiple schedules. Journal of the Experimental Analysis of Behavior, 35, 293–310.
  240. Williams, B. A. (1991). Marking and bridging versus conditioned reinforcement. Animal Learning & Behavior, 19, 264–269.
  241. Williams, B. A. (1999). Associative competition in operant conditioning: Blocking the response-reinforcer association. Psychonomic Bulletin & Review, 6, 618–623.
  242. Williams, D. A., Johns, K. W., & Brindas, M. (2008). Timing during inhibitory conditioning. Journal of Experimental Psychology: Animal Behavior Processes, 34, 237–246.
  243. Williams, S. L., Tang, M., & Falk, J. L. (1992). Prior exposure to a running wheel and scheduled food attenuates polydipsia acquisition. Physiology & Behavior, 52, 481–483.
  244. Williams, D. R., & Williams, H. (1969). Auto-maintenance in the pigeon: Sustained pecking despite contingent non-reinforcement. Journal of the Experimental Analysis of Behavior, 12, 511–520.
  245. Wong, P. T. P. (1977). A behavioral field approach to instrumental learning in the rat: I. Partial reinforcement effects and sex differences. Animal Learning & Behavior, 5, 5–13.

Copyright information

© Psychonomic Society, Inc. 2013

Authors and Affiliations

  1. Department of Psychology, Arizona State University, Tempe, USA
  2. Universidad Nacional de Educación a Distancia, Madrid, Spain