At the August 28, 1963, March on Washington, DC, Martin Luther King, Jr., spoke these powerful words concerning the need to act in the moment:

We are now faced with the fact that tomorrow is today. We are confronted with the fierce urgency of now. In this unfolding conundrum of life and history, there “is” such a thing as being too late. This is no time for apathy or complacency. This is a time for vigorous and positive action.

I have included in the title of my paper the famous catchphrase of this quotation—the fierce urgency of now—in order to highlight the demand to act in the moment rather than to vacillate or to delay determined action. The propulsive impulse to act figures prominently in many familiar maxims:

Procrastination is the thief of time.

There’s no time like the present.

Time and tide wait for no man.

Make hay while the sun shines.

Strike while the iron is hot.

One who hesitates is lost.

Time is of the essence.

Carpe diem.

However, as is often the case with such common adages, there are other, opposing pearls of wisdom for us to consider before hastening to act:

Fools rush in where angels fear to tread.

Marry in haste, repent at leisure.

Measure twice, cut once.

Look before you leap.

Haste makes waste.

Given such conflicting advice, how is one to decide whether to take positive action now or to demur from doing so? This question confronts all of us on a daily basis, admittedly without the high drama of Shakespeare’s character Hamlet in his famous soliloquy (Act III, Scene 1):

Thus conscience does make cowards of us all,

And thus the native hue of resolution

Is sicklied o’er with the pale cast of thought,

And enterprises of great pith and moment

With this regard their currents turn awry,

And lose the name of action.

The unending tug-of-war between acting now and acting later can be seen to map onto two clashing psychological tendencies: precrastination and procrastination, respectively. Procrastination is the more familiar and more thoroughly discussed inclination: putting off until later what can be done now. Taxpayers commonly delay submitting their annual returns until the last minute, thereby risking computational errors in their last-ditch haste to file. Lawmakers notoriously dally and filibuster before passing often rash and poorly drafted legislation at the eleventh hour. And students frequently burn the midnight oil in order to submit their term papers just prior to the looming deadline, thus thwarting proper proofreading and polishing. For these reasons, we are admonished not to procrastinate.

Precrastination is the less familiar and less examined tendency: the predisposition to complete tasks quickly just to get them done sooner rather than later. People immediately respond to e-mails rather than carefully considering the possible consequences of their hasty replies. Purchasers pay bills as soon as they arrive, thereby forgoing interest income on their savings. And shoppers snatch items from the shelves when they first enter a market, carry them to the back of the store, grab still more groceries, and then return to the front of the market to pay, in the process toting the initial items much farther than necessary.

In this paper, I will pay particular attention to precrastination: its relation to procrastination, its role in adaptive action, and its significance for better understanding the nature of future-oriented cognition and self-control. I will first sketch some critical speculations regarding brain and behavior by two contemporaries and Nobel laureates in physiology: Charles Scott Sherrington (1857–1952) and Ivan Petrovich Pavlov (1849–1936). I will then relate their speculations to more recent conjectures by the prominent philosopher Daniel Clement Dennett (1942–). Speculations and conjectures are all well and good, but empirical data must ultimately be examined to assess the merits of such notions. So, I will review the results of several research studies that importantly relate to the matters of precrastination, procrastination, future-oriented cognition, and self-control. Finally, I will offer some proposals as to how we might interconnect the seemingly fragmented pieces of this challenging psychological puzzle.

Brain evolution and adaptive anticipatory behavior

Sherrington

In his most famous work, The Integrative Action of the Nervous System, Sherrington (1906) proposed an expansive view of the role of distance reception in the evolution of the brain and its participation in stimulating and directing an organism’s adaptive action:

The “distance-receptors” seem to have peculiar importance for the construction and evolution of the nervous system. In the higher grades of the animal scale one part of the nervous system has . . . evolved with singular constancy a dominant importance to the individual. That is the part which is called the brain. The brain is always the part of the nervous system which is constructed upon and evolved upon the “distance-receptor” organs. (p. 325)

Just why did Sherrington so heavily stress the distance receptors in the evolution of brain and behavior?

[The] ability on the part of an organism to react to an object when still distant from it allows an interval for preparatory reactive steps which can go far to influence the success of attempt either to obtain actual contact or to avoid actual contact with the object. [To simply state the matter:] The “distance-receptors” initiate anticipatory, i.e. precurrent, reactions. (p. 326)

[Unlike the physical pain or the physical pleasure that is provoked by stimuli which excite non-distance receptors like touch and taste] “conative feeling” [that is, will or driving force; Hilgard, 1980] is salient as a psychical character of the reactions which the . . . “distance-receptors” . . . guide. [These goal-directed actions are] characterized by tendency to work or control the musculature of the animal as a whole . . . and in a manner suitably anticipatory of a later event. (p. 327)

Sherrington further proposed that cognitive or mental evolution might effectively build on this vital foundation of anticipatory action through the Darwinian process of natural selection:

It is the long serial reactions of the “distance-receptors” that allow most scope for the selection of those brute organisms that are fittest for survival in respect to elements of mind. The “distance-receptors” hence contribute most to the uprearing of the cerebrum. (p. 333)

Putting together all of these extremely interesting ideas, Sherrington summarized his important proposal in the following lines:

We thus, from the biological standpoint, see the cerebrum, and especially the cerebral cortex, as the latest and highest expression of a nervous mechanism which may be described as the organ of, and for, the adaptation of nervous reactions. The cerebrum, built upon the distance receptors and entrusted with the reactions which fall in an anticipatory interval so as to be precurrent, comes . . . to be the organ par excellence for the readjustment and the perfecting of the nervous reactions of the animal as a whole, so as to improve and extend their suitability to, and advantage over, the environment. . . . For this conquest its cerebrum is its best weapon. It is then around the cerebrum, its physiological and psychological attributes, that the main interest of biology must ultimately turn. (pp. 392–393)

Pavlov

Readers might be quite surprised to learn that the prior lines had been written by Sherrington, not Pavlov. Proximal stimuli might have been regarded as Pavlovian unconditioned stimuli, whereas distal stimuli might have been regarded as Pavlovian conditioned stimuli (Domjan, 2005, considered several interesting examples in which different features of the same object serve in both capacities and deemed such cases to represent most naturally occurring instances of Pavlovian conditioning). And, did Pavlov not suggest that associative learning took place in the cerebral cortex to promote an animal’s survival? Indeed, he did.

Key to Pavlov’s own formulation was the notion of signalization. Here is how Pavlov (1927/1960) generically introduced this idea in his most famous work, Conditioned Reflexes:

The complex conditions of everyday existence require a much more detailed and specialized correlation between the animal and its environment than is afforded by the inborn reflexes alone. This more precise correlation can be established only through the medium of the cerebral hemispheres; and we have found that a great number of all sorts of stimuli always act through the medium of the hemispheres as temporary and interchangeable signals for the comparatively small number of agencies of a general character which determine the inborn reflexes, and that this is the only means by which a most delicate adjustment of the organism to the environment can be established. To this function of the hemispheres we gave the name “signalization.” (pp. 16–17)

Pavlov expanded on this point after describing his experiments involving “artificial” conditioned stimuli (such as a ticking metronome) which are arbitrarily paired with food and “natural” conditioned stimuli (such as the smell and sight of food) which are ordinarily paired with food in the development of a dog’s normal feeding behavior (for more on the related idea of stimulus substitution, see García-Hoz, 2003):

It is obvious [in each case] that the underlying principle of this activity is signalization. The sound of the metronome is the signal for food, and the animal reacts to the signal in the same way as if it were food; no distinction can be observed between the effects produced on the animal by the sounds of the beating metronome [an artificial conditioned stimulus] and showing it real food [a natural conditioned stimulus]. (p. 22)

Pavlov more particularly elucidated how anticipatory reactions could enhance the adaptability of organisms by forewarning them of impending stimuli:

If food or some rejectable substance finds its way into the mouth, a secretion of saliva is produced. The purpose of this secretion is in the case of food to alter it chemically, in the case of a rejectable substance to dilute and wash it out of the mouth. This is an example of a reflex due to the physical and chemical properties of a [proximal, unconditioned stimulus] when it comes into contact with the mucous membrane of the mouth and tongue. But, in addition to this, a similar reflex secretion is evoked when these substances are placed at a distance from the dog and the receptor organs affected are only those of smell and sight [distal conditioned stimuli]. Even the vessel from which the food has been given is sufficient to evoke an alimentary reflex complete in all its details; and, further, the secretion may be provoked even by the sight of the person who brought the vessel, or by the sound of his footsteps [artificial conditioned stimuli]. (p. 13)

The great advantage to the organism of a capacity to react to [any distal conditioned] stimuli is evident, for it is in virtue of their action that food finding its way into the mouth immediately encounters plenty of moistening saliva, and rejectable substances, often nocuous to the mucous membrane, find a layer of protective saliva already in the mouth which rapidly dilutes and washes them out. Even greater is their importance when they evoke the motor component of the complex reflex of nutrition, i.e. when they act as stimuli to the reflex of seeking food. (pp. 13–14)

The convergence of Sherrington’s and Pavlov’s conceptualizations is rather remarkable. Indeed, in another work, Pavlov even commented on the fact that many reflexive responses can be triggered both by stimuli directly contacting the organism as well as by stimuli acting upon the distance receptors. Stressing the latter, Pavlov (1928/1963) exclaimed: “How many simple physiological reflexes start from the nose, the eye, and the ear, and therefore originate at a distance!” (p. 51). He then stressed the role that the locomotor responses of the whole animal play in adapting to its environment:

The importance of the remote signs (signals) of objects can be easily recognised in the movement reaction of the animal. By means of distant and even accidental characteristics of objects the animal seeks his food, avoids his enemies, etc. (1928/1963, p. 52)

Also notable is the fact that, although Sherrington and Pavlov were acquaintances and even visited one another’s laboratories (Fig. 1 depicts just such a visit of Sherrington to Pavlov’s laboratory in 1913), they do not seem to have cross-referenced one another’s ideas on brain evolution, function, and associative learning. Indeed, a bit of ill will appears to have found its way into their personal relationship (based on a few reported anecdotes; Granit, 1982; Razran, 1959). Pavlov even appears to have considered Sherrington to be a “dualist,” thereby straying from Pavlov’s more materialist philosophy (Razran, 1959).

Fig. 1

Group portrait of 16 scientists, including Ivan Petrovich Pavlov (third from left in the front row) and Charles Scott Sherrington (fourth from left in the front row), at the Department of Physiology of the Institute of Experimental Medicine, St. Petersburg, Russia, in May 1913. Pavlov was elected a Fellow of the Royal Society in 1907. Sherrington was elected a Fellow of the Royal Society in 1893. He served as President of the Royal Society from 1920 to 1925. Photo Credit: ©The Royal Society.

Dennett

Proclaiming mind–body dualism to be “forlorn,” Dennett (1991) nevertheless did revert to using mentalistic language when proposing what he believed to be the prime task of the mind:

The task of a mind is to produce future. . . . A mind is fundamentally an anticipator, an expectation-generator. It mines the present for clues, which it refines with the help of the materials it has saved from the past, turning them into anticipations of the future. And then it acts, rationally, on those hard-won anticipations. (1996, pp. 57–58)

Once again, we see that the ability to anticipate the future by drawing on past experience allows organisms to prepare for adaptive action. Hence, the saying “forewarned is forearmed” figures prominently in the writings of all three authors: for Sherrington, the cerebrum is the best weapon for enabling the individual to conquer its environment through suitable anticipatory actions of the whole organism; for Pavlov, the cerebral hemispheres help the individual more delicately adjust to the complexities of its environment via both isolated autonomic and directed skeletal reactions; and, for Dennett, the mind prepares the organism to engage in rational action.

Empirical evidence on precrastination and anticipatory action

It can hardly be said by those familiar with my research career that it has continually concentrated on anticipatory action, in general, or on precrastination, in particular. Nonetheless, connections can sometimes be gleaned in hindsight that might not otherwise have been evident. That is certainly the case with the work that I will next discuss. In this research review, I will not contend that some shrewd master plan was in play—that would simply be false. Rather, I will follow a strict chronology to provide a more factual narrative of the development of my research in this realm. Let’s start at the beginning.

Wasserman, Carr, and Deich (1978)

This initial experiment, published 40 years ago, sought to answer an interesting question in its own right: namely, when a series of two conditioned stimuli regularly precedes an unconditioned stimulus (CS2–CS1–US), is a direct associative connection formed between CS2 and CS1? We guessed that the answer would be “yes,” but how could this associative connection be behaviorally divulged?

Here, we built on our prior work in autoshaping (Wasserman, 1981). We suspected that pigeons would not only approach and peck a signal that was temporally associated with food, but that they might also be inclined to approach and peck another stimulus that was spatially associated with the location of the upcoming food-paired stimulus.

The two-step experimental design that we devised is depicted in Fig. 2. On any given trial, pigeons in Step 1 were shown a pair of lighted circular (2.5-cm diameter) keys prior to food delivery; for a sample bird, the left and right keys might both be lighted red (randomly, on 20 daily trials) or they might both be lighted green (randomly, on 20 daily trials). Following two red keys for 10 s, the left key in Step 2 would be lighted white for 10 s before food was delivered; following two green keys for 10 s, the right key in Step 2 would be lighted white for 10 s before food was delivered (other birds received the reversed assignment of the visual stimuli and the spatial locations in this experiment and in all of the other similar experiments reviewed in this paper). At no point was pecking any key ever required of the birds for food to be delivered; the food hopper was noncontingently made accessible for 2.5 s (Brown & Jenkins, 1968).

Fig. 2

Sample trials given to pigeons in the two-step autoshaping task of Wasserman et al. (1978)
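To make these noncontingent pairings concrete, here is a minimal sketch of one session’s trial schedule in Python. It is my own illustrative reconstruction, not the original control program; the timings and counterbalancing follow the description above, and every identifier is hypothetical.

```python
import random

def build_session(reversed_assignment=False):
    """Build the 40 daily trials of the two-step autoshaping task.

    Each trial: 10 s of two same-colored side keys (CS2), then 10 s of a
    white key (CS1) on the side signaled by the CS2 color, then 2.5 s of
    noncontingent food. Pecking is never required for food delivery.
    """
    color_to_side = {"red": "left", "green": "right"}
    if reversed_assignment:  # other birds received the opposite mapping
        color_to_side = {"red": "right", "green": "left"}

    colors = ["red"] * 20 + ["green"] * 20
    random.shuffle(colors)  # red and green trials occurred randomly

    return [
        [("step 1", f"both side keys {color}", 10.0),
         ("step 2", f"white key on the {color_to_side[color]}", 10.0),
         ("food", "hopper raised noncontingently", 2.5)]
        for color in colors
    ]

session = build_session()
print(session[0])  # inspect the three events of the first trial
```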

Yet the pigeons did learn to peck the red, green, and white illuminated keys. Overall, their pecking was more frequent to the white light, probably because it occurred in closer temporal proximity to food delivery than did the red or the green lights (the white light preceded food by 10 s, whereas the red and the green lights preceded food by 20 s). But the pigeons also robustly pecked the red and green lights. And, most critically for the question at issue, the earlier mentioned sample bird primarily pecked the left key when the two keys were lighted red and primarily pecked the right key when the two keys were lighted green. Across all four of the experimentally naïve pigeons during the final 2 days of the 10-day investigation, 74% of the birds’ pecks during red and green were to the key on which the white light would next be presented. In short, we had successfully documented that CS2–CS1 associations were being established during CS2–CS1–US pairings.

Nonetheless, we did not fully appreciate or discuss the importance of the anticipatory spatial responding that we had used to assess CS2–CS1 association formation. That appreciation would only come decades later when, in laying the foundation for other research projects, we tried to obtain clear evidence of discrimination learning that did not require differential feedback for correct and incorrect responding—what is commonly called “unsupervised” learning (see Castro, Wasserman, & Lauffer, 2018, for recent work directly studying such learning).

Brooks (2010, Experiment 1)

In his dissertation research in the Iowa Comparative Cognition Laboratory, Dan Brooks conducted several experiments in discrimination learning that deployed variants of the Wasserman et al. (1978) task in order to gain further insight into unsupervised anticipatory responding. Rather than using an autoshaping procedure, Brooks required pigeons to make touchscreen pecks to advance through a variety of multistep tasks (many of his pigeons were “veterans” of other studies, but they had never before been given these tasks nor seen these visual stimuli). Critically, food delivery never depended on which of two or three responses pigeons made during the choice step of a trial. In addition, rather than using switch-activated keys and inline projectors, Brooks conducted all of his work with computer-controlled touchscreens and video monitors (Gibson, Wasserman, Frei, & Miller, 2004).

Consider the task that is outlined in Fig. 3 and that was given to four pigeons in Brooks’s Experiment 1. This variant represents the closest approximation to the Wasserman et al. (1978) task. In Step 1, pigeons had to peck five times to either a red square or a green square in the center of the display (all of the stimuli were 7.4-cm squares). This step centered the pigeons between the upcoming Step 2 choice stimuli and it also permitted recording of the spatial location of pecks to the Step 1 stimulus; Brooks wanted to see if the location of those pecks within the square stimulus area might betray the pigeons’ anticipation of the impending Step 3 stimulus. After the fifth and final peck in Step 1, the center square went off and two laterally presented squares of the same color as the Step 1 stimulus were presented; responses to those Step 2 choice stimuli provided the behavioral results that might most likely replicate those of Wasserman et al. (1978). After a total of five pecks to the Step 2 stimuli, the star stimulus in Step 3 was presented on either the left side or the right side of the screen; following five pecks to the star stimulus, food reinforcement was delivered.

Fig. 3

Sample trials given to pigeons in the three-step color discrimination task of Brooks (2010, Experiment 1)
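The key dependent measure in this task is simply the proportion of Step 2 pecks directed to the side where the Step 3 star will appear. A minimal scoring sketch with invented peck records (the function and data are mine, not Brooks’s):

```python
def percent_congruent(pecks):
    """Percentage of Step 2 pecks on the side where the Step 3 star
    will appear; food delivery never requires this anticipatory choice.

    `pecks` is a list of (step2_side, upcoming_star_side) pairs.
    """
    hits = sum(chosen == upcoming for chosen, upcoming in pecks)
    return 100.0 * hits / len(pecks)

# Toy records: 9 of 10 pecks made on the soon-to-be-lighted side
demo = [("left", "left")] * 9 + [("right", "left")]
print(percent_congruent(demo))  # -> 90.0
```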

Despite the absence of differential food reinforcement contingencies for pecks to the left and right stimuli during Step 2, the pigeons learned to direct their pecks to the spatially congruent key in Step 2 that would next be illuminated in Step 3. All four pigeons quickly came to choose the spatially congruent key on over 80% of the trials during the first half of training, with individual birds reaching that level of performance in only one, two, three, and five daily (110-trial) sessions. During Sessions 7 to 12, the pigeons’ mean percentage of pecks to the spatially congruent key hovered near the extremely high score of 90%.

Also revealing was the location of pigeons’ pecks during Step 1. Here, within the confines of the 7.4-cm square center stimulus, the birds progressively came to respond to the left area of the square when the upcoming Step 3 stimulus was later to appear on the left side of the screen and to respond to the right area of the square when the upcoming Step 3 stimulus was later to appear on the right side of the screen. As quantified by the nonparametric rho statistic (Bamber, 1975), the anticipatory shift in Step 1 responding was statistically significant and was highly correlated (R = .85) with the anticipatory shift in Step 2 responding as training unfolded.
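The reported R value is just the session-by-session correlation between the two anticipatory measures. A minimal sketch of that computation, using invented per-session scores purely for illustration (the actual data appear in Brooks, 2010):

```python
from statistics import correlation  # Pearson's r; requires Python 3.10+

# Hypothetical percentage-anticipatory scores, one per training session
step1_scores = [52, 58, 66, 71, 75, 80, 83, 85, 88, 90, 89, 91]
step2_scores = [55, 63, 72, 80, 84, 88, 90, 91, 92, 90, 93, 92]

print(round(correlation(step1_scores, step2_scores), 2))
```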

Clearly, Brooks’s experimental method proved to be extremely sensitive to pigeons’ anticipatory spatial responding, with such responding holding in both Steps 1 and 2 of his three-step paradigm. Given that success, Brooks proceeded to conduct several additional pigeon experiments, one of which will be discussed shortly. However, before proceeding to that investigation, it will be useful to describe one more experiment that Brooks conducted, with rats, to assess the species generality of this anticipatory responding.

Brooks (2010, Experiment 2)

In this investigation, Brooks deployed the same basic procedure with eight Long-Evans rats as he had with pigeons in Experiment 1 to see if rats, too, exhibit a penchant to move toward places where food-reinforced responding would soon be required. Although this strain of rats has pigmented eyes, Brooks chose to use black-and-white stimuli because of the rats’ possibly poor color vision. Half of the rats (orientation group, n = 4) received training with stimuli that varied in orientation (vertical vs. horizontal grids; see Fig. 4); the other half of the rats (brightness group, n = 4) received training with stimuli that varied in brightness (dark vs. light dot density displays; see Fig. 5). Sessions comprised 120 trials (half with each stimulus) and just one contact of the touchscreen was required for rats to advance from one step to another in each trial.

Fig. 4

Sample trials given to rats in the three-step orientation discrimination task of Brooks (2010, Experiment 2)

Fig. 5

Sample trials given to rats in the three-step brightness discrimination task of Brooks (2010, Experiment 2)

As was true for his pigeons, Brooks’s rats showed a robust tendency to direct their responding toward the spatially congruent Step 2 stimulus within the first few sessions of training. This rapid shift occurred for both types of discrimination task: orientation and brightness. Over Sessions 8 to 10 of training, rats in Group Orientation averaged over 80% anticipatory choice responding and rats in Group Brightness averaged over 90% anticipatory choice responding.

As was also true for the pigeons, the rats’ anticipatory Step 1 and Step 2 behaviors rose in close concert with one another. For Group Orientation, the correlation between Step 1 and Step 2 spatial discrimination scores across the 10 sessions of training was R = .94; for Group Brightness, the correlation between these scores was R = .97.

Brooks (2010, Experiment 11)

Having thus established the species generality of anticipatory spatial responding, in Experiment 11, Brooks returned to pigeons in order to learn more about the situational generality of anticipatory spatial responding.

He noted that, in all of his earlier experimental designs, the pigeons had to shift their responding in Step 1 to another location in Step 2—a shift that the birds might not otherwise have chosen to make. Hence, the question arose: What would happen if no spatial shift in responding was required to advance from Step 1 to Step 2?

This question prompted a single procedural change from his Experiment 1: He presented three stimuli rather than two stimuli in Step 2, as illustrated in Fig. 6. In this expanded spatial arrangement, the left and right stimuli continued to be available in Step 2; however, a third, centrally located stimulus was added that visually matched the two side stimuli. This spatial arrangement permitted the pigeons to persist in responding to the central stimulus during Step 2, now shifting only after Step 2 to peck the Step 3 stimulus and collect food reinforcement.

Fig. 6

Sample trials given to pigeons in the three-step color discrimination task of Brooks (2010, Experiment 11)

To appreciate the importance of this procedural change, consider the response options in Step 2 on red trials. Pigeons could choose to respond to the right stimulus. That choice would require moving from the center stimulus to the right stimulus in Step 2 (call this a one-stride shift) and moving again from the right stimulus to the left stimulus in Step 3 (call this a two-stride shift), thereby requiring substantial movement effort overall (call this a three-stride shift).

The pigeons’ other two response options would be far less effortful. Critically, these two options would involve equally effortful one-stride response trajectories. Suppose that the pigeons chose to respond to the center stimulus in Step 2, just as they had in Step 1. That choice would involve no lateral movement, but it would require moving from the center stimulus in Step 2 to the left stimulus in Step 3, thereby involving a single-stride shift late in the trial. Instead, suppose that the pigeons chose to respond in Step 2 to the left stimulus. That choice would require moving from the center stimulus in Step 1 to the left stimulus in Step 2, involving a single-stride shift. Now, the pigeons would be in the perfect position to respond again to the left stimulus in Step 3. So, this choice option also involves a single-stride shift overall, but early in the trial.
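This effort bookkeeping reduces to summing the lateral moves between successive response locations. The following sketch verifies the stride counts for a red trial; the numeric position coding is mine and purely illustrative.

```python
def total_strides(path):
    """Sum the lateral moves across a Step 1 -> Step 2 -> Step 3 path.

    Positions are coded left = 0, center = 1, right = 2; Step 1 is
    always the center, and on a red trial the star appears on the left.
    """
    return sum(abs(b - a) for a, b in zip(path, path[1:]))

LEFT, CENTER, RIGHT = 0, 1, 2
print(total_strides([CENTER, RIGHT, LEFT]))   # away and back: 3 strides
print(total_strides([CENTER, CENTER, LEFT]))  # late switch:   1 stride
print(total_strides([CENTER, LEFT, LEFT]))    # early switch:  1 stride
```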

It was quite unlikely that the pigeons would choose the first, highly effortful option, and they did so on only 10% of the trials. However, given two other, equally effortful options, which one would the pigeons choose? In Session 1 of training, the six pigeons in the experiment were slightly more inclined to peck the center stimulus in Step 2 (thereby making the center to side stimulus switch late in the trial) than to peck the side stimulus in Step 2 (thereby making the center to side stimulus switch early in the trial): approximately 45% versus 30% of the trials, respectively. However, this trend was rapidly and dramatically reversed; by Session 12 of training, the pigeons’ early switches greatly exceeded their late switches: approximately 80% versus 10% of the trials, respectively.

Wasserman and Brzykcy (2015)

The results from Brooks (2010, Experiment 11) were quite interesting. They suggested that pigeons were powerfully drawn toward the location where the terminal stimulus would be presented, preferring to move there sooner rather than later. Yet that three-stimulus experimental design included one choice option that complicated scoring the pigeons’ behavior and interpreting its theoretical significance, namely, the option involving the pigeons’ moving away from the location where the terminal stimulus would next be presented. Eliminating that complicating option prompted the subsequent study by Wasserman and Brzykcy (2015).

This revised experimental design is depicted in Fig. 7; it deleted from Step 2 the stimulus that, on both red trials and green trials, would have been the pigeons’ most inefficient response option. Now, only early-switch and late-switch options of equal effort were afforded the pigeons, thereby simplifying the birds’ response options.

Fig. 7

Sample trials given to pigeons in the three-step color discrimination task of Wasserman and Brzykcy (2015)

As in Brooks (2010, Experiment 11) and despite the lack of differential reinforcement for doing so, all four of the pigeons in this experiment quickly came to peck the side location in Step 2 where the upcoming star stimulus would next be presented in Step 3—early switching. Over the 14 days of training, the initial rise in early switching was very rapid, with individual pigeons first making over 95% early-switch responses in Sessions 2, 3, 8, and 8. This finding is especially striking given the initial tendency for Brooks’s (2010, Experiment 11) pigeons to continue pecking the center stimulus in Step 2 after having just done so in Step 1—amounting to late switching. In fact, the slowest learning pigeon in the Wasserman and Brzykcy (2015) study initially exhibited just such late-switching behavior, but it subsequently transitioned to the pattern of early-switching behavior.

We had now reached the point of being entirely convinced that something important was involved in all of these projects. We believed that it represented a striking instance of anticipatory spatial responding that was related to autoshaping. But, what might we call it? An answer was suggested by an intriguing paper published in 2014 by Rosenbaum, Gong, and Potts.

The framing for their project involved exploring the economics of human effort. Rosenbaum et al. (2014a) asked college students simply to carry one of two buckets down a walkway: One bucket was located on the left side of the walkway and one bucket was located on the right side of the same walkway. The investigators instructed the students to carry whichever bucket seemed easier to take to the end of the walkway. Based on the law of least effort, students would be expected to choose the bucket that was located closer to the end of the walkway, because it would have to be carried a shorter distance. Surprisingly, most of the students chose to carry the bucket that was located closer to the starting point (and farther from the endpoint), thereby carrying it a greater distance than necessary. When asked why they had made this choice, most of the students reported something like, “I wanted to get the task done as soon as possible,” even though this choice did not actually complete the task sooner.

Working from the common definition of procrastination, Rosenbaum et al. (2014a) termed this unexpected tendency to complete a task sooner rather than later “precrastination.” We now could provide a name for our pigeons’ tendency to choose a response that more quickly brought them into closer physical contact with the final food-paired stimulus, thus entitling our 2015 paper, “Precrastination in the Pigeon.” A subsequent collaborative thought piece further developed this notion and compared the human research project with the pigeon research project (Rosenbaum & Wasserman, 2015).

García-Gallardo, Navarro, and Wasserman (2017)

All of the research that I have reviewed so far has involved an element of choice: one step in the trial sequence forced pigeons to choose one response option rather than another. Was such a choice a necessary feature of the experimental design in order to sensitively measure pigeons’ ability to anticipate responding in one spatial location rather than another? To find out, we adopted a different investigative tactic: namely, to see if pigeons were faster to peck a single stimulus when its upcoming location could be predicted than when it could not.

Figure 8 illustrates the experimental task that García-Gallardo et al. (2017) deployed to answer this question. On each of 120 daily trials, one of three different 6.5 × 6.5 cm stimuli (color fractals) was presented in one of two different locations in the top half of a touchscreen. (We placed the fractal stimuli in two different locations relative to the target stimulus in order to assess the robustness of any spatial predictiveness effect: some trials required pigeons to shift from one side of the touchscreen to the other, whereas others did not.) One fractal color (green on 30 trials) signaled that the 6.5 × 6.5 cm target stimulus (a black-and-white starburst pattern) would appear in the lower left portion of the touchscreen, whereas the second fractal color (blue on 30 trials) signaled that the same target stimulus would appear in the lower right portion of the touchscreen; these two colors reliably signaled the location of the upcoming target stimulus, thereby enabling the pigeon to anticipate and prepare to respond to the target stimulus. (We placed the target stimulus in a different spatial location from the fractal stimulus in order to prevent “spillover” fractal responses from counting as target responses on trials in which the fractal stimulus and the target stimulus appeared on the same side of the touchscreen.) A third fractal color (yellow on 60 trials) signaled that the target stimulus was equally likely to appear in the lower left or the lower right portion of the touchscreen; this color was therefore an unreliable signal for the location of the upcoming target stimulus, thus precluding the pigeons from anticipating and preparing to contact the target stimulus. Finally—and critically for minimizing differential reinforcement for faster responding—pecks to the target were reinforced with food according to a variable interval (VI) 3-s schedule. Because of this VI schedule, faster responding to the target should not lead to food being presented more quickly on trials with the green and blue fractals than on trials with the yellow fractal; the first peck after an average of 3 s delivered the food-pellet reinforcement.
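To see how the VI schedule decouples response speed from the timing of food delivery, consider the following sketch. It is an illustration under my own assumptions (the paper specifies only the 3-s mean interval), with hypothetical names and values throughout.

```python
def food_time(peck_times, armed_at):
    """On a VI schedule, food is delivered by the FIRST peck at or
    after the moment the schedule 'arms'; earlier pecks go unrewarded."""
    return next((t for t in peck_times if t >= armed_at), None)

armed = 3.1  # seconds; one trial's armed interval (3-s mean across trials)
fast_pecker = [i / 5 for i in range(1, 100)]    # a peck every 200 ms
slow_pecker = [float(i) for i in range(1, 20)]  # a peck every second

# Faster pecking gains at most one inter-peck gap; the armed interval,
# not the response rate, dominates when food arrives.
print(food_time(fast_pecker, armed), food_time(slow_pecker, armed))  # 3.2 4.0
```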

Fig. 8

Sample trials given to pigeons in the two-step spatial anticipation task of García-Gallardo et al. (2017)

The design of the experiment thereby permitted us to assess target response speed under identical physical conditions. Consider the leftmost pair of trials in Fig. 8. If a predictive green fractal was presented, then the pigeons were effectively informed that the target would appear directly below; but, if the nonpredictive yellow fractal was presented, then the pigeons were given no information as to the left–right location of the upcoming target. Note that the physical distance from the fractal to the target was the same between trials in the top and the bottom of this leftmost column and within each of the other three columns of trials in Fig. 8. If only the distance between the fractal and the target were to determine the pigeons’ speed of response, then the birds should respond equally quickly in each of the four top-bottom comparisons. However, if the pigeons more quickly contacted the target after the predictive fractal than after the nonpredictive fractal, then it would be fair to conclude that this disparity is due to the predictive relation between the fractal colors and the location of the upcoming target.
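In analysis terms, the design reduces to comparing mean target RTs across cells that hold the fractal-to-target distance constant while varying only the predictiveness of the fractal color. A minimal sketch with invented trial records (all names and values are hypothetical):

```python
from collections import defaultdict
from statistics import mean

# Invented records: (fractal_type, fractal_to_target_distance, rt_in_s)
trials = [
    ("predictive", "same side", 0.9), ("nonpredictive", "same side", 1.4),
    ("predictive", "same side", 1.0), ("nonpredictive", "same side", 1.5),
    ("predictive", "opposite side", 1.1), ("nonpredictive", "opposite side", 1.6),
    ("predictive", "opposite side", 1.2), ("nonpredictive", "opposite side", 1.7),
]

cells = defaultdict(list)
for fractal, distance, rt in trials:
    cells[(fractal, distance)].append(rt)

# Hold distance constant; vary only the fractal's predictiveness.
for distance in ("same side", "opposite side"):
    gain = mean(cells[("nonpredictive", distance)]) - mean(cells[("predictive", distance)])
    print(f"{distance}: predictive fractal speeds the target peck by {gain:.2f} s")
```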

The results confirmed that our new method was quite effective in supporting anticipatory spatial responding. Pigeons exhibited reliably shorter target reaction times (RTs) on trials in which the target stimulus was preceded by a predictive fractal stimulus than on trials in which the target stimulus was preceded by a nonpredictive fractal stimulus. This result was robust and held regardless of the location of the target stimulus: either on the same side of the touchscreen as the predictive stimulus or on the opposite side of the touchscreen from the predictive stimulus. In a later phase of training—when we reversed the target locations that were signaled by the two predictive fractal stimuli—the difference in target RT performance on predictive and nonpredictive trials initially vanished, but it progressively reappeared with further training.

Of additional interest was the fact that the obtained target RT disparity was not caused by differential food reinforcement; the obtained delays of reinforcement between the final peck to the fractal stimulus and the delivery of food were highly similar across trial types, fractal types, and experimental phases. Thus, our data suggest that the pigeons were able to anticipate the upcoming target location based solely on the predictive relationships between the colors of the fractal stimuli and the locations of the target stimuli, thereby implicating Pavlovian rather than operant conditioning processes in our pigeons’ anticipatory spatial responses.

We hasten to note that the García-Gallardo et al. (2017) study was not the first to report enhanced responding to a target stimulus whose location is reliably signaled by other training stimuli. Related findings have been reported in the contextual cueing literature (with pigeons: Couto, Navarro, Smith, & Wasserman, 2016; Wasserman et al., 2014a, 2014b; and with baboons: Goujon & Fagot, 2013). In most contextual cueing tasks, several large individual contexts are associated with several different locations of a target stimulus; RTs to locate and identify the target are faster in these contexts than in contexts where no reliable context–target associations can be formed. In the present experimental task, we greatly limited the number, size, and location of the predictive cues, thereby more tightly constraining the contingencies holding between the antecedent and target stimuli. Despite these procedural differences, each of these experimental paradigms has its merits and should prove to be useful in elucidating the acquisition and dynamics of anticipatory responding.

End-state comfort effect

The previously reviewed research has focused on tasks in which anticipatory action has been measured by the location of an organism’s response. But there are other circumstances in which anticipatory action can be measured by the form of an organism’s response.

A key observation in this respect is that people may differently grasp an object depending on what action they will next perform with it. This response dependency is instantiated in the end-state comfort effect: here, people may adopt what are initially uncomfortable postures in order to adapt to the motor comforts of later task demands (Rosenbaum, Chapman, Weigelt, Weiss, & Van der Wel, 2012; Rosenbaum, Herbort, van der Wel, & Weiss, 2014b; Rosenbaum et al., 1990).

Consider the pair of cup stacking tasks that is illustrated in Fig. 9. In the left panel, grasping the upright left cup with the thumb up is not only motorically comfortable, but it is congruent with placing it inside the right cup. However, in the right panel, grasping the inverted cup with the thumb up would make placing it inside the right cup extremely awkward and uncomfortable. Most human adults therefore choose to grasp the inverted cup with the thumb down and then rotate the wrist in order to place it inside the right cup; so, the more comfortable end-state governs the form of the grasp at the start of the motor sequence.

Fig. 9

Diagram illustrating the end-state comfort effect with humans inserting one cup inside another

It may surprise readers that considerable research into the development of the end-state comfort effect (reviewed by Herbort, Büschelberger, & Janczyk, 2018, and Rosenbaum et al., 2012) has found that children do not comfortably and differentially adjust their initial grasps in several object manipulation tasks until they are somewhere between 5 and 10 years of age. Just what makes it so difficult for young children to do so is a matter of ongoing research and debate.

Also of interest is the comparative psychology of the end-state comfort effect. How many species other than humans exhibit the effect? Rosenbaum and his colleagues were the first to conduct research with nonhuman primates to shed light on the matter. Weiss, Wark, and Rosenbaum (2007) gave cotton-top tamarins an adaptation of the task depicted in Fig. 9. Although tamarins are not believed to use tools in the wild, they suitably and differently grasped the target object with the thumb-up and thumb-down postures characteristic of human adults: in the upright case doing so on 100% of the test trials and in the inverted case doing so on 83% of the test trials. These results clearly and convincingly document the end-state comfort effect in nonhuman primates.

In a later project, Chapman, Weiss, and Rosenbaum (2010) gave a generally similar task to 14 lemurs from six different species. The lemurs suitably and differently grasped the target object with the thumb-up and thumb-down postures: in the upright case doing so on 100% of the test trials and in the inverted case on 38% of the test trials. Although not as striking as the results for the tamarins, the lemurs exhibited an encouraging trend toward the end-state comfort effect; all but one of the six species showed the effect and even a 4-month-old infant did so.

Subsequent research using variants of different, but related tasks has also reported evidence of the end-state comfort effect in rhesus monkeys (Nelson, Berthier, Metevier, & Novak, 2011), capuchin monkeys (Sabbatini, Meglio, & Truppa, 2016), and chimpanzees (Frey & Povinelli, 2012). Thus, a wide range of nonhuman primate species—both close and distant evolutionary relatives of humans—appear to be capable of adaptively reordering the components of motor sequences in a manner that reveals clear anticipatory adjustment.

Precrastination, rationality, and optimality

Recall that Dennett (1991) had pointedly proclaimed the mind’s aim to be rational action. Although there may be obvious adaptive advantages to the anticipatory actions reviewed above, it is not altogether certain whether Sherrington or Pavlov would have insisted that those actions always be rational.

Of course, rationality is a knotty notion, especially when it is applied to the behavior of nonhuman animals. Theorists have proposed several different kinds of rationality: behavioral versus process, formal versus substantive, practical versus theoretical, among others. Yet no commonly agreed upon definition has been forthcoming (Hurley & Nudds, 2006). Even the more materialistic and less controversial term optimality admits of a variety of different mathematical determinations (Houston & McNamara, 1999).

Nevertheless, more than 30 years ago, research in my laboratory explored the merits of an experimental method that permitted us to assess the optimality of pigeons’ choice behavior in an investigation that was inspired by so-called optimal foraging theory (Krebs & McCleery, 1984). The findings from that project bear directly on the rationality of pigeons’ operant responding as well as on the possible intrusion of precrastination into the obtained results.

Bhatt and Wasserman (1987) initially trained pigeons on two different types of reinforcement schedules: (a) a depleting progressive schedule and (b) a multiple schedule that produced food according to four different fixed schedules. The birds were subsequently allowed to choose between the concurrently presented schedules to see if, as predicted by optimal foraging theory, they switched from the progressive schedule to an alternate fixed component of the multiple schedule in a manner that maximized the benefit/cost ratio.

Specifically, during the critical concurrent choice phase, the pigeons were given two circular keys to peck. On all trials, either the left side key or the right side key was randomly illuminated with a horizontal white line; on a random quarter of those trials, the center key could also be illuminated red, green, blue, or yellow. Pecks to the left or right side key produced food according to a progressive ratio (PR) schedule; on it, the number of pecks necessary to produce food delivery progressively increased with each reinforcer collected, with successive reinforcers requiring 10, 30, 50, 70, and 90 pecks. Each of the four color stimuli on the center key was uniquely associated with one of four different fixed ratio (FR) requirements of 20, 40, 60, or 80 pecks to produce each reinforcer. These color–ratio associations were counterbalanced to ensure that, across the eight birds, each stimulus equally often represented the four ratio requirements.

Each daily choice trial involved five food reinforcers. The pigeons were free to respond on either the left or right (PR) key as long as they chose to do so; this key initially required the fewest pecks to deliver food. However, as soon as the pigeons pecked the center (FR) key, the left or right (PR) key was turned off and all of the remaining food reinforcers had to be earned on the center (FR) key.

The experiment was therefore contrived so that there was an optimal solution to the choice problem: switch from the side key to the center key after one reinforcer when the FR schedule was 20, switch from the side key to the center key after two reinforcers when the FR schedule was 40, switch from the side key to the center key after three reinforcers when the FR schedule was 60, and switch from the side key to the center key after four reinforcers when the FR schedule was 80. This optimal pattern of choice responses minimized the work that the pigeons had to expend to collect all of the food reinforcers.
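These optimal switch points follow from simple arithmetic over the two schedules, and the following brief sketch verifies them by brute force; the code and its names are mine, derived only from the peck requirements stated above.

```python
PR_COSTS = [10, 30, 50, 70, 90]  # pecks for successive PR reinforcers

def total_pecks(fr, n_pr):
    """Total pecks for all five reinforcers if the bird earns its first
    n_pr reinforcers on the PR key and the remainder on the FR key."""
    return sum(PR_COSTS[:n_pr]) + (5 - n_pr) * fr

for fr in (20, 40, 60, 80):
    best = min(range(6), key=lambda n: total_pecks(fr, n))
    print(f"FR {fr}: optimal switch after {best} PR reinforcer(s); "
          f"{total_pecks(fr, best)} pecks in total")
```

Any switch earlier than these optima inflates the total peck count above its minimum, which is precisely the inefficiency described next.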

The pigeons did indeed respond in clear accord with the FRs that were scheduled on the center key: Increasingly favorable FR alternatives led to earlier switchovers from the PR schedule to the FR alternative. Nevertheless, for all of the scheduled FR values, the pigeons prematurely switched from the PR schedule to the alternative FR schedule; this premature switching led the pigeons to make more than the minimum number of responses that were necessary to collect the five food reinforcers on each trial. Such demonstrable inefficiency thus entailed a measurable cost, not a benefit, of engaging in anticipatory behavior. In this respect, the pigeons’ early switching behavior clearly parallels the behavior of Rosenbaum et al.’s (2014a) bucket-carrying human research participants.

Overeager anticipation can also be seen to have prompted highly inefficient behavior in a very different kind of discrimination task given to pigeons. Cook and Rosen (2010) taught three pigeons both matching-to-sample and oddity-from-sample discrimination problems in the same session with the same stimuli. Cyan and red colors served as the sample and comparison stimuli in three square areas that were located in the center (sample) and left/right (comparison) areas of the pigeons’ touchscreen. During the first half of each session, matching-to-sample contingencies were in place, whereas during the second half of each session, oddity-from-sample contingencies were in place.

Despite the clear challenge posed by these unsignaled contingency changes, the pigeons did learn to adjust to them. At the beginning of sessions, pigeons came to choose the matching comparison stimulus over 90% of the time, whereas at the end of sessions, pigeons came to choose the nonmatching comparison stimulus over 90% of the time. However, as time passed during the first half of sessions, the pigeons began prematurely choosing the odd comparison stimulus, thereby leading them to make many errors and to lose many food reinforcers. These anticipatory responses clearly indicate that the pigeons were preparing for the next problem to be performed (oddity from sample), but those anticipatory responses were diametrically opposed to the prevailing (matching to sample) contingencies of reinforcement. The birds might therefore be said to have been too smart for their own good; the fierce urgency of now seems to have taken a backseat to the leisurely calm of the future.

Minding the future

Let’s again recall Dennett’s striking proposal: “The task of a mind is to produce future.” Producing future is not an easy task. As noted by the famous New York Yankee baseball player-philosopher Yogi Berra: “It’s tough to make predictions, especially about the future.” A key part of the problem is that doing so critically depends on one’s past experience, as appreciated by one of America’s foremost Founding Fathers, Patrick Henry: “I know of no way of judging the future but by the past.” Finally, there’s the vital matter of effectively adapting to the anticipated future, a consideration stressed by the businessman Arnold H. Glasow: “The trouble with the future is that it usually arrives before we’re ready for it.”

With so much riding on the future and aptly preparing for its arrival, it is no wonder that Martin E. P. Seligman and John Tierney (2017) have expressed such a keen interest in explaining “Why the future is always on your mind.” Underlying their inquiry was a bold claim: namely, that “what sets us apart from other animals [is that] we contemplate the future.” Seligman and Tierney even went so far as to propose renaming our species: “A more apt name for our species would be Homo prospectus, because we thrive by considering our prospects. The power of prospection is what makes us wise. Looking into the future, consciously and unconsciously, is a central function of our large brain.”

Seligman and Tierney did concede that “some of our unconscious powers of prospection are shared by animals, but hardly any other creatures are capable of thinking more than a few minutes ahead.” Notwithstanding this quantitative concession, after entertaining the possibility that chimpanzees can engage in modest acts of prospection, Seligman and Tierney nevertheless insisted that “they are nothing like Homo prospectus.”

The temporal vistas involved in anticipating the future do vary considerably from seconds to minutes to hours to days to months, a prime point made by Seligman, Railton, Baumeister, and Sripada (2016) in their full-length presentation of the Homo prospectus hypothesis. The speculations of Sherrington and Pavlov clearly dealt with those rather brief natural delays prevailing between seeing, hearing, or smelling a distal stimulus and the organism potentially contacting that stimulus. The adaptive neural mechanisms deployed in those particular cases ought to have evolved in concert with such temporally limited contingencies.

Still other exigencies of survival might have prompted the evolution of other, possibly more specialized or advanced adaptive mechanisms. Those mechanisms may pertain to far longer time horizons, consistent with what we commonly construe to be characteristic of planning and self-control. This very proposal was made by Roberts (2012) in his examination of future-oriented cognition in animals (specific investigations of note include those of Raby, Alexis, Dickinson, & Clayton, 2007, studying western scrub jays; Wilson, Pizzo, & Crystal, 2013, studying Sprague-Dawley rats; and Evans & Beran, 2012, studying rhesus monkeys).

Roberts (2012) reviewed both experimental laboratory investigations and field observations of animals’ food gathering, storing, and pilfering as well as studies of animals’ tool selection and use. Overall, Roberts concluded that “there is enough evidence to be convinced that future-oriented cognition can be found in animals” (p. 178). He further noted that many of the clearest demonstrations of anticipation and planning in animals come from species that cache and retrieve food, such as scrub jays, black-capped chickadees, and tayras. “The importance of this observation is that it leads to the hypothesis that evidence for future-oriented cognition in these [species] might have been found because of evolutionary pressures that led to this ability. Animals that cache and later retrieve food from their caches may have to be particularly aware of the possible future fate of their caches” (p. 179).

These observations prompted Roberts (2012) to wonder whether “we might expect to find future-oriented cognition only in species of animals in which their survival depends on anticipation of future outcomes” (p. 179). An alternative to this ecological-specialization hypothesis would be that future-oriented cognition may be a more general adaptive trait or one that has evolved in conjunction with other exigencies of survival. Comparative study along these lines is definitely warranted.

Brain mechanisms and future-oriented cognition

Another avenue for assessing the Homo prospectus hypothesis is to systematically explore the brain mechanisms of future-oriented cognition. McElreath (2018) has interestingly noted that most of the earth’s organisms are brainless, yet they have survived far longer than our own species. That said, our expanded brain has not only afforded us the ability to survive and reproduce, but also to exert an outsized impact on the planet—to both good and bad ends.

The human brain is metabolically expensive to produce and sustain. In addition, the brain has tripled in size from our australopithecine relatives to modern humans, and it is nearly 6 times larger than would be estimated for a placental mammal of equivalent size to humans (González-Forero & Gardner, 2018). So, just what is it that pushed the human brain to expand to its present size?

One possibility has been called the social brain hypothesis (reviewed by Dunbar, 2009). According to this popular hypothesis, more complex social networks require more elaborate neural computing systems in order to anticipate and respond to the behaviors of conspecifics, some of whom are involved in long-term reproductive relationships; still more remote kin relations also span long intervals of time.

Another possibility is that diet, not sociality, is the central driving force for increasing brain size. This ecological brain hypothesis stresses the many dietary challenges that must be confronted in the nonsocial environment: finding, growing, catching, storing, or processing food. Several different lines of evidence are providing mounting support for this hypothesis (reviewed by Rosati, 2017).

González-Forero and Gardner (2018) have recently deployed an elaborate computer model to test the viability of these two accounts. The model incorporated the energy needs of an adult human female to nourish her brain, body tissues, and reproductive activities. It further considered the balance between brain size and body size, recognizing that the brain is a glutton for energy: it constitutes merely 4% of our body weight, but it guzzles 20% of our energy intake.

In this study, several different computer simulations were given a host of ecological challenges: for example, finding food in foul weather, preserving food to prevent spoilage, and storing food during famine or water during drought. Several different social challenges were also given to see how cooperation and competition affected brain and body weight.

The results suggested that ecological pressures were most likely to have increased the size of our brain. The impact of social cooperation and competition proved to be much less important. In fact, cooperation actually produced decreases in brain size, perhaps because this factor reduces the burdens placed on any one individual’s brain. There may really be a strong connection between the evolution of brain mechanisms and future-oriented feeding behavior.

A recent comparative study of animal brain and behavior corroborates that contention. MacLean et al. (2014) quantitatively compared the behavior of 567 individual animals from 36 species on two problem-solving tasks that are often used as measures of self-control: (a) searching in a previously rewarded but currently nonrewarded location and (b) directly reaching for visible but inaccessible food rather than indirectly reaching for and collecting it from another direction. The authors chose these specific tasks “to measure self-control—the ability to inhibit a prepotent but ultimately counterproductive behavior—because it is a crucial and well-studied component of executive function and is involved in diverse decision-making processes” (p. E2141). (Subsequent research by Beran and Hopkins, 2018, sought to assess self-control apart from the participation of inhibition—see ahead.)

The findings of MacLean et al. (2014) disclosed that—across all three dozen tested species—absolute brain volume best predicted performance on this pair of self-control tasks. Absolute brain volume even outperformed a previously touted measure relating brain volume to body mass. (Some caution should be exercised in interpreting these results, as only a small number of the species tested were birds; although birds have small brains, they do exhibit extremely flexible cognition; Güntürkün, Ströckens, Scarf, & Colombo, 2017; Kabadayi, Taylor, von Bayern, & Osvath, 2016.)

Yet the authors’ more strategic and detailed analysis of primate brain and behavior yielded still more remarkable results. Within the nonhuman primates that they studied, dietary breadth robustly predicted self-control behavior, whereas social group size failed to do so, a result that converges with the computational analysis of human brain metabolism later conducted by González-Forero and Gardner (2018). Quoting MacLean et al. (2014): “These results suggest that increases in absolute brain size provided the biological foundation for evolutionary increases in self-control and implicate species differences in feeding ecology as a potential selective pressure favoring these skills” (p. E2140).

An even more recent comparative study of brain and behavior involved 140 nonhuman primate species across all four primate groups: apes, monkeys, lemurs, and lorises. DeCasien, Williams, and Higham (2017) recorded brain size, social complexity, and dietary complexity. They specifically grouped the animals’ diets into four categories: leaves alone; fruit alone; leaves and fruit; and leaves, fruit, and animal protein. Their prime finding was that brain size was larger when fruit or animal protein was included in the primates’ diet; as in earlier studies, the animals’ social behavior proved to be less important.

As a final note, I would observe that comparing brain size and behavioral proxies of “intelligence” across species has long proven to be a difficult and controversial undertaking. Variations in overall brain size or even in the size of particular brain structures may not strongly correlate with specific cognitive processes (Logan et al., 2018). That said, there seems to be little doubt that our cognitive systems have been shaped by the food seeking, storing, preserving, and preparing behaviors of our evolutionary ancestors. Food for thought, indeed!

Many questions remain unanswered

We are still a long way from understanding what determines the nature and timing of learned anticipatory actions, including precrastination. Sherrington’s juxtaposition of brain evolution and distance reception is a fruitful starting point for explicating the origins of the neural mechanisms mediating future-oriented cognition. Pavlov’s notion of signalization provides an excellent framework for studying an individual organism’s learning of anticipatory behaviors. Below are a few of the many questions that remain to be answered as we delve more deeply into these important matters.

When and why do maladaptive anticipatory behaviors emerge?

One need look no further than autoshaping for a case in which learned anticipatory responding can be maladaptive. What at first blush appears to be highly functional behavior—a pigeon directing its responding toward a visual stimulus that has been paired with food—turns downright dysfunctional when such responding emerges and persists even though each response is programmed to cancel the food delivery that would otherwise occur (the omission contingency of Williams & Williams, 1969). One can plausibly argue that the natural evolutionary contingencies that originally shaped this behavior of contacting food-paired stimuli now misfire when unnatural experimental contingencies, contrived in the laboratory, ought to discourage such behavior (Wasserman, 1981).

The case of autoshaping becomes even more complicated when one considers that animals may either approach and contact a food-paired stimulus (sign-tracking) or approach the site where food is about to appear when the food-paired stimulus is presented (goal-tracking), a distinction first made by Boakes (1977). Current research is actively exploring these divergent response tendencies in rodents with an eye toward illuminating their neurobiological underpinnings.

In this vein, Sarter and Phillips (2018) have portrayed these two patterns of behavior as divergent cognitive-motivational styles implicating different neuronal mechanisms characterized by “hot” dopaminergic processing in the case of sign-tracking and by “cold” cholinergic processing in the case of goal-tracking. In their estimation: “The opponent cognitive-motivational styles that are indexed by sign- and goal-tracking bestow different cognitive–behavioral vulnerabilities that may contribute to the manifestation of a wide range of neuropsychiatric disorders” (p. 1). The implications of autoshaping for both adaptive and maladaptive behavior appear to be far-reaching.

Is self-control universally preferable to impulsivity?

Precrastination can surely be said to represent hasty or impulsive responding. The urge to do things sooner rather than later ought to lead one astray on the famous marshmallow test (Mischel, 2014), as well as on other measures of self-control. Is such impetuous responding always to be inhibited in favor of exerting greater self-control? The lore of popular psychology certainly suggests as much. However, that conclusion may not always hold true.

Uziel (2018) has defined self-control as the “ability to resist temptations, regulate emotions, control cognitions, and adjust behavior in the service of overarching long-term goals” (p. 79). High self-control has been strongly advocated as advancing an individual’s adaptation to life’s many challenges by promoting academic achievement, effective social skills, and emotional stability. Reviewing the relevant literature, Uziel concluded that the research findings “have brought researchers to unequivocally argue that high self-control introduces benefits only and that more self-control is always better” (p. 79).

However, there may be downsides to pursuing higher levels of self-control (see Watts, Duncan, & Quan, 2018, for a reconsideration of the long-term benefits of self-control, and Kivetz & Keinan, 2006, for a discussion of the regret that can arise when a virtuous, farsighted choice later prompts wistful feelings of missing out). Other research reviewed by Uziel (2018) indicates that the desire for more self-control can reduce the motivation to succeed when difficult tasks must be performed. In addition, determined efforts to sustain high self-control can produce long-term negative health outcomes. Finally, measures of life satisfaction may be lowered by an individual’s having previously exercised too much self-control. Uziel thus concluded that “research on self-control should deal not only with the benefits of self-control but also with the costs associated with advocating, wanting, and even having high self-control” (p. 79).

Possible trade-offs between impulsivity and self-control have also been highlighted by Beran and Hopkins (2018) in their ambitious research project on self-control and general intelligence in chimpanzees. The key objective of this project was to see whether individual differences in self-control correlated with individual differences in general intelligence. To measure self-control, Beran and Hopkins used a novel hybrid delay task, which not only monitored the initial choice between a small reward received sooner and a large reward received later, but also measured the chimpanzee’s sustained commitment to collecting all of the reinforcers when the large-later option was chosen; to measure general intelligence, they used a battery of diverse cognitive tests that did not directly involve self-control or behavioral inhibition.

The strongest correlation with the composite general intelligence score on the cognitive test battery was with overall task efficiency on the hybrid delay task as measured by the average number of grape reinforcers eaten across all trials—whether the smaller-sooner or the larger-later option was chosen on any given trial. The authors’ main takeaway from the project was that “as is true with humans, chimpanzee g is clearly and consistently related to self-control capacities and particularly to delay of gratification. [T]he fact that such a relation exists in species other than humans likely reflects something foundational about the role of inhibitory, cognitive processes in general intelligence” (Beran & Hopkins, 2018, p. 576).

Additional commentary by the authors directly addressed the possible interplay between impulsivity and self-control, particularly under circumstances in which it may not always be in one’s self-interest to choose the better, but delayed option. The connection to Uziel’s (2018) analysis of human self-control is remarkable:

Sometimes, taking something less preferred but more immediate is important as well. Otherwise, there is the risk that one might show pathological levels of self-control. The present results suggest that, among the generally more “intelligent” chimpanzees, [hybrid delay task] performance reflected this same occasional “failure” to delay, which might instead reflect the occasional need to disengage from self-control. The constant effort to wait for later things can be stressful. Thus, intelligent decision making would reflect the right balance of engaging in delay of gratification when it was best warranted, but not when it was least warranted. (Beran & Hopkins, 2018, p. 577)

A final matter concerns the fixity or flexibility of self-control. Since publication of the classic paper by Ainslie (1975), researchers have sought to understand and possibly to modify impulsivity in both humans and animals, with an eye toward encouraging stronger self-control (Rung & Madden, 2018). Recent research with rats has found that both genetic and experiential factors play key parts in impulsive and risky choice behavior (Kirkpatrick, Marshall, & Smith, 2015). Timing processes also participate, as would be expected both by theories that deem future-oriented cognition important to the evolution of anticipatory responding and by theories that discount the reinforcing value of delayed relative to immediate reinforcement (Ainslie, 1975; Rung & Madden, 2018). Further work on genetic and experiential factors will be critical to elucidating the neurobiological and cognitive foundations of both normal and disordered impulsive choice behavior.

Precrastination versus procrastination?

Distinguishing precrastination from procrastination would appear to be a straightforward matter. Responding sooner rather than later is surely different from responding later rather than sooner. Yet there are inevitable complications to this seemingly clear-cut distinction.

Recall the limited options that we afforded the pigeons in our earlier series of experiments. Even under these conditions, pigeons may sometimes precrastinate and sometimes procrastinate. In my own laboratory, we have observed both precrastination (Wasserman & Brzykcy, 2015) and procrastination (Navarro & Wasserman, 2016) when pigeons were given different experimental tasks. In addition, researchers in other laboratories have reported procrastination in pigeons (Mazur, 1996; Zentall, Case, & Andrews, 2018) given still other tasks. We have yet to determine what accounts for these empirical disparities.

Similar empirical inconsistencies also hold for experimental studies of human behavior. Discovering precrastination was the unique contribution of Rosenbaum et al. (2014a). A recent study by Fournier, Stubblefield, Dyre, and Rosenbaum (2018) nicely replicated and extended that finding; in it, research participants sequenced a pair of object-transfer tasks, either of which could easily or logically be performed before the other. The participants had to transfer several balls from two buckets into a bowl. The buckets were located on either side of a corridor. Participants simply walked down the corridor, picked up one of the buckets, carried it to the end of the corridor, transferred the balls from that bucket into a bowl, carried the bucket back to the start position, and then did the same with the remaining bucket. Here, again, participants exhibited a strong tendency to begin by carrying the nearer bucket, even though both buckets had to be carried to complete the task and the same total work was required whichever bucket was chosen first.

Yet there are also data suggesting that undergraduate research participants may not take the first available route to a goal, but rather the last available route. Christenfeld (1995) reported three relevant experiments in his project. These involved (a) indicating a path through a paper maze, (b) planning a route on a city map, or (c) walking through a college campus. In all three cases, the last available route proved to be college students’ favorite. Reconciling these and other discrepant research findings is surely important if we are to fully understand the relation between precrastination and procrastination.

This kind of empirical approach to the distinction between precrastination and procrastination treats the two behavioral proclivities as fundamentally different. But is that the best way to consider their relation to one another? Perhaps not. Let’s take a different approach.

Faced with a variety of tasks to complete every day, we do have to take some kind of action at different points in time. We may complete some tasks quickly and we may delay completing others. How do we go about making those innumerable momentary decisions?

Consider one example. Imagine a student facing a looming deadline for submitting a term paper at noon on Monday. She may have put off completing this assignment for several weeks. But when a friend phones late Saturday afternoon and suggests going out for a pizza, she opts for eating at the pizzeria rather than taking the time and trouble to prepare something at home. She was going to have to eat dinner anyway, so why not avoid the hassle of whipping up a meal and then cleaning up afterward? She can eat dinner and then come right back home and get to work on the paper. Except that she and her friend linger over a pitcher of cold beer on the warm evening. Next, they stop at an ice cream parlor for a cool and tasty dessert. And, on the way home, they spot a theater showing an enticing film and decide to enjoy the movie in air-conditioned comfort. Finally, after engaging in all of these impulsive activities, the student arrives home at midnight too tired to begin work on the paper. Sunday is still available to get the job done, albeit hastily. What thus began as a single urge to precrastinate gradually evolved into an extended series of precrastinating acts that resulted in a lamentable case of procrastination.

This example prompts me to propose a possibly more general and principled way to address the relation between precrastination and procrastination. This approach centers on the related ideas of impulsivity and self-control, and invokes the hyperbolic discounting of delayed reinforcers introduced by Ainslie (1975) and recently reviewed by Rung and Madden (2018).

Consider Fig. 10. This figure illustrates the psychologically discounted values of small sooner (SS) and large later (LL) reinforcers at various points in time prior to the presentation of those reinforcers. Both the initial and the ultimate values of the large reinforcer exceed those of the small reinforcer. In addition, the value of each reinforcer is low when the reinforcer is presented far in the future, but it rises hyperbolically as its presentation approaches. The key point is that the functions for the SS and LL reinforcers cross. Prior to crossing, the functions predict that organisms will choose the LL over the SS reinforcer (exhibiting self-control); but, after crossing, organisms will choose the SS over the LL reinforcer (exhibiting impulsivity).

Fig. 10

Hyperbolic delay discounting functions for Small Sooner (SS) and Large Later (LL) reinforcers. Discounted values of the reinforcers are depicted along the y-axis and different points in time for the decision maker are depicted along the x-axis. The darker shaded area represents those relatively short anticipatory intervals where the SS reinforcer is more highly valued, thereby leading to “impulsive” choices, whereas the lighter shaded area represents those long anticipatory intervals where the LL reinforcer is more highly valued, thereby leading to “self-control” choices
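To make this crossing concrete, here is a minimal Python sketch of the standard hyperbolic form, V = A/(1 + kD). The amounts, delivery times, and discount rate k are illustrative assumptions chosen to produce round numbers; they are not parameters drawn from any study cited here.

```python
# Minimal sketch of hyperbolic delay discounting, V = A / (1 + k * D),
# illustrating the preference reversal depicted in Fig. 10.
# All amounts, delivery times, and the rate k are illustrative assumptions.

def value(amount, delay, k=1.0):
    """Discounted value of a reinforcer arriving 'delay' time units from now."""
    return amount / (1.0 + k * delay)

A_SS, t_SS = 1.0, 10.0  # Small Sooner reinforcer, delivered at time 10
A_LL, t_LL = 3.0, 20.0  # Large Later reinforcer, delivered at time 20

for t in range(10):  # decision points before the SS reinforcer arrives
    v_ss, v_ll = value(A_SS, t_SS - t), value(A_LL, t_LL - t)
    pick = "LL (self-control)" if v_ll > v_ss else "SS (impulsivity)"
    print(f"t={t}: V_SS={v_ss:.3f}  V_LL={v_ll:.3f}  ->  {pick}")

# With these values, LL is preferred from t = 0 through t = 5; the two
# curves cross at t = 6 (both values equal 0.200, so the tie falls to SS
# in this script); and from t = 7 onward the imminent SS reinforcer
# dominates, yielding impulsive choice.
```

Raising k steepens both curves and shifts the crossing point, which suggests one way that individual differences in impulsivity might be quantified within this rubric.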

The natural anticipatory intervals involved in distance reception, as considered earlier in connection with Sherrington and Pavlov, were hypothesized to be relatively short; spotting a mate or a rival should afford a rather limited time to approach or withdraw, respectively. Therefore, the discounting functions associated with those events would best be represented by the SS function in Fig. 10; discrete signals for those events would fall within the more darkly shaded region of the figure.

It is then a simple matter to predict what should happen if, with an LL reinforcer expected, a signal for an SS reinforcer were to be given at different points in time. Just that is illustrated in Fig. 11. In that figure, two different scenarios are depicted: the signal for the SS reinforcer is given early in the LL interval (SS1: t1–t2) and the signal for the SS reinforcer is given late in the LL interval (SS2: t3–t4). Early receipt of the SS1 signal would be predicted to lead to impulsivity because the SS1 option is always valued more than the LL option. Late receipt of the SS2 signal would be predicted to lead to self-control because the LL option is always valued more than the SS option.

Fig. 11

This figure addresses the question: Given an expected LL reinforcer, what would the effect be of signaling an SS reinforcer at different points in time prior to presentation of the LL reinforcer? Two scenarios are depicted: The signal for SS is given early in the LL interval (SS1: t1–t2) and the signal for SS is given late in the LL interval (SS2: t3–t4). Early receipt of the SS1 signal would lead to impulsivity because the SS1 option would always be valued more than the LL option, whereas late receipt of the SS2 signal would lead to self-control because the LL option would always be valued more than the SS2 option
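A worked example, using the same hyperbolic form and the same illustrative values as in the sketch above ($A_{SS} = 1$, $A_{LL} = 3$, $k = 1$), and assuming the SS reinforcer arrives one time unit after its signal, shows why the two scenarios diverge:

$$\text{Early signal, LL still 20 units away:}\quad V_{SS} = \tfrac{1}{1+1} = 0.50 \;>\; V_{LL} = \tfrac{3}{1+20} \approx 0.14$$

$$\text{Late signal, LL only 2 units away:}\quad V_{SS} = \tfrac{1}{1+1} = 0.50 \;<\; V_{LL} = \tfrac{3}{1+2} = 1.00$$

Early in the LL interval, the signaled SS reinforcer outvalues the still-distant LL reinforcer at every moment, so the SS option captures choice; late in the interval, the now-imminent LL reinforcer cannot be overtaken.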

The implications for precrastination and procrastination are telling. Early receipt of the SS1 signal should lead to an impulsive choice to produce the SS reinforcer. That impulsive choice might be said to represent precrastination because it allows the decision maker to focus on obtaining an imminent but fleeting reward; interestingly, it might also be said to represent procrastination because it delays working toward receipt of the LL reinforcer. This is precisely the plight of the procrastinating student described earlier. Note as well that, late in the game, there is still a chance to produce the LL reinforcer; that choice becomes clear because no value of the SS2 reinforcer exceeds that of the LL reinforcer. So, the procrastinator still has time to snatch at least a Pyrrhic victory from the jaws of defeat—a hastily prepared paper is better than no paper at all.

This suggestion certainly needs greater development, but it might prove helpful in establishing a more quantitative approach to the intricacies of anticipatory action. At the least, such a step would not qualitatively separate precrastination from procrastination; instead, it would place them within a single theoretical rubric for which a large database already exists (Rung & Madden, 2018).

Can precrastination help us resolve the avoidance paradox?

Better appreciating the role that responding sooner rather than later may play in psychological science might even help us solve some lingering theoretical puzzles. Perhaps insights into precrastination could help crack the avoidance paradox.

Simply put, the paradox of avoidance is that when an organism responds to a danger signal in advance of a noxious event, the event is not presented. How, then, can the event’s nonoccurrence promote and sustain the avoidance response?

Many answers have been given, with little consensus emerging despite decades of research and theory (reviewed by Krypotos, Effting, Kindt, & Beckers, 2015). One possible answer springs from Sherrington’s and Pavlov’s original discussions of distance reception and responding to natural conditioned stimuli.

We might begin by noting that a danger signal usually involves one or more features of the noxious event itself, as in the sight, scent, or sound of a rival. Initially, the organism might have to struggle in order to break free and flee the clutches of the rival. After a few altercations, however, the organism could anticipate the noxious contact from the distal stimuli emitted by the rival and flee in advance of the encounter, thereby avoiding injurious contact altogether. Avoidance may thus be tantamount to anticipatory escape.

Of course, this explanation may very well be far from complete (see Krypotos et al., 2015, for a review of key findings in the avoidance learning literature). In particular, it requires that distal danger signals—whether natural or artificial—retain their value despite infrequent pairing with physical contact, assuming that the avoidance response is regularly being performed.

One possible way to mitigate this limitation is to appreciate that danger signals may serve both as sources of information and as triggers of affect. For instance, on the way home after working out at my club, I routinely lower my car’s visor on sunny mornings before making a left turn toward the rising sun; doing so prevents the bright sunlight from shining directly into my eyes. However, virtually no affect is involved in this avoidant act. My lowering of the visor may simply have advanced from a response to the sunlight striking my eyes to a response to otherwise neutral spatial stimuli that regularly precede the upcoming sunlight. Strong affect may not have to play a prominent part in avoidance behavior so long as the informational value of the signal is supported by even infrequent pairings with the noxious stimulus; the key spatial signals along my route are present even on cloudy days.

The merits of this proposal and others derived from precrastination have yet to be tested; they might well prove to be useful.

Closing comments

The brain is . . . a kludge . . . [A] design that is . . . an inefficient and inelegant agglomeration of stuff, which nonetheless works surprisingly well. The brain is not the ultimate general-purpose supercomputer. It was not designed at once, by a genius, on a blank piece of paper. Rather, it is a very peculiar edifice that reflects millions of years of evolutionary history. In many cases, the brain has adopted solutions to particular problems in the distant past that have persisted over time and been recycled for other uses or have severely constrained the possibilities for further change. (Linden, 2008, p. 6)

I began this paper by reviewing Sherrington’s perceptive proposals as to the origin and function of the distance receptors and the corresponding evolution of the brain. Insofar as biologically significant stimuli are concerned, detecting these events at a distance was a groundbreaking evolutionary step that afforded organisms a vital adaptive advantage: by engaging in suitable anticipatory behavior, contact with appetitive stimuli could be hastened and contact with aversive stimuli could be delayed or averted. Pavlov relatedly proposed that stimuli which were either naturally or artificially associated with biologically significant events might themselves come to serve the same distal signaling function, further contributing to the intricacy and flexibility of an organism’s adaptive anticipatory actions.

Behavioral control by distal stimuli entails a temporal vista appropriate to the sense involved: vision, audition, or olfaction. This time window ought to depend on the stimulus and the organism involved, but it would not be estimated to extend beyond a span ranging from several seconds to many minutes. We might therefore expect organisms to react promptly and decisively to natural or artificial conditioned stimuli, thereby making precrastination the default response option and leading organisms to abide by the gist of Benjamin Franklin’s stern warning: “Don’t put off until tomorrow what you can do today.” In a world of unforgiving contingencies, there may be no tomorrow!

Yet as Linden (2008) suggested in the above quotation, specific vicissitudes of survival might have prompted even further brain evolution. This evolution may have effectively expanded the organism’s temporal vista, thereby allowing it to engage the more demanding processes of self-control and inhibition.

Fascinating recent research strongly indicates that finding, caching, or processing food may have instigated the neural and behavioral changes necessary for both people (González-Forero & Gardner, 2018) and animals (MacLean et al., 2014; Roberts, 2012) to contend with events extending far into the future. So, future-oriented cognition might have contributed not only to increased success in meeting the many ecological challenges involved in feeding, but also to meeting other ecological and social demands.

Finally, as with most generally adaptive traits arising from a trial-and-error evolutionary process, there may be negative accompaniments of increased self-control. Homo prospectus and other future-oriented organisms might rely too heavily on vital resources or response options becoming available at a future time. Waiting for that time to come might thereby seed procrastination—a possibly maladaptive response option, as in the case of the procrastinating paper writer.

Precrastination may be a new term in the psychologist’s lexicon, but it may be a proclivity with a long evolutionary history. Placing precrastination within the general rubric of anticipatory action may yield important insights into both adaptive and maladaptive behavior. Looking far into the future is anathema to precrastination. So I will not now endeavor to do so. I will, nonetheless, be on the lookout for increasing interest in the matter; it seems likely to come.