Introduction

Impulsivity is a robust risk factor for addiction [1,2,3,4]. It is relatively stable and measurable from a young age, providing opportunities for early interventions seeking to reduce addictive behaviors [5]. However, impulsivity-targeted interventions for addictive behavior have produced mixed results, with many trials finding no benefit over non-targeted intervention [6,7,8,9,10,11]. Much of the evidence informing these interventions comprises correlational studies administering a self-report or behavioral measure of impulsivity. While informative, association studies cannot test causal mechanisms, and identifying such mechanisms is essential to developing effective targeted treatments [12]. Preclinical human studies of impulsivity can play a unique role in elucidating the key mechanisms through which impulsivity causes addictive behavior. This review summarizes recent advances in preclinical research on impulsivity relevant to addiction. Implications for theory development and intervention are discussed.

Impulsivity is a predisposition to rapid (approach) behavior in response to internal/external stimuli without regard to negative consequences [13,14,15]. Within the context of addiction, the importance of two core dimensions has emerged across biological, behavioral, and self-report studies [3, 16,17,18,19,20,21,22]. The first relates to the strength of the approach impulse requiring inhibition, which we refer to here as reward drive, and relates to reward processing and the mesolimbic dopamine system [23]. Many models posit such a dimension that is variously referred to as appetitive motivation, sensation seeking, choice impulsivity, or reward-delay impulsivity [17]. The second dimension relates to the capacity for inhibition of a prepotent approach response due, in part, to the (lack of) consideration of negative consequences, which we refer to here as rash impulsiveness. It relates more to prefrontal inhibitory control mechanisms and the orbitofrontal and anterior cingulate cortices in particular [23]. Many models posit such a dimension that is variously referred to as impulsivity, (low) constraint, disinhibition, response impulsivity, or (lack of) premeditation [17]. Despite the heritability and stability of the two dimensions [5, 24], there is also high within-person variability in the expression of trait-related behaviors [25], providing opportunities for experimental research.

Biologically based theories of personality conceptualize traits like impulsivity as individual differences in baseline thresholds of activation to specific classes of stimuli (e.g., reward, punishment [26, 27]). Gullo and colleagues [28••] argued that this conceptualization implies that impulsivity can be experimentally induced by external stimuli (“state impulsivity”), irrespective of an individual’s average frequency of impulsive behaviors (“trait impulsivity”), which reflect baseline thresholds of activation. Reward-driven and rash-impulsive behavior could be experimentally induced by exposure to relevant cues, revealing important information on causal mechanisms involved in addictive behavior.

Overview of Latest Findings

This review focuses on preclinical human research on impulsivity, alcohol, and food/eating published from 2017. Laboratory studies involving alcohol self-administration that included a measure of impulsivity were of interest, with studies seeking to induce or model impulsive consumption prioritized (see Table 1). For food/eating, studies investigating reward drive and rash impulsiveness in disordered eating and food addiction using surveys or experimental designs were examined (see Table 2) along with new impulsivity-targeted interventions.

Table 1 Summary of recent laboratory studies of alcohol self-administration including a measure of impulsivity or seeking to induce impulsive consumption
Table 2 Summary of recent studies investigating the role of rash impulsiveness and reward drive in food addiction and disordered eating

Alcohol

Gullo et al. [28••] conducted the first comprehensive investigation of the causal effect of impulsivity on adolescent alcohol consumption with the Experimental Paradigm to Investigate Impulsive Consumption (EPIIC). This paradigm allowed the measurement of increased drinking in response to impulsivity arising from three theoretically derived psychological processes: reward-seeking (n = 40; induced by reward cue exposure; a film clip of a “fun” social context), disinhibition (n = 40; induced by ego depletion), and negative affect (n = 40; induced by mood induction). Participants (18- to 21-year-olds, 50% female) were allocated to one of these three arms and received the corresponding experimental manipulation in one testing session (e.g., reward cue exposure) and a control in the other session 1 week later (e.g., neutral cue exposure; counter-balanced). A within-subjects design was utilized to control for the influence of body weight, sex, alcohol metabolism, drinking history, and other factors on laboratory alcohol consumption [42]. The impact of the impulsivity manipulations on behavioral processes related to reward drive (self-reported reward-seeking) and rash impulsiveness (disinhibition: Stop Signal Reaction Time; SSRT) was measured prior to alcohol consumption. EPIIC also modeled the effect of peer influence as a between-subjects variable, with half of participants undergoing both experimental sessions in the presence of a trained, gender-matched, heavy-drinking confederate. Alcohol consumption (ml) was measured using the Cocktail Taste Rating Task (C-TRT), a bogus taste test of 3 alcoholic cocktails (700 ml each; vodka and soda, 6.6% alcohol by volume) during an (undisclosed) 15-min period, ensuring no ceiling effects. Gullo et al. [28••] reported that C-TRT consumption predicted next 7-day drinking quantity (r = .44, p < .001) and frequency (r = .19, p = .04), and had good temporal stability (1 week) in spite of experimental manipulations designed to affect consumption (r = .47, p < .001). EPIIC consumption was also predicted by self-reported impulsivity (trait and state reward drive) and alcohol-related problems.

The EPIIC paradigm revealed that only reward-related impulsivity caused heavier drinking [28••]. Bryant and Gullo [43] replicated this effect in a subsequent study of 18- to 25-year-olds with a between-subjects design. Importantly, the reward cue exposure effect was mediated by change in reward-seeking, not craving or disinhibition [28••]. That is, exposure to (non-alcohol) reward cues increased drinking by increasing a generalized reward drive. While the induction of negative mood did increase disinhibition (SSRT), there was no increase in alcohol consumption. Furthermore, change in disinhibition was unrelated to change in laboratory drinking. The “disinhibition” manipulation (ego depletion), which utilized the crossing-out letters task, did not increase disinhibition, reward-seeking, or laboratory alcohol consumption [45]. While it is possible that method variance might have differentially affected associations involving mechanisms of reward drive (self-report) and disinhibition (SSRT), findings are consistent with a meta-analysis by Jones et al. [44•], who found appetitive cue exposure (alcohol/food) does not increase disinhibition. The presence of a heavy-drinking peer significantly increased alcohol consumption, but did so in an additive fashion. It did not strengthen or weaken the effect of any of the induced impulsive states on drinking. Similar findings have been reported in risk-taking experiments, whereby the presence of peers has been found to directly increase risk-taking, but not to moderate the influence of other manipulations, including disinhibition [35, 46].

Subsequent preclinical alcohol studies have also found no causal effect of disinhibition. As in Gullo et al. [28••], Lindgren et al. [39] employed the crossing-out letters task and found no effect on alcohol consumption during a 10-min beer TRT in which participants were incentivized to limit consumption. The crossing-out letters task also failed manipulation checks, in that it did not increase “exhaustion” and was not perceived to require more self-control than a control task. Looby et al. [36] found no effect of a complex working memory task on subsequent consumption in a 10-min beer TRT. They did report a moderating effect of executive functioning, such that individuals with lower functioning were more likely to drink after the complex task. However, this effect should be interpreted with caution: even though it was found for ml consumed, it was not found for the primary outcome of interest (number of sips), and the study’s small sample size (N = 24) and between-groups design increased the risk of Type I error for this interaction effect. Furthermore, each of these studies reported problems with confirming the validity of the disinhibition manipulation employed, despite its use in previous studies.

McNeill et al. [34••] employed direct brain stimulation that was shown to causally increase disinhibition, as measured using SSRT. They employed continuous theta burst transcranial magnetic stimulation (TMS) of the dorsolateral prefrontal cortex in a sample of 80 college students. This increased both disinhibition and alcohol consumption on a beer TRT. However, mediation analysis revealed no association between changes in disinhibition and changes in alcohol consumption. These results replicate a smaller preliminary study by the same group [33] and conceptually replicate Gullo et al.’s [28••] observation that manipulations increasing disinhibition (SSRT) do not lead to increased TRT consumption in the absence of a concurrent increase in reward drive. While Gullo et al.’s [28••] disinhibition effects were caused by negative mood induction, serving as a potential confound, McNeill et al.’s [34••] TMS effect was independent of mood. Interestingly, they found dorsolateral prefrontal cortex TMS effects on drinking were instead mediated, in part, by increased alcohol craving—a mechanism related to reward processing [47, 48].

Another line of preclinical research has focused on impaired control over drinking, a construct related to rash impulsiveness [49, 50]. Rather than seeking to induce impulsive consumption, the Impaired Control Alcohol Self-Administration Paradigm (ICASP; [51]) introduces disincentives for heavy consumption during a free access period by imposing a drinking guideline (e.g., no more than 3 drinks [2 for women]) and financial penalties for poor performance on a cognitive task administered after the free access period [32•, 38, 51]. Within this context, excessive alcohol consumption could be considered to reflect impaired control over drinking. The primary measure derived from ICASP is peak estimated blood alcohol concentration (eBAC), calculated as ([number of drinks / 2] × [gender constant / body weight]) − (number of hours × 0.016), where the gender constant is 9 for women and 7.5 for men [32•]. Leeman et al. [32•] utilized ICASP to evaluate the effect of multi-session automatic action tendency (AAT) retraining on young adult alcohol use, finding no effect of AAT retraining on consumption, similar to a previous preclinical study employing a TRT [52].
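The peak eBAC calculation above can be expressed as a short helper function. This is an illustrative sketch, not code from the reviewed studies; in particular, body weight in pounds is an assumption (consistent with common formulations of this equation) rather than a unit stated in the text.

```python
def peak_ebac(drinks: float, weight_lb: float, hours: float, female: bool) -> float:
    """Peak estimated blood alcohol concentration (g/dL), following the
    formula described for ICASP:
        ([drinks / 2] * [gender constant / weight]) - (hours * 0.016)
    Gender constant: 9 for women, 7.5 for men.
    Weight in pounds is an assumption not specified in the source text."""
    gender_constant = 9.0 if female else 7.5
    return (drinks / 2) * (gender_constant / weight_lb) - hours * 0.016
```

For example, under these assumptions, a 150-lb man consuming 4 standard drinks over 2 hours would have a peak eBAC of (4/2) × (7.5/150) − (2 × 0.016) = 0.068 g/dL (i.e., 68 mg%).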

The ICASP possesses unique strengths as an alcohol consumption paradigm, but questions remain over the extent to which it models impaired control. Unique to ICASP is the 3-hour free access period that takes place in an actual bar, enhancing ecological validity. Ecological validity is also enhanced by testing participants in groups of 2–4, introducing a social element common to the setting. However, the lack of control over conversation and social dynamics is a threat to internal validity, and there is evidence these uncontrolled effects not only influence consumption [32•] but, when statistically controlled for, extinguish the effect of the disincentives underlying the paradigm [51]. Another strength of the design is the close observation of participants, allowing for measurement of specific drinking behaviors: drinking duration, interdrink intervals, and number of non-alcoholic drinks ordered. The main ICASP outcome of peak eBAC has been shown to correlate with past-month quantity of alcohol consumption (r = .26, p < .05, n = 69) to a similar degree as C-TRT consumption in EPIIC (r = .27, p < .01, n = 120). However, two studies have failed to find a significant association between ICASP consumption and self-reported impaired control over drinking on an established measure of the construct, and have reported inconsistent associations with self-report measures of drinking problems, raising validity concerns [32•, 51]. This may be the result of the disincentives for excessive consumption being too weak: Leeman et al. [51] reported that 45% of drinkers engaged in excessive consumption despite the financial disincentives, and that their excessive consumption did not lead to significantly poorer performance on the cognitive tasks. Taken together, ICASP may be better considered an ecologically valid alcohol self-administration paradigm rather than an impaired control paradigm.

Wardell and colleagues [38] sought to combine the ICASP’s focus on impaired control with the internal validity strengths of intravenous alcohol self-administration paradigms (ivASA; [53]). While possessing the lowest ecological validity of paradigms reviewed here, ivASA brings benefits in experimenter control over ascending and descending blood alcohol rates, amount, and duration of incremental alcohol exposure, thereby reducing the substantial interindividual variability in alcohol absorption, distribution, and metabolism found in oral self-administration studies [53, 54]. After an initial priming dose (typically 30 mg% BrAC), participants engage in a free access self-administration period during which they can press an electronic button ad libitum to trigger an IV alcohol infusion, up to a predetermined safety limit (typically 100–120 mg% BrAC).

As in the ICASP, Wardell et al. [38] incorporated disincentives for heavy consumption that may have been too weak: ten of 16 (62.5%) participants intended to breach the experimenter-imposed limit of 80 mg% BrAC. Participants were young heavy episodic drinkers selectively recruited to provide a mixture of high and low self-reported impaired control over drinking. Impaired control over drinking was operationalized in two ways: (1) exceeding the experimenter-imposed 80 mg% BrAC limit and (2) exceeding the participant’s self-imposed BrAC limit. However, neither operationalization was associated with self-reported impaired control on an established measure of the construct, nor with AUDIT score or the number of heavy-drinking episodes. Regression analysis controlling for the number of previous attempts at drinking control produced more promising results, with self-reported impaired control significantly predicting violation of both the imposed (OR = 11.23, 95%CI 1.52–462.13) and self-imposed BrAC limits (b = 17.73, 95%CI 1.47–33.98), albeit with wide confidence intervals due to the small sample size. Exploratory analyses suggested that post-priming craving mediated the association between self-reported impaired control and the breach of self-imposed drinking limits. In other words, it may have been the strength of elicited craving (the alcohol-approach impulse), in the context of perceived weak negative consequences, that led to violation of drinking limits.

A number of recent studies have explored the role of impulsivity in predicting ivASA ad libitum consumption. These studies have also reported associations between free access ivASA consumption and indices of problematic drinking, including AUDIT scores [29, 31, 41]. The free access ad libitum version of ivASA has been demonstrated to have good test–retest reliability (r = .66) for peak BrAC (n = 52), although over a highly variable inter-test interval (3–30 days; [29]). In a sample of 159 young social drinkers, Gowin and colleagues [41] found choice impulsivity (delay discounting) predicted a greater rate of ivASA binge drinking, operationalized as exceeding 80 mg% BrAC (hazard ratio = 1.17, 95%CI = 1.00–1.37), as well as total alcohol exposure (U [67, 67] = 2839, p = .008). Stangl et al. [29] reported that delay discounting was associated with peak BrAC (r = .24, p < .05) in 112 adult non-dependent drinkers. However, in 85 German social drinkers (49 men), Obst and colleagues [31] found that self-report measures of trait rash impulsiveness (BIS-11) and sensation seeking (related more to reward drive) predicted ivASA binge drinking in women only, and only cross-sectionally at age 18–19 years (not prospectively at age 21–22 years). Trait measures were tested individually as predictors, preventing examination of unique associations, and attrition at follow-up was related to rash impulsiveness (76.5% retention at age 21–22 years). Cyders et al. [30] also failed to find associations between rash impulsiveness-related traits and ivASA consumption (N = 40).

In summary, recent human preclinical studies employing experimental manipulation of impulsivity find that the causal effect on alcohol consumption is largely driven by reward-related mechanisms. This effect operates independently of, and in addition to, peer influence. Studies also find no causal effect of disinhibition-related mechanisms on alcohol consumption. The preclinical evidence base is confined to non-clinical samples of young alcohol users; replication in older individuals and clinical populations is required. The lack of association between disinhibition and alcohol use may also reflect the low ecological validity of commonly used tasks (e.g., those calculating SSRT), given that self-report measures of disinhibition-related traits (i.e., rash impulsiveness) do tend to correlate with self-reported alcohol use [23]. However, self-report measures of disinhibition-related traits were inconsistently predictive of laboratory alcohol use (see Table 1).

Food

In their meta-analysis of the effect of cues on disinhibited behavior, Jones et al. [44•] reviewed not only alcohol cues but also food cues. Reviews typically show significant moderate associations between self-reported rash impulsiveness (typically measured with the Barratt Impulsiveness Scale (BIS-11; [77]) or the UPPS-P scales [78]) and the compulsion to over-consume typically ultra-processed, high-calorie foods, accompanied by intense cravings, withdrawal, and relapse (often referred to as food addiction and measured using the Yale Food Addiction Scale; YFAS [79]) [4, 80•, 81]. Specifically, across these reviews and subsequent studies (see Table 2), associations between the BIS-11 and YFAS scores typically ranged from .20 to .33, while associations between the UPPS-P scales and YFAS scores ranged from .23 to .55. Associations between self-reported reward drive/sensitivity and YFAS scores in the handful of studies reporting them, however, were typically smaller (.06 to .31) and often non-significant [4, 80•]. As such, the conclusion in relation to food addiction is that inhibitory control impairment and emotion-induced impulsivity, rather than reward drive, play the dominant role.

However, food addiction lies at the extreme end of the spectrum of overeating, with occasional and frequent overeating at the less severe end, and binge-eating and binge-eating disorder subtypes at a moderate (i.e., subclinical) level of severity [82]. Recent systematic reviews [83•, 84•] find reward-related impulsivity to consistently correlate with binge-eating, emotional eating, food responsiveness, general eating pathology, excessive eating, externally driven eating, and self-reported consumption of high-fat food. Further supporting the notion that reward drive plays a role at the subclinical level (as found in the alcohol literature above), and that other processes such as the tendency to seek out appetitive foods and engage in binge-eating episodes play intermediary roles in the pathway to compulsive overeating, Loxton and Tipman [85] found that awareness of food availability and self-reported binge-eating mediated the association between self-reported reward drive and food addiction symptom count. Trait impulsivity is also associated with binge-eating, uncontrolled eating, and food cravings (e.g., [62, 67, 76]; see Table 2), although Loxton and Tipman [85] found that the indirect effect between reward drive and food addiction symptom count held when controlling for self-reported rash impulsiveness.

In sum, the bulk of the research on self-reported impulsivity and overeating has largely supported a role for disinhibition/rash impulsiveness in more extreme compulsive overeating, while reward drive seems to play a role in less severe eating behavior and potentially starts the progression from occasional overeating and snacking to more extreme eating patterns. While useful for identifying those more likely to develop problems with excessive consumption, cross-sectional associations such as those reviewed above offer limited grounds for causal inference. As such, experimental studies manipulating reward-eliciting appetitive cues and/or disinhibition to test their impact on food consumption may help determine critical triggers of overeating. As noted by Van Baal et al. [86], “A disproportionate amount of research on impulsivity has focused on trait-related aspects rather than state fluctuations.” Mapping onto the earlier discussion of experimentally induced reward-seeking and disinhibited behavior in drinking, we now review food consumption/appetitive response using experimental designs.

Previously, it was found that reward-seeking induced by reward cue exposure was associated with an increase in alcohol consumption. In the area of overeating, food cues may be considered potential reward cues that induce reward-seeking behavior (e.g., consumption of tasty foods). In a series of studies investigating the effect of exposure to food cues on the desire to eat and amount of food consumed, Loxton and colleagues exposed participants to junk food advertisements or images in a computer task [57•, 58, 87, 88]. Using a computerized “Expectancies TASK” (ETASK; [89]), participants were initially presented with a series of either food images (typically foods classified as highly appetitive, e.g., hot chips, hamburgers) or a series of non-food, or less appetitive food, images (e.g., uncooked rice, lettuce). Participants then pressed a computer button to indicate their agreement or disagreement with a series of statements regarding their expectancies of eating, such as “Eating… is pleasurable.” Innate reaction time was controlled for using additional statements reflecting how participants see themselves (e.g., “Usually I… am talkative”). Faster responses to the expectancy statements (controlling for innate reaction time) were considered an index of stronger implicit expectancies.
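As a rough illustration of how such an implicit expectancy index might be derived, the sketch below uses a simple difference score: mean reaction time to the self-referent control statements minus mean reaction time to the eating-expectancy statements. This operationalization is an assumption made here for illustration; the ETASK studies [89] may score responses differently (e.g., via regression adjustment for innate reaction time).

```python
from statistics import mean

def implicit_expectancy_index(expectancy_rts_ms, control_rts_ms):
    """Illustrative (assumed) difference-score index for ETASK-style data.
    Faster endorsement of eating-expectancy statements, relative to the
    self-referent control statements that index innate reaction time, is
    taken to reflect stronger implicit expectancies.
    Higher values = stronger implicit expectancy."""
    return mean(control_rts_ms) - mean(expectancy_rts_ms)
```

Under this scoring, a participant who endorses “Eating… is pleasurable” faster than “Usually I… am talkative” would receive a positive index value.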

In the initial study, with 109 university women, Hennegan et al. [87] found self-reported reward drive was associated with implicit eating expectancies generally, and with the expectation that eating is rewarding in particular. Moreover, the association between reward drive and self-reported emotional and externally driven eating was mediated by implicit expectancies that eating is rewarding (external eating) and that eating alleviates boredom (emotional eating). In a subsequent study using the ETASK, Maxwell et al. [58] assigned 119 university women to the same appetitive food cue condition or a neutral condition (colored squares). Reward drive was again significantly associated with faster responding to statements that eating is rewarding after controlling for reaction time to non-expectancy statements (i.e., stronger implicit expectancies), but only for participants exposed to food cues. The implicit expectancy that eating is rewarding again mediated the association between reward drive and externally driven eating. To test the link between reward drive and eating behavior in a more ecologically valid context, Kidd and Loxton [57•] had 98 university students (34 male) view a 30-min documentary with embedded “junk food” advertisements (e.g., fast food restaurants) or neutral advertisements (e.g., cars, travel). Participants then performed a 10-min filler task before being allowed to snack on chocolates while completing personality questionnaires. Reward drive was positively and significantly associated with the amount of chocolate eaten; again, this effect occurred only in participants in the “junk food” advertisement condition. Loxton and Taylor [88] replicated this finding in 100 university students who were allowed to snack on chocolates during the 30-min video.
There was a significant positive association between reward drive and amount of chocolates eaten in the junk food advertisement condition, but no relationship in the neutral advertisement condition. Together, these studies show that exposure to appetitive food images elicits positive beliefs about eating, the desire to eat, and an increase in chocolate consumption in those higher in trait reward drive. These findings parallel those found with alcohol consumption in the EPIIC paradigm [28••].

Inhibitory Control and Food Consumption

The link between disinhibition and food consumption, however, is tentative, with one recent meta-analysis by Jones et al. [44•] finding no evidence of an effect of food cue exposure on inhibitory control after controlling for publication bias, and another by McGreen et al. [90•] finding the association between inhibitory control and food consumption or food choice to be significant but very small (r = .09). Similar to Jones et al., McGreen et al. found the association between SSRT and food consumption (r = .15) was larger than the association with Go/No-Go performance (r = .03). McGreen et al. noted that these differences may not necessarily be due to differences in the cognitive mechanisms assessed by the two measures (response selection in Go/No-Go and inhibitory control in SSRT; [91]), but may be due to differences in the measurement of food consumed: SSRT studies tended to use objective measures of food consumption (e.g., taste rating test) rather than self-report. Together, the findings support the role of reward drive in eating in response to appetitive food cues, with disinhibited eating supported in cross-sectional survey data but not under experimental conditions.

Implications for Theory and Intervention

Impulsivity-targeted intervention for addiction has largely failed to demonstrate superiority to non-targeted intervention [6, 7, 10, 11, 92, 93]. In some instances, disappointing outcomes occurred in spite of promising human preclinical evidence [93, 94]. It should be noted that meta-analytic reviews have raised concerns about a high risk of publication bias in some of the preclinical evidence and in the cognitive bias modification intervention literature [44•, 95]. It is also notable that the majority of impulsivity-targeted interventions have focused on strengthening inhibitory control, in contrast to what recent preclinical studies indicate might be a better target: reward drive (i.e., the impulses themselves). Indeed, in a clinical study, Coates et al. [7] found that, despite impulsivity-targeted CBT being superior to standard CBT in reducing dysfunctional impulsivity among patients with moderate-to-severe alcohol use disorder, there was no additional benefit in drinking outcomes. This is in line with recent preclinical evidence pointing to the importance of the reward drive dimension in addictive behavior.

Impulsivity-targeted interventions may benefit from a greater focus on reward drive. Consider that naltrexone, one of the most efficacious treatments for alcohol use disorder [96], has been shown in laboratory studies to reduce craving and alcohol stimulation [97], both reward-related mechanisms, but not to improve inhibitory control [98]. In personality-targeted substance use prevention, adolescents high in sensation seeking, a trait that overlaps with reward drive, benefitted the most from trait-targeted intervention [99]. The intervention targets explicit reward-seeking cognitions (i.e., beliefs) and may operate independently of inhibitory control [28••, 44•]. It is also possible that there is a complex, indirect effect of reward drive on substance/eating control through explicit cognition, as proposed in bioSocial Cognitive Theory [100, 101]. Lau et al. [102•] recently demonstrated that experimentally increasing reward drive undermined young adults’ belief in their ability to resist drinking in cued situations, using a reward drive induction similar to one previously shown not to affect actual inhibitory control [28••, 43]. They also found that covertly weakening positive expectancies of alcohol reward reduced the reward drive effect. Thus, impulsivity-targeted interventions may be enhanced by targeting reward drive-related biological and (explicit) cognitive processes that exaggerate the perceived reinforcement value of alcohol/food, expectancies that high reward drive individuals are particularly prone to form [58, 103].

There may also be promise in redirecting reward-seeking impulses to sources other than alcohol/food in those high in reward drive. Juarascio and colleagues [104•] developed an intervention program to redirect the tendency to seek reward from food to other sources. In a telehealth pilot study in mid-2020, 59 women received either reward retraining (n = 29) or supportive therapy (n = 30) across ten 90-min weekly group sessions (approximately 10 participants per group; [104•]). While there was no difference between groups in change in binge-eating frequency or disordered eating, both showed significant improvement through to 3-month follow-up. The authors noted that the supportive treatment was particularly effective compared with previous studies using this control group, which may have been due to widespread COVID-19 pandemic-related lockdowns at the time. Further research is required.

In summary, impulsivity-targeted interventions may be enhanced by a greater focus on the reward drive system. Indeed, effective pharmacological and psychosocial interventions may already be capitalizing on this. However, it should be noted that recent preclinical studies have largely recruited non-clinical adolescent/young adult samples, potentially inflating the role of reward drive in addictive behavior. Also, the use of food-related reward cues, rather than general (non-food) reward cues, may have inflated the association between reward drive and food consumption by tapping into processes related to craving. Some alcohol studies have employed paradigms that use general reward cues (e.g., EPIIC) and do not elicit craving [28••]; this would be a worthwhile line of research to pursue in food addiction. As noted above, in both the substance and food addiction literature, correlational evidence has long suggested that reward drive plays a greater direct role in early and less severe addictive behavior, potentially starting the progression from problematic use to more extreme use, but not necessarily maintaining it [23, 103].

Future Directions

Comparatively few preclinical human studies investigate, or even measure, the reward drive-related dimension of impulsivity. This imbalance also exists in the broader impulsivity literature, as noted by Verges et al. [21]: “...researchers interested in examining the relation between types of impulsivity and substance use outcomes should also consider including measures of impulsivity relevant to the Two-Factor Model.” This imbalance is likely due to (1) the focus in laboratory studies on the narrower reward-related process of craving and (2) the poor coverage of reward sensitivity in many commonly used self-report impulsivity measures [17]. While there is a large body of preclinical evidence on craving, craving is distinct from generalized reward drive system activation, and understanding the latter may be particularly important to prevention [28••].

The field has also suffered, until recently, from a lack of reliable means to experimentally manipulate impulsivity to test causation. This is understandable given that the dominance of (non-biological) lexical conceptualizations of personality (e.g., the “Big Five”) led to a widely held view that personality traits, including impulsivity, were static and could not be manipulated. However, biologically based theories of personality predict both stability and change in the expression of trait-relevant behavior, and foundational research in the field involved pharmacological manipulation of neurotransmitter systems [26, 27]. Researchers must, therefore, exercise due care in theorizing and study design to avoid conflating state and trait aspects of impulsivity or, alternatively, erroneously concluding that they are wholly separate [25, 105,106,107]. For example, most self-report measures of impulsivity assess the (heritable) trait, asking about general behavioral tendencies averaged across time and context. It is not appropriate to model such a measure as a mediator of, say, the effect of past-month stress on tomorrow’s alcohol/food consumption; in such a situation, and assuming a prospective design, any variation in self-reported trait impulsivity related to recent stress would reflect measurement error. It is more appropriate to use a measure of state impulsivity, such as one derived from a behavioral task (e.g., stop-signal reaction time; SSRT) or a modified self-report measure (e.g., [28••, 108]). While recognition that impulsivity can exist at both the trait and state levels of analysis brings opportunities for testing causal effects, it also brings challenges in study design and measurement that are best navigated using strong theory.

Perhaps the most promising recent development is the emergence of laboratory paradigms focused on modeling impulsive consumption (see Fig. 1). These paradigms vary meaningfully in internal and external validity and, perhaps most importantly, in the extent to which they can test causal associations between impulsivity and alcohol/food consumption. In addition to more studies that test causation, future research must endeavor to measure hypothesized mediating mechanisms (e.g., reward-seeking, disinhibition) as well as non-hypothesized consumption-related mechanisms that may confound interpretation (e.g., craving, negative affect). Few studies do this, but their inclusion can yield valuable insights (e.g., [34••]). There is also a need for greater detail in the reporting of studies involving ad libitum consumption, especially the range of consumption observed and the presence of any floor/ceiling effects, which have been found in some beer TRT and ivASA studies. Some studies lack detail on the length of the ad libitum period and on how participants who prematurely cease consumption are handled [34••, 39]. Future studies should also report detailed information on the debriefing procedures used to evaluate demand characteristics [28••, 32•, 36]. Ideally, this should be reported separately for the extent of participant awareness that alcohol/food consumption was being measured and awareness that impulsivity was being manipulated. It is preferable that most participants be unaware that their consumption is being measured or their level of impulsivity manipulated [109]. Given that many experimental studies of food consumption use predominantly female participants, the findings may not generalize to men. However, in two of the reviewed studies that included both male and female participants [57•, 88], there was no evidence of a gender effect on chocolate consumption. Nevertheless, given that men are over-represented among those with clinically significant alcohol and other drug problems, and women are over-represented among those with disordered eating, further research would benefit from a considered focus on gender effects in experimental studies.

Fig. 1

Diagrammatic summary of evidence for validity of preclinical human models of impulsive alcohol consumption. Note. Models differ in the extent to which they were designed to specifically model impulsive consumption, which affects dimensional position. (+), paradigm manipulates impulsivity to test causation; EPIIC, Experimental Paradigm to Investigate Impulsive Consumption; ICASP, Impaired Control Alcohol Self-Administration Paradigm; TRT, Taste Rating Task; TMS, transcranial magnetic stimulation

Conclusion

Impulsivity is a complex, multidimensional construct that affects addictive behavior through several mechanisms [3, 16,17,18,19,20,21,22]. Impulsivity-targeted treatments that have produced disappointing outcomes may be targeting the wrong mechanisms, or failing to engage the right mechanisms more effectively than standard care does. Further complicating matters, impulsivity is both a cause and a consequence of addiction [110]. Given this complexity, research that experimentally induces impulsivity in a controlled setting, allowing its effects to be observed precisely, is needed to understand its causal role in addiction. Preclinical models of impulsive consumption are ideally situated to pilot novel pharmacological and psychosocial interventions and thereby inform the next generation of impulsivity-targeted treatment.