Introduction

The stressors we encounter in our daily lives can have a profound negative impact on cognitive performance. A critical determinant of the stress-cognition relationship may be variability in the extent to which individuals respond to stressors in a manner that is adaptive (appraising a stressor as a challenge) or maladaptive (appraising a stressor as a threat) (Blascovich & Tomaka, 1996; Dienstbier, 1989; McEwen & Sapolsky, 1995). Adaptive stress responses have generally been associated with enhanced cognitive performance whereas maladaptive stress responses have generally been associated with impaired cognitive performance (e.g., Blascovich, Mendes, Hunter, & Salomon, 1999). However, might there be aspects of cognition for which a maladaptive stress response is actually adaptive? Given the diverse nature of cognition, it is likely that any stress-related change in performance depends upon the cognitive systems that are being recruited to perform the particular task. This raises the intriguing possibility that maladaptive stress responses may lead to enhanced cognitive performance if the appropriate cognitive system is recruited.

Stress-response variability and cognition

Clearly there are many types of stressors one might encounter, but our focus is on a ubiquitous stressor in modern life: social evaluation. Performance situations in which we are evaluated by others in a domain of personal importance, and are motivated to do well, elicit a physiological and psychological stress response (Dickerson & Kemeny, 2004). The vast majority of the stress-cognition literature focuses on the relationship between the intensity of this stress response and the cognitive system mediating task performance. These studies generally find that increased stress is associated with impaired cognitive performance on tasks taxing working memory and declarative memory (Lupien, Maheu, Tu, Fiocco, & Schramek, 2007). There are, however, numerous reports of increased stress being associated with enhanced cognition (e.g., Smeets, Giesbrecht, Jelicic, & Merckelbach, 2007). Although this literature is important in demonstrating that the impact of stress may depend upon the cognitive system mediating performance, interpretation is complicated by tasks differing in the nature of the information that is learned (e.g., verbal vs. nonverbal), the role of awareness (e.g., implicit vs. explicit - Graf & Schacter, 1985), the processing requirements (e.g., data driven vs. conceptually driven - Roediger, 1990) and the nature of the stress response.

Variability in the extent to which individuals experience an adaptive or maladaptive stress response is likely to affect the stress-cognition relationship. Whether the stress response is adaptive or maladaptive depends critically upon an individual’s appraisal of the situation (Lazarus & Folkman, 1984) – that is, whether individuals are challenged or threatened by the stressor (Blascovich & Tomaka, 1996). Physiologically, both responses activate the sympathetic-adrenal-medullary axis and result in increases in heart rate and left ventricular contractility. Adaptive responses are marked by appraising the stressor as a challenge and by increased cardiovascular efficiency: increased cardiac output (CO) and decreased total peripheral resistance (TPR). In contrast, maladaptive responses are characterized by appraising the stressor as a threat and by decreased cardiovascular efficiency. Due to activation of the hypothalamic-pituitary-adrenal (HPA) axis, vasodilation is attenuated in threat, leading to decreased, or little change in, CO and increased TPR (Blascovich, 2008). Increased threat reactivity is associated with worse cognitive performance than challenge (Blascovich et al., 1999; Kassam, Koslov, & Mendes, 2009). Although these studies are important in demonstrating that variability in the nature of the stress response is an important determinant of the stress-cognition relationship, they are limited in that they ignore variability in the cognitive system mediating performance.
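To summarize the two cardiovascular profiles, the toy sketch below (Python) classifies a reactivity pattern from the signs of CO and TPR change. The categorical thresholds are our own illustrative assumption; in this literature (and in our analyses below), challenge and threat reactivity are treated as continuous, and both profiles presuppose task engagement (i.e., increased heart rate over baseline).

```python
def reactivity_pattern(delta_co, delta_tpr):
    # Toy classifier of challenge vs. threat profiles from cardiovascular
    # reactivity (hypothetical zero thresholds; real analyses treat these
    # measures continuously). Assumes task engagement (elevated heart rate).
    if delta_co > 0 and delta_tpr < 0:
        return "challenge"  # increased CO, decreased TPR
    if delta_co <= 0 and delta_tpr > 0:
        return "threat"     # flat/decreased CO, increased TPR
    return "mixed"
```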

We argue that both variability in the nature of the stress response and variability in the cognitive system mediating task performance should be considered in order to fully understand the stress-cognition relationship. We focus on a specific type of cognitive task: category learning (i.e., the process of establishing a memory trace that improves the efficiency of assigning novel objects to different groups). Category learning is a particularly useful paradigm given our goals because there is extensive evidence suggesting that processing can be biased towards different cognitive systems by simply manipulating the structure of the categories without any changes in how the dependent measure (i.e., the categorization response) is assessed (Ashby & Maddox, 2005). By using categorization, we can examine whether the relationship between threat and performance depends upon the system that is recruited and avoid the aforementioned limitations of previous studies investigating the stress-cognition relationship.

Categorization as a model task

To begin, consider the information-integration (II) categories in Fig. 1a. Learning in II tasks is thought to be mediated by a procedural-learning system that incrementally acquires associations between stimuli and the appropriate categorization response (Ashby, Alfonso-Reese, Turken, & Waldron, 1998). Learning in rule-based (RB) tasks (Fig. 1b), in contrast, is thought to be mediated by a hypothesis-testing system that learns to attend to the relevant dimension (i.e., bar width) and the optimal placement of the decision criterion on the relevant dimension (Ashby et al., 1998). The hypothesis-testing system, unlike the procedural-learning system, is highly dependent upon working memory and executive functions (e.g., Waldron & Ashby, 2001). Increased threat (as indexed by increased HPA axis activation) is associated with impaired performance on working memory tasks (e.g., Schoofs, Preuß, & Wolf, 2008). Therefore, increased threat reactivity would be expected to impair the hypothesis-testing system, resulting in reduced accuracy on an RB task.

Fig. 1

a Information-integration and b rule-based category structures. Each point in the graph represents a Gabor pattern (i.e., a sine-wave grating in which contrast is modulated by a circular Gaussian filter) of a particular spatial frequency (bar width) and orientation (bar angle). Open circles represent category A stimuli and filled squares represent category B stimuli. The solid line is the decision strategy that would maximize accuracy (i.e., the optimal decision strategy). The insets are example Gabor patterns. On each trial of the experiment, a stimulus was displayed and the participant pressed a key (labeled “A” or “B”) indicating category membership. Immediately following the response, corrective feedback was given. The participants were instructed that at first they would be guessing, but to use the corrective feedback to help them learn the correct classification by trial-and-error. The tasks and procedure were adapted from Markman et al. (2006)
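To make the two structures concrete, the following minimal sketch (in Python) generates II- and RB-style category structures of the kind shown in Fig. 1. The means and covariances here are illustrative assumptions, not the actual stimulus parameters, which were adapted from Markman et al. (2006).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40  # stimuli per category (illustrative)

def ii_categories():
    # Information-integration: the optimal boundary is diagonal, so accurate
    # responding requires integrating frequency and orientation.
    cov = [[120.0, 100.0], [100.0, 120.0]]             # assumed covariance
    a = rng.multivariate_normal([40.0, 60.0], cov, n)  # assumed means
    b = rng.multivariate_normal([60.0, 40.0], cov, n)
    return a, b  # optimal strategy: respond A if orientation > frequency

def rb_categories():
    # Rule-based: the optimal boundary is a criterion on a single dimension
    # (frequency), which is easy to verbalize.
    cov = [[80.0, 0.0], [0.0, 300.0]]                  # assumed covariance
    a = rng.multivariate_normal([40.0, 50.0], cov, n)
    b = rng.multivariate_normal([60.0, 50.0], cov, n)
    return a, b  # optimal strategy: respond A if frequency < 50
```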

Importantly, the hypothesis-testing and procedural-based systems are hypothesized to operate in parallel and to compete for control of the observable categorization response across trials (Ashby et al., 1998). Initially, the hypothesis-testing system is in control, but control will generally shift in favor of the procedural-based system in II tasks (e.g., Ell & Ashby, 2006). Because of this competition, manipulations designed to interfere with the hypothesis-testing system can actually facilitate learning in II tasks (DeCaro, Thomas, & Beilock, 2008; Maddox, Love, Glass, & Filoteo, 2008; Markman, Maddox, & Worthy, 2006; Worthy, Markman, & Maddox, 2009). Thus, increased threat reactivity would be expected to facilitate the procedural-based system, resulting in enhanced accuracy on an II task.

Method

Overview

As previous research has demonstrated that category learning tasks in and of themselves are unlikely to be physiologically arousing (Blascovich et al., 1999), we first subjected all participants to a social stressor in order to induce physiological arousal (and allow differentiation of challenge and threat reactivity) that would carry over into the category learning task. Immediately following the stressor, participants were randomly assigned to complete either the II or the RB task. We hypothesized that the more threatened participants were, the better they would perform on the II task and the worse they would perform on the RB task. Specifically, we predicted that increases in stress appraisals, increases in TPR, and decreases in CO would be associated with better performance on the II task and worse performance on the RB task.

Participants and procedure

Participants (N = 33; 31 female; age: M = 22.70, SD = 7.16) arrived for a study on “Health and Performance,” and sensors to monitor cardiovascular and hemodynamic reactivity were applied (ECG: electrocardiogram; ICG: impedance cardiogram; BP: continuous blood pressure). Participants then relaxed for a 5-min baseline.

Social stressor

To induce physiological arousal, participants performed a modified version of the Trier Social Stress Test (TSST - Kirschbaum, Pirke, & Hellhammer, 1993) in front of two evaluators (one female, one male) trained to display flat affect and a neutral facial expression throughout the test. Participants met the evaluators, the task instructions were explained, and participants were then left alone to prepare for 5 min (anticipatory stress). The evaluators returned and guided the participant through speech (5 min), interview (5 min), and serial subtraction (5 min) tasks.

Threat appraisal

To assess the extent to which participants found the social stressor threatening, we asked participants (during the social stressor, following speech preparation) the extent to which they found it: stressful, demanding, effortful, and distressing. Responses were made on a 0 (not at all) to 6 (very much) scale and were averaged to form a reliable index of threat appraisal (α = .87).
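The reliability index reported here is the standard Cronbach’s alpha. For reference, a minimal sketch of that computation (ours, not the authors’ analysis code) is:

```python
import numpy as np

def cronbach_alpha(items):
    # items: (n_participants, n_items) array of ratings on the four
    # threat-appraisal items (stressful, demanding, effortful, distressing).
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)
```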

Categorization tasks

Immediately following the stressor, participants were randomly assigned to complete five 80-trial blocks in either the II or RB categorization task (see Fig. 1 for details).

Cardiovascular reactivity measures

ECG, ICG and BP data were recorded using BioPac hardware and analyzed with BioPac’s AcqKnowledge software. We calculated the average for heart rate (HR), CO (stroke volume × heart rate), and TPR (80 × mean arterial pressure / cardiac output) during baseline (last 4 min), the stressor (all 15 task min), and the category learning task (first 5 min) (Footnote 1). We then created reactivity scores by subtracting the baseline average from the stressor and category learning averages. Thus, positive numbers indicate a rise in HR, TPR, or CO, while negative numbers indicate a decline.
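As a concrete illustration of these formulas and the reactivity scores, here is a minimal sketch (with assumed units and hypothetical inputs; the actual signal processing was done in AcqKnowledge):

```python
import numpy as np

def cardiac_output(stroke_volume_ml, heart_rate_bpm):
    # CO = stroke volume x heart rate; assuming SV in mL and HR in bpm,
    # dividing by 1,000 yields CO in L/min.
    return stroke_volume_ml * heart_rate_bpm / 1000.0

def total_peripheral_resistance(map_mmhg, co_l_per_min):
    # TPR = 80 x mean arterial pressure / cardiac output
    # (dyn*s/cm^5, given MAP in mmHg and CO in L/min).
    return 80.0 * map_mmhg / co_l_per_min

def reactivity(task_values, baseline_values):
    # Reactivity score: task-period average minus baseline average.
    # Positive values indicate a rise over baseline; negative, a decline.
    return np.mean(task_values) - np.mean(baseline_values)
```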

Results

Preliminary analyses

Prior to testing our hypotheses we first needed to establish: (1) that our social stressor was indeed stressful, and (2) that there was not a failure of random assignment (i.e., differences in cardiovascular reactivity at baseline, during the stressor, or during category learning).

Was the social stressor equivalently stressful?

Participants in both the II and RB tasks appraised the stressor as equivalently threatening [t(31) = 1.61, p = .12], rating it above the midpoint of the scale (overall M = 3.92, SD = 1.28). We also observed significant increases in heart rate over baseline during the stressor [M = 90.10, SD = 13.10; Baseline: M = 72.90, SD = 10.80; F(1, 26) = 116.8, p < 0.05], a prerequisite for examining patterns of challenge and threat. Heart rate remained elevated above baseline during the categorization tasks [M = 78.10, SD = 12.70; F(1, 26) = 22.20, p < 0.05]. Finally, we did not observe any differences between the II and RB tasks on HR, CO, or TPR at baseline, during the stressor, or during the categorization task (all |t|’s < 1.30, p’s > 0.20).

Hypothesis testing

We utilized moderated regression analyses to test the hypothesis that the relationship between threat and accuracy would differ by categorization task. Task performance was assessed as the average percent correct across the five blocks (Footnote 2). Separate analyses were conducted for our different threat indices: threat appraisals, TPR reactivity during the categorization task, and CO reactivity during the categorization task. On Step 1 we entered the main effects of task (RB = 0; II = 1) and threat index (centered at the mean). On Step 2, we entered the interaction between task and threat index (Footnote 3). Significant interactions were followed up by examining the significance of the simple slope for the II and RB tasks. The simple slopes and intercepts were derived from the overall model (Aiken & West, 1991) and graphed using estimated values at high (1 SD above the mean) and low (1 SD below the mean) values of the reactivity measures.
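As an illustration of this analysis pipeline, the sketch below (Python with statsmodels; the data frame and variable names are hypothetical) runs the two-step moderated regression and derives the simple slopes from the overall model:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: accuracy (percent correct), task (RB = 0, II = 1),
# and one threat index (e.g., threat appraisals).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "accuracy": rng.normal(70, 10, 33),
    "task": rng.integers(0, 2, 33),
    "threat": rng.normal(3.9, 1.3, 33),
})
df["threat_c"] = df["threat"] - df["threat"].mean()  # center at the mean

# Step 1: main effects only; Step 2: add the task x threat interaction.
step1 = smf.ols("accuracy ~ task + threat_c", data=df).fit()
step2 = smf.ols("accuracy ~ task * threat_c", data=df).fit()
delta_r2 = step2.rsquared - step1.rsquared  # increment due to interaction

# Simple slopes derived from the overall (Step 2) model:
# RB (task = 0): b_threat; II (task = 1): b_threat + b_interaction.
b = step2.params
print("RB simple slope:", b["threat_c"])
print("II simple slope:", b["threat_c"] + b["task:threat_c"])
```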

Threat appraisal

Consistent with previous research, overall accuracy was higher on the RB task (M = 77.98, SD = 8.76) than the II task [M = 63.43, SD = 7.24; β = –.66, p < 0.001; Step 1: R² = .47; F(2, 30) = 13.56, p < 0.001]. There was no main effect of threat appraisals on accuracy (β = .08, p > 0.50). Consistent with hypotheses, the relationship between threat appraisal and accuracy depended upon the categorization task [β = .52, p < 0.01; ΔR² = .12; F(1, 29) = 8.59, p < 0.001; see Fig. 2]. Threat appraisals were significantly associated with enhanced accuracy on the II task (β = .41, p < 0.05) and impaired accuracy (although not significantly so) on the RB task (β = –.31, p = 0.09).

Fig. 2

Average percent correct as a function of the categorization task and threat appraisals

TPR

As in the previous analysis, the main effect of task was significant while the main effect of TPR was not [β = .15, p = .26; Step 1: R² = .65; F(2, 21) = 19.36, p < 0.001]. Consistent with hypotheses, the relationship between TPR reactivity and accuracy depended upon the categorization task [β = .41; Step 2: ΔR² = .06, F(1, 20) = 4.36, p = .05; see Fig. 3]. Recall that increases in TPR during the categorization task are consistent with threat. In the II task, the more TPR increased the higher the accuracy (β = .71, p < 0.01). In contrast, TPR was not significantly associated with accuracy on the RB task (β = –.22, p = .51).

Fig. 3

Average percent correct as a function of the categorization task and total peripheral resistance reactivity

CO

As with the previous analyses, only the main effect of task was significant on Step 1 [CO: β = –.14, p = .26; R² = .71; F(2, 21) = 25.13, p < 0.01]. Consistent with hypotheses, the relationship between CO reactivity and accuracy depended upon the categorization task [β = –.47; Step 2: ΔR² = .06, F(1, 20) = 4.75, p = 0.04; see Fig. 4]. Recall that decreases in CO during the categorization task are consistent with threat. In the II task, the more CO decreased the higher the accuracy (β = –.67, p = 0.02). In contrast, CO was not significantly associated with accuracy on the RB task (β = .33, p = 0.33).

Fig. 4

Average percent correct as a function of the categorization task and cardiac output reactivity

Model-based analyses

Given the finding that increased threat reactivity was associated with higher accuracy on the II task, we next examined whether increased threat reactivity also predicted the use of more optimal decision strategies on the II task. To test this hypothesis, we fit three types of decision-bound models to the last block of data from each participant (see Maddox & Ashby, 1993 for details of the models and fitting procedures). One type of model assumed that participants used a task-appropriate, information-integration strategy (e.g., the solid line in Fig. 1a). Two types of models assumed that participants used a task-inappropriate strategy: either a rule-based strategy (e.g., the solid line in Fig. 1b) or guessing. Next, we computed the point-biserial correlation between the best-fitting model type (task appropriate or inappropriate) and each of the three reactivity measures. For all three measures, the results were consistent with predictions. The more threatened participants were, the more they utilized task-appropriate strategies on the II task [Threat Appraisal: r(18) = .66, p < 0.01; TPR: r(13) = .57, p < 0.05; CO: r(13) = –.48, p = 0.09]. In sum, increased threat reactivity was associated with enhanced accuracy and task-appropriate strategy use on the II task.
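For readers unfamiliar with this approach, the following rough sketch (in Python) illustrates the logic of the model comparison. It is a simplified stand-in under our own assumptions, not the Maddox and Ashby (1993) fitting code: each candidate strategy is fit by maximum likelihood, the winning model is chosen here with AIC as a stand-in for their model-selection statistic, and strategy type is then correlated with a reactivity measure.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, pointbiserialr

# x: (n, 2) stimulus coordinates (frequency, orientation); resp: 0/1
# category responses from one participant's final block (hypothetical).
def nll_linear(params, x, resp):
    # Task-appropriate II strategy: a linear boundary integrating both
    # dimensions, with Gaussian noise on the decision variable.
    a, b, c, sd = params
    z = (a * x[:, 0] + b * x[:, 1] + c) / max(abs(sd), 1e-6)
    p_b = norm.cdf(z)
    p = np.clip(np.where(resp == 1, p_b, 1 - p_b), 1e-9, 1 - 1e-9)
    return -np.log(p).sum()

def nll_rule(params, x, resp, dim=0):
    # Task-inappropriate RB strategy: a criterion on a single dimension.
    crit, sd = params
    p_b = norm.cdf((x[:, dim] - crit) / max(abs(sd), 1e-6))
    p = np.clip(np.where(resp == 1, p_b, 1 - p_b), 1e-9, 1 - 1e-9)
    return -np.log(p).sum()

def nll_guess(params, resp):
    # Guessing: a fixed probability of responding B on every trial.
    p_b = np.clip(params[0], 1e-9, 1 - 1e-9)
    return -np.log(np.where(resp == 1, p_b, 1 - p_b)).sum()

def best_model(x, resp):
    # Fit each model by maximum likelihood; compare with AIC = 2*nll + 2*k.
    fits = {
        "ii": (minimize(nll_linear, [1, -1, 0, 1], args=(x, resp),
                        method="Nelder-Mead").fun, 4),
        "rb": (minimize(nll_rule, [x[:, 0].mean(), 1], args=(x, resp),
                        method="Nelder-Mead").fun, 2),
        "guess": (nll_guess([resp.mean()], resp), 1),
    }
    aic = {m: 2 * nll + 2 * k for m, (nll, k) in fits.items()}
    return min(aic, key=aic.get)

# Point-biserial correlation between strategy type (1 = task-appropriate)
# and a reactivity measure across participants (hypothetical arrays), e.g.:
# appropriate = np.array([best_model(x_i, r_i) == "ii" for x_i, r_i in data])
# r, p = pointbiserialr(appropriate, tpr_reactivity)
```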

Discussion

The present study provides the first evidence, to our knowledge, that a maladaptive threat response is associated with enhanced performance on a cognitive task. We found a consistent pattern of enhanced performance across three different markers of threat in response to a psychosocial stressor: threat appraisals, TPR reactivity, and CO reactivity. Our predictions were motivated by the hypothesis that category learning is mediated in part by a competition between hypothesis-testing and procedural-based systems (Ashby et al., 1998). We proposed that threat impairs the hypothesis-testing system and, consequently, should lead to enhanced performance on II tasks that recruit the procedural-based system. In contrast, RB tasks recruit the hypothesis-testing system, and therefore performance should be impaired by threat. Although there was a trend for increased threat appraisals and reactivity to predict impaired performance on the RB task, these results were not statistically significant.

The hypothesis-testing system recruited for RB tasks is constrained to use rules that are easily verbalizable and, as a consequence, cannot learn II tasks. Computationally, the procedural-based system uses a nonparametric classifier that is capable of mimicking any linear decision boundary (e.g., the striatal pattern classifier - Ashby & Waldron, 1999). In contrast to the hypothesis-testing system, the procedural-based system is therefore far more flexible during learning and can eventually learn both RB and II tasks (Ashby et al., 1998). Consequently, even though competition would predict that the hypothesis-testing system is inhibiting the procedural-based system on the RB task, increased threat reactivity may have offset this inhibition, thereby providing a compensatory mechanism for learning on the RB task and weakening the association between threat and accuracy.

It is also possible that the absence of threat-related effects in the RB task is related to methodological issues. For instance, it may be that RB tasks in which a single, unidimensional decision criterion must be learned do not place sufficient demands on working memory resources (Ell, Ing, & Maddox, 2009) for the effect of increased threat reactivity to be detected. Alternatively, the impact of threat reactivity on RB tasks may be less robust than on II tasks and require greater statistical power to detect.

An attractive feature of our design is that it enabled us to examine the consequences of acute stress reactivity as it carried over into the subsequent cognitive task. This approach to investigating the stress-cognition relationship mimics many real-world situations (e.g., performing your job after a stressful meeting with your supervisor). A potential concern with our design is that the categorization task, and not the stress test, was driving the stress response. It is unlikely, however, that the categorization task itself would induce arousal (i.e., an increase in heart rate), let alone a pattern of threat reactivity. For example, participants performing a categorization task in the absence of social evaluation demonstrated no appreciable cardiovascular reactivity from baseline (Blascovich et al., 1999). Furthermore, one could argue that because the II task was more difficult it was also more threatening. Importantly, however, we did not observe any differences in physiological reactivity between the two categorization tasks.

We elected to use a correlational design to investigate whether threat would predict increased cognitive performance. Now that this relationship has been demonstrated, future work could seek to manipulate threat vs. challenge patterns of physiological reactivity and examine the consequences for cognitive performance. It should be noted, however, that while contexts can be created that are more or less likely to elicit threat responses, individual variability in the stress response is likely to remain. For instance, even though we utilized a classic stressor known to activate the HPA axis (Kirschbaum et al., 1993), we still observed substantial variability in the stress response. Future work could also manipulate the point at which participants experience the stressor. For example, would increased threat reactivity post-acquisition benefit performance during a delayed retention test (see Koessler, Engler, Riether, & Kissler, 2009 for a related approach in memory retrieval)?

In sum, we report the novel finding that a maladaptive threat response predicts enhanced performance on a cognitive task. Importantly, our results suggest that it is critical to consider how individual differences in the nature of the stress response interact with the cognitive system mediating task performance. Studies focusing on variability in the nature of the stress response have found that threat is associated with impaired cognitive performance (e.g., Blascovich et al., 1999; Kassam et al., 2009). Studies focusing on variability in the cognitive system mediating task performance have found inconsistent effects of stress across different cognitive systems (Lupien et al., 2007). Although this literature does consider variability in the magnitude of the stressor (Sapolsky, 2004) or the magnitude of the stress response (Lupien et al., 2007), there is little consideration of the nature of the stress response (i.e., challenge vs. threat). Indeed, individual differences in the stress response may help explain the inconsistent results across studies investigating the stress-cognition relationship. Importantly, this interdisciplinary approach opens up new avenues of investigation for both psychophysiologists and cognitive scientists interested in understanding the stress-cognition relationship.