Introduction
Non-invasive brain stimulation (NIBS), which involves sending electrical stimulation through the scalp to target sub-cranial regions (Knotkova et al., 2019; Miniussi et al., 2013), is gaining popularity among scientists, practitioners, and vendors (Simons et al., 2016) as a method to achieve cognitive enhancement. NIBS as a cognitive enhancement method has been investigated in many different populations, ranging from the average healthy layperson (Brunyé et al., 2021; Flöel et al., 2008; Hill et al., 2016; Horvath et al., 2015a, 2015b; Mancuso et al., 2016) to military personnel (Brunyé et al., 2020) and clinical populations (Brunyé et al., 2021; Ciullo et al., 2021; Freitas et al., 2011; Hill et al., 2016; Suarez-García et al., 2020). Using NIBS as a cognitive enhancement method is a tantalizing prospect because it is relatively inexpensive, largely safe, and easy to administer (Bikson et al., 2016).
However, findings on whether NIBS techniques enhance cognitive domains are mixed. Some studies have found NIBS to be an effective tool for cognitive enhancement (Au et al., 2016; Dockery et al., 2009; Katz et al., 2017; Morales-Quezada et al., 2015; Parasuraman & McKinley, 2014; Richmond et al., 2014; Southworth, 1999; Vodyanyk et al., 2021). Other studies have found detrimental effects (Brunyé et al., 2018; Pyke et al., 2020) or no reliable effects (Horne et al., 2020; Horvath et al., 2015a, 2015b; Mancuso et al., 2016; Medina & Cason, 2017; Nilsson et al., 2017; Rabipour et al., 2018b, 2019; Talsma et al., 2017; van Elk et al., 2020). These conflicting findings demonstrate how difficult it is to determine whether NIBS can reliably achieve cognitive enhancement.
A potential reason for these mixed results could be a failure to account for expectations of outcomes, which can skew participants’ perceptions of improvement from interventions and possibly lead to positive expectancy, placebo, and placebo-like effects (Benedetti, 2014; Boot et al., 2013; Braga et al., 2021; Foroughi et al., 2016; Rabipour & Davidson, 2015; Rabipour et al., 2017, 2018a, 2018b, 2019; Schwarz et al., 2016; Simons et al., 2016), though this has not been observed in all cognitive enhancement studies (Denkinger et al., 2021; Tsai et al., 2018; Vodyanyk et al., 2021). Outcome expectations could partially or fully explain any cognitive enhancement gains of NIBS interventions. For instance, if participants take part in a study that explicitly states the many benefits of NIBS for cognitive enhancement (e.g., in the consent form or other experimental materials) and are thus primed to believe that the intervention will be very effective, they may invest more time, energy, and resources into the NIBS intervention and associated outcome tasks than those who were not primed or who were primed to have negative expectations about the intervention.
Recent research by Rabipour and colleagues (Rabipour & Davidson, 2015; Rabipour et al., 2017, 2018a, 2018b, 2019) has sought to better understand how expectations of outcomes relate to cognitive enhancement via NIBS. They have investigated NIBS outcome expectations by designing the Expectation Assessment Scale (EAS; Rabipour & Davidson, 2015; Rabipour et al., 2018a), a questionnaire that asks respondents to provide success ratings, and confidence in those success ratings, for seven cognitive domains (e.g., memory) that could be impacted by NIBS cognitive enhancement methods. Researchers have administered the EAS with in-person samples (Rabipour et al., 2017, 2018b, 2019) and with samples recruited entirely online (Rabipour & Davidson, 2015; Rabipour et al., 2017, 2018a), and the questionnaire has reliable psychometric properties in these samples (Rabipour et al., 2018a). When administering the EAS at baseline, individuals report feeling either neutral (Rabipour et al., 2018b, 2019) or somewhat optimistic (i.e., one rating above neutral; Rabipour & Davidson, 2015; Rabipour et al., 2017) toward NIBS for cognitive enhancement, and most people are typically confident in these responses (Rabipour et al., 2017).
Interestingly, participants’ expectations about NIBS for cognitive enhancement may be malleable. After a baseline administration of the EAS, Rabipour et al. (2017) used a within-subjects design to present participants with three sets of messages that primed different expectations about NIBS for cognitive enhancement. These messages set neutral expectations (e.g., “You are about to begin a brain stimulation program”), low or pessimistic expectations (e.g., “You are about to begin a brain stimulation program, designed based on unconfirmed theories about brain function”), and high or optimistic expectations (e.g., “You are about to begin a brain stimulation program designed by neuroscientists, based on work proven effective in scientific studies”) for using NIBS to achieve cognitive enhancement. Compared to baseline, there were significant decreases and increases in participants’ expectations of outcomes in the low and high expectation message conditions, respectively (Rabipour et al., 2017). This finding supports the possibility that the framing of recruitment materials, consent forms, and experimenter instructions could be playing a central role in the mixed results observed in NIBS cognitive enhancement studies.
The aim of the present study within this Registered Report was to understand for whom outcome expectations are most malleable. To achieve this aim, we conceptually replicated Rabipour et al.’s (2017) research with several changes. First, we used a between-subjects design for expectation primes, rather than a within-subjects design as Rabipour et al. (2017) did, to eliminate any possible carryover effects. Specifically, all participants completed the EAS at baseline, and then they completed it once again after being randomly assigned to read a neutral, low, or high expectation prime. We recruited a large enough sample to ensure sufficient statistical power for this design change as dictated by an a priori sample size estimate.
Further, we evaluated a refined version of the EAS. The original EAS included an item on “multitasking ability (i.e., managing multiple tasks at the same time)” (Rabipour & Davidson, 2015; Rabipour et al., 2018a). This definition implies dual-tasking ability, yet multitasking can also be conceptualized as task-switching ability or switching between multiple tasks (Koch et al., 2018; Ward et al., 2019). Task-switching and dual-tasking abilities are similar yet distinct manifestations of multitasking, and it is important to clarify the distinction between these processes because they have different theoretical implications (Koch et al., 2018; Ward et al., 2019). To account for this, we included an additional domain within the EAS to capture expectations surrounding task-switching ability. Rabipour et al. (2017) presented EAS items in a fixed order, leaving results open to potential order effects. To address this possibility in the current study, we presented EAS items in a different random order for each participant. In addition to individually evaluating each EAS item like Rabipour et al. (2017), we calculated a composite EAS by averaging all eight item areas to facilitate a more accessible interpretation of our results.
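As a minimal sketch of the per-participant randomization of item order and the composite EAS score (illustrative Python, not the survey-platform implementation we used; the `item_order_for` helper and its seeding scheme are hypothetical):

```python
import random

# The eight EAS item areas (abbreviated labels)
EAS_ITEMS = [
    "general cognitive function", "memory", "concentration", "distractibility",
    "reasoning ability", "multitasking ability", "task-switching ability",
    "performance in everyday activities",
]

def item_order_for(participant_id):
    """Return a participant-specific random ordering of the EAS items,
    seeded so each participant sees one stable but independent order."""
    rng = random.Random(participant_id)
    order = EAS_ITEMS[:]
    rng.shuffle(order)
    return order

def composite_eas(ratings):
    """Composite EAS: the mean of all eight item ratings."""
    return sum(ratings) / len(ratings)
```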
The present study also focused on cranial electrotherapy stimulation (CES), which involves administering alternating current with two bilateral electrodes typically attached to the earlobes or temples (Knotkova et al., 2019), rather than on NIBS more broadly, as Rabipour et al. (2017) did. We chose to focus on CES because participants are unlikely to hold preexisting expectations regarding CES effectiveness, unlike other methods such as transcranial electric stimulation (tES), which is increasing in popularity, press coverage, and consumer products. Our laboratory is also examining the physiological, neural, psychological, and behavioral influences of CES (Wooten et al., n.d.), and our research procedures will benefit from the outcomes of the current study.
In addition to conceptually replicating Rabipour et al. (2017), we also expanded upon their research in a novel way. We explored how individual differences across various self-reported psychological factors were related to the malleability of outcome expectations. Psychological factors have been explored within the context of cognitive training interventions (i.e., structured practice of cognitive tasks, typically digital), albeit with mixed results (Guye et al., 2017; Harrell et al., 2019; Jaeggi et al., 2014; Minear et al., 2016; Ørskov et al., 2021; Sprenger et al., 2013). Previous studies investigating the effectiveness of NIBS for cognitive enhancement have primarily explored demographics (e.g., age) as individual difference variables (Arciniega et al., 2018; Looi et al., 2016; Talsma et al., 2017), though some studies have found associations between motivation and outcomes (Jones et al., 2015; Katz et al., 2017; Rabipour et al., 2018b). We leveraged prior research on individual differences in psychological factors, primarily from cognitive training studies, to inform potential individual differences across psychological factors that might affect expectations of NIBS for cognitive enhancement.
We were specifically interested in exploring how situational motivation (i.e., intrinsic motivation, identified regulation, external regulation, amotivation), perceived stress, cognition-related beliefs (i.e., need for cognition, growth mindset, self-efficacy), and perceptions of real-world cognition (i.e., cognitive failures, attentional lapses) related to NIBS outcome expectations. An individual’s motivation for (Boot et al., 2013; Jones et al., 2015; Katz et al., 2017; Rabipour et al., 2018b; Simons et al., 2016) and stress toward (Minear et al., 2016; Ørskov et al., 2021) pursuing cognitive enhancement could influence their expectations about NIBS success rates. Regarding cognition-related beliefs, and in particular growth mindset, those who believe their cognitive abilities are fixed may not expect NIBS to be successful for cognitive enhancement or vice versa (Foroughi et al., 2016; Guye et al., 2017; Jaeggi et al., 2014). Similarly, if someone perceives that they commit many cognitive failures and that their cognitive abilities could be improved (or not), they may have higher (or lower) expectations for cognitive enhancement (Harrell et al., 2019; Jaeggi et al., 2014). Investigating if and how various psychological factors are related to NIBS outcome expectations can help identify if certain characteristics are more or less likely to make a person responsive to cognitive enhancement.
To address our aims, we pre-registered our analysis plans in our Stage 1 manuscript (https://osf.io/pysnu/), which received in-principle acceptance from the Journal of Cognitive Enhancement on 6 September 2021, prior to data collection. Any deviations from our approved report are explicitly noted when applicable. For our first analysis, we tested for baseline differences among the expectation prime groups across select demographics, expectation success and confidence ratings, and all psychological constructs. This baseline analysis allowed us to confidently infer whether our obtained results could be attributed to our manipulation and not preexisting group differences. Second, we sought to understand how different expectation primes could affect participants’ expectations of NIBS cognitive enhancement outcomes over time. We predicted that compared to baseline EAS success ratings, those who read the low and high expectation primes would have decreased and increased success ratings on all EAS item areas and the composite EAS, respectively. We did not anticipate any differences between baseline EAS success ratings and post-prime EAS success ratings among the neutral expectation prime group. This predicted pattern of results would replicate findings from Rabipour et al. (2017). Finally, we wanted to further tease apart the aforementioned analysis and investigate whether the interaction between expectation prime and time changed when also considering situational motivation (i.e., intrinsic motivation, identified regulation, external regulation, amotivation), perceived stress, cognition-related beliefs (i.e., need for cognition, growth mindset, self-efficacy), and perceptions of real-world cognition (i.e., cognitive failures, attentional lapses). We did not pre-register any predictions about directionality or selectivity of effects for these additional exploratory analyses.
However, we reasoned that any significant covariates or expectation prime interactions with covariates would signal a potential relationship between the individual difference variable and responses to expectation primes.
Method
Participants
We based our sample size on an a priori power analysis using the R package “pwr2” (version 1.0; Lu et al., 2017). We assumed a small effect size (f = 0.1) with three groups (expectation prime: neutral, low, high), two within-subjects timepoints (baseline/pre-prime, post-prime), alpha (α) = 0.05, and power = 0.90. This calculation resulted in 101 participants per group. Given this was one of the first investigations of expectation malleability for NIBS using a mixed analysis of variance (ANOVA) design, we proposed doubling our calculation to 202 participants per group. We also aimed to recruit an additional 20 participants for each group to account for attrition and data loss. Thus, our target sample size was 666 participants (222 per group).
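The arithmetic behind this target can be retraced in a few lines (the power analysis itself was run with the R package “pwr2”; this Python sketch only reproduces the doubling and attrition padding, not the power computation):

```python
# Per-group n from the a priori power analysis
# (pwr2: f = 0.1, 3 groups, 2 timepoints, alpha = .05, power = .90)
n_from_power = 101
n_doubled = 2 * n_from_power      # doubled as a safeguard -> 202 per group
attrition_pad = 20                # extra participants per group
n_groups = 3                      # neutral, low, high expectation primes

per_group = n_doubled + attrition_pad   # 222 per group
target_n = per_group * n_groups         # 666 total
```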
All participants were recruited through Prolific, an online data collection platform tailored to social and behavioral researchers that offers high-quality data from a diverse and naïve population (Palan & Schitter, 2018; Peer et al., 2014). To be eligible to participate in our study, Prolific workers had to be 18 years or older, live in the USA, and have a 95% or higher approval rating on Prolific. We planned to reject submissions from those who completed the study exceptionally fast, which we defined as three median absolute deviations (MADs) below the median completion time; MADs are less sensitive to outliers than means and standard deviations (Leys et al., 2013). However, no submissions met this criterion. We rejected submissions from two Prolific workers who failed two attention checks: two separate items disguised within separate surveys that directed respondents to select a specific answer (e.g., “Please mark ‘Rarely’ for this question”; Hauser & Schwarz, 2016).
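The speeding criterion reduces to a cutoff three MADs below the median completion time; as a sketch (the completion times below are hypothetical, and the analysis itself was conducted in R):

```python
import statistics

def fast_cutoff(times, k=3):
    """Lower cutoff k median absolute deviations (MADs) below the median;
    submissions faster than this are flagged as exceptionally fast."""
    med = statistics.median(times)
    mad = statistics.median(abs(t - med) for t in times)
    return med - k * mad

# Hypothetical completion times in minutes
times = [18, 20, 21, 22, 22, 23, 25, 27, 30, 4]
cutoff = fast_cutoff(times)                 # median 22, MAD 2.5 -> 22 - 7.5 = 14.5
flagged = [t for t in times if t < cutoff]  # [4]
```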
We recruited a sample of 667 participants and randomly assigned them to the neutral, low, or high expectation prime condition. Per our pre-registration, we excluded 12 participants from data analysis who indicated that they randomly responded during some point of the study (Pennycook et al., 2017; Ralph & Smilek, 2017), which left a total of 655 participants (n = 216 neutral expectation prime; n = 219 low expectation prime; n = 220 high expectation prime), meeting our intent to double the a priori sample size estimate. On average, participants were 30.12 years old (SD = 10.45 years; median = 27 years; minimum = 18 years; maximum = 82 years) (see Table 1 for information on categorical demographics).
For comparison, Rabipour et al. (2017) reported analyzing data from 428 participants that were classified as younger adults (n = 300; age M = 23.19 years, age SD = 5.39 years; 190 women; education M = 14.11 years, education SD = 4.72 years), middle-aged adults (n = 50; age M = 45.28 years, age SD = 6.70 years; 31 women; education M = 15.10 years, education SD = 2.48 years), or older adults (n = 78; age M = 66.58 years, age SD = 5.34 years; 51 women; education M = 14.64 years, education SD = 2.50 years). The weighted average age across Rabipour et al.’s (2017) sample was 33.68 years old (weighted SD = 17.03 years), with 40% (n = 172) of the sample identifying as women. The weighted average education was 14.32 years (weighted SD = 0.35 years). When comparing our results to Rabipour et al. (2017), we focused primarily on their young adult subsample because this subsample was closest in age to our sample.
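The weighted averages above follow directly from the published subsample statistics; as a quick check (an illustrative Python fragment, not part of the original analysis):

```python
# Subsample sizes, mean ages, and mean years of education
# from Rabipour et al. (2017): younger, middle-aged, older adults
ns       = [300, 50, 78]
mean_age = [23.19, 45.28, 66.58]
mean_edu = [14.11, 15.10, 14.64]

def weighted_mean(ns, means):
    """Sample-size-weighted mean across subsamples."""
    return sum(n * m for n, m in zip(ns, means)) / sum(ns)

round(weighted_mean(ns, mean_age), 2)   # 33.68 years
round(weighted_mean(ns, mean_edu), 2)   # 14.32 years
```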
Measures
Expectations of Outcomes
A modified version of the EAS (Rabipour & Davidson, 2015; Rabipour et al., 2018a) was used to measure the expected success of CES cognitive enhancement interventions. The original EAS includes seven items representing seven cognitive domains: (i) “general cognitive function,” (ii) “memory,” (iii) “concentration,” (iv) “distractibility (i.e., lowering how much you lose focus on a task),” (v) “reasoning ability,” (vi) “multitasking ability (i.e., managing multiple tasks at the same time),” and (vii) “performance in everyday activities (e.g., driving, remembering important dates, managing finances, etc.)”. For our purposes, we revised the EAS to include an additional cognitive domain: “task-switching ability (i.e., switch from performing one task to performing another task)”. Participants were asked how successful they would expect NIBS to be at improving the eight respective cognitive domains with the following 7-point Likert scale:
- 1 = Completely unsuccessful: No change in brain activity or noticeable behavior. Such a procedure would be a waste of time and resources.
- 2 = Fairly unsuccessful: Possible changes in specific brain activity (i.e., detectable at the neurological level), yet unnoticeable in daily life. Such a procedure would be a waste of time and resources.
- 3 = Somewhat unsuccessful: Possible changes in general brain activity (i.e., detectable at the neurological level), yet unnoticeable in daily life.
- 4 = I have absolutely no expectations.
- 5 = Somewhat successful: Possible changes in specific brain activity and behavior. Such a procedure would NOT be a waste of time or resources.
- 6 = Fairly successful: Possible changes in general brain activity as well as noticeable behavioral changes.
- 7 = Completely successful: Changes in general brain activity as well as noticeable changes in overall thought and behavior that positively impact daily life. Such a procedure would be a good investment of time and resources.
Participants also indicated if they were confident in their success rating with a yes/no response. Rabipour et al. (2017) individually evaluated each item, where a higher rating indicates a greater expectation of NIBS to successfully enhance the specific cognitive domain. In addition to analyzing the individual items, we calculated a composite EAS by averaging all eight areas. Cronbach’s alphas were α = 0.91 and α = 0.94 for baseline EAS success ratings and baseline EAS confidence ratings, respectively.
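For reference, Cronbach’s alpha can be computed from the item variances and the variance of the total scores; a minimal sketch with hypothetical ratings (not our data, and our analyses were run in R):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale.
    items: one inner list of responses per item, aligned across participants.
    Formula: alpha = k/(k-1) * (1 - sum(item variances) / var(total scores))."""
    k = len(items)            # number of items
    n = len(items[0])         # number of participants

    def var(xs):              # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(items[i][p] for i in range(k)) for p in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical ratings: 3 items x 4 participants
items = [[5, 4, 2, 3],
         [5, 5, 1, 3],
         [4, 4, 2, 2]]
alpha = cronbach_alpha(items)   # about 0.94
```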
Situational Motivation: Intrinsic Motivation, Identified Regulation, External Regulation, Amotivation
The 16-item Situational Motivation Scale (SIMS; Guay et al., 2000) was used to assess anticipated situational motivation for engaging in CES for cognitive enhancement. The SIMS assesses four constructs: intrinsic motivation, identified regulation, external regulation, and amotivation. Each construct consists of 4 items. Example items from each respective construct include the following: “I would like to engage in CES because… I think that this activity would be interesting,” “I would like to engage in CES because… I think that this activity would be good for me,” “I would like to engage in CES because… I would feel like I have to do it,” and “I would like to engage in CES because… I don’t know; I don’t see what this activity would bring me”. Responses were given on a 7-point Likert scale (1 = “Corresponds not at all,” 2 = “Corresponds very little,” 3 = “Corresponds a little,” 4 = “Corresponds moderately,” 5 = “Corresponds enough,” 6 = “Corresponds a lot,” 7 = “Corresponds exactly”). Scores for each subscale were calculated by averaging together item responses. A higher score on each respective subscale indicates higher intrinsic motivation, identified regulation, external regulation, and amotivation for engaging in CES for cognitive enhancement. Cronbach’s alphas for the four subscales were: intrinsic motivation subscale α = 0.89, identified regulation subscale α = 0.92, external regulation subscale α = 0.87, and amotivation subscale α = 0.68.
Perceived Stress
The Perceived Stress Scale (PSS; Cohen et al., 1983) is a 10-item survey that measures the degree to which an individual has, over the past month, perceived life as unpredictable, uncontrollable, and overloaded. Example items include “In the last month, how often have you been upset because of something that happened unexpectedly?” and “In the last month, how often have you felt confident about your ability to handle your personal problems?”. Responses were given with a 5-point Likert scale (0 = “Never,” 1 = “Almost never,” 2 = “Sometimes,” 3 = “Fairly often,” 4 = “Very often”). Scores were calculated by summing all item responses and could range from 0 to 40. A higher score indicates greater stress. Cronbach’s alpha for the PSS was α = 0.90.
Cognition-related Beliefs
Need for cognition
The Need for Cognition scale (NFC; Cacioppo & Petty, 1982; Cacioppo et al., 1984) is an 18-item questionnaire that assesses how much an individual enjoys cognitively effortful tasks. Example items include statements such as “I would prefer complex to simple problems” and “I really enjoy a task that involves coming up with new solutions to problems”. Responses were given with a 5-point Likert scale (1 = “Extremely uncharacteristic,” 2 = “Somewhat uncharacteristic,” 3 = “Uncertain,” 4 = “Somewhat characteristic,” 5 = “Extremely characteristic”). Scores were calculated by summing all item responses and could range from 18 to 90. A higher score indicates a higher need for cognition. Cronbach’s alpha for the NFC was α = 0.93.
Growth mindset
The Growth Mindset Questionnaire (GMQ; Dweck, 2006) is a 20-item measure that assesses how likely an individual is to believe that certain mental abilities are fixed versus flexible. Example items include “All humans are capable of learning” and “Intelligence is something people are born with that can’t be changed”. Responses were given on a 4-point Likert scale (1 = “Strongly agree,” 2 = “Agree,” 3 = “Disagree,” 4 = “Strongly disagree”). Scores were calculated by summing all item responses and could range from 20 to 80. A higher score indicates a growth, rather than fixed, mindset where intelligence is viewed as a malleable, changeable construct. Cronbach’s alpha for the GMQ was α = 0.83.
Self-efficacy
The General Self-Efficacy Scale (GSE; Schwarzer & Jerusalem, 1995) is a 10-item questionnaire that assesses a general sense of self-efficacy. Example items include “I can always manage to solve difficult problems if I try hard enough” and “I can solve most problems if I invest the necessary effort”. Responses were given on a 4-point Likert scale (1 = “Not true at all,” 2 = “Hardly true,” 3 = “Moderately true,” 4 = “Exactly true”). Scores were calculated by summing all item responses and could range from 10 to 40. A higher score indicates a greater degree of self-efficacy, specifically as it pertains to coping with daily obstacles and adapting to stressful life events. Cronbach’s alpha for the GSE was α = 0.89.
Perceptions of Real-world Cognition
Cognitive failures
The Attention-Related Cognitive Errors Scale (ARCES; Carriere et al., 2008; Cheyne et al., 2006) is a 12-item questionnaire that measures how frequently individuals make minor mistakes because of absent-mindedness. Previous work has shown that scores on the ARCES are correlated with overall performance on the Sustained Attention to Response Task (SART), which is a cognitive behavioral task measure of sustained attention (Smilek et al., 2010). Some items from the ARCES are “I make mistakes because I am doing one thing and thinking about another” and “I have absent-mindedly placed things in unintentional locations”. Responses were collected using a 5-point Likert-type scale (1 = “Never,” 2 = “Rarely,” 3 = “Sometimes,” 4 = “Often,” 5 = “Always”). To easily interpret the ARCES alongside the other surveys, we reverse scored all items (1 = “Always,” 2 = “Often,” 3 = “Sometimes,” 4 = “Rarely,” 5 = “Never”). Survey scores were calculated by summing all item responses and could range from 12 to 60. With our scoring method, a higher score reflected a lower frequency of cognitive failures because of absent-mindedness. Cronbach’s alpha for the ARCES was α = 0.90.
Attentional lapses
The Mindful Attention Awareness Scale – Lapses Only (MAAS-LO; Carriere et al., 2008) is a 12-item survey derived from the 15-item Mindful Attention Awareness Scale (MAAS) developed by Brown and Ryan (2003). The MAAS-LO assesses the frequency with which individuals experience attentional lapses in everyday situations. Previous work has shown that scores on the MAAS-LO are correlated with overall performance on the SART (Smilek et al., 2010). The MAAS-LO includes items such as “I find myself doing things without paying attention” and “I find it difficult to stay focused on what’s happening in the present”. Responses were collected using a 6-point Likert-type scale (1 = “Almost never,” 2 = “Very rarely,” 3 = “Rarely,” 4 = “Occasionally,” 5 = “Frequently,” 6 = “Almost always”). To easily interpret the MAAS-LO alongside the other surveys, we reverse scored all items (1 = “Almost always,” 2 = “Frequently,” 3 = “Occasionally,” 4 = “Rarely,” 5 = “Very rarely,” 6 = “Almost never”). Survey scores were calculated by summing all item responses and could range from 12 to 72. With our scoring method, a higher score on the MAAS-LO represented a lower frequency of attention lapses. Cronbach’s alpha for the MAAS-LO was α = 0.91.
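The reverse scoring used for both the ARCES and the MAAS-LO is the standard transformation response → (scale minimum + scale maximum − response); as a sketch (illustrative Python, not our R processing code):

```python
def reverse_score(responses, scale_max, scale_min=1):
    """Reverse-score Likert responses: x -> (scale_min + scale_max) - x.
    On the ARCES 1-5 scale, 5 ("Always") becomes 1; on the MAAS-LO
    1-6 scale, 6 ("Almost always") becomes 1."""
    return [scale_min + scale_max - x for x in responses]

reverse_score([1, 3, 5], scale_max=5)   # ARCES:   [5, 3, 1]
reverse_score([2, 6], scale_max=6)      # MAAS-LO: [5, 1]
```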
Procedure
This study was approved by the Tufts University Institutional Review Board (#1908026). Prolific workers who met our screening criteria saw our study posting on the Prolific dashboard. If they decided to participate, they were re-directed to a Qualtrics survey. They first completed a captcha to confirm they were not a bot. If participants correctly completed this captcha, they were directed to the next page and provided informed consent. After answering demographic questions, all participants read a baseline expectations message (see Fig. 1a) and completed a baseline administration of the EAS. Then, participants were randomly assigned to one of three expectation primes that set neutral (see Fig. 1b), low (see Fig. 1c), or high (see Fig. 1d) expectations for CES as a method for cognitive enhancement before completing the EAS once again. Afterward, participants completed the SIMS because it is state-sensitive, and then completed the PSS, NFC, GMQ, GSE, ARCES, and MAAS-LO in a randomized order. When participants completed all questionnaires, they were asked if they randomly responded at any point during the study. Finally, participants were thanked, debriefed, re-directed to the study posting on the Prolific dashboard, and compensated at a rate of $10/h.
Results
In both manuscripts, we report how we determined our sample size, all data exclusions (if any), all manipulations, and all measures in the study. All data processing and analysis were conducted with R (version 4.1.3; R Core Team, 2022) in RStudio (version 2022.02.1; RStudio Team, 2022). Data were imported and exported using the “rio” package (version 0.5.29; Chan et al., 2021). Data were processed and manipulated with a collection of “tidyverse” (version 1.3.1; Wickham et al., 2019) packages: “dplyr” (version 1.0.8; Wickham et al., 2021) and “tidyr” (version 1.2.0; Wickham, 2021). Data were analyzed using the “jmv” package (version 2.3.4; Selker et al., 2021). The box plot data visualization was created using “ggplot2” (version 3.3.5; Wickham, 2016), “ggthemes” (version 4.2.4; Arnold, 2021), and a “wesanderson” (version 0.3.6; Ram & Wickham, 2018) color palette. Data processing (https://osf.io/9ab6w/) and analysis (https://osf.io/y5pwr/) code are available on OSF.
We examined multiple levels of analysis and corrected our alpha thresholds accordingly, as specified in our Stage 1 manuscript. At the primary level, we first determined whether to reject the null hypothesis for each ANOVA. Because the likelihood of incorrectly rejecting the null hypothesis increases with each ANOVA, we rejected the null hypothesis for each ANOVA model only if the p-value was less than 0.05 divided by the number of ANOVAs we were running. At the secondary level, for any of the ANOVAs that were statistically significant based on the adjusted alpha threshold, we examined the post hoc t-tests to better understand what was driving the overall effects we detected at the primary ANOVA level. At this secondary level, we corrected for multiple comparisons using the Tukey HSD method with an alpha threshold of 0.05. We used the Tukey HSD method because we did not have a set number of planned comparisons, we made all pairwise comparisons, and sample sizes were relatively equal among groups.
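The model-level thresholds reduce to a Bonferroni-style division of α by the number of models, rounded to three decimals; as a sketch (the half-up rounding convention is our assumption about how the reported thresholds were rounded):

```python
from decimal import Decimal, ROUND_HALF_UP

def corrected_alpha(alpha, n_tests):
    """Bonferroni-style model-level threshold: alpha / number of models,
    rounded half-up to three decimal places."""
    q = Decimal(str(alpha)) / Decimal(n_tests)
    return float(q.quantize(Decimal("0.001"), rounding=ROUND_HALF_UP))

corrected_alpha(0.05, 20)   # 0.003 (baseline univariate ANOVAs)
corrected_alpha(0.05, 8)    # 0.006 (chi-square tests)
corrected_alpha(0.05, 9)    # 0.006 (mixed ANOVAs)
```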
Pre-registered Analyses
Baseline Differences Among the Expectation Prime Groups
We tested for baseline differences among the three expectation prime groups for age, gender, the eight baseline EAS success ratings as well as their eight accompanying confidence ratings, and the 10 psychological factors of interest. The purpose of testing for any baseline differences among these variables was to ensure that our obtained results could be attributed to our manipulation versus preexisting group differences. Our first baseline analysis involved 20 univariate ANOVAs assuming equal variances. The predictor variable was the expectation prime (neutral, low, high). The outcome variables were age, gender, baseline EAS success ratings for the eight item areas (i.e., general cognitive function, memory, concentration, distractibility, reasoning ability, multitasking/dual-tasking ability, task-switching ability, performance in everyday activities), and scores for the 10 psychological factors (i.e., four SIMS subscales for intrinsic motivation, identified regulation, external regulation, and amotivation; PSS for perceived stress; NFC for need for cognition; GMQ for growth mindset; GSE for self-efficacy; ARCES for cognitive failures; and MAAS-LO for attentional lapses).
In our Stage 1 Registered Report, we originally proposed 17 univariate ANOVAs because we planned on using a single composite score for the SIMS (Guay et al., 2000). However, we later discovered that it is more common to separately examine the four subscale scores (Guay et al., 2000). For this reason, the number of univariate ANOVAs increased from 17 to 20. Because we were conducting many ANOVAs, we protected against Type I errors across the 20 models using a corrected alpha threshold of 0.003 (α = 0.05/20 ANOVAs) for our baseline analysis. If there were any baseline differences across groups, we pre-registered that we would control for these differentiating factors by including them as covariates in our primary analysis.
Table 2 displays results from the 20 univariate ANOVAs. To summarize, baseline success ratings for each EAS item were similarly neutral (i.e., ratings around 4) across the three conditions. This pattern replicates Rabipour et al.’s (2017) subsample of young adults, who likewise reported relatively neutral expectations (i.e., ratings around 4) for NIBS to achieve cognitive enhancement across EAS items.
According to our corrected alpha threshold of 0.003, expectation prime groups differed on two of the four situational motivation subscales: identified regulation (F(2, 652) = 39.435, p < 0.001) and amotivation (F(2, 652) = 28.725, p < 0.001). Identified regulation and amotivation were not highly correlated (r = −0.343). To determine where these differences existed, we examined post hoc comparison tests. For univariate ANOVAs, the “jmv” package (version 2.3.4; Selker et al., 2021) applies either Games-Howell or Tukey HSD post hoc corrections; we opted to apply Tukey HSD corrections. Participants in the high expectation prime group (M = 4.523, SD = 1.627) reported greater feelings of identified regulation compared to the neutral (M = 3.712, SD = 1.579) and low (M = 3.183, SD = 1.572) expectation prime groups (ps < 0.001). There was also a statistically significant difference in identified regulation between the neutral and low expectation prime groups (p = 0.002). Participants in the high expectation prime group (M = 2.645, SD = 1.149) reported significantly lower feelings of amotivation compared to the neutral (M = 3.410, SD = 1.280) and low (M = 3.428, SD = 1.270) expectation prime groups (ps < 0.001). There were additional post hoc differences for variables that were not statistically significant at the model level; for conciseness, we do not report these values in our manuscript and instead encourage readers to examine them at https://osf.io/y5pwr/.
Like Rabipour and collaborators (2017), we also calculated eight χ² tests of association to assess differences in confidence ratings on each baseline EAS item across the three expectation prime groups. To reiterate, participants reported that they were either confident or not confident in their success rating for each EAS item. To protect from Type I errors, we used a corrected alpha threshold of 0.006 (α = 0.05/8 χ² tests) for the eight tests. For all tests, df = 2 and N = 655. Our expectation prime groups did not differ on baseline item confidence ratings for everyday cognitive function (χ² = 0.401, p = 0.818), memory (χ² = 0.078, p = 0.962), concentration (χ² = 2.320, p = 0.313), distractibility (χ² = 0.149, p = 0.928), reasoning ability (χ² = 0.061, p = 0.970), multitasking (i.e., dual-tasking) ability (χ² = 3.365, p = 0.186), task-switching ability (χ² = 0.29, p = 0.865), or performance in everyday activities (χ² = 0.976, p = 0.614). On average, across all baseline confidence item ratings and regardless of expectation prime group, between 59% and 64% of participants reported feeling confident in their responses. Notably, the frequency of baseline confidence ratings observed in our sample was lower than in Rabipour et al.’s (2017) young adult subsample, in which ≥ 74% of participants (≥ 222 out of 300) felt confident in their baseline EAS success ratings.
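Each of these tests compares confident/not-confident counts across the three prime groups in a 3 × 2 contingency table. A sketch using SciPy is below; the counts are made up for illustration (the actual counts are available on the OSF page):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: neutral, low, high expectation prime groups.
# Columns: confident, not confident (hypothetical counts summing to N = 655).
table = np.array([
    [135, 83],
    [140, 79],
    [139, 79],
])

chi2, p, df, expected = chi2_contingency(table)
print(f"chi2({df}) = {chi2:.3f}, p = {p:.3f}")
```

With three groups and two response categories, df = (3 − 1) × (2 − 1) = 2, matching the df reported for all eight tests.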
Effects of Time and Expectation Prime on EAS
To understand how priming different expectations affected perceptions of CES cognitive enhancement success over time, we performed nine 2 (time: baseline/pre-prime, post-prime) × 3 (expectation prime: neutral, low, high) mixed ANOVAs. We included identified regulation and amotivation as covariates because of the baseline differences across the expectation prime groups. We first conducted this analysis with composite EAS as the outcome variable. Like Rabipour et al. (2017), we also performed this analysis for each of the eight areas assessed by the EAS. The “jmv” package (version 2.3.4; Selker et al., 2021) uses type III sums of squares. Because we were conducting many ANOVAs, we used a corrected alpha threshold of 0.006 (α = 0.05/9 ANOVAs) to protect from Type I errors across the models.
To summarize, participants’ individual expectations for NIBS as a cognitive enhancement method changed depending on situationally primed expectations, even when controlling for aspects of situational motivation (see Table 3). To identify where these differences existed, we examined Tukey HSD post hoc comparisons. In the low expectation prime group, composite EAS significantly decreased from baseline (M = 4.616, SE = 0.074) to post-prime (M = 3.204, SE = 0.077; t(650) = −19.152, p < 0.001). In the high expectation prime group, composite EAS significantly increased from baseline (M = 4.286, SE = 0.076) to post-prime (M = 4.894, SE = 0.079; t(650) = 8.113, p < 0.001). Participants who read the neutral expectation prime did not significantly change in composite EAS from baseline (M = 4.383, SE = 0.073) to post-prime (M = 4.329, SE = 0.076; t(650) = −0.748, p = 0.976). See Fig. 2 for a visualization of these results. This pattern replicated Rabipour et al.’s (2017) results and aligned with our hypotheses, supporting our claim that expectations are malleable.
We were also interested in differences between expectation prime groups in post-prime composite EAS. Notably, post-prime composite EAS was lower in the low expectation prime group (M = 3.204, SE = 0.077) than in the neutral (M = 4.329, SE = 0.076; t(650) = −10.430, p < 0.001) and high (M = 4.894, SE = 0.079; t(650) = 14.840, p < 0.001) expectation prime groups. Additionally, post-prime composite EAS was lower in the neutral expectation prime group than in the high expectation prime group (t(650) = 5.090, p < 0.001).
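Pairwise group comparisons of this kind follow standard Tukey HSD logic. A sketch on simulated data using SciPy’s `tukey_hsd` is below; the group sizes, standard deviations, and random seed are our assumptions, with group means chosen loosely to match those reported above:

```python
import numpy as np
from scipy.stats import tukey_hsd

rng = np.random.default_rng(1)
# Simulated post-prime composite EAS scores (1-7 scale) for three groups
# of roughly equal size (totalling 655) -- illustrative only.
low = np.clip(rng.normal(3.2, 1.1, 218), 1, 7)
neutral = np.clip(rng.normal(4.3, 1.1, 218), 1, 7)
high = np.clip(rng.normal(4.9, 1.1, 219), 1, 7)

res = tukey_hsd(low, neutral, high)
print(res.pvalue.round(4))  # 3 x 3 matrix of pairwise p-values, near 0 off-diagonal
```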
We observed mostly similar results when we repeated the 2 × 3 mixed ANOVA for the eight items (i.e., once for each item area), which was not surprising given that the composite EAS was created by averaging these eight item areas. Specifically, according to our corrected alpha threshold, there was not a statistically significant main effect of time in the “multitasking (i.e., dual-tasking) ability” (F(1, 650) = 5.456, η² = 0.001, p = 0.020) or “reasoning ability” (F(1, 650) = 3.567, η² = 0.005, p = 0.059) models. Most important to our primary research question, however, the interaction between time and expectation prime remained statistically significant. For additional information, please see https://osf.io/y5pwr/.
Our results from this analysis align with what Rabipour et al. (2017) found, even with our design changes. Broadly, success ratings for each EAS item area shifted in the direction of the expectation prime. Specifically, our study and Rabipour et al. (2017) both found that across all cognitive domains, success ratings were above neutral for the high expectation prime condition and were below neutral for the low expectation prime condition.
Because we analyzed baseline confidence ratings for each EAS item, we repeated this analysis for participants’ post-prime confidence ratings. Specifically, we assessed differences in confidence levels for each post-prime EAS success rating across the three expectation prime groups by calculating eight χ² tests of association with a corrected alpha of 0.006 (α = 0.05/8 χ² tests). For all tests, df = 2 and N = 655. There was a statistically significant association between expectation prime and post-prime confidence ratings for the “everyday cognitive function” item (χ² = 11.134, p = 0.004): 65% (n = 141) of participants in the neutral expectation prime condition, 73% (n = 159) in the low expectation prime condition, and 80% (n = 175) in the high expectation prime condition said they were confident in their post-prime success rating. Similarly, there was a statistically significant association between expectation prime and post-prime confidence ratings for the “performance in everyday activities” item (χ² = 14.668, p = 0.001), where 60% (n = 129) of participants in the neutral expectation prime condition, 72% (n = 158) in the low expectation prime condition, and 76% (n = 167) in the high expectation prime condition said they were confident. Per our corrected alpha rate of 0.006, there were no statistically significant associations between expectation prime and post-prime item confidence ratings for memory (χ² = 9.988, p = 0.007), concentration (χ² = 8.937, p = 0.011), distractibility (χ² = 9.682, p = 0.008), reasoning ability (χ² = 7.339, p = 0.025), multitasking (i.e., dual-tasking) ability (χ² = 5.696, p = 0.058), or task-switching ability (χ² = 9.505, p = 0.009).
As with the baseline confidence ratings, the frequency of post-prime confidence ratings in our sample was lower than in Rabipour et al.’s (2017) young adult subsample, in which ≥ 79% of participants (≥ 239 out of 300) felt confident in their post-prime EAS success ratings.
Exploratory Analyses
Influence of Individual Differences in Psychological Factors on Effects of Time and Expectation Prime on EAS
We explored if and how the interaction between expectation prime and time changed when accounting for the influence of various individual differences in psychological factors, even if the expectation prime groups did not differ on these variables at baseline. To achieve this aim, we repeated the 2 (time: baseline/pre-prime, post-prime) × 3 (expectation prime: neutral, low, high) mixed ANOVA with composite EAS as the dependent variable. We included the 10 psychological factors (intrinsic motivation, identified regulation, external regulation, amotivation, perceived stress, need for cognition, growth mindset, self-efficacy, cognitive failures, attentional lapses) as covariates to determine which psychological factors, if any, might impact the malleability of NIBS expectations (see Table 4 for results).
Most important to our primary research question, when considering all 10 psychological factor covariates, there was no main effect of time (F(1, 642) = 0.701, η² < 0.001, p = 0.403) on composite EAS. However, there was a statistically significant main effect of expectation prime (F(2, 642) = 31.376, η² = 0.008, p < 0.001) as well as a significant interaction between time and expectation prime (F(2, 642) = 175.867, η² = 0.015, p < 0.001) on composite EAS, similar to what we observed when we investigated the effects of expectation prime and time on composite EAS without the 10 psychological factors as covariates. Thus, our primary finding, that baseline outcome expectations can shift in the direction of primed pessimistic and optimistic expectations, replicated when accounting for other potential explanations through various psychological factors. Further, there was a significant interaction between time and identified regulation (F(1, 642) = 10.094, η² < 0.001, p = 0.002) on composite EAS, with no other significant interactions between time and the remaining psychological factors (F range = 0.003–3.320; all η² < 0.001, all p ≥ 0.069). Per our pre-registration, we also repeated this analysis with one covariate at a time, for a total of 10 tests; these analyses are available at https://osf.io/y5pwr/ for interested readers.
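For readers who prefer partial η², it can be recovered from any reported F statistic and its degrees of freedom as η²p = (F · df1)/(F · df1 + df2); note that partial η² will generally exceed the η² values reported here, which jmv computes against the total sum of squares. A sketch, using the interaction statistics reported above (the helper function is ours):

```python
def partial_eta_squared(f_value: float, df_effect: int, df_error: int) -> float:
    """Convert a reported F statistic into partial eta squared."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Time x expectation prime interaction: F(2, 642) = 175.867
print(round(partial_eta_squared(175.867, 2, 642), 3))  # 0.354
```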
Discussion
The malleability of expectations may contribute to the mixed success rates of NIBS cognitive enhancement studies through positive expectancy and placebo and placebo-like effects (Benedetti, 2014; Boot et al., 2013; Braga et al., 2021; Schwarz et al., 2016; Simons et al., 2016), though very few studies have considered this possibility and teased it apart by priming different expectations (Foroughi et al., 2016; Rabipour et al., 2017; Rabipour et al., 2018a, 2018b, 2019). Specifically, if participants are primed to have optimistic outcome expectations of NIBS methods for cognitive enhancement, this may lead to high engagement with the intervention. Conversely, if participants are primed to have pessimistic outcome expectations, they may not pursue or adhere to such an intervention. In our Registered Report, we sought to investigate the malleability of expectations for NIBS as a cognitive enhancement method. We conceptually replicated and expanded Rabipour et al. (2017) using a large sample and a mixed design. We used a refined version of Rabipour et al.’s (2017) outcome expectations measure that included an additional cognitive domain and focused on CES as a NIBS method. Additionally, we explored how various psychological factors related to outcome expectations to determine whether certain characteristics were likelier to influence outcome expectations of CES for cognitive enhancement.
To confirm that our results could be attributed to our manipulation and not preexisting group differences, we first assessed baseline differences. We examined baseline differences among the expectation prime groups across select demographics (i.e., age, gender), outcome expectation success and confidence ratings, and various psychological constructs (i.e., intrinsic motivation, identified regulation, external regulation, amotivation, perceived stress, need for cognition, growth mindset, self-efficacy, cognitive failures, attentional lapses). To reiterate, demographics were collected prior to exposure to the expectation prime, psychological constructs were assessed after exposure to one of the three expectation primes, and outcome expectation success and confidence ratings were measured before and after exposure to the expectation prime. At baseline, the expectation prime groups differed on two aspects of situational motivation, identified regulation and amotivation, and we included these variables as covariates in our primary analysis. The expectation prime groups did not differ in age, gender, any baseline EAS item success or confidence rating, or the remaining eight psychological factors. Future researchers may wish to include baseline assessments of select psychological factors to observe how priming different expectations can influence short-term changes in situational motivation and other related variables.
In our primary analysis, our results replicated Rabipour et al.’s (2017). We found that participants initially held relatively neutral expectations toward CES as a cognitive enhancement method across various cognitive domains. A little more than half of the participants (59–64%) in our sample (N = 655) reported feeling confident in their baseline responses for each EAS item. In contrast, most participants (≥ 83%) in Rabipour et al.’s (2017) sample (N = 428), and particularly the young adult subsample (≥ 74%, n = 300), felt confident in their baseline responses for each EAS item. These findings are interesting considering that, in the past few years, companies have increasingly advertised the benefits of various cognitive enhancement methods, including tES (Simons et al., 2016).
Further, and more critically to our research question, we found that expectations of CES for cognitive enhancement were malleable, even when controlling for relevant aspects of situational motivation. Relative to baseline, outcome expectations substantially decreased or increased after participants read primes that set either low or high expectations for CES cognitive enhancement methods, respectively. In contrast, expectations about CES for cognitive enhancement did not change from baseline to post-prime for participants in the neutral expectation prime condition. These patterns replicate those of Rabipour et al. (2017) and align with related research supporting placebo effects (Foroughi et al., 2016; Rabipour et al., 2018b, 2019). There were also significant associations between expectation prime and post-prime confidence ratings on two of the eight EAS items: more participants felt confident in their responses if they read either the low or high expectation prime than if they read the neutral expectation prime. Our results imply that the apparent effectiveness of NIBS interventions for cognitive enhancement may depend more on how the interventions are advertised in recruitment flyers or explained during experimenter instructions than on the interventions’ actual effects. Our results also suggest that using neutral language, especially with participants who hold relatively neutral expectations about NIBS methods, may be the best option to deter positive and negative expectancy effects of NIBS methods for cognitive enhancement.
To better understand if certain characteristics influence the potential responsiveness to cognitive enhancement, we also explored how various psychological factors, including situational motivation, perceived stress, cognition-related beliefs, and perceptions of real-world cognition, might interact with outcome expectations. Outcome expectations were related to all four areas of situational motivation (i.e., intrinsic motivation, identified regulation, external regulation, amotivation), which aligns with past research (Jones et al., 2015; Katz et al., 2017; Rabipour et al., 2018b). Outcome expectations were also related to cognition-related beliefs (i.e., need for cognition, growth mindset, self-efficacy), but not perceived stress and perceptions of real-world cognition (i.e., cognitive failures, attentional lapses). More importantly, we found that there was still a significant interaction between expectation prime and time when accounting for these various psychological factors. Thus, while these psychological factors might play a role in influencing expectations about NIBS malleability, the influence appears to be relatively minimal compared to expectation primes.
Several limitations could have influenced our results; they also suggest natural future directions for researchers who wish to further investigate outcome expectations of cognitive enhancement through NIBS. First, because participants were repeatedly exposed to the EAS (i.e., baseline/pre-prime, post-prime), they may have been aware that they were being primed and that we were interested in their outcome expectations. However, our change to a between-subjects design from Rabipour et al.’s (2017) within-subjects design lessens the likelihood of such carryover effects. Future outcome expectations research should include questions that assess participant bias to lend more confidence to this claim.
Further, our methodology was based entirely on self-report measures, which limits our findings’ generalizability. For instance, we asked participants whether they had experience or familiarity with brain stimulation, and 13% (n = 85) answered “yes.” Some of those who answered “yes” provided optional text explanations and clarified that their experience or familiarity with brain stimulation included puzzles, autonomous sensory meridian response (ASMR), and attention-deficit/hyperactivity disorder (ADHD) testing. However, these examples do not align with our operationalization of brain stimulation, meaning that likely fewer than 13% of our sample had genuine experience or familiarity with brain stimulation. This small number could potentially explain why we did not observe more confidence in baseline EAS response ratings, especially given that we observed lower proportions than Rabipour et al. (2017). Future studies should recruit more participants with experience or familiarity with brain stimulation and test whether this accounts for differences in outcome expectations.
Critically, self-reported expectations of outcomes may not directly relate to actual outcomes. Outcomes might resolve differently in in-person studies with NIBS methods that employ active and sham stimulation (for examples, see Rabipour et al., 2018b, 2019). Furthermore, any impact of expectations on intervention outcomes could be minimized by standardizing recruitment materials, consent forms, and participant-facing instructions, as well as by using active control conditions that deliver stimulation of equal duration and magnitude to brain regions not targeted by the research question(s). Accompanying a cognitive training intervention with NIBS could further influence whether expectations of outcomes align with actual outcomes, even when controlling for language used during recruitment, consenting, and instruction. Given the popularity of working memory training and recent discussion of its potential placebo effects (Baniqued et al., 2015; Boot et al., 2013; Foroughi et al., 2016; Melby-Lervåg et al., 2016; Rabipour & Raz, 2012; Schwaighofer et al., 2015; Shipstead et al., 2012; Tsai et al., 2018; Vodyanyk et al., 2021; Wiemers et al., 2019), we believe there is a sound theoretical basis for future work to include a working memory item in the EAS and explore how outcome expectations and NIBS influence working memory training gains over time and transfer. We expect that our results, as well as others’ (Foroughi et al., 2016; Rabipour et al., 2017, 2019; Rabipour et al., 2018b), would largely replicate, and that cognitive enhancement training gains would follow the direction of the expectation prime.
Overall, conceptually replicating and extending Rabipour et al.’s (2017) research contributed to a more holistic, mechanistic understanding of how various psychological factors and situational contributors might play a role in influencing expectations, and the outcomes of such expectations, surrounding NIBS cognitive enhancement methods. Our research could potentially inform the design of NIBS protocols by shaping the approaches of those who wish to increase participant expectations of NIBS to improve cognitive enhancement and those who want to decrease or neutralize participant expectations of NIBS to reduce possible placebo effects. Experimenters who research NIBS for cognitive enhancement should be wary of how their behavior can influence participants’ behavior and consider how participant characteristics, such as motivation for pursuing cognitive enhancement, may shape how expectations potentially influence outcomes.
Data availability
This Registered Report was accepted in principle prior to data collection (https://osf.io/pysnu/). All data and materials are available on the Open Science Framework (OSF) at https://osf.io/c4kfy/.
Code availability
Data processing (https://osf.io/9ab6w/) and analysis (https://osf.io/y5pwr/) code are available on OSF.
Notes
In our Stage 1 Registered Report, we anticipated using a summary or composite score for the SIMS. After further reading, we opted to use the four-subscale approach that is most common in the broader SIMS literature (for more information, see Guay et al., 2000).
Abbreviations
- NIBS: Non-invasive brain stimulation
- EAS: Expectation Assessment Scale
- CES: Cranial electrotherapy stimulation
- tES: Transcranial electric stimulation
- SIMS: Situational Motivation Scale
- PSS: Perceived Stress Scale
- NFC: Need for cognition
- GMQ: Growth Mindset Questionnaire
- GSE: General Self-Efficacy Scale
- ARCES: Attention-Related Cognitive Errors Scale
- MAAS-LO: Mindful Attention Awareness Scale – Lapses Only
References
Arciniega, H., Gözenman, F., Jones, K. T., Stephens, J. A., & Berryhill, M. E. (2018). Frontoparietal tDCS benefits visual working memory in older adults with low working memory capacity. Frontiers in Aging Neuroscience, 10, 57. https://doi.org/10.3389/fnagi.2018.00057
Arnold, J. B. (2021). ggthemes: Extra themes, scales and geoms for “ggplot2” (4.2.4) [R package]. https://CRAN.R-project.org/package=ggthemes
Au, J., Katz, B., Buschkuehl, M., Bunarjo, K., Senger, T., Zabel, C., Jaeggi, S. M., & Jonides, J. (2016). Enhancing working memory training with transcranial direct current stimulation. Journal of Cognitive Neuroscience, 28(9), 1419–1432. https://doi.org/10.1162/jocn_a_00979
Bahar-Fuchs, A., Clare, L., & Woods, B. (2013). Cognitive training and cognitive rehabilitation for mild to moderate Alzheimer’s disease and vascular dementia. Cochrane Database of Systematic Reviews, 6. https://doi.org/10.1002/14651858.CD003260.pub2
Baniqued, P. L., Allen, C. M., Kranz, M. B., Johnson, K., Sipolins, A., Dickens, C., Ward, N., Geyer, A., & Kramer, A. F. (2015). Working memory, reasoning, and task switching training: Transfer effects, limitations, and great expectations? PLoS ONE, 10(11), e0142169. https://doi.org/10.1371/journal.pone.0142169
Benedetti, F. (2014). Placebo effects: From the neurobiological paradigm to translational implications. Neuron, 84(3), 623–637. https://doi.org/10.1016/j.neuron.2014.10.023
Bikson, M., Grossman, P., Thomas, C., Zannou, A. L., Jiang, J., Adnan, T., Mourdoukoutas, A. P., Kronberg, G., Truong, D., Boggio, P., Brunoni, A. R., Charvet, L., Fregni, F., Fritsch, B., Gillick, B., Hamilton, R. H., Hampstead, B. M., Jankord, R., Kirton, A., … Woods, A. J. (2016). Safety of transcranial direct current stimulation: Evidence based update 2016. Brain Stimulation, 9(5), 641–661. https://doi.org/10.1016/j.brs.2016.06.004
Boot, W. R., Simons, D. J., Stothart, C., & Stutts, C. (2013). The pervasive problem with placebos in psychology: Why active control groups are not sufficient to rule out placebo effects. Perspectives on Psychological Science, 8(4), 445–454. https://doi.org/10.1177/1745691613491271
Braga, M., Barbiani, D., Emadi Andani, M., Villa-Sánchez, B., Tinazzi, M., & Fiorio, M. (2021). The role of expectation and beliefs on the effects of non-invasive brain stimulation. Brain Sciences, 11(11), 1526. https://doi.org/10.3390/brainsci11111526
Brown, K. W., & Ryan, R. M. (2003). The benefits of being present: Mindfulness and its role in psychological well-being. Journal of Personality and Social Psychology, 84(4), 822–848. https://doi.org/10.1037/0022-3514.84.4.822
Brunyé, T. T., Brou, R., Doty, T. J., Gregory, F. D., Hussey, E. K., Lieberman, H. R., Loverro, K. L., Mezzacappa, E. S., Neumeier, W. H., Patton, D. J., Soares, J. W., Thomas, T. P., & Yu, A. B. (2020). A review of U.S. Army research contributing to cognitive enhancement in military contexts. Journal of Cognitive Enhancement, 4(4), 453–468. https://doi.org/10.1007/s41465-020-00167-3
Brunyé, T. T., Patterson, J. E., Wooten, T., & Hussey, E. K. (2021). A critical review of cranial electrotherapy stimulation for neuromodulation in clinical and non-clinical samples. Frontiers in Human Neuroscience, 15, 1–12. https://doi.org/10.3389/fnhum.2021.625321
Brunyé, T. T., Smith, A. M., Horner, C. B., & Thomas, A. K. (2018). Verbal long-term memory is enhanced by retrieval practice but impaired by prefrontal direct current stimulation. Brain and Cognition, 128, 80–88. https://doi.org/10.1016/j.bandc.2018.09.008
Cacioppo, J. T., & Petty, R. E. (1982). The need for cognition. Journal of Personality and Social Psychology, 42, 116–131. https://doi.org/10.1037/0022-3514.42.1.116
Cacioppo, J. T., Petty, R. E., & Kao, C. F. (1984). The efficient assessment of need for cognition. Journal of Personality Assessment, 48, 306–307. https://doi.org/10.1207/s15327752jpa4803_13
Carriere, J. S. A., Cheyne, J. A., & Smilek, D. (2008). Everyday attention lapses and memory failures: The affective consequences of mindlessness. Consciousness and Cognition, 17(3), 835–847. https://doi.org/10.1016/j.concog.2007.04.008
Chan, C., Chan, G. C., Leeper, T. J., & Becker, J. (2021). rio: A Swiss-army knife for data file I/O (0.5.29) [R package]. https://cran.r-project.org/web/packages/rio/index.html
Cheyne, J. A., Carriere, J. S. A., & Smilek, D. (2006). Absent-mindedness: Lapses of conscious awareness and everyday cognitive failures. Consciousness and Cognition, 15(3), 578–592. https://doi.org/10.1016/j.concog.2005.11.009
Ciullo, V., Spalletta, G., Caltagirone, C., Banaj, N., Vecchio, D., Piras, F., & Piras, F. (2021). Transcranial direct current stimulation and cognition in neuropsychiatric disorders: Systematic review of the evidence and future directions. The Neuroscientist, 27(3), 285–309. https://doi.org/10.1177/1073858420936167
Cohen, S., Kamarck, T., & Mermelstein, R. (1983). A global measure of perceived stress. Journal of Health and Social Behavior, 24(4), 385–396. https://doi.org/10.2307/2136404
Denkinger, S., Spano, L., Bingel, U., Witt, C. M., Bavelier, D., & Green, C. S. (2021). Assessing the impact of expectations in cognitive training and beyond. Journal of Cognitive Enhancement. https://doi.org/10.1007/s41465-021-00206-7
Dockery, C. A., Hueckel-Weng, R., Birbaumer, N., & Plewnia, C. (2009). Enhancement of planning ability by transcranial direct current stimulation. Journal of Neuroscience, 29(22), 7271–7277.
Dweck, C. (2006). Mindset: The new psychology of success. Random House.
Flöel, A., Rösser, N., Michka, O., Knecht, S., & Breitenstein, C. (2008). Noninvasive brain stimulation improves language learning. Journal of Cognitive Neuroscience, 20(8), 1415–1422. https://doi.org/10.1162/jocn.2008.20098
Foroughi, C. K., Monfort, S. S., Paczynski, M., McKnight, P. E., & Greenwood, P. M. (2016). Placebo effects in cognitive training. Proceedings of the National Academy of Sciences, 113(27), 7470–7474. https://doi.org/10.1073/pnas.1601243113
Freitas, C., Mondragón-Llorca, H., & Pascual-Leone, A. (2011). Noninvasive brain stimulation in Alzheimer’s disease: Systematic review and perspectives for the future. Experimental Gerontology, 46(8), 611–627. https://doi.org/10.1016/j.exger.2011.04.001
Guay, F., Vallerand, R. J., & Blanchard, C. (2000). On the assessment of situational intrinsic and extrinsic motivation: The Situational Motivation Scale (SIMS). Motivation and Emotion, 24(3), 175–213. https://doi.org/10.1023/A:1005614228250
Guleyupoglu, B., Febles, N., Minhas, P., Hahn, C., & Bikson, M. (2014). Reduced discomfort during high-definition transcutaneous stimulation using 6% benzocaine. Frontiers in Neuroengineering, 7(28), 1–3. https://doi.org/10.3389/fneng.2014.00028
Guye, S., De Simoni, C., & von Bastian, C. C. (2017). Do individual differences predict change in cognitive training performance? A latent growth curve modeling approach. Journal of Cognitive Enhancement, 1(4), 374–393. https://doi.org/10.1007/s41465-017-0049-9
Harrell, E. R., Kmetz, B., & Boot, W. R. (2019). Is cognitive training worth it? Exploring individuals’ willingness to engage in cognitive training. Journal of Cognitive Enhancement, 3(4), 405–415. https://doi.org/10.1007/s41465-019-00129-4
Hauser, D. J., & Schwarz, N. (2016). Attentive Turkers: MTurk participants perform better on online attention checks than do subject pool participants. Behavior Research Methods, 48(1), 400–407. https://doi.org/10.3758/s13428-015-0578-z
Hill, A. T., Fitzgerald, P. B., & Hoy, K. E. (2016). Effects of anodal transcranial direct current stimulation on working memory: A systematic review and meta-analysis of findings from healthy and neuropsychiatric populations. Brain Stimulation, 9(2), 197–208. https://doi.org/10.1016/j.brs.2015.10.006
Horne, K. S., Filmer, H. L., Nott, Z. E., Hawi, Z., Pugsley, K., Mattingley, J. B., & Dux, P. E. (2020). Evidence against benefits from cognitive training and transcranial direct current stimulation in healthy older adults. Nature Human Behaviour. https://doi.org/10.1038/s41562-020-00979-5
Horvath, J. C., Forte, J. D., & Carter, O. (2015a). Evidence that transcranial direct current stimulation (tDCS) generates little-to-no reliable neurophysiologic effect beyond MEP amplitude modulation in healthy human subjects: A systematic review. Neuropsychologia, 66, 213–236. https://doi.org/10.1016/j.neuropsychologia.2014.11.021
Horvath, J. C., Forte, J. D., & Carter, O. (2015b). Quantitative review finds no evidence of cognitive effects in healthy populations from single-session transcranial direct current stimulation (tDCS). Brain Stimulation, 8(3), 535–550. https://doi.org/10.1016/j.brs.2015.01.400
Jaeggi, S. M., Buschkuehl, M., Shah, P., & Jonides, J. (2014). The role of individual differences in cognitive training and transfer. Memory & Cognition, 42(3), 464–480. https://doi.org/10.3758/s13421-013-0364-z
Jones, K. T., Gözenman, F., & Berryhill, M. E. (2015). The strategy and motivational influences on the beneficial effect of neurostimulation: A tDCS and fNIRS study. NeuroImage, 105, 238–247. https://doi.org/10.1016/j.neuroimage.2014.11.012
Katz, B., Au, J., Buschkuehl, M., Abagis, T., Zabel, C., Jaeggi, S. M., & Jonides, J. (2017). Individual differences and long-term consequences of tDCS-augmented cognitive training. Journal of Cognitive Neuroscience, 29(9), 1498–1508. https://doi.org/10.1162/jocn_a_01115
Knotkova, H., Nitsche, M. A., Bikson, M., & Woods, A. J. (Eds.). (2019). Practical guide to transcranial direct current stimulation: Principles, procedures, and applications. Springer International Publishing. https://doi.org/10.1007/978-3-319-95948-1
Koch, I., Poljac, E., Müller, H., & Kiesel, A. (2018). Cognitive structure, flexibility, and plasticity in human multitasking—An integrative review of dual-task and task-switching research. Psychological Bulletin, 144(6), 557–583. https://doi.org/10.1037/bul0000144
Lee, J., Lee, H., & Park, W. (2019). Effects of cranial electrotherapy stimulation on electroencephalogram. Journal of International Academy of Physical Therapy Research, 10(1), 1687–1694. https://doi.org/10.20540/JIAPTR.2019.10.1.1687
Leys, C., Ley, C., Klein, O., Bernard, P., & Licata, L. (2013). Detecting outliers: Do not use standard deviation around the mean, use absolute deviation around the median. Journal of Experimental Social Psychology, 49(4), 764–766. https://doi.org/10.1016/j.jesp.2013.03.013
Looi, C. Y., Duta, M., Brem, A.-K., Huber, S., Nuerk, H.-C., & Cohen Kadosh, R. (2016). Combining brain stimulation and video game to promote long-term transfer of learning and cognitive enhancement. Scientific Reports, 6(1), 22003. https://doi.org/10.1038/srep22003
Lu, P., Liu, J., & Koestler, D. (2017). pwr2: Power and sample size analysis for one-way and two-way ANOVA models (1.0) [R package]. https://CRAN.R-project.org/package=pwr2
Mancuso, L. E., Ilieva, I. P., Hamilton, R. H., & Farah, M. J. (2016). Does transcranial direct current stimulation improve healthy working memory?: A meta-analytic review. Journal of Cognitive Neuroscience, 28(8), 1063–1089. https://doi.org/10.1162/jocn_a_00956
Medina, J., & Cason, S. (2017). No evidential value in samples of transcranial direct current stimulation (tDCS) studies of cognition and working memory in healthy populations. Cortex, 94, 131–141. https://doi.org/10.1016/j.cortex.2017.06.021
Melby-Lervåg, M., Redick, T. S., & Hulme, C. (2016). Working memory training does not improve performance on measures of intelligence or other measures of “far transfer”: Evidence from a meta-analytic review. Perspectives on Psychological Science, 11(4), 512–534. https://doi.org/10.1177/1745691616635612
Mellen, R. R., Case, J., & Ruiz, D. J. (2016). Cranial electrotherapy stimulation (CES) as a treatment for reducing stress and improving prefrontal cortex functioning in victims of domestic violence. International Association for Correctional and Forensic Psychology Newsletter, 48(3), 12–15.
Minear, M., Brasher, F., Guerrero, C. B., Brasher, M., Moore, A., & Sukeena, J. (2016). A simultaneous examination of two forms of working memory training: Evidence for near transfer only. Memory & Cognition, 44(7), 1014–1037. https://doi.org/10.3758/s13421-016-0616-9
Miniussi, C., Harris, J. A., & Ruzzoli, M. (2013). Modelling non-invasive brain stimulation in cognitive neuroscience. Neuroscience & Biobehavioral Reviews, 37(8), 1702–1712. https://doi.org/10.1016/j.neubiorev.2013.06.014
Morales-Quezada, L., Cosmo, C., Carvalho, S., Leite, J., Castillo-Saavedra, L., Rozisky, J. R., & Fregni, F. (2015). Cognitive effects and autonomic responses to transcranial pulsed current stimulation. Experimental Brain Research, 233(3), 701–709. https://doi.org/10.1007/s00221-014-4147-y
Nilsson, J., Lebedev, A. V., Rydström, A., & Lövdén, M. (2017). Direct-current stimulation does little to improve the outcome of working memory training in older adults. Psychological Science, 28(7), 907–920. https://doi.org/10.1177/0956797617698139
Ørskov, P. T., Norup, A., Beatty, E. L., & Jaeggi, S. M. (2021). Exploring individual differences as predictors of performance change during dual-N-back training. Journal of Cognitive Enhancement. https://doi.org/10.1007/s41465-021-00216-5
Palan, S., & Schitter, C. (2018). Prolific.ac—A subject pool for online experiments. Journal of Behavioral and Experimental Finance, 17, 22–27. https://doi.org/10.1016/j.jbef.2017.12.004
Parasuraman, R., & McKinley, R. A. (2014). Using noninvasive brain stimulation to accelerate learning and enhance human performance. Human Factors: The Journal of the Human Factors and Ergonomics Society, 56(5), 816–824. https://doi.org/10.1177/0018720814538815
Peer, E., Vosgerau, J., & Acquisti, A. (2014). Reputation as a sufficient condition for data quality on Amazon Mechanical Turk. Behavior Research Methods, 46(4), 1023–1031. https://doi.org/10.3758/s13428-013-0434-y
Pennycook, G., Ross, R. M., Koehler, D. J., & Fugelsang, J. A. (2017). Dunning-Kruger effects in reasoning: Theoretical implications of the failure to recognize incompetence. Psychonomic Bulletin & Review, 24(6), 1774–1784. https://doi.org/10.3758/s13423-017-1242-7
Pyke, W., Vostanis, A., & Javadi, A.-H. (2020). Electrical brain stimulation during a retrieval-based learning task can impair long-term memory. Journal of Cognitive Enhancement, 1–15. https://doi.org/10.1007/s41465-020-00200-5
R Core Team. (2022). R: A language and environment for statistical computing (4.1.3) [Computer software]. R Foundation for Statistical Computing. https://www.R-project.org/
Rabipour, S., Andringa, R., Boot, W. R., & Davidson, P. S. R. (2017). What do people expect of cognitive enhancement? Journal of Cognitive Enhancement, 2(1), 70–77. https://doi.org/10.1007/s41465-017-0050-3
Rabipour, S., & Davidson, P. S. R. (2015). Do you believe in brain training? A questionnaire about expectations of computerised cognitive training. Behavioural Brain Research, 295, 64–70. https://doi.org/10.1016/j.bbr.2015.01.002
Rabipour, S., Davidson, P. S. R., & Kristjansson, E. (2018a). Measuring expectations of cognitive enhancement: Item response analysis of the Expectation Assessment Scale. Journal of Cognitive Enhancement, 2(3), 311–317. https://doi.org/10.1007/s41465-018-0073-4
Rabipour, S., & Raz, A. (2012). Training the brain: Fact and fad in cognitive and behavioral remediation. Brain and Cognition, 79(2), 159–179. https://doi.org/10.1016/j.bandc.2012.02.006
Rabipour, S., Vidjen, P. S., Remaud, A., Davidson, P. S. R., & Tremblay, F. (2019). Examining the interactions between expectations and tDCS effects on motor and cognitive performance. Frontiers in Neuroscience, 12, 999. https://doi.org/10.3389/fnins.2018.00999
Rabipour, S., Wu, A. D., Davidson, P. S. R., & Iacoboni, M. (2018b). Expectations may influence the effects of transcranial direct current stimulation. Neuropsychologia, 119, 524–534. https://doi.org/10.1016/j.neuropsychologia.2018.09.005
Ralph, B. C. W., & Smilek, D. (2017). Individual differences in media multitasking and performance on the n-back. Attention, Perception, & Psychophysics, 79(2), 582–592. https://doi.org/10.3758/s13414-016-1260-y
Ram, K., & Wickham, H. (2018). wesanderson: A Wes Anderson palette generator (0.3.6) [R package]. https://CRAN.R-project.org/package=wesanderson
Richmond, L. L., Wolk, D., Chein, J., & Olson, I. R. (2014). Transcranial direct current stimulation enhances verbal working memory training performance over time and near transfer outcomes. Journal of Cognitive Neuroscience, 26(11), 2443–2454. https://doi.org/10.1162/jocn_a_00657
RStudio Team. (2022). RStudio: Integrated development environment for R (2022.02.1) [Computer software]. RStudio, PBC. http://www.rstudio.com/
Schwaighofer, M., Fischer, F., & Bühner, M. (2015). Does working memory training transfer? A meta-analysis including training conditions as moderators. Educational Psychologist, 50(2), 138–166. https://doi.org/10.1080/00461520.2015.1036274
Schwarz, K. A., Pfister, R., & Büchel, C. (2016). Rethinking explicit expectations: Connecting placebos, social cognition, and contextual perception. Trends in Cognitive Sciences, 20(6), 469–480. https://doi.org/10.1016/j.tics.2016.04.001
Schwarzer, R., & Jerusalem, M. (1995). Generalized Self-Efficacy Scale. In J. Weinman, S. Wright, & M. Johnston, Measures in health psychology: A user’s portfolio. Causal and control beliefs (pp. 35–37). NFER-NELSON.
Selker, R., Love, J., Dropmann, D., & Moreno, V. (2021). jmv: The “jamovi” analyses (2.3.4) [R package]. https://CRAN.R-project.org/package=jmv
Shipstead, Z., Redick, T. S., & Engle, R. W. (2012). Is working memory training effective? Psychological Bulletin, 138(4), 628–654. https://doi.org/10.1037/a0027473
Simons, D. J., Boot, W. R., Charness, N., Gathercole, S. E., Chabris, C. F., Hambrick, D. Z., & Stine-Morrow, E. A. L. (2016). Do “brain-training” programs work? Psychological Science in the Public Interest, 17(3), 103–186. https://doi.org/10.1177/1529100616661983
Smilek, D., Carriere, J. S. A., & Cheyne, J. A. (2010). Failures of sustained attention in life, lab, and brain: Ecological validity of the SART. Neuropsychologia, 48(9), 2564–2570. https://doi.org/10.1016/j.neuropsychologia.2010.05.002
Smith, R. B. (1999). Cranial electrotherapy stimulation in the treatment of stress related cognitive dysfunction, with an eighteen month follow up. Journal of Cognitive Rehabilitation, 17(6), 14–18.
Southworth, S. (1999). A study of the effects of cranial electrical stimulation on attention and concentration. Integrative Physiological and Behavioral Science, 34(1), 43–53. https://doi.org/10.1007/BF02688709
Sprenger, A. M., Atkins, S. M., Bolger, D. J., Harbison, J. I., Novick, J. M., Chrabaszcz, J. S., Weems, S. A., Smith, V., Bobb, S., Bunting, M. F., & Dougherty, M. R. (2013). Training working memory: Limits of transfer. Intelligence, 41(5), 638–663. https://doi.org/10.1016/j.intell.2013.07.013
Suarez-García, D. M. A., Grisales-Cárdenas, J. S., Zimerman, M., & Cardona, J. F. (2020). Transcranial direct current stimulation to enhance cognitive impairment in Parkinson’s disease: A systematic review and meta-analysis. Frontiers in Neurology, 11, 597955. https://doi.org/10.3389/fneur.2020.597955
Talsma, L. J., Kroese, H. A., & Slagter, H. A. (2017). Boosting cognition: Effects of multiple-session transcranial direct current stimulation on working memory. Journal of Cognitive Neuroscience, 29(4), 755–768. https://doi.org/10.1162/jocn_a_01077
Tsai, N., Buschkuehl, M., Kamarsu, S., Shah, P., Jonides, J., & Jaeggi, S. M. (2018). (Un)Great expectations: The role of placebo effects in cognitive training. Journal of Applied Research in Memory and Cognition, 7(4), 564–573. https://doi.org/10.1016/j.jarmac.2018.06.001
van Elk, M., Groenendijk, E., & Hoogeveen, S. (2020). Placebo brain stimulation affects subjective but not neurocognitive measures of error processing. Journal of Cognitive Enhancement, 4(4), 389–400. https://doi.org/10.1007/s41465-020-00172-6
Vodyanyk, M., Cochrane, A., Corriveau, A., Demko, Z., & Green, C. S. (2021). No evidence for expectation effects in cognitive training tasks. Journal of Cognitive Enhancement. https://doi.org/10.1007/s41465-021-00207-6
Ward, N., Hussey, E. K., Cunningham, E. C., Paul, E. J., McWilliams, T., & Kramer, A. F. (2019). Building the multitasking brain: An integrated perspective on functional brain activation during task-switching and dual-tasking. Neuropsychologia, 132, 107149. https://doi.org/10.1016/j.neuropsychologia.2019.107149
Wickham, H. (2016). ggplot2: Elegant graphics for data analysis. Springer-Verlag. https://ggplot2.tidyverse.org
Wickham, H. (2021). tidyr: Tidy messy data (1.2.0) [R package]. https://CRAN.R-project.org/package=tidyr
Wickham, H., Averick, M., Bryan, J., Chang, W., McGowan, L., François, R., Grolemund, G., Hayes, A., Henry, L., Hester, J., Kuhn, M., Pedersen, T., Miller, E., Bache, S., Müller, K., Ooms, J., Robinson, D., Seidel, D., Spinu, V., … Yutani, H. (2019). Welcome to the tidyverse. Journal of Open Source Software, 4(43), 1686. https://doi.org/10.21105/joss.01686
Wickham, H., François, R., Henry, L., & Müller, K. (2021). dplyr: A grammar of data manipulation (1.0.8) [R package]. https://CRAN.R-project.org/package=dplyr
Wiemers, E. A., Redick, T. S., & Morrison, A. B. (2019). The influence of individual differences in cognitive ability on working memory training gains. Journal of Cognitive Enhancement, 3(2), 174–185. https://doi.org/10.1007/s41465-018-0111-2
Wooten, T., Sansevere, K. S., Siqueria, S., McWilliams, T., Peach, S., Hussey, E. K., Brunyé, T. T., & Ward, N. (n.d.). Stage 1 Registered Report: Evaluating the efficacy of cranial electrical stimulation in ameliorating anxiety-induced cognitive deficits [version 2; peer review: 2 approved with minor revisions]. International Journal of Psychophysiology.
Zaghi, S., Acar, M., Hultgren, B., Boggio, P. S., & Fregni, F. (2010). Noninvasive brain stimulation with low-intensity electrical currents: Putative mechanisms of action for direct and alternating current stimulation. The Neuroscientist, 16(3), 285–307. https://doi.org/10.1177/1073858409336227
Funding
Research was sponsored by the U.S. Army DEVCOM Soldier Center and was accomplished under Cooperative Agreement Number W911QY-15–2-0001. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Army DEVCOM Soldier Center, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.
Author information
Contributions
Kayla S. Sansevere: Methodology, Software, Validation, Formal analysis, Investigation, Data curation, Writing—Original draft, Writing—Review and editing, Visualization. Thomas Wooten: Software, Validation, Writing—Review and editing. Thomas McWilliams: Software, Validation. Sidney Peach: Resources, Project administration. Erika K. Hussey: Writing—Review and editing. Tad T. Brunyé: Writing—Review and editing. Nathan Ward: Conceptualization, Methodology, Formal analysis, Resources, Data curation, Writing—Review and editing, Supervision, Project administration, Funding acquisition.
Ethics declarations
Ethics approval
This study was approved by the Tufts University Institutional Review Board (#1908026) and was conducted in accordance with the ethical standards as laid down in the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards.
Consent to participate
Freely given, informed consent to participate in this study was obtained from all participants.
Consent for publication
We obtained consent from participants to publish their de-identified data prior to submitting this paper to a journal, and these data are available on OSF (https://osf.io/5bwec/).
Conflict of interest
The authors declare no competing interests.
About this article
Cite this article
Sansevere, K.S., Wooten, T., McWilliams, T. et al. Self-reported Outcome Expectations of Non-invasive Brain Stimulation Are Malleable: a Registered Report that Replicates and Extends Rabipour et al. (2017). J Cogn Enhanc 6, 496–513 (2022). https://doi.org/10.1007/s41465-022-00250-x