
Cognitive, Affective, & Behavioral Neuroscience, Volume 18, Issue 5, pp 902–924

Does intrinsic reward motivate cognitive control? A naturalistic-fMRI study based on the synchronization theory of flow

  • Richard Huskey
  • Britney Craighead
  • Michael B. Miller
  • René Weber

Abstract

Cognitive control is a framework for understanding the neuropsychological processes that underlie the successful completion of everyday tasks. Only recently has research in this area investigated motivational contributions to control allocation. An important gap in our understanding is the way in which intrinsic rewards associated with a task motivate the sustained allocation of control. To address this issue, we draw on flow theory, which predicts that a balance between task difficulty and individual ability results in the highest levels of intrinsic reward. In three behavioral studies and one functional magnetic resonance imaging (fMRI) study, we used a naturalistic, open-source video game stimulus to show that changes in the balance between task difficulty and an individual’s ability to perform the task resulted in different levels of intrinsic reward, which were associated with distinct brain states. Specifically, psychophysiological interaction analyses show that the high levels of intrinsic reward associated with a balance between task difficulty and individual ability are accompanied by increased functional connectivity between key structures within cognitive control and reward networks. By comparison, a mismatch between task difficulty and individual ability is associated with lower levels of intrinsic reward and corresponds to increased activity within the default mode network. These results suggest that intrinsic reward motivates cognitive control allocation.

Keywords

Flow · Synchronization theory · Motivation · Cognitive control · Intrinsic reward · fMRI · Open source video game

Planning, goal maintenance, performance monitoring, response inhibition, and reward processing are key features of cognitive control (Miller, 2000; Miller & Cohen, 2001). However, much of the work in this area has largely ignored motivation despite the fact that it is theorized to play a role in control allocation and task performance (Braver et al., 2014). Recent attempts at integrating these two constructs have largely focused on the ways in which reward expectation motivates the allocation of control (Botvinick & Braver, 2014). A key finding demonstrates that control allocation is a function of anticipated task difficulty and expected rewards where humans strive to find an optimal balance between the two (Kool & Botvinick, 2014). Upon task completion, consummatory reward mechanisms track task-related outcomes and motivate subsequent behavior to maximize future rewards (O’Doherty et al., 2004). By comparison, the way in which task-related intrinsic rewards (Deci & Ryan, 1985) motivate the sustained allocation of cognitive control during task execution remains largely unknown (Braver et al., 2014).

Mounting evidence has demonstrated that increased extrinsic rewards (e.g., monetary payments) are associated with increases in sustained task performance and increased neural activity in attentional, reward, and cognitive control networks (Engelmann, Damaraju, Padmala, & Pessoa, 2009; Locke & Braver, 2008). Similarly, the intrinsically rewarding nature of self-determined choice has been shown to elicit activity in reward-network structures and corresponds with increases in task enjoyment and performance (Kang et al., 2009; Leotti & Delgado, 2011; Murayama et al., 2015). Although robust evidence shows that, under some circumstances, demanding tasks can be intrinsically rewarding (for a review, see: Inzlicht, Shenhav, & Olivola, 2018), it is unknown how intrinsic rewards resulting from task demands (and not from choice) motivate cognitive control allocation. This may be due, at least in part, to the difficulty of manipulating task-based intrinsic reward in a laboratory setting.

Flow theory (Csikszentmihalyi, 1975) offers a potential solution for overcoming this challenge. Flow theory posits that the sustained execution of a task is experienced as intrinsically rewarding when there is a balance between the task’s difficulty and an individual’s ability to meet the task’s demands (for a modern treatment, see Inzlicht et al., 2018). By comparison, the theory predicts that a mismatch between task difficulty and individual ability leads to different psychological states. Tasks for which difficulty is greater than individual ability lead to a state of anxiety, whereas tasks for which difficulty is less than individual ability lead to boredom (Nakamura & Csikszentmihalyi, 2005).

Importantly, flow is experienced as intrinsically rewarding such that participants undertake a flow-inducing task “for its own sake, with little concern for what they will get out of it, even when it is difficult” (Csikszentmihalyi, 1990, p. 71). While flow states have been observed across a diversity of activities, including musical composition, athletics, and creative and artistic work, they also emerge during video game play, as enjoyable video games depend on a balance between game difficulty and player ability (Sherry, 2004). Evidence using a video game stimulus demonstrates that allowing task difficulty to vary in relationship to individual ability results in a curvilinear relationship where self-reported intrinsic reward is low when difficulty ≠ ability and high when difficulty ≈ ability (Keller & Bless, 2008). A recent behavioral and psychophysiological study using a racing video game also showed that the flow state (difficulty ≈ ability) resulted in the highest levels of absorption, attentional effort, and efficient gaze compared with conditions where difficulty ≠ ability (Harris, Vine, & Wilson, 2017a).

Progress also has been made towards understanding the neural basis of flow. Specifically, the synchronization theory of flow predicts that the intrinsically rewarding state of flow results from a network synchronization process between structures within cognitive control and reward networks when task difficulty ≈ individual ability (Weber, Huskey, & Craighead, 2016; Weber, Tamborini, Westcott-Baker, & Kantor, 2009). In two independent functional magnetic resonance imaging (fMRI) studies, subjects answered math problems during an fMRI scanning session (Ulrich, Keller, & Grön, 2016b; Ulrich, Keller, Hoenig, Waller, & Grön, 2014). Problems that matched subjects’ ability corresponded to the highest levels of intrinsic reward compared with problems that were too easy or too difficult. This balance between difficulty and ability also was associated with increased activity in attentional and cognitive control structures, particularly the inferior frontal gyrus (IFG), anterior insula, and the superior and inferior parietal lobes (SPL, IPL). Increased activity also was observed in the dorsal striatum (both caudate nucleus and putamen), regions implicated in consummatory reward processing (O’Doherty et al., 2004; Satterthwaite et al., 2007) and performance monitoring during cognitive control (Berkman, Falk, & Lieberman, 2012). Similar experimental paradigms using video game stimuli indicate that a balance between difficulty and ability corresponds with activation in attentional (lateral prefrontal cortex, cerebellum, thalamus, SPL) and reward (caudate nucleus, nucleus accumbens, putamen) structures (Klasen, Weber, Kircher, Mathiak, & Mathiak, 2012; Yoshida et al., 2014). These results provide preliminary support for synchronization theory’s structural predictions (for a recent review, see Harris, Vine, & Wilson, 2017b).

By comparison, a mismatch between difficulty and ability is associated with lower levels of intrinsic reward and increased levels of activity among default mode network structures (DMN; Ulrich, Keller, & Grön, 2016a; Ulrich et al., 2014). Similar findings have been observed in a study using a naturalistic video game stimulus (Mathiak, Klasen, Zvyagintsev, Weber, & Mathiak, 2013). Moreover, sustained performance on difficult cognitive tasks has been shown to exhaust subjects, resulting in a shift from activity in frontoparietal control networks to the DMN (Esposito, Otto, Zijlstra, & Goebel, 2014). These results suggest that intrinsic reward may motivate task engagement and be a key factor driving shifts in brain-network organization between one optimized for cognitive control and one that characterizes task disengagement. Converging evidence shows that the insula plays a key role in shifting between these networks (Chang, Yarkoni, Khaw, & Sanfey, 2013) where changes in activity within this structure predict task disengagement (Meyniel, Sergent, Rigoux, Daunizeau, & Pessiglione, 2013).

These results suggest that task-related intrinsic reward modulates the allocation of cognitive control during task performance and that variation in intrinsic reward impacts networked brain connectivity patterns. Accordingly, and consistent with flow theory, we predict that self-reported intrinsic reward should be highest when task difficulty ≈ individual ability compared with conditions where task difficulty ≠ individual ability. If true, then synchronization theory predicts functional connectivity between key structures within the cognitive control and reward networks when task difficulty ≈ individual ability but not when difficulty ≠ individual ability.

To date, much of the flow literature has relied on self-report measures administered after a flow-inducing task. As a source of convergent validity, and to overcome potential limitations associated with self-reports (Nisbett & Ross, 1980), we also included an online behavioral measure for evaluating our experimental manipulation. Previous experimentation has shown a curvilinear relationship between motivation and attentional engagement (Lang, 2000). Within the context of motivated attentional engagement, such results have a straightforward interpretation. All other things being equal, subjects should allocate more attentional resources to motivationally relevant tasks compared with less motivationally relevant tasks. It follows that tasks perceived as having higher levels of intrinsic reward should be more motivationally relevant than tasks that are perceived as having lower levels of intrinsic reward. Therefore, subjects should show more attentional engagement when intrinsic reward is high compared with low. To test this, we had subjects perform a secondary task reaction time procedure (STRT; Lang, Bradley, Park, Shin, & Chung, 2006) while playing the experimental video game stimulus. We predicted that reaction times would show an inverted U-shaped pattern where attentional engagement with the video game stimulus is highest (and therefore STRTs are longest) when task difficulty ≈ individual ability compared with conditions where task difficulty ≠ individual ability.

This manuscript details the validation of an experimental protocol for manipulating intrinsic reward and its application to an fMRI context. Our results provide self-report, behavioral, and neuropsychological evidence (using both brain activation and functional connectivity analyses) demonstrating a relationship between intrinsic reward and cognitive control. We conclude with a discussion of the implications of our findings, consider how our behavioral paradigm answers recent calls for more naturalistic experimental designs within the cognitive neuroscience literature, and outline next steps for future research in this area.

Methods

General overview

Three behavioral experiments were conducted to evaluate a novel procedure for manipulating and measuring the relationship between task difficulty, individual ability, intrinsic reward, and cognitive control. This procedure was then adapted to an fMRI context. All four experiments shared the same conceptual logic such that subjects played a video game while responding to a STRT measure (Figure 1). We detail differences in gameplay and STRT parameters below.
Figure 1.

Schematic of the experimental paradigm. In all experiments, the subject’s goal was to use their mouse to collect targets while avoiding asteroids and responding to STRT trials as quickly as possible. For the behavioral experiments (A), visual STRT trials appeared in one of five different locations on a second screen. In the fMRI experiment (B), STRT trials appeared on the same screen in one of four different locations. While each experiment (behavioral or fMRI) required subjects to complete conceptually similar tasks (C), the number of STRT trials and the duration of each condition differed across experiments.

Subjects

Human subjects in each experiment were drawn from a pool of students at the University of California, Santa Barbara (Table 1; final ns: Experiment 1 = 122, Experiment 2 = 110, Experiment 3 = 87, fMRI experiment = 18). Subjects in all experiments (behavioral and fMRI) were screened prior to participation and were not recruited if they had participated in any of the previous studies. Accordingly, subjects in all experiments did not have prior experience with the video game stimulus or experimental paradigm. The University’s Institutional Review Board approved all experiments. Subjects in the fMRI experiment were right-handed, had normal or corrected-to-normal vision, and did not demonstrate any contraindication to fMRI scanning. Experiment 3 showed that self-reported video game ability was a significant predictor of actual video game performance. Accordingly, subjects were not recruited for the fMRI study if they reported very high or very low video game ability.
Table 1

Summary statistics describing the subject samples in all four experiments.

 

                | n   | Mean age (std. dev.) | % Female (% Male) | Mean self-reported video game ability (std. dev.)*
Experiment 1    | 122 | 19.40 (1.21)         | 64.8 (35.2)       | 1.80 (1.21)
Experiment 2    | 110 | 20.48 (1.93)         | 70.9 (29.1)       | 1.64 (0.85)
Experiment 3    | 87  | 19.49 (1.44)         | 77.0 (23.0)       | 3.23 (1.63)
fMRI Experiment | 18  | 22.83 (4.02)         | 77.8 (22.2)       | 3.00 (1.03)

*Self-reported video game ability was measured using a single-item, 4-point scale in experiments 1 and 2 and a single-item, 7-point scale in experiment 3 and the fMRI study.

Previous behavioral research evaluating engagement with video games has shown considerable variability in effect sizes (Raines, Levine, & Weber, 2018; Sherry, 2001). Accordingly, small effects were assumed when calculating a power analysis for the first behavioral experiment, and subsequent behavioral experiments sought to maintain comparable sample sizes. The fMRI sample size corresponded to related studies reported in the literature (Desmond & Glover, 2002; Friston, 2012). One run for one subject was excluded from the fMRI experiment due to equipment malfunction; two subjects voluntarily withdrew from the study during initial structural image acquisition.

Naturalistic video game stimulus

In experiments 1 and 2, participants played Star Reaction (ABiGames), a point-and-click style video game where subjects used their cursor to collect star-shaped targets that were displayed at different locations on a screen while avoiding rings that bounced around the screen. Thirteen levels incrementally manipulated difficulty by altering the number of targets a subject needed to collect, the number of objects to be avoided, and the rate at which these objects moved around the video game window. While useful for initial testing, Star Reaction offered few options for interface customization, thereby limiting experimental control. To overcome this issue, an open-source variant called Asteroid Impact (CC BY-SA 4.0) was developed for experiment three and the subsequent fMRI experiment. Asteroid Impact was designed to have similar mechanics to Star Reaction while allowing for tighter experimental control (the experimental video game stimulus and its supporting documentation can be downloaded from: https://github.com/richardhuskey/asteroid_impact).

Secondary task reaction time measurement

Subjects completed a STRT measure while playing the experimental video game (Figure 1). STRTs were defined as the latency between the onset of a stimulus (trial) and the moment when a subject responded with a key press. For experiments 1 and 2, each condition included 48 trials that lasted for 1,500 ms. Only visual trials were used in experiment 1, whereas half of the visual trials were replaced with auditory trials (sine waveform, 440.0 Hz) in experiment 2. The intertrial interval (ITI) for each trial was calculated by adding a normally distributed random offset (M = 1,969 ms, SD = 1,000 ms) to a baseline of 1,500 ms. In experiment 3 and the fMRI experiment, 24 visual trials were shown for each condition. The ITI for these trials was jittered according to a truncated Gaussian distribution with a floor of 1,500 ms and a standard deviation of 2.0. In the behavioral experiments, subjects responded to STRT trials by using their nondominant hand to press the spacebar key on a computer keyboard. In the fMRI experiment, subjects used the thumb on their left hand (all subjects were right-handed) to press a button on an MRI-safe button box. Trials were shown in one of five possible locations on a second screen in the behavioral experiments and in one of four possible locations on the same screen in the fMRI experiment.
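
To make these timing parameters concrete, the sketch below (Python) generates ITIs under both schemes. It is illustrative only, not the stimulus software; the mean of the truncated distribution and the assumption that its standard deviation of 2.0 is expressed in seconds are ours.

    import numpy as np

    rng = np.random.default_rng()

    def iti_experiments_1_2(n_trials=48):
        # Experiments 1-2: 1,500-ms baseline plus a normally distributed
        # random offset (M = 1,969 ms, SD = 1,000 ms).
        return 1500 + rng.normal(loc=1969, scale=1000, size=n_trials)

    def iti_experiment_3_fmri(n_trials=24, mean_ms=2000, sd_ms=2000):
        # Experiment 3 and fMRI: jitter drawn from a Gaussian truncated at a
        # 1,500-ms floor. The floor and SD come from the text; the mean and the
        # seconds-to-milliseconds conversion are assumptions for this sketch.
        itis = rng.normal(loc=mean_ms, scale=sd_ms, size=n_trials)
        return np.clip(itis, 1500, None)  # simple floor; true truncation would resample

    print(iti_experiments_1_2()[:3].round(), iti_experiment_3_fmri()[:3].round())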

Measuring intrinsic reward

In experiments 1 and 2, intrinsic reward was measured using a 4-item, 7-point scale (Bowman, Weber, Tamborini, & Sherry, 2013; Weber, Behr, & Bates, 2014). Experiment 3 used the Event Experience Scale, a better validated and more widely used measure of task-related intrinsic reward (Jackson & Marsh, 1996). Specifically, self-reported intrinsic reward was measured using the 4-item, 5-point autotelic experience subscale. Items on this scale included: “I really enjoyed the experience”; “I loved the feeling of performance and want to capture it again”; “The experience left me feeling great”; and “I found the experience extremely rewarding.”

Measuring individual differences in intrinsic reward sensitivity

Experiment 3 measured intrinsic reward sensitivity, which is understood as a trait-level measure, using the 4-item, 5-point autotelic personality subscale of the Activity Experience Scale (Jackson & Eklund, 2004).

Measuring video game ability

It is possible that subjects’ video game ability explains differences in self-reported intrinsic reward as well as STRT performance. Accordingly, video game ability was included as an a priori defined covariate of no interest. In experiments 1 and 2, video game ability was evaluated using a 4-point single-item measure where subjects were asked to “indicate their general video game skill.” In experiment 3 and the fMRI study, this was changed to a 7-point single-item measure. In addition, and based on evidence that performance on different cognitive tasks correlates with video game ability (Bowman et al., 2013; Sherry, 2004), established behavioral measures of targeting (Watson & Kimura, 1989), attentional vigilance (Robertson, Manly, Andrade, Baddeley, & Yiend, 1997), dual-tasking ability (Erickson et al., 2007), and three-dimensional mental rotation (Peters et al., 1995) were collected as independent behavioral proxies for video game ability in experiment 3 (Figures 2, 3, 4, and 5).
Figure 2.

Example of the redrawn Vandenberg and Kuse mental rotations test (Peters et al., 1995). This test was conducted as a potential measure of video game skill in experiment 3.

Figure 3.

Experimental schematic of the sustained attention response test (Robertson et al., 1997). This test was conducted as a potential measure of video game skill in experiment 3.

Figure 4.

Experimental schematic of the dual-tasking paradigm (Erickson et al., 2007). This test was conducted as a potential measure of video game skill in experiment 3.

Figure 5.

Experimental schematic of the targeting task (Watson & Kimura, 1989). This test was conducted as a potential measure of video game skill in experiment 3.

Three-dimensional mental rotation

The redrawn Vandenberg and Kuse mental rotations test (Peters et al., 1995) was administered in two three-minute runs. For each run, subjects were shown 12 three-dimensional reference shapes. For each reference shape, subjects were asked to identify which two (out of four) shapes matched the reference. Subjects were given a point if they correctly identified both shapes (M = 7.298, SD = 3.894, range = 0–22).

Sustained attention response test

Following Robertson et al. (1997), subjects were shown a series of numbers (1–9) in five different font sizes for 250 ms (font sizes were balanced across all values). The trial was then masked for 900 ms. Subjects were instructed to press a key as quickly as possible for all numbers (a go trial) except the number 3 (a no-go trial). A total of 225 trials were shown, 25 of which were no-go trials. Mirroring previous studies (Unsworth et al., 2015), the two dependent measures included: (1) accuracy, operationalized as the frequency count of no-go trials where a key press was withheld (M = 21.824, SD = 2.780, range = 11–25), and (2) the standard deviation of reaction times for correct go trials (M = 453.012, SD = 87.169, range = 102.07–544.40).
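
As an illustration of how these two dependent measures can be computed from a trial-level log (the column names are hypothetical), a minimal sketch:

    import pandas as pd

    def sart_measures(trials: pd.DataFrame):
        # `trials` is assumed to hold one row per trial with hypothetical columns
        # 'is_no_go' (bool), 'responded' (bool), and 'rt_ms' (float).
        no_go = trials[trials["is_no_go"]]
        # (1) Accuracy: number of no-go trials (of 25) where the key press was withheld.
        accuracy = int((~no_go["responded"]).sum())
        # (2) Variability: SD of reaction times on correct (responded) go trials.
        correct_go = trials[~trials["is_no_go"] & trials["responded"]]
        rt_sd = float(correct_go["rt_ms"].std())
        return accuracy, rt_sd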

Dual-task paradigm

Consistent with Erickson et al. (2007), subjects were shown two types of trials (single-mixed, dual-mixed), which lasted for 2,500 ms and were separated by a 500-ms fixation cross. In single mixed trials, subjects were shown one of four possible stimuli: >, <, a red square, or a green square. Each stimulus was mapped to a specific key and subjects were instructed to press the correct key as quickly as possible when a trial was shown without sacrificing accuracy. In the dual-mixed condition, two of four possible stimuli were shown, and subjects were instructed to press the two keys that corresponded to each stimulus. A total of eight combinations of single- and dual-mixed trials were possible. Each was presented a total of 20 times in a randomized order. Two dependent measures were assessed: (1) accuracy, the total number of dual-mixed trials where both keys were correctly pressed (M = 67.279, SD = 13.495, range = 5–79), and (2) variability in task updating/monitoring for dual-mixed trials, operationalized as the standard deviation of Reaction Time 2–Reaction Time 1 for all correct dual mixed trials (M = 182.566, SD = 92.079, range = 14.25–612.65).
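
The two dual-task measures can be computed analogously; the sketch below assumes a hypothetical log with one row per dual-mixed trial:

    import pandas as pd

    def dual_task_measures(dual_trials: pd.DataFrame):
        # Hypothetical columns: 'both_correct' (bool), 'rt1_ms', 'rt2_ms' (floats).
        # (1) Accuracy: dual-mixed trials where both keys were pressed correctly.
        accuracy = int(dual_trials["both_correct"].sum())
        # (2) Updating/monitoring variability: SD of (RT2 - RT1) on correct trials.
        correct = dual_trials[dual_trials["both_correct"]]
        updating_sd = float((correct["rt2_ms"] - correct["rt1_ms"]).std())
        return accuracy, updating_sd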

Targeting task

Subjects’ targeting abilities were evaluated using a dart-throwing procedure (Watson & Kimura, 1989). A 60-cm diameter circular target with the bullseye 152 cm from the floor was fixed to a wall 3 m from where subjects stood. Subjects completed 25 overhand throws of a 25-gram dart using their dominant hand. The distance of each throw from the center was recorded in millimeters and averaged for each subject (M = 137.838, SD = 27.085, range = 70.89–207.00). Smaller values indicated greater accuracy.

Behavioral localizer tasks

This fMRI experiment used n-back and gambling tasks (Figures 6 and 7) to behaviorally localize neural activations in cognitive control and reward regions of interest, respectively. These tasks were selected a priori to allow us to define seed regions of interest (ROIs) for psychophysiological interaction analyses (PPI; see below), where the ROIs were defined by two tasks that were independent of our main behavioral manipulation. This decision had two benefits. First, using independently localized ROIs prevented circularity in our analysis that might otherwise inflate our statistical results. Second, these localizer tasks also were used in the Human Connectome Project, which helps to integrate our findings within the broader literature.
Figure 6.

Experimental schematic of the N-back procedure. This task was conducted in the fMRI experiment to independently localize cognitive control ROIs

Figure 7.

Experimental schematic of the Gambling task procedure. This task was conducted in the fMRI experiment to independently localize reward ROIs

The n-back task was used to behaviorally localize functional activity in cognitive control regions of interest. The n-back task was selected because it shows reliable activation patterns across subjects (Drobyshevsky, Baumann, & Schneider, 2006) and sessions (Caceres, Hall, Zelaya, Williams, & Mehta, 2009) and does not show gender differences (Schmidt et al., 2009). In a series of two runs, subjects were shown 320 trials where each trial was a randomly selected letter from A–Z shown for 1,000 ms. In the 2-back condition, subjects were required to press a key when the letter shown was the same as the one shown two trials back. In the 0-back condition, subjects pressed a key when the trial showed the letter X. Each run followed a 2-back (40 trials), 0-back (40 trials), 2-back (40 trials), and 0-back (40 trials) pattern. Subjects were instructed to prioritize accuracy before speed. The 2-back and 0-back conditions were modeled in a block design with a 2-back > 0-back contrast in subsequent fMRI data analyses. A priori hypothesized seed ROIs (in MNI152 space) for the PPI analysis were generated based on peak activations resulting from this contrast and included: right DLPFC (32, 54, 10), left DLPFC (−32, 54, 10), right thalamus (16, −16, 10), and left thalamus (−8, −10, −2). Additionally, our primary brain activation analysis (discussed below) implicated an additional a posteriori region of interest, which we also were able to localize independently using the n-back task: the right insula (40, 16, −6).
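
For clarity, a minimal sketch of the response rule that defines targets in the two n-back conditions (the letter stream and function name are illustrative):

    def nback_targets(letters, condition):
        # Return a boolean per trial marking whether a key press is required.
        if condition == "0-back":
            return [letter == "X" for letter in letters]       # press on 'X'
        if condition == "2-back":
            return [i >= 2 and letters[i] == letters[i - 2]    # press on a 2-back match
                    for i in range(len(letters))]
        raise ValueError(f"unknown condition: {condition}")

    # Example: in a 2-back block the third and fourth letters are targets.
    print(nback_targets(list("ABABX"), "2-back"))  # [False, False, True, True, False]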

Structures within the reward network were behaviorally localized using a gambling task that has been shown to activate the basal ganglia reliably (Delgado, Nystrom, Fissell, Noll, & Fiez, 2000; May et al., 2004; Tricomi, Delgado, & Fiez, 2004). In this task, subjects were shown a series of cards with a numeric value of 1–9. During an initial guessing period (2,500 ms), subjects were asked to indicate whether they thought the value of the card was greater than or less than 5. Subjects were then shown the outcome of their guess (1,000 ms) and then a fixation cross during the post-outcome period (11,500 ms) for a cumulative trial duration of 15,000 ms. A total of 100 trials were shown across 5 runs. Subjects were rewarded 1 dollar for correct guesses, lost 50 cents for incorrect guesses, and did not win or lose any money for tie trials. The ratio of wins, losses, and ties was set at 40:40:20 (balanced across all runs). Neural activity during the post-outcome period was modeled in an event-related design with a win > loss contrast. Seed ROIs (in MNI152 space) for the PPI analysis were generated based on peak activations resulting from this contrast and included: right ventral striatum/nucleus accumbens (10, 16, −6), left ventral striatum/nucleus accumbens (−10, 16, −6), right dorsal striatum/putamen (16, 12, −6), and the left dorsal striatum/putamen (−18, 12, 6).
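
As an illustration of the reward schedule described above, the sketch below generates one run’s outcomes in the fixed 40:40:20 win:loss:tie ratio and applies the payoff rule (the run length and helper names are illustrative):

    import random

    PAYOFF = {"win": 1.00, "loss": -0.50, "tie": 0.00}  # dollars per trial

    def gambling_run_outcomes(n_trials=20):
        # Wins, losses, and ties in a fixed 40:40:20 ratio, shuffled within the run.
        n_win = n_loss = int(n_trials * 0.4)
        outcomes = (["win"] * n_win + ["loss"] * n_loss
                    + ["tie"] * (n_trials - n_win - n_loss))
        random.shuffle(outcomes)
        return outcomes

    outcomes = gambling_run_outcomes()
    print(sum(PAYOFF[o] for o in outcomes))  # 8 wins minus 8 losses at $0.50 = $4.00 per run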

Procedures

Subjects provided informed consent before each experiment was conducted. Self-reported video game ability, intrinsic reward sensitivity, and baseline reaction times were collected at the beginning of each experiment. Subjects then familiarized themselves with the video game stimulus by reading the rules and by repeatedly playing the video game’s first level for a period of 2 minutes. Subjects then played three randomly ordered conditions (low-difficulty, high-difficulty, and balanced-difficulty; see Figure 1c for a conceptual schematic). Subjects were instructed to try to complete as many levels as possible during each condition. The low-difficulty condition (ability > difficulty) was operationalized as repeated play of the video game’s first and least challenging level, whereas the high-difficulty condition (ability < difficulty) required repeated play of the most challenging level.

Of critical importance for flow theory is the way in which task difficulty is balanced with individual ability. In the balanced-difficulty condition (ability ≈ difficulty), video game difficulty and player ability were matched by incrementally increasing the game’s difficulty after a subject completed a level. This manipulation relies on a logic common to video game design (Koster, 2005) where once an individual has developed sufficient skill to beat one level, the next level is incrementally more difficult. This simple procedure ensures that task difficulty is constantly matched with individual ability. In the present study, the balanced-difficulty condition started on the game’s second level. Each level required subjects to collect a certain number of targets. Level difficulty increased once subjects had collected all targets for a given level. In experiments 1 and 2, video game difficulty was determined based on the default Star Reaction settings. Asteroid Impact allowed us to tune the video game’s parameters in order to adjust difficulty. The parameters used in experiment 3 and the fMRI study are now discussed in more detail.

The low-difficulty condition required subjects to collect three targets while avoiding just one object. By comparison, the high-difficulty condition required that subjects collect 25 targets while avoiding seven objects of varying sizes that traveled at different speeds. The balanced-difficulty condition incrementally increased difficulty by modifying four parameters: (1) the number of targets to collect, (2) the number of objects to avoid, (3) the rate at which objects moved, and (4) the size of the objects to be avoided. Extensive pretesting (not reported in this manuscript, although experiment three reports the validation of these pretests) was conducted to determine the correct parameters for each of these settings. Such a design draws directly from flow theory by assuming that task-related intrinsic reward is not driven by actual task outcomes (e.g., performance) but instead by the perception of a balance between task difficulty and individual ability. Importantly, this assumption is corroborated by a large body of literature (Csikszentmihalyi, 1975, 1990). We also provide empirical support for the assumption that self-reported intrinsic reward is highest during the balanced difficulty condition (see Results section) and thereby validate our experimental procedure.
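
A minimal sketch of the balanced-difficulty logic follows; the parameter names and increment sizes are illustrative (the actual level definitions ship with the Asteroid Impact source), but the structure mirrors the four parameters listed above.

    from dataclasses import dataclass

    @dataclass
    class LevelSettings:
        targets_to_collect: int   # (1) number of targets to collect
        objects_to_avoid: int     # (2) number of objects to avoid
        object_speed: float       # (3) rate at which objects move
        object_size: float        # (4) size of the objects to be avoided

    def next_level(current: LevelSettings) -> LevelSettings:
        # Step difficulty up once a subject clears the current level, keeping task
        # demands matched to demonstrated ability (increments here are illustrative).
        return LevelSettings(
            targets_to_collect=current.targets_to_collect + 2,
            objects_to_avoid=current.objects_to_avoid + 1,
            object_speed=current.object_speed * 1.1,
            object_size=current.object_size * 1.05,
        )

    # Balanced-difficulty condition: start at the game's second level and call
    # next_level() each time all targets for the current level are collected.
    level = LevelSettings(targets_to_collect=5, objects_to_avoid=2,
                          object_speed=1.0, object_size=1.0)
    level = next_level(level)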

In experiments 1 and 2, each condition lasted for a total of 4 minutes. Because experiment 3 was designed to validate an fMRI procedure that would employ a block design, and a 4-minute block is rather long and may create confounds with low-frequency scanner noise, we shortened each condition to 2 minutes in experiment 3 and the fMRI experiment. Self-reported measures of intrinsic reward were collected after each experimental condition in the behavioral experiments. Subjects completed each condition just once in experiments 1, 2, and 3, and condition order was randomized for all subjects. In the fMRI experiment, subjects completed a total of four runs, where each run included all three conditions and each condition was separated by 57 s of rest (black screen) and 8 s of instructions. Conditions in the fMRI experiment were shown in a counterbalanced order. Researchers were not blind to the conditions.

In experiment 3, subjects then completed the three-dimensional mental rotation, attentional vigilance, dual-tasking, and targeting measures. In the fMRI experiment, subjects then completed n-back and gambling tasks to independently localize neural activity in key cognitive control and reward network regions of interest.

STRT and self-report data analysis

The STRT data analysis plan was determined a priori, and the same analytic approach was applied to all experiments. All STRT observations were capped at 1,500 ms, and the harmonic mean response time was calculated for each subject for each condition (for extended justification of this analytic decision, see Ratcliff, 1993). Repeated-measures ANCOVAs were calculated to assess how intrinsic reward and reaction times differed across experimental conditions. In each model, the variable of interest (i.e., reaction time, self-reported intrinsic reward) was included as a within-subjects factor, and condition order was included as a between-subjects factor to control for possible order effects. Self-reported video game ability and baseline reaction time covariates also were included in models evaluating reaction times. Statistics from the multivariate tests are reported because these are more robust against violations of the assumptions of normality and sphericity.
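
A sketch of the reaction-time preprocessing step (capping at 1,500 ms, then harmonic means per subject per condition) is shown below; the column names are hypothetical, and the subsequent ANCOVAs were run in standard statistical software.

    import pandas as pd
    from scipy.stats import hmean

    def strt_summary(strt: pd.DataFrame) -> pd.DataFrame:
        # `strt` is assumed to hold one row per STRT trial with hypothetical
        # columns 'subject', 'condition', and 'rt_ms'.
        capped = strt.assign(rt_ms=strt["rt_ms"].clip(upper=1500))  # cap at 1,500 ms
        return (capped.groupby(["subject", "condition"])["rt_ms"]
                      .apply(hmean)               # harmonic mean RT (Ratcliff, 1993)
                      .unstack("condition"))      # wide format for the repeated-measures ANCOVA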

fMRI acquisition, preprocessing, and analysis

Data were acquired on a 3-tesla Siemens Magnetom Prisma scanner. Following recommendations established by the Human Connectome Project (Ugurbil et al., 2013), a multiband echo planar gradient sequence measured the blood oxygenation level-dependent (BOLD) contrast (TR = 720.0 ms, TE = 37.0 ms, FA = 52 degrees, FOV = 208 mm, multiband acceleration factor = 8), with each volume consisting of 72 interleaved slices at a 2-mm isotropic spatial resolution acquired parallel to the AC-PC plane. A high-resolution T1-weighted sagittal sequence of the whole brain (TR = 2,500.0 ms, TE = 2.22 ms, FA = 7 degrees, FOV = 241 mm, 0.9-mm isotropic resolution) was collected before functional scanning.

Data preprocessing and analysis were performed using FEAT (fMRI Expert Analysis Tool v6.0) from the Oxford Center for Functional MRI of the Brain (FMRIB) Software Library (FSL v5.0) using a three-stage pipeline (Weber, Mangus, & Huskey, 2015). The first stage included brain extraction (BET; Smith, 2002), spatially aligning volumes to a common coordinate system (MCFLIRT; Jenkinson, Bannister, Brady, & Smith, 2002), and spatial smoothing (7-mm FWHM kernel). In the second stage, an independent components analysis (ICA-AROMA; Pruim, Mennes, van Rooij, et al., 2015; Pruim, Mennes, Buitelaar, & Beckmann, 2015) was applied to the filtered data to remove motion artifacts. Finally, the functional data were high-pass filtered (sigma = 360.0 s), coregistered to T1-weighted anatomical scans (FLIRT; Jenkinson et al., 2002; Jenkinson & Smith, 2001), registered to the MNI152 standard template using a nonlinear transformation (FNIRT; Andersson, Jenkinson, & Smith, 2007a, 2007b), prewhitened, and fit to a general linear model (GLM).
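
The pipeline itself was run with FSL’s FEAT and ICA-AROMA. Purely to make the stages explicit, the sketch below chains roughly equivalent FSL command-line tools from Python; the file names are hypothetical, and any option not named in the text is left at its FSL default.

    import subprocess

    def run(cmd):
        subprocess.run(cmd, check=True)

    TR = 0.72                        # seconds
    smooth_sigma = 7 / 2.3548        # 7-mm FWHM kernel expressed as a Gaussian sigma (mm)
    hp_sigma_vols = 360.0 / TR       # 360-s high-pass sigma expressed in volumes

    run(["bet", "T1w.nii.gz", "T1w_brain.nii.gz"])                        # brain extraction
    run(["mcflirt", "-in", "func.nii.gz", "-out", "func_mc"])             # motion correction
    run(["fslmaths", "func_mc", "-s", f"{smooth_sigma:.2f}", "func_sm"])  # spatial smoothing
    # ICA-AROMA (Pruim et al., 2015) is applied at this point to remove residual
    # motion components; see the ICA-AROMA distribution for its command line.
    run(["fslmaths", "func_sm", "-bptf", f"{hp_sigma_vols:.1f}", "-1", "func_hp"])  # high-pass only
    run(["flirt", "-in", "T1w_brain.nii.gz", "-ref", "MNI152_T1_2mm_brain.nii.gz",
         "-omat", "highres2standard.mat"])                                # affine to MNI152
    run(["fnirt", "--in=T1w.nii.gz", "--ref=MNI152_T1_2mm.nii.gz",
         "--aff=highres2standard.mat", "--cout=highres2standard_warp"])   # nonlinear warp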

We first conducted analyses to evaluate brain activation in response to our experimental manipulation. Accordingly, a series of first-level GLMs was estimated for all subjects for all runs for the Asteroid Impact experimental conditions. Each block-design model included an explanatory variable (EV) for each condition (i.e., low-difficulty, balanced-difficulty, high-difficulty), fixed for the entire 120-s duration of each condition and convolved with a hemodynamic response function (gamma convolution, mean lag = 6 s, SD = 3 s). Temporal derivatives of each EV also were included as covariates of no interest. Following a similar analytical logic established in related studies (Ulrich et al., 2016b, 2014), planned contrasts modeled neural activations unique to each condition. These contrasts included: balanced-difficulty > low- and high-difficulty (2, −1, −1), balanced-difficulty > low-difficulty (1, −1), balanced-difficulty > high-difficulty (1, −1), low-difficulty > balanced-difficulty (1, −1), high-difficulty > balanced-difficulty (1, −1), and high-difficulty > low-difficulty (1, −1).
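
To make the design concrete, here is a sketch of how one run’s EVs and contrasts could be constructed (FEAT does this internally; the onset times and run length are illustrative, and the gamma HRF is parameterized by the mean lag and SD given above).

    import numpy as np
    from scipy.stats import gamma

    TR, n_vols = 0.72, 780                       # illustrative run length (~562 s)
    frame_times = np.arange(n_vols) * TR

    def gamma_hrf(mean=6.0, sd=3.0, length=32.0):
        # Gamma HRF parameterized by its mean lag and SD (in seconds), as in FSL.
        shape, scale = (mean / sd) ** 2, sd ** 2 / mean
        h = gamma.pdf(np.arange(0, length, TR), a=shape, scale=scale)
        return h / h.sum()

    def boxcar(onset_s, duration_s=120.0):
        # 1 during the 120-s condition block, 0 otherwise, sampled at the TR.
        return ((frame_times >= onset_s) & (frame_times < onset_s + duration_s)).astype(float)

    onsets = {"low": 65.0, "balanced": 250.0, "high": 435.0}   # illustrative onsets (s)
    hrf = gamma_hrf()
    X = np.column_stack([np.convolve(boxcar(t), hrf)[:n_vols] for t in onsets.values()])

    # Planned contrasts over the (low, balanced, high) columns of X:
    c_balanced_vs_both = np.array([-1, 2, -1])   # balanced > low & high
    c_balanced_vs_low = np.array([-1, 1, 0])     # balanced > low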

These first-level models were then carried forward into a second-level mixed effects analysis (FLAME; Beckmann & Smith, 2004; Woolrich, Behrens, Beckmann, Jenkinson, & Smith, 2004). No additional contrasts were constructed at the second-level. In line with recommendations for applying cluster-based corrections for multiple comparisons (Eklund, Nichols, & Knutsson, 2016; Woo, Krishnan, & Wager, 2014), we applied a cluster-based procedure to correct for multiple comparisons (Worsley, 2001) with a cluster defining threshold of Z = 3.1 and a cluster extent threshold of p < 0.0001. Structures were evaluated using FSL’s probabilistic atlases and were cross-referenced with the Neurosynth database (Yarkoni, Poldrack, Nichols, Van Essen, & Wager, 2011).

A series of psychophysiological interaction analyses (PPI; Friston et al., 1997; Huskey, 2016) was then modeled to evaluate task-modulated functional connectivity between structures within cognitive control and reward networks. As discussed above, seed regions of interest (ROIs) were defined independently of our primary experimental task based on functional activations in the n-back and gambling localizer tasks. A 3-mm sphere was drawn around the peak voxel for each ROI (in MNI152 space), warped to each subject’s native space, and used to extract the neural timeseries from the filtered functional data for each subject for each run. The first-level PPI model included an indicator variable that encoded the balanced-difficulty > low-difficulty and high-difficulty contrast, a physiological EV, and an interaction term. Second-level mixed-effects models were then estimated for each seed ROI. Given that PPI analyses tend to suffer from decreased statistical power (Friston et al., 1997; O’Reilly, Woolrich, Behrens, Smith, & Johansen-Berg, 2012), FSL’s default cluster-based correction for multiple comparisons was applied with a cluster-defining threshold of Z = 2.3 and a cluster extent threshold of p < 0.05. PPI results are reported for the interaction term, which reflects task-modulated changes in connectivity for the balanced-difficulty condition.
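
The general form of the PPI model can be sketched as follows. The actual analysis was run in FSL FEAT with spheres warped to native space, whereas this illustration extracts the seed time series from MNI-space data with nilearn; the file name and the timing placeholder are assumptions.

    import numpy as np
    from nilearn.maskers import NiftiSpheresMasker

    # Physiological EV: mean time series in a 3-mm sphere around the right
    # nucleus accumbens seed (MNI 10, 16, -6); the input file name is hypothetical.
    masker = NiftiSpheresMasker(seeds=[(10, 16, -6)], radius=3, standardize=True)
    physio = masker.fit_transform("func_preprocessed_mni.nii.gz")[:, 0]

    # Psychological EV: +1 for balanced-difficulty volumes and -1 for low- and
    # high-difficulty volumes (0 during rest), encoding the balanced > low & high
    # contrast. The nonzero entries would come from the run's condition timing.
    psych = np.zeros_like(physio)

    # The PPI term is the elementwise product; the first-level GLM then includes
    # psych, physio, and the interaction as explanatory variables.
    ppi = psych * physio
    X = np.column_stack([psych, physio, ppi])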

Results

Behavioral validation experiments (experiments one, two, and three)

Experiments one and two tested whether manipulating a naturalistic video game stimulus modulated task engagement and intrinsic reward. Measures used to assess intrinsic reward showed high internal consistency in both experiments one (Cronbach's α = 0.906) and two (Cronbach's α = 0.896), and the overall intrinsic reward models were significant for experiment 1 (Wilks’ λ = 0.511, F(2,115) = 54.964, p < 0.001) and experiment 2 (Wilks’ λ = 0.710, F(2,103) = 21.027, p < 0.001). Significant results also were observed when modeling STRTs to visual trials in experiment 1 (Wilks’ λ = 0.654, F(2,113) = 29.842, p < 0.001) and experiment 2 (Wilks’ λ = 0.868, F(2,101) = 7.684, p < 0.001), and for reaction times to auditory trials in experiment 2 (Wilks’ λ = 0.822, F(2,101) = 10.937, p < 0.001). In both experiments, and consistent with previous findings, intrinsic reward was greatest in the balanced-difficulty condition. The reaction time data also showed an inverted U-shaped pattern where the longest reaction times were observed during the balanced-difficulty condition.

Experiment 3 tested whether the video game ability covariate is best evaluated using self-report or behavioral measures, as well as the hypothesis that individual differences in intrinsic reward sensitivity predict task performance (Buetti & Lleras, 2016). Bivariate Pearson correlations were calculated to assess the relationship between subjects’ performance on each behavioral measure of ability and the total number of targets they successfully collected (M = 230.88, SD = 24.14, range = 119.00–274.00) while playing Asteroid Impact (a measure of overall video game performance; Table 2). Self-reported video game ability (r = 0.337, p = 0.002), the standard deviation of reaction times during the dual-mixed procedure (r = −0.221, p = 0.043), and three-dimensional mental rotation ability (r = 0.287, p = 0.008) were significantly correlated with Asteroid Impact performance. Asteroid Impact performance was then regressed on these three variables to further characterize the nature of this relationship. Self-reported video game ability was entered into the first block (adjusted R2 = 0.094, F(1,82) = 9.628, p = 0.003), with dual-mixed standard deviation, three-dimensional mental rotation ability, and two- and three-way interaction terms entered in the second block (adjusted R2 change = 0.012, F(5,77) = 2.646, p = 0.022). Self-reported video game ability was the only variable that significantly predicted Asteroid Impact performance (B = 0.324, p = 0.003). Therefore, it was again used as a covariate in subsequent reaction time analyses.
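
As a sketch of this hierarchical regression (the column names and data file are hypothetical), using statsmodels’ formula interface, where the '*' operator expands to main effects plus all two- and three-way interactions:

    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per subject; hypothetical columns: 'performance' (targets collected),
    # 'ability' (self-report), 'dual_sd' (dual-mixed SD), 'rotation' (mental rotation).
    df = pd.read_csv("experiment3.csv")

    block1 = smf.ols("performance ~ ability", data=df).fit()
    block2 = smf.ols("performance ~ ability * dual_sd * rotation", data=df).fit()

    print(block1.rsquared_adj)                        # block 1 adjusted R^2
    print(block2.rsquared_adj - block1.rsquared_adj)  # change in adjusted R^2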
Table 2

Pearson correlations between theoretical predictors of task performance and actual Asteroid Impact video game performance. These data were collected in experiment 3.

 

                          | 1      | 2      | 3      | 4       | 5     | 6     | 7     | 8     | 9
1 Video game performance  | 1      |        |        |         |       |       |       |       |
2 Self-reported ability   | .337** | 1      |        |         |       |       |       |       |
3 Targeting               | -.06   | -.053  | 1      |         |       |       |       |       |
4 Dual-mixed accuracy     | .095   | .045   | -.042  | 1       |       |       |       |       |
5 Dual-mixed std. dev.    | -.221* | -.083  | .150   | -.609** | 1     |       |       |       |
6 SART accuracy           | .187   | .135   | -.002  | .368**  | -.094 | 1     |       |       |
7 SART std. dev.          | -.001  | -.147  | .283** | -.063   | .131  | -.131 | 1     |       |
8 Autotelic Personality   | .104   | .087   | -.037  | -.115   | .029  | .026  | .085  | 1     |
9 Mental Rotation Ability | .287** | .400** | -.051  | .179    | -.173 | .189  | -.016 | .253* | 1

*Correlation is significant at the p = 0.05 level (two-tailed).

**Correlation is significant at the p = 0.01 level (two-tailed).

For experiment 3, the items used to assess self-reported intrinsic reward showed acceptable internal consistency (Cronbach's α = 0.751) and the overall repeated measures ANCOVA models were significant for intrinsic reward (Wilks’ λ = 0.406, F(2,80) = 58.432, p < 0.001) and reaction time (Wilks’ λ = 0.310, F(2,78) = 86.698, p < 0.001). Again, intrinsic reward was the greatest and response times to a distracting secondary task were longest in the balanced-difficulty condition (Tables 3 and 4). The results from these three studies demonstrate that the experimental paradigm successfully manipulated levels of intrinsic reward and task difficulty. These results also suggest that, within the context of this experimental procedure, the STRTs may serve as a behavioral correlate of intrinsic reward.
Table 3

Means and standard errors for self-reported intrinsic reward

 

             | Low-difficulty condition mean (std. error) (a) | Balanced-difficulty condition mean (std. error) (b) | High-difficulty condition mean (std. error) (c)
Experiment 1 | 12.721 (0.487)b,c | 17.523 (0.426)a   | 16.617 (0.528)a
Experiment 2 | 15.084 (0.594)b,c | 18.821 (0.520)a   | 17.589 (0.628)a
Experiment 3 | 16.562 (0.298)b,c | 17.431 (0.333)a,c | 12.694 (0.339)a,b

For each row, superscripted text indicates statistically significant pairwise comparisons after a Bonferroni correction for multiple comparisons at the p < 0.05 level.

Note that experiments 1 and 2 used a 4-item, 7-point scale (Bowman, Weber, Tamborini, & Sherry, 2013; Weber, Behr, & Bates, 2014), whereas experiment 3 used the 4-item, 5-point autotelic experience subscale (Jackson & Marsh, 1996).

Table 4

Means and standard errors for secondary task reaction times (STRTs) to visual and auditory trials

 

                      | Low-difficulty condition mean (std. error) (a) | Balanced-difficulty condition mean (std. error) (b) | High-difficulty condition mean (std. error) (c)
Experiment 1 Visual   | 509.491 (9.399)b,c  | 594.163 (11.624)a,c | 536.250 (10.905)a,b
Experiment 2 Visual   | 542.059 (11.464)b   | 589.354 (13.357)a   | 559.434 (13.028)
Experiment 2 Auditory | 546.189 (12.941)b,c | 618.888 (15.367)a   | 609.970 (13.575)a
Experiment 3 Visual   | 394.638 (6.473)b,c  | 516.009 (11.398)a,c | 448.549 (11.480)a,b
Experiment 4 Visual   | 577.022 (16.383)b   | 702.562 (17.768)a,c | 575.727 (39.386)b

For each row, superscripted text indicates statistically significant pairwise comparisons after a Bonferroni correction for multiple comparisons at the p < 0.05 level.

Brain imaging experiment (study 4)

As a manipulation check, and reconfirming the pattern observed in behavioral experiments 1, 2, and 3, STRTs measured during the fMRI experiment were the longest in the balanced-difficulty condition (Wilks’ λ = 0.095, F(2,9) = 42.96, p < 0.001; Table 4). Therefore, and following the rationale presented in the Introduction, we infer that our experimental procedure successfully manipulated intrinsic reward in an fMRI context.

Brain mapping results

The brain mapping analysis yielded several clusters (Tables 5, 6, and 7). Consistent with previous findings (Klasen et al., 2012; Ulrich et al., 2016b, 2014; Yoshida et al., 2014), results show that the balanced-difficulty condition elicited robust neural activity in cognitive control, attentional, and reward structures. Specifically, the balanced-difficulty > low-difficulty and high-difficulty contrast (Figure 8A) revealed broad activity in structures associated with cognitive control (dorsolateral prefrontal cortex; DLPFC), orienting attention (SPL, precentral gyrus), and attentional alerting (dorsoanterior insula). Neural activity also was observed in the putamen, a structure implicated in processing consummatory rewards during cognitive control tasks (Satterthwaite et al., 2007). Group-level parameter estimates for the DLPFC and putamen showed the characteristic inverted U-shaped pattern (Figure 9). The balanced-difficulty > low-difficulty as well as the balanced-difficulty > high-difficulty contrasts also were evaluated to aid in interpretation of these results. Activations in these contrasts are quite similar to those in the balanced-difficulty > low-difficulty and high-difficulty contrast. In fact, a comparison of the balanced-difficulty > low-difficulty and high-difficulty contrast to the balanced-difficulty > low-difficulty contrast (Table 8) shows largely identical activations. However, the balanced-difficulty > high-difficulty contrast (Table 9) elicits activation in sensorimotor areas (e.g., premotor cortex, cerebellum, anterior precuneus), which are largely absent in the balanced-difficulty > low-difficulty and high-difficulty contrast.
Table 5

Neural activity in the balanced-difficulty > low-difficulty & high-difficulty contrast; cluster corrected for multiple comparisons with a cluster defining threshold of Z = 3.1 and a cluster extent threshold of p < 0.0001; coordinates are in MNI152 space

Structure                | Laterality | Cluster size | Maximum Z-score | Coordinates
Superior frontal gyrus   | Right      | 22775        | 7.13            | 24, 2, 50
Precentral gyrus         | Left       |              | 6.46            | -26, -8, 48
Central precuneus        | Right      |              | 6.33            | 6, -50, 50
Superior parietal lobule | Right      |              | 6.19            | 28, -48, 66
Superior parietal lobule | Left       |              | 6.19            | -32, -60, 64
Cerebellum               | Right      | 6785         | 5.59            | 8, -62, -56
Cerebellum               | Right      |              | 5.53            | 24, -56, -20
Cerebellum               | Right      |              | 5.37            | 30, -54, -26
Cerebellum               | Right      |              | 5.35            | 6, -70, -14
Occipital fusiform gyrus | Right      |              | 5.21            | 26, -64, -16
Cerebellum               | Left       |              | 5.21            | 0, -76, -32
Dorsoanterior insula     | Left       | 615          | 4.83            | -32, 12, 6
Putamen                  | Left       |              | 4.70            | -22, -2, 4
Putamen                  | Left       |              | 4.68            | -30, 20, 10
Putamen                  | Left       |              | 4.67            | -26, 14, 0
Posterior insula         | Left       |              | 3.8             | -42, -2, 6
Pallidum                 | Left       |              | 3.79            | -22, -6, -4

Table 6

Neural activity in the low-difficulty > balanced-difficulty contrast; cluster corrected for multiple comparisons with a cluster defining threshold of Z = 3.1 and a cluster extent threshold of p < 0.0001; coordinates are in MNI152 space.

Structure                         | Laterality | Cluster size | Maximum Z-score | Coordinates
Superior lateral occipital cortex | Left       | 1539         | 6.75            | -42, -76, 42
Superior lateral occipital cortex | Left       |              | 6.24            | -54, -72, 36
Superior lateral occipital cortex | Left       |              | 6.06            | -44, -64, 30
Superior lateral occipital cortex | Left       |              | 5.59            | -54, -66, 34
Superior lateral occipital cortex | Left       |              | 5.49            | -48, -66, 38
Ventromedial prefrontal cortex    | Left       | 1207         | 4.83            | 0, 28, -14
Paracingulate cortex              | Right      |              | 4.65            | 8, 42, -4
Anterior cingulate cortex         | Right      |              | 4.5             | 2, 36, -8
Anterior cingulate cortex         | Left       |              | 4.18            | -2, 42, 4
Paracingulate cortex              | Left       |              | 4.15            | -4, 44, -6
Ventromedial prefrontal cortex    | Right      |              | 4.12            | 10, 48, -12
Posterior cingulate gyrus         | Left       | 967          | 5.59            | -10, -44, 34
Ventral posteromedial cortex      | Left       |              | 5.07            | -2, -60, 16
Ventral posteromedial cortex      | Left       |              | 4.72            | -4, -66, 24
Ventral posteromedial cortex      | Left       |              | 4.47            | -8, -54, 10
Posterior precuneus               | Right      |              | 4.42            | 2, -70, 30
Posterior cingulate gyrus         | Left       |              | 4.41            | -8, -54, 28

Table 7

Neural activity in the high-difficulty > balanced-difficulty contrast; cluster corrected for multiple comparisons with a cluster defining threshold of Z = 3.1 and a cluster extent threshold of p < 0.0001; coordinates are in MNI152 space.

Structure                | Laterality | Cluster size | Maximum Z-score | Coordinates
Visual cortex            | Left       | 4914         | 7.01            | -12, -90, 4
Occipital pole           | Left       |              | 6.94            | -6, -94, 14
Occipital pole           | Left       |              | 6.74            | -20, -94, 24
Visual cortex            | Left       |              | 6.72            | -14, -82, -10
Occipital fusiform gyrus | Left       |              | 5.60            | -28, -76, -8
Occipital pole           | Left       |              | 5.19            | -2, -92, 30

Figure 8.

Neural activations for each experimental condition. (a) Balanced-difficulty > Low-Difficulty and High-Difficulty contrast, (b) Low-Difficulty > Balanced-Difficulty contrast, and (c) High-Difficulty > Balanced-Difficulty contrast. Red indicates lower significant Z-scores, whereas yellow indicates higher significant Z-scores. All results are cluster corrected for multiple comparisons at Z = 3.1, p < 0.0001. Figure generated using BrainNet Viewer (Xia, Wang, & He, 2013).

Figure 9.

Group-level parameter estimates for the DMPFC (34, 44, 32), VMPFC (0, 28, -14), and Putamen (-22, -2, 4). These voxels were selected based on peak activations reported in the brain activation analysis for each experimental condition.

Table 8

Neural activity in the balanced-difficulty > low-difficulty contrast; cluster corrected for multiple comparisons with a cluster defining threshold of Z = 3.1 and a cluster extent threshold of p < 0.0001; coordinates are in MNI152 space.

Structure                         | Laterality | Cluster size | Maximum Z-score | Coordinates
Cerebellum                        | Left       | 24244        | 8.1             | -14, -60, -50
Superior parietal lobule          | Right      |              | 7.15            | 14, -70, 58
Superior lateral occipital cortex | Right      |              | 6.99            | 22, -64, 52
Superior lateral occipital cortex | Left       |              | 6.83            | -16, -76, 52
Superior parietal lobule          | Left       |              | 6.82            | -10, -60, 60
Superior parietal lobule          | Left       |              | 6.76            | -20, -60, 56
Precentral gyrus                  | Left       | 9047         | 7.87            | -28, -8, 48
Superior frontal gyrus            | Right      |              | 7.67            | 24, 2, 52
Superior frontal gyrus            | Right      |              | 7.58            | 26, 2, 56
Superior frontal gyrus            | Left       |              | 6.51            | -22, 6, 54
Superior frontal gyrus            | Left       |              | 6.37            | -26, 4, 60
Paracingulate cortex              | Right      |              | 6.24            | 2, 14, 46
Middle frontal gyrus              | Left       | 852          | 5.12            | -28, 30, 28
Middle frontal gyrus              | Left       |              | 4.46            | -40, 32, 26
Inferior frontal gyrus            | Left       |              | 4.26            | -40, 26, 20
Middle frontal gyrus              | Left       |              | 4.24            | -34, 24, 24
Dorsolateral prefrontal cortex    | Left       |              | 3.94            | -36, 40, 22
Dorsolateral prefrontal cortex    | Left       |              | 3.75            | -32, 38, 40
Dorsoanterior insula              | Left       | 839          | 5.82            | -30, 22, 6
Putamen                           | Left       |              | 4.32            | -22, -2, 4
Caudate nucleus                   | Left       |              | 3.59            | -18, 20, 10
Posterior insula                  | Left       |              | 3.48            | -34, 0, 2

Table 9

Neural activity in the balanced-difficulty > high-difficulty contrast; cluster corrected for multiple comparisons with a cluster defining threshold of Z = 3.1 and a cluster extent threshold of p < 0.0001; coordinates are in MNI152 space

Structure                          | Laterality | Cluster size | Maximum Z-score | Coordinates
Premotor cortex                    | Left       | 10750        | 6.1             | 0, -2, 58
Premotor cortex                    | Left       |              | 5.56            | -24, -20, 72
Premotor cortex                    | Left       |              | 5.43            | -34, -22, 70
Intraparietal sulcus               | Left       |              | 5.42            | -30, -38, 44
Premotor cortex                    | Left       |              | 5.41            | -28, -18, 72
Cerebellum                         | Right      | 2490         | 6.31            | 24, -58, -22
Temporal occipital fusiform cortex | Right      |              | 5.25            | 24, -46, -22
Temporal occipital fusiform cortex | Right      |              | 4.9             | 42, -52, -26
Cerebellum                         | Right      |              | 4.46            | 26, -60, -50
Cerebellum                         | Right      |              | 4.18            | 4, -72, -30
Anterior precuneus                 | Right      | 583          | 4.71            | 8, -46, 58
Central precuneus                  | Right      |              | 4.55            | 8, -50, 48
Anterior precuneus                 | Right      |              | 3.87            | -2, -52, 64
Anterior precuneus                 | Left       |              | 3.81            | -12, -48, 48
Anterior precuneus                 | Left       |              | 3.66            | -8, 58, 60

Further still, it is possible that the high-difficulty condition required similar levels of prefrontal control and reward processing as the balanced-difficulty condition. The high-difficulty > low-difficulty contrast also was evaluated to tease out differences between these conditions (Table 10). While both the balanced-difficulty > low-difficulty and high-difficulty > low-difficulty contrasts show similar activation patterns in the occipital cortex and the superior and middle frontal gyri, only the balanced-difficulty > low-difficulty contrast shows activations in cognitive control, reward, and salience network structures such as the DLPFC, putamen, caudate nucleus, and dorsoanterior and posterior insula.
Table 10

Neural activity in the high-difficulty > low-difficulty contrast; cluster corrected for multiple comparisons with a cluster defining threshold of Z = 3.1 and a cluster extent threshold of p < 0.0001; coordinates are in MNI152 space

Structure                | Laterality | Cluster size | Maximum Z-score | Coordinates
Occipital pole           | Left       | 21666        | 7.72            | -24, -92, 20
Occipital pole           | Right      |              | 7.67            | -8, -94, -2
Occipital fusiform gyrus | Left       |              | 7.59            | -14, -86, -10
Occipital fusiform gyrus | Left       |              | 7.47            | -12, -86, -18
Lateral occipital cortex | Left       |              | 7.25            | -44, -78, 8
Occipital pole           | Left       |              | 6.96            | -4, -98, 18
Superior frontal gyrus   | Right      | 923          | 5.79            | 26, 4, 56
Superior frontal gyrus   | Right      |              | 5.56            | 26, 6, 62
Superior frontal gyrus   | Right      |              | 4.35            | 16, 2, 72
Premotor cortex          | Right      |              | 4.15            | 12, 10, 68
Ventroanterior insula    | Left       | 718          | 5.52            | -32, 22, -6
Precentral gyrus         | Left       | 573          | 5.05            | -34, -4, 50
Middle frontal gyrus     | Left       |              | 4.41            | -30, 0, 60
Superior frontal gyrus   | Left       |              | 3.75            | -22, 8, 56
Precentral gyrus         | Left       |              | 3.30            | -44, 0, 32

By comparison, the low-difficulty > balanced-difficulty contrast (Figure 8B) showed activity in structures commonly implicated in the DMN, particularly the dorsal and ventral medial prefrontal cortex (PFC), ventral posteromedial cortex, temporal pole, and hippocampus. Finally, the high-difficulty > balanced-difficulty contrast (Figure 8C) revealed activity in the occipital fusiform gyrus, temporal pole, orbitofrontal cortex, and inferior temporal gyrus.

PPI results

A series of PPI analyses was then conducted to characterize functional connectivity patterns between key cognitive control and reward structures in the balanced-difficulty > low- and high-difficulty contrast. Independent seed ROIs were defined a priori for anticipatory (nucleus accumbens) and consummatory (putamen) reward structures as well as key cognitive control (dorsolateral prefrontal cortex, thalamus) ROIs. An a posteriori, and therefore exploratory, seed ROI also was evaluated for the right dorsoanterior insula, a structure implicated in the brain mapping results.

In the balanced-difficulty > low- and high-difficulty contrast, the bilateral nucleus accumbens showed functional connections with the occipital pole, paracingulate cortex, central operculum, DLPFC, middle temporal gyrus, and temporal-occipital fusiform cortex (Table 11; Figure 10a), whereas the bilateral DLPFC seed exhibited connectivity with the orbitofrontal cortex (OFC), frontopolar cortex, superior temporal gyrus (STG), central precuneus, and occipital fusiform gyrus, with several clusters extending into the anterior cingulate (ACC) and paracingulate (PCC) cortices (Table 12; Figure 10b). Significant results were not observed when seeding from the putamen or thalamus.
Table 11

Psychophysiological interaction results when seeding from the bilateral (right: 10, 16, -6; left: -10, 16, -6) nucleus accumbens in the balanced-difficulty > low-difficulty and high-difficulty contrast; cluster corrected for multiple comparisons with a cluster defining threshold of Z = 2.3 and a cluster extent threshold of p < 0.05; coordinates are in MNI152 space.

Structure                          | Laterality | Cluster size | Maximum Z-score | Coordinates
Occipital pole                     | Left       | 1442         | 6.18            | -34, -96, 4
Superior lateral occipital cortex  | Left       |              | 3.96            | -22, -74, 48
Paracingulate cortex               | Right      | 841          | 4.32            | 4, 22, 44
Middle frontal gyrus               | Left       |              | 3.73            | -34, 34, 34
Superior frontal gyrus             | Left       |              | 3.67            | -18, 26, 42
Paracingulate cortex               | Right      |              | 3.60            | 10, 36, 36
Central operculum                  | Right      | 578          | 4.64            | 44, -12, 22
Precentral gyrus                   | Right      |              | 3.42            | 34, 0, 36
Middle frontal gyrus               | Right      |              | 3.19            | 44, 14, 32
Dorsolateral prefrontal cortex     | Left       | 541          | 3.88            | -30, 60, 8
Caudate nucleus                    | Left       |              | 3.74            | -8, 12, 12
Middle temporal gyrus              | Right      | 398          | 4.23            | 52, -50, 6
Superior temporal gyrus            | Right      |              | 3.40            | 58, -12, -8
Temporal occipital fusiform cortex | Left       | 378          | 3.70            | -30, -52, -20
Lingual gyrus                      | Left       |              | 3.42            | -20, -44, -14
Hippocampus                        | Left       |              | 2.93            | -32, -34, -14

Figure 10.

Psychophysiological interaction analyses when seeding from the (a) bilateral (right: 10, 16, −6; left: −10, 16, −6) nucleus accumbens, (b) bilateral (right: 32, 54, 10; left: −32, 54, 10) dorsolateral prefrontal cortex, and (c) right (40, 16, -6) dorsoanterior insula. This figure shows the balanced-difficulty > low-difficulty and high-difficulty contrast. Red indicates lower significant Z-scores, while yellow indicates higher significant Z-scores. All results are cluster corrected for multiple comparisons at Z = 2.3, p < 0.05. Figure generated using BrainNet Viewer (Xia et al., 2013).

Table 12

Psychophysiological interaction results when seeding from the bilateral (right: 32, 54, 10; left: -32, 54, 10) dorsolateral prefrontal cortex in the balanced-difficulty > low-difficulty and high-difficulty contrast; cluster corrected for multiple comparisons with a cluster-defining threshold of Z = 2.3 and a cluster extent threshold of p < 0.05; coordinates are in MNI152 space.

| Structure | Laterality | Cluster size | Maximum Z-score | Coordinates |
| --- | --- | --- | --- | --- |
| Orbitofrontal cortex | Left | 6615 | 5.12 | -36, 32, -8 |
| Superior temporal gyrus | Left |  | 4.97 | -58, -10, -8 |
| Middle frontal gyrus | Left |  | 4.73 | -46, 22, 26 |
| Frontopolar cortex | Left | 6110 | 5.03 | -8, 62, 28 |
| Subcallosal cortex | Right |  | 4.78 | 2, 24, -12 |
| Superior frontal gyrus | Right |  | 4.48 | 10, 24, 60 |
| Frontopolar cortex | Right |  | 4.34 | 8, 52, 42 |
| Superior temporal gyrus | Right | 1887 | 5.10 | 54, -26, 0 |
| Posterior insula | Right |  | 4.08 | 36, -16, 8 |
| Secondary somatosensory cortex | Right |  | 3.84 | 44, -14, 22 |
| Broca’s area | Right | 1271 | 4.16 | 58, 26, 22 |
| Orbitofrontal cortex | Left |  | 3.91 | 24, 34, -10 |
| Temporal pole | Right |  | 3.57 | 48, 24, -18 |
| Central precuneus | Left | 754 | 4.12 | -10, -48, 36 |
| Ventral posteromedial cortex | Left |  | 3.83 | -4, -56, 14 |
| Visual cortex | Right |  | 3.28 | 4, -66, 8 |
| Anterior precuneus | Left |  | 3.13 | -2, -48, 60 |
| Occipital fusiform gyrus | Left | 690 | 4.58 | -16, -86, -18 |
| Occipital pole | Left |  | 3.24 | -12, -98, -4 |

For the exploratory analysis, a seed ROI in the right dorsoanterior insula showed connectivity with somatosensory cortices, medial PFC, and temporal and occipital cortices (Table 13; Figure 10c).
Table 13

Psychophysiological interaction results when seeding from the right (40, 16, -6) dorsoanterior insula in the balanced-difficulty > low-difficulty and high-difficulty contrast; cluster corrected for multiple comparisons with a cluster-defining threshold of Z = 2.3 and a cluster extent threshold of p < 0.05; coordinates are in MNI152 space.

| Structure | Laterality | Cluster size | Maximum Z-score | Coordinates |
| --- | --- | --- | --- | --- |
| Primary somatosensory cortex | Right | 15664 | 5.28 | 44, -22, 64 |
| Primary motor cortex | Right |  | 5.18 | 12, -30, 74 |
| Inferior frontal gyrus | Right |  | 5.00 | 56, 18, 26 |
| Secondary somatosensory cortex | Right |  | 4.93 | 44, -10, 20 |
| Hippocampus | Left |  | 4.89 | -24, -30, -10 |
| Dorsomedial prefrontal cortex | Right | 4415 | 4.81 | 4, 62, 14 |
| Superior frontal gyrus | Left |  | 3.95 | 4, 28, 50 |
| Ventromedial prefrontal cortex | Left |  | 3.80 | -8, 46, -16 |
| Superior lateral occipital cortex | Left | 1096 | 3.98 | -52, -72, 28 |
| Angular gyrus | Left |  | 3.93 | -52, -60, 28 |
| Middle temporal gyrus | Left | 724 | 4.12 | -58, -52, 0 |
| Superior temporal gyrus | Left |  | 3.45 | -50, -16, -10 |
| Superior lateral occipital cortex | Right | 644 | 3.65 | 50, -62, 42 |
| Inferior parietal lobule | Right |  | 3.45 | 58, -58, 36 |
| Inferior frontal gyrus | Left | 544 | 3.53 | -46, 32, -4 |
| Orbitofrontal cortex | Left |  | 3.35 | -24, 34, -14 |
| Frontopolar cortex | Left |  | 2.97 | -42, 40, -2 |
| Subcallosal cortex | Right | 546 | 4.28 | 2, 30, -18 |
| Caudate nucleus | Left |  | 2.71 | -10, 14, 6 |
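For readers who want to reproduce this style of reporting, the sketch below applies a cluster-forming threshold of Z = 2.3 to a hypothetical group-level Z-map with nilearn and lists the surviving peaks. The fixed 100-voxel extent is an illustrative stand-in for the p < .05 cluster-extent correction described in the table captions, not the extent correction applied in the analyses reported above, and the input file name is assumed.

```python
# Simplified cluster-extent thresholding of a hypothetical group-level Z-statistic image.
from nilearn.glm import threshold_stats_img
from nilearn.reporting import get_clusters_table

thresholded_map, _ = threshold_stats_img(
    "group_zmap.nii.gz",
    height_control=None,    # apply the raw cluster-forming cutoff given below
    threshold=2.3,          # cluster-forming threshold matching the table captions
    cluster_threshold=100,  # minimum cluster size in voxels (illustrative stand-in)
)

# Peak coordinates, peak Z-values, and cluster sizes, analogous to Tables 11-13
print(get_clusters_table("group_zmap.nii.gz", stat_threshold=2.3, cluster_threshold=100))
```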

Discussion

Our self-report, behavioral, and fMRI hypotheses were largely supported. These results contribute to the nascent body of literature investigating the contributions of cognitive control and motivation to sustained control allocation during cognitively demanding tasks. In our study, we experimentally manipulated the balance between task difficulty and individual ability, which resulted in different levels of intrinsic reward. Consistent with previous research (Keller & Bless, 2008; Ulrich et al., 2016b, 2014; Yoshida et al., 2014), a balance between task difficulty and individual ability resulted in the highest levels of self-reported intrinsic reward. Moreover, high levels of intrinsic reward corresponded to increased task-related attentional engagement, as demonstrated by longer secondary task reaction times in the balanced-difficulty condition compared with the low- and high-difficulty conditions. This result also is reflected in the neuroimaging data: differential levels of motivation were associated with different brain states. We now turn our focus to these key findings and their broader implications.

Reward-processing and cognitive control

The behavioral and self-report measures indicate a successful experimental manipulation. Our fMRI results suggest intriguing updates to the nascent literature on cognitive control and motivation. First, our brain mapping results conform to previous findings implicating intrinsic reward processing during cognitive control tasks. Our novel contribution is in elucidating the functional connections between these structures. Of particular interest is the relationship between anticipatory and consummatory rewards during cognitive control. Our GLM-based results showed that the balanced-difficulty condition, relative to the low- and high-difficulty conditions, elicited activity in the putamen. This fits nicely with the notion that this structure is implicated in consummatory reward processing (O’Doherty et al., 2004; Satterthwaite et al., 2007) and that a balance between task difficulty and individual ability elicits strong activity in this structure (Ulrich et al., 2016b, 2014). However, a balance between difficulty and ability also has been shown to elicit activity in the ventral striatum, particularly the nucleus accumbens (Klasen et al., 2012). How do we account for these seemingly contradictory findings? One possible answer is found in our PPI results when seeding from the ventral striatum. We show that the nucleus accumbens is more strongly functionally connected with the DLPFC when task difficulty is balanced with individual ability than when there is a mismatch between difficulty and ability. This result is consistent with the view that these two structures are implicated in reward anticipation and cognitive cost calculation (Botvinick, Huffstetler, & McGuire, 2009; Kool, McGuire, Wang, & Botvinick, 2013).

With that said, we did not design our study to directly manipulate reward expectation, so it is difficult to tell whether our results support the view that reward anticipation and consumption are dissociated between the dorsal and ventral striatum (O’Doherty et al., 2004) or, as some have suggested, whether these structures subserve a common function related either to evaluating the cognitive costs associated with earning a particular reward (Vassena et al., 2014) or to consummatory reward processing (Pauli et al., 2016). It is entirely possible that there is no single neural correlate of intrinsic reward. Indeed, one current perspective argues that intrinsic and extrinsic rewards may not be dissociable at the neuroanatomical level but rather at the temporal level, in that extrinsic rewards are temporally immediate and tangible whereas intrinsic rewards are less tangible and more temporally dispersed (Braver et al., 2014). Our current study provides preliminary support for this view.

Admittedly, the naturalistic paradigm used in this study sacrifices some experimental control, and this poses some interpretation difficulties. Although the putamen often is associated with reward processing, it also is implicated in task learning. Specifically, the putamen shows strong activation for novel tasks, but this activation decreases for learned tasks (Jimura, Cazalis, Stover, & Poldrack, 2014a). Our decision to make two conditions consistent in terms of video game state (i.e., repeated play of the easiest or hardest conditions) may have allowed subjects to "learn" the low- and high-difficulty conditions, whereas the balanced-difficulty condition may be understood as a series of unlearned tasks. Putamen activation also has been shown to increase during a response-inhibition task among subjects with high behavioral performance and decrease among subjects with low behavioral performance (Jimura et al., 2014b). Liberally interpreted, this suggests that putamen activation should increase in response to high behavioral performance. In our study, the low-difficulty condition yielded fast reaction times (high behavioral performance) and was easy enough that subjects showed high levels of video game performance. Inconsistent with this liberal interpretation, we observed the highest levels of putamen activation in the balanced-difficulty > low-difficulty contrast (but not in the balanced-difficulty > high-difficulty contrast). A more stringent test would compare conditions that are similarly novel and do not afford task learning. This presents an interesting opportunity for future research.

Similarly, the nucleus accumbens demonstrates sensitivity not only to extrinsic (e.g., monetary) reward anticipation but also to positive performance feedback (Daniel & Pollmann, 2010). We admit that experimentally accounting for this confound is not trivial. In our study, the balanced-difficulty condition provided positive performance feedback by increasing level difficulty (which remained invariant in the low- and high-difficulty conditions). However, positive performance feedback also was received during the low-difficulty condition when subjects successfully completed a level, as they received a message indicating that they had beaten the level (the same message that subjects received in the balanced- and high-difficulty conditions). Accordingly, nucleus accumbens activation driven solely by level-completion feedback would be lost in the balanced-difficulty > low-difficulty contrast. It follows, then, that the remaining nucleus accumbens activation should track increases in difficulty, more closely aligning with the view presented above that this structure, in conjunction with the DLPFC, tracks reward anticipation and cognitive cost calculation. Nevertheless, this remains an important and unresolved issue for flow research, as immediate and clear performance feedback is understood as a causal antecedent of flow (Csikszentmihalyi, 1990). Therefore, any manipulation of the balance between task difficulty and individual ability is inherently confounded with different patterns of performance feedback.

Ultimately, the methodological limitations arising from the difficulty of manipulating intrinsic reward in a lab-setting constrain our interpretation of the results while suggesting new avenues for future research. Even with these considerations in mind, our results show that a balance between task difficulty and individual ability modulates reward-related subcortical processing and that these structures are functionally connected with frontocontrol structures during a cognitive control task. This finding provides novel evidence that intrinsic reward is associated with the allocation of cognitive control during sustained task performance.

Low levels of intrinsic reward and contributions to DMN activity

In the present study, we show different brain activity and functional connectivity patterns in the balanced-difficulty condition compared to the low-difficulty and high-difficulty conditions. While the balanced-difficulty condition elicited activity in structures commonly implicated in cognitive control and reward processing, the low-difficulty condition showed activations in the DMN. Such a finding is consistent with previous results showing that the DMN is down-regulated when there is a balance between task difficulty and individual ability (Ulrich et al., 2016a). Further evidence shows that failures to suppress the DMN are associated with lapses in attention (Weissman, Roberts, Visscher, & Woldorff, 2006) and decreased performance during cognitive control tasks (Kelly, Uddin, Biswal, Castellanos, & Milham, 2008).

Interestingly, we also see that STRTs were generally faster during the low-difficulty condition. This result, in conjunction with the observed activations in key DMN structures, provides additional evidence that the low-difficulty condition required low levels of cognitive control. Moreover, it contextualizes the extent to which low-difficulty tasks can be performed automatically, or at least with very low levels of cognitive control (Vatansever, Menon, & Stamatakis, 2017). This, combined with previous evidence that boring video game play (Mathiak et al., 2013) and a mismatch between difficulty and ability (Ulrich et al., 2016a, 2016b, 2014) are associated with DMN activity, provides converging evidence that different levels of intrinsic reward may be driving the shift between DMN activation during the low-difficulty condition and cognitive control network activation during the balanced-difficulty condition.

Less clear is why similar DMN activation patterns were not observed in the high-difficulty condition. One possible explanation might be found in the STRT patterns observed during this condition. There is some evidence that attention to a secondary task does not necessarily increase when the primary task is difficult, or even in response to increases in extrinsic rewards (Buetti & Lleras, 2016). Our high-difficulty condition had the second-longest STRTs across all three of our behavioral studies. This suggests that subjects may have allocated more cognitive resources to the video game stimulus during this condition, even though the condition was rated as comparatively low in intrinsic reward. Further experimentation is needed to determine whether, and at what level, a mismatch between task difficulty and individual ability produces the kind of task disengagement that corresponds to DMN activation.

One intriguing possibility suggested by our exploratory PPI analyses is that the dorsoanterior insula may be involved in shifts between DMN and cognitive control networks. Foundational empirical investigations provide a network-level model for these switches (Sridharan, Levitin, & Menon, 2008), which is further supported by meta-analytic results from Neurosynth (Yarkoni et al., 2011) implicating the insula (and the broader salience network) in shifts between cognitively demanding tasks and task disengagement (Chang et al., 2013). The consistency between the structures identified in our study and those identified in the reward-motivated cognitive control literature hints at a network-level architecture. Follow-up work using Asteroid Impact or similar naturalistic tasks should adopt the latest methodological advances in network neuroscience (Bassett & Sporns, 2017) to interrogate the way in which shifts in motivation drive dynamic shifts between the frontoparietal control network and the DMN, as facilitated by the insula.

Motivation drives task-related attentional engagement

One critique of the emerging cognitive control and motivation literature is that the highly controlled experimental tasks typically employed rely on extrinsic rather than intrinsic rewards (Braver et al., 2014). In this study, we sacrificed some experimental control in favor of developing a task that allowed for modulating intrinsic rewards. As a failsafe, we used STRTs as a behavioral measure of the extent to which variation in intrinsic reward entrained attentional engagement with the task. The rationale for this measure capitalizes on the insight that motivation has a curvilinear influence on task-related attentional engagement (Lang, 2000). This result is borne out in our STRT data and is consistent with previous findings (Lang et al., 2006). That our STRT data show the same inverted U-shaped pattern as our self-reported intrinsic reward measure suggests that STRTs may serve as a behavioral correlate of intrinsic reward, particularly during motivationally relevant tasks. With that said, two important constraints are worth noting. First, the absolute mean STRT differences between conditions are quite small, thereby obscuring inferences about the magnitude of intrinsic rewards. Second, STRTs are only a useful index of intrinsic reward when there is a firm understanding of how the stimulus balances task difficulty and individual ability. Nevertheless, our behavioral and neuroimaging results demonstrate that intrinsic reward motivates different levels of task engagement.
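As a concrete illustration of this inverted-U logic, the sketch below tests a within-subject quadratic contrast on condition-wise mean STRTs. The numbers are hypothetical placeholders rather than the study's data; the contrast simply asks whether STRTs in the balanced-difficulty condition exceed the average of the low- and high-difficulty conditions.

```python
# Hypothetical per-subject mean STRTs (ms); values are placeholders, not study data.
import numpy as np
from scipy import stats

low      = np.array([412, 398, 430, 405, 420])   # low-difficulty condition
balanced = np.array([455, 440, 468, 449, 461])   # balanced-difficulty condition
high     = np.array([430, 425, 447, 433, 442])   # high-difficulty condition

# Within-subject quadratic contrast: positive values indicate an inverted-U pattern
# (balanced > mean of low and high), the signature of greater attentional engagement.
quad = balanced - (low + high) / 2.0
t, p = stats.ttest_1samp(quad, popmean=0.0)
print(f"quadratic contrast: M = {quad.mean():.1f} ms, t({quad.size - 1}) = {t:.2f}, p = {p:.4f}")
```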

The synchronization theory of flow: alternative theoretical explanations and future opportunities

The results reported in this manuscript are situated within the context of reward-motivated cognitive control (Botvinick & Braver, 2014; Braver et al., 2014). Specifically, we used flow theory (Csikszentmihalyi, 1975) as a guide for manipulating intrinsic reward and the synchronization theory of flow (Weber et al., 2009) as a guide for making informed predictions about the neural basis of flow experiences. Accordingly, and consistent with the latest developments in flow theory (Harris et al., 2017b; Weber et al., 2016), we interpret our findings in terms of intrinsic-reward motivated cognitive control. Our results seem to fit nicely with both theory and previously published empirical results.

Some readers might question our decision to frame these issues in terms of cognitive control. From its earliest conceptualization, cognitive control research has focused on the processes that enable goal-directed behavior (Miller, 2000; Miller & Cohen, 2001), which modern evidence shows is motivated by reward (Botvinick & Braver, 2014; Braver et al., 2014). Such a high-level process necessarily requires multiple lower-level processes, including attention, working memory, reward processing, and sensorimotor coordination. We consider attention (using STRTs) and reward processing (using self-report) in the present study, but most certainly do not account for these other processes. One might reasonably ask, if attention is a component of the process of interest, why not frame this manuscript in classic attentional terms (Fan, McCandliss, Fossella, Flombaum, & Posner, 2005; Posner, Inhoff, Friedrich, & Cohen, 1987; Raz & Buhle, 2006)?

Interestingly, the original formulation of synchronization theory did exactly that (see p. 406 in Weber et al., 2009), framing flow from Posner’s tripartite theory of attention (Posner et al., 1987). While synchronization theory originally acknowledged executive attention as a potential component of flow, the theory primarily considered the phenomenon in terms of better specified processes (Raz & Buhle, 2006), such as alerting and orienting attention. The theory was later reformulated in terms of cognitive control to better specify the goal-directed nature of flow experiences (Weber et al., 2016). While cognitive control and executive attention both explain a considerable number of empirical findings and are often used to interpret similar processes (Long & Kuhl, 2018), there are important distinctions between the two models (Petersen & Posner, 2012). Once a sufficient body of evidence has accumulated in this area, it will be important to examine which model best accounts for the data.

Until then, and as we have taken pains to point out above, the potential for alternate explanations exists. The conditions in the present study do not systematically vary or otherwise control for a number of potential confounds including different event rates, different levels of feedback, different levels of visual complexity, etc. These differences allowed us to manipulate the balance between individual ability and task difficulty, which is central to flow theory. Along the way, we have endeavored to account for alternate explanations introduced by these confounds. Despite these limitations, we see results that are consistent with previous studies, which give us confidence in the findings.

Future research should, to the extent that is possible, seek to resolve these issues. We admit, as others have before us (Bohil, Alicea, & Biocca, 2011; Maguire, 2012; K. Mathiak & Weber, 2006; Spiers & Maguire, 2007), that designing naturalistic interventions with suitable levels of experimental control is a nontrivial task. However, and as has been forcefully argued by Marr (1982) and his contemporaries (Krakauer, Ghazanfar, Gomez-Marin, Maciver, & Poeppel, 2017), a focus on naturalistic behavior is essential if we are to advance our understanding of the mind/brain. To that end, we are pleased to offer an open-source stimulus, Asteroid Impact, so that interested researchers can adapt, replicate, and extend the paradigm in their own laboratories (Poldrack et al., 2017).

Conclusions

In their earliest writings, Miller and Cohen (Miller, 2000; Miller & Cohen, 2001) indicated that motivation may play a role in cognitive control. In the decades that have followed, most of the research in this area has treated the two as separable processes by choosing to focus on cognition rather than motivation. However, an emerging perspective argues that higher order cognitions and their resulting behaviors are not easily reducible to their lower-level constituent parts, especially when considering the relationship between cognition and motivation (Pessoa, 2008). Our results fit within this framework by showing how task-elicited differences in motivation are associated with shifts in task-related reward perceptions, attentional allocation, and control allocation.

Notes

Acknowledgements

The authors thank Dr. Daniel Linz for his valuable comments on the manuscript. This work was supported by the University of California Santa Barbara George D. McCune Dissertation Fellowship (given to R.H.), the University of California Santa Barbara Brain Imaging Center, the University of California Santa Barbara Academic Senate (grant AS-8-588817-19941-7 given to R.W.), and the University of California Santa Barbara Institute for Social, Behavioral, and Economic Research (grant ISBG-SS17WR-8-447631-19941 given to R.W.).

Author contributions

R.H., M.B.M., and R.W. designed research; R.H. and B.C. performed research; R.H. and R.W. analyzed the data; and R.H. and R.W. wrote the paper.

Compliance with ethical standards

Competing financial interests

The authors declare no competing financial interests.

References

1. Andersson, J. L. R., Jenkinson, M., & Smith, S. M. (2007a). Non-linear optimisation (FMRIB technical report TR07JA1). Oxford, United Kingdom.
2. Andersson, J. L. R., Jenkinson, M., & Smith, S. M. (2007b). Non-linear registration, aka spatial normalisation (FMRIB technical report TR07JA2). Oxford, United Kingdom.
3. Bassett, D. S., & Sporns, O. (2017). Network neuroscience. Nature Neuroscience, 20(3), 353–364. doi: https://doi.org/10.1038/nn.4502
4. Beckmann, C. F., & Smith, S. M. (2004). Probabilistic independent component analysis for functional magnetic resonance imaging. IEEE Transactions on Medical Imaging, 23(2), 137–152. doi: https://doi.org/10.1109/TMI.2003.822821
5. Berkman, E. T., Falk, E. B., & Lieberman, M. D. (2012). Interactive effects of three core goal pursuit processes on brain control systems: Goal maintenance, performance monitoring, and response inhibition. PLoS ONE, 7(6), e40334. doi: https://doi.org/10.1371/journal.pone.0040334
6. Bohil, C. J., Alicea, B., & Biocca, F. A. (2011). Virtual reality in neuroscience research and therapy. Nature Reviews Neuroscience, 12(12), 752–762. doi: https://doi.org/10.1038/nrn3122
7. Botvinick, M. M., & Braver, T. S. (2014). Motivation and cognitive control: From behavior to neural mechanism. Annual Review of Psychology, 66, 82–113. doi: https://doi.org/10.1146/annurev-psych-010814-015044
8. Botvinick, M. M., Huffstetler, S., & McGuire, J. T. (2009). Effort discounting in human nucleus accumbens. Cognitive, Affective, & Behavioral Neuroscience, 9(1), 16–27. doi: https://doi.org/10.3758/CABN.9.1.16
9. Bowman, N. D., Weber, R., Tamborini, R., & Sherry, J. (2013). Facilitating game play: How others affect performance at and enjoyment of video games. Media Psychology, 16(1), 39–64. doi: https://doi.org/10.1080/15213269.2012.742360
10. Braver, T. S., Krug, M. K., Chiew, K. S., Kool, W., Westbrook, J. A., Clement, N. J., … Somerville, L. H. (2014). Mechanisms of motivation-cognition interaction: Challenges and opportunities. Cognitive, Affective, & Behavioral Neuroscience, 14(2), 443–472. doi: https://doi.org/10.3758/s13415-014-0300-0
11. Buetti, S., & Lleras, A. (2016). Distractibility is a function of engagement, not task difficulty: Evidence from a new oculomotor capture paradigm. Journal of Experimental Psychology: General. doi: https://doi.org/10.1037/xge0000213
12. Caceres, A., Hall, D. L., Zelaya, F. O., Williams, S. C. R., & Mehta, M. A. (2009). Measuring fMRI reliability with the intra-class correlation coefficient. NeuroImage, 45(3), 758–768. doi: https://doi.org/10.1016/j.neuroimage.2008.12.035
13. Chang, L. J., Yarkoni, T., Khaw, M. W., & Sanfey, A. G. (2013). Decoding the role of the insula in human cognition: Functional parcellation and large-scale reverse inference. Cerebral Cortex, 23(3), 739–749. doi: https://doi.org/10.1093/cercor/bhs065
14. Csikszentmihalyi, M. (1975). Beyond boredom and anxiety: The experience of play in work and games. San Francisco, CA: Jossey-Bass, Inc.
15. Csikszentmihalyi, M. (1990). Flow: The psychology of optimal experience. New York, NY: HarperCollins Publishers.
16. Daniel, R., & Pollmann, S. (2010). Comparing the neural basis of monetary reward and cognitive feedback during information-integration category learning. Journal of Neuroscience, 30(1), 47–55. doi: https://doi.org/10.1523/JNEUROSCI.2205-09.2010
17. Deci, E., & Ryan, R. M. (1985). Intrinsic motivation and self-determination in human behavior. New York, NY: Plenum Press.
18. Delgado, M. R., Nystrom, L. E., Fissell, C., Noll, D. C., & Fiez, J. A. (2000). Tracking the hemodynamic responses to reward and punishment in the striatum. Journal of Neurophysiology, 84(6), 3072–3077.
19. Desmond, J. E., & Glover, G. H. (2002). Estimating sample size in functional MRI (fMRI) neuroimaging studies: Statistical power analyses. Journal of Neuroscience Methods, 118(2), 115–128. doi: https://doi.org/10.1016/S0165-0270(02)00121-8
20. Drobyshevsky, A., Baumann, S. B., & Schneider, W. (2006). A rapid fMRI task battery for mapping of visual, motor, cognitive, and emotional function. NeuroImage, 31(2), 732–744. doi: https://doi.org/10.1016/j.neuroimage.2005.12.016
21. Eklund, A., Nichols, T. E., & Knutsson, H. (2016). Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates. Proceedings of the National Academy of Sciences, 1–6. doi: https://doi.org/10.1073/pnas.1602413113
22. Engelmann, J. B., Damaraju, E., Padmala, S., & Pessoa, L. (2009). Combined effects of attention and motivation on visual task performance: Transient and sustained motivational effects. Frontiers in Human Neuroscience, 3(4), 1–17. doi: https://doi.org/10.3389/neuro.09.004.2009
23. Erickson, K. I., Colcombe, S. J., Wadhwa, R., Bherer, L., Peterson, M. S., Scalf, P. E., … Kramer, A. F. (2007). Training-induced plasticity in older adults: Effects of training on hemispheric asymmetry. Neurobiology of Aging, 28(2), 272–283. doi: https://doi.org/10.1016/j.neurobiolaging.2005.12.012
24. Esposito, F., Otto, T., Zijlstra, F. R. H., & Goebel, R. (2014). Spatially distributed effects of mental exhaustion on resting-state FMRI networks. PLoS ONE, 9(4), 1–13. doi: https://doi.org/10.1371/journal.pone.0094222
25. Fan, J., McCandliss, B. D., Fossella, J., Flombaum, J. I., & Posner, M. I. (2005). The activation of attentional networks. NeuroImage, 26(2), 471–479. doi: https://doi.org/10.1016/j.neuroimage.2005.02.004
26. Friston, K. J. (2012). Ten ironic rules for non-statistical reviewers. NeuroImage, 61(4), 1300–1310. doi: https://doi.org/10.1016/j.neuroimage.2012.04.018
27. Friston, K. J., Buechel, C., Fink, G. R., Morris, J., Rolls, E., & Dolan, R. J. (1997). Psychophysiological and modulatory interactions in neuroimaging. NeuroImage, 6(3), 218–229. doi: https://doi.org/10.1006/nimg.1997.0291
28. Harris, D. J., Vine, S. J., & Wilson, M. R. (2017a). Is flow really effortless? The complex role of effortful attention. Sport, Exercise, and Performance Psychology, 6(1), 103–114. doi: https://doi.org/10.1037/spy0000083
29. Harris, D. J., Vine, S. J., & Wilson, M. R. (2017b). Neurocognitive mechanisms of the flow state. Progress in Brain Research, 1–23. doi: https://doi.org/10.1016/bs.pbr.2017.06.012
30. Huskey, R. (2016). Beyond blobology: Using psychophysiological interaction analyses to investigate the neural basis of human communication phenomena. Innovative Methods in Media and Communication Research. doi: https://doi.org/10.1007/978-3-319-40700-5_7
31. Inzlicht, M., Shenhav, A., & Olivola, C. Y. (2018). The effort paradox: Effort is both costly and valued. Trends in Cognitive Sciences, 1–13. doi: https://doi.org/10.1016/j.tics.2018.01.007
32. Jackson, S. A., & Eklund, R. C. (2004). The flow scales manual. Morgantown, WV: Fitness Information Technology, Inc.
33. Jackson, S. A., & Marsh, H. W. (1996). Development and validation of a scale to measure optimal experience: The Flow State Scale. Journal of Sport & Exercise Psychology, 18, 17–35. doi: https://doi.org/10.1080/15298860309027
34. Jenkinson, M., Bannister, P., Brady, M., & Smith, S. M. (2002). Improved optimization for the robust and accurate linear registration and motion correction of brain images. NeuroImage, 17(2), 825–841. doi: https://doi.org/10.1006/nimg.2002.1132
35. Jenkinson, M., & Smith, S. M. (2001). A global optimisation method for robust affine registration of brain images. Medical Image Analysis, 5(2), 143–156. doi: https://doi.org/10.1016/S1361-8415(01)00036-6
36. Jimura, K., Cazalis, F., Stover, E. R. S., & Poldrack, R. A. (2014a). The neural basis of task switching changes with skill acquisition. Frontiers in Human Neuroscience, 8(339). doi: https://doi.org/10.3389/fnhum.2014.00339
37. Jimura, K., Hirose, S., Kunimatsu, A., Ohtomo, K., Koike, Y., & Konishi, S. (2014b). Late enhancement of brain-behavior correlations during response inhibition. Neuroscience, 274, 383–392. doi: https://doi.org/10.1016/j.neuroscience.2014.05.058
38. Kang, M. J., Hsu, M., Krajbich, I. M., Loewenstein, G., McClure, S. M., Wang, J. T., & Camerer, C. F. (2009). The wick in the candle of learning. Psychological Science, 20(8), 963–974. doi: https://doi.org/10.1111/j.1467-9280.2009.02402.x
39. Keller, J., & Bless, H. (2008). Flow and regulatory compatibility: An experimental approach to the flow model of intrinsic motivation. Personality and Social Psychology Bulletin, 34(2), 196–209. doi: https://doi.org/10.1177/0146167207310026
40. Kelly, A. M. C., Uddin, L. Q., Biswal, B. B., Castellanos, F. X., & Milham, M. P. (2008). Competition between functional brain networks mediates behavioral variability. NeuroImage, 39(1), 527–537. doi: https://doi.org/10.1016/j.neuroimage.2007.08.008
41. Klasen, M., Weber, R., Kircher, T. T. J., Mathiak, K. A., & Mathiak, K. (2012). Neural contributions to flow experience during video game playing. Social Cognitive and Affective Neuroscience, 7(4), 485–495. doi: https://doi.org/10.1093/scan/nsr021
42. Kool, W., & Botvinick, M. M. (2014). A labor/leisure tradeoff in cognitive control. Journal of Experimental Psychology, 143(1), 131–141. doi: https://doi.org/10.1037/a0031048
43. Kool, W., McGuire, J. T., Wang, G. J., & Botvinick, M. M. (2013). Neural and behavioral evidence for an intrinsic cost of self-control. PLoS ONE, 8(8), e72626. doi: https://doi.org/10.1371/journal.pone.0072626
44. Koster, R. (2005). A theory of fun for game design. Scottsdale, AZ: Paraglyph Press, Inc.
45. Krakauer, J. W., Ghazanfar, A. A., Gomez-Marin, A., Maciver, M. A., & Poeppel, D. (2017). Neuroscience needs behavior: Correcting a reductionist bias. Neuron, 93(3), 480–490. doi: https://doi.org/10.1016/j.neuron.2016.12.041
46. Lang, A. (2000). The limited capacity model of mediated message processing. Journal of Communication, 50(1), 46–70. doi: https://doi.org/10.1111/j.1460-2466.2000.tb02833.x
47. Lang, A., Bradley, S. D., Park, B., Shin, M., & Chung, Y. (2006). Parsing the resource pie: Using STRTs to measure attention to mediated messages. Media Psychology, 8(4), 369–394. doi: https://doi.org/10.1207/s1532785xmep0804_3
48. Leotti, L. A., & Delgado, M. R. (2011). The inherent reward of choice. Psychological Science, 22(10), 1310–1318. doi: https://doi.org/10.1177/0956797611417005
49. Locke, H. S., & Braver, T. S. (2008). Motivational influences on cognitive control: Behavior, brain activation, and individual differences. Cognitive, Affective, & Behavioral Neuroscience, 8(1), 99–112. doi: https://doi.org/10.3758/CABN.8.1.99
50. Long, N. M., & Kuhl, B. A. (2018). Bottom-up and top-down factors differentially influence stimulus representations across large-scale attentional networks. The Journal of Neuroscience, 38(10), 2495–2504. doi: https://doi.org/10.1523/JNEUROSCI.2724-17.2018
51. Maguire, E. A. (2012). Studying the freely-behaving brain with fMRI. NeuroImage, 62(2), 1170–1176. doi: https://doi.org/10.1016/j.neuroimage.2012.01.009
52. Marr, D. (1982). Vision: A computational investigation into the human representation and processing of visual information. San Francisco, CA: W.H. Freeman.
53. Mathiak, K. A., Klasen, M., Zvyagintsev, M., Weber, R., & Mathiak, K. (2013). Neural networks underlying affective states in a multimodal virtual environment: Contributions to boredom. Frontiers in Human Neuroscience, 7(820). doi: https://doi.org/10.3389/fnhum.2013.00820
54. Mathiak, K., & Weber, R. (2006). Toward brain correlates of natural behavior: fMRI during violent video games. Human Brain Mapping, 27(12), 948–956. doi: https://doi.org/10.1002/hbm.20234
55. May, J. C., Delgado, M. R., Dahl, R. E., Stenger, V. A., Ryan, N. D., Fiez, J. A., & Carter, C. S. (2004). Event-related functional magnetic resonance imaging of reward-related brain circuitry in children and adolescents. Biological Psychiatry, 55(4), 359–366. doi: https://doi.org/10.1016/j.biopsych.2003.11.008
56. Meyniel, F., Sergent, C., Rigoux, L., Daunizeau, J., & Pessiglione, M. (2013). Neurocomputational account of how the human brain decides when to have a break. Proceedings of the National Academy of Sciences, 110(7), 2641–2646. doi: https://doi.org/10.1073/pnas.1211925110
57. Miller, E. K. (2000). The prefrontal cortex and cognitive control. Nature Reviews Neuroscience, 1(1), 59–65. doi: https://doi.org/10.1038/35036228
58. Miller, E. K., & Cohen, J. D. (2001). An integrative theory of prefrontal cortex function. Annual Review of Neuroscience, 24, 167–202. doi: https://doi.org/10.1146/annurev.neuro.24.1.167
59. Murayama, K., Matsumoto, M., Izuma, K., Sugiura, A., Ryan, R. M., Deci, E. L., & Matsumoto, K. (2015). How self-determined choice facilitates performance: A key role of the ventromedial prefrontal cortex. Cerebral Cortex, 25(5), 1241–1251. doi: https://doi.org/10.1093/cercor/bht317
60. Nakamura, J., & Csikszentmihalyi, M. (2005). The concept of flow. In C. R. Snyder & S. J. Lopez (Eds.), Handbook of positive psychology (pp. 89–105). New York, NY: Oxford University Press.
61. Nisbett, R. E., & Ross, L. (1980). Human inference: Strategies and shortcomings of social judgment. Englewood Cliffs, NJ: Prentice-Hall.
62. O’Doherty, J., Dayan, P., Schultz, J., Deichmann, R., Friston, K., & Dolan, R. J. (2004). Dissociable roles of ventral and dorsal striatum in instrumental conditioning. Science, 304(5669), 452–454. doi: https://doi.org/10.1126/science.1094285
63. O’Reilly, J. X., Woolrich, M. W., Behrens, T. E. J., Smith, S. M., & Johansen-Berg, H. (2012). Tools of the trade: Psychophysiological interactions and functional connectivity. Social Cognitive and Affective Neuroscience, 7(5), 604–609. doi: https://doi.org/10.1093/scan/nss055
64. Pauli, W. M., O’Reilly, R. C., Yarkoni, T., & Wager, T. D. (2016). Regional specialization within the human striatum for diverse psychological functions. Proceedings of the National Academy of Sciences, 113(7), 1907–1912. doi: https://doi.org/10.1073/pnas.1507610113
65. Pessoa, L. (2008). On the relationship between emotion and cognition. Nature Reviews Neuroscience, 9(2), 148–158. doi: https://doi.org/10.1038/nrn2317
66. Peters, M., Laeng, B., Latham, K., Jackson, M., Zaiyouna, R., & Richardson, C. (1995). A redrawn Vandenberg and Kuse mental rotations test: Different versions and factors that affect performance. Brain and Cognition, 28(1), 39–58. doi: https://doi.org/10.1006/brcg.1995.1032
67. Petersen, S. E., & Posner, M. I. (2012). The attention system of the human brain: 20 years after. Annual Review of Neuroscience, 35, 73–89. doi: https://doi.org/10.1146/annurev-neuro-062111-150525
68. Poldrack, R. A., Baker, C. I., Durnez, J., Gorgolewski, K. J., Matthews, P. M., Munafò, M. R., … Yarkoni, T. (2017). Scanning the horizon: Towards transparent and reproducible neuroimaging research. Nature Reviews Neuroscience, 18(2), 115–126. doi: https://doi.org/10.1038/nrn.2016.167
69. Posner, M., Inhoff, A. W., Friedrich, F. J., & Cohen, A. (1987). Isolating attentional systems: A cognitive-anatomical analysis. Psychobiology, 15(2), 107–121.
70. Pruim, R. H. R., Mennes, M., Buitelaar, J. K., & Beckmann, C. F. (2015). Evaluation of ICA-AROMA and alternative strategies for motion artifact removal in resting state fMRI. NeuroImage, 112, 278–287. doi: https://doi.org/10.1016/j.neuroimage.2015.02.063
71. Pruim, R. H. R., Mennes, M., van Rooij, D., Llera, A., Buitelaar, J. K., & Beckmann, C. F. (2015). ICA-AROMA: A robust ICA-based strategy for removing motion artifacts from fMRI data. NeuroImage, 112, 267–277. doi: https://doi.org/10.1016/j.neuroimage.2015.02.064
72. Raines, S. A., Levine, T. R., & Weber, R. (2018). Sixty years of quantitative communication research summarized: Lessons from 149 meta-analyses. Annals of the International Communication Association. doi: https://doi.org/10.1080/23808985.2018.1446350
73. Ratcliff, R. (1993). Methods for dealing with reaction time outliers. Psychological Bulletin, 114(3), 510–532. doi: https://doi.org/10.1037/0033-2909.114.3.510
74. Raz, A., & Buhle, J. (2006). Typologies of attentional networks. Nature Reviews Neuroscience, 7, 367–379. doi: https://doi.org/10.1038/nrn1903
75. Robertson, I. H., Manly, T., Andrade, J., Baddeley, B. T., & Yiend, J. (1997). “Oops!”: Performance correlates of everyday attentional failures in traumatic brain injured and normal subjects. Neuropsychologia, 35(6), 747–758. doi: https://doi.org/10.1016/S0028-3932(97)00015-8
76. Satterthwaite, T. D., Green, L., Myerson, J., Parker, J., Ramaratnam, M., & Buckner, R. L. (2007). Dissociable but inter-related systems of cognitive control and reward during decision making: Evidence from pupillometry and event-related fMRI. NeuroImage, 37(3), 1017–1031. doi: https://doi.org/10.1016/j.neuroimage.2007.04.066
77. Schmidt, H., Jogia, J., Fast, K., Christodoulou, T., Haldane, M., Kumari, V., & Frangou, S. (2009). No gender differences in brain activation during the N-back task: An fMRI study in healthy individuals. Human Brain Mapping, 30(11), 3609–3615. doi: https://doi.org/10.1002/hbm.20783
78. Sherry, J. (2001). The effects of violent video games on aggression: A meta-analysis. Human Communication Research, 27(3), 409–431. doi: https://doi.org/10.1111/j.1468-2958.2001.tb00787.x
79. Sherry, J. (2004). Flow and media enjoyment. Communication Theory, 14(4), 328–347. doi: https://doi.org/10.1111/j.1468-2885.2004.tb00318.x
80. Smith, S. M. (2002). Fast robust automated brain extraction. Human Brain Mapping, 17(3), 143–155. doi: https://doi.org/10.1002/hbm.10062
81. Spiers, H. J., & Maguire, E. A. (2007). Decoding human brain activity during real-world experiences. Trends in Cognitive Sciences, 11(8), 356–365. doi: https://doi.org/10.1016/j.tics.2007.06.002
82. Sridharan, D., Levitin, D. J., & Menon, V. (2008). A critical role for the right fronto-insular cortex in switching between central-executive and default-mode networks. Proceedings of the National Academy of Sciences, 105(34), 12569–12574. doi: https://doi.org/10.1073/pnas.0800005105
83. Tricomi, E. M., Delgado, M. R., & Fiez, J. A. (2004). Modulation of caudate activity by action contingency. Neuron, 41(2), 281–292. doi: https://doi.org/10.1016/S0896-6273(03)00848-1
84. Ugurbil, K., Xu, J., Auerbach, E. J., Moeller, S., Vu, A. T., Duarte-Carvajalino, J. M., … Yacoub, E. (2013). Pushing spatial and temporal resolution for functional and diffusion MRI in the Human Connectome Project. NeuroImage, 80, 80–104. doi: https://doi.org/10.1016/j.neuroimage.2013.05.012
85. Ulrich, M., Keller, J., & Grön, G. (2016a). Dorsal raphe nucleus down-regulates medial prefrontal cortex during experience of flow. Frontiers in Behavioral Neuroscience, 10, 169. doi: https://doi.org/10.3389/fnbeh.2016.00169
86. Ulrich, M., Keller, J., & Grön, G. (2016b). Neural signatures of experimentally induced flow experiences identified in a typical fMRI block design with BOLD imaging. Social Cognitive and Affective Neuroscience, 11(3), 496–507. doi: https://doi.org/10.1093/scan/nsv133
87. Ulrich, M., Keller, J., Hoenig, K., Waller, C., & Grön, G. (2014). Neural correlates of experimentally induced flow experiences. NeuroImage, 86(1), 194–202. doi: https://doi.org/10.1016/j.neuroimage.2013.08.019
88. Unsworth, N., Redick, T. S., McMillan, B. D., Hambrick, D. Z., Kane, M. J., & Engle, R. W. (2015). Is playing video games related to cognitive abilities? Psychological Science, 26(6), 759–774. doi: https://doi.org/10.1177/0956797615570367
89. Vassena, E., Silvetti, M., Boehler, C. N., Achten, E., Fias, W., & Verguts, T. (2014). Overlapping neural systems represent cognitive effort and reward anticipation. PLoS ONE, 9(3), 1–9. doi: https://doi.org/10.1371/journal.pone.0091008
90. Vatansever, D., Menon, D. K., & Stamatakis, E. A. (2017). Default mode contributions to automated information processing. Proceedings of the National Academy of Sciences. doi: https://doi.org/10.1073/pnas.1710521114
91. Watson, N. V., & Kimura, D. (1989). Right-hand superiority for throwing but not for intercepting. Neuropsychologia, 27(11–12), 1399–1414. doi: https://doi.org/10.1016/0028-3932(89)90133-4
92. Weber, R., Behr, K.-M., & Bates, C. (2014). Measuring interactivity in video games. Communication Methods and Measures, 8(2), 79–115. doi: https://doi.org/10.1080/19312458.2013.873778
93. Weber, R., Huskey, R., & Craighead, B. (2016). Flow experiences and well-being: A media neuroscience perspective. In M. B. Oliver & L. Reinecke (Eds.), Handbook of media use and well-being: International perspectives on theory and research on positive media effects (pp. 183–196). New York, NY: Routledge.
94. Weber, R., Mangus, J. M., & Huskey, R. (2015). Brain imaging in communication research: A practical guide to understanding and evaluating fMRI studies. Communication Methods and Measures, 9(1–2), 5–29. doi: https://doi.org/10.1080/19312458.2014.999754
95. Weber, R., Tamborini, R., Westcott-Baker, A., & Kantor, B. (2009). Theorizing flow and media enjoyment as cognitive synchronization of attentional and reward networks. Communication Theory, 19(4), 397–422. doi: https://doi.org/10.1111/j.1468-2885.2009.01352.x
96. Weissman, D. H., Roberts, K. C., Visscher, K. M., & Woldorff, M. G. (2006). The neural bases of momentary lapses in attention. Nature Neuroscience, 9(7), 971–978. doi: https://doi.org/10.1038/nn1727
97. Woo, C. W., Krishnan, A., & Wager, T. D. (2014). Cluster-extent based thresholding in fMRI analyses: Pitfalls and recommendations. NeuroImage, 91(1), 412–419. doi: https://doi.org/10.1016/j.neuroimage.2013.12.058
98. Woolrich, M. W., Behrens, T. E. J., Beckmann, C. F., Jenkinson, M., & Smith, S. M. (2004). Multilevel linear modelling for FMRI group analysis using Bayesian inference. NeuroImage, 21(4), 1732–1747. doi: https://doi.org/10.1016/j.neuroimage.2003.12.023
99. Worsley, K. J. (2001). Statistical analysis of activation images. In P. Jezzard, P. M. Matthews, & S. M. Smith (Eds.), Functional MRI: An introduction to methods (pp. 251–270). Oxford, United Kingdom: Oxford University Press.
100. Xia, M., Wang, J., & He, Y. (2013). BrainNet Viewer: A network visualization tool for human brain connectomics. PLoS ONE, 8(7), e68910. doi: https://doi.org/10.1371/journal.pone.0068910
101. Yarkoni, T., Poldrack, R. A., Nichols, T. E., Van Essen, D. C., & Wager, T. D. (2011). Large-scale automated synthesis of human functional neuroimaging data. Nature Methods, 8(8), 665–670. doi: https://doi.org/10.1038/nmeth.1635
102. Yoshida, K., Sawamura, D., Inagaki, Y., Ogawa, K., Ikoma, K., & Sakai, S. (2014). Brain activity during the flow experience: A functional near-infrared spectroscopy study. Neuroscience Letters, 573(24), 30–34. doi: https://doi.org/10.1016/j.neulet.2014.05.011

Copyright information

© Psychonomic Society, Inc. 2018

Authors and Affiliations

1. School of Communication, The Ohio State University, Columbus, USA
2. Department of Communication, University of California, Santa Barbara, USA
3. Department of Psychological and Brain Sciences, University of California, Santa Barbara, USA
