Introduction

When tasks require little control, we typically feel more in control – that is, we feel a stronger sense of agency (van der Wel & Knoblich, 2013). For example, although one might not feel that they use much control while driving on a clear, sunny day, one would probably feel very much in control. Accordingly, highly practiced, skilled tasks – those that are presumably associated with stronger feelings of being in control – use fewer control-related resources than less practiced tasks (Moors & De Houwer, 2006). However, when task demand is increased, one might expect the opposite relation: one might feel that they use a lot of control while driving in icy weather conditions while still feeling little control over the vehicle. This prediction is in line with previous work in the domain of cognitive control suggesting that increased task demand results in the recruitment of additional control (Botvinick et al., 2001; Dreisbach & Fischer, 2015). It seems intuitive that stronger feelings of control should relate to conditions in which less control is used, and vice versa. However, while previous studies have investigated variations in reports of control felt for conditions requiring more or less control, to our knowledge, the relation between reports of control used and control felt – two aspects of a broader conception of agency – has not been studied. We examine this relation in the experiments reported here.

A growing body of research has suggested a connection between one of these metacognitive aspects of control – how much control is felt – and task demand. Metcalfe and Greene (2007) designed a video game-style task that afforded various manipulations of demand and has since been used in a number of subsequent studies. The authors asked participants to use a mouse to move a virtual cart along the horizontal axis of a computer screen to catch falling targets while avoiding distractors. To influence the degree of task demand, the authors manipulated various parameters related to participants’ movements, such as the speed at which targets fell from the top of the screen, the turbulence of the mouse cursor, and the virtual (though not visible) widths of the targets. The latter manipulation was called the “good magic” condition, as participants could “magically” catch targets by touching them within the increased, invisible perimeter, though they were not informed of this manipulation beforehand. After each trial, participants rated how much in control they felt using a response line that ranged from “very little” to “very much” in control. Participants’ task performance, defined by the proportion of hits – the number of targets that were caught relative to the number of distractors – was compared to how much in control they felt. In general, participants’ feelings of control followed task performance: high-demand conditions resulted in poor performance and weaker feelings of control, while low-demand conditions resulted in better performance and stronger feelings of control. There were, however, notable exceptions. For example, increased virtual target sizes led to significant improvements in aiming performance, though participants felt only slightly more in control in these conditions, presumably because they attributed the improvement in their performance to the computer-based manipulation rather than their own actions. More recent studies using this general paradigm have revealed robust effects of increased demand on feelings of control across various aspects of action (Metcalfe, Eich, & Miele, 2013; Sidarus, Vuorre, Metcalfe, & Haggard, 2017) and across a range of populations (Metcalfe, Eich, & Castel, 2010; Metcalfe et al., 2012).

A second area of research that has tested the effect of increased task demand on feelings of control has used a different source of demand – response conflict. In a broad study of the subjective experiences associated with response conflict, of which agency was a part, Morsella et al. (2009) asked participants to complete a Stroop task and then report how much personal control (i.e., agency) they felt when naming the ink color for color words aloud. As one might predict given the degree of response conflict associated with congruent and incongruent Stroop task trials, participants felt more in control for congruent trials, where the color of the ink matched the color word, and less in control for incongruent trials, where the color of the ink did not match the color word. Consistent with these results, though employing a different experimental paradigm to induce response conflict, Sidarus and Haggard (2016) found that incongruent Eriksen flanker trials (Eriksen & Eriksen, 1974) were associated with weaker feelings of control over action-outcomes to follow. This finding emerged in conditions when participants were instructed to perform a given action, as well as conditions in which responses to incongruent trials were performed in a free-choice context (see also Sidarus, Vuorre, Metcalfe, & Haggard, 2017).

Drawing on this research, one might expect that task demand is inversely related to the experience of agency. That is, participants feel less control in response to increasing task demands, such as incongruent Stroop or flanker task trials. However, a study using a similar paradigm has shown the opposite relation between response conflict and reports of control. Damen, van Baaren, and Dijksterhuis (2014b) asked participants to engage in a task in which they could choose to press a left or right keyboard button, which produced a tone after a delay. To induce response conflict, the participants were primed either supraliminally or subliminally with the words “left” or “right,” which could be congruent or incongruent with their subsequent button press selections. Then, the participants rated the extent to which they felt that they were responsible for the outcome relative to the computer, which they were informed could also produce a tone. When primes were presented supraliminally, participants more strongly attributed outcomes to themselves after they performed actions that were incongruent with primes. That is, the experience of agency was stronger for incongruent trials than congruent trials. It is worth noting, however, that the congruence of primes and actions did not affect response time, a typical proxy for difficulty. Thus, although there were effects at the metacognitive level, these effects were not reflected in participants’ performance.

This finding is in line with other work suggesting that more general manipulations of task demand can influence feelings of authorship. In a study by Minohara et al. (2016), participants moved a stimulus on a computer display by pressing buttons with different levels of resistance. Thus, the buttons required more or less effort to depress. The delay between the button press and the outcome on the screen varied randomly. At longer delays (700 ms), participants were more likely to attribute outcomes to themselves rather than the computer following more effortful button presses. The authors suggested that the visual feedback from the computer screen was less reliable at long delays, and therefore participants relied more heavily on effort as a cue to agency. This interpretation is in line with the cue integration theory of agency (Moore & Fletcher, 2012; Synofzik et al., 2008, 2013), which suggests that the cues that most heavily contribute to the experience of agency are those that are most reliable. Similar increases in feelings of authorship over action-outcomes have been found for an aiming task performed with the non-dominant compared to the dominant hand (Damen, Dijksterhuis, & van Baaren, 2014a). Additionally, people are more likely to inadvertently plagiarize solutions to problems while engaged in effortful activity, suggesting increased feelings of authorship (Preston & Wegner, 2007).

How might one reconcile results showing both increases and decreases in agency for more demanding conditions? As suggested by Sidarus and Haggard (2016), it is possible that the differences between experiments can be attributed to the kinds of questions used to assess agency. Studies that have reported decreases in reports of control as a result of increased task demand have asked participants how much in control they felt (e.g., Sidarus & Haggard, 2016). In studies that have reported increases in agency, participants gave attributions of agency – they rated the extent to which they, rather than the computer, caused the outcome (e.g., Damen et al., 2014b). Perhaps the participants made their attributions of agency by assessing their level of personal involvement in producing an outcome (Sidarus & Haggard, 2016). To state this another way, participants’ attributions of agency may have reflected the amount of control they felt that they used to produce an outcome – the more control they felt they used for a given outcome, the stronger their feelings of authorship over the outcome would be. This claim is in line with the idea that the amount of effort one has expended during a task is a cue to the experience of agency (e.g., Minohara et al., 2016). Thinking about attributions of agency in this way – as more closely related to the amount of control participants used to produce an outcome than to how in control they felt – makes sense of the discrepancies in previous research. As discussed earlier, when task demands increase, for example during incongruent conflict task trials, one might feel less in control while coping with the increased demand, though one might use more control to do so. Drawing this conclusion is complicated, however, by the invocation of alternative agents who, from the view of the participant, have an undetermined amount of control over action-outcomes.

Here we examined more directly the relation between reports of control used and control felt in two experiments, in which we manipulated task demand through a manipulation analogous to that used by Metcalfe and Greene (2007) – aiming difficulty (Experiment 1) – as well as response conflict (Experiment 2). In addition to asking participants how much in control they felt after each experimental block, we added what is, to our knowledge, a novel metacognitive report of control: What percentage of your total control did you use? We chose to phrase the question in this way, rather than as an attribution of agency – the extent to which participants felt that they were responsible for outcomes relative to an alternative agent – because we did not lead participants to believe that they shared control with another agent. They were always fully responsible for the outcomes, and therefore had no reason to attribute outcomes to an alternative source of control. A benefit of the “control used” question is that it affords clear ties to models of cognitive control. As was previously mentioned, models of control have suggested that control-related resources are recruited to cope with increasing task demands, whether the source of demand is competition between actions (Botvinick et al., 2001) or other sources of difficulty, such as dysfluency (Dreisbach & Fischer, 2015). Thus, an ancillary hypothesis to the present research is that reports of control used reflect the degree of activation necessary to cope with increasing demands.

Experiments 1a and 1b

In Experiments 1a and 1b, we tested the relation between reports of control used (Experiment 1a) and control felt (Experiment 1b) in an aiming task in which we manipulated task demands by varying the width of targets. The target width manipulation drew on an experiment by Metcalfe and Greene (2007), who effectively increased the virtual (though not visible) widths of falling targets so that participants could “catch” them more easily. While the authors included this condition to dissociate performance from feelings of control – participants realized that it was the computer, rather than their movements, that resulted in better performance – we were interested primarily in the effect of aiming difficulty on reports of control felt and control used.

The target width manipulation was particularly attractive due to its association with Fitts’ Law (1954) – a predictive model of human movement which states that aiming movement times are a function of the ratio of movement amplitude to target width. Thus, participants’ movement times afforded a method to check whether the manipulation of task demand did, in fact, influence performance systematically. Moreover, previous research on the metacognition of action has suggested that people take the parameters described by Fitts’ Law into account during the planning of actions (Augustyn & Rosenbaum, 2005). The log-transformed ratio of movement amplitude to target width, called the Index of Difficulty, is expressed in bits and can be represented as follows:

$$ ID = \log_2\left(\frac{2A}{W}\right) $$

In this equation, A is the movement amplitude, or the distance to the target. W represents the width of the target. Thus, smaller movement amplitudes and larger target widths will yield lower aiming difficulty values, while larger movement amplitudes and smaller target widths will yield higher difficulty values. For the experiments to follow, the target width manipulation will be described in terms of Index of Difficulty.
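For concreteness, given the movement amplitude and target radii used in the experiments below (A = 22.56 cm; target radii of .27, .81, 1.35, and 1.90 cm), and taking W to be the target diameter (twice the reported radius), the four target sizes correspond to Index of Difficulty values of approximately 6.4, 4.8, 4.1, and 3.6 bits. For the smallest and largest targets, for example:

$$ \log_2\left(\frac{2 \times 22.56}{0.54}\right) \approx 6.4 \text{ bits}, \qquad \log_2\left(\frac{2 \times 22.56}{3.80}\right) \approx 3.6 \text{ bits} $$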

Finally, we reasoned that manipulations of aiming difficulty would influence feelings of control by affecting the fluency of the aiming task – a factor that has been shown to affect the experience of agency (Chambon et al., 2014; Chambon & Haggard, 2012), as well as other metacognitive variables, such as confidence (Stevenson & Carlson, 2018). Moreover, task fluency can affect adjustments to cognitive control (Dreisbach & Fischer, 2011). Our assumption that aiming difficulty would affect fluency was rooted in a two-component model of aiming, first proposed by Woodworth (1899), in which aiming movements are thought to consist of two broad stages – an initial ballistic phase followed by a homing-in phase as the target is approached (for a review, see Elliott, Helsen, & Chua, 2001). Because smaller targets require more time spent in the homing-in phase, in which fine-grained corrections are made while approaching the target, we thought that aiming for smaller targets would reduce movement fluency compared to larger targets. Specifically, we thought that this would result in a reduced ability to fall into rhythm from trial to trial in the more difficult aiming conditions. To test this prediction, we compared the coefficient of variation for participants’ movement times (MT), or the ratio of the standard deviation of MT to the mean, for each of the aiming difficulty conditions. The coefficient of variation is commonly used as an index of rhythmicity in musical entrainment tasks (e.g., Resnicow, Salovey, & Repp, 2004).
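Formally, for each block of trials,

$$ CV = \frac{\sigma_{MT}}{\mu_{MT}} $$

where σMT and μMT are the standard deviation and mean of the movement times within that block; higher values indicate less rhythmic, less fluent responding.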

Participants were tested in a repeated-measures design in which we varied aiming difficulty by block through four possible target width conditions. They completed two blocks of each target size for a total of 8 blocks per participant arranged in a random order. After each block, the participants reported either the proportion of total control they used (Experiment 1a) or how much in control they felt (Experiment 1b). We chose to assess reports of control used and control felt using separate groups of participants in Experiment 1a and 1b, as well as Experiment 2 to follow, to prevent the effects of a prominent cognitive heuristic – anchoring and adjustment – in which forthcoming judgments are anchored to available numerical information (Tversky & Kahneman, 1974). Because we expected an inverse relation between reports of control used and control felt, we thought that having both questions present may have exaggerated the differences between them. Preventing such effects was important in the present context, as we had not yet tested the effect of increased demand on the novel “percentage of control used” question. Though these reports were tested in sequentially run experiments, we discuss the experiments in tandem below, treating the type of question participants answered as a between-groups variable, given the similarities between these experiments.

Method

Participants

A total of 85 participants took part in the aiming experiment: 49 in Experiment 1a and 36 in Experiment 1b. The sample sizes here and in Experiment 2 were based on the availability of participants. All participants were students at the Pennsylvania State University and were compensated with a small amount of course credit.

Procedure

To test the association between Index of Difficulty and reports of control, participants completed a computer-based aiming task in which they aimed for targets that appeared in peripheral locations on the screen. The aiming task was programmed using E-Prime 2.0 software (Psychology Software Tools, Pittsburgh, PA, USA). At the start of the task, the participant read through a series of instructions, which asked which hand, left or right, they preferred to use the computer mouse with. When the participant answered this question, the experimenter moved the mouse to the participant’s preferred side and then left the room.

During the instructions, participants read a brief description about the reports of control that would follow each block of trials. For the “Control Used” question, the description was: “We are interested in the sense of control that people feel for certain tasks. Some tasks require little control, such as those that are easy or well-practiced. Other tasks require full control, such as those that are difficult or new. Using the gray response line, please report the percentage of your maximum control that you used for that block of trials. To respond, use the mouse to click the point along the line that you think best represents the percentage of control you used.” For the “Control Felt” question, we paraphrased the description that Metcalfe and Greene (2007) provided for their participants, though we replaced their example involving driving a car with using a computer mouse, as it related more directly to our task. The description was: “Imagine that you are using a computer that is unfamiliar, such as one in the library. You may find that the mouse cursor moves too quickly or too slowly, and you don't feel like you are in control. When using your own computer, you might feel like you are in complete control. Regardless, then, of whether you are in control or not (that is not our question here) you may sometimes feel like you are in control (and hence have a high metacognition of control) or feel like you are not (and have a low metacognition of control). To respond, use the mouse to click the point along the gray response line that you think best represents how much in control you were in that block of trials.”

The trial sequence for the aiming task is shown in Fig. 1. Each trial of the experiment began with a red circle (r = .41 cm) that marked the center of the screen, as well as a blue target circle located at one of the four corners of a virtual (though not visible) rectangle that encompassed the center of the screen. The center of the blue target circle was always located 22.56 cm from the red center circle. At the start of each trial, the mouse cursor was shown over the center of the red circle. The participant was instructed to click the red center circle once, causing it to turn green. This color change cued participants to begin their aiming movement from the (now) green center circle to the blue target circle. Once the participant clicked the blue target circle, it changed color from blue to green and remained on the screen for an additional 200 ms. There was no time limit for the initial click on the center circle or the target circle click, though participants were told during the instructions to try to perform as quickly as they could. Designing the task in this way allowed for a measure of movement initiation time, or the duration between the onset of the screen display and the participant’s first mouse click, as an index of movement planning. Moreover, the design isolated the movement time to the target – the aspect of the total time that one would expect to relate to Fitts’ law – from components related to movement initiation, such as visual search. If, en route to clicking the target, the participant clicked outside of the target’s perimeter, they could continue to click as many times as necessary until the target was successfully clicked. The color change of the target circle indicated that the target had been successfully clicked, whether immediately or following a series of previous clicks. No additional feedback was given for clicks outside of the target’s perimeter; the target circle simply remained the same color until it was clicked. Following the color change of the target, there was a 1-s blank, and then the next trial began with the cursor over the center circle and a new target displayed at a random location among the four corners of the virtual rectangle.

Fig. 1

Trial sequence for the aiming task. Each trial began with the cursor on the center circle. Once the participant clicked the center circle, it changed color from red to green. After the participant clicked the blue target circle, it turned green for 200 ms. The next trial began after a 1-s blank with a new target and the cursor on the red center circle. After eight trials, or eight target clicks, there was a 1-s blank screen, and the participant reported either the percentage of total control they used (shown here) or how much in control they felt
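To make these timing measures concrete, the sketch below shows how each trial’s logged timestamps decompose into initiation time and movement time, keeping planning-related components (e.g., visual search) separate from the Fitts-relevant movement component. It is an illustration with hypothetical field names, not the E-Prime implementation itself.

```python
from dataclasses import dataclass

@dataclass
class AimingTrial:
    # Timestamps in ms from one trial; field names are hypothetical.
    display_onset: float   # screen display appears
    center_click: float    # first click, on the red center circle
    target_click: float    # successful click on the blue target circle
    misses: int            # clicks that landed outside the target perimeter

    @property
    def initiation_time(self) -> float:
        """Index of movement planning: display onset to first mouse click."""
        return self.center_click - self.display_onset

    @property
    def movement_time(self) -> float:
        """Fitts-relevant component: center click to successful target click."""
        return self.target_click - self.center_click

# Example with values in the range reported below (~350 ms initiation, ~830 ms MT):
trial = AimingTrial(display_onset=0.0, center_click=350.0,
                    target_click=1180.0, misses=0)
print(trial.initiation_time, trial.movement_time)
```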

At the start of the experiment, participants completed four practice trials, in which they were exposed to the full range of target sizes, with radii of 10 pixels (.27 cm), 30 pixels (.81 cm), 50 pixels (1.35 cm), or 70 pixels (1.90 cm) in a random order, and at random locations among the four corners of the virtual rectangle encompassing the center circle. After completing these four practice trials, participants entered the first experimental block. Each experimental block consisted of eight trials of a given target size, and was followed by a second block of the same target size. This was done to test for calibration effects for a given target size. Other than this constraint, the order of blocks was random.

After completing each block of trials, participants were asked one of two questions about their control, depending on the experiment: either “What percentage of your total control did you use in this block of trials?” (Experiment 1a) or “How much in control were you during this block of trials?” (Experiment 1b). In either case, participants reported their agency by clicking at some point along a horizontal response line that was 11 cm long and .4 cm high. In the case of the control felt question, the leftmost side of the response line indicated “very little” control, while the rightmost side of the response line indicated “very much” control. For the control used question, the leftmost side was 0%, or “no control,” and the rightmost side was 100% or “full control.” The primary dependent variable of interest was the distance that the participant clicked from the leftmost portion of the response line. Participants’ reports were self-paced. We decided to collect reports of control on a block-wise rather than a trial-wise basis because of our interest in movement fluency, which we approximated using the coefficient of variation – the ratio of the standard deviation of movement time to the average movement time for each block. To be able to fall into a rhythm within a block, participants needed to encounter multiple aiming trials in a row without interruptions from the metacognitive report. Moreover, previous experiments have shown that the cumulative experience of demand can influence metacognitions of agency (Sidarus & Haggard, 2016; Sidarus, Vuorre, Metcalfe, & Haggard, 2017; Wenke, Fleming, & Haggard, 2010). After the participants reported their control, there was a 1-s delay, and then they began the next block. Each participant completed a total of 64 trials in the aiming task, or said another way, 64 target presentations (8 trials × 4 target sizes × 2 blocks per target size).

Results

To test the association between reports of control used and control felt, participants’ average control reports for each aiming condition, or the proportion of distance that the participants clicked from the leftmost portion of the response line, were submitted to a mixed-model ANOVA with one within-subjects factor (Index of Difficulty, with four levels) and one between-subjects factor (Question Type: Control Felt or Control Used). Preliminary analyses revealed no significant main effect of block repetition (first vs. second block at the same Index of Difficulty; p=.252, ηp2=.027) and no interactions involving it (p=.371, ηp2=.021). This factor was therefore dropped, and we averaged across the two blocks of each target size condition for further analyses. We report Greenhouse-Geisser corrected values in cases where the assumption of sphericity was violated.
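For readers who want the structure of this analysis in executable form, the sketch below expresses the 4 (Index of Difficulty) × 2 (Question Type) mixed design using Python’s pingouin package. The column names and data file are hypothetical; this illustrates the design, not the software actually used for the reported analyses.

```python
# Illustrative sketch of the mixed-design analysis (hypothetical column names).
# Expects long-format data: one row per participant x Index of Difficulty level,
# containing the mean proportion of control reported in that condition.
import pandas as pd
import pingouin as pg

df = pd.read_csv("exp1_control_reports.csv")  # hypothetical data file

# 4 (Index of Difficulty, within) x 2 (Question Type, between) mixed ANOVA.
aov = pg.mixed_anova(data=df, dv="report", within="difficulty",
                     subject="participant", between="question_type")
print(aov.round(3))

# Follow-up: effect of Index of Difficulty within each question group,
# with Greenhouse-Geisser correction applied when sphericity is violated.
for group, sub in df.groupby("question_type"):
    rm = pg.rm_anova(data=sub, dv="report", within="difficulty",
                     subject="participant", correction=True)
    print(group)
    print(rm.round(3))
```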

There was no significant main effect of Index of Difficulty, F(2.76, 228.78) = .442, p=.707, ηp2 =.005, or Question Type, F(1,83) = .047, p=.829, ηp2 =.001, owing to a crossover interaction between these factors (Fig. 2), F(2.76, 228.78) = 12.96, p<.001, ηp2 =.135. Follow-up tests on the effect of Index of Difficulty within each Question Type group confirmed that the aiming difficulty manipulation systematically affected reports of control for both the Control Used, F(2.46, 118.14) = 7.42, p=.001, ηp2 =.134, and Control Felt groups, F(2.36, 82.62) = 6.22, p=.002, ηp2 =.151. For the Control Used group, mean reports of control increased as a function of aiming difficulty (in order of increasing Index of Difficulty: .57, .59, .65, and .72). Bonferroni-adjusted post hoc comparisons showed that these reports differed between the smallest (r=.27 cm; M=.72) and second-largest targets (r=1.35 cm; M=.59), p=.001, as well as the smallest (r=.27 cm; M=.72) and largest targets (r=1.90 cm; M=.57), p=.008. The Control Felt question yielded an inverse pattern of results (orange circles in Fig. 2). Mean reports of control felt decreased with increasing aiming difficulty (M = .71, .65, .58, and .55). These reports differed significantly between the smallest (r=.27 cm; M=.55) and largest targets (r=1.90 cm; M=.71), p=.002, as well as the second-smallest (r=.81 cm; M=.58) and largest targets (r=1.90 cm; M=.71), p=.003. Thus, in conditions where participants in the Control Used group reported using little control, other participants in the Control Felt group felt very much in control. Conversely, when participants in the Control Used group reported using a larger percentage of their total control, other participants in the Control Felt group felt little control in those same conditions. This pattern of results will be discussed below.

Fig. 2

Mean proportion of control reported plotted as a function of Fitts’ Index of Difficulty for the Control Felt (orange circles) and Control Used (blue circles) questions. Lower difficulty values indicate larger target widths, whereas higher difficulty values indicate smaller target widths. The error bars for this figure and all figures to follow show the standard error across subjects

To check whether our manipulation of aiming difficulty influenced participants’ motor performance, we analyzed participants’ movement times (MTs), or the duration between the initial click on the center circle and the subsequent click on the target circle, across levels of Index of Difficulty. We also included Question Type in this analysis, as well as the analyses to follow, to test for differences in performance between participants in the Control Used and Control Felt groups. The analysis revealed a significant effect of aiming difficulty on MT in the expected direction (Fig. 3), with longer MTs for more difficult (smaller target) aiming conditions, F(3,249) = 82.36, p<.001, ηp2 =.498. Participants’ mean MTs, in order of increasing Index of Difficulty, were 732, 793, 849, and 981 ms. There was also a main effect of Question Type, F(1,83) = 5.68, p<.05, ηp2 =.064, with shorter mean MT for the Control Used group (803 ms) than the Control Felt group (887 ms). The interaction between Index of Difficulty and Question Type was not significant, F(3,249) = .918, p=.433, ηp2 =.011.

Fig. 3

Mean movement time, or the average time between participants’ initial click on the center circle and subsequent click on the target circle, as a function of Index of Difficulty

For an additional measure of motor performance, we tested whether increases in Index of Difficulty influenced the frequency of clicks outside of the target’s perimeter. This analysis showed a significant effect of aiming difficulty, F(2.51, 208.24) = 15.78, p<.001, ηp2 =.160, with a larger average number of clicks for more difficult aiming conditions. The average number of clicks outside the target per trial, in order of increasing Index of Difficulty, was 0.20, 0.22, 0.25, and 0.37 clicks. Question Type was not significant, F(1,83) = 1.88, p=.174, ηp2 =.062, though there was a marginal interaction between Question Type and Index of Difficulty, F(2.51, 208.24) = 2.48, p=.073, ηp2 =.029. This was driven by an increase in the number of clicks for the smallest target condition in the Control Used group. In order of ascending aiming difficulty, the average number of extra clicks per trial was .20, .24, .27, and .43 (Control Used) and .20, .20, .22, and .28 (Control Felt). These results suggest a slight speed-accuracy tradeoff between the Control Used and Control Felt groups, where participants in the Control Used group were faster overall, but more likely to miss the target in the most difficult aiming condition.

To test for effects of movement planning, we submitted participants’ movement initiation times, or the duration between screen onset and the initial click on the center circle, to the same analysis. However, average initiation times, which ranged from 321 to 443 ms, did not vary systematically with Index of Difficulty, F(1.14, 95.56) = .892, p=.360, ηp2 =.011. Additionally, movement initiation time did not vary between the Control Used and Control Felt groups, F(1,83)=.063, p=.796, ηp2 =.001, and there was no interaction between Index of Difficulty and Question Type, F(1.14, 95.56) = .352, p=.583, ηp2 =.004.

Finally, for an index of movement fluency, we tested participants’ coefficient of variation (CV) for movement time (MT), or the ratio of the standard deviation of MT to the mean, across levels of aiming difficulty. The analysis revealed a significant main effect of Index of Difficulty on CV, F(2.71, 224.60) = 11.71, p<.001, ηp2 =.124, suggesting that aiming difficulty disrupted participants’ movement fluency. Lower aiming difficulty conditions were associated with lower CV values, while higher difficulty conditions were associated with higher CV values (in order of increasing Index of Difficulty; .29, .30, .32, and .39). There was no main effect of Question Type, F(1,83) = 2.07, p=.154, ηp2 =.024, and no interaction with Index of Difficulty, F(2.71, 224.60) = 2.07, p=.151, ηp2 =.021.

Discussion

In Experiments 1a and 1b, we tested the relation between reports of how much control participants used (Experiment 1a) and how much control participants felt (Experiment 1b) in an aiming task in which we manipulated task demand through variations in aiming difficulty. We found that manipulations of aiming difficulty, which we have described in terms of Fitts’ Index of Difficulty (Fitts, 1954), yielded inverse relations for the Control Used and Control Felt questions. For participants who were asked what percentage of their total control they used, reports of control increased as a function of aiming difficulty. For participants who were asked instead how much in control they felt, reports of control decreased as a function of aiming difficulty. To state this pattern another way, in conditions where participants in the Control Used group felt that they used little control, the participants in the Control Felt group felt very much in control. Conversely, when participants in the Control Used group felt that they used a lot of control, those in the Control Felt group felt very little control. In either case, the magnitude of the effect of Index of Difficulty on reports of control used and control felt was similar.

The inverse relation between reports of control used and control felt bears on the discrepancies found in previous investigations of task demand and agency. As discussed earlier, previous studies have yielded mixed results, with some studies reporting decreases in reports of control in more demanding experimental conditions (e.g., Sidarus & Haggard, 2016), while others have reported increases in reports of control (e.g., Damen et al., 2014b). Studies that have reported decreases in agency for high-demand conditions have asked how much in control participants felt, while those that have reported increases in reports of control have asked for attributions of agency – the extent to which participants felt that they, rather than an alternative agent, caused an outcome. Here we suggest that participants may have made these attributions, at least in part, by judging how much control they used to produce the outcome. This idea is consistent with previous suggestions that effort (e.g., Minohara et al., 2016) can affect feelings of authorship. Viewing attributions of agency in this way would make sense of these discrepant results, as participants probably used more of their control during higher demand conditions, though they may have felt less in control. In Experiment 2, we test the generality of this relation using a different source of task demand – response conflict.

Finally, it is worth noting the promise of studying the metacognition of control using perceptual-motor variables that have been established as lawfully related to performance. A growing body of work has connected motor performance to metacognitions of control (Metcalfe & Greene, 2007; Metcalfe, Eich, & Castel, 2010; Metcalfe, Eich, & Miele, 2013; Metcalfe et al., 2012; Sidarus et al., 2017; Vuorre & Metcalfe, 2016). Here we added to this body of research by testing the effects of aiming difficulty on both motor performance and reports of control used and control felt. A strength of this approach is that it afforded a clearly defined metric, movement time, to ensure that the manipulation of task demand affected performance in the different aiming conditions. The orderly pattern of data that resulted suggests that aiming difficulty can be tied not only to motor performance, as established by Fitts’ Law (Fitts, 1954), but also to metacognitive aspects of action, such as feelings of using and being in control. Moreover, such a manipulation offers a promising method for studying the cues that inform the experience of agency, as participants’ reports of control used and felt were based on a veridical experience of performance and tied to easily quantifiable parameters.

Experiment 2

In Experiments 1a and 1b, we found that increases in aiming difficulty led participants to report using more of their control, though they felt less in control for the same conditions. In Experiment 2, we tested the generality of the relation between reports of control used and control felt using a different source of demand: response conflict. Participants completed a flanker task, in which we manipulated task demand through variations in: (1) the distance between targets and distractors; and (2) the proportion of incongruent trials. At the end of each block, we asked participants to report either the percentage of control they used or how much in control they felt, depending on their group. We predicted that the relation between reports of control used and control felt would be similar to that of Experiment 1. Because response conflict is associated with increases in control-related resources (Botvinick et al., 2001), we expected reports of control used to increase and reports of control felt to decrease following higher demand conditions.

Method

Seventy-six students from Pennsylvania State University completed a computer-based flanker task for a small amount of course credit. The sample size was determined by the availability of participants. The trial sequence for the flanker task is shown in Fig. 4. At the start of each block, participants saw “Beginning block of trials” displayed for 1 s, followed by a blank screen for 500 ms. Then, the display for the first trial appeared. In each trial, five black arrows were displayed in 20-pt Courier New font. Participants responded to the orientation of the middle arrow by pressing the ‘Z’ key if the arrow pointed toward the left, or the ‘/’ key if the arrow pointed toward the right. Though they were told to respond as quickly as possible during the instructions, there was no time limit for participants’ responses. Following their response, the screen went blank for 700 ms, and then the next trial began with the presentation of the next target and flanker set. After 20 trials, there was a 700 ms delay, and then participants reported either the percentage of total control they used or how much in control they felt, depending on their group. Half the participants answered the “control used” question, while the remaining half answered the “control felt” question. The descriptions participants saw for the control questions were identical to those used in Experiment 1, and they responded to the question by clicking at the point along a gray response bar that they felt best represented the amount of control used or control felt. There was no time limit for participants’ reports. After participants reported their control, the screen went blank for 500 ms, and then the participant received feedback about their accuracy for the block, or the number of times they correctly reported the direction of the target arrow, as well as their mean response time. This was displayed for 3 s, and consisted of two lines of centered text, the first of which was, “Your mean accuracy in this block was:” with the percentage of correct responses below, and the second of which was “Your mean response time in this block was:” with the mean response time (RT) value reported in milliseconds below. We decided to provide feedback on a block-wise rather than a trial-wise basis because we wanted participants’ reports of control at the end of each block to be based on metacognitions about performance, such as the fluency of action-decisions (e.g., Sidarus & Haggard, 2016), rather than explicit feedback about their performance. Though we included the feedback to encourage a high level of performance, we will comment on the implications for the reports of control in the Discussion.

Fig. 4

Trial sequence for the Flanker task shown with the far flanker condition and the ‘Control Felt’ question

We manipulated the difficulty of the flanker task in two ways. To manipulate the degree of conflict between the target and flankers, we varied the distance to the flanking arrows on either side of the target by block. This distance subtended approximately .13 (in the close condition) or 1.27 (in the far condition) degrees of visual angle. We calculated visual angle using the approximate distance participants sat from the computer screen (about 45 cm), though participants were not restricted from moving closer to or farther from the display throughout the experiment. We chose these particular values by drawing on pilot work, in which we verified that these distances influenced RT. The second way we manipulated task difficulty was to vary the proportion of incongruent flanker trials in each block, or trials in which the target arrow pointed in the opposite direction of the flanking arrows. Half of the participants received a lower proportion of incongruent trials (20% of trials), while the remaining half received a higher proportion (80% of trials). Aside from this constraint, the congruency of flankers varied randomly from trial to trial. We decided to manipulate the proportion of incongruent trials between subjects to keep the experimental duration comparable to Experiment 1, and because flanker distance had the stronger effect on performance in the pilot work, we assumed it would have the greater effect on metacognition. Participants were tested in a mixed design with two within-subjects variables (Flanker Type: Congruent or Incongruent; Flanker Distance: Close or Far) and two between-subjects variables (Proportion Incongruent: 20% or 80%; Question Type: Control Used or Control Felt). We could not include Flanker Type in the analysis of reports of control used and control felt, as participants reported their control only after each block of trials, and each block contained the same proportion of congruent and incongruent trials for a given participant. However, this factor was included in the analyses of performance.
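For reference, the separation s that subtends a visual angle θ at viewing distance D follows the standard relation

$$ \theta = 2\arctan\left(\frac{s}{2D}\right) $$

so, at the approximate 45-cm viewing distance, the .13° and 1.27° conditions correspond to on-screen flanker separations of roughly 0.10 cm and 1.00 cm, respectively.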

Each participant completed a total of 160 flanker task trials (20 trials per block × 2 flanker distances × 4 blocks per distance) before they were debriefed and dismissed. Four participants whose accuracy was more than three standard deviations below the mean (M=.92; SD =.13) were omitted from the analyses to follow. The distribution of remaining participants was: Control Used, Low Incongruent: N=17; Control Used, High Incongruent: N=17; Control Felt, Low Incongruent: N=19; Control Felt, High Incongruent: N=19.

Results

To test the relation between reports of control used and control felt, we analyzed participants’ responses using a mixed-model ANOVA with one within-subjects factor (Flanker Distance: Close or Far) and two between-subjects factors (Question Type: Control Used or Control Felt; Proportion Incongruent: 20% or 80% of trials). The analysis revealed no main effect of Flanker Distance, F(1,68) = .756, p=.388, ηp2 =.011, though there was a significant interaction between Flanker Distance and Question Type, F(1,68) = 23.37, p<.001, ηp2 =.256. These data are shown in Fig. 5. Simple effects tests confirmed that there were significant differences between the close and far flanker conditions for both the Control Used, F(1,32) = 13.01, p=.001, ηp2 =.289, and the Control Felt groups, F(1,32) = 9.96, p=.003, ηp2 =.217. Similar to Experiment 1, participants in the Control Used group reported using more of their control in the more difficult (close-flanker) conditions (M=.57) and less of their control in the less difficult (far-flanker) conditions (M=.48). Accordingly, participants in the Control Felt group reported feeling less in control for close-flanker (M=.70) compared to far-flanker (M=.77) conditions. Thus, reports of control increased with task demand for participants in the Control Used group and decreased for participants in the Control Felt group, as they did in Experiment 1. Unlike Experiment 1, however, there was a main effect of Question Type, F(1,68) = 18.29, p<.001, ηp2 =.212. Overall, participants in the Control Felt group gave higher scaled reports of control (M=.74) than participants in the Control Used group (M=.52). Finally, reports of control did not significantly differ between the low (M=.64) and high (M=.62) proportion of incongruent trial groups, F(1,68) = .222, p=.639, ηp2 =.003, and this factor did not interact with Question Type (p=.600, ηp2 =.004) or Flanker Distance (p=.526, ηp2 =.006).

Fig. 5

Means for the Control Felt (orange circles) and Control Used (blue circles) questions plotted as a function of the degree of conflict, where far flanker trials (left) represent lower conflict conditions and close flanker task trials (right) represent higher conflict conditions

To confirm that the flanker task affected performance, we submitted participants’ response accuracy and response time (RT) to mixed-model ANOVAs with two within-subjects factors (Flanker Type: Congruent or Incongruent; Flanker Distance: Close or Far) and one between-subjects factor (Proportion Incongruent: 20% or 80%). We included Question Type (Control Used or Control Felt) as a factor in preliminary analyses to test for between-groups differences in performance. There were no main effects and no interactions involving Question Type for RT or accuracy (F’s: .011-1.92; p’s: .220-.918; ηp2: .001-.022).

Accuracy varied significantly as a function of Flanker Type, F(1,70) = 21.01, p<.001, ηp2 =.231, with higher accuracy for congruent (M=.98) compared to incongruent (M=.90) trials. There was also an effect of Flanker Distance, F(1,70) = 17.90, p<.001, ηp2 =.204, with higher accuracy for trials in which the flankers were further from the target (M=.95) than trials in which the flankers were closer to the target (M=.93). Flanker Type and Flanker Distance interacted, as shown in Fig. 6, F(1,70) = 41.62, p<.001, ηp2 =.370. Simple effects tests showed that the effect of Flanker Type was larger within the close-flanker condition (Mcongruent=.99; Mincongruent=.87; MDifference=.12), F(1,71) = 33.48, p<.001, ηp2 =.320, compared to the far-flanker condition (Mcongruent=.98; Mincongruent=.92; MDifference=.06), F(1,71) = 8.43, p=.005, ηp2 =.106. Finally, there was no significant effect of the between-groups variable, namely the proportion of incongruent trials, on participants’ accuracy, F(1,70) = 2.74, p=.102, ηp2 =.038, though the interaction between the proportion of incongruent trials and Flanker Type approached significance, F(1,70) = 3.05, p=.085, ηp2 =.042. Incongruent trials had a larger effect on accuracy when they were less frequent (Mcongruent=.98; Mincongruent=.86; MDifference=.12) compared to when they were more frequent (Mcongruent=.98; Mincongruent=.93; MDifference=.05).

Fig. 6

Mean accuracy for congruent (green) and incongruent (red) trials as a function of the degree of conflict. The error bars are not visible in the congruent condition because the standard error values for this condition were low (close: .002; far: .004)

For the analysis of participants’ response times, only accurate trials were included. The analysis revealed significant effects of Flanker Type, F(1,70) = 63.35, p<.001, ηp2 =.483, Flanker Distance, F(1,70) = 134.45, p<.001, ηp2 =.658, and an interaction between these factors, F(1,70) = 40.30, p<.002, ηp2 =.365. Participants were faster for congruent trials (392 ms) than incongruent trials (469 ms), and for trials in which the flankers were far (372 ms) than trials in which the flankers were close (487 ms). Simple effects tests on the interaction between Flanker Type and Flanker Distance (Fig. 7) suggested that there was less response interference from the flankers for far-flanker conditions (Mcongruent=364 ms; Mincongruent=385 ms; MAbsDiff=21 ms), F(1,71) = 11.09, p<.001, ηp2 =.135, compared to close-flanker conditions (Mcongruent=420 ms; Mincongruent=553 ms; MAbsDiff=132 ms), F(1,71) = 57.41, p<.001, ηp2 =.447. Though the main effect of proportion of incongruent trials (20% or 80% of trials) was not significant, F(1,70) = .605, p=.439, ηp2 =.009, this factor interacted with Flanker Type, F(1,70) = 4.18, p=.045, ηp2 =.056. Follow-up tests showed that incongruent flanker trials were less disruptive to performance for the 80% incongruent group (Mcongruent=409 ms; Mincongruent=467 ms; MAbsDiff = 57 ms), F(1,71) = 23.86, p<.001, ηp2 =.251, compared to the 20% incongruent group (Mcongruent=374 ms; Mincongruent=471 ms; MAbsDiff = 97 ms), F(1,71) = 32.73, p<.001, ηp2 =.316.

Fig. 7

Mean response times (RTs) for congruent and incongruent trials plotted against low conflict (far flanker) and high conflict (close flanker) conditions

Finally, there was a marginal three-way interaction between Proportion Incongruent, Flanker Type, and Flanker Distance for RT, F(1,70) = 3.64, p=.06, ηp2 =.049. Within the low incongruent group, there was a steeper increase in RT for incongruent trials across flanker distance conditions (MClose=569 ms; MFar=374 ms; MAbsDiff=195 ms) compared to the high incongruent group (MClose=536 ms; MFar=397 ms; MAbsDiff=139 ms), while the effect of flanker distance for congruent trials was similar for the low (MClose=401 ms; MFar=349 ms; MAbsDiff=52 ms) and high (MClose=440 ms; MFar=378 ms; MAbsDiff=62 ms) proportion incongruent groups. These interactive effects suggest a more successful use of control, or greater adaptation to conflict, when incongruent trials were more frequent than when they were less frequent.

Discussion

In Experiment 2, we tested the relation between control used and control felt using a paradigm in which the source of task demand, response conflict, differed from Experiment 1. We manipulated the degree of response conflict in two ways. The first was to vary the distance between targets and flankers to affect the extent to which flanking objects interfered with the target. The second was to vary the proportion of incongruent trials in each block, depending on participants’ group. We found reliable effects of flanker type (congruent or incongruent) and flanker distance on participants’ performance using accuracy and response time as dependent measures.

In general, participants reported using more of their control for higher demand (close-flanker) conditions and less of their control for lower demand (far-flanker) conditions. Accordingly, participants felt less in control for high-demand conditions than low-demand conditions. Taken together, these results are qualitatively similar to the pattern of data from Experiment 1. However, the main effect of Question Type, in which participants in the Control Felt group gave higher scaled reports of control than the Control Used group, differed from Experiment 1. Said another way, compared to the aiming task, participants reported using less of their total control, though they felt more in control overall, during the flanker task. While this may have been the result of a scaling phenomenon dependent on the particular experimental contexts, it is worth noting that participants exercised control in the aiming task for a longer period of time. The average movement time for the aiming task (839 ms) was nearly twice as long as the average response time in the flanker task. Moreover, control in the aiming task presumably consisted of multiple acts of control – that is, multiple adjustments to the aiming trajectory based on visual information about the target (Elliott et al., 2010; Elliott, Helsen, & Chua, 2001). Thus, participants in the aiming task had to exercise control through more frequent adjustments, and perhaps for a longer total duration, which may have led them to feel that they used more control.

Finally, we acknowledge the possibility that reports of control were affected by the feedback given at the end of each block, as performance is an important cue for metacognitions of control (e.g., Metcalfe & Greene, 2007). Specifically, because participants performed quite well overall, it is possible that the external indication of a high level of performance led to distortions in feelings of using or being in control. Perhaps this accounted for the higher reports of feeling in control and lower reports of using control compared to Experiment 1. While we cannot conclusively rule this out, the high level of performance in Experiment 1 casts doubt on this hypothesis. Participants performed quite well on the aiming task and received no explicit feedback about negative task performance (clicks outside of the target) other than the lack of color change of the target circle. Conversely, participants in the flanker task did receive explicit feedback about negative task performance. Thus, there is little reason to think that the feedback participants received in Experiment 2 led them to feel more in control, or to feel that they used less control, than participants in Experiment 1.

General discussion

In highly demanding tasks, such as driving in inclement weather conditions, we might feel as if we have used a lot of control while still feeling very little control over the task. This raises the possibility that the amount of control one feels they have used and how much in control one feels are dissociable components of the experience of agency. Here we examined the relation between reports of control used and control felt, varying the source of demand across two experiments. In Experiment 1, we varied task demands by manipulating aiming difficulty. In Experiment 2, we manipulated the degree of response conflict in a flanker task. The results were consistent with the prediction above: as task demands increased, participants reported using more of their control while feeling less in control. There was, however, a difference between the two experiments. Compared to the aiming task, participants reported feeling more in control while using less control for the flanker task. In the previous section, we suggested that control in the aiming task may have consisted of more frequent acts of control (adjustments to the aiming trajectory) over a longer duration. Thus, it makes sense that participants reported using more control in that experiment.

The latter point raises deeper questions about control-related differences between the aiming and flanker tasks. Models of aiming have suggested that the control of aiming consists of both discrete and continuous components. Aiming movements begin with a discrete, open-loop initial impulse from the start position to the vicinity of the target. This initial movement toward the target is followed by a more continuous, closed-loop homing-in phase, in which visual feedback is used to make adjustments to the aiming trajectory during the final approach of the target (Elliott, Helsen, & Chua, 2001). Adjustments to control in the flanker task are comparatively more discrete. According to models of cognitive control (Botvinick et al., 2001; Dreisbach & Fischer, 2015), the detection of conflict acts as a signal for the recruitment of additional control-related resources. The additional control serves to reduce the effect of conflict during subsequent performance. In the flanker task, this results in increased top-down control to the response associated with the center target, thereby raising the probability that it, rather than the response indicated by flanking distractors, will be performed (Botvinick et al., 2001). Thus, control in the flanker task differs from the aiming task in that it lacks a more continuous, closed-loop phase of control – at least within a given trial. This is not to suggest that more continuous adjustments do not occur across multiple trials (e.g., Eriksen & Schultz, 1979). Taken together, this evidence suggests important differences in control between the aiming task and flanker task, at least in terms of the frequency and duration of control, and perhaps in terms of the mechanism as well.

In spite of these differences in the nature of control, reports of control were qualitatively similar across experiments. A motivation for developing the “control used” question in particular was to compare experienced control to the predictions from models of cognitive control. In general, our participants reported using a greater amount of control as task difficulty increased. Moreover, and in line with the frequency with which control needed to be adjusted in either task, participants reported using more of their control in the aiming task. These results suggest that experienced control tracks actual control. That is, participants strategically adjusted their control in response to task demands in a way that was accessible to metacognition.

Another motivation for developing the ‘control used’ question was to attempt to resolve conflicting results in previous studies of task demand and the experience of agency, which have reported both increases and decreases in judgments of agency. Earlier we suggested that attributions of agency, or judgments about the extent to which one has caused an outcome compared to another agent, are more closely related to reports of control used. It is possible that participants make these attributions of agency by considering how much control they felt that they used to bring about an outcome. Consistent with this idea, studies that have used attributions of agency have reported increases, rather than decreases, in feelings of authorship for more demanding conditions (Damen et al., 2014b), as we found here for reports of control used. Thus, we suggest that these conflicting results can be explained in part by the inverse association between control used and control felt.

Interestingly, a small number of studies using dual-task paradigms have reported decreases in attributions of agency when participants completed a primary task while maintaining a concurrent memory load (Hon, Poh, & Soon, 2013; Renes, van Haren, & Aarts, 2015; Wen, Yamashita, & Asama, 2016). As Hon (2017) has pointed out, the effect of demand on attributions of agency might depend on where attention is directed in relation to the primary task. Task-related demand may increase feelings of authorship due to increased attention to the primary task. Task-unrelated demand, on the other hand, might disrupt processes relevant to the comparator model of agency (Blakemore, Wolpert, & Frith, 2002) by occupying resources necessary for the formation of a forward model (von Holst & Mittelstaedt, 1950) – a representation of the predicted sensorimotor effects of an action – as well as the subsequent comparisons between predicted and actual sensorimotor effects (Wen, Yamashita, & Asama, 2016; Renes, van Haren, & Aarts, 2015).

Feelings of being in and using control are relevant to a number of other subjective states, including flow and effort. Vuorre and Metcalfe (2016) investigated the relation between feelings of being in control and feelings of flow, which are characterized by intense concentration, a loss of reflective self-consciousness, a distortion of temporal experience, and intrinsic reward for the task at hand (Nakamura & Csikszentmihalyi, 2009). Using a task similar to that of Metcalfe and Greene (2007), in which participants caught targets falling at different rates, the authors discovered an interesting dissociation between feelings of being in control and feelings of flow. Feelings of control decreased monotonically with the increased rate of falling targets. Feelings of flow, on the other hand, peaked around midrange target speeds. This suggests that feelings of control are based more heavily on inferences about performance, as participants performed better when targets fell more slowly, while flow appears to be based on an optimal balance between perceived skill and performance (Kennedy et al., 2014). While it is unclear whether feelings of using or being in control inform feelings of flow, or vice versa, the relation between flow and task demand suggests an interesting possibility. Feelings of flow would likely peak at mid-range task difficulties, or the point at which reports of control used and control felt are most similar. In accord with the balance plus hypothesis (Kennedy et al., 2014), this would suggest that feelings of flow are strongest when there is a balance between how in control an individual feels over a task and how much control they feel they are using to maintain performance.

Additionally, these reports of control – particularly the control used question – seem closely related to effort. Similar to metacognitions of control (e.g., Metcalfe & Greene, 2007), recent work has suggested that judgments of effort can be conceptualized as general metacognitive evaluations that draw on performance-related variables, such as accuracy and task time (Dunn, Lutes, & Risko, 2016; Risko & Dunn, 2015). Moreover, expended effort can affect attributions of agency (e.g., Minohara et al., 2016). Because we have suggested that attributions of agency might be made, in part, by assessing how much control one has used for a given outcome, it is possible that reports of control used are informed by, or related to, feelings about effort. To the extent that response time can be used as an index of effort (Gray et al., 2006; Kool et al., 2010; Potts, Pastel, & Rosenbaum, 2018; but see Dunn, Lutes, & Risko, 2016; Risko & Dunn, 2015), our results are generally consistent with a close link between effort and reports of control.

So far, we have focused on explicit reports of agency, as they relate more closely to our investigation of reports of using and feeling in control. However, discrepant relations involving task demand have also been shown using an implicit measure thought to be related to the experience of agency: intentional binding (IB; for a review, see Moore & Obhi, 2012). Intentional binding refers to the subjective compression of the temporal interval (Haggard, Clark, & Kalogeras, 2002) or physical distance (Kirsch, Pfister, & Kunde, 2016) between intentional actions and subsequent outcomes. Demanet and colleagues (2013) reported stronger intentional binding effects, and so shorter subjective durations between actions and outcomes, when participants completed an IB task while pulling on a high-resistance band. Howard, Edwards, and Bayliss (2016) have shown the opposite pattern – longer subjective durations between actions and outcomes for more effortful conditions – using a similar paradigm. Similar decreases in the strength of IB with higher demand have been reported in a response conflict paradigm (Vastano, Pozzo, & Brass, 2017). As Howard et al. (2016) have suggested, the divergent results may be due to differences in how perceived time was reported. Demanet et al. (2013) asked participants to report perceived time by estimating where the hand on a clock face had been at the onset of action and outcome, a method that captures shifts in the temporal position of the action or outcome. Studies that have reported decreases in IB with increased demand have asked participants to report the duration between the action and subsequent outcome, which emphasizes the relation between actions and outcomes. These discrepancies highlight the need for careful consideration of how agency is reported, at both the implicit and explicit level.

Finally, although we found an inverse relation between reports of control used and control felt, the relation between these reports of control could have taken many different forms. For example, an expert gymnast presumably uses a great deal of control to pull off advanced acrobatics, but probably also feels very much in control due to previous experience. Moreover, a person who casually tosses an object toward a target, thereby using little control, would probably not feel much control over its trajectory. Future research is needed to further elucidate the relation between feelings of being in and using control across a broader array of tasks.