Behavior Research Methods, Volume 50, Issue 5, pp 2097–2110

Stuck at the starting line: How the starting procedure influences mouse-tracking data

  • Stefan Scherbaum
  • Pascal J. Kieslich

Abstract

Mouse-tracking is an increasingly popular method to trace cognitive processes. As is common for a novel method, the exact methodological procedures employed in an individual study are still relatively idiosyncratic, and the effects of different methodological setups on mouse-tracking measures have not been explored so far. Here, we study the impact of one commonly occurring methodological variation, namely whether participants have to initiate their mouse movements to trigger stimulus presentation (dynamic starting condition) or whether the stimulus is presented automatically after a fixed delay and participants can freely decide when to initiate their movements (static starting condition). We compared data from a previous study in which participants performed a mouse-tracking version of a Simon task with a dynamic starting condition to data from a new study that employed a static starting condition in an otherwise identical setup. Results showed reliable Simon effects and congruency sequence effects on response time (RT) and discrete trial-level mouse-tracking measures (i.e., average deviation) in both starting conditions. In contrast, within-trial continuous measures (i.e., extracted temporal segments) were weaker and occurred in a more temporally compressed way in the static compared to the dynamic starting condition. This was in line with generally less consistent movements within and across participants in the static compared to the dynamic condition. Our results suggest that studies that use within-trial continuous measures to assess dynamic aspects of mouse movements should apply dynamic starting procedures to enhance the leakage of cognitive processing into the mouse movements.

Keywords

Mouse-tracking · Methodology · Boundary conditions · Simon task

Stuck at the starting line: How the starting procedure influences mouse-tracking data

To understand how the cognitive system brings forth an astonishing spectrum of behavior, the study of cognitive processes is a central endeavor. The tracing of cognitive processes has been an important tool, starting with process tracing methods such as verbal protocol analyses (Ericsson & Simon, 1984; Newell & Simon, 1972), complemented later by objective measures, such as eye-tracking. The latter method allowed researchers to trace cognitive processes from behavior instead of relying on introspective self-reports. In recent years, a further method extended the arsenal of process tracing methods: Mouse movement tracking offers a simple way to trace participants’ cognitive processing while they make choices and execute response movements. The central assumption behind mouse-tracking is that cognitive processing is continuously revealed in hand (and mouse) movements (Spivey, 2007; Spivey & Dale, 2006; Spivey, Grosjean, & Knoblich, 2005). In turn, analyses of mouse movement data can be used to make inferences about the development of the cognitive processes leading up to the final decision. The advantages of mouse-tracking are manifold: the hardware is cheap, mouse movement recording can be implemented in most experimental software, and most participants are highly familiar with moving a computer mouse. Hence, mouse-tracking has flourished in recent years (for a review, see Freeman, Dale, & Farmer, 2011), finding application in studies of language and semantic processing (Dale, Kehoe, & Spivey, 2007; Dshemuchadse, Grage, & Scherbaum, 2015; Spivey et al., 2005), conflict resolution (Scherbaum, Dshemuchadse, Fischer, & Goschke, 2010), and value-based decision making (Dshemuchadse, Scherbaum, & Goschke, 2013; Kieslich & Hilbig, 2014; Koop & Johnson, 2013; Scherbaum, Dshemuchadse, Leiberg, & Goschke, 2013; van Rooij, Favela, Malone, & Richardson, 2013).

As is typical for an emerging method, a variety of methodological approaches can be found that vary between application domains and even between research groups within the same domain. For example, in some mouse-tracking studies, participants need to start a trial actively by initiating a mouse movement that, in turn, starts the presentation of the imperative stimulus (e.g., Dshemuchadse et al., 2013; Scherbaum et al., 2010), while in other studies, participants respond by starting their movement after the imperative stimulus has already been presented (e.g., Dale et al., 2007; Kieslich & Hilbig, 2014; Koop & Johnson, 2011). Inevitably, such methodological differences pose two challenges to the community of mouse-tracking users: First, implementing a study becomes a relatively idiosyncratic process in which a researcher has to weigh different methodological options to the best of her knowledge. Second, without systematic investigation, it remains unclear to what extent methodological differences influence the results with respect to the posed research question (Fischer & Hartmann, 2014). Therefore, it seems of urgent importance to start investigating how far methodological differences influence the results of mouse-tracking studies, first, to allow for a consistent interpretation and comparability of studies employing different methodological setups, and, second, as a basis for developing methodological standards as they are common for other process tracing techniques, for example, electroencephalography or eye-tracking.

Here, we present a first, humble step in this direction by comparing how differences in the way participants start a trial influence mouse-tracking data and results. In this regard, two common approaches are compared: (1) a “dynamic starting procedure” in which participants have to initiate a movement first to trigger stimulus presentation, and (2) a “static starting procedure” in which a stimulus is presented after a fixed time interval and participants can freely initiate their movement. How far these (and other) differences influence data quality – especially consistency and reliability – is an ongoing debate in the community, though one still conducted mainly at conferences and meetings.

To study differences in the starting procedure, we used the Simon task, a paradigm that is well established in cognitive psychology. In this task, participants have to select one of two possible response options depending on the magnitude of a number shown on the screen (e.g., the left option if the number is smaller than 5, otherwise the right), but have to ignore the location of the number on the screen (e.g., left or right). This arrangement can lead to two types of trials. In conflict trials, the direction indicated by number magnitude (e.g., left) differs from the location of the number on the screen (e.g., right). In non-conflict trials, the direction indicated by number magnitude corresponds to the location of the number on the screen. Two reliable effects are commonly observed in the Simon task. The Simon effect refers to slower response times in conflict trials compared to non-conflict trials. The congruency sequence effect refers to a decrease in the Simon effect if the current trial was preceded by a conflict trial compared to a preceding non-conflict trial. In our original study, we investigated these effects and their dynamics via mouse-tracking using a dynamic starting procedure (Scherbaum et al., 2010). Participants clicked on a box at the bottom-center of the screen and then started to move the mouse cursor upwards. After meeting a movement threshold, the number was presented, so that participants had to select their left-right response while already moving. We chose this procedure, first, to ensure that the cognitive processes influencing response selection leaked as strongly as possible into the mouse movements and, second, to establish a high level of consistency within and across participants regarding the movements at the start of the trial. In that study, we found that the Simon effect and the congruency sequence effect affected mouse movements: mouse movements were more curved toward the incorrect response option in conflict trials, and this conflict effect was reduced if the previous trial also was a conflict trial. We further analyzed the timing profile of these influences, determining when and how strongly congruency and congruency sequence influenced mouse movement direction. The pattern of results could be replicated across two consecutive studies, speaking for the robustness of both the effects per se and their dynamics.

Hence, these effects and the results of the original study offer an ideal platform to investigate to what extent differences in the starting condition, that is, a static versus a dynamic starting procedure, influence the consistency of movements within and across participants, and to what extent these properties of the data influence different mouse-tracking measures. Currently, many mouse-tracking studies rely on discrete measures of effects on the trial level, calculating initiation times, movement times, movement deviations, or the number of changes in movement direction for statistical analysis (Freeman & Ambady, 2010). As many discrete measures integrate information over the course of the whole trial, they should be relatively robust against changes in design. However, other groups have gone further and analyzed the movements as time series within the trial (Dshemuchadse et al., 2013; Scherbaum et al., 2010, 2013; Sullivan, Hutcherson, Harris, & Rangel, 2015), similar to the analysis of EEG. Due to their higher temporal resolution, within-trial continuous measures should be more sensitive to changes in the setup or procedure of mouse-tracking.

Taken together, in a mouse movement version of the Simon task, we will investigate to what extent a specific change in the methodological setup influences the consistency of mouse-tracking data. Specifically, we will examine to what degree a static starting condition, in which the stimulus is presented automatically after a fixed delay and participants can freely decide when to initiate their movements, might decrease data quality compared to a dynamic starting condition that requires participants to initiate mouse movements in order to trigger stimulus presentation. We combined data from our original study (Scherbaum et al., 2010, Experiment 2), in which we used the dynamic starting condition, with data from a new sample of participants who performed the identical task, except that we used a static starting condition. We expected (1) that cognitive effects on discrete movement measures would be only slightly influenced by differences in the starting condition, whereas (2) cognitive effects on within-trial continuous movement measures would be larger and more reliable in the dynamic starting condition than in the static starting condition. Underlying this latter phenomenon, we expected (3) that the consistency of movements within trials, across trials, and across participants would be higher in the dynamic starting condition than in the static starting condition.

Method

Participants

Twenty right-handed students (17 female, mean age = 20.5 years) of the Technische Universität Dresden, Germany, participated in the experiment. In the original study, 20 right-handed students (17 female, mean age = 21.1 years) of the Technische Universität Dresden had participated. All participants had normal or corrected-to-normal vision. They gave informed consent to the study and received either class credit or 5€ payment.

Apparatus and stimuli

The apparatus and stimuli in the new experiment were identical to the apparatus and stimuli in the original experiment. Target stimuli (numbers 1–4 and 6–9) were presented in white on a black background on a 17-in. screen running at a resolution of 1,280 × 1,024 pixels (75-Hz refresh rate). They had a width of 6.44° and a horizontal distance to the screen center of 20.10°. Except for one procedural difference (see below), the setup of the current study was the same as in the original study. Response boxes (11.55° in width) were presented at the top left and top right of the screen. As presentation software, we used Psychophysics Toolbox 3 (Brainard, 1997; Pelli, 1997) in Matlab 2006b (The MathWorks Inc., Natick, MA, USA), running on a Windows XP SP2 personal computer. Responses were carried out by moving a standard computer mouse (Logitech Wheel Mouse USB). In the driver settings, non-linear acceleration (the “optimize movements” option) was switched off to enable a linear ballistic arm movement and to ensure that the upwards movement (within the trial) and the downwards movement (in the inter-trial interval) cancelled each other out. Furthermore, the mouse speed was set to one-quarter of the maximum speed, a setting that ensured that participants could reach the target box with one continuous upwards movement while at the same time keeping the movement range as large as possible. Mouse trajectories were sampled at a frequency of 92 Hz and recorded from stimulus presentation until response in each trial.

Procedure

The procedure in the new experiment was identical to the procedure of the original experiment, with the exception of the starting condition. Participants were asked to move the cursor into the upper left response box for digits smaller than five and into the upper right response box for digits larger than five. Each trial consisted of three stages: the alignment stage, the start stage, and the response stage. In the alignment stage, participants had to click on a red box (11.55° in width) at the bottom of the screen within a deadline of 1.5 s. This served to align the starting area for each trial. After clicking on this box, the start stage began and two response boxes were presented in the left and right upper corners of the screen. The procedure of the start stage differed between the new experiment, in which we implemented a static starting condition, and the original experiment, in which we had implemented a dynamic starting condition. In the static starting condition, the start stage simply lasted 200 ms (the average duration of the start stage in the original experiment that used the dynamic starting condition) and participants had to wait for the start of the response stage. In contrast, in the dynamic starting condition, participants were required to start moving the mouse upwards within a deadline of 1.5 s. Specifically, the response stage only started after participants had moved the mouse upwards for at least 4 pixels in each of two consecutive time steps. Usually, this procedure is applied to force participants to be already moving when entering the decision process, to ensure that they do not decide first and only then execute the final movement (Scherbaum et al., 2010). In the response stage, the imperative stimulus (the number) was presented. For this stage, participants in both starting conditions were instructed to respond as quickly and accurately as possible and to move the mouse continuously upwards once they had initialized their movement.

The trial ended after moving the cursor into one of the response boxes within a deadline of 2 s (see Fig. 1). If subjects missed the respective deadline in one of the three stages, the next trial started with the presentation of the red start box. Response times (RTs) were measured as the duration of the response stage, reflecting the interval between the onset of the target stimulus and the arrival of the mouse cursor in the response box area.
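
To make the start-stage criterion concrete, the following minimal sketch shows how such a dynamic start trigger can be detected from sampled cursor positions. It is an illustration only (R code with a hypothetical helper name; the original experiment was implemented in Matlab, and the y-coordinate is assumed here to increase upwards):

```r
# Minimal sketch of the dynamic start criterion (hypothetical helper name;
# the original experiment was implemented in Matlab). Assumes cursor
# samples at ~92 Hz and a y-coordinate that increases upwards.
dynamic_start_sample <- function(y, threshold = 4) {
  dy <- diff(y)                            # upwards displacement per time step
  ok <- dy >= threshold                    # steps with >= 4 px upwards movement
  both <- which(ok[-length(ok)] & ok[-1])  # two consecutive criterion steps
  if (length(both) == 0) return(NA_integer_)
  both[1] + 2L                             # sample at which stimulus onset is triggered
}

dynamic_start_sample(c(0, 1, 2, 7, 12, 20, 30))  # criterion met at sample 5
```
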
Fig. 1

Setup of the experiment: Participants had to click with the mouse cursor on a red box at the bottom of the screen. After clicking, response boxes appeared at the upper edge of the screen. In the static starting condition, the stimulus was presented 200 ms afterwards. In the dynamic starting condition, participants had to move the cursor upwards in order to start stimulus presentation – only after reaching a movement threshold was the stimulus presented. To respond, participants had to move the mouse cursor to the left or the right response box depending on the magnitude of the number (left response if < 5, right response if > 5)

After onscreen instructions and demonstration by the experimenter, participants practiced 40 trials (10 trials with feedback and no deadline for any stage of a trial, 10 trials with feedback and deadline, and 20 trials without feedback about timing errors and with deadline).

Design

The Simon task used here is based on the conflict between the direction indicated by the number (left vs. right) and the position on screen where the number was presented (left vs. right). Hence, we varied these properties orthogonally for the current trial and for the preceding trial, resulting in the following independent variables: the direction and location of the number in the current trial (direction N [left vs. right] and location N [left vs. right]), and the direction and location of the number in the previous trial (direction N-1 [left vs. right] and location N-1 [left vs. right]). This resulted in four combinations for the current trial and four combinations for the previous trial. The sequence of trials was balanced within each block by pseudo-randomization, resulting in a balanced Trial N (4) × Trial N-1 (4) × trial repetition (16) transition matrix. This way, we obtained a balanced sequence of 256 trials with systematically manipulated congruency of direction/location within the current trial (congruency N), congruency of direction/location within the previous trial (congruency N-1), and sequences of designated responses. Three such sequences were generated, resulting overall in three blocks with 256 trials per block.

Data preprocessing and statistical analyses

We excluded erroneous trials and trials following an error (4.2%). To avoid any bias in the data analysis of the two methodologically different data sets, we refrained from the outlier exclusion performed in the original study. Mouse trajectories were remapped so that all trajectories end in the left response box and were horizontally aligned to a common starting position (the horizontal middle of the screen corresponds to 0 pixels, and values increase towards the right, i.e., towards the non-chosen option). Each trajectory was normalized into 100 equal time steps (following Spivey et al., 2005).
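
For illustration, the following minimal sketch implements these preprocessing steps for a single trial (R code with a hypothetical helper name and data layout; the actual preprocessing was performed in Matlab and with the mousetrap R package):

```r
# Minimal sketch of the preprocessing of a single trial (hypothetical
# helper name and data layout). x is the horizontal position relative to
# the screen center (0 px), increasing towards the right.
preprocess_trial <- function(t, x, y, chose_right, nsteps = 100) {
  if (chose_right) x <- -x        # remap so all trajectories end on the left
  x <- x - x[1]                   # align to a common horizontal start (0 px)
  tn <- seq(min(t), max(t), length.out = nsteps)
  data.frame(step = seq_len(nsteps),            # 100 equal time steps
             x = approx(t, x, xout = tn)$y,     # linear interpolation
             y = approx(t, y, xout = tn)$y)
}
```
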

Data preprocessing and aggregation were performed in Matlab 2010a (The MathWorks Inc.) and in R (R Core Team, 2016) using the mousetrap R package (Kieslich, Wulff, Henninger, Haslbeck, & Schulte-Mecklenbeck, 2016). Statistical analyses were performed in Matlab, R, and JASP 0.7.5.6 (JASP Team, 2016).

Results

Comparison of groups

Since our analysis builds on two independent groups of participants from different studies, we first checked for differences between these groups other than the starting condition. All tested individuals were right-handed, and the groups showed no significant differences in age, t(38) = 0.881, p = .384, or in sex (both groups contained 17 female and 3 male participants). All other descriptive variables also showed no significant differences (all p > .125; see Supplementary Material). To check for general differences in speed in the task, we analyzed the inter-trial interval (ITI), that is, the time between reaching the response box in the previous trial and clicking into the start box to begin the next trial. As we do not see a methodological reason why a difference in the setup of the starting condition should affect the ITI, we used it as a general indicator of speed differences in the task that are related to differences between participant groups. We found no significant difference between groups for the ITI, t(38) = 1.51, p = .140. Taken together, we found no significant differences between the two groups on indicator variables that should (or could) not be affected by the starting condition.

Cognitive effects

Next, we were interested in how far the study of cognitive processes via mouse movements would be influenced by the starting condition. We expected that discrete measures – which describe the whole movement in a trial by one value – would be relatively robust against differences in the starting condition (hypothesis 1), whereas effects for continuous measures – which capture the variation of the movement at each time point – would be weaker in the static starting condition compared to the dynamic starting condition (hypothesis 2).

Discrete effects

We first inspected discrete measures for the Simon effect (congruent vs. incongruent trials, reflected in the factor congruency N) and congruency sequence effects (the modulation of the Simon effect by the previous trial’s congruency, reflected in the interaction congruency N × congruency N-1). As dependent variables, we computed the response time and the average deviation (AD) per condition and participant. AD is the average perpendicular deviation between the actual movement and a hypothetical straight line from the start to the end point of the movement. For both measures, we conducted repeated-measures analyses of variance (ANOVA) with the within-subject factors congruency N and congruency N-1 and the between-subjects factor starting condition (dynamic vs. static).
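
As an illustration of the AD computation, consider the following minimal R sketch (hypothetical helper name; the mousetrap R package cited above provides AD among its trajectory measures via mt_measures):

```r
# Minimal sketch of the AD computation (hypothetical helper name). AD is
# the mean signed perpendicular distance of all points of a time-normalized
# trajectory from the straight line connecting its start and end point.
average_deviation <- function(x, y) {
  dx <- x[length(x)] - x[1]
  dy <- y[length(y)] - y[1]
  # signed perpendicular distance: 2D cross product divided by line length
  d <- (dx * (y - y[1]) - dy * (x - x[1])) / sqrt(dx^2 + dy^2)
  mean(d)
}
```
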

ANOVA on AD revealed a significant main effect of the starting condition, F(1,38) = 59.08, p < .001, ηp² = 0.61, with higher AD values in the dynamic than in the static condition. In addition, the main effects of congruency N, F(1,38) = 77.84, p < .001, ηp² = 0.67, and congruency N-1, F(1,38) = 23.35, p < .001, ηp² = 0.38, were significant, as were the interactions congruency N × congruency N-1, F(1,38) = 94.05, p < .001, ηp² = 0.71, and congruency N-1 × starting condition, F(1,38) = 8.17, p = .007, ηp² = 0.18. With regard to the variability of the theoretically important effects, neither the Simon effect (congruency N) nor the congruency sequence effect (congruency N × congruency N-1) interacted significantly with the starting condition, congruency N × starting condition, F(1,38) = 3.07, p = .09, ηp² = 0.07, and congruency N × congruency N-1 × starting condition, F(1,38) = 0.65, p = .43, ηp² = 0.02. However, both effects were descriptively larger in the dynamic starting condition (congruency N: ηp² = 0.73; congruency N × congruency N-1: ηp² = 0.77) than in the static starting condition (congruency N: ηp² = 0.59; congruency N × congruency N-1: ηp² = 0.68). Hence, in both conditions, mean AD showed the expected Simon effects and congruency sequence effects, though with descriptively lower effect sizes in the static starting condition than in the dynamic starting condition (Fig. 2, top panels).
Fig. 2.

Average deviation of mouse movements (top panels) and response times (bottom panels) as a function of previous trial congruency and current trial congruency separately for the dynamic and static starting condition. Error bars indicate 1 SE

Response time (RT) was calculated as the time difference between the moment of stimulus onset and the moment when the mouse cursor reached the response box. (Note that elsewhere this measure is also called movement time; since in the dynamic starting condition participants are already moving, we chose to refer to this measure as RT, as it includes the whole process of response selection.)

ANOVA revealed significant main effects of congruency N, F(1,38) = 108.03, p < .001, ηp² = 0.74, congruency N-1, F(1,38) = 4.93, p = .03, ηp² = 0.11, and starting condition, F(1,38) = 8.30, p = .006, ηp² = 0.18, and a significant interaction congruency N × congruency N-1, F(1,38) = 156.95, p < .001, ηp² = 0.81. The other interactions were not significant: congruency N × starting condition, F(1,38) = 0.07, p = .79, congruency N-1 × starting condition, F(1,38) = 3.12, p = .09, and congruency N × congruency N-1 × starting condition, F(1,38) = 1.36, p = .25. Looking at the effect sizes in each starting condition for the Simon effect (congruency N) and congruency sequence effects (congruency N × congruency N-1) indicates similar effect sizes for the dynamic starting condition (congruency N: ηp² = 0.71; congruency N × congruency N-1: ηp² = 0.80) and the static starting condition (congruency N: ηp² = 0.77; congruency N × congruency N-1: ηp² = 0.81). Hence, in both conditions, RT showed the expected Simon effects and congruency sequence effects with similar effect sizes (Fig. 2, bottom panels).

Taken together, the discrete measures show the expected robustness against differences in the starting condition, though for AD descriptively weaker effect sizes were found in the static starting condition.

Continuous effects

In the next step, we inspected continuous mouse-tracking measures. We expected these measures to be more strongly influenced by differences in the starting condition than discrete measures, since they do not integrate information across the whole trial but are based on the instantaneous information in each time step. Hence, if one starting condition leads to lower consistency of movements, this should increase the noise in the data and particularly influence measures with a higher temporal resolution. Hypothesis 2 stated that effects on continuous measures would be weaker and less reliable in the static starting condition than in the dynamic starting condition. Several mechanisms are assumed to contribute to this in the static starting condition: First, influences on mouse movements should show a time lag and be compressed at the end of the trial due to the prolonged start of the main movement. Second, effects of cognitive processes should be weaker, because these processes can take place before the movement is initiated and hence only partly leak into the movement. Third, the lower data quality further decreases reliability by inducing noise into the strength and the timing of processes.

Visual inspection of heatmaps (Fig. 3) of mouse movements along the X-axis over time reveals a smoother, though more widely spread, distribution of movements in the dynamic starting condition (Fig. 3, left) for both congruent and incongruent trials, compared to the trials in the static starting condition (Fig. 3, right). Averaged mouse movements for congruency N and congruency N-1 indicate a similar pattern of effects in the dynamic and the static starting condition, though time-lagged and less pronounced in the static starting condition compared to the dynamic starting condition (Fig. 4).
Fig. 3

Heatmaps of pooled mouse movements along the X-axis as a function of time and current trial congruency separately for each starting condition

Fig. 4

Average X coordinate per time step depending on congruency and starting condition. Coordinates were first averaged within and then across participants. Confidence bands indicate 1 SE

For the statistical analysis of movement dynamics, we performed time-continuous multiple linear regression on mouse movement angles on the X/Y plane, as done in the original study (for an analysis with linear mixed models leading to comparable results, see Supplementary Material). Based on the remapped, time-normalized trajectory data, the movement angle was calculated as the angle relative to the Y-axis for each difference vector between two time steps. This measure has two advantages over the raw trajectory data. First, it better reflects the instantaneous tendency of the mouse movement, since it is based on a differential measure rather than on the cumulative effects in raw trajectory data. Second, it integrates the movement on the X/Y plane into a single measure. Based on this angle, we then dissected the influences of the independent variables on mouse movements within a trial. We applied a three-step procedure. In the first step, we coded three predictors for all trials of each participant: location N (congruent/incongruent), congruency sequence (same/different), and previous response (same/different). Location N reflects the influence of the current stimulus location – the information that should be ignored and that induces the Simon effect. Congruency sequence reflects the expected influence of the previous trial’s congruency on the strength of the potentially conflict-inducing location N influence of the current trial. Hence, it reflects the interaction congruency N × congruency N-1, predicting how strongly the mouse trajectory is deflected in the direction of the current stimulus location depending on the previously induced conflict. Previous response reflects a potential bias by the previously performed response. To provide comparable beta weights in the next step, we coded the predictors with values of -1 and 1. In the second step, we calculated multiple regressions with these predictors (angles were available for 99 time steps, leading to 99 multiple regressions) on the trajectory angle, which had also been standardized for each participant from -1 to 1 to provide comparable results. This yielded three time-varying beta weights (3 weights × 99 time steps) for each participant. Finally, in the third step, we computed grand averages of these three time-varying beta weights, yielding a time-varying strength-of-influence curve for each predictor (Fig. 5).
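
The following minimal R sketch illustrates this three-step procedure under a hypothetical data layout (predictor names and the matrix format are assumptions; the original analysis was implemented in Matlab):

```r
# Step 1: angles relative to the Y-axis from the difference vectors of a
# time-normalized trajectory (100 steps yield 99 angles).
movement_angle <- function(x, y) atan2(diff(x), diff(y))  # 0 = straight up

# Step 2: one multiple regression per time step for one participant.
# `angles`: trials x 99 matrix of standardized movement angles; `preds`:
# data frame with columns location_n, congruency_seq, prev_response,
# each coded -1 or 1 (hypothetical names).
time_varying_betas <- function(angles, preds) {
  sapply(seq_len(ncol(angles)), function(ts) {
    fit <- lm(angles[, ts] ~ location_n + congruency_seq + prev_response,
              data = preds)
    coef(fit)[-1]           # three beta weights for this time step
  })                        # 3 x 99 matrix of time-varying beta weights
}
# Step 3: grand-average these matrices across participants to obtain one
# time-varying strength-of-influence curve per predictor.
```
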
Fig. 5

Results of the time-continuous regression analysis. Beta weights indicate the strength of influence of each regressor on the mouse movement angle in the dynamic condition (left) and the static condition (right). Peaks are marked by diamonds indicating jack-knifed standard errors. Lines above the graphs indicate segments of beta weights that were significantly greater than zero (t-test, minimum of ten consecutive significant time steps)

We analyzed the dynamics of these three influences in two ways. First, we performed peak analysis, extracting strength and timing of peaks of the three influences via a jack-knifing procedure as has been used previously, for example, for peak detection in lateralized readiness potentials (Miller, Patterson, & Ulrich, 2001). We tested peak values and timing statistically with one-sided t-tests corrected for jack-knifing. Second, we detected significant temporal segments of influence by calculating t-tests against zero for each time step of the three time-varying beta-weights (Dshemuchadse et al., 2013; Scherbaum et al., 2010). We compensated for multiple comparisons of temporally dependent data by only accepting segments of more than ten consecutive significant t-tests (see Dale et al., 2007; Scherbaum, Gottschalk, Dshemuchadse, & Fischer, 2015, for a Monte Carlo analysis on this issue).
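
The segment detection can be sketched as follows (a minimal R illustration with a hypothetical helper name and data layout, using one-sided t-tests as in Fig. 5):

```r
# Minimal sketch of the segment detection (hypothetical helper name):
# one-sided t-tests against zero at each time step, keeping only runs of
# at least ten consecutive significant steps as a safeguard against
# multiple comparisons. `beta_mat`: participants x time steps matrix of
# beta weights for one predictor.
significant_segments <- function(beta_mat, alpha = .05, min_run = 10) {
  sig <- apply(beta_mat, 2, function(b) {
    t.test(b, mu = 0, alternative = "greater")$p.value < alpha
  })
  runs <- rle(as.vector(sig))
  ends <- cumsum(runs$lengths)
  starts <- ends - runs$lengths + 1
  keep <- runs$values & runs$lengths >= min_run
  data.frame(start = starts[keep], end = ends[keep])  # time step indices
}
```
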

Results of peak analysis are shown in Table 1. With regard to peak timing, the static and the dynamic starting condition show the same order of peaks of influences. However, the peaks in the static starting condition show a significant lag compared to the dynamic starting condition for location N, tj(38) = 3.61, pj < .001, d = 0.57, and the previous response, tj(38) = 2.19, pj = .02, d = 0.35, but not for congruency sequence, tj(38) = 0.71, pj = .15. Furthermore, the static starting condition shows a higher amount of noise. Hence, a repeatedly found effect (Scherbaum et al., 2010; Scherbaum, Frisch, Dshemuchadse, Rudolf, & Fischer, in press), the timing difference between the peaks for location N and congruency sequence, could not be replicated in the static starting condition, tj(19) = 0.96, pj = .12, while it was significant in the dynamic starting condition, tj(19) = 3.57, pj < .01.
Table 1

Results from peak analysis on beta weights from the continuous regression analysis, separately for each starting condition. Times (ms) represent the projection of time steps onto each participant’s mean RT. SE represents jack-knifed standard errors of the mean (see main text)

            Dynamic start                                          Static start
            Location N     Congruency     Previous        Location N     Congruency     Previous
                           sequence       response                       sequence       response
            M      SE      M      SE      M      SE       M      SE      M      SE      M      SE
Strength    0.41   0.06    0.10   0.01    0.11   0.03     0.26   0.04    0.08   0.01    0.06   0.02
Time step   38.95  1.67    49.95  3.77    1.00   0.00     48.10  1.90    57.10  9.34    23.85  10.43
Time (ms)   250    11      320    24      6      0        309    12      366    60      153    67

Concerning peak strength, the static starting condition showed a lower beta weight than the dynamic starting condition for location N, tj(38) = 2.06, pj = .025, d = 0.33, but not for congruency sequence, tj(38) = 1.09, pj = .11, or the previous response, tj(38) = 1.36, pj = .16.

Results of time segment analysis are shown in Table 2. In concordance with peak analysis, the dynamic starting condition yields more distinct time windows for all influences, especially showing less overlap between location N and congruency sequence (29 time steps, 186 ms) than the static starting condition (36 time steps, 231 ms). The larger overlap in the static starting condition is mainly caused by the pronounced time-lag of location N in the static starting condition compared to the dynamic starting condition. A similar lag is present for the influence of the previous response.
Table 2

Significant segments of beta weights from continuous regression analysis separately for each starting condition

            Dynamic start                                  Static start
            Location N   Congruency    Previous           Location N   Congruency    Previous
                         sequence      response                        sequence      response
Time step   [14, 58]     [29, 67]      [1, 19]            [29, 65]     [26, 71]      [1, 41]
Time (ms)   [89, 372]    [186, 429]    [0, 121]           [186, 417]   [166, 455]    [0, 263]

Movement consistency

The measures of process dynamics show a pronounced influence of the starting condition on all three cognitive effects, the Simon effect (as reflected in the influence of the current stimulus’ location), the congruency sequence effects, and biases due to the previously performed response. This indicates that within-trial continuous measures are less robust against changes in the starting condition than discrete measures. As stated in our third hypothesis, we expected mouse movements to be less consistent within trials, across trials, and across participants in the static starting condition compared to the dynamic starting condition.

To check whether the manipulation of the starting condition led to different starts of participants’ movements and less consistent mouse movements within trials, we visually inspected heatmaps of mouse movements along the Y-axis and velocity profiles – the speed of movement at each time step (measured as the Euclidean distance traveled [in px] divided by the time passed [in ms]) – both pooled across all participants (Fig. 6). Heatmaps of movements along the Y-axis indicated that participants in the dynamic starting condition moved smoothly and consistently upwards, whereas participants in the static starting condition often stayed at the bottom of the screen for more than half of the trial before moving upwards quickly in the second half of the trial. Velocity profiles corroborated this interpretation, with low but consistent movement speed in the dynamic starting condition and, in contrast, a strongly increasing and inconsistent movement speed in the static starting condition.
Fig. 6

Heatmaps of movements along the Y-axis (upper figures) and of movement speed (lower figures) as a function of time separately for each starting condition. Colors show the log-scaled probability of movements crossing the respective bin at a specific time step
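
The velocity profile underlying these heatmaps is a simple per-step computation; as a minimal R sketch (hypothetical helper name):

```r
# Minimal sketch of the velocity profile (hypothetical helper name):
# Euclidean distance traveled between consecutive samples divided by the
# time passed, i.e., px per ms if t is in milliseconds.
velocity_profile <- function(t, x, y) {
  sqrt(diff(x)^2 + diff(y)^2) / diff(t)
}
```
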

To quantify the consistency of movements within trials, we created a continuous movement index. This index is calculated for each trial as the correlation between the actual Y-axis position at each time step and a projected Y-axis position assuming a constant, straight upwards movement from the first to the last point of the trial. An index of 1 hence indicates a smooth and constant upwards movement. In concordance with the visual impression from the heatmaps, the movement index was significantly higher in the dynamic starting condition (M = 0.94, SE = 0.01) than in the static starting condition (M = 0.80, SE = 0.02), t(38) = 5.21, p < .001, d = 1.65. This indicates that the different start instructions indeed influenced how consistently participants moved at the start and across the whole trial.
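
A minimal R sketch of this index (hypothetical helper name):

```r
# Minimal sketch of the continuous movement index (hypothetical helper
# name): correlation between the observed Y position and a projected Y
# position under a constant upwards movement from the first to the last
# sample; an index of 1 indicates a smooth, constant upwards movement.
movement_index <- function(y) {
  projected <- seq(y[1], y[length(y)], length.out = length(y))
  cor(y, projected)
}
```
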

So far, all mouse movement analyses were based on the mouse movements recorded from stimulus presentation until response. However, given that in the static starting condition participants could freely decide when to initialize their movement, the movement could also have started after the stimulus was already presented (and, consequently, after tracking onset). Therefore, an alternative analysis approach in the static starting condition could focus only on the part of each trial after the movement has been initiated (Buetti & Kerzel, 2008). This could, in principle, increase the similarity of the analyzed parts between the dynamic and the static starting condition for the movement index and potentially also for the continuous measures. While restricting each trial to the part after movement initiation indeed improved the movement index in the static condition (M = 0.89, SE = 0.01), it was still significantly smaller than the movement index in the dynamic starting condition, t(38) = 1.97, p = .028, d = 0.62. Analyzing the restricted movements in the time-continuous regression analysis yielded worse results than the analysis of unrestricted movements reported above, with more widely spread peaks for the Simon effect and a loss of the influence of the previous response (see Supplementary Material).

To check the consistency of data across participants, we calculated the movement initiation time, a frequently used measure in mouse-tracking studies. Specifically, in the dynamic starting condition, movements were initiated before stimulus presentation and triggered stimulus presentation when the movement criterion was fulfilled (4 pixels in two consecutive time steps). We hence took the time difference between the click on the start box and the triggering of stimulus presentation by the mouse movement as the initiation time. In the static starting condition, the stimulus was presented 200 ms after the click on the start box and participants could freely decide when to initiate their movement. We hence determined the initiation time for each trial as the first time step after stimulus presentation in which participants had moved the mouse by more than 8 pixels (matching the criterion of the dynamic starting condition). We averaged the initiation times per participant and compared them between conditions. Initiation times in the dynamic starting condition (M = 0.19, SE = 0.01) were comparable to those in the static starting condition (M = 0.21, SE = 0.02), as also indicated by a t-test, t(38) = 0.51, p = .611. However, the static starting condition showed a significantly higher variance (SD = 0.09) than the dynamic starting condition (SD = 0.04), as indicated by Levene’s test, F(1, 38) = 8.64, p = .01. This indicates a lower consistency in movement initiation strategies across participants in the static compared to the dynamic starting condition.
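
For illustration, the initiation-time criterion in the static starting condition can be sketched as follows (R code with a hypothetical helper name and data layout):

```r
# Minimal sketch of the initiation-time criterion in the static starting
# condition (hypothetical helper name and data layout): the first time
# step after stimulus onset at which the cursor has moved by more than
# 8 px from its position at tracking onset.
initiation_time <- function(t, x, y, threshold = 8) {
  moved <- sqrt((x - x[1])^2 + (y - y[1])^2) > threshold
  idx <- which(moved)[1]
  if (is.na(idx)) NA_real_ else t[idx] - t[1]  # in the units of t
}
```
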

To assess the consistency of movements across trials, we calculated the bimodality coefficient of the distribution of AD and checked to what extent this index differed between groups. Specifically, we calculated the bimodality coefficient of each participant’s distribution of AD, which indicates how broadly, and potentially bimodally, AD is distributed across the trials of a participant.
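
A minimal R sketch of this coefficient (hypothetical helper name; the mousetrap R package also offers bimodality checks), using the common sample formula based on skewness and excess kurtosis, where values above approximately 0.555 are conventionally taken to suggest bimodality:

```r
# Minimal sketch of the bimodality coefficient (hypothetical helper name),
# based on sample skewness, sample excess kurtosis, and n.
bimodality_coefficient <- function(x) {
  n <- length(x)
  z <- (x - mean(x)) / sd(x)
  skew <- mean(z^3)               # sample skewness
  exkurt <- mean(z^4) - 3         # sample excess kurtosis
  (skew^2 + 1) / (exkurt + 3 * (n - 1)^2 / ((n - 2) * (n - 3)))
}
```
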

Distributions of AD differed between conditions, as the bimodality coefficient was higher in the static starting condition (M = 0.57, SE = 0.03) than in the dynamic starting condition (M = 0.41, SE = 0.02), t(38) = 4.92, p < .001, d = 1.56 (see Supplementary Material for histograms of AD per condition). This is also reflected in the plot of individual response trajectories, in which the dynamic starting condition shows a more coherent distribution of movements than the static condition, where many single movements leave the main area of movement (Fig. 7). More precisely, the dynamic condition shows a smooth spread of movements, while the static condition shows a combination of mainly straight movements and a few strongly curved trajectories.
Fig. 7

Plot of individual time-normalized trajectories per starting condition. Movements start in the bottom center of the screen and end in the upper left target box (as all trajectories were remapped to the left and their starting position was horizontally aligned)

Taken together, our analyses indicate that the static starting condition yielded less consistent movements within trials, across trials, and across participants. This is in line with, and may indeed be the underlying cause of, the weaker cognitive effects on the continuous measures observed in the static starting condition.

Discussion

The present study investigated to what extent methodological differences in the setup of mouse-tracking studies influence the consistency of mouse movements as well as the theoretically expected effects on mouse-tracking measures. In a mouse-tracking version of the Simon task, participants indicated their response by moving a computer mouse to a response box on the screen. The current study compared data from two experiments that varied in the way participants initialized their mouse movement. In a previously published experiment, a dynamic starting procedure was employed in which participants had to initialize their mouse movement in order to trigger stimulus presentation. In a new experiment, a static starting procedure was used in which the stimulus was presented after a fixed delay and participants could freely decide when to initialize their mouse movement. As expected, we found that the static starting procedure yielded less consistent movements than the dynamic starting procedure. Concerning the influence of the starting procedure on theoretically predicted effects, we split our analysis into discrete measures, which summarize the mouse movement in each trial in a single value, and continuous measures, which examine the development of a specific movement characteristic within a trial over time. Effects on discrete measures (i.e., average deviation) were relatively robust against influences of the starting condition. In contrast, effects on continuous measures examined in the time-continuous multiple regression analysis were weaker and more temporally compressed in the static condition compared to the dynamic condition.

Our results indicate that differences in the setup of mouse-tracking studies – here, specifically, in the starting condition – can indeed influence mouse movements and to some degree also the theoretically important effects investigated in such studies. The Simon task produces relatively robust experimental effects and hence all dynamic effects were present in both starting conditions – though they were much smaller and temporally compressed in the static starting condition. However, more subtle effects, for example, in value-based decision making (Dshemuchadse et al., 2013; Scherbaum et al., 2016) or semantic judgments (Dshemuchadse et al., 2015) might be more strongly affected when one studies them using a static starting condition, especially with within-trial continuous measures.

Does this mean that a dynamic starting procedure should always be applied? Our results indicate that, at least for strong behavioral effects, a static starting procedure can be used (even for continuous measures). Such a static start setup might even be indispensable if the logic of the experiment dictates strict sequences of stimulus timing that do not allow participants to trigger stimulus presentation themselves, for example, in priming experiments. Besides, other methodological considerations (with currently largely unknown consequences) might play a role when deciding whether the implementation of a dynamic starting procedure is feasible: This includes the question of whether explicit instructions about the mouse movement should be provided, as they might increase participants’ awareness of mouse-tracking, which might be especially relevant when studying behaviors where influences of social norms might be expected. Besides, the requirement of a continuous upwards movement might be challenging if the stimulus information is more complex and its acquisition more time-consuming (though stimuli with a high amount of decision-critical information per trial represent a general challenge for mouse-tracking studies, cf. Kieslich & Hilbig, 2014); this challenge is also amplified as a dynamic starting procedure is typically implemented in combination with limited time for responding. Finally, depending on what constitutes the mouse-tracking variable of interest, a different methodological setup might be desirable (see Fischer & Hartmann, 2014).

It is hence up to the judgment of the experimenter whether the implementation of a dynamic starting procedure is desired and feasible in a study and, if not, whether an additional explicit instruction to start moving as quickly as possible might be sufficient (Freeman & Ambady, 2010), especially when the pursued effects are robust enough for a static starting procedure. Besides, the implementation of a dynamic starting procedure is typically more demanding methodologically and requires extensive pretesting to determine the exact spatial and temporal setup – although mouse-tracking studies in general require careful design and pretesting. Regarding the methodological implementation, a recently presented open-source software (mousetrap) can be used that allows creating mouse-tracking experiments via a graphical user interface without programming (Kieslich & Henninger, 2017) and that also allows implementing a dynamic starting procedure by specifying tracking boundaries.

Integration

The present work adds to the emerging discussion about boundary conditions and standards for mouse-tracking studies (e.g., Faulkenberry & Rey, 2014; Fischer & Hartmann, 2014), but could also be applied to hand movement tracking studies in general (e.g., Buetti & Kerzel, 2008; Song & Nakayama, 2008, 2009). The basic idea of these studies is that cognitive processing leaks into the execution of the movements and hence cognitive processes become accessible to investigation by studying the differences in movements between different conditions. Our study indicates that the setup of the study influences the link between cognitive processing and mouse movements: Participants’ upwards movements were less consistent within trials, across trials, and across participants when participants were not forced by the setup to start moving. In this regard, another methodological precaution that has been helpful in our experience is watching participants during practice trials and – if necessary – reminding them to keep moving.

On a general level, our study has two implications for future mouse-tracking studies: First, researchers should provide a detailed description of the methodological setup of the mouse-tracking task to enable comparisons of findings across different mouse-tracking studies (cf. Fischer & Hartmann, 2014). Second, if a study aims to interpret mouse movements as the continuous tracking of response selection (in cognitive tasks) or of preference development (in value-based decision tasks), it should strive to maximize the likelihood of “processing while moving” through the appropriate methodological setup, for example, by using a dynamic starting procedure (if feasible). If response selection or preference development is (partly) performed before the movement is started, this might weaken the direct link between cognitive processing and mouse movements, and the duration of the initial period without mouse movement (the initiation time) might then also contain information about the competition between response alternatives that is not visible in the actual mouse movement (Fischer & Hartmann, 2014). More importantly, when participants act inconsistently within a study, sometimes thinking before moving while at other times moving before thinking, the effect under study could be split up and found partially in initiation times and partially in movement measures. Such a split might decrease the chances that studies find the predicted effects in the movement measures. Furthermore, the split might lead to more bimodally distributed movement measures. This bimodal distribution, in turn, could be interpreted as evidence for two distinct cognitive processes taking place in the psychological task that is studied. However, following the reasoning outlined previously, this might be (partially) methodologically confounded with the fact that in a static start condition people sometimes think before moving (leading to a straight line) while at other times they think while moving. Both cases, thinking before moving and inconsistent movements, might considerably complicate dynamic analyses of the ongoing processes.

Surprisingly, the application of dynamic analyses of mouse movements (regression analysis, e.g., Dshemuchadse et al., 2013; Scherbaum et al., 2010; Sullivan et al., 2015; decision spaces, e.g., O’Hora, Dale, Piiroinen, & Connolly, 2013) is still in its infancy, and most published studies so far focus on discrete measures of movements, which might raise the question of what exactly is gained by using AD or the maximum deviation of movements (MAD) instead of the RT of key presses. Some studies indicate that AD might be more sensitive to certain influences (e.g., Scherbaum et al., 2015) or might indeed reflect different processes, as indicated by dissociations of RT and MAD (cf. Koop & Johnson, 2011). In addition, Koop and Johnson (2013) argue that discrete mouse-tracking measures can provide researchers with more specific indicators for aspects of preference development, such as changes of the absolute preference (which – in a typical two-choice mouse-tracking task – may be captured through crossings of the Y-axis) and changes of the momentary valence (via directional changes along the X-axis). Our study indicates that to uncover the full potential of mouse-tracking studies and to fully harvest the dynamics of decision processes by using dynamic analyses of mouse movements, a thorough design of the starting condition, including a dynamic start, might be necessary. Otherwise, one risks losing potentially present effects in the noise of inconsistent movements.

Limitations

Our study is a first attempt to assess the influence of the starting condition on movements and theoretically important effects in mouse-tracking studies. It faces several limitations that we discuss in the following.

First, as we compared data from a previous study with data from a new study, participants were not randomly assigned to the starting conditions. By the nature of such a design, we cannot fully exclude that participants in the first sample (dynamic starting condition from the original study) were simply different from participants in the second sample (static starting condition from the new experiment). However, we found no significant differences in any of the sample characteristics that were assessed for each participant in both studies. Besides, the theoretically expected effects could be replicated in both starting conditions, and the pattern of differences between conditions was specific and mostly as expected from a methodological point of view. This makes us confident that the observed pattern in the data is not due to inherent group differences, but caused by the difference in the starting condition between groups. Still, a future study that randomly assigns participants to either starting condition could be used to experimentally ensure full comparability between groups. Of course, the ideal design to avoid any differences between the two groups would have been a within-subjects design. However, in such a design it cannot be excluded that participants carry over a certain mode of movement from one condition to the other, so that a between-subjects design seems preferable.

Second, an unexpected finding was that RTs in the static starting condition were shorter than in the dynamic starting condition. A look at the speed profiles of both conditions (Fig. 6) indicates that the dynamic condition shows a more consistent movement speed across time: Participants start their movements and do not accelerate much even in the end phase of their movement. In contrast, the static starting condition shows considerable variation in movement speed across time: Participants start slowly but sharply increase their speed in the end phase of the movement. In our view, this pattern indicates that in the dynamic condition, participants permanently coordinate the processing of information and the movement of the mouse cursor, while in the static condition, participants already start response selection before initiating their movement and hence, after initiating their movement, quickly finish response selection and execute their movement directly to the target box. The latter strategy could easily yield the observed advantage of approximately 50 ms for the static condition compared to the dynamic one. However, it also underlines that in the static condition cognitive processing does not always continuously leak into the mouse movements. Instead, in several trials cognitive processing might take place before movement initiation, reducing the effects on mouse movements. The initially slow upwards movement in the static starting condition presumably also contributes to an on average lower AD in the static than in the dynamic starting condition, as does the requirement in the dynamic condition to initially move upwards even before stimulus onset. The differences in AD between the starting conditions certainly question the validity of absolute comparisons of AD between the two conditions. However, here we performed relative comparisons of the Simon effect and the congruency sequence effect. Our approach is also bolstered by the fact that neither effect interacted with the starting condition: Hence, in principle, the effects were equal irrespective of the starting condition.

Third, a comparison of initiation times revealed no significant differences between the starting conditions. However, given the different procedures for movement initiation in the two conditions, it is difficult to create a measure of initiation time that is comparable across conditions. In the dynamic starting condition, the initiation time was defined as the time it took participants to fulfill the upwards movement criterion (to trigger stimulus presentation) after clicking on the start box. In the static starting condition, the stimulus was presented automatically 200 ms after the click on the start box; therefore, the initiation time was defined as the time it took participants to fulfill the upwards movement criterion (as in the dynamic starting condition) after stimulus presentation. Whether these two measures are indeed comparable is an open question (e.g., one could also argue that the initiation time is underestimated in the static starting condition, as participants can also prepare and start moving the mouse before the stimulus is presented, and, as a consequence, the initiation time should also be computed starting with the click on the start box). Hence, the results from this comparison should be handled with care and not be overgeneralized.

Fourth, given that in the static starting condition participants could freely decide when to initialize their movement, an alternative analysis approach for the static condition could focus only on the part of each trial after the movement has been initiated (cf. Buetti & Kerzel, 2008). While restricting each trial to the part after movement initiation indeed improved the movement index, the time-continuous regression analysis revealed less reliable results and hence a loss in data quality for continuous analyses. Hence, we conclude that even the exclusion of the initial non-movement period cannot fully compensate for the lack of leakage of cognitive processing into the mouse movements in the static starting condition.

Finally, it should be stressed that, aside from the starting condition, many other methodological factors vary between mouse-tracking studies, for example, whether participants have unlimited time for responding (e.g., Kieslich & Hilbig, 2014) or whether there is a time limit (e.g., Dshemuchadse et al., 2013), whether participants have to click on a button to indicate a response (e.g., Koop & Johnson, 2013) or simply “touching” the button without a click suffices (e.g., Scherbaum et al., 2010), and whether participants receive explicit instructions about continuously moving upwards (e.g., Scherbaum et al., 2015) or no instructions about mouse movements (e.g., Kieslich & Hilbig, 2014; Koop & Johnson, 2013). So, in order to enable a comparison of findings across different mouse-tracking studies, the influence of these and other methodological factors needs to be investigated.

Conclusion

The present study is a first step in assessing the impact of methodological differences between mouse-tracking studies. We found that a static starting condition, which did not require participants to initiate mouse movements before stimulus presentation, led to less consistent mouse movements. While this had no significant consequences for the investigation of effects with discrete mouse-tracking measures, effects on within-trial continuous measures were reduced. Moving to higher ground in the study of cognitive processes hence requires that experimenters understand the consequences of a study’s individual methodological setup and that, if desired, they ensure methodologically that the processes of interest continuously leak into their participants’ movements.

Author note

This research was partly supported by the German Research Council (DFG grant SFB 940/2 and DFG grant SCH1827/1-2 to Stefan Scherbaum). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. No additional external funding was received for this study.

The data from the dynamic starting condition in this article were used in a previous publication (Scherbaum et al., 2010, Cognition). The data were completely reanalyzed for the current article.

Footnotes

  1. Given the significant difference in variances between the groups, we repeated the statistical comparison of mean initiation times using Welch’s t test. Again, we found no significant difference, t(26.1) = 0.51, p = 0.61.
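
For readers who want to reproduce this kind of comparison, a minimal sketch of Welch’s t test (which does not assume equal variances) is given below; the arrays are placeholder values, not the study’s actual initiation times.

```python
import numpy as np
from scipy import stats

# Placeholder per-participant mean initiation times (ms), one array
# per starting condition; these are not the study's actual data.
init_dynamic = np.array([310., 295., 330., 280., 305., 320.])
init_static = np.array([300., 340., 260., 355., 290., 315.])

# equal_var=False requests Welch's correction for unequal variances.
t_stat, p_value = stats.ttest_ind(init_dynamic, init_static, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")
```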

Supplementary material

ESM 1 (PDF 459 kb)

References

  1. Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10(4), 433–436.
  2. Buetti, S., & Kerzel, D. (2008). Time course of the Simon effect in pointing movements for horizontal, vertical, and acoustic stimuli: Evidence for a common mechanism. Acta Psychologica, 129(3), 420–428.
  3. Dale, R., Kehoe, C., & Spivey, M. J. (2007). Graded motor responses in the time course of categorizing atypical exemplars. Memory & Cognition, 35(1), 15–28.
  4. Dshemuchadse, M., Grage, T., & Scherbaum, S. (2015). Action dynamics reveal two components of cognitive flexibility in a homonym relatedness judgment task. Frontiers in Psychology, 6, 1244.
  5. Dshemuchadse, M., Scherbaum, S., & Goschke, T. (2013). How decisions emerge: Action dynamics in intertemporal decision making. Journal of Experimental Psychology: General, 142(1), 93–100.
  6. Ericsson, K. A., & Simon, H. A. (1984). Protocol analysis: Verbal reports as data. Cambridge: MIT Press.
  7. Faulkenberry, T. J., & Rey, A. E. (2014). Extending the reach of mousetracking in numerical cognition: A comment on Fischer and Hartmann (2014). Frontiers in Psychology, 5, 1436.
  8. Fischer, M. H., & Hartmann, M. (2014). Pushing forward in embodied cognition: May we mouse the mathematical mind? Frontiers in Psychology, 5, 1315.
  9. Freeman, J. B., & Ambady, N. (2010). MouseTracker: Software for studying real-time mental processing using a computer mouse-tracking method. Behavior Research Methods, 42(1), 226–241.
  10. Freeman, J. B., Dale, R., & Farmer, T. A. (2011). Hand in motion reveals mind in motion. Frontiers in Psychology, 2, 59.
  11. JASP Team. (2016). JASP (Version 0.7.5.6). Retrieved from https://jasp-stats.org/
  12. Kieslich, P. J., & Henninger, F. (2017). Mousetrap: An integrated, open-source mouse-tracking package. Behavior Research Methods, 49(5), 1652–1667.
  13. Kieslich, P. J., & Hilbig, B. E. (2014). Cognitive conflict in social dilemmas: An analysis of response dynamics. Judgment and Decision Making, 9(6), 510–522.
  14. Kieslich, P. J., Wulff, D. U., Henninger, F., Haslbeck, J. M. B., & Schulte-Mecklenbeck, M. (2016). Mousetrap: An R package for processing and analyzing mouse-tracking data. https://doi.org/10.5281/zenodo.596640
  15. Koop, G. J., & Johnson, J. G. (2011). Response dynamics: A new window on the decision process. Judgment and Decision Making, 6(8), 750–758.
  16. Koop, G. J., & Johnson, J. G. (2013). The response dynamics of preferential choice. Cognitive Psychology, 67(4), 151–185.
  17. Miller, J., Patterson, T., & Ulrich, R. (1998). Jackknife-based method for measuring LRP onset latency differences. Psychophysiology, 35(1), 99–115.
  18. Newell, A., & Simon, H. A. (1972). Human problem solving. Oxford: Prentice-Hall.
  19. O’Hora, D., Dale, R., Piiroinen, P. T., & Connolly, F. (2013). Local dynamics in decision making: The evolution of preference within and across decisions. Scientific Reports, 3, 2210.
  20. Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10(4), 437–442.
  21. R Core Team. (2016). R: A language and environment for statistical computing. Vienna: R Foundation for Statistical Computing.
  22. Scherbaum, S., Dshemuchadse, M., Fischer, R., & Goschke, T. (2010). How decisions evolve: The temporal dynamics of action selection. Cognition, 115(3), 407–416.
  23. Scherbaum, S., Dshemuchadse, M., Leiberg, S., & Goschke, T. (2013). Harder than expected: Increased conflict in clearly disadvantageous intertemporal choices in a computer game. PLOS ONE, 8(11), e79310.
  24. Scherbaum, S., Frisch, S., Dshemuchadse, M., Rudolf, M., & Fischer, R. (in press). The test of both worlds: Identifying feature binding and control processes in congruency sequence tasks by means of action dynamics. Psychological Research.
  25. Scherbaum, S., Frisch, S., Leiberg, S., Lade, S. J., Goschke, T., & Dshemuchadse, M. (2016). Process dynamics in delay discounting decisions: An attractor dynamics approach. Judgment and Decision Making, 11(5), 472–495.
  26. Scherbaum, S., Gottschalk, C., Dshemuchadse, M., & Fischer, R. (2015). Action dynamics in multitasking: The impact of additional task factors on the execution of the prioritized motor movement. Frontiers in Psychology, 6, 934.
  27. Song, J. H., & Nakayama, K. (2008). Numeric comparison in a visually-guided manual reaching task. Cognition, 106(2), 994–1003.
  28. Song, J. H., & Nakayama, K. (2009). Hidden cognitive states revealed in choice reaching tasks. Trends in Cognitive Sciences, 13(8), 360–366.
  29. Spivey, M. J. (2007). The continuity of mind. New York: Oxford University Press.
  30. Spivey, M. J., & Dale, R. (2006). Continuous dynamics in real-time cognition. Current Directions in Psychological Science, 15(5), 207–211.
  31. Spivey, M. J., Grosjean, M., & Knoblich, G. (2005). Continuous attraction toward phonological competitors. Proceedings of the National Academy of Sciences of the United States of America, 102(29), 10393–10398.
  32. Sullivan, N., Hutcherson, C., Harris, A., & Rangel, A. (2015). Dietary self-control is related to the speed with which attributes of healthfulness and tastiness are processed. Psychological Science, 26(2), 122–134.
  33. van Rooij, M. M., Favela, L. H., Malone, M., & Richardson, M. J. (2013). Modeling the dynamics of risky choice. Ecological Psychology, 25(3), 293–303.

Copyright information

© Psychonomic Society, Inc. 2017

Authors and Affiliations

  1. Department of Psychology, Technische Universität Dresden, Dresden, Germany
  2. Experimental Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
