Planning an action primes feature dimensions that are relevant for that particular action, increasing the impact of these dimensions on perceptual processing. Here, we investigated whether action planning also affects the short-term maintenance of visual information. In a combined memory and movement task, participants were to memorize items defined by size or color while preparing either a grasping or a pointing movement. Whereas size is a relevant feature dimension for grasping, color can be used to localize the goal object and guide a pointing movement. The results showed that memory for items defined by size was better during the preparation of a grasping movement than during the preparation of a pointing movement. Conversely, memory for color tended to be better when a pointing movement rather than a grasping movement was being planned. This pattern was not only observed when the memory task was embedded within the preparation period of the movement, but also when the movement to be performed was only indicated during the retention interval of the memory task. These findings reveal that a weighting of information in visual working memory according to action relevance can even be implemented at the representational level during maintenance, demonstrating that our actions continue to influence visual processing beyond the perceptual stage.
Planning a goal-directed action involves a number of selection processes. For example, when our goal is to drink coffee from the mug sitting on the table in front of us, we need to select the appropriate action (reaching and grasping), effector (hand), and target (mug), and we need to extract the visual information that is required to specify the movement parameters (e.g., the location and size of the mug). It has been suggested that the latter is supported by an intentional weighting of task-relevant feature dimensions (e.g., size): Planning a particular action increases the impact of features coded on action-relevant dimensions, thereby ensuring that all the information necessary for online action control and the specification of open parameters is available (e.g., Hommel, 2009; Memelink & Hommel, 2013).
Indeed, actions have been shown to prime features of the goal object that are relevant for the respective action. Bekkering and Neggers (2002) asked participants to saccade to a target object, defined by a conjunction of orientation and color and presented among distractors, and then to either grasp the object or point to it. Orientation selection, as indicated by the accuracy of the first saccade, was better when the object was to be grasped than when it was to be pointed to. This selective enhancement has even been observed under rather unnatural conditions, when two-dimensional images of objects had to be pointed to or grasped on a screen (Hannus, Cornelissen, Lindemann, & Bekkering, 2005).
A more general effect of action planning on selective visual processing has been demonstrated by studies that combined a movement task with an unrelated visual task. In a study by Fagioli, Hommel, and Schubotz (2007), participants had to detect a deviant in a temporal sequence of stimuli that predictably varied in size or location. When participants were planning a grasping movement while monitoring the visual stimuli, the detection of size deviants was facilitated, whereas planning a pointing movement facilitated the detection of location deviants. Converging evidence has been obtained for selection in space: In a typical visual search task, the detection of a target defined by size was facilitated during the preparation of a grasping movement, and the detection of a target defined by luminance was facilitated during the preparation of a pointing movement, although the two tasks were unrelated and merely overlapped in time (Wykowska, Schubö, & Hommel, 2009). These studies show that planning a particular action not only increases the weights of specific features of the goal object, improving goal selection, but the impact of an entire feature dimension on visual processing, modulating even early perceptual and attentional processes (see also Wykowska & Schubö, 2012).
The present study was motivated by the idea that the influence of action intentions on selective visual processing does not end at the perceptual stage. Whenever we want to make comparisons between objects separated in time or space, we need to retain the visual information about these objects over short periods of time, even if only for the duration of an eye movement. Consequently, visual working memory (VWM) forms a basis for a vast number of simple everyday tasks and for higher cognitive functions. It is, however, highly limited in its capacity (Luck & Vogel, 1997, 2013; Ma, Husain, & Bays, 2014), necessitating selective processing to ensure that only relevant information is maintained. Selective attention modulates VWM throughout all processing stages, from encoding up to retrieval (Gazzaley & Nobre, 2012), and evidence is accumulating that information can be maintained in different representational states established by the allocation of attention, allowing for a weighting according to differences in task relevance (e.g., Heuer & Schubö, 2016b; LaRocque, Lewis-Peacock, & Postle, 2014; van Moorselaar, Olivers, Theeuwes, Lamme, & Sligte, 2015). Experimentally, such a weighting is typically induced by cues presented during the retention interval that indicate some items as more behaviorally relevant than others, on the basis of their location or features (Gunseli, van Moorselaar, Meeter, & Olivers, 2015; Heuer & Schubö, 2016a). An action, a more natural indicator of the relevance of specific objects, has recently been shown to similarly result in a weighting of VWM representations reflecting potential action relevance (Heuer, Crawford, & Schubö, 2016).
In the present experiments, we investigated whether the planning of a particular type of action also induces a selective weighting of items in VWM, resulting in better memory for items defined by a feature coded on an action-related dimension. In a combined memory and movement task, participants had to memorize items defined by color or size while preparing a pointing or grasping movement. Unlike in many dual-task paradigms, the memory and movement tasks were related in a particular way: either type of movement rendered one of the two feature dimensions defining the memory items more action-relevant than the other. We reasoned that these differences in the action relevance of feature dimensions would be reflected in a weighting of items in VWM. Whereas size is a critically relevant feature dimension for grasping movements (e.g., Smeets & Brenner, 1999), it should be of little or no relevance for planning a pointing movement toward the center of an object. We therefore predicted better memory for size items when a grasping movement was to be performed than when a pointing movement was to be performed. Color, in contrast, is not required for the specification of grasping parameters. Its relevance for pointing is not as apparent as that of size for grasping, but it might be used to localize the target object and guide the pointing movement in a similar manner as luminance (White, Kerzel, & Gegenfurtner, 2006). Accordingly, a second and more tentative hypothesis was that memory for color items would be better while planning a pointing movement than during the preparation of a grasping movement.
Experiment 1 tested whether selective effects of action planning would become evident in memory performance by embedding the memory task within the action task (see Fig. 1a, top row). Although such effects would demonstrate that the preferential processing of action-related feature dimensions has consequences for the short-term storage of visual information, these consequences might be due to perceptual enhancement at encoding. To specifically test whether perceptual enhancement at encoding is determinative for actions to induce a selective weighting of information at the representational level in VWM, the cue indicating the movement to be performed was only presented during the retention interval in Experiment 2 (see Fig. 1a, bottom row).
In total, 49 students of Philipps-University Marburg participated in the experiments. The data from eight participants had to be excluded due to poor performance in the memory task (<60% correct answers) or because they reported having used strategies not consistent with the instructions (e.g., focusing only on color memory items) in a postexperimental questionnaire. Analyses were performed on the remaining participants (Exp. 1: 13 female, seven male, mean age 22 years; Exp. 2: 15 female, six male, mean age 24 years). All participants provided informed written consent, were naive to the purpose of the experiment, and had normal or corrected-to-normal visual acuity and color vision. Visual acuity and color vision were tested with the OCULUS Binoptometer 3 (OCULUS Optikgeräte GmbH, Wetzlar, Germany).
Participants were seated in a comfortable chair in a dimly lit room. On a table in front of them, a monitor was placed at a distance of approximately 104 cm from their eyes. At a distance of approximately 55 cm from the participants’ eyes, a framed glass plate was mounted on the table. The glass plate was adjusted to the eye height of each participant to ensure that it always covered the entire monitor. Pointing and grasping movements were performed toward this glass plate. Participants had a wooden board with a response box to the left and a movement pad to the right in front of them. For the memory task, participants pressed the two buttons on the response box with their left middle and index fingers. The right hand was positioned on the movement pad, on which a cross marked the starting position for index finger and thumb. The stimuli were presented on a 22-in. screen (1,680 × 1,050 pixels), and stimulus presentation and response collection were controlled by a Windows PC using E-Prime 2.0 software (Psychology Software Tools, Inc.). Movements were recorded using a magnetic motion-tracking device, and the experimenter sat approximately 2 m behind the participant to register whether the instructed movement (grasping or pointing) was executed.
Trial procedure and stimuli
The trial procedure is shown in Fig. 1a. In Experiment 1, a trial started with the presentation of a movement cue for 200 ms, indicating the movement to be performed (see Fig. 1b). Participants were instructed to prepare the cued movement, but to withhold its execution. After an interval of 800 ms, the memory array was presented for 200 ms. This memory array consisted of ten circle-shaped items: four memory items and six distractor items. Two of the memory items differed from the distractor items by their color, and the other two by their size. Participants were instructed to memorize the colors and sizes of the deviating items. In Experiment 2, the order of movement cue and memory array was reversed. After another interval of 900 ms (Exp. 1) or 800 ms (Exp. 2), a test item was presented at one of the memory item locations. The test item was always of the same type (size or color) as the memory item that had previously been presented at that location, and participants were to indicate whether or not there was a change in size or color (see Fig. 1c). The response assignment was balanced across participants. The test item was present until response, but a quick reaction was encouraged. After the response, the test item disappeared for 200 ms. Upon its reappearance, participants were to execute the respective movement toward the glass plate in front of the monitor. For pointing movements, they were to point toward the center of the circle, touching the glass plate with the tip of their right index finger. For grasping movements, they were to perform a claw-like grasp (see Fig. 1b), touching the glass plate with all five fingers along the outline of the circle. The next trial started 900 ms after the hand’s return to the starting position.
All stimuli were presented against a gray background. The movement cues (see Fig. 1b) were color photos of a female volunteer’s hand performing a grasping or a pointing movement. The cues subtended an area of approximately 4.41° × 3.58° of visual angle. The memory array consisted of ten fixed item positions, at eccentricities between 3.75° and 10.44° of visual angle from the fixation dot (0.17° of visual angle). The color items and distractor items were all 2.15° in diameter, and the size items were 0.88°, 1.32°, 1.76°, 2.59°, 3.03°, and 3.47° in diameter. The colors of the color items were chosen from a set of six colors (green, turquoise, blue, slate blue, purple, and magenta). For the two size memory items and the two color memory items, all combinations of different sizes and colors were equally likely. All memory items were isoluminant. The test item was always defined in the same dimension (color or size) as the memory item that had previously been presented at its location. In the 50% of trials with a change, color test items had a color spectrally adjacent to the color of the corresponding memory item, and size test items had a size that differed from that of the corresponding memory item by at least 0.88° and not more than 1.71° of visual angle.
The four experimental conditions were defined by the combinations of the factors Test Item Type (size vs. color) and Movement Type (grasping vs. pointing). The experimental condition was randomly chosen in each trial. All possible memory array configurations, consisting of two color items, two size items, and six distractor items, were equally probable. The experiment consisted of 560 trials, which were equally distributed among the four experimental conditions and were organized in 14 blocks of 40 trials each.
Testing took place in two sessions on consecutive days. On the first day, participants performed short versions of the movement task and the memory task separately. The separate training tasks were identical to the tasks of the main experiment and consisted of 160 trials each. On the second day, participants performed the main experiment, and afterward filled in a questionnaire to assess strategies and other factors that might have affected performance.
Trials with excessively long reaction times (>2.5 SDs from mean reaction time, calculated separately for each participant; on average, 2.6% of all trials in Exps. 1 and 2) and trials in which the wrong movement was performed (on average, 3.4% of all trials in Exp. 1, and 3.6% of all trials in Exp. 2) were excluded from further analysis. The primary measure of interest for memory performance with respect to the hypotheses was accuracy. Reaction times were analyzed to ensure that speed–accuracy trade-offs did not contribute to any differences in accuracy. Accuracy in percent and mean reaction times were calculated separately for each movement and test item type. For reaction times, only trials with correct responses were included.
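The reaction-time-based trial exclusion described above can be sketched per participant as follows. This is a minimal illustration, not the authors' code; the function name and the two-sided cutoff are our assumptions (the text only states ">2.5 SDs from mean reaction time"):

```python
import numpy as np

def keep_trial_mask(rts, n_sd=2.5):
    """Return a boolean mask of trials to keep for one participant:
    True where the reaction time lies within n_sd standard deviations
    of that participant's mean reaction time."""
    rts = np.asarray(rts, dtype=float)
    deviation = np.abs(rts - rts.mean())
    return deviation <= n_sd * rts.std()
```

Applied separately to each participant's reaction times, the mask would then be combined with a mask excluding trials in which the wrong movement was performed.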
Figure 2a shows performance in the memory tasks in both experiments, separately for the different movement and test item types. Two-way repeated measures analyses of variance (ANOVAs) with the factors Movement Type and Test Item Type were computed for accuracy and reaction time. Of main interest was the interaction in terms of accuracy, indicating that memory for the two test item types differed between movement types. This interaction reached significance in both Experiment 1 [F(1, 19) = 6.34, p = .021, ηp² = .25] and Experiment 2 [F(1, 20) = 7.07, p = .015, ηp² = .26]. No main effects were significant. In Experiment 1, we also found an interaction in reaction times [F(1, 19) = 6.16, p = .023, ηp² = .25], detailed below, but no main effects. In Experiment 2, there was no interaction, but reaction times were faster in trials with pointing movements (1,252 ms ± 52 ms) than in trials with grasping movements (1,266 ms ± 52 ms) [F(1, 20) = 5.18, p = .034, ηp² = .21], and faster for color test items (1,211 ms ± 49 ms) than for size test items (1,307 ms ± 55 ms) [F(1, 20) = 29.62, p < .001, ηp² = .60].
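For a 2 × 2 fully within-subject design such as this one, the interaction F of the repeated measures ANOVA is equivalent to the squared paired t test on the double difference scores. A compact sketch of this equivalence (variable names and example data are ours, not the authors'):

```python
import numpy as np
from scipy import stats

def interaction_2x2_within(size_grasp, size_point, color_grasp, color_point):
    """Interaction test for a 2 (Movement Type) x 2 (Test Item Type)
    within-subject design. For a 2x2 repeated measures ANOVA, the
    interaction F equals the squared one-sample t on the per-participant
    double difference (size: grasp - point) - (color: grasp - point),
    and the two-tailed p values coincide (df1 = 1)."""
    dd = (np.asarray(size_grasp, float) - np.asarray(size_point, float)) \
       - (np.asarray(color_grasp, float) - np.asarray(color_point, float))
    t, p = stats.ttest_1samp(dd, 0.0)
    return t ** 2, p
```

This shortcut only holds for designs with two levels per factor; with more levels, a full repeated measures ANOVA (e.g., statsmodels' AnovaRM) would be needed.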
To elucidate the observed interactions, specifically testing for a selective weighting of feature dimensions depending on the planned movement, performance in pointing trials was subtracted from performance in grasping trials, separately for size and color test items (shown for accuracy in Fig. 2b). For accuracy, positive values indicate better performance when a grasping movement was being planned, and negative values indicate better performance when a pointing movement was being planned. For reaction times, positive values indicate faster reaction times for pointing trials, and negative values indicate faster reaction times for grasping trials. These difference measures were tested against 0 by means of one-tailed t tests. Accuracy for size test items was significantly higher when a grasping movement was to be performed than when a pointing movement was to be performed, both in Experiment 1 [t(19) = 2.11, p = .024] and Experiment 2 [t(20) = 2.52, p = .01]. Accuracy for color items tended to be higher during the preparation of a pointing movement, but this difference failed to reach significance [Exp. 1: t(19) = 1.32, p = .102; Exp. 2: t(20) = 0.82, p = .211]. For reaction times, a significant positive value for size test items in Experiment 1 (18 ± 9 ms) indicated slower responses during the planning of grasping movements than during the planning of pointing movements [t(19) = 2.23, p = .02]. None of the other comparisons for reaction times reached significance. To rule out that the effect in accuracy for size test items in Experiment 1 was due to a speed–accuracy trade-off, we calculated mean reaction time and accuracy for each quartile of the reaction time distribution, separately for each condition and participant. We then fitted orthogonal polynomials to accuracy as a function of reaction time.
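The difference-score analysis can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function names are ours, and the one-sided alternative reflects the directional hypothesis stated in the text (requires SciPy ≥ 1.6 for the `alternative` argument):

```python
import numpy as np
from scipy import stats

def grasping_advantage(acc_grasp, acc_point):
    """Per-participant difference scores: accuracy when a grasping
    movement was being planned minus accuracy when a pointing movement
    was being planned (positive values = grasping advantage)."""
    return np.asarray(acc_grasp, float) - np.asarray(acc_point, float)

def one_tailed_t_vs_zero(diffs):
    """One-sample t test of the difference scores against 0, one-tailed
    in the predicted direction (difference > 0)."""
    result = stats.ttest_1samp(diffs, 0.0, alternative='greater')
    return result.statistic, result.pvalue
```

For color items, the same logic applies with the sign reversed (pointing minus grasping), matching the prediction of a pointing advantage on that dimension.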
Across participants, we observed significant negative linear coefficients that did not differ between conditions: In a two-way repeated measures ANOVA with the factors Movement Type and Test Item Type, there were no main effects and no interaction, but the overall mean was significantly different from 0 [F(1, 19) = 112.57, p < .001, ηp² = .86]. Thus, we found no indication that higher levels of accuracy could be attributed to longer reaction times.
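The quartile-based speed–accuracy check can be sketched per participant and condition as follows. Here an orthogonal linear contrast over the four quartile ranks stands in for the first-order coefficient of the orthogonal-polynomial fit; this simplification and all names are our assumptions:

```python
import numpy as np

def quartile_speed_accuracy(rts, correct):
    """For one participant and condition, split trials into reaction
    time quartiles and return (mean RT per quartile, accuracy per
    quartile, linear trend of accuracy across quartiles). A negative
    linear coefficient means accuracy drops as responses get slower,
    i.e., no speed-accuracy trade-off favoring slow responses."""
    rts = np.asarray(rts, float)
    correct = np.asarray(correct, float)
    order = np.argsort(rts)
    quartiles = np.array_split(order, 4)
    mean_rt = np.array([rts[q].mean() for q in quartiles])
    acc = np.array([correct[q].mean() for q in quartiles])
    # Orthogonal linear contrast over four ordered levels.
    contrast = np.array([-3.0, -1.0, 1.0, 3.0])
    linear = contrast @ acc / (contrast @ contrast)
    return mean_rt, acc, linear
```

Collecting the linear coefficients across participants and conditions would then feed the ANOVA reported above.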
The present experiments showed that the short-term storage of information in VWM is modulated by action intentions, meaning that representations are weighted to reflect differences in the action relevance of specific feature dimensions: Memory for items defined by size was better when this feature dimension was relevant for the action that was concurrently being prepared (i.e., a grasping action), as compared to when it was irrelevant for the planned action (i.e., a pointing action). Conversely, memory for items defined by color tended to be better during the preparation of pointing actions than during the preparation of grasping actions. However, this effect of action intention on memory performance for color items did not reach statistical significance in either experiment. As outlined above, the action relevance of color for pointing actions is not very high, and other studies have failed to find an effect of preparing a pointing action on performance (on perceptual performance, in those cases) for color items (Bekkering & Neggers, 2002; Hannus et al., 2005). It might even be that the relevance of color for pointing was particularly low in the present experiments, due to the way that the action goal object was presented: Color can be used to guide pointing movements to the action goal (White et al., 2006), but here only one potential action goal was presented, rendering its localization and selection to guide the movement very simple.
Presumably, the effect of action intentions on maintenance in VWM is due to an intentional weighting of action-related feature dimensions, which has previously been established for visual perception (Memelink & Hommel, 2013). The results of Experiment 1 can be regarded as an extension of these findings. In Experiment 1, the memory task was embedded in the movement task, meaning that the movement was already being prepared when the to-be-memorized items were presented. One could accordingly interpret the observed effects of action intention on memory performance in Experiment 1 as the result of perceptual enhancement of action-related feature dimensions at encoding, demonstrating the consequences of action-related perceptual modulation on the short-term storage of visual information. The results of Experiment 2, by contrast, cannot be attributed to a modulation at the perceptual stage. Here, the movement to be performed was instructed during the retention interval and well after the presentation of the memory items. Thus, the observed differences in performance depending on current action intentions are likely due to a selective weighting of action-related feature dimensions in VWM, introduced at the representational level during maintenance.
One could argue that the observed weighting of items arose during retrieval: In both experiments, participants were to respond to the memory task prior to executing the movement. The most likely mechanism to bring about improved performance for a specific feature dimension that would take effect at retrieval would be a prioritization, affecting the order of comparisons made between the items in memory and the displayed test item. In the present experiments, however, the number of required comparisons was already reduced to one by presenting only one test item at the previous location of the memory item it had to be compared to. More importantly, this test item determined the feature dimension that the comparison needed to be based on: It was either of a specific color or of a specific size, and thus only required comparisons within that dimension. A prioritization at retrieval therefore cannot account for the differences in performance for size and color test items depending on action intention. A second mechanism that could be assumed to facilitate retrieval would be an enhancement of perception of the test item. However, given that the test item was perceptually not very demanding and was present until response, it is unlikely that this would have affected performance. Moreover, any effect arising during retrieval, be it due to prioritization or perceptual enhancement, is likely to be reflected in reaction times as well, not only in accuracy as in the present experiments. Therefore, it is unlikely that the weighting of action-related feature dimensions emerged during presentation of the test item.
In short, the present experiments show that the contents of VWM are selectively weighted according to the action relevance of specific feature dimensions. Thus, action intentions modulate selective visual processing not only during early perceptual stages, but also during the short-term maintenance of visual information. These findings reveal a hitherto unknown mechanism through which the limited capacity of VWM is optimally used: Action-related feature dimensions are enhanced, ensuring that the information that is needed for upcoming actions is easily available.
Bekkering, H., & Neggers, S. F. W. (2002). Visual search is modulated by action intentions. Psychological Science, 13, 370–374.
Fagioli, S., Hommel, B., & Schubotz, R. I. (2007). Intentional control of attention: Action planning primes action-related stimulus dimensions. Psychological Research, 71, 22–29. doi:10.1007/s00426-005-0033-3
Gazzaley, A., & Nobre, A. C. (2012). Top-down modulation: Bridging selective attention and working memory. Trends in Cognitive Sciences, 16, 129–135. doi:10.1016/j.tics.2011.11.014
Gunseli, E., van Moorselaar, D., Meeter, M., & Olivers, C. N. L. (2015). The reliability of retro-cues determines the fate of noncued visual working memory representations. Psychonomic Bulletin & Review, 22, 1334–1341. doi:10.3758/s13423-014-0796-x
Hannus, A., Cornelissen, F. W., Lindemann, O., & Bekkering, H. (2005). Selection-for-action in visual search. Acta Psychologica, 118, 171–191. doi:10.1016/j.actpsy.2004.10.010
Heuer, A., Crawford, J. D., & Schubö, A. (2016). Action relevance induces an attentional weighting of representations in visual working memory. Memory & Cognition. Advance online publication. doi:10.3758/s13421-016-0670-3
Heuer, A., & Schubö, A. (2016a). Feature-based and spatial attentional selection in visual working memory. Memory & Cognition, 44, 621–632. doi:10.3758/s13421-015-0584-5
Heuer, A., & Schubö, A. (2016b). The focus of attention in visual working memory: Protection of focused representations and its individual variation. PLoS ONE, 11, e0154228. doi:10.1371/journal.pone.0154228
Hommel, B. (2009). Action control according to TEC (theory of event coding). Psychological Research, 73, 512–526. doi:10.1007/s00426-009-0234-2
LaRocque, J. J., Lewis-Peacock, J. A., & Postle, B. R. (2014). Multiple neural states of representation in short-term memory? It’s a matter of attention. Frontiers in Human Neuroscience, 8(5), 1–14. doi:10.3389/fnhum.2014.00005
Luck, S. J., & Vogel, E. K. (1997). The capacity of visual working memory for features and conjunctions. Nature, 390, 279–281. doi:10.1038/36846
Luck, S. J., & Vogel, E. K. (2013). Visual working memory capacity: From psychophysics and neurobiology to individual differences. Trends in Cognitive Sciences, 17, 391–400. doi:10.1016/j.tics.2013.06.006
Ma, W. J., Husain, M., & Bays, P. M. (2014). Changing concepts of working memory. Nature Neuroscience, 17, 347–356. doi:10.1038/nn.3655
Memelink, J., & Hommel, B. (2013). Intentional weighting: A basic principle in cognitive control. Psychological Research, 77, 249–259. doi:10.1007/s00426-012-0435-y
Smeets, J. B., & Brenner, E. (1999). A new view on grasping. Motor Control, 3, 237–271.
van Moorselaar, D., Olivers, C. N. L., Theeuwes, J., Lamme, V. A. F., & Sligte, I. G. (2015). Forgotten but not gone: Retro-cue costs and benefits in a double-cueing paradigm suggest multiple states in visual short-term memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 41, 1755–1763. doi:10.1037/xlm0000124
White, B. J., Kerzel, D., & Gegenfurtner, K. R. (2006). Visually guided movements to color targets. Experimental Brain Research, 175, 110–126. doi:10.1007/s00221-006-0532-5
Wykowska, A., & Schubö, A. (2012). Action intentions modulate allocation of visual attention: Electrophysiological evidence. Frontiers in Psychology, 3(379), 1–15. doi:10.3389/fpsyg.2012.00379
Wykowska, A., Schubö, A., & Hommel, B. (2009). How you move is what you see: Action planning biases selection in visual search. Journal of Experimental Psychology: Human Perception and Performance, 35, 1755–1769. doi:10.1037/a0016798
This research was supported by the German Research Foundation (Deutsche Forschungsgemeinschaft), International Research Training Group 1901, “The Brain in Action,” and by Grant No. SFB/TRR 135, TP B3. The authors thank Magda Lazarashvili for her assistance in data collection.
Heuer, A., Schubö, A. Selective weighting of action-related feature dimensions in visual working memory. Psychon Bull Rev 24, 1129–1134 (2017). https://doi.org/10.3758/s13423-016-1209-0
Keywords: Visual working memory · Action planning · Selective attention