We next tested whether physical interaction with objects is necessary to observe the target appreciation effect found in Experiment 1. Specifically, in Experiment 2 (see Fig. 1c), participants completed the same task in two conditions. In one condition, responses were made by reaching to grasp the real-world object, as in Experiment 1. In the other condition, participants pressed a left or right key on a keyboard to indicate which real-world object matched the cue word. Further, participants were first shown the cue word ("circles," "squares," or "shape"), and the two images were then presented on the objects, the reverse of the order used in Experiment 1.
Methods
Sixty-nine participants (39 women; age: M = 19.39 years, SD = 1.72 years) took part in Experiment 2, matching our Experiment 1 sample size as closely as resources would allow. All participants gave written consent prior to the experiment, which was approved by the University of Alberta’s Research Ethics Office. All participants were right-handed, had normal or corrected-to-normal vision, and did not know the purpose of the study. Participants were compensated with course credit. All participants were intended to complete both conditions of Experiment 2; however, three participated in the reach condition but not the keyboard condition, and three participated in the keyboard condition but not the reach condition. All methods for Experiment 2 were the same as in Experiment 1, with the following exceptions.
In Experiment 2, the ceiling-mounted projector was replaced with a flat-screen TV (LG 50LB6000) mounted horizontally under glass in a table. Further, stimuli presented on the object screens (iPods) were transmitted over a wired connection to improve the consistency of experimental timing. In the keyboard condition, participants responded using a compact, wired USB computer keyboard. The keys used in the experiment were covered in white tape for identifiability: the lower-left and lower-right corner keys for responding, and the entire second row from the top (15 keys) for affective ratings. All other keys were masked with black tape.
In Experiment 2, participants completed the same task twice, once responding with a reach-to-grasp movement (as in Experiment 1) and once with a key press (order counterbalanced). The cue word was presented for 200 ms, coincident with a beep, and was then removed for 50 ms before both images were presented on the objects (see Fig. 1c). In the keyboard condition, participants evaluated images using the single row of 15 rating keys. During evaluation, positive and negative signs were projected on either side of the rating keys on the keyboard.
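For readers interested in the trial structure, a schematic of this timing is sketched below in Python. The display and audio calls are placeholder stubs (the actual experiment drove the object screens over a wired connection); only the 200-ms and 50-ms intervals are taken from the text.

import time

def present_cue(word: str) -> None:
    # Stub: display the cue word and play the accompanying beep.
    print(f"cue: {word} (+ beep)")

def clear_display() -> None:
    # Stub: blank the display during the inter-stimulus interval.
    print("blank")

def present_images(left_image: str, right_image: str) -> None:
    # Stub: push one image to each object screen.
    print(f"images: left={left_image}, right={right_image}")

def run_trial(cue: str, left_image: str, right_image: str) -> None:
    present_cue(cue)
    time.sleep(0.200)   # cue word visible for 200 ms, coincident with the beep
    clear_display()
    time.sleep(0.050)   # 50 ms blank interval
    present_images(left_image, right_image)  # both images then appear on the objects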
To preserve statistical power across participants and conditions, we excluded participants who, after trial rejection, retained fewer than 50% usable trials overall or in any unique condition (16 unique conditions in the Experiment 2 reaching condition, and eight in the keyboard condition). These exclusions left 57 participants with an average of 243 usable trials per participant in the reaching condition, and 63 participants with an average of 258 usable trials per participant in the keyboard condition. Excluding all other errors, participants grasped the correct object rather than the incorrect object on the vast majority of trials in the reaching condition (M = 96.99%; range: 84.77%–100%), and likewise pressed the correct key rather than the incorrect key on the vast majority of trials in the keyboard condition (M = 95.67%; range: 84.43%–99.63%).
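For concreteness, the exclusion rule can be expressed as in the sketch below, assuming a hypothetical long-format trial table with participant, condition, and usable columns; this is an illustration, not the original analysis code.

import pandas as pd

def usable_participants(trials: pd.DataFrame, min_prop: float = 0.50) -> list:
    """Return IDs of participants retaining at least min_prop usable trials
    overall and within every unique condition."""
    # Proportion of usable trials per participant, overall.
    overall = trials.groupby("participant")["usable"].mean()
    # Worst-case proportion of usable trials across the unique conditions
    # (16 in the Experiment 2 reaching condition, 8 in the keyboard condition).
    worst_condition = (
        trials.groupby(["participant", "condition"])["usable"].mean()
              .groupby("participant").min()
    )
    ok = (overall >= min_prop) & (worst_condition >= min_prop)
    return ok[ok].index.tolist()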
Results
We again asked whether attention to real-world objects on a trial influenced the affective evaluation of an associated image. For trials where participants responded with a reaching movement, a one-way repeated-measures ANOVA on attentional condition (target, distractor, obstacle, and novel) revealed that the abstract images were again evaluated differently depending on their associated attentional condition, F(2.44, 136.56) = 4.49, p = .0083, ηp² = .0042. Multiple comparisons showed that only target-associated images differed from the baseline evaluation of novel images not shown during a trial, t(56) = 3.43, p = .0011, Cohen’s d = 0.12. For trials where participants responded with a key press, abstract images were also evaluated differently depending on their attentional condition, one-way ANOVA on attentional condition (target, distractor, and novel), F(1.49, 92.40) = 6.53, p = .0052, ηp² = .0032. As in the reaching condition, only target appreciation relative to baseline was found when responding with the keyboard, t(62) = 3.31, p = .0015, Cohen’s d = 0.12. For completeness, Bonferroni-corrected multiple comparisons showed that target images were rated as more cheerful than distractor images in the Experiment 2 reaching condition, t(56) = 3.41, p = .0012. Otherwise, target-associated, distractor-associated, and obstacle-associated images were not significantly different from one another in the reaching or keyboard conditions, ps > .0083. Overall, target appreciation was found both when participants responded with a reach-to-grasp movement and when they responded with a key press, ruling out the possibility that physical interaction with the objects was responsible for the effect (see Fig. 2c). Further, despite changes in experimental timing, equipment, and sample, the target appreciation effect from Experiment 1 was replicated.
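A minimal sketch of this condition-level analysis is given below, assuming a hypothetical long-format table of per-participant mean cheerfulness ratings with participant, condition, and rating columns; pingouin is one library that reports sphericity-corrected repeated-measures ANOVAs, and the baseline contrast can be run as a paired t test. The file name and column labels are placeholders, not the authors' pipeline.

import pandas as pd
import pingouin as pg
from scipy import stats

ratings = pd.read_csv("exp2_reach_ratings.csv")  # hypothetical file name

# One-way repeated-measures ANOVA over attentional condition
# (target, distractor, obstacle, novel), with sphericity correction.
aov = pg.rm_anova(data=ratings, dv="rating", within="condition",
                  subject="participant", correction=True)

# Follow-up comparison: target-associated images vs. the novel-image baseline.
wide = ratings.pivot(index="participant", columns="condition", values="rating")
t_stat, p_val = stats.ttest_rel(wide["target"], wide["novel"])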
Full-factor tests
Our experimental design permits an even more rigorous test of the affective impact of attention to real-world objects when all of the manipulated factors are considered. For these more complete analyses, we conducted two additional ANOVAs. Recall that the category labels for the above analyses (target, distractor, obstacle, and novel) are derived from a combination of four factors: Start Side (hand start position, Left or Right); Evaluation Side (the side on which the evaluated abstract stimulus was presented, Left or Right); Old or New (whether the evaluated abstract stimulus had been seen on that trial or not); and Target or Nontarget (whether the evaluated abstract stimulus was presented at the location the participant selected or not).
Examining all four factors simultaneously, we first analyzed all of the evaluations made in this study when participants were making reach responses. This meant taking all of the trials from Experiment 1 (n = 71) and combining them with all of the reach trials from Experiment 2 (n = 57). Experiment was a between-subjects factor. This resulted in a five-factor (2 × 2 × 2 × 2 × 2) mixed-model ANOVA, with the between-subjects factor Experiment and the within-subjects factors Start Side, Evaluation Side, Old or New, and Target or Nontarget. This analysis revealed main effects of Evaluation Side, F(1, 126) = 56.56, p = 8.97e-12, Target or Nontarget, F(1, 126) = 9.70, p = .0023, and Old or New, F(1, 126) = 13.66, p = .00033, as well as interactions between Experiment and Evaluation Side, F(1, 126) = 10.34, p = .0017, and between Target or Nontarget and Old or New, F(1, 126) = 6.09, p = .015.
The Evaluation Side main effect was driven by objects on the Right being generally evaluated more positively (55.29%) than objects on the Left (53.08%). The interaction with Experiment was driven by the fact that this Right > Left evaluation difference was larger for the participants in Experiment 2 (Right: 55.16%, Left: 52.01%) than Experiment 1 (Right: 55.42%, Left: 54.16%), but still significant for each Experiment group in isolation (Experiment 1, p < .0013; Experiment 2, p < 1.07e-10). These results are consistent with previous findings showing that real-world objects presented to the right side of right-handers are attended more rapidly than objects presented to the left side, presumably because right-sided objects are privileged for manual reaching and grasping in a majority of participants (Cavallo, Ansuini, Capozzi, Tversky, & Becchio, 2017; Furlanetto, Gallace, Ansuini, & Becchio, 2014). The present findings extend this result to the realm of affective evaluations; objects privileged for reaching are also privileged when making affective judgments.
The Target or Nontarget main effect was driven by abstract stimuli presented at Target locations (54.55%) being evaluated more positively than those presented at Nontarget locations (53.83%). The Old or New main effect was driven by Old abstract stimuli seen on that trial (i.e., present during action and evaluation; 54.61%) being evaluated more positively than New stimuli that had not been seen (i.e., present during evaluation only; 53.77%). Both of these results are consistent with increased visual attention to an object being associated with more positive evaluations. The critical interaction between Target or Nontarget and Old or New confirms the findings reported in the main one-way ANOVA analyses. Specifically, Old stimuli (55.25%) were evaluated significantly more positively than New stimuli (53.85%) when they were presented at Target locations (p = 7.60e-5), but Old (53.97%) and New (53.67%) stimuli did not differ significantly when they were presented at Nontarget locations (p = .33). This means that stimuli that were physically interacted with (Old stimuli at Target locations) also received a boost in positive evaluation, which we refer to as a target appreciation effect.
A second full-factor repeated-measures ANOVA directly compared the reaching and keyboard trials in Experiment 2. This analysis was therefore conducted on the 53 participants from Experiment 2 who completed both the reach and keyboard trials. Because the keyboard trials did not have a Start Side, we removed that factor, resulting in a four-factor (2 × 2 × 2 × 2) repeated-measures ANOVA with Reach or Keyboard, Evaluation Side (Left or Right), Old or New, and Target or Nontarget as within-subjects factors. This analysis revealed main effects of Evaluation Side, F(1, 52) = 48.91, p = 5.06e-9, Old or New, F(1, 52) = 10.86, p = .0018, and Target or Nontarget, F(1, 52) = 7.75, p = .0075, as well as an interaction between Target or Nontarget and Old or New, F(1, 52) = 4.06, p = .049. As before, abstract stimuli evaluated on the Right (55.68%) were rated more positively than those on the Left (52.37%), stimuli presented at Target locations (54.41%) were rated more positively than those presented at Nontarget locations (53.64%), and Old stimuli (54.48%) were rated more positively than New stimuli (53.57%). Again, in this second analysis, objects privileged for reaching were also privileged when making affective judgments. That is, images presented on real-world objects to the right side of these right-handed participants received more positive affective evaluations.
Critically, the interaction between Target or Nontarget and Old or New again confirms that attention directed to objects enhances their affective evaluations. We find a target appreciation effect, such that Old stimuli (55.14%) were evaluated significantly more positively than New stimuli (53.67%) when they were presented at Target locations (p = .0014), whereas Old (53.82%) and New (53.47%) stimuli did not differ significantly when they were presented at Nontarget locations (p = .32). This means that it is the specific images, and not the objects, that were appreciated as a result of target selection. In other words, if the object itself had been appreciated, then both Old and New images presented on a target object should have been subsequently appreciated. Instead, only Old (i.e., Target) images were appreciated, and not New (i.e., Novel) images.
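As an illustration, the four-factor within-subjects analysis of Experiment 2 could be set up as sketched below, assuming a hypothetical balanced table of per-participant mean ratings (one row per participant and design cell, with placeholder column and level names). Note that statsmodels' AnovaRM handles multiple within-subjects factors but not between-subjects factors, so the five-factor mixed ANOVA reported above would require a different routine; this is a sketch, not the original analysis code.

import pandas as pd
from statsmodels.stats.anova import AnovaRM
from scipy import stats

# Hypothetical file: one row per participant x Response x EvalSide x OldNew x
# TargetNontarget cell, with the mean rating for that cell.
cells = pd.read_csv("exp2_cell_means.csv")

# Four-factor (2 x 2 x 2 x 2) repeated-measures ANOVA on the cell means.
aov = AnovaRM(cells, depvar="rating", subject="participant",
              within=["response", "eval_side", "old_new", "target"]).fit()
print(aov)

# Follow-up for the Target/Nontarget x Old/New interaction:
# Old vs. New at Target locations, collapsing over the other factors.
at_target = cells[cells["target"] == "target"]
means = at_target.groupby(["participant", "old_new"])["rating"].mean().unstack()
t_stat, p_val = stats.ttest_rel(means["old"], means["new"])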
Discussion
Experiment 2 investigated whether physical interaction with objects is necessary to observe the target appreciation effect found in Experiment 1. Here, we found target appreciation both when participants responded with a reach-to-grasp movement and when participants responded with a key press. These results rule out the possibility that the appreciation of target-associated images is solely due to the appreciation of objects after physical interaction (as in Peck & Shu, 2009; Streicher & Estes, 2015), since target appreciation also occurred when participants responded remotely using a key press.
In Experiment 1, images were presented before the cue word, whereas in Experiment 2 the cue word was presented before the images. Results from Experiment 1 may have been influenced by the relatively long viewing time of the stimuli before the cue word, which determined the target and nontarget images/objects. In an attentional landscape framework (Baldauf & Deubel, 2010), attending to both images/objects as potential targets may have enhanced them, potentially altering subsequent appreciation or devaluation. In Experiment 2, however, participants only saw the images after they were told which object was the target and which was the nontarget on that trial. This was intended to limit any premovement attentional enhancement of the images in Experiment 2. Yet both experiments showed the same pattern of results: target appreciation, and no effects for obstacle-associated or distractor-associated images. Together, these results rule out the possibility that presentation order affected the results and that viewing the images before a response was cued influenced affective ratings.
Neither of the present reaching experiments showed the fluency result observed in Hayes et al. (2008), where targets were evaluated more positively when paired with distractors relative to when they were paired with obstacles (see full repeated-measures ANOVA in Experiment 2 Results). One key difference is our use of short and sturdy screens as graspable objects, whereas Hayes et al. (2008) used a tall vase filled with water as an obstacle. Even though movement trajectories were significantly altered in our obstacle conditions (see Fig. 1d), these nonfluent actions did not impact affective ratings. One explanation for this difference in results is that it is not the fluency of an action that impacts affective evaluations, but perhaps the perceived risk associated with those actions. More research on this topic is certainly needed.
Both Experiment 1 and Experiment 2 are limited by the image presentation latency of the screens on the objects. Our experimental setup did not always allow both images to be presented on the object screens simultaneously. However, trials in both experiments were counterbalanced so that all conditions appeared on the left and right object screens in equal proportions. If, for example, one object screen was slightly faster than the other, and image order impacted affective ratings, then this difference would have impacted all conditions equally. There remains the possibility of an interaction between presentation latency and affective ratings, however; for example, targets might be enhanced when they are presented first but not when nontargets are presented first. Such an interaction cannot be addressed with the current data set.
The full-factor repeated-measures ANOVA analyses in both experiments indicated that more positive evaluations were given to images on the right side of space. While outside the scope of the current study, these findings are consistent with a large literature showing stronger attention in right-handed participants to rightward stimuli in both keyboard-based and reaching-based tasks (Kinsbourne, 1987; Lloyd, Azañón, & Poliakoff, 2010; Wispinski, Truong, Handy, & Chapman, 2017), as well as with laterality effects in stimulus evaluation (Compton, Williamson, Murphy, & Heller, 2002; Goolsby et al., 2009). The present results are also consistent with an embodiment account, in which responding with the same hand as a graspable object on a screen increased measures of emotional liking (Cannon et al., 2010). Even when not acting on an object, as in the Experiment 2 keyboard condition, the affordances of nearby graspable objects may have enhanced cognitive processing of those objects and their associated images (Garrido-Vásquez & Schubö, 2014; Gibson, 1979).
Conclusions
Experiment 1 investigated whether appreciation and devaluation would be observed when evaluating objects that are targets, nontargets, or physical obstacles during real object interaction. Experiment 2 investigated whether physical interaction with objects was even necessary to observe subsequent effects on affective evaluations. Overall, despite several experimental changes between Experiments 1 and 2, we found that unique images presented on target objects during a selective attentional task were affectively appreciated. We speculate that target appreciation is explained by the automatic deployment of attention toward real objects that are being selected and acted on. This result adds to a growing number of studies exploring how responding to an image or object can dramatically enhance its subsequent affective evaluation. Similar to explanations for a distractor devaluation effect (Fenske & Raymond, 2006), we speculate that the target appreciation of real objects may have important adaptive functions. We interpret the biasing of behavior toward previously selected objects through positive emotional attribution as a possible mechanism to promote the repetition of previously advantageous behavior. Other research has shown that valuable stimuli automatically capture attention (Anderson, Laurent, & Yantis, 2011; Chapman, Gallivan, Wong, Wispinski, & Enns, 2015), and so increasing the subjective value of these previously attended target objects may give these objects priority in subsequent neural processing.
These results also highlight the bidirectional entanglement of selective attention and subjective value. There is now a very large literature documenting that recently rewarded objects and object properties involuntarily draw focused spatial attention to the locations in which they occur (Anderson, 2016; Chelazzi, Perlato, Santandrea, & Della Libera, 2013; Failing & Theeuwes, 2018). The present findings help to emphasize that the arrow of influence runs in the other direction as well. Merely attending to an object in preparation for its selection for action serves to increase its subsequent emotional appraisal.
These results are similar to the cue-approach effect, where repeated button presses to a stimulus increase the subsequent evaluation of that stimulus (Schonberg et al., 2014). The cue-approach effect is also thought to alter evaluations through associations with motor-driven attention (in that task, a button press). However, the target appreciation observed in the current study does not require the many stimulus–response repetitions used in cue-approach experiments (Schonberg et al., 2014). More research is needed to investigate the conditions under which attention paid to images or objects subsequently enhances affective evaluations.
In contrast, we did not see any devaluation relative to novel images in any of the experiments reported here. In particular, obstacles did not differ from baseline in any of the experiments. As stated earlier, obstacle avoidance is thought to require more cognitive resources and attention for successful action (Agyei et al., 2016; Baldauf, 2018; Deubel & Schneider, 2004; Johansson et al., 2001). Alternatively, obstacle avoidance may be implemented by inhibiting neural activity corresponding to obstacle locations (Howard & Tipper, 1997; Tipper et al., 1997; Welsh & Elliott, 2004). These accounts predict that obstacles should have been appreciated or devalued, respectively. Why was neither effect observed? Obstacles are thought to first be attended and then rapidly suppressed within an attentional landscape framework (Chapman et al., 2011). Perhaps the time course of attention with respect to emotion matters. In other words, it is possible that the influence of attention on affective evaluations in the current task occurs when obstacles are in a relatively neutral position within a rapidly evolving attentional landscape (Wispinski et al., 2018). On the other hand, perhaps the effect of attention on emotion is truly asymmetrical. Additional research on the attentional status of obstacles before and during avoidance movements is needed.
A further possibility is that salient action outcomes are needed to generate subsequent affective tags. Successful grasping of, or button pressing toward, a target object may provide significant cues for subsequent affective appreciation. In contrast, a salient event such as an obstacle collision (Hayes et al., 2008), or the misidentification of a distractor as a target, may be needed for subsequent devaluation. The fluency of obstacle avoidance movements has been shown to drive subsequent changes in evaluations (Hayes et al., 2008). However, such events are difficult to control experimentally. Future research on these questions is needed for a complete understanding of attention-emotion mechanisms.
Experiment 2 showed that physical interaction is not needed to observe target appreciation. Of note, the effect sizes for target appreciation were roughly the same when responding with a reach-to-grasp movement and when responding with a key press. These results stand in contrast to studies showing that physical interaction with graspable objects causes subsequent appreciation (Peck & Shu, 2009; Streicher & Estes, 2015). However, both of those studies were oriented toward brands and consumer products, so perhaps the context of the subsequent evaluations is critical in determining whether physical interaction is important.
Here, we used graspable objects but changed only the images presented on those objects, in order to present hundreds of unique stimuli to participants. Perhaps to observe devaluation effects, the objects themselves must change (as in Hayes et al., 2008; Masson et al., 2008; Snow et al., 2011; Styrkowiec et al., 2019). In the current study, we show that an image associated with a target becomes affectively enhanced, but that the object itself does not (see the target-old vs. target-new conditions in the full-factor ANOVAs). Perhaps if the object screens were not presenting many different stimuli, the emotional tag would attach to the real-world graspable object rather than to the image. Given the current data, we cannot analyze whether a target-associated image presented at the location of a distractor/obstacle would still be affectively enhanced. In contrast to other studies using computer monitors, our screens were themselves graspable objects, which have strong affordances (Gibson, 1979). These affordances may have altered or enhanced processing relative to nongraspable screens (Garrido-Vásquez & Schubö, 2014; Gibson, 1979). However, affordances are also complex: behavioral and neural studies demonstrate differential processing of two-dimensional and three-dimensional stimuli (Andersen & Kramer, 1993; He & Nakayama, 1995; Snow et al., 2011; Snow, Skiba, Coleman, & Berryhill, 2014). Perhaps future work can investigate the subsequent affective influence of avoiding two-dimensional obstacles en route to two-dimensional targets on a screen using mouse tracking.