Abstract
A growing number of studies suggest that semantic knowledge can influence the control of gaze in scenes. For example, observers are more likely to look toward objects that are semantically related to the currently fixated object. Recent evidence also suggests that an object’s functional orientation can bias gaze direction. However, it is unknown whether these semantic and functional relationships can interact to determine gaze control. To address this issue, the present study assessed whether the functional arrangement of multiple objects can influence gaze control. Participants fixated a central object (e.g., a key) flanked by two peripheral objects. After a brief delay, participants were free to shift their gaze toward the peripheral object of their choice. One of the peripheral objects was semantically related to the central object (e.g., a lock), and the objects were arranged to depict a functional or non-functional interaction (e.g., a key pointing toward or away from a lock). When the orientation of the central object was manipulated, participants were more likely to look in the direction this object was pointing. Moreover, the functional arrangement of objects modulated this central orienting bias. However, when the orientation of the peripheral objects was manipulated, only the peripheral objects’ semantic relationships influenced gaze control. Together, these findings suggest that functional relationships play an important role in the allocation of gaze, and can interact with semantic relationships to determine gaze control.
Introduction
Our eyes move to a new location approximately three to four times per second. This means that every 250–300 ms, our visual system must choose where we will look next in a scene (Rayner, 2009). These choices are not based on random selection. Instead, where we look reflects a variety of visual and cognitive processes. For example, gaze control is influenced by a variety of image statistics, such as spatial variance, edge density, and occlusion (Krieger, Rentschler, Hauske, Schill, & Zetzsche, 2000), and observers are more likely to fixate visually salient regions of a scene, such as brightly colored objects and areas of high contrast (Itti & Koch, 2001; Koch & Ullman, 1985). There are also many cognitive factors that influence the control of gaze, including the global scene context (Torralba, Oliva, Castelhano, & Henderson, 2006), the nature of the observers’ task (Castelhano, Mack, & Henderson, 2009), and the momentary task relevance of objects (Land & Hayhoe, 2001). As a result, recent computational models of gaze control include both bottom-up and top-down factors in predicting where observers will look in a scene (e.g., Tatler, Brockmole, & Carpenter, 2017).
Recently, a growing number of studies have addressed the role of semantic knowledge in gaze control (see Wu, Wick, & Pomplun, 2014b, for a review). Most of these studies have focused on two broad classes of effects. One group of studies has examined the semantic relationships between objects and the global scene context. These studies have shown that observers can rapidly extract the context, or gist, of a scene, and use it to identify which objects are likely to be present (Loftus & Mackworth, 1978; Underwood & Foulsham, 2006; but see Henderson, Weeks, & Hollingworth, 1999) and guide their eyes toward the likely locations of objects (Neider & Zelinsky, 2006; Torralba et al., 2006). A second group of studies has examined the semantic relationships between individual objects. These studies indicate that observers are more likely to look toward objects that are semantically related to the currently fixated object (Hwang, Wang, & Pomplun, 2011). For example, if observers are currently fixating a table, they are more likely to subsequently fixate a chair rather than a fireplace. These effects are observed even when objects are not located near one another or when the gist of a scene is removed. However, these effects are eliminated when the locations of objects are randomized within a scene (Wu, Wang, & Pomplun, 2014a). Thus, although semantic relationships can influence gaze independently of a broader scene context, these effects are sensitive to the spatial dependencies among objects. In other words, semantically related objects are typically found in specific locations relative to each other, and their semantic relationships no longer influence gaze control when these spatial dependencies are violated (see also Castelhano & Heaven, 2011; Mack & Eckstein, 2011).
Spatial dependencies among semantically related objects are not simply defined by the locations of objects within a scene. In many cases, semantic relationships are also based on objects’ potential to interact with each other. For example, a key and lock can be used together to secure or open a door. The success of such interactions usually requires a specific type of spatial dependency, namely, the functional orientation of objects (e.g., a key must be oriented a particular way to be inserted into a lock). Thus, in addition to their locations relative to each other, object groupings can vary according to their functional arrangement within a scene, with proper arrangements depicting objects that are both semantically related to each other (e.g., a pitcher and glass) and oriented to perform a common functional interaction (e.g., a pitcher’s spout pointing toward a glass). When such arrangements are depicted within a display, cognitive processes such as object recognition (Green & Hummel, 2006; Roberts & Humphreys, 2011a), attentional allocation (Roberts & Humphreys, 2011b), and visual working memory (O’Donnell, Clement, & Brockmole, 2018) are facilitated, because these arrangements enable both perceptual grouping and representational compression in memory. Our goal in the present study was to assess whether similar cognitive benefits extend to the control of gaze. We know from the studies discussed above that semantic relationships among objects influence gaze control, but we do not know whether the visual system also uses information about the functional arrangement of objects to determine where observers will look in a scene.
We took as our starting point prior observations that an individual object’s functional orientation can influence the control of gaze. Specifically, in a recent study by Cronin and Brockmole (2016), participants fixated a central object flanked by two peripheral squares. After a brief delay, participants were free to shift their gaze toward the square of their choice. When the central object was oriented so that its functional end (e.g., a teapot’s spout) pointed toward one of these squares, participants were more likely to look in the direction this object was pointing. Thus, an object’s functional orientation biased gaze direction in the absence of any additional semantic information in a display. To determine whether the functional arrangement of multiple objects can influence gaze control, we therefore asked whether observers simply look where an object is pointing, or whether they also consider what that object is pointing toward. While the former reflects object-level biases associated with a single object, the latter reflects modulation of these biases by the scene-level relationships among multiple objects.
Experiment 1
In Experiment 1, we assessed whether the functional arrangement of objects can influence gaze control. Participants fixated a central object flanked by two peripheral objects. After a brief delay, participants were free to shift their gaze toward the peripheral object of their choice. One of the peripheral objects was semantically related to the central object, and the central object pointed toward or away from this object (in a control condition, no semantically related object was present). Based on previous evidence (e.g., Cronin & Brockmole, 2016), we expected the central object’s orientation to bias gaze direction, with participants being more likely to look in the direction this object was pointing. However, if the functional arrangement of objects additionally influences gaze control, this central orienting bias should be modulated by the identities and locations of the peripheral objects.
Methods
Participants
A group of 37 University of Notre Dame undergraduates participated for course credit (see Note 1). One participant was excluded as a statistical outlier, with an average saccade latency that exceeded ±3 standard deviations from the group mean.
Apparatus and stimuli
Stimuli were adapted from Snodgrass and Vanderwart (1980), and consisted of eight images of objects. The images were presented in black on a gray background, and were arranged into stimulus displays, which consisted of a central object flanked by two peripheral objects. The central object subtended approximately 7° × 7°, and could be oriented so that its functional end pointed toward either of the two peripheral objects. The peripheral objects subtended approximately 5° × 5°, and were positioned 9° to the left and right of center. We then manipulated the semantic and functional relationships among the objects to create three types of displays (see Fig. 1A). Functional displays contained a peripheral object that was semantically related to the central object, and the central object always pointed toward this object. Non-functional displays also contained a peripheral object that was semantically related to the central object, but the central object always pointed away from this object. Neutral displays contained two peripheral objects that were unrelated to the central object, and the central object could point toward either of these two objects. The identity of the central object and the identities and locations of the peripheral objects were counterbalanced within each display type, resulting in a total of 32 stimulus displays.
Stimuli were presented on a 21.5-in LCD monitor with a refresh rate of 60 Hz. Participants sat 49 cm from the monitor so that it subtended 51.7° horizontally and 34° vertically. Participants’ eye movements were recorded using an EyeLink 2K eye-tracking system (SR Research, Inc.) with a sampling rate of 1,000 Hz.
Procedure and design
At the beginning of each trial, a fixation cross (0.5° × 0.5°) appeared in the center of the screen. After a randomly determined interval between 500 and 1,500 ms, a stimulus display was presented in the center of the screen (see Fig. 1B). Participants were instructed to maintain fixation on the central object in this display. After a randomly determined interval between 300 and 500 ms, a small white square (0.2° × 0.2°) appeared at the center of the screen for 150 ms. This square served as a “go signal,” indicating that participants were free to shift their gaze toward one of the peripheral objects. A trial ended once participants made a leftward or rightward saccade that subtended at least 4°. Thus, our dependent variables were restricted to this initial saccade’s direction and latency. A trial also ended if participants made a saccade prior to the onset of the go signal (these trials were recycled later in the experiment).
Participants completed four blocks of 96 trials, for a total of 384 trials. Of these trials, 25% contained functional displays, 25% contained non-functional displays, and 50% contained neutral displays. The 32 stimulus displays were presented randomly and equally often. As a result, the identity of the central object and the identities and locations of the peripheral objects were counterbalanced across trials.
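The block structure and display proportions described above can be sketched as a simple trial-list constructor. This is an illustrative reconstruction, not the authors' actual experiment code, and the function name and condition labels are assumptions.

```python
import random

def build_trial_list(n_trials=384, seed=None):
    """Build a shuffled trial list with the proportions reported above:
    25% functional, 25% non-functional, and 50% neutral displays."""
    rng = random.Random(seed)
    trials = (["functional"] * (n_trials // 4)
              + ["non-functional"] * (n_trials // 4)
              + ["neutral"] * (n_trials // 2))
    rng.shuffle(trials)
    return trials

trials = build_trial_list(seed=1)
print(trials.count("neutral"))  # 192
```

With 384 trials, this yields 96 functional, 96 non-functional, and 192 neutral trials, matching the reported proportions; counterbalancing of object identities and locations would be handled within each display type.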
Results
To test whether the central object’s orientation influenced gaze independently of any semantic relationships, we first analyzed the proportion of trials on which participants made a saccade in the direction the central object was pointing in neutral displays (functionally congruent saccades). A one-sample t-test revealed that the proportion of these saccades was significantly greater than the 50% chance level (M = 63.1%, SD = 18.3%), t(35) = 4.30, p < .001. Thus, consistent with previous evidence, the central object’s orientation biased gaze direction (Cronin & Brockmole, 2016). More importantly, a repeated-measures analysis of variance (ANOVA) revealed that the functional arrangement of objects modulated this central orienting bias, F(2, 70) = 15.15, p < .001. Compared to neutral displays, participants were more likely to make functionally congruent saccades in functional displays (M = 72.0%, SD = 20.8%), p < .001, but were less likely to make these saccades in non-functional displays (M = 51.2%, SD = 25.7%), p < .001 (see Fig. 2A). Although gaze direction differed as a function of display type, saccade latency did not (M = 285 ms, SD = 45 ms), F(2, 70) = 0.36, p = .694.
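For readers who wish to run this kind of chance-level analysis themselves, the one-sample t statistic can be computed directly from per-participant proportions. The sketch below uses hypothetical data for illustration, not the study's actual data.

```python
import math
import statistics

def one_sample_t(values, mu):
    """One-sample t statistic: t = (M - mu) / (SD / sqrt(n)),
    with SD the sample standard deviation (n - 1 denominator)."""
    n = len(values)
    m = statistics.mean(values)
    sd = statistics.stdev(values)
    return (m - mu) / (sd / math.sqrt(n))

# Hypothetical per-participant proportions of functionally congruent
# saccades in neutral displays (illustrative values only).
props = [0.50, 0.55, 0.60, 0.65, 0.70, 0.75, 0.80, 0.55]

# Test against the 50% chance level.
t = one_sample_t(props, mu=0.5)  # ≈ 3.67
```

The resulting t statistic would then be evaluated against a t distribution with n − 1 degrees of freedom (35 in the actual study).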
Discussion
In Experiment 1, the central object’s orientation biased gaze direction, with participants being more likely to look in the direction this object was pointing (Cronin & Brockmole, 2016). However, this central orienting bias was modulated by the functional arrangement of objects. In functional displays, where the central object’s orientation was consistent with the location of the semantically related object, this bias was enhanced. In non-functional displays, where the central object’s orientation competed with the location of the semantically related object, this bias was reduced. Thus, a fixated object’s functional orientation and the peripheral objects’ semantic relationships biased gaze in an additive fashion. Together, these findings reveal that the functional arrangement of multiple objects can influence the allocation of gaze.
Experiment 2
In Experiment 1, we found that the functional arrangement of objects influenced gaze control. However, we only manipulated the orientation of the central object. In Experiment 2, we assessed whether the functional arrangement of objects can influence gaze control when the orientation of the peripheral objects is manipulated. As in Experiment 1, one of the peripheral objects was semantically related to the central object, and both peripheral objects now pointed toward or away from this object. Based on previous evidence (e.g., Hwang et al., 2011), we expected the peripheral objects’ semantic relationships to bias gaze direction, with participants being more likely to look toward the semantically related object in a display. However, if the functional arrangement of objects additionally influences gaze control, this gaze bias should be modulated by the orientation of the peripheral objects.
Methods
Participants
A new group of 37 University of Notre Dame undergraduates participated for course credit. One participant was excluded as a statistical outlier, with an average saccade latency that exceeded ±3 standard deviations from the group mean.
Apparatus and stimuli
Stimuli consisted of the same eight images from Experiment 1. However, the identities of the central and peripheral objects were switched, allowing us to manipulate the orientation of the peripheral objects. As a result of this manipulation, the central object now subtended approximately 5° × 5°. The peripheral objects now subtended approximately 7° × 7°, and could be oriented toward or away from the central object. Again, there were three types of displays (see Fig. 3). Functional displays contained a peripheral object that was semantically related to the central object, and both peripheral objects pointed toward the central object. Non-functional displays also contained a peripheral object that was semantically related to the central object, but both peripheral objects pointed away from the central object. Neutral displays contained two peripheral objects that were semantically unrelated to the central object, and both of these objects could point toward or away from the central object. All other experimental details were identical to those in Experiment 1.
Results
To test whether the peripheral objects’ orientation influenced gaze independently of any semantic relationships, we first analyzed the proportion of trials on which participants looked toward the right object in neutral displays (this choice was arbitrary, given that rightward and leftward saccades together accounted for 100% of the data). A paired-samples t-test revealed that participants were just as likely to look toward this object when the peripheral objects pointed inward (M = 51.1%, SD = 16.0%) as when they pointed outward (M = 52.0%, SD = 16.4%), t(35) = 0.70, p = .488. Thus, the orientation of the peripheral objects did not bias gaze direction.
To test whether the functional arrangement of objects influenced gaze control, we next analyzed the proportion of trials on which participants made a saccade toward the semantically related object in functional and non-functional displays (semantically congruent saccades). A one-sample t-test revealed that the proportion of these saccades was significantly greater than the 50% chance level (M = 59.9%, SD = 15.0%), t(35) = 3.97, p < .001. Thus, consistent with previous evidence, the peripheral objects’ semantic relationships biased gaze direction (Hwang et al., 2011; Wu, Wang, & Pomplun, 2014a). However, a paired-samples t-test revealed that the functional arrangement of objects did not modulate this gaze bias, t(35) = 1.41, p = .167. Specifically, participants were just as likely to make semantically congruent saccades in functional (M = 60.8%, SD = 15.1%) and non-functional displays (M = 59.1%, SD = 15.7%; see Fig. 2B). As in Experiment 1, saccade latency did not differ as a function of display type (M = 307 ms, SD = 61 ms), F(2, 70) = 0.68, p = .513.
Discussion
In Experiment 2, the peripheral objects’ semantic relationships biased gaze direction, with participants being more likely to look toward the semantically related object in a display (Hwang et al., 2011; Wu, Wang, & Pomplun, 2014a). However, in contrast to the central orienting bias observed in Experiment 1, this gaze bias was not modulated by the functional arrangement of objects. Together, these findings suggest that the orientation of the peripheral objects does not play a role in the allocation of gaze. Instead, only a fixated object’s functional orientation appears to influence gaze control.
General discussion
A growing number of studies suggest that semantic knowledge can influence the control of gaze in scenes. For example, observers are more likely to look toward objects that are semantically related to the currently fixated object (Hwang et al., 2011; Wu, Wang, & Pomplun, 2014a). Recent evidence also suggests that an object’s functional orientation can bias gaze direction (Cronin & Brockmole, 2016). However, it is unknown whether these semantic and functional relationships can interact to determine gaze control. To address this issue, the present study assessed whether the functional arrangement of multiple objects can influence gaze control. Participants fixated a central object (e.g., a key) flanked by two peripheral objects. After a brief delay, participants were free to shift their gaze toward the peripheral object of their choice. One of the peripheral objects was semantically related to the central object (e.g., a lock), and the objects were arranged to depict a functional or non-functional interaction (e.g., a key pointing toward or away from a lock). When the orientation of the central object was manipulated, participants were more likely to look in the direction this object was pointing. Moreover, the functional arrangement of objects modulated this central orienting bias. However, when the orientation of the peripheral objects was manipulated, only the peripheral objects’ semantic relationships influenced gaze control. Together, these findings reveal that a fixated object’s functional orientation and the peripheral objects’ semantic relationships can bias gaze in an additive fashion.
Overall, the present findings have important implications for models of gaze control. According to many theoretical accounts, the global scene context plays an important role in the allocation of gaze. For example, the gist of a scene can inform observers about the likely locations of objects and guide their eyes toward these objects (Neider & Zelinsky, 2006; Torralba et al., 2006). More recently, some studies have found that the semantic relationships (Hwang et al., 2011; Wu, Wang, & Pomplun, 2014a) and spatial dependencies among objects (Castelhano & Heaven, 2011; Mack & Eckstein, 2011) can influence gaze independently of a broader scene context. The present study provides further evidence for these claims, revealing that semantic and functional relationships can bias gaze direction even in relatively simple displays that lack a broader scene context.
Notably, the present findings also reveal that functional relationships play an important role in the allocation of gaze. Although a number of studies have examined the effects of semantic relationships on gaze, few studies have addressed whether the functional arrangement of objects can influence gaze control. Nonetheless, the functional arrangement of objects has been shown to influence a variety of other cognitive processes. For example, when pairs of objects are semantically related and oriented to perform a common functional interaction, observers can recognize these objects more accurately (Green & Hummel, 2006; Roberts & Humphreys, 2011a) and remember a greater number of them (O’Donnell et al., 2018). An object’s functional orientation has also been found to bias attention (Roberts & Humphreys, 2011b) and gaze direction (Cronin & Brockmole, 2016). In the present study, the functional arrangement of multiple objects influenced gaze control. This suggests that functional relationships play a greater role in the allocation of gaze than many theoretical accounts assume (see also Castelhano & Witherspoon, 2016).
In addition to these findings, the present study adds to a growing body of research on the semantic guidance of attention. As a number of studies indicate, semantic relationships not only influence the control of gaze in scenes, but can also influence attentional allocation in other tasks. For example, when observers are asked to search for a target object in a visual search display, they are more likely to fixate objects that are semantically related to this object (Belke, Humphreys, Watson, Meyer, & Telling, 2008; de Groot, Huettig, & Olivers, 2016; Moores, Laiti, & Chelazzi, 2003). Semantic relationships can also bias attention in relatively simple displays, even when these relationships are task-irrelevant (Malcolm, Rattinger, & Shomstein, 2016). However, these effects can be modulated by other cognitive factors, such as performing a cognitively demanding task (Belke et al., 2008). In the present study, both semantic and functional relationships influenced the control of gaze. This suggests that the semantic guidance of attention may be sensitive to spatial factors, such as an object’s functional orientation.
Finally, interesting parallels can be drawn between the present findings and the spatial cueing literature, which suggests that central cues such as arrows and spatial words can bias attention toward consistent locations in the visual field (e.g., Hommel, Pratt, Colzato, & Godijn, 2001). In such cases, spatial information associated with a cue is used to prioritize locations for further processing. It is possible that the central orienting bias observed in the present study may arise from a similar underlying mechanism (Roberts & Humphreys, 2011b). Nonetheless, the present findings point to a more complicated system of cue interpretation than has been outlined by the spatial cueing literature alone. The allocation of attention is not only influenced by a fixated object’s orientation, but can also be modulated by the identities and locations of the peripheral objects, a conclusion that has only come to light by varying the semantic and functional relationships between cues and potential targets. Thus, a variety of cognitive processes associated with fixated objects, peripheral objects, and their relationships can interact to bias attention and gaze direction.
In summary, the present study assessed whether the functional arrangement of multiple objects can influence gaze control. When the orientation of the central object was manipulated, participants were more likely to look in the direction this object was pointing. Moreover, the functional arrangement of objects modulated this central orienting bias. However, when the orientation of the peripheral objects was manipulated, only the peripheral objects’ semantic relationships influenced gaze control. Together, these findings suggest that functional relationships play an important role in the allocation of gaze, and can interact with semantic relationships to determine gaze control.
Author Note
Andrew Clement is now at the University of Toronto. Address correspondence concerning this article to: as.clement@utoronto.ca.
Open Practices Statement
The data and materials for all experiments will be made available by the corresponding author. None of the experiments were preregistered.
Notes
1. Although we used a relatively small set of stimuli, the potential generalizability of our findings is supported by several previous studies. For example, an object’s functional orientation has been shown to bias gaze direction for a variety of stimuli, including household objects, vehicles, and animals (Cronin & Brockmole, 2016), and the effects of semantic relationships on gaze have been observed for an even broader range of stimuli (Hwang et al., 2011; Wu, Wang, & Pomplun, 2014a). Thus, any findings observed in the present study are unlikely to be due to our specific choice of stimuli.
References
Belke, E., Humphreys, G. W., Watson, D. G., Meyer, A. S., & Telling, A. L. (2008). Top-down effects of semantic knowledge in visual search are modulated by cognitive but not perceptual load. Perception & Psychophysics, 70(8), 1444-1458.
Castelhano, M. S., & Heaven, C. (2011). Scene context influences without scene gist: Eye movements guided by spatial associations in visual search. Psychonomic Bulletin & Review, 18(5), 890-896.
Castelhano, M. S., Mack, M. L., & Henderson, J. M. (2009). Viewing task influences eye movement control during active scene perception. Journal of Vision, 9(3), 1-15.
Castelhano, M. S., & Witherspoon, R. L. (2016). How you use it matters: Object function guides attention during visual search in scenes. Psychological Science, 27(5), 606-621.
Cronin, D. A., & Brockmole, J. R. (2016). Evaluating the influence of a fixated object’s spatio-temporal properties on gaze control. Attention, Perception, & Psychophysics, 78(4), 996-1003.
de Groot, F., Huettig, F., & Olivers, C. N. L. (2016). When meaning matters: The temporal dynamics of semantic influences on visual attention. Journal of Experimental Psychology: Human Perception and Performance, 42(2), 180-196.
Green, C., & Hummel, J. E. (2006). Familiar interacting object pairs are perceptually grouped. Journal of Experimental Psychology: Human Perception and Performance, 32(5), 1107-1119.
Henderson, J. M., Weeks, P. A., & Hollingworth, A. (1999). The effects of semantic consistency on eye movements during complex scene viewing. Journal of Experimental Psychology: Human Perception and Performance, 25(1), 210-228.
Hommel, B., Pratt, J., Colzato, L., & Godijn, R. (2001). Symbolic control of visual attention. Psychological Science, 12(5), 360-365.
Hwang, A. D., Wang, H.-C., & Pomplun, M. (2011). Semantic guidance of eye movements in real-world scenes. Vision Research, 51(10), 1192-1205.
Itti, L., & Koch, C. (2001). Computational modelling of visual attention. Nature Reviews Neuroscience, 2(3), 194-203.
Koch, C., & Ullman, S. (1985). Shifts in selective visual attention: Toward the underlying neural circuitry. Human Neurobiology, 4(4), 219-227.
Krieger, G., Rentschler, I., Hauske, G., Schill, K., & Zetzsche, C. (2000). Object and scene analysis by saccadic eye-movements: An investigation with higher-order statistics. Spatial Vision, 13(2-3), 201-214.
Land, M. F., & Hayhoe, M. (2001). In what ways do eye movements contribute to everyday activities? Vision Research, 41(25-26), 3559-3565.
Loftus, G. R., & Mackworth, N. H. (1978). Cognitive determinants of fixation location during picture viewing. Journal of Experimental Psychology: Human Perception and Performance, 4(4), 565-572.
Mack, S. C., & Eckstein, M. P. (2011). Object co-occurrence serves as a contextual cue to guide and facilitate visual search in a natural viewing environment. Journal of Vision, 11(9), 1-16.
Malcolm, G. L., Rattinger, M., & Shomstein, S. (2016). Intrusive effects of semantic information on visual selective attention. Attention, Perception, & Psychophysics, 78(7), 2066-2078.
Moores, E., Laiti, L., & Chelazzi, L. (2003). Associative knowledge controls deployment of visual selective attention. Nature Neuroscience, 6(2), 182-189.
Neider, M. B., & Zelinsky, G. J. (2006). Scene context guides eye movements during visual search. Vision Research, 46(5), 614-621.
O’Donnell, R. E., Clement, A., & Brockmole, J. R. (2018). Semantic and functional relationships among objects increase the capacity of visual working memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 44(7), 1151-1158.
Rayner, K. (2009). Eye movements and attention in reading, scene perception, and visual search. The Quarterly Journal of Experimental Psychology, 62(8), 1457-1506.
Roberts, K. L., & Humphreys, G. W. (2011a). Action relations facilitate the identification of briefly presented objects. Attention, Perception, & Psychophysics, 73(2), 597-612.
Roberts, K. L., & Humphreys, G. W. (2011b). Action-related objects influence the distribution of visuospatial attention. The Quarterly Journal of Experimental Psychology, 64(4), 669-688.
Snodgrass, J. G., & Vanderwart, M. (1980). A standardized set of 260 pictures: Norms for name agreement, image agreement, familiarity, and visual complexity. Journal of Experimental Psychology: Human Learning and Memory, 6(2), 174-215.
Tatler, B. W., Brockmole, J. R., & Carpenter, R. H. S. (2017). LATEST: A model of saccadic decisions in space and time. Psychological Review, 124(3), 267-300.
Torralba, A., Oliva, A., Castelhano, M. S., & Henderson, J. M. (2006). Contextual guidance of eye movements and attention in real-world scenes: The role of global features in object search. Psychological Review, 113(4), 766-786.
Underwood, G., & Foulsham, T. (2006). Visual saliency and semantic incongruency influence eye movements when inspecting pictures. The Quarterly Journal of Experimental Psychology, 59(11), 1931-1949.
Wu, C.-C., Wang, H.-C., & Pomplun, M. (2014a). The roles of scene gist and spatial dependency among objects in the semantic guidance of attention in real-world scenes. Vision Research, 105, 10-20.
Wu, C.-C., Wick, F. A., & Pomplun, M. (2014b). Guidance of visual attention by semantic information in real-world scenes. Frontiers in Psychology, 5(54), 1-13.
Clement, A., O’Donnell, R.E. & Brockmole, J.R. The functional arrangement of objects biases gaze direction. Psychon Bull Rev 26, 1266–1272 (2019). https://doi.org/10.3758/s13423-019-01607-8