
Scene meaningfulness guides eye movements even during mind-wandering

Abstract

During scene viewing, semantic information in the scene has been shown to play a dominant role in guiding fixations compared to visual salience (e.g., Henderson & Hayes, 2017). However, scene viewing is sometimes disrupted by cognitive processes unrelated to the scene. For example, viewers sometimes engage in mind-wandering, that is, having thoughts unrelated to the current task. How do meaning and visual salience account for fixation allocation when the viewer is mind-wandering, and does their influence differ from when the viewer is on-task? We asked participants to study a series of real-world scenes in preparation for a later memory test. Thought probes appeared after a subset of scenes to assess whether participants were on-task or mind-wandering. We used salience maps (Graph-Based Visual Saliency; Harel, Koch, & Perona, 2007) and meaning maps (Henderson & Hayes, 2017) to represent the distributions of visual salience and semantic richness in the scene, respectively. Because visual salience and meaning were represented in the same format, we could directly compare how well each predicted fixation allocation. Our results indicate that fixations prioritized meaningful over visually salient regions in the scene during mind-wandering just as they did during attentive viewing. These results held across the entire viewing period. A re-analysis of an independent study (Krasich, Huffman, Faber, & Brockmole, 2020, Journal of Vision, 20(9), 10) showed similar results. Therefore, viewers appear to prioritize meaningful regions over visually salient regions in real-world scenes even during mind-wandering.
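Because both map types are defined over the same image grid, the comparison reduces to correlating each prediction map with the empirical fixation map for a scene. A minimal R sketch of that logic follows; the object names are illustrative assumptions, not the authors' code:

```r
# Minimal sketch: correlate a scene's meaning and salience maps with its
# empirical fixation-density map. Each argument is assumed to be a numeric
# matrix defined over the same image grid.
compare_maps <- function(meaning, salience, fixation_density) {
  f <- as.vector(fixation_density)
  c(r_meaning  = cor(as.vector(meaning), f),   # meaning map vs. fixations
    r_salience = cor(as.vector(salience), f))  # salience map vs. fixations
}
```

Under this logic, a larger r_meaning than r_salience indicates that fixations track meaningful regions more closely than visually salient ones.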



Notes

1. The unbiased meaning maps were correlated with salience maps at r = .60.

2. We furthermore used the R packages BayesFactor (Version 0.9.12.4.2; Morey & Rouder, 2018), dplyr (Version 1.0.5; Wickham, François, Henry, & Müller, 2020), easystats (Version 0.4.0; Makowski, Ben-Shachar, & Lüdecke, 2020), knitr (Version 1.33.1; Xie, 2015), lme4 (Version 1.1.26; Bates, Mächler, Bolker, & Walker, 2015), papaja (Version 0.1.0.9997; Aust & Barth, 2020), patchwork (Version 1.1.1; Pedersen, 2019), raincloudplots (Version 0.2.0; Allen, Poggiali, Whitaker, Marshall, van Langen, & Kievit, 2021), and tidyr (Version 1.1.3; Wickham & Henry, 2020); a package-loading sketch is given after these notes.

3. We also conducted one-sample t tests to examine whether the unique variance explained by meaning and visual salience, after removing their shared variance, was significantly greater than zero. The results (see the Appendix) show that both meaning and visual salience explained a statistically significant amount of unique variance in all fixation maps.

4. It is also worth noting that the extraction of scene gist may have been facilitated by the fact that, in the present work, scenes of the same type were presented in groups. We did so out of concern that mixing different types of scenes would introduce novelty that reduces the mind-wandering rate (Faber, Radvansky, & D’Mello, 2018).
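As a companion to Note 2, here is a minimal setup sketch, assuming the reported package versions are installed from CRAN (or GitHub, for development versions such as papaja):

```r
# Load the analysis packages listed in Note 2.
library(BayesFactor)     # Bayes factors for common designs
library(dplyr)           # data manipulation
library(easystats)       # model performance and reporting helpers
library(knitr)           # dynamic report generation
library(lme4)            # linear mixed-effects models
library(papaja)          # APA-style manuscripts in R Markdown
library(patchwork)       # plot composition
library(raincloudplots)  # raincloud plots
library(tidyr)           # data tidying
```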

References

  1. Allen, M., Poggiali, D., Whitaker, K., Marshall, T. R., van Langen, J., & Kievit, R. A. (2021). Raincloud plots: A multi-platform tool for robust data visualization [version 2; peer review: 2 approved]. Wellcome Open Research, 4, 63. https://doi.org/10.12688/wellcomeopenres.15191.2

  2. Anderson, N. C., Ort, E., Kruijne, W., Meeter, M., & Donk, M. (2015). It depends on when you look at it: Salience influences eye movements in natural scene viewing and search early in time. Journal of Vision, 15(5), 9. https://doi.org/10.1167/15.5.9


  3. Aust, F., & Barth, M. (2020). papaja: Create APA manuscripts with R Markdown. https://github.com/crsh/papaja

  4. Bates, D., Mächler, M., Bolker, B., & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1–48. https://doi.org/10.18637/jss.v067.i01


  5. Borji, A., & Itti, L. (2012). State-of-the-art in visual attention modeling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1), 185–207.


  6. Bylinskii, Z., Judd, T., Oliva, A., Torralba, A., & Durand, F. (2017). What do different evaluation metrics tell us about saliency models? arXiv:1604.03605 [cs].

  7. Dalmaijer, E. S., Mathôt, S., & van der Stigchel, S. (2014). PyGaze: An open-source, cross-platform toolbox for minimal-effort programming of eyetracking experiments. Behavior Research Methods, 46(4), 913–921.


  8. Faber, M., Krasich, K., Bixler, R., Brockmole, J., & D’Mello, S. (2020). The eye-mind wandering link: Identifying gaze indices of mind wandering across tasks. Journal of Experimental Psychology: Human Perception and Performance.

  9. Faber, M., Radvansky, G. A., & D’Mello, S. K. (2018). Driven to distraction: A lack of change gives rise to mind wandering. Cognition, 173, 133–137. https://doi.org/10.1016/j.cognition.2018.01.007


  10. Foulsham, T., Farley, J., & Kingstone, A. (2013). Mind wandering in sentence reading: Decoupling the link between mind and eye. Canadian Journal of Experimental Psychology/Revue canadienne de psychologie expérimentale, 67(1), 51.


  11. Frank, D. J., Nara, B., Zavagnin, M., Touron, D. R., & Kane, M. J. (2015). Validating older adults’ reports of less mind-wandering: An examination of eye movements and dispositional influences. Psychology and Aging, 30(2), 266–278. https://doi.org/10.1037/pag0000031


  12. Garcia-Diaz, A., Fdez-Vidal, X. R., Pardo, X. M., & Dosil, R. (2012). Saliency from hierarchical adaptation through decorrelation and variance normalization. Image and Vision Computing, 30(1), 51–64.


  13. Greene, M. R., & Fei-Fei, L. (2014). Visual categorization is automatic and obligatory: Evidence from Stroop-like paradigm. Journal of Vision, 14(1), 14.


  14. Harel, J., Koch, C., & Perona, P. (2007). Graph-based visual saliency. In Advances in neural information processing systems (pp. 545–552).

  15. Hayes, T. R., & Henderson, J. M. (2019). Scene semantics involuntarily guide attention during visual search. Psychonomic Bulletin & Review, 26(5), 1683–1689. https://doi.org/10.3758/s13423-019-01642-5


  16. Henderson, J. M. (2003). Human gaze control during real-world scene perception. Trends in Cognitive Sciences, 7(11), 498–504.


  17. Henderson, J. M., Brockmole, J. R., Castelhano, M. S., & Mack, M. (2007). Visual saliency does not account for eye movements during visual search in real-world scenes. In Eye movements: A window on mind and brain (pp. 537–562). Elsevier.

  18. Henderson, J. M., & Hayes, T. R. (2017). Meaning-based guidance of attention in scenes as revealed by meaning maps. Nature Human Behaviour, 1, 7.


  19. Henderson, J. M., Hayes, T. R., Peacock, C. E., & Rehrig, G. (2021). Meaning maps capture the density of local semantic features in scenes: A reply to Pedziwiatr, Kümmerer, Wallis, Bethge & Teufel (2021). Cognition, 212, 104742. https://doi.org/10.1016/j.cognition.2021.104742

  20. Henderson, J. M., Malcolm, G. L., & Schandl, C. (2009). Searching in the dark: Cognitive relevance drives attention in real-world scenes. Psychonomic Bulletin & Review, 16(5), 850–856.


  21. Itti, L., & Koch, C. (2000). A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40(10), 1489–1506.


  22. Joubert, O. R., Rousselet, G. A., Fize, D., & Fabre-Thorpe, M. (2007). Processing scene context: Fast categorization and object interference. Vision Research, 47(26), 3286–3297. https://doi.org/10.1016/j.visres.2007.09.013


  23. Jubera-García, E., Gevers, W., & Van Opstal, F. (2019). Influence of content and intensity of thought on behavioral and pupil changes during active mind-wandering, off-focus and on-task states. Attention, Perception, & Psychophysics. https://doi.org/10.3758/s13414-019-01865-7

  24. Kam, J. W. Y., Dao, E., Farley, J., Fitzpatrick, K., Smallwood, J., Schooler, J. W., & Handy, T. C. (2011). Slow fluctuations in attentional control of sensory cortex. Journal of Cognitive Neuroscience, 23(2), 460–470. https://doi.org/10.1162/jocn.2010.21443


  25. Kane, M. J., Brown, L. H., McVay, J. C., Silvia, P. J., Myin-Germeys, I., & Kwapil, T. R. (2007). For whom the mind wanders, and when: An experience-sampling study of working memory and executive control in daily life. Psychological Science, 18(7), 614–621.


  26. Kane, M. J., Gross, G. M., Chun, C. A., Smeekens, B. A., Meier, M. E., Silvia, P. J., & Kwapil, T. R. (2017). For whom the mind wanders, and when, varies across laboratory and daily-life settings. Psychological Science, 28(9), 1271–1289. https://doi.org/10.1177/0956797617706086


  27. Killingsworth, M. A., & Gilbert, D. T. (2010). A wandering mind is an unhappy mind. Science, 330(6006), 932. https://doi.org/10.1126/science.1192439


  28. Krasich, K., Huffman, G., Faber, M., & Brockmole, J. R. (2020). Where the eyes wander: The relationship between mind wandering and fixation allocation to visually salient and semantically informative static scene content. Journal of Vision, 20(9), 10. https://doi.org/10.1167/jov.20.9.10


  29. Krasich, K., McManus, R., Hutt, S., Faber, M., D’Mello, S. K., & Brockmole, J. R. (2018). Gaze-based signatures of mind wandering during real-world scene processing. Journal of Experimental Psychology: General, 147(8), 1111–1124. https://doi.org/10.1037/xge0000411


  30. Makowski, D., Ben-Shachar, M. S., & Lüdecke, D. (2020). The easystats collection of R packages. GitHub. https://github.com/easystats/easystats

  31. Mathôt, S., Schreij, D., & Theeuwes, J. (2012). OpenSesame: An open-source, graphical experiment builder for the social sciences. Behavior Research Methods, 44(2), 314–324.


  32. Morey, R. D., & Rouder, J. N. (2018). BayesFactor: Computation of Bayes factors for common designs. https://CRAN.R-project.org/package=BayesFactor

  33. Oliva, A., & Torralba, A. (2006). Building the gist of a scene: The role of global image features in recognition. Progress in Brain Research, 155, 23–36.


  34. Parkhurst, D., Law, K., & Niebur, E. (2002). Modeling the role of salience in the allocation of overt visual attention. Vision Research, 42(1), 107–123.


  35. Peacock, C. E., Hayes, T. R., & Henderson, J. M. (2019a). Meaning guides attention during scene viewing, even when it is irrelevant. Attention, Perception, & Psychophysics, 81(1), 20–34. https://doi.org/10.3758/s13414-018-1607-7


  36. Peacock, C. E., Hayes, T. R., & Henderson, J. M. (2019b). The role of meaning in attentional guidance during free viewing of real-world scenes. Acta Psychologica, 198, 102889. https://doi.org/10.1016/j.actpsy.2019.102889


  37. Pedersen, T. L. (2019). patchwork: The composer of plots. https://CRAN.R-project.org/package=patchwork

  38. Pedziwiatr, M. A., Kümmerer, M., Wallis, T. S., Bethge, M., & Teufel, C. (2021). Meaning maps and saliency models based on deep convolutional neural networks are insensitive to image meaning when predicting human fixations. Cognition, 206, 104465.


  39. R Core Team (2019). R: A language and environment for statistical computing. R Foundation for Statistical Computing. https://www.R-project.org/

  40. Reichle, E. D., Reineberg, A. E., & Schooler, J. W. (2010). Eye movements during mindless reading. Psychological Science, 21(9), 1300–1310.


  41. Riche, N., Mancas, M., Duvinage, M., Mibulumukini, M., Gosselin, B., & Dutoit, T. (2013). RARE2012: A multi-scale rarity-based saliency detection with its comparative statistical analysis. Signal Processing: Image Communication, 28(6), 642–658.


  42. Russell, B. C., Torralba, A., Murphy, K. P., & Freeman, W. T. (2008). LabelMe: A database and web-based tool for image annotation. International Journal of Computer Vision, 77(1), 157–173.


  43. Schad, D. J., Nuthmann, A., & Engbert, R. (2012). Your mind wanders weakly, your mind wanders deeply: Objective measures reveal mindless reading at different levels. Cognition, 125(2), 179–194. https://doi.org/10.1016/j.cognition.2012.07.004


  44. Schooler, J. W., Smallwood, J., Christoff, K., Handy, T. C., Reichle, E. D., & Sayette, M. A. (2011). Meta-awareness, perceptual decoupling and the wandering mind. Trends in Cognitive Sciences, 15(7), 319–326. https://doi.org/10.1016/j.tics.2011.05.006


  45. Seli, P., Risko, E. F., & Smilek, D. (2016). On the necessity of distinguishing between unintentional and intentional mind wandering. Psychological Science, 27(5), 685–691. https://doi.org/10.1177/0956797616634068


  46. Smallwood, J. (2013). Distinguishing how from why the mind wanders: A process–occurrence framework for self-generated mental activity. Psychological Bulletin, 139(3), 519–535. https://doi.org/10.1037/a0030010


  47. Steindorf, L., & Rummel, J. (2020). Do your eyes give you away? A validation study of eye-movement measures used as indicators for mindless reading. Behavior Research Methods, 52(1), 162–176.


  48. Tatler, B. W., Baddeley, R. J., & Gilchrist, I. D. (2005). Visual correlates of fixation selection: Effects of scale and time. Vision Research, 45(5), 643–659. https://doi.org/10.1016/j.visres.2004.09.017


  49. Tatler, B. W., Hayhoe, M. M., Land, M. F., & Ballard, D. H. (2011). Eye guidance in natural vision: Reinterpreting salience. Journal of Vision, 11(5), 5.


  50. Theeuwes, J. (2010). Top–down and bottom–up control of visual selection. Acta Psychologica, 135(2), 77–99. https://doi.org/10.1016/j.actpsy.2010.02.006


  51. Torralba, A., Oliva, A., Castelhano, M. S., & Henderson, J. M. (2006). Contextual guidance of eye movements and attention in real-world scenes: The role of global features in object search. Psychological Review, 113(4), 766–786.


  52. Underwood, G., Foulsham, T., van Loon, E., Humphreys, L., & Bloyce, J. (2006). Eye movements during scene inspection: A test of the saliency map hypothesis. European Journal of Cognitive Psychology, 18(3), 321–342. https://doi.org/10.1080/09541440500236661


  53. Unsworth, N., & Robison, M. K. (2018). Tracking arousal state and mind wandering with pupillometry. Cognitive, Affective, & Behavioral Neuroscience, 18(4), 638–664. https://doi.org/10.3758/s13415-018-0594-4


  54. Võ, M. L.-H., & Henderson, J. M. (2010). The time course of initial scene processing for eye movement guidance in natural scene search. Journal of Vision, 10(3), 14. https://doi.org/10.1167/10.3.14


  55. Wagenmakers, E.-J. (2007). A practical solution to the pervasive problems of p values. Psychonomic Bulletin & Review, 14(5), 779–804. https://doi.org/10.3758/BF03194105


  56. Wickham, H., François, R., Henry, L., & Müller, K. (2020). dplyr: A grammar of data manipulation. https://CRAN.R-project.org/package=dplyr

  57. Wickham, H., & Henry, L. (2020). tidyr: Tidy messy data. https://CRAN.R-project.org/package=tidyr

  58. Xiao, J., Hays, J., Ehinger, K. A., Oliva, A., & Torralba, A. (2010). SUN database: Large-scale scene recognition from abbey to zoo. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (pp. 3485–3492).

  59. Xie, Y. (2015). Dynamic documents with R and knitr (2nd ed.). Boca Raton: Chapman & Hall/CRC. https://yihui.org/knitr/


  60. Zhang, H., Anderson, N. C., & Miller, K. F. (2021). Refixation patterns of mind-wandering during real-world scene perception. Journal of Experimental Psychology: Human Perception and Performance, 47(1), 36.


  61. Zhang, H., Miller, K. F., Sun, X., & Cortina, K. S. (2020). Wandering eyes: Eye movements during mind wandering in video lectures. Applied Cognitive Psychology, 34(2). https://doi.org/10.1002/acp.3632



Author information


Corresponding author

Correspondence to Han Zhang.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Practices Statement

Data, code, and stimuli for this paper are accessible at https://osf.io/jf65u/ (eye-tracking analyses) and https://github.com/HanZhang-psych/SceneMeaningMapping (creating meaning maps). None of the experiments were preregistered.

Appendices

Appendix A: One-sample t tests of the squared semipartial correlations (unique R²)

Main Study

Table 9 One-sample t tests examining whether meaning and visual salience explained unique variance in fixation maps

Krasich et al. (2020)

Table 10 One-sample t tests examining whether meaning and visual salience explained unique variance in fixation maps
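For concreteness, the computation summarized in Tables 9 and 10 can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the per-scene map objects (lists of matrices over the same image grid) are assumed:

```r
# Squared semipartial correlation (unique R^2) of each predictor with the
# fixation map, computed per scene and then tested against zero across scenes.
unique_r2 <- function(meaning, salience, fixation) {
  m <- as.vector(meaning); s <- as.vector(salience); f <- as.vector(fixation)
  c(meaning  = cor(f, resid(lm(m ~ s)))^2,  # meaning with salience partialed out
    salience = cor(f, resid(lm(s ~ m)))^2)  # salience with meaning partialed out
}

# meaning_maps, salience_maps, fixation_maps: assumed lists of per-scene matrices.
r2 <- t(mapply(unique_r2, meaning_maps, salience_maps, fixation_maps))
t.test(r2[, "meaning"],  mu = 0, alternative = "greater")  # unique R^2 of meaning > 0?
t.test(r2[, "salience"], mu = 0, alternative = "greater")  # unique R^2 of salience > 0?
```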

Appendix B: Performance indices of linear mixed models in the paper

The Main Study

Table 11 Performance indices for models in Table 3
Table 12 Performance indices for models in Table 4

The Re-analysis of Krasich et al. (2020)

Table 13 Performance indices for models in Table 7
Table 14 Performance indices for models in Table 8

Appendix C: Instructions

At the beginning of the study, the experimenter announced the following to the participant:

In this task, we will show you a series of pictures on the screen. Your task is to remember each picture for a later memory test. There will be three types of pictures: exteriors (outside of a building, for example, street views), interiors (inside of a building, for example, a bedroom), and natural views (for example, mountains). These three types of pictures will be divided into three blocks. In each block, you will see only one type of picture. Each block has a study phase and a test phase. In the study phase, we will show you 60 pictures of the same type one by one. You will have 10 s to remember each picture. In the test phase, your memory for these pictures will be tested. We will present a series of pictures, and you need to indicate whether you just saw each picture. Then, we will move on to the next block (next type of pictures). You can rest between blocks.

During the study, your eye movements will be recorded. We would like you to reduce your body and head movement for better tracking quality.

One last thing: during the study phase, occasionally there will be a “thought-probe” asking if you were “mind-wandering”. Here is more information (Give the following to the participant, ask them to read it, and ask them if they have any questions).

Every once in a while, the task will temporarily stop and you will be presented with a screen asking you to indicate whether you were on-task or mind-wandering just before the screen appeared. Being on-task means that, just before the screen appeared, you were focused on completing the task and were not thinking about anything unrelated to the task. Some examples of on-task thoughts include thoughts about the picture, or thoughts about your performance on the task.

On the other hand, mind-wandering means that, just before the screen appeared, you were thinking about something completely unrelated to the task. Some examples of mind-wandering include thoughts about what to eat for dinner, thoughts about plans you have with friends, or thoughts about an upcoming test.

Importantly, mind-wandering can occur either because you intentionally decided to think about things that are unrelated to the task, or because your thoughts unintentionally drifted away to task-unrelated thoughts, despite your best intentions to focus on the task. When the thought-sampling screen is presented, we would like you to indicate whether any mind-wandering you might experience is intentional or unintentional.

Please be honest in reporting your thoughts. It is perfectly normal to mind-wander during the task. Your participation credit will not be affected by those mind-wandering reports. Also, the location of the thought probes is random. Please complete the task just as usual.

Do you have any remaining questions?

Appendix D: Meaning and salience maps for each probed picture

Fig. 7 Meaning (with center bias) and GBVS salience maps for each probed picture


Cite this article

Zhang, H., Anderson, N.C. & Miller, K.F. Scene meaningfulness guides eye movements even during mind-wandering. Atten Percept Psychophys (2021). https://doi.org/10.3758/s13414-021-02370-6


Keywords

  • Mind-wandering
  • Scene perception
  • Attention
  • Eye-tracking