
GeoJournal, Volume 81, Issue 2, pp 153–167

Evaluating differences in spatial visual attention in wayfinding strategy when using 2D and 3D electronic maps

  • Tsu-Chiang Lei
  • Shih-Chieh Wu
  • Chi-Wen Chao
  • Su-Hsin Lee
Article

Abstract

With the evolution of mapping technology, electronic maps are gradually moving away from traditional 2D formats and increasingly use 3D formats to represent environmental features. However, these two types of spatial maps may produce different modes of visual attention, leading to different spatial wayfinding (or search) decisions. This study designs a search task for a spatial object to examine whether the two map types indeed produce different visual attention and decision making. We used eye-tracking technology to record the visual attention of 44 test subjects with normal eyesight while they viewed 2D and 3D maps. The two map types cover the same area but differ in composition, material, and viewing angle. We used a t test to analyze differences in eye-movement indices, and applied spatial autocorrelation to analyze whether fixation points cluster and how strongly. The results show that, aside from search time, there are significant differences between the 2D and 3D electronic maps in fixation time and saccade amplitude. The spatial autocorrelation analysis of the distribution of fixation points shows that on the 2D electronic map, fixation points cluster within a range of about 12° from the center, accompanied by a shorter viewing time and larger saccade amplitude; on the 3D electronic map, fixation points cluster within a range of about 9° from the center, accompanied by a longer viewing time and smaller saccade amplitude. Together, the two statistical tests demonstrate that 2D and 3D electronic maps elicit different viewing behaviors. The 2D electronic map is more likely to produce fast browsing, in which rapid eye movements piece together preliminary information about the overall environment; basic information is obtained quickly, but at the cost of detail. The 3D electronic map, by contrast, produces more focused browsing: longer fixations allow the user to gather detailed information from points of interest and thereby obtain more information about the environment (such as material, color, and depth) and about the interaction between people and the environment. However, this mode requires a longer viewing time and greater directed attention, and may therefore be less suitable for prolonged use. Based on these findings, the study suggests that future electronic maps could combine 2D and 3D modes to display map content simultaneously. Such a mixed viewing mode could provide a more effective viewing interface for human–machine interaction in cyberspace.
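
As a rough illustration of the two analyses summarized above (a t test on eye-movement indices and a spatial autocorrelation test on the clustering of fixation points), the following sketch computes Welch's t test and a global Moran's I on synthetic data. The data values, index choice, bandwidth, and inverse-distance weighting scheme are illustrative assumptions, not the paper's actual measurements or weights.

    # Minimal sketch (Python): Welch's t test on one eye-movement index and a
    # global Moran's I for fixation-point clustering. All data are synthetic;
    # the paper's actual indices, thresholds, and spatial weights are assumptions.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical fixation durations (ms) for 44 subjects on each map type.
    fix_2d = rng.normal(250, 40, 44)   # shorter fixations assumed for the 2D map
    fix_3d = rng.normal(320, 50, 44)   # longer fixations assumed for the 3D map
    t, p = stats.ttest_ind(fix_2d, fix_3d, equal_var=False)  # Welch's t test
    print(f"fixation time: t = {t:.2f}, p = {p:.4f}")

    def morans_i(values, coords, bandwidth):
        """Global Moran's I with inverse-distance weights inside a bandwidth."""
        n = len(values)
        z = values - values.mean()
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        w = np.zeros_like(d)
        mask = (d > 0) & (d <= bandwidth)   # ignore self-pairs and distant pairs
        w[mask] = 1.0 / d[mask]
        return (n / w.sum()) * (z @ w @ z) / (z @ z)

    # Hypothetical fixation points (degrees of visual angle from screen centre)
    # with durations that decay away from the centre, mimicking central clustering.
    pts = rng.normal(0.0, 6.0, size=(200, 2))
    dur = np.exp(-np.linalg.norm(pts, axis=1) / 6.0) + rng.normal(0, 0.05, 200)
    print("Moran's I of fixation durations:", round(morans_i(dur, pts, 3.0), 3))

A positive Moran's I in this sketch indicates that fixations with similar durations tend to sit near one another, which is one simple way to express the "strength of aggregation" idea in code.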

Keywords

Visual attention · Eye-tracking · Mapping · Cyberspace

Copyright information

© Springer Science+Business Media Dordrecht 2014

Authors and Affiliations

  • Tsu-Chiang Lei (1)
  • Shih-Chieh Wu (2), corresponding author
  • Chi-Wen Chao (3)
  • Su-Hsin Lee (4)
  1. Department of Urban Planning and Spatial Information, Feng Chia University, Taichung, Taiwan
  2. Graduate Institute of Civil and Hydraulic Engineering, Feng Chia University, Taichung, Taiwan
  3. Graduate Institute of Environmental and Spatial Information Science and Technology, Feng Chia University, Taichung, Taiwan
  4. Department of Geography, National Taiwan Normal University, Taipei, Taiwan
