
Improving the visual momentum of tethered viewpoint displays using spatial cue augmentation


Abstract

A tethered viewpoint in an operator interface visualizes remote situations according to a robot’s heading direction. Because the tethered view adheres to the line-of-sight requirement, it yields lower visual momentum when visualizing dense environments, such as during indoor teleoperation tasks. This problem occurs because the tethered view only partially visualizes spatial information, unlike the bird’s-eye view. Operators are thus inhibited from building their spatial mental models because view transitions happen as the robot moves. This paper presents an approach to improve the visual momentum of the tethered view by complementing the omitted spatial information. The approach augments the excluded areas of a tethered view with simplified spatial cues. These cues are intended to illuminate the basic spatial structure of the surroundings and can thus help operators construct their spatial mental models. The presented approach was evaluated in a simulated telerobot environment. The results indicated that the augmented view possessed higher visual momentum, as exhibited by a lowered workload and enhanced levels of both spatial perception and situational awareness.
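The core idea of the abstract can be illustrated with a minimal sketch: map points that fall outside the tethered view’s field of view are reduced to simplified cues (bearing and distance, clamped to the nearest view edge) so the interface can render them on the excluded areas. The function name, the cue representation, and the default field of view below are illustrative assumptions, not the authors’ implementation.

```python
import math

def augment_spatial_cues(robot_pose, points, fov_deg=90.0):
    """Classify 2D map points against a tethered view's field of view.

    Points inside the FOV are already visible in the tethered view;
    points outside it are returned as simplified spatial cues (bearing
    relative to the robot's heading, distance, and the nearest FOV edge)
    that an interface could draw on the view's excluded border areas.
    """
    x, y, heading = robot_pose  # heading in radians
    half_fov = math.radians(fov_deg) / 2.0
    visible, cues = [], []
    for px, py in points:
        bearing = math.atan2(py - y, px - x) - heading
        # Normalize bearing into (-pi, pi]
        bearing = (bearing + math.pi) % (2 * math.pi) - math.pi
        if abs(bearing) <= half_fov:
            visible.append((px, py))
        else:
            # Clamp the cue to the nearest FOV edge so it can be
            # rendered at the border of the tethered view.
            edge = half_fov if bearing > 0 else -half_fov
            cues.append({"bearing": bearing,
                         "edge": edge,
                         "distance": math.hypot(px - x, py - y)})
    return visible, cues

# A point straight ahead stays visible; a point behind the robot
# becomes a simplified cue for the excluded area.
visible, cues = augment_spatial_cues((0.0, 0.0, 0.0),
                                     [(2.0, 0.0), (-2.0, 0.0)])
```

In this sketch the cue keeps only bearing and distance, discarding shape and appearance, which mirrors the abstract’s emphasis on conveying the basic spatial structure rather than a full reconstruction of the omitted regions.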


Figures 1–6 (available in the full text)


Author information

Correspondence to Wei-Chung Teng.


About this article


Cite this article

Tara, R.Y., Teng, W. Improving the visual momentum of tethered viewpoint displays using spatial cue augmentation. Intel Serv Robotics 10, 313–322 (2017). https://doi.org/10.1007/s11370-017-0231-z


Keywords

  • Operator interface
  • Visual momentum
  • Tethered view
  • Spatial cues
  • Telerobot