Acceptable Dwell Time Range for Densely Arranged Object Selection Using Video Mirror Interfaces

  • Kazuyoshi Murata
  • Yu Shibuya
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9732)

Abstract

We evaluated an acceptable dwell time range for decreasing erroneous selections in a video mirror interface. In this interface, users select a target object by moving their palm over the object and dwelling on it. We focused on situations in which objects are densely arranged and the user must select a target from among them. We experimentally evaluated the effects of dwell time, object size, and distance between objects on the object selection task. The results indicated that a dwell time of 0.3 s is the most appropriate for decreasing both erroneous selections and unpleasant user experiences. These results can contribute to defining an effective basis for dwell time in selection operations in gesture-based interaction systems.
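The selection mechanism described above can be sketched as a small state machine: an object is selected once the palm cursor has hovered over it continuously for the dwell threshold (0.3 s in the paper's finding), and moving off an object resets the timer. The sketch below is a hypothetical illustration under those assumptions, not the authors' implementation; the class name, `update` method, and injectable clock are invented for clarity.

```python
import time


class DwellSelector:
    """Minimal sketch of dwell-based selection (hypothetical API).

    An object is selected when the palm cursor stays over it for at
    least `dwell_time` seconds; moving to a different object (or to
    empty space) restarts the timer.
    """

    def __init__(self, dwell_time=0.3, clock=time.monotonic):
        self.dwell_time = dwell_time
        self.clock = clock        # injectable for testing
        self._hovered = None      # object currently under the cursor
        self._enter_time = None   # when the current hover began

    def update(self, hovered_object):
        """Call once per video frame with the object under the palm
        (or None). Returns the selected object, or None."""
        now = self.clock()
        if hovered_object != self._hovered:
            # Cursor moved onto a different object: restart the dwell timer.
            self._hovered = hovered_object
            self._enter_time = now if hovered_object is not None else None
            return None
        if self._hovered is not None and now - self._enter_time >= self.dwell_time:
            selected = self._hovered
            # Require leaving and re-entering before selecting again.
            self._hovered = None
            self._enter_time = None
            return selected
        return None
```

With a fake clock, hovering object "A" for less than 0.3 s returns nothing, while holding past the threshold triggers a single selection; this mirrors how a shorter dwell time would raise erroneous selections and a longer one would feel sluggish.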

Keywords

Dwell time · Object selection · Gesture interaction system · Video mirror interface


Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Aoyama Gakuin University, Kanagawa, Japan
  2. Kyoto Institute of Technology, Kyoto, Japan
