Disambiguation Canvas: A Precise Selection Technique for Virtual Environments

  • Henrique G. Debarba
  • Jerônimo G. Grandi
  • Anderson Maciel
  • Luciana Nedel
  • Ronan Boulic
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8119)

Abstract

We present the disambiguation canvas, a technique developed for easy, accurate and fast selection of small objects and objects inside cluttered virtual environments. The disambiguation canvas relies on selection by progressive refinement; it uses a mobile device and consists of two steps. In the first step, the user defines a subset of objects by means of the device's orientation sensors and a volume casting pointing technique. The second step consists of disambiguating the desired target among the previously defined subset of objects, and is accomplished using the mobile device's touchscreen. By relying on the touchscreen for the last step, the user can disambiguate among hundreds of objects at once. User tests show that our technique performs faster than ray-casting for targets with approximately 0.53 degrees of angular size, and is also much more accurate for all the tested target sizes.
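For illustration, the following is a minimal Python sketch of the two-step selection described above, not the authors' implementation: a cone cast driven by the device orientation keeps a candidate subset, which is then spread over the touchscreen so that a single touch resolves the target. The names (SceneObject, cone_select, canvas_layout, pick), the grid layout and the cone test are assumptions made for this sketch.

import math
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    direction: tuple  # unit vector from the viewpoint toward the object

def cone_select(objects, pointing_dir, aperture_deg):
    # Step 1: volume (cone) casting driven by the device orientation sensors.
    # Keep every object whose direction lies within the cone aperture.
    cos_limit = math.cos(math.radians(aperture_deg))
    return [o for o in objects
            if sum(a * b for a, b in zip(o.direction, pointing_dir)) >= cos_limit]

def canvas_layout(candidates, width, height):
    # Step 2, layout: spread the candidate subset over the touchscreen canvas.
    # A plain grid is used here; the paper's actual layout may differ.
    if not candidates:
        return {}
    cols = math.ceil(math.sqrt(len(candidates)))
    rows = math.ceil(len(candidates) / cols)
    cell_w, cell_h = width / cols, height / rows
    return {o.name: ((i % cols + 0.5) * cell_w, (i // cols + 0.5) * cell_h)
            for i, o in enumerate(candidates)}

def pick(layout, touch_x, touch_y):
    # Step 2, disambiguation: map the touch point to the closest candidate.
    return min(layout, key=lambda n: (layout[n][0] - touch_x) ** 2 +
                                     (layout[n][1] - touch_y) ** 2)

# Example: two nearby objects fall inside a 10-degree cone; the touch decides.
objects = [SceneObject("teapot", (0.0, 0.0, 1.0)),
           SceneObject("cup", (0.1, 0.0, 0.995))]
subset = cone_select(objects, pointing_dir=(0.0, 0.0, 1.0), aperture_deg=10.0)
layout = canvas_layout(subset, width=1080.0, height=1920.0)
print(pick(layout, touch_x=300.0, touch_y=500.0))  # -> "teapot"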

Keywords

Selection techniques · 3D interaction · usability evaluation · progressive refinement


Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Henrique G. Debarba (1, 2)
  • Jerônimo G. Grandi (1)
  • Anderson Maciel (1)
  • Luciana Nedel (1)
  • Ronan Boulic (2)
  1. Instituto de Informática, Universidade Federal do Rio Grande do Sul (UFRGS), Brazil
  2. École Polytechnique Fédérale de Lausanne (EPFL), Switzerland