
TouchGlass: Raycasting from a Glass Surface to Point at Physical Objects in Public Exhibits

  • Florent Cabric
  • Emmanuel Dubois
  • Pourang Irani
  • Marcos Serrano
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11748)

Abstract

Physical objects such as natural items or fine art pieces are often placed behind glass cases to protect them from dust and damage. Interacting with such objects is generally indirect, relying for example on an adjacent touch interface that diverts users’ attention away from the object. In this paper, we explore whether the glass case itself can be used as an input surface to point at and select distant physical objects. With such an approach, the glass case also acts as a physical delimiter for interaction, preventing unintended activations. We explore this idea in two steps. First, we conduct an informative study with 46 participants to identify the most appropriate “walk-up and use” technique. Our results show that using a ray orthogonal to the glass surface is the most natural approach in a public setting. Next, we further investigate this orthogonal raycasting technique in a target acquisition experiment that evaluates how target size, target distance, the presence of spatial references and the user’s head position relative to the glass case affect selection performance. Results reveal that using the glass as a touch surface allows users to easily select targets as small as 3 cm located up to 35 cm behind the glass. From these results, we provide a set of guidelines for designing interactive exhibits around a touch-sensitive glass case.
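To make the orthogonal raycasting idea concrete, the sketch below illustrates one possible mapping from a touch on the glass to a physical target behind it: the touch point defines a ray along the glass normal, and the nearest target intersected within the 35 cm depth evaluated in the paper is selected. This is a minimal illustrative sketch in Python, not the authors’ implementation; the glass-plane coordinate frame, the spherical target model and the function names (orthogonal_ray, select_target) are assumptions introduced here.

    import numpy as np

    # Illustrative sketch only: the glass is modeled as the z = 0 plane and a touch
    # at (x, y) casts a ray along the glass normal (0, 0, 1) toward targets behind it.
    GLASS_NORMAL = np.array([0.0, 0.0, 1.0])

    def orthogonal_ray(touch_xy):
        """Return (origin, direction) of the ray cast from a touch point on the glass."""
        origin = np.array([touch_xy[0], touch_xy[1], 0.0])
        return origin, GLASS_NORMAL

    def select_target(touch_xy, targets, max_depth=0.35):
        """Select the nearest spherical target hit by the orthogonal ray.

        targets: list of (center, radius) in metres, e.g. radius 0.015 for a 3 cm
        target; max_depth keeps selection within 35 cm of the glass, the range
        evaluated in the paper.
        """
        origin, direction = orthogonal_ray(touch_xy)
        best, best_depth = None, float("inf")
        for center, radius in targets:
            to_center = center - origin
            depth = np.dot(to_center, direction)                      # distance along the ray
            lateral = np.linalg.norm(to_center - depth * direction)   # distance off the ray
            if 0.0 <= depth <= max_depth and lateral <= radius and depth < best_depth:
                best, best_depth = (center, radius), depth
        return best

    # Example: a touch at (0.10, 0.20) m selects the 3 cm target 30 cm behind the glass.
    targets = [(np.array([0.10, 0.20, 0.30]), 0.015),
               (np.array([0.50, 0.20, 0.10]), 0.015)]
    print(select_target((0.10, 0.20), targets))

In practice, the effective target radius could be enlarged to tolerate touch imprecision, in line with the range of target sizes studied in the experiment.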

Keywords

Touch input · Distant pointing · Transparent touch surface · Absolute pointing · Evaluation · Physical objects

Notes

Acknowledgments

This work is partially funded by the French region Occitanie, the neOCampus project (University Toulouse 3) and the AP2 project (ANR grant: AP2 ANR-15-CE23-0001).

Supplementary material

Supplementary material 1: 488593_1_En_15_MOESM1_ESM.mp4 (MP4, 15.7 MB)


Copyright information

© IFIP International Federation for Information Processing 2019

Authors and Affiliations

  • Florent Cabric (1)
  • Emmanuel Dubois (1)
  • Pourang Irani (1, 2)
  • Marcos Serrano (1)
  1. University of Toulouse, Toulouse, France
  2. HCI Lab, University of Manitoba, Winnipeg, Canada
