Augmenting Looking, Pointing and Reaching Gestures to Enhance the Searching and Browsing of Physical Objects

  • David Merrill
  • Pattie Maes
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4480)

Abstract

In this paper we present a framework for attaching information to physical objects in a way that can be interactively browsed and searched in a hands-free, multi-modal, and personalized manner that leverages users’ natural looking, pointing and reaching behaviors. The system uses small infrared transponders on objects in the environment and worn by the user to achieve dense, on-object visual feedback usually possible only in augmented reality systems, while improving on interaction style and requirements for wearable gear. We discuss two applications that have been implemented: a tutorial about the parts of an automobile engine and a personalized supermarket assistant. The paper continues with a user study investigating browsing and searching behaviors in the supermarket scenario, and concludes with a discussion of findings and future work.
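As an illustration of the searching interaction described in the abstract, the sketch below shows how a personalized filter might decide which IR-tagged products to illuminate when the worn transponder reports the tags the user is currently looking or pointing at. This is a minimal sketch under assumed names (Tag, ShoppingAssistant, on_tags_in_view, light_up), none of which come from the paper; it is not the authors' implementation.

```python
# Hypothetical sketch of a personalized supermarket assistant: match IR-tagged
# products in the user's view against a shopping list and illuminate the matches.
# All class and method names here are illustrative assumptions, not the paper's API.

from dataclasses import dataclass


@dataclass
class Tag:
    """An IR transponder attached to a product on the shelf."""
    tag_id: int
    product: str


class ShoppingAssistant:
    def __init__(self, shopping_list):
        # Items the user has asked to find, lower-cased for case-insensitive matching.
        self.wanted = {item.lower() for item in shopping_list}

    def on_tags_in_view(self, tags):
        """Called when the worn transponder reports the tags the user is
        currently looking or pointing at; returns the tags to illuminate."""
        matches = [t for t in tags if t.product.lower() in self.wanted]
        for tag in matches:
            self.light_up(tag)
        return matches

    def light_up(self, tag):
        # Placeholder for sending an 'illuminate' command back to the
        # on-object transponder (e.g. blinking its LED).
        print(f"illuminate tag {tag.tag_id}: {tag.product}")


if __name__ == "__main__":
    assistant = ShoppingAssistant(["oatmeal", "olive oil"])
    shelf = [Tag(1, "Oatmeal"), Tag(2, "Cornflakes"), Tag(3, "Olive Oil")]
    assistant.on_tags_in_view(shelf)  # illuminates tags 1 and 3
```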

Keywords

Completion Time · Visual Feedback · Augmented Reality · Physical Object · User Study
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Copyright information

© Springer Berlin Heidelberg 2007

Authors and Affiliations

  • David Merrill (1)
  • Pattie Maes (1)
  1. MIT Media Lab, Cambridge, MA 02139, USA
