Context-Based Bounding Volume Morphing in Pointing Gesture Application

  • Andreas Braun
  • Arthur Fischer
  • Alexander Marinc
  • Carsten Stocklöw
  • Martin Majewski
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8007)

Abstract

In the last few years the number of intelligent systems has grown rapidly, and classical interaction devices such as mouse and keyboard are being replaced in some use cases. Novel, goal-based interaction systems, e.g. based on gesture and speech, allow natural control of various devices. However, these are prone to misinterpretation of the user’s intention. In this work we present a method for supporting goal-based interaction using multimodal interaction systems. By combining speech and gesture we are able to compensate for the uncertainties of both interaction methods, thus improving intention recognition. Using a prototypical system we have demonstrated the usability of this approach in a qualitative evaluation.
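The abstract's core idea, compensating the uncertainty of one modality with evidence from the other, can be illustrated with a small sketch. This is a hypothetical example, not the authors' implementation: each device in the environment receives a gesture score (how close the pointing ray passes to it) and a speech score (how well the utterance matches its keywords), and the product of the two favours the device supported by both modalities. All names and parameters here are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's algorithm): fusing an uncertain
# pointing gesture with an uncertain spoken command to select a device.
import math

def gesture_score(ray_origin, ray_dir, target, radius=1.0):
    """Score in (0, 1]: closeness of the pointing ray to the target.

    ray_dir is assumed to be a unit vector.
    """
    # Vector from the ray origin to the target, projected onto the ray.
    to_t = [t - o for t, o in zip(target, ray_origin)]
    proj = sum(a * b for a, b in zip(to_t, ray_dir))
    closest = [o + proj * d for o, d in zip(ray_origin, ray_dir)]
    dist = math.dist(closest, target)
    # Gaussian falloff: 1.0 on the ray, decaying with distance from it.
    return math.exp(-(dist / radius) ** 2)

def speech_score(utterance, keywords):
    """Fraction of a device's keywords heard in the utterance."""
    words = utterance.lower().split()
    return sum(k in words for k in keywords) / len(keywords)

def fuse(devices, ray_origin, ray_dir, utterance):
    """Rank devices by combined gesture x speech evidence."""
    scored = {
        name: gesture_score(ray_origin, ray_dir, pos)
              * speech_score(utterance, kws)
        for name, (pos, kws) in devices.items()
    }
    return max(scored, key=scored.get), scored

# Two devices sit close together, so pointing alone is ambiguous;
# the spoken command disambiguates.
devices = {
    "lamp": ((2.0, 0.0, 1.0), ["lamp", "light"]),
    "tv":   ((2.1, 0.3, 1.0), ["tv", "television"]),
}
best, _ = fuse(devices, (0.0, 0.0, 1.0), (1.0, 0.0, 0.0), "turn on the light")
print(best)  # -> lamp
```

Both devices lie near the pointing ray, so their gesture scores are similar; the word "light" tips the combined score toward the lamp, which is the kind of mutual disambiguation the abstract describes.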

Keywords

Multimodal Interaction · Speech Recognition · Goal-based Interaction · Gesture Recognition



Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Andreas Braun (1)
  • Arthur Fischer (2)
  • Alexander Marinc (1)
  • Carsten Stocklöw (1)
  • Martin Majewski (2)
  1. Fraunhofer Institute for Computer Graphics Research IGD, Darmstadt, Germany
  2. Technische Universität Darmstadt, Darmstadt, Germany
