Abstract
In recent years the number of intelligent systems has grown rapidly, and classical interaction devices such as mouse and keyboard are being replaced in some use cases. Novel, goal-based interaction systems, e.g. those based on gesture and speech, allow natural control of various devices. However, these are prone to misinterpreting the user's intention. In this work we present a method for supporting goal-based interaction using multimodal interaction systems. By combining speech and gesture we are able to compensate for the uncertainties of both interaction methods, thus improving intention recognition. Using a prototypical system, we have demonstrated the usability of such a system in a qualitative evaluation.
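The core idea of the abstract can be sketched as a simple fusion of per-device confidence scores from the two modalities. This is an illustrative sketch only, not the paper's implementation: all function names, device names, and weights below are hypothetical.

```python
# Hypothetical sketch of the multimodal fusion idea: each modality
# (pointing gesture, speech) yields a confidence score per candidate
# device, and combining them compensates for the uncertainty of either
# modality alone. Names and weights are illustrative, not from the paper.

def fuse_intents(gesture_scores, speech_scores, w_gesture=0.5, w_speech=0.5):
    """Combine per-device confidence scores from two modalities.

    A device missing from one modality contributes 0.0 for it, so a
    device supported by both modalities outranks one seen by only one.
    """
    devices = set(gesture_scores) | set(speech_scores)
    fused = {
        d: w_gesture * gesture_scores.get(d, 0.0)
           + w_speech * speech_scores.get(d, 0.0)
        for d in devices
    }
    # The most likely intended target is the device with the highest
    # fused score.
    return max(fused, key=fused.get), fused

# Example: the pointing gesture is ambiguous between a lamp and a TV,
# but the spoken command ("turn on the light") disambiguates it.
target, scores = fuse_intents(
    gesture_scores={"lamp": 0.55, "tv": 0.45},
    speech_scores={"lamp": 0.8, "radio": 0.2},
)
print(target)  # lamp
```

In this toy example the gesture alone barely favors the lamp, while the speech evidence tips the fused decision clearly toward it, illustrating how one modality can correct the other's ambiguity.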
Copyright information
© 2013 Springer-Verlag Berlin Heidelberg
Cite this paper
Braun, A., Fischer, A., Marinc, A., Stocklöw, C., Majewski, M. (2013). Context-Based Bounding Volume Morphing in Pointing Gesture Application. In: Kurosu, M. (eds) Human-Computer Interaction. Interaction Modalities and Techniques. HCI 2013. Lecture Notes in Computer Science, vol 8007. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-39330-3_16
Print ISBN: 978-3-642-39329-7
Online ISBN: 978-3-642-39330-3