Abstract
This paper presents a novel approach to multimodal interaction that combines the user's mental activity (thoughts and emotions), facial expressions, and head movements. To avoid problems inherent to computer vision (sensitivity to lighting changes, reliance on camera position, etc.), the proposed approach does not use optical techniques. Furthermore, to keep human communication and control smooth and to avoid other environmental artifacts, only non-verbal information is used. Head movements (rotations) are detected by a bi-axial gyroscope; facial expressions and gaze are identified by electromyography (EMG) and electrooculography (EOG); emotions and thoughts are monitored by electroencephalography (EEG). To validate the proposed approach, we developed an application in which the user navigates a virtual world using head movements alone; we chose Google Street View as the virtual world. The application was designed for later integration with an electric wheelchair, replacing the virtual world with the real one. A first evaluation of the system is provided.
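The abstract describes the control pipeline only at a high level. As a purely illustrative aid, the following is a minimal Python sketch of how bi-axial gyroscope readings (yaw and pitch rates) might be thresholded into discrete navigation commands for a Street-View-like virtual world. The sensor conventions, threshold values, and command names are all assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: thresholding bi-axial gyroscope readings into
# discrete navigation commands. NOT the authors' implementation; sign
# conventions and thresholds are assumed for illustration.
from dataclasses import dataclass
from enum import Enum, auto


class Command(Enum):
    TURN_LEFT = auto()
    TURN_RIGHT = auto()
    MOVE_FORWARD = auto()
    IDLE = auto()


@dataclass
class GyroSample:
    yaw_rate: float    # deg/s; positive = head turning right (assumed convention)
    pitch_rate: float  # deg/s; positive = head nodding down (assumed convention)


def classify(sample: GyroSample,
             yaw_threshold: float = 30.0,
             pitch_threshold: float = 25.0) -> Command:
    """Map angular rates to a single navigation command.

    Thresholds are illustrative; a real system would calibrate them
    per user and debounce consecutive detections.
    """
    if sample.yaw_rate > yaw_threshold:
        return Command.TURN_RIGHT
    if sample.yaw_rate < -yaw_threshold:
        return Command.TURN_LEFT
    if sample.pitch_rate > pitch_threshold:
        return Command.MOVE_FORWARD
    return Command.IDLE


if __name__ == "__main__":
    # Simulated stream of gyroscope samples (deg/s).
    stream = [GyroSample(5.0, 2.0), GyroSample(45.0, 0.0),
              GyroSample(-40.0, 1.0), GyroSample(0.0, 30.0)]
    for s in stream:
        print(classify(s))
```

In the full system as described, such gyroscope-derived commands would be combined with EMG/EOG expression events and EEG-monitored mental state before driving the Street View navigation.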
Copyright information
© 2011 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Carrino, F., Tscherrig, J., Mugellini, E., Abou Khaled, O., Ingold, R. (2011). Head-Computer Interface: A Multimodal Approach to Navigate through Real and Virtual Worlds. In: Jacko, J.A. (eds) Human-Computer Interaction. Interaction Techniques and Environments. HCI 2011. Lecture Notes in Computer Science, vol 6762. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-21605-3_25
DOI: https://doi.org/10.1007/978-3-642-21605-3_25
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-21604-6
Online ISBN: 978-3-642-21605-3
eBook Packages: Computer Science (R0)