Combining User Modeling and Machine Learning to Predict Users’ Multimodal Integration Patterns
Temporal as well as semantic constraints on fusion are at the heart of multimodal system processing. The goal of the present work is to develop user-adaptive temporal thresholds with improved performance characteristics over state-of-the-art fixed ones, which can be accomplished by leveraging both empirical user modeling and machine learning techniques to handle the large individual differences in users’ multimodal integration patterns. Using simple Naive Bayes learning methods and a leave-one-out training strategy, our model correctly predicted 88% of users’ mixed speech and pen signal input as either unimodal or multimodal, and 91% of their multimodal input as either sequentially or simultaneously integrated. In addition to predicting a user’s multimodal pattern in advance of receiving input, predictive accuracies were also evaluated after the first signal’s end-point detection—the earliest time when a speech/pen multimodal system makes a decision regarding fusion. This system-centered metric yielded accuracies of 90% and 92%, respectively, for classification of unimodal/multimodal and sequential/simultaneous input patterns. In addition, empirical modeling revealed a .92 correlation between users’ multimodal integration pattern and their likelihood of interacting multimodally, which may have accounted for the superior learning obtained with training over heterogeneous user data rather than data partitioned by user subtype. Finally, in large part due to guidance from user modeling, the techniques reported here required as few as 15 samples to predict a “surprise” user’s input patterns.
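To make the classification approach concrete, the sketch below implements a stdlib-only Gaussian Naive Bayes classifier with leave-one-out evaluation, as described in the abstract. The single feature (intermodal onset lag in milliseconds) and the training data are hypothetical illustrations, not the paper's actual features or measurements; the labels SIM/SEQ stand for simultaneous versus sequential integration patterns.

```python
import math
from statistics import mean, pstdev

def gaussian_pdf(x, mu, sigma):
    """Gaussian likelihood; variance clamped to avoid degenerate classes."""
    sigma = max(sigma, 1e-6)
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

def train_nb(samples):
    """samples: list of (lag_ms, label) -> {label: (prior, mean, stddev)}."""
    model = {}
    n = len(samples)
    for label in {lbl for _, lbl in samples}:
        vals = [x for x, lbl in samples if lbl == label]
        model[label] = (len(vals) / n, mean(vals), pstdev(vals))
    return model

def predict(model, lag_ms):
    """Pick the class maximizing prior * likelihood."""
    return max(model, key=lambda lbl: model[lbl][0] * gaussian_pdf(lag_ms, *model[lbl][1:]))

# Hypothetical data: "SIM" users overlap pen and speech (lag near zero),
# "SEQ" users leave a measurable gap between the two signals.
data = [(-120, "SIM"), (-60, "SIM"), (0, "SIM"), (40, "SIM"), (80, "SIM"),
        (600, "SEQ"), (900, "SEQ"), (1200, "SEQ"), (1500, "SEQ"), (2000, "SEQ")]

# Leave-one-out evaluation, mirroring the paper's training strategy:
# hold out each sample, train on the rest, score the held-out prediction.
correct = sum(predict(train_nb(data[:i] + data[i + 1:]), x) == y
              for i, (x, y) in enumerate(data))
print(f"LOO accuracy: {correct}/{len(data)}")
```

Because the two synthetic classes are well separated on this single feature, leave-one-out accuracy is perfect here; the paper's 88–92% figures reflect the much harder case of real users and mixed input.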
Keywords: Training Sample · User Modeling · Input Pattern · Machine Learning Technique · Integration Pattern