A Multimodal Fusion Framework for Children’s Storytelling Systems

  • Danli Wang
  • Jie Zhang
  • Guozhong Dai
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3942)

Abstract

Storytelling by a child, as a training activity, significantly influences the child's linguistic ability, logical thinking, imagination, and creativity. Many software-based storytelling applications already exist, but most are unsuitable for Chinese children. Because pre-school and early primary-school children have a limited vocabulary, speech-based and pen-based input are considered the most effective input modes for them; however, there is as yet no effective multimodal integration method for children's storytelling systems. In this paper, we propose a multimodal fusion framework that uses pen and speech techniques and incorporates both context information and linguistic attributes of the Chinese language into its design. Based on the proposed framework, we formulate specific integration methods and develop a prototype system.
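
The abstract does not detail the integration algorithm itself; purely as an illustration, the sketch below shows one common approach to pen-speech fusion, aligning a deictic utterance with the pen position recorded closest to it in time. All identifiers here (PenEvent, SpeechEvent, fuse, FUSION_WINDOW) and the 1.5-second window are hypothetical assumptions, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class PenEvent:
    t: float   # timestamp in seconds
    x: float   # pen position on the drawing canvas
    y: float

@dataclass
class SpeechEvent:
    t: float   # timestamp of the recognized utterance
    text: str  # recognizer output (shown in English for readability)

# Hypothetical fusion window: pair events no more than 1.5 s apart.
FUSION_WINDOW = 1.5

def fuse(pen_events, speech_events, window=FUSION_WINDOW):
    """Pair each utterance with the pen event nearest to it in time.

    A deictic utterance such as "put the rabbit here" is resolved by
    the pen position closest in time, provided both events fall within
    the fusion window; otherwise the utterance is left unresolved for
    a dialogue manager to handle.
    """
    fused = []
    for s in speech_events:
        nearby = [p for p in pen_events if abs(p.t - s.t) <= window]
        if nearby:
            nearest = min(nearby, key=lambda p: abs(p.t - s.t))
            fused.append({"utterance": s.text, "target": (nearest.x, nearest.y)})
        else:
            fused.append({"utterance": s.text, "target": None})
    return fused

if __name__ == "__main__":
    pens = [PenEvent(t=2.1, x=120.0, y=80.0)]
    speech = [SpeechEvent(t=2.4, text="put the rabbit here")]
    print(fuse(pens, speech))
    # [{'utterance': 'put the rabbit here', 'target': (120.0, 80.0)}]
```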

Keywords

Chinese Child · Multimodal Interface · Linguistic Attribute · Fusion Framework · Multimodal Fusion

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Danli Wang 1
  • Jie Zhang 1
  • Guozhong Dai 1

  1. Institute of Software, Chinese Academy of Sciences, Beijing, China