Towards Multi-modal Interaction with Interactive Paint

  • Nicholas Torres
  • Francisco R. Ortega
  • Jonathan Bernal
  • Armando Barreto
  • Naphtali D. Rishe
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10907)


We present a Multi-Modal Interactive Paint application. Our work illustrates shortcomings in current multi-modal interaction and presents design strategies to address and alleviate these issues, in particular from the perspective of input in a regular desktop environment. A series of challenges is listed, and each is addressed individually with its corresponding strategy in our discussion of design practices for multi-modality. Based on the findings identified in this paper, we also identify areas to improve in future iterations of similar multi-modal interaction applications. These improvements should alleviate shortcomings in our current design and provide further opportunities for research on multi-modal interaction.


Multi-modal Interaction · Multi-touch · Modality



We acknowledge the support of the National Science Foundation under Grant Nos. I/UCRC IIP-1338922, AIR IIP-1237818, SBIR IIP-1330943, III-Large IIS-1213026, MRI CNS-1532061, OISE 1541472, MRI CNS-1429345, MRI CNS-0821345, MRI CNS-1126619, CREST HRD-0833093, I/UCRC IIP-0829576, MRI CNS-0959985, and RAPID CNS-1507611. We also thank Garrett Lemieux and Andrew Mitchell for their help with the first prototype.



Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Nicholas Torres (1)
  • Francisco R. Ortega (1)
  • Jonathan Bernal (1)
  • Armando Barreto (1)
  • Naphtali D. Rishe (1)
  1. Florida International University, Miami, USA
