Multimodal Mobile Collaboration Prototype Used in a Find, Fix, and Tag Scenario

  • Gregory M. Burnett
  • Thomas Wischgoll
  • Victor Finomore
  • Andres Calvo
Part of the Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering book series (LNICST, volume 110)


Given recent technological advancements in mobile devices, military research initiatives are investigating these devices as a means to support multimodal cooperative interactions. Military components execute dynamic combat and humanitarian missions while dismounted and on the move. Paramount to their success is timely, effective information sharing and mission planning. In this paper, we describe a prototype multimodal collaborative Android application designed to support real-time sharing of battlefield perspectives and the acquisition and dissemination of information among distributed operators. The prototype was demonstrated in a scenario in which teammates used different features of the software to collaboratively identify and deploy a virtual tracker-type device on hostile entities. Results showed significant improvements in completion times when users visually shared their perspectives versus relying on verbal descriptors. Additionally, the use of shared video significantly reduced the number of utterances required to complete the task.


Keywords: Multimodal interfaces, mobile computing, remote collaboration





Copyright information

© ICST Institute for Computer Science, Social Informatics and Telecommunications Engineering 2013

Authors and Affiliations

  • Gregory M. Burnett, Air Force Research Laboratory, WPAFB, USA
  • Thomas Wischgoll, Department of Computer Science and Engineering, Wright State University, USA
  • Victor Finomore, Air Force Research Laboratory, WPAFB, USA
  • Andres Calvo, Ball Aerospace, Fairborn, USA
