
Analyzing Multimodal Communication around a Shared Tabletop Display

  • Conference paper
ECSCW 2009

Abstract

Communication between people is inherently multimodal. People employ speech, facial expressions, eye gaze, and gesture, among other resources, to support communication and cooperative activity. Complexity of communication increases when a person is without a modality such as hearing, often resulting in dependence on another person or an assistive device to facilitate communication. This paper examines communication about medical topics through Shared Speech Interface, a multimodal tabletop display designed to assist communication between a hearing and a deaf individual by converting speech to text and representing dialogue history on a shared interactive display surface. We compare communication mediated by a multimodal tabletop display and by a human sign language interpreter. Results indicate that the multimodal tabletop display (1) allows the deaf patient to watch the doctor while she is speaking, (2) encourages the doctor to exploit multimodal communication such as co-occurring gesture and speech, and (3) provides shared access to persistent, collaboratively produced representations of conversation. We also describe extensions of this communication technology, discuss how multimodal analysis techniques are useful in understanding the effects of multiuser multimodal tabletop systems, and briefly allude to the potential of applying computer vision techniques to assist analysis.
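
The abstract describes a system that converts speech to text and keeps a persistent, speaker-attributed dialogue history on the shared surface. The sketch below is a minimal, hypothetical illustration of how such a history might be maintained; it is not the authors' Shared Speech Interface implementation, and the class and function names are assumptions made for illustration only (the speech recognizer is a stub).

# Minimal sketch (not the authors' implementation) of a persistent,
# speaker-attributed dialogue history like the one rendered on the
# shared tabletop surface. Names are hypothetical.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class Utterance:
    speaker: str                     # e.g. "doctor" or "patient"
    text: str                        # recognized (or typed) text
    timestamp: datetime = field(default_factory=datetime.now)


class SharedDialogueHistory:
    """Persistent record of the conversation, shared by both participants."""

    def __init__(self) -> None:
        self._utterances: List[Utterance] = []

    def add(self, speaker: str, text: str) -> Utterance:
        """Append a new utterance so it can be displayed on the surface."""
        utterance = Utterance(speaker=speaker, text=text)
        self._utterances.append(utterance)
        return utterance

    def transcript(self) -> str:
        """Render the full history, giving both parties access to it."""
        return "\n".join(
            f"[{u.timestamp:%H:%M:%S}] {u.speaker}: {u.text}"
            for u in self._utterances
        )


def recognize_speech(audio_chunk: bytes) -> str:
    """Hypothetical placeholder for a speech-to-text engine."""
    raise NotImplementedError("plug in a real recognizer here")


if __name__ == "__main__":
    history = SharedDialogueHistory()
    history.add("doctor", "How long have you had the symptoms?")
    history.add("patient", "About two weeks.")
    print(history.transcript())

In practice the text would come from a speech recognizer rather than being typed in, but the shared, append-only transcript is the key property the abstract highlights.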



Author information


Corresponding author

Correspondence to Anne Marie Piper.



Copyright information

© 2009 Springer-Verlag London Limited

About this paper

Cite this paper

Piper, A.M., Hollan, J.D. (2009). Analyzing Multimodal Communication around a Shared Tabletop Display. In: Wagner, I., Tellioğlu, H., Balka, E., Simone, C., Ciolfi, L. (eds) ECSCW 2009. Springer, London. https://doi.org/10.1007/978-1-84882-854-4_17


  • DOI: https://doi.org/10.1007/978-1-84882-854-4_17

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-84882-853-7

  • Online ISBN: 978-1-84882-854-4

  • eBook Packages: Computer Science, Computer Science (R0)
