
“Roger that!” — The Value of Adding Social Feedback in Audio-Mediated Communications

  • Rahul Rajan
  • Joey Hsiao
  • Deven Lahoti
  • Ted Selker
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8120)

Abstract

Losing track of who is in a conversation, and of what is being said, is a persistent problem, especially on audio-only conference calls. This paper investigates how domain-independent social feedback, delivered through audio cues, can support such interactions and improve communication. In particular, we show how an agent can improve people's ability to accurately identify and distinguish between speakers, reassure users about the presence of other collaborators on the line, and announce events such as entry and exit with minimal impact on users' cognitive abilities.

Keywords

Audio-mediated · Conference calls · Considerate · Social feedback


Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Rahul Rajan (1)
  • Joey Hsiao (2)
  • Deven Lahoti (3)
  • Ted Selker (1)
  1. Carnegie Mellon University, Pittsburgh, U.S.A.
  2. National Taiwan University, Taipei City, Taiwan
  3. Independent, U.S.A.
