Impact on Performance and Process by a Social Annotation System: A Social Reading Experiment

  • Les Nelson
  • Gregorio Convertino
  • Peter Pirolli
  • Lichan Hong
  • Ed H. Chi
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5638)

Abstract

Social annotation systems such as SparTag.us and del.icio.us have been designed to encourage individual reading and marking behaviors that, when shared, accumulate to build collective knowledge spaces. Prior work reported on the experimental design and performance effects observed in a controlled study of SparTag.us. Participants who worked independently on a sensemaking task with access to a set of expert annotations were compared against participants who used SparTag.us without those annotations and against participants who used only standard office software for annotation support. A learning effect favored the participants exposed to the expert annotations. In this paper, we analyze the behavioral data captured during the experiment and identify differences in the work process that can explain the previously reported performance effects.

Keywords

Convergent measures · Social annotation systems · Evaluation · Social sensemaking



Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Les Nelson¹
  • Gregorio Convertino¹
  • Peter Pirolli¹
  • Lichan Hong¹
  • Ed H. Chi¹

  1. Palo Alto Research Center, Palo Alto, USA
