Computer Supported Cooperative Work (CSCW), Volume 21, Issue 4–5, pp 417–448

Contested Collective Intelligence: Rationale, Technologies, and a Human-Machine Annotation Study

  • Anna De Liddo
  • Ágnes Sándor
  • Simon Buckingham Shum

Abstract

We propose the concept of Contested Collective Intelligence (CCI) as a distinctive subset of the broader Collective Intelligence design space. CCI is relevant to the many organizational contexts in which it is important to work with contested knowledge, for instance, due to different intellectual traditions, competing organizational objectives, information overload or ambiguous environmental signals. The CCI challenge is to design sociotechnical infrastructures to augment such organizational capability. Since documents are often the starting points for contested discourse, and discourse markers provide a powerful cue to the presence of claims, contrasting ideas and argumentation, discourse and rhetoric provide an annotation focus in our approach to CCI. Research in sensemaking, computer-supported discourse and rhetorical text analysis motivates a conceptual framework for the combined human and machine annotation of texts with this specific focus. This conception is explored through two tools: a social-semantic web application for human annotation and knowledge mapping (Cohere), plus the discourse analysis component in a textual analysis software tool (Xerox Incremental Parser: XIP). As a step towards an integrated platform, we report a case study in which a document corpus underwent independent human and machine analysis, providing quantitative and qualitative insight into their respective contributions. A promising finding is that significant contributions were signalled by authors via explicit rhetorical moves, which both human analysts and XIP could readily identify. Since working with contested knowledge is at the heart of CCI, this evidence that contrasting ideas in texts can be detected automatically through rhetorical discourse analysis marks progress towards the effective use of automatic discourse analysis in the CCI framework.

Key words

collective intelligence; discourse; human annotation; knowledge mapping; machine annotation; learning; sensemaking; network visualization; social software; social annotation


Copyright information

© Springer 2011

Authors and Affiliations

  • Anna De Liddo (1)
  • Ágnes Sándor (2)
  • Simon Buckingham Shum (1)

  1. Knowledge Media Institute, The Open University, Milton Keynes, UK
  2. Xerox Research Centre Europe, Meylan, France
