
Contested Collective Intelligence: Rationale, Technologies, and a Human-Machine Annotation Study

Abstract

We propose the concept of Contested Collective Intelligence (CCI) as a distinctive subset of the broader Collective Intelligence design space. CCI is relevant to the many organizational contexts in which it is important to work with contested knowledge, for instance, due to different intellectual traditions, competing organizational objectives, information overload or ambiguous environmental signals. The CCI challenge is to design sociotechnical infrastructures to augment such organizational capability. Since documents are often the starting points for contested discourse, and discourse markers provide a powerful cue to the presence of claims, contrasting ideas and argumentation, discourse and rhetoric provide an annotation focus in our approach to CCI. Research in sensemaking, computer-supported discourse and rhetorical text analysis motivate a conceptual framework for the combined human and machine annotation of texts with this specific focus. This conception is explored through two tools: a social-semantic web application for human annotation and knowledge mapping (Cohere), plus the discourse analysis component in a textual analysis software tool (Xerox Incremental Parser: XIP). As a step towards an integrated platform, we report a case study in which a document corpus underwent independent human and machine analysis, providing quantitative and qualitative insight into their respective contributions. A promising finding is that significant contributions were signalled by authors via explicit rhetorical moves, which both human analysts and XIP could readily identify. Since working with contested knowledge is at the heart of CCI, the evidence that automatic detection of contrasting ideas in texts is possible through rhetorical discourse analysis is progress towards the effective use of automatic discourse analysis in the CCI framework.
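To ground the annotation focus described above, here is a minimal, hypothetical sketch (in Python) of marker-based rhetorical cueing: a small lexicon of surface discourse markers flags sentences that may signal contrast, claims or tentativeness. This is not the Xerox Incremental Parser (XIP), whose discourse rules are far richer and syntactically informed; the marker lists, labels and function names below are illustrative assumptions only.

    # Minimal sketch of marker-based rhetorical cueing. This is NOT XIP's
    # rule set: the marker lexicon and labels are illustrative assumptions,
    # intended only to show how surface discourse markers can flag sentences
    # that may carry contrast, claims or tentativeness.
    import re

    MARKERS = {
        "CONTRAST": ["however", "in contrast", "whereas", "unlike",
                     "on the other hand"],
        "NOVELTY": ["we propose", "we introduce", "novel", "new approach"],
        "TENTATIVE": ["may", "might", "suggests", "appears to"],
    }

    def annotate(sentence):
        """Return the rhetorical labels whose markers occur in the sentence."""
        lowered = sentence.lower()
        return [
            label
            for label, cues in MARKERS.items()
            if any(re.search(r"\b" + re.escape(cue) + r"\b", lowered)
                   for cue in cues)
        ]

    if __name__ == "__main__":
        sample = [
            "Previous annotation tools assumed consensus among analysts.",
            "However, we propose a platform designed for contested knowledge.",
        ]
        for sentence in sample:
            print(annotate(sentence), "-", sentence)

On the second sample sentence the sketch reports CONTRAST and NOVELTY, a crude analogue of the explicit rhetorical moves that both the human analysts and XIP identified in the case study reported in the article.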


Notes

  1. MIT Center for Collective Intelligence. http://cci.mit.edu/

  2. http://cohere.open.ac.uk

  3. http://www.diigo.com

  4. http://www.google.com/sidewiki

  5. http://www.delicious.com

  6. http://www.mindmeister.com

  7. https://bubbl.us

  8. http://prefuse.org

  9. For examples of the range of structured deliberation platforms that deploy models of dialogue and argumentation to promote collective intelligence, see Online Deliberation: Emerging Tools 2010: http://olnet.org/odet2010 and the ESSENCE tools: http://events.kmi.open.ac.uk/essence/tools. Gurkan et al. (2010) also report field-trial evaluations of a structured discussion platform that reflects the CCI concerns set out in this paper.

  10. e.g. Learning Sciences http://olnet.org/node/610, Open Educational Resources http://ci.olnet.org, and Climate Change

References

  • Aït-Mokhtar, S., Chanod, J. P., & Roux, C. (2002). Robustness beyond shallowness: incremental dependency parsing. Natural Language Engineering, 8(2/3), 121–144.

  • Andriessen, J., Baker, M., & Suthers, D. (Eds.) (2003). Arguing to learn: Confronting cognitions in computer-supported collaborative learning environments. Dordrecht: Kluwer.

  • Birnbaum, L., Horvitz, E., Kurlander, D., Lieberman, H., Marks, J., & Roth, S. (1996). Compelling intelligent user interfaces: How much AI? In Proceedings of the ACM International Conference on Intelligent User Interfaces, Orlando, FL, January 1996. New York: ACM Press.

  • Brown, A. L., Bransford, J. D., Ferrara, R. A., & Campione, J. C. (1983). Learning, remembering, and understanding. In J. H. Flavell & E. H. Markman (Eds.), Handbook of child psychology: Cognitive development (Vol. 3). New York: Wiley.

  • Browning, L., & Boudès, T. (2005). The use of narrative to understand and respond to complexity: a comparative analysis of the Cynefin and Weickian models. Emergence: Complexity & Organization, 7(3–4), 32–39.

  • Buckingham Shum, S. (2003). The roots of computer supported argument visualization. In P. Kirschner, S. Buckingham Shum, & C. Carr (Eds.), Visualizing argumentation (pp. 3–24). London: Springer.

  • Buckingham Shum, S. (2008). Cohere: Towards Web 2.0 argumentation. In Proceedings of the 2nd International Conference on Computational Models of Argument, 28–30 May 2008, Toulouse. Amsterdam: IOS Press.

  • Buckingham Shum, S., MacLean, A., Bellotti, V. M., & Hammond, N. V. (1997). Graphical argumentation and design cognition. Human-Computer Interaction, 12(3), 267–300.

  • Buckingham Shum, S., Selvin, A., Sierhuis, M., Conklin, J., Haley, C., & Nuseibeh, B. (2006). Hypermedia support for argumentation-based rationale: 15 years on from gIBIS and QOC. In A. Dutoit, R. McCall, I. Mistrik, & B. Paech (Eds.), Rationale management in software engineering (pp. 111–132). Berlin: Springer.

  • Conklin, J., & Begeman, M. L. (1988). gIBIS: a hypertext tool for exploratory policy discussion. ACM Transactions on Office Information Systems, 6(4), 303–331.

  • Convertino, G., Billman, D., Pirolli, P., Massar, J. P., & Shrager, J. (2008). The CACHE study: group effects in computer-supported collaborative analysis. Computer Supported Cooperative Work (CSCW): An International Journal, 17, 353–393.

  • De Liddo, A., & Buckingham Shum, S. (2010). Cohere: A prototype for contested collective intelligence. Workshop on Collective Intelligence in Organizations: Toward a Research Agenda, ACM Conference on Computer Supported Cooperative Work, 6–10 February 2010, Savannah, GA, USA. Available as ePrint: http://oro.open.ac.uk/19554.

  • De Waard, A., Buckingham Shum, S., Carusi, A., Park, J., Samwald, M., & Sándor, Á. (2009). Hypotheses, evidence and relationships: The HypER approach for representing scientific knowledge claims. Workshop on Semantic Web Applications in Scientific Discourse, 8th International Semantic Web Conference, 26 October 2009, Washington, DC. Berlin: Springer (LNCS).

  • Dervin, B., & Naumer, C. (2009). Sense-making. In S. W. Littlejohn & K. A. Foss (Eds.), Encyclopedia of communication theory (pp. 876–880). Los Angeles: Sage.

  • Engelbart, D. C. (1963). A conceptual framework for the augmentation of man's intellect. In P. Howerton & D. Weeks (Eds.), Vistas in information handling (pp. 1–29). Washington, DC: Spartan Books.

  • Ghosh, A. (2004). Learning in strategic alliances: a Vygotskian perspective. The Learning Organization, 11(4/5), 302–311.

  • Goodman, N. (1986). Mathematics as an objective science. In T. Tymoczko (Ed.), New directions in the philosophy of mathematics (pp. 79–94). Boston: Birkhauser.

  • Gurkan, A., Iandoli, L., Klein, M., & Zollo, G. (2010). Mediating debate through on-line large-scale argumentation: evidence from the field. Information Sciences, 180, 3686–3702.

  • Hagel, J., III, Seely Brown, J., & Davison, L. (2010). The power of pull: How small moves, smartly made, can set big things in motion. New York: Basic Books.

  • Heuer, R. (1999). The psychology of intelligence analysis. Washington, DC: Center for the Study of Intelligence, Central Intelligence Agency.

  • Hong, L., Chi, E. H., Budiu, R., Pirolli, P., & Nelson, L. (2008). SparTag.us: A low cost tagging system for foraging of web content. In Proceedings of AVI 2008 (pp. 65–72). New York: ACM.

  • Horvitz, E. (1999). Principles of mixed-initiative user interfaces. In Proceedings of the ACM Conference on Human Factors in Computing Systems.

  • Kalnikaité, V., & Whittaker, S. (2008). Social summarization: does social feedback improve access to speech data? In Proceedings of the Conference on Computer Supported Cooperative Work (pp. 9–12). New York: ACM Press.

  • Klein, G., Moon, B., & Hoffman, R. F. (2006). Making sense of sensemaking 1: alternative perspectives. IEEE Intelligent Systems, 21(4), 70–73.

  • Kong, N., Hanrahan, B., Weksteen, T., Convertino, G., & Chi, E. H. (2011). VisualWikiCurator: Human and machine intelligence for organizing wiki content. In Proceedings of IUI 2011.

  • Kurtz, C., & Snowden, D. (2003). The new dynamics of strategy: sense-making in a complex-complicated world. IBM Systems Journal, 42(3), 462–483.

  • Levy, D. M., & Marshall, C. C. (1995). Going digital: a look at assumptions underlying digital libraries. Communications of the ACM, 38(4), 77–84.

  • Lin, X., Hmelo, C., Kinzer, C. K., & Secules, T. J. (1999). Designing technology to support reflection. Educational Technology Research and Development, 47(3), 43–62.

  • Lowrance, J., Harrison, I., Rodriguez, A., Yeh, E., Boyce, T., Murdock, J., Thomere, J., & Murray, K. (2008). Template-based structured argumentation. In A. Okada, S. Buckingham Shum, & T. Sherborne (Eds.), Knowledge cartography: Software tools and mapping techniques. London: Springer.

  • Malone, T. W., Laubacher, R., & Dellarocas, C. N. (2009). Harnessing crowds: Mapping the genome of collective intelligence. MIT Sloan Research Paper No. 4732–09. Available at SSRN: http://ssrn.com/abstract=1381502.

  • Mercer, N. (2004). Sociocultural discourse analysis: analysing classroom talk as a social mode of thinking. Journal of Applied Linguistics, 1(2), 137–168.

  • Okada, A., Buckingham Shum, S., & Sherborne, T. (Eds.) (2008). Knowledge cartography: Software tools and mapping techniques. London: Springer.

  • Pea, R. D. (1993). Learning scientific concepts through material and social activities: conversational analysis meets conceptual change. Educational Psychologist, 28, 265–277.

  • Pirolli, P., & Card, S. (2005). The sensemaking process and leverage points for analyst technology as identified through cognitive task analysis. In Proceedings of the 2005 International Conference on Intelligence Analysis, 2–4.

  • Pirolli, P., & Russell, D. (2008). Call for submissions to special issue on sensemaking, Human-Computer Interaction. http://www.tandf.co.uk/journals/cfp/hhcicfp_sp1.pdf

  • Rich, P. J., & Hannafin, M. (2009). Video annotation tools: technologies to scaffold, structure, and transform teacher reflection. Journal of Teacher Education, 60(1), 52–67.

  • Rittel, H., & Webber, M. (1973). Dilemmas in a general theory of planning. Policy Sciences, 4, 155–169.

  • Russell, D. M., Stefik, M. J., Pirolli, P., & Card, S. K. (1993). The cost structure of sensemaking. In Proceedings of InterCHI '93 (pp. 269–276). Amsterdam: Association for Computing Machinery.

  • Sándor, Á. (2007). Modeling metadiscourse conveying the author's rhetorical strategy in biomedical research abstracts. Revue Française de Linguistique Appliquée, 200(2), 97–109.

  • Sándor, Á., & Vorndran, A. (2010). Extracting relevant messages from social science research papers for improving relevance of retrieval. Workshop on Natural Language Processing Tools Applied to Discourse Analysis in Psychology, Buenos Aires, 10–14 May 2010.

  • Scaife, M., & Rogers, Y. (1996). External cognition: how do graphical representations work? International Journal of Human-Computer Studies, 45, 185–213.

  • Scardamalia, M., & Bereiter, C. (1994). Computer support for knowledge-building communities. The Journal of the Learning Sciences, 3, 265–283.

  • Scardamalia, M. (2002). Collective cognitive responsibility for the advancement of knowledge. In B. Smith (Ed.), Liberal education in a knowledge society (pp. 67–98). Chicago: Open Court.

  • Selvin, A. (2011, forthcoming). Making representations matter: Understanding practitioner experience in participatory sensemaking. Unpublished doctoral dissertation, Knowledge Media Institute, The Open University, UK.

  • Sellen, A., & Harper, R. (2003). The myth of the paperless office. Cambridge, MA: MIT Press.

  • Sereno, B., Buckingham Shum, S., & Motta, E. (2007). Formalization, user strategy and interaction design: Users' behaviour with discourse tagging semantics. Workshop on Social and Collaborative Construction of Structured Knowledge, 16th International World Wide Web Conference (WWW 2007), Banff, AB, Canada, 8–12 May 2007.

  • Shrager, J., Billman, D. O., Convertino, G., Massar, J. P., & Pirolli, P. L. (2010). Soccer science and the Bayes community: exploring the cognitive implications of modern scientific communication. Topics in Cognitive Science, 2(1), 53–72.

  • Smallman, H. S. (2008). JIGSAW: Joint intelligence graphical situation awareness web for collaborative intelligence analysis. In M. P. Letsky, N. Warner, S. Fiore, & C. A. P. Smith (Eds.), Macrocognition in teams: Theories and methodologies (pp. 321–337). Hampshire, England: Ashgate Publishing.

  • Snowden, D. J., & Boone, M. E. (2007). A leader's framework for decision making. Harvard Business Review, November 2007.

  • Tecuci, G., Boicu, M., & Cox, M. T. (2007). Seven aspects of mixed-initiative reasoning: an introduction to this special issue on mixed-initiative assistants. AI Magazine, 28(2).

  • Uren, V., Buckingham Shum, S., Li, G., & Bachler, M. (2006). Sensemaking tools for understanding research literatures: design, implementation and user evaluation. International Journal of Human Computer Studies, 64(5), 420–445.

  • van Gelder, T. J. (2002). Enhancing deliberation through computer-supported argument visualization. In P. Kirschner, S. Buckingham Shum, & C. Carr (Eds.), Visualizing argumentation: Software tools for collaborative and educational sense-making (pp. 97–115). London: Springer.

  • Weick, K. E. (1995). Sensemaking in organizations. Thousand Oaks, CA: Sage Publications.

  • Weick, K. E. (2006). Faith, evidence, and action: better guesses in an unknowable world. Organization Studies, 27, 1723–1736.

Acknowledgements

Cohere is being developed as part of the OLnet Project (http://olnet.org), funded by The William and Flora Hewlett Foundation. This work was conducted as part of an OLnet Visiting Fellowship for Ágnes Sándor at the Open University. Cohere uses the open source Prefuse visualization code from PARC (http://prefuse.org). The authors also thank the anonymous referees for their thorough and helpful reviews.

Author information

Corresponding author

Correspondence to Anna De Liddo.

Cite this article

De Liddo, A., Sándor, Á. & Buckingham Shum, S. Contested Collective Intelligence: Rationale, Technologies, and a Human-Machine Annotation Study. Comput Supported Coop Work 21, 417–448 (2012). https://doi.org/10.1007/s10606-011-9155-x
