
Technology enhanced assessment in complex collaborative settings


Abstract

Building upon discussions by the Assessment Working Group at EDUsummIT 2013, this article reviews recent developments in technology-enabled assessment of collaborative problem solving in order to identify where computerised assessments are particularly useful (and where non-computerised assessments need to be retained or developed), while ensuring that the purposes and designs are transparent and empowering for teachers and learners. Technology-enabled assessments of higher-order critical thinking in a collaborative social context can provide data about the actions, communications and products created by a learner in a designed task space. Principled assessment design is required for such a space to provide trustworthy evidence of learning, and the design must incorporate and take account of the engagement of the audiences for the assessment as well as vary with the purposes and contexts of the assessment. Technology-enhanced assessment enables in-depth, unobtrusive documentation, or ‘quiet assessment’, of the many layers and dynamics of authentic performance, and allows greater flexibility and dynamic interaction in and among the design features. Most important for assessment FOR learning are interactive features that allow the learner to turn up or down the intensity, amount and sharpness of the information needed for absorbing and adopting the feedback. Most important in assessment OF learning are features that compare the learner with external standards of performance. Most important in assessment AS learning are features that allow multiple performances and a wide array of affordances for authentic action, communication and the production of artefacts.
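To make the idea of ‘quiet assessment’ in a designed task space more concrete, the sketch below shows one way learner evidence (actions, communications, artefacts) might be logged unobtrusively and how a learner could turn the amount of feedback up or down. It is a minimal illustration only: the names TaskSpaceLogger, EvidenceEvent and feedback_intensity are hypothetical and are not drawn from the article or from any particular assessment system.

# Minimal, hypothetical sketch of 'quiet assessment' logging in a designed task space.
# All names (TaskSpaceLogger, EvidenceEvent, feedback_intensity) are illustrative only,
# not the authors' implementation or any real system's API.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List


@dataclass
class EvidenceEvent:
    """One unobtrusively captured piece of evidence: an action, a communication,
    or an artefact produced by a learner while working on a collaborative task."""
    learner_id: str
    kind: str              # "action" | "communication" | "artefact"
    description: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


class TaskSpaceLogger:
    """Collects evidence quietly in the background and lets the learner control
    how much feedback detail they receive (assessment FOR learning)."""

    def __init__(self) -> None:
        self.events: List[EvidenceEvent] = []
        self.feedback_intensity: int = 1   # 0 = none, 1 = summary, 2 = detailed

    def record(self, learner_id: str, kind: str, description: str) -> None:
        self.events.append(EvidenceEvent(learner_id, kind, description))

    def feedback_for(self, learner_id: str) -> str:
        """Return feedback whose level of detail depends on the learner-chosen intensity."""
        mine = [e for e in self.events if e.learner_id == learner_id]
        if self.feedback_intensity == 0:
            return ""
        if self.feedback_intensity == 1:
            return f"{len(mine)} pieces of evidence recorded so far."
        lines = [f"- {e.kind}: {e.description}" for e in mine]
        return "\n".join([f"{len(mine)} pieces of evidence recorded:"] + lines)


# Example usage: a learner posts a message and uploads an artefact, then asks
# for detailed feedback by turning the intensity up.
log = TaskSpaceLogger()
log.record("learner-1", "communication", "Proposed a hypothesis to the group")
log.record("learner-1", "artefact", "Uploaded a concept map of the problem")
log.feedback_intensity = 2
print(log.feedback_for("learner-1"))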



Author information

Corresponding author

Correspondence to Mary Webb.

About this article


Cite this article

Webb, M., Gibson, D. Technology enhanced assessment in complex collaborative settings. Educ Inf Technol 20, 675–695 (2015). https://doi.org/10.1007/s10639-015-9413-5

