The Power of Learning-Centered Task Design: An Exercise in the Application of the Variation Principle

  • Dany Laveault
Chapter
Part of the book series The Enabling Power of Assessment (EPAS, volume 1)

Abstract

Recent developments in educational assessment task design have been stimulated by a growing interest in aligning assessment tasks not only with specific curriculum objectives but also with theories of learning. Achieving such an alignment requires a construct-centered approach to assessment design, one that identifies the cognitive and metacognitive processes underlying performance on a task. In this context, task design involves creating a family of learning situations that control the cognitive and metacognitive demands of a task in order to monitor students’ progress. This kind of learning-centered task design enables teachers to observe cognitive processes involved in learning that would otherwise be difficult or impossible to assess, and helps them provide efficient feedback. This chapter introduces a variety of task models and designs, identifies what is required to monitor the cognitive processes involved in learning, and shows how results on such tasks may be interpreted and used to support students’ learning.

Keywords

Task design · Formative assessment · Student control · Authentic assessment · Instructional purpose

Copyright information

© Springer Science+Business Media Dordrecht 2014

Authors and Affiliations

University of Ottawa, Ontario, Canada