
Transforming Holistic Assessment and Grading into a Vehicle for Complex Learning

  • D. Royce Sadler
Chapter

Introduction

One of the themes running through my work since 1980 has been that students need to develop the capacity to monitor the quality of their own work during its actual production. For this to occur, students need to appreciate what constitutes work of higher quality; to compare the quality of their emerging work with that higher quality; and to draw on a store of tactics to modify their work as necessary. In this chapter, this theme is extended in two ways. The first is an analysis of the fundamental validity of using preset criteria as a general approach to appraising quality. The second is a teaching design that enables holistic appraisals to align pedagogy with assessment.

For the purposes of this chapter, a course refers to a unit of study that forms a relatively self-contained component of a degree program. A student response to an assessment task is referred to as a work. The assessed quality of each work is represented by a numerical, literal or verbal mark or grade....

Keywords

Student Work · Assessment Task · Global Judgment · Qualitative Judgment · Analytic Judgment

Notes

Acknowledgment

I am grateful to Gordon Joughin for his constant support and critical readings of different versions of this chapter during its development. His many suggestions for improvement have been invaluable.


Copyright information

© Springer Science+Business Media B.V. 2009

Authors and Affiliations

  1. Griffith Institute for Higher Education, Mt Gravatt Campus, Griffith University, Nathan, Australia
