Instructional Science, Volume 18, Issue 2, pp 119–144

Formative assessment and the design of instructional systems

  • D. Royce Sadler

Abstract

The theory of formative assessment outlined in this article is relevant to a broad spectrum of learning outcomes in a wide variety of subjects. Specifically, it applies wherever multiple criteria are used in making judgments about the quality of student responses; it has less relevance for outcomes in which student responses may be assessed simply as correct or incorrect. Feedback is defined in a particular way to highlight its function in formative assessment. This definition differs in several significant respects from that traditionally found in educational research. Three conditions for effective feedback are then identified and their implications discussed. A key premise is that for students to be able to improve, they must develop the capacity to monitor the quality of their own work during actual production. This in turn requires that students possess an appreciation of what high-quality work is, that they have the evaluative skill necessary to compare, with some objectivity, the quality of what they are producing against that standard, and that they develop a store of tactics or moves that can be drawn upon to modify their own work. It is argued that these skills can be developed by providing direct, authentic evaluative experience for students. Instructional systems which do not make explicit provision for the acquisition of evaluative expertise are deficient, because they set up artificial but potentially removable performance ceilings for students.

Copyright information

© Kluwer Academic Publishers 1989

Authors and Affiliations

  • D. Royce Sadler
    Assessment and Evaluation Research Unit, Department of Education, University of Queensland, St Lucia, Australia