Assessment 2000: Towards a Pluralistic Approach to Assessment

  • Menucha Birenbaum
Chapter
Part of the Evaluation in Education and Human Services book series (EEHS, volume 42)

Abstract

The title Assessment 2000 would have sounded like science fiction a few decades ago, an invitation to use my imagination in making creative and wild speculations about assessment in a distant future. However, written less than half a decade before that due date, this chapter offers more modest and careful speculations, based on contemporary theories and on lessons gained from current practice. It starts by introducing the most generic term currently used in the educational literature with respect to assessment, namely alternative assessment, briefly explaining to what, and why, an alternative is sought, and describing the main features of this type of assessment as it is currently viewed. Of the various devices subsumed under the alternative assessment umbrella, the focus is on the portfolio, describing its types, uses, and criteria for judgment. Next, criteria for evaluating alternative assessment and lessons to be learnt from current practice are discussed, and finally a rationale for a pluralistic approach to assessment is presented.

Copyright information

© Springer Science+Business Media New York 1996

