
Higher Education, Volume 52, Issue 4, pp 635–663

Assessment to improve learning in higher education: The BEAR Assessment System

  • Mark Wilson
  • Kathleen Scalise

Abstract

This paper discusses how assessment practices in higher education can improve or hinder learning. An example illustrates common educational practices that may contribute to the underpreparation and underperformance of students. Elements of effective learning environments that may better address underlying metacognitive issues are discussed. The principles of the Berkeley Evaluation and Assessment Research (BEAR) Assessment System are introduced, and their use to improve learning is described in the context of the UC Berkeley ChemQuery project.

Keywords

assessment; BEAR Assessment System; chemistry education; diagnostic assessment; feedback; feed forward; formative; higher education; learning progressions; learning trajectories; metacognition; progress variables; science education



Copyright information

© Springer 2006

Authors and Affiliations

  • Mark Wilson (1)
  • Kathleen Scalise (1)

  1. University of California, Berkeley, USA
