
Quality standards for new modes of assessment. An exploratory study of the consequential validity of the OverAll Test

Published in: European Journal of Psychology of Education

Abstract

During the past decade, due to societal developments, methods of instruction as well as the assessment of students’ performance have changed to a considerable extent. Two elements of this change are the emphasis on cognitive competencies, such as problem solving, and on learning in an authentic context. In conjunction with the development of such learning methods, new modes of assessment were implemented. It was expected that this change would have positive feedback effects on learning and teaching. These feedback effects are the central issue of this article. They are discussed in terms of the experiences of the Maastricht School of Economics and Business Administration. This school places the analysis of authentic problems at the core of the curriculum, including the learning process as well as the assessment system. The OverAll Test, a case-based assessment instrument aiming to assess problem-solving skills, was implemented as part of this. Different quality issues related to the OverAll Test have been evaluated. This article presents the results of one of the four validity studies conducted: an exploratory study of the consequential validity of the OverAll Test. It starts with an outline of the main features of the new modes of assessment, with the OverAll Test as an example. There is then a discussion of how effectively the OverAll Test fits these features, as well as the goals and characteristics of problem-based learning. The study of the consequential validity of the OverAll Test is then described in depth. The results of the survey, as well as the results of the semi-structured interviews with staff and students, indicate a friction between the intended characteristics of the learning and assessment environment and the practice of instruction and assessment.

Résumé

During the past decade, the assessment of students’ performance has changed. This implies a greater emphasis on cognitive competencies, such as the ability to solve problems, and on assessment in an authentic context. It was expected that this change would have positive feedback effects on learning and teaching. These feedback effects are the central issue of this article. They are discussed in terms of the experience of the Maastricht School of Economics and Business Administration with a case-based assessment instrument aiming to assess problem-solving skills, named the OverAll Test. This article presents the results of one of the four validity studies conducted: an exploratory study of the consequential validity of the OverAll Test. The main features of the new modes of assessment, with the OverAll Test as an example, are described. It is discussed how effectively this instrument fits these features, as well as the goals and characteristics of problem-based learning. The study of the consequential validity of the OverAll Test is then presented. The results of the survey, as well as the results of the semi-structured interviews with staff and students, indicate a friction between the intended alignment of teaching, learning and assessment and the alignment as perceived by students and staff.



Author information

Correspondence to Mien Segers.


About this article

Cite this article

Segers, M., Dierick, S., & Dochy, F. Quality standards for new modes of assessment. An exploratory study of the consequential validity of the OverAll Test. Eur J Psychol Educ 16, 569–588 (2001). https://doi.org/10.1007/BF03173198

