Abstract
During the past decade, due to societal developments, methods of instruction as well as the assessment of students’ performance have changed to a considerable extent. Two elements of this change are the emphasis on cognitive competencies such as problem solving and on learning in an authentic context. In conjunction with the development of such learning methods, new modes of assessment were implemented. It was expected that this change would have positive feedback effects on learning and teaching. These feedback effects are the central issue of this article. They are discussed in terms of the experiences of the Maastricht School of Economics and Business Administration. This school places the analysis of authentic problems at the core of the curriculum, in the learning process as well as in the assessment system. As part of this, the OverAll Test, a case-based assessment instrument aiming to assess problem-solving skills, was implemented. Different quality issues related to the OverAll Test have been evaluated. This article presents the results of one of the four validity studies conducted: an exploratory study of the consequential validity of the OverAll Test. It starts with an outline of the main features of the new modes of assessment, with the OverAll Test as an example. There follows a discussion of how effectively the OverAll Test fits these features as well as the goals and characteristics of problem-based learning. The study of the consequential validity of the OverAll Test is then described in depth. The results of the survey, as well as the results of the semi-structured interviews with staff and students, indicate a friction between the intended characteristics of the learning and assessment environment and the practice of instruction and assessment.
Résumé
During the past decade, the assessment of students’ performance has changed. This change involves a greater emphasis on cognitive competencies, such as problem-solving skills, and on assessment in an authentic context. It was expected that this change would have positive feedback effects on learning and teaching. These feedback effects are the central issue of this article. They are discussed in terms of the experiences of the Maastricht School of Economics and Business Administration with a case-based assessment instrument aiming to assess problem-solving skills, called the OverAll Test. This article presents the results of one of the four validity studies conducted, an exploratory study of the consequential validity of the OverAll Test. The main features of the new modes of assessment, with the OverAll Test as an example, are described. It is then discussed how effectively this instrument fits these features as well as the goals and characteristics of problem-based learning. Next, the study of the consequential validity of the OverAll Test is presented. The results of the survey, as well as the results of the semi-structured interviews with staff and students, indicate a friction between the intended alignment of teaching, learning and assessment and the alignment as perceived by students and staff.
Cite this article
Segers, M., Dierick, S., & Dochy, F. Quality standards for new modes of assessment. An exploratory study of the consequential validity of the OverAll Test. Eur J Psychol Educ 16, 569–588 (2001). https://doi.org/10.1007/BF03173198