The Edumetric Quality of New Modes of Assessment: Some Issues and Prospects


References

  1. American Educational Research Association, American Psychological Association, & National Council on Measurement in Education (1985). Standards for educational and psychological testing. Washington, DC: Author.
  2. Amrein, A. L., & Berliner, D. C. (2002). High-stakes testing, uncertainty, and student learning. Education Policy Analysis Archives, 10(18). Retrieved August 15, 2007, from http://epaa.asu.edu/epaa/v10n18/.
  3. Askham, P. (1997). An instrumental response to the instrumental student: Assessment for learning. Studies in Educational Evaluation, 23(4), 299–317.
  4. Baartman, L. K. J., Bastiaens, T. J., Kirschner, P. A., & Van der Vleuten, C. P. M. (2007). Teachers’ opinions on quality criteria for competency assessment programmes. Teaching and Teacher Education. Retrieved August 15, 2007, from http://www.fss.uu.nl/edsci/images/stories/pdffiles/Baartman/baartman%20et%20al_2006_teachers%20opinions%20on%20quality%20criteria%20for%20caps.pdf
  5. Baartman, L. K. J., Bastiaens, T. J., Kirschner, P. A., & Van der Vleuten, C. P. M. (2005). The wheel of competency assessment: Presenting quality criteria for competency assessment programmes. Paper presented at the 11th biennial conference of the European Association for Research on Learning and Instruction (EARLI), Nicosia, Cyprus.
  6. Bateson, D. (1994). Psychometric and philosophic problems in ‘authentic’ assessment, performance tasks and portfolios. The Alberta Journal of Educational Research, 11(2), 233–245.
  7. Bennet, Y. (1993). Validity and reliability of assessments and self-assessments of work-based learning. Assessment & Evaluation in Higher Education, 18(2), 83–94.
  8. Biggs, J. (1996). Assessing learning quality: Reconciling institutional, staff and educational demands. Assessment & Evaluation in Higher Education, 21(1), 5–16.
  9. Biggs, J. (1998). Assessment and classroom learning: A role for summative assessment? Assessment in Education: Principles, Policy & Practice, 5, 103–110.
  10. Birenbaum, M. (1994). Toward adaptive assessment – The student's angle. Studies in Educational Evaluation, 20, 239–255.
  11. Birenbaum, M. (1996). Assessment 2000. In M. Birenbaum & F. Dochy (Eds.), Alternatives in assessment of achievement, learning processes and prior knowledge. Boston: Kluwer Academic.
  12. Birenbaum, M., & Dochy, F. (Eds.) (1996). Alternatives in assessment of achievement, learning processes and prior knowledge. Boston: Kluwer Academic.
  13. Black, P. (1995). Curriculum and assessment in science education: The policy interface. International Journal of Science Education, 17(4), 453–469.
  14. Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7–74.
  15. Boud, D. (1990). Assessment and the promotion of academic values. Studies in Higher Education, 15(1), 101–111.
  16. Boud, D. (1995). Assessment and learning: Contradictory or complementary? In P. Knight (Ed.), Assessment for learning in higher education (pp. 35–48). London: Kogan Page.
  17. Brennan, R. L., & Johnson, E. G. (1995). Generalisability of performance assessments. Educational Measurement: Issues and Practice, 11(4), 9–12.
  18. Brown, S. (1999). Institutional strategies for assessment. In S. Brown & A. Glasner (Eds.), Assessment matters in higher education: Choosing and using diverse approaches (pp. 3–13). Buckingham: The Society for Research into Higher Education/Open University Press.
  19. Butler, D. L. (1988). A critical evaluation of software for experiment development in research and teaching. Behavior Research Methods, Instruments, & Computers, 20, 218–220.
  20. Butler, D. L., & Winne, P. H. (1995). Feedback and self-regulated learning: A theoretical synthesis. Review of Educational Research, 65(3), 245–281.
  21. Cronbach, L. J. (1989). Construct validation after thirty years. In R. L. Linn (Ed.), Intelligence: Measurement, theory and public policy (pp. 147–171). Urbana: University of Illinois Press.
  22. Cronbach, L. J., Gleser, G. C., Nanda, H., & Rajaratnam, N. (1972). The dependability of behavioral measurements: Theory of generalizability for scores and profiles. New York: Wiley.
  23. Cronbach, L. J., Linn, R. L., Brennan, R. L., & Haertel, E. H. (1997). Generalizability analysis for performance assessments of students’ achievement or school effectiveness. Educational and Psychological Measurement, 57(3), 373–399.
  24. Crooks, T. (1988). The impact of classroom evaluation practices on students. Review of Educational Research, 58(4), 438–481.
  25. Dancer, D., & Kamvounias, P. (2005). Student involvement in assessment: A project designed to assess class participation fairly and reliably. Assessment & Evaluation in Higher Education, 30, 445–454.
  26. Dierick, S., & Dochy, F. (2001). New lines in edumetrics: New forms of assessment lead to new assessment criteria. Studies in Educational Evaluation, 27, 307–329.
  27. Dierick, S., Dochy, F., & Van de Watering, G. (2001). Assessment in het hoger onderwijs. Over de implicaties van nieuwe toetsvormen voor de edumetrie [Assessment in higher education. About the implications of new test forms for edumetrics]. Tijdschrift voor Hoger Onderwijs, 19, 2–18.
  28. Dierick, S., Van de Watering, G., & Muijtjens, A. (2002). De actuele kwaliteit van assessment: Ontwikkelingen in de edumetrie [The actual quality of assessment: Developments in edumetrics]. In F. Dochy, L. Heylen, & H. Van de Mosselaer (Eds.), Assessment in onderwijs: Nieuwe toetsvormen en examinering in studentgericht onderwijs en competentiegericht onderwijs [Assessment in education: New testing formats and examinations in student-centred education and competence-based education] (pp. 91–122). Utrecht: Lemma BV.
  29. Dochy, F. (1999). Instructietechnologie en innovatie van probleemoplossen: Over constructiegericht academisch onderwijs [Instructional technology and the innovation of problem solving: On construction-oriented academic education]. Utrecht: Lemma.
  30. Dochy, F. (2001). A new assessment era: Different needs, new challenges. Research Dialogue in Learning and Instruction, 2(1), 11–20.
  31. Dochy, F. (2005). Learning lasting for life and assessment: How far did we progress? Presidential address at the EARLI conference 2005, Nicosia, Cyprus. Retrieved October 18, 2007, from http://perswww.kuleuven.be/~u0015308/Publications/EARLI2005%20presidential%20address%20FINAL.pdf
  32. Dochy, F., & Gijbels, D. (2006). New learning, assessment engineering and edumetrics. In L. Verschaffel, F. Dochy, M. Boekaerts, & S. Vosniadou (Eds.), Instructional psychology: Past, present and future trends. Sixteen essays in honour of Erik De Corte. New York: Elsevier.
  33. Dochy, F., & McDowell, L. (1997). Assessment as a tool for learning. Studies in Educational Evaluation, 23, 279–298.
  34. Dochy, F., & Moerkerke, G. (1997). The present, the past and the future of achievement testing and performance assessment. International Journal of Educational Research, 27(5), 415–432.
  35. Dochy, F., Moerkerke, G., & Martens, R. (1996). Integrating assessment, learning and instruction: Assessment of domain-specific and domain-transcending prior knowledge and progress. Studies in Educational Evaluation, 22(4), 309–339.
  36. Dochy, F., Segers, M., Gijbels, D., & Struyven, K. (2006). Breaking down barriers between teaching and learning, and assessment: Assessment engineering. In D. Boud & N. Falchikov (Eds.), Rethinking assessment for future learning. London: RoutledgeFalmer.
  37. Dochy, F., Segers, M., & Sluijsmans, D. (1999). The use of self-, peer- and co-assessment in higher education: A review. Studies in Higher Education, 24(3), 331–350.
  38. Elton, L. R. B., & Laurillard, D. M. (1979). Trends in research on student learning. Studies in Higher Education, 4(1), 87–102.
  39. Evans, A. W., McKenna, C., & Oliver, M. (2005). Trainees’ perspectives on the assessment and self-assessment of surgical skills. Assessment & Evaluation in Higher Education, 30, 163–174.
  40. Falchikov, N. (1986). Product comparisons and process benefits of collaborative peer group and self-assessments. Assessment & Evaluation in Higher Education, 11(2), 146–166.
  41. Falchikov, N. (1995). Peer feedback marking: Developing peer assessment. Innovations in Education and Training International, 32(2), 395–430.
  42. Fan, X., & Chen, M. (2000). Published studies of interrater reliability often overestimate reliability: Computing the correct coefficient. Educational and Psychological Measurement, 60(4), 532–542.
  43. Farmer, B., & Eastcott, D. (1995). Making assessment a positive experience. In P. Knight (Ed.), Assessment for learning in higher education (pp. 87–93). London: Kogan Page.
  44. Feltovich, P. J., Spiro, R. J., & Coulson, R. L. (1993). Learning, teaching, and testing for complex conceptual understanding. In N. Frederiksen, R. J. Mislevy, & I. I. Bejar (Eds.), Test theory for a new generation of tests. Hillsdale, NJ: Lawrence Erlbaum.
  45. Firestone, W. A., & Mayrowitz, D. (2000). Rethinking “high stakes”: Lessons from the United States and England and Wales. Teachers College Record, 102, 724–749.
  46. Frederiksen, J. R., & Collins, A. (1989). A systems approach to educational testing. Educational Researcher, 18(9), 27–32.
  47. Frederiksen, N. (1984). The real test bias: Influences of testing on teaching and learning. American Psychologist, 39(3), 193–202.
  48. Gibbs, G. (1999). Using assessment strategically to change the way students learn. In S. Brown & A. Glasner (Eds.), Assessment matters in higher education: Choosing and using diverse approaches. Buckingham: Open University Press.
  49. Gielen, S., Dochy, F., & Dierick, S. (2003). Evaluating the consequential validity of new modes of assessment: The influence of assessment on learning, including pre-, post-, and true assessment effects. In M. S. R. Segers, F. Dochy, & E. Cascallar (Eds.), Optimising new modes of assessment: In search of qualities and standards (pp. 37–54). Dordrecht/Boston: Kluwer Academic.
  50. Gielen, S., Dochy, F., & Dierick, S. (2007). The impact of peer assessment on the consequential validity of assessment. Manuscript submitted for publication.
  51. Gielen, S., Tops, L., Dochy, F., Onghena, P., & Smeets, S. (2007). Peer feedback as a substitute for teacher feedback. Manuscript submitted for publication.
  52. Gielen, S., Dochy, F., Onghena, P., Janssens, S., Schelfhout, W., & Decuyper, S. (2007). A complementary role for peer feedback and staff feedback in powerful learning environments. Manuscript submitted for publication.
  53. Glaser, R. (1990). Testing and assessment: O tempora! O mores! Horace Mann Lecture, University of Pittsburgh, LRDC, Pittsburgh, Pennsylvania.
  54. Gulliksen, H. (1985). Creating better classroom tests. Princeton, NJ: Educational Testing Service.
  55. Haertel, E. H. (1991). New forms of teacher assessment. Review of Research in Education, 17, 3–29.
  56. Harlen, W., & Deakin Crick, R. (2003). Testing and motivation for learning. Assessment in Education, 10(2), 169–207.
  57. Heller, J. I., Sheingold, K., & Myford, C. M. (1998). Reasoning about evidence in portfolios: Cognitive foundations for valid and reliable assessment. Educational Assessment, 5(1), 5–40.
  58. Kane, M. (1992). An argument-based approach to validity. Psychological Bulletin, 112, 527–535.
  59. Langan, A. M., Wheater, C. P., Shaw, E. M., Haines, B. J., Cullen, W. R., Boyle, J. C., et al. (2005). Peer assessment of oral presentations: Effects of student gender, university affiliation and participation in the development of assessment criteria. Assessment & Evaluation in Higher Education, 30, 21–34.
  60. Leonard, M., & Davey, C. (2001). Thoughts on the 11 plus. Belfast: Save the Children Fund.
  61. Linn, R. L. (1993). Educational assessment: Expanded expectations and challenges. Educational Evaluation and Policy Analysis, 15(1), 1–16.
  62. Linn, R. L., Baker, E., & Dunbar, S. (1991). Complex, performance-based assessment: Expectations and validation criteria. Educational Researcher, 20(8), 15–21.
  63. Martens, R., & Dochy, F. (1997). Assessment and feedback as student support devices. Studies in Educational Evaluation, 23(3), 257–273.
  64. Marton, F., & Säljö, R. (1976). On qualitative differences in learning: Outcome and process. British Journal of Educational Psychology, 46, 4–11, 115–127.
  65. McDowell, L. (1995). The impact of innovative assessment on student learning. Innovations in Education and Training International, 32(4), 302–313.
  66. McDowell, L., & Sambell, K. (1999). The experience of innovative assessment: Student perspectives. In S. Brown & A. Glasner (Eds.), Assessment matters in higher education: Choosing and using diverse approaches (pp. 71–82). Buckingham: The Society for Research into Higher Education/Open University Press.
  67. Messick, S. (1989). Meaning and values in test validation: The science and ethics of assessment. Educational Researcher, 18(2), 5–11.
  68. Messick, S. (1994). The interplay of evidence and consequences in the validation of performance assessments. Educational Researcher, 23(2), 13–22.
  69. Messick, S. (1995). Validity of psychological assessment: Validation of inferences from persons’ responses and performances as scientific inquiry into score meaning. American Psychologist, 50, 741–749.
  70. Nevo, D. (1995). School-based evaluation: A dialogue for school improvement. London: Pergamon.
  71. Nijhuis, J., Segers, M. R. S., & Gijselaers, W. (2005). Influence of redesigning a learning environment on student perceptions and learning strategies. Learning Environments Research: An International Journal, 8, 67–93.
  72. Norton, L. S., Tilley, A. J., Newstead, S. E., & Franklyn-Stokes, A. (2001). The pressures of assessment in undergraduate courses and their effect on student behaviours. Assessment & Evaluation in Higher Education, 26, 269–284.
  73. Orsmond, P., Merry, S., & Reiling, K. (2000). The use of student derived marking criteria in peer and self-assessment. Assessment & Evaluation in Higher Education, 25(1), 23–38.
  74. Pond, K., Ul-Haq, R., & Wade, W. (1995). Peer review: A precursor to peer assessment. Innovations in Education and Training International, 32, 314–323.
  75. Pope, N. (2001). An examination of the use of peer rating for formative assessment in the context of the theory of consumption values. Assessment & Evaluation in Higher Education, 26, 235–246.
  76. Pope, N. (2005). The impact of stress in self- and peer assessment. Assessment & Evaluation in Higher Education, 30, 51–63.
  77. Powers, D., Fowles, M., & Willard, A. (1994). Direct assessment, direct validation? An example from the assessment of writing. Educational Assessment, 2(1), 89–100.
  78. Rigsby, L. C., & DeMulder, E. K. (2003). Teachers’ voices interpreting standards: Compromising teachers’ autonomy or raising expectations and performances? Education Policy Analysis Archives, 11(44). Retrieved August 15, 2007, from http://epaa.asu.edu/epaa/v11n44/
  79. Sambell, K., McDowell, L., & Brown, S. (1997). But is it fair? An exploratory study of student perceptions of the consequential validity of assessment. Studies in Educational Evaluation, 23(4), 349–371.
  80. Scouller, K. (1998). The influence of assessment method on students’ learning approaches: Multiple choice question examination versus assignment essay. Higher Education, 35, 453–472.
  81. Scouller, K. M., & Prosser, M. (1994). Students' experiences in studying for multiple choice question examinations. Studies in Higher Education, 19(3), 267–279.
  82. Segers, M., & Dochy, F. (2001). New assessment forms in problem-based learning: The value-added of the students' perspective. Studies in Higher Education, 26(3), 327–343.
  83. Segers, M. S. R., Dierick, S., & Dochy, F. (2001). Quality standards for new modes of assessment: An exploratory study of the consequential validity of the OverAll Test. European Journal of Psychology of Education, XVI, 569–588.
  84. Segers, M., Dochy, F., & Cascallar, E. (2003). Optimizing new modes of assessment: In search of qualities and standards. Boston: Kluwer Academic.
  85. Shavelson, R. J. (1994). Guest editor preface. International Journal of Educational Research, 21, 235–237.
  86. Shavelson, R. J., Webb, N. M., & Rowley, G. L. (1989). Generalizability theory. American Psychologist, 44(6), 922–932.
  87. Shepard, L. (1991). Interview on assessment issues with Lorrie Shepard. Educational Researcher, 20(2), 21–23.
  88. Sluijsmans, D. M. A. (2002). Student involvement in assessment: The training of peer assessment skills. Unpublished doctoral dissertation, Open University, Heerlen, The Netherlands.
  89. Starren, H. (1998). De toets als hefboom voor meer en beter leren [The test as a lever for more and better learning]. Academia, February 1998.
  90. Struyf, E., Vandenberghe, R., & Lens, W. (2001). The evaluation practice of teachers as a learning opportunity for students. Studies in Educational Evaluation, 27(3), 215–238.
  91. Struyven, K., Dochy, F., & Janssens, S. (2005). Students’ perceptions about evaluation and assessment in higher education: A review. Assessment & Evaluation in Higher Education, 30(4), 331–347.
  92. Struyven, K., Gielen, S., & Dochy, F. (2003). Students’ perceptions on new modes of assessment and their influence on student learning: The portfolio case. European Journal of School Psychology, 1(2), 199–226.
  93. Suen, H. K., Logan, C. R., Neisworth, J. T., & Bagnato, S. (1995). Parent-professional congruence: Is it necessary? Journal of Early Intervention, 19(3), 243–252.
  94. Tan, C. M. (1992). An evaluation of continuous assessment in the teaching of physiology. Higher Education, 23(3), 255–272.
  95. Thomas, P., & Bain, J. (1984). Contextual dependence of learning approaches: The effects of assessments. Human Learning, 3, 227–240.
  96. Thomson, K., & Falchikov, N. (1998). Full on until the sun comes out: The effects of assessment on student approaches to studying. Assessment & Evaluation in Higher Education, 23(4), 379–390.
  97. Topping, K. (1998). Peer-assessment between students in colleges and universities. Review of Educational Research, 68(3), 249–276.
  98. Trigwell, K., & Prosser, M. (1991a). Improving the quality of student learning: The influence of learning context and student approaches to learning on learning outcomes. Higher Education, 22(3), 251–266.
  99. Trigwell, K., & Prosser, M. (1991b). Relating approaches to study and quality of learning outcomes at the course level. British Journal of Educational Psychology, 61(3), 265–275.
  100. Vermunt, J. D. H. M. (1992). Qualitative analysis of the interplay between internal and external regulation of learning in two different learning environments. International Journal of Psychology, 27, 574.

Copyright information

© Springer Science+Business Media B.V. 2009

Authors and Affiliations

  Centre for Educational Research on Lifelong Learning and Participation, Centre for Research on Teaching and Training, University of Leuven, Belgium