Technology-Based Diagnostic Assessments for Identifying Early Mathematical Learning Difficulties

  • Gyöngyvér Molnár
  • Benő Csapó

Abstract

This chapter summarises the latest developments in the field of educational assessment, focusing on the transition from paper-based to computer-based testing and from summative to formative and diagnostic assessment, with the aim of realising efficient testing for personalised learning. The chapter introduces an online diagnostic assessment system that has been developed for the first six years of primary school to assess students’ progress in three main domains, including mathematics. Mathematics has always been one of the most challenging school subjects, and early failures may cause problems later in other domains of schooling as well. Making mathematics learning successful for every student requires early identification of difficulties, continuous monitoring of development, and individualised support in the first school years for those who need it. The diagnostic assessment is based on a scientifically established, detailed framework which outlines mathematics learning and development in three dimensions: (1) psychological development, or mathematical reasoning; (2) application of knowledge, or mathematical literacy; and (3) disciplinary mathematics, or curricular content. A complex online platform, eDia, has been constructed to support the entire assessment process, from item writing through item banking, test delivery, and data storage and analysis to the provision of feedback. The chapter outlines the foundations of the framework and shows how it has been mapped onto a large set of items. Finally, it presents early results from the empirical scaling and validation processes and indicates further possibilities for applying the diagnostic system.
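Only the chapter’s abstract is reproduced here, so neither the internal structure of the framework nor the measurement model behind the empirical scaling is specified. As a purely hypothetical sketch, not the authors’ implementation, the Python fragment below illustrates how an item bank of this kind might tag each task along the three dimensions named above, and how a one-parameter (Rasch) item response model, a common choice for empirically scaling diagnostic item banks, links a student’s ability θ to an item’s difficulty b. Every name and value in it is invented for illustration.

```python
from dataclasses import dataclass
from enum import Enum
import math


class Dimension(Enum):
    """The three dimensions of the diagnostic framework named in the abstract."""
    REASONING = "mathematical reasoning"      # psychological development
    LITERACY = "mathematical literacy"        # application of knowledge
    DISCIPLINARY = "curricular content"       # disciplinary mathematics


@dataclass
class Item:
    """A hypothetical item-bank record: identifier, framework tag,
    target grade, and a difficulty estimate from empirical scaling."""
    item_id: str
    dimension: Dimension
    grade: int          # grades 1-6 in the system described
    difficulty: float   # difficulty b on the logit scale


def p_correct(theta: float, item: Item) -> float:
    """Rasch (1PL) model: P(correct) = 1 / (1 + exp(-(theta - b)))."""
    return 1.0 / (1.0 + math.exp(-(theta - item.difficulty)))


# Illustrative use: a grade 2 student of average ability (theta = 0.0)
# attempting a slightly easy reasoning item (b = -0.5).
item = Item("math-g2-0042", Dimension.REASONING, grade=2, difficulty=-0.5)
print(round(p_correct(0.0, item), 2))  # ~0.62
```

In an actual scaling run, the difficulty parameters would be estimated from students’ response data rather than assigned by hand, and the item bank would drive adaptive test delivery and diagnostic feedback through the assessment platform.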

Keywords

Technology-based assessment · Diagnostic assessment · Learning difficulties

Acknowledgement

This study was funded by OTKA grant K115497.

Copyright information

© Springer International Publishing AG, part of Springer Nature 2019

Authors and Affiliations

  1. Institute of Education, University of Szeged, Szeged, Hungary
