Abstract
This chapter summarises the latest developments in the field of educational assessment, focusing on the transition from paper-based to computer-based testing and from summative to formative and diagnostic assessment, with the aim of realising efficient testing for personalised learning. The chapter introduces an online diagnostic assessment system developed for the first six years of primary school to assess students’ progress in three main domains, including mathematics. Mathematics has always been one of the most challenging school subjects, and early failures may cause problems later in other domains of schooling as well. Making mathematics learning successful for every student requires early identification of difficulties, continuous monitoring of development, and individualised support in the first school years for those who need it. The diagnostic assessment is based on a scientifically established, detailed framework which outlines mathematics learning and development in three dimensions: (1) psychological development, or mathematical reasoning; (2) application of knowledge, or mathematical literacy; and (3) disciplinary mathematics, or curricular content. A complex online platform, eDia, has been constructed to support the entire assessment process, from item writing through item banking, test delivery, and storing and analysing the data, to providing feedback. This chapter outlines the foundations of the framework and shows how it has been mapped into a large set of items. Finally, it presents some early results from the empirical scaling and validation processes and indicates further possibilities for the application of the diagnostic system.
Acknowledgement
This study was funded by OTKA K115497.
Copyright information
© 2019 Springer International Publishing AG, part of Springer Nature
Cite this chapter
Molnár, G., Csapó, B. (2019). Technology-Based Diagnostic Assessments for Identifying Early Mathematical Learning Difficulties. In: Fritz, A., Haase, V.G., Räsänen, P. (eds) International Handbook of Mathematical Learning Difficulties. Springer, Cham. https://doi.org/10.1007/978-3-319-97148-3_40
DOI: https://doi.org/10.1007/978-3-319-97148-3_40
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-97147-6
Online ISBN: 978-3-319-97148-3
eBook Packages: Education (R0)