A Context-based Question Selection Model to Support the Adaptive Assessment of Learning: A study of online learning assessment in elementary schools in Indonesia

Education and Information Technologies

Abstract

In an online learning environment, it is important to establish an assessment approach that can be adapted on the fly to accommodate students' varying learning paces. At the same time, it is essential that assessment criteria remain aligned with the expected learning outcomes of the relevant education standard, which, as in Indonesia, is predominantly built on a competency-based curriculum. The aim of the research in this paper is to improve the adaptiveness of question selection in the existing Computerized Adaptive Testing (CAT) model by taking multiple aspects of user context into consideration. We propose a context-based question selection model grounded in competency evaluation, merging four methods (Classical Test Theory, the Rasch Model, linear and quadratic models, and a combination of branching and item-adaptive methods) to select questions of suitable difficulty for each individual student. To evaluate the proposed model, we conducted experiments on a real dataset of 689 elementary school students in Indonesia. The results demonstrate the effectiveness of the proposed model in terms of accuracy in predicting the appropriateness of questions relative to students' ability. This adaptive assessment method, which builds accurately on each student's competency level, will support students' success in the online learning environment.
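For readers unfamiliar with how such adaptive selection loops typically operate, the short Python sketch below illustrates the general idea shared by Rasch-based CAT systems: estimate the student's ability, pick the unanswered item whose calibrated difficulty is closest to that estimate, then update the estimate from the response. This is an illustrative sketch only, not the authors' model; the item list, difficulty values, function names, and the fixed-step update rule are assumptions standing in for the paper's combination of Classical Test Theory, Rasch, linear/quadratic, and branching methods.

```python
import math

def rasch_probability(ability, difficulty):
    """Probability of a correct response under the Rasch (1PL) model."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def select_next_question(items, ability, answered_ids):
    """Pick the unanswered item whose difficulty is closest to the
    current ability estimate (maximum information under the Rasch model)."""
    candidates = [item for item in items if item["id"] not in answered_ids]
    if not candidates:
        return None
    return min(candidates, key=lambda item: abs(item["difficulty"] - ability))

def update_ability(ability, item, correct, step=0.5):
    """Fixed-step ability update after each response; an illustrative
    stand-in for the estimation methods combined in the paper."""
    expected = rasch_probability(ability, item["difficulty"])
    return ability + step * ((1.0 if correct else 0.0) - expected)

# Example with three hypothetical calibrated items and a student starting at ability 0.0
items = [{"id": 1, "difficulty": -1.0},
         {"id": 2, "difficulty": 0.0},
         {"id": 3, "difficulty": 1.2}]
ability, answered_ids = 0.0, set()
next_item = select_next_question(items, ability, answered_ids)
ability = update_ability(ability, next_item, correct=True)
answered_ids.add(next_item["id"])
print(next_item["id"], round(ability, 2))  # selects item 2, ability rises to 0.25
```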



Data availability

The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.


Acknowledgements

The authors would like to thank the Ministry of Education, Culture, Research, and Technology of the Republic of Indonesia for funding the collaborative research in this paper through the World Class Professor Program. The authors gratefully acknowledge the headmasters, teachers, and students of the elementary schools who were involved in this study.

Funding

The research in this paper was supported by the Ministry of Education, Culture, Research, and Technology of the Republic of Indonesia through the World Class Professor Program from the Director-General of Higher Education (DIKTI).

Author information

Authors and Affiliations

Authors

Contributions

Conceptualization, Umi Laili Yuhana; methodology, Umi Laili Yuhana and Eko Mulyanto Yuniarno; data curation, Umi Laili Yuhana; writing – original draft preparation, Umi Laili Yuhana and Eko Mulyanto Yuniarno; writing – review, analysis, and editing, Eric Pardede and Wenny Rahayu.

Corresponding author

Correspondence to Umi Laili Yuhana.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethics Approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institution. Informed consent was obtained from all individual participants included in the study.

Informed Consent

The authors agreed to publish the study in this journal.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Yuhana, U.L., Yuniarno, E.M., Rahayu, W. et al. A Context-based Question Selection Model to Support the Adaptive Assessment of Learning: A study of online learning assessment in elementary schools in Indonesia. Educ Inf Technol (2023). https://doi.org/10.1007/s10639-023-12184-8


  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1007/s10639-023-12184-8
