An Assessment System for Monitoring the Academic Development of Students

  • Recep Gür
Conference paper
Part of the Springer Proceedings in Complexity book series (SPCOM)

Abstract

This study aims to develop an assessment system for monitoring the academic development of students. To that end, it details the team of specialists who are to develop the assessment system and their respective tasks; the taxonomy to be used; what should be observed during the item development process; the limitations and advantages of different item types; measures to be taken for test security; issues to consider when giving students feedback on their achievements and failures; and, finally, what kinds of evidence to accumulate regarding the validity and reliability of the measurement results. A qualified assessment system of this kind takes a holistic approach to students' cognitive, emotional, and behavioral skills and monitors their academic and social development throughout their basic schooling years, rather than relying on high-stakes tests (university entrance exams, public personnel selection exams, etc.) in which critical decisions are made within a few hours, usually in a single session. Such a system would both ensure the effective use of human capital and make it possible to direct individuals toward fields of occupation in which they will be more successful and satisfied in their professional lives.

Keywords

Student monitoring system · High-stakes testing · Accountability in education

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Faculty of Education, Department of Educational Sciences, Measurement and Evaluation in Education, Erzincan University, Erzincan, Turkey