Student-Problem Chart: An Essential Tool for SLOA

  • Magdalena Mo Ching Mok
  • Sze Ming Lam
  • Ming-Yan Ngan
  • Jing Jing Yao
  • Michael Ying Wah Wong
  • Jacob Kun Xu
  • Stephen Yin Chuen Ting
Chapter
Part of the Education in the Asia-Pacific Region: Issues, Concerns and Prospects book series (EDAP, volume 18)

Abstract

This chapter introduces the student-problem chart (SP chart) as a tool to support the implementation of assessment for learning. In essence, the SP chart is a matrix of students’ responses to the items of an assessment, with the rows rearranged in descending order of student ability from the top of the matrix and the columns in ascending order of item difficulty from the left. On the assumption that a more able student is more likely than a less able student to answer any given item correctly, and that an easier item is more likely than a harder item to be answered correctly by any given student, modified caution indices can be computed to reflect the extent to which student and item response patterns deviate from the expected pattern. The SP Xpress software outputs the SP chart, the modified caution indices, and other statistics to support teachers in diagnostic assessment for enhanced student learning.
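To make the construction concrete, the sorting step and the modified caution index of Harnisch and Linn (1981) can be sketched as follows. This is a minimal illustration only, not the SP Xpress implementation; the function and variable names are the author's own, and perfect and zero scorers are assigned an index of 0 by convention since their patterns carry no misfit information.

```python
def sp_chart(responses):
    # Arrange a 0/1 response matrix into S-P form: rows sorted by
    # descending student score, columns by descending item facility
    # (proportion correct), so easier items sit leftmost.
    n_items = len(responses[0])
    p = [sum(row[j] for row in responses) / len(responses) for j in range(n_items)]
    col_order = sorted(range(n_items), key=lambda j: -p[j])
    rows = sorted(responses, key=lambda r: -sum(r))
    return [[row[j] for j in col_order] for row in rows]

def modified_caution_indices(sorted_matrix):
    # Modified caution index C* per student (Harnisch & Linn, 1981):
    # 0 for a perfect Guttman pattern (the n easiest items correct),
    # 1 for a fully reversed pattern (the n hardest items correct).
    n_students = len(sorted_matrix)
    n_items = len(sorted_matrix[0])
    p = [sum(row[j] for row in sorted_matrix) / n_students for j in range(n_items)]
    indices = []
    for row in sorted_matrix:
        n = sum(row)
        perfect = sum(p[:n])                 # Guttman pattern weight
        worst = sum(p[n_items - n:])         # reversed pattern weight
        if n == 0 or n == n_items or perfect == worst:
            indices.append(0.0)              # no pattern information
            continue
        observed = sum(u * pj for u, pj in zip(row, p))
        indices.append((perfect - observed) / (perfect - worst))
    return indices

responses = [
    [1, 1, 1, 0],   # high scorer, expected pattern
    [1, 1, 0, 0],   # score 2, expected pattern
    [0, 0, 1, 1],   # score 2, but misses the easy items: aberrant
    [1, 0, 0, 0],   # low scorer, expected pattern
]
print(modified_caution_indices(sp_chart(responses)))  # -> [0.0, 0.0, 1.0, 0.0]
```

The third student's index of 1.0 flags exactly the diagnostic signal the SP chart is designed to surface: the same total score as the second student, but obtained from the items that student's peers found hardest.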

References

  1. Black, P., & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment. London: GL Assessment.
  2. Black, P., & Wiliam, D. (2009). Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability, 21, 5–31.
  3. Chacko, I. (1998). S-P chart and instructional decisions in the classroom. International Journal of Mathematical Education in Science & Technology, 29(3), 445–450.
  4. Connell, M., & Harnisch, D. (2004). SP charts: Creating a longitudinal view of a technology enabled intervention. In C. Crawford, D. Willis, R. Carlsen, I. Gibson, K. McFerrin, J. Price, & R. Weber (Eds.), Proceedings of Society for Information Technology and Teacher Education international conference 2004 (pp. 951–954). Chesapeake: AACE.
  5. Dai, C., Cheng, J., & Hsu, Y. (2005). The new meaning of S-P chart. In P. Kommers & G. Richards (Eds.), Proceedings of world conference on educational multimedia, hypermedia and telecommunications 2005 (pp. 3074–3079). Chesapeake: AACE.
  6. de la Torre, J. (2012). Application of the DINA model framework to enhance assessment and learning. In M. M. C. Mok (Ed.), Self-directed learning oriented assessment in the Asia-Pacific. Dordrecht: Springer.
  7. Dinero, T. E., & Blixt, S. L. (1988). Information about tests from Sato’s S-P chart. College Teaching Journal, 36(3), 123–128.
  8. Guttman, L. (1950). The basis for scalogram analysis. In S. A. Stouffer (Ed.), Measurement and prediction (The American Soldier, Vol. IV). New York: Wiley.
  9. Harnisch, D. L. (1981). Analysis of item response patterns: Consistency indices and their application to criterion referenced tests (ERIC Document Reproduction Service No. ED 209335).
  10. Harnisch, D. L. (1983). Item response patterns: Applications for educational practice. Journal of Educational Measurement, 20(2), 191–206.
  11. Harnisch, D. L., & Linn, R. L. (1981). Analysis of item patterns: Questionable test data and dissimilar curriculum practices. Journal of Educational Measurement, 18(3), 133–146.
  12. Harnisch, D. L., & Romy, N. (1985). SPP: Student problem package on the IBM-PC. User’s guide, version 1.0. Champaign: Office of Educational Testing, Research, and Service/University of Illinois at Urbana-Champaign.
  13. Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.
  14. Ho, C. M., Leung, A. W. C., Mok, M. M. C., & Cheung, P. (2012). Informing learning and teaching using feedback from assessment data: Hong Kong teachers’ attitudes towards Rasch measurement. In M. M. C. Mok (Ed.), Self-directed learning oriented assessments in the Asia-Pacific. Dordrecht: Springer.
  15. Linn, R. L., & Harnisch, D. L. (1981). Interactions between item content and group membership on achievement test items. Journal of Educational Measurement, 18(2), 109–118.
  16. Mok, M. M. C. (2010). Self-directed learning oriented assessment: Assessment that informs learning & empowers the learner. Hong Kong: Pace Publications Ltd.
  17. Mok, M. M. C., Ting, Y. C., Ho, P. H. S., Wong, M. Y. W., Tse, L. C. N., Xu, J. K., & Yao, J.-J. (2011). SP Xpress 2.2 for enhancing learning oriented assessment. Hong Kong: Pace Publications Ltd. (In Chinese)
  18. Ngan, M. Y. (2011). Chapter 8: Item analysis (II). In Contemporary educational assessment theories and practices for promoting student learning. Hong Kong: Pearson Education South Asia Ltd. (In Chinese)
  19. Sato, T. (1980). The S-P chart and caution index. NEC Educational Information Bulletin, 80–1. C&C Systems Research Laboratories, Nippon Electric Co. Ltd, Takatsu-Ku Kawasaki City, Kanagawa Prefecture 213, Japan.
  20. Sato, T. (1984). The state of art on S-P analysis activities in Japan. C&C System Research Labs, Nippon Electric Co. Ltd, Takatsu-Ku Kawasaki City, Kanagawa Prefecture 213, Japan.
  21. Sato, T. (1985). Introduction to student-problem curve theory analysis and evaluation. Tokyo: Meiji Tosho.
  22. Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153–189.
  23. Tam, H. P., Wu, M., Lau, D. C. H., & Mok, M. M. C. (2012). Using user-defined fit statistics to analyze two-tier items in mathematics. In M. M. C. Mok (Ed.), Self-directed learning oriented assessment in the Asia-Pacific. Dordrecht: Springer.
  24. Tatsuoka, K. K. (1984). Caution indices based on item response theory. Psychometrika, 49(1), 95–110.
  25. Tzuriel, D. (2012). Dynamic assessment of learning potential. In M. M. C. Mok (Ed.), Self-directed learning oriented assessment in the Asia-Pacific. Dordrecht: Springer.
  26. Wu, M. (2012). Using item response theory as a tool in educational measurement. In M. M. C. Mok (Ed.), Self-directed learning oriented assessment in the Asia-Pacific. Dordrecht: Springer.
  27. Yu, M. N. (2002). Chapter 8: Student-problem chart analysis. In Educational testing and assessment: Achievement tests and teaching assessment (pp. 325–369). Taipei: Psychological Publishing Co., Ltd. (In Chinese)

Copyright information

© Springer Science+Business Media Dordrecht 2012

Authors and Affiliations

  • Magdalena Mo Ching Mok (1)
  • Sze Ming Lam (2)
  • Ming-Yan Ngan (3)
  • Jing Jing Yao (2, 4)
  • Michael Ying Wah Wong (2)
  • Jacob Kun Xu (2)
  • Stephen Yin Chuen Ting (5)
  1. Department of Psychological Studies, and Assessment Research Centre, The Hong Kong Institute of Education, Tai Po, Hong Kong
  2. Assessment Research Centre, The Hong Kong Institute of Education, Tai Po, Hong Kong
  3. Department of Curriculum and Instruction, The Hong Kong Institute of Education, Tai Po, Hong Kong
  4. Department of Psychology, Zhejiang Normal University, Jinhua, China
  5. Formerly Assessment Research Centre, The Hong Kong Institute of Education, Tai Po, Hong Kong