Mathematics Education Research Journal, Volume 23, Issue 1, pp 67–76

Middle school students’ responses to two-tier tasks

Original Article

Abstract

The structure of two-tier testing is such that the first tier consists of a multiple-choice question and the second tier requires a justification for the answer chosen in the first tier. This study evaluates two-tier tasks in “proportion” in terms of students’ capacity to write and select justifications, and examines the effect of different two-tier formats on student performance. Twenty students from each of Y7 and Y8 participated in the study in Melbourne in March 2008. The students completed eight similar tests, each containing eight two-tier tasks, and eight students were then interviewed individually after the testing. Analysis of the students’ responses revealed that (1) Y7 and Y8 students were able to select and write justifications for two-tier tasks, (2) their success in writing or selecting justifications varied across the “marked answer” and “select answer” formats, and (3) their justifications provided some information about their misconceptions in proportional reasoning. Implications are discussed for teachers seeking alternative assessment tasks that trace students’ reasoning behind their correct and incorrect answers.

Copyright information

© Mathematics Education Research Group of Australasia, Inc. 2011

Authors and Affiliations

University of Melbourne, Carlton, Australia