Middle school students’ responses to two-tier tasks

Original Article
Mathematics Education Research Journal

Abstract

In two-tier testing, the first tier consists of a multiple-choice question and the second tier requires a justification for the answer chosen in the first tier. This study aims to evaluate two-tier tasks on proportion in terms of students’ capacity to write and to select justifications, and to examine the effect of different two-tier formats on student performance. Twenty students from each of Year 7 (Y7) and Year 8 (Y8) participated in the study in Melbourne in March 2008. The students took eight similar tests, each containing eight two-tier tasks, and eight students were interviewed individually after the testing. Analysis of the students’ responses revealed that (1) Y7 and Y8 students were able to select and to write justifications for two-tier tasks, (2) their success in writing or selecting justifications varied between the “marked answer” and “select answer” formats, and (3) their justifications provided some information about their misconceptions in proportional reasoning. Implications are discussed for teachers seeking alternative assessment tasks that trace students’ reasoning behind correct and incorrect answers.
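To make the format concrete, consider a hypothetical two-tier proportion item (an illustration of the genre, not an item from the study’s tests; the numbers and distractors below are invented): the first tier poses a missing-value proportion problem as a multiple-choice question, and the second tier asks for the reasoning, either written freely or selected from given justifications.

```latex
% Hypothetical two-tier proportion item (illustrative sketch only;
% not taken from the study's instrument).
% Tier 1 (multiple choice): If 3 pens cost $2.40, how much do 5 pens cost?
%   (a) $3.20   (b) $4.00   (c) $4.80   (d) $4.40
% Tier 2 (justification): Explain, or select from given reasons, why you
% chose your answer. A sound justification sets up the missing-value proportion:
\[
  \frac{2.40}{3} = \frac{x}{5}
  \qquad\Longrightarrow\qquad
  x = 5 \times \frac{2.40}{3} = 5 \times 0.80 = 4.00 ,
\]
% so (b) is correct. Distractor (d) mirrors a common additive misconception:
% "two more pens, so two more dollars" gives 2.40 + 2.00 = 4.40.
```

A second tier in the “select answer” format would presumably list justifications like these, correct reasoning alongside reasoning built on common misconceptions, for students to choose among.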

Author information

Correspondence to Shajahan Haja.

About this article

Cite this article

Haja, S., Clarke, D. Middle school students’ responses to two-tier tasks. Math Ed Res J 23, 67–76 (2011). https://doi.org/10.1007/s13394-011-0004-5
