How do we report scores and set pass marks?

  • Rita Green
Chapter

Abstract

The first decision you need to make when considering how scores should be reported is whether your listening test results will be reported as an individual skill, or as part of a total test score including other skills such as reading, language in use, writing and speaking. Your answer needs to take into account such factors as the purpose of the test and how the test results are to be used. For example, if the purpose of the test is diagnostic, placement or achievement, there are good reasons for the skills to be reported separately. In a diagnostic test, the more information you can obtain about a test taker’s strengths and weaknesses the better; collapsing the scores will result in a lot of useful information being hidden. The results of a placement test are generally used as the basis for determining which class is appropriate for a test taker. Clearly, having more details will help, particularly if the classes are subdivided for the teaching of different skills. The results of an achievement test are usually fed back into the teaching and learning cycle. Receiving information on individual skills would help the teacher decide which particular skills need further attention.
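The cost of collapsing skill scores can be made concrete with a small sketch. The example below uses entirely hypothetical score data (the skill names follow those mentioned above, but the numbers and test takers are invented for illustration): two test takers with identical totals can have very different skill profiles, and only separate reporting reveals the difference.

```python
# Hypothetical data: two test takers with the same total score but
# opposite strengths in listening and reading.
scores = {
    "Taker A": {"listening": 90, "reading": 50, "writing": 70, "speaking": 70},
    "Taker B": {"listening": 50, "reading": 90, "writing": 70, "speaking": 70},
}

for name, skills in scores.items():
    total = sum(skills.values())
    # The collapsed total is identical for both takers (280), so it
    # hides the diagnostic information carried by the skill breakdown.
    print(f"{name}: total = {total}, by skill = {skills}")
```

A placement or diagnostic decision based only on the total would treat these two test takers as interchangeable, whereas the per-skill report shows that each needs support in a different skill.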


Copyright information

© The Author(s) 2017

Authors and Affiliations

  • Rita Green
  1. Richmond, UK
