Designing Listening Tests, pp. 175-201
How do we report scores and set pass marks?
Abstract
The first decision you need to make when considering how scores should be reported is whether your listening test results will be reported as an individual skill score, or as part of a total test score that includes other skills such as reading, language in use, writing and speaking. Your answer needs to take into account factors such as the purpose of the test and how the test results are to be used. For example, if the purpose of the test is diagnostic, placement or achievement, there are good reasons for the skills to be reported separately. In a diagnostic test, the more information you can obtain about a test taker’s strengths and weaknesses the better; collapsing the scores will hide a great deal of useful information. The results of a placement test are generally used as the basis for deciding which class is appropriate for a test taker; clearly, having more detail will help, particularly if the classes are subdivided for the teaching of different skills. The results of an achievement test are usually fed back into the teaching and learning cycle, and information on individual skills would help the teacher to decide which skills need further attention.
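The point about collapsed scores hiding information can be sketched in a few lines of Python. This is a minimal illustration with hypothetical data (the skill names follow the abstract; the numbers and score scale are invented): two test takers with quite different skill profiles end up with identical totals, so a single reported score cannot distinguish them for placement or diagnostic purposes.

```python
# Hypothetical subscores for two test takers (each skill out of 20).
skills = ["listening", "reading", "language_in_use", "writing", "speaking"]

taker_a = {"listening": 18, "reading": 12, "language_in_use": 14,
           "writing": 10, "speaking": 16}
taker_b = {"listening": 10, "reading": 16, "language_in_use": 14,
           "writing": 18, "speaking": 12}

# Collapsing the profile into a single total score...
total_a = sum(taker_a.values())
total_b = sum(taker_b.values())
print(total_a, total_b)          # both totals are 70

# ...makes the two test takers indistinguishable,
print(total_a == total_b)        # True

# even though their profiles differ sharply: taker A is much stronger
# in listening, taker B in writing - exactly the information a
# diagnostic or placement decision would need.
print(taker_a["listening"] - taker_b["listening"])  # 8
print(taker_b["writing"] - taker_a["writing"])      # 8
```

The same arithmetic applies whatever the actual scale: any many-to-one mapping from a skill profile to a total necessarily discards the profile, which is the abstract's argument for reporting skills separately in diagnostic, placement and achievement contexts.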