Testing Usefulness of Reading Comprehension Exams Among First Year Students of English at the Tertiary Level in Tunisia

  • Yassmine Mattoussi
Part of the Second Language Learning and Teaching book series (SLLT)


This study investigated the testing usefulness (Bachman & Palmer, 1996) of reading comprehension exams among first year students of English at the tertiary level in an EFL context. Data were gathered by means of two questionnaires and a corpus of 173 graded reading exams taken by first year students. The questionnaires were administered to 64 students and 21 teachers at different institutes and universities. The scores of the reading exams were used to check the appropriateness of the construct and rater consistency. Questionnaire results indicated that most of the reading comprehension examinations had an acceptable level of reliability, a moderate level of authenticity and interactiveness, and a high level of construct validity and practicality. The findings also indicated that the achievement reading tests had a harmful washback effect on first year students but a beneficial one on teachers. By contrast, the analysis of the reading scores using Cronbach's alpha to estimate the tests' reliability showed that only three of the six tests reached an acceptable reliability (α = .70). As for the construct validity assessment of the reading exams, only the reading sections of comprehension, composition, grammar tests 1 and 2 proved to be construct valid. These results contradicted those of the questionnaires; the participants did not provide trustworthy answers in the reliability and construct validity parts. Therefore, most of the reading midterm and final exams designed by teachers at the tertiary level in Tunisia had low reliability and construct validity, and moderate authenticity and interactiveness.
Nevertheless, the reading exams had high practicality and a beneficial impact on teachers, though a harmful impact on first year learners. Overall, the usefulness of the reading exams was low, since reliability and construct validity, two essential criteria, were shown to be threatened. The study has implications for test design.
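The reliability check described above rests on Cronbach's alpha, computed over examinees' per-item scores and judged against the .70 threshold the study adopts. A minimal sketch of that computation is given below; the item scores are hypothetical illustration data, not data from the study.

```python
def cronbach_alpha(items):
    """Cronbach's alpha: alpha = (k/(k-1)) * (1 - sum(item variances) / variance(totals)).

    items: list of k lists, each holding one item's scores across all examinees.
    """
    k = len(items)
    n = len(items[0])

    def var(xs):
        # Sample variance (denominator n - 1).
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_variances = sum(var(item) for item in items)
    # Each examinee's total score across the k items.
    totals = [sum(item[j] for item in items) for j in range(n)]
    return (k / (k - 1)) * (1 - sum_item_variances / var(totals))

# Hypothetical scores of 5 examinees on 3 reading items.
items = [
    [2, 4, 3, 5, 4],
    [3, 4, 2, 5, 3],
    [2, 5, 3, 4, 4],
]
alpha = cronbach_alpha(items)
print(round(alpha, 2))  # 0.87, above the .70 acceptability threshold
```

With this convention, a test whose items covary strongly (examinees who score high on one item score high on the others) yields an alpha near 1, while unrelated items drive it toward or below 0.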


Test usefulness · Reliability · Construct validity · Authenticity · Interactiveness · Impact · Practicality


  1. Alderson, J. C., Clapham, C., & Wall, D. (1995). Language test construction and evaluation. Cambridge: Cambridge University Press.
  2. Bachman, L. F. (1990). Fundamental considerations in language testing. Oxford: Oxford University Press.
  3. Bachman, L. F., & Palmer, A. S. (1996). Language testing in practice. Oxford: Oxford University Press.
  4. Brown, J. D. (1996). Testing in language programs. Upper Saddle River, NJ: Prentice Hall.
  5. Brown, J. D. (2002). The Cronbach alpha reliability estimate. Shiken: JALT Testing & Evaluation SIG Newsletter, 6(1), 17–18.
  6. Brown, J. D. (2003). Language assessment: Principles and classroom practices. White Plains: Longman.
  7. Cronbach, L. J. (1970). Essentials of psychological testing (3rd ed.). New York: Harper & Row.
  8. Davies, A. (1990). Principles of language testing. Oxford: Basil Blackwell, Ltd.
  9. Henning, G. (1987). A guide to language testing: Development, evaluation, research. Cambridge: Newbury House Publishers.
  10. Hidri, S. (2015). Conceptions of assessment: Investigating what assessment means to secondary and university teachers. Arab Journal of Applied Linguistics, 1(1), 19–43.
  11. Hughes, A. (1989). Testing for language teachers. Cambridge: Cambridge University Press.
  12. Hughes, A. (2003). Testing for language teachers (2nd ed.). Cambridge: Cambridge University Press.
  13. Pallant, J. (2005). SPSS survival manual: A step by step guide to data analysis using SPSS for Windows (Version 12). Australia: Allen & Unwin.
  14. Read, J., & Chapelle, C. A. (2001). A framework for second language vocabulary assessment. Language Testing, 18(1), 1–32.
  15. Taylor, L., & Weir, C. J. (2012). IELTS collected papers 2: Research in reading and listening assessment. Cambridge: Cambridge University Press.

Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  1. Department of English, The University of Letters, Arts and Humanities of Manouba, Tunis, Tunisia
