State-of-the-Art of Assessment in Tunisia: The Case of Testing Listening Comprehension

  • Sahbi Hidri
Chapter

Abstract

As a prerequisite for any meaningful form of accountability, language tests are designed and administered to spur learning outcomes that serve the educational needs of a nation. This article investigated language assessment in Tunisia by focusing on the testing of listening comprehension. SPSS and FACETS quantitative analyses of the scores of 646 test-takers on an achievement examination suggested that students had very low language ability and that the nine raters who graded the exam were harsh and subjectively biased toward the nature of the exam tasks, indicating a fuzzy conception of the construct of listening comprehension. This led to irrelevant assessment results which, in fact, called into question the testability of the exam items. Implications of the results were considered, and recommendations for further investigating assessment in this context were also discussed.

Keywords

Listening · Assessment · Ability · Fuzzy construct · Bias · Irrelevance

Notes

Acknowledgements

I would like to thank the following people for their unconditional help and cooperation at the time of data collection: Aicha Graja, Faten Belhaj, Faten Houioui, Hajer Mami, Hedia Oueslati, Rim Drira, Rim Zaoui, Safia Sahli, and Selma Ben Mrad. I, however, remain responsible for the contents of this work.


Copyright information

© The Author(s) 2019

Authors and Affiliations

  • Sahbi Hidri
  1. Faculty of Human and Social Sciences of Tunis, Tunis, Tunisia
