Telling the Story of a Test: The Test of Academic Literacy for Postgraduate Students (TALPS)


Abstract

This chapter follows Shohamy's (2001) exhortation "to tell the story of the test". It begins by highlighting the need for the Test of Academic Literacy for Postgraduate Students (TALPS), used primarily as a placement and diagnostic mechanism for postgraduate study, before documenting its progress from initial conceptualisation, design and development, through trialling and the analysis of results, to its final implementation. Using the empirical evidence gathered, assertions are made about the reliability and validity of the test. Documenting the design process ensures that relevant information is available and accessible both to test takers and to the public. This telling of the story of TALPS is the first step in ensuring transparency and accountability. The second step relates to issues of fairness, especially the use of tests to restrict and deny access, which may occasion a negative attitude to tests. Issues of fairness dictate that test designers consider the impact of the test; employ effective ways to promote its responsible use; be willing to mitigate the effects of mismeasurement; consider potential refinement of the test format; and ensure alignment between the test and the teaching/intervention that follows. It is in telling the story of TALPS, and in highlighting how issues of fairness have been considered seriously in its design and use, that we hope to answer a key question that all test designers need to ask: Have we, as test designers, succeeded in designing a socially acceptable, fair and responsible test?

Keywords

Academic literacy · Academic writing · Language tests · Postgraduate study · Intervention · Test design · Fairness · Transparency · Accountability

References

  1. American Educational Research Association (AERA). (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
  2. Bachman, L. F., & Palmer, A. S. (1996). Language testing in practice: Designing and developing useful language tests. Oxford: Oxford University Press.
  3. Beu, D. S., & Buckley, M. R. (2004). Using accountability to create a more ethical climate. Human Resource Management Review, 14, 67–83.
  4. Bovens, M. (2005). Public accountability: A framework for the analysis and assessment of accountability arrangements in the public domain. In E. Ferlie, L. Lynn, & C. Pollitt (Eds.), The Oxford handbook of public management (pp. 1–36). Oxford: Oxford University Press.
  5. Boyd, K., & Davies, A. (2002). Doctors' orders for language testers. Language Testing, 19(3), 296–322.
  6. Butler, H. G. (2007). A framework for course design in academic writing for tertiary education. PhD thesis, University of Pretoria, Pretoria.
  7. Butler, H. G. (2009). The design of a postgraduate test of academic literacy: Accommodating student and supervisor perceptions. Southern African Linguistics and Applied Language Studies, 27(3), 291–300.
  8. Bygate, M. (2004). Some current trends in applied linguistics: Towards a generic view. AILA Review, 17, 6–22.
  9. CITO. (2006). TiaPlus, classical test and item analysis ©. Arnhem: Cito M. and R. Department.
  10. Davidson, F., & Lynch, B. K. (2002). Testcraft. New Haven: Yale University Press.
  11. Davies, A., Brown, A., Elder, C., Hill, K., Lumley, T., & McNamara, T. (Eds.). (1999). Dictionary of language testing. Studies in Language Testing, 7. Cambridge: Cambridge University Press.
  12. Du Plessis, C. (2012). The design, refinement and reception of a test of academic literacy for postgraduate students. MA dissertation, University of the Free State, Bloemfontein.
  13. Frink, D. D., & Klimoski, R. J. (2004). Advancing accountability theory and practice: Introduction to the Human Resource Management Review special edition. Human Resource Management Review, 14, 1–17.
  14. Fulcher, G., & Davidson, F. (2007). Language testing and assessment: An advanced resource book. New York: Routledge.
  15. Geldenhuys, J. (2007). Test efficiency and utility: Longer or shorter tests. Ensovoort, 11(2), 71–82.
  16. Hamp-Lyons, L. (2000a). Fairnesses in language testing. In A. J. Kunnan (Ed.), Fairness and validation in language assessment. Studies in Language Testing, 9 (pp. 30–34). Cambridge: Cambridge University Press.
  17. Hamp-Lyons, L. (2000b). Social, professional and individual responsibility in language testing. System, 28, 579–591.
  18. Hamp-Lyons, L. (2001). Ethics, fairness(es), and developments in language testing. In C. Elder et al. (Eds.), Experimenting with uncertainty: Essays in honour of Alan Davies. Studies in Language Testing, 11 (pp. 222–227). Cambridge: Cambridge University Press.
  19. Inter-Institutional Centre for Language Development and Assessment (ICELDA). (2015). [Online]. Available: http://icelda.sun.ac.za. Accessed 7 May 2015.
  20. Kearns, K. P. (1998). Institutional accountability in higher education: A strategic approach. Public Productivity & Management Review, 22(2), 140–156.
  21. Kurpius, S. E. R., & Stafford, M. E. (2006). Testing and measurement: A user-friendly guide. Thousand Oaks: Sage Publications.
  22. Maher, C. (2011). Academic writing ability and performance of first year university students in South Africa. Research report for the MA dissertation, University of the Witwatersrand, Johannesburg. [Online]. Available: http://wiredspace.wits.ac.za/bitstream/. Accessed 20 July 2015.
  23. McNamara, T., & Roever, C. (2006). Language testing: The social dimension. Malden: Blackwell Publishing.
  24. Mdepa, W., & Tshiwula, L. (2012). Student diversity in South African higher education. Widening Participation and Lifelong Learning, 13, 19–33.
  25. Naurin, D. (2007). Transparency, publicity, accountability – The missing links. Unpublished paper delivered at the CONNEX-RG 2 workshop on 'Delegation and mechanisms of accountability in the EU', 8–9 March, Uppsala.
  26. Norton, B. (1997). Accountability in language assessment. In C. Clapham & D. Corson (Eds.), Language testing and assessment: Encyclopaedia of language and education 7 (pp. 323–333). Dordrecht: Kluwer Academic.
  27. Patterson, R., & Weideman, A. (2013). The typicality of academic discourse and its relevance for constructs of academic literacy. Journal for Language Teaching, 47(1), 107–123. http://dx.doi.org/10.4314/jlt.v47i1.5
  28. Pot, A. (2013). Diagnosing academic language ability: An analysis of TALPS. MA dissertation, Rijksuniversiteit Groningen, Groningen.
  29. Pot, A., & Weideman, A. (2015). Diagnosing academic language ability: Insights from an analysis of a postgraduate test of academic literacy. Language Matters, 46(1), 22–43. http://dx.doi.org/10.1080/10228195.2014.986665
  30. Rambiritch, A. (2012). Transparency, accessibility and accountability as regulative conditions for a postgraduate test of academic literacy. PhD thesis, University of the Free State, Bloemfontein.
  31. Rambiritch, A. (2013). Validating the Test of Academic Literacy for Postgraduate Students (TALPS). Journal for Language Teaching, 47(1), 175–193.
  32. Rea-Dickins, P. (1997). So, why do we need relationships with stakeholders in language testing? A view from the U.K. Language Testing, 14(3), 304–314.
  33. Scholtz, D., & Allen-Ile, C. O. K. (2007). Is the SATAP test an indicator of academic preparedness for first year university students? South African Journal of Higher Education, 21(7), 919–939.
  34. Schuurman, E. (2005). The technological world picture and an ethics of responsibility: Struggles in the ethics of technology. Sioux Center: Dordt College Press.
  35. Second Language Testing Inc. (2013). Pilot testing and field testing. [Online]. Available: http://2lti.com/test-development/pilot-testing-and-field-testing/
  36. Shohamy, E. (1997). Testing methods, testing consequences: Are they ethical? Are they fair? Language Testing, 14(3), 340–349.
  37. Shohamy, E. (2001). The power of tests: A critical perspective on the uses of language tests. London: Longman.
  38. Shohamy, E. (2008). Language policy and language assessment: The relationship. Current Issues in Language Planning, 9(3), 363–373.
  39. Sinclair, A. (1995). The chameleon of accountability: Forms and discourses. Accounting, Organizations and Society, 20(2/3), 219–237.
  40. Van der Slik, F., & Weideman, A. (2005). The refinement of a test of academic literacy. Per Linguam, 21(1), 23–35.
  41. Van Dyk, T., & Weideman, A. (2004). Finding the right measure: From blueprint to specification to item type. SAALT Journal for Language Teaching, 38(1), 15–24.
  42. Visser, A. J., & Hanslo, M. (2005). Approaches to predictive studies: Possibilities and challenges. South African Journal of Higher Education, 19(6), 1160–1176.
  43. Weideman, A. (2006). Transparency and accountability in applied linguistics. Southern African Linguistics and Applied Language Studies, 24(1), 71–86.
  44. Weideman, A. (2007). A responsible agenda for applied linguistics: Confessions of a philosopher. Per Linguam, 23(2), 29–53.
  45. Weideman, A. (2009). Constitutive and regulative conditions for the assessment of academic literacy. Southern African Linguistics and Applied Language Studies, 27(3), 235–251.
  46. Weideman, A. (2014). Innovation and reciprocity in applied linguistics. Literator, 35(1), 1–10. [Online]. Available: http://dx.doi.org/10.4102/lit.v.35i1.1074
  47. Weideman, A., & Van Dyk, T. (Eds.). (2014). Academic literacy: Test your competence. Potchefstroom: Inter-Institutional Centre for Language Development and Assessment (ICELDA).
  48. Weideman, A., Patterson, R., & Pot, A. (2016). Construct refinement in tests of academic literacy. In J. Read (Ed.), Post-admission language assessment of university students. Dordrecht: Springer.
  49. Weir, C. J. (2005). Language testing and validation: An evidence-based approach. Houndmills: Palgrave Macmillan.

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Unit for Academic Literacy, University of Pretoria, Pretoria, South Africa
  2. Office of the Dean: Humanities, University of the Free State, Bloemfontein, South Africa
