Abstract
It is not surprising that tensions persist between the two disciplinary research paradigms in which language testers situate themselves: psychometrics, which by definition involves the objective measurement of psychological traits, processes, and abilities through the analysis of sophisticated quantitative data, and applied linguistics, in which the study of language in use, and especially the construction of discourse, often demands a more interpretive, qualitative approach. This chapter examines qualitative research techniques that are increasingly popular choices for designing, revising, and validating performance tests, in which test takers write or speak; face-to-face speaking tests are the primary focus here. It traces the history of qualitative research in language testing from 1990 to the present and describes some of the main findings about face-to-face speaking tests that have emerged from this scholarship. Several recent qualitative research papers on speaking tests are summarized, followed by an examination of three mixed methods studies, in which qualitative and quantitative techniques are carefully and deliberately combined to elucidate findings that neither method could produce alone. I conclude by considering challenges facing qualitative language testing researchers, especially in explicating research designs and determining appropriate evaluative criteria, and by speculating on areas for future research, including studies that draw on other methodological approaches, such as critical language testing and ethnography, and studies that shed light on World Englishes (WEs) and the Common European Framework of Reference (CEFR).
Copyright information
© 2016 Springer International Publishing AG
Cite this entry
Lazaraton, A. (2016). Qualitative Methods of Validation. In: Shohamy, E., Or, I., May, S. (eds) Language Testing and Assessment. Encyclopedia of Language and Education. Springer, Cham. https://doi.org/10.1007/978-3-319-02326-7_15-1
Online ISBN: 978-3-319-02326-7
eBook Packages: Springer Reference Education; Reference Module Humanities and Social Sciences; Reference Module Education