
Developing an adaptive tool to select, plan, and scaffold oral assessment tasks for undergraduate courses

Development Article · Published in Educational Technology Research and Development

Abstract

The increased linguistic and cultural diversity of undergraduate classrooms at English language institutions has imposed additional pedagogical and assessment challenges on instructors, many of whom lack the knowledge necessary to design classroom activities and assessments that are fair to all students regardless of students’ backgrounds and language abilities. The development of an adaptive instrument for instructors who do not specialize in English language learning represents an attempt to adjust instructional practices to meet this need. This paper reports on the development of an instrument that undergraduate instructors can use to plan their courses at universities where English is the language of instruction. The instrument’s intended use is illustrated through an example that involves the planning of an interdisciplinary undergraduate course. To build this adaptive tool, a taxonomy that describes the relevant components of assessments involving oral communication was developed and externally reviewed. The questions used in the instrument were then developed and piloted with a group of university undergraduate instructors, after which the instrument was further refined. Although piloting revealed an increase in instructor awareness of how language abilities relate to assessment, further research is needed to determine the extent to which this tool affects instructors’ classroom or assessment practices.


Notes

  1. ELL is used here to refer to any learner whose primary language(s), or the language(s) spoken at home, is not English and who may still be working toward proficiency in English, regardless of the context in which they are found.

Abbreviations

L1: First language

ELL: English language learner

ILTA: International Language Testing Association

IELTS: International English Language Testing System

TOEFL iBT: Test of English as a Foreign Language Internet-Based Test

ETS: Educational Testing Service

TOEIC: Test of English for International Communication


Acknowledgments

We would like to thank the reviewers for their guidance. We would also like to thank our instructor, classmates, and participants for their guidance, contributions, and feedback. The first author held W. Garfield Weston and Walter C. Sumner Memorial Fellowships.

Author information

Corresponding author: Carrie Demmans Epp.


Cite this article

Demmans Epp, C., Park, G. & Plumb, C. Developing an adaptive tool to select, plan, and scaffold oral assessment tasks for undergraduate courses. Education Tech Research Dev 63, 475–498 (2015). https://doi.org/10.1007/s11423-015-9375-8
