Audience Response Systems Reimagined
Audience response systems (ARS) allow lecturers to run quizzes in large classes by delegating to technology the time-consuming tasks of collecting and aggregating students’ answers. ARSs provide immediate feedback to lecturers and students alike. The first commercial ARSs emerged in the 1990s in the form of clickers, i.e., transmitters equipped with a number of buttons, which restrict the possible question types: most often, only multiple-choice and numerical answers are supported.
Starting from the early 2010s, the ubiquity of smartphones, laptops, and tablet computers paved the way for web-based ARSs which, while running on technology that offers richer means of input and a graphical display, still have much in common with their precursors: even though further question types beyond multiple choice are supported, the full capability of web-based technology remains unexploited. Furthermore, they do not adapt to a student’s needs and knowledge, and often restrict quizzes to two phases: answering a question and viewing the results.
This article first examines the current state of web-based ARSs: the question types found in current ARSs are identified, and their support in a variety of ARSs is examined. Afterwards, three axes along which ARSs should advance in the future are introduced: means of input, adaptation to students, and support for multiple phases. Each axis is illustrated with concrete examples of quizzes.
Keywords: Audience response systems · Adaptive learning environments · Large classes