Encyclopedia of Education and Information Technologies

Living Edition
| Editors: Arthur Tatnall

Clicker Interventions: Promoting Student Activity and Feedback at University Lectures

  • Kjetil Egelandsdal
  • Rune Johan Krumsvik
Living reference work entry
DOI: https://doi.org/10.1007/978-3-319-60013-0_189-1


“(…) Context is not always everything, but it colors everything” (Pajares 2005, p. 342), and in academia, the frames and contexts for teaching set much of the premise for how we carry out our teaching activities. What is realistic in small student groups can be completely unrealistic in large classes. Therefore, Cleveland (2002) and Denker (2013) distinguish among “small” classrooms (30 students or fewer), “medium” classrooms (40 to 100 students), “large” classrooms (100 to 150 students), and “mega” classrooms (200 students or more). In higher education, medium and large lectures often involve little dialogue and communication between students and teachers, and several studies have found that traditional lecturing in such settings is ineffective in promoting student learning (Deslauriers et al. 2011). Student response systems (SRSs, or “clickers”) are digital tools that can be used to increase student activity in large lecture settings. This entry examines how this educational technology influences “how teachers teach and students learn” in higher education today.

Lecturing is the most traditional form of teaching at universities and is still widely used, both in the everyday teaching of university students and on special occasions, when distinguished professors are invited to give guest lectures. There is an interest in oral presentation even outside the university walls, as illustrated by the popularity of TED talks, personal narratives, public lectures, and even stand-up comedy. In recent years, however, there has been increasing criticism of lectures in higher education as an outdated and ineffective method of teaching. This criticism is rooted in the increasing emphasis on student activity and student learning in education, together with a growing number of students who are used to being actively involved in instruction. Empirical studies support this criticism by showing that student activity and feedback promote student learning (Black and Wiliam 1998; Evans 2013; Hattie and Timperley 2007; Prince 2004) and that students struggle to maintain their attention during lectures (Risko et al. 2012). Lecturing has also been found to be generally less effective than student-active ways of teaching in enhancing student achievement (Deslauriers et al. 2011; Hake 1998; Knight and Wood 2005; Yoder and Hochevar 2005).

Since campus-based lectures have traditionally offered little room for student activity beyond listening to the teacher talking, lecturing seems to conflict with the idea of good teaching. The primary barrier to involving students as active participants in university lectures is often the number of students present in the auditorium, which limits both the time that can be dedicated to each student and the students’ willingness to participate, given the fear of speaking in public. In these contexts, SRSs can be used to help all students present participate actively, regardless of group size.

SRSs are digital tools that allow students to individually answer multiple-choice questions using a wireless remote control called a “clicker.” The distribution of student answers can be projected on a large screen for the teacher and students to see, and the answers can also be stored for later use. Studies have found that this technology can be used to increase student activity and engagement (Boscardin and Penuel 2012; Kay and LeSage 2009; Keough 2012; Krumsvik and Ludvigsen 2012; Lantz 2010). Interventions using this technology (henceforth called “clicker interventions”) can also increase student attention (Blood 2012; Cain et al. 2009; Rush et al. 2010; Sun 2014), have a positive effect on student learning (see Chien et al. 2016 for a review), and be a useful tool for providing both students and the teacher with feedback on the students’ understanding (Egelandsdal and Krumsvik 2017a, b, Forthcoming; Krumsvik 2012; Krumsvik and Ludvigsen 2012; Ludvigsen and Egelandsdal 2016; Ludvigsen et al. 2015).
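
To make the mechanics concrete, the following minimal Python sketch illustrates the core of such a system as just described: anonymous multiple-choice answers are tallied and summarized as a distribution that can be projected as a histogram. The sketch is illustrative only; all names (Poll, record_answer, distribution) are hypothetical and not tied to any particular SRS product.

```python
from collections import Counter

class Poll:
    """A single multiple-choice clicker question (hypothetical structure)."""

    def __init__(self, question: str, options: list[str]):
        self.question = question
        self.options = options
        self.answers: Counter = Counter()  # anonymous tallies; no student identity stored

    def record_answer(self, option: str) -> None:
        """Register one clicker press."""
        if option not in self.options:
            raise ValueError(f"Unknown option: {option}")
        self.answers[option] += 1

    def distribution(self) -> dict:
        """Share of answers per option, for projection on the big screen."""
        total = sum(self.answers.values()) or 1
        return {o: self.answers[o] / total for o in self.options}

poll = Poll("Which concept best explains the finding?", ["A", "B", "C", "D"])
for click in ["A", "C", "A", "A", "B"]:  # simulated clicker presses
    poll.record_answer(click)
for option, share in poll.distribution().items():
    print(f"{option}: {'#' * int(share * 20)} {share:.0%}")  # crude text histogram
```

Stored tallies like these are also what allow the teacher to revisit the answer distributions after the lecture, as noted above.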

This entry presents how clicker interventions can be used to promote student activity and feedback at university lectures. We start by giving a historical presentation of the university lecture to highlight the functions of such lectures, the criticisms raised against them, and the challenges they face in the twenty-first century. We then present various perspectives on feedback and how feedback situations can inform teacher instruction and enhance student learning and self-assessment. Finally, drawing on research on SRSs, we present how clicker interventions can be used to promote formative feedback situations in large plenary lectures.

History of the University Lecture

In the Middle Ages, the word “lecture” (rooted in the Latin word legere) meant “to read aloud.” At this time, a university lecture involved a teacher reading authoritative texts, most often the Bible or another ancient text, aloud to students. The number of written texts was limited. The students’ job was to write down the teacher’s dictation and reproduce the texts themselves. The function of the lecture was, therefore, as much about cultural preservation as knowledge distribution. For accuracy, the lecturer needed to stick to the script and could be fined for departing from the text at hand (Friesen 2011).

Following the invention of the printing press in the mid-1400s, the mass production of books began to challenge the lecturer as the sole source of information. As books gradually became cheaper and more available, students could engage in studies independent of lectures. This shift is exemplified by a young astronomy student in the fifteenth century asking: “Why should old men be preferred to their juniors now that it is possible for the young by diligent study to acquire the same knowledge?” (Eisenstein 1997, p. 66).

Despite this development, the lecture maintained its original form of dictation and note-taking for quite some time. An indication of a gradual shift from pure dictation can be found in the emerging use of glosses. Explanatory notes were written into the margins of the authoritative texts to assist the lecturer in commenting on different passages, and this paved the way for the use of commentary as a way of mediating between original texts and the audience. By the middle of the seventeenth century, the two ways of lecturing – pure dictation and dictation with the use of comments – appeared to be competing for dominance. For instance, in one 1642 lecture plan, the first half hour of each lecture was devoted to dictation, and the other half was devoted to glosses and commentary (Friesen 2011).

According to Clark (2006, p. 85), “[t]he eighteenth century appears to be the century when the dictation was first stopped.” In response to concerns over the quality of education, some governments went so far as to outlaw dictations (Friesen 2011). Famous thinkers like Humboldt, Schleiermacher, and Fichte also opposed lecturing as reading. Humboldt claimed that dictation was not suitable for engaging students and argued that teachers should create their lectures following rhetorical and didactical rules. Schleiermacher proposed that lectures should enlighten the audience with knowledge they did not previously possess and guide them toward better understandings (Skagen 2000). Thus, these and other contemporary scholars began lecturing without a set text or glosses. As Fichte (as quoted in Friesen 2011, p. 98) argued, the principal concern of a lecture is not “what is printed in books for us to read” but, rather, “what has stirred and transformed our spirit.” This way of thinking about understanding resonates with the hermeneutical tradition, in which text and spoken words are valuable insofar as they are interpreted and brought to life as thoughts. From this perspective, a lecture should not be about the authority of books but about the lecturer using his knowledge to affect the audience. The speaker and his own words are, therefore, important. This represents a shift in the history of the lecture from the authority of the text to the authority of the teacher. Teachers as the authors of their own spoken words came to replace the medieval tradition of teachers reading the same authoritative texts (Friesen 2011).

In the twentieth century, projection media came to supplement speech. The overhead projector was first used by the US military during the Second World War and was introduced in its commercial form in the 1960s. Later, this technology was replaced by similar but more advanced tools for digital projection, such as PowerPoint. Despite these changes, the lecture maintained its basic structure, though the dramaturgical effects of the lecture were given more attention (Friesen 2011). Goffman (1981) distinguished among three primary modes of animating spoken words: aloud reading, memorization, and fresh talk. In fresh talk, the lecturer improvises the text during the lecture. According to Goffman (1981), fresh talk is the ideal lecture style. With the assistance of notes, this method of lecturing is quite common, although, in reality, many lectures employ only the illusion of fresh talk. As suggested by the concepts of fresh talk and fresh talk illusions, lecturing can also be considered a public performance that brings ideas and the written word to life. Hence, lecturing can provide an experience of authenticity that is livelier and more entertaining than reading a book on the same topic. Unfortunately, it can also be downright dull. How a lecture is perceived is likely to depend on both the performance of the teacher and the interests of the students.

This brief presentation illustrates both continuity and change in the history of the university lecture. So where does the university lecture stand in the twenty-first century?

The University Lecture in the Twenty-First Century

In the 1960s, the main criticism of lectures was rooted in the emerging criticism of authority in society. In this century, however, the focus of the criticism has instead been that lectures are ineffective in promoting student learning (Mazur 2009; Wieman 2007). Interestingly, this development marks a third shift in the history of the lecture: from the authority of the teacher to a focus on student activity and learning outcomes. An illustrative example is Biggs and Tang’s (2011) three levels of thinking about teaching. A teacher at Level 1 focuses on the differences among students, believing that there are good students and there are poor students. If the students do poorly, they can only blame themselves. A teacher at Level 2 focuses on what the teacher does. If the students do poorly, it is because the teacher has failed to get the message across. Finally, a teacher at Level 3 focuses on what the students do and how well the intended outcomes are achieved. If the students do poorly, it is because the learning activities of the course are poorly adapted to promote the intended learning outcomes. From the first two perspectives, lecturing is an unproblematic way of teaching. From the third perspective, however, university lectures are problematic because they rely mostly on teacher monologue and are, thus, unfit for facilitating student activities that promote student learning.

This shift in thinking about teaching seems to be rooted in several developments. For one, the conception of how students learn has changed, such that constructivism has replaced the “transmission view” of learning as the dominant paradigm. Hence, learning is no longer understood as information transferred from the lecturer to the students but is “conceptualized as a process of active construction wherein learners drew on prior knowledge and experiences—both individual and sociocultural—as they built new understandings” (Cochran-Smith and Villegas 2015, p. 10). Such an understanding of learning demands both student activity and social interaction, and it requires the teacher to assess her students’ understanding in order to adapt her teaching to their learning needs.

Second, there has been an increased focus on effective teaching in higher education, “understood as teaching that is oriented to and focused on students and their learning” (Devlin and Samarawickrema 2010). The global shift from an industrial economy to a knowledge economy has put greater emphasis on the importance of higher education for sustainable development and economic growth. This, in turn, has turned attention to the quality of education, particularly with respect to students’ learning outcomes (Cochran-Smith and Villegas 2015; Đonlagić and Kurtić 2016; George 2006). The number of students enrolling in higher education has also increased dramatically worldwide, resulting in larger student groups, often with different cultural and socioeconomic backgrounds, enrolling in universities. Whereas the university was previously an elite institution for high-achieving and highly motivated students, teachers now must deal with more diverse student groups (Biggs and Tang 2011). Thus, lecturing and otherwise leaving students to study on their own might serve only to replicate social differences. For this reason, the Bologna Process and national reforms in higher education have put greater emphasis on pedagogical facilitation to even out social differences and reduce the number of students dropping out.

Third, empirical findings show that student activity and feedback situations do promote student learning (Black and Wiliam 1998; Evans 2013; Hattie 2009; Hattie and Timperley 2007; Prince 2004) and that student-active teaching is usually more effective in promoting student performance than lecturing (Deslauriers et al. 2011; Hake 1998; Hrepic et al. 2007; Knight and Wood 2005; Prince 2004; Yoder and Hochevar 2005). It has also been found that the human attention span and short-term memory are too limited to process and store most of the information contained in a long lecture (Risko et al. 2012).

In light of these developments, one could ask: Should the university lecture be kept in the twenty-first century?

The most obvious argument for retaining the lecture is that large lectures enable the instruction of many students simultaneously, which is both time- and cost-saving. Another argument, presented by Tone Kvernbekk (2011), is that the monologue of the traditional lecture is less exclusive and, therefore, less excluding than dialogical approaches. She also claims that this form of education is less intrusive because the teacher cannot control whether or how the students receive the information provided. These arguments are reasonable but debatable as a defense of the campus-based lecture. The use of digital lectures and instructional designs like the “flipped classroom” offers alternatives to campus lectures that can be more cost-effective and less exclusive and intrusive, since students can, in principle, watch these videos whenever and wherever they want. Furthermore, digital lectures offer more possibilities than campus-based lectures when it comes to combining different modalities.

However, campus-based lectures do possess a potential advantage over pre-made videos with regard to flexibility and interactivity. During a lecture, students can ask questions, voice their ideas to the teacher, discuss with their peers, and reflect on both the subject and their understanding under the guidance of an expert. The teacher can potentially improvise and make changes in her teaching based on interaction with students. Traditionally, however, student–teacher interaction has been challenging in large lecture halls with many students. Although the teacher can involve some students by posing questions to the audience, many students refrain from answering due to a fear of speaking up in public, and the few who do speak up might not represent the student group as a whole, giving the teacher a biased view of the students’ understanding.

In this context, the use of response systems has the potential to mediate the interaction between the teacher and all students present, providing valuable feedback to both the teacher and the students.

Research and Various Perspectives on Feedback

Research shows that feedback can have a considerable impact on student learning (Black and Wiliam 1998; Evans 2013; Hattie and Timperley 2007; Kluger and DeNisi 1996; Shute 2008). Feedback interventions have been found to be particularly useful when they raise students’ awareness of how to improve in relation to their current level of performance and the learning intentions (Black and Wiliam 1998, 2009; Hattie and Timperley 2007; Nicol and Macfarlane-Dick 2006; Sadler 1989). However, feedback does not always result in student improvement and may sometimes inhibit learning rather than promote it. Variations in the effects of feedback have been related to its content, form, and timing (Hattie and Timperley 2007; Kluger and DeNisi 1996; Shute 2008). For instance, the use of extrinsic rewards and praise has been found to have a limited effect on student achievement (Hattie and Gan 2011; Hattie and Timperley 2007), and extrinsic rewards can undermine internal motivation (Deci et al. 1999). On the other hand, feedback has been found to be effective when it provides information on correct (versus incorrect) responses, when it builds on previous changes, when the goals are specific and the task complexity is low, and when it is not perceived as threatening the students’ self-esteem (Hattie and Timperley 2007).

Another source of variation in the effectiveness of feedback is how different students perceive and use it (Bloxham and Campbell 2010; Carless et al. 2010; Hattie and Gan 2011; Higgins et al. 2001; Nicol and Macfarlane-Dick 2006; Sadler 2010). The literature offers numerous examples of students failing to make use of the feedback they are given (see Evans 2013; Jonsson 2013 for reviews). This discrepancy is commonly referred to as the “feedback gap.” In a review, Jonsson (2013) found that students’ use (or lack thereof) of feedback is related to their perceptions of the information and the opportunity to use it in the near future. He also found that many students use feedback passively to motivate themselves or to indicate progress but lack strategies for employing the feedback actively. Critical feedback also seems to undermine performance and motivation when strategies for improvement are lacking (Ilgen and Davis 2000; Kluger and Van Dijk 2010). In other words, whether feedback situations have the desired effect depends not only on external conditions but also on the students’ internal conditions. Since students have different preconditions (e.g., conceptual understanding and strategies) for interpreting and using feedback, the effectiveness of feedback cannot be explained merely by its content, form, and timing; one must also consider how the feedback is received and used (Boud and Molloy 2013).

Nelson and Schunn (2009) argue that feedback has three major effects: (a) motivational, to influence beliefs and willingness to participate; (b) reinforcement, to reward and punish specific behaviors; and (c) informational, to change performance in a particular direction. Students’ experiences of feedback are likely to consist of a combination of these. It is, therefore, understandable that feedback may both support students’ learning processes and have a negative impact depending on the context, since the way in which students respond to feedback is likely to be influenced by both its emotional impact and the information it provides (Price et al. 2010). Students differ in the ways they face difficulties and failures, and while some students may choose to respond to feedback by increasing their efforts to improve, others may become demotivated and choose to reduce their efforts or give up (Boekaerts and Corno 2005; Yorke 2003). Hence, feedback also affects and is affected by students’ emotions and motivation.

Another difference in how feedback works is related to different actors’ understandings of the concept. As noted by Evans (2013), the way in which feedback is conceived depends on “the particular feedback paradigm adopted” (p. 71). Different understandings of feedback lead to various feedback practices: from monologic to dialogic and from teacher-controlled to student-involved. For researchers, different understandings lead to various ways of studying these practices. The fact that practitioners, students, and researchers operate with different and often unarticulated understandings of what feedback is and its function highlights the need to clarify how feedback is understood. The most influential sources for conceptualizing feedback in this entry are Hattie and colleagues’ (Hattie 2009; Hattie and Gan 2011; Hattie and Timperley 2007) “visible teaching and learning” perspective and the formative assessment perspective as it is presented in the most frequently cited texts in the field (Black and Wiliam 1998, 2009; Nicol and Macfarlane-Dick 2006; Sadler 1989).

Formative Feedback

The visible teaching and learning perspective (Hattie and Gan 2011; Hattie and Timperley 2007) and the formative assessment perspective (Black and Wiliam 1998, 2009; Nicol and Macfarlane-Dick 2006; Sadler 1989) are similar in their emphasis on the importance of raising students’ awareness of their learning process. Hattie and Timperley (2007) focus on feedback to the students, while Black and Wiliam (2009) focus on feedback for both the students and the teacher. The purpose of feedback is to make the learning process visible to the students to support their self-monitoring and self-regulation both in the short term, when they are working with particular tasks, and in the long term, to enhance their abilities as self-regulated learners. In particular, feedback is considered effective when it answers the questions: Where am I going? (feed up), How am I going? (feed back), and Where to next? (feed forward) (Hattie and Timperley 2007).

Both Black and Wiliam (2009) and Hattie and Timperley (2007) regard all situations that promote reflection as situations of formative assessment/feedback. Black and Wiliam (2009) claim that:

Practice in a classroom is formative to the extent that evidence about student achievement is elicited, interpreted, and used by teachers, learners, or their peers, to make decisions about the next steps in instruction that are likely to be better, or better founded, than the decisions they would have taken in the absence of the evidence that was elicited. (p. 9)

whereas Hattie and Timperley (2007) define feedback as:

…information provided by an agent (e.g., teacher, peer, book, parent, self, experience) regarding aspects of one’s performance or understanding. A teacher or parent can provide corrective information, a peer can provide an alternative strategy, a book can provide information to clarify ideas, a parent can provide encouragement, and a learner can look up the answer to evaluate the correctness of a response. Feedback thus is a “consequence” of performance. (p. 81)

Both these definitions are quite broad, and their main difference is that Black and Wiliam (2009) focus on “evidence” that can be used by both the students and the teacher, while Hattie and Timperley (2007) focus solely on the student.

Since the definitions seem to apply to every situation that promotes reflection for the students (or teacher), what distinguishes an intentionally driven formative practice from more or less random events? According to Black and Wiliam (2009), “formative assessment is concerned with the creation of, and capitalization upon, ‘moments of contingency’ in instruction for the regulation of learning processes” (p. 10). In the context of education, therefore, an intended formative practice depends on the teacher facilitating situations that elicit evidence of student understanding.

Thus, the practice of formative assessment/feedback in the lecture hall depends on creating situations in which the students can engage with the subject and receive feedback on their understanding to make more informed decisions in their studying, as well as on the teacher receiving feedback on the students’ understanding in order to make informed decisions about her teaching. When feedback is conceived in this way, it is freed from a “transference” understanding of the concept, in which the teacher “tells” the students something about their performance. Instead, feedback represents the very phenomenon of the experience that arises when we act and suffer the consequences (Dewey 1997). This opens the possibility that feedback situations sometimes occur unintentionally. From such a perspective, it is the situations and the students’ experiences of them that become our focus, not a message from “a sender” to “a receiver.” In this context, clicker interventions can be conceived as situations in which students need to act, articulate and use their pre-understandings of various topics, and suffer the consequences of their actions, while the teacher needs to act on the feedback from the student answers and plenary discussions, which reflect back on her teaching.

In this entry, we use the term “formative feedback” to distinguish between feedback as an intention or something given and feedback that becomes a learning experience. Shute (2008) introduced this term, defining formative feedback “as information communicated to the learner that is intended to modify his or her thinking and behavior for the purpose of improving learning” (p. 154). This definition differentiates between formative feedback aimed at improving learning and summative feedback for certification and control. However, to conceptualize formative feedback as an experience, not as an intention, we propose that formative feedback can be understood as a consequence of our actions; for example, just as putting your understanding into action through a discussion can reveal misunderstandings, running on the ice and breaking your leg can painfully teach you to be more careful next time. This definition also encompasses feedback that immediately enhances the students’ understanding of the subject matter, not just feedback that leads to self-assessment (metacognition). Below, we will use this understanding of feedback to present and discuss research findings on the use of clicker interventions.

Student Response Systems: Affordances and Research

A distinction can be made between SRSs operating with a dedicated handheld device, a “clicker,” and a receiver connected to a computer, and web-based systems in which the students bring their own devices, such as smartphones, tablets, and computers. The benefit of clicker systems is that they are easy for the students to use and usually yield a response rate close to 100%. These systems also allow the students to be sure of their anonymity when they answer, provided the devices are handed out at the lecture. The benefit of web-based systems is that they are usually free to use, and the lecturer does not have to distribute any physical devices at the lecture. An obstacle, however, is that some students cannot participate due to connectivity problems or because they do not possess a compatible device.

In large lectures, SRSs are usually employed for formative purposes, to ask students subject-related questions during the lecture; however, clickers have also been used for student evaluation and summative assessment. Some of the challenges of using SRSs are that students may forget to bring or lose their remotes (when they are not handed out at the lecture), that the remotes may not function properly, that less experienced teachers may have trouble adjusting their teaching in response to student answers, that classes using SRSs may cover less course content, that creating SRS questions is time-consuming, and that students do not like it when SRSs are used to monitor attendance or for summative tests (Kay and LeSage 2009).

Studies have shown that when SRSs are used for formative purposes, the students’ attitudes toward the technology are generally positive. Findings also reveal that using clickers leads to increased student attendance and preparation, greater student engagement, and student appreciation for being able to participate anonymously (Boscardin and Penuel 2012; Kay and LeSage 2009; Keough 2012; Krumsvik and Ludvigsen 2012; Lantz 2010). Studies have also found that clicker interventions increase student attention (Blood 2012; Cain et al. 2009; Rush et al. 2010; Sun 2014), and the majority of studies show that clicker interventions can have a positive effect on student learning (see Chien et al. 2016 for a review).

A common criticism of clicker studies is that they are overly oriented toward technology and lack a theoretical foundation (Beatty and Gerace 2009; Boscardin and Penuel 2012; Fies and Marshall 2006). It is reasonable to say that the theoretical underpinnings for clickers are still in their early stages and need to be developed. Instead of using “grand theories” adopted from other disciplines, it seems important to develop “home ground” theories that build on education, educational technology, and digital artifacts. This implies both theories that can explain the particular phenomenon of such educational technology use and analytical frameworks that hold true beyond the local setting, allowing broader and more in-depth discussions of research findings outside the context of a particular study.

Although the use of digital tools offers new possibilities for instruction, it is the way in which such tools are used pedagogically – and not their use, per se – that influences students’ learning processes (Clark and Mayer 2011). This illustrates the need to distinguish between the potentials of the technology and the ways technologies can be applied. In the following section, we will present the two most common ways of using clicker interventions before reviewing research on these interventions through the lenses of formative feedback.

Clicker Interventions: The Classic and Peer Instruction Approaches

The two most common ways of conducting clicker interventions for formative purposes are what Nielsen et al. (2016) refer to as the “classic” approach and the “peer instruction” approach. The “peer instruction” approach is based on the work of Mazur (1997): students are asked a multiple-choice question that they answer individually before discussing their answer with the students seated next to them and re-answering the same question. In the “classic” approach, students discuss with their peers before answering individually. In both approaches, the teacher usually follows up on the student answers with a plenary discussion, as sketched below. Some studies have used similar interventions without peer discussions (Campbell and Mayer 2009; Mayer et al. 2009; Shapiro and Gordon 2012, 2013). After students have answered the clicker questions, a histogram of the students’ answers is usually projected on a large screen, and the teacher follows up by asking the students to explain their reasoning and providing them with her own explanations.
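
To summarize the two sequences, here is a minimal, hypothetical Python sketch of the flow of each approach. The helper functions are stand-ins for classroom activities (collecting votes, peer discussion, plenary follow-up) and do not belong to any real SRS API.

```python
def peer_discussion() -> None:
    """Students compare and defend their answers with their neighbors."""
    print("Peer discussion...")

def collect_votes(simulated_votes: dict) -> dict:
    """In a real lecture this would poll the clickers; here we simulate."""
    print("Votes:", simulated_votes)
    return simulated_votes

def plenary_followup(votes: dict) -> None:
    """Teacher projects the histogram and discusses the reasoning behind it."""
    print("Follow-up on distribution:", votes)

def peer_instruction_round() -> None:
    # Mazur-style: answer individually, discuss, then re-answer the same question.
    collect_votes({"A": 40, "B": 35, "C": 25})           # first individual vote
    peer_discussion()
    second = collect_votes({"A": 65, "B": 20, "C": 15})  # re-vote after discussion
    plenary_followup(second)

def classic_round() -> None:
    # Classic: discuss with peers first, then answer individually once.
    peer_discussion()
    plenary_followup(collect_votes({"A": 55, "B": 30, "C": 15}))

peer_instruction_round()
classic_round()
```

The simulated numbers merely illustrate the commonly reported pattern that the share of correct answers tends to rise on the re-vote after discussion, a finding reviewed later in this entry.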

Clicker Interventions Through the Lenses of Formative Feedback

From clicker interventions (including the clicker questions, peer discussions, and follow-up phase), students may experience two kinds of feedback: (1) feedback supporting their self-assessment (metacognition) by raising their awareness of their understanding and (2) feedback enhancing their understanding of the subject matter. The first kind of feedback relates to studies showing that clicker interventions do raise students’ awareness of their understanding (Egelandsdal and Krumsvik 2017a). The second kind of feedback relates to studies showing that clicker interventions can also have an immediate effect on student achievement (Chien et al. 2016; Egelandsdal and Krumsvik 2017b). Clicker interventions can also provide the teacher with (3) feedback on the students’ understanding. In the next subsections, we will use these potential “feedback outcomes” to structure and present research findings related to formative feedback from clicker interventions.

Feedback Supporting Students’ Self-Assessment

Feedback supporting the students’ self-assessment entails situations that raise the students’ awareness of their understanding. Such situations can arise from being asked a question, discussing the question with peers, and/or listening to the teacher or other students talk during the follow-up phase. This kind of awareness can be broken down into three strands of information: feed up, feed back, and feed forward. Feed up denotes understanding what is essential to learn in the course (important topics, concepts, etc.). Feed back means understanding how well the students have understood the subject matter. Feed forward means understanding what the students need to focus on to improve (Black and Wiliam 2009; Hattie and Timperley 2007).

Studies have shown that creating situations that raise students’ awareness of their understanding can improve student performance and help students self-regulate (Hattie and Timperley 2007), particularly when it comes to low-achieving students (Black and Wiliam 1998). As illustrated by the Dunning-Kruger effect, low competence can lead people to overestimate their abilities (Kruger and Dunning 1999), and low-achieving students tend to overestimate their understanding of subject matter (Isaacson and Fujita 2006). If students are not challenged to articulate their understanding, their self-assessment depends on seeking out and creating feedback situations on their own (Clark 2012). Since students differ in their approaches to studying, the ways in which they adapt their focus and effort will also differ (Biggs and Tang 2011; Nicol and Macfarlane-Dick 2006).

Krumsvik and Ludvigsen (2012) and Ludvigsen et al. (2015) found that clicker interventions made students more aware of their understanding (feed back). Egelandsdal and Krumsvik (2017a) confirmed this finding and also found that, compared to lectures without clickers, most students felt that the clicker interventions provided them with more information about what was important to learn in the subject (feed up), revealed misunderstandings (feed back), and showed them what they needed to study further (feed forward).

Ludvigsen et al. (2015) also found that students employed the feedback from the interventions in various ways in their coursework. Based on six interviews, they found that students used their experiences from the clicker interventions to identify difficult topics for further study, to discuss tricky concepts with one another, and to adjust the focus of their reading. One student also claimed that the clicker interventions had transformed the way she studied, leading her to use questioning as a method for self-assessing her understanding in her coursework. Egelandsdal and Krumsvik (Forthcoming) also investigated whether and how students used the feedback from clicker interventions, using student logs. They found that, of their 23 participants, about half (11) reported using the feedback from the interventions in their coursework, while the rest did not. Some of the students used the clicker questions as a reference point for their understanding of the course material, either by employing the clicker questions while studying or by adapting their focus in light of how they assessed their understanding of the different topics. Others used the questions to engage in discussions. These students emphasized that discussing the questions afterward made them more aware of the different concepts. These approaches show that some of the students used the interventions to clarify and organize new knowledge and to self-assess their understanding. This may be particularly useful for first-year students confronted with a wide variety of concepts and theories for the first time (Nicol 2009).

Obviously, teachers cannot give students individual feedback during lectures. In such contexts, through clicker questions, peer discussions, and teacher follow-up, clicker interventions provide students with several opportunities to adjust their focus when assessing their understanding of different topics. However, reaping these benefits requires students to be able to both understand and purposefully use the information they receive. Some students may become overconfident if they answer a question correctly or may be unable to use the information from the interventions purposefully. For example, in Egelandsdal and Krumsvik’s (Forthcoming) study, only half of the students reported using the feedback in their coursework, even though most students felt that the interventions raised their awareness of their understanding of the material and what they should focus on further. This illustrates that the teacher might need to guide students on how to consider and use information from the interventions in their coursework and on how the interventions align with course activities and learning intentions.

Feedback Enhancing Students’ Understanding of the Subject Matter

The second kind of feedback concerns situations that immediately increase students’ understanding of the content. Assuming that questions about key topics, reflection on these topics, discussions with peers, and listening to the perspectives of others (both students and teachers) can contribute to developing a student’s content understanding, this kind of feedback can be measured by changes in student performance before and after clicker interventions.

Previous studies have found that the use of clicker questions can increase students’ retention (Campbell and Mayer 2009) and that lectures using clicker questions improved students’ exam performance by one-third of a grade compared to lectures without clickers and lectures without questions (Mayer et al. 2009). These findings can be related to the testing effect (Roediger and Karpicke 2006), which shows that the use of questioning can, in itself, improve student retention. Shapiro and Gordon (2012) found that the use of clicker questions in a psychology class improved performance on delayed exam questions by 10% to 13% and concluded, based on their controlled experiment and survey, that the interventions invoked the “testing effect.” In another study, Shapiro and Gordon (2013) found that the use of clicker questions also promoted significantly higher performance on test questions than repetition of the same material.

With respect to the peer discussions, several studies have found that the number of students answering correctly increases when the same clicker question is re-answered after the discussion (Crouch and Mazur 2001; Mazur 1997; Rao and DiCarlo 2000; Smith et al. 2012; Smith et al. 2009; Vickrey et al. 2015). The average improvement varies between 8% and 30%. Smith et al. (2009) found that the number of students answering correctly also increases when, after the discussion, the students are asked a new (isomorphic) question requiring approximately the same level of understanding as the first question but posed as a new case. The average improvement on these isomorphic questions was 21%. In a similar study using isomorphic questions, Egelandsdal and Krumsvik (2017b) found an average improvement of 12% on the second question after the discussion, as well as a Cohen’s d effect size of 0.66, which is 65% above the average effect of interventions aimed at increasing student performance, 0.4 (Hattie 2009). These studies show not only that the students improved on the initial question they discussed but also that the knowledge they gained transferred to a new case.
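
For readers unfamiliar with the metric, Cohen’s d is the standardized difference between two mean scores. A minimal sketch using the conventional definition (the notation is generic, not reproduced from the cited studies):

```latex
d = \frac{\bar{x}_{2} - \bar{x}_{1}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_{1} - 1)\, s_{1}^{2} + (n_{2} - 1)\, s_{2}^{2}}{n_{1} + n_{2} - 2}}
```

On this scale, the comparison made above is simply 0.66 / 0.40 = 1.65, that is, 65% above the 0.4 benchmark.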

Some studies have also found that the combination of peer discussions and teacher follow-up can enhance student performance even more (Smith et al. 2011; Zingaro and Porter 2014).

Feedback to the Teacher

There is a considerable difference between teachers’ and students’ understandings of various lecture topics (Hrepic et al. 2007). This makes it hard for teachers to assess how students receive the material presented, which is a challenge because the students’ pre-understanding has a significant impact on how a lecture is experienced (Schwartz and Bransford 1998). One of the benefits of using clickers is that a teacher can quickly collect answers from all students present. Although studies have shown that clicker results can sometimes misrepresent some students’ understanding (James and Willoughby 2011; Knight et al. 2015; Wood et al. 2014), the interventions do provide the teacher with a general idea of how well the students have understood the material (Anderson et al. 2011; D’Inverno et al. 2003; Kolikant et al. 2010). This feedback can be used synchronously, to address the students’ understanding and misconceptions in the follow-up phase (Kolikant et al. 2010), and asynchronously, to adapt future lectures and the amount of time spent on various topics to the students’ needs and current levels of understanding (Anderson et al. 2011; D’Inverno et al. 2003).

The teacher must, however, be aware that there are several nuances not captured by clicker answers (James and Willoughby 2011). Since clicker questions are multiple-choice, both the questions and the answers need to be constructed by the teacher; thus, they might not accurately represent students’ own questions and ideas. It is, therefore, crucial for the teacher to follow up on the students’ answers at the end of the interventions, either by asking them to explain their reasoning or by using a digital tool (e.g., Flinga) that enables the students to write their comments, answers, and questions freely.

Negative remarks from teachers concerning clicker interventions are usually related to a loss of lecturing time (Egelandsdal and Krumsvik Forthcoming). This illustrates the major trade-off of using clicker interventions, namely, that there will be less time for lecturing. If teachers find, however, that they are “teaching more by lecturing less” (Knight and Wood 2005), this trade-off might be well worth it. As illustrated in many studies, the amount of material covered does not equal the amount of material learned (Deslauriers et al. 2011; Hake 1998; Hrepic et al. 2007; Knight and Wood 2005; Yoder and Hochevar 2005). As noted by one of the teachers in Egelandsdal and Krumsvik’s (Forthcoming) study, it is better to focus on a few important points than to provide students with a great deal of information that they do not retain. Since humans have limited short-term memory and attention spans when it comes to retaining information from lectures (Risko et al. 2012) and clicker interventions increase student attention (Blood 2012; Cain et al. 2009; Rush et al. 2010; Sun 2014), this is a valid point in itself. Studies have also found that brief activities help students remember more content (Prince 2004), that the use of clicker questions enhances student retention (Campbell and Mayer 2009; Mayer et al. 2009; Shapiro and Gordon 2012, 2013), and that students are likely to understand more of the content if it is simple, explicitly stated, and reiterated multiple times (Hrepic et al. 2007). Hence, reducing the amount of material covered, slowing down the tempo, and using questions and peer discussions might be acceptable from a “student learning” point of view.

The ways in which teachers use SRSs depend on the possibilities they identify, both with respect to the affordances of the technology and the pedagogical opportunities to facilitate purposeful activities. How these activities play out might also be affected by unintended events and consequences (Kirschner et al. 2004). Studies have found that the perceived advantages of clicker interventions increase when teachers become more experienced with using them (Draper and Brown 2004; Kolikant et al. 2010). In this respect, it is important for teachers to be aware that becoming familiar with the technology, creating appropriate questions, and learning how to adjust their teaching based on information from clicker interventions are likely to be a process of development (Boscardin and Penuel 2012).

Conclusion and Suggestions for Practice

In this entry, we have seen that clicker interventions can be used to promote formative feedback and student activity in university lectures. Clicker interventions can be used to engage students in peer discussions and gather answers from all students present, and they also serve as a catalyst for plenary discussions. A benefit of clicker interventions is that they allow the teacher to collect answers from the whole student group instantly, usually yielding a response rate close to 100%. They also work well regardless of group size. Clicker interventions can inform teachers and students about the students’ current understanding and can be used to adjust studying and teaching. As we have seen, clicker interventions can also have a positive impact on student learning, motivation, attention, and engagement.

A limitation is that both the questions and answers in clicker interventions need to be pre-constructed by the teacher, since they are multiple-choice. It is particularly important that the teacher pay attention to the purpose of the lecture when constructing the questions. A recent study found that the use of solely factual questions can improve student retention but simultaneously impede conceptual understanding because such questions can orient students too heavily toward facts (Shapiro et al. 2017). Even in a multiple-choice system, it is still possible to construct questions to which the answers are not merely “right” or “wrong,” for instance, by using questions for which the alternative answers represent different perspectives on a topic. It is also possible to construct questions that require a deeper understanding, such as case questions that students must solve by applying their understanding of the subject matter. The clicker questions can also be used in combination with modalities other than text and speech. For example, in studies by Ludvigsen et al. (2015) and Egelandsdal and Krumsvik (2017a), the lecturer used a combination of clicker questions and video cases. In these cases, the students needed to employ their understanding of the subject matter to interpret and solve the cases presented to them. Professor Rune J. Krumsvik, who has used clickers systematically in large lectures for psychology students since 2008, states that “[t]he combination of such educational technology, peer discussion, authentic video cases from the practice field and feedback as theoretical underpinning, have increased the interactivity and the student engagement, and changed the teachers’ and students’ roles.” The contrast between factual and perspective-oriented questions is sketched below.
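
As a purely hypothetical illustration of this contrast, the following sketch places a factual question with one right answer next to a perspective-oriented question whose alternatives map onto positions discussed earlier in this entry. The data structure is ours, not a real SRS question format.

```python
# A factual question: one right answer; useful for retention, but it can
# orient students too heavily toward facts (cf. Shapiro et al. 2017).
factual_question = {
    "text": "When was the printing press invented?",
    "options": ["Mid-1400s", "Mid-1500s", "Mid-1600s", "Mid-1700s"],
    "correct": "Mid-1400s",
}

# A perspective-oriented question: the alternatives represent different views
# (here, Biggs and Tang's (2011) levels of thinking about teaching) and are
# meant to seed peer and plenary discussion rather than recall.
perspective_question = {
    "text": "A large share of students fail the exam in a lecture course. "
            "What is the most plausible explanation?",
    "options": [
        "There are good students and there are poor students",           # Level 1
        "The teacher failed to get the message across",                  # Level 2
        "The learning activities were poorly adapted to the intended outcomes",  # Level 3
        "Exam results say little about the teaching",
    ],
    "correct": None,  # the alternatives are perspectives to be contrasted in follow-up, not graded facts
}
```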

To explore students’ own ideas beyond plenary discussions, it is also possible to collect answers to open-ended questions or to allow students to submit their own questions before the lecture for use in the planning of clicker interventions. Another option is to combine the use of clicker interventions with a qualitative response system, such as Flinga. Such systems allow students to write text-based answers on a shared digital wall, where they can, for example, submit explanations for their clicker answers.

Although the kind of questions used is important and different kinds of questions serve different purposes, another essential factor in clicker interventions is the level of facilitated interactivity. If response systems are used in combination with peer discussions and the teacher follows up on the student answers and uses them purposefully to make changes in her lecture, the change from a traditional lecture will be more extensive than if a teacher merely poses a few questions and then moves on with the monologue. The quality of the interventions also rests upon how well the teacher follows up on the students’ answers. For example, a teacher might simply collect the student answers without engaging the students in a discussion of different perspectives or, alternatively, she might relate the ideas to one another, compare and contrast them, and connect them to existing ideas discussed in the course. The second approach uses information from clicker interventions to create tension between the ideas of the students and the ideas of the discipline, which can allow the students to draw connections between their everyday views and the ideas of the course and to become more aware of the different perspectives on a topic.

References

  1. Anderson LS, Healy AF, Kole JA, Bourne LE (2011) Conserving time in the classroom: the clicker technique. Q J Exp Psychol 64(8):1457–1462.  https://doi.org/10.1080/17470218.2011.593264CrossRefGoogle Scholar
  2. Beatty ID, Gerace WJ (2009) Technology-enhanced formative assessment: a research-based pedagogy for teaching science with classroom response technology. J Sci Educ Technol 18(2):146–162.  https://doi.org/10.2307/23036186CrossRefGoogle Scholar
  3. Biggs J, Tang C (2011) Teaching for quality learning at university, 4th edn. McGraw-Hill/Open University Press, MaidenheadGoogle Scholar
  4. Black P, Wiliam D (1998) Inside the Black box: raising standards through classroom assessment. Phi Delta Kappan 80(2):139–144Google Scholar
  5. Black P, Wiliam D (2009) Developing the theory of formative assessment. Educ Assess Eval Account 21(1):5–31CrossRefGoogle Scholar
  6. Blood E (2012) Student response systems in the college classroom: an investigation of short-term, intermediate, and long-term recall of facts. J Technol Teach Educ 20(1):5–20Google Scholar
  7. Bloxham S, Campbell L (2010) Generating dialogue in assessment feed back: exploring the use of interactive cover sheets. Assess Eval High Educ 35(3):291–300.  https://doi.org/10.1080/02602931003650045CrossRefGoogle Scholar
  8. Boekaerts M, Corno L (2005) Self-regulation in the classroom: a perspective on assessment and intervention. Appl Psychol 54(2):199–231.  https://doi.org/10.1111/j.1464-0597.2005.00205.xCrossRefGoogle Scholar
  9. Boscardin C, Penuel W (2012) Exploring benefits of audience-response systems on learning: a review of the literature. Acad Psychiatry 36(5):401–407.  https://doi.org/10.1176/appi.ap.10080110CrossRefGoogle Scholar
  10. Boud D, Molloy E (2013) Rethinking models of feed back for learning: the challenge of design. Assess Eval High Educ 38(6):698–712.  https://doi.org/10.1080/02602938.2012.691462CrossRefGoogle Scholar
  11. Cain J, Black EP, Rohr J (2009) An audience response system strategy to improve student motivation, attention, and feed back. Am J Pharm Educ 73(2).  https://doi.org/10.5688/aj730221CrossRefGoogle Scholar
  12. Campbell J, Mayer RE (2009) Questioning as an instructional method: does it affect learning from lectures? Appl Cogn Psychol 23(6):747–759.  https://doi.org/10.1002/acp.1513CrossRefGoogle Scholar
  13. Carless D, Salter D, Yang M, Lam J (2010) Developing sustainable feed back practices. Stud High Educ 36(4):395–407.  https://doi.org/10.1080/03075071003642449CrossRefGoogle Scholar
  14. Chien Y-T, Chang Y-H, Chang C-Y (2016) Do we click in the right way? A meta-analytic review of clicker-integrated instruction. Educ Res Rev 17:1–18.  https://doi.org/10.1016/j.edurev.2015.10.003CrossRefGoogle Scholar
  15. Clark W (2006) Academic Charisma and the origins of the research university. University of Chicago Press, ChicagoGoogle Scholar
  16. Clark I (2012) Formative assessment: assessment is for self-regulated learning. Educ Psychol Rev 24(2):205–249MathSciNetCrossRefGoogle Scholar
  17. Clark RC, Mayer RE (2011) E-learning and the science of instruction, 3rd edn. Pfeiffer, San FranciscoCrossRefGoogle Scholar
  18. Cleveland LG (2002) That’s not a large class; it’s a small town: How do I manage. In: Stanley CA, Porter ME (eds) Engaging large classes: Strategies and techniques for college faculty. Bolton, MA: Anker, pp 16–27Google Scholar
  19. Cochran-Smith M, Villegas AM (2015) Framing teacher preparation research: an overview of the field, part 1. J Teach Educ 66(1):7–20.  https://doi.org/10.1177/0022487114549072CrossRefGoogle Scholar
  20. Crouch CH, Mazur E (2001) Peer instruction: ten years of experience and results. Am J Phys 69(9):970–977.  https://doi.org/10.1119/1.1374249CrossRefGoogle Scholar
  21. D’Inverno R, Davis H, White S (2003) Using a personal response system for promoting student interaction. Teach Math Appl 22(4):163–169Google Scholar
  22. Deci EL, Koestner R, Ryan RM (1999) A meta-analytic review of experiments examining the effects of extrinsic rewards on intrinsic motivation. Psychol Bull 125(6):627–668.  https://doi.org/10.1037/0033-2909.125.6.627CrossRefGoogle Scholar
  23. Denker KJ (2013) Student response systems and facilitating the large lecture basic communication course: Assessing engagement and learning, communication teacher 27(1):50–69.  https://doi.org/10.1080/17404622.2012.730622CrossRefGoogle Scholar
  24. Deslauriers L, Schelew E, Wieman C (2011) Improved learning in a large-enrollment physics class. Sci Educ Int 322(6031):862–864.  https://doi.org/10.1126/science.1201783CrossRefGoogle Scholar
  25. Devlin M, Samarawickrema G (2010) The criteria of effective teaching in a changing higher education context. High Educ Res Dev 29(2):111–124.  https://doi.org/10.1080/07294360903244398CrossRefGoogle Scholar
  26. Dewey J (1997) Experience and education. Touchstone, New YorkGoogle Scholar
  27. Đonlagić S, Kurtić A (2016) The role of higher education in a knowledge economy. In: Ateljević J, Trivić J (eds) Economic development and entrepreneurship in transition economies: issues, obstacles and perspectives. Springer International Publishing, Cham, pp 91–106Google Scholar
  28. Draper SW, Brown MI (2004) Increasing interactivity in lectures using an electronic voting system. J Comput Assist Learn 20(2):81–94CrossRefGoogle Scholar
  29. Egelandsdal K, Krumsvik RJ (2017a) Clickers and formative feed back at university lectures. Educ Inf Technol 22(1):55–74.  https://doi.org/10.1007/s10639-015-9437-xCrossRefGoogle Scholar
  30. Egelandsdal K, Krumsvik RJ (2017b) Peer discussions and response technology: short interventions, considerable gains. Nordic J Digit Lit 12(01–02):19–30CrossRefGoogle Scholar
  31. Egelandsdal K, Krumsvik RJ (Forthcoming) Clicker interventions at university lectures and the feed back gap. Forthcoming submitted to JournalGoogle Scholar
  32. Eisenstein EL (1997) The printing press as an agent of change: communications and cultural transformation in early-modern Europe. Cambridge University Press, Cambridge, UKGoogle Scholar
  33. Evans C (2013) Making sense of assessment feed back in higher education. Rev Educ Res 83(1):70–120.  https://doi.org/10.3102/0034654312474350CrossRefGoogle Scholar
  34. Fies C, Marshall J (2006) Classroom response systems: a review of the literature. J Sci Educ Technol 15(1):101–109CrossRefGoogle Scholar
  35. Friesen N (2011) The lecture as a transmedial pedagogical form: a historical analysis. Educ Res 40(3):95–102.  https://doi.org/10.3102/0013189x11404603CrossRefGoogle Scholar
  36. George ES (2006) Positioning higher education for the knowledge based economy. High Educ 52(4):589–610.  https://doi.org/10.1007/s10734-005-0955-0CrossRefGoogle Scholar
  37. Goffman E (1981) Forms of talk. University of Pennsylvania Press, Philadelphia, PAGoogle Scholar
  38. Hake RR (1998) Interactive-engagement versus traditional methods: a six-thousand-student survey of mechanics test data for introductory physics courses. Am J Phys 66(1):64–74.  https://doi.org/10.1119/1.18809CrossRefGoogle Scholar
  39. Hattie J (2009) Visible learning. A synthesis of over 800 meta-analysis relating to achievement. Routledge, LondonGoogle Scholar
  40. Hattie J, Gan M (2011) Instruction based on feedback. In: Mayer RE, Alexander PA (eds) Handbook of research on learning and instruction. Routledge, New York, pp 249–271
  41. Hattie J, Timperley H (2007) The power of feedback. Rev Educ Res 77(1):81–112
  42. Higgins R, Hartley P, Skelton A (2001) Getting the message across: the problem of communicating assessment feedback. Teach High Educ 6(2):269–274. https://doi.org/10.1080/13562510120045230
  43. Hrepic Z, Zollman DA, Rebello NS (2007) Comparing students’ and experts’ understanding of the content of a lecture. J Sci Educ Technol 16(3):213–224. https://doi.org/10.1007/s10956-007-9048-4
  44. Ilgen D, Davis C (2000) Bearing bad news: reactions to negative performance feedback. Appl Psychol 49(3):550–565. https://doi.org/10.1111/1464-0597.00031
  45. Isaacson RM, Fujita F (2006) Metacognitive knowledge monitoring and self-regulated learning: academic success and reflections on learning. J Scholarship Teach Learn 6(1):39–55
  46. James MC, Willoughby S (2011) Listening to student conversations during clicker questions: what you have not heard might surprise you! Am J Phys 79(1):123–132. https://doi.org/10.1119/1.3488097
  47. Jonsson A (2013) Facilitating productive use of feedback in higher education. Act Learn High Educ 14(1):63–76. https://doi.org/10.1177/1469787412467125
  48. Kay RH, LeSage A (2009) Examining the benefits and challenges of using audience response systems: a review of the literature. Comput Educ 53(3):819–827. https://doi.org/10.1016/j.compedu.2009.05.001
  49. Keough SM (2012) Clickers in the classroom: a review and a replication. J Manag Educ 36(6):822–847. https://doi.org/10.1177/1052562912454808
  50. Kirschner PA, Martens RL, Strijbos JW (2004) CSCL in higher education? In: Strijbos J-W, Kirschner PA, Martens RL (eds) What we know about CSCL: and implementing it in higher education. Springer, Dordrecht, pp 3–30
  51. Kluger AN, DeNisi A (1996) The effects of feedback interventions on performance: a historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychol Bull 119(2):254–284. https://doi.org/10.1037/0033-2909.119.2.254
  52. Kluger AN, Van Dijk D (2010) Feedback, the various tasks of the doctor, and the feedforward alternative. Med Educ 44(12):1166–1174. https://doi.org/10.1111/j.1365-2923.2010.03849.x
  53. Knight JK, Wood WB (2005) Teaching more by lecturing less. Cell Biol Educ 4(4):298–310. https://doi.org/10.1187/05-06-0082
  54. Knight JK, Wise SB, Rentsch J, Furtak EM (2015) Cues matter: learning assistants influence introductory biology student interactions during clicker-question discussions. CBE Life Sci Educ 14(4):ar41. https://doi.org/10.1187/cbe.15-04-0093
  55. Kolikant YB-D, Drane D, Calkins S (2010) “Clickers” as catalysts for transformation of teachers. Coll Teach 58(4):127–135
  56. Kruger J, Dunning D (1999) Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated self-assessments. J Pers Soc Psychol 77(6):1121–1134. https://doi.org/10.1037/0022-3514.77.6.1121
  57. Krumsvik RJ (2012) Feedback clickers in plenary lectures: a new tool for formative assessment? In: Rowan L, Bigum C (eds) Transformative approaches to new technologies and student diversity in futures oriented classrooms: future proofing education. Springer, Dordrecht, pp 191–216
  58. Krumsvik RJ, Ludvigsen K (2012) Formative E-assessment in plenary lectures. Nordic J Digit Lit 7(01):36–54
  59. Kvernbekk T (2011) Til forelesningens forsvar [In defense of the lecture]. In: Kvernbekk T (ed) Humaniorastudier i pedagogikk: pedagogisk filosofi og historie [Humanities studies in education: educational philosophy and history]. Abstrakt forlag AS, Oslo, pp 203–226
  60. Lantz ME (2010) The use of ‘clickers’ in the classroom: teaching innovation or merely an amusing novelty? Comput Hum Behav 26(4):556–561. https://doi.org/10.1016/j.chb.2010.02.014
  61. Ludvigsen K, Egelandsdal K (2016) Formativ E-vurdering i høyere utdanning [Formative e-assessment in higher education]. In: Krumsvik RJ (ed) Digital læring i skole og lærerutdanning [Digital learning in school and teacher education]. Universitetsforlaget AS, Bergen, pp 256–273
  62. Ludvigsen K, Krumsvik RJ, Furnes B (2015) Creating formative feedback spaces in large lectures. Comput Educ 88:48–63. https://doi.org/10.1016/j.compedu.2015.04.002
  63. Mayer RE, Stull A, DeLeeuw K, Almeroth K, Bimber B, Chun D, ⋯, Zhang H (2009) Clickers in college classrooms: fostering learning with questioning methods in large lecture classes. Contemp Educ Psychol 34(1):51–57. https://doi.org/10.1016/j.cedpsych.2008.04.002
  64. Mazur E (1997) Peer instruction: a user’s manual. Prentice Hall, Upper Saddle River
  65. Mazur E (2009) Farewell, lecture? Science 323(5910):50–51. https://doi.org/10.1126/science.1168927
  66. Nelson MM, Schunn CD (2009) The nature of feedback: how different types of peer feedback affect writing performance. Instr Sci 37(4):375–401
  67. Nicol D (2009) Assessment for learner self-regulation: enhancing achievement in the first year using learning technologies. Assess Eval High Educ 34(3):335–352. https://doi.org/10.1080/02602930802255139
  68. Nicol D, Macfarlane-Dick D (2006) Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Stud High Educ 31(2):199–218
  69. Nielsen KL, Hansen G, Stav JB (2016) How the initial thinking period affects student argumentation during peer instruction: students’ experiences versus observations. Stud High Educ 41(1):124–138. https://doi.org/10.1080/03075079.2014.915300
  70. Pajares F (2005) Self-efficacy during childhood and adolescence – implications for teachers and parents. In: Pajares F, Urdan T (eds) Self-efficacy beliefs of adolescents. Information Age Publishing, Greenwich, pp 339–367
  71. Price M, Handley K, Millar J, O’Donovan B (2010) Feedback: all that effort, but what is the effect? Assess Eval High Educ 35(3):277–289. https://doi.org/10.1080/02602930903541007
  72. Prince M (2004) Does active learning work? A review of the research. J Eng Educ 93(3):223–231
  73. Rao SP, DiCarlo SE (2000) Peer instruction improves performance on quizzes. Adv Physiol Educ 24(1):51–55
  74. Risko EF, Anderson N, Sarwal A, Engelhardt M, Kingstone A (2012) Everyday attention: variation in mind wandering and memory in a lecture. Appl Cogn Psychol 26(2):234–242. https://doi.org/10.1002/acp.1814
  75. Roediger HL, Karpicke JD (2006) The power of testing memory: basic research and implications for educational practice. Perspect Psychol Sci 1(3):181–210. https://doi.org/10.1111/j.1745-6916.2006.00012.x
  76. Rush BR, Hafen M, Biller DS, Davis EG, Klimek JA, Kukanich B, ⋯, White BJ (2010) The effect of differing audience response system question types on student attention in the veterinary medical classroom. J Vet Med Educ 37(2):145–153. https://doi.org/10.3138/jvme.37.2.145
  77. Sadler DR (1989) Formative assessment and the design of instructional systems. Instr Sci 18(2):119–144. https://doi.org/10.2307/23369143
  78. Sadler DR (2010) Beyond feedback: developing student capability in complex appraisal. Assess Eval High Educ 35(5):535–550. https://doi.org/10.1080/02602930903541015
  79. Schwartz DL, Bransford JD (1998) A time for telling. Cogn Instr 16(4):475–522. https://doi.org/10.1207/s1532690xci1604_4
  80. Shapiro AM, Gordon LT (2012) A controlled study of clicker-assisted memory enhancement in college classrooms. Appl Cogn Psychol 26(4):635–643. https://doi.org/10.1002/acp.2843
  81. Shapiro AM, Gordon LT (2013) Classroom clickers offer more than repetition: converging evidence for the testing effect and confirmatory feedback in clicker-assisted learning. J Teach Learn Technol 2(1):15–30
  82. Shapiro AM, Sims-Knight J, O’Rielly GV, Capaldo P, Pedlow T, Gordon L, Monteiro K (2017) Clickers can promote fact retention but impede conceptual understanding. Comput Educ 111:44–59. https://doi.org/10.1016/j.compedu.2017.03.017
  83. Shute VJ (2008) Focus on formative feedback. Rev Educ Res 78(1):153–189
  84. Skagen K (2000) Forelesningens muligheter. Tema: forelesning [The possibilities of the lecture. Theme: the lecture]. Uniped 22
  85. Smith MK, Wood WB, Adams WK, Wieman C, Knight JK, Guild N, Su TT (2009) Why peer discussion improves student performance on in-class concept questions. Science 323(5910):122–124. https://doi.org/10.1126/science.1165919
  86. Smith MK, Wood WB, Krauter K, Knight JK (2011) Combining peer discussion with instructor explanation increases student learning from in-class concept questions. CBE Life Sci Educ 10(1):55–63. https://doi.org/10.1187/cbe.10-08-0101
  87. Smith EL, Rice KL, Woolforde L, Lopez-Zang D (2012) Transforming engagement in learning through innovative technologies: using an audience response system in nursing orientation. J Contin Educ Nurs 43(3):102–103. https://doi.org/10.3928/00220124-20120223-47
  88. Sun JC-Y (2014) Influence of polling technologies on student engagement: an analysis of student motivation, academic performance, and brainwave data. Comput Educ 72:80–89. https://doi.org/10.1016/j.compedu.2013.10.010
  89. Vickrey T, Rosploch K, Rahmanian R, Pilarz M, Stains M (2015) Research-based implementation of peer instruction: a literature review. CBE Life Sci Educ 14(1). https://doi.org/10.1187/cbe.14-11-0198
  90. Wieman C (2007) Why not try a scientific approach to science education? Change Mag High Learn 39(5):9–15. https://doi.org/10.3200/CHNG.39.5.9-15
  91. Wood AK, Galloway RK, Hardy J, Sinclair CM (2014) Analyzing learning during peer instruction dialogues: a resource activation framework. Phys Rev Spec Top Phys Educ Res 10(2):020107
  92. Yoder JD, Hochevar CM (2005) Encouraging active learning can improve students’ performance on examinations. Teach Psychol 32(2):91–95. https://doi.org/10.1207/s15328023top3202_2
  93. Yorke M (2003) Formative assessment in higher education: moves towards theory and the enhancement of pedagogic practice. High Educ 45(4):477–501. https://doi.org/10.1023/a:1023967026413
  94. Zingaro D, Porter L (2014) Peer instruction in computing: the value of instructor intervention. Comput Educ 71:87–96. https://doi.org/10.1016/j.compedu.2013.09.015

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. University of Bergen, Bergen, Norway

Section editors and affiliations

  • Bill Davey
  1. Business Information Technology, Melbourne, Australia