Appearances can be deceiving: instructor fluency increases perceptions of learning without increasing actual learning
The present study explored the effects of lecture fluency on students’ metacognitive awareness and regulation. Participants watched one of two short videos of an instructor explaining a scientific concept. In the fluent video, the instructor stood upright, maintained eye contact, and spoke fluidly without notes. In the disfluent video, the instructor slumped, looked away, and spoke haltingly with notes. After watching the video, participants in Experiment 1 were asked to predict how much of the content they would later be able to recall, and participants in Experiment 2 were given a text-based script of the video to study. Perceived learning was significantly higher for the fluent instructor than for the disfluent instructor (Experiment 1), although study time was not significantly affected by lecture fluency (Experiment 2). In both experiments, the fluent instructor was rated significantly higher than the disfluent instructor on traditional instructor evaluation questions, such as preparedness and effectiveness. However, in both experiments, lecture fluency did not significantly affect the amount of information learned. Thus, students’ perceptions of their own learning and an instructor’s effectiveness appear to be based on lecture fluency and not on actual learning.
In order to learn effectively, individuals must be able to accurately assess their own knowledge. Being able to recognize what one knows—and does not know—is an essential step when deciding what information needs to be learned (e.g., Bjork, 1994; Dunlosky & Lipko, 2009; Dunlosky & Metcalfe, 2009; Kornell & Metcalfe, 2006; Metcalfe, 2009; Nelson, Dunlosky, Graf, & Narens, 1994; Pyc & Dunlosky, 2010). An inaccurate assessment of one’s own knowledge can lead to bad decisions, such as choosing to stop studying before information has been fully learned.
Students’ assessments of their own knowledge can be influenced by the ease or fluency with which information is acquired (e.g., Benjamin, Bjork, & Schwartz, 1998; Diemand-Yauman, Oppenheimer, & Vaughan, 2011; Kornell, Rhodes, Castel, & Tauber, 2011; Oppenheimer, 2008; Rawson & Dunlosky, 2002; Schwartz, 1994). Fluency sometimes leads to accurate assessments of learning. For example, concrete words are considered easier to process and are, in fact, easier to remember than abstract words (e.g., Begg, Duft, Lalonde, Melnick, & Sanvito, 1989), and coherent text is considered easier to process and is typically remembered better than incoherent text (e.g., Rawson & Dunlosky, 2002). Fluency can also mislead assessments of learning, however. Students’ predictions of their own learning—but not actual learning—are sometimes higher when verbal information is presented in easier-to-read font (e.g., Alter, Oppenheimer, Epley, & Eyre, 2007, Experiment 4) or a larger font size (e.g., Rhodes & Castel, 2008) or is accompanied by images (e.g., Serra & Dunlosky, 2010; see also Carpenter & Olson, 2012).
An important context in which fluency has not been fully explored is learning from lectures. Instructors vary in their degree of preparation and knowledge of a topic, so students are likely to encounter some lectures that are more smoothly delivered than others. Students may form judgments of how easily they will remember a lecture on the basis of the apparent ease (or lack thereof) with which an instructor explains information. Students’ perceptions of an instructor (as measured through traditional instructor evaluation questions) can be positively influenced by the instructor’s degree of expressiveness (e.g., free use of gestures or humor; Ware & Williams, 1975). These perceptions are positively associated with later test scores in some cases (e.g., Coats & Smidchens, 1966; Williams & Ware, 1977), but not in others (see Williams & Ware, 1976). Thus, the delivery of a lecture may influence students’ perceptions in ways that are not correlated with actual learning.
The present study examined the effect of lecture fluency on students’ perceived and actual learning. Participants viewed one of two videos depicting an instructor explaining a scientific concept. The same speaker delivered the same script in both videos. The only difference was in how the information was delivered. In the fluent speaker condition, the speaker stood upright, maintained eye contact, displayed relevant gestures, and did not use notes. In the disfluent speaker condition, she hunched over a podium, read from notes, spoke haltingly, and failed to maintain eye contact.
Immediately after watching one of these videos, participants in Experiment 1 made a judgment of learning (JOL) estimating how much of the information from the video they would be able to recall after about 10 min. Participants in Experiment 2 were given an opportunity to study a text-based script of the video for as long as they wished. Participants in both experiments then answered several instructor evaluation questions (e.g., organization, preparedness, etc.), in addition to questions requiring self-assessment (e.g., how effectively they felt they had learned the material). After about 10 min, participants in both experiments recalled as much of the information from the video as they could. These experiments thus examined the role of lecture fluency in metacognitive awareness (Experiment 1) and regulation (Experiment 2), allowing an investigation of how well students think they learn from a lecture and whether this lines up with how well they actually learn.
Forty-two undergraduates participated in partial fulfillment of requirements for introductory psychology courses at Iowa State University. Twenty-one participants were randomly assigned to view each video.
The only difference between the two videos was in how the lecture was delivered. In the fluent speaker condition, the speaker stood upright before a desk, maintained eye contact with the camera, and spoke fluently. All information was delivered without notes so that her hands could display relevant gestures. In the disfluent speaker condition, the speaker hunched over a podium behind the desk and read the information from notes. She did not maintain eye contact but switched her gaze back and forth repeatedly between the camera and notes. She read haltingly and flipped through her notes several times.
Design and procedure
Participants were seated at computers in individual testing rooms. They were informed that they would be viewing a videotaped lecture about a scientific concept. They were asked to pay careful attention and were told that their memory would be tested later.
Immediately after the video, participants were asked the following question: “In about 10 minutes from now, how much of the information from the video do you think you will be able to recall?” Below this question was a scale containing the numbers 0 % (none of it), 20 %, 40 %, 60 %, 80 %, and 100 % (all of it). Participants entered a number between 0 and 100.
Participants were then asked the following questions one at a time: (1) “How organized was the speaker in the video?” (2) “How prepared was the speaker in the video?” (3) “How knowledgeable was the speaker in the video?” and (4) “Please rate the overall effectiveness of the speaker in the video.” A 5-point Likert scale appeared below each question, with 1 representing not at all organized/prepared/knowledgeable/effective and 5 representing very organized/prepared/knowledgeable/effective.
Participants then answered three additional questions requiring self-assessment: (1) “How well do you feel that you have learned the information that was presented in the video?” (2) “Please rate your overall level of interest in the information that was presented in the video,” and (3) “Please rate your overall level of motivation to learn the information that was presented in the video.” Participants again entered a number between 1 and 5, with 1 representing not at all learned/interested/motivated and 5 representing very well learned/interested/motivated.
Participants then completed an unrelated distractor task that involved answering approximately 30 trivia questions. Following this task, which lasted approximately 10 min, they were given a memory test on the information from the video. Participants were given the following instructions: “In the space below, please type a detailed explanation for why calico cats are almost always female. Try to include as much detail as you can remember. You have 5 minutes!”
After 5 min, participants were informed that time was up. They were then asked whether they had any detailed prior knowledge of the information presented in the video before participating in the experiment. Data from 1 participant who reported having such knowledge were replaced.
Results and discussion
The video content was organized into 10 idea units (see the Appendix). Two independent raters blindly evaluated all responses to determine how many idea units were present. Interrater agreement according to Cronbach’s alpha was .99 for both conditions. Performance was calculated by averaging the two raters’ scores.
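The interrater agreement statistic reported above can be computed by treating each rater as an “item” in the standard Cronbach’s alpha formula. The sketch below is illustrative only: the function name and the rater scores are invented for demonstration, not taken from the study’s data.

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha, treating each rater as an 'item'.

    ratings: 2-D array-like, shape (n_responses, n_raters).
    """
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                          # number of raters
    item_vars = ratings.var(axis=0, ddof=1)       # sample variance of each rater's scores
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of the summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical idea-unit counts (out of 10) from two raters
scores = [[4, 4], [7, 6], [2, 2], [9, 9], [5, 6], [3, 3]]
print(round(cronbach_alpha(scores), 2))  # → 0.98 for these illustrative data
```

High agreement (alpha near 1) indicates that the two raters assigned nearly the same idea-unit counts, which is why averaging their scores is a reasonable performance measure.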
Experiment 1 revealed that lecture fluency can bias metacognitive judgments such that a fluent lecture is perceived as better-learned, but is not actually better remembered, than the same lecture delivered in a less fluent manner. Using the same materials and basic design, Experiment 2 explored the potential effects of lecture fluency on subsequent study decisions.
Overconfidence arising from a fluent lecture could have the undesirable consequence of leading students to study too little. Experiment 2 explored the effects of lecture fluency on how long students choose to study. After watching one of the two videos from Experiment 1, students were given as much time as they wished to restudy the video content via a text-based script before completing the same evaluation questions and memory test from Experiment 1.
Seventy undergraduates were recruited from the same participant pool as in Experiment 1. Thirty-five were randomly assigned to view the fluent speaker video, and 35 to view the disfluent speaker video.
Materials, design, and procedure
Participants were informed that they were about to watch a video of an instructor explaining a scientific concept and that, afterward, they would be given a chance to review the information from the video. They were encouraged to learn the information as best they could to prepare for a memory test that would be given approximately 10 min later. Immediately after viewing one of the two videos, participants pressed a button to study the video script for as long as they wished and then pressed another button to advance to the next screen, which contained the same evaluation questions from Experiment 1.
Participants then completed a distractor task that involved answering unrelated trivia questions for approximately 10 min, followed by the same memory test from Experiment 1. Participants were then asked whether they had any detailed prior knowledge of the information presented in the video before participating in the experiment. Data from 4 participants who reported having such knowledge were replaced.
Results and discussion
Two independent raters blindly evaluated all responses to determine the number of idea units present. Interrater agreement according to Cronbach’s alpha was .93 and .92 for the fluent and disfluent conditions, respectively. Performance was again calculated by averaging the two raters’ scores.
As in Experiment 1, test performance did not differ significantly between participants who viewed the fluent speaker (M = .44, SD = .19) versus the disfluent speaker (M = .40, SD = .19), t(68) = 0.81, p = .42. Participants spent a comparable amount of time reading the script after viewing the fluent speaker (M = 1.39 min, SD = 1.20 min) versus the disfluent speaker (M = 1.43 min, SD = 1.12 min), t(68) = 0.16, p = .88. A positive correlation emerged between reading time and later memory accuracy for participants who viewed the disfluent speaker (r = .50, p = .002), but not for those who viewed the fluent speaker (r = −.09, p = .62). The difference between these correlations was significant, Fisher’s r to z transformation = 2.56, p = .01.
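The comparison between the two reading time–memory correlations can be reproduced with Fisher’s r-to-z transformation. A minimal sketch (the function name is our own; the inputs are the values reported above, with n = 35 per condition):

```python
import math

def fisher_z_test(r1, n1, r2, n2):
    """z statistic for comparing two independent Pearson correlations."""
    z1, z2 = math.atanh(r1), math.atanh(r2)        # Fisher r-to-z transform of each correlation
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))    # standard error of the difference
    return (z1 - z2) / se

# Disfluent condition: r = .50; fluent condition: r = -.09; n = 35 each
z = fisher_z_test(0.50, 35, -0.09, 35)
print(round(z, 2))  # → 2.56, matching the reported value
```

A z of 2.56 corresponds to a two-tailed p of about .01, consistent with the significance level reported above.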
The fluent speaker was again rated as significantly more organized, knowledgeable, prepared, and effective than the disfluent speaker, ts > 8.48, ps < .001, ds > 2.04 (see Table 1, bottom section). Participants who viewed the fluent speaker versus the disfluent speaker also indicated that they learned the information better (M = 3.89, SD = 1.02, and M = 2.74, SD = 1.11, respectively), t(67) = 4.48, p < .001, d = 1.08, and were more motivated to learn the information (M = 3.09, SD = 1.10, and M = 2.43, SD = 1.09, respectively), t(68) = 2.51, p = .014, d = 0.60. Ratings of interest were higher for participants who viewed the fluent versus disfluent speaker (M = 2.94, SD = 1.16, and M = 2.49, SD = 1.20, respectively), but this difference was not significant, t = 1.62.
In two experiments, students viewed a fluent (i.e., prepared and well-organized) lecture or a disfluent (i.e., unprepared and disorganized) version of the same lecture. In both experiments, responses to instructor evaluations indicated that students felt that they had learned more from the fluent lecture than from the disfluent lecture. Actual memory performance, however, did not differ as a function of lecture fluency.
These findings are consistent with a number of studies showing that what appears to be easy to encode is not always easy to remember (e.g., Carpenter & Olson, 2012; Kornell et al., 2011; Rhodes & Castel, 2008; Schwartz, 1994; Serra & Dunlosky, 2010) and that individuals tend to overestimate what they know (e.g., Castel, McCabe, & Roediger, 2007; Dunlosky & Nelson, 1994; Finn & Metcalfe, 2007; Koriat & Bjork, 2005; Koriat, Sheffer, & Ma’ayan, 2002; Kornell & Bjork, 2009). The present findings extend beyond past research—which typically has included fluency manipulations based on simple perceptual features (e.g., Alter et al., 2007; Schwartz, 1994) or memory manipulations (e.g., ease of retrieval; Benjamin et al., 1998)—by investigating fluency in a complex and dynamic lecture context. It is not clear precisely which aspects of the lecturer’s behavior influenced participants’ judgments, and the experience of fluency may be subjective. What is clear, however, is that a more fluent instructor may increase perceptions of learning without increasing actual learning.
In Experiment 2, students who viewed the fluent or disfluent lecture subsequently studied the material for a comparable amount of time. This finding aligns with other research showing a dissociation between metacognitive judgments and study decisions (e.g., Kornell & Son, 2009; Moulin, Perfect, & Jones, 2000). Although study time could be driven to some degree by students’ perceptions of how well they know the material, it could also be driven by the potentially stronger effects of habitual reading processes (Ariel, Al-Harthy, Was, & Dunlosky, 2011). Students in both conditions may have simply read the passage from start to finish and advanced to the next screen as soon as they were done. The well-practiced, habitual act of reading may not be particularly sensitive to differences in perceived level of knowledge, especially under relatively low-stakes learning conditions. Furthermore, decisions about whether to study and how long to persist are separable (Metcalfe & Kornell, 2005). Lecture fluency might have a larger influence on decisions about whether or not to study at all. The relationship between study time and test performance suggests that study time matters more after viewing a disfluent lecture than after viewing a fluent lecture, raising the possibility that lecture fluency—even if it does not influence study time per se—could influence how that time is spent.
The instructor evaluation data are in line with research showing that students’ evaluations can be sensitive to an instructor’s behavioral cues that may not relate to lecture content. In research on the “Dr. Fox effect” (e.g., Naftulin, Ware, & Donnelly, 1973), students’ evaluations of an instructor were sensitive to the amount of information contained in a lecture (with higher evaluations assigned to lectures that contain greater coverage of the topic) when the lecturer displayed low expressiveness. When the lecturer covered the same topic with greater enthusiasm, friendliness, humor, and so on, students’ evaluations of instructors were high and did not vary as a function of content (e.g., Ware & Williams, 1975; Williams & Ware, 1976, 1977). An instructor’s level of expressiveness may therefore mask the effects of important factors, such as lecture content, that could directly affect learning.
The present results suggest that students should be cautious about assessing their own knowledge on the basis of the ease with which an instructor explains information. An instructor’s level of fluency likely reflects years of practice, during which challenges were overcome. Experts, whether they are skiers or teachers, can sometimes do something difficult and “make it look easy.” Even if a chemistry teacher struggled when he first encountered inorganic chemistry or a skier fell 50 times on her first day skiing, these struggles are invisible to a student who is learning the information for the first time. Unaware of these difficulties, students may be inclined to appraise their own learning on the basis of the most salient cue available—the instructor’s level of fluency. The present results demonstrate that this cue can be misleading.
Learning from someone else—whether it is a teacher, a peer, a tutor, or a parent—may create a kind of “social metacognition,” in which judgments are made on the basis of the fluency with which someone else seems to be processing information. The question students should ask themselves is not whether it seemed clear when someone else explained it. The question is, “can I explain it clearly?”
Our results suggest that instructor evaluations and JOLs are at least partially based on instructor fluency, although these evaluations may be affected by additional factors. For example, interactive classroom activities may increase perceived instructor effectiveness and positively affect learning. Although fluency did not significantly affect test performance in the present study, it is possible that fluent presentations usually accompany high-quality content. Furthermore, disfluent presentations might indirectly impair learning by encouraging mind wandering, reduced class attendance, and a decrease in the perceived importance of the topic (e.g., Coats & Smidchens, 1966). Effects like these were not captured in our study because the lecture was mandatory and brief. Evaluating a 65-s video seems quite different from evaluating an entire semester of classes. However, instructor ratings made after viewing a silent video that was approximately 6 s long accurately predicted ratings that students gave an instructor at the end of the semester (Ambady & Rosenthal, 1993).
In summary, the effects of fluency on learning are not as straightforward and intuitive as one might predict. Given the pervasiveness of fluency in academic settings and its potential to mislead students’ judgments of their own learning and an instructor’s effectiveness, these results suggest that one should be cautious in interpreting evaluative measures that could be biased by fluency. Whenever possible, such evaluations should be corroborated with objective measures of student learning.
This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. 202-18-94-00.