Number of published studies
Between 2002 and August 2015, seven OER efficacy studies, seven OER perceptions studies, and two studies measuring both OER efficacy and perceptions were published (sixteen total studies). Between September 2015 and December 31, 2018, an additional nine OER efficacy studies, thirteen OER perceptions studies, and seven combined OER efficacy and perceptions studies were published (twenty-nine total studies). This illustrates a rapid rise in research related to OER efficacy and perceptions, with more studies published in the past three years than in the previous thirteen. This rise is summarized in Figs. 1 and 2.
Collective findings from efficacy research between 2015 and 2018
Sixteen efficacy studies that met the aforementioned criteria were published between September 2015 and December 2018, containing a total of 114,419 students. The number of participants is, in some respects, deceptively large, as some of the studies [e.g., Wiley et al. (2016) and Hilton et al. (2016)] contained large overall populations but only a small portion of students who used OER. In total, 27,710 students across these studies used OER and 86,709 used CT. The following paragraphs provide a brief overview of each of these OER efficacy studies, organized by how each study controlled for teacher and student differences.
No controls for teacher or student differences
Four studies did not make any attempt to control for teacher or student differences. Essentially two different methodologies were employed, each in two articles. Utilizing a methodology that compares success metrics of students based on whether they used a CT or OER, Wiley et al. (2016) analyzed the rate at which students at Tidewater Community College dropped courses during the add/drop period at the start of a semester. They found that students were 0.8% less likely to drop courses when utilizing OER. Although the difference was small, it was statistically significant.
Hilton et al. (2016) followed Wiley et al. (2016) by reviewing two additional semesters of OER adoption at Tidewater Community College. Their data set included the data from Wiley et al. (2016), for a total of 45,237 students, 2014 of whom used OER. They compared drop, withdrawal, and passing rates based on whether students used OER or CT. They found that when combining drop, withdrawal, and passing rates, students who used OER were about 6% more likely to complete the class with credit than their peers who did not use OER (6.6% in the face-to-face courses and 5.6% in online courses).
A separate methodology examined faculty reports of OER implementation. Croteau (2017) examined twenty-four datasets involving 3847 college students in Georgia who used OER. These datasets came from faculty members reporting on the results they obtained from their OER adoptions. Unfortunately, the data were inconsistent: some faculty provided completion rates, while others reported grade distributions or other metrics. In total, instructors provided pre/post efficacy measures for 27 courses. Across the faculty reports, there were “twenty-four data sets for DFW [drop, failure, withdrawal] rates, eight data sets for completion rate, fourteen data sets for grade distribution, three data sets reported for final exam grades, three data sets reported for course specific assessment and one data set reported for final grades” (p. 97). While results varied across sections (e.g., with respect to DFW data, 11 sections favored CT, 12 favored OER, and one was unchanged), across each of these metrics there were no overall statistically significant differences when comparing pre- and post-OER results.
Similar to Croteau (2017), Ozdemir and Hendricks (2017) examined the reports of multiple faculty who had adopted OER. In total, 28 faculty provided some type of evaluation regarding the impact of adopting an open textbook on student learning outcomes; however, their metrics varied widely, and Ozdemir and Hendricks did not report the total number of students involved (clearly it was more than 50; however, because it was not specified, I have put “not provided” in Table 1). Twenty instructors reported that learning outcomes had improved because of using open textbooks, and eight said that there was no difference. Of the twenty who said that learning outcomes had improved, nine provided data such as improved scores on exams or assignments, or improved course grades overall. Eight provided no data or explanation to support their claims that student learning had improved, and three provided only anecdotal evidence. Fourteen instructors described student retention in their reports; eight said student retention improved, with six stating that it remained the same. Like Croteau (2017), this study provides a valuable synthesis of instructor self-reports on the outcomes of using OER in their classes; however, as noted by the authors, there was little rigor or control in the instructor reporting process, limiting the value of the overall study.
Studies that accounted for teacher, but not student variables
Five studies accounted for teacher, but not student, variables. The design of each study was similar in that the same faculty member taught the identical course (to control for teacher variables), in some instances with OER and in others with a CT. Researchers used student efficacy outcomes as the dependent variables in these studies. Chiorescu (2017) examined the results of 606 students taking college algebra at a college in Georgia across four semesters. In spring 2014, fall 2014, and fall 2015, Chiorescu used a math CT coupled with MyMathLab (an online math software supplement). In spring 2015 she used an OER textbook coupled with WebAssign, a different online math supplement. Chiorescu found statistically significant differences in the percentages of students earning a C or better in the course between spring 2014 and spring 2015, favoring the use of OER. Similar results were noted between spring 2015 (OER) and fall 2015 (non-OER); however, there were no significant differences between spring 2015 (OER) and fall 2014 (non-OER). She also found that students were statistically more likely to receive an A when using OER and that students were approximately half as likely to withdraw from the class when using OER (also a statistically significant finding). In this study, unlike many others, the instructor went back to a CT after using OER, because she found the online math component aligned to the OER to be inferior to the one used in connection with the CT. Both the grade and withdrawal-rate improvements seen during the semester in which OER were used regressed to their previous levels when OER were no longer utilized in the course.
Using a similar design, Hendricks et al. (2017) examined the academic performance of students in an introductory physics course at the University of British Columbia. They compared the results of students between fall 2012 and spring 2015 (who used a CT) with students between fall 2015 and spring 2016 (who used OER). Concurrent with the change in textbooks were significant pedagogical changes, although the teachers stayed the same. There were 811 students in the OER semesters, with an unspecified number (estimated at 2400) in the CT semesters. The researchers found no significant differences when comparing grade distributions; however, they found a small but significant improvement in final exam scores when comparing fall 2015 (OER) with fall 2014 and fall 2013. They also compared student scores on the Colorado Learning Attitudes about Science Survey (CLASS) for Physics, a common diagnostic measurement in physics education. A one-way ANOVA indicated that there were no significant differences when all categories were combined; however, there was a small negative shift in the problem-solving category during the year in which OER were utilized.
Choi and Carpenter (2017) examined the academic results of students taking a class on Human Factors and Ergonomics with the same teacher across five semesters. In two semesters students used a CT (n = 114); in the other three they used OER (n = 175). The researchers measured differences in student learning based on midterm and final exam scores, as well as overall course grade. Midterm exam grades fluctuated widely, with significant differences both before and after the introduction of OER and a tendency towards lower scores post-OER. In one of the three OER semesters, final exam scores were lower than they had been in the CT semesters; there were no significant differences in the other two semesters. In terms of overall course grades, there were no significant differences.
Lawrence and Lester (2018) used a similar design in an introductory American Government course, comparing two teachers who used a CT in fall 2014 with the same teachers using OER in spring 2015. Although the researchers do not specify the number of students in their study, based on the survey results they report, at least 162 students took the class in fall 2014 and 117 in spring 2015. There were no statistically significant differences in average course GPA or DFW rates. Students who used OER did perform better for one of the two teachers studied; however, the authors attribute this change to policy changes regarding online classes. They conclude that their “findings do not support the notion that OERs represent a dramatic improvement over commercial texts, nor do they indicate that students perform substantially worse when using open content texts either” (p. 563).
Ross et al. (2018) studied the use of the OpenStax Sociology textbook in an introductory Sociology course at the University of Saskatchewan. One instructor taught a sociology course with a CT in the 2015–2016 school year (n = 330), and then used an OpenStax textbook in the fall of 2016 (n = 404). The researchers found no significant differences in course grades between the two groups. However, students using CT had a completion rate of 80.3%, whereas students using OER completed at a rate of 85.3%, a statistically significant difference.
Studies that accounted for student, but not teacher variables
Three studies accounted for student, but not teacher, differences. In each case, the study performed statistical analyses that controlled for student variables such as income, GPA, mother’s education, and/or ACT scores. Multiple teachers were involved in each study, and there were no attempts to control for teacher variables. Westermann Juárez and Muggli (2017) examined the results of first-year students enrolled in a mathematics class at an institution of higher education in Chile, notable in part for being the only study outside the United States and Canada to meet the criteria for inclusion in the present study. Students were in three different groups: one used a CT (n = 30), one used Khan Academy videos (OER) (n = 35), and a third used an open textbook (OER) (n = 31). The researchers used propensity score matching to control for student age, family income, and the number of years of education of the mother. When comparing the results of students who used a CT with those who used Khan Academy, the researchers found that students who used a CT had higher class attendance but scored lower on the final exam. In contrast, those who used the open textbook scored lower on the final exam than students using a CT; there were no significant differences in class attendance. While there were differences between the instantiations of OER and CT in terms of attendance and final exam score, there were no differences between CT and OER in terms of overall course score.
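For readers unfamiliar with propensity score matching, a minimal sketch of the general idea follows: each OER (treated) student is paired with the CT (control) student whose estimated probability of being in the OER group is closest, so that outcomes are compared between otherwise similar students. The student identifiers and propensity scores below are invented for illustration and are not from the Chilean study.

```python
# Minimal sketch of greedy 1:1 nearest-neighbor propensity score matching.
# All names and values are hypothetical; real analyses typically estimate
# scores with logistic regression and use calipers or matching without
# replacement.

def match_nearest(treated, control):
    """Pair each treated unit with the control unit whose propensity
    score is closest (greedy matching, with replacement)."""
    pairs = []
    for t_id, t_score in treated.items():
        c_id = min(control, key=lambda c: abs(control[c] - t_score))
        pairs.append((t_id, c_id))
    return pairs

# Hypothetical propensity scores: the predicted probability of being in
# the OER group, as might come from age, income, and mother's education.
oer_students = {"s1": 0.62, "s2": 0.35, "s3": 0.80}
ct_students = {"c1": 0.60, "c2": 0.33, "c3": 0.78, "c4": 0.10}

print(match_nearest(oer_students, ct_students))
# → [('s1', 'c1'), ('s2', 'c2'), ('s3', 'c3')]
```

Once matched pairs are formed, outcome measures (e.g., final exam scores) are compared within pairs, which is what allows the causal language used in these studies to be better defended than a raw group comparison.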
Grewe and Davis (2017) studied 146 students who attended Northern Virginia Community College. These students were enrolled in an online introductory history class in fall 2013 (two sections used OER, two did not) or spring 2014 (three sections used OER, three did not). The authors gave the total number of students but did not specify the number of students per section; for analysis purposes in Table 1, I have assumed equal numbers of students in each section. While the online courses were all created from the same master template, different teachers administered the courses and may have varied in how they responded in discussion forums, graded student work, and so forth. The researchers attempted to control for student differences by using prior student GPA as a covariate. They found “a moderately positive relationship between taking an OER course and academic achievement” and that, even after accounting for prior GPA, “enrollment in an OER course was…a significant predictor of student grade achievement” (n.p.).
Gurung (2017) sent a Qualtrics survey to course instructors at seven institutions who in turn forwarded it to their students. In the first study reported in this article, 569 students from five institutions who used an electronic version of the NOBA Discovering Psychology OER responded to the survey. At the other two institutions, 530 students who used hard copies of one of two different CT responded. The survey asked students to share their ACT scores, study habits, use of the textbooks, behaviors demonstrated by their teachers, and then answer fifteen psychology questions drawn from the 2007 AP Psychology exam. When controlling for ACT scores, students who used OER scored 1.2 points (13%) lower than those who used CT.
Gurung noted that potential limitations of his first study included the fact that the OER textbooks were electronic while the CT were hard copies, and that the AP exam questions may have been more closely aligned to the CT. In the second study, Gurung rectified these issues by including students who used both hard and electronic copies of CT as well as hard and electronic copies of the NOBA OER. He also included ten quiz questions from the NOBA test bank. In this second study, 1447 students at four schools who used the open NOBA textbook responded to the survey; 782 students at two schools who used a CT responded. All other procedures and survey questions mirrored the first study. When comparing total quiz scores, there was an overall significant effect of the book used, favoring those who used the CT. However, when only the NOBA test bank items were used, there were no significant differences, indicating that alignment may be the reason for the difference between groups. In addition, there were no statistically significant differences when comparing the quiz scores of those who used an electronic version of the OER versus those who used an electronic version of a CT. There were also differences in quiz results between the two schools that used commercial textbooks (each of which used a different CT). It may be that one CT was superior, or that the instruction at one institution was stronger, leading to the difference.
Studies that accounted for student and teacher variables
Four studies accounted for both student and teacher variables. Winitzky-Stephens and Pickavance (2017) assessed a large-scale OER adoption across 37 different courses in several general education subjects at Salt Lake Community College. In total, 7588 students who used OER were compared with 26,538 students who used commercial materials. The researchers used multilevel modeling to control for course subject, course level, individual instructors, and student backgrounds (including age, gender, race, new/continuing status, and prior GPA). After accounting for these variables, they found that for continuing students, the use of OER was not a significant factor in student grade, pass rate, or withdrawal rate. For new students, OER had a slight positive impact on course grade, but not on pass or withdrawal rates.
Colvard et al. (2018) performed a similar large-scale analysis by examining course-level faculty adoption of OER at the University of Georgia. They evaluated eight undergraduate courses that switched from CT to OpenStax OER textbooks between fall 2010 and fall 2016. In contrast to Winitzky-Stephens and Pickavance (2017), who used statistical controls to account for teacher variables, Colvard et al. (2018) only included sections whose instructors had taught with both textbook versions. The researchers found statistically significant differences in grade distributions favoring OER. There was a 5.5% increase in A grades after OER adoption and a 7.7% increase in A– grades, and the proportion of students receiving a D, F, or W grade decreased by 2.7%. This study was also the first to specifically examine the interactions among different student populations. The researchers found an overall GPA increase of 6.90% for non-Pell recipients and an 11.0% increase for Pell recipients. Furthermore, OER adoption resulted in a 2.1% reduction in DFW grades for non-Pell-eligible students versus a 4.4% reduction for Pell-eligible students, indicating that the OER effect was stronger for students with greater financial need. Non-white students similarly received larger grade boosts and a greater decrease in the likelihood of withdrawal than did white students, although both groups showed better outcomes when using OER. The largest differential between student groups came in the comparison between full- and part-time students. Course grades improved by 3.2% for full-time students but jumped 28.1% for part-time students. The DFW rate for full-time students actually increased from 6.3 to 7.4%; however, the rate for part-time students dramatically dropped from 34.3 to 24.2%. One limitation of their approach was that results were only reported at an aggregate level, because Pell eligibility data were only given to the researchers in aggregate (not by course or instructor). This was a stipulation from the Financial Aid office intended to prevent any students from being identified. While this is a reasonable limitation, it is possible that reporting results in aggregate masked or created differences that would not have been present had results been disaggregated.
Jhangiani et al. (2018) attempted to control for both student variables (through demographic analysis and a pretest) and instructor variables (by using the same instructors) in their examination of seven sections of an introductory psychology class taught in Canada. Two sections were assigned a digital OER, two were assigned the same OER in hardcopy format, and three were assigned a CT. Three different instructors taught the seven sections; one instructor taught back-to-back semesters, first with the print OER and then with the print CT. The other two instructors taught with either open or commercial materials, but not both. Students in all conditions had similar demographic variables and had equivalent knowledge of psychology at the start of the semester. Those using CT had completed more college credits, were taking fewer concurrent courses, and reported spending more time studying than those who used OER. Collectively, these indicators suggest that the two groups were roughly equivalent, with any differences favoring those in the CT condition. Students took three exams, identical for each section. When all sections were analyzed in a MANOVA, students assigned the digital open textbook performed significantly better than those who used the commercial textbook on the third of the three exams. There were no differences on the other two exams. When only the two sections taught by the same teacher were analyzed (to control for teacher bias), students using OER outperformed students using CT on one exam, and there were no significant differences on the other two.
Clinton (2018) used a similar approach to compare the overall class scores and withdrawal rates of students taking her introductory psychology classes across two semesters. She compared 316 students who used a CT in spring 2016 with 204 students who used the OpenStax Psychology textbook in fall 2016. The demographic makeup of the students, as well as their self-reports on how they used the textbooks, were similar. When accounting for differences in student high school GPAs, there was no grade impact connected with OER adoption. The number of students who withdrew during the OER semester was significantly lower than when the CT was used, a difference that did not appear to be related to GPA.
A summary of the OER efficacy research published between September 2015 and December 31, 2018 is provided in Table 1.
Collective findings from perceptions research between 2015 and 2018
There were twenty OER perceptions studies published between September 2015 and December 2018, involving 10,807 students and 379 faculty. Six of these studies also included efficacy data and thus were also identified as efficacy studies in the previous section. I next provide a brief overview of each of these perceptions studies, organized into two different types. Fifteen of the twenty studies directly asked students to compare the OER they had used with CT; five compared student reports about the OER or CT they were currently using.
Studies examining direct comparisons between OER and CT
Pitt (2015) surveyed 127 educators who utilized OER, specifically materials from OpenStax College, by placing a link to her survey in the OpenStax newsletter. Those who completed the survey had used ten different OpenStax textbooks. Sixty-four percent of faculty members reported that using OER facilitated meeting diverse learners’ needs, and sixty-eight percent perceived greater student satisfaction with the learning experience when using OER.
Delimont et al. (2016) surveyed 524 learners and thirteen faculty members across thirteen courses at Kansas State University regarding their experiences with both “open” and “alternative” resources (where alternative resources refer to free, but copyrighted, materials). When students evaluated the statement, “I prefer using the open/alternative educational resource instead of buying a textbook for this course (1 = Strongly disagree, 7 = Strongly agree),” they rated it 5.7 (moderately agree). Twelve of the thirteen faculty members interviewed preferred teaching with OER and stated their perception that students learned better when using OER and alternative resources as opposed to CT. When asked to rate their experience with the open/alternative textbooks, faculty members rated it 6.5 on a 7-point scale.
CA OER Council (2016) surveyed faculty and students at California community colleges and state universities who adopted OER in the fall of 2015. Seven of the sixteen surveyed faculty members felt that the OER were superior to CT they had used. Five faculty rated the OER as being equivalent to CT, with the remaining four rating it as worse. Faculty expressed concern regarding ancillary materials such as PowerPoints and test banks. Of the fourteen faculty members who responded to a question about the quality of ancillary materials, five felt the OER support materials had sufficient quality, three were neutral, and six faculty felt the materials lacked sufficient quality. When students (n = 351) were asked if the OER used were better than traditional resources, 42% rated OER as better, 39% as about the same, 11% as worse than CT and 8% declined to answer. All students in the study wanted to use OER textbooks in the future and stated they would recommend the use of OER to friends.
Illowsky et al. (2016) surveyed 325 students in California who used two versions of an open statistics textbook. The first survey (n = 231) asked students about an earlier version of the OER. Fifty percent of students said if given the choice between courses using OER or CT they would choose to take future classes that used OER; 32% had no preference, with the remaining 19% preferring to enroll in courses with a printed CT. Twenty-five percent of students rated OER as better than CT, 62% as the same and 13% worse relative to CT. A second survey (n = 94) was given to students who used a later OpenStax version of the textbook with similar results.
As stated in the efficacy section, Ozdemir and Hendricks (2017) studied 51 e-portfolios written by faculty in the state of California who used OER. They report, “The vast majority of faculty also reported that the quality of the textbooks was as good or better than that of traditional textbooks” (p. 98). Moreover, 40 of the 51 portfolios contained faculty insights regarding students’ attitudes towards the open textbooks; only 15% of these e-portfolios reported any negative student comments.
Jung et al. (2017) surveyed 137 faculty members who used OpenStax textbooks. Sixty-two percent stated OpenStax textbooks had the same quality as traditional textbooks; 19% thought the quality was better, and 19% thought it was worse. Faculty were also specifically asked about the time they spent preparing the course after adopting an OpenStax text. Seventy-two percent of faculty stated they spent the same amount of time preparing to teach a course using open textbooks, 18% spent more and 10% spent less. Those who reported spending more time were asked if the extra amount of preparation time was acceptable and 78% said it was.
Hendricks et al. (2017) surveyed 143 Physics students; 72% said the OER had the same quality as CT. An additional 21% said OER were better than CT and 7% said they were worse. Students were also asked to rate their agreement with the following statement: “I would have preferred to purchase a traditional textbook for this course rather than using the free online textbook.” 64% of respondents disagreed, 18% were neutral, and 18% agreed. The primary reason given for choosing OER was cost, and for choosing a traditional textbook was a preference for print materials.
Jhangiani and Jhangiani (2017) surveyed 320 college students in British Columbia who were registered in courses with an open textbook. Students were asked to rate their agreement with the statement, “I would have preferred to purchase a traditional textbook for this course”; 41% strongly disagreed, with an additional 15% slightly disagreeing. Another 24% were neutral, with 20% either slightly or strongly agreeing.
Cooney (2017) surveyed 67 and interviewed six students who were enrolled in a health psychology course at New York City College of Technology that used OER. Those interviewed had a favorable perspective of OER, commenting on both cost savings and convenience. Students who were surveyed clearly preferred OER to CT; 42% said they were much better, 39% somewhat better, 16% neutral, and only 3% somewhat or much worse.
Ikahihifo et al. (2017) analyzed survey responses from 206 community college students in eleven courses that used OER. Students were asked, “On a scale of 1 (poor) to 5 (excellent), how would you rate the quality of the OER material versus a textbook?” (n.p.). A majority (55%) rated the OER as excellent relative to a CT, and an additional 25% rated OER as slightly better. Fifteen percent considered the two to be equal; 5% considered the quality of the OER material to be lower than that of a traditional textbook.
Watson et al. (2017) surveyed 1299 students at the University of Georgia who used the OpenStax biology textbook. A majority of students (64%) reported that the OpenStax book had approximately the same quality as traditional books, and 22% said it had higher quality. Only 14% ranked it lower than traditional textbooks. The two most common reasons students gave for liking the OpenStax book were that it was free and easy to access.
Hunsicker-Walburn et al. (2018) surveyed 90 students at a community college who reported that they had used OER in lieu of a traditional textbook. While little detail was provided about the students, the courses, or the OER used, the results were similar to other studies in this genre. They found that 33% of these students said the quality of OER was better than that of traditional textbooks, 54% said it was the same, and 12% stated OER were worse.
Abramovich and McBride (2018) studied results from 35 college instructors and 662 students across 11 different courses and seven colleges. Each instructor replaced a CT with OER; students and instructors were surveyed to gauge their perceptions of the OER they used. In total, 86% of students rated OER either as useful or more useful than materials used in their other courses. Only 6% of students stated that the open textbooks rarely or never helped them meet their course objectives. Faculty were similarly positive about the educational value of OER; nearly every instructor rated the OER as being either equal (40%), a little more useful (23%) or much more useful (34%) than materials they had previously used. This left only 3% of instructors who felt that the OER were less useful than other materials.
Ross et al. (2018) surveyed 129 students about their experiences using the OpenStax Sociology textbook. Forty-six percent of students said the OER was excellent relative to other textbooks, 27% above average, 19% average, 6% below average, and 2% very poor. Most students (83%) said they would not have preferred purchasing a CT. The three features of the OER that students most appreciated were that it was free, immediately accessible, and convenient/portable in its digital format.
Griffiths et al. (2018) performed the largest OER student perceptions study to date, surveying 2350 students across 12 colleges in the United States. They asked students to compare the quality of the OER with the instructional materials they used in a typical class. Students responded as follows: OER were much lower quality (2%), slightly lower (5%), about the same (34%), slightly higher (29%), or much higher (30%). Although this seems like an extremely strong statement regarding the quality of OER, it is tempered by the fact that students showed similar patterns in how they rated other aspects of the class. For example, students who used OER were asked to rate the quality of teaching compared to a typical class, and stated that the quality of teaching in the class that used OER was much lower (2%), slightly lower (5%), about the same (36%), slightly higher (26%), or much higher (31%). Their overall rating of the OER class compared to typical classes followed a similar pattern. While it is possible that the use of OER was so significant that the difference in instructional materials led to higher student perceptions of the teacher and overall course, it is equally likely that exceptionally strong faculty or courses colored students’ perceptions of the materials. It is also possible that students tend to have an overall positive experience in every class they take, causing them to rate most classes as “better” than a typical class, even though this is not mathematically possible.
Studies comparing ratings of OER and CT
Gurung (2017) used a short version of the Textbook Assessment and Usage Scale (TAUS; Gurung and Martin 2011) to assess student perceptions of CT and OER. The TAUS assesses different components of a textbook, such as study aids, visual appeal, examples, and so forth. Gurung (2017) asked students to rate the textbook they were currently using (some subjects used OER and others used CT) and compared the results. In his first study, Gurung found that CT users rated the total quality of their textbook higher than those using an OER textbook did. Further analyses showed this occurred because of differences in ratings of figures, photos, and visual appeal. Students using OER rated the material as being more applicable to their lives. The results of Gurung’s second study were similar; however, additional details provided in the first study (e.g., indicating whether the overall differences stemmed solely from differences in ratings of visual appeal) were not included with the second study.
Jhangiani et al. (2018) also used a modified version of the TAUS to compare how 178 university students in Canada rated the psychology textbooks that they used. Notably, three of the six questions they eliminated from the original TAUS to create their modified version related to visual aspects of the materials. Some students used a print CT, while others used a print or digital OER. Unlike Gurung (2017), statistical analyses of the student surveys found that students rated the print OER higher than the print CT on seven of the sixteen TAUS dimensions. There was no dimension on which the CT was rated higher than the OER, nor were there any significant differences between the CT and the digital OER.
Lawrence and Lester (2018) surveyed students regarding their use of US History textbooks. Contrary to many OER research studies, they found that students were more positive about the CT than the OER. Seventy-four percent of the 162 students who used the traditional textbook said that they were “overall satisfied with the book” versus 57% of the 117 students who used the open textbook, a difference of 17% (279 total survey respondents). The researchers attribute these results to problems related to the specific OER used and believe the results would have been different had a more robust history OER textbook been available.
Clinton (2018) surveyed students in two separate semesters regarding their opinions of the textbook that they used (one semester used a CT, the other an OER; the study is described in greater detail in the efficacy section). She asked the two groups of students to answer specific questions about the book they used and then compared the two sets of responses. Across the 458 completed surveys, student perceptions of the quality of the two textbooks were similar except on two attributes. The CT was rated marginally higher in terms of visual appeal (p = .06), whereas the OER was rated significantly higher with respect to the way it was written (p = .03).
Carpenter-Horning (2018) used the Cognitive Affective Psychomotor (CAP) Perceived Learning Scale to compare how students perceived their learning in a course depending on the textbook type used. She surveyed first-year students at nine community colleges, all of whom had taken a required first-year seminar during the fall semester of 2016. Some of these classes used OER, others CT. In the spring 2017 semester, these students (n = 5644) were surveyed regarding their experience in the course. A total of 227 students responded, for a response rate of 4%. Of these, 101 had used OER and 126 had used CT in their course. An independent samples t test showed that students who used OER reported significantly higher levels of perceived cognitive learning in the course (p = .02, d = .31). A separate independent samples t test demonstrated no statistically significant mean differences in perceptions of affective learning. While the use of a pre-established instrument to analyze perceptions of OER is laudable, the CAP Perceived Learning Scale is not designed to measure student textbook perceptions, but rather overall learning. Thus, a weakness of this study may be the assumption that the difference in perceived learning in the courses is attributable to the type of textbook; other factors may have influenced the difference in student perceptions of learning.
A summary of the OER perceptions research published between September 2015 and December 31, 2018 is provided in Table 2.
Aggregate OER efficacy and perceptions research between 2002 and 2018
By the end of 2018, a total of twenty-five peer-reviewed studies examining the efficacy of OER had been published. These studies involve 184,658 students, 41,480 of whom used OER and 143,178 of whom used CT. Three studies did not provide results regarding statistical significance. Ten reported no significant differences or mixed results. Eleven had results that favored OER. One had results that favored CT, although the researcher of that study stated these differences could relate to how the learning materials were aligned with the assessment.
A consistent trend across this OER efficacy research (spanning from 2008 to 2018) is that OER does not harm student learning. Although anecdotal reports exist that OER are not comparable to CT, the research does not bear this out with respect to student learning. While the impact of OER on student learning appears to be small, it is positive. Given that students save substantial amounts of money when OER is utilized, this is a particularly important pattern.
In terms of perceptions, by the end of 2018, twenty-nine studies of student and faculty perceptions of OER had been published. These studies involve 13,302 students and 2643 faculty members. Every study that has asked those who have used both OER and CT as primary learning resources to directly compare the two has shown that a strong majority of participants report that OER are as good or better. In the five studies in which the ratings of students using CT were compared with the ratings of students who used OER, two studies found higher ratings for CT, two reported higher ratings for OER and one showed similar ratings.
The key pattern of OER perceptions research is easy to identify—students do not like paying for textbooks and tend to appreciate free options. Many instructors appear to be sensitive to this student preference, which may influence their ratings of OER. The fact that consistent survey data show that both faculty and students who use OER largely rate it as being equal to or superior to CT has important practical and policy implications for those responsible for choosing textbooks.