Innovative Higher Education

Volume 37, Issue 5, pp 397–405

Does Learning-Centered Teaching Promote Grade Improvement?


  • A. M. Mostrom
    • Department of Biological Sciences, University of the Sciences
  • Phyllis Blumberg
    • Director of the Teaching and Learning Center, University of the Sciences

DOI: 10.1007/s10755-012-9216-1

Cite this article as:
Mostrom, A.M. & Blumberg, P. Innov High Educ (2012) 37: 397. doi:10.1007/s10755-012-9216-1


When the grade distribution within a course shifts towards higher grades, it may be due to grade inflation or grade improvement. If the positive shift is accompanied by an increase in achievement or learning, it should be considered grade improvement, not grade inflation. Effective learning-centered teaching is designed to promote student learning due to increased responsibility for learning, engagement with course material, and opportunity for formative assessments prior to summative assessments of course learning outcomes, which leads to improved grades. We suggest ways that faculty members practicing learning-centered teaching can collect and analyze data to support increased learning and grade improvement.


Keywords: Learning-centered teaching · Grade improvement · Grade inflation

In this article, we integrate two topics: a positive shift in grade distributions and learning-centered teaching. First, we discuss the trend toward an upward shift in grade distributions over time that traditionally has been attributed to grade inflation but that, upon deeper analysis, may actually reflect earned grade improvement. Next, we define three essential characteristics of learning-centered teaching. We present evidence that this approach promotes greater learning than traditional, instructor-centered teaching (Weimer 2002). We argue that this improved learning leads to earned grade improvement, not grade inflation. Finally, we offer strategies for documenting and defending improved learning and grade improvement as a result of implementing learning-centered teaching.

Grade Inflation Versus Grade Improvement

Upward shifts in grade distributions are usually labeled grade inflation, which has been defined as an increase in the grades students receive without an increase in achievement (Bejar and Blew 1981). This classic definition is still applicable today. Educators have long been concerned about grade inflation; for example, in 1894 administrators complained that faculty members at Harvard University awarded too many A’s and B’s (Hu 2005). Numerous studies published over the past thirty years show a general rise in grade averages without a concomitant increase in standardized test scores (such as the SAT) or any indication that students are better prepared for subsequent courses or college (Denton and Henson 1979; Hassel and Lourey 2005; Potter et al. 2001); and Seligman (2002) reported that grade inflation continues to occur at Harvard. Grade inflation has been described as a factor that could lower the effectiveness and credibility of higher education and its graduates (Hassel and Lourey 2005; Kolevzon 1981).

However, not all upward shifts in grade distributions are grade inflation. In her review of the literature on grade inflation since 1993 Boretz (2004) attributed a higher proportion of students earning A’s and B’s to two factors promoting greater success. First, more professors are attending professional development programs emphasizing more effective instructional strategies, including learning-centered practices. Second, institutions of higher education are providing improved and greater student support services. Additionally, we believe improved instructional designs and more appropriate, authentic assessment techniques may lead to increased student learning. If course teaching, learning, and assessment techniques are aligned with course objectives, then students’ grades should reflect this improved learning as they improve their mastery of an assignment (Blumberg 2009). Furthermore, allowing students opportunities to master the material and demonstrate their attainment of the desired knowledge and skills helps more students to achieve the course learning outcomes (Denton and Henson 1979). All of these factors may lead to students earning higher grades because they achieve the course goals and objectives.

Boretz (2004) warned that referring to grade distribution shifts as grade inflation without carefully analyzing why grades have shifted is potentially damaging to higher education, and the term grade inflation carries clear negative connotations. Nelson (2010) countered the idea that grade inflation is a corruption of standards by including it in his list of “dysfunctional illusions of rigor” that drive some teaching philosophies, and he suggested that grade inflation needs to be re-examined. He continued his discussion under “more realistic views” by differentiating between bad and good grade inflation. Bad grade inflation occurs when students receive higher grades from their instructors than they deserve, which is what most educators equate with grade inflation. Good grade inflation occurs when students earn improved grades because they achieve higher learning outcomes. He advocated for good grade inflation and stated that improved pedagogy can lead to it.

Recent discussions of “good grade inflation” at a teaching conference were met with resistance (Mostrom et al. 2010). There is considerable stigma associated with the term “grade inflation” because teachers do not want to give the impression to colleagues or administrators that students may achieve grades in their course that were not earned. Therefore, we have chosen to adopt the phrase “grade improvement” to describe the increase of higher course grades accompanied by a documented increase in student learning outcomes (e.g. through comparisons of post- vs. pre-tests, and/or learning-centered teaching courses vs. more traditional courses). We propose that the term grade inflation be used to refer only to “bad grade inflation” since a strong negative connotation already exists.

Defining Characteristics of Learning-Centered Teaching

Learning-centered teaching moves the focus from what the instructor does to what the students learn. It is more than a collection of teaching strategies that comprise a teaching toolbox. Instead, it involves a philosophical paradigm shift about how one teaches (Barr and Tagg 1995). It is an approach to teaching that includes improved instructional techniques, learning strategies that empower students to be more actively engaged in their education, and an assessment philosophy. Learning-centered teaching is well supported by research, especially the constructivist theories of learning (Alexander and Murphy 1998; Lambert and McCombs 1998).

Learning-centered teaching is, in part, characterized by three essential behaviors: a shift in responsibility for learning towards students and away from the instructor, active student engagement in the course material, and formative assessment opportunities for students (Blumberg 2009; Weimer 2002). Students take greater responsibility for their own learning because the instructor’s role changes from disseminator of information to facilitator of learning. Students actively engage with the course content and construct their own meaning of the topic. They move beyond memorizing information; instead, they understand the course material, can state its concepts in their own words, and can apply them to novel situations.

Instructors implementing learning-centered teaching can foster a shift in the responsibility for learning to the students by explicitly teaching learning-to-learn skills that are aligned with course objectives and assessments, rather than assuming students already have these skills, and by allowing students opportunities to practice them (Blumberg 2009). For example, Nelson (2010) found it critical to model how to read texts for his Introductory Biology students and how to develop evidence-based arguments for his advanced majors. When he modeled how to read material for the class and assigned questions appropriate to the learning objectives of the class, he found that students were actively engaged in the course material outside of class, thereby taking greater responsibility for their own learning; and their grades improved on exams. He also found that organizing assignments for the students clarified his learning objectives and allowed him to revise and more closely align both formative and summative assessments to the revised learning objectives. For more advanced students, teachers may model how to critically read, analyze, and evaluate primary literature and representations of data therein, or may model metacognitive practices.

The second essential component of learning-centered teaching is to engage students actively with the course material. The purpose of this active engagement is to promote the students’ own understanding of the content so that they can apply this knowledge in different contexts. Not all studying achieves this purpose. For example, when students read a textbook and highlight the relevant sections, they are not creating their own meaning, but merely drawing attention to what the textbook author wrote. In contrast, when students have repeated opportunities to interact with each other and the instructor using the course content, they become more engaged and create their own understanding of the content. Some learning exercises that actively engage students in the course material include constructing concept maps, interacting with web-based learning resources, discussing class material in small groups, practicing problem solving, and taking quizzes relating to reading assignments or exams. Students especially value experiencing real-world, authentic exercises because they see the relevance of these to their future. Publishers of textbooks frequently provide interactive auxiliary materials that help students actively engage in the content.

It is important not to confuse learning-centered teaching with “active learning” because learning-centered teaching involves more than simply providing “hands-on activities” in the classroom. Some of these activities, along with embracing the three essential characteristics of learning-centered teaching, can lead to learning-centered teaching; but simply implementing the activities by themselves does not constitute a learning-centered teaching environment.

The third essential component of learning-centered teaching is the opportunity for students to receive formative feedback so they can improve and move toward mastery of the course learning outcomes before the teacher assigns grades derived from summative evaluations. When provided formative assessment, students have the opportunity to learn from their mistakes. Opportunities for students to interact with course content and receive feedback prior to completing a graded assignment improve students’ knowledge and understanding of the material (Blumberg 2009; Nelson 2010). When students receive formative feedback that guides their learning, they perceive that the teacher cares about them as individuals and about their learning, which often increases their motivation in the course.

Formative feedback can take many forms. With the assistance of technology, teachers can provide formative feedback by commenting on students’ web-based assignments, problem sets, practice quizzes, and written assignments. Examples of technology-assisted instruction include audience response systems (clickers) and Just-in-Time Teaching (JiTT). In JiTT students answer questions about the reading assignment and submit them to the instructor prior to class (Novak et al. 1999). Using these techniques, the instructor expands or reduces the time devoted to class discussion of the relevant concept based upon student mastery. Bloxham and Campbell (2010) provided first-year undergraduate students an interactive cover sheet for written assignments through which students articulated the areas where they felt they needed the most help. Students thus self-identified their weaknesses and initiated a written dialogue with assessors. The grading tutors found that this dialogue helped them provide more meaningful feedback and saved them grading time.

Problem-based learning (PBL) has been implemented widely in the health professions and engineering curricula and has been called a prototypical example of learning-centered teaching (Blumberg 2007). Working in small groups, students iteratively discuss real-world, open-ended problems. The first time they encounter the problem students raise questions they need to research, and they then independently access resources to find the answers. When they reconvene, they share and integrate the new information to discuss the concepts at a deeper level or apply their knowledge to a similar problem. Essential learning-centered aspects of PBL include the students’ identification of what they need to learn. They learn the material with minimal assistance from the professor, and they receive feedback on the accuracy and completeness of their understanding (Blumberg 2007).

Learning-Centered Teaching and Grade Improvement or Grade Inflation?

Critics of learning-centered teaching might argue that the practices highlighted above lead to bad grade inflation because any of the following occurs: content has been removed to make room for active learning (such as discussions), the course has lost rigor because new learning techniques are used, or the assessment methods allow students to do better than they would with instructor-centered courses. However, the literature on learning-centered teaching suggests that these courses can be as rigorous as instructor-centered courses (Weimer 2002). Students in learning-centered courses achieve the learning outcomes more frequently and to a higher standard than those in instructor-centered courses (Fink 2003; Nelson 2010).

We posit that learning-centered teaching leads to grade improvement due to students’ increased responsibility for learning, greater engagement in the process of learning, and the opportunity to improve their mastery of a skill. When instructors use these approaches, they effectively eliminate some of the characteristics of current higher education practice that can lead to grade inflation. If students can obtain good grades without attending or participating in class, or while remaining apathetic about their learning, then their grades may be inflated because they have not increased their learning (Hassel and Lourey 2005). Grade improvement is actually the opposite of grade inflation: the former reflects documented gains in learning, while the latter reflects higher grades without them.

Just as it is important to define what learning-centered teaching is, it is also important to distinguish what it is not, because the distinction is not always clear. Database search engines such as ERIC may confuse instructors when they suggest that searches for learning-centered teaching include terms such as independent study, student control of the content, self-paced instruction, self-grading, or individualized instruction. These are not necessarily examples of learning-centered teaching approaches.

Searches in the ERIC and Faculty Development databases on grade inflation found no peer-reviewed articles that discussed the link between grade inflation and learning-centered teaching. Instead, the 262 grade inflation articles we identified in the ERIC database from the past ten years focused on organizational issues and administrative policies such as lax academic standards for admissions and retention. There were many articles on the relationship between student course evaluations and grade inflation. Grade inflation has also been attributed to the economic realities of academia, including the pressures to retain students and the pressures that adjunct faculty members experience, i.e., the desire to appear to be good teachers in order to retain their academic positions.

Assessing Learning-Centered Teaching

How can instructors defend their learning-centered courses from accusations of undesirable grade inflation? First, instructors should align their course learning outcomes, learning-centered teaching practices, and assessment methods to ensure that their assessments are good measures of learning (Blumberg 2009; Fink 2003). Through this process they should clarify for themselves and for assessors which essential characteristics of learning-centered teaching they are using in the classroom (Blumberg 2009; Weimer 2002). Faculty members can assess the learning-centered status of the course using rubrics and Likert scales such as those proposed by Blumberg and Pontiggia (2011).

After aligning and adopting learning-centered teaching practices in the classroom, it is important to collect data regarding student achievement in order to determine whether or not students have achieved the learning outcomes. There are four types of analyses for assessing the impact, or lack thereof, of learning-centered teaching practices on students achieving learning outcomes. One analysis involves a comparison of post- versus pre-test results to determine students’ learning gains in a learning-centered course. These tests should accurately assess a specific, measurable learning outcome within a course. The pre- and post-tests can be created by the instructor, but using standardized tests that many other instructors use may provide better evidence. A second analysis involves comparing students exposed to learning-centered teaching versus students exposed to traditional teaching. A third analysis combines the first two measures: pre- and post-tests are administered to two different groups of students in an experimental design in which one group is exposed to instructor-centered teaching and the other group to learning-centered teaching. A fourth analysis considers the long-term effects of learning-centered teaching and provides the strongest evidence for the positive effects of adopting this teaching philosophy and strategy. The data generated from any of these analyses can be used to determine whether learning-centered teaching practices promote students’ achievement of learning outcomes and grade improvement. The data and conclusions drawn from these analyses should be shared with other faculty members and administrators in order to maintain the transparency of the evaluative process. Below we provide examples of each of the four types of data analyses to illustrate the connection between implementing learning-centered teaching practices and improved learning.
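The first type of analysis above can be sketched quantitatively: with paired pre- and post-test scores, each student serves as his or her own baseline, and the class's mean gain together with a paired t statistic summarizes whether learning improved. The scores below are invented for illustration only, not data from any study cited here:

```python
# Illustrative sketch of a post- vs. pre-test comparison (analysis type 1).
# All scores are hypothetical.
from statistics import mean, stdev
from math import sqrt

pre  = [45, 52, 38, 60, 41, 55, 48, 50]   # pre-test scores (%)
post = [68, 70, 55, 82, 60, 74, 66, 71]   # post-test scores (%), same students

# Per-student gains: a paired design controls for each student's baseline.
gains = [b - a for a, b in zip(pre, post)]

mean_gain = mean(gains)
# Paired t statistic: mean gain divided by its standard error.
t_stat = mean_gain / (stdev(gains) / sqrt(len(gains)))

print(f"mean gain = {mean_gain:.1f} points, t = {t_stat:.2f}")
```

A large, positive t statistic (checked against a t table with n − 1 degrees of freedom) would indicate that the gain is unlikely to be chance variation; instructor-made tests can be used, though, as noted above, standardized instruments provide stronger evidence.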

Comparisons of Pre-test vs. Post-test Results in Learning-Centered Courses

In an introductory biology course students completed a Scientific Method and Information Literacy Exercise in which they individually accessed, analyzed, and evaluated a scientific article from the journal Animal Behaviour (Porter et al. 2010). The instructors provided models for students to follow and answered students’ questions about the assignment, but the final product was that of the individual student. Students completed pre- and post-tests assessing their basic knowledge of the scientific method and information literacy and their perceived relevance of these to their academic careers. The test results revealed that students gained both scientific and information knowledge that supported attainment of specific learning outcomes for the course. Students also showed an increased perception of the relevance of scientific and information literacy to their academic careers.

A mathematics instructor changed his upper level course to include more student presentations, group discussions of homework, projects, and choices of how they would be examined (Alsardary et al. 2011). All of these changes are learning-centered teaching strategies that foster greater engagement with the content and require students to take more responsibility for their learning. Two mathematicians conducted a primary trait analysis of the pre- and post-test performance of the students in this learning-centered class. They identified four primary traits for this course: conceptual content knowledge, procedural knowledge to solve problems, application of mathematical concepts to other situations, and accurate mathematical communication. For three of the primary traits all of the students improved significantly from the pre- to the post-test; the only primary trait for which students did not show a significant improvement was mathematical communication (Alsardary et al. 2011).

Comparisons of Student Performance in Learning-Centered vs. Instructor-Centered Courses

Using data from 37 published papers, Springer et al. (1999) conducted a meta-analysis of STEM (Science, Technology, Engineering, and Mathematics) courses in which students’ overall achievement on tests was measured. Their overall analysis found that students in courses incorporating small-group learning exercises showed significantly greater achievement than students in traditional lecture courses. Their analysis of the effects of small-group learning on different ethnic groups showed that African Americans and Latinos exhibited significantly greater improvement on tests than Caucasian students or heterogeneous groups.

Mahalingam et al. (2008) shifted their weekly, first-year chemistry recitations from an optional, less structured, poorly attended question-and-answer, instructor-centered problem-solving format to a required, highly structured, collaborative learning format. The math SAT scores of the pre-learning-centered teaching cohort and the learning-centered teaching cohort did not differ, suggesting these two cohorts were equally proficient in math, an important knowledge area for success in chemistry. The percentage of students in this course with failing exam averages (below 60%) decreased dramatically, from approximately 27.5% during the optional recitation years to less than 20% during the required recitation years. The means and medians of the students’ exam scores increased significantly in the first year of the new format as compared to the previous year. Because enhanced learning was documented within the learning-centered teaching cohort, the collaborative-learning format resulted in grade improvement, not grade inflation.

Deslauriers et al. (2011) exposed 538 students in their first-year physics class to a 1-week experimental intervention. The control group attended three 1-hour lectures by an experienced professor. Students in the experimental group proposed, tested, and critiqued scientific predictions on the same content during their three 1-hour meetings; in addition, they received formative feedback on their skill development. Compared both with their own pre-intervention behavior and with the control group, students in the experimental group showed increased attendance and engagement during the intervention week. Students in the experimental group also earned significantly higher grades on a 12-question test that followed the intervention, showing the immediate positive effect of the learning-centered teaching practices used with this group.

Pre- and Post-Test Comparisons with Learning-Centered and Traditional Teaching Approaches

Hake (1998) conducted a meta-analysis of data from 48 physics courses that provided interactive engagement versus 14 traditional courses with little or no active learning. Students enrolled in high school and college introductory physics took the Force Concept Inventory, a widely used standardized physics test, as a pre- and post-test; and normalized gains were compared for these two groups. Students who engaged interactively with the material showed a 48% average normalized gain compared to a 23% average gain by students in traditional lecture courses. There was little overlap in the distributions of average gains between these two groups. This meta-analysis provides strong evidence that increased student engagement with course material improves students’ achievement.
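The normalized gain Hake compared is the raw pre-to-post improvement expressed as a fraction of the improvement that was still possible: g = (post − pre) / (100 − pre). A minimal sketch, with scores that are illustrative rather than Hake's data:

```python
# Hake's normalized gain <g>: the fraction of the possible
# improvement actually achieved. Scores are illustrative percentages.
def normalized_gain(pre: float, post: float) -> float:
    """Return (post - pre) / (100 - pre); undefined when pre == 100."""
    return (post - pre) / (100.0 - pre)

# A class averaging 40% before and 71% after instruction
# gained 31 of the 60 points that were available to gain.
g = normalized_gain(40.0, 71.0)
print(f"normalized gain = {g:.2f}")
```

Because the denominator shrinks as the pre-test score rises, normalized gain lets courses with different starting points be compared on a common scale, which is what makes Hake's pooled comparison of interactive-engagement and traditional courses meaningful.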

Long-Term Longitudinal Effects of Learning-Centered Teaching

Prince and Felder (2006) synthesized the results of a few meta-analyses and individual studies comparing the long-term effects of PBL versus traditional instruction. Students in traditional courses tended to do better on tests of knowledge acquisition than students in PBL courses when these tests occurred immediately after the course. However, this review of the literature indicates that PBL students retained knowledge better and were more able to apply this knowledge in the long-term than students who were traditionally trained. Students in the PBL courses demonstrated greater skill development both in the short-term and long-term.

Students exposed to PBL demonstrated greater self-directed learning than did students in instructor-centered curricula (Blumberg 2000). Primary care physicians who had graduated five to ten years previously from a problem-based medical program were compared to a similar cohort who had graduated from a prestigious, traditional medical college. The PBL graduates were more up-to-date with the current regimens for treating hypertension than the graduates of a traditional program (Shin et al. 1993).

Derting and Ebert-May (2010) examined the short- and long-term effects of changing a two-semester introductory biology course sequence from a traditional lecture format to one that used learning-centered strategies of inquiry. While both course formats covered similar content, the learning-centered course focused more on developing students’ understanding of broad underlying concepts and the process of scientific inquiry and less on retaining detailed information about the concepts. The instructor-centered courses were taught in the traditional lecture style. Students in the two groups did not differ in composite ACT scores or total number of biology credits taken prior to the course. Immediately following the course, the two groups’ scores on the Views About Biology Survey were not significantly different from each other. However, students who were enrolled in the learning-centered introductory courses showed a significantly more sophisticated understanding of biology as an inquiry process in their senior year than the students who had been in the traditional lecture courses. The seniors who completed the inquiry-based courses also scored significantly higher than those in the traditional courses on the Major Field Test in Biology. Therefore, while the learning-centered environment did not promote an immediate effect (as measured by one survey), it did promote long-term effects (as measured by two assessment tools). Derting and Ebert-May (2010) stated that these delayed results are similar to other studies; we note that they also parallel those found by Prince and Felder (2006).


Conclusion

We believe that educators need to address and document why upward shifts in grade distributions have occurred over time. A grade shift towards more A’s and B’s should not immediately imply grade inflation for which students have not demonstrated improved learning. Professors need to track and document the manner in which courses are taught and collect and analyze data about student learning in order to distinguish between undesirable grade inflation and desirable grade improvement. One strategy to achieve this is to compare results of post- versus pre-tests within a course. Other strategies are to compare learning retention in both the short- and long-term when students are exposed to traditional versus learning-centered courses.

Positive grade shifts may reflect improved student learning because teachers provide more effective learning opportunities. When teachers employ learning-centered approaches, they help students take more responsibility for their learning, increase opportunities for active student engagement with the course content, and provide formative assessment. We have summarized research evidence to support the idea that using these approaches leads to student learning increases, which leads to earned grade improvement but not to bad grade inflation.


Acknowledgments

We thank Jason A. Porter, University of the Sciences; John Immerwahr, Villanova University; and three anonymous journal reviewers for their critical review of this manuscript.

Copyright information

© Springer Science+Business Media, LLC 2012