The Faculty-Student Low-Low Contract
In too many college courses, faculty and students appear to engage in what I term a “low-low contract.” Students have low expectations of faculty with respect to teaching and faculty have low expectations of students with respect to studying. To put it colloquially, “faculty pretend to teach, students pretend to study, and as long as parents and others paying the bills are oblivious, everyone is happy.”
Numerous popular books have claimed, often in highly moralistic and/or cynical terms, that rather than teaching and learning being the primary focus of today’s universities and colleges, faculty have become overly invested in their research; students more interested in partying than studying; sports have been given top priority (evidenced by the fact that some football coaches are paid far better than university presidents); and vast sums of money have been spent on student amenities from climbing walls to dorms with high-end suites. With the publication of Richard Arum and Josipa Roksa’s highly discussed book, Academically Adrift, we have the beginnings of hard scientific evidence that universities and colleges are, in fact, failing to educate the current cohort of students. What Arum and Roksa show in their book (and in a subsequent report) is that students’ critical thinking skills develop little if at all during their college years. Their research provides verification that higher education is giving far too little attention to teaching and learning.
The focus of this short essay is on why the incentives for faculty to teach and for students to study are now such that we have the “low-low contract,” an obvious explanation for why college students are not learning. Evidence for the “low-low contract” is manifest in the sharp reductions in recent years in the amount of time and effort faculty devote to teaching and the amount of time and effort students put into studying. I examine changes in incentives faced by faculty to teach and students to study created by problems or failures in institutional design. Put in terms an economist would use, my focus is on “market failures” particularly with respect to teaching, but also to a degree with student studying.
I argue that the problem with teaching is that because teaching quality is difficult to measure, there is no market for star teachers, as there is for star researchers, with the result that there are little or no incentives to teach well. With respect to studying, the most likely culprit is grade inflation. Why study hard if it will make little difference in one’s grades? A second possibility is that students are arriving at college “burned out” by the competition to get into college and once there, find they need a “break.”
The shift in faculty priorities from teaching to research has been well documented and commented on by many. At a superficial level, the answer to why this has occurred is obvious—faculty are rewarded, both in terms of status and financially, to a far greater degree for their research than for their teaching. At top research universities, it is not unheard of for senior faculty to counsel their junior, untenured colleagues to put their effort into research and ignore teaching. The presumption is that it is the quality of one’s research, not teaching, that ultimately determines whether one gets tenure and professional success.
The more difficult question is why universities and colleges so disproportionately reward research in comparison to teaching. My answer has two parts. First, it is because the quality of teaching is so hard to measure. Second, precisely because teaching is so hard to measure, there is no “market” for faculty who are star teachers as there is for star scholars.
If one doubts that measuring teaching quality is difficult, one only needs to look to the considerable ongoing efforts to measure teaching quality at the K-12 level. States, the Federal Government, and the Gates Foundation, among others, have spent enormous sums of money trying to develop valid measures of teacher performance. Yet, the efforts to date have been criticized as deeply flawed, even though the K-12 context involves far fewer subjects and is far more standardized than the college curriculum.
There are additional reasons why measuring the quality of college level teaching is probably even harder than in the K-12 setting. First, there is the question of what is meant by good teaching. Is it a course where the goal is for students to master a specific subject matter, develop their analytic skills, or broaden their thinking?
Second, even if one could agree on what the goal of college level courses should be or at least the goal of specific courses, how might we actually measure the outcome? These days most universities and colleges have students evaluate their courses. These evaluations, however, generally focus on “student satisfaction” with a course. Recently, a science professor at my institution dramatically changed how he taught his course. Using a consistent set of exams, he was able to demonstrate that students learned more using the new teaching methodology. However, the student evaluations of the course and his teaching fell!
A most obvious methodology for evaluating courses would be for faculty to evaluate each other’s courses—a system of peer evaluation. However, faculty are notorious for believing that the classroom is their private sanctum where others should not be allowed to pass judgment. In addition, a peer evaluation system would take considerable time, which, given current incentives, faculty would see as better invested in their own research.
The consequence of the fact that teaching is difficult to evaluate is that there is no national market for star teachers as there is for star scholars. Higher education in America is unusual if not nearly unique in the high degree of mobility of faculty between institutions. Almost all of this mobility is a function of one institution raiding another’s star scholars. Harvard may recruit a faculty member from Princeton because they are a great scholar, but never simply because they are a great teacher. Even at elite liberal arts colleges where there is considerably more emphasis put on teaching, institutions do not compete with each other for the best teachers. To drive the point home, at the level of elite preparatory schools, where research is obviously not a concern, such competition does not occur. Hotchkiss does not attempt to raid Choate or Deerfield for its best teaching faculty.
The idea that markets will be distorted where there are multiple outcomes and some outcomes are measured better than others is well established in economics and goes back to the work of Dranove and Satterthwaite (Northwestern) on health care. If one dimension (research) is better measured than another (teaching), then institutions will compete more aggressively on the better measured dimension, creating, in economic terms, a market distortion. In the case of higher education, this argument can be extended to multiple dimensions. The quality of students that matriculate (at least as measured by SAT scores and GPAs), the winning records of sports teams, and the quality of student amenities are all institutional dimensions that are easily observed. The quality of teaching is not. Thus, it is hardly surprising that institutions compete aggressively on the former dimensions, and to a far lesser degree on the quality of faculty teaching. It is difficult to compete on something that is difficult to measure.
In their overview paper in this symposium, Arum and Roksa discuss recent research findings by the economists Babcock (UC-Santa Barbara) and Marks (UC-Riverside) that show that in the last four decades the amount of time that students spend studying has fallen by 50% and that currently 35% of students spend five or fewer hours a week studying alone. If students are not learning, perhaps it is for the simple reason that they are not studying. But why are students studying so much less?
The number one suspect is grade inflation. If one gets more or less the same grade no matter how well one performs in a course, it is perhaps not surprising that students are less willing to study. Although we would hope that students would work for the intrinsic rewards of learning, we would be fooling ourselves to think that grades are not, at least potentially, an important incentive.
A huge literature both in the public and academic press has documented and sought to explain why grade inflation has occurred. I will not rehearse these explanations here. However, I do want to point to several factors that fit with my overall institutional/structural explanation of change.
First, as universities and colleges, especially at the more elite levels, have become more selective, they are more likely to have students who are more homogeneous and higher in ability. If this is correct, then it is appropriate that grades now are higher and have less variance than in the past. A consequence of this, however, is that enrollment in a particular institution, rather than grades, has become the stronger indicator of ability for future employers. If grades are a weaker signal of future labor market productivity, then it is totally rational for students to care less about their grades and study less.
A related factor is that many parents may not know their children’s grades. With the enactment of FERPA (the Family Educational Rights and Privacy Act) in 1974, parents no longer have the right to have access to their children’s grades once they reach legal adulthood at eighteen. Thus, the incentive to avoid the cajoling of parents when one gets bad grades may no longer be the factor it was for earlier generations. If neither parents nor potential employers care about grades as much as they used to, why study hard?
Grade inflation may also be a function of how institutional structures have affected faculty behavior. In recent decades, colleges and universities have put increasing effort into evaluating courses and professors’ teaching using student questionnaires. In general, these evaluations do not focus on how much students learn, but rather on student satisfaction. Perhaps this is an outgrowth of the constant pressure to think of students as customers. Thus, it is student (customer) satisfaction, not student learning, that has become the measure of faculty teaching. There are two potential consequences of this.
First, faculty may feel pressured to make their courses less demanding and assign less work. The literature on whether more difficult courses receive lower ratings is mixed. However, if professors believe that they will be penalized for requiring too much work, they are likely to assign less work with the consequence that students are likely to learn less.
Second, student complaints, and the consequent fear of bad evaluations, may well have a ratcheting effect. One of the most unpleasant experiences for faculty is dealing with complaints from a student about a bad grade. Explaining to a student why they received a poor grade can take considerable time and often results in a confrontational situation. The solution is obvious: do not give bad grades, especially since students already getting good grades are unlikely to know or complain about the more lenient grading of weaker students. The net result of this is that the bottom is ratcheted up, pushing grades higher and higher. This grade inflation then redefines what is an adequate grade—yesterday’s C is now a B, only further increasing the pressure on faculty to grade leniently.
If grade inflation is the primary cause of why students do not study harder, a secondary factor may well be that students arrive at college burned out. The popular press is full of stories about how competitive the high school years have become as students compete to get into the best colleges, or at least the best colleges that they can. The assumption is that the college one goes to will determine one’s future career, friends, and spouse. A recent study by UCLA’s Higher Education Research Institute has found that today’s freshmen are experiencing considerably higher levels of stress than previous cohorts. Interestingly, the levels of stress among upperclassmen have not changed appreciably. To put it colloquially, students may be arriving in college already burned out, either unwilling or unable to put in the effort studying that earlier generations of students did.
An issue that has received less attention is the work demands that students are likely to face after college, especially in highly remunerative and demanding jobs in areas such as finance and management consulting. It is common around Harvard to hear students say something like “I killed myself to get in here. I will have to kill myself once I get out [working in a new job]. I need a four year vacation.” Implicit in this statement is the idea that students have relocated their effort to the high school and post-college years. If what college one attends and getting and succeeding in a high-end job right after college are what matter most, or at least that is the perception, then it is rational for students to invest less time in studying in college and put more effort into the periods before and after college. That said, given the expense of college and the lost potential for learning there, this may not be at all optimal from a societal perspective.
If faculty have weak incentives to teach and students weak incentives to study, then it is hardly surprising to see them engage in a “low-low contract”: low effort on the faculty’s part agreeably matched by low effort on the students’ part. What are we to do? If the question is how to motivate faculty to invest more time in teaching, the answer would seem to be to provide incentives for good teaching that are commensurate with those for productive research. As discussed above, this may not be easy, given the difficulties in measuring the quality of a professor’s teaching. But the fact that it is difficult does not mean that nothing can be done. I have recently suggested to my university that it create endowed chairs for star teachers and that departments compete for the opportunity to recruit faculty from outside the university to fill these chairs. This would certainly send a message to faculty that teaching is important.
An obvious way to get around grade inflation would be to reinstitute class rankings. Done in a simplistic way, class rankings are problematic because they create incentives for students to take easy courses, or at least courses where faculty are known as lenient graders. However, there is a highly developed measurement theory from psychometric testing that can be used to deal with this problem. This theory was developed for situations in which test takers answer different but overlapping sets of questions (SATs, GREs, MCATs, etc.). In order to create a single, valid scale, questions are rated in terms of their difficulty, and a student is given more points for correctly answering more difficult questions. A similar methodology could be used in which a student’s overall GPA, and thus class rank, is adjusted for how difficult their courses were.
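To make the idea concrete, here is a toy sketch of a difficulty-adjusted GPA. It is not the full psychometric machinery (item-response models require more data and more careful estimation); it simply treats a course as “hard” to the extent that its average grade falls below the overall average, and credits students accordingly. All names and grade records are hypothetical.

```python
# Toy difficulty-adjusted GPA: a crude stand-in for the psychometric
# approach described above, using only hypothetical grade records.
from collections import defaultdict

# grades[student][course] = grade earned on a 4.0 scale (hypothetical data)
grades = {
    "alice": {"easy101": 4.0, "hard301": 3.0},
    "bob":   {"easy101": 3.7, "hard301": 2.3},
    "carol": {"easy101": 4.0},
}

def course_difficulty(grades):
    """Estimate a course's difficulty as the gap between the overall mean
    grade and that course's mean grade; harder courses score higher."""
    all_grades = [g for rec in grades.values() for g in rec.values()]
    overall_mean = sum(all_grades) / len(all_grades)
    by_course = defaultdict(list)
    for rec in grades.values():
        for course, g in rec.items():
            by_course[course].append(g)
    return {c: overall_mean - sum(gs) / len(gs) for c, gs in by_course.items()}

def adjusted_gpa(grades):
    """Average each student's grades after adding back the difficulty of
    each course, so a B in a hard course can outrank an A in an easy one."""
    diff = course_difficulty(grades)
    return {
        student: sum(g + diff[c] for c, g in rec.items()) / len(rec)
        for student, rec in grades.items()
    }
```

Under this scheme, a student who takes only lenient courses gains nothing in class rank relative to a peer who earns slightly lower grades in demanding ones, which is exactly the incentive problem simplistic rankings create. A real implementation would need to separate a course's genuine difficulty from the ability of the students who select into it, which is where the overlapping-questions logic of psychometric models earns its keep.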
If student burnout prior to entering college is a key factor, students taking a “gap year” between high school and college, as is now allowed and even encouraged by some institutions, may well be beneficial. Such gap years are only likely to help, however, if this time is not used for additional resume building or as an opportunity to reapply for admission.
More generally, we may need to reduce the competition to get into the so-called “best colleges.” Some institutions might claim to have done this by no longer requiring SAT scores. A moment’s thought, however, reveals that the effect of this policy may be just the opposite; that is, it increases the pressure to achieve. Given that it is difficult, though not impossible, to increase one’s SAT scores, not requiring scores puts more emphasis on those outcomes that students can affect—high school grades and extracurriculars.