In 2006, the General Social Survey (GSS), a survey of trends in social attitudes conducted nearly every year since 1972, asked a series of questions about the scientific knowledge of the U.S. adult population. Among the questions posed was, “Now, does the Earth go around the Sun, or does the Sun go around the Earth?” Overall, just under three-quarters of the respondents correctly answered that the Earth goes around the Sun; but nearly one in five responded that the Sun goes around the Earth, and nearly 10% said that they didn’t know. Sociologists Omar Lizardo and Jeremy Freese showed that the prevalence of scientific misunderstanding varied among adults with different levels of education. Among those with a high school diploma or less, approximately 37% said either that the Sun went around the Earth or that they didn’t know. But among those with at least some college, 19% reported either that the Sun went around the Earth or that they didn’t know.

Findings such as these clearly establish that if the state of American higher education is a glass, it is either half empty or half full, depending on one’s initial assumptions. If we assume that the difference in understanding between adults who have gone to college and those who have not is properly attributed to the college experience, then these results suggest that, among the many things students might learn in college, some come to know that the Sun, not the Earth, is at the center of the solar system. On the other hand, the fact that nearly one in five adults who reported having attended at least some college either believed that the Earth was at the center or did not know is a powerful testament to the limited influence of the American education system on the children and youth it serves. In the state of New York, for example, understanding that the Earth revolves around the Sun is part of the core curriculum for elementary science in grades K-4.

When the GSS results were released, some sociologists were quick to direct attention away from the sheer volume of scientific misunderstanding within the U.S. adult population, focusing instead on group variations in these misunderstandings. The idea that there was a great deal of scientific illiteracy, even among the products of an education system once touted as among the best in the world, really wasn’t a surprise. Although Sputnik had propelled Congress in the late 1950s to invest in the infrastructure of American education at all levels of the system, there had been a steadily escalating rhetoric of decline dating back at least to the 1983 report of the National Commission on Excellence in Education, A Nation at Risk. As the fate of the nation’s economy has become ever more tightly tied, at least rhetorically, to the quality of the American education system, that rhetoric has become ever more shrill: if we do not move quickly to increase what our children learn in school, our future will be bleak.

Although the primary target for this new wave of educational accountability has been the K-12 school system, colleges and universities are increasingly coming under scrutiny. The Commission on the Future of Higher Education, convened by then U.S. Secretary of Education Margaret Spellings, was a natural successor to A Nation at Risk, expressing concern over college affordability and access, the quality of postsecondary instruction, and the weak and inconsistent accountability of colleges and universities to the students who attend them and the taxpayers who support them.

With concerns about the well-being of the American education system, and the role of undergraduate study in particular, rising on the policy agenda, Academically Adrift is the right book at the right time. Arriving on the heels of yet another round of international comparisons showing American youth to be in the middle or at the bottom of the pack, the book both reinforces what we think we already know and introduces new understandings of the problem of limited learning on college campuses. Sociologists Richard Arum and Josipa Roksa draw on their encyclopedic grasp of the literature on American colleges and universities to place their key findings in context and to identify possible policy directions for addressing the problem they illuminate. But as is often the case, there are limits to what social science can do, even when carried out with great care.

The key findings of the study have already received a great deal of attention. If we take the primary mission of undergraduate education to be the development of broad cognitive competencies such as critical thinking, complex reasoning, and the ability to communicate ideas and arguments clearly and effectively in writing, then most colleges and universities are falling far short of their responsibilities. Arum and Roksa find, based on students’ performance on the Collegiate Learning Assessment (CLA), a new tool for assessing institutional influences on students’ development of higher-order skills, that many students are learning very little during their undergraduate years. Students enter college with widely varying levels of these skills, and those who demonstrate greater competence in thinking and reasoning at entry, perhaps due to their prior academic preparation and family backgrounds, tend to perform better on the CLA during and at the end of college. But exposure to college itself does not, on average, have much of an impact on student performance on the CLA, leading to the claim that there is limited learning taking place on the undergraduate campus. (Although the authors don’t draw this interpretation, the variability in entering students’ performance, coupled with the limited impact of the college experience on students’ thinking and reasoning skills, also leads to the perverse conclusion that most collegiate learning takes place in high school.)

Arum and Roksa juxtapose this finding, which they acknowledge rests on the narrow shoulders of the CLA measure, with evidence that the academic demands undergraduate schools place on their students are shockingly limited. The students in their sample reported spending an average of only 12 hours per week on homework outside of class, and more than a third said that they spent less than 5 hours per week studying and attending to their course requirements. One-half of the students they studied reported that every course they took in the preceding semester required fewer than 20 pages of writing, and a third reported that all of their courses demanded fewer than 40 pages of reading per week. Taken together, one-quarter of the undergraduates in their study took no courses that imposed either of these reading and writing demands. Even acknowledging that the content of the reading and writing in which students engage might matter as much as its quantity, Arum and Roksa are on firm ground in arguing that this level of academic rigor is unlikely to lead to improvement over time on an assessment whose tasks reflect critical thinking, complex reasoning, and written communication skills.

There is plenty of blame to go around in explaining the weak academic engagement of undergraduate students at most institutions, and few stakeholders escape the authors’ withering gaze. The authors draw on historical accounts of developments in the relationship between postsecondary institutions and their external environments, but in truth there is little new under the sun. For example, the Faustian bargain struck by students and their professors to exchange limited academic demands and inflated grades proffered by faculty for, well, limited academic demands and inflated grades awarded by students was first described by Willard Waller in 1932. The tyranny of this market is not a 21st-century phenomenon. And the prestige structure of higher education in the United States has long rested on the reputations of colleges and universities as sites for knowledge production and application, and on their ability to recruit academic and social elites as students, much more so than on their capacity to transform the intellectual characteristics of the students who enroll.

It nevertheless is the case that findings of the sort reported by Arum and Roksa are a kind of education policy Rorschach inkblot, enabling pundits to attach their preferred policy agendas to the evidence of limited learning. Thus, we can expect calls for increasingly elaborate and technically sophisticated accountability systems paralleling those that have been developed for the evaluation of U.S. public schools and their teachers. The Spellings Commission in 2006 cited the CLA, the assessment tool at the heart of this book’s analysis, as an exemplary way of assessing student learning in college, putting it in play as a candidate for measuring student outcomes alongside such indirect measures of learning as graduation and retention rates.

Arum and Roksa question whether externally imposed accountability systems are likely to remedy the problems they have identified, and rightly so, given the weak links we observe between accountability and the teaching and learning that takes place in K-12 and college classrooms. The U.S. has a poor track record in holding K-12 schools accountable for their outcomes, and there may well be more consensus on the desired learning outcomes of elementary and secondary schooling than on those of postsecondary education, if only because these earlier outcomes are more generic. Witness the snail’s pace with which learning outcomes on assessments such as the National Assessment of Educational Progress have improved over time, despite increasing investments in schools and a series of efforts to ratchet up standards dating back at least to A Nation at Risk in 1983. So a healthy dose of skepticism is warranted regarding the prospect that accountability tools such as the CLA will become a powerful vehicle for improving teaching and learning.

The authors are, however, reluctant to embrace too strongly the critiques of American schooling offered by scholars such as David Labaree, who argues that most school reform efforts are destined to fail because they cannot address the increasing power of markets and consumer demand in shaping how schools and colleges work. Lofty goals such as “college for all,” Labaree warns, can serve as the engine of educational and social inequality, decoupling genuine learning from the market for credentials that can be exchanged for social and economic advantage. In this sense, the higher education system we have is exactly the one we deserve, given the multiple and conflicting hopes and goals that the American people place on their schools. But it has always been easier to critique the workings of educational institutions than to offer persuasive roadmaps for how to improve them.

Arum and Roksa’s recommendations are, therefore, earnest and predictable. We would not worry so much about the cultivation of critical thinking and complex reasoning skills in college, for example, if students came to college better prepared for rigorous academic challenges. Higher-education leaders need to support a “culture of learning” in which every component of an institution takes collective responsibility for promoting undergraduate learning, guided by clearly articulated plans to prioritize that goal with institutional resources. Faculty must be trained to demand more from their students in ways that reflect what we know about best practices in postsecondary instruction. And institutions must develop homegrown assessments of student learning, perhaps including the CLA, that can serve as indicators of progress toward improved undergraduate teaching and learning.

What’s missing—and it is by design, given this monograph’s reliance on the CLA—is any consideration of subject matter. As important as metacognitive skills such as critical thinking and complex reasoning might be to the myths that undergraduate institutions construct about their contributions to society, there is another, competing myth about the primacy of big ideas rooted in the disciplines. Arum and Roksa’s methods provide little purchase on what subject matter is learned in the undergraduate career. And there is evidence that students often do learn a great deal about key disciplinary ideas and methods, even in general education courses in the first two years of college. For example, in an introductory one-semester physics course, the net change from pre-test to post-test in understanding Newtonian mechanics is typically about one standard deviation, which represents considerably more growth than is observed over time on the CLA. In this sense, the authors may be overstating the gravity of the problem of undergraduate learning.

Taking subject matter seriously considerably complicates the task of improving undergraduate teaching and learning. But doing so might yield greater benefits, as most postsecondary faculty have strong disciplinary identities and view themselves as teaching particular content, not “generic” thinking and reasoning skills. We can craft an agenda for advancing critical thinking, complex reasoning, and written communication skills in and about disciplinary ideas. And if we do, the question invited by Arum and Roksa’s recommendation for developing an institutional “culture of learning”—the learning of what?—might answer itself.