Society, Volume 50, Issue 3, pp 230–235

Stakeholder and Public Responses to Measuring Student Learning


Symposium: Higher Education and the Challenges of Reform

DOI: 10.1007/s12115-013-9648-y

Cite this article as:
Arum, R. Soc (2013) 50: 230. doi:10.1007/s12115-013-9648-y

In the fall of 2005, the Council for Aid to Education (CAE) began an ambitious attempt to measure school-level differences in longitudinal growth in generic collegiate competencies. The CAE had worked with psychometricians at the RAND Corporation and Stanford University to develop a measure – the Collegiate Learning Assessment (CLA) – that relied not on a multiple-choice exam, but instead on a performance task that would challenge students to read a set of documents, think critically, and synthesize the information to produce a written response to a specific task that a future employer might assign. While not as precise an instrument as the multiple-choice assessments currently in use that measure similar competencies, the CLA assessment given to students arguably was more closely aligned with the type of generic learning that college educators have long professed to aim to produce.1 The CLA performance task was a measure on which one would expect a student to do better over time if college enhanced the ability to think critically, reason systematically and communicate effectively in writing.

After the first round of testing a cohort of entering college freshmen in several dozen colleges and universities, the CAE opened up their project to Josipa Roksa and me as outside researchers at the Social Science Research Council. In the spring of 2007, we began our work collecting supplementary survey and transcript data that would allow us to move beyond school-level comparisons to identify factors associated with individual-level variation in student improvement on the CLA performance task. This effort led to a series of reports released by the Social Science Research Council as well as Academically Adrift: Limited Learning on College Campuses, published by the University of Chicago Press in January 2011. The book, in spite of its 67 pages of statistical tables and prose that were described as “a dense tome that could put Ambien out of business,”2 was one of those rare social science books that found a readership and influence outside of typical disciplinary boundaries. In this paper, at the suggestion of Society editor Jonathan Imber, I provide some reflections on why this might be the case, the types of responses that the work generated, and what this might tell us about the future of measuring student learning in higher education. Before beginning this exercise, however, I will briefly summarize the main findings of the book to provide a context for the more general reflections that follow.

Our research on undergraduate learning in higher education portrayed a system with relatively low standards in general, but significant variation across institutions, majors and individuals. Specifically, Roksa and I found that large numbers of students progressed through higher education with relatively little asked of them academically. For example, half the students reported to us during the spring of their sophomore year that they did not have a single class that required more than twenty pages of writing over the course of the prior semester; 32% of students did not have a single class the prior semester that required forty pages of reading per week. On average, students spent less than two hours per day preparing for class – half the time college students spent on these pursuits in the 1960s. One-third of that time was spent working with peers – an activity that was not associated with any measurable gains on the CLA assessment. 36% of students reported spending five or fewer hours per week studying alone. These students, who spent less than an hour per day studying alone outside of class, were able to maintain a 3.2 grade point average in their coursework – a testament to how significantly academic standards have deteriorated on college campuses.

Given the state of undergraduate education, it is not surprising that college students in general demonstrate only modest improvement when generic competencies are measured by performance assessments such as the CLA. While we found students in every college and university examined who were applying themselves to their studies and showing meaningful gains on the assessment given, we also found large numbers of students who were not.

While the book documenting the academic experiences of more than two thousand students across two dozen diverse colleges and universities presented empirical findings from a significant research endeavor, many of the general conclusions and findings were consistent with a larger, long-standing body of research on the topic. Why did Academically Adrift capture more attention than one would expect? We can potentially find an answer to this question by considering a recent study by Pamela Barnhouse Walters and Annette Lareau that focuses on why some educational research ends up being influential. While Walters and Lareau attempt to distinguish between scholarship that influences other educational research and scholarship that influences educational policy, we are too close to the publication of our work to know whether there will be a long-term effect on either of these audiences from this study. Nevertheless, Walters and Lareau argue that “it is the consistency of the research findings with prevailing political concerns, with prevailing understandings of what is wrong with schools and schooling, and with already-formed policy preferences of powerful social groups” that determines its influence on policy development (Lareau and Walters 2008:214). According to Chester Finn, educational research shapes policy, “when advocates, policy makers and journalists… (are) able to use these studies to devise, justify, or sustain a reform agenda. Thus the research is less a source of change and more an ‘arsenal’ for those already fighting the policy wars.”3

If we are to take this insight seriously, we then would want to consider how the various responses to Academically Adrift can help illuminate the different issues and interests currently at play in higher education. These responses can be thought of as falling largely on a continuum from antipathy to sympathetic receptivity towards the work, based on the extent to which the findings are aligned with “already-formed policy preferences.” Of course, the work, in a manner similar to any social scientific study, has also received its fair share of scholarly critique, including an intellectually rich symposium organized by Society. This is as it should be and will not be addressed here. Rather, I will focus my reflections on responses that appeared in the general media (e.g., the New York Times, the Washington Post, the Chicago Tribune, the Atlantic, the New Yorker, the New York Review of Books, Doonesbury, etc.) and the educational press (e.g., the Chronicle of Higher Education, Inside Higher Education, Education Week, etc.).

I will discuss five broad categories of ideal-typical responses from various higher education stakeholders and the public, beginning with parties most hostile to the work – i.e., advocates of greater investment and administrative focus on social engagement on college campuses. I then will identify the general indifference to the work shown by government officials and state regulators. Next, I will consider the largely positive, but mixed response expressed by those committed to defending the humanities and liberal arts model of higher education as well as faculty who have grown concerned with declining academic rigor on college campuses. The final two categories of responses come from two distinct parties that have embraced the work most fully – those individuals within the system who have institutional responsibility for assessing the quality of student learning and those concerned parties external to the system who are troubled by the rising costs of higher education. Finally, I will provide some reflections on what these ideal-typical responses suggest about possible future scenarios involving the measurement of learning in higher education.

Social Engagement Advocates

Perhaps not surprisingly the most hostile response to the work came from those associated with promoting the expansion of student service infrastructure on college campuses. In recent decades, as we noted in Academically Adrift, this is the area of colleges and universities that has grown most rapidly. As full-time faculty positions have decreased in relative terms, quasi-professional staffing on campus has increased and been charged with taking greater responsibility for ensuring the general well-being and promoting the social engagement and attachment of students with collegiate life. Social engagement policy and programs on campuses were promoted in part to reduce the high attrition rates of college students – since the more individuals are integrated into campus social life, the less likely they are to drop out. In recent decades, however, advocates of increased social engagement went a step further. Educational researchers identified an association between self-reported learning and social engagement. Based on these findings, social engagement was advocated as a strategy to promote both retention and learning.

Academically Adrift was an explicit challenge to this research and programmatic paradigm. It identified factors associated not with self-reported learning, but with growth in objective student performance on a standardized assessment indicator. In our project, we found no evidence that social engagement was associated with positive learning outcomes. Our work thus was understood as standing in direct opposition to those promoting such investments in college, as well as to a research apparatus that was dependent on the use of survey instruments focused on self-reported learning. Criticism from these quarters sought to discredit the work by challenging the CLA measure. The CLA was argued to be incapable of measuring generic collegiate learning – despite the fact that in our study it was demonstrated to be sensitive to instruction.

Roksa and I, however, were the first to admit to the limitations of the CLA. All assessment instruments are by definition limited and imperfect. Clearly, though, the CLA was an improvement over the widespread use of self-reported learning measures that some of our critics had utilized in their work. In addition, other researchers relying on a different national sample and an alternative objective multiple-choice assessment instrument designed to track higher order generic skills, such as critical thinking and complex reasoning, generated findings largely similar to those we reported on the CLA measure.4 In addition, the low levels of academic engagement we found in our work were consistent with a large number of other studies, including findings from some of our critics’ own survey research. We were not exaggerating the limited learning occurring in these settings, but instead attempting to report descriptive findings on the state of higher education accurately.

Government Officials and State Regulators

Government officials and state regulators expressed mixed sentiments towards our research or remained largely indifferent. Roksa and I consistently asserted in public forums that increased federal accountability would be counterproductive to addressing the problems of limited academic rigor and learning in higher education, and politicians responsive to middle-class constituents (including parents, students and those employed by academe) were, with few exceptions, happy to adhere to this advice. Government actors have been largely willing to continue to rely on existing mechanisms to address problems associated with undergraduate learning – such as systems of accreditation and institutional governance boards. Policies focusing explicitly on increased college access (e.g., “College for All”) and the extension of federal financial higher education supports (e.g., Pell Grants, federal research overhead and student loans) have continued to receive greater legislative attention given that they are broadly popular and easier to advance politically.

While Roksa and I have expressed significant reservations about measuring learning for accountability purposes, we have been equally adamant that the federal government should assume an enhanced role for developing the research infrastructure necessary to advance understanding of learning in higher education. For example, following the publication of our research, Roksa and I in a New York Times opinion piece argued that the Department of Education should “make available nationally representative longitudinal data on undergraduate learning outcomes for research purposes, as it has been doing for decades for primary and secondary education.”5 Government agencies, however, have been slow to respond.

Why the federal government provides billions of dollars in support of financing higher education, but has not made the minimal investments necessary to develop the social scientific infrastructure required to improve our understanding of higher education learning, was suggested by comments made at the National Advisory Committee on Institutional Quality and Integrity meeting (February 3–4, 2011). Following my recommendation at this national hearing that the federal government should produce longitudinal data on student learning, the Vice Chair of the committee, Arthur Rothkopf, commented:

If I might just give a little historical context to your excellent suggestion. I was a member of the Commission known as the Commission on the Future of Higher Education in 2006, appointed by the then-Secretary. Among the recommendations, which were endorsed by 18 of the 19 members, was the kind of – we go ahead and get the exact kind of information you’re talking about, tracking students through the system, far more data so that the kind of work that you’re talking about that is now available in K-12 would be available in higher education. Unfortunately, the higher education community somehow didn’t find, didn’t really want to do that, and they went to the Congress and Congress did not approve it. But all I’m saying is I think there’s a history there. I think, I happen to believe that what you’re suggesting is right, and that the higher education community should reconsider its views in this matter, so that we do have the data, so we do really know what student outcomes are.6

Rothkopf’s claim that the higher education industry went to Congress to block the collection of data suggests the extent to which specific higher education actors are hostile to the expansion of measurement and understanding of higher education learning outcomes.

The international social scientific community this fall again witnessed a similar challenge to the collection of data on higher education learning. The Organisation for Economic Co-operation and Development (OECD) recently launched the Assessment of Higher Education Learning Outcomes feasibility study with data collected from students at colleges and universities in seventeen countries. The OECD is currently undertaking a review of the project to explore the extent to which such data collection is technically feasible and scientifically sound – as a disclosure, I am a member of an expert panel convened to provide input on this matter. The technical issues around cross-national measurement of this character are not trivial. However, objections voiced to the OECD from the American Council on Education, the Association of Universities and Colleges of Canada, and the European University Association appear to have focused not on the real technical challenges to the endeavor that warrant extensive discussion, but instead on the fear that the results might be used in a manner that would reflect negatively on student performance in the U.S. and other higher education systems.7

Humanities and Liberal Education Advocates

Another vocal set of constituents who have largely embraced the work are faculty and individuals committed to defending and promoting traditional models of liberal education and the humanities. While some in this camp express deep reservations about the use of any standardized objective measure of learning, including the CLA, many others have come to embrace the instrument as an attempt to provide an authentic assessment of the broad set of competencies associated with traditional liberal models of collegiate education. Our finding that students taking humanities and social science coursework have greater exposure to classes requiring significant reading and writing, as well as greater growth on the CLA, has been embraced as a rationale that can justify these disciplinary fields in the face of continuing enrollment pressures.

Louis Menand in The New Yorker, for example, emphasized the following in his account of Academically Adrift: “The most interesting finding is that students majoring in liberal-arts fields—sciences, social sciences, and arts and humanities—do better on the C.L.A., and show greater improvement, than students majoring in non-liberal-arts fields such as business, education and social work, communications, engineering and computer science, and health.”8 In a similar fashion, Anthony Grafton in The New York Review of Books noted in his review of our work: “Nowadays the liberal arts attract a far smaller proportion of students than they did two generations ago. Still, those majoring in liberal arts fields—humanities and social sciences, natural sciences and mathematics—outperformed those studying business, communications, and other new, practical majors on the CLA.”9 Stanley Fish, in an online column for The New York Times, while lamenting that “the only way humanist educators and their students are going to get to the top is by hanging on to the coattails of their scientist and engineering friends as they go racing by,” noted that our work confirmed arguments made by Diane Ravitch and Martha Nussbaum about the negative consequences of narrowing the curriculum.10

In addition to arts and sciences faculty committed to humanist educational models, of course, there are large numbers of faculty on college campuses who have grown increasingly dismayed about declining academic rigor at their institutions. These parties found in the work empirical evidence that their concerns were not simply a product of recollection bias and were thus not so easily dismissed as old-fogeyism.

Advocates for Instructional Assessment

If the work confronted built-in internal opposition from those promoting expanded student services, our research found a parallel set of natural allies amongst those who promoted or had begun to use objective measures of student performance. These parties included those who had been involved in the Voluntary System of Accountability, the New Leadership Alliance for Student Learning and Accountability, the professional associations that serve higher education assessment specialists, and those involved in the Wabash National Study. While these parties often raised methodological challenges related to the specific instrument or offered alternative interpretations of our results, in general the work was embraced for highlighting and advancing public awareness of the need for greater objective assessment on college campuses.

Ernest Pascarella, Charles Blaich, Georgianna Martin and Jana Hanson, for example, were able to utilize data from their longitudinal Wabash study, which relies on an alternative objective measure, to see whether our results were robust to replication. Although Pascarella and colleagues, in the May–June 2011 issue of Change, raised concerns about over-interpreting the modest and/or non-existent student change scores, they also largely replicated our earlier results, noting:

When someone delivers news as potentially unsettling as that delivered by Arum and Roksa, it is almost axiomatic that their methods will be questioned, as will the robustness of their findings. Our attempt to cross-validate some of Arum and Roksa’s major findings with the Wabash National Study is not an attempt to answer all those questions. Nevertheless, the findings from the WNS, based on an independent sample of institutions and students and using a multiple-choice measure of critical thinking substantially different in format than the Collegiate Learning Assessment, closely match those reported by Arum and Roksa.11

In a similar fashion, David Paris, Executive Director of the New Leadership Alliance for Student Learning and Accountability, commented in Faculty Focus, “what Academically Adrift does regarding higher education as a whole is really within reach of institutions separately. That is, most institutions have or can obtain data on important indicators of outcomes, surveys of student experiences, and other data and could do similar kinds of analyses and carefully and openly discuss what they see.” Paris continues, “This book models the kind of analysis that should be an ordinary part of the practice of higher education.”12

A Public Concerned with College Costs

In recent decades, as college costs have escalated at roughly twice the rate of inflation and financing has increasingly relied on the growth of both federal subsidies and private student debt, various parties have expressed greater concern over the value of these public and private investments. Since the release of Academically Adrift in January 2011, coverage of our work has increasingly come to focus on the issue of college costs and value. A search of news references to Academically Adrift in Lexis-Nexis Academic Universe clearly demonstrates this trend. In the first three months following publication, the book received 115 references in indexed news sources, with 43% of these references also including the search terms “cost!” or “value”. Over the next six months, the book received 93 news references with 55% utilizing these terms; the next six months had 61 references with 64% mentioning these terms; and the last six months, ending on October 15, 2012, included 54 news references with 74% invoking these terms. The trends in news reporting suggest that the issues of cost and value have increasingly come to dominate news coverage of our research.

Coverage on the New York Times editorial page illustrates this focus. Bob Herbert, the first of three New York Times columnists to cover our work, began his commentary: “The cost of college has skyrocketed and a four-year degree has become an ever more essential cornerstone to a middle-class standard of living. But what are America’s kids actually learning in college? For an awful lot of students, the answer appears to be not much.”13 Gail Collins’s column discussed the work in relationship to the fact that, during the year she wrote, “the total amount of outstanding student loans will pass the $1 trillion threshold for the first time.”14 David Brooks also drew similar lessons, noting that “colleges are charging more money, but it’s not clear how much actual benefit they are providing.” Brooks in his column also raised the specter of federal accountability, asking whether assessment “should be tied to federal dollars or (be) more voluntary. Should we impose a coercive testing regime that would reward and punish schools based on results? Or should we let schools adopt their own preferred systems?”15

The Dynamics of Assessment of Higher Education

What, if anything, can be learned from this exercise? I believe the analysis suggests the dynamics that are likely to play out with respect to the assessment of higher education learning. Specifically, looking at actors within the higher education system, one observes significant divisions in attitudes towards the systematic measurement of learning outcomes. Many higher education administrators, even when sympathetic to initiatives in this area, find it difficult to respond effectively given competing institutional incentives, organizational inertia and the collective action problem – fearing that unless the field of higher education moves as a whole, their individual school will be disadvantaged when competing for clients perceived to value student consumer-related services over academic rigor and enhanced learning. Other than from institutional researchers directly involved with this work, only limited support exists: from those elements of the faculty concerned with the decline of academic rigor, and from those humanities professors who recognize the potential for assessments to validate traditional models of liberal education. In addition, the parts of the university indifferent or opposed to the objective measurement of learning are institutionally in ascendance, while those more sympathetic to assessment of this character are marginal to the core functioning of the university or in actual decline. Given this reality, it is unlikely that, left to its own devices, the system would generate much internal movement forward on the systematic measurement of learning outcomes.

Higher education, however, is heavily reliant on public subsidy. While U.S. legislators and government regulators have not yet fully embraced this issue, I am skeptical that they will be able to sit on the sidelines for too much longer. Colleges and universities have largely been unable or unwilling to make significant changes to address either costs or instructional quality issues. In the face of declining state government support, it is unlikely that increased federal revenue and greater reliance on student loans will be sufficient to put off further consideration of these issues. And higher education leaders, indeed, are beginning to face this reality. For example, in comments this past month the President of Stanford University candidly asserted that the public university model was no longer viable: “You just have to blow up the system.” Former Princeton President William Bowen at the same event commented that “there’s going to have to be a re-engineering of all this.”16

As legislators and regulators move to address the cost issues at the center of public concerns, it is quite possible that they will move from indifference to growing acceptance of, and reliance on, measuring student learning outcomes. Without measuring these outcomes, it is virtually impossible to determine whether alternative versions of higher education, including new approaches that rely heavily on digital media, are either cost effective or desirable. The U.S. higher education system is comparatively expensive – approximately twice the cost of European alternatives.17 While it is understandable why some U.S. higher education industry leaders have opposed the measurement of student learning outcomes in the past, their ability to resist these changes in the future is questionable. For those interested in improving higher education access, the measurement of learning outcomes is ultimately likely unavoidable. Accountability, however, should not occur at the federal level, but lower in the system, where governance boards, accreditors and consumers can ask institutions: How are you measuring learning? What areas need improvement? What steps are you taking to improve these areas of weakness? National and international efforts to measure higher education learning should focus on building research capacity to better understand and address these issues at the institutional and classroom levels.


1. The CLA includes three components: a performance task as well as “make an argument” and “break an argument” essay tasks. Our research focuses solely on the first component of the assessment, the performance task.

2. Kathleen Parker, “Our Unprepared Graduates,” Washington Post (September 30, 2011).

3. As cited in Lareau and Walters 2008.

4. Ernest T. Pascarella, Charles Blaich, Georgianna L. Martin and Jana M. Hanson, “How Robust Are the Findings of Academically Adrift?” Change: The Magazine of Higher Learning (May–June 2011).

5. Richard Arum and Josipa Roksa, “Your So-Called Education,” The New York Times (May 15, 2011).

6. National Advisory Committee on Institutional Quality and Integrity, Report of the February 3–4, 2011 Meeting, Appendix C, p. 40.

7. Doug Lederman, “Rifts over Global Test of Learning,” Inside Higher Education (September 20, 2012). See also Ben Wildavsky, “Measuring Student Learning in a Global Education Marketplace,” The Quick and the Ed (September 25, 2012).

8. Louis Menand, “Live and Learn: Why We Have College,” The New Yorker (June 6, 2011).

9. Anthony Grafton, “Our Universities: Why Are They Failing?” The New York Review of Books (November 24, 2011).

10. Stanley Fish, “Race to the Top of What? Obama on Education,” New York Times blog (January 31, 2011).

11. Ernest T. Pascarella, Charles Blaich, Georgianna L. Martin and Jana M. Hanson, “How Robust Are the Findings of Academically Adrift?” Change: The Magazine of Higher Learning (May–June 2011).

12. David Paris, “Holding Up a Mirror to Higher Education,” Faculty Focus (March 4, 2011).

13. Bob Herbert, “College the Easy Way,” The New York Times (March 5, 2011).

14. Gail Collins, “Humming to Higher Education,” The New York Times (October 22, 2011).

15. David Brooks, “Testing the Teachers,” The New York Times (April 20, 2012).

16. “Universities Suffering from Near Fatal Cost Disease,” Stanford University News (October 12, 2012).

17. Organisation for Economic Co-operation and Development, Education at a Glance 2012, Table B1.1a (Paris, 2012).

Copyright information

© Springer Science+Business Media New York 2013