
The learning outcomes race: the value of self-reported gains in large research universities


Throughout the world, measuring “learning outcomes” is viewed by many stakeholders as a relatively new way to judge the “value added” of colleges and universities. The ability to measure learning gains accurately also offers a diagnostic tool for institutional self-improvement. This essay discusses the marketisation of learning outcomes tests and the relative merits of student experience surveys in gauging learning outcomes, analyzing results from the University of California’s Undergraduate Experience Survey (Student Experience in the Research University Survey: SERU-S). The SERU-S includes self-reports by seniors who entered as freshmen on six educational outcomes: analytical and critical thinking skills, writing skills, reading and comprehension skills, oral presentation skills, quantitative skills, and skills in a particular field of study. Although self-reported gains are sometimes regarded as having dubious validity compared with so-called “direct measures” of student learning, our analysis shows that the SERU survey design has many advantages, especially in large, complex institutional settings. Without excluding other forms of gauging learning outcomes, we conclude that, designed properly, student surveys offer a valuable and more nuanced alternative for understanding and identifying learning outcomes across the broad tapestry of higher education institutions. We discuss the politics of the learning outcomes race, the validity of standardized tests such as the Collegiate Learning Assessment (CLA), and what we can learn from student surveys such as the SERU-S. We also suggest there is a tension between what meets the accountability desires of governments and the needs of individual universities focused on self-improvement.



  1.

    In 2011 the SERU Consortium included 18 major US research universities—the nine general campuses of the University of California System, plus the Universities of Michigan, Minnesota, Florida, Texas, Rutgers, Pittsburgh, Oregon, North Carolina and the University of Southern California. Fifteen are members of the prestigious Association of American Universities (AAU). For further information on the SERU Consortium, see:

  2.

    For the student experiences and perceptions category of the VSA, participating institutions are required to report data from one of four surveys: the College Student Experiences Questionnaire, the College Senior Survey, the National Survey of Student Engagement, or the SERU Survey (or what is known in the UC system as the University of California Undergraduate Experience Survey).

  3.

    The Spellings Commission was established on September 19, 2005, by U.S. Secretary of Education Margaret Spellings. The nineteen-member Commission was charged with recommending a national strategy for reforming post-secondary education, with a particular focus on how well colleges and universities are preparing students for the 21st-century workplace, as well as a secondary focus on how well high schools are preparing students for post-secondary education. In its report, released on September 26, 2006, the Commission focuses on four key areas: access, affordability (particularly for non-traditional students), the standards of quality in instruction, and the accountability of institutions of higher learning to their constituencies (students, families, taxpayers, and other investors in higher education).

  4.

    UC President Robert C. Dynes quoted in Scott Jaschik, “Accountability System Launched,” Inside Higher Education, Nov. 12, 2007.

  5.

    Speech before the National Press Club, reported in The Chronicle of Higher Education, Feb. 1, 2008.

  6.

    Authors Richard Arum of New York University and Josipa Roksa of the University of Virginia charged that many undergraduates are "drifting through college without a clear sense of purpose." Based on CLA data, they tracked the academic gains (or stagnation) of 2,300 students of traditional college age enrolled at a range of 4-year colleges and universities. Forty-five percent of students "did not demonstrate any significant improvement in learning" during the first 2 years of college, and 36% "did not demonstrate any significant improvement in learning" over 4 years of college. Those students who did improve tended to show only modest gains: on average 0.18 standard deviations over the first 2 years of college and 0.47 over 4 years. What this means is that a student who entered college in the 50th percentile of his or her cohort would move up to the 68th percentile 4 years later, but that is the 68th percentile of a new group of freshmen who have not experienced any college learning.

  7.

    See the AHLO website.


  1. Adelman, C. (2006). Border blind side. Education Week, 26(11).

  2. Arum, R., & Roksa, J. (2008). Learning to reason and communicate in college: Initial report of findings from the longitudinal CLA study. New York, NY: Social Science Research Council.

  3. Arum, R., & Roksa, J. (2011). Academically adrift: Limited learning on college campuses. Chicago: University of Chicago Press.

  4. Assessment of Higher Education Learning Outcomes. (2011). Project Update, Organisation for Economic Co-operation and Development.

  5. Banta, T. (2006). Reliving the history of large-scale assessment in higher education. Assessment Update, 18(4), 3–4, 15.


  6. Banta, T. (2007). A warning on measuring learning outcomes. Inside Higher Education, January 26.

  7. Banta, T. (2009). Assessment for improvement and accountability. Provost’s Forum on the Campus Learning Environment, University of Michigan, February 4, 2009.

  8. Braun, H. (2008). Vicissitudes of the validators. 2008 Reidy Interactive Lecture Series, Portsmouth, NH. Last accessed January 24, 2012.

  9. Chatman, S. (2007). Institutional versus academic discipline measures of student experience: A matter of relative validity. Berkeley: Center for Studies in Higher Education, University of California.


  10. Consortium on Financing Higher Education (COFHE). (2008). Assessment: A fundamental responsibility.

  11. Gonyea, R. M. (2005). Self-reported data in institutional research: Review and recommendations. In P. D. Umbach (Ed.), New directions for institutional research (Vol. 127, pp. 73–89). San Francisco: Jossey-Bass.


  12. Hill, L. G., & Betz, D. I. (2005). Revisiting the retrospective pretest. American Journal of Evaluation, 26(4), 501–517.


  13. Hosch, B. J. (2010). Time on test: Student motivation and performance on the collegiate learning assessment: Implications for institutional accountability. Paper presented at the Association for Institutional Research conference, June 2, 2010.

  14. Howard, G. S. (1980). Response-shift bias: A problem in evaluating interventions with pre/post self-reports. Evaluation Review, 4, 93–106.


  15. Howard, G. S., & Dailey, P. R. (1979). Response-shift bias: A source of contamination of self-report measures. Journal of Applied Psychology, 64, 144–150.


  16. Howard, G. S., Ralph, K. M., Gulanick, N. A., Maxwell, S. E., Nance, D. W., & Gerber, S. K. (1979). Internal invalidity in pretest-posttest self-report evaluations and a re-evaluation of retrospective pretests. Applied Psychological Measurement, 3, 1–23.

  17. Klein, S., Benjamin, R., & Shavelson, R. (2007). The collegiate learning assessment: Facts and fantasies. Evaluation Review, 31(5), 415–439.


  18. Klein, S., Freedman, D., Shavelson, R., & Bolus, R. (2008). Assessing school effectiveness. Evaluation Review, 32(6), 511–525.


  19. Klein, S., Kuh, G., Chun, M., Hamilton, L., & Shavelson, R. (2005). An approach to measuring cognitive outcomes across higher education Institutions. Research in Higher Education, 46(3), 251–276.


  20. Krosnick, J. A. (1991). Response strategies for coping with the cognitive demands of attitude measures in surveys. Applied Cognitive Psychology, 5, 213–236.


  21. Lam, T. C. M., & Bengo, P. (2003). A comparison of three retrospective self-reporting methods of measuring change in instructional practice. American Journal of Evaluation, 24(1), 65–80.


  22. Pike, G. R. (2006). Value-added measures and the collegiate learning assessment. Assessment Update, 18(4), 5–7.


  23. Shulman, L. S. (2007). Counting and recounting: Assessment and the quest for accountability. Change, 39(1), 28–35.


  24. Spellings Commission on the Future of Higher Education. (2006). A test of leadership: Charting the future of U.S. higher education. US Department of Education, September 26, 2006.

  25. Taylor, P. T., Russ-Eft, D. F., & Taylor, H. (2009). Gilding the outcome by tarnishing the past. American Journal of Evaluation, 30(1), 31–43.



Author information



Corresponding author

Correspondence to John Aubrey Douglass.


About this article

Cite this article

Douglass, J.A., Thomson, G. & Zhao, CM. The learning outcomes race: the value of self-reported gains in large research universities. High Educ 64, 317–335 (2012).



Keywords

  • Learning outcomes
  • Standardized tests
  • Global higher education markets
  • Student academic engagement