
The learning outcomes race: the value of self-reported gains in large research universities


Abstract

Throughout the world, measuring “learning outcomes” is viewed by many stakeholders as a relatively new way to judge the “value added” of colleges and universities. The ability to accurately measure learning gains can also serve as a diagnostic tool for institutional self-improvement. This essay discusses the marketization of learning outcomes tests and the relative merits of student experience surveys for gauging learning outcomes, drawing on results from the University of California’s Undergraduate Experience Survey (the Student Experience in the Research University Survey: SERU-S). The SERU-S asks seniors who entered as freshmen to self-report gains in six educational outcomes: analytical and critical thinking skills, writing skills, reading and comprehension skills, oral presentation skills, quantitative skills, and skills in a particular field of study. Although self-reported gains are sometimes regarded as having dubious validity compared with so-called “direct measures” of student learning, our analysis shows that the SERU survey design has many advantages, especially in large, complex institutional settings. Without excluding other ways of gauging learning outcomes, we conclude that, designed properly, student surveys offer a valuable and more nuanced means of understanding and identifying learning outcomes across the broad tapestry of higher education institutions. We discuss the politics of the learning outcomes race, the validity of standardized tests such as the Collegiate Learning Assessment (CLA), and what we can learn from student surveys like the SERU-S. We also suggest there is a tension between what meets the accountability desires of governments and the needs of individual universities focused on self-improvement.


Notes

  1. In 2011 the SERU Consortium included 18 major US research universities: the nine general campuses of the University of California system, plus the Universities of Michigan, Minnesota, Florida, Texas, Pittsburgh, Oregon, and North Carolina, as well as Rutgers University and the University of Southern California. Fifteen are members of the prestigious Association of American Universities (AAU). For further information on the SERU Consortium, see: http://cshe.berkeley.edu/research/seru/consortium.htm.

  2. For the student experiences and perceptions category of the VSA, participating institutions are required to report data from one of four surveys: the College Student Experiences Questionnaire, the College Senior Survey, the National Survey of Student Engagement, or the SERU Survey (known in the UC system as the University of California Undergraduate Experience Survey).

  3. The Spellings Commission was established on September 19, 2005, by U.S. Secretary of Education Margaret Spellings. The nineteen-member Commission was charged with recommending a national strategy for reforming post-secondary education, with a particular focus on how well colleges and universities are preparing students for the 21st-century workplace, and a secondary focus on how well high schools are preparing students for post-secondary education. In its report, released on September 26, 2006, the Commission focused on four key areas: access, affordability (particularly for non-traditional students), the standards of quality in instruction, and the accountability of institutions of higher learning to their constituencies (students, families, taxpayers, and other investors in higher education).

  4. UC President Robert C. Dynes, quoted in Scott Jaschik, “Accountability System Launched,” Inside Higher Ed, Nov. 12, 2007.

  5. Speech before the National Press Club, reported in The Chronicle of Higher Education, Feb. 1, 2008.

  6. Richard Arum of New York University and Josipa Roksa of the University of Virginia charged that, for many undergraduates, “drifting through college without a clear sense of purpose is readily apparent.” Based on CLA data, they tracked the academic gains (or stagnation) of 2,300 students of traditional college age enrolled at a range of 4-year colleges and universities. Forty-five percent of students “did not demonstrate any significant improvement in learning” during the first 2 years of college, and 36% did not do so over 4 years. Students who did improve tended to show only modest gains: on average 0.18 standard deviations over the first 2 years of college and 0.47 over 4 years. This means that a student who entered college at the 50th percentile of his or her cohort would move up only to the 68th percentile 4 years later, and even then that is the 68th percentile of a new group of freshmen who have not experienced any college learning (see the worked calculation after these notes).

  7. See the AHELO website: http://www.oecd.org/document/41/0,3343,en_2649_35961291_42295209_1_1_1_1,00.html.
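
To make the percentile arithmetic in note 6 concrete, the short sketch below converts the reported CLA effect sizes into percentile ranks. The 0.18 and 0.47 standard-deviation gains come from the note; the use of a normal distribution for scores within a cohort is our illustrative assumption, not a method reported by Arum and Roksa.

```python
# A minimal sketch, assuming CLA scores are normally distributed within
# a cohort (an illustrative assumption, not Arum and Roksa's method).
from scipy.stats import norm

for years, gain_sd in [(2, 0.18), (4, 0.47)]:
    # A student starting at the 50th percentile (z = 0) who gains
    # `gain_sd` standard deviations sits at z = gain_sd relative to a
    # comparison cohort with the same score distribution.
    percentile = norm.cdf(gain_sd) * 100
    print(f"After {years} years: +{gain_sd} SD -> {percentile:.0f}th percentile")

# After 2 years: +0.18 SD -> 57th percentile
# After 4 years: +0.47 SD -> 68th percentile
```

Under this assumption the 4-year gain of 0.47 standard deviations reproduces the 68th-percentile figure quoted in the note.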

References

  • Adelman, C. (2006). Border blind side. Education Week, 26(11).

  • Arum, R., & Roksa, J. (2008). Learning to reason and communicate in college: Initial report of findings from the longitudinal CLA study. New York, NY: Social Science Research Council.

  • Arum, R., & Roksa, J. (2011). Academically adrift: Limited learning on college campuses. Chicago: University of Chicago Press.

  • Assessment of Higher Education Learning Outcomes. (2011). Project update. Organisation for Economic Co-operation and Development. http://www.oecd.org/dataoecd/8/26/48088270.pdf.

  • Banta, T. (2006). Reliving the history of large-scale assessment in higher education. Assessment Update, 18(4), 3–4, 15.

  • Banta, T. (2007). A warning on measuring learning outcomes. Inside Higher Ed, January 26. http://www.insidehighered.com/views/2007/01/26/banta.

  • Banta, T. (2009). Assessment for improvement and accountability. Provost’s Forum on the Campus Learning Environment, University of Michigan, February 4, 2009.

  • Braun, H. (2008). Vicissitudes of the validators. 2008 Reidy Interactive Lecture Series, Portsmouth, NH. http://www.hciea.org/publications/RIL508_HB_092508.pdf. Last accessed January 24, 2012.

  • Chatman, S. (2007). Institutional versus academic discipline measures of student experience: A matter of relative validity. Berkeley: Center for Studies in Higher Education, University of California.

  • Consortium on Financing Higher Education (COFHE). (2008). Assessment: A fundamental responsibility. http://www.assessmentstatement.org/index_files/Page717.htm.

  • Gonyea, R. M. (2005). Self-reported data in institutional research: Review and recommendations. In P. D. Umbach (Ed.), New directions for institutional research (Vol. 127, pp. 73–89). San Francisco: Jossey-Bass.

  • Hill, L. G., & Betz, D. I. (2005). Revisiting the retrospective pretest. American Journal of Evaluation, 26(4), 501–517.

  • Hosch, B. J. (2010). Time on test: Student motivation and performance on the Collegiate Learning Assessment: Implications for institutional accountability. Paper presented at the Association for Institutional Research conference, June 2, 2010. http://www.ccsu.edu/uploaded/departments/AdministrativeDepartments/Institutional_Research_and_Assessment/Research/20100601a.pdf.

  • Howard, G. S. (1980). Response-shift bias: A problem in evaluating interventions with pre/post self-reports. Evaluation Review, 4, 93–106.

  • Howard, G. S., & Dailey, P. R. (1979). Response-shift bias: A source of contamination of self-report measures. Journal of Applied Psychology, 64, 144–150.

  • Howard, G. S., Ralph, K. M., Gulanick, N. A., Maxwell, S. E., Nance, D. W., & Gerber, S. K. (1979). Internal invalidity in pretest-posttest self-report evaluations and a re-evaluation of retrospective pretests. Applied Psychological Measurement, 3, 1–23.

  • Klein, S., Benjamin, R., & Shavelson, R. (2007). The collegiate learning assessment: Facts and fantasies. Evaluation Review, 31(5), 415–439.

  • Klein, S., Freedman, D., Shavelson, R., & Bolus, R. (2008). Assessing school effectiveness. Evaluation Review, 32(6), 511–525.

  • Klein, S., Kuh, G., Chun, M., Hamilton, L., & Shavelson, R. (2005). An approach to measuring cognitive outcomes across higher education institutions. Research in Higher Education, 46(3), 251–276.

  • Krosnick, J. A. (1991). Response strategies for coping with the cognitive demands of attitude measures in surveys. Applied Cognitive Psychology, 5, 213–236.

  • Lam, T. C. M., & Bengo, P. (2003). A comparison of three retrospective self-reporting methods of measuring change in instructional practice. American Journal of Evaluation, 24(1), 65–80.

  • Pike, G. R. (2006). Value-added measures and the collegiate learning assessment. Assessment Update, 18(4), 5–7.

  • Shulman, L. S. (2007). Counting and recounting: Assessment and the quest for accountability. Change, 39(1), 28–35.

  • Spellings Commission on the Future of Higher Education. (2006). A test of leadership: Charting the future of U.S. higher education. U.S. Department of Education, September 26, 2006.

  • Taylor, P. T., Russ-Eft, D. F., & Taylor, H. (2009). Gilding the outcome by tarnishing the past. American Journal of Evaluation, 30(1), 31–43.

Author information

Corresponding author

Correspondence to John Aubrey Douglass.

About this article

Cite this article

Douglass, J. A., Thomson, G., & Zhao, C.-M. (2012). The learning outcomes race: the value of self-reported gains in large research universities. Higher Education, 64, 317–335. https://doi.org/10.1007/s10734-011-9496-x
