Higher Education, Volume 64, Issue 3, pp 317–335

The learning outcomes race: the value of self-reported gains in large research universities

  • John Aubrey Douglass
  • Gregg Thomson
  • Chun-Mei Zhao

Abstract

Throughout the world, measuring “learning outcomes” is viewed by many stakeholders as a relatively new method to judge the “value added” of colleges and universities. The ability to accurately measure learning gains also offers a diagnostic tool for institutional self-improvement. This essay discusses the marketisation of learning outcomes tests and the relative merits of student experience surveys in gauging learning outcomes, drawing on results from the University of California’s Undergraduate Experience Survey (the Student Experience in the Research University Survey: SERU-S). The SERU-S asks seniors who entered as freshmen to report self-assessed gains on six educational outcomes: analytical and critical thinking skills, writing skills, reading and comprehension skills, oral presentation skills, quantitative skills, and skills in a particular field of study. Although self-reported gains are sometimes regarded as having dubious validity compared to so-called “direct measures” of student learning, our analysis reveals that the SERU survey design has many advantages, especially in large, complex institutional settings. Without excluding other forms of gauging learning outcomes, we conclude that, designed properly, student surveys offer a valuable and more nuanced alternative for understanding and identifying learning outcomes across the broad tapestry of higher education institutions. We discuss the politics of the learning outcomes race, the validity of standardized tests like the Collegiate Learning Assessment (CLA), and what we can learn from student surveys like the SERU-S. We also suggest there is a tension between what meets the accountability desires of governments and the needs of individual universities focused on self-improvement.

Keywords

Learning outcomes · Standardized tests · Global higher education markets · AHELO · Student academic engagement

References

  1. Adelman, C. (2006). Border blind side. Education Week, 26(11).
  2. Arum, R., & Roksa, J. (2008). Learning to reason and communicate in college: Initial report of findings from the longitudinal CLA study. New York, NY: Social Science Research Council.
  3. Arum, R., & Roksa, J. (2011). Academically adrift: Limited learning on college campuses. Chicago: University of Chicago Press.
  4. Assessment of Higher Education Learning Outcomes. (2011). Project update. Organisation for Economic Co-operation and Development. http://www.oecd.org/dataoecd/8/26/48088270.pdf.
  5. Banta, T. (2006). Reliving the history of large-scale assessment in higher education. Assessment Update, 18(4), 3–4, 15.
  6. Banta, T. (2007). A warning on measuring learning outcomes. Inside Higher Ed, January 26. http://www.insidehighered.com/views/2007/01/26/banta.
  7. Banta, T. (2009). Assessment for improvement and accountability. Provost’s Forum on the Campus Learning Environment, University of Michigan, February 4, 2009.
  8. Braun, H. (2008). Vicissitudes of the validators. 2008 Reidy Interactive Lecture Series, Portsmouth, NH. http://www.hciea.org/publications/RIL508_HB_092508.pdf. Last accessed January 24, 2012.
  9. Chatman, S. (2007). Institutional versus academic discipline measures of student experience: A matter of relative validity. Berkeley: Center for Studies in Higher Education, University of California.
  10. Consortium on Financing Higher Education (COFHE). (2008). Assessment: A fundamental responsibility. http://www.assessmentstatement.org/index_files/Page717.htm.
  11. Gonyea, R. M. (2005). Self-reported data in institutional research: Review and recommendations. In P. D. Umbach (Ed.), New directions for institutional research (Vol. 127, pp. 73–89). San Francisco: Jossey-Bass.
  12. Hill, L. G., & Betz, D. I. (2005). Revisiting the retrospective pretest. American Journal of Evaluation, 26(4), 501–517.
  13. Hosch, B. J. (2010). Time on test: Student motivation and performance on the Collegiate Learning Assessment: Implications for institutional accountability. Paper presented at the Association for Institutional Research annual forum, June 2, 2010. http://www.ccsu.edu/uploaded/departments/AdministrativeDepartments/Institutional_Research_and_Assessment/Research/20100601a.pdf.
  14. Howard, G. S. (1980). Response-shift bias: A problem in evaluating interventions with pre/post self-reports. Evaluation Review, 4, 93–106.
  15. Howard, G. S., & Dailey, P. R. (1979). Response-shift bias: A source of contamination of self-report measures. Journal of Applied Psychology, 64, 144–150.
  16. Howard, G. S., Ralph, K. M., Gulanick, N. A., Maxwell, S. E., Nance, D. W., & Gerber, S. K. (1979). Internal invalidity in pretest-posttest self-report evaluations and a re-evaluation of retrospective pretests. Applied Psychological Measurement, 3, 1–23.
  17. Klein, S., Benjamin, R., & Shavelson, R. (2007). The Collegiate Learning Assessment: Facts and fantasies. Evaluation Review, 31(5), 415–439.
  18. Klein, S., Freedman, D., Shavelson, R., & Bolus, R. (2008). Assessing school effectiveness. Evaluation Review, 32(6), 511–525.
  19. Klein, S., Kuh, G., Chun, M., Hamilton, L., & Shavelson, R. (2005). An approach to measuring cognitive outcomes across higher education institutions. Research in Higher Education, 46(3), 251–276.
  20. Krosnick, J. A. (1991). Response strategies for coping with the cognitive demands of attitude measures in surveys. Applied Cognitive Psychology, 5, 213–236.
  21. Lam, T. C. M., & Bengo, P. (2003). A comparison of three retrospective self-reporting methods of measuring change in instructional practice. American Journal of Evaluation, 24(1), 65–80.
  22. Pike, G. R. (2006). Value-added measures and the Collegiate Learning Assessment. Assessment Update, 18(4), 5–7.
  23. Shulman, L. S. (2007). Counting and recounting: Assessment and the quest for accountability. Change, 39(1), 28–35.
  24. Spellings Commission on the Future of Higher Education. (2006). A test of leadership: Charting the future of U.S. higher education. US Department of Education, September 26, 2006.
  25. Taylor, P. T., Russ-Eft, D. F., & Taylor, H. (2009). Gilding the outcome by tarnishing the past. American Journal of Evaluation, 30(1), 31–43.

Copyright information

© Springer Science+Business Media B.V. 2012

Authors and Affiliations

  • John Aubrey Douglass (1)
  • Gregg Thomson (1)
  • Chun-Mei Zhao (1)

  1. Center for Studies in Higher Education, University of California-Berkeley, Berkeley, USA
