
Value-added assessment in higher education: a comparison of two methods


Abstract

Evaluation of the effectiveness of higher education has received unprecedented attention from stakeholders at many levels. The Voluntary System of Accountability (VSA) is one initiative for evaluating institutional core educational outcomes (e.g., critical thinking, written communication) using standardized tests. As promising as the VSA method is for calculating a value-added score and making results comparable across institutions, it has a few potential methodological limitations. This study proposed an alternative method of value-added computation that takes advantage of multilevel models and considers important institution-level variables. The institutional value-added rankings produced by the two methods differed significantly for some institutions (e.g., moving from the bottom of the ranking to performing better than 50% of the institutions), a difference that could have substantially different consequences for those institutions should the results be used for accountability purposes.
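For readers who want a concrete sense of how the two computations differ, the sketch below illustrates them in Python. It is not the paper's actual model: the input file, the column names (institution, sat for entering ability, senior_score for the senior-year outcome), and the institution-level covariate pct_fulltime are all hypothetical stand-ins. The VSA-style score is an OLS residual from a regression on institution means; the multilevel alternative fits a two-level random-intercept model (students nested within institutions) and takes each institution's estimated random intercept as its value-added score.

```python
# Minimal sketch of the two value-added computations contrasted in the
# abstract. All column names and the input file are hypothetical; the
# paper's actual variables and model specifications are not reproduced.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("institution_scores.csv")  # hypothetical student-level data

# VSA-style computation: aggregate to institution means, regress senior
# scores on entering-ability scores, and treat each institution's OLS
# residual as its value-added score.
inst = df.groupby("institution", as_index=False)[["sat", "senior_score"]].mean()
ols = smf.ols("senior_score ~ sat", data=inst).fit()
inst["va_ols"] = ols.resid  # positive residual = above expectation

# Multilevel alternative: students (level 1) nested within institutions
# (level 2), with an institution-level covariate in the fixed part. The
# empirical Bayes estimate of each institution's random intercept serves
# as its value-added score.
mlm = smf.mixedlm(
    "senior_score ~ sat + pct_fulltime",  # pct_fulltime: hypothetical level-2 variable
    data=df,
    groups=df["institution"],
).fit()
va_mlm = pd.Series({g: re["Group"] for g, re in mlm.random_effects.items()})

# Rank institutions under each method and compare the two orderings,
# e.g., with a Spearman rank correlation.
comparison = inst.assign(va_mlm=inst["institution"].map(va_mlm))
print(comparison.sort_values("va_mlm", ascending=False))
```

Because the multilevel model estimates institution effects with students as the unit of analysis, partially pools those estimates, and can adjust for institution-level variables, its rankings can diverge from the OLS-residual rankings; that divergence is the kind of shift the study reports.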


Notes

  1. SAT and ACT are college admissions tests used in the United States.


Acknowledgments

I want to thank Paul Holland and Sandip Sinharay for helpful discussions during the development of this research project. I also want to thank Yue (Helena) Jia for her suggestions on the use of hierarchical models.

Author information


Corresponding author

Correspondence to Ou Lydia Liu.


About this article

Cite this article

Liu, O.L. Value-added assessment in higher education: a comparison of two methods. High Educ 61, 445–461 (2011). https://doi.org/10.1007/s10734-010-9340-8

