
Research in Higher Education, Volume 45, Issue 2, pp. 193–208

Measuring Quality: A Comparison of U.S. News Rankings and NSSE Benchmarks

Gary R. Pike

Abstract

College rankings and guidebooks have become big business. The prominent role they play is problematic because the criteria used to evaluate institutions have little to do with the quality of the education students receive. Designed as an alternative to college rankings, the National Survey of Student Engagement (NSSE) assesses student engagement in activities that contribute to learning and success during college. This study compared NSSE benchmark scores for 14 public research universities in the Association of American Universities (AAU) with their rankings by U.S. News and World Report.
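The abstract does not spell out how the two measures were compared; as a purely illustrative sketch, one simple way to quantify agreement between a U.S. News rank order and an NSSE benchmark score is a Spearman rank correlation. The institution names and figures below are hypothetical, not data from the study.

```python
# Illustrative sketch only: hypothetical institutions and scores, not the
# study's actual data or method. Computes a Spearman rank correlation between
# a U.S. News rank order and one NSSE benchmark score.
from scipy.stats import spearmanr

# Hypothetical (U.S. News rank, NSSE benchmark score) pairs.
institutions = {
    "University A": (1, 62.3),
    "University B": (2, 58.1),
    "University C": (3, 60.7),
    "University D": (4, 55.4),
}

ranks = [rank for rank, _ in institutions.values()]
scores = [score for _, score in institutions.values()]

# A strongly negative rho would mean better (lower) ranks coincide with higher
# engagement scores; a rho near zero would mean the two measures diverge.
rho, p_value = spearmanr(ranks, scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```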

Keywords: quality, rankings, NSSE, U.S. News, engagement



Copyright information

© Human Sciences Press, Inc. 2004

Authors and Affiliations

1. Mississippi State University, Mississippi State
