
Using Admission Tests to Predict Success in College — Evidence from the University of Puerto Rico

Article · Eastern Economic Journal

Abstract

In making admission decisions, many colleges have de-emphasized standardized test scores. Using data for seven cohorts of applicants to the University of Puerto Rico, we assess the ability of test scores and other proxies of academic potential to predict student GPA. We study sample selection and address a dilemma facing admissions offices: college grades of non-matriculants are unknowable. We find that decreasing the weight on admission tests benefits females and students from public high schools and that college grades can be predicted more accurately by increasing (decreasing) the weight on mathematical aptitude for students choosing more (less) quantitative programs.

Figure 1

Notes

  1. In addition, when colleges allow students to retake admissions exams and count only the highest score, the non-random nature of test retaking disadvantages students with low self-confidence and those for whom the costs of retaking the exam are high [Vigdor and Clotfelter 2003].

  2. According to the National Center for Fair and Open Testing (www.fairtest.org), in fall 2008, “over 760 four-year colleges do not use the SAT I or ACT to admit substantial numbers of bachelor degree applicants.” Included in the list are colleges that require these tests only for out-of-state applicants or for applicants to certain programs, colleges that use the tests only for applicants with a low high-school rank or GPA, and colleges that allow applicants to substitute alternative test results (the Stanford Achievement Test, SAT subject tests, etc.).

  3. Including its medical school, the UPR system has 11 campuses. Admission to these campuses is highly sought because, in Puerto Rico, public universities are rated more highly than private universities.

  4. Admission decisions are based exclusively on GAI scores except for a small number of cases in which the chancellor waives minimum GAI scores for non-academic reasons, for example, to allow admission of a student with high musical talent but low test scores.

  5. Board of Trustees, University of Puerto Rico, Certification 015, July 21, 1994. Because detailed admissions data are not available prior to 1995-96, it is not possible to directly compare admissions before and after the weights were changed.

  6. Private high schools are more common in Puerto Rico than in the United States. Only 10 percent of US students attend private high schools [Toma 2005], but 43 percent of the students in our sample graduated from a private high school.

  7. An issue that we cannot address is whether the higher test scores of students from private high schools result from differences in the characteristics of students and their families or from differences in the quality of private and public high schools. For perspective on this issue, see Duncan and Sandy [2007]. They find that students from private high schools average 12 points higher on the Armed Forces Qualifications Test (AFQT) than students from public high schools; but once a rich array of family-background variables is included, the difference shrinks to 4 points. In separate decomposition analysis, Duncan and Sandy estimate that differences in characteristics account for 78 percent of the test-score gap and that differences in family background are of primary importance.

  8. Because our data on applicants do not contain information on whether the student attended a public or private high school, we use data on matriculated students to compare the academic records of these two types of students.

  9. The theoretical maximum for GAI is 400; the observed maximum is 393.

  10. GPA is not the only measure of success. For example, Betts [1995] considers wages after graduation. We focus on GPA because this is the standard in the literature and because we lack data on income and other post-graduation indicators of success. We thank an anonymous referee for raising this point.

  11. We can find no research in the field that adopts Heckman's [1979] model of sample selection.

  12. When we estimate regressions separately by gender, the explanatory power is higher for males.

  13. Among the 39,851 applicants in our sample, the average male advantage is 41.9 points for mathematical aptitude but only 0.3 point for verbal aptitude.

  14. As one referee observed, the simulation study examines only the potential selection problems among the sample of matriculants. Still, these results represent a contribution. They demonstrate that selectivity is not an issue for this sample, as it well might have been. Even when admissions standards tighten or we switch to alternative admissions criteria, prediction results are robust. The experiment does not address the more challenging question of how non-matriculants differ from matriculants. Data on non-matriculant outcomes are simply unavailable.

  15. When we predict GPA of matriculants, the coefficient of Public High School is generally negative. When GAI is the measure of academic aptitude, the estimated coefficient is −0.080 for first-year GPA, −0.041 for second-year GPA, −0.004 for third-year GPA, and −0.037 for GPA after four years. Only for first-year and second-year GPA is the coefficient statistically significant.

  16. Dee and Jackson [1999] find that, controlling for admission test scores and high-school GPA, grades are lower in computing, engineering, and the sciences than in other disciplines.

  17. The coefficient of Female is positive and statistically significant in all cases and lowest in value for the specification that includes academic program. When GAI is the measure of academic potential, first-year GPA is predicted to be 0.27 higher for females than males if we do not control for discipline (model 2) and 0.20 higher if controls are added (model 5). The higher college GPA of females is consistent with the findings of Leppel [1984], who also finds that females spend more time studying.

  18. An advantage of first-year GPA is that it avoids the problem of non-random attrition from the university.

  19. When we alternatively replaced GPA of years two through four with cumulative GPA, GAI again generated the lowest mean squared error. The primary difference is that mean squared error is much lower when the dependent variable is cumulative GPA. Our explanation for this finding is that, because GPA measures learning imprecisely, measurement error falls as the number of courses a student has taken rises. That is, the noise-to-signal ratio of cumulative GPA declines as years of schooling increase.
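
The noise-to-signal argument in note 19 can be illustrated with a small simulation (the GPA scale, noise level, and course counts below are hypothetical, not taken from the paper's data): if each course grade equals a student's true learning plus independent noise with variance σ², the average of n grades deviates from true learning with mean squared error of roughly σ²/n, so cumulative GPA becomes a cleaner signal as courses accumulate.

```python
import numpy as np

rng = np.random.default_rng(0)

n_students = 5000
# "True learning" on a hypothetical 4-point scale.
ability = rng.normal(3.0, 0.4, n_students)

def cumulative_gpa(n_courses, noise_sd=0.6):
    """Average of n_courses noisy course grades per student."""
    grades = ability[:, None] + rng.normal(0.0, noise_sd, (n_students, n_courses))
    return grades.mean(axis=1)

# Mean squared deviation of cumulative GPA from true learning
# shrinks roughly as noise_sd**2 / n_courses.
for n in (8, 16, 32):
    gpa = cumulative_gpa(n)
    mse = np.mean((gpa - ability) ** 2)
    print(f"{n:2d} courses: MSE = {mse:.4f}")
```

With these illustrative numbers, quadrupling the number of courses cuts the measurement-error component of mean squared error to about a quarter, consistent with the note's observation that prediction errors are much smaller for cumulative GPA.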

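For readers unfamiliar with the Heckman [1979] correction mentioned in note 11, the two-step version can be sketched on synthetic data (all variable names, coefficients, and the selection design below are illustrative assumptions, not the paper's specification): a probit selection equation yields an inverse Mills ratio, which then enters the outcome regression as an additional regressor.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 20000

# Synthetic data: outcome y depends on x; selection depends on x and w.
x = rng.normal(size=n)
w = rng.normal(size=n)  # instrument excluded from the outcome equation
u, e = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], n).T
select = (0.5 + 1.0 * x + 1.0 * w + u) > 0  # y observed only if selected
y = 1.0 + 2.0 * x + e                       # true slope on x is 2.0

# Step 1: probit of selection on (1, x, w) by maximum likelihood.
Z = np.column_stack([np.ones(n), x, w])
def neg_loglik(g):
    q = 2 * select - 1
    return -np.sum(norm.logcdf(q * (Z @ g)))
gamma = minimize(neg_loglik, np.zeros(3), method="BFGS").x

# Inverse Mills ratio for the selected observations.
zi = Z[select] @ gamma
mills = norm.pdf(zi) / norm.cdf(zi)

# Step 2: OLS of y on (1, x, mills) within the selected sample.
X2 = np.column_stack([np.ones(select.sum()), x[select], mills])
beta = np.linalg.lstsq(X2, y[select], rcond=None)[0]

# Naive OLS on the selected sample, for comparison.
Xn = np.column_stack([np.ones(select.sum()), x[select]])
beta_naive = np.linalg.lstsq(Xn, y[select], rcond=None)[0]

print("two-step slope:", beta[1], "naive slope:", beta_naive[1])
```

Because the errors are correlated (ρ = 0.5 here), naive OLS on the selected sample is biased (downward, in this design), while the Mills-ratio term absorbs the selection effect; the exclusion restriction (w enters selection but not the outcome) is what identifies the correction. In the paper's setting the harder problem remains, as note 14 observes, that outcomes for non-matriculants are never observed at all.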
References

  • Betts, Julian R. 1995. Does School Quality Matter? Evidence from the National Longitudinal Survey of Youth. Review of Economics and Statistics, 77(2): 231–250.

  • Betts, Julian R., and Darlene Morell. 1999. The Determinants of Undergraduate Grade Point Average: The Relative Importance of Family Background, High School Resources, and Peer Group Effects. Journal of Human Resources, 34(2): 268–293.

  • Bridgeman, Brent, Laura McCamley-Jenkins, and Nancy Ervin. 2000. Predictions of Freshman Grade-Point Average From the Revised and Recentered SAT I: Reasoning Test, College Board Research Report No. 2000-1. New York: College Entrance Examination Board.

  • Cohn, Elchanan, Sharon Cohn, Donald C. Balch, and James Bradley, Jr. 2004. Determinants of Undergraduate GPAs: SAT Scores, High-school GPA and High-school Rank. Economics of Education Review, 23(6): 577–586.

  • Dee, Thomas S., and Linda A. Jackson. 1999. Who Loses Hope? Attrition from Georgia's College Scholarship Program. Southern Economic Journal, 66(2): 379–390.

  • Duncan, Kevin C., and Jonathan Sandy. 2007. Explaining the Performance Gap Between Public and Private School Students. Eastern Economic Journal, 33(2): 177–191.

  • Ehrenberg, Ronald G. 2004. Econometric Studies of Higher Education. Journal of Econometrics, 121(1–2): 19–37.

  • Freeman, Donald G. 1999. Grade Divergence as a Market Outcome. Journal of Economic Education, 30(4): 344–351.

  • Grove, Wayne A., and Tim Wasserman. 2004. The Life-Cycle Pattern of Collegiate GPA: Longitudinal Cohort Analysis and Grade Inflation. Journal of Economic Education, 35(2): 162–174.

  • Heckman, James. 1979. Sample Selection Bias as a Specification Error. Econometrica, 47(1): 153–161.

  • Horowitz, John B., and Lee Spector. 2005. Is There a Difference between Private and Public Education on College Performance? Economics of Education Review, 24(2): 189–195.

  • Leppel, Karen. 1984. The Academic Performance of Returning and Continuing College Students: An Economic Analysis. Journal of Economic Education, 15(1): 46–54.

  • Noble, J.P. 1991. Predicting College Grades from ACT Assessment Scores and High School Coursework and Grade Information, Report No. 91-3. Iowa City, IA: American College Testing.

  • Park, Kang H., and Peter M. Kerr. 1990. Determinants of Academic Performance: A Multinomial Logit Approach. Journal of Economic Education, 21(2): 101–111.

  • Ramist, Leonard, Charles Lewis, and Laura McCamley-Jenkins. 2001. Using Achievement Tests/SAT II: Subject Tests to Demonstrate Achievement and Predict College Grades, Research Report No. 2001-5. New York: College Entrance Examination Board.

  • Robinson, Michael, and James Monks. 2005. Making SAT Scores Optional in Selective College Admissions: A Case Study. Economics of Education Review, 24(4): 393–405.

  • Rothstein, Jesse. 2004. College Performance Predictions and the SAT. Journal of Econometrics, 121(1–2): 297–317.

  • Stockwell, Sarah, Bob Schaeffer, and Jeffrey Lowenstein. 1991. The SAT Coaching Coverup. Cambridge, MA: FairTest.

  • Toma, Eugenia F. 2005. Private Schools in a Global World. Southern Economic Journal, 71(4): 693–704.

  • Vigdor, Jacob L., and Charles T. Clotfelter. 2003. Retaking the SAT. Journal of Human Resources, 38(1): 1–33.

Acknowledgements

Part of the project was completed while Ragan was a visiting scholar at the Ragnar Frisch Centre for Economic Research in Oslo, Norway; he was grateful for the Centre's support. Matos-Díaz acknowledges the generous support of Dr. Andrés Rodríguez Rubio, the former chancellor of the UPR-Bayamón. The authors thank Gilberto Calderón for compiling data for this project and acknowledge the helpful comments of Wei Chi, Oddbjørn Raaum, three referees, and seminar participants at the University of Puerto Rico-Bayamón.

Additional information

Dedication: To the memory of our dear colleague Professor James F. Ragan, Jr.

About this article

Cite this article

Ragan, J., Li, D. & Matos-Díaz, H. Using Admission Tests to Predict Success in College — Evidence from the University of Puerto Rico. Eastern Econ J 37, 470–487 (2011). https://doi.org/10.1057/eej.2010.3
