Instrumental Variables: Conceptual Issues and an Application Considering High School Course Taking

  • Rob M. Bielby
  • Emily House
  • Allyson Flaster
  • Stephen L. DesJardins
Chapter
Part of the Higher Education: Handbook of Theory and Research book series (HATR, volume 28)

Abstract

Policymakers increasingly demand that the research they use to evaluate educational interventions, practices, and programs be able to support statements about cause and effect. An instrumental variables (IV) estimation approach, long employed by economists, can be used to make causal claims about the relationship between a predictor and an outcome when randomized trials are not feasible. In this chapter, we provide an overview of the IV approach as we explore the causal relationship between taking Algebra II in high school and degree attainment in college. We discuss concepts and terminology related to conducting experimental and quasi-experimental work, present the assumptions an instrument must satisfy before a researcher can use it to make causal inferences, and demonstrate several IV estimation strategies. Additionally, we provide annotated Stata code to facilitate the application of this underutilized methodological approach in higher education research.
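The chapter's own demonstrations use annotated Stata code. As a language-agnostic sketch of the core idea the abstract describes, the following Python example simulates data in which an unobserved confounder biases ordinary least squares (OLS), and shows how two-stage least squares (2SLS), one common IV estimator, recovers the true effect. All variable names and coefficient values here are invented for the simulation; this is an illustration of the technique, not the authors' application.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated data: z is the instrument, u is an unobserved confounder.
z = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * z + 0.5 * u + rng.normal(size=n)   # endogenous treatment
y = 2.0 * x + 0.5 * u + rng.normal(size=n)   # true effect of x on y is 2.0

# OLS of y on x is biased because u shifts both x and y.
X = np.column_stack([np.ones(n), x])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Two-stage least squares:
# Stage 1: regress x on the instrument z and keep the fitted values,
# which retain only the variation in x driven by z (not by u).
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

# Stage 2: regress y on the stage-1 fitted values.
X_hat = np.column_stack([np.ones(n), x_hat])
beta_iv = np.linalg.lstsq(X_hat, y, rcond=None)[0]

print(f"OLS slope: {beta_ols[1]:.3f}")  # biased away from 2.0
print(f"IV slope:  {beta_iv[1]:.3f}")   # close to the true 2.0
```

The sketch relies on the two assumptions the chapter discusses: the instrument is relevant (z strongly predicts x) and excludable (z affects y only through x, and is independent of u). In practice one would use a packaged estimator (e.g., Stata's `ivregress 2sls`) to obtain correct standard errors; manually running the two stages, as above, is useful only for exposition.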

Keywords

Ordinary least squares · Degree attainment · College completion · Local labor market conditions · Local average treatment effect

Acknowledgements

The authors would like to thank Brian McCall and Stephen Porter for their helpful suggestions and feedback. All errors and omissions are, however, our own.


Copyright information

© Springer Science+Business Media Dordrecht 2013

Authors and Affiliations

  • Rob M. Bielby (1)
  • Emily House (1)
  • Allyson Flaster (1)
  • Stephen L. DesJardins (1)

  1. Center for the Study of Higher and Postsecondary Education, School of Education, University of Michigan, Ann Arbor, USA