Design of Observational Studies pp 113-145

Part of the Springer Series in Statistics book series (SSS)


Opportunities, Devices, and Instruments

Chapter

Abstract

What features of the design of an observational study affect its ability to distinguish a treatment effect from bias due to an unmeasured covariate u_ij? This topic, which is the focus of Part III of the book, is sketched in informal terms in the current chapter. An opportunity is an unusual setting in which there is less confounding with unobserved covariates than occurs in common settings. One opportunity may be the base on which one or more natural experiments are built. A device is information collected in an effort to disambiguate an association that might otherwise be thought to reflect either an effect or a bias. Typical devices include: multiple control groups, outcomes thought to be unaffected by the treatment, coherence among several outcomes, and varied doses of treatment. An instrument is a relatively haphazard nudge towards acceptance of treatment where the nudge itself can affect the outcome only if it prompts acceptance of the treatment. Although competing theories structure design, opportunities, devices, and instruments are ingredients from which designs are built.
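The instrument idea in the abstract can be illustrated with a small simulation. This is not from the chapter itself, just a minimal sketch of the standard Wald estimator under an assumed encouragement design: a haphazard nudge `z` raises the chance of taking treatment `d`, an unmeasured covariate `u` confounds `d` and the outcome `y`, and `z` affects `y` only through `d` (the exclusion restriction). All variable names and the data-generating model are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical encouragement design:
#   u - unmeasured covariate confounding treatment uptake and outcome
#   z - haphazard nudge toward treatment (the instrument)
#   d - treatment actually received; depends on both z and u
#   y - outcome; true effect of d on y is 2, but u biases naive comparisons
u = rng.normal(size=n)
z = rng.binomial(1, 0.5, size=n)
d = rng.binomial(1, 1.0 / (1.0 + np.exp(-(z + u))))
y = 2.0 * d + u + rng.normal(size=n)

# Naive treated-vs-control comparison is biased upward by u,
# because those with large u are more likely to take treatment.
naive = y[d == 1].mean() - y[d == 0].mean()

# Wald estimator: the nudge's effect on the outcome, rescaled by
# the nudge's effect on treatment uptake.  Because z is (nearly)
# random, both differences are free of confounding by u.
wald = (y[z == 1].mean() - y[z == 0].mean()) / (
    d[z == 1].mean() - d[z == 0].mean()
)

print(f"naive estimate: {naive:.2f}, Wald (IV) estimate: {wald:.2f}")
```

In this simulation the naive comparison overstates the effect, while the Wald ratio recovers something close to the true value of 2, at the cost of a larger standard error when the nudge only weakly moves uptake, the weak-instrument problem discussed in the chapter.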



Copyright information

© Springer-Verlag New York 2010

Authors and Affiliations

Statistics Department, Wharton School, University of Pennsylvania, Philadelphia, USA
