
Research Design: Toward a Realistic Role for Causal Analysis

  • Herbert L. Smith
Chapter
Part of the Handbooks of Sociology and Social Research book series (HSSR)

Abstract

For a half-century, sociology and allied social sciences have worked with a model of research design founded on a distinction between internal validity, the capacity of designs to support statements about cause and effect, and external validity, the extent to which the results from specific studies can be generalized beyond the batch of data on which they are founded. The distinction is conceptually useful and has great pedagogic value: the experimental model is associated with internal validity, and random sampling with external validity. The advent of the potential outcomes model of causation, by emphasizing the definition of a causal effect at the unit level and the heterogeneity of causal effects, has made it clear how indistinct (and interpenetrated) these “twin pillars” of research design are. This is the theme of this chapter, which inveighs against the idea of a hierarchy of research design desiderata with causal inference at the peak. Rather, I adopt the design typology of Leslie Kish (1987), which advocates an appropriate balance of randomization, representation, and realism, and illustrate how all three elements (and not just randomization, the internal validity design mechanism) are integrated aspects of meaningful causal analysis. What is meaningful causal analysis? It depends first and foremost on getting straight why we are doing what we are doing. Understanding why something has happened may tell us a lot about what will happen if we were actually to do something, but this is not necessarily so.

Keywords

Research design · Causal effect · Causal inference · Potential outcome · Causal analysis

References

  1. Ackoff, R. L. (1953). The design of social research. Chicago: The University of Chicago Press.
  2. Angrist, J. D., Imbens, G. W., & Rubin, D. B. (1996). Identification of causal effects using instrumental variables. Journal of the American Statistical Association, 91(434), 444–455.
  3. Arceneaux, K., Gerber, A. S., & Green, D. P. (2010). A cautionary note on the use of matching to estimate causal effects: An empirical example comparing matching estimates to an experimental benchmark. Sociological Methods & Research, 39(2), 256–282.
  4. Babbie, E. (2010). The practice of social research (12th ed.). Belmont: Wadsworth.
  5. Berk, R. A. (1991). Toward a methodology for mere mortals. In P. V. Marsden (Ed.), Sociological methodology (pp. 315–324). Oxford: Basil Blackwell.
  6. Berk, R. A. (2005). Randomized experiments as the bronze standard. Journal of Experimental Criminology, 1(4), 417–433.
  7. Berk, R. A., & Sherman, L. W. (1988). Police responses to family violence incidents: An analysis of an experimental design with incomplete randomization. Journal of the American Statistical Association, 83(401), 70–76.
  8. Bickel, P. J., & Freedman, D. A. (1981). Some asymptotic theory for the bootstrap. The Annals of Statistics, 9(6), 1196–1217.
  9. Blalock, H. M., Jr. (1991). Are there really any constructive alternatives to causal modeling? In P. V. Marsden (Ed.), Sociological methodology (pp. 325–335). Oxford: Basil Blackwell.
  10. Bongaarts, J., & Potter, R. G. (1983). Fertility, biology, and behavior: An analysis of the proximate determinants. New York: Academic Press.
  11. Boruch, R. (Ed.). (2005). Place randomized trials: Experimental tests of public policy. Annals of the American Academy of Political and Social Science, 599.
  12. Brand, J. E., & Xie, Y. (2007). Identification and estimation of causal effects with time-varying treatments and time-varying outcomes. In Y. Xie (Ed.), Sociological methodology (pp. 393–434). Boston/Oxford: Blackwell.
  13. Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Chicago: Rand McNally & Company.
  14. Card, D. (1999). The causal effect of education on earnings. In O. Ashenfelter & D. Card (Eds.), Handbook of labor economics (Vol. 5, pp. 1801–1863). New York: North-Holland.
  15. Cheslack-Postava, K., Liu, K., & Bearman, P. S. (2011). Closely spaced pregnancies are associated with increased odds of autism in California sibling births. Pediatrics, 127(2), 246–253.
  16. Cook, T. D. (2002). Randomized experiments in educational policy research: A critical examination of the reasons the educational evaluation community has offered for not doing them. Educational Evaluation and Policy Analysis, 24(3), 175–199.
  17. Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Chicago: Rand McNally & Company.
  18. Cronbach, L. J. (1982). Designing evaluations of educational and social programs. San Francisco: Jossey-Bass.
  19. Dawid, A. P. (2000). Causal inference without counterfactuals. Journal of the American Statistical Association, 95(450), 407–424.
  20. Dawid, A. P., & Fienberg, S. E. (2011, July). The causes of effects. Plenary talk presented at the 8th international conference on forensic inference statistics, Seattle, WA.
  21. de Brauw, A., & Hoddinott, J. (2011). Must conditional cash transfer programs be conditioned to be effective? The impact of conditioning transfers on school enrollment in Mexico. Journal of Development Economics, 96(2), 359–370.
  22. Diaconis, P., & Freedman, D. (1986). On the consistency of Bayes estimates. The Annals of Statistics, 14(1), 1–26.
  23. Duncan, O. D. (1974). Developing social indicators. Proceedings of the National Academy of Sciences, 71(12), 5096–5102.
  24. Duncan, O. D. (1984). Notes on social measurement: Historical and critical. New York: Russell Sage.
  25. Farewell, V. T. (1979). Some results on the estimation of logistic models based on retrospective data. Biometrika, 66(1), 27–32.
  26. Firebaugh, G. (1978). A rule for inferring individual-level relationships from aggregate data. American Sociological Review, 43(4), 557–572.
  27. Fisher, R. A. ([1925] 1951). Statistical methods for research workers (6th ed.). New York: Hafner Publishing Company.
  28. Fisher, R. A. ([1935] 1958). The design of experiments (13th ed.). New York: Hafner Publishing Company.
  29. Fountain, C., & Bearman, P. (2011). Risk as social context: Immigration policy and autism. Sociological Forum, 26(2), 215–240.
  30. Frangakis, C. E., & Rubin, D. B. (2002). Principal stratification in causal inference. Biometrics, 58(1), 21–29.
  31. Frankel, M., & King, B. (1996). A conversation with Leslie Kish. Statistical Science, 11(1), 65–87.
  32. Freedman, D. A. (1981). Bootstrapping regression models. The Annals of Statistics, 9(6), 1218–1228.
  33. Freedman, D. A. (1983). Markov chains. New York: Springer.
  34. Freedman, D. A. (1985). Statistics and the scientific method. In W. M. Mason & S. E. Fienberg (Eds.), Cohort analysis in social research: Beyond the identification problem (pp. 343–366). New York: Springer.
  35. Freedman, D. A. (1991a). Statistical models and shoe leather. In P. V. Marsden (Ed.), Sociological methodology 1991 (pp. 291–313). Oxford: Basil Blackwell.
  36. Freedman, D. A. (1991b). A rejoinder to Berk, Blalock, and Mason. In P. V. Marsden (Ed.), Sociological methodology 1991 (pp. 353–358). Oxford: Basil Blackwell.
  37. Freedman, D. A. (2005). Statistical models: Theory and practice. Cambridge: Cambridge University Press.
  38. Freedman, D., Pisani, R., & Purves, R. (1978). Statistics. New York: W. W. Norton.
  39. Gage, N. L. (Ed.). (1963). Handbook of research on teaching. Chicago: Rand McNally & Company.
  40. Gangl, M. (2010). Causal inference in sociological research. Annual Review of Sociology, 36, 21–47.
  41. Goldthorpe, J. H. (2001). Causation, statistics, and sociology. European Sociological Review, 17(1), 1–20.
  42. Greiner, D. J., & Rubin, D. B. (2011). Causal effects of perceived immutable characteristics. The Review of Economics and Statistics, 93(3), 775–785.
  43. Härdle, W. (1990). Applied nonparametric regression. Cambridge: Cambridge University Press.
  44. Heckman, J. J. (2001). Micro data, heterogeneity, and the evaluation of public policy: Nobel lecture. Journal of Political Economy, 109(4), 673–748.
  45. Heckman, J. J., & Smith, J. A. (1995). Assessing the case for social experiments. Journal of Economic Perspectives, 9(2), 85–110.
  46. Hedström, P., & Swedberg, R. (1998). Social mechanisms: An introductory essay. In P. Hedström & R. Swedberg (Eds.), Social mechanisms: An analytical approach to social theory (pp. 1–31). Cambridge: Cambridge University Press.
  47. Holland, P. W. (1986). Statistics and causal inference. Journal of the American Statistical Association, 81(396), 945–960.
  48. Holland, P. W. (2008). Causation and race. In T. Zuberi & E. Bonilla-Silva (Eds.), White logic, white methods: Racism and methodology (pp. 93–109). Lanham: Rowman & Littlefield.
  49. Jick, H., & Kaye, J. A. (2003). Epidemiology and possible causes of autism. Pharmacotherapy, 23(12), 1524–1530.
  50. Johnson-Hanks, J. A., Bachrach, C. A., Morgan, S. P., & Kohler, H.-P. (2011). Understanding family change and variation: Toward a theory of conjunctural action. New York: Springer.
  51. Kadane, J. B. (2011). Principles of uncertainty. Boca Raton: Chapman & Hall.
  52. King, M., & Bearman, P. (2009). Diagnostic change and the increased prevalence of autism. International Journal of Epidemiology, 38(5), 1224–1234.
  53. King, M. D., Fountain, C., Dakhlallah, D., & Bearman, P. S. (2009). Estimated autism and older reproductive age. American Journal of Public Health, 99(9), 1673–1679.
  54. Kish, L. (1965). Survey sampling. New York: Wiley.
  55. Kish, L. (1987). Statistical design for research. New York: Wiley.
  56. Kong, A., Frigge, M. L., Masson, G., Besenbacher, S., Sulem, P., Magnusson, G., Gudjonsson, S. A., Sigurdsson, A., Jonasdottir, A., Jonasdottir, A., Wong, W. S. W., Sigurdsson, G., Walters, G. B., Steinberg, S., Helgason, H., Thorleifsson, G., Gudbjartsson, D. F., Helgason, A., Magnusson, O. T., Thorsteinsdottir, U., & Stefansson, K. (2012). Rate of de novo mutations and the importance of father’s age to disease risk. Nature, 488, 471–475.
  57. Lieberson, S., & Lynn, F. B. (2002). Barking up the wrong branch: Scientific alternatives to the current model of sociological science. Annual Review of Sociology, 28, 1–19.
  58. Liu, K.-Y., King, M., & Bearman, P. S. (2010a). Social influence and the autism epidemic. American Journal of Sociology, 115(5), 1387–1434.
  59. Liu, K., Zerubavel, N., & Bearman, P. (2010b). Social demographic change and autism. Demography, 47(2), 327–343.
  60. Loring, M., & Powell, B. (1988). Gender, race, and DSM-III: A study of objectivity of psychiatric diagnostic behavior. Journal of Health and Social Behavior, 29(1), 1–22.
  61. Ludwig, J., Liebman, J. B., Kling, J. R., Duncan, G. J., Katz, L. F., Kessler, R. C., & Sanbonmatsu, L. (2008). What can we learn about neighborhood effects from the moving to opportunity experiment? American Journal of Sociology, 114(1), 144–188.
  62. Marini, M. M., & Singer, B. (1988). Causality in the social sciences. In C. C. Clogg (Ed.), Sociological methodology 1988 (pp. 347–409). Washington, DC: American Sociological Association.
  63. Merli, M. G., Qian, Z., & Smith, H. L. (2004). Adaptation of a political bureaucracy to economic and institutional change under socialism: The Chinese state family planning system. Politics and Society, 31(2), 231–256.
  64. Mill, J. S. (1843). A system of logic, ratiocinative and inductive, being a connected view of the principles of evidence, and the methods of scientific investigation. London: John W. Parker.
  65. Morgan, S. L., & Winship, C. (2007). Counterfactuals and causal inference: Methods and principles for social research. New York: Cambridge University Press.
  66. Morgan, S. L., & Winship, C. (2012). Bringing context and variability back into causal analysis (Chapter 14). In H. Kincaid (Ed.), The Oxford handbook of the philosophy of the social sciences. New York: Oxford University Press.
  67. Ní Bhrolcháin, M., & Dyson, T. (2007). On causation in demography: Issues and illustrations. Population and Development Review, 33(1), 1–36.
  68. O’Roak, B. J., Vives, L., Girirajan, S., Karakoc, E., Krumm, N., Coe, B. P., Levy, R., Ko, A., Lee, C., Smith, J. D., Turner, E. H., Stanaway, I. B., Vernot, B., Malig, M., Baker, C., Reilly, B., Akey, J. M., Borenstein, E., Rieder, M. J., Nickerson, D. A., Bernier, R., Shendure, J., & Eichler, E. E. (2012). Sporadic autism exomes reveal a highly interconnected protein network of de novo mutations. Nature, 485(7397), 246–250.
  69. Rosenbaum, P. R. (1984). From association to causation in observational studies: The role of tests of strongly ignorable treatment assignment. Journal of the American Statistical Association, 79, 41–48.
  70. Rosenbaum, P. R. (2002). Observational studies (2nd ed.). New York: Springer.
  71. Rosenbaum, P. R. (2009). Design of observational studies. New York: Springer.
  72. Rosenbaum, P. R., & Rubin, D. B. (1983a). The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1), 41–55.
  73. Rosenbaum, P. R., & Rubin, D. B. (1983b). Assessing sensitivity to an unobserved binary covariate in an observational study with binary outcome. Journal of the Royal Statistical Society, Series B, 45(2), 212–218.
  74. Rossi, P. H., Berk, R. A., & Lenihan, K. J. (1982). Saying it wrong with figures: A comment on Zeisel. American Journal of Sociology, 88(2), 390–393.
  75. Rubin, D. B. (1974). Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66(5), 688–701.
  76. Rubin, D. B. (2005). Causal inference using potential outcomes: Design, modeling, decisions. Journal of the American Statistical Association, 100(469), 322–331.
  77. Russo, F., Wunsch, G., & Mouchart, M. (2010). Inferring causality through counterfactuals in observational studies: Some epistemological issues (Discussion Paper 1029). Institut de statistique, biostatistique et sciences actuarielles (ISBA), Université Catholique de Louvain. http://www.stat.ucl.ac.be/ISpub/dp/2010/DP1029.pdf
  78. Rytand, D. A. (1980). Sutton’s or Dock’s law. The New England Journal of Medicine, 302(17), 972.
  79. Sampson, R. J. (2010). Gold standard myths: Observations on the experimental turn in quantitative criminology. Journal of Quantitative Criminology, 26(4), 489–500.
  80. Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston/New York: Houghton Mifflin Company.
  81. Shelton, J. F., Tancredi, D. J., & Hertz-Picciotto, I. (2010). Independent and dependent contributions of advanced maternal and paternal ages to autism risk. Autism Research, 3(1), 30–39.
  82. Sloan, J. H., Kellermann, A. L., Reay, D. T., Ferris, J. A., Koepsell, T., Rivara, F. P., Rice, C., Gray, L., & LoGerfo, J. (1988). Handgun regulations, crime, assaults, and homicide: A tale of two cities. The New England Journal of Medicine, 319(19), 1256–1262.
  83. Smith, H. L. (1990). Specification problems in experimental and nonexperimental social research. In C. C. Clogg (Ed.), Sociological methodology 1990 (pp. 59–91). Cambridge, MA: Basil Blackwell.
  84. Smith, H. L. (1997). Matching with multiple controls to estimate treatment effects in observational studies. In A. E. Raftery (Ed.), Sociological methodology 1997 (pp. 325–353). Oxford: Basil Blackwell.
  85. Smith, H. L. (2003). Some thoughts on causation as it relates to demography and population studies. Population and Development Review, 29(3), 459–469.
  86. Smith, H. L. (2005). Introducing new contraceptives in rural China: A field experiment. The Annals of the American Academy of Political and Social Science, 599, 246–271.
  87. Smith, H. L. (2009). Causation and its discontents. In H. Engelhardt-Woelfler, H.-P. Kohler, & A. Fuernkranz-Prskawetz (Eds.), Causal analysis in population studies: Concepts, methods, applications. Dordrecht: Springer.
  88. Smith, H. L. (n.d.). La causalité en sociologie et démographie. Retour sur le principe de l’action humaine.
  89. Sobel, M. E. (2006). What do randomized studies of housing mobility demonstrate? Causal inference in the face of interference. Journal of the American Statistical Association, 101(476), 1398–1407.
  90. Stinchcombe, A. L. (1969). Constructing social theories. New York: Harcourt, Brace & World.
  91. Tukey, J. (1986). Sunset salvo. The American Statistician, 40(1), 72–76.
  92. Vaupel, J. W., Carey, J. R., & Christensen, K. (2003). It’s never too late. Science, 301(5640), 1679–1681.
  93. Vogt, T., & Kluge, F. (2012, May 5). Does public spending level mortality inequalities? Findings from East Germany after unification. Paper presented at the annual meeting of the Population Association of America, San Francisco, CA.
  94. Vogt, T., Vaupel, J. W., & Rau, R. (2012). Health or wealth: Life expectancy convergence after the German unification. Dissertation, Max Planck Institute for Demographic Research.
  95. Wainer, H. (Ed.). ([1986] 2000). Drawing inferences from self-selected samples. Mahwah: Lawrence Erlbaum Associates.
  96. Wheaton, B. (1978). The sociogenesis of psychological disorder: Reexamining the causal issues with longitudinal data. American Sociological Review, 43(3), 383–403.
  97. Winship, C., & Morgan, S. L. (1999). The estimation of causal effects from observational data. Annual Review of Sociology, 25, 659–706.
  98. Zeisel, H. (1982a). Disagreement over the evaluation of a controlled experiment. American Journal of Sociology, 88(2), 378–389.
  99. Zeisel, H. (1982b). Hans Zeisel concludes the debate. American Journal of Sociology, 88(2), 394–396.

Copyright information

© Springer Science+Business Media Dordrecht 2013

Authors and Affiliations

  1. Population Studies Center, University of Pennsylvania, Philadelphia, USA
