Notes
I focus on experiments as classically understood, in particular the model of the randomized clinical trial in medicine that has motivated the experimental movement in criminology.
A number of programmatic statements promoting experimental criminology are available (e.g., Sherman 2009; Weisburd 2010; Weisburd, Mazerolle and Petrosino 2010). Critiques of Sherman’s argument that experiments advance liberty recently appeared in Criminology and Criminal Justice; see Carr (2010), Hope (2009), Hough (2010) and Tilley (2009).
Deaton (2008). For further debate see the special issue of the Journal of Economic Perspectives where Angrist, of instrumental variable fame, refers to the “credibility revolution” in empirical economics (Angrist and Pischke 2010). A number of critics respond. Educational research has also seen a strong experimental push. For an evaluation, see Raudenbush (2008).
In his recent ASC Presidential Address, Clear (2010) provides a strong critique of the experimental paradigm in criminology from a different angle. In this essay, as is appropriate for the JQC, I take experiments on their own terms and address key methodological limitations and the resulting implications for both causal knowledge and policy formation. I take no stance on Clear’s normative position on how the ASC should engage or promote the content of policy.
I thank Gary King for discussion of these points. For a practical guide to addressing common misunderstandings in the analysis of experiments, see Imai et al. (2008). An influential counterfactual approach to noncompliance is to estimate the “local average treatment effect” (LATE), or the treatment effect on compliers, where treatment assignment is used as an instrumental variable for the treatment actually taken (Angrist et al. 1996).
Technically, SUTVA means that the causal effect estimate in MTO is the difference between the average treatment effect and the “spillover” effect on the untreated (Sobel 2006, p. 1405). Not only are spillover effects of great substantive interest, but policy inferences could also be led significantly astray by failing to distinguish these two components of the treatment effect.
For a recent argument on the specification of “selection bias” as a social and causal process, see Sampson and Sharkey (2008).
References
Angrist JD, Pischke J-S (2010) The credibility revolution in empirical economics: how better research design is taking the con out of econometrics. J Econ Perspect 24:3–30
Angrist J, Imbens G, Rubin D (1996) Identification of causal effects using instrumental variables. J Am Stat Assoc 91:328–336
Berk R (2005) Randomized experiments as the bronze standard. J Exp Criminol 1:417–433
Berk R, Barnes G, Ahlman L, Kurtz E (2010) When second best is good enough: a comparison between a true experiment and a regression discontinuity quasi-experiment. J Exp Criminol 6:191–208
Carr PJ (2010) The problem with experimental criminology: a response to Sherman’s ‘evidence and liberty’. Criminol Crim Justice 10:2–10
Cartwright N (2007) Are RCTs the gold standard? Biosocieties 2:11–20
Clampet-Lundquist S, Massey DS (2008) Neighborhood effects on economic self-sufficiency: a reconsideration of the moving to opportunity experiment. Am J Sociol 114:107–143
Clear T (2010) Policy and evidence: the challenge to the American Society of Criminology: 2009 presidential address to the American Society of Criminology. Criminology 48:1–25
Cook TD, Shadish WR, Wong VC (2008) Three conditions under which experiments and observational studies produce comparable causal estimates: new findings from within-study comparisons. J Policy Anal Manage 27(4):724–750
Deaton A (2008) Instruments of development: randomization in the tropics, and the search for the elusive keys to economic development. The Keynes Lecture, British Academy, London
Gangl M (2010) Causal inference in sociological research. Annu Rev Sociol, forthcoming
Graif C, Sampson RJ (2010) Inter-neighborhood networks and the structure of urban residential mobility. Harvard University, Department of Sociology, Cambridge
Green DP, Winik D (2010) Using random judge assignments to estimate the effects of incarceration and probation on recidivism among drug offenders. Criminology 48:357–387
Haviland AM, Nagin DS (2005) Causal inferences with group based trajectory models. Psychometrika 70(3):557–578
Heckman JJ (1992) Randomization and social policy evaluation. In: Manski CF, Garfinkel I (eds) Evaluating welfare and training programs. Harvard University Press, Cambridge, pp 210–229
Heckman JJ (2001) Accounting for heterogeneity, diversity and general equilibrium in evaluating social programs. Econ J 111:654–699
Heckman JJ (2005) The scientific model of causality. Sociol Methodol 35:1–97
Holland P (1986) Statistics and causal inference. J Am Stat Assoc 81:945–970
Hong G, Raudenbush SW (2008) Causal inference for time-varying instructional treatments. J Educ Behav Stat 33(3):333–362
Hope T (2009) The illusion of control: a response to Professor Sherman. Criminol Crim Justice 9:125–134
Hough M (2010) Gold standard or fool’s gold: the pursuit of certainty in experimental criminology. Criminol Crim Justice 10:11–32
Hudgens MG, Halloran ME (2008) Toward causal inference with interference. J Am Stat Assoc 103:832–842
Imai K, King G, Stuart E (2008) Misunderstandings among experimentalists and observationalists about causal inference. J R Stat Soc Ser A 171(Part 2):481–502
Kamo N, Carlson M, Brennan RT, Earls F (2008) Young citizens as health agents: use of drama in promoting community efficacy for HIV/AIDS. Am J Public Health 98:201–204
Keane MP (2010) A structural perspective on the experimental school. J Econ Perspect 24:47–58
LaLonde R (1986) Evaluating the econometric evaluations of training programs with experimental data. Am Econ Rev 76(4):604–620
Lieberson S (1985) Making it count: the improvement of social research and theory. University of California Press, Berkeley
Lieberson S, Lynn F (2002) Barking up the wrong branch: scientific alternatives to the current model of sociological science. Annu Rev Sociol 28:1–19
Ludwig J, Liebman JB, Kling JR, Duncan GJ, Katz LF, Kessler RC et al (2008) What can we learn about neighborhood effects from the moving to opportunity experiment? A comment on Clampet-Lundquist and Massey. Am J Sociol 114:144–188
Manski CF (2009) Diversified policy choice with partial knowledge of policy effectiveness. 10th Annual Jerry Lee Crime Prevention Symposium, University of Maryland Inn and Conference Center, Adelphi
Moffitt R (2005) Remarks on the analysis of causal relationships in population research. Demography 42:91–108
Morgan S, Winship C (2007) Counterfactuals and causal inference: methods and principles for social research. Cambridge University Press, New York
Nieuwbeerta P, Nagin DS, Blokland A (2009) Assessing the impact of first-time imprisonment on offenders’ subsequent criminal career development: a matched samples comparison. J Quant Criminol 25:227–257
Raudenbush S (2008) Advancing educational policy by advancing research on instruction. Am Educ Res J 45(1):206–230
Rosenbaum PR (2007) Interference between units in randomized experiments. J Am Stat Assoc 102:191–200
Sampson RJ (2008) Moving to inequality: neighborhood effects and experiments meet social structure. Am J Sociol 114:189–231
Sampson RJ (Forthcoming) Neighborhood effects: social structure and community in the American city. University of Chicago Press, Chicago
Sampson RJ, Sharkey P (2008) Neighborhood selection and the social reproduction of concentrated racial inequality. Demography 45:1–29
Sampson RJ, Laub JH, Wimer C (2006) Does marriage reduce crime? A counterfactual approach to within-individual causal effects. Criminology 44:465–508
Sherman LW (2009) Evidence and liberty: the promise of experimental criminology. Criminol Crim Justice 9:5–28
Sikkema K, Kelly JA, Winett RA, Solomon LJ, Cargill VA, Roffman RA et al (2000) Outcomes of a randomized community-level HIV prevention intervention for women living in 18 low-income housing developments. Am J Public Health 90:57–63
Smith HL (2009) Causation and its discontents. In: Engelhardt H, Kohler H-P, Fürnkranz-Prskawetz A (eds) Causal analysis in population studies. Springer, New York, pp 233–242
Sobel M (2006) What do randomized studies of housing mobility demonstrate? Causal inference in the face of interference. J Am Stat Assoc 101:1398–1407
Tilley N (2009) Sherman vs. Sherman: realism vs. rhetoric. Criminol Crim Justice 9:135–144
Weisburd D (2010) Justifying the use of non-experimental methods and disqualifying the use of randomized controlled trials: challenging the folklore in evaluation research in crime and justice. J Exp Criminol 6:209–227
Weisburd D, Wyckoff L, Ready J, Eck J, Hinkle J, Gajewski F (2006) Does crime just move around the corner? A controlled study of spatial displacement and diffusion of crime control benefits. Criminology 44:549–592
Weisburd D, Mazerolle L, Petrosino A (2010) The academy of experimental criminology: advancing randomized trials in crime and justice. http://www.crim.upenn.edu/aec/AECCriminologist417.doc
Wikström P-O, Sampson RJ (eds) (2006) The explanation of crime: context, mechanisms, and development. Cambridge University Press, Cambridge
Acknowledgments
I thank Gary King, Carly Knight, John Laub, Steve Raudenbush, P–O Wikström, and Chris Winship for their feedback.
Sampson, R.J. Gold Standard Myths: Observations on the Experimental Turn in Quantitative Criminology. J Quant Criminol 26, 489–500 (2010). https://doi.org/10.1007/s10940-010-9117-3