
Journal of Experimental Criminology, Volume 5, Issue 2, pp 163–183

No effects in independent prevention trials: can we reject the cynical view?

  • Manuel Eisner

Abstract

Recent studies suggest that the reported effect sizes of prevention and intervention trials in criminology are considerably larger when program developers are involved in a study than when trials are conducted by independent researchers. This paper examines the possibility that these differences are due to systematic bias related to conflict of interest. A review of the evidence shows that the possibility of a substantial problem cannot currently be rejected. Based on a theoretical model of how conflict of interest may influence research findings, the paper proposes several strategies for empirically examining the extent of systematic bias related to conflict of interest. It also suggests that, in addition to improved standards for conducting and publishing future experimental studies, more research is needed on the extent of systematic bias in the existing body of literature.

Keywords

Conflict of interest · Independent evaluation · Methodological bias · Research synthesis


Copyright information

© Springer Science+Business Media B.V. 2009

Authors and Affiliations

  1. Institute of Criminology, Sidgwick Avenue, University of Cambridge, Cambridge, UK
