Policy Sciences, Volume 38, Issue 4, pp 269–291

Science ethics as a bureaucratic problem: IRBs, rules, and failures of control


Abstract

“Institutionalized science ethics” refers to the statutory, professional, and institution-based ethical standards that guide and constrain scientists' research. The primary institution responsible for implementing institutionalized science ethics is the Institutional Review Board (IRB). We examine the limitations of IRBs and of institutionalized science ethics, drawing on bureaucratic theory and, especially, theory concerning the development and enactment of rules. We suggest that, given the very character of rules-based systems, improvements in IRB outcomes are unlikely to be achieved through more or better rules, or even through bureaucratic reform. Instead, we suggest that improvements in human subjects protection can best be advanced through increased participation. Ours is not a call for more participation by the general public, but for participation, via “Participant Review Boards,” of persons who are eligible, under the protocols of the research in question, to serve as subjects. This provides a level of legitimacy and face validity that cannot be obtained by IRB affiliates, even by “external representatives.” In making these points, we review a recent science ethics controversy, the KKI/Johns Hopkins lead paint study. Despite being approved by IRBs, the study resulted in a civil lawsuit that reached the Maryland Court of Appeals. The case illustrates the limits of institutionalized science ethics and of the bureaucracies created for its enactment. It also underscores the complex and equivocal nature of the ethical guidelines established under the National Research Act.



Copyright information

© Springer Science+Business Media, Inc. 2006

Authors and Affiliations

Research Value Mapping Program, School of Public Policy, Georgia Tech, Atlanta, U.S.A.
