Institutional Review Board (IRB) Approval

Chapter
Part of the Health Informatics book series (HI)

Abstract

The Institutional Review Board (IRB) must approve most studies that involve human subjects. This chapter first discusses the origins of the Institutional Review Board, which go back as far as World War II, when experiments were conducted on prisoners, to illustrate how different historical events led to the requirement of an official review for experimentation on human subjects. The components of the information submitted for review are then discussed, including the study rationale, its design, and how data are to be gathered and published. In addition, special attention is paid to the need for study participants to provide informed consent. For participants to give such consent, the researcher must provide all necessary information, and participants must be able to understand it and then agree to take part in the study. This process is not without consequences: it has been shown to affect the timeliness of research, a study's recruitment success, and the representativeness of the study sample.


Copyright information

© Springer-Verlag London Limited 2011

Authors and Affiliations

  1. School of Information Systems and Technology, Claremont Graduate University, Claremont, USA
