Studying Clinical Information Resources

  • Charles P. Friedman
  • Jeremy C. Wyatt
Part of the Computers and Medicine book series (C+M)

Abstract

In Chapter 1 we introduced the challenges of conducting evaluations in medical informatics and discussed the specific sources of complexity that give rise to them. In Chapter 2 we introduced the range of approaches that can be used to conduct evaluations, in medical informatics and across many other areas of human endeavor. Chapter 2 also stressed that the evaluator can address many of these challenges by viewing each evaluation as anchored by specific purposes. Each study is conducted for an identifiable client group, often to inform specific decisions that members of that group must make. The work of the evaluator is made possible by focusing on the specific purposes a particular study is designed to address, often framing them as a set of questions and then choosing the approach or approaches best suited to those purposes. A study is successful if it provides credible information that helps members of an identified audience make decisions.

Keywords

Information Resource · Medical Informatics · Resource Function · Clinical Prediction Rule

Copyright information

© Springer Science+Business Media New York 1997

Authors and Affiliations

  • Charles P. Friedman 1, 2
  • Jeremy C. Wyatt 3
  1. University of North Carolina, Chapel Hill, USA
  2. Center for Biomedical Informatics, University of Pittsburgh, Pittsburgh, USA
  3. Imperial Cancer Research Fund, London, UK
