Using Data for Quality Assessment and Improvement

  • Galen L. Barbour
Part of the Computers in Health Care book series (HI)

Abstract

For more than a quarter century, clinical data collection and analysis have been carried out in the name of quality; more often than not the analysis involved the retrospective evaluation of circumstances with outcomes that were less than anticipated. In most cases, the data served as the basis of a “search for the guilty” and of punitive actions taken to remove “bad apples” (Berwick, 1989). Physician activities were generally a focal point for these searches, and individual practitioners naturally developed a defensive attitude toward the entire data collection and analysis process. These pessimistic attitudes about the value of the quality process generally include a rather negative disposition toward the entire issue of measurement. There is a concept, instilled in us from early childhood, that measurement is always followed by some type of judgment, like grades and report cards. Most of us feel we fall short, in one way or another, in some kind of measurement. Because we are reluctant to be judged inadequate, a reasonable defense is to avoid being measured. The human desire to avoid measurement has been amplified, for many physicians, by the punitive use of measurement and data obtained through the “quality process.”

Keywords

Catheter, Pneumonia, Assure, Beach, Expense


References

  1. Barbour GL. 1994. Development of a quality improvement checklist for the Department of Veterans Affairs. Joint Commission Journal of Quality Improvement, 20, 127–139.
  2. Barbour GL. 1995. Assuring quality in the Department of Veterans Affairs: “What can the private sector learn?” Journal of Clinical Outcomes Management, 2, 67–76.
  3. Bean KP. 1994. Data quality in hospital strategic information systems: A summary of survey findings. Topics in Health Information Management, 15, 13–25.
  4. Berwick DM. 1989. Continuous improvement as an ideal in health care. New England Journal of Medicine, 320, 53–56.
  5. Dayhoff RE, & Maloney DL. 1992. Exchange of Veterans Affairs medical data using national and local networks. Annals of the New York Academy of Science, 670, 50–66.
  6. DesHarnais S. 1994. Information management in the age of managed competition. Joint Commission Journal of Quality Improvement, 20, 631–638.
  7. Goldman RL. 1992. The reliability of peer assessments of quality of care. Journal of the American Medical Association, 267, 958–960.
  8. Goldman RL, & Thomas TL. 1994. Using mortality rates as a screening tool: The experience of the Department of Veterans Affairs. Joint Commission Journal of Quality Improvement, 20, 511–522.
  9. Grover FL, Johnson RR, Shroyer ALW, Marshall G, & Hammermeister KE. 1994. The Veterans Affairs continuous improvement in cardiac surgery study. Annals of Thoracic Surgery, 58, 1845–1851.
  10. Hammermeister KE, Johnson R, Marshall G, & Grover FL. 1994. Continuous assessment and improvement in quality of care: A model from the Department of Veterans Affairs cardiac surgery. Annals of Surgery, 219, 281–290.
  11. Iezzoni L. 1990. Using administrative diagnostic data to assess the quality of hospital care: Pitfalls and potential of ICD-9-CM. International Journal for Technology Assessment in Health Care, 6, 272–281.
  12. Khuri SF, Daley J, Henderson W, Barbour G, Lowry P, et al. 1995. The national Veterans Administration surgical risk study: Risk adjustment for the comparative assessment of the quality of surgical care. Journal of the American College of Surgery, 180, 519–531.
  13. Nadzam DM, et al. 1993. Data-driven performance improvement in health care: The Joint Commission’s indicator measurement system (IM System). Joint Commission Journal of Quality Improvement, 19, 492–500.
  14. Rollins RJ. 1994. Patient satisfaction in VA medical centers and private sector hospitals: A comparison. Health Care Supervision, 12, 44–50.
  15. Walder D, Barbour GL, Weeks HS, Duncan WH, & Kaufman A. 1995. VA’s external peer review program. Federal Practice, 12, 31–38.
  16. Wilson NJ, Cleary PD, Daley J, & Barbour G. 1994a. Correlates of patient reported quality of care in Veterans Affairs hospitals. San Diego, CA: Association for Health Services Research Conference.
  17. Wilson NJ, Cleary PD, Daley J, & Barbour G. 1994b. Using patient reports of their care experiences to measure system performance. 122nd American Public Health Association Meeting. Washington, DC.
  18. Wilson NJ, Daley J, Thibault G, & Barbour G. 1994. Patient reported quality of care in Veterans Affairs hospitals. Brigham and Women’s Hospital Symposium on Preventive Medicine and Clinical Epidemiology. Boston, MA.

Copyright information

© Springer Science+Business Media New York 1997
