During the past decade, efforts to improve the quality of health care have become a major priority. Public reporting, pay-for-performance initiatives, and highly influential voluntary strategies such as the 100,000 Lives Campaign reflect the current emphasis on transparency, measurement, and accountability in care quality. Despite the popularity of and enthusiasm for the principles underlying these initiatives, translating them into improved clinician performance and health care outcomes has been challenging.1,2

Graduate medical education (GME) has also evolved over this period. Practice-based learning and systems-based practice have emerged as important Accreditation Council for Graduate Medical Education (ACGME) competencies (http://www.acgme.org/outcome/). The first 2 phases of the ACGME Outcomes Project focused on the structure and processes of training programs, with an emphasis on producing valid assessments of residents’ attainment of competency in these areas. In phase 3, which began in July 2006, programs are expected to show how educational outcomes data are used to improve individual resident and overall program performance. Phase 3 is a critical step in creating a new generation of physicians who, we hope, will be more responsive to initiatives that seek to modify their practices and will help create and lead a greater number and wider variety of quality innovations in their local practice communities.

Thus, the study by O’Mahoney et al.3 in this issue of the Journal of General Internal Medicine (JGIM) represents an important example of a practice and educational innovation that incorporates practice-based learning and systems-based practice principles. O’Mahoney et al. sought to incorporate reviews of quality measures and other aspects of care coordination into daily multidisciplinary rounds (MDR) in their hospital. By involving a range of caregivers that included house staff, social workers, and care coordinators, the initiation of MDR appeared to produce modest improvements in adherence to a number of core measures and larger improvements in smoking cessation and vaccination measures. The study design raises a number of internal validity questions that threaten or limit the study’s conclusions; many of these threats are common in quality and safety research and are not unique to this study.4 For example, were other initiatives underway during MDR that might have had overlapping effects? Although the authors dealt with the threat of secular trends in core measures by using sophisticated statistical techniques to account for the trend in performance during the baseline period, secular events that coincided with the intervention period could still have played a role. The use of a concurrent control group, such as the private attending group, could have strengthened their findings, although this approach, too, has limitations. Another key question is “who changed?” Because the greatest improvements appeared to occur for smoking cessation and vaccination provision, it would be helpful to know to what extent these services were provided by house staff, nurses, or other specific personnel.

Another notable feature of this study is that it took place in a setting not often represented in the pages of JGIM: the community-based teaching hospital. As such, it represents a large GME training constituency and a substantial number of hospitals. Although the quality movement is now in full gear, many hospitals still struggle to meet urgent public reporting imperatives, increase communication between caregivers, and improve care efficiency. Thus, this study, despite the limitations reviewed above, is highly laudable.

Which elements of the MDR led to improvement, and which are most critical? Although the authors are unable to determine the relative importance of these potential factors, it seems likely that the success of MDR was mediated by a number of key mechanisms. First, by its very nature, MDR appears to have facilitated communication among all team members (nurses, attending physicians, trainees, care coordination staff). Second, the MDR approach provided a venue for each medical team to set goals of care for each day. Although the “goals” appear to have centered primarily on appropriate delivery of (or documentation of adherence to) core measures, this venue likely provided the opportunity to anticipate discharge needs earlier. Earlier anticipation of discharge needs would have been one mechanism by which plans to begin smoking cessation or administer vaccines may have started sooner, and earlier planning may also have been a means by which length of stay (LOS) was reduced. Third, the chief of the medical service led the MDRs. The presence of senior leadership at MDR provided the credibility and accountability that are often necessary to move physicians from contemplation to action.

The MDR was highly innovative in its use of real-time audit and feedback to trainees. Typical audit and feedback activities outside of the GME setting usually involve practitioners receiving a performance “report card,” often with peer comparison data, which is intended to be followed by introspection and a change in practice.5 Requiring practitioners to provide an explicit response to the audit, or tying reimbursement or bonuses to performance, may speed behavior change. Because education about the importance of behavior change through physician leadership or academic detailing has traditionally taken place outside of the audit and feedback process, it may not be surprising that these types of interventions have generally shown disappointing effects on practitioner behavior. In the study by O’Mahoney et al., the MDR integrated audit and feedback into an academic detailing format led by a clinical champion, a combination that probably maximized the likelihood that the responsible team members would act upon any deficiencies.

Until recently, GME focused primarily on the provision of information and the measurement of its uptake and mastery; this process has involved feedback and audits of knowledge, but little attention has been paid to how, or whether, health care is affected by educational programs. Quality improvement (QI), in contrast, focuses more on how measures of processes and outcomes can be used to modify behavior, and generally pays little attention to the attitudes or knowledge required to effect behavior change. This contrast provides a clear opportunity for GME and quality improvement to inform each other’s activities. Didactic teaching and experiential learning, measurement of resident performance, and “reflection” on interventions or changes in one’s practice are thought to be the most effective way to teach residents about QI. MDR-like models point to one way of achieving this aim in a manner that meets educational needs and organizational priorities while maintaining patient centeredness.