What Can the VA Teach Us About Implementing Proven Advances into Routine Clinical Practice?
Cite this article as: Whittle, J. & Segal, J.B. J GEN INTERN MED (2010) 25(Suppl 1): 77. doi:10.1007/s11606-009-1146-y
The annual budget for the National Institutes of Health (NIH) now exceeds $30 billion.1 Health research spending by other federal agencies, including the Veterans Health Administration (VA), National Center for Health Statistics, the Department of Defense and the Agency for Healthcare Research and Quality (AHRQ) totals additional billions. Policymakers and the public are increasingly interested in ensuring that this enormous investment translates into improved health. This is reflected in the emphasis on translation in the NIH’s flagship Clinical Translational Science Award (CTSA) grant program, which asks applicant institutions to train scientists in translational research, develop methods for research translation and create an “integrated academic home” for translational researchers.2 This translation includes both using basic science discoveries to generate effective treatments and moving effective treatments from research settings into routine community practice.
However, the best way for researchers and health care providers to work together to move proven clinical interventions into routine practice has not been established. This is a particularly important issue for the readers of the Journal, who are often called on to facilitate such movement, as well as to study it. Indeed, general internists have leading roles in many of the funded CTSA programs. Thus, we were happy to publish this supplement, “Connecting Research and Patient Care,” which summarizes research and discussions from the 2008 meeting of the VA’s Quality Enhancement Research Initiative (QUERI). The VA QUERI program seeks to facilitate implementation of clinical advances into routine practice within the VA health care system, the largest integrated health care system in the United States. QUERI researchers seek to understand implementation both on a project basis—what worked at these facilities, for this condition—and across projects. Although QUERI researchers work both within and outside the VA health care system, the case examples in the articles in this supplement are all in VA settings. While this VA-centric set of data might make one wary of generalizing lessons learned to other settings (it certainly was an issue for us, the editors), the close collaboration between researchers and managers in the VA makes it a particularly useful laboratory for understanding what works, and why. In addition, the VA has had considerable success with planned transformation and has long lived in the public eye, which may make it more willing to embrace change, and to accept the need for outside researchers to critique that change.3 Finally, the VA relies heavily on salaried physicians, who may be better able to volunteer the time needed to serve as effective clinical collaborators and to provide researchers with feedback regarding which aspects of an implementation program were helpful.
We asked each author to ensure that they did not assume knowledge of VA idiosyncrasies and to suggest lessons that would apply in health care settings outside the VA system.
Although rapidly growing, implementation science is a very young field, as reflected in the articles we have included in the supplement. As clinicians and researchers who are not focused on implementation science, we found evaluation of these articles very challenging. Rather than careful comparisons of two approaches to implementation, we found case studies in which researchers and managers described “how we did it.” Rather than evidence syntheses that identified the most rigorous studies in an extensive body of literature, we received manuscripts describing important areas of innovation, such as electronic personal health records, in which almost no evaluative research has been published.
As we read further, however, we recognized significant contributions to a new field. Holt and colleagues found a wealth of studies examining different measures of readiness for change.4 Their summary of existing measures is likely to be a useful resource for others considering these issues. Hynes and her colleagues catalogue the ways that health information technology (HIT) has been used to facilitate implementation projects within the VA, which is known for its effective HIT infrastructure.5 These examples provide a starting point for others seeking to use their investment in HIT to facilitate changes in practice.
Three of the manuscripts identify specific areas that have not received adequate attention from implementation researchers. First, Yano and colleagues point out that the challenges of implementing interventions in women, a historically underserved group in the VA, may be substantially different from those in the dominant male VA population.6 We suggest that in other settings, there will be other populations that are not well served by implementation approaches that work for most people. Implementation researchers seeking to improve overall quality of care must be mindful of this possibility and measure implementation effectiveness in population subgroups, as well as overall. Second, Nazi et al. describe burgeoning interest in the use of personal health records and note that there is currently little evidence that they make any difference to health outcomes.7 This is reminiscent of the broader literature showing that the adoption of new technologies often proceeds without evidence of net health benefit.8 The articles in this supplement, and the field in general, draw heavily on that literature, and on the broader literature addressing the diffusion of innovation.9 Finally, Damush and colleagues examine how the VA can implement programs to support patient self-management (PSM).10 They use the framework of Wagner’s Chronic Care Model11 to point out the challenges of both implementing PSM support and evaluating its effect on health outcomes outside the research setting.
In the end, despite an initial hesitancy to publish anecdote, it was the case studies that most captured our interest. They covered a range of important clinical topics, used diverse measures of success and varied in scope. Almost all used mixed methods, reflecting a tension between the need to document changes in measurable outcomes and the recognition that stories about what worked at one institution can be informative to colleagues at another.
These case reports provided a variety of lessons. We found that reports were particularly useful when they had adequate detail, that is, when the reader could really understand what was done. Thus, the report by Hall on the success of the Family Collaborative Map web-based tool is enhanced by the availability of the actual website in the public domain.12 Damush et al. describe their planned introduction of a nurse-based smoking cessation program on general medical and surgical wards, but also tell us that nurses working in the intensive care and mental health units asked that their sites be added to the program over the initial objections of their local institutional review board.10
Probably the most interesting were reports that described interventions that did not work and the authors’ interpretations of why they failed. Cohen and colleagues point out that none of the 50 eligible patients’ families was referred to an intervention designed to educate family members about schizophrenia and its management.13 Moreover, 88% of the patients in the same study did not want their families more involved with their mental health treatment providers. Thus, despite well-founded guidelines calling for increasing family support as a key component of schizophrenia management, little progress was made in the two pilot sites where significant support was available to facilitate this introduction. It is absolutely essential that such lessons be disseminated widely as soon as possible after they are learned, and with sufficient detail to allow others to learn from the failure.
Although Berwick and others have summarized how lessons from the broader field of dissemination can be applied to health care,14 we believe this eclectic collection of stories is a key component of building a robust evidence base for implementation research. Just as case reports and case series described by self-aware clinicians were the meat of the medical literature in the first half of the twentieth century and continue to provide important clues today, careful descriptions of their own experience by self-critical implementers will be an invaluable resource going forward. This is particularly important for individuals who are unsuccessful in their implementation efforts. We worry that the pressure to present one’s medical center as triumphant in all endeavors (or just plain embarrassment) will discourage such reporting. However, we are confident that the clinical thought leaders, scientists and innovative managers who make up the readership of JGIM will continue to look for lessons in everything they do. We hope to receive such reports as original research submissions in the future.