Synchrony: a state in which things happen, move, or exist at the same time.

Merriam-Webster Online Dictionary, accessed 7/30/2011

Cardiac Resynchronization Therapy (CRT) is based on the notion that directed ventricular pacing may induce favorable changes in a ventricular mass whose synchrony is abnormal, termed “dyssynchrony.” Clearly, the therapy has yielded dramatic responses in many patients. However, there remains a high rate of non-response among those to whom the therapy is currently offered, and it is likely that CRT has thus far not been offered to some who would benefit.

Given the prevalence of ischemic etiology among patients who might benefit from CRT, it is essential to understand this substrate. Any ventricle scarred by prior infarction is, by definition, dyssynchronous. However, not all such dyssynchrony is remediable by CRT. For example, we and others have demonstrated that scar magnitude and/or location may preclude therapy response.1 It is increasingly clear, however, that scar is but one of the elements of cardiovascular structure and function that determine CRT outcome. Other elements include the “wellness” and “electrical wiring” of the non-scarred ventricular mass, valve competency, pulmonary and peripheral vascular dynamics, and atrial contribution. The current clinical endeavor to characterize and measure dyssynchrony, and thus predict CRT outcomes, has focused almost exclusively on electrical wiring, using gross indices such as QRS morphology and duration, or more refined indices derived from a variety of imaging platforms, including echocardiography, nuclear imaging, and magnetic resonance. It is our thesis that the inadequate assurance of benefit among patients currently undergoing CRT is attributable to this limited focus.

Enter the paper by Verna et al., published in the current issue of the Journal.2 These investigators set out to understand the contribution of left ventricular contractile reserve (LVCR), measured using radionuclide ventriculography (RVG), to the prediction of CRT outcomes. Although they did not exclusively study patients with ischemic cardiomyopathy, the inclusion of patients with non-ischemic etiology actually provides additive context. They found that LVCR effectively predicted which patients, among a cohort with ischemic cardiomyopathy and “dyssynchrony,” would respond favorably to CRT. Their data appear to provide evidence, consistent with our thesis, that “wellness” data (measured by LVCR) and “wiring” data (measured by RVG indices that are well defined in their paper) may be synergistic.

As with all good papers, their data raise significant questions relating to methodology and thus to conclusions. First, we suspect that their cohort was quite “sick”; the authors do not provide a clean look at this, but the impression seems justified by age, QRS duration, and the high prevalence of NYHA class IV status. This bears keeping in mind, because such data may have limited application to the community of patients currently being considered for CRT, which, if anything, is becoming increasingly “well.” Second, they used a rather murky “hypo-akinetic” segmentation method in lieu of a clearer characterization of scar burden, which we see as a lost opportunity in the ischemic cohort given that most of these patients also underwent SPECT imaging. Third, their definition of dyssynchrony was based on very limited background data. Although we have no particular reason to dispute this definition, the high prevalence of “dyssynchrony” in this cohort makes us worry that the conclusions might exclude patients who do not meet the criteria but may still benefit from CRT. Fourth, although preserved contractile response is likely to indicate “remediability” of dysfunctional myocardium, perhaps even in non-ischemic cardiomyopathy, the specificity of this finding needs exploration in larger studies. It is likely that among the less sick, less scarred cohorts currently being referred for CRT, a high prevalence of dobutamine “response” as defined here might attenuate its effectiveness as a decision tool. Finally, we note a surprisingly low incidence of resynchronization after CRT, despite the high prevalence of left bundle branch block. We wonder whether this result implies a deficiency in how CRT was deployed in this cohort, thus calling the veracity of the data into question.

Verna et al. should be congratulated on their insightful contribution. It is only through efforts such as theirs that CRT utilization will improve, thus avoiding unnecessary morbidity and expense while extending the therapy to all who would benefit. We would be remiss if we did not remind readers that, herein, our definition of remediability is limited to that which can be accomplished using CRT. Remedies not involving CRT, both current (e.g., longitudinal optimization of medical therapies, minimization of ischemic burden, aggressive management of comorbidities) and futuristic (e.g., reduction of ventricular scar burden), must also play a role in optimizing the outcomes of affected patients.