1 Background

Cardiac resynchronization therapy (CRT) is an established treatment for heart failure in selected patients [1]. The primary aim of CRT is to resynchronize the dyssynchronous contraction of the left ventricle. Dyssynchrony may be caused by left bundle branch block (LBBB) or other conduction disturbances, and current guidelines emphasize prolonged QRS duration (QRSd) in the selection of suitable candidates: a class I indication is given only to patients with LBBB and QRSd ≥ 150 ms, and patients with non-LBBB have a stronger recommendation if the QRSd is ≥ 150 ms (IIa) than if it is < 150 ms (IIb) [2]. Similarly, the magnitude of QRSd reduction has been associated in several studies with better clinical outcome and a higher probability of echocardiographic reverse remodeling [3]. However, a substantial number of patients still do not improve after CRT.

It is well known that individualized programming of the device's atrioventricular (AV) and ventriculo-ventricular (VV) delays can be important to maximize the benefit of CRT [4]. The introduction of quadripolar electrodes and of device-based algorithms for optimization of AV and VV delays, and for delivery of LV-only pacing and fusion pacing, has greatly increased the programming options for each individual CRT-treated patient. All major vendors of CRT devices now have built-in optimization algorithms, and most of these algorithms have shown non-inferiority to echocardiography-optimized device settings with regard to short-term outcome. Head-to-head comparisons between different vendors' algorithms have not been performed, and it is not clear which strategy for optimizing device settings is best. It would be appealing to apply a uniform, validated optimization strategy to all patients, regardless of device brand. Retrospective studies have consistently indicated that a larger reduction in QRS duration is associated with better outcome as well as improved reverse remodeling [3]. A recent study also showed that by combining a built-in algorithm (in this case SyncAV) with an individually tailored AV delay, it was possible to obtain a greater mean reduction in QRSd than with the algorithm alone [5].

We aimed to develop and implement a pragmatic, vendor-independent strategy for CRT optimization in a tertiary care referral center, and to evaluate whether it is feasible to obtain additional QRS reduction, on top of the devices' built-in algorithms, by adjusting AV and VV delays in a structured way in an all-comer CRT population. We also aimed to assess whether larger QRS reduction was associated with better clinical outcome.

2 Methods

The study was performed in a tertiary care center. Medical records of 254 consecutive patients with LBBB according to the 2018 ACC/AHA/HRS criteria and a class I indication for CRT, implanted during the period 2015–2020, were retrospectively evaluated [6]. The right atrial lead was typically placed in the right atrial appendage, and the right ventricular lead in the apex or septum (operator's preference). The coronary sinus lead was placed in a lateral, posterolateral, or posterior position if possible, and in an anterior position only as a last resort. Left ventricular lead position was retrospectively evaluated in the left anterior oblique and right anterior oblique views by an experienced electrophysiologist (RB), using the 17-segment model, and positions were classified as lateral (anterolateral or inferolateral), anterior, inferior, or apical [7].

Implants performed during the first 3 years (2015–2017) were designated as the control group. In these patients, the settings suggested by the device-based algorithms were used when applicable, primarily the AdaptivCRT (aCRT) algorithm from Medtronic and the QuickOpt algorithm from Abbott [8, 9]. When the algorithms could not be used, typical programming in the control group included a fixed AV delay at least 20 ms shorter than the intrinsic conduction time, to ensure biventricular capture, with simultaneous pacing of the right and left ventricle, or, in the case of permanent atrial fibrillation, synchronous biventricular pacing without trigger mode.

Starting in 2018, active 12-lead electrocardiogram (ECG)-based optimization of post-implant QRS duration reduction was implemented, and these patients were designated as the intervention group. Implementation was gradual, and the strategy was fully in place from 2020 onwards, when all patients routinely went through the optimization process. The method is summarized in Fig. 1. Postoperative QRS duration and morphology were evaluated in a structured, stepwise way at various device settings, including the use of specific device algorithms when applicable (AdaptivCRT, SyncAV, SmartDelay), with manual modification of AV and VV delays and LV-only pacing, aiming to maximize the reduction in QRS duration.

The LV pacing vector was chosen based on the longest Q-LV or, in the absence of intrinsic AV conduction, the longest RV-LV conduction time. If the best vector had a capture threshold at or above the limit of 3.0 V/1.0 ms, the second-best vector was chosen. If two vectors were similar, the one with the lowest threshold and/or highest impedance was preferred. Vectors with diaphragmatic stimulation were not considered suitable and were excluded. The suggestion from the device-based algorithm was then tested and evaluated on the 12-lead ECG at different AV intervals (as suggested by Varma et al. [5]). If LV-only or fusion pacing was the suggested setting, a standard BiV setting was also tested. For the BiV setting, a fixed AV delay at least 20 ms shorter than intrinsic conduction was chosen, typically 140/170 ms, or shorter if needed. Finally, LV pre-activation was evaluated at −20 ms and −40 ms, respectively. The setting with the overall narrowest QRS complex was then chosen; if two settings were similar in QRS duration, a morphology with visible LV pre-activation (i.e., an early positive deflection in lead V1 and/or lead I) was favored.

Optimization was performed immediately post-operatively or on the following day. If the chosen settings resulted in subjective improvement and no objective signs of worsening heart failure, the settings were kept unchanged at follow-up visits. Follow-up visits were typically scheduled at 2 months, at 6 months (for non-responders), and every 12 months thereafter, supplemented by continuous remote monitoring. Digital ECGs before and after CRT implantation were collected, and QRS duration reduction was analyzed automatically, with manual inspection and validation of the correct position of the automatic timing calipers.
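For illustration, the vector and setting selection logic above can be expressed as the following minimal sketch in Python. All data structures, names, and values are hypothetical; in practice, every step corresponds to a manual measurement at the device programmer and on the 12-lead ECG.

```python
# Minimal sketch of the selection logic in the optimization scheme (Fig. 1).
# Hypothetical data structures for illustration only.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Vector:
    name: str
    delay_ms: float       # Q-LV interval, or RV-LV if no intrinsic AV conduction
    threshold_v: float    # capture threshold at 1.0 ms pulse width
    impedance_ohm: float
    phrenic_stim: bool    # diaphragmatic stimulation observed

def choose_vector(vectors: List[Vector]) -> Optional[Vector]:
    """Longest (RV-)LV delay wins; vectors with diaphragmatic stimulation are
    excluded, a threshold at or above 3.0 V/1.0 ms demotes a vector to the
    next candidate, and near-ties favor lowest threshold, then highest
    impedance."""
    usable = sorted(
        (v for v in vectors if not v.phrenic_stim),
        key=lambda v: (-v.delay_ms, v.threshold_v, -v.impedance_ohm),
    )
    return next((v for v in usable if v.threshold_v < 3.0), None)

# A candidate setting: (label, measured QRSd in ms, LV pre-activation visible
# on the ECG, e.g., early positive deflection in lead V1 and/or lead I).
Setting = Tuple[str, float, bool]

def choose_setting(settings: List[Setting]) -> Setting:
    """Narrowest QRS wins; ties favor visible LV pre-activation."""
    return min(settings, key=lambda s: (s[1], not s[2]))

# Example (hypothetical values): device algorithm suggestion vs. a standard
# BiV setting vs. LV pre-activation offsets.
best = choose_setting([
    ("algorithm suggestion", 138.0, False),
    ("fixed BiV 140/170 ms", 142.0, False),
    ("LV pre-activation -40 ms", 134.0, True),
])
```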

Fig. 1 ECG optimization scheme

The primary endpoint was a composite of hospitalization due to heart failure or death from any cause.

2.1 Statistical methods

SPSS version 27 (IBM) was used for statistical analyses. Normally distributed data are presented as mean ± standard deviation; non-normally distributed data are presented as median [interquartile range]. Cox regression was used in time-to-event analysis to estimate the hazard ratio for the primary composite endpoint (death or heart failure hospitalization). Variables with a univariable p-value < 0.10 were entered into a multivariable model. Kaplan-Meier analysis with the log-rank test was used to compare event-free survival between the implant periods and between groups with different magnitudes of QRSd reduction. For the comparison between the two implant periods, follow-up was capped at 2 years to account for the differential follow-up duration inherent to the study design. For all analyses, a two-sided p-value < 0.05 was considered significant.
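Although the analyses were performed in SPSS, the workflow can be illustrated with open-source tools. The following is a minimal sketch using the Python lifelines package; the input file, variable names, and coding are hypothetical (covariates assumed numerically coded).

```python
# Minimal sketch of the survival analyses described above (the actual
# analyses were run in SPSS version 27). All names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("crt_cohort.csv")  # hypothetical per-patient export

# Scale QRSd reduction per 10 ms so the HR is reported on that scale.
df["qrsd_red_per10ms"] = df["qrsd_reduction_ms"] / 10.0

# Univariable screening: covariates with p < 0.10 enter the multivariable model.
candidates = ["qrsd_red_per10ms", "age", "gender", "nyha_class",
              "ischemic", "crt_d", "lvef", "diabetes"]
selected = [
    var for var in candidates
    if CoxPHFitter()
       .fit(df[[var, "time_years", "event"]],
            duration_col="time_years", event_col="event")
       .summary.loc[var, "p"] < 0.10
]

multi = CoxPHFitter().fit(df[selected + ["time_years", "event"]],
                          duration_col="time_years", event_col="event")
multi.print_summary()  # adjusted hazard ratios with 95% CI

# Kaplan-Meier analysis dichotomized at the median QRSd reduction (14 ms).
large = df["qrsd_reduction_ms"] >= 14
km = KaplanMeierFitter()
for label, grp in (("reduction >= 14 ms", df[large]),
                   ("reduction < 14 ms", df[~large])):
    km.fit(grp["time_years"], grp["event"], label=label)
    km.plot_survival_function()
print(logrank_test(df[large]["time_years"], df[~large]["time_years"],
                   event_observed_A=df[large]["event"],
                   event_observed_B=df[~large]["event"]).p_value)

# For the between-period comparison, cap follow-up at 2 years first and
# apply the same Kaplan-Meier / log-rank code to the capped columns.
df["time_2y"] = df["time_years"].clip(upper=2.0)
df["event_2y"] = ((df["event"] == 1) & (df["time_years"] <= 2.0)).astype(int)
```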

3 Results

A total of 254 patients were included and followed for up to 6 years (median 2.9 [1.8–4.1] years). The time spent on postoperative optimization was not uniformly recorded but typically ranged between 15 and 45 min.

During follow-up, 82 patients (32%) reached the primary endpoint; in total, there were 53 deaths (21%) and 58 heart failure hospitalizations (23%). Baseline demographic data are presented in Table 1. Median QRS duration over the entire period was 162 ms [150–174] pre-implant and 146 ms [132–160] post-implant. The proportion of patients undergoing structured QRSd evaluation increased each year, and correspondingly, the mean reduction in QRS duration grew progressively larger each year during the intervention period, from −9.5 ms in the control group to −24 ms in 2020 (p = 0.005) (Fig. 2). The use of LV-only pacing algorithms and the programmed AV and VV times are reported in Table 1. LV-only pacing was used more often during the intervention period than during the control period, but the reduction in QRS duration did not differ significantly between patients with LV-only pacing and patients with biventricular pacing from both RV and LV electrodes. Overall, at the group level, the sensed and paced AV times did not differ significantly between the control and intervention periods, but LV pre-activation was shorter in the intervention period.

Table 1 Baseline characteristics stratified by implant period

Fig. 2 QRS duration reduction per implant period (years)

During the intervention period, fewer patients had their LV leads placed in an anterior position. Cox regression analysis was used to determine the hazard ratio for QRS duration reduction with regard to the combined primary endpoint: HR 0.89 [CI 0.80–0.99] per 10-ms QRS reduction, p = 0.037. When QRS duration reduction was dichotomized at the median value (−14 ms), the corresponding HR for QRS reduction ≥ 14 ms was 0.57 [0.33–0.98], p = 0.038, compared with QRS reduction < 14 ms (Table 2). Variables with a univariable p-value < 0.10 were entered into the multivariable Cox regression model. The final model was thus adjusted for age, gender, NYHA class, ischemic etiology, CRT-P/CRT-D, left ventricular ejection fraction (LVEF), and diabetes, and the adjusted hazard ratio for larger QRS reduction was 0.54 [0.29–0.98] (p = 0.04).

Table 2 Cox regression analysis for risk of death or hospitalization for heart failure within 2 years post-implant

In Kaplan-Meier analysis, a QRS reduction ≥ 14 ms was associated with a lower risk of death or heart failure hospitalization (Fig. 3, p = 0.049). When comparing the 2020 cohort (with the full effect of the optimization procedure, a mean QRSd reduction of −24.5 ms) with the control cohort, the patients from 2020 had significantly better survival free of heart failure hospitalization (Fig. 4, p = 0.01).

Fig. 3 Kaplan-Meier curve showing survival free of heart failure hospitalization, stratified by reduction of QRS duration during CRT (cutoff at the median value, a reduction of 14 ms)

Fig. 4 Kaplan-Meier curve showing survival free of heart failure hospitalization, stratified by implant period (2015–2017 vs. 2020) and truncated at 2 years of follow-up

4 Discussion

We show that it is feasible to obtain a larger QRSd reduction in an all-comer CRT-treated population by using an individualized optimization strategy on top of, or instead of, the device-based optimization algorithms. Despite similar baseline demography and baseline QRSd, the use of structured optimization resulted in a narrower paced QRS complex, which in turn was associated with a lower risk of heart failure hospitalization and all-cause mortality. Overall, there were only minor differences in the programmed delays between the intervention and control periods, suggesting that there is no general rule to shorten or prolong the intervals to achieve larger QRS reduction, but rather that individualization of the AV and VV intervals is key. There was a trend toward longer AV delays in the intervention period, which may have allowed more fusion with intrinsic conduction over the right bundle branch, thereby narrowing the QRS complex and providing better ventricular synchrony.

4.1 Rationale for AV and VV optimization in relation to QRS reduction

There are several pathophysiologic advantages to optimizing the AV interval in CRT. Many patients with LBBB also have a prolonged PR interval, leading to ineffective LV filling with diastolic mitral regurgitation and fusion of the E and A waves, which can be visualized with echocardiography. CRT can overcome this through programming of shorter AV intervals, but a too-short AV delay may result in early closure of the mitral valve prior to actual systole, with the risk of diastolic mitral regurgitation and, again, ineffective LV filling. Too-short AV delays can also be insufficient for optimal filling of the typically enlarged left ventricle of a heart failure patient, which may also have significant diastolic dysfunction with elevated filling pressures, further compromising LV filling in diastole. Based on this knowledge, the first optimization strategies employed echocardiography, using either a computed "optimal" delay to allow for the best LV filling (Ritter's method) or an iterative testing method to determine which setting resulted in the best velocity-time integral across the aortic or mitral valve (iterative method) [10, 11]. The landmark CRT studies employed various strategies for AV optimization: CARE-HF and MIRACLE used echocardiographic optimization, COMPANION used a device-based electrical delay algorithm, RAFT used short fixed AV delays, and MADIT-CRT used no specific AV optimization [1, 12–14]. Device-based algorithms have typically been validated in non-inferiority studies against echocardiography-based settings, using LV remodeling as the primary surrogate endpoint [8, 9, 15, 16].
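The iterative echocardiographic method lends itself to a simple description in code. The sketch below is schematic only, with a hypothetical measure_vti_cm callback standing in for the echocardiographic measurement.

```python
# Schematic sketch of the iterative echo method described above: test a grid
# of AV delays and keep the one with the largest velocity-time integral (VTI)
# across the aortic (or mitral) valve.
from typing import Callable, Iterable

def iterative_av_optimum(av_delays_ms: Iterable[int],
                         measure_vti_cm: Callable[[int], float]) -> int:
    """Return the AV delay with the largest measured VTI."""
    return max(av_delays_ms, key=measure_vti_cm)

# Example: test AV delays from 80 to 200 ms in 20-ms steps.
# best_av = iterative_av_optimum(range(80, 201, 20), measure_vti_cm)
```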

None of the abovementioned validation studies focused on QRS narrowing as a primary target in CRT, but in a recent pilot study, Varma et al. used the SyncAV algorithm (Abbott) as a base and then added an individually tailored AV delay on top of the device-based suggestion [5]. The SyncAV algorithm measures the intrinsic AV interval and then subtracts a fixed time (default −50 ms) to time LV activation for optimal fusion with the intrinsically activated right bundle wavefront. The authors investigated several AV delays and showed that the optimal offset varied between −10 and −60 ms, and that mean QRSd narrowing ranged from −12% (standard BiV pacing with a fixed AV delay of 140/110 ms) to −24% (optimal SyncAV offset). This is in line with the results of our study, in which we expand on the previous findings by showing that additional QRS narrowing is feasible, regardless of device brand and intrinsic algorithm, using a structured approach. LV-only pacing with the aCRT algorithm (Medtronic) has been shown to produce improvements in cardiac function similar to biventricular pacing, but with a higher proportion of super-responders [17]. In our cohort, the increased use of LV-only pacing algorithms may therefore have provided additional beneficial effects in the intervention group, on top of the QRS duration reduction. However, in the Cox regression analysis, LV-only pacing was not significantly associated with clinical outcome.
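The offset principle described above can be sketched as follows; this is a schematic illustration based on the published description, not the vendor's actual implementation.

```python
# Schematic sketch of the offset principle: the device measures the intrinsic
# AV interval and paces earlier by a programmable offset; Varma et al. then
# selected the offset giving the narrowest paced QRS on the 12-lead ECG.
from typing import Callable, Iterable

def paced_av_delay(intrinsic_av_ms: float, offset_ms: float = 50.0) -> float:
    """Paced AV delay = measured intrinsic AV interval minus the offset."""
    return intrinsic_av_ms - offset_ms

def best_offset(measure_qrsd_ms: Callable[[float], float],
                offsets: Iterable[float] = (10, 20, 30, 40, 50, 60)) -> float:
    """Return the offset with the narrowest measured QRS duration;
    measure_qrsd_ms stands in for a manual 12-lead ECG measurement."""
    return min(offsets, key=measure_qrsd_ms)
```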

Optimization of VV intervals has not been prospectively evaluated in larger studies, and where it has been evaluated, it has usually been combined with AV interval optimization, making the effect of additional VV optimization difficult to tease out [18]. Nevertheless, VV interval optimization is part of all major vendors' programmable options. The intrinsic algorithms focus on the delta between delays when pacing from the RV electrode and sensing from the LV electrode, and vice versa. Optimizing the VV delay can theoretically be of value, for instance when scar surrounds one of the electrodes, making the initial wave-front propagation slower in a unidirectional fashion; this manifests as variability in RV-sensing vs. LV-sensing times and a longer spike-Q interval on the ECG (see central illustration). The clinical impact of optimizing the VV interval remains to be proven, but we hypothesized that if VV optimization can further enhance QRSd reduction after the pacing vector and AV intervals have been optimized, it may contribute to better clinical outcome as well.

4.2 QRS duration reduction in relation to clinical outcome in CRT

No prospective randomized trials with clinical outcome as the endpoint have investigated a pure QRSd reduction strategy such as ours, and QRSd reduction has not been uniformly reported in the major clinical trials. However, some trials have shown that larger QRSd reduction correlates with better clinical outcome and reverse remodeling [19], and in a recent systematic meta-analysis of 1524 patients from 5 prospective and 6 retrospective studies, there was a significant association between QRSd reduction and favorable echocardiographic response to CRT [20]. In our study, the clinical effect was evident only when comparing the year with the largest QRSd reduction (2020) with the control years, implying that a substantial additional reduction, beyond that achieved by device-based algorithms alone, is required to translate into better clinical outcome. This requires some time and expertise on the part of the nurse or physician performing the optimization, but after a run-in phase it should not take more than 20 additional minutes per patient, which is time well spent if clinical outcomes can be improved.

4.3 Limitations

This was a retrospective single-center study, with the inherent limitations of such a design. The implants were performed over a 6-year period, and the control group had longer follow-up because they were implanted earlier. Even though device-related differences such as activation of LV-only algorithms and LV lead position were not significant in Cox regression models, the combined effect of these differences may have interacted with clinical outcome, favoring the intervention group. There may also be residual confounding between the groups due to changes in referral patterns during this period, even though baseline demography was similar between the groups and multivariable adjustment was performed. The association between QRSd reduction and clinical outcome was observed in the entire cohort, but this does not necessarily mean that the optimization intervention had a causal effect on clinical outcome.

5 Conclusion

Implementing a general, ECG-based strategy of CRT device optimization aiming for shorter QRS duration is feasible in a structured clinical setting and results in larger post-implant reductions in QRS duration. Larger QRS reduction, compared with smaller QRS reduction, was associated with a lower risk of mortality and heart failure hospitalization. If confirmed in prospective trials, this strategy may prove useful for improving clinical outcome in CRT recipients, regardless of device brand and underlying etiology of heart failure.