Abstract
Impact of variability in the measured parameter is rarely considered in designing clinical protocols for optimization of atrioventricular (AV) or interventricular (VV) delay of cardiac resynchronization therapy (CRT). In this article, we approach this question quantitatively using mathematical simulation in which the true optimum is known and examine practical implications using some real measurements. We calculated the performance of any optimization process that selects the pacing setting which maximizes an underlying signal, such as flow or pressure, in the presence of overlying random variability (noise). If signal and noise are of equal size, for a 5-choice optimization (60, 100, 140, 180, 220 ms), replicate AV delay optima are rarely identical but rather scattered with a standard deviation of 45 ms. This scatter was overwhelmingly determined (ρ = −0.975, P < 0.001) by information content, \( \frac{\text{Signal}}{\text{Signal} + \text{Noise}} \), an expression of signal-to-noise ratio. Averaging multiple replicates improves information content. In real clinical data at resting heart rate, information content is often only 0.2–0.3; elevated pacing rates can raise information content above 0.5. Low information content (e.g. <0.5) causes gross overestimation of the optimization-induced increment in VTI, a high false-positive appearance of change in optimum between visits and very wide confidence intervals of the individual patient optimum. AV and VV optimization by selecting the setting showing maximum cardiac function can only be accurate if information content is high. Simple steps to reduce noise, such as averaging multiple replicates, or to increase signal, such as increasing heart rate, can improve the information content, and therefore viability, of any optimization process.
Background
After implantation of a resynchronization device (biventricular pacemaker or defibrillator), not all patients undergo optimization, even though guidelines recommend that AV and VV delay should be optimized, and even though clinical trials have only demonstrated survival benefit of individually optimized CRT. Are clinicians right to cut corners from the trial-validated, guideline-mandated process? To answer this, the basic science of optimization needs to be examined.
For optimization of atrioventricular (AV) delay, commonly a range of AV settings is tested, whilst monitoring a marker of cardiac function such as echocardiographic velocity–time integral [1, 2] (VTI, a surrogate of stroke volume [3]) or left ventricular dP/dt [4, 5]. The pacemaker setting that gives the best cardiac function is then defined as the optimum. A similar process can also be carried out for the delay between activation of left and right ventricular leads (VV delay).
However, every measurement has uncertainties, which might conceal the true optimum. This uncertainty in our measurement of VTI (or of any other marker for monitoring cardiac function [6, 7]) arises from numerous factors including natural biological variability [8]. Therefore, repeating the “optimization protocol” often provides different optima, as shown in Fig. 1.
There are several clinically important questions. First, if the optimum is not necessarily the ‘true’ underlying optimum, can we at least express its precision, for example, as a 95% confidence interval?
Second, can we trust the measured increase in VTI as a good estimate of the ‘true’ average underlying increase in VTI?
Third, if optimizations 6 months later show that many patients’ optima have changed, would this imply that patients require more frequent reoptimization? [7, 9]
Finally, how can the precision of the optimization protocol be maximized?
It would be difficult and contentious to attempt to answer these questions by doing clinical studies. This is partly because in clinical practice it is normally assumed that the apparent optimum is indeed the true optimum (or at least the nearest of the tested settings to the true optimum). Persons other than the operator conducting the optimization rarely entertain the possibility that spontaneous variability of the monitored measurement, arising from beat-to-beat variability and inherent measurement uncertainty, has caused the optimum to be misidentified. Confidence intervals are not reported for individual clinical patients’ optima [1, 2, 4, 10].
In this study, therefore, we created a mathematical simulation with properties exactly like real-life studies, but in which we could truly know the underlying optimum, despite the presence of overlying noise. To understand the realistic balance between underlying optima and overlying noise, we looked at published studies of optimization.
Information content
A convenient way of quantifying, in real-life optimizations, the relative contributions of underlying true signal versus overlying random noise (illustrated in Fig. 2) is “information content”. Signal, in this context, is the genuine underlying between-setting difference in VTI, which for computational convenience can be expressed as a variance (the average of the squared deviation between the underlying value of each setting and the mean of all settings). Noise, correspondingly, is the unwanted variability that occurs when measures are repeated at the same setting. This too can be expressed as a variance (the average of the squared deviation between individual replicate measurements at a setting and the underlying value of that setting). The advantage of using variances is that their sum is the total observed variance. The variance observed over a series of settings can be decomposed into the variance arising from the genuine between-setting differences (signal magnitude) and the remainder, which is noise variance. The proportion of the total variance which is signal can be called “information content”.
The reasons to use information content rather than simply signal-to-noise ratio are threefold. First, information content conveniently varies between 0 and 1, rather than extending to infinity. Second, it is symmetrical: noise content is 1 minus information content, which makes it clear that there are two contributors to observed differences between settings. Third, it is numerically identical to the intraclass correlation coefficient, a simple index of reproducibility used in biological research.
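In code form, this definition is essentially a one-liner (a minimal Python sketch; the function name is ours):

```python
# Information content = signal variance / (signal variance + noise variance).
# Numerically identical to the intraclass correlation coefficient.

def information_content(signal_var: float, noise_var: float) -> float:
    """Fraction of the total observed variance that is genuine
    between-setting signal rather than measurement noise."""
    return signal_var / (signal_var + noise_var)

# Signal and noise of equal size give information content 0.5;
# noise content is always the complementary fraction.
ic = information_content(1.0, 1.0)
print(ic, 1.0 - ic)  # 0.5 0.5
```

Note how the value is bounded by 0 and 1 regardless of how large either variance becomes, which is the first advantage listed above.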
Published data
Information content can be calculated in any study for which both the overall variability and the noise variability are available. We present in Table 1 information content for three detailed physiological studies conducted in a research environment where special attention was given to accuracy [11–13]. For each row of this table, we calculated for each patient the signal size (expressed as a variance) and the noise size (expressed as a variance), and displayed the average values across all patients. Each study had measurements at more than one heart rate, or via more than one monitoring technique, and so has more than one row. Where raw data of multiple replicates were available to us [12], noise variance was quantified directly. Where data of only a single replicate were available [13], noise variance was defined as the dispersion (expressed as a variance) of raw data away from a best-fit regression parabola between the observed measurements and the AV delay. Where noise variance was published graphically [11], it was read off the graph. Signal variance was defined as the total observed variance of that patient minus the noise variance. Because the protocols differed between studies, this table should not be used to compare optimization technologies, but rather just to obtain an idea of the realistic range of information content achievable. It should be remembered that these studies were conducted in ideal research environments where there was effectively no time pressure. Routine clinical practice, because of time pressure, typically falls short of such ideal protocols, which might require as many as 1,500 beats to be acquired and analysed [14].
In this study, we present a simple way to establish the impact of spontaneous beat-to-beat variability, by simulating an optimization in which there is a known underlying optimum setting at which cardiac function is best, and alternative settings at which cardiac function decays away. In the simulation, we can then superimpose random variability simulating clinical beat-to-beat variability (the “noise”). This simulation gives information whose applicability is completely general across any method of optimization that is based on selecting the setting which gives the most favourable value of a cardiovascular measure.
We aimed to determine:
- how reliable optimization is;
- how one can quantify the confidence interval of any observed optimum;
- whether one should trust an apparent increment in cardiac function;
- whether the observation that optima change over time is a good reason to increase the frequency of repeat optimization; and finally
- whether there are any straightforward steps we can take to improve the quality of the optimization process.
Methods
Observed measurement = underlying signal + superimposed noise
We constructed a simulation to identify the impact of noise variance, which is the random variability occurring between one beat and another. This noise is superimposed on the signal, which is the “true” underlying effect of the pacemaker setting changes in real patient data. In clinical practice, signal and noise cannot be separated in individual raw data points because each such observed measurement contains both contributions mixed together (however, if replicate measurements are made, their inter-replicate variance can be subtracted from the total variance of the observed raw data to reveal the signal variance).
Simulation
In keeping with real patient data [15], the underlying signal in our model was constructed as an inverted parabola with its peak—the underlying optimum—at 140 ms. The vertical size of the parabola was scaled to have the desired signal magnitude. The magnitude was defined as the average of the squared deviation from the mean: this definition is computationally identical to that of variance. Separately, we programmed noise as normally distributed random values with mean zero and variance as desired. The signal and noise were added together to create the simulated observations. This process was repeated separately for each simulated patient. For each analysis in this study, 1,000 patients were simulated.
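The construction described above can be sketched in a few lines (illustrative Python; the parameter names and random seed are our own choices, not taken from the original simulation code):

```python
import numpy as np

rng = np.random.default_rng(0)

SETTINGS = np.array([60.0, 100.0, 140.0, 180.0, 220.0])  # tested AV delays, ms
TRUE_OPTIMUM = 140.0

def underlying_signal(signal_var: float) -> np.ndarray:
    """Inverted parabola peaking at the true optimum, scaled so that the
    mean squared deviation from its own mean equals signal_var."""
    shape = -(SETTINGS - TRUE_OPTIMUM) ** 2        # inverted parabola
    shape -= shape.mean()                          # centre on zero
    return shape * np.sqrt(signal_var / np.mean(shape ** 2))

def simulate_observation(signal_var: float, noise_var: float) -> np.ndarray:
    """One simulated patient-visit: underlying signal plus Gaussian noise."""
    noise = rng.normal(0.0, np.sqrt(noise_var), size=SETTINGS.size)
    return underlying_signal(signal_var) + noise

observed = simulate_observation(1.0, 1.0)
apparent_optimum = SETTINGS[np.argmax(observed)]   # may or may not be 140 ms
```

Repeating `simulate_observation` once per simulated patient reproduces the process described in the text.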
We tested signal and noise sizes over a wide range, but for clarity in this paper we have presented a limited number of values, ensuring that the full spectrum of relative sizes of signal magnitude and noise variance is encompassed.
Identification of optimum
We defined the optimal setting as the one which gave the highest measurement of cardiac function [3, 7, 16]. Because of the presence of noise, the setting selected as the optimum may not be the same as the underlying optimum. The measured hemodynamic parameter is not specified, but it could represent VTI [3], blood pressure or dP/dt. The measurement is expressed without physical units, for simplicity and generality. Because signal and noise will always have the same units, the choice of unit has no impact on the reliability of optimization.
Confidence intervals of the optimum
We simulated repeat optimizations within the same individual and collected the resulting optima in order to see how widely these optima were scattered. We defined the 95% confidence interval of a single optimization as 1.96 × the standard deviation of this collection of observed optima. This is the confidence interval that would be appropriate to report for each patient’s individual optimization, although by this method it is of course necessary to carry out several optimizations per patient in order to calculate the confidence interval.
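The procedure above can be sketched as follows (an illustrative Python example with an assumed signal shape and noise size):

```python
import numpy as np

rng = np.random.default_rng(1)
SETTINGS = np.array([60.0, 100.0, 140.0, 180.0, 220.0])

# Underlying signal: inverted parabola with its peak at 140 ms (arbitrary scale)
signal = -((SETTINGS - 140.0) / 80.0) ** 2

def one_optimization(noise_sd: float) -> float:
    """Select the setting with the highest noisy measurement."""
    noisy = signal + rng.normal(0.0, noise_sd, size=SETTINGS.size)
    return float(SETTINGS[np.argmax(noisy)])

# Repeat the optimization many times in the same simulated patient;
# the 95% CI of a single optimization is 1.96 x the SD of the optima.
optima = np.array([one_optimization(noise_sd=0.5) for _ in range(1000)])
ci_halfwidth = 1.96 * optima.std()
```

In the simulation we can afford thousands of repeats per patient; the clinical analogue would be a handful of back-to-back optimizations, as discussed later.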
Results
Impact of information content on consistency of detecting optimum, using a single beat at each setting
With signal and noise both configured to be the same size, the underlying curved shape of the signal was not always evident in the observed measurements (signal + noise). Nevertheless, in each run one of the settings inevitably yielded the highest measurement and was duly selected as the observed “optimum”. Since this was not always the true underlying optimum, the observed optima showed some scatter (as shown schematically in Fig. 2).
For each combination of signal and noise size, we quantified the observed scatter of optimization as the standard deviation of the difference between the optima obtained on two successive optimizations of the same patient. We calculated the information content from the known sizes of signal and noise.
When signal and noise were equal, there was an optimization scatter (standard deviation) of 45 ms. Making the signal magnitude smaller made the scatter of the observed optimum wider; making the signal larger made it narrower (Fig. 3, Spearman rank correlation coefficient ρ = 0.973, P = 0.021). When the noise was made smaller, the scatter of the observed optimum narrowed; when the noise was made larger, it widened (Fig. 3, ρ = 0.991, P = 0.0017).
The information content was the overwhelming determinant of the scatter of optima (ρ = 0.979, P < 0.001, Fig. 3). In the worst-case scenario, i.e. information content near zero, the scatter of optimization was ~80 ms; the implied range, 60–220 ms, covers the full range of settings over which the simulations were performed.
We can compare this to the expected behaviour of an entirely worthless optimization method, which would be to use no physiological information but simply to select one of the settings (60, 100, 140, 180, 220 ms) at random and announce it to be the optimum. From first principles, the mean “optimum” expected from such an approach is 140 ms, and the expected variance (average squared deviation from that mean) is simply \( (80^2 + 40^2 + 0^2 + 40^2 + 80^2)/5 = 3{,}200\,\text{ms}^2 \), giving an expected optimization scatter (SD of difference, SDD) of \( \sqrt{2} \times \sqrt{3{,}200} = 80\,\text{ms} \). This forms an effective limit on how poorly reproducible any optimization amongst these settings can be: SDD can never be more than 80 ms for this range of tested settings.
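The arithmetic for this "roulette" baseline can be checked in a few lines:

```python
# Expected scatter of a purely random choice among the five settings.
settings = [60, 100, 140, 180, 220]
mean = sum(settings) / len(settings)                               # 140.0
variance = sum((s - mean) ** 2 for s in settings) / len(settings)  # 3200.0 ms^2

# The difference of two independent random picks has twice the variance,
# so the SD of the difference (SDD) is sqrt(2) * sqrt(variance).
sdd = (2 * variance) ** 0.5
print(mean, variance, sdd)  # 140.0 3200.0 80.0
```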
Figure 3 shows that the information content needs to be rather high before the scatter of optimization even comes close to values that clinicians might consider acceptable. Even to get the SDD of successive optima down to 25 ms, for example, we need an information content of 0.91, i.e. a signal-to-noise ratio of 10:1.
Size of confidence interval of the observed optimum
We calculated the size of the confidence interval of the observed optimum for a range of possible signal and noise size combinations (and therefore information content) as shown in Table 2.
Impact of averaging multiple replicates on reproducibility
We tested the impact of changing a clinic’s optimization policy from making a single measurement at each pacemaker setting to making several raw replicates (3, 10, 30 or 100), with the average of those replicate raw measurements being plotted and used to select that patient’s optimum setting. This process improved the fidelity with which the observed measurements reflected the underlying physiological value.
Effectively, the absolute impact of noise was reduced. For example, using averages of 3 replicate raw measurements reduced the effective noise variance to one-third (Table 2). With this elevation of the signal-to-noise ratio, the shape of the underlying signal was more faithfully depicted in the observed measurements (Fig. 4, left panels), and the true optimum was more likely to be detected (Fig. 4, right panels).
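The variance arithmetic behind this improvement can be sketched directly (a minimal example assuming equal signal and noise sizes):

```python
# Averaging R replicates divides the effective noise variance by R,
# so information content rises from S/(S+N) to S/(S + N/R).

def effective_information_content(signal_var: float, noise_var: float,
                                  replicates: int) -> float:
    return signal_var / (signal_var + noise_var / replicates)

# With signal and noise of equal size (information content 0.5):
for r in (1, 3, 10, 30, 100):
    print(r, effective_information_content(1.0, 1.0, r))
# e.g. averaging 3 replicates raises 0.50 to 0.75
```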
Apparent versus true size of improvement on optimization
We measured the apparent size of the increase in the measured variable upon optimization. To make the results easy to interpret, we simulated patients to arrive in the optimization clinic with a reference setting of 100 ms and undergo an optimization procedure. In each case, the underlying true optimum is 140 ms, but because of noise variability, the setting selected as optimum may be this or another setting.
We calculated several aspects. First, the proportion of patients in whom the observed optimum was a correct reflection of the underlying true optimal AV delay.
Second, we calculated by how much the observed optimum appeared to be better than the reference state. In the simulation, we also knew how much the underlying optimum truly was better than the underlying reference state, and we reported this value too, for comparison. This enabled us to report the extent to which the apparent increase over- or underestimated the underlying benefit. It was always an overestimation, as shown in the column “Extent of Illusion” in Table 3. The size of this illusion was strongly determined by the information content, with lower information content leading to larger illusory improvements (ρ = −0.975, P < 0.001).
Third, we calculated the observed difference between the “best” setting and “worst” setting. Because we knew the underlying difference between the true best and worst, we were able to report this too, for comparison. Again we were thereby able to calculate the illusory element (Fig. 5).
Apparent change in optimum over time
We simulated repeating the optimization process after the passage of time, keeping the underlying optimum the same between sessions. We calculated whether the observed optima seemed to change between sessions and by how much.
For each signal and noise combination, we observed the resulting distribution of differences between the optima found at the first and second optimization visits. Figure 6 shows these distributions for information contents of 0.91, 0.50 and 0.09, respectively. Since the true underlying optimum did not change between visits, all changes in observed optimum were false. The proportion of patients giving this false apparent change in optimum is shown by dark shading in Fig. 6.
Low information content was strongly linked to the likelihood of false-positive detection of change in optimum (ρ = 0.975, P < 0.001).
When signal and noise were of equal size (information content ~0.50), about two-thirds of patients had spurious apparent changes in optimum between visits. Even when the signal-to-noise ratio was 10:1, giving an information content of 0.91, still one-third of patients had spurious apparent changes in optimum. Only when the signal-to-noise ratio reached 30:1 (information content ~0.97) did the proportion of patients with a false-positive apparent change in optimum fall to a clinically respectable 7% (top panel, Fig. 6).
Discussion
In this study, we have shown that uncritically selecting the pacemaker setting which gives the best value of a monitored variable might be little better than random selection amongst a set of AV settings. These findings are generally applicable to any optimization method that relies on testing a series of settings whilst monitoring some measure of cardiac function (such as echocardiographic velocity–time integral or pressure or any other cardiovascular marker) and then picking the setting that gives the highest measurement.
It is overwhelmingly important for signaltonoise ratio (information content) to be high, otherwise a series of illusions automatically arise in any clinical data analysis.
Illusion 1: “We have selected the true underlying optima”
One tends to assume that the setting which gives the highest measurement is the best. However, our study shows that only a small amount of variability is enough to seriously compromise this assumption, because the true biological effect may also be very small. With signal and noise of equal size, for example, in ~50% of cases (Fig. 6) the detected optimum will not be the true optimum but an erroneous alternative.
The confidence interval of a clinical optimization is never reported and (surprisingly) rarely asked for. A wide confidence interval would be immediately comprehensible to any clinician reviewing the result. The simplest way to calculate the confidence interval of an optimization is to carry it out on several occasions (e.g. immediately, one after the other) and calculate the standard deviation of the resulting optima. The 95% CI would be the mean ± 1.96 × standard deviation. To make this reasonably valid, we would need to perform at least three or four optimizations. Of course, this would be extremely time-consuming and is therefore not realistic for routine clinical practice with current monitoring techniques.
Alternatively, one can determine the information content of the clinic’s optimization process in general. This could be calculated once and then applied to all similar patients, without having to carry out multiple replicate optimizations in each new clinical patient. Fortunately, information content is easy to calculate: it is essentially the intraclass correlation coefficient, which can be calculated quickly for a representative group of patients by any laboratory. This is similar, in principle, to applying concepts of statistical power analysis to routine clinical practice.
Illusion 2: “The optimization increased flow (or pressure) by X and was therefore worthwhile”
It is tempting to average the apparent increments in velocity–time integral (or whatever measure was used for optimization) achieved in an optimization service, and believe (a) that the process is almost always increasing stroke volume, (b) that the size of the average increase in stroke volume is ‘X’, which sounds clinically worthwhile, and (c) that since the increment is statistically significant it is not likely to be a chance finding.
This study reveals all three of these tempting conclusions to be wrong. First, the setting selected as apparently optimum will always have a higher measured cardiac function than the reference setting (except where the reference setting happens to be selected as the optimum). Even if an optimization method was just roulette amongst n tested settings, then in (n−1)/n cases (i.e. almost always) it would be selecting an optimum different from reference. Therefore, the statement that stroke volume is higher on the optimal setting is meaningless.
Second, ironically, the worse the optimization method, the larger the illusionary increase in stroke volume.
Third, unless carefully constructed [8], the statistical test is assessing whether changes in stroke volume are randomly distributed (some positive, some negative) with a mean of zero. But each patient’s increment will always be either positive or zero (never negative), so the average increment will always be statistically significantly positive unless the sample size is very small. Indeed, the worse the optimization method, the more likely the apparent increment is to be statistically significantly positive.
Illusion 3: “The optimum has changed between X months and now”
A well-established and indispensable optimization clinic may start to consider how often these optimizations should be carried out [7, 9]. Is the contrast between patients’ optima on subsequent visits a useful guide? Our analyses now show that if a technology has poor information content (low signal-to-noise ratio), reproducibility will be poor. For example, when signal and noise are approximately equal (information content = 0.5), at 6 months (or any other time) the optimum will falsely appear to have changed, purely through noise, 65% of the time (Table 4). Ironically, the worse the optimization process, the more the data will seem to encourage more frequent optimizations. The giveaway clue would be that however frequently we reoptimize, a similar proportion of patients would still seem to need a change in setting.
Illusion 4: “We should not waste time making multiple replicate measurements at each setting in clinical practice”
In a busy clinical department, it may seem an unnecessary multiplication of work to make more than one measurement at each setting. Instead, it may seem rational to concentrate on ensuring that each measurement is acquired and analysed properly by well-trained staff. Unfortunately, the reasons for beat-to-beat variability in measurement are many, and inadequate skill on the part of the sonographer or interpreter is typically not the dominant contribution. Rather, there is substantial beat-to-beat variability in transvalvular blood flow, ventricular volumes, arterial blood pressure and dP/dt. These variations may be due to respiration and numerous other less-easily monitored physiological processes that take place over periods of seconds and minutes. They will not disappear through wishful thinking alone. Instead, averaging multiple replicate acquisitions gives us a powerful method to reduce the effective noise. Effective noise (the variance of the averaged value from R replicate raw measurements) falls in direct proportion to 1/R, providing a simple way to improve the information content. Another strategy is to elevate heart rate, since this increases the size of the signal [12].
Illusion 5: “We should optimize using whatever measurement method we are most familiar with”
Inter and intraobserver variability may not be the dominant source of noise, rather there may well be genuine biological variation between beats. Even with excellent clinical acquisition and measurement technique, if the biological variability is large in comparison with the true signal between settings, information content will be low. We should quantify information content directly and not assume that the technique with which we are most familiar has a high information content.
Illusion 6: “Between separate beats, variability in my laboratory is only X%, therefore this measure is suitable for use in optimization”
That X%, being the ratio between variability and the mean measurement, is not the relevant ratio for quality of optimization. Reliability of optimization depends on the ratio between beat-to-beat variability (noise) and between-setting variability (signal), and this ratio is much less favourable than X% suggests. For example, a VTI measurement might have a mean value of 10 cm and a standard deviation of 1 cm, giving a coefficient of variation of 10%. However, the relevant signal is not the 10 cm mean but the standard deviation between settings, which may be only (for example) 1 cm. In this case, the information content would be \( \frac{1}{1 + 1} = 0.5 \). The naive figure of 10% variability, in isolation, is of no relevance.
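The worked example above, expressed in code (the numbers are the illustrative ones from the text):

```python
# A 10% coefficient of variation says nothing, by itself, about how
# reliable an optimization will be: what matters is signal vs noise.
mean_vti = 10.0     # cm, mean measurement
noise_sd = 1.0      # cm, beat-to-beat SD at a fixed setting
signal_sd = 1.0     # cm, SD of true values between settings

cv = noise_sd / mean_vti                                  # 0.10, "only 10%"
ic = signal_sd ** 2 / (signal_sd ** 2 + noise_sd ** 2)    # 0.5
print(cv, ic)  # 0.1 0.5
```

An apparently reassuring 10% coefficient of variation thus coexists with an information content of only 0.5.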
A simple method of calculating information content of a cardiovascular measure used clinically for optimization
Because this study was carried out using computer simulation, it was possible to know the size of the true underlying signal, as well as the size of the noise, and thereby state the information content directly.
In vivo, one can calculate information content by measuring total variance and noise variance, since although the underlying signal magnitude cannot be directly observed, it is the difference between the two. We need to carry out several optimizations in the same patients. Suppose one carries out R replicate optimizations in one patient. First, calculate the variance of all the raw measurements (\( V_{\text{raw}} \)). Then calculate the mean measurement at each pacemaker setting, and then the variance (\( V_{m} \)) of these means. \( V_{m} \) will tend to be smaller than \( V_{\text{raw}} \), because the impact of noise is reduced by the averaging process. The lower the information content of the measurement, the larger its noise in comparison with its signal, and therefore the more markedly \( V_{m} \) will differ from \( V_{\text{raw}} \). In brief, the information content is approximately the ratio \( V_{m}/V_{\text{raw}} \) when R is large. More elaborately, accommodating for R not always being large, the information content is \( \frac{R\,V_{m} - V_{\text{raw}}}{(R - 1)\,V_{\text{raw}}} \), since \( V_{\text{raw}} \) contains signal plus noise whereas \( V_{m} \) contains signal plus only 1/R of the noise.
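This calculation can be sketched in code (a minimal example; the function name and synthetic data are ours, and the small-sample correction follows from V_raw containing signal plus noise while V_m contains signal plus only 1/R of the noise):

```python
import numpy as np

def information_content_from_replicates(data: np.ndarray) -> float:
    """Estimate information content from raw measurements of shape
    (n_settings, R): R replicate measurements at each tested setting."""
    n_settings, R = data.shape
    v_raw = data.var()               # total variance: signal + noise
    v_m = data.mean(axis=1).var()    # variance of setting means: signal + noise/R
    signal_var = (R * v_m - v_raw) / (R - 1)
    return max(signal_var, 0.0) / v_raw

# Synthetic check with known, equal signal and noise variances
rng = np.random.default_rng(3)
true_signal = np.array([-1.0, 0.5, 1.5, 0.5, -1.5])   # mean 0, variance 1.2
data = true_signal[:, None] + rng.normal(0.0, np.sqrt(1.2), size=(5, 50))
print(information_content_from_replicates(data))  # close to the true 0.5
```

With no noise at all, the same function returns 1.0, as it should.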
An example of how to calculate information content in a single patient, using only standard spreadsheet software, is shown in Fig. 7.
In practice, the examples of published data on information content in Table 1 show that even with timeconsuming methodology, including a high number of replicates and many beats measured per replicate, information content can still be low.
How many replicates are really needed in clinical practice?
Clinicians cannot afford to waste time in clinical practice performing unnecessarily numerous measurements during optimization. Nor, though, can they afford to waste time performing apparent optimizations that they should know will be worthless before the patient even lies down on the couch. To choose the number of replicates rationally, it is vital to decide how precisely the patient’s optimum needs to be identified.
In clinical practice, each individual physician can decide what level of precision is suitable in their context and can easily calculate the number of replicates required to achieve this as long as the information content of a single replicate of their local method is known. The number of replicates required for a range of such combinations is shown in Table 4.
For example, a clinician may wish to know the AV optimum with a 95% confidence interval of ±10 ms. How many replicates are needed depends on the heart rate at which optimization is to be carried out (Table 1). Studies at resting heart rate have found rather low information contents around 0.3.
If a confidence interval of optimization of ±10 ms is wanted in this context, from Table 4 it can be seen that the number of replicates needing to be conducted at each setting is 59.
At higher heart rates such as 90 bpm, information content is approximately 0.5–0.7 for several methods (Table 1). Achieving a confidence interval of ±10 ms now only requires 11–25 replicates, as shown in Table 4, which might be achievable. At higher heart rates still, the number of replicates needed continues to fall (Fig. 8).
Adjustment of VV delay, in contrast, exerts a much smaller signal effect on physiological measurements than AV adjustment, by a factor of about 5- to 7-fold [15]. Even if the variation in blood pressure is just fivefold smaller, the information content is roughly 25-fold smaller, because it is the variances (squared deviations) that matter. Therefore, even assuming a favourably elevated heart rate (90 bpm), a favourable range of AV optimization information contents of 0.5–0.7, and a possible relative signal variance for VV of (1/7)^{2} to (1/5)^{2}, the information content for VV would lie between 0.01 and 0.03. It can be seen from Table 4 that this necessitates well over 500 replicates at each setting to achieve the desired precision of optimization.
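The rough VV arithmetic above can be reproduced directly (using the approximation, valid when the resulting values are small, that information content scales in proportion to signal variance):

```python
# Signal amplitude 5-7x smaller -> signal variance 25-49x smaller.
# For small resulting values, information content shrinks roughly in
# proportion, so the AV values of 0.5-0.7 fall dramatically for VV.
ic_av_range = (0.5, 0.7)
amplitude_ratios = (5, 7)

vv_estimates = sorted(ic_av / ratio ** 2
                      for ic_av in ic_av_range
                      for ratio in amplitude_ratios)
print([round(v, 3) for v in vv_estimates])  # spans roughly 0.01 to 0.03
```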
Although there are detailed descriptions of meticulous protocols [14], even putting together 1,500 beats of data does not guarantee high information content. Multi-beat averages reduce noise, but if the signal is small, information content may still be small. A high heart rate raises signal magnitude [12] and has allowed a higher information content to be obtained.
Clinical implications
No clinical optimization protocol currently specifies, with quantitative reasoning, a number of replicates to be carried out. This may be because the impact of noise has not been considered or measured. It may not be rational to conduct an optimization without ensuring adequate precision of the optimum. Although there may be a clinical imperative to be seen to be doing something, we should not necessarily give in to perceived pressure to conduct a placebo procedure. Worse still, if the apparent optimization is in fact no different from randomization amongst a constrained range, it is inescapable that half of all such procedures worsen cardiac efficiency rather than improve it.
If we want our optimization service to be delivering clinically valuable results, there are three generic steps we should take. First, we should have as large an underlying signal as possible. For blood pressure changes, it has been reported that the signal is larger in absolute terms at higher heart rates than at lower heart rates [12].
Second, we should make the noise as small as possible. We should not criticise operators for inadequate care when they may simply be measuring genuine biological variability correctly. Instead, we should design our measured variable and protocol to have a high information content.
Third, we can average multiple replicate measurements of cardiac function at each AV delay setting. R-fold replication has the same beneficial effect as reducing the noise variance of individual measurements by R-fold. This can be applied to any measurement technique, but of course carries the cost of increased labour.
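The effect of R-fold replication can be written directly: averaging R independent measurements divides the noise variance of the mean by R, so information content rises from S/(S + N) to S/(S + N/R). A minimal sketch in our own notation, not the article's:

```python
def information_content_after_averaging(signal_var, noise_var, r):
    """Information content when each setting's value is the mean of r
    replicates: the noise variance of a mean of r independent
    measurements is noise_var / r."""
    return signal_var / (signal_var + noise_var / r)

# With signal and noise of equal size, one replicate gives 0.5 and
# four replicates give 0.8:
for r in (1, 2, 4, 8):
    print(r, information_content_after_averaging(1.0, 1.0, r))
```

Note the diminishing returns: each doubling of R closes only part of the remaining gap to an information content of 1.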
Realization of these inherent properties of optimization should encourage us to report, as a matter of routine, the noise and information content of our monitored variable in our own hands. We should therefore be able to present a confidence interval with every optimization we carry out. This may be uncomfortable.
We emphasize that in this article we are not recommending one method of measurement (e.g. VTI or pressure) over another, nor suggesting whether measurement should be invasive or non-invasive. The choice of measurement modality for optimization should not be prejudged by personal preference or whim, but rather selected on the balance of relevant properties. The most important property of an optimization (a process that recommends small adjustments to pacemaker settings) is the precision with which the recommendation is given. This is a neutral article which simply provides a language with which to rationally evaluate, discuss and improve this precision.
Practical recommendations
This analysis is completely general: it applies to all optimization schemes that test a range of settings and select the one with the greatest measurement. Any laboratory conducting optimization can use Eq. 2 and Fig. 8 to calculate its typical information content. In concert with device physicians, who can specify an acceptable confidence interval, the laboratory can then see how many replicates are required.
Such an estimate of the number of replicates required applies only to an “average” patient in the population. The size of the signal may vary between patients. For example, one patient may have a particularly critical dependency on the AV setting and another a below-average dependency. The former would need fewer replicates to identify the optimum within a given size of confidence interval, and the latter would need more. Similarly, one patient may have more noise for any of many reasons, including deeper respiration due to acute physiological distress, chronic lung disease that enhances ventilatory fluctuation in haemodynamics, obesity impairing image quality, or agitation impairing maintenance of probe position. Such a patient would need more replicates.
But whilst individual patients may differ in their strict needs for replication, all patients will need more replicates if the optimization technique has poor information content. To be credible, any protocol document (which specifies an optimization technique) must at least give quantitatively sound guidance on the number of replicates needed for an average patient to obtain an optimization with a level of precision widely considered reasonable. If a protocol does not give such guidance, clinical time pressures may lead to all patients having optimizations that are, on average, worthless (helping half slightly and harming half slightly).
Conclusions
Information content, the proportion of the observed differences in the measurements at different settings that is genuinely due to the change in settings, has an overwhelmingly important impact on the meaningfulness of any pacemaker optimization process. Although easy to measure, it is rarely reported or commented on, and may be surprisingly low unless steps are taken to improve it.
Low information content leads to frequent misidentification of the optimum. However, worse than this, it inflates the apparent benefit of optimization: counterintuitively, the worse the optimization method, the better it will superficially appear (unless one asks about information content).
Worst of all, because low information content makes apparent optima more variable, the poorer the optimization method, the more frequently one will feel compelled to reoptimize the patient (unless we ask about information content).
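Both effects are easy to reproduce in a few lines of simulation in the spirit of the mathematical model used here: five candidate AV delays, an underlying curve of cardiac function, and noise of the same size as the signal. The curve and values below are illustrative assumptions, not the article's exact model:

```python
import random
import statistics

random.seed(1)
settings = [60, 100, 140, 180, 220]            # candidate AV delays (ms)
true_value = {60: 0.0, 100: 0.8, 140: 1.0,     # illustrative underlying
              180: 0.8, 220: 0.0}              # signal, peaking at 140 ms

def one_optimization(noise_sd):
    """Measure each setting once with Gaussian noise and pick the
    apparent best; return (chosen setting, apparent peak value)."""
    measured = {s: true_value[s] + random.gauss(0.0, noise_sd)
                for s in settings}
    best = max(measured, key=measured.get)
    return best, measured[best]

optima, peaks = zip(*(one_optimization(noise_sd=1.0) for _ in range(5000)))

# Replicate optima scatter widely rather than landing on 140 ms:
print("SD of apparent optimum (ms):", statistics.pstdev(optima))
# And the apparent benefit is inflated: the average apparent peak
# exceeds the true maximum of 1.0, because the noisiest reading wins.
print("mean apparent peak:", statistics.mean(peaks))
```

The second printed value illustrates the counterintuitive point above: selection of the maximum of noisy measurements systematically overestimates the benefit, and the noisier the technique, the larger the overestimate.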
Information content is easy to improve for any technique. All that is needed is (a) to use a technique where the underlying difference between settings is as large as possible, (b) to use a technique with beattobeat variability as small as possible and (c) to make multiple measurements at each setting and calculate the average.
If, despite these steps, information content is still low, clinical resources could be saved by selecting a setting arbitrarily or even at random, with no additional loss to the patient’s physiology. We do not make this suggestion for fun but to point out the seriousness of the present situation. Optimization is not optimization when it is roulette.
References
1. Valzania C, Eriksson M, Boriani G, Gadler F (2008) Cardiac resynchronization therapy during rest and exercise: comparison of two optimization methods. Europace 10:1161–1169
2. Scharf C, Li P, Muntwyler J, Chugh A, Oral H, Pelosi F, Morady F, Armstrong WF (2005) Rate-dependent AV delay optimization in cardiac resynchronization therapy. PACE 28:279–284
3. Barold SS, Ilercil A, Herweg B (2008) Echocardiographic optimization of the atrioventricular and interventricular intervals during cardiac resynchronization. Europace 10:88–95
4. Gold MR, Niazi I, Giudici M, Leman RB, Sturdivant JL, Kim MH, Yu Y, Ding J, Waggoner AD (2007) A prospective comparison of AV delay programming methods for haemodynamic optimization during cardiac resynchronization therapy. J Cardiovasc Electrophysiol 18:490–496
5. Kass DA, Chen CH, Curry C, Talbot M, Berger R, Fetics B, Nevo E (1999) Improved left ventricular mechanics from acute VDD pacing in patients with dilated cardiomyopathy and ventricular conduction delay. Circulation 99:1567–1573
6. Reiter MJ, Hindman MC (1982) Haemodynamic effects of acute atrioventricular sequential pacing in patients with left ventricular dysfunction. Am J Cardiol 49:687–692
7. Zhang Q, Fung JW, Chan YS, Chan HC, Lin H, Chan S, Yu CM (2008) The role of repeating optimization of atrioventricular interval during interim and long-term follow-up after cardiac resynchronization therapy. Int J Cardiol 124:211–217
8. Turcott RG, Witteles RM, Wang PJ, Vagelos RH, Fowler MB, Ashley EA (2010) Measurement precision in the optimization of cardiac resynchronization therapy. Circ Heart Fail 3:395–404
9. Porciani MC, Dondina C, Macioce R, Demarchi G, Cappelli F, Lilli A, Pappone A, Ricciardi G, Colombo PC, Padeletti M, Jelic S, Padeletti L (2006) Temporal variation in optimal atrioventricular and interventricular delay during cardiac resynchronization therapy. J Card Fail 12:715–719
10. Anselmino M, Antolini M, Amellone C, Piovano E, Massa R, Trevi G (2009) Optimization of cardiac resynchronization therapy: echocardiographic versus semiautomatic device algorithms. Congest Heart Fail 15:14–18
11. Auricchio A, Stellbrink C, Block M, Sack S, Vogt J, Bakker P, Klein H, Kramer A, Ding J, Salo R, Tockman B, Pochet T, Spinelli J (1999) Effect of pacing chamber and atrioventricular delay on acute systolic function of paced patients with congestive heart failure. Circulation 99:2993–3000
12. Whinnett ZI, Davies JE, Willson K, Chow AW, Foale RA, Davies DW, Hughes AD, Francis DP, Mayet J (2006) Determination of optimal atrioventricular delay for cardiac resynchronization therapy using acute non-invasive blood pressure. Europace 8:358–366
13. van Geldorp IE, Delhaas T, Hermans B, Vernooy K, Broers B, Klimusina J, Regoli F, Faletra FF, Moccetti T, Gerritse B, Cornelussen R, Settels JJ, Crijns HJGM, Auricchio A, Prinzen FW (2010) Comparison of a non-invasive arterial pulse contour technique and echo Doppler aorta velocity–time integral on stroke volume changes in optimization of CRT. Europace (in press)
14. Auricchio A, Stellbrink C, Sack S, Block M, Vogt J, Bakker P, Mortensen P, Klein H (1999) The pacing therapies for congestive heart failure (PATH-CHF) study: rationale, design, and endpoints of a prospective randomized multicenter study. Am J Cardiol 83(5B):130D–135D
15. Whinnett ZI, Davies JE, Willson K, Manisty CH, Chow AW, Foale RA, Davies DW, Hughes AD, Mayet J, Francis DP (2006) Haemodynamic effects of changes in atrioventricular and interventricular delay in cardiac resynchronisation therapy show a consistent pattern: analysis of shape, magnitude and relative importance of atrioventricular and interventricular delay. Heart 92:1628–1634
16. Nishimura RA, Hayes DL, Holmes DR Jr, Tajik AJ (1995) Mechanism of hemodynamic improvement by dual-chamber pacing for severe left ventricular dysfunction: an acute Doppler and catheterization hemodynamic study. J Am Coll Cardiol 25:281–288
Acknowledgments
Imperial College London has conducted research into pacemaker optimization funded by Medtronic Incorporated. The British Heart Foundation has funded several of the authors: DPF FS/10/038/28268, PP PG08/114, KW PG07/065. AH and DPF received funding support from NIHR biomedical research centre scheme and the British Heart Foundation Research Excellence Award scheme.
Open Access
This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
Cite this article
Pabari, P.A., Willson, K., Stegemann, B. et al. When is an optimization not an optimization? Evaluation of clinical implications of information content (signal-to-noise ratio) in optimization of cardiac resynchronization therapy, and how to measure and maximize it. Heart Fail Rev 16, 277–290 (2011). https://doi.org/10.1007/s10741-010-9203-5
Keywords
 Cardiac resynchronization therapy
 Biventricular pacemaker
 Optimization
 Echocardiography
 Velocity–time integral
 Blood pressure