Introduction

Bacterial infection remains a major cause of suffering and death, particularly in patients with impaired host defence. Severe sepsis and septic shock are the most common causes of mortality in critically ill patients and account for 10% of ICU admissions [1]. Despite over 20 years of intense basic science and epidemiologic research, mortality has remained at approximately 50% [2] with the average annual cost in the USA for treating severe sepsis amounting to $16.7 billion nationally [3]. In spite of immense and detailed knowledge regarding mechanisms, pathways, mediators, transcription factors, receptor levels, and gene activation involved in the host response to severe sepsis leading to organ dysfunction, the understanding of the whole system working in concert is limited.

Clinically, we diagnose infection based on an increased absolute value of one or more vital signs along with other clinical evidence of both a pathogen and a host response. A diagnosis and intervention are 'triggered' whenever vital signs exceed specific thresholds [4]. Although it is the best system currently available, waiting for a fever to spike or for tachycardia to develop before recognizing alteration of the host response is a crude and potentially late means of diagnosing infection. Clinicians are well aware that late diagnosis, unexpected and rapid deterioration, ICU admission, and organ dysfunction represent common case histories of critically ill patients. Early diagnosis, rapid and adequate resuscitation, restoration of oxygen delivery, timely institution of appropriate antibiotics, and source control can undeniably save lives in patients with severe infection. In fact, a study by Kumar and colleagues [5] demonstrated that each hour of delay in the initiation of effective antimicrobial therapy was associated with a mean decrease in survival of 7.6%.

The concept of early goal-directed therapy (EGT) was popularized by Rivers and colleagues [6] in patients who presented with symptoms of a systemic infection. In a large single-centre randomized trial, patients assigned to EGT, which involved maintaining adequate systemic oxygen delivery and tissue perfusion, had less subsequent organ dysfunction and improved hospital survival compared with those receiving standard therapy. In a recent publication examining EGT in severe sepsis, Rivers and colleagues [7] concluded that the EGT protocol resulted in improved patient outcome and cost-effectiveness in treating sepsis. However, they noted that EGT still requires further development and optimization. For example, the EGT protocol comprises several interventions, including the administration of fluids, vasoactive agents, and red blood cells; it is therefore unclear which aspect(s) of the protocol were actually responsible for the reduction in death in the EGT group.

Serologic tests and biomarkers, such as C-reactive protein, endotoxin, brain natriuretic peptide, procalcitonin, and endogenous protein C, may also aid in the early diagnosis of severe infection in the future [8-21]. At present, however, their use is limited by a lack of diagnostic capability and by slow turnaround of results.

Given the emergence of early resuscitation as a key factor in improving outcome in sepsis, there is need to develop more rapid, sensitive, and specific diagnostic strategies that could complement or surpass the existing ones.

The host response to infection involves a dynamic web of interactions between organs, cells, mediators, molecules, and genes - thus, giving rise to a complex non-linear system [22]. Complex systems are loosely defined as systems with properties that cannot be wholly understood by understanding the parts of the system [23]. Complex systems science is distinct from and complementary to analytical science and epidemiology and offers an innovative means to address the problem of diagnosis, prognosis, and prediction of disease. Specifically, complex systems science uses a novel analysis of rhythms or fluctuations over time that encompasses concepts such as nonlinear dynamics, fractals, and 'chaos theory'. Notably, this approach of integrating information over time in individual patients contrasts with traditional methods, which use population-based statistical models to study the absolute value of clinical variables [24]. This complex systems approach of analyzing intervals-in-time or patterns of variation over time has been termed 'variability analysis'.

Variability analysis is essentially a collection of mathematical and computational techniques that characterize biologic time series with respect to their overall fluctuation, spectral composition, scale-free variation, and degree of irregularity or complexity. A growing body of work on patterns of variation or fluctuation in physiologic time series, particularly heart rate variability (HRV) analysis, has shown that such analyses provide clinically useful and otherwise 'hidden' information about the health of the system producing the dynamics. For example, Fourier spectral analysis of heart rate (HR) data has shown that frequency profiles characterizing HRV are altered during illness and that the degree of alteration of these frequency profiles correlates with illness severity in conditions ranging from hypovolaemia [25] to heart failure [26-28], from hypertension [29, 30] to coronary artery disease [31, 32], and from angina [33] to myocardial infarction [34]. These studies demonstrate that HRV is consistently and reproducibly altered in illness, and that the degree of HRV alteration is prognostic of illness severity.

There are numerous analysis techniques used to evaluate altered variability or altered patterns of change over time. The most straightforward technique for measuring HRV involves the computation of the standard deviation (SD) of the time series of intervals between consecutive heart beats (R-R intervals). Other measures, such as the SD of 5-minute averages with specific thresholds (for example, SD <70 ms), have been extensively studied [35, 36]. These time domain measures of SD are complemented by frequency domain analyses, which evaluate the frequency spectrum of an HR signal. According to Fourier theory, any time series may be considered as a sum of sinusoidal oscillations with distinct frequencies. Conversion from the time domain to the frequency domain is made possible with the fast Fourier transform (FFT) or the discrete Fourier transform, which can quantify the spectral content of the signal in defined ranges of frequencies [37]. Techniques like wavelet analysis are referred to as 'time-frequency domain analyses'; the wavelet analysis technique determines not only the frequency components of the input signal but also their locations in time [38, 39]. In order to quantify the degree of information, disorder, or complexity of a time series, entropy analyses produce single (for example, approximate entropy (ApEn) and sample entropy (SampEn)) or multiple values (for example, multiscale entropy (MSE)) that reflect the degree of irregularity [40-42]. Providing yet another distinct evaluation, scale-invariant analyses measure patterns of variation that are present across a range of time scales. Given that the frequency of occurrence of variations is inversely proportional to their magnitude and that magnification of the time series reveals similar patterns, it is possible to quantify scale-invariant variation (utilizing detrended fluctuation analysis or power law analysis) to facilitate comparison between time periods [43].
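To make the time domain measures concrete, the sketch below computes the SD of R-R intervals (often called SDNN) and the root mean square of successive differences (RMSSD, a beat-to-beat measure used later in this review); the series and numbers are synthetic illustrations, not data from any cited study:

```python
import numpy as np

def time_domain_hrv(rr_ms):
    """Time domain HRV measures on R-R intervals given in milliseconds.

    SDNN:  standard deviation of all R-R intervals (overall variability).
    RMSSD: root mean square of successive differences (beat-to-beat
           variability).
    """
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    return sdnn, rmssd

# Synthetic illustration: two series with the same mean R-R interval but
# very different variability, mimicking the loss of variability that the
# studies reviewed below associate with sepsis.
rng = np.random.default_rng(0)
normal_rr = 800 + 50 * rng.standard_normal(300)    # SD around 50 ms
reduced_rr = 800 + 5 * rng.standard_normal(300)    # variability collapsed
print(time_domain_hrv(normal_rr))
print(time_domain_hrv(reduced_rr))
```

Both metrics shrink together when overall variability collapses, which is why several of the studies reviewed below track them in parallel.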

Taking inspiration from previous studies that used these techniques to quantify HRV and that have consistently shown an inherent link between HRV and illness, the study of HRV and its association with infection has emerged as an important stream of research for the diagnosis and management of sepsis and septic shock. In this review article, we analyze the use of HRV analysis as a means of establishing diagnosis of infection and its capacity to prognosticate severity of infection. In addition, we evaluate the limitations of this technology in its current state, identify future challenges, and propose strategies that may render it a useful clinical bedside application for the management and treatment of sepsis.

Early diagnosis of infection with heart rate variability analysis

Several studies have examined the usefulness of HRV analysis for early diagnosis of infection, particularly in neonates and infants. The majority of work in this area was done by Griffin, Moorman, and colleagues, who developed a novel and proprietary measure, heart rate characteristics (HRC), to assess HRV in infants at risk of developing sepsis. In their studies, they reported that abnormal HRC with reduced variability and transient decelerations preceded neonatal/infant sepsis [44-49]. A predictive model, based on multivariable logistic regression, was developed for the early detection of sepsis in infants. In one of the clinical trials, data were collected from 678 infants who spent more than 7 days in the University of Virginia Neonatal ICU from July 1999 to July 2003 [44]. HRC were measured for 137 subjects whose blood cultures were positive and confirmed sepsis. The HRC were found to be significantly correlated with sepsis, and receiver-operating characteristic analysis yielded an accuracy of 73%. A major finding of the research by Griffin and colleagues is that HRC computation, which involves a continuous monitoring and analysis paradigm, independently complements information available through conventional point-in-time laboratory tests and vital sign assessment. Their studies reported abnormalities in HRC that were noticeable 12 to 24 hours prior to the clinical diagnosis of sepsis based on traditional clinical markers (for example, fever, tachycardia, or positive cultures). Although Griffin and colleagues do not provide a numerical value for the specificity achieved by the HRC for early diagnosis of neonatal sepsis, they do mention that 'not all abnormal readings inevitably indicate imminent sepsis or other untoward events'. This observation seems to point towards a diminished specificity of the HRC for early diagnosis of neonatal sepsis.

In another study, Cao, Griffin, Moorman and colleagues [50] employed a measurement of the degree of nonstationarity in the heart rate signal to predict neonatal sepsis. Nonstationarity is defined as the tendency of the statistical properties of a time series (for example, mean, SD) to vary during intervals throughout the time series. They measured the degree of nonstationarity using the Kolmogorov-Smirnov test, whereby actual HR data were compared and correlated with a purely stationary (artificially generated) dataset. Although the authors reported abnormal HRC of reduced variability and transient decelerations 12 to 24 hours prior to neonatal sepsis, in addition to a positive correlation between HRC and clinical data as per their earlier work [45], little additional clinical benefit was attributed to the new stationarity statistic itself [50]. The main conclusion was that neonatal heart rate data are predominantly nonstationary and that they become more nonstationary in the early course of sepsis.
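A rough sketch of this idea follows. It is simplified from the cited study (which compared real heart rate data against artificially generated stationary series): here the two-sample Kolmogorov-Smirnov statistic between the two halves of an R-R series serves as a crude nonstationarity index, and all function names and series are illustrative:

```python
import numpy as np

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the empirical CDFs of x and y (0 for identical samples)."""
    x, y = np.sort(x), np.sort(y)
    data = np.concatenate([x, y])
    cdf_x = np.searchsorted(x, data, side='right') / len(x)
    cdf_y = np.searchsorted(y, data, side='right') / len(y)
    return np.max(np.abs(cdf_x - cdf_y))

def nonstationarity_index(rr_ms):
    """Crude nonstationarity index: KS distance between the first and
    second halves of an R-R series. Near 0 for a stationary series."""
    rr = np.asarray(rr_ms, dtype=float)
    half = len(rr) // 2
    return ks_statistic(rr[:half], rr[half:])

rng = np.random.default_rng(1)
stationary = 800 + 30 * rng.standard_normal(400)    # stable statistics
drifting = stationary + np.linspace(0, 120, 400)    # mean drifts by 120 ms
print(nonstationarity_index(stationary))
print(nonstationarity_index(drifting))
```

A drifting mean, as in the second series, drives the index towards 1, mirroring the finding that heart rate data become more nonstationary early in sepsis.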

The elements of HRC consist of various statistics and measures for analyzing HRV. One such measure developed by Kovatchev, Griffin and colleagues [51] for the early diagnosis of sepsis and systemic inflammatory response syndrome in neonates is the sample asymmetry analysis (SAA). SAA provides a measure of changes in the shape of frequency histograms of R-R intervals, resulting from reduced variability and transient decelerations. In this study, conducted on 158 infants admitted to the ICU at the University of Virginia Hospital, the SAA statistic of R-R intervals increased significantly from its baseline value of 3.3 as early as 3 to 4 days prior to the onset of sepsis and systemic inflammatory response syndrome. The authors conclude that these results are clinically useful for predicting early onset of sepsis. However, they also point out the limitations of the technique. The major limitation is that, even though sample asymmetry is significantly elevated before sepsis, it shows considerable variation amongst subjects, making it difficult to define a threshold for the onset of sepsis that is valid for all infants. That is, an elevated SAA value in one infant may signify sepsis, although the same elevated value may also be observed in another, healthy infant - thus compromising the specificity of the absolute value of the SAA metric.
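The published SAA statistic is more elaborate than can be reproduced here, but its core idea - comparing the deviation mass on either side of the median of the R-R histogram - can be sketched as follows. The formulation, function name, and series below are illustrative assumptions, not the authors' exact method:

```python
import numpy as np

def sample_asymmetry(rr_ms):
    """Hypothetical, simplified asymmetry ratio: quadratic deviation
    mass above the median R-R interval (decelerations, i.e. longer
    beats) divided by the mass below it (accelerations). A symmetric
    series gives a value near 1; transient decelerations on a
    low-variability baseline inflate it."""
    rr = np.asarray(rr_ms, dtype=float)
    dev = rr - np.median(rr)
    r_above = np.sum(dev[dev > 0] ** 2)    # deceleration side
    r_below = np.sum(dev[dev < 0] ** 2)    # acceleration side
    return r_above / r_below

rng = np.random.default_rng(2)
symmetric = 450 + 10 * rng.standard_normal(500)    # neonatal-range R-R, ms
decelerating = symmetric.copy()
decelerating[::50] += 80                           # transient decelerations
print(sample_asymmetry(symmetric))
print(sample_asymmetry(decelerating))
```

The quadratic weighting makes the ratio very sensitive to a few long beats, which is the histogram skew SAA was designed to capture.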

In addition to HRC, Griffin, Moorman, and colleagues proposed the SampEn analysis [52] of HR time series, a variant of the ApEn analysis [40], to study sepsis in neonates. They studied 89 infants admitted to a tertiary care neonatal ICU. Numerical simulations, based on the SampEn statistic, were performed on 21 subjects who suffered from episodes of sepsis. The major finding was that entropy falls or HR becomes more regular as early as 24 hours before clinical signs of sepsis appear. The main drawback associated with the SampEn metric for early diagnosis of neonatal sepsis is that of false positives. Griffin and colleagues observe that SampEn falls due to increased regularity of neonatal HR time series, which is indeed associated with sepsis. However, they note that such a decrease in SampEn may also be observed when there are spikes or noise in neonatal HR time series, which does not signify increased regularity or sepsis. Therefore, compromised data quality directly impacts the specificity of the SampEn metric.
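SampEn has a compact standard definition: the negative natural logarithm of the conditional probability that two sequences matching for m points (within tolerance r, Chebyshev distance) also match at the (m+1)th point, with self-matches excluded. A minimal sketch with the commonly used parameters m = 2 and r = 0.2 x SD follows; the test signals are synthetic:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy (SampEn) = -ln(A/B), where B counts pairs of
    length-m templates within tolerance r and A counts the same for
    length m+1. Self-matches are excluded. Lower values indicate a
    more regular (less complex) series."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    n = len(x)

    def count_matches(length):
        # Use the same n - m template start points for both lengths so
        # that the counts A and B are directly comparable.
        templates = np.array([x[i:i + length] for i in range(n - m)])
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance to all later templates (no self-matches).
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b)

rng = np.random.default_rng(3)
irregular = 800 + 40 * rng.standard_normal(300)     # complex, noisy series
regular = 800 + 40 * np.sin(np.arange(300) / 5.0)   # highly regular series
print(sample_entropy(irregular))
print(sample_entropy(regular))
```

The drop in SampEn for the regular series illustrates the finding above: as the HR series becomes more regular before sepsis, entropy falls. The false-positive caveat also follows from the definition, since isolated spikes can shrink r or disrupt template matching without any true gain in regularity.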

The studies reviewed in this section highlight the potential clinical benefits of studying HRV for the early diagnosis of infection. Most of the studies report HRV analyses to diagnose sepsis 12 to 24 hours prior to traditional clinical methods. In fact, one study reported observable changes in HRC as early as 3 to 4 days before the onset of sepsis [51]. These studies report a satisfactory sensitivity associated with HRV analysis for the early diagnosis of sepsis, although the specificity is somewhat compromised. Moreover, it remains unclear exactly what one is measuring that far (especially 3 to 4 days) in advance of clinical diagnosis of infection: does altered HRV provide an early warning of an increased risk of infection, or early detection of an infection already present?

It is noteworthy that the studies reported in the literature for early diagnosis of sepsis utilizing HRV analysis have so far only been performed on neonates. No study has reported the evaluation of the HRV analysis technology for early diagnosis of infection in adults. In addition, a panel of HRV analysis techniques (for example, time, frequency, time-frequency, complexity, and fractal domain analysis techniques) is available to clinicians and investigators. However, no study has employed this panel of techniques for either examining their usefulness or for comparing one against the other for early diagnosis of infection in infants or adults.

In a recent pilot investigation, we (Ahmad and colleagues [53]) studied the usefulness of a panel of HRV analysis techniques for the early diagnosis of sepsis in adult bone marrow transplant patients. We monitored multi-parameter HRV continuously for an average of 12 days in 17 bone marrow transplant patients. Fourteen patients developed sepsis (that is, clinical diagnosis of infection along with a systemic inflammatory host response) requiring antibiotic therapy, whereas three did not. On average, for 12 out of 14 infected patients, a significant (25%) reduction prior to the clinical diagnosis and treatment of sepsis was observed in the SD, root mean square successive difference (RMSSD), SampEn and MSE, FFT, detrended fluctuation analysis, and wavelet HRV metrics. For infected patients (n = 14), wavelet HRV demonstrated a 25% drop from baseline, on average, 35 hours prior to sepsis. For three out of three non-infected patients, all measures except RMSSD, SampEn, and MSE showed no significant reduction. Thus, our study demonstrated satisfactory sensitivity and specificity of multiple HRV analysis metrics for the early diagnosis of sepsis in adults. However, these preliminary results require further validation with a larger sample size.

Based on studies reviewed in this section, including our own pilot study, we believe that further investigation, both observational as well as experimental, is warranted to realize the full potential of HRV analysis for the early diagnosis of infection.

Prognostication of infection with heart rate variability analysis

Determining prognosis, namely mortality risk that is secondary to the severity of the host response to infection, is another important avenue of research pertaining to HRV analysis and infection. The prognostic capacity of HRV analysis in sepsis has been evaluated by correlating standardized measures of illness severity with severity of alteration in HRV, and by demonstrating poor outcome or high mortality in subgroups of patients who demonstrate significantly altered HRV. In this manner, several studies have shown that HRV analysis may provide successful prognostication of infection in the critically ill.

The frequency domain Fourier spectral analysis [37] has often been used as a prognostic tool for the prediction of patient outcome in the ICU [54]. In one study, involving 52 patients from an adult ICU, a progressive decrease (down-trend) in the power densities of the Fourier low frequency (LF) and very low frequency spectra was found to be a significant marker of deterioration and mortality [55]. The predicted outcome was based on trend changes in the LF and very low frequency components and correlated positively with Acute Physiological and Chronic Health Evaluation (APACHE) II scores. The authors report a compromised sensitivity and specificity of the Fourier spectral analysis in predicting outcome whenever the input HR signal became nonstationary, that is, increased or decreased abruptly. These abrupt changes caused surges in the FFT LF components, which reduced the overall accuracy of predicting subsequent deterioration and mortality. In another ICU study, Fourier spectral analysis of HR in combination with haemodynamic, echocardiographic, and serum cardiac markers was used for prognostication of patients suffering from sepsis [56]. In this investigation of 25 patients with septic shock, the mortality rate was 60%. HRV was measured by means of the Fourier high frequency (HF) and LF components of the R-R interval time series. A positive correlation was found between LF power and mortality. The authors conclude that HRV measures based on Fourier spectral analysis have the potential for prognostication of infection in critically ill patients. In a similar study, Piepoli and colleagues [57] analyzed 40-minute continuous electrocardiogram signals in the intensive therapy unit using Fourier spectral analysis in 12 patients during septic shock and during recovery from septic shock. Ten patients recovered, whereas two died. For the 10 patients who recovered, the normalized FFT LF component (LFnu) increased from 17 ± 6 during septic shock to 47 ± 9 (P < 0.02) by the time of discharge (post-shock). The two patients who died did not show an improvement in the LFnu component.

There is a high risk of multiple organ dysfunction syndrome (MODS) in patients with sepsis in the ICU. Fourier spectral analysis was used by Pontet and colleagues [58] as an early marker of MODS in septic patients. Their study followed 46 septic patients who had no signs of MODS at the time of admission to the ICU. Eleven of the 46 patients subsequently developed MODS, 28 did not, and 7 were excluded. Thus, the patients were divided into MODS (n = 11) and non-MODS (n = 28) groups. Despite similar APACHE II scores for the two groups, most of the eight HRV indices (computed during the first 24 hours of admission) were found to be significantly reduced in the MODS group of patients. Fourier LF power correlated positively with subsequent MODS while HF power was significantly reduced for patients who subsequently developed MODS. The FFT LF component achieved the highest accuracy (receiver-operating characteristic area under the curve = 0.87, sensitivity = 91%, specificity = 79%) in predicting MODS. The mortality rate was 60% in the MODS group whereas it was 0% in the non-MODS group. These results and others [54, 59] demonstrate that HRV analysis is not only of potential clinical use but may actually surpass established clinical techniques (for example, APACHE II scores) for identifying the risk of MODS in patients suffering from sepsis.

HRV analysis has also been evaluated as a clinical tool to predict the outcome of sepsis in the emergency department. A study performed in the emergency department by Barnaby and colleagues [60] validated earlier work performed in the ICU [57, 61]: the normalized LF component (LFnu) was correlated with the APACHE II score and the Sequential Organ Failure Assessment (SOFA) score, and an inverse relationship between LFnu and the two measures of illness severity was demonstrated. Moreover, in this study, Barnaby and colleagues determined threshold values for both LFnu and the ratio of LF to HF (LF/HF) components. All patients who had an LFnu value < 0.5 or an LF/HF ratio < 1.0 required more interventions (for example, ventilatory support, ICU admission, and so on) or died, whereas patients who had an LFnu value > 0.5 or an LF/HF ratio > 1.0 did not require such support and did not die. Again, Barnaby and colleagues' results for HRV thresholds in the emergency department conformed to an earlier investigation performed in the ICU, where it was demonstrated that sepsis was the main condition leading to a decrease in the LF/HF ratio (LF/HF < 1) and that a reduced LF/HF ratio correlated with an increased risk of death [62]. Chen and Kuo [63] performed another similar study, evaluating which emergency department patients with sepsis would progress to septic shock. They characterized HRV using not only the frequency domain Fourier spectral analysis but also the time domain RMSSD metric. They found that at baseline (first 10 minutes of ECG recording), the RMSSD metric and the HF and normalized HF (HFnu) components were increased in those patients who developed septic shock within 6 hours of being admitted to the emergency department, whereas the normalized LF (LFnu), LF, and LF/HF components were decreased in this group.
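The quantities used in these emergency department studies (band powers, LFnu, and the LF/HF ratio) can be sketched as follows. This uses a plain periodogram on a synthetic, evenly resampled tachogram rather than the spectral estimators of the cited studies; the band limits are the standard short-term HRV bands, and the 0.5 and 1.0 cut-offs are those reported by Barnaby and colleagues [60]:

```python
import numpy as np

# Standard short-term HRV frequency bands (Hz).
LF_BAND = (0.04, 0.15)
HF_BAND = (0.15, 0.40)

def band_powers(signal, fs):
    """LF and HF band power from a plain periodogram of an evenly
    sampled signal (for example, an R-R series resampled at fs Hz)."""
    signal = np.asarray(signal, dtype=float)
    signal = signal - signal.mean()             # remove the DC component
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    lf = psd[(freqs >= LF_BAND[0]) & (freqs < LF_BAND[1])].sum()
    hf = psd[(freqs >= HF_BAND[0]) & (freqs < HF_BAND[1])].sum()
    return lf, hf

# Synthetic tachogram resampled at 4 Hz: a weak 0.10 Hz (LF) and strong
# 0.25 Hz (HF) oscillation, i.e. a low-LFnu, low-LF/HF pattern.
fs = 4.0
t = np.arange(0, 300, 1.0 / fs)
rr = 800 + 5 * np.sin(2 * np.pi * 0.10 * t) + 20 * np.sin(2 * np.pi * 0.25 * t)

lf, hf = band_powers(rr, fs)
lfnu = lf / (lf + hf)
ratio = lf / hf
high_risk = (lfnu < 0.5) or (ratio < 1.0)    # cut-offs reported in [60]
print(f"LFnu = {lfnu:.3f}, LF/HF = {ratio:.3f}, high risk: {high_risk}")
```

With the amplitudes chosen here, the LF/HF power ratio is (5/20)^2 = 0.0625, so the synthetic patient falls below both cut-offs; in practice, nonstationarity and ectopic beats complicate this calculation, as discussed below.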

The evaluation of Fourier-based HRV analysis for prognosticating infection has indeed produced some promising results. However, there seems to be a gap in the literature with regard to the prognostic value of other HRV analysis techniques, such as fractal domain detrended fluctuation analysis, time-frequency domain wavelet analysis, and complexity domain SampEn or MSE analyses. Although Fourier analysis is the oldest and most traditional means of characterizing HR fluctuations, it has certain limitations. For example, as noted by Yien and colleagues [55], it may produce spurious results where the analyzed data are non-periodic and nonstationary (that is, when the statistical properties of a signal, including mean and standard deviation, vary markedly during the interval being evaluated). Physiologic time series are inherently nonstationary and non-periodic. Hence, a comprehensive and critical evaluation of alternative techniques is imperative for further development of this technology as a robust tool in the ICU, emergency department, and otherwise.

Discussion and future work

Several theories exist regarding the pathophysiology of altered variability, and an exploration of this topic requires its own analysis and discussion. However, a state of decreased overall variation, lower LF variation, and decreased complexity (that is, decreased entropy) has been shown to consistently correlate with the presence and severity of systemic infection. These changes are likely related to reduced adaptability associated with a stressed physiologic state and/or activation of the sympathetic response. Nonetheless, alteration of variability offers a clinically useful and quantifiable measure of alteration in the physiologic state of the human body.

While ample evidence indicates that HRV monitoring may enhance the diagnosis and prognosis of infection, the specific added value and the process of implementation of this technology at the bedside have not yet been adequately addressed. For example, while Griffin and colleagues [45] report that the HRC produced by their systems complement information available through conventional laboratory tests, randomized controlled clinical trials documenting the definitive added value and clinical impact of this technology have not yet been reported. Critical evaluation is required to determine the lead time of HRV monitoring and, importantly, the clinical consequences of earlier detection and initiation of treatment.

HRV measures do not always provide sufficient discrimination between sick and healthy subjects, as noted by Brahm Goldstein [64], who regards HRV analysis as a 'premature tool' to study neonatal sepsis. Thus, improving specificity is a major challenge for the HRV monitoring technology for diagnosing and prognosticating infection. Indeed, specificity needs to be studied in light of the fact that there could be multiple potential causes of altered HRV, including noise.

Patient groups optimally suited to the evaluation of HRV and infection have not yet been identified, as most of the studies that examined this link were performed on neonates. Indeed, equivalent studies need to be performed on adults.

The most commonly published HRV assessment technique for diagnosing and prognosticating infection is the frequency domain Fourier spectral analysis. This method, however, may be prone to erroneous results in the presence of noise and nonstationarity that are inherent to physiologic signals. Hence, the investigation of a plurality of HRV analysis techniques is required to offer a comprehensive evaluation of HRV and its true value in studying infection. This is particularly important as the pathophysiology of altered variability remains an active area of investigation.

In summary, given existing evidence and potential for further technological advances, we believe that longitudinal (that is, repetitive or continuous evaluation over time), individualized (that is, detection of patients' own variability, rather than conforming to population averages or thresholds), and comprehensive HRV monitoring (employing a panel of techniques) in critically ill patients at risk for or with existing infection offers a means to harness the clinical potential of this bedside application of complex systems science.