Predicting survival in critical patients by use of body temperature regularity measurement based on approximate entropy

  • Original Article
  • Published in: Medical & Biological Engineering & Computing

Abstract

Body temperature is a classical diagnostic tool for a number of diseases. However, it is usually employed as a plain binary classification function (febrile or not febrile), and therefore its diagnostic power has not been fully developed. In this paper, we describe how body temperature regularity can be used for diagnosis. Our proposed methodology is based on obtaining accurate long-term temperature recordings at high sampling frequencies and analyzing the temperature signal using a regularity metric (approximate entropy). In this study, we assessed our methodology using temperature registers acquired from patients with multiple organ failure admitted to an intensive care unit. Our results indicate there is a correlation between the patient’s condition and the regularity of the body temperature. This finding enabled us to design a classifier for two outcomes (survival or death) and test it on a dataset including 36 subjects. The classifier achieved an accuracy of 72%.


References

  1. Abásolo D, Hornero R, Espino P, Poza J, Sánchez C, de la Rosa R (2005) Analysis of regularity in the EEG background activity of Alzheimer’s disease patients with approximate entropy. Clin Neurophysiol 116(8):1826–1834

  2. Altman D (1991) Practical statistics for medical research. Chapman & Hall, London

  3. Bruhn J, Ropcke H, Rehberg B, Bouillon T, Hoeft A (2000) Electroencephalogram approximate entropy correctly classifies the occurrence of burst suppression pattern as increasing anesthetic drug effect. Anesthesiology 93:981–985

  4. Dale D (2004) The febrile patient. Cecil textbook of medicine, 22nd edn. Saunders/Elsevier, Philadelphia

  5. Daya S (2003) The t-test for comparing means of two groups of equal size. Evidence-Based Obstetrics and Gynecology 5:4–5

  6. Dinarello C, Gelfand J (2001) Alteration in body temperature. Harrison’s Principles of Internal Medicine, 15th edn. McGraw Hill, New York

  7. Engoren M (1998) Approximate entropy of respiratory rate and tidal volume during weaning from mechanical ventilation. Crit Care Med 26:1817–1823

  8. Fawcett T (2006) An introduction to ROC analysis. Pattern Recognit Lett 27(8):861–874

  9. Fleischer L, Pincus S, Rosenbaum S (1993) Approximate entropy of heart rate as a correlate of postoperative ventricular dysfunction. Anesthesiology 78:683–692

  10. Hines WW, Montgomery DC (1990) Probability and statistics in engineering and management science, 3rd edn. Wiley, New York

  11. Ho K, Moody G, Peng C, Mietus J, Larson M, Levy D, Goldberger A (1997) Predicting survival in heart failure case and control subjects by use of fully automated methods for deriving nonlinear and conventional indices of heart rate dynamics. Circulation 96(3):842–848

  12. Hornero R, Aboy M, Abásolo D, McNames J, Goldstein B (2005) Interpretation of approximate entropy: analysis of intracranial pressure approximate entropy during acute intracranial hypertension. IEEE Trans Biomed Eng 52:1671–1680

  13. Hornero R, Aboy M, Abásolo D, McNames J, Wakeland W, Goldstein B (2006) Complex analysis of intracranial hypertension using approximate entropy. Crit Care Med 34:87–95

  14. Kaplan D, Furman M, Pincus S, Ryan S, Goldberger A (1991) Aging and the complexity of cardiovascular dynamics. Biophys J 59:945–949

  15. Lim T, Loh W (1996) A comparison of tests of equality of variances. Comput Stat Data Anal 22:287–301

  16. Mackowiak P (2000) Temperature regulation and the pathogenesis of fever. Mandell, Douglas, and Bennett’s principles and practice of infectious diseases, 5th edn. Churchill Livingstone, London

  17. Obuchowski NA (2003) Receiver operating characteristic curves and their use in radiology. Radiology 229:3–8

  18. Pincus SM (1991) Approximate entropy as a measure of system complexity. Proc Natl Acad Sci USA 88:2297–2301

  19. Pincus S (1996) Older males secrete luteinizing hormone and testosterone more irregularly, and jointly more asynchronously, than younger males. Proc Natl Acad Sci USA 93:14100–14105

  20. Pincus SM, Keefe DL (1992) Quantification of hormone pulsatility via an approximate entropy algorithm. Am J Physiol (Endocrinol Metab) 262:741–754

  21. Pincus S, Cummings T, Haddad G (1993) Heart rate control in normal and aborted SIDS infants. Am J Physiol (Regul Integr Comp Physiol) 264:R638–R646

  22. Radhakrishnan N, Gangadhar B (1998) Estimating regularity in epileptic seizure time series data. A complexity-measure approach. IEEE Eng Med Biol Mag 17:89–94

  23. Rezek I, Roberts S (1998) Stochastic complexity measures for physiological signal analysis. IEEE Trans Biomed Eng 45:1186–1191

  24. Richman JS, Moorman JR (2000) Physiological time-series analysis using approximate entropy and sample entropy. Am J Physiol Heart Circ Physiol 278:2039–2049

  25. Ryan SM, Goldberger AL, Pincus SM, Mietus J, Lipsitz LA (1994) Gender and age-related differences in heart rate dynamics: are women more complex than men? J Am Coll Cardiol 24:1700–1707

  26. Shapiro SS, Wilk MB (1965) An analysis of variance test for normality. Biometrika 52:591–611

  27. Varela M, Jiménez L, Fariña R (2003) Complexity analysis of the temperature curve: new information from body temperature. Eur J Appl Physiol 89:230–237

  28. Varela M, Calvo M, Chana M, Gómez-Mestre I, Asensio R, Galdós P (2005) Clinical implications of temperature curve complexity in critically ill patients. Crit Care Med 33(12):2764–2771

  29. Veldhuis J, Johnson M, Veldhuis O, Straume M, Pincus S (2001) Impact of pulsatility on the ensemble orderliness (approximate entropy) of neurohormone secretion. Am J Physiol (Regul Integr Comp Physiol) 281:R1975–R1985

  30. Veriteq (2005) Data loggers. http://www.veriteq.com

  31. Yue S, Wang CY (2001) The influence of serial correlation on the Mann–Whitney test for detecting a shift in median. Adv Water Resour 25:325–333

  32. Zhang XS, Roy R (2001) Derived fuzzy knowledge model for estimating the depth of anesthesia. IEEE Trans Biomed Eng 48:312–323

  33. Zhang J, Wu Y (2005) Likelihood-ratio tests for normality. Comput Stat Data Anal 49:709–721

  34. Zweig M, Campbell G (1993) Receiver-operating characteristic (ROC) plots: a fundamental evaluation tool in clinical medicine. Clin Chem 39:561–577


Author information

Corresponding author

Correspondence to D. Cuesta.

Appendix

1.1 Regularity estimation

Measuring the regularity of biomedical signals has proven to be an effective way to extract new information that correlates well with clinical condition [28]. One of the most widely used mathematical tools for quantifying regularity is ApEn [1]. ApEn reflects the likelihood that patterns within a series are not followed by similar ones: a data series containing many repetitive patterns therefore has a low ApEn, whereas a less predictable one has a higher ApEn [11].

The algorithm for computing ApEn is as follows. Given an input data series x[n] of length N (an epoch of valid temperature recordings), two input parameters must be chosen: the pattern length m and the distance threshold r.

A data series pattern of length m is given by:

$$x_{m}(i)=\left \{x[i], x[i+1], \ldots, x[i+m-1]\right\}$$

that is, m is the number of consecutive temperature measurements, starting at sample x[i], that are assumed to form a possible repetitive pattern within x[n].

The distance between two generic patterns \(x_{m}(i)\) and \(x_{m}(j)\) is given by:

$$d\left(x_{m}(i), x_{m}(j)\right)=\max_{0 \leq k \leq m-1}\left(\left|x[i+k]- x[j+k]\right|\right).$$
(A.1)

The distance threshold r determines whether \(x_{m}(i)\) and \(x_{m}(j)\) can be considered similar, namely when \(d(x_{m}(i), x_{m}(j)) \leq r.\) Given the set of all possible patterns of length m, \(\left(x_{m}(1), x_{m}(2), \ldots, x_{m}(N-m+1)\right),\) we define:

$$C_{r,m}(i)= \frac{k_{i,m}(r)} {N-m+1}$$
(A.2)

where \(k_{i,m}(r)\) is the number of patterns \(x_{m}(j)\) that are similar to \(x_{m}(i)\) according to the distance threshold r. Hence, \(C_{r,m}(i)\) is the fraction of patterns of length m, starting at j, \(1 \leq j \leq N-m+1,\) whose distance to the pattern starting at i is below the threshold r, that is, that are considered similar to \(x_{m}(i)\). This fraction is computed for each pattern, and then another quantity can be defined as:

$$\phi^{m}(r)=\frac{1}{N-m+1} \sum^{N-m+1}_{i=1}\log C_{r,m}(i).$$

Finally, the ApEn of a temperature epoch x[n], denoted ApEn(m, r), is given by:

$$\hbox{ApEn}(m,r)=\left[\phi^{m}(r)-\phi^{m+1}(r)\right]$$
(A.3)

Namely, ApEn quantifies the relative prevalence of repetitive patterns of length m compared with patterns of length m + 1 [11]. ApEn is computed for all the epochs in the temperature register, and then the mean \(\mu_{y}=\hbox{mean}(\hbox{ApEn}(x_{i}[n])), \forall x_{i}[n] \in Y_{p},\) is obtained.
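
For concreteness, a minimal NumPy sketch of this computation follows. The function name `approximate_entropy` is ours, and the parameter choices in the usage line (m = 2, r = 0.2 times the standard deviation) are common conventions in the ApEn literature, not values prescribed by this paper.

```python
import numpy as np

def approximate_entropy(x, m, r):
    """ApEn(m, r) of a 1-D series, following Eqs. (A.1)-(A.3)."""
    x = np.asarray(x, dtype=float)
    N = len(x)

    def phi(m):
        # All overlapping patterns of length m, shape (N - m + 1, m)
        patterns = np.array([x[i:i + m] for i in range(N - m + 1)])
        # Chebyshev distance between every pair of patterns (Eq. A.1)
        dists = np.max(np.abs(patterns[:, None, :] - patterns[None, :, :]),
                       axis=2)
        # C_{r,m}(i): fraction of patterns within r of pattern i (Eq. A.2);
        # self-matches are counted, so the logarithm is always defined
        C = np.sum(dists <= r, axis=1) / (N - m + 1)
        return np.mean(np.log(C))

    return phi(m) - phi(m + 1)  # Eq. (A.3)

# Illustrative use on a synthetic epoch (not data from the study)
epoch = np.sin(np.linspace(0, 20 * np.pi, 500)) + 0.1 * np.random.randn(500)
print(approximate_entropy(epoch, m=2, r=0.2 * np.std(epoch)))
```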

1.2 Hypothesis validation

ApEn was calculated for every temperature register in classes A and B as described in the previous section, and the mean for each class was obtained, \(\mu_{A}=\hbox{mean}(\mu_{y_{i}}), \forall y_{i} \in A,\) and \(\mu_{B}=\hbox{mean}(\mu_{y_{j}}), \forall y_{j} \in B.\) The hypothesis validation was aimed at assessing whether the difference between \(\mu_{A}\) and \(\mu_{B}\) was statistically significant. Several statistical tests exist for this purpose; in order to cover all the possible scenarios, we chose two complementary ones [10]: the classical parametric Student’s t test [5], which relies on assumptions of data normality and homoscedasticity that are difficult to justify when few input instances are available, and the Mann–Whitney test [31], a non-parametric method that does not require the normality assumption.

For the Student’s t test, the null hypothesis \(H_{0}\) is that the two ApEn means for classes A and B are equal, and the objective is to decide whether to accept or reject that hypothesis. For the test to be valid, the data must satisfy normality, homoscedasticity, and independence. Normality can be checked using the Shapiro–Wilk test [26, 33], homoscedasticity by means of the Bartlett test [15], and independence by a sample correlation study [10].
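
As an illustration, a SciPy sketch of this assumption-driven test selection follows; the helper name `compare_classes` and the 0.05 significance level are assumptions for the example, not values taken from the paper.

```python
from scipy import stats

def compare_classes(apen_A, apen_B, alpha=0.05):
    """Compare the per-register mean ApEn values of classes A and B,
    choosing the test according to whether the assumptions hold."""
    # Normality of each class (Shapiro-Wilk test [26])
    normal = (stats.shapiro(apen_A).pvalue > alpha and
              stats.shapiro(apen_B).pvalue > alpha)
    # Homoscedasticity (Bartlett test [15])
    equal_var = stats.bartlett(apen_A, apen_B).pvalue > alpha

    if normal and equal_var:
        # Assumptions hold: parametric Student's t test on the means
        stat, p = stats.ttest_ind(apen_A, apen_B)
        return "t-test", stat, p
    # Otherwise: non-parametric Mann-Whitney U test on the medians
    stat, p = stats.mannwhitneyu(apen_A, apen_B, alternative="two-sided")
    return "Mann-Whitney", stat, p
```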

Treating a distribution as normal when few observations are available may lead to incorrect conclusions. The Mann–Whitney U test (MW) [2], a non-parametric test, can be carried out instead to avoid such assumptions and to assess whether the two populations differ significantly with respect to their medians. Again, the null hypothesis \(H_{0}\) states that the two populations from which the samples were drawn have equal medians, and the alternative hypothesis \(H_{1}\) states that the medians differ.

To carry out the test, both groups are pooled and the observations are rank-ordered from lowest to highest. Each rank is then assigned back to the class, A or B, to which its observation belongs. The test statistic U is given by Yue and Wang [31]:

$$U={\rm min} \{U_{1}, U_{2}\}$$

with:

$$U_{1}=n_{A} n_{B}+\frac{n_{A}(n_{A}+1)}{2}-W_{A}$$
$$U_{2}=n_{A}n_{B}+\frac{n_{B}(n_{B}+1)}{2}-W_{B}$$

where \(U_{1}\) is the total number of class A observations that precede class B observations, and vice versa for \(U_{2}\); \(W_{A}\) and \(W_{B}\) are the rank sums for each class.
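
A small sketch that computes U exactly as in the two equations above (using average ranks for ties, the usual convention); the helper name `mann_whitney_U` is ours.

```python
import numpy as np
from scipy import stats

def mann_whitney_U(a, b):
    """U statistic for classes A and B, following the equations above."""
    n_A, n_B = len(a), len(b)
    # Pool both groups and rank from lowest to highest (ties -> average ranks)
    ranks = stats.rankdata(np.concatenate([a, b]))
    W_A = ranks[:n_A].sum()   # rank sum for class A
    W_B = ranks[n_A:].sum()   # rank sum for class B
    U1 = n_A * n_B + n_A * (n_A + 1) / 2 - W_A
    U2 = n_A * n_B + n_B * (n_B + 1) / 2 - W_B
    return min(U1, U2)        # U = min{U1, U2}
```

For inference, a library routine such as scipy.stats.mannwhitneyu would normally be used, since it also applies tie corrections when computing the p-value.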

Finally, additional tests were carried out to accept or reject the assumptions of normality, homoscedasticity and independence for the data [26, 33].

1.3 ROC analysis

ROC analysis is a useful tool for selecting a classifier and visualizing its performance and behaviour [8], and it has been used in many medical diagnosis applications. If the previous statistical tests determine that the two classes have different means, a classifier can be designed with this method. The input to the classifier is the regularity measure obtained with ApEn, and the output is a predicted class.

Our classification problem consists of mapping an input instance (the mean ApEn over the epochs of a temperature register) to one of the classes in the discrete set {A, B}. If we call A the positive class and B the negative class, we can define the following performance metrics for the classifier:

  • True positive (TP): instance is A and it is classified as A.

  • False positive (FP): instance is B but it is incorrectly classified as A.

  • True negative (TN): instance is B and it is classified as B.

  • False negative (FN): instance is A and it is incorrectly classified as B.

  • Sensitivity: correctly classified instances of A divided by the total number of A instances.

  • Specificity: correctly classified instances of B divided by the total number of B instances.

  • Accuracy: ratio of correctly classified instances: \(\frac{({\rm TP}+{\rm TN})}{({P}+{N})},\) where P and N are the total number of positives and negatives, respectively.

A threshold is used to obtain a crisp classifier, that is, one in which each instance belongs to exactly one class. If the score is greater than the threshold, the instance is mapped to class A; otherwise, it is mapped to class B. The objective is to find the threshold that maximizes accuracy.

The ROC curve is plotted by treating each possible threshold as a different classifier, obtaining a set of points in ROC space that form the resulting curve, a step function. Of all the candidate classifiers, the one considered optimal from the accuracy point of view is finally chosen. The area under the ROC curve (AUC) is then computed in order to assess overall performance.
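
The threshold sweep can be sketched as follows; `roc_points` is a hypothetical helper, class A is taken as positive, and the score > threshold rule follows the text above.

```python
import numpy as np

def roc_points(scores, labels):
    """One (FPR, TPR, accuracy) point per candidate threshold.

    scores : mean ApEn per subject; labels : 1 for class A, 0 for class B.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    P = labels.sum()             # total positives
    N = len(labels) - P          # total negatives
    points = []
    for thr in np.unique(scores):    # each distinct score is a threshold
        pred = scores > thr          # score > threshold -> class A
        TP = int(np.sum(pred & (labels == 1)))
        FP = int(np.sum(pred & (labels == 0)))
        TN = N - FP
        points.append((FP / N, TP / P, (TP + TN) / (P + N)))
    return points

# Pick the classifier that maximizes accuracy; the AUC can then be
# approximated by the trapezoidal rule over the sorted (FPR, TPR) pairs.
# fpr, tpr, acc = max(roc_points(scores, labels), key=lambda p: p[2])
```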

Cite this article

Cuesta, D., Varela, M., Miró, P. et al. Predicting survival in critical patients by use of body temperature regularity measurement based on approximate entropy. Med Bio Eng Comput 45, 671–678 (2007). https://doi.org/10.1007/s11517-007-0200-3
