Statistical evaluation of a biomarker

Méthodes en Médecine d’Urgence / Methods in Emergency Medicine

Abstract

A biomarker can be used to establish the diagnosis of a disease, assess its severity, estimate a risk, or guide a therapeutic intervention (theranostics). Although very substantial progress has been made in standardizing the methodology of clinical trials, much remains to be done in the field of biomarker evaluation. The diagnostic or prognostic performance of a biomarker can be assessed using sensitivity, specificity, and the positive and negative predictive values, bearing in mind that the predictive values depend on disease prevalence. Likelihood ratios make it possible to combine the existing information (pretest probability) with the improvement provided by the biomarker. The receiver operating characteristic (ROC) curve and its area under the curve (AUC-ROC) are important for an overall appraisal of the biomarker and for the choice of a threshold. Defining a zone of uncertainty and using reclassification methods are modern approaches to biomarker evaluation, but they must not distract from the essential quality criterion that is statistical power. Indeed, any diagnostic study should include an a priori calculation of the number of patients to be included. Diagnostic studies are still too often conducted with inadequate methodologies and statistical analyses, which limits the validity and robustness of the observed results and therefore their clinical value. Investigators must take these issues into account when designing their studies, journal editors and reviewers when assessing manuscripts and accepting them for publication, and readers when interpreting them.
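As an illustration of the prevalence dependence of predictive values and of the Bayesian use of likelihood ratios mentioned above, the short Python sketch below applies Bayes' theorem to a dichotomized test. The sensitivity, specificity, prevalence and pretest-probability values are arbitrary examples chosen for the demonstration, not figures taken from the article.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) of a dichotomized test, via Bayes' theorem."""
    tp = sensitivity * prevalence                # true positives per unit of population
    fp = (1 - specificity) * (1 - prevalence)    # false positives
    fn = (1 - sensitivity) * prevalence          # false negatives
    tn = specificity * (1 - prevalence)          # true negatives
    return tp / (tp + fp), tn / (tn + fn)


def posttest_probability(pretest, sensitivity, specificity, positive=True):
    """Update a pretest probability with the positive or negative likelihood ratio."""
    lr = sensitivity / (1 - specificity) if positive else (1 - sensitivity) / specificity
    pretest_odds = pretest / (1 - pretest)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)


if __name__ == "__main__":
    # Purely illustrative values: sensitivity 0.90, specificity 0.85.
    for prevalence in (0.05, 0.40):              # same test, two very different settings
        ppv, npv = predictive_values(0.90, 0.85, prevalence)
        print(f"prevalence={prevalence:.0%}  PPV={ppv:.2f}  NPV={npv:.2f}")
    # A positive result updates a 20% pretest probability through LR+ = 6.
    print(f"post-test probability = {posttest_probability(0.20, 0.90, 0.85):.2f}")
```

With these assumed values, the positive predictive value rises from about 0.24 at 5% prevalence to 0.80 at 40% prevalence for the very same test, which is exactly the dependence the abstract warns about.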

Keywords

Biomarker · Diagnostic performance · Evaluation · Emergency · BNP · Troponin · Prognosis

Statistical evaluation of a biomarker

Abstract

A biomarker may provide a diagnosis, assess disease severity, estimate a risk, or guide other clinical interventions such as the use of drugs. Although considerable progress has been made in standardizing the methodology and reporting of randomized trials, less has been accomplished concerning the assessment of biomarkers. The diagnostic performance of a biomarker may be evaluated by its sensitivity, specificity, positive and negative predictive values, and accuracy; but the influence of prevalence on some of these indices should be considered. Likelihood ratios can be used in conjunction with existing information to enhance the prediction of an outcome. The receiver operating characteristic (ROC) curve and its associated area (AUC-ROC) are important to globally assess a biomarker and choose a cut-off value for clinical use. Alternatively, the definition of a zone of uncertainty is possible and is advocated. The techniques of reclassification (net reclassification index and integrated discrimination improvement) are becoming more widely used. The power issue remains crucial, and every diagnostic study should include a calculation of the number of patients required to achieve the research goal. Biomarker studies are often presented with poor biostatistics and methodological flaws or bias that preclude them from providing a reliable and reproducible scientific message. Some recommendations have recently been published, but they are not comprehensive. Investigators should be aware of these issues when designing their studies, editors and reviewers when analyzing a manuscript, and readers when interpreting results.
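As a minimal sketch of the ROC analysis and cut-off selection discussed above, the example below (using only the Python standard library and invented marker values and disease labels) computes the empirical ROC points, estimates the AUC as the Mann-Whitney probability of correct ranking, and selects the threshold maximizing the Youden index (sensitivity + specificity − 1).

```python
from itertools import product


def roc_points(values, labels):
    """Empirical ROC: (1 - specificity, sensitivity, threshold) at each observed cut-off."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for t in sorted(set(values), reverse=True):
        tp = sum(1 for v, y in zip(values, labels) if v >= t and y == 1)
        fp = sum(1 for v, y in zip(values, labels) if v >= t and y == 0)
        points.append((fp / neg, tp / pos, t))
    return points


def auc(values, labels):
    """AUC as the Mann-Whitney probability that a diseased patient scores higher."""
    pos = [v for v, y in zip(values, labels) if y == 1]
    neg = [v for v, y in zip(values, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in product(pos, neg))
    return wins / (len(pos) * len(neg))


def best_youden_cutoff(values, labels):
    """Cut-off maximizing the Youden index (sensitivity - false-positive rate)."""
    return max(roc_points(values, labels), key=lambda pt: pt[1] - pt[0])[2]


if __name__ == "__main__":
    # Invented marker values; 1 = disease present, 0 = absent.
    marker = [120, 85, 300, 40, 220, 95, 410, 60, 90, 75]
    status = [1,   0,  1,   0,  1,   0,  1,   0,  1,  0]
    print(f"AUC = {auc(marker, status):.2f}")
    print(f"Youden-optimal cut-off = {best_youden_cutoff(marker, status)}")
```

In practice one would rely on an established package (for example pROC in R or scikit-learn in Python) and report confidence intervals around the AUC and the chosen cut-off, in line with the emphasis on statistical rigor above.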

Keywords

Biomarker · Diagnostic usefulness · Evaluation · Emergency · BNP · Troponin · Prognosis
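Both abstracts insist that a diagnostic study should include an a priori calculation of the number of patients to enroll. As a hedged illustration (a common Buderer-type precision calculation, not necessarily the authors' own method), the sketch below sizes a study so that the confidence intervals around the expected sensitivity and specificity are no wider than a chosen precision; all numerical inputs are assumptions.

```python
from math import ceil
from statistics import NormalDist


def diagnostic_sample_size(expected_sens, expected_spec, prevalence,
                           precision=0.05, alpha=0.05):
    """Total patients needed so the (1 - alpha) confidence intervals around
    sensitivity and specificity have half-width <= `precision`."""
    z = NormalDist().inv_cdf(1 - alpha / 2)      # about 1.96 for alpha = 0.05
    n_diseased = (z ** 2 * expected_sens * (1 - expected_sens)) / precision ** 2
    n_healthy = (z ** 2 * expected_spec * (1 - expected_spec)) / precision ** 2
    # Scale by prevalence so that enough diseased AND non-diseased patients are enrolled.
    return ceil(max(n_diseased / prevalence, n_healthy / (1 - prevalence)))


if __name__ == "__main__":
    # Illustrative inputs only: sensitivity 0.90, specificity 0.85, prevalence 30%, +/-5% precision.
    print(diagnostic_sample_size(0.90, 0.85, 0.30))
```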


Copyright information

© Société française de médecine d'urgence and Springer-Verlag France 2011

Authors and Affiliations

  1. Service d’accueil des urgences, CHU La Pitié-Salpêtrière, Assistance publique-Hôpitaux de Paris (AP-HP), UMRS 956, UPMC université Paris-VI, Paris, France
  2. Département d’anesthésie-réanimation, CHU La Pitié-Salpêtrière, Assistance publique-Hôpitaux de Paris (AP-HP), Paris, France
  3. Department of Anesthesiology, Wake Forest University School of Medicine, Winston-Salem, USA
