Introduction

The incidence of sepsis is rising, and its severe forms [severe sepsis (SES), including septic shock] remain a major cause of death [1, 2]. The number of patients requiring immunosuppressive therapy is also growing, with febrile neutropenia (FN) being a frequent life-threatening complication [3]. Infective endocarditis is likewise increasing in incidence, with high mortality and morbidity [4, 5]. Early initiation of appropriate antimicrobial therapy reduces the morbidity and mortality of these severe infections, prompting the prescription of broad-spectrum antimicrobial agents before the results of microbiological diagnosis (MD) are known [6–8]. This empirical first-line therapy is one of the factors explaining the rising prevalence of antimicrobial resistance [9]. Moreover, antimicrobial therapy usually remains empirical, since microbial documentation in the blood is obtained in at most 30% of cases of FN and 35–50% of cases of SES [10–13].

Blood cultures (BC) are still the main biological tool for identifying the microbial pathogen(s) associated with severe infections [14]. Their results are critical for choosing an appropriate antimicrobial treatment, especially in cases of resistant bacteria [2, 15, 16]. However, their low positivity rates (10–20% overall) and delayed results (median 2–3 days) make them useful mostly for escalation or de-escalation in the days following the onset of sepsis [17, 18]. Molecular detection has been developed to shorten the time to results and to detect microbial nucleic acids in patients who have already received antibiotic therapy [12]. Meta-analyses of molecular pathogen detection in blood have reported these tests to be less sensitive and less specific than BCs, taken as the gold standard, and consequently they are rarely used in the standard diagnostic workup [19, 20]. Notably, previous studies reported many cases with positive molecular tests but negative BCs in patients whose clinical status showed signs of bloodstream infection [21, 22]. The cost-effectiveness of direct molecular testing in blood remains unknown [23].

We conducted a multicentre open-label cluster-randomised crossover clinical trial to assess the clinical and economic impact of molecular detection of pathogens in blood for patients with severe infections, such as SES, FN, and suspicion of infective endocarditis (SIE) [24]. Our hypothesis was that molecular direct testing, in addition to a conventional workup, would provide relevant and timely information for the adjustment of antimicrobial therapy and that the additional cost would be offset by successful infection management.

Methods

Study procedures and participants

The study was conducted in 55 clinical wards of 18 university hospitals (each hospital being a cluster) during two consecutive 6-month periods, randomly assigned as intervention (IP) or control (CP) (standard care) periods. Details of the protocol are provided in the supplementary text and at the clinical trials website (https://clinicaltrials.gov NCT00709358). Patients aged ≥18 years were consecutively enrolled when meeting the diagnosis of (1) SES (including septic shock) [1, 10], (2) a first episode of FN [3, 13], or (3) suspicion of infective endocarditis, as defined below.

During the two periods, at least two BC sets were collected within 24 h after inclusion [14]. During IP, direct molecular testing was additionally performed on blood using the CE-IVD LightCycler® SeptiFast test (LSF; Roche Diagnostics, Meylan, France) (see supplemental methods for details on testing). During the two periods, additional BCs and other specimens were submitted for microbiological examination at the discretion of the physician and processed following general guidelines [14]. The results of microbiological tests, including LSF tests during IP, were transmitted to the clinical wards in a timely manner so that physicians could initiate or modify antimicrobial therapy following general recommendations [3, 7, 25].

Endpoints

The primary endpoint was MD, i.e. detection of pathogens in the blood samples using results of BCs during CP and of both BCs and molecular tests during IP.

Secondary endpoints included the pathogens identified, the turn-around time (TAT) (i.e., time interval from taking blood samples to transmission of results), and the number of patients receiving an appropriate treatment. Appropriateness was evaluated by comparing the pathogen detected and the list of antimicrobial agents prescribed within the 7 days after inclusion [3, 7, 25]. The complications were observed until the end of the study (EOS) (discharge, death, 30 days for SES and FN, 45 days for SIE). Costs were measured at the EOS.

Statistical analysis

The study was designed as a superiority trial. With the hypothesis of an effect modification by type of infection, the study was powered on the primary endpoint in the SES and FN subgroups. Assuming a documentation prevalence of 35–50% in SES [10] and 30% in FN [11, 13], at least 480 and 440 patients in the SES and FN groups, respectively, were needed to show a 15% absolute difference, considering a two-sided type I error of 0.05, a type II error of 0.10, an intracluster correlation coefficient of 0.01, and 18 hospital clusters. Sample size was not estimated for SIE, since the overall number of cases was expected to be lower than 150 [5].
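The sample-size reasoning above can be illustrated with the classical two-proportion formula inflated by the cluster design effect. This is a minimal sketch, not the authors' exact computation; in particular, the average cluster size `m` used below is an assumption for illustration only.

```python
from math import ceil
from statistics import NormalDist  # stdlib normal quantiles (Python >= 3.8)

def n_per_group(p1, p2, alpha=0.05, power=0.90):
    """Classical two-proportion sample size per group (unadjusted)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided type I error
    z_b = NormalDist().inv_cdf(power)          # power = 1 - type II error
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)

def design_effect(m, icc=0.01):
    """Inflation factor for cluster randomisation: 1 + (m - 1) * ICC."""
    return 1 + (m - 1) * icc

# SES subgroup: 35% baseline documentation, +15% absolute difference
n = n_per_group(0.35, 0.50)                  # per-group size before clustering
total = ceil(2 * n * design_effect(m=13))    # m = 13 patients/cluster is illustrative
```

With these inputs the unadjusted per-group size is 223, and the design effect then inflates the total; the published target of 480 SES patients is consistent with this order of magnitude.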

Prevalence of the primary endpoint was compared between the two groups with the χ² test, and the absolute risk reduction (ARR) was calculated with its 95% confidence interval (CI). To account for clustering (confounding and effect modification by centre), a random-centre-effect logistic regression (multilevel model with patients at level 1 and hospitals at level 2) was fitted. To estimate the intervention effect, we computed odds ratios (ORs) and their 95% CIs in a multilevel model, taking into account the order of intervention to control for a potential carryover effect.
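As a worked illustration of the ARR computation, the sketch below applies the standard Wald-type interval for a difference in proportions, using for inputs the overall documentation counts reported later in the Results (285/731 during IP vs. 193/685 during CP); this is an unadjusted illustration, not the trial's cluster-adjusted analysis.

```python
from math import sqrt

def arr_with_ci(x1, n1, x2, n2, z=1.96):
    """Absolute risk reduction (difference in proportions) with a Wald 95% CI."""
    p1, p2 = x1 / n1, x2 / n2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    d = p1 - p2
    return d, (d - z * se, d + z * se)

# Overall microbial documentation: 285/731 (IP) vs. 193/685 (CP)
arr, ci = arr_with_ci(285, 731, 193, 685)
# arr ≈ 0.108 (10.8 percentage points), 95% CI ≈ (0.059, 0.157)
```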

Cost-effectiveness evaluation

The prospective economic evaluation was concurrent with the randomised trial, in accordance with the CHEERS (Consolidated Health Economic Evaluation Reporting Standards) statement [26]. We estimated the incremental cost-effectiveness of using LSF in addition to a standard workup from the perspective of the hospital with a 30-day time horizon [27, 28]. Effectiveness was defined as the primary endpoint. Hospital resources were valued by adjusting the 2013 average national cost of each patient's diagnosis-related group (DRG) with their actual length of stay and resources used during their hospitalisation. Types of resources and unit costs are described in Supplementary Table 2. A cost-effectiveness analysis was conducted to estimate incremental costs (difference in per-patient costs between groups) per incremental microbial documentation. The uncertainty of the results was analysed with the non-parametric bootstrap method, which makes multiple estimates of the incremental cost-effectiveness ratio (ICER) by randomly re-sampling the patient population to create sub-samples. Using this bootstrap analysis, the scatter plot of 1,000 ICERs is presented on the cost-effectiveness plane [29, 30]. Data reporting was performed according to CONSORT (Consolidated Standards of Reporting Trials) guidelines [31].
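The bootstrap re-sampling step can be sketched as follows. The per-patient cost and documentation data below are simulated placeholders (their distributions and parameters are hypothetical, not the trial's dataset); only the re-sampling logic mirrors the method described above.

```python
import random

random.seed(0)

# Hypothetical per-patient data: hospital cost (euros) and documentation (0/1).
n_ip, n_cp = 731, 685
cost_ip = [random.gammavariate(2.0, 8000) for _ in range(n_ip)]
cost_cp = [random.gammavariate(2.0, 9000) for _ in range(n_cp)]
doc_ip = [1 if random.random() < 0.39 else 0 for _ in range(n_ip)]
doc_cp = [1 if random.random() < 0.28 else 0 for _ in range(n_cp)]

def mean(xs):
    return sum(xs) / len(xs)

def bootstrap_plane(n_boot=1000):
    """Re-sample patients with replacement; return (delta_effect, delta_cost)
    pairs to be plotted on the cost-effectiveness plane."""
    points = []
    for _ in range(n_boot):
        ip = random.choices(range(n_ip), k=n_ip)  # IP sub-sample indices
        cp = random.choices(range(n_cp), k=n_cp)  # CP sub-sample indices
        d_eff = mean([doc_ip[i] for i in ip]) - mean([doc_cp[j] for j in cp])
        d_cost = mean([cost_ip[i] for i in ip]) - mean([cost_cp[j] for j in cp])
        points.append((d_eff, d_cost))
    return points

plane = bootstrap_plane()
```

Points falling below the horizontal axis with a positive effect difference indicate dominance (more documentation at lower cost), which is how the SES scatterplot in Fig. 3 is read.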

Results

Patients and primary outcome

Of 1459 eligible patients, 1416 were included: 731 during IP and 685 during CP (Fig. 1), comprising 907 (64.0%) with SES, 440 (31.1%) with FN, and 69 (4.9%) with SIE. Patient characteristics, shown in Table 1, did not differ between the two periods.

Fig. 1

Flowchart of the patients according to the intervention and control periods and with regard to type of infection

Table 1 Demographic and baseline disease characteristics

During IP, MD was positive for 285/731 (39.0%) patients, compared with 193/685 (28.2%) during CP (P < 0.001) (Table 2). The MD rate remained significantly higher when excluding the 41 cases with putative contaminants (e.g. a single test positive with coagulase-negative staphylococci), observed at rates of 2.7% and 3.0% in CP and IP, respectively. Using multilevel modelling, neither a centre effect (P = 0.65) nor effect modification by centre (P = 0.85) was observed, but there was a significant effect of infection subgroup (P = 0.03). Kappa agreement between the results of the molecular tests and those of BCs was poor (κ = 0.2693 ± 0.0366 SD), with only 83 patients having both a positive molecular test and positive BCs during IP.
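Cohen's kappa between two binary tests is computed from the 2×2 agreement table. In the sketch below, only the 83 both-positive cases and the 731 IP patients are taken from the trial; the off-diagonal and both-negative cell counts are hypothetical, chosen for illustration.

```python
def cohen_kappa(a, b, c, d):
    """Kappa from a 2x2 table: a = both positive, b = test1-positive only,
    c = test2-positive only, d = both negative."""
    n = a + b + c + d
    po = (a + d) / n                        # observed agreement
    p_yes = ((a + b) / n) * ((a + c) / n)   # chance agreement on 'positive'
    p_no = ((c + d) / n) * ((b + d) / n)    # chance agreement on 'negative'
    pe = p_yes + p_no                       # total chance agreement
    return (po - pe) / (1 - pe)

# 83 both-positive (from the trial); b, c, d are illustrative counts
kappa = cohen_kappa(a=83, b=130, c=72, d=446)
# with these illustrative counts, kappa ≈ 0.27, i.e. only fair agreement
```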

Table 2 Primary and secondary outcomes according to the study period in the intention-to-treat population

Among patients suffering from SES, 198/464 (42.6%) had MD during IP, compared with 124/442 (28.1%) during CP (OR 1.89, 95% CI 1.43–2.50, P < 0.001). The intervention resulted in an absolute increase in the MD rate of 14.5% (95% CI 8.4–20.7). MD was significantly associated with a primary site of infection other than pulmonary, with community-acquired infection, and with severity criteria (Table 3). Multivariate analysis adjusted for these variables did not change the association between the IP and the MD rate (OR 1.89; 95% CI 1.36–2.63, P = 0.001).

Table 3 Factors affecting microbiological documentation in the blood among the 907 patients with severe sepsis, univariate and multivariate analyses

The MD rate was similar between periods for patients with FN, and only a trend towards an association was observed among patients with SIE (Table 2).

Secondary outcomes

Pathogens identified

The use of the molecular test resulted in a significantly higher number of patients in whom Gram-negative bacilli (GNB), Gram-positive cocci (GPC), or fungi were detected in the blood (Table 2; Fig. 2). This was observed in the whole population, as well as in patients with SES [19.6 vs. 11.5% for GNB, P < 0.001; 23.7 vs. 15.6% for GPC, P = 0.002; and 2.8% (13 cases) vs. 0.7% (3 cases) for fungi, P = 0.02, for IP and CP, respectively]. The detection of bacterial species not included in the LSF panel (e.g., strict anaerobes, Salmonella spp., Bacillus spp. and others) was similar in the two periods (P = 0.18). Polymicrobial infections (concomitant detection of more than one pathogen in the blood) were detected more often during IP [160/731 (22%) vs. 75/685 (11%), P = 0.001]. The pathogens detected during the two periods are detailed in Table 4.

Fig. 2

Venn diagram presenting the microbial diagnosis given by blood cultures (BC) and the molecular test (LSF) for patients during the intervention period. GNB Gram-negative bacilli (enterobacteria, Acinetobacter and pseudomonads), GPC Gram-positive cocci (staphylococci, streptococci and enterococci), blue circle BC-positive cases, green circle positive molecular test. Cases could be diagnosed with more than one pathogen

Table 4 List and number (%) of the microbial pathogensa per species that were detected in the bloodb during the two periods

Time interval from blood collection to results

Among the patients with positive MD, the median TAT from blood collection to technical validation on the one hand, and to transmission to the clinicians on the other, was significantly shorter during IP (Table 2). Notably, for patients with SES, the median TAT was below 24 h. Results were first transmitted by telephone (73.6%), computer interface (23.6%), fax (14.5%), and mail (10.9%). Laboratories and wards interacted as usual to discuss the results.

Appropriate antimicrobial treatment

Considering the 1416 included patients, the antimicrobial agents prescribed were beta-lactams (95.5%), aminoglycosides (43.2%), fluoroquinolones (26.5%), glycopeptides (21.1%), other antibiotics (31.7%), and antifungal agents (17.7%), with no difference between the two periods (Supplementary Table 3). In this whole population, a significantly higher number of patients received an appropriate therapy during IP (Table 2). However, when only the 478 patients with positive MD were considered, the rates of appropriate therapy were similar for the two periods (90.5 and 90.2%, respectively). An optimal treatment, i.e. one more targeted towards the aetiological microbes, was observed in 263/395 (66.6%) patients, with no difference between CP and IP (70.7 vs. 63.9%, P = 0.16) (supplementary figure). During IP, among the 285 MD-positive cases, clinicians reported modifying the antimicrobial treatment according to the LSF results in 29.4% of cases (79/269 responses), of which de-escalation was performed in 49 cases (62%).

Complications and mortality

Complications were observed in 362 cases (31.7% of the 1142 patients documented for complications), as an extension of the infection (n = 190, 16.6%) or a new infectious episode (n = 172, 15.1%), with no difference between the two periods [32.1% (185/577) and 29.6% (167/565), P = 0.34]. Among SES patients, the 7-day mortality rate was 17.3% (149/863), with no significant difference between the two periods (18.7 vs. 15.8% in IP and CP, respectively; P = 0.38), even when analyses were adjusted for confounders (Supplementary Table 4). We verified that there was no relationship between the positivity of the molecular test and mortality.

Economic evaluation

The cost-effectiveness analysis used information from all patients with complete primary-outcome and cost data. Resource utilisation and costs are presented in Table 5. The cost of the molecular test was calculated at an average of €475.20 per test, including technician time, with each patient having an average of 1.9 tests. There were no significant differences between the two periods, including for investigations or the number of days of antimicrobial treatment (13.1 days in both periods) (Table 5). Median total costs were €14,826 vs. €17,828 for IP and CP, respectively (P = 0.8). Sub-group analyses by disease did not show a cost difference either.

Table 5 Cost (in euros) in total population for the control and intervention periods

Figure 3 shows the cost-effectiveness of the molecular test as a scatterplot of mean cost and effect differences. The key uncertainty that drove the incremental cost-effectiveness ratio was the size of the effectiveness effect, represented on the horizontal axis. The difference in effectiveness was evenly distributed on each side of the vertical axis for patients with FN, indicating no benefit during IP. The scatterplot for patients with SES indicated a weak dominance with a positive effectiveness effect and a reduced hospital cost as shown by the higher density below the horizontal axis.

Fig. 3

Incremental cost and effectiveness of the molecular test when compared to standard workup: cost effectiveness plane for incremental costs and difference in 24-h documentation for neutropenic patients and patients with severe sepsis

Discussion

In this multicentre cluster-randomised crossover trial including 1416 patients, we found that adding direct molecular detection of pathogens in the blood of patients hospitalised with severe sepsis resulted in a higher overall microbial diagnosis rate than conventional diagnosis based on blood cultures. Moreover, the time to results was shorter during IP, leading to bacteraemia and fungaemia being diagnosed in less than 24 h in most cases, without an increase in hospital costs.

In patients with severe infections, since the pathogen is recovered in at most 50% of cases, the remainder are treated with empirical antimicrobial regimens without appropriateness being assessable or de-escalation being possible [1, 32]. Direct detection of microbial pathogens in the blood was developed in the 2000s to circumvent the limitations of culture [33]. LSF was the first commercial kit to provide standardisation and enable comparisons between studies [12, 34]. Since its clinical performance was mostly compared with blood cultures [22], meta-analyses concluded that it lacked sensitivity (68%) and specificity (86%), leading to the test being largely abandoned [19, 20]. However, results seen as false positives, i.e. indicating low specificity, could also be interpreted as true positives when clinical status was taken as the gold standard (septic shock, for instance) or when LSF results were compared with biomarkers of infection [12, 19, 35]. The presence of dormant or non-cultivable microbes in the blood may also explain the discrepancy, as may the fact that most patients had already received antimicrobial agents, leading to false-negative BCs [36]. This raises the question of whether blood cultures can still be considered the gold standard for documenting bloodstream infections. It is also known that, with regard to clinical status, BCs lack specificity (one-third are falsely positive due to contaminants, leading to excess treatment) and sensitivity (at least half are falsely negative in patients with severe infections) [9–12, 37]. We therefore decided to investigate the relevance of molecular detection of pathogens in an interventional study with the aetiological microbial diagnosis as the primary outcome, whichever assay (blood culture or molecular test) provided the positive result. We hypothesised that adding molecular detection to conventional cultures would increase the number of cases with microbiological documentation.

The results we obtained in the CP for positive detection of pathogens in the blood were concordant with previous studies of large cohorts with severe sepsis, with similar severity scores and mortality rates, including recent studies using the new definitions of sepsis [10, 38]. For FN, the positivity rate was also concordant with previous studies of first episodes of FN, in which positivity is much higher than in secondary episodes [11, 13]. The high rate of blood positivity during IP is similar to that described in previous studies of direct molecular detection in which infections were as severe as in our study [12].

Because the aetiological microbes were documented more often during IP, we examined the consequences for the prescription of antimicrobial agents. We did not observe significant differences in prescription, either quantitatively (number of patients with treatment and cost per patient) or qualitatively (antimicrobial spectra), even for SES cases. This was probably because treatment was not protocolled according to pathogen identification, and the LSF test did not provide susceptibility results beyond methicillin resistance in staphylococci [1, 3, 25]. It may also be due to the intervention itself, since polymicrobial infections and fungal infections were diagnosed more often, requiring broad-spectrum antibacterial agents and, in some cases, the addition of antifungal agents. Lastly, we did not analyse the dosage of the agents, which was recently shown to be underestimated in most patients with severe infections [39]. Patients were managed under standard-care conditions and, although the therapeutic approach may have varied between investigators, wards, and hospital centres, the results were mostly dependent on the type of infection, not the centre. The outcome was similar between the two periods, with a mortality rate concordant with the severity scores at inclusion [1, 10].

Because most molecular tests are more expensive than blood cultures, we investigated the cost of implementing systematic molecular detection in blood in addition to standard care. In a previous study, the cost was lower (€32,228 vs. €42,198) for 48 patients receiving LSF plus BC versus 54 patients receiving only BC [40]. In our study, the addition of LSF showed only a trend towards lower hospital costs and higher effectiveness (i.e., microbiological diagnostic yield) for patients with SES. This was probably because the length of ICU stay was not affected by the earlier identification of micro-organisms, as also observed in other studies [8].

Limitations and strengths

There are two main limitations to our study. The first is that we present our results at a time (2016) when sepsis definitions have changed [41]. Our findings nevertheless remain relevant, since the included patients were as severely ill as those in recent cohorts examined with the new sepsis criteria [8, 38]. The second limitation is the LSF test itself, since it does not provide antimicrobial susceptibility testing results, a limitation shared with the most recent kits of this kind [42, 43]. Consequently, although MD was obtained for more patients and more rapidly during IP, we did not observe any difference in antimicrobial treatment or outcomes. A recent controlled study showed that reducing the TAT to pathogen identification in BCs, without specific susceptibility testing as in our study, decreased the prescription of broad-range antimicrobials and increased de-escalation and appropriate escalation, provided that antibiotic stewardship was also in place [44], which was not the case in our study. A minor weakness of the study concerns the suspicion of endocarditis: the number of patients was too low to yield any conclusions, since infective endocarditis is fairly rare (1/100,000 cases observed yearly) and its diagnosis, according to the Duke and Li definitions, already includes microbiological results from BCs and from culture of removed valves [4]. Here, we aimed to include patients before diagnosis in order to increase documentation (i.e., the number of cases with definite infective endocarditis). Although a trend towards an association was observed, the number of patients with SIE was too small to show significant differences, and a specific study for this indication is needed. Although we were disappointed by the results for patients with a first episode of FN, they confirmed the results of previous, smaller studies [11, 34]. It has been suggested that, in FN, the diagnostic yield of molecular detection could be higher in patients already receiving antimicrobial agents, i.e., at the second or later febrile episode, rather than in naïve patients. Given the lack of benefit for documenting FN, other assays should be explored, and the infectious nature of the associated fever may need to be reconsidered.

The strengths of our study are the following. This is the first randomised interventional study of the use of direct molecular detection of pathogens in the blood by a commercial test. Its originality is that we did not compare the results of the molecular test with those of blood cultures taken as the gold standard, but instead evaluated what direct molecular testing adds to standard care, assessing the primary outcome of microbial diagnosis with regard to clinical status. Further strengths include the study's size and design. The study was planned as a cluster-randomised trial because of practical (cost of equipment and technical staff) and organisational (on-site training of laboratory technicians) constraints. This design allowed compliance with the assigned strategy to be optimised. A common pitfall of cluster-randomised trials is an imbalance in patient characteristics and patient management. We therefore planned a crossover design to minimise imbalance between groups and randomised the order of intervention. This crossover design was suitable because the risk of a carryover effect was low. We obtained comparable baseline characteristics of the severe sepsis groups during the two periods, and adjustment for potential confounders did not change the results of the crude analysis. The strengths also include the molecular test itself, since the main limiting factor of molecular tests in direct diagnosis is their analytical sensitivity, i.e. their ability to detect bacterial and fungal DNA without natural amplification by culture. LSF is one of the rare tests that can reproducibly detect 100 CFU/ml. Future tests should be as sensitive as LSF but should also detect resistance genes.

Conclusions

Our study demonstrated, in a multicentre randomised controlled trial, that performing molecular detection of pathogens in addition to standard-of-care blood cultures increases the number of septic patients with a microbial diagnosis and shortens the time to initiation of species-specific antimicrobial therapy. A further step is now necessary: combining molecular antimicrobial resistance testing with a protocolled strategy based on the centre's epidemiological data or with antimicrobial stewardship. This could have a greater impact, especially in cases of infection caused by multidrug-resistant pathogens [44].