
Rapid response teams improve outcomes: no

Rapid response systems (RRS) have been widely endorsed and adopted by hospital systems in industrialized countries to respond to deteriorating patients outside the intensive care unit (ICU) [1]. There is face validity in the idea that a system that promotes early recognition of deterioration and the implementation of time-critical interventions would save lives. Recent systematic reviews point to the potential effectiveness of RRS, but important caveats need to be considered [2–4].

The vast majority of studies included in these analyses were before–after studies with no contemporaneous control (Table 1) [2, 3]. Whilst this methodology is simple to implement and understand, the absence of a control group limits causal inference. The observed effects may be due to changes in case-mix, referral patterns, or overall improvements in ward-based care over time, independent of any RRS. It is often unclear from the reported studies whether other interventions were occurring at the same time.

Table 1 Summary of study design, patient population, and hospital mortality

Bristow and colleagues performed a controlled before–after study comparing an intervention hospital to two control hospitals contemporaneously [5]. This methodology can account for secular trends, but does not completely account for baseline differences. Hospitals may experience changes not attributable to the intervention and not accounted for by regression techniques. The unadjusted analysis suggested benefit from implementation of the RRS, but the adjusted analysis yielded conflicting results [5]. After adjustment one control hospital performed worse while the other performed better than the RRS hospital. This suggests that the observed treatment effect of the RRS may be significantly attributable to observed and unobserved baseline imbalances.

The interrupted time series (ITS) conducted by Howell and colleagues is a more robust methodology [6]. Their study of an RRS in an academic center found a reduction in unexpected mortality [patients without a do not attempt resuscitation (DNAR) order], but no reduction in overall hospital mortality [6]. The reasons for this observation are unclear. About 8 % of RRS activations led to a DNAR order, but it is unlikely that increased utilization of DNAR orders would completely account for the observed divergence between unexpected and overall hospital mortality. The ITS design does not account for time-varying confounders and other unmeasured events unrelated to the study intervention. For example, the introduction of the RRS may have coincided with a generally increased utilization of DNAR orders independent of the RRS. This study utilized the patient’s usual care team and mandated communication with the patient’s treating physician within 1 h of activation using explicit criteria [6]. This raises the possibility that the RRS acted as a substitute for timely senior clinical input from the usual care team [7].

Priestley and Hillman performed experimental evaluations of RRS [8, 9]. Rapid response systems are applied at a system level, and individual patient randomization risks contamination between control and intervention participants in the same system. Priestley and colleagues used a stepped wedge design in which each hospital ward was randomly allocated consecutive time points to implement the RRS, such that by the end of the trial all wards had implemented the intervention [8, 10]. This methodology is attractive when end users believe an intervention is beneficial, because all participating sites end up with the intervention and only the timing of implementation varies. Unfortunately, this study did not record measurements at each time interval, so we cannot address the influence of time spent receiving the intervention on its effectiveness [10]. The study reported 7450 eligible patients, but only 2903 (39 %) were randomly allocated at a ward level [8]. The reasons for this are unclear, making the primary analysis subject to the fallibilities of a non-randomized trial.

The MERIT study was a large cluster randomized controlled trial (RCT) of an RRS involving 23 hospitals in Australia [9]. Twelve hospitals were randomly allocated to implement an RRS after receiving 4 months of education [9]. The primary outcome was a composite measure of cardiac arrests among patients without preexisting DNAR orders, deaths among patients without preexisting DNAR orders, and unplanned ICU admissions, measured at 6 months. The study reported a non-significant effect of the RRS on the primary outcome, although several aspects of study design and conduct have been debated [9]. An RRS may take longer than 6 months to mature, and there may have been insufficient time for this in the study [11]. The study was powered for the primary outcome, and important differences in components of the composite outcome may have been missed. For example, increases in unplanned ICU admissions may have resulted in fewer cardiac arrests and unexpected deaths. Ideally a composite measure should be comprised of components that are of similar clinical importance, occur with similar frequency, and have similar responses to the intervention [12]. Another concern with the MERIT trial is contamination. Half of the cardiac arrest team activations in the control hospitals were for patients without cardiac arrests. This suggests a possible substitution effect whereby cardiac arrest teams may have performed informal RRS functions in control hospitals [9].

Rapid response systems are complex interventions, and the literature reflects the challenges of their evaluation. As such, it is important to remember that any intervention can have unintended consequences [13]. The incorrect use of early warning scores and poorly understood escalation pathways may either fail to detect deteriorating patients or generate unnecessary workloads [14]. Transitioning care to an ICU-based team when patients are most vulnerable may increase care fragmentation and erode the usual care team’s (nurse and physician) skill in managing patients with acute deterioration. The ideal composition of providers responding to a clinically deteriorating patient is unknown. Physician presence, intensity of RRS activation, and the time spent on implementation were not associated with the effectiveness of RRS [2, 15]. The costs of an RRS are substantial (e.g., staffing, equipment, consumables, and education and training), and more cost-effective interventions may be available.

Interventions with strong face validity are conceptually attractive, but require robust scientific evaluation to ensure that anticipated benefits materialize and are not offset by important unanticipated consequences. The current evidentiary basis is at risk of bias and insufficient to support implementation of RRS outside of the context of evaluation. Until we have sufficient high-quality studies, RRS should be considered unproven.


References

1. Jones DA, DeVita MA, Bellomo R (2011) Rapid-response teams. N Engl J Med 365(2):139–146

2. Maharaj R, Raffaele I, Wendon J (2015) Rapid response systems: a systematic review and meta-analysis. Crit Care 19:254

3. Chan PS et al (2010) Rapid response teams: a systematic review and meta-analysis. Arch Intern Med 170(1):18–26

4. Winters BD et al (2013) Rapid-response systems as a patient safety strategy: a systematic review. Ann Intern Med 158(5 Pt 2):417–425

5. Bristow PJ et al (2000) Rates of in-hospital arrests, deaths and intensive care admissions: the effect of a medical emergency team. Med J Aust 173(5):236–240

6. Howell MD et al (2012) Sustained effectiveness of a primary-team-based rapid response system. Crit Care Med 40(9):2562–2568

7. O’Horo JC et al (2015) The role of the primary care team in the rapid response system. J Crit Care 30(2):353–357

8. Priestley G et al (2004) Introducing Critical Care Outreach: a ward-randomised trial of phased introduction in a general hospital. Intensive Care Med 30(7):1398–1404

9. Hillman K et al (2005) Introduction of the medical emergency team (MET) system: a cluster-randomised controlled trial. Lancet 365(9477):2091–2097

10. Woertman W et al (2013) Stepped wedge designs could reduce the required sample size in cluster randomized trials. J Clin Epidemiol 66(7):752–758

11. Jones D et al (2005) Long term effect of a medical emergency team on cardiac arrests in a teaching hospital. Crit Care 9(6):R808–R815

12. Tomlinson G, Detsky AS (2010) Composite end points in randomized trials: there is no free lunch. JAMA 303(3):267–268

13. Reynolds J et al (2014) The practice of ‘doing’ evaluation: lessons learned from nine complex intervention trials in action. Implement Sci 9:75

14. Smith GB et al (2015) Early warning scores: unravelling detection and escalation. Int J Health Care Qual Assur 28(8):872–875

15. Mahlmeister LR (2006) Best practices in perinatal care: the role of rapid response teams in perinatal units. J Perinat Neonatal Nurs 20(4):287–289


Author information


Corresponding author

Correspondence to Ritesh Maharaj.

Ethics declarations

Conflicts of interest

The authors have no conflict of interest to declare.

Additional information

Contrasting viewpoints can be found at: doi:10.1007/s00134-016-4219-5 and doi:10.1007/s00134-016-4253-3.


Cite this article

Maharaj, R., Stelfox, H.T. Rapid response teams improve outcomes: no. Intensive Care Med 42, 596–598 (2016).
