
Do physicians and nurses agree on triage levels in the emergency department? A meta-analysis

  • E. Pishbin
  • M. Ebrahimi
  • A. Mirhaghi
Originalien

Abstract

Introduction

Few studies have focused on the agreement between emergency physicians and nurses in the triage of emergency patients. The aim of this meta-analytic review was to examine the level of inter-rater reliability between physicians and nurses on the use of triage scales.

Methods

Detailed searches of a number of electronic databases were performed up to 1 September 2018. Studies that reported sample sizes, reliability coefficients, and a comprehensive description of the assessment of the inter-rater reliability of physicians and nurses were included. The articles were selected according to the Guidelines for Reporting Reliability and Agreement Studies (GRRAS). Two reviewers performed the review process, including study selection, quality assessment, and data extraction. The effect size was estimated by z-transformation of the reliability coefficients. Data were pooled with random-effects models, and a meta-regression was performed based on the method of moments estimator.

Results

Twelve studies were included. The pooled coefficient for the level of agreement between physicians and nurses was substantial, with a value of 0.756 (confidence interval [CI] 95%: 0.659–0.828). The level of agreement was higher for the weighted kappa (κ), live cases, and pediatric cases than the unweighted κ, paper-case scenarios, and adult cases, respectively. The level of agreement between physicians and nurses on triage scales has improved over time (B = 0.011; P < 0.001).

Conclusion

Overall, the level of agreement between physicians and nurses was substantial, and this level of agreement should be considered acceptable for triage in the emergency department. Further studies on the level of agreement between physicians and nurses are needed.

Keywords

Triage · Physicians · Nurses · Emergency treatment · Reliability and validity · Meta-analysis


Introduction

Triage scales were introduced to ensure reliability and safety throughout the triage process. The use of comprehensive triage scales has increased rapidly in recent decades, together with rising expectations of concordance among triage staff in the emergency department (ED) [1]. It is of primary importance that hospital personnel have the competences required to perform triage successfully. According to emergency associations, triage should be performed by a registered nurse [2, 3]. A number of studies have investigated the role of physicians in ED triage [4, 5]. According to some research, a physician may guide the selection of medical services effectively, thereby mitigating the negative effects of overcrowding and improving patient flow through the triage process in the ED.

The use of nurses in triage is more cost-effective for hospitals than the use of physicians: as triage requires experienced clinicians, staffing the triage room with nurses rather than experienced physicians can cut hospital costs [7]. However, before drawing conclusions about the relative advantages of physicians versus nurses in performing triage, it is imperative to investigate the reliability of the triage decisions made by each group [6]. Assessments of inter-rater reliability can examine the consistency of clinicians’ decisions and the extent of the similarity between physicians’ and nurses’ triage practices. Clinical decisions must be reliable to be valid, and high inter-rater reliability (κ = 0.81–1.00) denotes similar performance by physicians and nurses. Information on the level of agreement between triage decisions made by physicians and those made by nurses can shed light on similarities in decision making; this level of similarity is important because it determines the comparability of their performance. It should be noted that information on the level of agreement does not shed light on the accuracy of triage decisions (validity).

Poor agreement between physicians and nurses may become a source of conflict in the ED and lead to increased tension in the physician–nurse relationship. The findings of previous studies that examined the inter-rater reliability of triage decisions made by physicians and nurses were inconclusive (κ range: 0.186–0.94) [8, 9]. Information is lacking on concordance between physicians and nurses in terms of triage decision making [8, 9]. Computing a pooled estimate of a reliability coefficient could help identify the potential effect of emergency clinicians in terms of profession on triage concordance [10]. Therefore, the aim of this meta-analytic review was to determine the level of agreement between physicians and nurses in triage decision making.

Methods

Literature search

Electronic databases, including Scopus and PubMed, were searched from inception to 1 September 2018; no publication date filter was used. The search terms included “Reliability”, “Triage”, “Physician”, “Nurse”, “Agreement”, and “Emergency”. The reference lists of the included studies were hand searched to identify other potentially relevant articles. Two researchers independently examined the search results to identify potentially eligible articles (Fig. 1). Where needed, the authors of the research papers were contacted to retrieve supplementary information.
Fig. 1

Flow chart of literature search and selection process

Eligibility criteria

Irrelevant and duplicate results were eliminated. Only studies that used formal triage scales were included, and only English-language publications were reviewed. The latest version of a triage scale was identified as papers published after 2008 for the Canadian Triage and Acuity Scale (CTAS) and after 2012 for the Emergency Severity Index (ESI). Articles were selected according to the Guidelines for Reporting Reliability and Agreement Studies [11]. According to these guidelines, only studies that described the sample size, number of raters and subjects, sampling method, rating process, statistical analysis, and reliability coefficients were included in the analysis. Each item was graded as qualified if it was described in sufficient detail in the paper, and a qualified paper was defined as one meeting more than six of the eight criteria. Disagreements were resolved by consensus. Articles that did not report the type of reliability were excluded from the analyses.

Data extraction

In the next phase, information was retrieved on the following parameters: participants (age group and size), raters (profession and size), instruments (live vs. scenario), origin and publication year of the study, reliability coefficient, and methodology. The reliability coefficients extracted from the articles were as follows: inter-rater reliability, kappa (κ) coefficient (weighted and unweighted), intraclass correlation coefficient, Pearson’s correlation coefficient, and Spearman’s rank correlation coefficient. In a meta-regression, each sample was considered a unit of analysis. If the same sample was reported in two or more articles, it was included once. In contrast, if several samples of different populations were reported in one study, each sample was separately included as a unit of analysis.

Data analysis

Pooling of the reliability coefficients was performed for both types of reliability. Most of the qualified articles reported reliability coefficients using kappa statistics (an r-type coefficient, ranging from −1.00 to +1.00). Agreement was defined as poor (κ = 0.00–0.20), fair (κ = 0.21–0.40), moderate (κ = 0.41–0.60), substantial (κ = 0.61–0.80), or almost perfect (κ = 0.81–1.00) [12]. Kappa can be treated as a correlation coefficient in a meta-analysis. Q and I2 statistics were used to assess statistical heterogeneity among the selected studies. Variables with a Q statistic P-value < 0.05 were considered heterogeneous; in that case, the total variance exceeds what would be expected from within-study error alone, and a random-effects model was assumed [13]. To allow correct interpretation, the pooled effect sizes were back-transformed (z to r) to the scale of the primary coefficients [14, 15]. Both fixed-effects and random-effects models were applied. The data were analyzed using Comprehensive Meta-Analysis software version 2.2.050 (Biostat, Inc., Englewood, NJ, USA).
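The pooling pipeline described above (Fisher z-transformation, Q and I2, a method-of-moments between-study variance, and back-transformation to the kappa scale) can be sketched as follows. This is a minimal illustration with made-up kappa values and sample sizes, not the data of the 12 included studies, and it uses the common DerSimonian–Laird form of the moment estimator:

```python
import math

# Illustrative per-study kappa coefficients and sample sizes
# (hypothetical values, not those of the included studies).
kappas = [0.74, 0.80, 0.62, 0.91, 0.70]
ns = [120, 300, 85, 150, 200]

# Fisher z-transformation; the sampling variance of z is approx. 1/(n - 3).
zs = [0.5 * math.log((1 + k) / (1 - k)) for k in kappas]
vs = [1.0 / (n - 3) for n in ns]
ws = [1.0 / v for v in vs]

# Fixed-effect pooled z, Cochran's Q, and I2 for heterogeneity.
z_fixed = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
Q = sum(w * (z - z_fixed) ** 2 for w, z in zip(ws, zs))
df = len(zs) - 1
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

# Method-of-moments (DerSimonian-Laird) between-study variance tau^2,
# then random-effects weights and the pooled random-effects estimate.
c = sum(ws) - sum(w ** 2 for w in ws) / sum(ws)
tau2 = max(0.0, (Q - df) / c)
ws_re = [1.0 / (v + tau2) for v in vs]
z_re = sum(w * z for w, z in zip(ws_re, zs)) / sum(ws_re)

# Back-transform (z to r) so the pooled value is on the kappa scale.
pooled_kappa = math.tanh(z_re)

def agreement_band(k: float) -> str:
    """Interpretation bands used in the paper (Landis-Koch style)."""
    if k <= 0.20:
        return "poor"
    if k <= 0.40:
        return "fair"
    if k <= 0.60:
        return "moderate"
    if k <= 0.80:
        return "substantial"
    return "almost perfect"

print(round(pooled_kappa, 3), agreement_band(pooled_kappa))
```

With heterogeneous studies, tau2 is positive and the random-effects estimate weights the studies more evenly than the fixed-effect estimate, which is why the paper reports the random-effects result.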

Simple meta-regression analysis was performed using the method of moments estimator [16]. In the meta-regression model, the effect size was the dependent variable, and the characteristics of the studies and subjects were the independent variables, allowing potential predictors of the reliability coefficients to be detected. The z-transformed reliability coefficients were regressed on the publication year of the study and on distance, defined as the distance from the place of origin of each study to the place of origin of the triage scale. The meta-regression used a random-effects model because of the presence of significant between-study variation [17].
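A random-effects meta-regression of this kind can be sketched as a weighted least-squares fit of the z-transformed coefficients on a moderator (here publication year), with a moment-based tau^2 added to the weights. The data are illustrative, and the tau^2 step uses a simplified scaling constant; the exact method-of-moments estimator for meta-regression involves the full hat matrix:

```python
import math

# Illustrative study-level data (hypothetical, not the included studies):
# Fisher z-transformed kappas, their within-study variances, and years.
zs = [0.85, 0.90, 1.00, 1.10, 1.20, 1.25]
vs = [0.010, 0.012, 0.008, 0.015, 0.009, 0.011]
years = [1999, 2003, 2007, 2010, 2013, 2015]

def wls_fit(x, y, w):
    """Weighted least-squares fit y = a + b*x with weights w."""
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxx = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    sxy = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
    b = sxy / sxx
    return ybar - b * xbar, b

# Step 1: fixed-effect WLS fit to obtain the residual heterogeneity Q_res.
w_fe = [1.0 / v for v in vs]
a0, b0 = wls_fit(years, zs, w_fe)
Q_res = sum(w * (z - (a0 + b0 * x)) ** 2 for w, z, x in zip(w_fe, zs, years))

# Step 2: moment-based tau^2 from the residual Q
# (df = k - 2 with one moderator; simplified scaling constant c).
k = len(zs)
c = sum(w_fe) - sum(w ** 2 for w in w_fe) / sum(w_fe)
tau2 = max(0.0, (Q_res - (k - 2)) / c)

# Step 3: refit with random-effects weights 1/(v_i + tau^2).
w_re = [1.0 / (v + tau2) for v in vs]
a, b = wls_fit(years, zs, w_re)
print(f"slope per publication year: {b:.4f}")
```

A positive slope, as reported in the paper (B = 0.011), means the z-transformed agreement rises with publication year.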

Results

Study selection and characteristics

The search strategy identified 96 primary citations relevant to the present study on the agreement between emergency physicians and nurses in ED triage. A total of 12 articles met the inclusion criteria (Fig. 1). Subgroups were organized for method of reliability (intra-/inter-rater) and reliability statistics (weighted/unweighted κ). The level of agreement among the reviewers in the final selection of the articles was almost perfect (κ = 1.0).

The sample size was 9653. The studies were conducted in four countries (Canada, Iran, Korea, and the USA). The publication year of the studies ranged from 1999 to 2015 (median: 2009). Seventy-five percent of all the studies were conducted using the latest version of the triage scale. All the studies used inter-rater reliability. The weighted κ coefficient was the most commonly used statistic (Table 1). The overall pooled coefficient was 0.756, denoting substantial agreement between physicians and nurses (95% confidence interval [CI]: 0.659–0.823; z-value: 10.410, P < 0.001, Q‑value model: 984.176, df: 11, P = 0.001; Fig. 2).
Table 1

Studies on reliability between physicians and nurses in emergency department triage

Study name                    | Guideline | Age       | Patient    | Statistics | Methods | CO
Beveridge et al. 1999 [29]    | CTAS      | Case-mix  | Scenario   | κw         | INTER   | Canada
Wuerz et al. 2000 [30]        | ESI       | Case-mix  | Scenario   | κw         | INTER   | USA
Wuerz et al. 2001 [26]        | ESI       | Case-mix  | Scenario   | κw         | INTER   | USA
Baumann et al. 2005 [27]      | ESI       | Pediatric | Scenario   | κw         | INTER   | USA
Durani Y, 2007 [28]           | ESI       | Pediatric | Scenario   | κw         | INTER   | USA
Choi et al. 2009 [31]         | ESI       | Case-mix  | Scenario   | κw         | INTER   | Korea
Durani Y, 2009 [28]           | ESI       | Pediatric | Scenario   | κw         | INTER   | USA
Platts-Mills et al. 2010 [32] | ESI       | Adult     | Live cases | κw         | INTER   | USA
Green et al. 2012 [20]        | ESI       | Pediatric | Live cases | κuw        | INTER   | USA
Jafar-Rouhi et al. 2013 [21]  | ESI       | Pediatric | Live cases | κuw        | INTER   | Iran
Esmailian et al. 2014 [8]     | ESI       | Case-mix  | Live cases | κw         | INTER   | Iran
Pourasghar et al. 2015 [9]    | ESI       | Case-mix  | Live cases | κuw        | INTER   | Iran

κw weighted kappa, κuw unweighted kappa, INTER inter-rater reliability, CO country of origin, CTAS Canadian Triage and Acuity Scale, ESI Emergency Severity Index

Fig. 2

Pooled estimates of the triage reliability coefficients by statistic

Subgroup analyses

The weighted κ and unweighted κ were 0.797 and 0.719, respectively, denoting substantial reliability in the level of agreement between physicians and nurses (95% CI: 0.662–0.882 and 0.568–0.823, respectively; Fig. 2). The level of agreement between physicians and nurses in their assessments of paper-based scenarios and live cases was also substantial, with values of 0.683 (95% CI: 0.546–0.784) and 0.793 (95% CI: 0.679–0.869), respectively. Furthermore, the level of agreement on adult and pediatric patients was substantial, with values of 0.784 (95% CI: 0.622–0.881) for adult cases and 0.772 (95% CI: 0.637–0.862) for pediatric cases.

Moderator effect

A meta-regression analysis was performed based on the method of moments for distance and publication year. Studies conducted closer to the place of origin of the triage scale reported a higher pooled coefficient, but this was not significantly greater than that of studies further from the origin of the ESI (Table 2). The reliability of triage decisions improved over time.
Table 2

Meta-regression of Fisher’s z-transformed κ coefficients on predictor variables^a

Independent variable       | B       | SE    | P
Distance from ESI origin^b | −0.0000 | 0.002 | 0.82
Publication year           | 0.011   | 0.002 | <0.001

SE standard error, ESI Emergency Severity Index

^a Using studies relating to ESI reliability with weighted κ coefficients

^b The ESI triage scale originated in Boston, MA, USA

Sensitivity analysis

A sensitivity analysis of the impact of the triage scale on the agreement between physicians and nurses showed that the reliability of the ESI was substantial, with a value of 0.777 (95% CI: 0.675–0.849; z-value: 9.36, P < 0.001; Q‑value model: 983.50, df: 10, P = 0.001; after removing one study by Beveridge et al. that used the CTAS).

Discussion

The results revealed a substantial level of agreement between physicians and nurses in triage decision making. Thus, the inter-rater reliability of physicians and nurses appears to be acceptable, and triage decisions are consistent in the ED. However, most of the studies (10 of 12) used weighted κ statistics to report agreement between emergency physicians and nurses. As weighted κ statistics can overestimate inter-rater reliability among raters [18], the results should be interpreted with caution: the actual level of agreement between physicians and nurses may be lower than that found in the current study. Weighted κ statistics yield higher reliability than unweighted κ statistics because they penalize large differences between ratings more heavily than small ones, in effect giving partial credit for near-miss disagreements [19]. It is important to note that even a one-category misclassification may endanger the life of a critically ill patient. Unweighted κ statistics, however, provide a reasonable estimate of reliability in terms of the level of agreement between physicians and nurses [18]. In the present study, only three studies used unweighted κ to report reliability [8, 20, 21]; this sample is too small to support further conclusions.
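The difference between weighted and unweighted κ can be made concrete with a small contingency-table example. The 5×5 table below is hypothetical (rows: physician's triage level 1–5, columns: nurse's level), with disagreements mostly one level apart, the typical pattern in triage data; quadratic weights then yield a noticeably higher κ than the unweighted statistic:

```python
import numpy as np

# Hypothetical 5x5 triage contingency table (rows: physician levels 1-5,
# columns: nurse levels 1-5); most disagreements are one level apart.
table = np.array([
    [30,  5,  0,  0,  0],
    [ 4, 60, 10,  1,  0],
    [ 0,  9, 80,  8,  0],
    [ 0,  1,  7, 50,  4],
    [ 0,  0,  0,  3, 20],
], dtype=float)

def kappa(table: np.ndarray, weighted: bool = False) -> float:
    """Cohen's kappa; quadratic weights when weighted=True."""
    n = table.sum()
    p = table / n
    m = table.shape[0]
    i, j = np.indices((m, m))
    if weighted:
        # Quadratic weights: full credit on the diagonal,
        # partial credit for near-miss ratings.
        w = 1.0 - ((i - j) ** 2) / (m - 1) ** 2
    else:
        # Identity weights: only exact agreement counts.
        w = (i == j).astype(float)
    # Expected cell probabilities under independence of the two raters.
    expected = np.outer(p.sum(axis=1), p.sum(axis=0))
    po = (w * p).sum()          # observed (weighted) agreement
    pe = (w * expected).sum()   # chance (weighted) agreement
    return (po - pe) / (1.0 - pe)

k_uw = kappa(table, weighted=False)
k_w = kappa(table, weighted=True)
# Weighted kappa credits adjacent-level disagreements, so it is higher here.
print(f"unweighted: {k_uw:.3f}, weighted: {k_w:.3f}")
```

Because off-diagonal mass sits next to the diagonal, the weighted statistic lands in a higher band than the unweighted one, which is the overestimation pattern the paragraph above warns about.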

Kappa did not differ notably between adult and pediatric cases (0.784 vs. 0.772). This finding may be because the ESI uses a similar framework for both adult and pediatric patients [22]; the two versions differ mainly in their vital sign criteria. It is therefore expected that the reliability coefficient does not differ greatly between age subgroups and that age makes little difference to the overall reliability.

In the present study, the level of agreement between physicians and nurses on live cases and paper-case scenarios was congruent with the findings of Worster et al. [23], and the use of live cases was associated with higher reliability than paper-case scenarios (0.793 vs. 0.758). Worster et al. demonstrated moderate to high agreement for live cases and paper-case scenarios (0.9 vs. 0.76), with paper-case scenarios generally receiving lower triage scores than live cases [23]. Live cases usually provide important clinical clues for raters, resulting in higher reliability of raters’ triage scores. This slight difference suggests that the case format is unlikely to bias the overall pooled coefficients.

As shown in Table 2, the concordance between physicians and nurses has increased with time. This may be due to multiple revisions of triage scales in recent years, supporting the idea that triage systems need to be regularly updated. In addition, both physicians and nurses have amassed a large amount of experience in the use of these scales since their introduction some years ago. They have also received training in the use of these scales in the ED. Of note, reliability differs from validity. Thus, substantial reliability does not necessarily indicate acceptable validity.

The meta-regression showed no significant effect of the distance from the origin of the triage scale. The reliability coefficient in countries far from the scale’s country of origin was lower than in countries close to it, but the difference was not significant, presumably because the ESI framework is largely objective and based on vital sign criteria (Table 2). The simplicity and objectivity of the ESI may limit the role of contextual and local factors in the reliability between physicians and nurses. This finding is congruent with that of a previous study on the reliability of the ESI [6]. In contrast, for subjective triage scales such as the Manchester Triage Scale (MTS), a significant effect of the distance from the scale’s origin has been reported [24]. One possible reason might be that the complaint-based nature of the MTS reduces both the objectivity and the reliability of the scale when it is translated into different cultural contexts in countries outside the UK, with different health care systems and cultures of health care practice [1]. In contrast to the MTS, the ESI has been adapted for use in other countries despite cultural diversity [6]. Only one study, by Storm-Versloot et al., has directly compared the reliability of the ESI with that of the MTS [25]; it reported that the MTS showed greater inter- and intraobserver agreement than the ESI using paper-based clinical scenarios. Meta-analytic studies showed no difference between the reliability of the ESI and the MTS: the pooled coefficient was 0.791 (95% CI: 0.787–0.795) for the ESI and 0.751 (95% CI: 0.677–0.810) for the MTS [6, 24]. Thus, neither scale has reached an almost perfect level of reliability. Based on our literature search, no studies appear to have assessed the level of agreement between physicians and nurses for the MTS or the Australasian Triage Scale (ATS).
This gap may reflect the fact that the MTS is composed of 50 algorithms and is relatively complicated, so it is not practical for emergency physicians to learn the MTS solely for research purposes; the same may be true of the ATS. In contrast, the ESI is easy to learn, which is why most studies assessing the level of agreement between emergency physicians and nurses have focused on it. This question deserves further investigation.

The present study has a number of limitations. Only a few of the included studies reported an unweighted κ for inter-rater agreement. As the analysis was therefore largely limited to weighted κ, the results should be interpreted with caution, and potential overestimation of the pooled coefficients must be considered. Contingency tables were not adequately reported in the included studies, so it is not known which triage levels account for the disagreement between physicians and nurses. Most disagreement may relate to triage levels 2–4, as patients in levels 1 and 5 are easily distinguished from the other categories: level 1 patients are clearly critically ill, and level 5 patients are completely stable and ambulatory.

Although the triage scale supports evidence-based practice in the ED [26, 27], there is a considerable gap between research and clinical practice, even at the best of times [28]. Therefore, the reliability in the level of agreement between physicians and nurses in the field may differ from that reported herein. It is worth mentioning that triage reliability hardly exceeds the substantial level, except in situations where there are highly homogenous raters, such as expert–expert agreement. In such cases, the reliability coefficient could be almost perfect (0.90) [12].

Conclusion

Overall, the level of agreement between physicians and nurses was substantial, and this level of agreement should be considered acceptable for triage in the ED. The reliability of triage decisions has improved over time, likely owing to a practice effect. Further development is needed for the agreement between physicians and nurses to approach the almost perfect level and to reduce disagreement. Additional studies on the level of agreement between physicians and nurses are needed.

Notes

Compliance with ethical guidelines

Conflict of interest

E. Pishbin, M. Ebrahimi and A. Mirhaghi declare that they have no competing interests.

For this article no studies with human participants or animals were performed by any of the authors. All studies performed were in accordance with the ethical standards indicated in each case.

References

  1. Parenti N, Reggiani MLB, Iannone P, Percudani D, Dowding D (2014) A systematic review on the validity and reliability of an emergency department triage scale, the Manchester Triage System. Int J Nurs Stud 51(7):1062–1069
  2. The Emergency Nurses Association (2011) Triage qualifications: position statement. Des Plaines, Ill, USA
  3. Ebrahimi M, Mirhaghi A, Mazlom R, Heydari A, Nassehi A, Jafari M (2016) The role descriptions of triage nurse in emergency department: a Delphi study. Scientifica (Cairo) 2016:5269815
  4. Abdulwahid M, Booth A, Kuczawski M, Mason S (2016) The impact of senior doctor assessment at triage on emergency department performance measures: systematic review and meta-analysis of comparative studies. Emerg Med J 33(7):504–513
  5. Burström L, Nordberg M, Ornung G, Castrén M, Wiklund T, Engström M et al (2012) Physician-led team triage based on lean principles may be superior for efficiency and quality? A comparison of three emergency departments with different triage models. Scand J Trauma Resusc Emerg Med 20:57
  6. Mirhaghi A, Heydari A, Mazlom R, Hasanzadeh F (2015) Reliability of the Emergency Severity Index: meta-analysis. Sultan Qaboos Univ Med J 15(1):e71–e77
  7. Mirhaghi A, Roudbari M (2011) A survey on knowledge level of the nurses about hospital triage. Iranian Journal of Critical Care Nursing 3(4):167–174
  8. Esmailian M, Zamani M, Azadi F, Ghasemi F (2014) Inter-rater agreement of emergency nurses and physicians in Emergency Severity Index (ESI) triage. Emerg (Tehran) 2(4):158–161
  9. Pourasghar F, Daemi A, Tabrizi J, Ala A (2015) Inter-rater reliability of triages performed by the electronic triage system. Bull Emerg Trauma 3(4):134–137
  10. Petitti D (1994) Meta-analysis, decision analysis, and cost-effectiveness analysis. Oxford University Press, New York, NY, p 69
  11. Kottner J, Audige L, Brorson S, Donner A, Gajewski BJ, Hróbjartsson A et al (2011) Guidelines for Reporting Reliability and Agreement Studies (GRRAS) were proposed. Int J Nurs Stud 48(6):661–671
  12. Sim J, Wright CC (2005) The kappa statistic in reliability studies: use, interpretation, and sample size requirements. Phys Ther 85(3):257–268
  13. Borenstein M, Hedges L, Higgins J, Rothstein H (2009) Introduction to meta-analysis. John Wiley & Sons, Chichester, pp 187–203
  14. Hedges LV, Olkin I (1985) Statistical methods for meta-analysis. Academic Press, San Diego
  15. Rosenthal R (1991) Meta-analytic procedures for social research. SAGE, Newbury Park
  16. Chen H, Manning AK, Dupuis J (2012) A method of moments estimator for random effect multivariate meta-analysis. Biometrics 68(4):1278–1284
  17. Riley R, Higgins J, Deeks J (2011) Interpretation of random effects meta-analyses. BMJ 342:d549
  18. Goransson K, Ehrenberg A, Marklund B, Ehnfors M (2005) Accuracy and concordance of nurses in emergency department triage. Scand J Caring Sci 19(4):432–438
  19. Ebrahimi M, Mirhaghi A (2015) Re: Inter-rater reliability and validity of the Ministry of Health of Turkey’s mandatory emergency triage instrument. Emerg Med Australas 27(5):496–497
  20. Green N, Durani Y, Brecher D, DePiero A, Loiselle J, Attia M (2012) Emergency Severity Index version 4: a valid and reliable tool in pediatric emergency department triage. Pediatr Emerg Care 28(8):753–757
  21. Jafari-Rouhi A, Sardashti S, Taghizadieh A, Soleimanpour H, Barzegar M (2013) The Emergency Severity Index, version 4, for pediatric triage: a reliability study in Tabriz Children’s Hospital, Tabriz, Iran. Int J Emerg Med 6(1):36
  22. Gilboy N, Tanabe P, Travers D, Rosenau A, Eitel D (2005) Emergency Severity Index, version 4: implementation handbook. AHRQ Publication, Rockville, pp 2–4
  23. Worster A, Sardo A, Eva K, Fernandes C, Upadhye S (2007) Triage tool inter-rater reliability: a comparison of live versus paper case scenarios. J Emerg Nurs 33:319–323
  24. Mirhaghi A, Mazlom R, Heydari A, Ebrahimi M (2017) The reliability of the Manchester Triage System (MTS): a meta-analysis. J Evid Based Med 10(2):129–135. https://doi.org/10.1111/jebm.12231
  25. Storm-Versloot MN, Ubbink DT, Chin a Choi V, Luitse JS (2009) Observer agreement of the Manchester Triage System and the Emergency Severity Index: a simulation study. Emerg Med J 26(8):556–560
  26. Wuerz RC, Travers D, Gilboy N, Eitel DR, Rosenau A, Yazhari R (2001) Implementation and refinement of the Emergency Severity Index. Acad Emerg Med 8(2):170–176
  27. Baumann MR, Strout TD (2005) Evaluation of the Emergency Severity Index (version 3) triage algorithm in pediatric patients. Acad Emerg Med 12(3):219–224
  28. Durani Y, Brecher D, Walmsley D, Attia M, Loiselle L (2009) The Emergency Severity Index version 4: reliability in pediatric patients. Pediatr Emerg Care 25:751–753
  29. Beveridge R, Ducharme J, Janes L, Beaulieu S, Walter S (1999) Reliability of the Canadian emergency department triage and acuity scale: interrater agreement. Ann Emerg Med 34(2):155–159
  30. Wuerz RC, Milne LW, Eitel DR, Travers D, Gilboy N (2000) Reliability and validity of a new five-level triage instrument. Acad Emerg Med 7(3):236–242
  31. Choi M, Kim J, Choi H, Lee J, Shin S, Kim D et al (2009) Reliability of Emergency Severity Index version 4. Ann Emerg Med 54:95–96. https://doi.org/10.1016/j.annemergmed.2009.06.336
  32. Platts-Mills TF, Travers D, Biese K, McCall B, Kizer S, LaMantia M et al (2010) Accuracy of the Emergency Severity Index triage instrument for identifying elder emergency department patients receiving an immediate life-saving intervention. Acad Emerg Med 17(3):238–243

Copyright information

© Springer Medizin Verlag GmbH, ein Teil von Springer Nature 2019

Authors and Affiliations

  1. Department of Emergency Medicine, School of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
  2. Nursing and Midwifery Care Research Center, Mashhad University of Medical Sciences, Mashhad, Iran
