Evaluation of the Marburg Heart Score and INTERCHEST score compared to current telephone triage for chest pain in out-of-hours primary care

Introduction: Chest pain is a common and challenging symptom for telephone triage in urgent primary care. Existing chest-pain-specific risk scores, originally developed for diagnostic purposes, may outperform current telephone triage protocols.

Methods: This study involved a retrospective, observational cohort of consecutive patients evaluated for chest pain at a large-scale out-of-hours primary care facility in the Netherlands. We evaluated the performance of the Marburg Heart Score (MHS) and the INTERCHEST score as stand-alone triage tools and compared them with the current decision support tool, the Netherlands Triage Standard (NTS). The outcomes of interest were C‑statistics, calibration and diagnostic accuracy at optimised thresholds, with major events as the reference standard. Major events were defined as a composite of all-cause mortality and both cardiovascular and non-cardiovascular urgent underlying conditions occurring within 6 weeks of initial contact.

Results: We included 1433 patients (57.6% women) with a median age of 55.0 years. Major events occurred in 16.4% (n = 235), of which acute coronary syndrome accounted for 6.8% (n = 98). For predicting major events, the C‑statistics for the MHS and INTERCHEST score were 0.74 (95% confidence interval: 0.70–0.77) and 0.76 (0.73–0.80), respectively, compared with 0.66 (0.62–0.69) for the NTS. All three showed appropriate calibration. At a threshold of ≥ 2, both scores reduced the number of referrals (with lower false-positive rates) while maintaining equal safety compared with the NTS.

Conclusion: Diagnostic risk stratification scores for chest pain may also improve telephone triage for major events in out-of-hours primary care by reducing the number of unnecessary referrals without compromising triage safety. Further validation is warranted.

Supplementary Information: The online version of this article (10.1007/s12471-022-01745-0) contains supplementary material, which is available to authorized users.


STARD 2015 checklist (excerpt)

INTRODUCTION
3. Scientific and clinical background, including the intended use and clinical role of the index test
4. Study objectives and hypotheses

METHODS
Study design
5. Whether data collection was planned before the index test and reference standard were performed (prospective study) or after (retrospective study)

OTHER INFORMATION
Where the full study protocol can be accessed
30. Sources of funding and other support; role of funders

AIM
STARD stands for "Standards for Reporting Diagnostic accuracy studies". This list of items was developed to contribute to the completeness and transparency of reporting of diagnostic accuracy studies. Authors can use the list to write informative study reports. Editors and peer reviewers can use it to evaluate whether the information has been included in manuscripts submitted for publication.

EXPLANATION
A diagnostic accuracy study evaluates the ability of one or more medical tests to correctly classify study participants as having a target condition. This can be a disease, a disease stage, response or benefit from therapy, or an event or condition in the future. A medical test can be an imaging procedure, a laboratory test, elements from history and physical examination, a combination of these, or any other method for collecting information about the current health status of a patient.
The test whose accuracy is evaluated is called the index test. A study can evaluate the accuracy of one or more index tests.
Evaluating the ability of a medical test to correctly classify patients is typically done by comparing the distribution of the index test results with that of the reference standard. The reference standard is the best available method for establishing the presence or absence of the target condition. An accuracy study can rely on one or more reference standards.
If test results are categorized as either positive or negative, the cross tabulation of the index test results against those of the reference standard can be used to estimate the sensitivity of the index test (the proportion of participants with the target condition who have a positive index test), and its specificity (the proportion without the target condition who have a negative index test). From this cross tabulation (sometimes referred to as the contingency or "2x2" table), several other accuracy statistics can be estimated, such as the positive and negative predictive values of the test. Confidence intervals around estimates of accuracy can then be calculated to quantify the statistical precision of the measurements.
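As a minimal sketch, the accuracy statistics above can be computed directly from a 2x2 cross tabulation. The counts and the simple normal-approximation (Wald) confidence interval below are illustrative assumptions, not data from any study:

```python
import math

# Hypothetical 2x2 table counts (illustrative only, not from any study):
# rows = index test result, columns = reference standard result.
tp, fp = 40, 60    # test positive: with / without the target condition
fn, tn = 10, 190   # test negative: with / without the target condition

sensitivity = tp / (tp + fn)  # proportion with the condition who test positive
specificity = tn / (tn + fp)  # proportion without the condition who test negative
ppv = tp / (tp + fp)          # positive predictive value
npv = tn / (tn + fn)          # negative predictive value

def wald_ci(p, n, z=1.96):
    """Normal-approximation (Wald) 95% confidence interval for a proportion."""
    half = z * math.sqrt(p * (1 - p) / n)
    return (max(0.0, p - half), min(1.0, p + half))

lo, hi = wald_ci(sensitivity, tp + fn)
print(f"sensitivity = {sensitivity:.2f} (95% CI {lo:.2f}-{hi:.2f})")
print(f"specificity = {specificity:.2f}, PPV = {ppv:.2f}, NPV = {npv:.2f}")
```

For the small numbers typical of rare target conditions, an exact or Wilson score interval would be preferable to the Wald interval; the Wald form is used here only because it is the shortest to write down.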
If the index test results can take more than two values, categorization of test results as positive or negative requires a test positivity cut-off. When multiple such cut-offs can be defined, authors can report a receiver operating characteristic (ROC) curve, which graphically represents the combination of sensitivity and specificity for each possible test positivity cut-off. The area under the ROC curve summarizes the overall diagnostic accuracy of the index test in a single numerical value.
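For an ordinal index test such as a point score, the ROC curve and its area can be traced by sweeping the positivity cut-off and applying the trapezoidal rule. The scores and outcome labels below are invented for illustration:

```python
# Hypothetical ordinal risk scores (e.g. 0-5 points) and outcome labels
# (1 = target condition present). Values are illustrative, not study data.
scores = [0, 1, 1, 2, 2, 3, 3, 4, 4, 5]
labels = [0, 0, 0, 0, 1, 0, 1, 1, 1, 1]

def roc_points(scores, labels):
    """(1 - specificity, sensitivity) for every 'positive if score >= c' cut-off."""
    pos = sum(labels)
    neg = len(labels) - pos
    pts = []
    # Sweeping c from min(score) to max(score) + 1 includes the (1, 1)
    # corner (everyone positive) and the (0, 0) corner (no one positive).
    for c in sorted(set(scores)) + [max(scores) + 1]:
        tp = sum(1 for s, y in zip(scores, labels) if s >= c and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= c and y == 0)
        pts.append((fp / neg, tp / pos))
    return sorted(pts)

def auc(points):
    """Area under the ROC curve via the trapezoidal rule."""
    return sum((x1 - x0) * (y0 + y1) / 2
               for (x0, y0), (x1, y1) in zip(points, points[1:]))

pts = roc_points(scores, labels)
print(round(auc(pts), 2))  # → 0.92
```

The trapezoidal area agrees with the Mann-Whitney interpretation of the AUC: the probability that a randomly chosen participant with the condition scores higher than one without it, counting ties as one half.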
The intended use of a medical test can be diagnosis, screening, staging, monitoring, surveillance, prediction or prognosis. The clinical role of a test explains its position relative to existing tests in the clinical pathway. A replacement test, for example, replaces an existing test. A triage test is used before an existing test; an add-on test is used after an existing test.
Besides diagnostic accuracy, several other outcomes and statistics may be relevant in the evaluation of medical tests. Medical tests can also be used to classify patients for purposes other than diagnosis, such as staging or prognosis. The STARD list was not explicitly developed for these other outcomes, statistics, and study types, although most STARD items would still apply.

DEVELOPMENT
This STARD list was released in 2015. The 30 items were identified by an international expert group of methodologists, researchers, and editors. The guiding principle in the development of STARD was to select items that, when reported, would help readers to judge the potential for bias in the study, to appraise the applicability of the study findings and the validity of conclusions and recommendations. The list represents an update of the first version, which was published in 2003.
More information can be found at http://www.equator-network.org/reporting-guidelines/stard.