Medical tests are indispensable for clinicians and provide information that goes beyond what is available from clinical evaluation alone. Systematic reviews that attempt to determine the utility of a medical test are similar to other types of reviews, for example, those that examine clinical and system interventions. In particular, a key consideration in a review is how much influence a particular study should have on the conclusions of the review. This chapter complements the original Methods Guide for Effectiveness and Comparative Effectiveness Reviews (hereafter referred to as the General Methods Guide),1 and focuses on issues of particular relevance to medical tests, especially the estimation of test performance (sensitivity and specificity).
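Because the chapter centers on estimates of test performance, it may help to fix these two quantities up front. The short sketch below computes sensitivity and specificity from a standard 2x2 table comparing an index test against a reference standard; the counts are hypothetical and purely illustrative.

```python
# Minimal sketch: sensitivity and specificity from 2x2 counts.
# The counts are invented for illustration, not from any cited study.

def test_performance(tp: int, fp: int, fn: int, tn: int) -> dict:
    """tp/fp/fn/tn: index test result (positive/negative) crossed with
    reference standard result (disease present/absent)."""
    return {
        "sensitivity": tp / (tp + fn),  # proportion of diseased correctly identified
        "specificity": tn / (tn + fp),  # proportion of non-diseased correctly identified
    }

print(test_performance(tp=90, fp=15, fn=10, tn=85))
# {'sensitivity': 0.9, 'specificity': 0.85}
```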

The evaluation of study features that might influence the relative importance of a particular study has often been framed as an assessment of quality. Quality assessment—a broad term used to encompass the examination of factors such as systematic error, random error, adequacy of reporting, aspects of data analysis, applicability, reporting of ethics approval, and sample size estimation—has been conceptualized in a variety of ways.2,3 In addition, some schemes for quality assessment apply to individual studies and others to a body of literature. As a result, many different tools have been developed to formally evaluate the quality of studies of medical tests; however, there is no empirical evidence that a score based on quantitative weights of individual study features can predict the degree to which a study is more or less “true.” In this context, systematic reviewers have not yet achieved consensus on the optimal criteria for assessing study quality.

Two overarching questions that arise in considering quality in the sense of “value for judgment making” are: 1) Are the results for the population and test in the study accurate and precise (also referred to globally as the study’s “internal validity”)? and 2) Is the study applicable to the patients relevant to the review (an assessment of “external validity” with regard to the purpose of the review)? The first question relates to both systematic error (lack of accuracy, here termed bias) and random error (lack of precision). The second question concerns the relevance of the study not only to the population of interest in the study (which relates to the potential for bias) but also, most importantly for a systematic review, to the population represented in the key questions established at the outset of the review (i.e., applicability).

This chapter is part of the Methods Guide for Medical Test Reviews produced by the Agency for Healthcare Research and Quality (AHRQ) Evidence-Based Practice Centers (EPC) for AHRQ and the Journal of General Internal Medicine. As in the General Methods Guide,1 the major features that influence the importance of a study to key review questions are assessed separately. Chapter 6 of this Guide considers the evaluation of the applicability of a particular study to a key review question. Chapter 7 details the assessment of the quality of a body of evidence, and Chapter 8 covers the issue of random error, which can be addressed when considering all relevant studies through the use, if appropriate, of a summary measure combining study results. Thus, this chapter highlights key issues in assessing risk of bias in studies evaluating medical tests: systematic error resulting from design, conduct, or reporting that can lead to over- or under-estimation of test performance.

Together with the General Methods Guide1 and the other eleven chapters in this Methods Guide for Medical Test Reviews, this chapter is intended to provide a useful resource for authors and users of systematic reviews of medical tests.

EVIDENCE FOR BIASES AFFECTING MEDICAL TEST STUDIES

Before considering risk of systematic bias, it is useful to consider the range of limitations in medical test studies. In a series of studies of bias in the context of the medical test literature, Whiting et al. reviewed studies of the impact of a range of specific sources of error in diagnostic test studies conducted from 1966 to 2000.3–5 In the review, the term “test” was defined broadly to include traditional laboratory tests, clinical examinations, imaging tests, questionnaires, pathology, and measures of health status (e.g., the presence of disease or different stages/severity of a disease).6 Each test included in the analysis was compared to a reference standard, defined as the best comparator test to diagnose the disease or health condition in question. The results of this analysis indicated that no conclusions could be drawn about the direction or relative magnitude of effects for these specific biases. Although not definitive, the reviews showed that bias does occur and that some sources of bias—including spectrum bias, partial verification bias, clinical review bias, and observer or instrument variation—are particularly common in studies of diagnostic accuracy.3 As a guide to further work, the authors summarized the range of quality issues arising in the reviewed articles (Table 1).

Table 1 Commonly Reported Sources of Systematic Bias in Studies of Medical Test Performance
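To make the direction of one commonly reported bias concrete, the sketch below works through partial verification bias with hypothetical counts: when test-positive patients are preferentially referred to the reference standard and unverified patients are dropped, sensitivity tends to be overestimated and specificity underestimated. All numbers are invented for illustration.

```python
# Hypothetical illustration of partial verification bias.

# Full-cohort 2x2 counts (index test vs. reference standard):
tp, fp, fn, tn = 80, 20, 20, 80
true_sens = tp / (tp + fn)  # 80/100 = 0.80
true_spec = tn / (tn + fp)  # 80/100 = 0.80

# Suppose all test-positives but only 25% of test-negatives receive the
# reference standard, and unverified patients are simply excluded:
v_tp, v_fp = tp, fp                 # 80, 20
v_fn, v_tn = 0.25 * fn, 0.25 * tn   # 5, 20
biased_sens = v_tp / (v_tp + v_fn)  # 80/85 ~ 0.94 (inflated)
biased_spec = v_tn / (v_tn + v_fp)  # 20/40 = 0.50 (deflated)

print(f"true:   sens={true_sens:.2f}, spec={true_spec:.2f}")
print(f"biased: sens={biased_sens:.2f}, spec={biased_spec:.2f}")
```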

Elements of study design and conduct that may increase the risk of bias vary according to the type of study. For trials of tests with clinical outcomes, criteria should not differ greatly from those used for rating the quality of intervention studies.1 However, medical test performance studies differ from intervention studies in that they are typically cohort studies with the potential for distinctive sources of bias (e.g., incomplete ascertainment of true disease status, an inadequate reference standard, and spectrum effects). The next section focuses on some additional challenges in assessing the risk of bias in individual studies of medical test performance.

COMMON CHALLENGES

Several common challenges exist when assessing the risk of bias in studies of medical test performance. The first challenge is to identify the appropriate criteria to use. A number of instruments are available for assessing many different aspects of individual study quality—not just the potential for systematic error, but also the potential for random error, applicability, and adequacy of reporting.3 Which of the existing instruments, or which combination of criteria from these instruments, is best suited to the task at hand?

A second common challenge is how to apply each criterion in a way that is appropriate to the goals of the review. For example, a criterion that is straightforward for the evaluation of laboratory studies may be less helpful when evaluating components of the medical history or physical examination. Authors must ensure that the review remains true to the spirit of each criterion and applies it clearly enough to be reproducible by others.

Inadequate reporting, a third common challenge, does not in itself lead to systematic bias, but it limits the assessment of important risk-of-bias criteria. Thus, fairly or unfairly, studies with less meticulous reporting may be assessed as having been less meticulously performed and as not deserving the same degree of attention given to well-reported studies. In such cases, when a study is otherwise judged to make a potentially important contribution, reviewers may need to contact the study’s authors to obtain additional information.

PRINCIPLES FOR ADDRESSING THE CHALLENGES

Principle 1: Use Validated Criteria to Address Relevant Sources of Bias

Multiple instruments are available for assessing risk of bias, and reviewers must choose the one most appropriate to the task. Two systematic reviews have evaluated quality assessment instruments specifically in the context of diagnostic accuracy. West et al.9 evaluated 18 tools (six scales, nine guides, and three EPC rating systems). All of the tools were intended for use in conjunction with other tools relevant for judging the design-specific attributes of a study (for example, the quality of randomized controlled trials or observational studies). Three scales met all six criteria considered important: 1) the Cochrane Working Group checklist,10 2) the tool of Lijmer et al.,11 and 3) the National Health and Medical Research Council checklist.12

In 2005, Whiting et al. undertook a systematic review and identified 91 different instruments, checklists, and guidance documents.4 Of these 91 quality-related tools, 67 were designed specifically for diagnostic accuracy studies and 21 provided guidance for the interpretation, conduct, or reporting of such studies, or listed criteria to consider when assessing them. The majority of these 91 tools did not explicitly state a rationale for the inclusion or exclusion of items, nor had the majority of these scales and checklists been subjected to formal test-retest reliability evaluation. Similarly, the majority did not provide a definition of the components of quality considered in the tool. These variations reflect inconsistency in how quality assessment is understood within the field of evidence-based medicine. The authors did not recommend any particular checklist or tool, but rather used this evaluation as the basis for developing their own checklist, the Quality Assessment of Diagnostic Accuracy Studies (QUADAS).

The QUADAS checklist attempted to incorporate the sources of bias and error that had some empirical basis and validity.6–8 The tool addresses study limitations beyond the risk of systematic bias; it also includes questions related to reporting. An updated version of this scale, QUADAS-2, identifies four key domains (patient selection, index test(s), reference standard, and flow and timing), each of which is rated in terms of risk of bias.13 The updated checklist is shown in Table 2.

Table 2 QUADAS-2 Questions for Assessing Risk of Bias in Diagnostic Accuracy Studies*

We recommend that reviewers assess the risk of systematic error using criteria that have been validated to some degree, such as those from an instrument like QUADAS-2. Chapters 6 and 8 discuss applicability and random error, the other important aspects of quality assessment. In addition to disregarding irrelevant items, systematic reviewers may need to add criteria from other standardized checklists such as the Standards for Reporting of Diagnostic Accuracy (STARD)14 or the Strengthening the Reporting of Genetic Association Studies (STREGA)15 (an extension of the Strengthening the Reporting of Observational Studies in Epidemiology [STROBE]).16

Principle 2: Standardize the Application of Criteria

In order to maintain objectivity in an otherwise subjective process, it is useful to standardize the application of criteria. There is little empirical evidence to inform decisions about this process. Thus, we recommend that the review team establish clear definitions for each criterion. This approach is demonstrated in the Illustration section below. In addition, it can be useful to pilot the criteria definitions with at least two reviewers. In this way, reviewers can revise unreliable items and measure the reliability of the final set of criteria.
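One concrete way to measure reliability during such a pilot is a chance-corrected agreement statistic; Cohen’s kappa is a common choice for two reviewers, although the chapter does not prescribe a particular statistic. The sketch below uses hypothetical ratings.

```python
# Minimal sketch: chance-corrected agreement (Cohen's kappa) between two
# reviewers piloting a single criterion. All ratings are hypothetical.
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Expected agreement if both reviewers rated independently at random
    # with their observed marginal frequencies.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

a = ["yes", "yes", "no", "unclear", "yes", "no"]
b = ["yes", "no", "no", "unclear", "yes", "yes"]
print(round(cohens_kappa(a, b), 2))  # 0.45
```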

Consistent with previous EPC guidance and other published recommendations,2 we suggest summarizing study limitations across multiple items for a single study into simple categories. Building on the guidance given in AHRQ’s General Methods Guide,1 we propose using the terms “low,” “medium,” and “high” to rate risk of bias. Table 3 illustrates the application of these three categories in the context of diagnostic accuracy studies; one way a rule of this kind might be operationalized is sketched after the table. It is useful to have two reviewers independently assign studies to categories and to reconcile disagreements by discussion. A crucial point is that whatever definitions are used, reviewers should establish them in advance of the final review (a priori) and report them explicitly.

Table 3 Categorizing Individual Studies into General Quality Classes*
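A pre-specified decision rule mapping per-domain judgments (here, the four QUADAS-2 domains named above) to an overall category is one way to operationalize such a table. The rule below is an assumption for illustration only; a review team should define and report its own rule a priori.

```python
# Hypothetical decision rule rolling per-domain risk-of-bias judgments
# into an overall "low"/"medium"/"high" category. The threshold is
# illustrative, not a recommendation from the chapter.

DOMAINS = ("patient selection", "index test", "reference standard",
           "flow and timing")

def overall_risk(domain_ratings: dict) -> str:
    ratings = [domain_ratings[d] for d in DOMAINS]
    if all(r == "low" for r in ratings):
        return "low"
    if ratings.count("high") >= 2:  # illustrative threshold
        return "high"
    return "medium"

study = {"patient selection": "low", "index test": "high",
         "reference standard": "low", "flow and timing": "unclear"}
print(overall_risk(study))  # medium
```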

Principle 3: Decide When Inadequate Reporting Constitutes a Fatal Flaw

Reviewers must also carefully consider how to handle inadequate reporting. Inadequate reporting, in and of itself, does not introduce systematic bias, but it does limit the reviewers’ ability to assess the risk of bias. Some systematic reviewers may take a conservative approach by assuming the worst, while others may be more liberal by giving the benefit of the doubt.

When a study otherwise makes a potentially important contribution to the review, reviewers may resolve issues of reporting by contacting study authors. When it is not possible to obtain these details, reviewers should document that the study did not adequately report a particular criterion.

More importantly, reviewers must determine a priori whether failure to report some criteria might represent a “fatal flaw” (i.e., one likely to make the results either uninterpretable or invalid). For example, if a review is intended to apply to older individuals yet a study does not report age, this could represent a flaw that would cause the study to be excluded from the review, or included but rated “high” with regard to risk of bias. Reviewers should identify their proposed method of handling inadequate reporting a priori and document it carefully, as in the sketch below.
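As a sketch of what such an a priori rule might look like, the snippet below contrasts a conservative policy (treat unreported items as “no”) with a liberal one (treat them as “yes”). The function and rating labels are hypothetical, not drawn from the chapter.

```python
# Hypothetical pre-specified policies for items a study fails to report:
# "conservative" assumes the worst; "liberal" gives the benefit of the doubt.

def resolve(ratings: list[str], policy: str) -> list[str]:
    fill = "no" if policy == "conservative" else "yes"
    return [fill if r == "not reported" else r for r in ratings]

study = ["yes", "not reported", "yes", "not reported"]
print(resolve(study, "conservative"))  # ['yes', 'no', 'yes', 'no']
print(resolve(study, "liberal"))       # ['yes', 'yes', 'yes', 'yes']
```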

ILLUSTRATION

A recent AHRQ systematic review evaluated the accuracy of reporting of family history and the factors likely to affect that accuracy.17,18 The index test was patients’ self-report of their family history, and the reference standard could include verification of the relatives’ status from either medical records or disease or death registries. The methods chapter identified a single instrument (QUADAS) to evaluate the quality of the eligible studies. The reviewers provided a rationale for their selection of items from within this tool; they excluded four of 14 items and gave their justifications for doing so in an appendix. Additionally, the reviewers provided contextual examples of how each QUADAS item had been adapted for the review. As noted in Table 4, partial verification bias was defined for the context in which self-reported family history was the index test and verification of the relatives’ status (through direct contact, health records, or disease/death registries) was the reference test. The authors provided explicit rules for rating this quality criterion as “yes,” “no,” or “unclear.”

Table 4 Interpretation of Partial Verification Bias: the Example of Family History17, 18*

The systematic reviewer can choose to present ratings of individual QUADAS criteria in tabular form, for example as the percentage of studies that scored “yes,” “no,” or “unclear” on each criterion; the sketch below shows one way to generate such a summary. The developers of the tool do not recommend using composite scores.6
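The snippet below is a minimal sketch of such a tabulation. Item names and ratings are invented for illustration.

```python
# Tabulating QUADAS item ratings as the percentage of studies rated
# "yes", "no", or "unclear" per item (no composite score is computed).
from collections import Counter

ratings = {  # item -> one rating per included study (hypothetical)
    "partial verification avoided": ["yes", "yes", "no", "unclear", "yes"],
    "acceptable reference standard": ["yes", "no", "no", "yes", "yes"],
}

for item, votes in ratings.items():
    counts, n = Counter(votes), len(votes)
    summary = ", ".join(f"{cat} {100 * counts[cat] / n:.0f}%"
                        for cat in ("yes", "no", "unclear"))
    print(f"{item}: {summary}")
# partial verification avoided: yes 60%, no 20%, unclear 20%
# acceptable reference standard: yes 60%, no 40%, unclear 0%
```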

SUMMARY

An assessment of methodological quality is a necessary activity for authors of systematic reviews, including reviews of studies of medical test performance. Judging the overall quality of an individual study involves examining the size of the study, the direction and degree of its findings, the relevance of the study, the risk of bias (systematic error affecting internal validity), and other study limitations. In this chapter of the Methods Guide for Medical Test Reviews, we focus on the evaluation of systematic bias in an individual study as a distinctly important component of quality in studies of medical test performance.

KEY POINTS

  • When assessing limitations in studies of medical tests, systematic reviewers should select validated criteria that examine the risk of systematic error.

  • Systematic reviewers should categorize the risk of bias for individual studies as “low,” “medium,” or “high.”

  • Two reviewers should independently assess individual criteria as well as global categorization.

  • Reviewers should establish methods for determining an overall categorization for the study limitations a priori and document these decisions clearly.