Increasingly, the imager is called upon to coordinate care and to demonstrate that imaging adheres to guideline-directed care. More recently, appropriateness criteria (AC) have been applied as a means to identify optimal candidates for advanced cardiac imaging. The most frequently referenced criteria have been developed by the American College of Radiology (ACR) and the American College of Cardiology (ACC).1,2 As an example, a single ACC document provides rating tables for stable ischemic heart disease,1 whereas six different ACR publications provide ratings for patients presenting with dyspnea and suspected coronary artery disease (CAD), from low to high probability of disease, as well as for the evaluation of asymptomatic individuals.2 In the current issue of the Journal of Nuclear Cardiology, Bagrova and colleagues calculate the concordance between appropriateness ratings from the ACC and those from the ACR. They note that overall concordance in a relatively large cohort of referred patients is modest (kappa statistic of 0.32). As the authors note, agreement on an appropriate indication was high, at 89%, between the ACC and ACR ratings. The greatest disagreement occurred in the categorization of referral indications classified as maybe or usually not appropriate. In practice, discordance would arise, for example, when a provider uses one set of criteria and the payer another. This is one of a series of similar articles from this group3,4,5 and importantly highlights the challenges in implementation, and the hurdles to achieving consistent results, when appropriateness ratings from the ACC vs. the ACR are applied. But does this matter?
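For readers less familiar with the kappa statistic, it quantifies chance-corrected agreement between two raters: kappa = (p_o - p_e)/(1 - p_e), where p_o is the observed proportion of agreement and p_e is the agreement expected by chance from each rater's marginal category frequencies. The following minimal Python sketch uses entirely hypothetical ratings (the category labels and data are illustrative only, not drawn from the Bagrova cohort) to show how raw agreement can look respectable while kappa remains modest:

```python
from collections import Counter

def cohen_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters scoring the same cases.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    proportion of agreement and p_e is the agreement expected by
    chance from each rater's marginal category frequencies.
    """
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed agreement: fraction of cases rated identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement from the two marginal distributions.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    categories = set(freq_a) | set(freq_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings of 10 referrals: A = appropriate,
# M = maybe appropriate, R = rarely appropriate.
acc = ["A", "A", "A", "M", "R", "A", "M", "A", "R", "A"]
acr = ["A", "A", "M", "M", "A", "A", "R", "A", "M", "A"]
print(f"kappa = {cohen_kappa(acc, acr):.2f}")  # ~0.29 for this toy data
```

Because p_e rises when one category (here, "appropriate") dominates both raters' distributions, a high raw agreement such as 89% on appropriate indications can coexist with a modest overall kappa such as 0.32.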

A compelling question is how the use of AC will impact referral patterns and whether the concept of appropriate use is helpful in clinical care. The ACC's appropriate use criteria (AUC), for example, are not without their challenges. The 2014 multimodality statement does not allow for comparative assessment between modalities. By not identifying a "single best" diagnostic test for a given patient, it leaves room for test substitution by a health plan, in particular favoring lower-cost procedures. And, when compared with clinical practice guidelines, the "reasonable use" that dictates categorization for an AUC can carry anywhere from a Class I to a Class IIa recommendation. In addition, the AUC are based on expert opinion rather than on evidence, which could allow biases to be incorporated into the grading. Of course, the term "reasonable use" is critical to the application of the AUC or the ACR's AC, and this is the crux of the analysis and discussion in the current series. A goal in the development of all AUC and AC is to make them simple, so as to facilitate referral of a patient presenting for risk or symptom evaluation. Yet with that simplicity can come notable confusion and challenges in implementation.

One major concern with retrospective studies like the one reported in the Journal is selection and verification bias. Most of the current patient series are referred or retrospective cohorts.3,4,5,6,7 One may speculate as to whether we need to examine those who were not referred but should have been, and how the AUC/AC alter (i.e., improve) clinical decision making on the part of the referring physician. A more detailed evaluation of available population data would be helpful, including knowledge of the denominator of appropriate or inappropriate candidates and their course of care. An appropriate procedure is, by definition, testing that will benefit the patient. Examining only those referred does not allow us to define the population harmed because they did not undergo testing. This should be a goal for future research in this area.

Central to the AC debate is how a decision is made to use one set of criteria over another. How does the referring physician's ease of use, whether with algorithms or application-based programs, impact the decision to use ACC vs. ACR criteria? The perspective of radiology benefits managers would also be of great import in the comparative evaluation of the AUC and AC. We commonly hear from third-party payers that there are innumerable requests for approval based on inappropriate reasons on the part of the referring physician. Based on the current report by Bagrova,3 we see that the majority of referred patients have appropriate indications for testing, as one would expect given that testing has already been performed. A broader evaluation of requests for imaging authorization, together with a greater understanding of the denominator of at-risk patients, would be immensely informative to the field of cardiac imaging.

A key takeaway message from the current report is that, as much as appropriateness criteria have helped reduce the use of nuclear imaging for rarely appropriate indications, discordance in categorization will be problematic when multiple criteria are compared. The result is a lack of clarity in selecting the "Right Test for the Right Patient." Better concordance in categorization across the different published criteria is fundamental to creating stability in the field of imaging and to forming the basis for strategic planning of the healthcare needs of our patient populations. We applaud the efforts of this research group and their important contributions to the field of cardiac imaging. Looking to the future, particularly given the electronic tracking capabilities available at most medical centers, we anticipate a better understanding of the candidate patient population needing imaging and of how the AUC/AC can be applied to benefit patients.