Introduction

Data analytics pertaining to quality and safety in medical practice have recently taken on greater importance as healthcare consumers push for accountability through expanded data access, transparency, participation in medical decision-making, and payment models in which reimbursements are in part tied to quality [1–4]. In medical imaging practice, radiation dose and image quality are critical components of these analyses, with the former gaining particular attention as computed tomography (CT) utilization and the resulting radiation dose measures rise dramatically [5–7].

At the heart of radiation dose management is the concept of ALARA, an acronym for “as low as reasonably achievable” [8]. This well-established mandate calls for radiology providers to achieve high levels of image quality while keeping radiation dose low and medically acceptable. In the past decade, a shift has taken place from the principle of “image quality as good as possible” to “image quality as good as needed” [9]. This has in effect modified the traditional tenets of ALARA by placing greater emphasis on radiation dose (i.e., safety) and less on image quality, as long as image quality remains adequate to enable an accurate diagnosis [10, 11].

As medical imaging service and technology providers strive to achieve maximal radiation dose reduction with “clinically adequate” image quality, it becomes clear that a static approach to radiation dose reduction is neither practical nor prudent. Both radiation dose and image quality are inherently dynamic, affected by individual patient attributes, technology, clinical context, and exam type. For these reasons, any methodology used to optimize quality and safety in tandem must take into account this dynamic nature and the multiplicity of contributing factors. To meet this challenge, standardized data are required to create a referenceable database, which in turn provides for objective metadata analysis and the creation of best practice guidelines.

The Interaction Effects of Quality and Safety

One interesting and well-documented interaction effect between image quality and radiation dose takes place in digital radiography, in the form of “dose creep” [12]. The automatic optimization of image contrast and density in digital radiography systems often makes it impossible to judge from image density whether a radiograph is over- or underexposed. Instead, underexposure manifests itself as increased image noise, while overexposure is rewarded with higher image quality. When exposure parameters are chosen manually (e.g., for pediatric or intensive care unit patients), technologists tend to favor overexposure because underexposure will lead to radiologist complaints of poor image quality. The result is a gradual increase in exposure parameters over time, which can have an important clinical effect when multiple follow-up exams are performed or when patients are particularly radiosensitive [9].

CT is another example of a digital imaging modality in which image quality continues to improve as exposure increases [13]. While higher-dose CT protocols carry no detrimental impact on image quality, they come at the cost of increased (and often unnecessary) radiation exposure. A number of CT radiation dose reduction strategies have been reported [14–19], but the interaction effects these strategies have on image quality have not been critically analyzed, and such analysis should be a requirement of any comprehensive approach to combined safety and quality assessment.

When CT radiation dose is reduced by decreasing the milliampere-second (mAs) value, image noise increases (in approximately inverse proportion to the square root of mAs), which degrades image quality [20, 21]. The ability to accommodate the increased noise of low-mAs CT protocols is often dependent upon the exam type and clinical indication [13]. As an example, when comparing chest and abdominal CT exams, high-contrast objects in the lung will not be routinely affected by increased noise, whereas low-contrast liver lesions will be adversely affected, which may result in diagnostic inaccuracies. An alternative strategy for CT radiation dose reduction is increased pitch, which effectively increases CT slice thickness and can in turn increase volume averaging and decrease resolution. In the example of a chest CT, increased pitch and effective slice thickness could result in volume averaging and a missed diagnosis of a small pulmonary nodule.
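
To make the dose-noise tradeoff concrete, the minimal Python sketch below encodes the inverse-square-root relationship described above, assuming quantum-noise-limited imaging; the function name and the example mAs values are hypothetical.

```python
import math

def relative_noise(mas_reference: float, mas_reduced: float) -> float:
    """Approximate relative noise increase when mAs is reduced.

    Assumes quantum-noise-limited imaging, in which image noise
    scales with the inverse square root of the mAs value.
    """
    return math.sqrt(mas_reference / mas_reduced)

# Halving the mAs (e.g., 200 -> 100) raises noise by roughly 41%,
# which may be tolerable for high-contrast lung anatomy but not
# for low-contrast liver lesions.
print(f"{relative_noise(200, 100):.2f}x noise")  # ~1.41x noise
```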

The net result is that radiation dose and medical image quality often move in concert with one another, but their effect on one another is variable, depending upon the individual patient and clinical context. The dynamic nature of this interaction creates challenges for medical imaging providers in their attempt to simultaneously reduce radiation dose to the lowest possible levels while maintaining sufficient image quality for accurate and definitive clinical diagnosis. An effective solution requires a method in which standardized radiation and image quality data can be recorded in tandem and analyzed in accordance with exam type, patient attributes, and clinical context to create data-driven best practice clinical guidelines.

Data Standardization

In order to perform meta-analysis, a fundamental requirement for evidence-based medicine and the creation of best practice guidelines, one must first create referenceable databases populated with standardized data. Representative examples of standardized image quality and radiation metrics are listed in Tables 1 and 2 [22, 23]; these provide a unified method of presenting image quality and radiation dose measures using an easily understood numerical Likert scale. Since image quality scoring is currently subjective in nature, safeguards would be required to ensure that recorded image quality scores are unbiased, reproducible, and consistent across the myriad providers generating them. Strategies to accomplish this include (but are not limited to) external validation of image quality scoring by a neutral and unbiased third party (e.g., image quality accreditation bodies), solicitation of individual image quality scores from multiple sources (e.g., supervisory technologist, medical physicist, radiologist), and statistical analysis of large samples of image quality scores to assess inter-rater variability, as sketched below. One could envision a time in the near future when computerized image quality algorithms are developed, which in theory could objectify the process of image quality analysis and reduce the dependence on subjective scoring [24].
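
As one illustration of such an inter-rater variability analysis, the sketch below computes Cohen's kappa (one of several suitable agreement statistics) for two hypothetical raters scoring the same exams on the 1-5 Likert scale; the rater roles and scores are invented for illustration.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b, categories=range(1, 6)):
    """Cohen's kappa for two raters scoring the same exams on a
    1-5 Likert scale: chance-corrected agreement between them."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical image quality scores from two independent raters
radiologist = [3, 4, 3, 2, 5, 3, 4, 3]
physicist = [3, 4, 2, 2, 5, 3, 3, 3]
print(f"kappa = {cohens_kappa(radiologist, physicist):.2f}")  # ~0.63
```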

Table 1 Standardized grading system for technical image quality
Table 2 Standardized grading system for radiation dose as a measure of medical imaging safety

The standardization of radiation dose data (Table 2) would provide an easy-to-understand metric which places the radiation dose measurement of the current exam in the context of its peer group of comparable exams. Exam comparability can be defined by a number of variables (Table 3), which at a minimum would allow comparative exam radiation dose to be placed in the context of exam type, anatomy, and clinical indication. As the databases expand in size and scope, the analysis of “exam comparability” can be expanded to include additional variables deemed relevant, including patient attributes (e.g., age, body habitus), institutional profile (e.g., size, type, and location of service provider), and technology profile (hardware and software used for image acquisition and processing). The general idea is to provide a data-driven methodology for comparative radiation dose analysis based upon exam, patient, and provider similarities. It would be illogical to compare radiation dose data from a small rural hospital provider with that of a large urban academic provider, for the resources of these two institutions are far different. It would be equally unwise to compare radiation dose data from different patient populations (e.g., age, body habitus), given that these patient groups differ in radiation sensitivity and in the magnitude of dose for a given exam type.

Table 3 Variables used in determination of exam comparability (creation of the peer reference group)
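
One plausible way to operationalize this peer-group comparison is to score each exam by its percentile rank within the comparable-exam pool, as sketched below; the percentile cut points are illustrative only and do not reproduce the actual bands of the grading system in Table 2.

```python
def dose_score(exam_dose: float, peer_doses: list) -> int:
    """Map an exam's dose metric (e.g., CTDIvol) to a 1-5 score by
    percentile rank within its peer group of comparable exams.
    Cut points are illustrative, not those of Table 2."""
    rank = sum(d <= exam_dose for d in peer_doses) / len(peer_doses)
    if rank <= 0.10:
        return 1  # well below peer doses (most favorable)
    if rank <= 0.25:
        return 2
    if rank <= 0.75:
        return 3  # commensurate with peer doses
    if rank <= 0.90:
        return 4
    return 5      # well above peer doses (least favorable)

# Example: a 12 mGy exam against a hypothetical peer pool
peers = [8.0, 9.5, 10.0, 10.5, 11.0, 11.5, 12.5, 13.0, 15.0, 18.0]
print(dose_score(12.0, peers))  # 3 - commensurate with peers
```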

The end goal is fairly simple and straightforward: provide a standardized scoring system which compares apples to apples (i.e., comparative integrity), classifies the data as an easy-to-understand numerical value (similar to BI-RADS), and provides a simple means by which these two independent measures (i.e., radiation dose and image quality) can be combined into a single measure of quality and safety.

The Quality-Safety Index

After standardizing the measurements for radiation dose (i.e., safety) and image quality relative to comparable exams, the next step is to determine the relationship of quality and safety for that specific exam. One could easily create an exam-specific “Quality-Safety Index” (QSI) by combining these two individual measures into a single value, with the image quality value serving as the numerator and the safety value (i.e., radiation dose) serving as the denominator. Using the standardized scoring systems described in Tables 1 and 2, one can derive a Quality-Safety Index ranging in value from 5 (i.e., the highest level of combined quality and safety) to 0.2 (i.e., the lowest level of combined quality and safety). A Quality-Safety Index score of 1 represents an intermediate value and would serve as the reference point at which combined quality and safety scores are commensurate with the large pool of comparable exams. One could still retain the ability to view each measure independently, which would provide a point of reference as to the individual quality and safety scores achieved relative to peer exams. The Quality-Safety Index score would, however, create a unique way of viewing quality and safety in tandem, which is of particular importance in everyday practice, where an imaging service provider may intentionally strive to maximize one value over the other, based upon clinical priorities.
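
Rendered as code, the index is a direct translation of the ratio just described; the sketch below uses the 1-5 standardized scores from Tables 1 and 2, with example values drawn from the scenarios discussed in the following paragraphs.

```python
def quality_safety_index(quality_score: int, dose_score: int) -> float:
    """Quality-Safety Index (QSI): standardized image quality score
    (1-5, numerator) divided by standardized radiation dose score
    (1-5, denominator). Ranges from 5.0 (best combined performance)
    to 0.2 (worst); 1.0 marks performance commensurate with peers."""
    if not (1 <= quality_score <= 5 and 1 <= dose_score <= 5):
        raise ValueError("scores must fall on the 1-5 Likert scale")
    return quality_score / dose_score

print(quality_safety_index(3, 3))  # 1.0 - reference point
print(quality_safety_index(3, 2))  # 1.5 - low-dose protocol, quality preserved
print(quality_safety_index(2, 1))  # 2.0 - aggressive reduction, limited quality
```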

In order to illustrate how the Quality-Safety Index would work in everyday practice and be used to optimize the balance between radiation dose and image quality, we will take the example of chest CT angiography in a 60-year-old male patient with acute onset of chest pain. In conventional practice, a standard protocol would be used with minimal attention paid to radiation dose reduction and a primary focus on image quality, with particular attention to contrast bolus timing. As a result, the completed exam radiation dose would in all likelihood fall into a range commensurate with the larger pool of patients undergoing chest CT angiography, which equates to a radiation score of 3 using the standardized scoring criteria listed in Table 2. Image quality would be dependent upon a wide array of factors including (but not limited to) contrast bolus timing, volume and rate of contrast administration, patient body habitus, patient compliance (e.g., breath holding, movement), image processing, and acquisition parameters. In the event that image quality was degraded by any of these factors to the point of limiting diagnosis, additional and/or repeat CT imaging might be required, which would increase the radiation dose. Alternatively, diminished image quality may result in diagnostic error, diminished diagnostic confidence, performance of additional imaging exams (e.g., ultrasound, nuclear medicine), or delayed clinical management [22]. For the purposes of discussion, we will assume that for this particular exam, overall image quality was deemed to be “Diagnostic,” which equates to an image quality score of 3 using the standardized scoring system, with a resulting Quality-Safety Index score of 1 (i.e., image quality, 3; radiation dose, 3).

Now if we modify a few of the variables, we can see how this would potentially affect image quality, radiation dose, and the Quality-Safety Index scores. In the first example, we will modify patient attributes, which could include body habitus, compliance, and prior medical/imaging history. If the patient was morbidly obese, the resulting radiation dose would be increased, which could change the radiation dose score from a 3 to a 4 or 5. In doing so (and assuming no change in image quality), the derived Quality-Safety Index score would change from 1 to 0.75 or 0.60. This illustrates the importance of factoring patient body habitus into the analysis, in order to accurately compare “apples to apples” and not mistakenly combine patients of different size into one all-inclusive radiation dose analysis. On a similar note, if we were to take into account the patient’s cumulative radiation/medical history, we could determine the relative degree of radiation risk, which in this case could be affected by the fact that the patient had received prior radiation therapy in the treatment of lung cancer. Due to this increased risk, a more aggressive approach to radiation dose reduction is imperative for this patient. As a result, the “standard” CT angiography protocol may be replaced by a “low-dose” protocol, which may reduce radiation dose by 50% and convert the radiation dose score to a 2. If image quality was maintained as “Diagnostic” (e.g., through the use of low-dose filters and iterative reconstruction), the resulting Quality-Safety Index score would be 1.5 (quality, 3; radiation, 2). One may even wish to be more aggressive in radiation dose reduction, thereby reducing the safety (i.e., radiation) score to 1, which in turn may reduce the image quality score to 2 (i.e., “Limited”) and yield a resulting Quality-Safety Index score of 2.0.

Since it is often difficult to prospectively determine how these protocol modifications may affect image quality, a number of tools are available to assist in the evaluation, with the hope of maximizing the balance between quality and safety. One application which has been described is a noise simulation tool for CT, which allows one to incrementally add noise to an image (equating to defined levels of CT dose reduction) in order to directly visualize the interaction effects between noise and image quality [25–27]. This simulation tool could be applied to historical images of the patient (when a prior comparable exam has been performed) or to a contemporary scout image. The technologist or radiologist using this application could in theory modify the image to achieve an ideal inflection point, at which the lowest “acceptable” image quality is achieved at the lowest radiation dose. To apply this to the clinical context, the historical or scout images can be selectively chosen to reflect the anatomy of highest clinical priority.
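
The cited simulation tools operate on raw projection data; the sketch below is only a first-order, image-domain approximation of the same idea, assuming quantum noise variance scales inversely with dose. The function name and parameters are hypothetical.

```python
import numpy as np

def simulate_dose_reduction(image: np.ndarray, dose_fraction: float,
                            baseline_noise_sd: float) -> np.ndarray:
    """Approximate the appearance of a CT image at a reduced dose.

    If quantum noise variance scales inversely with dose, the noise
    to be added has standard deviation sd_base * sqrt(1/f - 1),
    where f is the dose fraction (0 < f <= 1). Clinical noise
    insertion tools work in the projection domain; this image-domain
    shortcut is for illustration only.
    """
    extra_sd = baseline_noise_sd * np.sqrt(1.0 / dose_fraction - 1.0)
    rng = np.random.default_rng(seed=0)  # fixed seed for reproducibility
    return image + rng.normal(0.0, extra_sd, size=image.shape)

# Preview a prior or scout image at 50% dose (hypothetical inputs):
# preview = simulate_dose_reduction(ct_slice, dose_fraction=0.5,
#                                   baseline_noise_sd=10.0)
```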

An alternative strategy would be to prospectively mine the Quality-Safety Index database to identify “comparable exams” (i.e., similar patient profiles, clinical context, and technology in use) which fulfill the desired search criteria. If, for example, the operator desires to maximize radiation dose reduction while achieving a minimum image quality score of 2, they could input the search criteria and identify exams in the database which fulfill the requirements, along with the protocol parameters used. This would in theory leverage the cumulative experience of large numbers of comparable exams and provide real-time decision support and educational benefit to the operators. If linked images were available to review, the operator could directly visualize the image quality of these exams, thereby providing added value to the data mining process.
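
A minimal sketch of such a database query follows; the record structure and field names are hypothetical, and matching on patient profile, clinical context, and technology is omitted for brevity.

```python
def find_comparable_exams(database, exam_type, min_quality=2):
    """Mine the QSI database for comparable exams meeting a minimum
    image quality score, ordered from lowest to highest dose score,
    so the operator can review the protocols that achieved them."""
    matches = [e for e in database
               if e["exam_type"] == exam_type
               and e["quality_score"] >= min_quality]
    return sorted(matches, key=lambda e: e["dose_score"])

# Hypothetical records; a real database would also carry patient
# attributes, scanner model, and links to the images themselves.
qsi_db = [
    {"exam_type": "chest_ct_angiography", "quality_score": 3,
     "dose_score": 2, "protocol": "100 kVp, iterative reconstruction"},
    {"exam_type": "chest_ct_angiography", "quality_score": 2,
     "dose_score": 1, "protocol": "80 kVp, high pitch"},
    {"exam_type": "abdomen_ct", "quality_score": 4,
     "dose_score": 3, "protocol": "routine"},
]
for exam in find_comparable_exams(qsi_db, "chest_ct_angiography"):
    print(exam["dose_score"], exam["protocol"])
```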

In the next example, the patient has an imaging folder containing a recent comparable exam (i.e., chest CT angiography), which provides a reference for baseline anatomy and pathology. By leveraging preexisting image data, the operator can target a limited region of interest on the current study and more aggressively reduce the radiation dose. In this situation, comprehensive image quality takes on lesser importance than when no recent comparable exam is present, which would necessitate a more comprehensive and higher-quality exam. These same concepts can be applied to the myriad of patient and exam types being performed. As radiation dose reduction becomes more critical (e.g., pediatric imaging), the utility of this approach would in theory increase. The important take-home point is that both safety and quality have great importance, but the dynamic nature of these variables and of the patients in whom they are being evaluated requires an alternative to conventional practice, in which quality and safety are largely viewed in isolation from one another.

Quality Assurance

The Quality-Safety Index database could be used for a number of quality improvement applications including staff education/training, performance evaluation, technology procurement, service provider selection, real-time protocol optimization, identification of quality assurance (QA) outliers, and establishment of best practice guidelines.

From a QA perspective, the database can be used to define acceptable thresholds for individual image quality, radiation dose, or combined Quality-Safety Index scores. In the event that any of these predefined thresholds is exceeded (i.e., performance falls below a defined threshold), an automated QA alert can be sent to the designated parties (e.g., radiology administrator, supervisory technologist, department chief), alerting them to the performance deficiency so that immediate action can be taken. The safety feature of this application could even be programmed at the modality level at the time the protocol parameters are instituted: in the event that the estimated radiation dose measurements exceed the predefined threshold for the specific exam type and patient profile, an alert would be sent to the technologist with a recommendation for protocol modification.
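
A schematic of such an automated threshold check might look as follows; the threshold values, field names, and notification hook are all illustrative.

```python
def check_qa_thresholds(exam, min_quality=2, max_dose=4, min_qsi=0.75,
                        notify=print):
    """Flag exams breaching predefined QA thresholds and notify the
    designated parties; `notify` stands in for an e-mail/paging hook."""
    qsi = exam["quality_score"] / exam["dose_score"]
    alerts = []
    if exam["quality_score"] < min_quality:
        alerts.append(f"image quality {exam['quality_score']} below threshold")
    if exam["dose_score"] > max_dose:
        alerts.append(f"dose score {exam['dose_score']} above threshold")
    if qsi < min_qsi:
        alerts.append(f"QSI {qsi:.2f} below threshold")
    for message in alerts:
        notify(f"QA alert ({exam['exam_id']}): {message}")
    return alerts

# A high-dose exam triggers both a dose alert and a QSI alert
check_qa_thresholds({"exam_id": "CT-001", "quality_score": 3, "dose_score": 5})
```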

Longitudinal analyses of the data can also be performed for trending purposes, in order to demonstrate temporal changes in quality and safety relating to a specific exam type, operator, or technology. These trending analyses can be used for targeted education and training, technology assessment, and protocol modification. Additional QSI analytics and decision support tools will be discussed in detail in a companion article [28].
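
As a minimal example of such a trending analysis, the sketch below computes a rolling mean over chronologically ordered QSI scores; the window size and data are illustrative.

```python
from statistics import mean

def qsi_trend(scores, window=5):
    """Rolling mean of chronologically ordered QSI scores for one
    exam type, operator, or scanner; a sustained downward drift may
    signal dose creep or degrading image quality."""
    return [mean(scores[i - window:i]) for i in range(window, len(scores) + 1)]

# Hypothetical monthly QSI averages for one CT protocol
monthly_qsi = [1.0, 1.1, 0.9, 1.0, 0.8, 0.8, 0.7, 0.75]
print(qsi_trend(monthly_qsi))  # drift below 1.0 flags a developing problem
```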

Conclusion

The concept of combining standardized quality and safety measures into a single numerical measurement (i.e., the Quality-Safety Index) is of particular relevance in the current practice environment, where quality requirements are often reduced to the lowest acceptable measure in order to maximize patient safety. These standardized image quality and radiation data can in turn be used to populate referenceable databases, which support meta-analysis, real-time decision support, performance and QA analytics, and the establishment of best practice guidelines.

In addition to its application to radiation dose and image quality assessment, the Quality-Safety Index concept can also be applied to other medical imaging applications, including contrast administration and interventional procedures. The net result is the creation of a standardized tool for co-analyzing quality and safety in medical imaging practice while accounting for individual patient, service provider, and exam/procedural attributes. The radiology community should view this concept as an opportunity to enhance clinical outcomes and serve as a proactive leader in patient safety and quality initiatives. The ability to objectively validate quality and safety deliverables in service delivery can serve as a means to combat existing commoditization trends and competition from non-radiologist providers.