Journal of Digital Imaging, Volume 22, Issue 1, pp 89–98

Digital Radiography Reject Analysis: Data Collection Methodology, Results, and Recommendations from an In-depth Investigation at Two Hospitals

Authors

    • David H. Foos, Clinical Applications Research Laboratory, Carestream Health Inc.
    • W. James Sehnert, Clinical Applications Research Laboratory, Carestream Health Inc.
    • Bruce Reiner, Department of Radiology, Maryland VA Healthcare System
    • Eliot L. Siegel, Department of Radiology, Maryland VA Healthcare System
    • Arthur Segal, Department of Radiology, Rochester General Hospital
    • David L. Waldman, Department of Imaging Sciences, University of Rochester Medical Center
Article

DOI: 10.1007/s10278-008-9112-5

Cite this article as:
Foos, D.H., Sehnert, W.J., Reiner, B. et al. J Digit Imaging (2009) 22: 89. doi:10.1007/s10278-008-9112-5

Abstract

Reject analysis was performed on 288,000 computed radiography (CR) image records collected from a university hospital (UH) and a large community hospital (CH). Each record contains image information, such as body part and view position, exposure level, technologist identifier, and—if the image was rejected—the reason for rejection. Extensive database filtering was required to ensure the integrity of the reject-rate calculations. The reject rate for CR across all departments and across all exam types was 4.4% at UH and 4.9% at CH. The most frequently occurring exam types with reject rates of 8% or greater were found to be common to both institutions (skull/facial bones, shoulder, hip, spines, in-department chest, pelvis). Positioning errors and anatomy cutoff were the most frequently occurring reasons for rejection, accounting for 45% of rejects at CH and 56% at UH. Improper exposure was the next most frequently occurring reject reason (14% of rejects at CH and 13% at UH), followed by patient motion (11% of rejects at CH and 7% at UH). Chest exams were the most frequently performed exam at both institutions (26% at UH and 45% at CH) with half captured in-department and half captured using portable x-ray equipment. A ninefold greater reject rate was found for in-department (9%) versus portable chest exams (1%). Problems identified with the integrity of the data used for reject analysis can be mitigated in the future by objectifying quality assurance (QA) procedures and by standardizing the nomenclature and definitions for QA deficiencies.

Key words

Reject analysis, quality assurance, digital radiography

Introduction

Digital radiography systems are in use throughout the medical imaging community and now represent the standard of care at many hospitals and imaging centers. To date, however, there is little reported in the technical literature on the quality performance, as measured in terms of reject rates, associated with the clinical use of these systems. The term reject refers to a radiograph of a patient that the acquiring technologist judges to be clinically unacceptable and in need of being repeated. Nonpatient captures, such as images that are used for quality control (QC) purposes, are also categorized as rejects.

The data required to calculate reject rates for digital systems have historically been difficult to obtain.1 This problem has been further compounded by the lack of the software infrastructure necessary to centrally compile data for radiology departments that have multiple digital-capture devices.2 Quality assurance (QA) tools such as digital dashboards and device clustering software platforms are now available from some manufacturers (Carestream Health at http://www.carestreamhealth.com/). These software tools facilitate access to the objective data necessary to analyze and report on reject statistics and digital radiography equipment-utilization performance across an entire institution.

We describe the methodology used to compile a comprehensive database consisting of more than 288,000 computed radiography (CR) patient image records from two hospitals having all-digital radiology departments, and we report on the results of the reject analysis performed on that database.

Materials and Methods

A reject-tracking tool was activated on 16 Kodak DirectView CR Systems (Rochester, NY, USA) at a university hospital (UH) and on 4 Kodak DirectView CR Systems at a large community hospital (CH) (Carestream Health at http://www.carestreamhealth.com/). These 20 devices represented all of the CR systems within the 2 hospitals. With the reject-tracking software enabled, technologists were required to enter a reason for rejection for any rejected image before the CR system would allow another image to be scanned. This ensured that every captured CR image, whether accepted or rejected, was accounted for in the database and in the subsequent reject analysis. Table 1 shows the reject-reason terminology that was used in the reject-tracking tool at each hospital. The reasons for rejection are configurable within the reject-tracking software, and before the start of this investigation, each site had preestablished its own list of reasons.
Table 1

Reasons for Rejection for UH and CH

CH                      UH
Clipped anatomy         Clipped anatomy
Positioning error       Positioning error
Patient motion          Patient motion
Low index               Underexposure
Test                    Blank cassette
Artifact                Artifact
Other reason            Other reason
Clipped marker
High index              Overexposure
Incorrect marker
Low exposure index      Low exposure index
No reason               No reason
High exposure index     Equipment failure
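
The gating behavior described above, in which a rejected image blocks further scanning until a reason is recorded, can be illustrated with a brief Python sketch. The function and the example reason set are illustrative assumptions, not the actual CR console software; each site would substitute its own configured list from Table 1.

from typing import Optional

# Illustrative subset of a site-configurable reject-reason list (see Table 1).
CONFIGURED_REASONS = {
    "Clipped anatomy", "Positioning error", "Patient motion",
    "Artifact", "Other reason", "No reason",
}

def may_scan_next_image(last_image_rejected: bool, entered_reason: Optional[str]) -> bool:
    """The console allows the next scan only after a rejected image has a recorded reason."""
    if not last_image_rejected:
        return True                                 # nothing pending; scanning may continue
    return entered_reason in CONFIGURED_REASONS     # a configured reason must be entered first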

A research workstation was integrated into the picture archiving and communication systems (PACS) network at each hospital for the purpose of providing a centralized remote query and retrieval mechanism for the image records stored within the CR systems. Each workstation consisted of a computer (Precision 370, DELL Computer, Round Rock, TX, USA) equipped with a 19-in. color LCD monitor (VA902b, ViewSonic, Walnut, CA, USA), a 3-MP high-resolution diagnostic display (AXIS III, National Display Systems, San Jose, CA, USA), and a 250-GB portable hard drive (WD2500B011, Western Digital, Lake Forest, CA, USA). Customized software (not commercially available) was loaded onto the research workstations, which allowed image records to be remotely and automatically downloaded from each of the CR systems. An image record was composed of image-centric information including the CR device identifier (ID), body part, view position, technologist ID, exposure information, and—if the image was rejected—the reason for rejection. The image record also contained the unprocessed image for all rejects and for many of the accepted exams. If the image record contained the unprocessed image, the diagnostic rendering state was also captured so that the image processing could be reproduced according to the hospital preferences. Protected health information was scrubbed from each record so that no record was traceable to a patient.
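
As an aid to the reader, a minimal Python sketch of the kind of image record described above is shown below; the field names and types are illustrative assumptions rather than the actual schema used by the CR systems or the research workstation software.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ImageRecord:
    """Illustrative, de-identified CR image record (field names are hypothetical)."""
    device_id: str                          # CR device identifier
    body_part: str                          # e.g., "Chest", "Knee"
    view_position: str                      # e.g., "AP", "PA", "Lateral"
    technologist_id: str                    # technologist identifier
    exposure_index: Optional[float]         # vendor-specific exposure information (EI)
    accepted: bool                          # False if the technologist rejected the image
    reject_reason: Optional[str] = None     # populated only for rejected images
    raw_image_path: Optional[str] = None    # unprocessed image, when retained
    rendering_state: Optional[dict] = None  # diagnostic rendering parameters, when retained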

Image records were collected from all four CR systems at CH for a period of 435 consecutive days. Image records were collected from all 16 CR systems at UH for a period of 275 consecutive days. The database was populated with both accepted and rejected records. For 6,000 of the clinically rejected records, image pairs were created that consisted of the clinically rejected image, i.e., one not suitable for diagnosis, along with one subsequently repeated image of acceptable diagnostic quality. The data from each CR system was then compiled into a common database containing more than 288,000 image records. The data collection protocol was approved by each hospital’s institutional review board.

The reject portion of the database initially included records for both clinical images and nonpatient images. Records corresponding to nonpatient images, such as phosphor plate erasures and test phantom exposures, were filtered from the reject database using a combination of computer-based image analysis and visual inspection. The filtering process reduced the initial size of the CH portion of the reject database by 38% and the UH portion of the reject database by 25%. The filtered database was then analyzed to determine the frequency distributions of accepted and rejected patient images and to compute the reject rates across different exam types. Reject rates were calculated by dividing the number of rejected images by the sum of the number of rejected and accepted images.
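
As a worked illustration of the filtering and reject-rate calculation described above, the following Python sketch counts accepted and rejected patient images after excluding nonpatient captures. The exclusion labels mirror those discussed later in the paper ("Pattern" phantom exposures and "Test" rejects); the record fields follow the hypothetical structure sketched earlier.

NONPATIENT_BODY_PARTS = {"Pattern"}   # e.g., test phantom exposures
NONPATIENT_REASONS = {"Test"}         # QC captures labeled "Test"

def reject_rate_percent(records) -> float:
    """Reject rate (%) = rejects / (rejects + accepts), over patient images only."""
    accepted = rejected = 0
    for r in records:
        # Exclude nonpatient captures (phantom tests, plate erasures) before counting.
        if r.body_part in NONPATIENT_BODY_PARTS or r.reject_reason in NONPATIENT_REASONS:
            continue
        if r.accepted:
            accepted += 1
        else:
            rejected += 1
    total = accepted + rejected
    return 100.0 * rejected / total if total else 0.0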

Results

A summary breakdown for each hospital of the exam-type distribution of accepted and rejected images and the corresponding reject rates is shown in Table 2. The analysis revealed that the reject rate for CR patient images across all departments and across all exam types was 4.4% at UH and 4.9% at CH. The most frequently occurring exam types with reject rates of 8% or greater were common to both institutions (skull/facial bones, shoulder, hip, spines, in-department chest, pelvis). The reject rates for in-department and portable chest exams differed dramatically at both sites, with a ninefold greater reject rate for in-department chest (9%) than for portable chest (1%).
Table 2

Database Summary from UH and CH

Exam type              Accepts    Rejects    Reject Rate (%)

UH
Portable chest          21,040        208        1.0
Chest                   18,523      1,834        9.0
Knee                    15,165        695        4.4
Foot                    10,371        142        1.4
Ankle                    9,699        188        1.9
Abdomen                  8,515        355        4.0
Pediatric chest          7,114         84        1.2
Shoulder                 6,373        779       10.9
Wrist                    5,595        143        2.5
Hand                     4,933         56        1.1
Hip                      4,760        505        9.6
Tibia fibula             4,596         65        1.4
LSpine                   4,480        392        8.0
CSpine                   4,259        398        8.5
Pediatric abdomen        4,252         51        1.2
Barium study             4,178        106        2.5
Pelvis                   3,699        322        8.0
Elbow                    3,414        156        4.4
Femur                    2,652        101        3.7
Extremity                2,022         24        1.2
Forearm                  1,936         55        2.8
TSpine                   1,148        121        9.5
Humerus                  1,038         41        3.8
Trauma series              541         23        4.1
FAS                        533         22        4.0
Spine                      532         49        8.4
Thorax                     359         22        5.8
Skull                      308         19        5.8
Facial bones               124         32       20.5
Bone survey                 58          0        0.0
Joint                       29          4       12.1
Cranium                     13          1        7.1
Nasal bones                 13          1        7.1
General abdomen              8          1       11.1
Other                        6          1       14.3
IVP                          5          1       16.7
Long bone                    5          1       16.7
Custom 1                     1          0        0.0
Total                  152,297      6,998        4.4

CH
Portable chest          28,777        273        0.9
Chest                   26,856      2,578        8.8
Abdomen                 11,424        448        3.8
Knee                     7,826        353        4.3
Ankle                    5,736         89        1.5
Foot                     4,744        121        2.5
Wrist                    4,360         71        1.6
LSpine                   4,248        404        8.7
Hand                     4,049         71        1.7
Shoulder                 3,453        352        9.3
CSpine                   3,302        394       10.7
Hip                      2,799        297        9.6
Tibia fibula             2,548         27        1.0
Elbow                    2,357         50        2.1
Fingers toes             2,080         47        2.2
Forearm                  1,613         42        2.5
Femur                    1,195         79        6.2
TSpine                   1,087        124       10.2
Ribs sternum               952        107       10.1
Pelvis                     940        142       13.1
Humerus                    701         40        5.4
Facial bones               471         64       12.0
Skull                      369         72       16.3
Pediatric chest            368         31        7.8
Pediatric abdomen           73          1        1.4
Nasal bones                 64          8       11.1
Spine                       54          4        6.9
Scoliosis                   41          2        4.7
Cranium                     25          2        7.4
LLI                         22          5       18.5
Joint                        6          0        0.0
General abdomen              3          1       25.0
Pattern                      0          1      100.0
Total                  122,543      6,300        4.9

Table 3 shows a detailed breakdown of the frequency of occurrence of rejected exams by body part and view position for CH. Table 4 shows the equivalent breakdown for UH. For Tables 3 and 4, the rows are sorted from top to bottom by most-to-least frequently occurring body part type for the combined total of accepted and rejected exams. Similarly, the columns are sorted from left to right by most frequently occurring to least frequently occurring view position for the combined total of accepted and rejected exams. Tables 5 and 6 were sorted in the same manner as Tables 3 and 4 and show the distribution of reject rates for each body part and view position for CH (Table 5) and UH (Table 6).
Table 3

Body Part and View Position Distribution of Rejected Patient Exams Collected from Four CR Systems Over a 435-day Period at a Large CH

CH

AP

Lateral

PA

Other

XTable

LPO

RPO

LLDecub

RAO

LAO

RLDecub

LL

RL

Total

Portable chest

269

2

1

1

         

273

Chest

1,216

764

572

3

1

  

11

 

1

10

  

2,578

Abdomen

298

21

10

4

5

16

4

74

6

1

9

  

448

Knee

119

133

 

65

16

8

9

 

2

1

   

353

Ankle

44

27

 

6

4

2

4

 

1

1

   

89

Foot

45

42

3

19

1

5

3

 

2

1

   

121

Wrist

6

35

12

7

7

1

2

 

1

    

71

LSpine

84

226

3

18

23

24

23

 

1

  

2

 

404

Hand

8

23

25

5

 

1

4

 

3

2

   

71

Shoulder

123

183

2

33

3

2

6

      

352

Cspine

120

109

 

70

41

25

24

 

2

3

   

394

Hip

145

85

 

1

65

1

       

297

Tibia fibula

8

13

 

2

3

1

       

27

Elbow

19

20

 

8

2

 

1

      

50

Fingers toes

10

10

16

5

 

1

1

 

3

1

   

47

Forearm

12

21

1

2

6

        

42

Femur

38

35

  

6

        

79

TSpine

26

84

 

10

4

        

124

Ribs Sternum

44

15

1

5

 

22

18

 

2

    

107

Pelvis

119

10

1

6

 

5

1

      

142

Humerus

19

18

  

3

        

40

Facial bones

36

5

6

12

 

1

4

      

64

Skull

41

9

14

5

1

1

      

1

72

Pediatric chest

6

12

11

1

1

        

31

Pediatric abdomen

1

            

1

Nasal bones

1

4

1

2

         

8

Spine

2

2

           

4

Scoliosis

1

1

           

2

Cranium

2

            

2

LLI

4

          

1

 

5

Joint

              

General abdomen

       

1

     

1

Pattern

1

            

1

Total

2,867

1,909

679

290

192

116

104

86

23

11

19

3

1

6,300

Table 4

Body Part and View Position Distribution of Rejected Patient Exams Collected from 16 CR Systems over a 275-day Period at a UH

UH

AP

Lateral

PA

XTable

Other

LPO

RPO

LLD

RAO

RL

LAO

RLD

LL

Total

Portable chest

206

 

1

        

1

 

208

Chest

1,138

464

218

1

 

1

 

4

1

 

2

5

 

1,834

Knee

160

113

55

254

113

        

695

Foot

69

29

4

24

16

        

142

Ankle

102

42

1

34

4

 

3

 

2

    

188

Abdomen

241

5

2

4

1

30

18

53

 

1

   

355

Pediatric chest

58

12

9

1

   

1

   

3

 

84

Shoulder

152

117

22

350

128

7

3

      

779

Wrist

7

75

18

34

8

     

1

  

143

Hand

11

17

25

 

2

 

1

      

56

Hip

92

46

 

366

1

        

505

Tibia fibula

24

13

 

28

         

65

LSpine

154

176

 

19

 

23

19

     

1

392

CSpine

220

95

 

27

13

23

20

      

398

Pediatric abdomen

30

 

5

1

   

14

  

1

  

51

Barium study

22

2

27

  

20

3

2

12

15

2

1

 

106

Pelvis

277

7

 

13

1

12

12

      

322

Elbow

55

65

6

25

5

        

156

Femur

38

25

 

38

         

101

Extremity

7

9

7

 

1

        

24

Forearm

8

26

6

14

1

        

55

Tspine

44

53

 

19

5

        

121

Humerus

19

19

 

2

 

1

       

41

Trauma series

23

            

23

FAS

15

 

1

    

6

     

22

Spine

18

26

  

1

1

3

      

49

Thorax

4

4

1

1

2

1

  

7

 

2

  

22

Skull

8

5

3

3

         

19

Facial bones

23

1

4

 

3

1

       

32

Bone survey

             

0

Joint

     

1

2

 

1

    

4

Cranium

  

1

          

1

Nasal bones

 

1

           

1

General abdomen

1

            

1

Other

1

            

1

IVP

1

            

1

Long bone

1

            

1

Custom 1

              

Total

3,229

1,447

416

1,258

305

121

84

80

23

16

8

10

1

6,998

Table 5

Body Part and View Position Distribution of Reject Rates for Patient Images Collected from Four CR Systems over a 435-day Period at a Large CH

CH

AP (%)

Lateral (%)

PA (%)

Other (%)

XTable (%)

LPO (%)

RPO (%)

LLDecub (%)

RAO (%)

LAO (%)

RLDecub (%)

LL (%)

RL (%)

Combined (%)

Portable chest

1

18

1

33

0

0

0

0

0.9

Chest

11

8

7

25

17

0

0

7

0

20

17

0

8.8

Abdomen

3

6

2

8

36

8

4

6

3

6

6

0

0

3.8

Knee

3

7

0

4

2

3

4

4

2

4.3

Ankle

2

1

0

1

3

1

2

2

3

1.5

Foot

2

3

1

4

2

3

2

6

3

2.5

Wrist

1

3

1

1

12

1

2

1

0

1.6

LSpine

7

10

4

13

10

8

8

0

4

0

29

8.7

Hand

2

2

2

1

0

1

4

3

4

1.7

Shoulder

6

16

40

9

15

3

6

0

0

0

9.3

CSpine

9

11

0

18

12

8

8

33

23

0

10.7

Hip

8

9

0

4

22

50

0

9.6

Tibia fibula

1

1

1

2

2

0

0

0

1.0

Elbow

2

3

0

3

11

0

2

0

0

2.1

Fingers toes

2

2

3

2

0

2

2

5

2

2.2

Forearm

2

3

1

2

11

0

0

0

0

2.5

Femur

5

7

0

0

6

0

6.2

TSpine

7

11

31

19

0

10.2

Ribs sternum

6

36

6

22

0

16

14

18

0

10.1

Pelvis

13

10

11

27

29

8

0

0

13.1

Humerus

5

6

0

0

75

0

0

5.4

Facial bones

14

7

5

30

0

13

31

0

0

0

12.0

Skull

18

8

20

28

25

50

0

0

0

33

16.3

Pediatric chest

3

10

13

33

0

7.8

Pediatric abdomen

2

0

0

0

0

1.4

Nasal bones

14

11

5

50

0

0

0

11.1

Spine

11

9

0

0

0

0

0

0

0

6.9

Scoliosis

3

13

4.7

Cranium

12

0

0

0

0

0

7.4

LLI

15

18.5

Joint

0

0

0

0.0

General abdomen

0

25.0

Pattern

Combined

4.0

6.8

5.0

5.0

8.7

5.8

5.2

6.1

3.3

2.7

8.4

10.0

12.5

4.9

Table 6

Body Part and View Position Distribution of Reject Rates for Patient Images Collected from 16 CR Systems over a 275-day Period at a UH

UH

AP (%)

Lateral (%)

PA (%)

XTable (%)

Other (%)

LPO (%)

RPO (%)

LLD (%)

RAO (%)

RL (%)

LAO (%)

RLD (%)

LL (%)

Combined (%)

Portable chest

1

0

6

    

0

   

33

 

1.0

Chest

16

7

4

9

0

8

0

13

11

0

29

23

0

9.0

Knee

3

6

5

6

3

0

0

     

0

4.4

Foot

1

1

4

6

8

0

0

 

0

 

0

  

1.4

Ankle

2

1

17

4

3

0

8

 

40

 

0

  

1.9

Abdomen

3

9

6

7

50

5

4

10

0

33

0

0

0

4.0

Pediatric chest

1

5

3

2

   

1

 

0

 

7

 

1.2

Shoulder

5

19

14

13

17

6

4

 

0

    

10.9

Wrist

2

3

1

12

6

0

0

 

0

 

33

 

0

2.5

Hand

5

1

1

0

4

0

7

 

0

 

0

 

0

1.1

Hip

7

10

0

11

11

0

0

      

9.6

Tibia fibula

1

1

0

3

0

       

0

1.4

LSpine

8

8

0

31

0

7

6

0

    

25

8.0

CSpine

11

6

 

4

19

9

8

 

0

 

0

 

0

8.5

Pediatric abdomen

1

0

4

2

 

0

0

4

0

0

50

0

 

1.2

Barium study

2

2

3

  

5

2

4

2

4

3

2

0

2.5

Pelvis

7

9

0

19

13

14

14

   

0

 

0

8.0

Elbow

4

4

6

11

4

0

0

 

0

    

4.4

Femur

3

5

0

4

0

        

3.7

Extremity

3

1

1

0

3

   

0

    

1.2

Forearm

1

3

2

6

3

        

2.8

TSpine

7

9

 

30

14

0

0

      

9.5

Humerus

3

5

0

15

0

25

0

      

3.8

Trauma series

4

  

0

         

4.1

FAS

3

0

2

0

   

9

   

0

 

4.0

Spine

10

7

 

0

 

17

38

      

8.4

Thorax

5

24

2

25

 

7

0

 

6

 

2

  

5.8

Skull

7

6

14

3

0

0

0

  

0

   

5.8

Facial bones

26

5

17

0

33

 

0

     

0

20.5

Bone survey

0

0

0

   

0

      

0.0

Joint

0

 

0

 

0

13

25

 

25

 

0

  

12.1

Cranium

 

0

9

 

0

        

7.1

Nasal bones

 

13

0

0

0

        

7.1

General abdomen

11

            

11.1

Other

 

0

 

0

0

        

14.3

IVP

17

            

16.7

Long bone

17

            

16.7

Custom 1

 

0

           

0.0

Combined

3.8

4.5

2.7

8.2

5.4

6.4

5.7

6.9

3.4

3.7

4.0

6.9

1.5

4.4

The combination of positioning errors and anatomy cutoff was the most frequently occurring reason for rejection, accounting for 45% of all rejects at CH and 56% at UH. Improper exposure (either too low or too high) was the next most frequently occurring reject reason (14% of rejects at CH and 13% at UH), followed by patient motion (11% at CH and 7% at UH). Smaller percentages of rejects were attributed to artifacts, clipped or missing markers, and/or unspecified other reasons.
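
The category-level percentages reported above can be reproduced by mapping each site-specific reject reason onto a broader category and tallying shares, as in the brief Python sketch below; the mapping shown is an illustrative assumption rather than the exact grouping used in the study.

from collections import Counter

# Illustrative mapping from site-specific reject reasons to broader categories.
CATEGORY = {
    "Positioning error": "positioning/anatomy cutoff",
    "Clipped anatomy": "positioning/anatomy cutoff",
    "Underexposure": "improper exposure",
    "Overexposure": "improper exposure",
    "Low index": "improper exposure",
    "High index": "improper exposure",
    "Low exposure index": "improper exposure",
    "High exposure index": "improper exposure",
    "Patient motion": "patient motion",
}

def reject_reason_shares(reject_reasons) -> dict:
    """Return each category's share (percent) of all rejected patient images."""
    counts = Counter(CATEGORY.get(reason, "other") for reason in reject_reasons)
    total = sum(counts.values())
    return {category: 100.0 * n / total for category, n in counts.items()}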

Chest exams (including in-department and portable chest exams) were the single most frequently performed CR procedure at both institutions (26% at UH and 45% at CH). Whereas both institutions also have dedicated digital radiography (DR) rooms, the number of DR rooms at UH is greater, which partially explains the relatively lower overall percentage of CR chest exams there. A further factor influencing this difference is the inclusion of five CR systems from within the orthopedic clinic at UH, a high-volume facility accounting for very few chest exams. At both institutions, approximately half of all CR chest exams were captured in-department and half were captured using portable x-ray equipment. It should be noted that when the body part was designated as chest, it was interpreted that the image was captured within the radiology department with the patient in the erect position. We have a high level of confidence that this interpretation was accurate for all images labeled with either the posteroanterior (PA) or lateral view positions but suspect that it may be inaccurate for some images labeled as chest anteroposterior (AP). In the nominal workflow, the technologist specifies the body part and view position information for each image before it is scanned. This is done for purposes of indexing the appropriate image-processing algorithm parameters and for populating the Digital Imaging and Communications in Medicine (DICOM) image header (http://medical.nema.org). The CR systems were all configured so that if the technologist did not specify the body part and view position, the system would default to chest AP. A visual inspection of a large sampling of accepted chest AP images revealed that some of these appeared to have the characteristics of a portable chest and should, more appropriately, have been assigned portable chest. However, there was no practical way to retrospectively confirm the exam type. The percentage of rejected chest AP images having portable chest x-ray imaging characteristics is considerable, suggesting that the acquiring technologist may not have overridden the default body part designation before rejecting the image. Furthermore, a small percentage of rejected images labeled chest AP were visually identified as other exam types altogether (nonchest). We suspect that this may have occurred for workflow efficiency purposes, the technologist knowing a priori from the acquisition situation that the image would be rejected, e.g., “the technologist observed that the patient moved during the exposure.” The result of these situations was that a thorough visual inspection of chest AP-rejected images had to be performed—and erroneous data set aside—before calculating reject-rate statistics.
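
One practical consequence of the default exam-type behavior described above is that rejected images carrying the chest AP designation warranted visual review before reject statistics were computed. A minimal sketch of such a screening filter is shown below; the field names follow the hypothetical record structure sketched earlier in Materials and Methods.

DEFAULT_BODY_PART = "Chest"   # system default when the technologist does not specify
DEFAULT_VIEW = "AP"

def needs_visual_review(record) -> bool:
    """Rejected images left at the default chest AP designation may be mislabeled."""
    return (not record.accepted
            and record.body_part == DEFAULT_BODY_PART
            and record.view_position == DEFAULT_VIEW)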

Discussion

Problems were encountered with the integrity and the consistency of the data that was collected from both sites. These problems resulted from a combination of factors, including limitations in commercial CR capture software and hardware infrastructure, lack of standard terminology and associated definitions for QA deficiencies, and inconsistent QA practices.

Extensive filtering of data records had to be performed to eliminate nonpatient images from the analysis, e.g., phosphor plate erasures and test phantom exposures. Test phantom exposures performed for QC procedures were generally labeled with the body part “Pattern” and the reason for rejection “Test,” and thus were easily filtered in the reject analysis. However, no specific protocol was established for labeling CR scans performed for purposes of plate erasure, and these scans flooded the reject database. There was a significant number of rejected images assigned “Test” as the reason for rejection, but they were inappropriately labeled using the default body part of chest AP. Plate erasure images were detected and set aside from further analysis by ad hoc filtering and visual review of rejected images that had abnormally low exposure index (EI) values; the EI is a vendor-specific parameter indicative of the exposure received by the CR plate.
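
The ad hoc filtering step described above can be outlined in a few lines of Python: rejected records whose exposure index falls below a low threshold are flagged as probable plate erasures and set aside for visual review. The threshold value and field names are placeholders; appropriate limits are vendor- and site-specific.

# Flag probable plate-erasure captures among the rejects for visual review.
EI_ERASURE_THRESHOLD = 500.0   # placeholder; "abnormally low" is vendor- and site-specific

def probable_plate_erasures(rejected_records) -> list:
    """Return rejected records whose exposure index (EI) is abnormally low."""
    return [r for r in rejected_records
            if r.exposure_index is not None and r.exposure_index < EI_ERASURE_THRESHOLD]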

Whereas the “spirit” of the QA deficiency terminology was similar at the two sites (Table 1), the specific language used for labeling a rejected image was configured differently in the CR system software at each site and, in a few instances, was configured differently among CR systems within the same site. There were also examples of redundant terminology. For instance, CH had choices in its list of reasons for rejection that included “High Index” and “High Exposure Index;” UH had choices in its list that included “Low Exposure Index” and “Underexposure.” Moreover, independent visual review indicated that the interpretation of the terminology was also inconsistent among technologists. For example, ambiguity was discovered in the use of “Positioning Error” and “Anatomy Clipping.” Each of these terms can have a well-articulated, unambiguous definition; however, visual review of the rejected images from each site indicated that the terms were used essentially interchangeably. An interesting observation emerged when visually characterizing rejected and subsequently repeated and accepted image pairs for images initially rejected because of patient motion. Two very distinct interpretations of patient motion evidently existed, because a significant percentage of each type of interpretation was identified. The first type of motion reject consisted of examples where the patient moved during the actual exposure, with the motion defect manifesting itself in the image as blur. This was clearly evident when comparing the original rejected image with the subsequently repeated and accepted image. Not surprisingly, most of the images that fit this categorization were from typical long-exposure-time exams such as the lateral chest. A second, very different type of rejected image assigned “Patient Motion” as the reason for rejection manifested itself not as blur but, rather, as incorrect anatomical positioning. Again, by comparing the reject with the subsequently repeated accepted image, it appeared that the patient had moved to a different, but stationary, position after the technologist originally positioned them. The use of motion as the reason for rejection is certainly legitimate in this situation, although this type of rejected image might alternatively, and just as legitimately, be termed a “Positioning Error.” Another interesting, highly confounding example of terminology ambiguity was observed in images assigned “Underexposure” as the reason for rejection. EI frequency distributions were generated for images rejected for “Underexposure.” Somewhat surprisingly, approximately 30% of these instances had EI values that were well within the range for a normal exposure. Upon visual inspection of these cases, it was discovered that the images had been rendered too bright by the image-processing software, which is a classic characteristic of an underexposed image for a screen-film system. Further characterization of these images, coupled with making brightness and contrast adjustments to the image rendering, showed that virtually all chest x-ray exams that fit this category had suboptimal positioning, with too much of the abdomen included within the field of view. The poor positioning caused the image-processing algorithms to render the images with lower than desired average density, which, in turn, resulted in the technologist interpreting the image as underexposed. Another 20% of the images rejected because of underexposure were found to have normal EI values; however, upon visual review, they exhibited an excessively noisy appearance. Further characterization revealed that the noise was the result of capture on CR plates that had not been recently erased, which is noncompliant with the manufacturer’s recommended quality control procedure for preventing stray radiation artifacts, a.k.a. “stale plate noise.”
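
The exposure-index characterization described above can be reproduced in outline with the sketch below, which builds the EI frequency distribution for images rejected as “Underexposure” and reports the fraction falling within a nominal normal-exposure range. The range limits are placeholder assumptions, not the values used in the study.

import numpy as np

EI_NORMAL_RANGE = (1400.0, 2200.0)   # placeholder normal-exposure EI limits

def underexposure_ei_summary(ei_values):
    """Histogram the EI values of 'Underexposure' rejects and report the in-range fraction."""
    ei = np.asarray(ei_values, dtype=float)
    hist, bin_edges = np.histogram(ei, bins=20)
    lo, hi = EI_NORMAL_RANGE
    in_range_fraction = float(np.mean((ei >= lo) & (ei <= hi))) if ei.size else 0.0
    return hist, bin_edges, in_range_fraction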

Although rejection because of overexposure occurred infrequently, these cases are cause for concern with respect to patient dose. The authors reviewed all of the rejected images that were assigned “Overexposure” or “High Exposure Index” as the reason for rejection. In more than 90% of the cases, these images were rendered suboptimally as a consequence of the impact of the extremely high exposure on the code-value histogram. In other words, simple brightness and contrast adjustments could likely have salvaged these images without exposing the patient, who had already received a higher-than-normal exposure, to additional radiation. The use of “Overexposure” as a reason for rejection in CR cases where reprocessing is an option suggests that additional training is required.
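
The point about salvaging overexposed captures through reprocessing can be illustrated with a generic window/level (brightness and contrast) adjustment applied to unprocessed image data, sketched below in Python with NumPy. This is a minimal illustration of the general technique, not the rendering pipeline used by the CR systems.

import numpy as np

def window_level(raw_image: np.ndarray, center: float, width: float) -> np.ndarray:
    """Map raw code values through a window/level transform to an 8-bit display image."""
    lo = center - width / 2.0
    hi = center + width / 2.0
    clipped = np.clip(raw_image.astype(float), lo, hi)
    return ((clipped - lo) / (hi - lo) * 255.0).astype(np.uint8)

# Re-rendering an overexposed capture amounts to choosing a new center and width
# suited to the shifted code-value histogram, rather than repeating the exposure.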

A protocol issue existed with the use of “Other” as the reason for rejection. Considerable numbers of rejected images were assigned the reason “Other.” A comment field was provided in the reject-tracking software to accompany the use of “Other.” However, this was not a software-required entry, and the comment field was most often left blank.

The order-of-magnitude difference in reject rates between in-department and portable chest exams seemed surprising at first, because the known difficulties of capturing diagnostic-quality portable chest images would logically be expected to increase the number of images that need to be repeated. Some of the problematic aspects of portable chest imaging include infrequent use of antiscatter grids, inconsistent techniques, difficulty in positioning patients, patients unable to maintain a breath hold, and less-capable x-ray generators. These factors, taken together with diagnostic tasks requiring visualization of low-contrast features, such as tube and line tip placements and pneumothorax, should increase the probability of limited-quality images. Close observation of technologist rounds at the two hospitals, however, revealed that the dramatic difference in reject rates between in-department and portable chest x-ray exams was related to the inability of technologists to view the images at the point of capture. CR systems are often centrally located; in some cases, the CR units are located on a different floor of the hospital than the intensive care unit. Technologists may expose a CR cassette and not view the image until an hour or more after capture, at which point the workflow impact of performing a repeat is prohibitive. The unfortunate consequence of this scenario is that an increased percentage of suboptimal portable chest images may be released to the PACS for interpretation.

Conclusions

Comprehensive and accurate digital radiography QA requires that a mechanism be put into place to force technologists to enter reject data into a database, e.g., the capture device software should require that this data be entered for each rejected image before another image can be scanned. The reject data must include the reason for rejection, technologist ID, patient ID, and equipment- and exposure-related information. Moreover, the software and hardware infrastructure must be in place so that all image records, including both accepted and rejected records, are centrally accessible and appropriately organized. Digital dashboards that centrally collect and compile image statistics are now available to accomplish this function. However, mechanisms must also be provided to enable a QA technologist or medical physicist to visually inspect rejected images.

Standardized terminology and definitions for QA deficiencies must be established, along with the associated training, to eliminate the inconsistent and sometimes inappropriate labeling of rejected images. Protocols must be established that require the comment fields to be completed whenever a nonspecific reason for rejection is used. Unless the image is tagged as a reject, systems generally do not provide a way to prevent a QC image from being delivered to the PACS. Consequently, protocols must be implemented whereby images that are rejected because of preventive maintenance or QC-related reasons are properly labeled so that they are easily distinguished from patient-related rejected images. One way to ensure that this occurs is to require that, for each rejected image, the technologist specify the exam type and reason for rejection, i.e., eliminating the notion of a default exam type. This should reduce the number of erased-plate images that are mislabeled. Adopting standardized terminology and adhering to best-practice protocols will allow sites to more fully understand their QA performance and drive them toward more focused training programs.
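
A minimal sketch of the kind of data-entry validation recommended above is shown below: a rejected image must carry an explicitly specified exam type (no default), a reason drawn from the standardized list, and a comment whenever the reason is nonspecific. The function and field names are hypothetical.

from typing import Optional

def reject_entry_is_complete(body_part: Optional[str],
                             view_position: Optional[str],
                             reason: Optional[str],
                             comment: Optional[str],
                             configured_reasons: set) -> bool:
    """Check a reject entry against the recommended protocol before the next scan."""
    if not body_part or not view_position:
        return False   # exam type must be explicitly specified; no default allowed
    if reason not in configured_reasons:
        return False   # reason must come from the standardized list
    if reason in {"Other reason", "No reason"} and not comment:
        return False   # nonspecific reasons require an accompanying comment
    return True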

Better QC methods, including the capability to display digital radiography images at the point of capture, may significantly benefit portable chest x-ray image quality. Mobile CR and DR systems now provide this capability.

To summarize, there is an opportunity to improve the completeness and accuracy of reject analysis for digital radiography systems through the standardization of data entry protocols and improved reporting and analysis methods. Accurate reject analysis provides the basis from which to develop targeted training programs and helps to mitigate the largest source of patient repeat exposures.

Copyright information

© Society for Imaging Informatics in Medicine 2008