European Archives of Oto-Rhino-Laryngology, Volume 268, Issue 5, pp 643–651

How trustworthy is a diagnosis in head and neck surgical pathology? A consideration of diagnostic discrepancies (errors)

  • Julia A. Woolgar
  • Alfio Ferlito
  • Kenneth O. Devaney
  • Alessandra Rinaldo
  • Leon Barnes


A comprehensive consideration of the categorization, causes, detection and consequences of diagnostic discrepancies (errors) in surgical pathology (SP) can be uncomfortable for both clinicians and pathologists, which probably explains why the topic is rarely discussed outside case conferences and multidisciplinary team meetings. In recent years, audit and quality assurance programs have become an integral part of SP, raising awareness among pathologists and leading to the introduction of safeguards such as participation in accreditation and national external quality assurance schemes. Most clinicians are less well acquainted with the causes and detection of errors in SP. Furthermore, pathologists may use subtle differences in terminology to convey diagnostic uncertainty, but these may be overlooked by clinicians looking for a definitive diagnosis before formulating a treatment plan, and this oversight provides further potential for misunderstanding or error. This editorial aims to summarize the detection and frequency of diagnostic discrepancies, describe the types of discrepancy and their relative frequency, and outline the consequences for the patient, relatives, and the pathology and clinical/surgical teams.

Detection and frequency of diagnostic discrepancies

The definition of diagnostic discrepancies (errors) in SP, and their identification and frequency, remain controversial [1]. One widely accepted definition is that a diagnostic error represents “the assignment of a pathologic diagnosis that does not represent the true nature of disease (or lack of disease) in a patient” [2]. However, since SP is a subjective interpretation of complex visual data, it is difficult to define a true/correct diagnosis for any given specimen [1, 3, 4]. “Expert” opinions/diagnoses are known to harbor significant bias [4]. To paraphrase an observation of a former US Supreme Court Justice (who was discussing the position of the Supreme Court in American law), the “experts” do not speak last because they are infallible; they are infallible because they are the last to speak. Even a “reasonable standard” diagnosis by an experienced general pathologist will carry bias determined by practice patterns, training, experience, personal anecdotes and human error [3, 5]. Using clinical outcome as a robust measure of the true diagnosis of a tissue specimen is impractical and unreliable [6, 7]. Hence, surrogate measures of error that assess precision instead of accuracy are widely used [3].

One of the most popular surrogate measures of error is peer review (double reading), as in departmental audit. To have potential positive clinical impact, this must be performed soon after the initial diagnosis is made, and ideally before sign-out [8, 9]. The type of cases selected for peer review affects the error detection rate. For random review, a survey of multiple laboratories found a mean discrepancy rate of 6.7%, of which only 5.4% had moderate to marked effects on patient care [10]. Higher error detection rates are found when the review is directed at notoriously difficult lesions such as pigmented skin lesions [11]. Clinicopathological conferences and seeking a colleague’s opinion provide an informal peer review and are useful in preventing or detecting errors in routine practice [12, 13]. Formal peer review can only reasonably be used to assess the overall error rate of a department, since the number of cases that would need to be analyzed to accurately assess trends between pathologists is too large to be feasible in clinical practice [14]. Double reporting of malignancies is the standard protocol in many departments and, while it may protect the pathologist, it is somewhat incongruous since most legal action relates to missed diagnoses (false-negative cases) [15].

Analysis of the correlation between frozen section and the final pathology results is an essential component of a departmental quality assurance program [16]. In three large studies [17, 18, 19], the discrepancy rate ranged from 1.42 to 1.8% and the deferral rate from 4.2 to 4.6%. Sampling errors accounted for 70% of discrepancies with interpretation errors accounting for 30%. In a large multi-institutional study [17], patient management was unaffected in 74% of discrepant cases and greatly affected in 2.5%. The utility and diagnostic accuracy of frozen section examination are related to the tissue involved and are notoriously poor for thyroid [20, 21].

In head and neck cancer, good correlation between frozen section analysis of surgical margins and the final diagnosis is an important goal that has yet to be achieved [22]. There is good correlation between frozen sections and permanent sections of the same tissue (98.3% accuracy, 88.8% sensitivity and 98.9% specificity in the study of 420 margins reported by DiNardo et al. [22]), but poor correlation between frozen section analysis and the final margins on the resection specimen. For example, in the same study [22], only 40% of patients with positive final margins on the resection specimen were detected by intra-operative frozen section, suggesting that the poor accuracy rates are due to inadequate selection of tissue for frozen section analysis rather than inaccurate assessment of the frozen tissue itself. The success of sentinel lymph node biopsy also relies on high accuracy of the intra-operative frozen section examination. In a study of 82 cervical sentinel nodes from 31 oral and oropharyngeal cancer patients, 93% sensitivity and 94% negative predictive value were achieved with fine-sectioned frozen tissue analysis [23]. The value of correlation studies in highlighting discrepancies is exemplified by a study of sentinel nodes in 44 oral cancer patients in which multi-slice frozen section analysis achieved sensitivity, specificity, overall accuracy, positive and negative predictive values of 90.9, 100, 99.1, 100 and 99%, respectively, compared with corresponding rates of 27.3, 99, 92, 75 and 92.6% for imprint cytology [24].
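For readers less familiar with these figures, all five rates derive from a standard 2×2 contingency table of test result versus final (reference) diagnosis. A minimal Python sketch, using illustrative counts rather than data from the cited studies:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard 2x2 contingency-table metrics used in
    frozen section/final diagnosis correlation studies."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Illustrative counts only (not taken from the cited studies):
m = diagnostic_metrics(tp=10, fp=0, tn=99, fn=1)
print({k: round(v, 3) for k, v in m.items()})
# → {'accuracy': 0.991, 'sensitivity': 0.909, 'specificity': 1.0, 'ppv': 1.0, 'npv': 0.99}
```

Note that with few true positives, a single missed node moves sensitivity far more than it moves overall accuracy, which is why a test can report 99% accuracy yet still miss a clinically important fraction of positive cases.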

Many institutions review outside material on any patient seen for treatment and this seems a useful check since reported discrepancy rates of general sign-out cases range from 1.4 to 11.3% with almost 60% of major discrepancies resulting in a change in clinical management [25, 26, 27, 28, 29]. Female reproductive tract, gastrointestinal tract, head and neck, skin, and genitourinary tract cases account for 85% of major disagreements [25, 28]. Dealing with discrepant cases can be difficult. As a matter of professional courtesy, the original pathologist should be contacted and told the reason for the change and whether ancillary studies were performed, ideally before the amended report is signed out [28].

Amended report analysis is a further surrogate measure of error in SP [16]. In general, it is recommended that an “addendum” report is used to convey additional information that was not available at the time the original final report was issued, such as the results of special stains, so long as the additional information does not significantly alter the intent of the original report. If, for any reason, there is a significant change in the intent of the original report, then it is recommended that an “amended” (revised, corrected) report is issued. These are intended for the urgent attention of clinicians. As well as clearly stating the revised diagnosis, an amended report should include a specific reference to what has changed and the reasons for the change. In addition, the term “corrected report” may be used to correct typographical and transcription errors and changes in non-diagnostic fields such as patient identifiers. When used in this way, an analysis of amended and corrected reports provides useful data. A study quantifying the number of amended reports issued by 359 institutions [30] suggested that laboratories should strive to reach the “best practice” value of the 10th percentile (0.22 amended reports per 1,000 reports).
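Since the benchmark in [30] is expressed per 1,000 reports, a laboratory can gauge its own position with a one-line calculation (the case numbers below are hypothetical):

```python
def amended_rate_per_1000(amended_reports, total_reports):
    """Amended-report rate expressed per 1,000 reports issued."""
    return 1000 * amended_reports / total_reports

# Hypothetical laboratory: 12 amended reports across 40,000 cases.
rate = amended_rate_per_1000(12, 40_000)
print(rate)          # → 0.3
print(rate <= 0.22)  # above the 10th-percentile "best practice" value → False
```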

Categorization of diagnostic discrepancies

Diagnosis is the determination of the nature of a disease condition, typically based on the patient’s medical history, the physical symptoms and signs, and often on the results of laboratory tests and radiological imaging, which frequently precede invasive techniques such as fine needle aspiration cytology (FNAC) and surgical biopsy. Most clinicians and pathologists appreciate the benefits and limitations of cytological diagnosis and are likely familiar with FNAC-histology correlation studies in the head and neck region. These studies highlight the importance of cytopathologist experience and expertise and show that diagnostic accuracy depends on the site of aspiration, with particular difficulties in differentiating reactive lymphoid hyperplasia from lymphoma and in diagnosing follicular thyroid lesions [31, 32, 33]. A recent study of 191 salivary gland FNACs from a single academic center [34] reported 79.1% overall accuracy in distinguishing benign from malignant lesions, and the sensitivity for salivary neoplasia was 89.4%. Hence, when carried out well, FNAC has a significant role in triaging patients and can reduce unnecessary surgeries. Ultrasound guidance allows a cytopathologist to perform FNACs on smaller, non-palpable lesions and to target complex lesions with confidence and accuracy, thus achieving a better outcome [35]. The advantages of ultrasound-guided FNAC coupled with on-site cytology in a one-stop neck mass clinic, which allows immediate assessment of the aspirate and repeat passes if necessary, are well recognized and reduced the inadequacy rate from 15 to 4% in a recent study of 274 patients [36]. Even in difficult areas like thyroid pathology, on-site cytology with routine second-opinion review of indeterminate biopsies can be helpful and potentially obviate the need for diagnostic thyroidectomy in 25% of patients without increasing the false-negative rate [36].

Diagnosis of frozen sections is highly dependent on the quality of the sections and frozen-final diagnosis correlation studies highlight the potential difficulties and limitations [37]. Indeed, most surgeons consider a diagnosis based on histological assessment of fixed tissue essential before embarking on anything but a simple excision biopsy. The potential pitfalls associated with this “routine histological diagnosis” merit detailed consideration.

Errors can occur at all stages of the diagnostic process from misidentification of the specimen through to errors in report writing and delivery. In some cases, there may be no clinical consequences. For example, calling a completely excised benign intradermal melanocytic nevus a pilomatricoma has no clinical significance (although it has implications for the pathologist) and it is likely that such errors remain undiscovered. Other errors have minor or major potential to cause harm to the patient, but if discovered early, harm may be averted and the error can be classified as a “near miss”. Actual harm may be classified as minimal, mild, moderate, severe or unknown [38].

A classification system based on cause is useful in raising awareness and averting future potential errors. Foucar [7] described seven groups of errors, including errors due to “local” circumstances that were out of the pathologist’s control; “work habit/work environment” errors resulting from the working systems of individual pathologists; and errors due to lack of knowledge or misinterpretation (“knowledge-based interpretive error”) by the individual pathologist. The remaining groups of errors were attributable to specialty-wide issues such as low diagnostic precision/accuracy and a lack of knowledge/understanding of specific lesions/conditions. Awareness, attention to detail and systematic checks can help eliminate errors due to “local” circumstances. Providing the pathologist with the means to increase their concentration and work more deliberately can reduce “human lapses” and other work habit/work environment errors. Problems due to specialty-wide issues can be addressed in training but, by their inherent nature, cannot easily be eliminated. Studies comparing diagnosis of skin lesions by general pathologists and dermatopathologists illustrate the problems of non-specialist diagnosis of difficult and unusual lesions [39].

Another proposed classification [10] defines discrepancies as a difference between the original interpretation and the interpretation of the secondary review and then further classifies the discrepancies by cause—change in patient information, changes in interpretation, discovery of a typographical error and so on. Changes in interpretation may be within the same category of either benign or malignant, or benign lesions may be classified as malignant and vice versa [10].

Zarbo et al. [12] proposed a three-step approach to classification. The first step categorized each error into one of four general error types or defects, with further subcategorization into specific error type. “Defect in specimen” may be categorized as lost specimen, inadequate size, erroneous description/measurement, extraneous tissue, inadequate sampling, and failure to carry out pertinent ancillary studies such as immunostaining. “Defect in identification” may concern the patient, the tissue, laterality and anatomical location. “Defect in interpretation” may be a false-negative (under-call), a false-positive (over-call) or misclassification. “Defect in report” may be erroneous or missing non-diagnostic information; confusing/unclear diagnostic information and terminology; errors in dictation, typing, computer formatting and report delivery.

A recent study [40] detailed 75 specimen labeling errors that occurred during the analytic phase (that is, within the pathology laboratory). These labeling errors represented 0.25% of cases, 0.068% of blocks and 0.03% of slides within a single laboratory over an 18-month period. The findings merit detailed consideration since they provide a worrisome insight into a single type of “near miss”. How many similar “near misses” actually translate into real errors is unknown and cannot readily be estimated. Of the 75 labeling errors that were detected, 55 (73%) involved the patient’s name (a patient identification error), 18 (24%) involved the site (a specimen identification error), and 2 (3%) involved an incorrect label number but the same patient and site. The majority of mislabelings (69%) occurred within the gross (cut-up) room, 25% occurred in the histology laboratory and 6% in the pathology office. During the gross/cut-up procedure, tissues were placed in an incorrectly labeled cassette. In the majority of cases, the labeled cassettes were switched between two sequential specimens: these were small biopsy specimens for which prelabeled cassettes had been placed with the incorrect paperwork and specimen containers by the laboratory assistant. Errors occurring in the gross room were usually recognized in the histology laboratory, by a surgical pathologist, or by the referring clinician on reading the patient’s SP report. In contrast, 63% of the 19 errors which occurred within the histology laboratory involved certified histology technicians: blocks or sections were incorrectly matched with slides that had been labeled in pencil on the frosted end of the glass slide, or the penciled number had been written incorrectly. The other 37% of the 19 histology laboratory errors involved a laboratory assistant sticking an incorrect “permanent” label onto the pencil-labeled slide.
Errors occurring within the histology laboratory were most commonly detected by the sign-out pathologist. The four errors occurring within the pathology office concerned cases received for review and were due to an incorrect case number being written on outside slides; these errors were spotted by the reporting pathologist. It was judged that 13 of the 75 errors (17%) would have resulted in the wrong therapy had they not been detected. Interestingly, the reasons for lack of impact in the remaining 62 errors (83%) included discordance between patient sex and reported specimen source (a prostate gland in a female patient), a diagnosis without subsequent impact on patient care (cholelithiasis in a patient known not to have had a cholecystectomy), and an incorrect diagnosis identical to the actual patient diagnosis.

A further prerequisite for an accurate histological diagnosis is that the sections examined accurately reflect the lesion as a whole (are truly representative). Poor clinical acumen may lead to poor selection of the biopsy site; poor surgical technique may result in a specimen that is too small (in particular, too shallow), crushed or fragmented. Inappropriate use of marker sutures may destroy critical features. Laboratory-based sampling errors are well described [41] and include inadequate dissection/slicing of the specimen and deficiencies in the selection of tissue for processing. Poor technical skills/expertise may lead to excessive loss of tissue during production of thin histological sections. Deficiencies in technique and inadequate/faulty machinery may result in poor-quality sections that may lead to oversight or misinterpretation by the pathologist.

Good communication between the clinician and the pathologist is essential and may prevent misinterpretation and overt errors. Failure to convey the clinical appearance/behavior of a lesion may lead the pathologist to make an inaccurate diagnosis such as “no evidence of malignancy” rather than “inadequate/non-diagnostic biopsy specimen”. The histological report may be erroneous due to typographical/clerical errors, or misinterpreted due to the use of unclear/confusing language. Also, there may be a lack of understanding of the terms favored by individual pathologists. Many pathologists use non-definitive terms such as “in keeping with”, “consistent with”, “highly suggestive of”, “favor”, “suggestive of”, “suspicious of”, “reminiscent of” and “not inconsistent with” to convey different diagnostic “weight”. Clinicians may fail to recognize these subtle differences in terminology that can be used to convey doubt/uncertainty, and there is wide variation in individual interpretation of such phrases in both groups [42]. Despite calls since the mid-1990s for adoption of a limited number of descriptive phrases that are mutually understood and acceptable, the problem remains. For example, a more recent study comparing clinician comprehension with pathologist intent in written pathology reports [43] found that surgeons misunderstood pathologists’ reports 30% of the time. Familiarity with report format and clinical experience helped reduce the gap but, paradoxically, sophisticated improvements to report formatting interfered with comprehension and increased the number of misunderstandings. The problem is not confined to clinicians and pathologists: a survey [44] of terminology interpretation of what constitutes a diagnosis of cancer found that pathologists did not intend a definite diagnosis of cancer in 4 of 13 terms regarded as confirmatory by cancer registries.

Another problem in interpretation of pathology reports is that clinicians may fail to appreciate that similar histological appearances can occur in several conditions but the pathologist may only mention the condition suggested or queried on the pathology request form. Numerical certainty descriptors are an integral part of some quality assurance schemes [45] but are not generally used in routine diagnostic practice where terms such as “provisional” or “working” diagnosis are more commonly employed to convey diagnostic doubt/uncertainty.

Table 1 shows some of the more common errors seen in head and neck consultation practice.
Table 1

Representative common errors seen in head and neck consultation service


Mistaking benign tangentially embedded squamous mucosal biopsies for invasive well differentiated squamous cell carcinoma


Under or over diagnosing verrucous carcinoma


Mistaking pseudoepitheliomatous hyperplasia for squamous cell carcinoma


Confusing spindle cell carcinoma for sarcoma


Mistaking metastatic HPV-positive non-keratinizing squamous cell carcinoma of the tonsil or base of tongue in a lymph node for a branchial cleft cyst or “branchial cleft carcinoma”


Confusing respiratory epithelial adenomatoid hamartoma for inverted Schneiderian papilloma


Mistaking reactive stromal atypia in sinonasal polyps for sarcoma


Confusing lobular capillary hemangioma of the sinonasal tract for glomangiopericytoma


Failure to recognize allergic mucus or allergic fungal sinusitis


Confusing polymorphous low grade adenocarcinoma and epithelial-myoepithelial carcinoma for pleomorphic adenoma


Mistaking basal cell adenocarcinoma for sebaceous gland carcinoma


Mistaking acinic cell carcinoma as adenoma and cystic acinic cell carcinoma as benign cyst


Confusing necrotizing sialometaplasia for low grade mucoepidermoid carcinoma or squamous cell carcinoma


Mistaking oncocytic hyperplasia for oncocytoma


Failure to recognize variants of myoepithelial neoplasms and how to distinguish whether they are benign or malignant


Confusing ameloblastoma of the sinonasal tract for adenoid cystic carcinoma


Failure to recognize some of the more aggressive odontogenic cysts such as keratocystic odontogenic tumor (odontogenic keratocyst) and glandular odontogenic cyst


Confusing dental papilla for odontogenic myxoma


Under and over diagnosing encapsulated follicular-derived thyroid neoplasms


Mistaking middle ear adenoma (carcinoid) for metastatic adenocarcinoma or, at times, even for extramedullary plasmacytoma


Confusing paraganglioma of the middle ear for granulation tissue


Mistaking atypical carcinoid of larynx for paraganglioma


Confusing large cell neuroendocrine carcinoma of the larynx for atypical carcinoid


Mistaking small cell neuroendocrine carcinoma of the larynx for undifferentiated carcinoma


Mistaking atypical carcinoid of the larynx for adenocarcinoma not otherwise specified


Confusing atypical carcinoid of the larynx for typical carcinoid


Mistaking basaloid squamous cell carcinoma of the larynx for small cell neuroendocrine carcinoma


Confusing small cell neuroendocrine carcinoma for Merkel cell carcinoma


Mistaking basaloid squamous cell carcinoma for adenoid cystic carcinoma


Confusing basaloid squamous cell carcinoma for adenosquamous carcinoma


Mistaking acantholytic squamous cell carcinoma for adenosquamous carcinoma


Confusing adenosquamous carcinoma for mucoepidermoid carcinoma


Confusing fibrous dysplasia, ossifying fibroma and periapical cemento-osseous dysplasia


Mistaking chondrometaplasia for chondroma and low-grade chondrosarcoma


Confusing spindle cell lipoma/pleomorphic lipoma for liposarcoma


Mistaking adamantinomatous craniopharyngioma for epidermoid cyst or dermoid cyst

Consequences of diagnostic discrepancies

These depend on when and how the discrepancy is discovered and on the severity of any clinical effects. General considerations include legal ones. There is no doubt that some pathologists find that their diagnoses and their formal interpretations are colored by fear of litigation. While there are no statistics to back up this assertion, (private) anecdotal communications make it clear that there can sometimes be a temptation to say less, or to be less committed, when the specter of a lawsuit hangs over the pathologist’s head. Certainly, there is no virtue in a pathologist overreaching, attempting to make a diagnosis or propose an interpretation not supported by the appearances of the tissue being examined; but nor is the patient helped by a pathologist whose fear of litigation makes him or her more reticent about reporting on specimens that would actually support a more definitive diagnosis than the one proffered to the clinicians.

A related matter, grounded in the intersection of law and medical errors, is that of extending apologies for undesirable outcomes. In North America in particular, the last few years have seen a growing interest in formalizing mechanisms, both within individual hospitals and at the state legislative level, for disclosing medical errors to patients and their families when they occur, and in some instances offering compensation at the outset [46, 47]. Indeed, almost three-quarters of US state legislatures have enacted “apology laws”. At the risk of oversimplifying, these laws exclude expressions of sympathy made after accidental harmful events from being admissible as proof of liability.

Ideally, such a strategy would reduce the number of lawsuits which are either frivolous or driven by anger; information is still being collected to establish whether or not this is actually the case. Insofar as pathologists are concerned, the ordinary gap between those pathologists and the patients (as the pathologists do not deal directly with patients in most instances) makes it harder to see how the “apology” movement will affect pathology practice. It may be that the patient’s clinicians could be deemed the surrogates of the pathologists, delivering sympathy and the offer of compensation to patients who have been harmed by pathology errors (which is likely what happens most often, at present). Alternatively, this may provide an impetus for pathologists to step out from behind their laboratory doors and take a more direct role, at least in this particular situation, in dealing with patients face-to-face.

The media is apt to create concern about the accuracy of diagnosis in SP and cytopathology. Detailed analysis of the medical literature cited by the media shows that “painting the big picture” and “hitting the highlights” can be profoundly misleading [48]. For example, much attention has been focused on the accuracy of FNAC, with the media misrepresenting FNAC as the “definitive” diagnosis rather than the “triage” or “working” diagnosis and portraying FNAC-surgical biopsy discrepancies as major “headline” diagnostic errors indicative of pathologist incompetence. The clinically significant diagnostic (cognitive) error rate in SP reported in the literature varies from 0.26 to 1.2% when uncovered by prospective review of all cases or discovered on review of a random sample of cases [49].

Medical negligence claims against pathologists are still relatively uncommon but have increased steadily in recent years and often result in substantial damages [50]. The judicial system defines “error” as patient injury resulting from medical negligence. The four elements that must be established to validate a medical negligence claim are duty, breach (negligence), causation of injury (proximate cause) and damage [51, 52].

The duty incumbent upon the reporting pathologist is accurate, specific and timely reporting of a biopsy or resection specimen. Errors in interpretation are directly attributable to the pathologist. In addition, errors may result from events that are indirectly under the pathologist’s control and may constitute vicarious liability. A pathologist who recognizes that poor technical quality may affect slide interpretation but fails to correct the technical deficiency may be deemed to have exercised negligent supervision. Breach (negligence) is demonstrated by failure to meet the accepted standard of care as defined by expert testimony. The standard of care is the professional behavior expected of a diligent, careful and informed physician, and is a national standard, often equated with “best practice”. For causation of injury, it must be shown that the failed standard of care resulted in a patient injury that would not have occurred had appropriate care been given. Injury consequent to pathological misdiagnosis may result in inappropriate treatment or delay in diagnosis. Damage is the injury sustained by the patient as a direct result of the failure to meet the standard of care. The plaintiff’s case will fail unless all four elements can be shown to be operative (burden of proof) [51, 52].

For multiple and complex reasons, analysis of successful claims for damages can only be regarded as an approximate indicator of clinically significant SP errors but, nevertheless, it reveals some interesting findings. A review of 335 pathology claims reported to an American physician-owned professional liability insurance company from 1998 to 2003 [49] found that 14% of SP claims showed no particular pattern relating to specimen type, diagnostic category, or diagnostic error (“random” errors), while over 85% fell into repetitive patterns of specimen type or diagnostic category suggesting “systematic” errors. Overall, 63% of these claims involved a false-negative diagnosis of malignancy and 22% a false-positive diagnosis. The majority of systematic claims involved melanoma, breast and gynecological pathology. Operational errors accounted for 22 (6.5%) of the 335 claims. Specimen mix-ups accounted for 59% of operational errors, followed by lost biopsies, specimen contamination (“floaters” and “pick-up”), mislabeled biopsy sites and a transcriptional error (failure to type “no” in front of “malignant cells identified”). Seven (2%) of the 335 claims involved false-negative diagnoses of malignant salivary gland tumors. Metastatic squamous cell carcinoma misdiagnosed as branchial cleft cyst and failure to diagnose extranodal lymphoma in the nasopharynx were also mentioned as occasional claims in the false-negative category.

Although the psychological sequelae of false-positive recall in breast screening have been widely reported [53, 54, 55], there appears to be less information on the effects of false-negative tests and delayed diagnosis. Moreover, there is a dearth of information on the psychological effects of any type of incorrect or delayed pathology diagnosis in head and neck patients. This is understandable given the sensitive and ethically fraught nature of incorrect or delayed diagnoses, and their relative rarity in head and neck pathology. Most head and neck patients are anxious following a biopsy and want to know the result as soon as possible. In general, delays are due either to human error and unforeseen technical failures or to the difficulty of the case, and can be inconvenient and frustrating for the pathologist and clinician. For the patient and their relatives, an unexpected delay or a request for a repeat biopsy also means heightened anxiety and uncertainty. The long-term effects of a delayed diagnosis probably depend on the biopsy findings. A delayed diagnosis of malignancy may result in anxiety over unnecessary progression of the disease before treatment and the possible consequent changes in the treatment plan. If the disease has advanced, it may no longer be curable, or the quality of life may be diminished by radiotherapy and/or chemotherapy. In addition, there is the possibility of a claim for medical negligence. Victims of misdiagnosed cancer are entitled to compensation for their “increased risk of harm” due to the delayed diagnosis, in addition to damages for their pain and suffering, loss of normal life, and medical expenses caused by the medical negligence [51, 52].

The effects of errors on the pathology and clinical/surgical teams are complex. Personnel directly involved may suffer a loss of confidence, anxiety and stress, and feelings of guilt, and may require remedial education or re-training, which can have further ramifications. Again, there is a dearth of evidence-based data, which likely reflects the reluctance of pathologists to verbalize their concerns to colleagues or seek professional help. Fear of an impending outside review of cases may cause distress for weeks or months and may result in suspension from the workplace while the review is carried out. Settlement of claims for negligence can take years and, even if settled out of court, is costly in terms of psychological distress, time and money for the patient and relatives, the pathologist/pathology team and other hospital personnel, expert medical witnesses, and so on. The effects of manpower shortages and the occurrence of stress and burn-out have been studied in healthcare workers in general [56], and in surgeons [57] and oncology employees [58], but equivalent data for pathologists are not readily available.

Reduction of errors

Improving the quality of SP diagnosis is hugely important. In the United Kingdom, pathology reports are involved in 70% of episodes of patient care within the National Health Service. In the United States, in oncology alone, around 1.5 million patients each year have their diagnosis established by pathological interpretation of a tissue sample [59]. Millions more have a biopsy to rule out cancer. The paramount goal should be to prevent errors from occurring [60]. The medical profession favors confidential reporting of errors and near-misses in an attempt to identify types and patterns of occurrence, which should ultimately improve safety and reliability. All healthcare workers responsible for specific tasks must be properly educated and motivated to perform those tasks with as few errors as possible. There must be written protocols detailing responsibilities, including contingencies for when those responsibilities are not met. Successful completion of tasks should be documented, especially when processes involve several sequential stages. The opportunity for making errors must be reduced by removing unnecessary stages/processes and by the timely introduction of new technology such as bar codes and radio-frequency identification devices. Nevertheless, despite all efforts, it is inevitable that some errors will occur. Hence, it is essential that checks are in place and routinely carried out, even though they may seem largely redundant. Every member of the healthcare team must be aware that therapeutic decisions involving surgery are irrevocable and that the damage caused by inappropriate surgery is irreversible.
Clinicians have an essential role in SP error reduction: efficient test ordering; providing accurate, pertinent clinical information; procuring high-quality specimens and ensuring that they reach the laboratory quickly and in good condition; promptly following up test results; effectively communicating concerns about potentially discrepant diagnoses; and advocating second opinions on the pathology diagnosis in specific situations [59, 60]. The responsibility for following up test reports, and in particular for interpreting the findings, should not fall on junior staff. The importance of reading the complete pathology report carefully, checking the meaning of words and terms where necessary, must be emphasized during training and become absolute routine. Regular clinicopathological conferences should be encouraged, since they help break down the barriers between clinicians and pathologists and allow better understanding of the difficulties facing each team.


References

  1. Leong AS, Braye S, Bhagwandeen B (2006) Diagnostic “errors” in anatomical pathology: relevance to Australian laboratories. Pathology 38:490–497
  2. Raab SS (2004) Improving patient safety by examining pathology errors. Clin Lab Med 24:849–863
  3. Sirota RL (2006) Defining error in anatomic pathology. Arch Pathol Lab Med 130:604–606
  4. Foucar E (1998) Error identification: a surgical pathology dilemma. Am J Surg Pathol 22:1–5
  5. Coffin CS, Burak KW, Hart J, Gao ZH (2006) The impact of pathologist experience on liver transplant biopsy interpretation. Mod Pathol 19:832–838
  6. Renshaw AA (2006) Comparing methods to measure error in gynecologic cytology and surgical pathology. Arch Pathol Lab Med 130:626–629
  7. Foucar E (2005) Classification of error in anatomic pathology: a proposal for an evidence-based standard. Semin Diagn Pathol 22:139–146
  8. Renshaw AA, Cartagena N, Granter SR, Gould EW (2003) Agreement and error rates using blinded review to evaluate surgical pathology of biopsy material. Am J Clin Pathol 119:797–800
  9. Lind AC, Bewtra C, Healy JC, Sims KL (1995) Prospective peer review in surgical pathology. Am J Clin Pathol 104:560–566
  10. Raab SS, Nakhleh RE, Ruby SG (2005) Patient safety in anatomic pathology: measuring discrepancy frequencies and causes. Arch Pathol Lab Med 129:459–466
  11. Troxel DB (2003) Pitfalls in the diagnosis of malignant melanoma: findings of a risk management panel study. Am J Surg Pathol 27:1278–1283
  12. Zarbo RJ, Meier FA, Raab SS (2005) Error detection in anatomic pathology. Arch Pathol Lab Med 129:1237–1245
  13. Wakely SL, Baxendine-Jones JA, Gallagher PJ, Mullee M, Pickering R (1998) Aberrant diagnoses by individual surgical pathologists. Am J Surg Pathol 22:77–82
  14. Ramsay AD (1999) Errors in histopathology reporting: detection and avoidance. Histopathology 34:481–490
  15. Kornstein MJ, Byrne SP (2007) The medicolegal aspect of error in pathology: a search of jury verdicts and settlements. Arch Pathol Lab Med 131:615–618
  16. Roy JE, Hunt JL (2010) Detection and classification of diagnostic discrepancies (errors) in surgical pathology. Adv Anat Pathol 17:359–365
  17. Zarbo RJ, Hoffman GG, Howanitz PJ (1991) Interinstitutional comparison of frozen-section consultation. A College of American Pathologists Q-Probe study of 79,647 consultations in 297 North American institutions. Arch Pathol Lab Med 115:1187–1194
  18. White VA, Trotter MJ (2008) Intraoperative consultation/final diagnosis correlation: relationship to tissue type and pathologic process. Arch Pathol Lab Med 132:29–36
  19. Novis DA, Gephardt GN, Zarbo RJ (1996) Interinstitutional comparison of frozen section consultation in small hospitals. A College of American Pathologists Q-Probes study of 18,532 frozen section consultation diagnoses in 233 small hospitals. Arch Pathol Lab Med 120:1087–1093
  20. Olson S, Cheema Y, Harter J, Starling J, Chen H (2006) Does frozen section alter surgical management of multinodular thyroid disease? J Surg Res 136:179–181
  21. Basolo F, Ugolini C, Proietti A, Iacconi P, Berti P, Miccoli P (2007) Role of frozen section associated with intraoperative cytology in comparison to FNA and FS alone in the management of thyroid nodules. Eur J Surg Oncol 33:769–775
  22. DiNardo LJ, Lin J, Karageorge LS, Powers CN (2000) Accuracy, utility, and cost of frozen section margins in head and neck cancer surgery. Laryngoscope 110:1773–1776
  23. Tschopp L, Nuyens M, Stauffer E, Krause T, Zbären P (2005) The value of frozen section analysis of the sentinel lymph node in clinically N0 squamous cell carcinoma of the oral cavity and oropharynx. Otolaryngol Head Neck Surg 132:99–102
  24. Terada A, Hasegawa Y, Yatabe Y, Hyodo I, Ogawa T, Hanai N, Ikeda A, Nagashima Y, Masui T, Hirakawa H, Nakashima T (2008) Intraoperative diagnosis of cancer metastasis in sentinel lymph node of oral cancer patients. Oral Oncol 44:838–843
  25. Manion E, Cohen MB, Weydert J (2008) Mandatory second opinion in surgical pathology referral material: clinical consequences of major disagreements. Am J Surg Pathol 32:732–737
  26. Abt AB, Abt LG, Olt GJ (1995) The effect of interinstitution anatomic pathology consultation on patient care. Arch Pathol Lab Med 119:514–517
  27. Kronz JD, Westra WH, Epstein JI (1999) Mandatory second opinion surgical pathology at a large referral hospital. Cancer 86:2426–2435
  28. Weir MM, Jan E, Colgan TJ (2003) Interinstitutional pathology consultations. A reassessment. Am J Clin Pathol 120:405–412
  29. Tsung JS (2004) Institutional pathology consultation. Am J Surg Pathol 28:399–402
  30. Nakhleh RE, Zarbo RJ (1998) Amended reports in surgical pathology and implications for diagnostic error detection and avoidance: a College of American Pathologists Q-Probes study of 1,667,547 accessioned cases in 359 laboratories. Arch Pathol Lab Med 122:303–309
  31. Howlett DC, Harper B, Quante M, Berresford A, Morley M, Grant J, Ramesar K, Barnes S (2007) Diagnostic adequacy and accuracy of fine needle aspiration cytology in neck lump assessment: results from a regional cancer network over a one year period. J Laryngol Otol 121:571–579
  32. Layfield LJ (2007) Fine-needle aspiration in the diagnosis of head and neck lesions: a review and discussion of problems in differential diagnosis. Diagn Cytopathol 35:798–805
  33. Tandon S, Shahab R, Benton JI, Ghosh SK, Sheard J, Jones TM (2008) Fine-needle aspiration cytology in a regional head and neck cancer center: comparison with a systematic review and meta-analysis. Head Neck 30:1246–1252
  34. Zhang S, Bao R, Bagby J, Abreo F (2009) Fine needle aspiration of salivary glands: 5-year experience from a single academic center. Acta Cytol 53:375–382
  35. Wu M (2010) A comparative study of 200 head and neck FNAs performed by a cytopathologist with versus without ultrasound guidance: evidence for improved diagnostic value with ultrasound guidance. Diagn Cytopathol (in press)
  36. Ganguly A, Giles TE, Smith PA, White FE, Nixon PP (2010) The benefits of on-site cytology with ultrasound-guided fine needle aspiration cytology in a one-stop neck lump clinic. Ann R Coll Surg Engl 92:660–664
  37. Ferlito A, Boccato P, Shaha AR, Carbone A, Noyek AM, Doglioni C, Bradley PJ, Rinaldo A (2001) The art of diagnosis in head and neck tumors. Acta Otolaryngol 121:324–328
  38. Raab SS, Grzybicki DM, Mahood LK, Parwani AV, Kuan SF, Rao UN (2008) Effectiveness of random and focused review in detecting surgical pathology error. Am J Clin Pathol 130:905–912
  39. Trotter MJ, Bruecks AK (2003) Interpretation of skin biopsies by general pathologists: diagnostic discrepancy rate measured by blinded review. Arch Pathol Lab Med 127:1489–1492
  40. Layfield LJ, Anderson GM (2010) Specimen labeling errors in surgical pathology: an 18-month experience. Am J Clin Pathol 134:466–470
  41. Woolgar JA, Triantafyllou A (2009) Pitfalls and procedures in the histological diagnosis of oral and oropharyngeal squamous cell carcinoma and a review of the role of pathology in prognosis. Oral Oncol 45:361–385
  42. Attanoos RL, Bull AD, Douglas-Jones AG, Fligelstone LJ, Semararo D (1996) Phraseology in pathology reports. A comparative study of interpretation among pathologists and surgeons. J Clin Pathol 49:79–81
  43. Powsner SM, Costa J, Homer RJ (2000) Clinicians are from Mars and pathologists are from Venus. Arch Pathol Lab Med 124:1040–1046
  44. Silcocks P, Page M (2001) What constitutes a histological confirmation of cancer? A survey of terminology interpretation in two English regions. J Clin Pathol 54:246–248
  45. Parham DM (2005) Are external quality assurance (EQA) slide schemes a valid tool for the performance assessment of histopathologists? Pathol Res Pract 201:117–121
  46. MacDonald N, Attaran A (2009) Medical errors, apologies and apology laws. CMAJ 180:11–13
  47. Wei M (2007) Doctors, apologies, and the law: an analysis and critique of apology laws. J Health Law 40:107–159
  48. Frable WJ (2006) Surgical pathology—second reviews, institutional reviews, audits, and correlations. What’s out there? Error or diagnostic variant? Arch Pathol Lab Med 130:620–625
  49. Troxel DB (2006) Medicolegal aspects of error in pathology. Arch Pathol Lab Med 130:617–619
  50. Wick MR (2007) Medicolegal liability in surgical pathology: a consideration of underlying causes and selected pertinent concepts. Semin Diagn Pathol 24:89–97
  51. Epstein JI (2001) Pathologists and the judicial process: how to avoid it. Am J Surg Pathol 25:527–537
  52. Martello J (1999) Basic medical legal principles. Clin Plast Surg 26:9–14
  53. Gilbert FJ, Cordiner CM, Affleck IR, Hood DB, Mathieson D, Walker LG (1998) Breast screening: the psychological sequelae of false-positive recall in women with and without a family history of breast cancer. Eur J Cancer 34:2010–2014
  54. Aro AR, Pilvikki Absetz S, van Elderen TM, van der Ploeg E, van der Kamp LJ (2000) False-positive findings in mammography screening induces short-term distress—breast cancer-specific concern prevails longer. Eur J Cancer 36:1089–1097
  55. Brewer NT, Salz T, Lillie SE (2007) Systematic review: the long-term effects of false-positive mammograms. Ann Intern Med 146:502–510
  56. Felton JS (1998) Burnout as a clinical entity—its importance in health care workers. Occup Med (Lond) 48:237–250
  57. Sharma A, Sharp DM, Walker LG, Monson JR (2008) Stress and burnout in colorectal and vascular surgical consultants working in the UK National Health Service. Psychooncology 17:570–576
  58. Demirci S, Yildirim YK, Ozsaran Z, Uslu R, Yalman D, Aras AB (2010) Evaluation of burnout syndrome in oncology employees. Med Oncol 27:968–974
  59. Raab SS, Grzybicki DM (2010) Quality in cancer diagnosis. CA Cancer J Clin 60:139–165
  60. Novis DA (2004) Detecting and preventing the occurrence of errors in the practices of laboratory medicine and anatomic pathology: 15 years’ experience with the College of American Pathologists’ Q-PROBES and Q-TRACKS programs. Clin Lab Med 24:965–978

Copyright information

© Springer-Verlag 2011

Authors and Affiliations

  • Julia A. Woolgar (1)
  • Alfio Ferlito (2) (Email author)
  • Kenneth O. Devaney (3)
  • Alessandra Rinaldo (2)
  • Leon Barnes (4)

  1. Oral Pathology, School of Dental Sciences and Dental Hospital, University of Liverpool, Liverpool, UK
  2. ENT Clinic, University of Udine, Udine, Italy
  3. Department of Pathology, Allegiance Health, Jackson, USA
  4. Department of Pathology, University of Pittsburgh School of Medicine, Pittsburgh, USA