Comparative Clinical Pathology, Volume 12, Issue 4, pp 174–181

Evaluation of acanthocyte count as a diagnostic test for canine haemangiosarcoma

  • M. S. Tant
  • J. H. Lumsden
  • R. M. Jacobs
  • B. N. Bonnett
Original Article


Abstract

A retrospective case–control study was conducted based on the records of 80 dogs with visceral haemangiosarcoma (HSA) and 200 dogs with various diseases that had clinical features similar to HSA. All dogs were more than 1 year old, had histologically confirmed disease, and had a complete blood count performed prior to the final diagnosis. A standard protocol was used to count acanthocytes on one blood film from each dog. Acanthocyte count had a maximum diagnostic sensitivity of 53.8% (and specificity of 61.5%) at a cutpoint of ≥1 acanthocyte/2,000 red blood cells. A diagnostic specificity of 100% (and sensitivity of 7.5%) was achieved at a cutpoint of >71 acanthocytes/2,000 red blood cells. The precision of acanthocyte count, within and between raters, varied from poor (unweighted kappa = 0.26) to good (weighted kappa = 0.71) due to the subjective nature of the identification of acanthocytes. Although dogs with acanthocytes were more likely to have HSA (P=0.02), and dogs with HSA had higher acanthocyte counts than controls (P=0.003), acanthocyte count had limited ability to distinguish between dogs with HSA and unaffected dogs with similar signs, as indicated by the receiver operating characteristic plot, which lay approximately along the diagonal. There was no level of acanthocytosis at which HSA could be ruled out, and although HSA could be ruled in at counts >71 acanthocytes/2,000 red blood cells, only six of the 80 dogs with HSA in the study could be identified by this cutpoint.


Keywords: Acanthocyte · Canine · Diagnosis · Haemangiosarcoma


Introduction

The acanthocyte is a normovolaemic, irregularly spiculated red blood cell (RBC) that is characterised by 2–12 (sometimes up to 20) finger-like projections unevenly distributed over the surface of the cell. The projections vary in length and width and typically have blunt or clubbed tips (Bessis 1977). Acanthocyte formation in the dog has been attributed to altered erythrocyte membrane lipid composition as a result of disturbances in lipoprotein metabolism secondary to liver disease (Shull et al. 1978; Rebar et al. 1981). However, acanthocytes are also found in dogs with non-hepatic disease, especially conditions characterised by red cell fragmentation (Rebar et al. 1981; Weiss et al. 1993). The mechanism of acanthocyte formation in canine haemangiosarcoma (HSA) is not known; altered lipoprotein metabolism may play a role, especially in hepatic HSA, but fragmentation injury is also likely to occur within the tortuous vascular spaces and low-oxygen environment of large, blood-filled tumours (Rebar et al. 1980).

A link between acanthocytosis and HSA was first suggested in a report by Gelberg and Stackhouse (1977), who described “numerous” acanthocytes in the peripheral blood of three dogs with splenic HSA. In subsequent years, several other authors observed acanthocytes in dogs with HSA (Rebar et al. 1980; Hirsch et al. 1981; Ng and Mills 1985). A “definite association of acanthocytosis with haemangiosarcoma in the dog” was reported by one author (Hirsch et al. 1981).

However, the studies linking acanthocytosis and HSA were descriptive only, involved small numbers of dogs, and frequently did not include appropriate controls. These studies also did not assess the validity or usefulness of acanthocytosis as a diagnostic indicator of HSA. In spite of this, a correlation between acanthocytosis and HSA is described in general veterinary textbooks (Couto 1989; Jain 1993; Thrall and Weiser 1997). Clinical situations may arise where the presence or absence of acanthocytes may influence a clinician’s decision regarding the probability of HSA in an individual patient with compatible clinical signs.

The purpose of our study was to investigate the association between HSA and acanthocytes and to assess the utility of acanthocyte count as a diagnostic test for canine HSA, in terms not only of analytical accuracy and precision, but also diagnostic sensitivity, specificity, and predictive value.

Materials and methods

We designed a retrospective case–control study using the medical records of the Veterinary Teaching Hospital (VTH) of the Ontario Veterinary College for the period January 1983 to February 1994. A preliminary selection of cases and controls from the computerised medical records database of the VTH was followed by detailed examination of the paper files for all candidate dogs in order to determine final eligibility for the study.

Definitions and criteria for inclusion and exclusion

Cases were defined as dogs with a histological diagnosis of HSA based on surgical biopsy or post-mortem examination. For inclusion in the study, dogs had to be more than 1 year old and had to have an eligible complete blood count (CBC) in the medical record. An eligible CBC was defined as the first sample collected after the dog’s admission to the VTH, before intravenous therapy had been started, and no more than 8 days prior to the histological diagnosis. Dogs were excluded from the case group if they had a second concurrent cancer or other primary disease such as diabetes mellitus or hypothyroidism diagnosed during hospitalization or at post mortem.

Controls were defined as dogs with clinical signs or preliminary findings resembling HSA, but with a final diagnosis of disease other than HSA. Eligible findings included abdominal mass; abdominal, thoracic, or cardiac neoplasia; abdominal, thoracic, or pericardial effusion; and internal haemorrhage. Control dogs were required to have a histological diagnosis of their disease, to be more than 1 year old, and to have an eligible CBC as defined for the case group. Dogs were excluded from the control group if they had concurrent HSA or were suffering from recent trauma, post-surgical complications, or acute surgical conditions such as gastric dilatation and torsion.

Dogs were excluded from both case and control groups if they had received intravenous fluid therapy, a blood transfusion, or chemotherapy prior to collection of the CBC. Dogs were also excluded if either the medical record or peripheral blood smear was unavailable for examination.

Acanthocyte study

One eligible blood smear was retrieved for each of the 280 dogs in the study. Prior to microscopic examination, the slides were randomised and masked so that each slide was identified only by an assigned sequence number. Using a Miller’s ocular (Leica, Toronto, Canada), which is the reticule used for routine reticulocyte counts (Brecher and Schneiderman 1950), the primary author (rater A), a graduate student in veterinary clinical pathology, evaluated 1,500–2,000 RBCs in the monolayer of each blood film and counted the number of acanthocytes among them.

The repeatability of acanthocyte counts was determined by the measurement of the agreement within and between raters on repeated readings of a subset of the original 280 blood films. The intra-observer agreement involved a subset of 195 of 280 films, randomly selected and re-examined by rater A. The inter-observer agreement study was performed by an independent observer (rater B), an experienced veterinary clinical pathologist. In the inter-observer study, rater B examined the first 95 of the 195 slides read twice by rater A. Rater B then re-examined 62 of these 95 films to allow further assessment to be made of intra-observer agreement. For these agreement studies, acanthocytes were counted in accordance with the original protocol, and the smears remained masked, being identified only by a sequence number.

Photographic study

Owing to inter-observer variation in the acanthocyte study, a photographic study was designed to investigate the causes of variation, and to establish if a consensus gold standard existed for the acanthocyte. For this study 5×7-in. colour photomicrographs were taken (Provis True Research System Microscope Model AX70, Olympus Optical Company, Japan) of blood smears from dogs with histologically confirmed HSA and from control dogs. All fields were selected at ×400 and photographed to give a final magnification of ×1,000. A total of 37 photographs was taken, including fields with many acanthocytes, fields with no acanthocytes, and fields with artefactual erythrocyte shape changes that might be confused with acanthocytes. The photographs were randomised and masked, and given to four observers (raters 1–4) who independently examined the photographs and recorded the number of acanthocytes seen in each photograph. The observers included the two original raters, plus two diplomates of the American College of Veterinary Clinical Pathology, each with extensive experience in veterinary haematology. When the photographs had been examined once, they were re-randomised and masked, and the process was repeated by all four observers.

Statistical analysis

Descriptive statistics, two-sample Student’s t-test, and chi-square tests were performed on acanthocyte counts taken from blood smears of dogs with HSA and control dogs. We evaluated acanthocyte count as a diagnostic test for HSA by calculating its sensitivity, specificity, and predictive value, using histopathology as the gold standard. We examined possible diagnostic cutpoints, using a receiver operating characteristic (ROC) plot (Zweig and Campbell 1993).
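As an illustration of these calculations, the sketch below reproduces the arithmetic at the ≥1 acanthocyte/2,000 RBCs cutpoint. The 2×2 cell counts are reconstructed from the reported percentages (53.8% sensitivity among 80 cases; 38.5% of 200 controls with acanthocytes) and are illustrative only:

```python
# Diagnostic-test arithmetic from a 2x2 table; cell counts reconstructed
# from the reported percentages, for illustration only.

def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and predictive values from a 2x2 table."""
    sensitivity = tp / (tp + fn)   # P(test positive | disease present)
    specificity = tn / (tn + fp)   # P(test negative | disease absent)
    ppv = tp / (tp + fp)           # P(disease present | test positive)
    npv = tn / (tn + fn)           # P(disease absent | test negative)
    return sensitivity, specificity, ppv, npv

# Cases: 43 of 80 dogs with HSA had >=1 acanthocyte; controls: 77 of 200.
sens, spec, ppv, npv = diagnostic_metrics(tp=43, fp=77, fn=37, tn=123)
print(f"sensitivity={sens:.1%} specificity={spec:.1%} PPV={ppv:.1%} NPV={npv:.1%}")
```

The resulting negative predictive value (76.9%) falls within the equivocal 72.9%–78.5% range reported below.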

In the blood film study, precision of acanthocyte counts was assessed from both continuous data (original counts) and categorical data after the counts had been grouped into 0, 1–15, 16–50, and >50 acanthocytes/2,000 RBCs. These categories were arbitrarily assigned to approximate the clinical decision points of absent, mild, moderate and severe acanthocytosis, respectively. Precision was expressed as agreement within and between raters. Agreement was quantified via the kappa (κ) statistic for categorical data, and the intra-class correlation coefficient (ICC) for continuous data. Both unweighted kappa (κUW) and weighted kappa (κW) were used for categorical data to assess exact and relative agreement, respectively. The 95% confidence intervals (Shoukri and Edge 1996) were calculated for all κ.
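A minimal sketch of the two kappa variants, assuming quadratic disagreement weights and hypothetical ratings coded into the four ordinal categories above (0 = absent … 3 = severe):

```python
# Cohen's kappa, unweighted and quadratically weighted, for ordinal ratings
# coded 0..n_cat-1. Ratings below are hypothetical, for illustration.

def weighted_kappa(r1, r2, n_cat, weighted=True):
    """kappa = 1 - (observed disagreement) / (chance-expected disagreement)."""
    n = len(r1)
    obs = [[0.0] * n_cat for _ in range(n_cat)]   # observed joint proportions
    for a, b in zip(r1, r2):
        obs[a][b] += 1.0 / n
    p1 = [sum(obs[i][j] for j in range(n_cat)) for i in range(n_cat)]  # marginals
    p2 = [sum(obs[i][j] for i in range(n_cat)) for j in range(n_cat)]
    def w(i, j):
        if not weighted:
            return 0.0 if i == j else 1.0            # any mismatch counts fully
        return (i - j) ** 2 / (n_cat - 1) ** 2       # quadratic: penalise big misses
    d_obs = sum(w(i, j) * obs[i][j] for i in range(n_cat) for j in range(n_cat))
    d_exp = sum(w(i, j) * p1[i] * p2[j] for i in range(n_cat) for j in range(n_cat))
    return 1.0 - d_obs / d_exp

rater_a = [0, 0, 1, 1, 2, 2, 3, 3, 1, 2]   # hypothetical first readings
rater_b = [0, 1, 1, 2, 2, 2, 3, 2, 1, 3]   # hypothetical second readings
print(weighted_kappa(rater_a, rater_b, 4, weighted=False))  # exact agreement
print(weighted_kappa(rater_a, rater_b, 4, weighted=True))   # relative agreement
```

Because every disagreement in this toy example is off by only one category, κW is substantially higher than κUW, mirroring the exact-versus-relative pattern reported in the blood film study.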

In the photographic study, we quantified agreement, using κUW, κW, and ICC. For intra-observer agreement, acanthocyte counts were grouped into 17 ordinal categories: 0, 1–2, 3–5, 6–10...111–130, 131–150 acanthocytes/photograph. For inter-observer agreement, the data were categorised as 0, 1–15, 16–50 and >50 acanthocytes/photograph to parallel the groupings used for inter-observer agreement in the blood film study.

The STATISTIX software program (version 1.0 for Windows, 1996, Analytic Software, Tallahassee Fla., USA) was used for the descriptive statistics and for tests of association. The ICCs were generated with the Statistical Analysis System Institute software program (version 1 for Windows, 1996, SAS Institute, Cary, N.C., USA).


Results

The final study group consisted of 80 dogs with HSA and 200 control dogs. The dogs with HSA all had masses in the thoracic or abdominal viscera, or both. The frequency distribution of acanthocyte counts for cases and controls is presented in Fig. 1. The range of acanthocyte counts was 0–241 acanthocytes/2,000 RBCs for dogs with HSA, and 0–71 acanthocytes/2,000 RBCs for controls. The mean acanthocyte count (± standard deviation) was 18±42/2,000 RBCs for dogs with HSA, which was significantly higher than the value 4±10/2,000 RBCs for the control group (P=0.003). The chi-square test demonstrated a significant association between the presence of acanthocytes and HSA (P=0.02). The coefficient of variation (CV) varied with the degree of acanthocytosis; at very low acanthocyte counts (<2/2,000 RBCs) the CV was 235.5%, at high counts (>20/2,000 RBCs) it was 83.2%, and in the middle-range counts (2–20/2,000 RBCs) it was 68.4%. Acanthocytes were not found in 46% of dogs with HSA, and in the control group, acanthocytes were present in 38.5% of the dogs, including 13 dogs (6.5%) with moderate to marked acanthocytosis (>15 acanthocytes/2,000 RBCs).
Fig. 1

Frequency distribution of acanthocyte count for dogs with HSA (n=80) and controls (n=200)

Acanthocyte count had a maximal diagnostic sensitivity of 53.8% (and specificity of 61.5%) at a cutpoint of ≥1 acanthocyte/2,000 RBCs. Maximum diagnostic specificity was 100% (sensitivity 7.5%) at a cutpoint of >71 acanthocytes/2,000 RBCs, although specificity was greater than 90% for all counts >10 acanthocytes/2,000 RBCs. The sensitivity and specificity of acanthocyte count at all possible cutpoints are presented graphically as an ROC curve in Fig. 2. Positive predictive value was good (85.7%) for counts greater than 67 acanthocytes/2,000 RBCs and reached 100% for counts greater than 71 acanthocytes/2,000 RBCs. In comparison, negative predictive values were equivocal (72.9%–78.5%) across all cutpoints.
Fig. 2

ROC plot for acanthocyte count. The diagonal dotted line represents a test with no ability to distinguish between the presence or absence of a given disease
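The cutpoint sweep that underlies an ROC plot of this kind can be sketched as follows; the count lists are hypothetical stand-ins for the per-dog acanthocyte counts:

```python
# Build ROC coordinates by sweeping every possible cutpoint 'count >= c'
# over hypothetical case and control acanthocyte counts.

def roc_points(case_counts, control_counts):
    """(false-positive rate, true-positive rate) at each candidate cutpoint."""
    cutpoints = sorted(set(case_counts + control_counts + [0]))
    pts = []
    for c in cutpoints:
        sens = sum(x >= c for x in case_counts) / len(case_counts)
        spec = sum(x < c for x in control_counts) / len(control_counts)
        pts.append((1 - spec, sens))
    return pts

cases = [0, 0, 1, 3, 8, 25, 80, 120]      # hypothetical counts/2,000 RBCs
controls = [0, 0, 0, 1, 1, 2, 5, 15]
for fpr, tpr in roc_points(cases, controls):
    print(f"FPR={fpr:.2f}  TPR={tpr:.2f}")
```

A test that discriminates well produces points bowed toward the upper left corner; a curve lying near the diagonal, as reported here for acanthocyte count, indicates little discriminating ability.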

Blood film study

Exact acanthocyte counts were not highly repeatable in the blood film study, and the κUW values for intra-observer agreement were generally poor for both observers. However, a relative measure of intra-observer agreement was achieved, as reflected by the higher κW values (Table 1). The ICC for rater A was excellent (0.92); however, rater B exhibited systematic disagreement between first and second readings, resulting in a relatively poor ICC (0.51).
Table 1

Intra-observer agreement for acanthocyte counts on peripheral blood smears. Shown are κUW and κW values (95% CI) for categorical data and ICCs for continuous data. n=number of slides read in each reading

Reading pair        κUW (95% CI)       κW (95% CI)        ICC
Rater A (n=195)a    0.32 (0.19–0.44)   0.64 (0.52–0.75)   0.92
Rater A (n=62)b     0.46 (0.27–0.66)   0.72 (0.55–0.88)   0.92
Rater B (n=62)      0.30 (0.14–0.47)   0.62 (0.50–0.76)   0.51

a 195 of 280 blood films read twice by rater A as initial intra-observer study

b 62 of 195 blood smears read twice by rater A that were also read twice by rater B

A similar pattern was found for inter-observer agreement: agreement was poor between raters A and B (Table 2) on first and second readings for exact acanthocyte counts (κUW=0.26 and 0.39), although relative agreement was fair to good (κW=0.53 and 0.71). The ICCs for three different inter-observer comparisons are shown in Table 3 and indicate good to excellent agreement, with better agreement on the second reading than on the first.
Table 2

Inter-observer agreement (raters A, B) for acanthocyte counts on peripheral blood smears. Shown are κUW and κW values (95% CI) for categorical data. n=number of slides in subset

Reading           κUW (95% CI)       κW (95% CI)
First reading     0.26 (0.12–0.40)   0.53 (0.38–0.67)
Second reading    0.39 (0.20–0.58)   0.71 (0.53–0.85)

Table 3

Inter-observer agreement (raters A, B) for acanthocyte counts on peripheral blood smears. ICCs for the subset of 62 blood smears read by both observers

Photographic study

In the photographic study, the κUW values for intra-observer agreement across 17 categories of acanthocyte counts were fair to poor for raters 1–3, and good for rater 4. The κW values and ICC for the individual raters were excellent (Table 4).
Table 4

Intra-observer agreement for acanthocyte counts on photographs. Shown are κUW and κW values (95% CI) and ICCs for raters 1–4 for two readings of 37 photographs of peripheral blood containing varying numbers of acanthocytes

Rater      κUW (95% CI)       κW (95% CI)
Rater 1    0.48 (0.31–0.64)   0.97 (0.94–0.99)
Rater 2    0.31 (0.13–0.49)   0.91 (0.84–0.98)
Rater 3    0.51 (0.34–0.67)   0.96 (0.94–0.99)
Rater 4    0.65 (0.47–0.83)   0.96 (0.93–1.00)


Exact inter-observer agreement between the four raters for both readings of the photographs varied from poor to excellent, with the majority of κUW values falling between 0.40 and 0.57, indicating only fair agreement (Tables 5 and 6). When measured with κW, the level of agreement improved dramatically, and the majority of values indicated excellent agreement. The ICC for the four raters for first and second readings was 0.71 and 0.70, respectively.
Table 5

Inter-observer agreement for first reading of acanthocyte counts on photographs. Shown are κUW and κW values (95% CI) for raters 1–4 for 37 photographs of peripheral blood containing varying numbers of acanthocytes

Comparison        κUW (95% CI)       κW (95% CI)
Rater 2 vs 1      0.47 (0.26–0.68)   0.54 (0.32–0.75)
Rater 3 vs 1      0.80 (0.64–0.96)   0.91 (0.83–0.99)
Rater 3 vs 2      0.52 (0.30–0.74)   0.69 (0.53–0.85)
Rater 4 vs 1      0.56 (0.35–0.77)   0.77 (0.61–0.92)
Rater 4 vs 2      0.60 (0.38–0.82)   0.78 (0.65–0.91)
Rater 4 vs 3      0.62 (0.41–0.83)   0.83 (0.71–0.95)

Table 6

Inter-observer agreement for the second reading of acanthocyte counts on photographs. Shown are κUW and κW values (95% CI) for raters 1–4 for 37 photographs of peripheral blood containing varying numbers of acanthocytes

Comparison        κUW (95% CI)           κW (95% CI)
Rater 2 vs 1      0.12 (−0.04 to 0.29)   0.46 (0.28–0.65)
Rater 3 vs 1      0.57 (0.36–0.78)       0.83 (0.72–0.93)
Rater 3 vs 2      0.28 (0.09–0.48)       0.58 (0.41–0.75)
Rater 4 vs 1      0.40 (0.19–0.60)       0.66 (0.47–0.85)
Rater 4 vs 2      0.53 (0.31–0.75)       0.78 (0.67–0.90)
Rater 4 vs 3      0.45 (0.21–0.68)       0.76 (0.62–0.90)

Although there was good inter-rater agreement when the four raters were considered as a group, there were patterns of variability between observers. There was consistently better agreement between raters 1 and 4, and between raters 2 and 3. Also, among the four observers, rater 4 had inter-observer κW values consistently greater than 0.75, while rater 2 had κW values consistently below 0.75.


Discussion

The unique feature of this study, in contrast to earlier descriptive reports, is the inclusion of an appropriate comparison group, which permits a statistical association to be established between acanthocytosis and HSA and allows the utility of acanthocyte count as a diagnostic test to be assessed.

One weakness of the case–control study, as an experimental design, is the potential for bias in the selection of the study population (Hayden et al. 1982). This bias is frequently a result of incomplete or inaccurate records, as well as flawed selection criteria. In the current study, although computerised medical records were used for initial selection, the effect of poor record quality was minimised by the direct examination of original documents. This both ensured the accuracy of the data, and confirmed the eligibility of all dogs in the study. In addition, the selection criteria for inclusion and exclusion were specific and unequivocal, and were applied with equal rigour to both cases and controls to avoid bias.

Histopathology was used as the best “gold standard” available to ensure that all dogs in the case group truly had HSA and that dogs in the control group had other diseases. Primary selection of dogs for the case group was relatively straightforward and depended on a recorded histological diagnosis of HSA. However, selection of the control group was problematic, since, beyond the prerequisite histological diagnosis of disease other than HSA, selection required a reconstruction of the clinical diagnostic process to determine if HSA might reasonably have been a rule-out for the dog. It could be argued that clinical findings such as abdominal mass and pericardial effusion were arbitrary criteria applied to controls even though similar findings were not required for the dogs in the case group. However, the clinical features matched in the controls were those most commonly reported in dogs with HSA and, therefore, represented at least the “classic” presentations of HSA. We made every effort to select control dogs with conditions that resembled some aspect of HSA, but whether HSA was a differential diagnosis at the time of clinical presentation was difficult to know from the existing records. The use of a control group in this study is an improvement over previous studies, and such a group is a credible backdrop against which to study the association between acanthocytosis and HSA.

Diagnostic tests are performed for a variety of reasons (Sackett et al. 1991b), but for the clinician, a primary objective is to classify subjects into clinically meaningful subgroups so that appropriate patient care can be implemented (Martin and Bonnett 1987). The usefulness of a diagnostic test is determined by its ability to differentiate between different diseases with similar clinical signs (Sackett et al. 1991b). Usefulness is measured by objective criteria such as diagnostic sensitivity and specificity, precision (repeatability), and predictive value (Sackett et al. 1991c), as well as by subjective criteria, which help to determine the advantages of one test over another (e.g. allows for earlier detection of disease, or is less expensive, invasive, or time-consuming) (Sackett et al. 1991b).

The results of the current study confirm that there is a statistical association between acanthocytosis and HSA. However, the study also demonstrates that acanthocyte count has limited utility as a diagnostic test for the disease.

Dogs with HSA were significantly more likely to have acanthocytes and to have significantly higher acanthocyte counts than controls. However, the high percentage of case dogs that did not have acanthocytes in their blood films resulted in a very poor sensitivity for acanthocyte count. Maximum sensitivity was 53.8% at a level of ≥1 acanthocyte/2,000 RBCs. A consequence of the low sensitivity is that there is no level of acanthocytosis at which HSA can be ruled out.

Although sensitivity was poor, specificity was excellent for all degrees of acanthocytosis except for very low acanthocyte counts. A consequence of the high specificity was a strong positive predictive value for marked acanthocytosis of >71 acanthocytes/2,000 RBCs. At this cutpoint, positive predictive value was 100%, and acanthocyte count could be expected confidently to rule in HSA. However, the associated sensitivity at this level was 7.5%. So, although marked acanthocytosis would rule in HSA, it would be successful in detecting disease in only a small proportion of truly affected dogs.

Positive predictive value was high, due to the relatively small proportion of control dogs that had acanthocytes. By comparison, negative predictive values were equivocal, approximately 75%, at all levels of acanthocytosis, due to the high proportion of dogs with HSA that did not have acanthocytes.

The ROC curve for acanthocyte count falls slightly away from the diagonal, indicating some ability to distinguish between HSA and other disease states. However, the curve remains closer to the diagonal than to the upper left corner at all points, illustrating the generally weak performance of acanthocyte count as a diagnostic test. Also, there is no single point that could be interpreted as a cutpoint with any greater diagnostic significance than any other cutpoint, a further demonstration of the limited utility of the test.

We assessed the repeatability of acanthocyte counts by both intra-observer and inter-observer agreement, using κ and the ICC. We used quadratic weighting of kappa, which penalises extreme disagreement, to quantify the amount of disagreement between observers. The interpretation of κ used in this study, following Fleiss (1981), was: κ<0.40 represents poor agreement beyond chance, κ>0.75 represents excellent agreement beyond chance, and values between 0.40 and 0.75 represent fair (0.40≤κ<0.60) to good (0.60≤κ≤0.75) agreement beyond chance.

The advantage of the κ statistic is that it can measure agreement between categories that have been structured to represent the diagnostic cutpoints used in the day-to-day application of the test. The disadvantage of the κ statistic is that artefactual disagreement can be created when values are similar but on opposite sides of a fixed cutpoint. This drawback is counterbalanced by use of the ICC.

The ICC measures variation within a group of observations relative to variation between groups. If the observations in a group are similar, they are said to be correlated and will yield a positive correlation coefficient (Snedecor and Cochran 1967; Keuhl 1994). The ICC is essentially identical to kappa (Fleiss 1981) and is interpreted similarly (Sackett et al. 1991b). Its advantages are that, because it is based on continuous data, agreement is not lost across cutpoints, and it is also able to detect systematic variation between observations by producing a lower ICC when bias exists. [Bias is defined as “any effect at any stage of an investigation or inference tending to produce results that depart systematically from the true values” (Last 1983).]
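A sketch of this behaviour, assuming a one-way ICC with two readings per slide and hypothetical counts, shows how close slide-by-slide agreement yields a high ICC while a systematic shift between readings (bias) lowers it:

```python
# One-way intraclass correlation for paired readings: within-slide variance
# (disagreement between readings) compared with between-slide variance.
# All counts below are hypothetical, for illustration.

def icc_oneway(pairs):
    """ICC(1) for two readings per subject: (MSB - MSW) / (MSB + MSW)."""
    n, k = len(pairs), 2
    grand = sum(a + b for a, b in pairs) / (n * k)
    ms_between = k * sum(((a + b) / k - grand) ** 2 for a, b in pairs) / (n - 1)
    ms_within = sum((a - (a + b) / 2) ** 2 + (b - (a + b) / 2) ** 2
                    for a, b in pairs) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

consistent = [(0, 1), (5, 4), (20, 22), (60, 55), (2, 2), (35, 33)]
biased = [(a, b + 20) for a, b in consistent]   # second reading always higher
print(icc_oneway(consistent), icc_oneway(biased))
```

The biased readings preserve the ranking of slides but systematically depart from the first reading, so the ICC drops, which is exactly the property that made the ICC useful for detecting rater B's systematic disagreement.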

Blood film study

The κUW values for both intra-observer and inter-observer agreement were poor for both raters in the blood film study, indicating there was little agreement within and between raters for exact acanthocyte counts. Poor agreement can usually be attributed to three sources: the examiner, the examined, and the examination (Sackett et al. 1991a). The examiners in this study may have introduced disagreement through differences in (1) visual acuity, due either to individual biological variation or to eye strain and fatigue; (2) tolerance for tedious repetitious work, which may have affected the time an observer was willing to spend on assessing individual erythrocytes; (3) interpretation of the test protocol and application of the criteria for acanthocyte identification; (4) experience, primarily in counting acanthocytes.

The blood films, as the “examined”, may have been responsible for a large portion of both the intra-observer and the inter-observer disagreement. Variability in the quality of the blood films, uneven distribution of acanthocytes within the monolayer, and the presence of crenation and drying artefacts, may have skewed the number of acanthocytes present in a selected field or affected the ability of observers to identify acanthocytes.

The examination process was also likely to be a source of considerable disagreement. We made efforts to minimise variation by using standardised equipment, written protocols, and preliminary joint training sessions. However, other factors that might have contributed to disagreement are the inherent error associated with counting and the variability in the appearance of acanthocytes. The error for acanthocyte counts might be expected to be the same as for the routine reticulocyte count, as described elsewhere (Brecher and Schneiderman 1950; Furlong 1973), and would be inversely related to the number of cells counted and the percentage of acanthocytes present. Therefore, acanthocyte counts of 1% (20 acanthocytes/2,000 RBCs) would be expected to have a CV of approximately 22%, and counts of 0.05% (1 acanthocyte/2,000 RBCs) would have an expected CV of almost 100%. The actual CVs were worse than the expected values, especially at very low counts, which demonstrates that counting error alone could not account for all of the disagreement between raters.
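The expected CVs quoted above follow from binomial counting error, CV = √((1 − p)/(Np)), where p is the true acanthocyte fraction and N the number of cells examined. A quick check of the two figures:

```python
import math

def counting_cv(p, n_cells):
    """Expected CV (as a fraction) of a differential count under binomial sampling."""
    return math.sqrt((1 - p) / (n_cells * p))

# 1% acanthocytes (20/2,000 RBCs) -> CV of approximately 22%
print(f"{counting_cv(0.01, 2000):.0%}")
# 0.05% acanthocytes (1/2,000 RBCs) -> CV of almost 100%
print(f"{counting_cv(0.0005, 2000):.0%}")
```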

The acanthocyte itself might have been a source of variability in the examination. Some authors report that acanthocytes can be “readily recognised” (Hirsch et al. 1981). Yet others warn about the need to differentiate acanthocytes from echinocytes and staining artefacts (Gelberg and Stackhouse 1977), suggesting that a degree of discernment may be necessary in some situations. The “classic” acanthocyte is unlikely to be mistaken for anything else, but there is no irrefutable way for an observer to determine if a poikilocyte is an acanthocyte. There is an element of decision making involved in the identification of acanthocytes, and this adds considerably to the potential for disagreement between raters.

The subset of blood films used to assess inter-observer and intra-observer agreement might also have contributed to poor precision. The 95 and 62 slides read by rater B were not randomly selected, and it is possible that there was uneven representation of mild, moderate, and marked acanthocytosis. If one level of acanthocytosis were more difficult to read than another level, and the smaller subsets contained a higher proportion of these slides, then a lower rate of agreement would be expected. However, rater A achieved an identical ICC for the 62 slides common to both observers and the 195 slides read for the initial intra-observer study. This suggests that the subset of 62 slides was representative, and that sampling was not responsible for the loss of precision evident in the ICC for rater B.

Photographic study

The photographic study was an effort to assess how much of the observer disagreement in the blood film study might be due to imperfect and/or inconsistent recognition of acanthocytes, and to determine if there was a “consensus gold standard” for the acanthocyte. Photographs were used to eliminate the variability associated with field selection, focus, biological variation etc. It was expected that by using this standardised presentation, four experienced individuals would achieve high intra-observer and inter-observer agreement on acanthocyte counts. Since substantial agreement may serve in lieu of a true gold standard (Martin and Bonnett 1987), the demonstration of a “consensus” gold standard for the acanthocyte would then imply that the poor agreement in the blood film study was likely owing to variability in observers or blood films rather than a conceptual difference about what constituted an acanthocyte.

The data from the photographic study demonstrated that acanthocyte counts were repeatable with relative, though not perfect, precision by trained observers. The difference between κUW and κW for intra-rater agreement affirmed that while most raters did very poorly at repeating acanthocyte counts exactly, even in the rigidly controlled presentation, they were consistently close in their quantification. It could be argued that 17 categories for acanthocyte counts was unduly restrictive and contributed to lower rates of exact agreement. However, with a photograph as a fixed presentation, a higher degree of intra-rater precision might reasonably be expected. It was important to determine if that level of agreement could be achieved, and by scoring with narrower categories, it was possible to define the level of agreement that was actually achieved.

In contrast, inter-observer agreement was scored more generously, because under “working conditions” the exact number of acanthocytes is not likely to be relevant, as compared to the general categories of absent, mild, moderate or marked acanthocytosis. Therefore, the categories of 0, 1–15, 16–50, and >50 acanthocytes/2,000 RBCs were deemed more appropriate, as they had been for the blood film study. Using broad categories and κW to give credit for relative agreement, most observers were able to achieve good to excellent scoring for all inter-observer comparisons, with the exception of rater 2.

There was a repeatable sub-grouping of agreement among raters 1, 2, 3, and 4 that did not correlate with years of experience in clinical pathology. The constructed κ tables for the four observers revealed that raters 1 and 3 were consistently more generous in their quantification of acanthocytes than raters 2 and 4, with an average threefold to fivefold difference in actual acanthocyte counts. For any given photograph, rater 1 tended to have the highest count followed in decreasing order by rater 3, rater 4, and rater 2. The significance of this finding is that a consensus gold standard for the acanthocyte could not be demonstrated. Even among experienced veterinary clinical pathologists, there was a strong subjective element in the identification of acanthocytes that resulted in a liberal versus a conservative interpretation of what constituted an acanthocyte. It is important to note that, although the two sub-groups of raters differed in actual acanthocyte counts, there was good general agreement among the four raters about which photographs showed mild, moderate, or marked acanthocytosis. Furthermore, the ICCs for first and second readings of the photographs were the same for these observers. This demonstrated that inter-rater agreement was repeatable and that some measure of precision had been achieved in the counting of acanthocytes, in spite of the inter-rater variability.

The photographs that generated the most variability were those with numerous crenated red cells and red cells with miscellaneous spiculation. These changes are likely a common source of confusion when counting acanthocytes.

The most consistent acanthocyte counts were achieved by rater A, who had, over the course of these studies, counted more acanthocytes than the other observers. This suggested that experience might have enhanced precision, without necessarily improving accuracy. It was clear that observers were either conservative or liberal in their interpretation of what constituted an acanthocyte, and it was this basic conceptual difference that generated most of the disagreement between observers. Rater A was conservative in the identification of acanthocytes. Therefore, the arbitrary cutpoints of 0, 1–15, 16–50, and >50 acanthocytes/2,000 RBCs, which were established on the data generated by rater A, should be considered conservative as well.


This study provides statistical evidence of an association between acanthocytosis and the presence of HSA in the dog. Dogs with acanthocytes in their peripheral blood smears were more likely to have HSA, and dogs with HSA had higher acanthocyte counts than dogs with clinically similar diseases. These findings corroborate the anecdotal evidence in the literature that dogs with HSA frequently have acanthocytosis.

However, acanthocyte count is a poor diagnostic test for HSA in individual patients, despite a proven statistical association between acanthocytosis and the disease. Technically, the test suffers from having neither a formal gold standard nor a consensus gold standard for the acanthocyte itself. The subjective nature of acanthocyte identification, together with the many other sources of variability inherent in most technical tests, results in generally poor precision for acanthocyte counts. The lack of precision does not foster confidence in the accuracy of the test. The sensitivity and specificity reported in this study are based on one conservative observer's interpretation of acanthocytes and are likely to be an optimistic estimation of how the test would perform in the "real world". In other laboratories or diagnostic situations, with other observers, the precision of acanthocyte counts might be even worse, and diagnostic sensitivity and specificity, as poor as they are to begin with, would deteriorate accordingly. However, the study did demonstrate that in select situations counts were repeatable with at least moderate precision, which provides a modicum of validity for the test.

From a clinical perspective, the presence or absence of acanthocytes does not help in the diagnosis of HSA. There is no level of acanthocytes that will enable a clinician to rule out HSA. The only diagnostic value of acanthocyte count lies in its excellent positive predictive value at counts of >71 acanthocytes/2,000 RBCs (i.e. >3.55% of RBCs). By conservative standards this represents marked acanthocytosis. At this level, acanthocyte count would confidently rule in HSA, although it would detect only a small proportion of diseased dogs.
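The trade-off at this high cutpoint can be reconstructed from a 2×2 table. The sketch below uses the counts reported in the study (6 of 80 HSA dogs exceeded 71 acanthocytes/2,000 RBCs, and no control dogs did); the helper function itself is illustrative, not part of the study's methods:

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity, and predictive values from a 2x2 table."""
    sensitivity = tp / (tp + fn)  # proportion of diseased dogs correctly flagged
    specificity = tn / (tn + fp)  # proportion of unaffected dogs correctly cleared
    ppv = tp / (tp + fp) if tp + fp else float("nan")  # P(disease | positive test)
    npv = tn / (tn + fn) if tn + fn else float("nan")  # P(no disease | negative test)
    return sensitivity, specificity, ppv, npv

# Cutpoint >71 acanthocytes/2,000 RBCs: 6/80 HSA dogs positive, 0/200 controls.
sens, spec, ppv, npv = diagnostic_metrics(tp=6, fn=74, fp=0, tn=200)
print(sens, spec, ppv)  # 0.075 1.0 1.0
```

A positive predictive value of 1.0 is why marked acanthocytosis rules HSA in, while the 7.5% sensitivity is why the cutpoint misses 74 of the 80 affected dogs. Note also that predictive values depend on the 80:200 case-control mix in this study, not on the test alone, so they would shift with disease prevalence in other populations.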

In conclusion, the study confirms that acanthocytosis is associated with HSA, but that for all practical purposes, acanthocyte count is an unreliable and insensitive diagnostic test with limited clinical usefulness.



The contribution of Dr. Kristiina Ruotsalo as an independent observer in this study is gratefully acknowledged, as is the statistical support and advice provided by Dr. Frank Pollari. The efforts of both individuals are sincerely appreciated. This project was financially supported by the Ontario Veterinary College Pet Trust Fund.


  1. Bessis M (1977) Erythrocytic series. In: Blood smears reinterpreted. Springer, New York, pp 64–66
  2. Brecher G, Schneiderman M (1950) A time-saving device for the counting of reticulocytes. Am J Clin Path 20:1079–1083
  3. Couto CG (1989) Diseases of the lymph nodes and the spleen. In: Ettinger SJ (ed) Textbook of veterinary internal medicine, 3rd edn. Saunders, Philadelphia, pp 2225–2245
  4. Fleiss JL (1981) The measurement of interrater agreement. In: Statistical methods of rates and proportions, 2nd edn. Wiley, New York, pp 212–236
  5. Furlong MB (1973) Interpreting the reticulocyte count. Postgrad Med 54:207–211
  6. Gelberg H, Stackhouse LL (1977) Three cases of canine acanthocytosis associated with splenic neoplasia. Vet Med Small Anim Clin 72:1183–1184
  7. Hayden GF, Kramer MS, Horwitz RI (1982) The case-control study. J Am Med Assoc 247:326–331
  8. Hirsch VM, Jacobson J, Mills JHL (1981) A retrospective study of canine hemangiosarcoma and its association with acanthocytosis. Can Vet J 22:152–155
  9. Jain NC (1993) Erythrocyte physiology and changes in disease. In: Essentials of veterinary hematology. Lea & Febiger, Philadelphia, pp 133–158
  10. Kuehl RO (1994) Experiments to study variances. In: Statistical principles of research design and analysis. Duxbury, California, pp 129–158
  11. Last JM (1983) A dictionary of epidemiology. Oxford University Press, Toronto, p 10
  12. Martin SW, Bonnett B (1987) Clinical epidemiology. Can Vet J 28:318–325
  13. Ng CY, Mills JN (1985) Clinical and haematological features of haemangiosarcoma in dogs. Aust Vet J 62:1–4
  14. Rebar AH, Hahn FF, Halliwell WH, et al. (1980) Microangiopathic hemolytic anemia associated with radiation-induced hemangiosarcomas. Vet Pathol 17:443–454
  15. Rebar AH, Lewis HB, DeNicola DB, et al. (1981) Red cell fragmentation in the dog: an editorial review. Vet Pathol 18:415–426
  16. Sackett DL, Haynes RB, Guyatt GH, et al. (1991a) The clinical examination. In: Clinical epidemiology, 2nd edn. Little, Brown, Boston, pp 19–49
  17. Sackett DL, Haynes RB, Guyatt GH, et al. (1991b) The selection of diagnostic tests. In: Clinical epidemiology, 2nd edn. Little, Brown, Boston, pp 51–68
  18. Sackett DL, Haynes RB, Guyatt GH, et al. (1991c) The interpretation of diagnostic data. In: Clinical epidemiology, 2nd edn. Little, Brown, Boston, pp 69–152
  19. Shoukri MM, Edge VL (1996) Statistical analysis of cross-classified data. In: Statistical methods for health sciences. CRC Press, Boca Raton, pp 41–135
  20. Shull RM, Bunch SE, Maribei J, et al. (1978) Spur cell anemia in a dog. J Am Vet Med Assoc 173:979–982
  21. Snedecor GW, Cochran WG (1967) One-way classifications. Analysis of variance. In: Statistical methods, 6th edn. Iowa State University Press, Iowa, pp 294–296
  22. Thrall MA, Weiser MG (1997) Hematology. In: Pratt PW (ed) Laboratory procedures for veterinary technicians. pp 33–84
  23. Weiss DJ, Kristensen A, Papenfuss N (1993) Quantitative evaluation of irregularly spiculated red blood cells in the dog. Vet Clin Pathol 22:117–121
  24. Zweig MH, Campbell G (1993) Receiver-operating characteristic (ROC) plots: a fundamental evaluation tool in clinical medicine. Clin Chem 39:561–577

Copyright information

© Springer-Verlag London Limited 2003

Authors and Affiliations

  • M. S. Tant (1, 2)
  • J. H. Lumsden (2)
  • R. M. Jacobs (2)
  • B. N. Bonnett (3)

  1. Vita-Tech Canada Inc., 1345 Denison Street, Markham, Canada
  2. Department of Pathobiology, Ontario Veterinary College, University of Guelph, Guelph, Canada
  3. Department of Population Medicine, Ontario Veterinary College, University of Guelph, Guelph, Canada
