A frequently uttered wisecrack among academic surgeons refers to those manuscripts in which the authors outnumber the study subjects. While ampullary carcinoma is rare, manuscripts on the subject do not usually engender this kind of criticism. As Dr. O'Connell and colleagues state in their manuscript, published in the current issue of Annals of Surgical Oncology, ampullary carcinoma, while uncommon, represents the second most common periampullary malignancy.1 They utilized the Surveillance, Epidemiology, and End Results (SEER) tumor registry to analyze the outcomes of more than 3000 patients who underwent surgical resection for ampullary carcinoma. As the authors state, this represents the largest such analysis of outcomes for patients with this uncommon malignancy. They conclude that, compared with most series published in the last 15 years, the SEER data reveal a lower resection rate, a higher overall mortality rate, and a lower observed 5-year survival. These findings are not particularly enlightening in and of themselves, but rather serve as a gentle reminder of the fragile truths we accept daily in the practice of surgical oncology: that data derived from small, retrospective, single-institution studies are subject to numerous biases. In the current study, the authors reference reports from revered institutions such as Johns Hopkins, Memorial Sloan-Kettering, and Toronto General.2–4 It should not be surprising, but rather expected, that patient outcomes from these institutions might look different from those of a population treated in smaller centers, as represented in the SEER registry. To the authors' credit, they do a reasonably thorough job of discussing the biases that contribute to at least some of these differences.

The first striking difference reported by the authors concerns differential rates of surgical resection. In the SEER registry, 40% of patients with ampullary carcinoma underwent resection, as compared with 82% at Memorial and 88% at Johns Hopkins. As noted in the manuscript, there is likely a strong referral (selection) bias wherein tertiary care centers specializing in complex gastrointestinal and hepatobiliary surgery will see patients who have been prescreened as more likely to be resectable. The more aggressive resection policies of these groups certainly contribute to the difference in resectability rates as well. One important finding from the O'Connell et al. paper is that more than 18% of ampullary cancer patients with stage Ia disease did not undergo surgical resection. While the explanation for this cannot be discerned from the available data, it is consistent with recent findings on the treatment of pancreatic duct carcinoma and points to an area that deserves further study.5

Differences in perioperative mortality rates were also noted between single-institution studies (generally less than 5%) and the SEER registry data (8%). Such differences have been the subject of an extensive body of literature, which generally relates perioperative mortality to comorbid illness (not assessable within SEER) and to surgeon and hospital volume, in a relationship that is not easily defined.

Finally, the 5-year survival for patients in the current study was 36.8%, whereas survival rates from single-institution studies ranged from 40% to 70%. Interestingly, however, some of the larger studies, including those from Johns Hopkins and Memorial Sloan-Kettering, showed only a modest difference from the SEER data, with both reporting a 5-year survival rate of 46%. This difference could easily be explained by the lower perioperative mortality rate at these centers, by a more favorable patient population in terms of both stage and overall health status, and perhaps by a more liberal use of adjuvant therapy. The authors err, however, when they overreach to conclude that their "data highlight the debate for pancreaticoduodenectomy to be regionalized to high volume centers." For the reasons stated previously and acknowledged by the authors, differences in outcomes between large registries and single-institution studies reflect numerous biases, not simply a differential in the quality of health care. One bias the authors fail to acknowledge is perhaps one of the most potent of all: publication bias, our very human tendency to publish results only when they are favorable. Thus, it should always be expected that unbiased registry data will reveal outcomes inferior to those self-reported by individual institutions, and it is probably unrealistic to expect the two to be concordant.

The concept of regionalization has been much debated, but it remains impractical given US geography and healthcare financing. Rather than serving as an agent that drives regionalization, the O'Connell et al. study and others like it serve the important purpose of raising our consciousness regarding what may be a more accurate picture of the natural history of rare diseases and the current patterns of care nationwide. Such studies generate hypotheses that warrant testing in multi-institutional or cooperative group trials, and, by pointing out where these robust datasets fail us, they suggest important data fields that should be incorporated into future registry design. Finally, when such studies reveal that potentially curable patients are not receiving surgical care, they point to potential areas for improvement in education and process that can raise the level of cancer care across the country, a very important contribution indeed.