Surveys are valuable tools to collect opinions, track trends in the application of knowledge, and provide information for developing future strategies. In general, surveys are highly accurate reflections of the personal opinions of the responders, but the responses should not be accepted as evidence-based truths. The information that is reported is the product of learning, memory, anecdote, and bias. This is in contrast to the multiple-choice examination, where there is a correct answer based on scientifically derived, evidence-based acceptable responses. Thus, survey results can and must be interpreted in the context of the surveyed population, the subsets of that population, the relative density of the responses (number queried vs. number responding), the motivation for responding, and the motivation for the responses given. Reviewing several of these inherent biases gives us a greater appreciation of the complexity of, and the insight that can be deduced from, the survey reported in the article “Getting to Better Cancer Care: Results of a Society of Surgical Oncology” by Wong et al.1 For example, consider how the following question might be answered: Should more resources be provided for cancer research? The answer should be a near-unanimous yes if asked of the members of the Society of Surgical Oncology. However, consider how different the answers might be if the word “cancer” were replaced by the word “melanoma.” We can all imagine how the members of the organization would be realigned by this subsetting of the agenda. This fascinating feature of group surveying is evident in many of the responses.

It is worthwhile to consider further some features of the population surveyed. In general, those surveyed are specialized surgeons in primarily academic practices. They are salaried employees whose referral bases derive in part from their recognition as experts and in part from the branding of their home institutions. This is in clear contrast to the other portion of those surveyed, the private practice surgical oncologists, who most likely have blended practices of general and oncologic surgery that may include mandatory emergency and call coverage and many primary responsibilities for practice management (billing, collections, staffing, space rental, equipment, office consumables, etc.). There may also be a differential in the support provided by the institution, academic facility versus private/community hospital, for practice components such as an expensive and technically advanced electronic medical record or access to trainees. This will clearly affect an individual’s ability to complete questionnaires, given the time available and attitudes toward the value of the results. This disparity is well defined in our respondents.

By its mission, the Society of Surgical Oncology has an agenda of producing the best results for cancer patients and disseminating the most modern and best practices. The membership has been shaped over the years by evolving membership requirements. Twenty years ago, these requirements included documentation of 50 or more major cancer cases per year and contribution to the peer-reviewed surgical literature. More recently, the requirements were relaxed to require simply a focus on cancer care, opening the membership to a wider subspecialty group. Twenty-five percent of respondents had been in practice for more than 20 years, probably placing them in the first group, and an additional 25% had been in practice for more than 11 years. This interesting demographic might play a role in the posture of the responses. Thus, the responses may represent a more diverse group in terms of cancer focus, with only half of those surveyed having entered the SSO under the more restrictive membership requirements. Given that 80% of cancer care in the United States is delivered in the community setting, while only one-third of the respondents are from the community, there is a large potential for bias in the survey as it attempts to use the responses to assess where and by whom care should be given versus where it is currently being provided. The pool of university-based surgeons saw a larger percentage of cancer patients; their practices were more exclusive. The survey reflected this demographic fact: 52% of the university surgical oncologists saw more than 20 new cancer patients per month, while only 30% of the private practice respondents saw more than 20 new patients per month.

It is difficult to interpret the distribution of fellowship training among breast surgical oncologists, only one-fifth of whom were fellowship trained. However, this fact certainly raises a very controversial set of questions. Is the lack of fellowship training a deficit for these surgical oncologists, who have been able to acquire the skill set for highly competent breast practices? Is the breast fellowship an artificial designation? Are surgeons who complete a well-balanced surgical oncology fellowship, which produces a pluripotent trained fellow, able to function equally well? Or are there simply not enough breast fellowships to train enough surgeons for the breast specialty? This represents the most common product of surveys: more questions.

The responses to where cancer surgery should be performed highlight the intense bias, or perhaps self-confidence, of our members. Academic surgical oncologists (ASOs) and private practice surgical oncologists (PPSOs) both believed that a high volume of reference cases was not by itself critical for quality care. The PPSOs relied primarily on the surgeon’s skill and less on location. In contrast, the ASOs weighted both surgeon and location. Personally, I think both components are important, but the relative worth of each is uncertain and the ability to measure outcome even more problematic. Then we must consider the “relative value” of any outcome. Setting aside the obvious and unarguable endpoints of serious complications and survival, would a two-day increase in length of stay in a rural hospital without an infrastructure for home health be “worse” than an early discharge from a major high-volume center?

The responding SSO members are to be congratulated for embracing the concept of evidence-based medicine and clinical practice guidelines as drivers of care delivery. However, there is a reluctance to be held to these as standards. This dichotomy between “talking the talk” and “walking the walk” is the very essence of the success or failure of evidence-based best practice. It is a theme that has derailed another major effort to improve care: involvement in clinical trials. This is an area where the Society could place serious emphasis.

In summary, this survey highlights the areas of controversy that exist. Because of the ability to examine the responses of skilled surgeons in various practice settings, the inherent self-protective and self-confident elements of survey responses are clear. I do not think that plans for quality cancer care can be built on the results of surveys conducted within special-interest populations. The only way to continuously improve care is to be honest enough to perform and measure one’s practice against evidence-based guidelines. As a group, we should be willing to place some meaningful parameter at risk: payment tied to adherence to guidelines, inclusion in payer contracts based on benchmark outcomes, or even membership in specialty societies contingent on both adherence to evidence-based practice and the outcomes generated by conformity to those guidelines.