As computers become increasingly powerful, ubiquitous, and integrated into clinical practice, it seems logical that health professionals should use them to support their clinical decisions. Unfortunately, the benefits of computerized clinical decision support have not yet been fully realized.1 In this issue of JGIM, Medow et al. report a study that attempts to clarify our understanding of this problem.2 Medical residents first committed to a course of action (admitting a hypothetical patient with pneumonia to the floor or to the intensive care unit) and then received management advice that contradicted their decision. The “intervention” in this study was the source of the advice: either an “expert pulmonologist” or a “decision aid” message citing an evidence-based prediction rule (the case was constructed so that two prediction rules3,4 would each justify a different management approach). The investigators found that residents paid more attention to the decision aid, suggesting that the failure to incorporate decision aid advice goes deeper than simply the human/nonhuman source of that advice. While the conclusions are necessarily tentative, the study draws attention to a number of important issues in the study of medical decision-making.

First, Medow et al. elected to study decision-making in the context of patient management. This contrasts with most research on medical decision-making, which has focused primarily on diagnosis and diagnostic error rather than approaches to management. Diagnostic error is easier to conceptualize than management error because there is usually only one correct diagnosis, and its correctness can typically be established clearly (at least in hindsight) using objective data. Yet diagnosis is, at least from one perspective, only the first step toward the more relevant (albeit less straightforward) clinical challenges of management and prognosis. Moreover, the relationship between diagnosis and management is rarely linear, but rather interconnected, complex, and dynamic.5 Clinicians often initiate empiric management before establishing a definitive diagnosis. An emergency room physician may make the wrong diagnosis but follow a correct management approach (admit the patient); should this be counted as an error? Physicians rarely pinpoint the precise etiology of back pain, yet nearly all cases resolve with appropriate (conservative) management. As elaborated below, clinicians typically face multiple acceptable management strategies, and these commonly change over time. Finally, for these and other reasons, the cognitive processes underlying management decisions may differ from those employed in diagnosis. Without undervaluing the need to better understand diagnostic decision-making, I perceive an imperative to devote more attention to decision-making at the management stage, and to the interrelationships between diagnostic and management decisions.

Second, perhaps the most elegant and intriguing aspect of this study was the absence of a single correct solution. Either approach would have been acceptable. While such ambiguity was deliberately built into this study, it tends to be the rule rather than the exception in the real world of clinical practice. There is virtually always more than one “correct” way to manage pneumonia, hypertension, back pain, cancer, etc. Moreover, selecting a management strategy involves incorporating patient preferences (the oft-neglected “back to the bedside” step of evidence-based medicine6). Clinician-patient management discussions frequently include decisions about further diagnostic testing, blurring the boundary between diagnosis and management. Patients often need something other than the textbook “next best test” or “next best step.” Attempts to simplify such decisions for the purposes of research, quality improvement, clinical practice guidelines, or determination of physician payment will be challenging at best, and at worst may yield conclusions irrelevant to actual practice.

Third, Medow et al. pitted a human expert against a decision aid. Of course, in practice medical decisions rarely require such dichotomous choices. On the contrary, difficult decisions (diagnostic, management, and otherwise) usually involve input from multiple sources, including physicians, nurses and other health professionals, patients and family members, and (not “or”) computerized resources. Thus, in medical decision-making research the relevant question is not simply whether the decision-maker independently arrives at an appropriate diagnostic/management strategy, but also whether he or she would have arrived at the same decision in the real world (drawing on other resources as needed).

Fourth, this study takes a reductionist approach to research.7 Such an approach obviously limits the external validity (generalizability) of the findings. The scenario was tightly scripted, and the low-fidelity, written-text case presentation bore little resemblance to actual clinical practice. Nonetheless, such approaches can offer great insight into problems that are difficult to study in natural settings. In one sense, this study represents a “basic science” approach to this problem. Basic science research must be extended, replicated, and reconceptualized, often using subjects other than the population of interest, until a robust understanding of the object under study has been achieved. Results can then be translated to the real world and tested in clinical trials. While not infallible, this approach (studying problems in isolation and then verifying findings in the real world) has proven indisputably useful in clinical medicine. Basic science studies in medical decision-making and medical education have highlighted limitations of traditional decision-making paradigms,8,9 and suggested evidence-based principles for future research and teaching.10,11 Such studies should continue and should be complemented by research testing these principles in practice.

Finally, this study ultimately concerns trust, or confidence, in oneself and in other sources of information. Overconfidence has been identified as a major cause of physician error,12 but distinguishing appropriate from inappropriate confidence is difficult.13 Well-founded confidence is critical for efficient expert clinical practice; ill-founded confidence can be disastrous. I am not surprised that residents placed more trust in an empirically derived decision aid (to which they may have had previous exposure) than in a disembodied, anonymous “expert.” Were these residents overconfident? Those who ignored the decision aid's advice might have based their choice not on unfounded self-confidence, but on defensible reasoning (for example, mental computation of a risk score using another prediction rule). Perhaps they would have attended more to advice from a local physician (mentioned by name) with a reputation for clinical expertise. In either case, residents would be demonstrating trust in an information source with which they had greater familiarity and a priori confidence. Improving our understanding of confidence and overconfidence, and in particular learning how to identify and remedy the latter, will require further research.13 In the meantime, if we wish clinicians to place their trust in something new (e.g., a computerized decision aid), then we will need to learn how to build that trust.
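
For readers who want a concrete picture of the kind of risk score such a decision aid might encode, the sketch below implements the commonly published CURB-65 pneumonia severity criteria as a toy decision aid. This is purely illustrative: I am not suggesting that CURB-65 was one of the rules cited in the study, and the thresholds and disposition labels shown are the commonly quoted ones, not clinical guidance.

```python
# Illustrative sketch only: a toy "decision aid" computing the CURB-65
# pneumonia severity score from commonly published criteria.
# Not the rule(s) used in the study, and not clinical guidance.

def curb65_score(confusion: bool,
                 urea_mmol_per_l: float,
                 respiratory_rate: int,
                 systolic_bp: int,
                 diastolic_bp: int,
                 age: int) -> int:
    """Return the CURB-65 score (0-5), one point per criterion met."""
    score = 0
    score += confusion                                # new-onset confusion
    score += urea_mmol_per_l > 7                      # urea > 7 mmol/L
    score += respiratory_rate >= 30                   # respiratory rate >= 30/min
    score += systolic_bp < 90 or diastolic_bp <= 60   # low blood pressure
    score += age >= 65                                # age >= 65 years
    return int(score)

def suggested_disposition(score: int) -> str:
    """Commonly quoted interpretation; thresholds are illustrative."""
    if score <= 1:
        return "low risk: outpatient management often reasonable"
    if score == 2:
        return "intermediate risk: consider hospital admission"
    return "high risk: admit; consider intensive care"

if __name__ == "__main__":
    s = curb65_score(confusion=False, urea_mmol_per_l=8.2,
                     respiratory_rate=32, systolic_bp=102,
                     diastolic_bp=64, age=58)
    print(s, suggested_disposition(s))   # prints a score of 2 for this example
```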

In summary, as clinicians attempt to maximize patient safety, minimize overtesting, and integrate the latest medical knowledge, new models for decision support will be required. Studies focused on management as well as diagnostic decision-making, embracing the complexities of clinical practice and elucidating how to identify and manage overconfidence, will inform the development of such models.