
Precision diagnosis: a view of the clinical decision support systems (CDSS) landscape through the lens of critical care

Abstract

Improving diagnosis and treatment depends on clinical monitoring and computing. Clinical decision support systems (CDSS) have been in existence for over 50 years. While the literature points to positive impacts on quality and patient safety, outcomes, and the avoidance of medical errors, technical and regulatory challenges continue to retard their rate of integration into clinical care processes and thus delay the refinement of diagnoses towards personalized care. We conducted a systematic review of pertinent articles in the MEDLINE, US Department of Health and Human Services, Agency for Healthcare Research and Quality, and US Food and Drug Administration databases, using a Boolean approach to combine terms germane to the discussion (clinical decision support, tools, systems, critical care, trauma, outcome, cost savings, NSQIP, APACHE, SOFA, ICU, and diagnostics). References were selected on the basis of both temporal and thematic relevance, and subsequently aggregated around four distinct themes: the uses of CDSS in the critical and surgical care settings, clinical insertion challenges, utilization leading to cost-savings, and regulatory concerns. Precision diagnosis is the accurate and timely explanation of each patient’s health problem; it further requires communication of that explanation to patients and surrogate decision-makers. Both accuracy and timeliness are essential to critical care, yet CDSS remain scarce in that setting. The limitation arises from the technical complexity associated with integrating and filtering large data sets from diverse sources. Provider mistrust and resistance, coupled with the absence of clear guidance from regulatory bodies, further slow the acceptance of CDSS. While the challenges of developing and deploying CDSS are substantial, the clinical, quality, and economic impacts warrant the effort, especially in disciplines requiring complex decision-making, such as critical and surgical care.
Improving diagnosis in health care requires the accumulation, validation, and transformation of data into actionable information. The aggregate of those processes—CDSS—is currently primitive. Despite technical and regulatory challenges, the apparent clinical and economic utilities of CDSS must lead to greater engagement. These tools play a key role in realizing the vision of a more ‘personalized medicine’, one characterized by individualized precision diagnosis rather than population-based risk-stratification.

Improving diagnosis in health care

On September 22, 2015, the Institute of Medicine (IOM) of the National Academy of Sciences released the third report in its Quality Chasm series. This report, titled Improving Diagnosis in Health Care, presents a sobering look at the current rates of diagnostic error across the spectrum of clinical practice [1]. The reported rates suggest that diagnostic errors are common and substantially affect the health and well-being of patients. The IOM report defines diagnostic error as the consequence of one of two failures: a failure to establish an accurate and timely explanation of the patient’s health problem(s), or a failure to communicate that explanation to the patient. Nowhere are diagnostic accuracy and timeliness more important than in the surgical and critical care settings, where making the correct decision so strongly affects morbidity and mortality. This diagnostic accuracy is founded on the monitoring and computing that drive clinical decision support. Accordingly, we review the history and assess the current state of clinical decision support systems (CDSS) in these surgical and critical care settings.

Combining terms germane to the discussion (clinical decision support, tools, systems, trauma, acute, outcome, cost, NSQIP, APACHE, ICU, and diagnostics), the authors queried the MEDLINE/PubMed database, relying heavily on a few systematic reviews to further guide their search. Results were cross-referenced and the authors eliminated case reports, editorials, and references that contained duplicate information. Special care was given to selecting references focused on the acute care space. Additional search engines (Institute of Medicine, RAND Health, Agency for Healthcare Research and Quality, Food and Drug Administration) were targeted to pull reports focused on CDSS. Of those combined resources, 91 were eventually selected on the basis of both temporal and thematic relevance (i.e. critical care, when available) (Table 1).

Table 1 Search strategy

A brief history of clinical decision support systems

Clinical decision support systems (CDSS) were proposed more than a half-century ago. In 1959, Ledley and Lusted described logic and probability in medical decision-making [2]. This seminal work, which focused on the use of computers to improve medical diagnosis, catalyzed efforts to address gaps and failures in clinical care delivery [3–9]. The contributions of Sheppard and Kouchoukos [10] and Sheppard et al. [11] on the use of automated measurements and rules as a potential substitute for judgment in the critical care space further advanced the field. Yet it took another thirty-two years for a consensus definition of a clinical decision support system to emerge: ‘active knowledge systems that use two or more items of patient data to generate case-specific advice’ [12].

In the wake of the prior two influential Institute of Medicine reports entitled To Err is Human [13] and Crossing the Quality Chasm [14], each of which pointed to faulty systems and processes as the root cause of human medical error, the healthcare industry witnessed a steady rise in the use of CDSS. The use of these medical decision-making tools touches a range of disciplines across the healthcare landscape, from dental [15] to imaging [16]. CDSS have been reported to favorably influence quality and patient safety [17–20], to promote prevention and optimal treatment [21–23], to reduce medical errors [24, 25], and to improve outcomes [26, 27].

A closer look at these studies reveals that the primary emphasis of CDSS has been on “what to do?” more than “is the diagnosis accurate?”. A 2005 study by Garg et al. offers a thorough analysis of the effects of electronic CDSS on practitioner performance and patient outcomes, a systematic review of one hundred studies categorized as systems for diagnosis (10 trials), reminder systems for prevention (21 trials), systems for disease management (40 trials), and systems for drug dosing and drug prescribing (29 trials). In this systematic review, the authors reported improved practitioner performance (e.g. adherence to guidelines, recognition of functional status problems, time to order treatment) in 62 of the 97 studies selected (64 %), where improvement was defined as a statistically significant positive effect on at least 50 % of the outcomes measured [28]. Such improvement only hints at the potential of CDSS to close performance gaps in critical care. For example, failure to discern a common critical state, the acute respiratory distress syndrome (ARDS), is associated with failure to consistently provide evidence-based low-tidal-volume ventilation in 13.4 to 45 % of cases [29–31].

All of this raises two questions: “Are computers helping physicians reach the correct diagnosis?”; and “Does having a correct diagnosis materially improve treatment and outcome?”

Current management of complex disease and critical illness remains largely dependent upon accumulated experience and pattern repetition. Such reliance predisposes to various biases, with predictable results: undertreatment of some patients, who subsequently have high complication rates, and overtreatment of others, leading to prolonged pain, the potential for perioperative complications, and the expenditure of valuable healthcare resources. A vivid example is found in the management of open wounds, where dehiscence (the failure of a wound to stay closed after the wound edges are approximated, a skin graft is placed, or the wound is covered by other tissue) leads to delays in definitive closure, marked increases in costs, worse pain, and nutritional setbacks. It is therefore important to attempt closure only when success is highly likely. Yet reliance on accumulated experience and pattern recognition results in failure rates of 15 % [32]. CDSS, if properly implemented, have the potential not only to contain costs but also to improve outcomes by ensuring the optimal treatment is delivered at the right time.

Indeed, the imperative to deliver the right care, right now, every time is central to the work of our research consortium, the surgical critical care initiative (www.sc2i.org) [33] and will resonate with related disciplines requiring complex decision-making. Errors in judgment, in decisions, and in performance lead to protracted lengths of stay [34], substantial increases in hospital costs [35], and considerable escalations in both perioperative and late mortality rates [36]. We believe CDSS, used in conjunction with reproducible treatment protocols, can play a pivotal role in mitigating these adverse clinical events. Indeed, the work of Morris et al. [37] raises the question of replicable methods to reduce the error rate amongst clinical decision makers.

Clinical decision support in the critical and surgical care settings

The purpose of CDSS in the critical care space, whether rule-based or probabilistic, is to augment, rather than replace, a provider’s ability to infer a correct diagnosis. This augmented inference is different from rule-based negative feedback control models that have been shown effective in fully closed loops that titrate specific therapies to isolated endpoints (such as targeting blood pressure) [38, 39]. Critical illness and injury—particularly that separate from perioperative trajectories following elective surgery—have sufficiently complex dynamics to justify maintaining humans in the control loop. CDSS validated through controlled clinical trials can nevertheless frame options based on evidence. This is foundational to “Right Care” (i.e. an accurate diagnosis and treatment plan). Moreover, by presenting those options and probabilistic outcomes, CDSS can accelerate the implementation of the selected tests or treatments. This is foundational to “Right Now”.

Current CDSS filter and integrate patient-specific, discrete blocks of data to reduce information overload, a strategy intended to improve medical decision-making. The architecture of these tools is standardized in that they typically combine three components into a decision support platform: the clinical and/or physiological information, an underlying model that analyzes the data, and an interface to collect inputs and display outputs. CDSS organized in this fashion can support a wide range of tasks [40]. Of particular interest to the critical and surgical care settings are tasks pertaining to diagnosis (matching a patient’s signs and symptoms) and treatment planning (operationalizing guidelines following specific diagnoses). We will return to the issue of diagnostic precision and guideline specificity later in this review.
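
The three-component architecture just described can be sketched in a few lines. Everything below is hypothetical and purely illustrative: the record fields, the 0.9 cut-off, and the use of the shock index (heart rate / systolic blood pressure) as a stand-in for “an underlying model” are our own choices, not those of any product discussed in this review.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class PatientRecord:
    """Component 1: the clinical and/or physiological information."""
    heart_rate: Optional[float] = None   # beats/min; None = not yet charted
    systolic_bp: Optional[float] = None  # mmHg

def shock_index_model(rec: PatientRecord) -> Optional[str]:
    """Component 2: an underlying model that analyzes the data.

    Here, the shock index (heart rate / systolic BP); the 0.9 cut-off
    is illustrative only.
    """
    if rec.heart_rate is None or rec.systolic_bp is None:
        return None  # insufficient data -- defer to the clinician
    shock_index = rec.heart_rate / rec.systolic_bp
    return ("ALERT: elevated shock index" if shock_index > 0.9
            else "shock index normal")

def render(rec: PatientRecord,
           model: Callable[[PatientRecord], Optional[str]]) -> str:
    """Component 3: an interface to collect inputs and display outputs."""
    advice = model(rec)
    return advice if advice is not None else "insufficient data for advice"

print(render(PatientRecord(heart_rate=118, systolic_bp=95), shock_index_model))
```

Note that the model declines to advise when inputs are missing, mirroring the principle, developed below, that such tools should augment rather than replace the provider.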

The overall influence of CDSS on clinical practice, whether it touches on quality and patient safety, clinical outcomes, or the avoidance of medical errors, is well documented [41, 42]. In cases where large, variable, and multi-source data are being processed, CDSS improve the quality of the decisions being made, leading to better clinical outcomes. We believe this is particularly true in instances where multi-disciplinary teams deliver care under strained workloads and time pressures [43], such as trauma or complex surgical care (Table 2). In 1998, Lanceley et al. highlighted the struggle to balance provider input (surgical, medical, and diagnostic expertise) with health-care guidelines to achieve superior outcomes.

Table 2 Non-exhaustive list of CDSS used in surgical and intensive care settings

CDSS have historically been divided into knowledge-driven (e.g., powered by if-then statements) and data-driven (machine learning from large datasets implementing Bayesian or neural networks, fuzzy logic theories, or symbolic reasoning) support systems. While all CDSS fundamentally attempt to address a four-pronged problem (i.e. processing an increasingly large and heterogeneous volume of medical information, accounting for missing clinical or physiologic information, physician overload, and medical errors), their complexity can vary substantially from tool to tool [44]. Whereas alert systems may follow strict IF-THEN logic rules to report on an adverse drug interaction, systems powered by artificial intelligence can calculate the probability of an emerging adverse condition, even in the presence of missing data. We think this is where methods such as fuzzy logic, by virtue of their connectives and inference rules, are better suited to the critical-care space, a discipline characterized by complex causal chains and uncertainty [45]. In the current era of advanced computing, which allows for near-instantaneous identification and analysis of reference populations, the distinction between knowledge-driven and data-driven approaches has blurred. Our systematic review of the literature yielded predominantly data-driven support systems, since they have been most usefully applied to cases requiring complex medical and surgical decision-making [46].
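
The contrast between the two families can be made concrete. Below, a strict IF-THEN rule flags a drug interaction, while a simple odds-based estimator in the naive-Bayes style updates a prior on positive findings and simply skips missing ones. The drug pair, the 5 % prior, and the likelihood ratios are all invented for the illustration; a fuller model would also apply a negative likelihood ratio to findings known to be absent.

```python
# --- Knowledge-driven: a strict IF-THEN interaction alert ---
INTERACTING_PAIRS = {frozenset({"warfarin", "aspirin"})}  # hypothetical rule base

def interaction_alert(medications):
    """Fire an alert when any prescribed pair appears in the rule base."""
    meds = list(medications)
    for i, a in enumerate(meds):
        for b in meds[i + 1:]:
            if frozenset({a, b}) in INTERACTING_PAIRS:
                return f"ALERT: {a} + {b} interaction"
    return None

# --- Data-driven: a probabilistic estimate tolerant of missing data ---
PRIOR_ODDS = 0.05 / 0.95  # assumed 5 % baseline risk of the condition
LIKELIHOOD_RATIOS = {     # assumed positive likelihood ratios per finding
    "tachycardia": 2.5,
    "hypotension": 3.0,
    "elevated_lactate": 4.0,
}

def risk_estimate(findings):
    """findings maps name -> True / False / None (None = not measured).

    Only positive findings update the odds; absent and missing findings
    are skipped -- a deliberate simplification for this sketch.
    """
    odds = PRIOR_ODDS
    for name, present in findings.items():
        if present is True:
            odds *= LIKELIHOOD_RATIOS[name]
    return odds / (1 + odds)  # convert odds back to a probability

print(interaction_alert(["warfarin", "metformin", "aspirin"]))
print(round(risk_estimate({"tachycardia": True,
                           "hypotension": None,  # missing, not penalized
                           "elevated_lactate": True}), 3))
```

The rule either fires or stays silent; the probabilistic estimator still returns a usable risk figure despite the missing blood pressure reading, which is precisely the property that makes the data-driven family attractive in critical care.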

Although first introduced in 1984 [47], CDSS for critical care are exclusively knowledge-based products that, while generally effective, have traditionally been driven by clinical and physiological inputs; biomarker data, when used, are limited to the most conventional laboratory values. One example of a widely used clinical decision support tool is the outcomes-driven National Surgical Quality Improvement Program (NSQIP) risk calculator, developed by the Veterans Health Administration and implemented in the private sector by the American College of Surgeons [48–50]. This CDSS, using data from 400 hospitals and 1.4 million patients, estimates patient-specific postoperative complication risks for more than 1500 unique surgical procedures across multiple specialties. The effectiveness of the NSQIP Risk Calculator in containing surgical complications is the subject of rigorous, on-going monitoring and reporting. A 2007 study released by the American College of Surgeons asserts that use of this particular CDSS was associated with a 47 % drop in postoperative mortality and a 43 % drop in postoperative morbidity rates from 1991 to 2006 [51]. The success of the NSQIP Risk Calculator led to the further development of predictive tools, including one that prospectively identifies patients at high risk for surgical complications based on their own characteristics [52]. Other tools, like the Physiological and Operative Severity Score for the enUmeration of Mortality and morbidity (POSSUM), have demonstrated their utility in limiting post-operative complications [53]. Surgical wound infections are the second most common nosocomial infection, occurring in 2 to 5 % of all surgeries and 20 % of abdominal surgeries [54]; a simple automated reminder to the anesthesia team and the surgeon to administer prophylactic antibiotics can favorably influence surgical outcomes [55].

Two other tools used predominantly for care quality comparisons, performance improvement, and research in the critical care setting are the acute physiology and chronic health evaluation (APACHE), introduced in 1985 to rate the severity of illness for adult patients admitted to intensive care units, and the sequential organ failure assessment (SOFA), which also provides a valuable prognostic estimate of in-hospital survival [56]. The predictive values generated by the underlying models rely on a variety of physiologic and laboratory inputs (demographics, vitals, oxygenation, chemistry, hematology). In 2014, Gultepe et al. [57] reported further improvements to these approaches, using advanced machine-learning techniques to develop a model capable of identifying patients at high risk for hyperlactatemia. While the aforementioned tools prove helpful in predicting mortality rates for groups of critically ill patients [58], we concur with the widely recognized point that, in light of their limitations, none can be a perfect substitute for the provider’s expert clinical judgment in the care of an individual patient [59]. Therein lies the limitation of ‘current-gen’ tools, which tend to achieve optimal group outcomes through risk stratification rather than generate personalized assessments and risk predictions. Yet this latter path towards precision diagnosis and recommendation is precisely where ‘next-gen’ clinical decision support tools should go, as suggested by Loghmanpour et al. [60] in their 2014 assessment of a Bayesian-based model to treat end-stage heart failure, and as reinforced by the aforementioned IOM report, Improving Diagnosis in Health Care.
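
As an illustration of how such severity scores operate, the sketch below maps two laboratory inputs to SOFA-style ordinal subscores using the commonly published cut-offs for the coagulation (platelet count) and liver (bilirubin) systems. This is a teaching sketch covering two of the six organ systems, not a clinical tool.

```python
def sofa_coagulation(platelets_k_per_ul: float) -> int:
    """SOFA coagulation subscore from platelet count (10^3/uL)."""
    if platelets_k_per_ul < 20:
        return 4
    if platelets_k_per_ul < 50:
        return 3
    if platelets_k_per_ul < 100:
        return 2
    if platelets_k_per_ul < 150:
        return 1
    return 0

def sofa_liver(bilirubin_mg_dl: float) -> int:
    """SOFA liver subscore from total bilirubin (mg/dL)."""
    if bilirubin_mg_dl >= 12.0:
        return 4
    if bilirubin_mg_dl >= 6.0:
        return 3
    if bilirubin_mg_dl >= 2.0:
        return 2
    if bilirubin_mg_dl >= 1.2:
        return 1
    return 0

# Hypothetical patient: platelets 90 x10^3/uL, bilirubin 3.1 mg/dL
partial = sofa_coagulation(90) + sofa_liver(3.1)
print(f"partial SOFA (coagulation + liver): {partial}")
```

The mapping makes the group-level character of these tools visible: every patient with, say, platelets between 50 and 99 receives the same subscore, which is exactly the risk-stratification (rather than personalization) limitation discussed above.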

The performance of CDSS in emergency departments seems variable. While one study may question the accuracy of the tool as an arbiter in diagnostic decision support [61], another points to the robustness of an artificial neural network as a viable first-screening methodology to differentiate cardiac from non-cardiac chest pain [62]. Differences in underlying methodologies notwithstanding, and in the context of an increased development of CDSS in the critical care arena, more research in this field appears warranted to address the utility—and physician acceptance—of these decision-support tools.

Although precision medicine is not a new concept, substantial advances in both genotyping and microarrays/biochips in the past two decades [63] elevate the potential of tailoring medical treatment to a patient’s biological characteristics, needs, and preferences during all stages of care. Our ability to leverage ‘big data’ [64] should augment the arsenal of decision-support models currently in use in the operating room (e.g. anesthesia information management systems—AIMS) or intensive care units (e.g. architecture for intensive care unit decision support—ACUDES; Rhea). AIMS and ACUDES, like most CDSS, are representative of the current generation of electronic health records, capturing discrete packets of information in relational databases and processing these inputs: for intraoperative quality assurance and research in the case of the former, and to manage a patient’s temporal evolution in the case of the latter.

A consequence of this explosion of data, and of particular interest to the critical and surgical care settings, is the implementation of machine-learning CDSS that are able to suggest biomarker causal dependencies, even when data sets are incomplete. This is a relatively new field, one in which promising results have already been reported [65]; in their five-year study, Forsberg et al. estimated the survival of 189 consecutive patients with operable skeletal metastases by integrating multifaceted clinical data into a prognostic model. The underlying statistical method used for this analysis, the Bayesian Belief Network (BBN), yielded a robust and accurate CDSS.
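
The mechanics of such a Bayesian belief network can be reduced to a single parent node (survival) and one observed child (a biomarker). All probabilities below are invented for the illustration and bear no relation to the Forsberg et al. model; they exist only to show how Bayes’ rule turns a prior and two conditional probabilities into a patient-specific posterior.

```python
P_SURVIVE = 0.70                  # assumed prior P(survival)
P_MARKER_GIVEN = {True: 0.20,     # assumed P(marker elevated | survives)
                  False: 0.60}    # assumed P(marker elevated | does not survive)

def posterior_survival(marker_elevated: bool) -> float:
    """P(survival | marker observation) by Bayes' rule."""
    def p_marker(observed: bool, survives: bool) -> float:
        p = P_MARKER_GIVEN[survives]
        return p if observed else 1 - p

    numerator = p_marker(marker_elevated, True) * P_SURVIVE
    denominator = numerator + p_marker(marker_elevated, False) * (1 - P_SURVIVE)
    return numerator / denominator

# An elevated marker pulls the estimate below the 0.70 prior;
# a normal marker pushes it above.
print(round(posterior_survival(True), 4))
print(round(posterior_survival(False), 4))
```

Real BBNs chain many such nodes and marginalize over any that are unobserved, which is how they remain usable when some inputs are missing.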

Challenges to development and clinical insertion

While a few risk-stratification CDSS have proven durable in the critical-care setting, further development and subsequent deployment of patient-specific predictive tools have not occurred. The reasons are many: the difficulty of converting data into a clinically relevant model, the difficulty of integrating the tool into the clinical workflow (to include Electronic Health Records), and the ethical and legal implications of ignoring or overriding recommendations.

The literature also points to provider resistance to utilization, yet another obstacle to the widespread implementation of CDSS [66–68]. Several factors shape acceptance: whether the system preserves the physician’s autonomy and responsibility; the availability of hardware, technical support, and training; the integration of the system into workflows; and the relevance and timeliness of the recommendations provided. Improvements in clinical utilization are largely dependent on these dynamics, yet bump against a natural inclination to resist change (e.g. the perception that such systems threaten independence of thought, or frustrate the belief that one is already practicing in an evidence-based fashion). Barriers to adoption extend beyond the clinical realm to include wider organizational concerns about altering conservative behavior across institutional functions (e.g. clinical, research, administrative). Strategies pioneered by Morris and colleagues towards the development and distribution of replicable computer protocols have met with some success [69, 70]. In a study focused on multi-site ventilator management, East et al. also reported on the benefits of implementing a CDSS to improve patient morbidity [71]. These results emphasize that a unity of vision across disciplines, within (and potentially across) organizations, is a prerequisite to the successful implementation of decision-support systems. Responding to the aforementioned challenges is paramount if CDSS are to successfully support improvements in clinical outcomes, safety, and efficiency.

Yet the development of reliable tools appears to be the single largest hurdle to overcome. The availability of electronic patient data, fragmented across multiple hospital databases and systems (laboratory, microbiology, pathology, vital signs, radiology, surgery, etc.), remains a challenge, one which is of particular import for machine learning applications [72, 73]. While the challenge extends to any developed system attempting to achieve interoperability with established EHRs, it is of specific relevance to those disciplines that require the intelligent filtering of massive amounts of data, at a time when both precision diagnosis and optimal method of treatment are vital (e.g. critical and surgical care).

The goal of integrating and extracting useful or new electronic data across heterogeneous repositories remains elusive [74], but the need for CDSS to both manage resource utilization more efficiently and improve the quality of care delivered is well understood. However, their limited availability impedes the advancement of best practices [75] and represents a missed opportunity to reduce undesirable variation in care [69] and improve outcomes [76].

Reports on CDSS adoption rates in critical care settings, and the adoption rates themselves, are somewhat limited. One informative study focuses on the decision support for safer surgery (DS3) tool. Results of a survey of 23 surgeons covering 1006 patients point to attitudes ranging from ‘neutral’ to ‘slightly positive’, with willingness to adopt correlating with a patient’s higher estimated risk of mortality and morbidity [77]. An additional study suggests that reactions to CDSS may differ based on the perceived intent of the tool: whether it reminds providers of things they intend to do, provides information when clinicians are unsure what to do, corrects errors, or recommends an adjustment to the treatment plan [5]. Indeed, acceptance and adoption are likely to be prolonged and sequential, much the way drivers responded to GPS-enabled maps: improved computer-enabled situation awareness led to trust in, and adoption of, the computed route recommendations.

Consideration must also be given to the ongoing applicability of a CDSS. Relevance in the clinical sphere relies on the ability to perform optimally throughout the tool’s lifecycle [78], which requires some type of feedback loop to ensure that dynamic models are continuously tested and upgraded as needed. These sustainment activities, which may require the establishment of distinct organizational units [79], carry with them a commitment of resources to ensure CDSS remain effective in the long term.

Reports on the costs of developing and implementing a CDSS are scarce, perhaps because these efforts are predominantly supported via research funding. A 2008 study does provide useful insight as it pertains to the deployment of a decision support tool providing prescribers with patient-specific maximum dosing recommendations based on renal function. Although limited in scope (94 alerts for 62 drugs), this project took no fewer than 924.5 man-hours and $49,669 to reach maturity [80], lending weight to the argument that an extensive commitment of resources, to include clinical staff, is required for the development, testing, and deployment of a CDSS. There is evidence that both the lack of funding and the absence of standardization across homegrown systems have thus far hindered the widespread development and deployment of CDSS [26, 81].

The advent of mobile platforms (e.g. notebooks, tablets, phones) may introduce an opportunity to facilitate the deployment of ‘connected-anytime-anywhere’ clinical decision support tools. However, the successful insertion of these tools in the clinical environment will likely depend on their ability to effectively and securely communicate with EHRs to either pull (e.g. value extraction to inform predictive models) or push (e.g. report on outcomes) data. There appears to be fertile ground for these points-of-care devices to blossom, provided they are fully integrated into the existing clinical IT infrastructure.

CDSS utilization can lead to substantial cost-savings

In 2011, inpatient care accounted for 31.5 % of the USA’s $2.7 trillion health expenditures; the same year, annual spending by hospitals grew 4.3 %, out-pacing overall health payments by 0.4 % [82]. A limited number of manuscripts report on the question of cost as an outcome [83] and yet the question becomes increasingly relevant under a ‘managed care’ framework with limited resources and funding. Available data, however sparse, suggest that investments in CDSS technologies that advance the principles of ‘The Right Care, Right Now; for Every Patient, Every Time’ are likely to have large favorable returns on those investments.

A review of 22 randomized controlled trials (CDSS implemented in real-clinical settings and used by health professionals to aid decision-making at the point of care) indicated a decrease in the number of interventions with a resulting reduction in costs when compared to control groups [84]. A similar cross-sectional analysis of urban hospitals in Texas concluded that higher use of decision-support tools was associated with a 16 % decrease in complications and $538 savings per encounter across all hospital admissions [18].

One of the more enlightening cost-focused studies, grounded in data generated from 1200 major surgery patients, estimates the average cost of cases with complications leading to a patient’s death (Grade 4) to be $159,345 [85], compared to $27,946 for uneventful cases. A more recent study estimates that surgical complications (site infection, renal failure, graft/prosthesis failure, wound disruption) can add $2.2 million per 10,000 cases to hospital costs [86].

The impact of CDSS utilization on healthcare economics can further be gleaned from two randomized controlled trials assessing the impact of computerized CDSS on antimicrobial utilization; these studies reported a 32 % decrease in use [7] as well as a 23 % cost saving [87], with no significant adverse consequence to either length of stay or mortality.

Our own group, the surgical critical care initiative [33], conducted an internal cost-savings analysis of a tool meant to improve the medical treatment of open wounds associated with trauma across the injury cycle (days in the intensive care unit and general ward, rehabilitation, avoidance of hospital-acquired infections); those estimates point to savings of $50,000 per patient, or potentially $3.5 billion annually if applied across the 67,000 patients treated each year with severe extremity injuries (data not published).

Taken together with the repeated demonstrations in the literature that CDSS can prevent incorrect decisions, these figures support a strong argument that a single superior decision can have a substantial, positive impact on healthcare economics.

The impact of integrating and processing large data sets to better inform decisions also has implications for productivity. A 2015 Mayo Clinic study on the effect of decision support on the efficiency of primary care providers in the outpatient practice suggests that use of a CDSS can result in an estimated 65 % saving of the provider’s time [88]; in this study, Wagholikar et al. assigned thirty patients to ten physicians, half of whom were given a CDSS to decide on preventive services and chronic disease management, ultimately pointing to an estimated saving of 3 min per patient. While the study does not touch on this question, the increased efficiency likely had positive ramifications in other areas of clinical care (e.g. number of patients seen, time at the bedside, research).

There are nevertheless a couple of cautions. In the context of ever-rising concerns about cost-containment, more research in this field is needed, given that only a handful of studies measure cost at all, and only one reports on cost-effectiveness [89]. Future analyses should report total costs transparently and in the aggregate, from development through maintenance.

No CDSS can be perfect, and this immutable fact raises the spectre of liability exposure in cases of harmful error; while this issue has been raised [90], it has not been fully addressed. Good-faith reporting of errors is essential. Vetting for clinical use can include methods such as decision curve analysis [91] and net reclassification improvement [92]. These methods can be used not only to compare multiple models, but, more importantly, to determine which model(s), if any, are ready for insertion and use in the clinical environment.
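
Decision curve analysis reduces to a simple net-benefit computation that weighs true positives against false positives at a chosen threshold probability, allowing a candidate model to be compared against the default treat-all and treat-none strategies. The cohort counts below are invented for the illustration.

```python
def net_benefit(tp: int, fp: int, n: int, threshold: float) -> float:
    """Net benefit = TP/N - FP/N * (pt / (1 - pt)) at threshold probability pt."""
    return tp / n - (fp / n) * (threshold / (1 - threshold))

# Hypothetical cohort of 100 patients, 40 of whom experience the event,
# evaluated at a threshold probability of 0.20.
nb_model = net_benefit(tp=30, fp=10, n=100, threshold=0.20)      # 0.275
nb_treat_all = net_benefit(tp=40, fp=60, n=100, threshold=0.20)  # 0.250
nb_treat_none = 0.0

# The candidate model outperforms both default strategies at this
# threshold, supporting (but not by itself settling) its readiness
# for clinical insertion.
print(nb_model > max(nb_treat_all, nb_treat_none))
```

Repeating the comparison across a range of clinically plausible thresholds yields the decision curve itself; a model is only of interest where its curve sits above both default strategies.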

Regulatory concerns

While newer methods, like decision curve analysis, assist in determining clinical utility, the risks stemming from the use of CDSS still remain largely unknown. Historically, developers have been predominantly concerned with the efficacy of systems rather than their safety [93]. Important to the debate is the fact that the former is generally much easier to demonstrate [94]. As a result, the question of quality and legal liability has become increasingly relevant to the discussion.

The United States Food and Drug Administration (FDA) [95], the body regulating medical devices, has not been idle in monitoring health information technology (HIT) malfunctions with the potential for human harm. In 2010, such malfunctions appeared to be involved in 44 injuries and 6 deaths. Because registration and reporting by vendors are voluntary, these numbers could greatly underestimate a much larger concern [96]. From a safety perspective, such occurrences must obviously be interpreted in the relative context of lives saved and outcomes improved by the use of CDSS.

As reported by the FDA, adverse events tend to fall into four major categories. The first category consists of errors of commission, defined as accessing the wrong patient record or overwriting information. The second category consists of errors of omission or transmission, defined as loss or corruption of vital patient data. The third category consists of errors in data analysis. The fourth category includes incompatibility between multi-vendor application and systems, which not only ties into the aforementioned concerns but again speaks to the challenge of interoperability.

The use of notebooks, tablets, or phones as deployment platforms for CDSS introduces yet another layer of complexity. While the FDA released guidance on mobile medical applications in 2015 [95], the recommendations remain non-binding, and the determination of whether a Medical Device Data System (MDDS) falls under either a high-risk (class III) or a low-risk (class I) taxonomy appears to be left to the users [97].

The FDA considers that both hardware and software tools may be regulated if they fit the definition of a ‘device’, broadly defined as a mobile application that is intended for use in the diagnosis of disease or other conditions, in the cure, mitigation, treatment, or prevention of disease, or is intended to affect the structure or function of the body. The guidance appears to exclude general-purpose calculators and platforms providing medical reference materials [98].

Developers of CDSS must recognize that regulatory oversight, as it pertains to medical devices, is guided primarily by function (i.e. the platform itself, while relevant, does not determine classification). As an example, applications performing simple calculations routinely used in clinical practice (e.g. Body Mass Index, National Institutes of Health Stroke Scale) fall under the ‘enforcement discretion’ category, whereas those performing patient-specific analysis along with diagnosis and treatment recommendations must be regulated [98]. In the end, despite several guidance documents released in 2014 and 2015, the ambiguity of classifying CDSS remains, and with it the inability of both developers and providers to determine whether a given tool requires a regulatory pathway.

It is apparent that the FDA is taking a growing interest in CDSS and, in the context of maximizing patient safety and implementing post-market surveillance, is cautiously moving towards establishing standards, as well as a process to aggregate and analyze trends to mitigate future risk. To the extent that such tools are required for precision diagnosis, there is some urgency in accelerating development of those standards.

Our conjecture is that CDSS will be increasingly developed and deployed on mobile platforms; the FDA’s release of specific guidance on the issue lends credence to this view. However, much work remains to define and categorize these mobile CDSS, and to subsequently develop recommendations for review and endorsement by the regulatory bodies. Our community of developers and users should embrace the opportunity to shape this debate by developing and proposing its own set of risk-mitigation guidelines.

Conclusion

Clinical decision support tools are clinically meaningful to providers and have a measurable impact on diagnosis, treatment, and ultimately the health of the individual patient. The challenges facing the clinical insertion of medical decision-making tools are, however, abundant, including integration of large and heterogeneous data sets, provider resistance, and regulatory compliance. Despite these hurdles, CDSS have the potential to effectively manage utilization and improve both efficiency and precision, particularly in disciplines requiring complex decision-making. These benefits, however, will only be realized through randomized controlled trials (i.e. demonstration of effectiveness) and standards sanctioned by regulatory bodies.

Of equal importance is the realization that, in the context of maximizing clinical utility, ‘next-gen’ CDSS need to go beyond population-based risk stratification and move towards individualized, precise risk predictions. In so doing, they will contribute to the promise of a more personalized approach to medicine by delivering the right treatment, to the right patient, at the right time. Others in the oncology and cardiac transplant arenas have begun to make that leap; we invite the whole medical community to follow suit.

References

  1. Balogh EP, Miller BT, Ball JR, editors. Improving diagnosis in health care. Committee on Diagnostic Error in Health Care, Board on Health Care Services, Institute of Medicine, The National Academies of Sciences, Engineering, and Medicine. Washington, DC: National Academies Press (US); 2015.

  2. Ledley R, Lusted L. Reasoning foundations of medical diagnosis; symbolic logic, probability, and value theory aid our understanding of how physicians reason. Science. 1959;130(3366):9–21.

  3. Gorry G, Barnett G. Sequential Diagnosis by computer. JAMA. 1968;205(12):849–54.

  4. Shortliffe E, Davis R, Axline S, Buchanan B, Green C, Cohen S. Computer-based consultations in clinical therapeutics: explanation and rule acquisition capabilities of the MYCIN system. Comput Biomed Res. 1975;8(4):303–20.

  5. Berner E, Maisiak R, Cobbs C, Taunton O. Effects of a decision support system on physicians’ diagnostic performance. J Am Med Inform Assoc. 1999;6(5):420–7.

  6. Friedman C, Elstein A, Wolf F, Murphy G, Franz T, Heckerling P, et al. Enhancement of clinicians’ diagnostic reasoning by computer-based consultation: a multisite study of 2 systems. JAMA. 1999;282(19):1851–6.

  7. Samore M, Bateman K, Alder S, Hannah E, Donnelly S, Stoddard G, et al. Clinical decision support and appropriateness of antimicrobial prescribing: a randomized trial. JAMA. 2005;294(18):2305–14.

  8. Graber M, Mathew A. Performance of a web-based clinical diagnosis support system for internists. J Gen Intern Med. 2008;23(Suppl):37–40.

  9. Elkin P, Liebow M, Bauer B, Chaliki S, Wahner-Roedler D, Bundrick J, et al. The introduction of a diagnostic decision support system (DXplain™) into the workflow of a teaching hospital service can decrease the cost of service for diagnostically challenging Diagnostic Related Groups (DRGs). Int J Med Inform. 2010;79(11):772–7.

  10. Sheppard L, Kouchoukos N. Automation of measurements and interventions in the systematic care of postoperative cardiac surgical patients. Med Instrum. 1977;11(5):296–301.

  11. Sheppard L, Kouchoukos N, Kurtts M, Kirklin J. Automated treatment of critically ill patients following operation. Ann Surg. 1968;168(4):596–604.

  12. Wyatt J, Spiegelhalter D. Field trials of medical decision-aids: potential problems and solutions. Proc Annu Symp Comput Appl Med Care. 1991:3–7.

  13. Institute of Medicine (US) Committee on Crossing the Quality Chasm. To err is human: building a safer health system. Washington: Institute of Medicine; 1999.

  14. Institute of Medicine (US) Committee on Crossing the Quality Chasm. Adaptation to mental health and addictive disorders, improving the quality of health care for mental and substance-use conditions. Washington: Institute of Medicine; 2006.

  15. Merijohn G, Bader J, Frantsve-Hawley J, Aravamudhan K. Clinical decision support chairside tools for evidence-based dental practice. J Evid Based Dent Pract. 2008;8(3):119–32.

  16. Broder J, Halabi S. Improving the application of imaging clinical decision support tools: making the complex simple. J Am Coll Radiol. 2014;11(3):257–61.

  17. Institute of Medicine (US) Committee on Crossing the Quality Chasm. Crossing the quality chasm: a new health system for the 21st century. Washington, DC: Institute of Medicine; 2001.

  18. Amarasingham R, Plantinga L, Diener-West M, Gaskin D, Power N. Clinical information technologies and inpatient outcomes: a multiple hospital study. Arch Intern Med. 2009;169(2):108–14.

  19. Jaspers M, Smeulers M, Vermeulen H, Peute L. Effects of clinical decision-support systems on practitioner performance and patient outcomes: a synthesis of high-quality systematic review findings. J Am Med Inform Assoc. 2011;18(3):327–34.

  20. Ingraham A, Cohen M, Bilimoria K, Dimick J, Richards K, Raval M, et al. Association of surgical care improvement project infection-related process measure compliance with risk-adjusted outcomes: implications for quality measurement. J Am Coll Surg. 2010;211(6):705–14.

  21. Eslami S, Abu-Hanna A, de Keizer N. Evaluation of outpatient computerized physician medication order entry systems: a systematic review. J Am Med Inform Assoc. 2007;14(4):400–6.

  22. Pearson S, Moxey A, Robertson J, Hains I, Williamson M, Reeve J, et al. Do computerised clinical decision support systems for prescribing change practice? A systematic review of the literature (1990–2007). BMC Health Serv Res. 2009;28(9):154.

  23. Shojania KG, Jennings A, Mayhew A, Ramsay CR, Eccles MP, Grimshaw J. The effects of on-screen, point of care computer reminders on processes and outcomes of care. Cochrane Database Syst Rev. 2009;(3):CD001096.

  24. Kuperman G, Bobb A, Payne T, Avery A, Gandhi T, Burns G, et al. Medication-related clinical decision support in computerized provider order entry systems: a review. J Am Med Inform Assoc. 2007;14(1):29–40.

  25. Kaushal R, Kern L, Barron Y, Quaresimo J, Abramson E. Electronic prescribing improves medication safety in community-based office practices. J Gen Intern Med. 2010;25(6):530–6.

  26. Bates D, Cohen M, Leape L, Overhage J, Shabot M, Sheridan T. Reducing the frequency of errors in medicine using information technology. J Am Med Inform Assoc. 2001;8(4):299–308.

  27. Bryan C, Boren S. The use and effectiveness of electronic clinical decision support tools in the ambulatory/primary care setting: a systematic review of the literature. Inform Prim Care. 2008;16(2):79–91.

  28. Garg A, Adhikari N, McDonald H, Rosas-Arellano M, Devereaux P, Beyene J, et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA. 2005;293(10):1223–38.

  29. Santamaria J, Tobin A, Reid D. Do we practise low tidal-volume ventilation in the intensive care unit? A 14-year audit. Crit Care Resusc. 2015;17(2):108–12.

  30. Umoh N, Fan E, Mendez-Tellez P, Sevransky J, Dennison C, Shanholtz C, et al. Patient and intensive care unit organizational factors associated with low tidal volume ventilation in acute lung injury. Crit Care Med. 2008;36(5):1463–8.

  31. Kalb T, Rakhelkar J, Meyer S, Ntimba F, Thuli J, Gorman M, et al. A multicenter population-based effectiveness study of teleintensive care unit-directed ventilator rounds demonstrating improved adherence to a protective lung strategy, decreased ventilator duration, and decreased intensive care unit mortality. J Crit Care. 2014;29(4):691.

  32. Forsberg JA, Potter BK, Wagner MB, Vickers A, Dente CJ, Kirk AD, Elster EA. Lessons of war: turning data into decisions. EBioMedicine. 2015;2(9):1235–42.

  33. Surgical critical care initiative (SC2i). http://www.sc2i.org. Accessed 28 Apr 2015.

  34. Collins T, Daley J, Henderson W, Khuri S. Risk factors for prolonged length of stay after major elective surgery. Ann Surg. 1999;230(2):251–9.

  35. Dimick J, Chen S, Taheri P, Henderson W, Khuri S, Campbell DJ. Hospital costs associated with surgical complications: a report from the private-sector national surgical quality improvement program. J Am Coll Surg. 2004;199(4):531–7.

  36. Khuri S, Henderson W, DePalma R, Mosca C, Healy N, Kumbhani D. Participants in the VA national surgical quality improvement program. Determinants of long-term survival after major surgery and the adverse effect of postoperative complications. Ann Surg. 2005;242(3):326–41.

  37. Blagev D, Hirshberg E, Sward K, Thompson B, Brower R, Truwit J, et al. The evolution of eProtocols that enable reproducible clinical research and care methods. J Clin Monit Comput. 2012;26(4):305–17.

  38. Sng B, Tan H, Sia A. Closed-loop double-vasopressor automated system vs manual bolus vasopressor to treat hypotension during spinal anaesthesia for caesarean section: a randomised controlled trial. Anaesthesia. 2014;69(1):37–45.

  39. Uemura K, Kawada T, Zheng C, Sugimachi M. Less invasive and inotrope-reduction approach to automated closed-loop control of hemodynamics in decompensated heart failure. IEEE Trans Biomed Eng. 2015.

  40. Osheroff JA, Pifer EA, Teich JM, et al. Improving outcomes with clinical decision support: an implementer's guide. Boca Raton: Productivity Press; 2005.

  41. Damberg CL, Timbie JW, Bell DS, Hiatt L, Smith A, Schneider EC. Developing a framework for establishing clinical decision support meaningful use objectives for clinical specialties. RAND Corporation; 2012.

  42. McCoy A, Melton G, Wright A, Sittig D. Clinical decision support for colon and rectal surgery: an overview. Clin Colon Rectal Surg. 2013;26(1):23–30.

  43. Lanceley A, Savage J, Menon U, Jacobs I. Influences on multidisciplinary team decision-making. Int J Gynecol Cancer. 2008;18(2):215–22.

  44. Moore M, Loper K. An introduction to clinical decision support systems. J Electron Resour Med Libr. 2011;8(4):348–66.

  45. Licata G. Employing fuzzy logic in the diagnosis of a clinical case. Health. 2010;2(3):211–24.

  46. Langmead C. Generalize queries and Bayesian statistical model checking in dynamic Bayesian networks; application to personalized medicine. In: Life Sciences Society, editor. 8th International conference on computational systems bioinformatics. 2009. p. 201–212.

  47. Le Gall J, Loirat P, Alperovitch A, Glaser P, Granthil C, Mathieu D, et al. A simplified acute physiology score for ICU patients. Crit Care Med. 1984;12:975–7.

  48. https://www.facs.org/media/press-releases/2013/risk-calculator0813.

  49. Ingraham A, Richards K, Hall B, Ko C. Quality improvement in surgery: the American College of surgeons national surgical quality improvement program approach. Adv Surg. 2010;44:251–67.

  50. Raval M, Bentrem D, Eskandari M, Ingraham A, Hall B, Randolph B, et al. The role of surgical champions in the American College of surgeons national surgical quality improvement program–a national survey. J Surg Res. 2011;166(1):15–25.

  51. Khuri S, Henderson W, Daley J, Jonasson O, Jones R, Campbell DJ, et al. Principal investigators of the patient safety in surgery study. Successful implementation of the Department of Veterans Affairs’ National Surgical Quality Improvement Program in the private sector: the patient safety in surgery study. Ann Surg. 2008;248(2):329–36.

  52. Richman J, Hosokawa P, Min S, Tomeh M, Neumayer L, Campbell DJ, et al. Toward prospective identification of high-risk surgical patients. Am Surg. 2012;78(7):755–60.

  53. Scott S, Lund J, Gold S, Elliott R, Vater M, Chakrabarty M, et al. An evaluation of POSSUM and P-POSSUM scoring in predicting post-operative mortality in a level 1 critical care setting. BMC Anesthesiol. 2014;14(1):104.

  54. Making health care safer: a critical analysis of patient safety practices. Rockville: Agency for Healthcare Research and Quality; 2001.

  55. O’Reilly M, Talsma A, VanRiper S, Kheterpal S, Burney R. An anesthesia information system designed to provide physician-specific feedback improves timely administration of prophylactic antibiotics. Anesth Analg. 2006;103(4):908–12.

  56. Vincent J, Moreno R, Takala J, Willatts S, De Mendonça A, Bruining H, et al. The SOFA (Sepsis-related Organ Failure Assessment) score to describe organ dysfunction/failure. Intensive Care Med. 1996;22:707–10.

  57. Gultepe E, Green J, Nguyen H, Adams J, Albertson T, Tagkopoulos I. From vital signs to clinical outcomes for patients with sepsis: a machine learning basis for a clinical decision support system. J Am Med Inform Assoc. 2014;21:315–25.

  58. Zimmerman J, Kramer A, McNair D, Malila F. Acute physiology and chronic health evaluation (APACHE) IV: hospital mortality assessment for today’s critically ill patients. Crit Care Med. 2006;34(5):1297–310.

  59. Ledoux D, Finfer S, McKinley S. Impact of operator expertise on collection of the APACHE II score and on the derived risk of death and standardized mortality ratio. Anaesth Intensive Care. 2005;33(5):585–90.

  60. Loghmanpour N, Druzdzel M, Antaki J. Cardiac health risk stratification system (CHRiSS): a bayesian-based decision support system for left ventricular assist device (LVAD) therapy. PLoS ONE. 2014;9(11):e111264.

  61. Graber M, VanScoy D. How well does decision support software perform in the emergency department? Emerg Med J. 2003;20(5):426–8.

  62. Baxt W, Shofer F, Sites F, Hollander J. A neural network aid for the early diagnosis of cardiac ischemia in patients presenting to the emergency department with chest pain. Ann Emerg Med. 2002;40(6):575–83.

  63. Jain K. Personalized medicine. Curr Opin Mol Ther. 2002;4(6):548–58.

  64. Martin-Sanchez F, Verspoor K. Big data in medicine is driving big changes. Year Med Inform. 2014;9(1):14–20.

  65. Forsberg J, Eberhardt J, Boland P, Wedin R, Healey J. Estimating survival in patients with operable skeletal metastases: an application of a bayesian belief network. PLoS ONE. 2011;6(5):e19956.

  66. Moxey A, Robertson J, Newby D, Hains I, Williamson M, Pearson S. Computerized clinical decision support for prescribing: provision does not guarantee uptake. J Am Med Inform Assoc. 2010;17(1):25–33.

  67. Sittig D, Wright A, Osheroff J, Middleton B, Teich J, Ash J, et al. Grand challenges in clinical decision support. J Biomed Inform. 2008;41(2):387–92.

  68. Varonen H, Kortteisto T, Kaila M. What may help or hinder the implementation of computerized decision support systems (CDSSs): a focus group study with physicians. Fam Pract. 2008;25(3):162–7.

  69. Morris A. Developing and implementing computerized protocols for standardization of clinical decisions. Ann Intern Med. 2000;132(5):373–83.

  70. Morris A. The importance of protocol-directed patient management for research and lung-protective ventilation. In: Dereyfuss D, Saumon G, Hubamyr R, editors. Ventilator-induced lung injury. New York: Taylor & Francis Group; 2006. p. 537–610.

  71. East T, Heermann L, Bradshaw R, et al. Efficacy of computerized decision support for mechanical ventilation: results of a prospective multi-center randomized trial. Proc AMIA Symp. 1999:251–255.

  72. Tsoukalas A, Albertson T, Tagkopoulos I. From data to optimal decision making: a data-driven, probabilistic machine learning approach to decision support for patients with Sepsis. JMIR Med Inform. 2015;3(1):e11.

  73. Sesen M, Peake M, Banares-Alacantara R, Tse D, Kadir T, Stanley R, et al. Lung cancer assistant: a hybrid clinical decision support application for lung cancer care. J R Soc Interface. 2014;11(98):20140534.

  74. Bouamrane M, Tao C, Sarkar I. Managing interoperability and complexity in health systems. Methods Inf Med. 2015;54(1):1–4.

  75. Jenders R, Osheroff J, Sittig D, Pifer E, Teich J. Recommendations for clinical decision support deployment: synthesis of a roundtable of medical directors of information systems. AMIA Annu Symp Proc. 2007;11:359–63.

  76. Scott I, Denaro C, Bennet C, Mudge A. Towards more effective use of decision support in clinical practice: what the guidelines for guidelines don’t tell you. Intern Med J. 2004;34(8):492–500.

  77. Norton W, Hosokawa P, Henderson W, Volckmann E, Pell J, Tomeh M, et al. Acceptability of the decision support for safer surgery tool. Am J Surg. 2014;14:00504.

  78. Greenes RA. Why clinical decision support is hard to do. AMIA Annu Symp Proc. 2006;2006:1169–70.

  79. Services KM. Partners healthcare clinical informatics research and development. 2009.

  80. Fields T, Rochon P, Lee M, Gavendo L, Subramanian S, Hoover S, et al. Costs associated with developing and implementing a computerized clinical decision support system for medication dosing for patients with renal insufficiency in the long-term care setting. J Am Med Inform Assoc. 2008;15(4):466–72.

  81. Bates D, Kuperman G, Wang S, Gandhi T, Kittler A, Volk L, et al. Ten commandments for effective clinical decision support: making the practice of evidence-based medicine a reality. J Am Med Inform Assoc. 2003;10(6):523–30.

  82. Keehan S, Cuckler G, Sisko A, Madison A, Smith S, Lizonitz J, et al. National health expenditure projections: modest annual growth until coverage expands and economic growth accelerates. Health Aff. 2012;31(7):1600–12.

  83. Chaudry B, Wang J, Wu S, Maglione M, Mojica W, Roth E, et al. Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med. 2006;144(10):742–52.

  84. Bright T, Wong A, Dhurjati R, Bristow E, Bastian L, Coeytaux R, et al. Effect of clinical decision-support systems: a systematic review. Ann Intern Med. 2012;157(1):29–43.

  85. Vonlanthen R, Slankamenac K, Breitenstein S, Puhan M, Muller M, Hahnloser D, et al. The impact of complications on costs of major surgical procedures: a cost analysis of 1200 patients. Ann Surg. 2011;254(6):907–13.

  86. Guillamondegui O, Gunter O, Hines L, Martin B, Gibson W, Clarke P, et al. Using the national surgical quality improvement program and the tennessee surgical quality collaborative to improve surgical outcomes. J Am Coll Surg. 2012;214(4):709–14.

  87. McGregor J, Weekes E, Forrest G, Standiford H, Perencevich E, Furuno J, et al. Impact of a computerized clinical decision support system on reducing inappropriate antimicrobial use: a randomized controlled trial. J Am Med Inform Assoc. 2006;13(4):378–84.

  88. Wagholikar K, Hankey R, Decker L, Cha S, Greenes R, Liu H, et al. Evaluation of the effect of decision support on the efficiency of primary care providers in the outpatient practice. J Prim Care Community Health. 2015;6(1):54–60.

  89. Fillmore C, Bray B, Kawamoto K. Systematic review of clinical decision support interventions with potential for inpatient cost reduction. BMC Med Inform Decis Mak. 2013;13:135.

  90. Kesselheim A, Cresswell K, Phansalkar S, Bates D, Sheikh A. Clinical decision support systems could be modified to reduce ‘alert fatigue’ while still minimizing the risk of litigation. Health Aff. 2011;30(12):2310–7.

  91. Vickers A, Elkin E. Decision curve analysis: a novel method for evaluating prediction models. Med Decis Making. 2006;26(6):565–74.

  92. Leening M, Vedder M, Witteman J, Pencina M, Steyerberg E. Net reclassification improvement: computation, interpretation, and controversies: a literature review and clinician’s guide. Ann Intern Med. 2014;160(2):122–31.

  93. Fox J, Thomson R. Clinical decision support systems: a discussion of quality, safety and legal liability issues. Proc AMIA Symp. 2002;1:265–9.

  94. Leopold S. When “safe and effective” becomes dangerous. Clin Orthop Relat Res. 2014;472(7):1999–2001.

  95. US Food and Drug Administration. Mobile medical applications: guidance for industry and Food and Drug Administration staff. http://www.fda.gov/downloads/MedicalDevices/…/UCM263366.pdf. 2015.

  96. Testimony of Jeffrey Shuren, Director of FDA’s Center for Devices and Radiological Health: hearing before the Health Information Technology Policy Committee. 2010.

  97. Karnik K. FDA regulation of clinical decision support software. J Law Biosci. 2014;1(2):202–8.

  98. Buenafe M. Mobile Medical Apps: FDA’s final guidance brings much needed clarity, but some questions remain. 2014.

Author information

Corresponding author

Correspondence to Arnaud Belard.

Ethics declarations

Conflicts of interest

The authors certify they have no affiliations with or involvement in any organization or entity with any financial or non-financial interest in the subject matter or materials discussed in this manuscript.

Additional information

Disclaimer The views expressed in this article are those of the authors and do not necessarily reflect the official policy or position of the Henry M. Jackson Foundation for the Advancement of Military Medicine, the Department of the Navy, the Department of the Army, the Department of Defense, nor the U.S. Government. Research activities leading to the development of this abstract were funded by the Department of Defense’s Defense Health Program – Joint Program Committee 6/Combat Casualty Care (USUHS HT9404-13-1-0032 and USUHS HU0001-15-2-0001).

About this article

Cite this article

Belard, A., Buchman, T., Forsberg, J. et al. Precision diagnosis: a view of the clinical decision support systems (CDSS) landscape through the lens of critical care. J Clin Monit Comput 31, 261–271 (2017). https://doi.org/10.1007/s10877-016-9849-1


Keywords

  • Clinical decision support systems
  • CDSS
  • Healthcare analytics
  • Critical care
  • Complex care
  • Personalized medicine