Resource allocation and priority setting are challenging issues faced by health policy decisionmakers, requiring careful consideration of many factors, including objective (e.g., reason) and subjective (e.g., empathy) elements [1]. Criteria used to evaluate healthcare interventions and allocate resources are likely to have profound implications, especially regarding ethical aspects. Ethical principles of resource allocation set forth by the World Health Organization (WHO) include efficiency (maximizing population health), fairness (minimizing health differences) and utility (greatest good for the greatest number) [2]. Consideration of these often conflicting principles requires pragmatic frameworks and the engagement of a broad range of stakeholders to provide accountability for reasonableness (A4R) [3–7]. Limited resources and inequities in healthcare in both wealthy and developing countries underline the need to allocate optimally [8].

As argued by various authors [9–12], choices may not be based on rational and transparent processes, highlighting the need for processes that take this into account. Indeed, if the mechanism employed to guide the distribution of resources is inequitable, the outcome is also likely to be inequitable. Thus, how resources are allocated by health policy decisionmakers around the world remains a challenging issue [13]. Priority-setting is defined as the process by which healthcare resources are allocated among competing programs or people [14]. In the context of increasing healthcare costs in many countries around the world, effective approaches to explicit appraisal and priority setting are becoming critical to allocate resources to healthcare interventions that provide the most benefit to patient health while also contributing to healthcare systems’ sustainability, equity and efficiency. Indeed, elucidating decision criteria and how they are considered are key to establishing accountability and reasonableness of decisions and fulfills the A4R framework set forth by Daniels and Sabin [6].

Over the past decades, a number of empirical studies have explored systematic approaches to optimize evaluation of healthcare interventions and priority-setting. A number of tools with defined criteria to evaluate and rank interventions have been developed, recognizing the need for such approaches [10, 15–28]. As part of a larger collaborative endeavour exploring decision criteria, the aim of this study was to analyse the peer-reviewed literature to identify criteria reported in empirical studies that involved healthcare decisionmakers and in studies describing multicriteria tools. The specific objectives were to identify, categorize and estimate the frequency of decision criteria reported in the literature. This work will support the design of an international survey of decisionmakers on criteria and their relative importance as well as providing a resource for developers of multicriteria-based frameworks.

Methods

Search strategy and article selection

An extensive literature search was carried out in June 2010 on the Medline and EMBASE databases to identify articles reporting healthcare decision criteria. Because studies reporting criteria (or factors or principles or components) are usually not indexed with such generic terms and because these terms are used in many fields (e.g., diagnostic criteria), a number of algorithms were explored to optimize the search strategy. The optimized search strategy included the following keywords: “decision-making”, “priority-setting”, and “resource allocation”, combined with “funding”, “budget”, “cost-benefit analysis”, “cost-effectiveness analysis”, and “equity”. The search was limited to articles published in English, French, or German over the last 10 years and excluded the following types of studies: clinical trials (phase I to IV), editorials, letters, randomized controlled trials, case reports, and comparative studies. Bibliographies of relevant articles were also searched.
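The boolean logic of this strategy (each keyword list joined with OR, the two lists combined with AND) can be sketched as follows. This is purely illustrative: the variable names are ours, and the actual field tags and syntax submitted to Medline and EMBASE differ.

```python
# Illustrative sketch of the boolean search logic described above;
# real Medline/EMBASE queries use database-specific field tags.
CORE_TERMS = ["decision-making", "priority-setting", "resource allocation"]
COMBINED_TERMS = ["funding", "budget", "cost-benefit analysis",
                  "cost-effectiveness analysis", "equity"]

def build_query(core, combined):
    """Join each term list with OR, then AND the two lists together."""
    core_q = " OR ".join(f'"{t}"' for t in core)
    comb_q = " OR ".join(f'"{t}"' for t in combined)
    return f"({core_q}) AND ({comb_q})"

query = build_query(CORE_TERMS, COMBINED_TERMS)
```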

Abstracts of the articles thus retrieved were screened against the following inclusion and exclusion criteria. Studies were included if they reported a set (i.e., > 1) of decision criteria and were:

empirical studies conducted with healthcare decisionmakers (including field-testing of decisionmaking tools, focus groups, questionnaires, interviews)

reviews of such empirical studies, and

conceptual studies describing or proposing a set of decision criteria or a decisionmaking tool.

Studies were excluded if they focused on a single criterion (e.g., cost-effectiveness only) or described a priority-setting exercise without explicitly identifying decision criteria. Studies discussing the goals and advantages of priority-setting per se without reporting specific criteria were also excluded. To avoid double-counting of decision criteria, only one publication was included if several publications from the same group described the same set of decision criteria. For the same reason, original studies whose criteria were already reported in review articles included in our analysis were also excluded.
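The core of this screening logic can be sketched as a simple filter. The record structure and type labels below are hypothetical simplifications, not the actual screening instrument:

```python
# Hypothetical record structure: a study is kept only if it reports a
# set (>1) of decision criteria and is one of the eligible study types.
ELIGIBLE_TYPES = {"empirical", "review", "conceptual"}

def is_included(study: dict) -> bool:
    reports_set = len(study.get("criteria", [])) > 1
    eligible = study.get("type") in ELIGIBLE_TYPES
    return reports_set and eligible

studies = [
    {"type": "empirical", "criteria": ["efficacy", "safety", "equity"]},
    {"type": "empirical", "criteria": ["cost-effectiveness"]},  # single criterion -> excluded
    {"type": "editorial", "criteria": ["equity", "need"]},      # ineligible type -> excluded
]
included = [s for s in studies if is_included(s)]
```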

Data extraction

Full texts of selected articles were reviewed and data extracted into a table identifying: 1) first author; 2) year of publication; 3) method of criteria elicitation or identification; 4) decisionmaking setting; and 5) exact term for each criterion as reported in the publication.

Given the variability of terms used to describe conceptually similar decision criteria, a hierarchical classification system was developed (Figure 1). Terms referring to the same concept (e.g., “side-effects” and “harm”) were grouped under one criterion (e.g., Safety). Related criteria were grouped under categories (e.g., Health outcomes and benefits of intervention). This process of classification was guided by the structure of the EVIDEM framework, which includes an adaptable set of core and contextual criteria identified from analyses of the literature and of decisionmaking processes worldwide, and from discussions with decisionmakers, and which were structured to fulfill the requirements of multicriteria decision analysis (MCDA; i.e., minimum overlap, mutual independence, operationalizability, completeness and clustering) [10, 18, 29]. MCDA principles were applied in the present study to define criteria regrouping terms referring to the same concept and to categorize criteria into a meaningful and intuitive architecture (clustering).
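The hierarchy can be pictured as two lookup tables, term to criterion and criterion to category. The fragment below is illustrative only (a handful of terms; see Figure 1 and Table 2 for the full mapping):

```python
# Fragmentary, illustrative mapping: reported term -> criterion -> category.
TERM_TO_CRITERION = {
    "side-effects": "Safety",
    "harm": "Safety",
    "adverse effects": "Safety",
    "effectiveness": "Efficacy/effectiveness",
}
CRITERION_TO_CATEGORY = {
    "Safety": "Health outcomes and benefits of intervention",
    "Efficacy/effectiveness": "Health outcomes and benefits of intervention",
}

def classify(term: str):
    """Return the (criterion, category) pair for a reported term."""
    criterion = TERM_TO_CRITERION[term]
    return criterion, CRITERION_TO_CATEGORY[criterion]
```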

Figure 1

Categorization of terms reported in the literature.

Descriptive statistics

The number of times each criterion was cited in the studies retrieved was used as a proxy to identify the criteria perceived to be most important. Descriptive statistics were performed and each occurrence of a term belonging to that criterion was counted. If a study reported two different terms that we grouped under the same criterion, both terms were counted. For example, if a study reported “side effects” and “harm” as separate terms, we counted both of them under the criterion “Safety”. The numbers of citations for each criterion and for each category of criteria were analyzed.
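The counting rule described above (every reported term counts once, even when two terms map to the same criterion) can be sketched with a counter. The term fragment below is illustrative:

```python
from collections import Counter

# Illustrative fragment of the term -> criterion mapping.
TERM_TO_CRITERION = {"side effects": "Safety", "harm": "Safety",
                     "equity": "Equity, fairness and justice"}

def count_citations(studies):
    """Count one occurrence per reported term, grouped by criterion."""
    counts = Counter()
    for terms in studies:
        for term in terms:
            counts[TERM_TO_CRITERION[term]] += 1
    return counts

# A study reporting both "side effects" and "harm" contributes
# two citations to the criterion "Safety".
counts = count_citations([["side effects", "harm"], ["equity"]])
```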

Results

Identification of decision criteria from the literature review

The literature search resulted in a total of 2903 records identified through Medline and EMBASE database searching, and 243 additional records were identified through bibliographic hand searching (Figure 2). These records were screened on the basis of their abstracts and 2790 were excluded. The remaining 356 studies were assessed for eligibility on the basis of full text, and 316 articles were excluded. A total of 40 studies were included (Table 1), all of which were published after 1997, and 33 of them between 2006 and 2010. The majority of studies reported criteria derived from interviews and focus groups (9 studies each), surveys (2) or literature reviews of studies (5) conducted with healthcare decisionmakers at micro, meso and macro levels of decision and from several regions of the world. Fourteen studies described multicriteria decisionmaking tools.

Figure 2

PRISMA diagram.

Table 1 Studies identified in the literature and included in the analysis

Decision criteria classification and descriptive statistics

Large variations in terminology used to define criteria were observed among the studies included; 360 different terms were identified (Table 2). Using the classification system described above, these terms were assigned to 58 unique criteria, which were classified into 9 different categories. These were: A) health outcomes and benefits of intervention (6 criteria), B) types of health benefit (2 criteria), C) impact of disease targeted by intervention (4 criteria), D) therapeutic context of intervention (4 criteria), E) economic impact of intervention (9 criteria), F) quality/uncertainty of evidence (6 criteria), G) implementation complexity of intervention (9 criteria), H) priorities, fairness and ethics (7 criteria), and I) overall context (11 criteria). Categories were defined to: i) regroup criteria pertaining to the same overall concept (e.g., category “A - Health outcomes and benefits of intervention” includes criteria such as health benefits, life saving, efficacy, effectiveness, safety, patient-reported outcomes and quality of care) and ii) disentangle criteria specific to the intervention (categories A to F) from criteria specific to the context (G to I).
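As a quick consistency check on the classification reported above (category labels abbreviated), the per-category criteria counts sum to the 58 unique criteria:

```python
# Number of unique criteria per category, as reported in the text.
CRITERIA_PER_CATEGORY = {
    "A - Health outcomes and benefits of intervention": 6,
    "B - Types of health benefit": 2,
    "C - Impact of disease targeted by intervention": 4,
    "D - Therapeutic context of intervention": 4,
    "E - Economic impact of intervention": 9,
    "F - Quality/uncertainty of evidence": 6,
    "G - Implementation complexity of intervention": 9,
    "H - Priorities, fairness and ethics": 7,
    "I - Overall context": 11,
}
total_criteria = sum(CRITERIA_PER_CATEGORY.values())
```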

Table 2 Classification of terms reported in the literature

The classification system and the number of citations for each criterion are reported in Figure 3. The ten most frequently mentioned criteria were: equity, fairness and justice (H4, 32 citations); efficacy/effectiveness (A2, 29 citations); stakeholder interests and pressures (I11, 28 citations); cost-effectiveness (E5, 23 citations); strength of evidence (F2, 20 citations); safety (A4, 19 citations); mission and mandate of health system (I1, 19 citations); organizational requirements and capacity (G2, 17 citations); patient-reported outcomes (A5, 17 citations); and need (D2, 16 citations). Among these 10 most frequently cited criteria, three were from the category “A - Health outcomes and benefits of intervention”, highlighting the importance of this consideration in decisionmaking. Overall, the ten most cited criteria spanned seven of the nine categories, indicating that the classification system captured critical criteria in distinct categories.

Figure 3

Classification system and number of citations for each criterion.

At the category level (Figure 4), the number of citations was the highest for the category of criteria “Overall context” (106 citations); followed by “Priorities, fairness and ethics” (90 citations); “Health outcomes and benefits of intervention” (81 citations); “Economic impact of intervention” (75 citations); “Implementation complexity of intervention” (65 citations); “Quality and uncertainty of evidence” (49 citations); “Impact of disease targeted” (40 citations); “Therapeutic context of intervention” (37 citations); and “Type of service provided” (18 citations).
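The category-level counts reported above can be tabulated and ranked directly, which makes the ordering in Figure 4 easy to verify:

```python
# Citation counts per category of criteria, as reported in the text (Figure 4).
CATEGORY_CITATIONS = {
    "Overall context": 106,
    "Priorities, fairness and ethics": 90,
    "Health outcomes and benefits of intervention": 81,
    "Economic impact of intervention": 75,
    "Implementation complexity of intervention": 65,
    "Quality and uncertainty of evidence": 49,
    "Impact of disease targeted": 40,
    "Therapeutic context of intervention": 37,
    "Type of service provided": 18,
}
# Rank categories from most to least cited.
ranked = sorted(CATEGORY_CITATIONS.items(), key=lambda kv: kv[1], reverse=True)
```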

Figure 4

Number of citations for each category of criteria of the classification system.

Discussion

This literature review revealed a burgeoning number of studies examining healthcare decision criteria and criteria-based decisionmaking tools, especially over the last five years. Criteria were identified from studies performed in several regions of the world involving decisionmakers at micro, meso and macro levels of decision and from studies reporting on multicriteria tools. Increasingly, the healthcare community is aware that beyond cost-effectiveness, other criteria must be taken explicitly into account for transparent and consistent healthcare decisionmaking and priority-setting [54–56]. Indeed, elucidating decision criteria and how they are considered are key to establishing accountability and reasonableness of decisions. This is necessary to fulfill the relevance condition of the accountability for reasonableness (A4R) framework of Daniels and Sabin [6], which states that “Decisions should be made on the basis of reasons (i.e. evidence, principles, values, arguments) that ‘fair-minded’ stakeholders can agree are relevant under the circumstances”.

This analysis revealed a predominance of normative criteria, that is, criteria answering the question “What should be done?” This highlights the importance of considering the actual worth or value of healthcare interventions rather than just feasibility criteria (“What can be done?”). Of the ten most frequently cited criteria, eight were normative (equity and fairness, efficacy, cost-effectiveness, strength of evidence, safety, mission and mandate of healthcare system, need, patient-reported outcomes) and two were feasibility criteria (stakeholder pressures and interests, organizational requirements and capacity). This is aligned with a review of studies on decision criteria in developing countries [13], and points to the need to include both normative and feasibility criteria in decision and prioritization tools to fully reflect and support the decisionmaking process.

The criterion “equity and fairness” was the most frequently reported. This may reflect that equity is a guiding principle in defining the values on which decisions are based. Equity is difficult to operationalize in decisionmaking and priority-setting processes in a pragmatic manner. It is a complex ethical concept that eludes precise definition and is synonymous with social justice and fairness [57]. It is referred to as “a fair chance for all” [23], “equality of access to healthcare resources on the basis of need” [8], and the “absence of systematic disparities in health (or in the major social determinants of health) between groups with different levels of underlying social advantage/disadvantage” [58]. The WHO advocates concepts of “horizontal equity, providing healthcare to all those who have the same health need, and vertical equity, providing preferentially to those with the greatest need” [57]. The difficulty of considering equity in a pragmatic manner points to the need to include it systematically as an operationalizable criterion in the decision process; without such systematic inclusion, decisions are less likely to be equitable. Decisions are generally fairest when standards are predetermined, explicit and consistently applied [59]. Equity is embedded in the consideration of disease severity in the prioritization of healthcare interventions: decisionmakers generally attach more value to interventions for severe disease than for mild disease. This is also reflected in the worst-off principle, which relates to an independent concern for severity; “the worse off an individual would be without an intervention, the more highly society tends to value that intervention” [60]. Systematic consideration of criteria defined on the basis of population priorities identified by decisionmakers (e.g., more value for interventions targeted to vulnerable populations such as children, the elderly, or those in remote areas) is another pragmatic way to incorporate equity into decisionmaking.
Integration of ethical considerations into operationalizable criteria was developed for the comprehensive multicriteria framework EVIDEM [61]. Ethical issues are an integral part of the EUnetHTA core model to ensure their explicit consideration [46], and several frameworks focusing on equity [62] and ethical issues [63] have recently emerged.

Efficacy/effectiveness was the second most frequently reported criterion; as Hawkes discussed recently, “governments are wrestling with the issues of efficacy and fairness in healthcare delivery” [64]. While efficacy measures the effect of an intervention under controlled conditions (such as during clinical trials), effectiveness provides critical information on outcomes actually achieved by an intervention in real-life settings. Efficacy and effectiveness are fundamental criteria considered at the regulatory (e.g., FDA, EMA) and reimbursement levels for medicines in many jurisdictions [65–67]. Because decisions concerning interventions at the policy, clinical and patient levels are made with reference to a given context of care (usually standard of care), improvement over existing care, rather than absolute efficacy or effectiveness, provides the most informative evidence [10]. Indeed, decisions about the usefulness of interventions are usually based on relative advantage compared to existing approaches [15]. Comparative effectiveness, “the comparative assessment of interventions in routine practice settings” [68], is meant to help answer the question “does it work in my context?” and is demand-driven research aimed directly at decisionmaker needs [69]. For new interventions, however, effectiveness data are usually not available and decisions are often made on the basis of efficacy data, with the uncertainty inherent in innovation [67]. Evidence-based decisionmaking relies on the actual benefits derived from an intervention, so mechanisms (such as defining subcriteria) that specifically outline the most relevant real-life outcomes are critical to ensure that the dimensions of efficacy/effectiveness are fully captured and communicated.

The third most commonly reported criterion refers to stakeholder interests and pressures. Macro-level decisions are influenced by public pressure and advocacy [13, 15, 38], and the demand for a new program is a powerful argument for decisionmakers at the political level [70]. In a study exploring the basis for immunization recommendations, while vaccine safety was reported as important or very important in making immunization recommendations by all countries regardless of economic status, low- and lower-middle-income countries were significantly more likely than developed countries to report that public pressure was an important factor [9]. Because pressures from groups of stakeholders are often part of the context [10], being aware of the pressures and interests at stake and how they may affect decisionmaking and implementation is important, and they should be explicitly tackled using a framework that encourages systematic consideration of their potential implications when making healthcare decisions.

Cost-effectiveness was the fourth most commonly reported criterion. Cost-effectiveness is frequently used in healthcare decisionmaking [65, 71] but its usefulness is the subject of debate [54, 56]. A review of 36 empirical studies reported that the influence of cost-effectiveness was moderate at micro, meso and macro levels of decision [55]. Although cost-effectiveness analysis is designed to incorporate several decision criteria (e.g., cost, efficacy/effectiveness, safety, quality of life) into an aggregated ratio that allows comparison of interventions, it fails to include important criteria such as equity and the severity of the targeted condition [59]. In addition, cost-effectiveness thresholds are commonly mistaken for affordability thresholds [59]. Beyond cost-effectiveness ratios, health economic studies generate data that are necessary to evaluate healthcare interventions (e.g., resource utilization and cost consequences of a new intervention compared to existing care).

This study also revealed that strength of evidence is an important aspect in decisionmaking, highlighting the influence of evidence-based medicine. Evidence is usually sought to demonstrate effectiveness (“it works”), show the need for policy action (“it solves a problem”), guide effective implementation (“it can be done”), and clarify cost-effectiveness (“it provides value for money”) [15]. The quality of evidence that decisionmakers use can only be determined when several concepts are considered, such as scientific validity, completeness and relevance to the decisionmaking context [18]. The strength of evidence builds with time as interventions are used in real life, and initial decisions made in a context of uncertainty (e.g., randomized clinical trial data in limited populations) may be revisited as evidence accumulates. A common question is how much evidence is enough to make an evidence-based decision [59]. Beyond scientific evidence, decisionmaking also relies on colloquial evidence [72]. Consideration of the strength and quality of the different types of evidence remains an important part of the appraisal of interventions.

Safety, a critical element of policy and clinical practice, was the sixth most cited criterion. Safety refers to the frequency and severity of adverse events or complications arising as a result of using the new technology compared to an alternative [22]. Efficacy and safety are the main criteria in the initial evaluation of a new intervention [70], and the risk-to-benefit equation is a critical component of clinical and regulatory decisionmaking [67].

A number of other criteria were identified, highlighting the complexity of healthcare decisionmaking and the need to support this process with tools to ensure consistency, transparency and accountability for reasonableness. An important milestone towards that goal would be to harmonize terminology. Indeed, a large variety of terminology was found in the literature during analysis and classification of criteria. Although a systematic approach was used to classify terms into criteria and overarching categories using the principles of MCDA, such analyses are limited by the subjective interpretation of terms reported by authors. For example, the terms reported in published studies such as “side effects,” “unintended consequences,” “risks,” “harm,” or “adverse effects” were all grouped under the criterion “Safety.” These variations of terminology underline the difficulty of harmonizing decisionmaking processes, as several authors have noted [10, 11]. This calls for well-defined criteria to avoid confusion and ensure sound application of multicriteria approaches to decisionmaking [11, 73].

Although this analysis was limited to published studies, an extensive analysis of decisionmaking processes from jurisdictions around the world for coverage of healthcare interventions was performed to define the criteria of the EVIDEM framework, which are included in this analysis [10, 18]. In addition, the large number of terms retrieved covers criteria currently used in more than 25 decisionmaking processes for coverage of medicines [65].

Conclusions

This study highlights the importance of considering both normative and feasibility criteria for decisionmaking and priority setting of healthcare interventions. By providing a comprehensive classification of decisionmaking criteria, this analysis can promote reflection on the value of harmonizing terminology in this field. It can also serve as a resource when considering which criteria to include in sound multicriteria approaches (i.e., fulfilling principles of completeness, lack of redundancy, mutual independence, operationalizability and clustering). This analysis also serves as a foundation for the development of an international survey on criteria, which is expected to further expand our knowledge of real-life decisionmaking and advance multicriteria approaches.

Such approaches have the potential to integrate, and facilitate the pragmatic operationalization of, a large range of considerations, including ethical considerations, in a transparent and consistent process. They could provide a common metric for curative and preventive interventions to clearly define the best health improvements achievable within available resources, as recently advocated by Volp and colleagues [74]. They may also provide a road map to develop more participative decisionmaking processes through the “better combining of many elements” proposed by Culyer [75].