EURO Journal on Decision Processes, Volume 1, Issue 1–2, pp 115–134

Policy analytics: an agenda for research and practice

  • Alexis Tsoukiàs
  • Gilberto Montibeller
  • Giulia Lucertini
  • Valerie Belton
Original Article

Abstract

The growing impact of the “analytics” perspective in recent years, which integrates advanced data-mining and learning methods, is often associated with increasing access to large databases and with decision support systems. Since its origin, the field of analytics has been strongly business-oriented, with a typical focus on data-driven decision processes. In public decisions, however, issues such as individual and social values, culture and public engagement are more important and, to a large extent, characterise the policy cycle of design, testing, implementation, evaluation and review of public policies. Therefore public policy making seems to be a much more socially complex process than has hitherto been considered by most analytics methods and applications. In this paper, we thus suggest a framework for the use of analytics in supporting the policy cycle—and conceptualise it as “Policy Analytics”.

Keywords

Analytics · Policy analysis · Decision analysis · Policy cycle · Decision support

Mathematics Subject Classification

68T05 · 68T27 · 68T37 · 90B50 · 91A26 · 91B06 · 91B10 · 91C05 · 91D10 · 91E45 · 91F10

Introduction

Policy making has been a traditional domain of research and practice, where decision analysts1 have introduced formal methods aimed at helping policy makers improve their decisions (for a recent survey see De Marchi et al. 2012). In recent years, the field of decision analysis has been heavily influenced by the “analytics” perspective, which integrates advanced data-mining and learning methods, often associated with increasing access to “big-data” and with decision support systems. This rapidly growing and seemingly very successful field of analytics has been strongly business-oriented since its origin (see Davenport et al. 2010) and is typically focussed on data-driven decision processes. However, in public decisions issues such as individual and social values, culture and public engagement play a much bigger role (White and Bourne 2007) and, to a large extent, characterise the policy cycle of design, testing, implementation, evaluation and review of public policies. From this perspective public policy making seems to be a much more socially complex process than has hitherto been considered by most analytics methods and applications (Almquist et al. 2012; Juntti et al. 2009). This is not to deny the potential for analytics to contribute throughout the policy cycle, thanks to advances in text analytics to analyse social media, as well as the increasing public access to data and tools for analysis (e.g. Google public data explorer,2 Guardian data blog3). The intention of this paper is to propose a framework for policy analytics within which approaches such as these can be appropriately harnessed.

Analysing and supporting the design, implementation and assessment of public policies is not really a new domain. Political scientists have worked in this field for decades (Moran et al. 2006). Equally, economists have pursued much research on rational theories of public decision making and formal methods for the ex-ante and ex-post evaluation of public policies (see for instance Dollery and Worthington 1996). Cost-benefit analysis (CBA) (see Dasgupta and Pearce 1972; Nas 1996) is widely used and is, perhaps, the best-known method for evaluating public policies among both practitioners and researchers, despite some recent criticisms (for example, Ackerman and Heinzerling 2004; Adler and Posner 2006). Many other approaches have been developed including, in recent years, real options analysis (see Smit and Trigeorgis 2004; Trigeorgis 1990). Decision analysis and operational research (OR) have also developed methods which aim at addressing public policy making (see for instance Pollock et al. 1994). At the same time, “analytics” has increasingly developed as an independent domain, both for practice and research, growing out of and bringing together more traditional fields such as statistics, data analysis, data mining, knowledge extraction and machine learning (Davenport and Harris 2007; Davenport et al. 2010; Liberatore and Luo 2010).

An important question arising from the above observations is whether the field of decision analysis needs to give consideration to the specific requirements for analytics within the wide area of supporting public policy making. Thus, the main motivation of this paper is to raise a number of questions and to prompt discussion about the skills and awareness that decision analysts need in order to operate effectively in this domain. In particular:
  • Are practitioners involved in supporting decision processes in the public domain aware of the potential to use analytics?

  • What types of analytics are most appropriate for the type of support they seek to provide?

  • Is decision analysis training appropriate for helping the integration of traditional decision analytic tools and analytics in the area of public policy making?

  • Are researchers considering appropriately the relationships between the specificities of public policy making and analytics?

Rather than seeking to provide an exhaustive answer to the above questions, we provide our perspective on them. We also consider it important to raise and discuss the following assertions:
  • Policy-making is a type of decision process with specific characteristics, thus demanding dedicated analytical methodologies.

  • Business analytics does not always fit the requirements most policy cycles demand, particularly regarding the creation of knowledge to support such cycles.

  • In order to improve both policy making processes and decision aiding processes we need to integrate data-driven decision making with value-driven decision making.

  • Currently, most methods of analytics are based on benchmarking and descriptive approaches, while support for the activities occurring within a policy cycle needs to focus on constructive approaches, including constructive benchmarking and learning.

The paper has the following structure. In Sect. 2, we discuss some key characteristics of policy making processes, distinguishing them from other decision processes. Section 3 reviews the types of policy analyses available to decision analysts and policy makers. We then define what we mean by policy analytics in Sect. 4 and suggest opportunities that it can offer to decision analysts. The paper concludes with some suggestions on how this emerging field can be further developed.

Key characteristics of policy making

What makes a policy-making process different from any other decision process? Why is policy making not just the usual decision-making process (only perhaps a little more complex)? We need to understand what characterises policy decisions and makes them unique. First of all, we must recognise that speaking about policy-making means speaking about a “policy cycle” (Lasswell 1956). The policy cycle consists of a set of sequential actions linked by some main “goal” or, more generally, by some common public issue. The cycle comprises eight major steps: issue identification, defining policy objectives, policy design, policy testing, policy finalisation, policy implementation, policy monitoring and evaluation, and policy readjustment and innovation.

Not only are policy makers engaged (or should be engaged) in this policy analysis cycle, they are also confronted by five major complexities inherent to public decision making, which we describe next (see also Davies 2004; Dunn 2012; Hill 1997; Kraft and Furlong 2007; Parsons 1995).

Use of public resources

Although the use of public resources may appear obvious in this context, it has many implications. First, many resources, tangible or intangible, used within the policy cycle are provided by the government or other public institutions and, from this perspective, are considered public goods. Second, during a policy cycle, policy makers also use resources provided by those who are not involved in the cycle. Third, public policies allocate (or redistribute) resources, and among the beneficiaries we may find people who are not involved in the policy cycle and are potentially considered not to be concerned by it. Fourth, decisions (and thus policies) are “irreversible allocations of resources”, opening the way to ethical, moral, intergenerational, social justice and environmental issues of a very fundamental (almost ideological) nature.

Multiple stakeholders

Policy cycles are “de facto” participative, as many different actors (whether citizens, groups or organisations) feel entitled to become involved as soon as they discover that one of their private concerns can be associated with the policy cycle. Participation can of course be structured or not, allowed or not, visible or not, formal or informal, but it happens independently of the willingness of whoever “promotes” a given policy cycle. The result is that, within the cycle, it is necessary to surface and take into account the different concerns carried by the multiple stakeholders, the different objectives they pursue and the resources they bring with them, but also to handle the different “languages” used by these stakeholders, the distinctive perceptions they might have of the policy cycle and the different expectations they hold. Last, but not least, it is important to consider the potential confusion that may occur when a policy cycle has no clear “decision maker”, due to institutional rules, inconsistent legal frameworks and asymmetries in the distribution of decision power.

Long-time horizon

A policy cycle usually takes a considerable amount of time, even in the case of non-strategic policies. Moreover, the effects and consequences of a policy cycle may become visible only long after the cycle has occurred and can hold for even longer periods. This may conflict with the agendas of different types of stakeholders. Policy makers usually have short-term agendas due to the timing of politics. Experts and/or analysts may have medium- to long-term agendas due to the specific knowledge they have about the policy issue. Citizens may have agendas ranging from the very short term to the very long term, depending on the concerns they bring to the policy cycle. Such conflicting agendas and different time frames add further structural uncertainty to the policy cycle, beyond that generated by the difficulty of predicting how social and economic scenarios may evolve in the future.

Legitimation and accountability

What do policy makers look for while engaged in a policy cycle? They look for legitimation: legitimation for themselves, for their actions within that specific policy cycle, for the outcomes of the policy making process and for the policy making process itself. Legitimation can be obtained from different sources (the law, tradition, moral standards, knowledge, practice, etc.) and is the cornerstone of the rationality developed by each stakeholder involved in the policy cycle (see Habermas 1990). From this perspective, key aspects of gaining legitimation for the policy making process itself are the requirements to increase the level of participation in the policy cycle, to provide transparency in the decision process (explanations and justifications of the outcomes, clear argumentation, etc.) and to demand more accountability (in the sense of “providing an account of”) from policy makers.

Deliberation

Policy cycles occur in the public domain and, at least in part, are “public decision processes”. In order to be “public”, decision processes need to establish “deliberations”: those moments of the process where “decisions” are formally adopted, become officially known in the public domain, can be enforced by law, and where the allocation of resources linked to the decision becomes irreversible. Deliberations are a crucial part of the policy cycle because they structure the timeline along which the cycle occurs. However, this timeline is not perceived linearly by the stakeholders: it becomes “denser” when deliberations are expected to occur (and immediately afterwards).

Why should decision analysts pay special attention to the characteristics mentioned above? These characteristics are closely inter-related. They help characterise how the policy making process is structured and, potentially, allow us to understand how it could be conducted. From this perspective, decision analysts are expected to provide decision support precisely in those cases where one or more stakeholders need to establish their position and actions in the policy cycle, an issue captured by the concept of the “Action Arena” (Ostrom 1986) or “Interaction Space” (Ostanello and Tsoukiàs 1993). Action arenas (and interaction spaces) are informal settings (or structures) in which several stakeholders become involved (some purposely, others not). Such a structure permits the establishment of local regulations and rationalities (escaping, for instance, from market regulations when “commons” are the object of the policy cycle). They are characterised by a “meta-object” (a concern recognised and shared by the participants). Such structures act as a “container” for several different problems (possibly totally unrelated), which induce several different decision processes within the same structure and policy cycle (see for instance the experience described by Salmenkaita and Salo 2002).

In other words, decision analysis is expected to be used throughout the whole policy cycle, from agenda setting to the assessment of alternative actions and their consequences, up to the ex-post evaluation of the whole policy. From this perspective, we need both a richer toolkit of methods to support decision makers along the process and a comprehensive methodology allowing a coherent structuring of the decision aiding process (see Bouyssou et al. 2000; Belton and Stewart 2002; Tsoukiàs 2007, 2008). But is there a demand for analysis in these contexts? And if so, what kind of analysis is already available to policy makers? This is discussed in the next section.

Analysis for policy makers

In this section we suggest that there is a growing demand for policy analysis and briefly review different types of modelling already developed with this intent. We then examine, with a policy analysis perspective, the main components of the analytics movement.

Demand for policy analysis

Policy decisions impact large numbers of citizens across many different aspects of social, economic and cultural life. As with any type of decision process, policy makers have to assume full responsibility for their policies with respect to both private and public stakeholders. From this perspective, as we argued before, policy-making is a complex decision process and has always been a field where decision support has been sought: from using statistical information to applying decision analytic and OR methods (Dorling and Simpson 1999; Larson and Odoni 1981; Rosenhead 1981; Pollock et al. 1994).

However, in recent years we can observe a number of trends affecting the nature of the demands that policy makers address to analysts of any type (for a detailed discussion see Hill 1997; Kraft and Furlong 2007; Moran et al. 2006; Nutley et al. 2003):
  • an increasing demand for participation in the policy making process, coming from opinion groups and individual citizens;

  • an increasing mistrust between citizens and policy makers, as well as between citizens and “experts”;

  • an increasing social fragmentation, resulting in a loss of representativeness by traditional political parties and social organisations (such as the trade unions);

  • an increasing mistrust of science and consequent limits on its ability to convince citizens about policy consequences and impacts;

  • a rapid growth in the amount of information to which citizens have access, information provided (most of the time) without any check on its reliability and truthfulness.

The result of such trends is that the policy making process has become ever more challenging, as the demand for accountability and legitimation, for both the process and its outcomes, has become stronger.

To some extent this situation is captured by the appearance in recent years of several manuals and guidelines concerning the assessment of public policies at national and European level, such as:
  • the cost-benefit analysis manual of the European Union4;

  • the evalsed manuals concerning the assessment and use of the European social fund5;

  • the green and magenta books of the UK government6;

  • the public policy assessment book of the UK government7;

  • the Italian law concerning the environmental assessment of public works8;

  • the French law concerning the technological risk assessment plans.9

These documents extend a well-established tradition in the USA related to the use of CBA in policy decisions (for an account of the use of CBA in the United States regulatory process see Shapiro and Morrall 2012; see also the World Bank manuals about CBA10).

Modelling for policy making

The use of formal analysis in supporting policy making has a long tradition in the literature. Besides the traditional field of public policy analysis, OR has also often been used to analyse public policies and recommend actions. The relatively recent push for evidence-based management (Pfeffer and Sutton 2006), which has also been extended to public decision making (Tavakoli et al. 2000), has increased the attractiveness of formal policy analysis. In this section we briefly review such developments.

Public policy analysis

The field of public policy analysis (for an overview see the classic textbook by Dunn 2012 and also Parsons 1995) has its roots in economics and in political science. In this field, economists’ main focus is on how market-structures can better allocate scarce public resources, while political scientists are mostly concerned with understanding the roles of politics and the government in public policy processes (Lindblom and Woodhouse 1993).

A large body of public policy analysis is devoted to retrospective (ex post) analysis, which tries to understand the causes and consequences of policies after they have been implemented. Equally relevant in policy analysis is the role of prospective (ex ante) analysis, which encompasses the forecasting of consequences if policies were to be implemented and prescriptions about which policies should be implemented (Dunn 2012).

Common tools for prescriptive analysis in this field are net present value assessments of costs and benefits of potential public policies, as well as CBA (Munger 2000), and cost effectiveness analysis, the latter often employed for military expenditure decisions (Dunn 2012). Challenges in those analyses involve how to properly monetise all benefits and how to set up an adequate discount rate for public goods without market prices (Ackerman and Heinzerling 2004; Adler and Posner 2006; Montibeller and Franco 2011).
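The present-value logic behind these appraisals can be sketched in a few lines. The cash flows and discount rates below are purely illustrative (not drawn from any of the cited studies), and a real CBA would also have to monetise non-market impacts, which is precisely the difficulty noted above:

```python
def npv(flows, rate):
    """Net present value of a stream of yearly net benefits.

    flows[t] is the net benefit (benefits minus costs) in year t;
    year 0 is the present and is therefore not discounted.
    """
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

# Hypothetical policy: 100 upfront cost, then 30 per year for 5 years.
flows = [-100, 30, 30, 30, 30, 30]

# The choice of social discount rate can flip the recommendation:
npv_low = npv(flows, 0.03)   # clearly positive at 3 %
npv_high = npv(flows, 0.15)  # barely positive at 15 %
```

The sensitivity of the result to the rate chosen is one reason why setting an adequate discount rate for public goods without market prices is so contested.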

The evidence-based policy making approach, introduced by the UK government in the 1990s, represents the most recent “practice oriented” attempt to strengthen the policy making process: “(…) evidence-based policy helps people make well informed decisions about policy, programmes and projects by putting the best available evidence from research at the heart of policy development and implementation” (Davies 2004). This attempt extends the idea of governing based on “facts” (instead of ideology), typical of European culture since the Enlightenment (for a discussion see Dryzek 2006). However, this approach, criticised in many respects (see Almquist et al. 2012; De Marchi et al. 2012), failed to become a standard, perhaps mainly because of its inability to convince policy makers that the typical difficulties of legitimating public policies could be overcome by using it.

Public sector operational research

After the early developments of OR, and its successful military applications during World War II, business applications followed, particularly in the USA. In the 1960s RAND started using OR methods for public decisions, initially dealing only with hard data, but later developing methods for policy analysis which could take into account soft factors, such as values and subjective judgements, as well as future scenarios (RAND 1996). In the late 1960s the British central government established an OR group, with the intention of promoting the use of OR in decision and policy making (Kirby 2000).

Some limitations of traditional OR methods, such as an overreliance on quantitative data and the use of an expert mode of analysis (Franco and Montibeller 2010), led to the development, in Britain, of problem structuring methods (for an overview see Mingers and Rosenhead 2001). These methods rely heavily on participative engagement with decision makers, adopting a facilitative mode of engagement (Franco and Montibeller 2010), and simple, often qualitative, models. Such methods have been used extensively in public sector decisions, particularly in Britain and continental Europe (Rosenhead 1992; Shaw et al. 2006).

Another important source of policy analysis support was the development of decision analysis in the late 1960s (Raiffa 1968), with the use of expert judgement in defining subjective probabilities of outcomes, and its further extension to decisions with multiple objectives in the mid 1970s (Keeney and Raiffa 1993). Decision analysis has been used extensively since then for the analysis of many important policy decisions, particularly in the USA and Britain (e.g. Merkhofer and Keeney 1987; Rosoff and Von Winterfeldt 2007; Morton et al. 2009). A more recent stream of development is the use of decision analysis embedded in recurrent processes of public prioritisation (e.g. Bana e Costa et al. 2008; Del Rio Vilas et al. 2013).

Other areas of OR that have made significant contributions to policy making include system dynamics (e.g. Forrester 1992; Morecroft 1988; Zagonel and Rohrbaugh 2008) and the use of performance measurement approaches in the public sector, for example data envelopment analysis, which originated with an application concerning the effectiveness of public schools (see Charnes et al. 1978) and has seen several applications in the public sector (for example Emrouznejad et al. 2008; Thanassoulis 1995).

Business analytics

Business analytics was initially developed mainly for the private sector (Davenport and Harris 2007; Davenport et al. 2010), although applications in other areas, such as health analytics and learning analytics, are also growing (e.g. Fitzsimmons 2010; Buckingham Shum 2012). Seen from a very pragmatic point of view, “analytics” is an umbrella term under which many different methods and approaches converge. These include statistics, data mining, business intelligence, knowledge engineering and extraction, decision support systems and, to some extent, OR and decision analysis. The key idea consists in developing methods through which it is possible to obtain information and knowledge useful for some purpose, typically the conduct of a decision process in some business application.

The distinctive feature in developing “analytics” has been to merge different techniques and methods in order to optimise both the learning dimension and its applicability in real decision processes. In recent years “analytics” has been associated with the term “big data”, reflecting the availability of large databases (and knowledge bases as well), possibly in open access (open data organisations are becoming increasingly common). Such data come in very heterogeneous forms, and a key challenge has been to merge such different sources, in addition to solving the hard algorithmic problems presented by the huge volume of data available.

However, the mainstream approach developed in these years is based on two restrictive hypotheses. The first is that the learning process is basically “data driven”, with little (if any) attention paid to the values which may also drive learning. This concerns both “what” matters and “why” it matters, potentially incorporating considerations of the extent to which different stakeholder perspectives are valued or trusted. The second is that, in order to guarantee the efficiency of the learning process, seen from an algorithmic point of view as a process of pattern recognition, it is necessary to use “learning benchmarks” against which the accuracy and efficiency of the algorithms can be measured. While this perspective makes sense for many applications of machine learning, it is less clear how useful it is in cases where learning concerns values, preferences, likelihoods, beliefs and other subjectively established information, which is potentially revisable as constructive learning takes place. We suggest that this calls for a new type of analytics, “Policy Analytics”, which we present in the next section.
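The benchmark-driven view of learning described above can be made concrete with a deliberately tiny sketch: a one-nearest-neighbour rule whose accuracy is scored against a held-out, pre-labelled benchmark (all data below are invented for illustration). The yardstick is fixed before the analysis starts, which is exactly what becomes problematic when the “labels” are values or preferences still under construction:

```python
def nearest_neighbour(train, query):
    """Predict the label of `query` from its closest training point."""
    return min(train, key=lambda point: abs(point[0] - query))[1]

# Invented labelled data: (feature value, label).
train = [(1.0, "low"), (2.0, "low"), (8.0, "high"), (9.0, "high")]

# A fixed, pre-labelled benchmark set against which accuracy is scored.
benchmark = [(1.5, "low"), (7.5, "high"), (5.4, "low")]

hits = sum(nearest_neighbour(train, x) == label for x, label in benchmark)
accuracy = hits / len(benchmark)  # 2 of 3 benchmark cases matched
```

The algorithm is judged solely by agreement with the benchmark labels; if those labels themselves were contested or revisable, as stakeholder values typically are, the accuracy figure would lose its meaning.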

Policy analytics

This section begins with a tentative definition of what we mean by “Policy Analytics”. We recognise that the definition may not be as precise as we would have wished, and indeed may evolve over time, but we have tried to capture its main facets.

A definition of the concept

To support policy makers in a way that is meaningful (in a sense of being relevant and adding value to the process), operational (in a sense of being practically feasible) and legitimating (in the sense of ensuring transparency and accountability), decision analysts need to draw on a wide range of existing data and knowledge (including factual information, scientific knowledge, and expert knowledge in its many forms) and to combine this with a constructive approach to surfacing, modelling and understanding the opinions, values and judgements of the range of relevant stakeholders. We use the term “Policy Analytics” to denote the development and application of such skills, methodologies, methods and technologies, which aim to support relevant stakeholders engaged at any stage of a policy cycle, with the aim of facilitating meaningful and informative hindsight, insight and foresight.

What can policy analytics offer?

So what can policy analytics offer to policy makers dealing with complex policy decisions? We suggest some avenues that might be explored by decision analysts.

In terms of data requirements, we see two major roles for policy analytics, as in business analytics. The first is to explore existing data sets. The second is to gather data and create new databases in order to explore particular issues relevant to policy makers. However, the nature of policy decisions, described above, makes these two roles very distinct from their business analytics counterparts.

In exploring data which already exist (such as citizens’ votes, preferences and demands, their relation to demographics, etc.), decision analysts must recognise that, because of the multiple stakeholders involved in and/or affected by a decision and the multiple objectives pursued by policy makers, benchmarks cannot be easily defined or arbitrarily set. Policy analytics thus needs to focus on constructive approaches, including constructive benchmarking and learning, to support decisions in the policy cycle. The need to construct benchmarks, rather than simply basing them on measured performances or behaviours or defining them arbitrarily, requires an understanding of stakeholders’ values and their power structure, which often also calls for their engagement in the design of the analysis. The need to promote constructive learning requires a focus on models for learning rather than models that give the “right” answer (De Geus 1988), and a recognition that the role of analytical models is a constructive one (Roy 1993; Watzlawick et al. 1974).

For example, consider a situation in which healthcare providers are assessed on the basis of the average time a patient stays in the hospital. This apparently simple indicator is subject to many contextual factors alongside differing social and political interpretations. For example, in some countries “long stays” may be considered by patients as an indicator of “ineffective” and “expensive” health services, in particular where payment for the cost of care is direct or through purchased insurance, as in the USA. On the other hand, in countries where healthcare is provided by the state “long stays” could be considered as high quality care, indicating a health service which does not discharge a patient until assured of a full recovery. Yet another perspective is that long stays are indicative of inadequate provision of continuing care, resulting in frustration for the patient who cannot be discharged from hospital because the appropriate support is not available in the community. We do not discuss whether such approaches to health are justified or not. What is important is to note that the same statistical information (data) is likely to be interpreted differently depending on context, values and culture.

Now consider the different stakeholders involved in health care management: the patients and their families, the health workers, the managers of the service and the political authorities under which health care is delivered as public policy. Once again, the same information (the average time a patient stays in hospital) will be viewed differently depending on the stakeholder’s objectives; for instance, the general manager of a hospital under pressure to increase the intake of new patients, versus the patient wishing to be fully treated, versus the politician wanting to be re-elected. In using information such as this to inform any policy decision, we need to consider its purpose, what is valued by whom and how, and the context and culture in which the policy is being developed and implemented. In order for a policy to be “legitimated”, it will need to address such multiple (possibly conflicting) concerns, or to be imposed by a policy maker whose legitimation was obtained beyond that precise context. In the first (and in practice more usual) case, legitimation is obtained by exchanging resources, a critical resource being information and knowledge. It is at this stage that “analytics” becomes crucial, since it should provide not just supporting information, but legitimating information11 (showing, for instance, that reducing the average hospital stay within a welfare state context will in the long term result either in worsening the quality of the service or in increasing its long-term costs). The construction of such information requires taking into account long-term effects and consequences, consistent with the timing of most policy cycles.
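The way a single statistic supports conflicting verdicts can itself be sketched in a few lines; the figure and the threshold rules below are entirely invented for illustration:

```python
# One shared statistic, read differently by (hypothetical) stakeholders.
average_stay = 9.0  # average hospital stay in days; illustrative figure

# Each stakeholder maps the same indicator onto a verdict through a
# different, value-laden rule; these rules are invented examples.
def hospital_manager(days):  # under pressure to free beds for new intake
    return "too long" if days > 6 else "acceptable"

def patient(days):           # wants full recovery before discharge
    return "reassuring" if days >= 8 else "rushed"

def insurer(days):           # pays per day of care
    return "expensive" if days > 5 else "acceptable"

verdicts = {
    "manager": hospital_manager(average_stay),
    "patient": patient(average_stay),
    "insurer": insurer(average_stay),
}
# One number, three conflicting verdicts.
```

The data-driven part (computing the average) is trivial; the policy-relevant part is the set of value-laden rules, which is exactly what a constructive, value-driven analysis would need to surface and negotiate.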

Perhaps an even more important role for policy analytics lies in its potential for creating new databases, gathering information relevant to the analysis and decision making, and exploring the data to support the policy-making cycle. Again, constructivism (Roy 1993; Watzlawick 1984) provides, in our view, a proper conceptual background for this role, given the complex nature of societal problems and the need to understand multiple perspectives, consider multiple impacts on different sectors of society, and assess options under multiple and often conflicting objectives.

Another key aspect is that policy analytics must emphasise value-driven analysis which can support value-driven decision-making (Keeney 1996), rather than being highly data-driven as is often the case in business analytics. A value-driven analysis recognises that alternative policies are means to achieve the values and objectives that society is pursuing. Different policies will have different impacts on the extent to which such values are achieved or upheld and, as these values often represent objectives held by different stakeholders, may impact unevenly on different segments of society.

Related to this, another opportunity, to date often neglected by policy analysis and public sector OR, lies in supporting the design of better policies from a value-driven perspective (Gregory and Keeney 1994; Montibeller and Franco 2011). Within this perspective, the analysis is seen as supporting pro-active policy making, which tries to address problems and improve society, instead of reactively coping with public dissatisfaction and complaints.

We envisage several opportunities for the use of policy analytics in organisational contexts. A major one is to embed analyses that inform decisions throughout the policy cycle: helping to identify issues, predicting the impacts of possible policies, supporting policy design, simulating policy implementation, and aiding the evaluation and monitoring of implemented policies. In Table 1 we suggest how policy analytics can support each step in the policy cycle and try to distinguish this from how business analytics would provide support in a similar business setting.
Table 1  The role of policy analytics in the policy cycle

Steps in the policy cycle | Business analytics | Policy analytics
--- | --- | ---
Issue identification | Definition of the issue by the analyst | Analyst understands the perspectives of different stakeholders
Defining policy objectives | Data-driven definition of attributes | Value-, culture- and stakeholder-driven definition of objectives
Policy design | Data-driven design of alternative policies | Innovative and value-driven design of alternative policies
Policy testing | Data-based testing and learning (data mining, predictive analysis) | Multiple tests to assess potential impacts (citizen surveys, data mining, prospective analysis, etc.)
Policy finalisation | Sensitivity analysis of the results, given the input parameters | Robustness analysis of the results, given the broad issues and multiple values considered
Policy implementation | Implementation is typically straightforward, given the issue considered | Analysis supports implementation, mapping resistance and side effects of the policy
Policy monitoring and evaluation | Evaluation conducted against the success criterion initially set | Multiple and contested success criteria; evaluation is value- and stakeholder-based
Policy readjustment and innovation | Innovation is data-driven and thus reactive | Innovation is value-driven and thus proactive

We conclude the paper by revisiting the motivations raised in the introduction, discussing how policy analytics might help to address them, and suggesting several directions for further development in this field.

Conclusions and further directions of development

In this paper we have suggested a conceptualisation for policy analytics and proposed a framework for its use in supporting public decision processes occurring within a policy cycle. We argued that there is demand for policy analysis, and briefly reviewed methods that provide formal analysis in this context, such as public policy analysis, public sector OR, decision analysis, and data envelopment analysis applied to assessing public policies.

We also argued that business analytics, while powerful and increasingly applied, was developed mainly for supporting decisions in the private sector and rests on two limiting hypotheses: it is data-driven and it requires clear benchmarks to be set. We believe both are challenging if one wants to apply analytics in supporting the policy cycle.

Let us now discuss the three motivations for which we started this discussion:
  • Are practitioners involved in supporting decision processes in the public domain aware of the potential to use analytics? Only to some extent. First, the types of analytics readily available have been designed for business purposes. Despite being generally helpful (in allowing data to support decision processes more effectively), business analytics may not fully fit the requirements of the policy cycle as presented in this paper. Second, policy making is, and will remain, an essentially value-driven decision process. In addition to learning from data we need to learn from values; under this perspective practitioners need tools, methods and models that allow such considerations. Furthermore, the clients (policy makers, citizens, social groups, various stakeholders) also need to understand the difference between data-driven and value-driven analysis: if offering simplistic data-driven decision support can be misleading, demanding simplistic data-driven policy support can be equally misleading. Policies are not in the data; they are in the values. Third, such tools, methods and models need to support the whole policy cycle.

  • Is decision analysis training appropriate for helping the integration of traditional decision analytic tools and analytics in the area of public policy making? We think perhaps not entirely. Most training of decision analysts does not address the specific requirements of the policy cycle, while most training of policy scientists does not address the use of formal methods of decision support. We need to establish training for decision analysts that covers the whole spectrum of policy cycle issues, potentially from an interdisciplinary perspective and incorporating awareness of the social and political contexts of policy making. In doing so we must view decision support as a methodology, not just a collection of tools and methods. Decision analysts need to understand the complexity of the policy cycle, to be flexible enough to shift from one approach to another (from problem structuring methods to quantitative modelling, from learning procedures to justification construction), to be able to integrate different methodologies and methods in a coherent and effective manner (Bennett 1985; Mingers and Brocklesby 1997; Brown et al. 2006; Pollack 2007), and to be aware of the possibilities offered by new technologies while being inventive enough to construct new paths within them.

  • Are researchers giving appropriate consideration to the relationships between the specificities of public policy making and analytics? The above discussion introduces a number of challenges for researchers, and if such challenges are to be pursued, our research needs to address them. In the following we present an agenda of issues we consider relevant (though by no means exhaustive).

  • Preference learning: if policy making should reflect societal values, then we need to learn about them (the policy makers’ values, the stakeholders’ values, etc.). Values are often operationalised as preference statements, either comparative or absolute. Preference learning is becoming an increasingly important research area addressing such issues (see Fürnkranz and Hüllermeier 2010), although more from a machine-learning perspective; Belton et al. (2009) discuss the interaction between individual learning and model learning and the associated research challenges in the context of interactive decision making. A more constructive learning approach needs to be developed (see Bruner 1986; Mousseau 2005).
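
As a minimal illustration of the machine-learning end of this spectrum, the sketch below recovers attribute weights from pairwise preference statements (“option a is preferred to option b”) with a simple perceptron on attribute-difference vectors; the options, the hidden weights and the elicitation margin are all invented for the example.

```python
import numpy as np

# Hypothetical sketch: learning a linear value function from pairwise
# preference statements. A hidden weight vector generates the statements;
# a perceptron on attribute differences recovers weights consistent with them.

rng = np.random.default_rng(0)
true_w = np.array([0.5, 0.3, 0.2])        # hidden "societal" weights (invented)
options = rng.random((40, 3))             # options scored on 3 attributes

# Elicited statements: pairs (a, b) with a clearly preferred to b
pairs = []
for _ in range(200):
    a, b = rng.integers(0, len(options), 2)
    if true_w @ options[a] > true_w @ options[b] + 0.05:   # small margin
        pairs.append((a, b))

w = np.zeros(3)
for _ in range(500):                      # perceptron passes over the statements
    mistakes = 0
    for a, b in pairs:
        diff = options[a] - options[b]
        if w @ diff <= 0:                 # statement violated: update weights
            w += diff
            mistakes += 1
    if mistakes == 0:                     # all statements reproduced
        break

violations = sum(1 for a, b in pairs if w @ (options[a] - options[b]) <= 0)
print("violated statements:", violations)
```

Because the statements are generated by a linear function with a positive margin, the perceptron converges to weights that reproduce every statement; with real elicited preferences, inconsistency and intransitivity are exactly where the constructive learning approaches cited above become necessary.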

  • Scenario planning: the long-term implications of policy making imply the need to consider the range of possible futures, sometimes characterised by deep uncertainty, calling for the development of future scenarios (see Godet 2000; Montibeller et al. 2006; Schroeder and Lambert 2011; Ram and Montibeller 2012; Stewart et al. 2013). Further research on how scenarios are constructed (from paths along a decision tree to precise configurations of interaction spaces or arenas) and on how to address issues of robustness in scenario planning would be welcome (see Liesiö et al. 2007; Levy et al. 2000; Perny and Spanjaard 2003; Roy 1998; Vincke 1999; Wong and Rosenhead 2000).
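
One common robustness criterion in this literature can be sketched in a few lines: choose the policy minimising the maximum regret across scenarios. The policies, scenarios and payoffs below are invented for illustration.

```python
# Hypothetical sketch of a robustness check: minimax regret across scenarios.
# Payoffs (higher is better) are invented for the example.

payoffs = {                      # policy -> payoff under each scenario
    "policy A": {"growth": 9, "stagnation": 2, "crisis": 1},
    "policy B": {"growth": 6, "stagnation": 5, "crisis": 4},
    "policy C": {"growth": 4, "stagnation": 4, "crisis": 5},
}
scenarios = ["growth", "stagnation", "crisis"]

# Best achievable payoff in each scenario
best = {s: max(p[s] for p in payoffs.values()) for s in scenarios}

# Each policy's worst shortfall from the scenario-wise best
max_regret = {name: max(best[s] - p[s] for s in scenarios)
              for name, p in payoffs.items()}

robust_choice = min(max_regret, key=max_regret.get)
print(robust_choice, max_regret)
```

Note that the robust choice here is not the best policy in any single scenario, which is the typical tension between optimising against one forecast and hedging across several futures.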

  • Argumentation theory: as Habermas (1981) argued, legitimated policies are those which are appropriately explained, justified, supported and not successfully confuted (i.e. argued). Argumentation theory (Aristotle 1990; Schopenhauer 1864; Toulmin 1958) establishes a formal and rational framework for how to construct, use, exchange and confute arguments. Although rarely used to support policy analysis (for exceptions see Atkinson et al. 2004; Cartwright and Atkinson 2008; Modgil and Prakken 2012; Rehg et al. 2005), there is scope for further investigation along several dimensions: the construction of explanations and justifications; the construction of argumentation scenarios; the issue of legitimate arguments (in collective debate); and the relationship between game theory and argumentation theory.

  • Support for problem structuring and formulation: a large part of the decision support activities occurring within a policy cycle are about understanding, formulating and structuring “problems”. Problem structuring methods (see Franco et al. 2006; Shaw et al. 2007) are now widely acknowledged as part of the decision analytic toolkit, and there is a growing but still small body of research and practice on how to integrate such methods with other formal and/or quantitative methods (for examples see Belton et al. 1997; Bana e Costa et al. 1999; Belton and Stewart 2010; Montibeller and Belton 2006; Montibeller et al. 2007a, 2007b; and Howick and Ackermann 2011 for a survey of practice in OR). More importantly, from the viewpoint of analytics, there are challenges in conducting problem formulation and structuring in the absence of the small groups of decision makers that have typically characterised traditional applications of problem-structuring methods.

  • Reformulation of decision problems: policy cycles often involve long decision processes, which are also “learning processes” for the stakeholders involved. Updating, revising and reformulating decision models are regular activities within policy cycles, and all could potentially be supported. This issue has already been addressed in the literature, both from a general point of view (see Gärdenfors 1988; Katsuno and Mendelzon 1991) and for specific decision support purposes (see Liberti 2009; Tsoukiàs 1991), but we are still far from a comprehensive framework that could also be applied in practice (although specific formal languages now exist for some classes of reformulation problems, see Liberti et al. 2010).

  • Design of alternatives: most decision problems discussed in the literature take the set of alternatives to which they apply as “given”, although in practice such a set frequently needs to be constructed. Little in the literature addresses this problem (for an overview see Belton and Stewart 2002, 2010; Franco and Montibeller 2011), despite awareness of it (for example, Keeney 1996; Goodwin and Wright 1998; Keller and Ho 1988). Yet policy makers rarely arrive with established alternatives: most of the policy cycle is about designing or constructing them. Indeed, much “smart” policy making is about the “innovative design” of “innovation policies” (Montibeller and Franco 2011), that is, designing alternatives considered inconceivable at that moment in time (creative design). Simon (1954) had already discussed this cognitive activity in his seminal work, without providing operational and/or formal methods for addressing it. More recently Hatchuel and Weil (2009) introduced C–K (concept–knowledge) theory as a general design theory, opening a way to extend decision analysis by operationally addressing this issue; it merits further exploration. There have also been suggestions for value-focused brainstorming of decision alternatives (Keeney 2012; Montibeller et al. 2009), an approach resonant with Corner et al.’s (2001) dynamic decision problem structuring.

  • Decision aiding practice: the great majority of research in our field concerns “theory”. As emphasised in Tsoukiàs (2007), it focuses on how decisions are “taken”, underestimating both the theoretical and practical problems of how to support decision processes. Moreover, while the introduction of decision models within an organisational context has been discussed in the literature (Nutt 1993), very little has been done for inter-organisational contexts (see Munda 2008). Given the specificities of the policy cycle, exploring the practical aspects of providing decision support (such as gaining access to policy makers, interacting with decision makers, conveying the results of decision analyses, and learning from previous interventions) remains a key research issue.

  • Interdisciplinary research: the research priorities above clearly call for more interdisciplinary research engaging disciplines such as artificial intelligence, computer science, sociology, policy analysis, decision analysis and the cognitive sciences. The reader will note that the reference list of this paper already cites contributions from all these areas. Either policy analytics will emerge as a strong interdisciplinary research area or it will never succeed.

To conclude, we hope to have shown that policy analytics represents a key opportunity for the future of decision analysis, but one that poses challenges with implications for research, training and practice. Perhaps most importantly, it may provide an opportunity for decision analysts to improve the world around us.

References

  1. Ackerman F, Heinzerling L (2004) Priceless: on knowing the price of everything and the value of nothing. The New Press, New York
  2. Adler MD, Posner ED (2006) New foundations of cost-benefit analysis. Harvard University Press, Cambridge
  3. Almquist R, Grossi G, Jan van Helden G, Reichard Ch (2012) Public sector governance and accountability. Crit Perspect Account. doi:10.1016/j.cpa.2012.11.005
  4. Aristotle (1990) Nicomachean ethics (Bywater I, ed; originally published 350 BC). Oxford University Press, Oxford
  5. Atkinson K, Bench-Capon T, McBurney P (2004) PARMENIDES: facilitating democratic debate. In: Electronic government, LNCS 3183. Springer, Berlin, pp 313–316
  6. Bana e Costa CA, Ensslin L, Correa EC, Vansnick J-C (1999) Decision support systems in action: integrated application in a multicriteria decision aid process. Eur J Oper Res 113:315–335
  7. Bana e Costa CA, Lourenço JC, Chagas MP, Bana e Costa JC (2008) Development of reusable bid evaluation models for the Portuguese electric transmission company. Decis Anal 5:22–42
  8. Belton V, Stewart TJ (2002) Multiple criteria decision analysis: an integrated approach. Kluwer Academic, Dordrecht
  9. Belton V, Stewart TJ (2010) Problem structuring for multiple criteria analysis. In: Greco S, Figueira JR, Ehrgott M (eds) New trends in multicriteria decision analysis. Springer International Series in Operations Research and Management Science, pp 209–240
  10. Belton V, Ackermann F, Shepherd I (1997) Integrated support from problem structuring through to alternative evaluation using COPE and V∙I∙S∙A. J Multi-Criteria Decis Anal 6:115–130
  11. Belton V, Branke J, Eskelinen P, Greco S, Molina J, Ruiz F, Slowinski R (2009) Interactive multiobjective optimization from a learning perspective. In: Branke J, Deb K, Miettinen K, Slowinski R (eds) Multiobjective optimization: interactive and evolutionary approaches. LNCS 5252. Springer, Berlin, pp 405–434
  12. Bennett PG (1985) On linking approaches to decision aiding: issues and prospects. J Oper Res Soc 36:659–669
  13. Bouyssou D, Marchant Th, Pirlot M, Perny P, Tsoukiàs A, Vincke Ph (2000) Evaluation and decision models: a critical perspective. Kluwer Academic, Dordrecht
  14. Brown J, Cooper C, Pidd M (2006) A taxing problem: the complementary use of hard and soft OR in the public sector. Eur J Oper Res 172:666–679
  15. Bruner J (1986) Actual minds, possible worlds. Harvard University Press, Cambridge
  16. Buckingham Shum S (2012) Learning analytics. UNESCO Policy Brief
  17. Cartwright D, Atkinson K (2008) Political engagement through tools for argumentation. In: Proceedings of COMMA, pp 116–127
  18. Charnes A, Cooper W, Rhodes E (1978) Measuring the efficiency of decision-making units. Eur J Oper Res 2:429–444
  19. Corner J, Buchanan J, Henig M (2001) Dynamic decision problem structuring. J Multi-Criteria Decis Anal 10:129–141
  20. Dasgupta P, Pearce DW (1972) Cost-benefit analysis: theory and practice. Macmillan, Basingstoke
  21. Davenport TH, Harris JG (2007) Competing on analytics: the new science of winning. Harvard Business Press, Boston
  22. Davenport TH, Harris JG, Morison R (2010) Analytics at work: smarter decisions, better results. Harvard Business Press, Boston
  23. Davies PT (2004) Is evidence-based government possible? Jerry Lee lecture. http://www.nationalschool.gov.uk/policyhub/downloads/JerryLeeLecture1202041.pdf
  24. De Geus AP (1988) Planning as learning. Harv Bus Rev 66:70–74
  25. De Marchi G, Lucertini G, Tsoukiàs A (2012) From evidence based policy making to policy analytics. Cahier du LAMSADE 319, Université Paris Dauphine, Paris
  26. Del Rio Vilas VJ, Voller F, Montibeller G, Franco LA, Sribashayam S, Watson E, Hartley M, Gibbens JC (2013) An integrated process and management tools for ranking multiple emerging threats to animal health. Prev Vet Med 108:94–102
  27. Dollery B, Worthington A (1996) The evaluation of public policy: normative economic theories of government failure. J Interdiscip Econ 7:27–39
  28. Dorling D, Simpson S (1999) Statistics in society. Hodder Arnold, London
  29. Dryzek JS (2006) Policy analysis as critique. In: Moran M, Rein M, Goodin RE (eds) The Oxford handbook of public policy. Oxford University Press, Oxford, pp 190–203
  30. Dunn WN (2012) Public policy analysis, 5th edn. Pearson
  31. Emrouznejad A, Barnett RP, Tavares G (2008) Evaluation of research in efficiency and productivity: a survey and analysis of the first 30 years of scholarly literature in DEA. Socio-Econ Plan Sci 42:151–157
  32. Fitzsimmons P (2010) Rapid access to information: the key to cutting costs in the NHS. Br J Healthc Manag 16:448–450
  33. Forrester JW (1992) Policies, decisions and information sources for modelling. Eur J Oper Res 59:42–63
  34. Franco LA, Montibeller G (2010) Facilitated modelling in operational research. Eur J Oper Res 205:489–500
  35. Franco LA, Montibeller G (2011) Problem structuring for multicriteria decision analysis interventions. In: Cochran JJ et al (eds) Wiley encyclopedia of operations research and management science. Wiley, USA
  36. Franco LA, Shaw D, Westcombe M (2006) Special issue: problem structuring methods I. J Oper Res Soc 57:757–883
  37. Fürnkranz J, Hüllermeier E (2010) Preference learning. Springer, Berlin
  38. Gärdenfors P (1988) Knowledge in flux. MIT Press, Cambridge
  39. Godet M (2000) The art of scenarios and strategic planning: tools and pitfalls. Elsevier Science, New York
  40. Goodwin P, Wright G (1998) Decision analysis for management judgment. Wiley, New York
  41. Gregory RS, Keeney RL (1994) Creating policy alternatives using stakeholder values. Manag Sci 40:1035–1048
  42. Habermas J (1981) The theory of communicative action (trans: McCarthy T, 1984–1987). Polity, Cambridge
  43. Habermas J (1990) On the logic of the social sciences. MIT Press, Cambridge
  44. Hatchuel A, Weil B (2009) C–K design theory: an advanced formulation. Res Eng Des 19:181–192
  45. Hill M (1997) The public policy process. Pearson Education, Harlow
  46. Howick S, Ackermann F (2011) Mixing OR methods in practice: past, present and future directions. Eur J Oper Res 215:503–511
  47. Juntti M, Russel D, Turnpenny J (2009) Evidence, politics and power in public policy for the environment. Environ Sci Policy 12:207–215
  48. Katsuno H, Mendelzon AO (1991) On the difference between updating a knowledge base and revising it. In: Proceedings of KR’91, pp 387–394
  49. Keeney RL (1996) Value-focused thinking: a path to creative decision making. Harvard University Press, Cambridge
  50. Keeney RL (2012) Value-focused brainstorming. Decis Anal. doi:10.1287/deca.1120.0251
  51. Keeney RL, Raiffa H (1993) Decisions with multiple objectives: preferences and value tradeoffs, 2nd edn. Cambridge University Press, Cambridge
  52. Keller LR, Ho JL (1988) Decision problem structuring: generating options. IEEE Trans Syst Man Cybern 18:715–728
  53. Kirby MW (2000) Operations research trajectories: the Anglo-American experience from the 1940s to the 1990s. Oper Res 48:661–670
  54. Kraft M, Furlong SR (2007) Public policy: politics, analysis and alternatives, 2nd edn. CQ Press, Washington
  55. Larson RC, Odoni AR (1981) Urban operations research. Prentice Hall, NJ
  56. Lasswell HD (1956) The decision process: seven categories of functional analysis. University of Maryland Press, College Park
  57. Levy JK, Hipel KW, Kilgour DM (2000) Using environmental indicators to quantify the robustness of policy alternatives to uncertainty. Ecol Model 130:79–86
  58. Liberatore M, Luo W (2010) The analytics movement: implications for operations research. Interfaces 40:313–324
  59. Liberti L (2009) Reformulations in mathematical programming: definitions and systematics. RAIRO Oper Res 43:55–85
  60. Liberti L, Cafieri S, Savourey D (2010) The reformulation-optimization software engine. In: Proceedings of ICMS, pp 303–314
  61. Liesiö J, Mild P, Salo A (2007) Preference programming for robust portfolio modeling and project selection. Eur J Oper Res 181:1488–1505
  62. Lindblom CE, Woodhouse EJ (1993) The policy-making process, 3rd edn. Prentice Hall, NJ
  63. Merkhofer MW, Keeney RL (1987) A multiattribute utility analysis of alternative sites for the disposal of nuclear waste. Risk Anal 7:173–194
  64. Mingers J, Brocklesby J (1997) Multimethodology: towards a framework for mixing methodologies. Omega 25:489–509
  65. Mingers J, Rosenhead J (2001) Rational analysis for a problematic world revisited: problem structuring methods for complexity, uncertainty and conflict. Wiley, USA
  66. Modgil S, Prakken H (2012) A general account of argumentation with preferences. Artif Intell. doi:10.1016/j.artint.2012.10.008
  67. Montibeller G, Belton V (2006) Causal maps and the evaluation of decision options. J Oper Res Soc 57:779–791
  68. Montibeller G, Franco LA (2011) Raising the bar: strategic multi-criteria decision analysis. J Oper Res Soc 62:855–867
  69. Montibeller G, Gummer H, Tumidei D (2006) Combining scenario planning and multi-criteria decision analysis in practice. J Multi-Criteria Decis Anal 14:5–20
  70. Montibeller G, Belton V, Lima MVA (2007a) Supporting factoring transactions in Brazil using reasoning maps: a language-based DSS for evaluating accounts receivable. Decis Support Syst 42:2085–2092
  71. Montibeller G, Belton V, Ackermann F, Ensslin L (2007b) Reasoning maps for decision aid: an integrated approach for problem-structuring and multi-criteria evaluation. J Oper Res Soc 59:575–589
  72. Montibeller G, Franco LA, Lord E, Iglesias A (2009) Structuring resource allocation decisions: a framework for building multi-criteria portfolio models with area-grouped options. Eur J Oper Res 199:846–856
  73. Moran M, Rein M, Goodin RE (2006) The Oxford handbook of public policy. Oxford University Press, Oxford
  74. Morecroft JDW (1988) System dynamics and microworlds for policymakers. Eur J Oper Res 35:301–320. doi:10.1016/0377-2217(88)90221-4
  75. Morton A, Airoldi M, Phillips LD (2009) Nuclear risk management on stage: a decision analysis perspective on the UK’s Committee on Radioactive Waste Management. Risk Anal 29:764–779. doi:10.1111/j.1539-6924.2008.01192.x
  76. Mousseau V (2005) A general framework for constructive learning preference elicitation in multiple criteria decision aid. Cahier du LAMSADE 223, Université Paris Dauphine, Paris
  77. Munda G (2008) Social multi-criteria evaluation for a sustainable economy. Springer, Berlin
  78. Munger MC (2000) Analyzing policy: choices, conflicts, and practices. WW Norton, New York
  79. Nas TF (1996) Cost-benefit analysis: theory and application. Sage, London
  80. Nutley SM, Walter I, Davies HTO (2003) From knowing to doing: a framework for understanding the evidence-into-practice agenda. Evaluation 2:125–148
  81. Nutt PC (1993) The formulation processes and tactics used in organizational decision making. Organ Sci 4:226–251
  82. Ostanello A, Tsoukiàs A (1993) An explicative model of ‘public’ interorganisational interactions. Eur J Oper Res 70:67–82
  83. Ostrom E (1986) An agenda for the study of institutions. Public Choice 48:3–25
  84. Parsons DW (1995) Public policy: an introduction to the theory and practice of policy analysis. Edward Elgar, UK
  85. Perny P, Spanjaard O (2003) An axiomatic approach to robustness in search problems with multiple scenarios. In: Proceedings of UAI 2003, pp 469–476
  86. Pfeffer J, Sutton RI (2006) Hard facts, dangerous half-truths and total nonsense: profiting from evidence-based management. Harvard Business Press, Boston
  87. Pollack J (2007) Multimethodology in series and parallel: strategic planning using hard and soft OR. J Oper Res Soc 58:1–12
  88. Pollock SM, Rothkopf MH, Barnett A (1994) Operational research and the public sector. North-Holland, Amsterdam
  89. Raiffa H (1968) Decision analysis: introductory lectures on choices under uncertainty. Addison-Wesley, Reading
  90. Ram C, Montibeller G (2012) Exploring the impact of evaluating strategic options in a scenario-based multi-criteria framework. Technol Forecast Soc Change
  91. RAND (1996) 50th: Project Air Force. RAND Corporation. http://www.rand.org/content/dam/rand/www/external/publications/PAFbook.pdf
  92. Rehg W, McBurney P, Parsons S (2005) Computer decision-support systems for public argumentation: assessing deliberative legitimacy. AI Soc 19:203–228
  93. Rosenhead J (1981) Operational research in urban planning. Omega 9:345–364
  94. Rosenhead J (1992) Into the swamp: the analysis of social issues. J Oper Res Soc 43:293
  95. Rosoff H, von Winterfeldt D (2007) A risk and economic analysis of dirty bomb attacks on the ports of Los Angeles and Long Beach. Risk Anal 27:533–546
  96. Roy B (1993) Decision science or decision-aid science? Eur J Oper Res 66:184–203
  97. Roy B (1998) A missing link in OR-DA: robustness analysis. Found Comput Decis Sci 23:141–160
  98. Salmenkaita J-P, Salo A (2002) Rationales for government intervention in the commercialisation of new technologies. Technol Anal Strateg Manag 14:183–200
  99. Schopenhauer A (1864) Dialektik. In: Frauenstädt J (ed) Schopenhauers handschriftlicher Nachlaß. Brockhaus, Leipzig
  100. Schroeder MJ, Lambert JH (2011) Scenario-based multiple criteria analysis for infrastructure policy impacts and planning. J Risk Res 14:191–214
  101. Shapiro S, Morrall JF (2012) The triumph of regulatory politics: benefit-cost analysis and political salience. Regul Gov 6:189–206
  102. Shaw D, Franco LA, Westcombe M (2006) Problem structuring methods: new directions in a problematic world. J Oper Res Soc 57:757–758
  103. Shaw D, Franco LA, Westcombe M (2007) Special issue: problem structuring methods II. J Oper Res Soc 58:545–700
  104. Simon HA (1954) A behavioral model of rational choice. Q J Econ 69:99–118
  105. Smit HTJ, Trigeorgis L (2004) Strategic investment: real options and games. Princeton University Press, Princeton
  106. Stewart TJ, French S, Rios J (2013) Integrating multicriteria decision analysis and scenario planning: review and extension. Omega 41:679–688
  107. Tavakoli M, Davies HTO, Thomson R (2000) Decision analysis in evidence-based decision making. J Eval Clin Pract 6:111–120
  108. Thanassoulis E (1995) Assessing police forces in England and Wales using data envelopment analysis. Eur J Oper Res 87:641–657
  109. Toulmin S (1958) The uses of argument. Cambridge University Press, Cambridge
  110. Trigeorgis L (1990) A real options application in natural resource investments. Adv Futures Options Res 4:153–164
  111. Tsoukiàs A (1991) Preference modelling as a reasoning process: a new way to face uncertainty in multiple criteria decision support systems. Eur J Oper Res 55:309–318
  112. Tsoukiàs A (2007) On the concept of decision aiding process. Ann Oper Res 154:3–27
  113. Tsoukiàs A (2008) From decision theory to decision aiding methodology. Eur J Oper Res 187:138–161
  114. Vincke Ph (1999) Robust solutions and methods in decision aid. J Multi-Criteria Decis Anal 8:181–187
  115. Watzlawick P (1984) The invented reality: how do we know what we believe we know? Norton, New York
  116. Watzlawick P, Weakland JH, Fisch R (1974) Change: principles of problem formation and problem resolution. Norton, New York
  117. White L, Bourne H (2007) Voices and values: linking values with participation in OR/MS in public policy making. Omega 35:588–603
  118. Wong H-Y, Rosenhead J (2000) A rigorous definition of robustness analysis. J Oper Res Soc 51:176–182
  119. World Bank (2010) Cost-benefit analysis in World Bank projects. World Bank, Washington. http://siteresources.worldbank.org/INTOED/Resources/cba_overview.pdf
  120. Zagonel AA, Rohrbaugh J (2008) Using group model building to inform public policy making and implementation. In: Qudrat-Ullah H, Spector JM, Davidsen PI (eds) Complex decision making. Springer, Berlin, pp 113–138

Copyright information

© Springer-Verlag Berlin Heidelberg and EURO - The Association of European Operational Research Societies 2013

Authors and Affiliations

  • Alexis Tsoukias (1), email author
  • Gilberto Montibeller (2)
  • Giulia Lucertini (1)
  • Valerie Belton (3)

  1. LAMSADE-CNRS, Université Paris Dauphine, Paris, France
  2. Department of Management, London School of Economics, London, UK
  3. Department of Management Science, University of Strathclyde, Glasgow, UK