
1 Introduction

The wider availability of data and the growing technological advancements in data collection, management, and analysis introduce unprecedented opportunities, as well as complexity, in policy making. This condition calls into question the very basis of the policy making process and pushes it towards new interpretative models.

Growing data availability, in fact, increasingly affects the way we analyse urban problems and make decisions for cities: data are a promising resource for more effective decisions, as well as for better interacting with the context where decisions are implemented.

Such multiplicity of data and its different sources poses several challenges to policy making. First, the availability of a large amount of data improves the accuracy and completeness of the measurements to capture phenomena that were previously difficult to investigate but, at the same time, increases the level of complexity of the approaches needed to process, integrate, and analyse this data (Einav and Levin 2013).

Second, processing data is neither neutral nor irrelevant for its usability in decision making processes. The selection and interpretation of the large amount of unstructured information deriving from data requires a human-based approach aimed at determining which emerging correlations between data are significant and which are not. In doing so, the tools used to examine data are crucial, considering that non-human agents develop potentially partial ways of understanding the world around them (Mattern 2017) and that some tools, such as algorithms, can act as technical counters to liberty (Greenfield 2017, p. 257).

Third, the huge amount of real-time, automated, volunteered data pushes towards an epistemological change in the methodological approaches of empirical sciences, transforming how we observe and interpret urban phenomena, moving from a “hypothetical-deductive method, driven by an incremental process of falsification of previous hypotheses” to “an inductive analysis at a scale never before possible” (Rabari and Storper 2015, p. 33). In addition to using data to test previous hypotheses, new phenomena and correlations between them may emerge as the result of the massive processing of data (Kitchin 2014), with repercussions in decision making activities in a short-medium term planning perspective.

Finally, while data is a non-neutral tool for addressing planning issues, the actors that produce, manage and own data, both public and private—with the latter typically being corporations active in fields outside traditional regulations—configure an unprecedented geography of power, a more complex arena in which urban problems are defined, discussed and finally addressed by new constellations of actors.

These different implications and conditions related to the larger availability of data, from data production, management, and analytics to its potential in decision making processes for both private and public actors, find synthesis in the expression "data shake".

By dealing with the operative implications of using a growing amount of available data in policy making processes, this article discusses the opportunity offered by data in the design, implementation, and evaluation of a planning policy, starting from a critical review of evidence-based policy making approaches (Sect. 1.2) and then introducing the relevance of data in policy design experiments (Sect. 1.3) and the conditions for its use. Acknowledging the impossibility of simply relying on data for framing urban issues and possible solutions to them, and considering the potential disruptions brought by data into urban planning practices, this paper focuses on the policy processes where data is used, rather than simply on the technological solutions fostered by data.

2 Evidence-Based Policy Making: New Chances Coming from the Data Shake

2.1 About Evidence-Based Policy Making

Evidence-based policy making (EBPM) represents an effort, started some decades ago, to innovate and reform policy processes for the sake of more reliable decisions; the concept considers evidence as a key reference for prioritizing the decision criteria to adopt (Lomas and Brown 2009; Nutley et al. 2007; Pawson 2006; Sanderson 2006). The key idea is to avoid, or at least reduce, policy failures rooted in the ideological dimension of the policy process by adopting a rationality with a solid scientific basis. The idea that evidence should come from scientific experts and guide policy makers' activities appeared, and still appears, a panacea to several scientists in the policy making and analysis domain: this makes evidence-based policy making a sort of expectation against which policy makers, and political actors in general, can be judged (Parkhurst 2017, p. 4).

The evidence-based policy movement, as Howlett (2009) defines it, is just one effort among several others undertaken by governments to enhance the efficiency and effectiveness of public policy making. In these efforts, it is expected that, through a process of theoretically informed empirical analysis, governments can better learn from experience, avoid errors, and reduce policy-related contestations.

Finding a clear definition of the concept is not easy. In the policy literature, the meaning is considered sort of "self-explaining" (Marston and Watts 2003) and is associated with empirical research findings. Many scholars consider the evidence-based policy concept as evolving from the inspiring experience in medicine: here, research findings are key references for clinical decisions, and evidence is developed according to the so-called "gold standard" of evidence gathering, the "randomized controlled trial", a comparative approach to assessing treatments against placebos (Trinder 2000). Following the prominence the approach gained in medicine and healthcare, researchers and policy activists increasingly pushed for evidence-based approaches in other domains of policy making more related to the social sciences and to evidence produced by social science research, covering a wider range of governmental decision making processes (Parsons 2001).

Moreover, the spread of the evidence-based concept in policy making corresponds to the infiltration of instrumentalism into public administration practices following the managerial reforms of recent decades: the key value assigned to effectiveness and efficiency by managerialism represented a driving force for evidence-based policies (Trinder 2000, p. 19), thus emphasizing procedures, sometimes at the expense of substance.

The key discussion concerns what makes evidence count as such: the evidence-based approach in policy making is strictly tied to the empirical procedure that makes evidence reliable. The spread of the concept pushed the social sciences to reconsider their procedural and methodological approaches to collecting evidence, although the research categories of social science lack deeply structured empirical approaches.

"Evidence matters for public policy making", as Parkhurst (2017, p. 3) underlines by presenting and discussing three examples (Footnote 1), despite the concept attracting several criticisms and concerns, which together accuse the supporters of the evidence-based concept of being scarcely aware of the socio-political complexity of policy making processes. Howlett (2009) has summarized such criticisms and concerns in four main categories (Table 1.1).

Table 1.1 Key concerns raised about the emphasis on evidence in policy making

Public policy issues have a prevailingly contested, socio-political nature that amplifies the complexity of evidence creation processes: decision processes in public policy making are not a standard, rational decision exercise; they are rather a "struggle over ideas and values" (Russell et al. 2008, p. 40, quoted by Parkhurst 2017, p. 5), related to visions of the future and to principles, and thus hardly manageable through rational approaches and science (Footnote 2). In this respect, Parsons (2001) considers that, when values are involved more than facts and evidence, policy processes are required which are "more democratic and which can facilitate … deliberation and public learning" (p. 104).

2.2 Evidence-Based Policy Making and the Data Shake: The Chance for Learning

The increasing production of huge amounts of data, its growing availability to different political subjects, and the wide exploration of data's potential in decision making for both private and public actors are proceeding in parallel with the fast advancement of technologies for data production, management, and analytics. This is what we call the data shake, and it is related not only to the ever larger availability of data but also to the ever faster availability of data-related technologies. As a consequence, data-driven approaches are being applied to several diverse policy sectors: from health to transport policy, from immigration to environmental policy, from industrial to agricultural policy. This is shaking many domains and, as never before, also the social science domain: the larger availability of data and of easy-to-use data-related technologies makes data usable also by non-experts, thus widening the engagement with the complexity of social phenomena.

Nevertheless, although the data shake appears to have promising and positive consequences for policy and policy making, the existing literature underlines some consolidated critical factors affecting the chance for data to fulfil such a promise. As highlighted by Androutsopoulou and Charalabidis (2018, p. 576), one of the key factors is "the demand for broader and more constructive knowledge sharing between public organisations and other societal stakeholders (private sector organisations, social enterprises, civil society organisations, citizens)." Since policy issues "require negotiation and discourse among multiple stakeholders with heterogeneous views", "tools that allow easy data sharing and rapid knowledge flows among organisations and individuals have the potential to manage knowledge facilitating collaboration and convergence". The response to such a demand implies that organizations have the relevant expertise to adopt the "right" data, among the wide range of available data sets, to analyse it, and to produce the effective evidence needed to guarantee knowledge production and sharing.

Another key factor is related to the use of data when dealing with social problems: as again highlighted by Androutsopoulou and Charalabidis (2018), there is an issue of proper use of data to develop a reliable description of the problem and to formulate effective policy measures. Also in this case, the selection of the proper data set or sets, the application of a data integration strategy, the design of analytical tools or models that are effective in representation without losing the richness of information embedded in the data, and the consequent formulation of effective policy measures consistent with the problem description are not simple rational decisions; they also imply approaches to public debate in order to negotiate both the vision and interpretation of the social problem and the solution to adopt.

The simple existence of more and different data, and the related availability of technical tools, does not guarantee the solution of the issues identified by the opponents of the evidence-based policy making concept. This last point explains why Cairney (2017, pp. 7–9) concludes that attention is needed to the politics of evidence-based policy making: scientific technology and methods to gather information "have only increased our ability to reduce but not eradicate uncertainty about the details of a problem. They do not remove ambiguity, which describes the ways in which people understand problems in the first place, then seek information to help them understand them further and seek to solve them. Nor do they reduce the need to meet important principles in politics, such as to sell or justify policies to the public (to respond to democratic elections) and address the fact that there are many venues of policy making at multiple levels (partly to uphold a principled commitment, in many political systems, to devolve or share power)".

Better evidence, possibly available thanks to the data shake, may eventually prove that a decision is needed on a specific issue, or prove the existence of the issue itself; still, it cannot yet clarify whether the issue is the first priority to be considered, nor show what the needed decision is: the uncertainty and unpredictability of socio-political processes remain unresolved, although more manageable.

Still, something relevant is available out there. Although the socio-political complexity of policy making stays unchanged, the data shake is offering an unprecedented chance: the continuous production of data throughout the policy making process (design, implementation, and evaluation) creates the chance to learn through (not only for or from) the policy making process. This opportunity is concrete as never before. The wide diversity of data sources, their fast and targeted production, and the available technologies producing easy-to-use analytics and visualizations create the chance for a shift from learning for/from policy making to learning by policy making, thus allowing substance and procedure to be improved at the same time, as a continuous process.

The learning opportunity is directly embedded in the policy making process, as the chance to shape social behaviours and responses and to achieve timely (perhaps even real-time) effects (Dunleavy 2016) is out there. Learning by (doing in) policy making is possible and benefits from a new role of evidence: no longer (or not only) a way to legitimate policy decisions, no longer (or not only) an expert guide to more effective and necessary policy making, but rather a means for learning, for transforming policy making into a collective learning process. This is possible because the data shake gives value to the evidence used over time (Parkhurst 2017), thus enabling its experimental dimension.

3 The Smart Revolution of Data-Driven Policy Making: The Experimental Perspective

3.1 About Policy Experiments and Learning Cycles

In social science, a policy experiment is any "[…] policy intervention that offers innovative responses to social needs, implemented on a small scale and in conditions that enable their impact to be measured, prior to being repeated on a larger scale, if the results prove convincing" (European Parliament and Council 2013, art. 2(6)). Policy experiments form a useful policy tool to manage complex long-term policy issues by creating the conditions for "ex-ante evaluation of policies" (Nair and Howlett 2015): learning from policy experimentation is a promising way to approach "wicked problems", which are characterised by knowledge gaps and contested understandings of the future (McFadgen and Huitema 2017); experiments carried out in this perspective, in fact, generate learning outcomes mainly consisting of information relevant for policy under dynamic conditions (McFadgen 2013).

The concept of policy experimentation is not new. An explanatory reconstruction of the concept's development has been carried out by van der Heijden (2014), who quoted John Dewey (1991 [1927]) and Donald Campbell (1969, p. 409) as seminal contributions to it. In particular, Dewey already considered that policies should "be treated as working hypotheses, not as programs to be rigidly adhered to and executed. They will be experimental in the sense that they will be entertained subject to constant and well-equipped observation of the consequences they entail when acted upon, and subject to ready and flexible revision in the light of observed consequences" (pp. 202–203); while Campbell considered experimental an approach in which new programs are tried out, conceived in such a way that it is possible both to learn whether they are effective and to imitate, modify, or discard them on the basis of their apparent effectiveness on the multiple imperfect criteria available (p. 409). van der Heijden considers that Dewey and Campbell had in mind the idea of experimenting with the content of policy programs (testing, piloting, or demonstrating a particular policy design), rather than with the process of policy design.

Still, as van der Heijden observes, the literature remains silent as to the actual outcomes of such experimentations; this consideration defines the scope of his article, which develops two main conclusions:

  • experimentation in environmental policy is likely to be successful if participation comes at low financial risk and preferably with financial gain (see Baron and Diermeier 2007; Croci 2005, quoted by van der Heijden);

  • in achieving policy outcomes, the content of the policy-design experiments matters more than the process of experimentation.

Intercepting both policy contents and the experimentation process, and focusing on the governance design of policy making, McFadgen and Huitema (2017) identified three types of experiments: the expert-driven "technocratic" model, the participatory "boundary" model, and the political "advocacy" model. These models differ in their governance design and highlight how experiments produce learning, together with what types of learning they activate.

In the technocratic model, experts work as consultants; they are asked to produce evidence to support or refute a claim within a context of political disagreement. In this model, policy makers remain outside the experiment, but they supply in advance the policy problem and the solution to be tested.

In the boundary model, experiments (working on the boundaries among different points of view) have a double role: producing evidence but also debating norms and developing a common understanding. In this kind of experiment, the involvement of different actors is crucial for the experiment to produce knowledge and discussion at different cognitive levels (practical, scientific, political).

In the advocacy model, the experiment is aimed at reducing objections to a predefined decision. These experiments are tactical and entirely governed by policy makers, who are obviously interested in involving other actors. This kind of experiment can also be initiated by non-public actors, even with different aims.

McFadgen and Huitema (2017) also highlight the different learning taking place during the three different experimental models. They mainly distinguish three kinds of learning (Fig. 1.1).

Fig. 1.1 Learning effects in policy experimentation (extracted from Table 1 in McFadgen and Huitema 2017, pp. 3–22)

Taking into consideration the goals and the differences in participants of the three experiment models, McFadgen and Huitema (2017) find that: technocratic experiments mainly generate high levels of cognitive learning, little normative learning, and some relational learning, which is mainly due to the disconnection between the experiments and the policy makers; boundary experiments are expected to produce relational and normative learning but low levels of cognitive learning, due to the large importance assigned to debating and sharing; in advocacy experiments, cognitive and normative learning are expected to be activated but little relational learning, due to the intentional selection of participants.

Learning in policy experiments is crucial and is mainly related to the opportunity it offers for appropriating the knowledge developed throughout the experiment. Consequently, the rationale behind an experimental approach to policy making is to boost public policy makers' ownership and commitment, thus possibly increasing the chances that successful experiments are streamlined into public policy.

The experimental dimension, especially in the boundary and advocacy models, is crucial in policy design and policy implementation. It makes the scope of policy evaluation transversal to the other steps of the policy cycle, described by Verstraete et al. (2021), as well as supportive of them. It transforms policy making into an experimental process, as it introduces co-design and co-experience, paving the way for embedding new points of view and new values in the context of the policy. Design and implementation, in this perspective, become reciprocal and integrated (Concilio and Celino 2012; Concilio and Rizzo 2012) and:

  • learning is enhanced and extended to participants by designing “with”, not merely “for”;

  • exchange and sharing of experiences are more effective than information transfer and sharing;

  • involved actors become the owners of the socio-technical solutions together with technological actors and decision makers;

  • changes in behaviours (the main goal of any policy making) are activated throughout the experiments.

Based on this, different levels of integration are possible and, among them, the most advanced is the so-called triple-loop learning flow in policy experimentation (Yuthas et al. 2004; see also Deliverable 3.1 by the Polivisu Project, Footnote 3).

3.2 Policy Cycle Model Under Experimental Dimension

As introduced in the previous section, experimentation and reflection on the operative implications of the use of data in urban management and decision making processes are at the basis of the substantial production, in recent years, of critical ex-post evaluations of the potentials and limits of data-informed policy making (e.g. Poel et al. 2015; Lim et al. 2018).

The process of policy creation has been left in the background by the focus on the content, rather than the process, of policy design and, in some cases, without a proper reflection on the selection, processing, and use of data to identify individual or collective human needs and to formulate solutions that "cannot be arrived at algorithmically" (…) and which cannot be "encoded in public policy, without distortion" (Greenfield 2017, p. 56).

Actually, it is well accepted that a policy process is not a linear and deterministic process; it is a set of decisions and activities linked to the solution of a collective problem, where the "connection of intentionally consistent decisions and activities taken from different public actors, and sometimes private ones (are addressed) to solve in a targeted way a problem" (Knoepfel et al. 2011, p. 29).

In this process, data offer support for strategic activities, by aggregating information into time series that support and validate prediction models for long-term planning; for tactical decisions, conceived as the evidence-informed actions needed to implement strategic decisions; and, finally, for operational decisions, supporting day-to-day decision making activities in a short-term planning perspective (Semanjski et al. 2016).
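
As a minimal, purely illustrative Python sketch of this idea (the daily observations, the traffic-count example, and all variable names are hypothetical assumptions, not drawn from Semanjski et al. 2016), the same stream of data can be aggregated at different temporal granularities, one for each decision level:

```python
from datetime import date, timedelta
from statistics import mean
import random

# Hypothetical daily observations (e.g. traffic counts from one sensor) over three years.
random.seed(0)
start = date(2020, 1, 1)
daily_counts = {start + timedelta(days=i): random.randint(800, 1200) for i in range(3 * 365)}

# Operational level: the latest observation supports day-to-day management.
latest_day = max(daily_counts)
operational_view = daily_counts[latest_day]

# Tactical level: monthly averages help monitor and adjust a running measure.
monthly: dict[tuple[int, int], list[int]] = {}
for day, count in daily_counts.items():
    monthly.setdefault((day.year, day.month), []).append(count)
tactical_view = {month: mean(values) for month, values in monthly.items()}

# Strategic level: yearly averages form the coarse series feeding long-term prediction models.
yearly: dict[int, list[int]] = {}
for day, count in daily_counts.items():
    yearly.setdefault(day.year, []).append(count)
strategic_view = {year: mean(values) for year, values in yearly.items()}

print(operational_view, tactical_view[(2021, 6)], strategic_view[2021])
```

The sketch only illustrates that the granularity of aggregation, rather than the raw data itself, is what aligns a single data source with the three decision horizons.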

From a policy perspective, strategic, tactical, and operational decisions use, and are supported by, data in different ways along the stages of a policy making process. In the design, implementation, and evaluation of a policy, data provide insights by allowing trends to be discovered and their emerging explanations to be analysed; by fostering public engagement and civic participation; by supporting dynamic resource management; and, finally, by sustaining the development of "robust approaches for urban planning, service delivery, policy evaluation and reform and also for the infrastructure and urban design decisions" (Thakuriah et al. 2017, p. 23). Among the possible approaches, one in which data may support a policy making process dealing with different time frames and multi-actor perspectives can be based on the policy cycle model, which means conceiving policy as a process composed of different steps (Marsden and Reardon 2017), to which data contributes differently.

The policy cycle, here interpreted not as a rigorous, formalistic guide to the policy process but as an idealized process, a "means of thinking about the sectoral realities of public policy processes", can capture the potential of the data shake if used in a descriptive rather than a normative way.

This policy model can be conceptualized as a data-assisted policy experimentation cycle, consisting of interrelated cyclical stages: the stages are strongly interdependent, integrated, and overlapping, due to the broad availability of data at the core of the experimental dimension of policy making.

In doing so, the policy cycle model can represent a "bridge", a sort of "boundary object" (Star and Griesemer 1989) in which different operational and disciplinary dimensions (planning, data analytics, data mining) can interact and cross-fertilize each other, since it offers an organized structure in which data provides a viable basis for acting at each stage.
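
Under the simplifying assumption that such a cycle can be caricatured as a loop over design, implementation, and evaluation stages sharing a growing pool of evidence, a purely illustrative sketch (all names, fields, and numbers below are hypothetical, not taken from the cited literature) might look as follows:

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    """Data gathered and analysed along the cycle, carried from one stage to the next."""
    indicators: dict = field(default_factory=dict)

def design(evidence: Evidence) -> dict:
    # Frame the problem and the measure using the evidence accumulated so far.
    return {"measure": "low_emission_zone", "target": 0.8}

def implement(policy: dict, evidence: Evidence) -> None:
    # Deploying the measure produces new data that enriches the shared evidence.
    current = evidence.indicators.get("observed_traffic", 1.0)
    evidence.indicators["observed_traffic"] = current * 0.9  # placeholder for monitored effects

def evaluate(policy: dict, evidence: Evidence) -> bool:
    # Evaluation reads the same evidence and decides whether another iteration is needed.
    return evidence.indicators["observed_traffic"] > policy["target"]

evidence = Evidence()
needs_another_round = True
while needs_another_round:
    policy = design(evidence)
    implement(policy, evidence)
    needs_another_round = evaluate(policy, evidence)
```

The point is only structural: the same evidence flows through every stage, so design, implementation, and evaluation overlap and feed each other instead of forming a one-way sequence.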

Based on this, the major weaknesses recognized in the policy cycle model, which is considered too simplistic in practice, giving a false impression of linearity and resting on a discredited assumption of policy as sequential in nature (Footnote 4) (Dorey 2005; Hill 2009; Howlett and Ramesh 2003; Ryan 1996), may be overcome thanks to the experimental perspective, which fosters a less linear interpretation of the policy cycle, transforming it into a continuous process in which policy stages overlap.

3.3 The Time Perspective in the Experimental Dimension of Policy Making

Decisions for and about cities are made at different urban scales, refer to different strategic levels, and have different time perspectives, with reciprocal interdependencies that are changing due to data availability. Here we mainly focus on the interplay between the steps of decisions in policy making (those introducing the long-term perspective) and those necessary for the daily management of the city (connected to the shortest, real-time perspective), an intersection at which data can play a key role (Fig. 1.2).

Fig. 1.2 Decision/reasoning along diverse timeframes

Short-term management is embedded in the smart sphere of decisions impacting cities: here decisions are less analytic and more routine. Routines may depend on data-driven learning mechanisms (also using data series) that support smart systems in recognizing situations and applying solutions and decisions that have already been proven to work. The decision has a temporary value, related to the specific conditions detected at a precise moment by the smart system.

Opposite to real-time decisions, policy making works in a long-term perspective. Anticipation is the prevailing mode of reasoning in this case: data-driven models are often adopted as supporting means to deal with the impacts of the policy measures, thus representing a relevant source for exploring decision options that mainly have a strategic nature (since they consider recurring issues and aim at more systemic changes).

Between short-term and long-term decisions, a variety of situations is possible, which may be considered as characterized by decisions having a reversible nature: they are neither strategic in value (like those oriented to a long-term perspective for systemic changes), nor aimed at dealing with temporary, contingent situations requiring decisions that are known to have the same (short) duration as the phenomenon to be managed. For such decisions, the reasoning is not (fully) anticipatory, and their temporariness allows reflection to be embedded in action. Within the three different timeframes, actions differ in nature and show different uses and roles of data (see the sketch after this list):

  • in the short term, the action (the smart action) is mainly reactive; real-time data are used as reference information to interpret situations;

  • in the medium term, the action is mainly adaptive; data series, including current data, are used to detect the impacts of the action itself and to improve it over time;

  • in the long term, the action has a planning nature; data series become crucial to detect problems and to develop scenarios for long-lasting changes.
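
The sketch below, again purely illustrative (the thresholds, the congestion-charge example, the update rule, and the linear extrapolation are assumptions introduced only for the example), contrasts how data enters each of the three modes:

```python
from statistics import mean

def reactive_action(current_reading: float, threshold: float = 100.0) -> str:
    """Short term: react to a single real-time observation."""
    return "open_extra_lane" if current_reading > threshold else "no_action"

def adaptive_update(charge: float, impact_series: list[float], target: float = 90.0) -> float:
    """Medium term: adjust a running measure (e.g. a congestion charge) from the observed impact series."""
    recent = mean(impact_series[-30:])  # rolling window over the most recent observations
    return charge * 1.05 if recent > target else charge * 0.95

def long_term_scenario(yearly_series: list[float], years_ahead: int = 10) -> float:
    """Long term: extrapolate a simple linear trend from a yearly data series to build a scenario."""
    trend = (yearly_series[-1] - yearly_series[0]) / (len(yearly_series) - 1)
    return yearly_series[-1] + trend * years_ahead

print(reactive_action(120.0))                             # reactive: act on the latest reading
print(adaptive_update(5.0, [95.0] * 40))                  # adaptive: nudge the measure from recent impacts
print(long_term_scenario([100.0, 104.0, 109.0, 113.0]))   # planning: project a long-lasting trend
```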

The interdependency between policy design, implementation, and evaluation is strictly related to two factors, especially when considering the role (big) data can play. Design and implementation can be clearly and sequentially distinguished when a systematic, impact-oriented analysis is possible at the design stage, as it allows a clear assessment of the costs and benefits of different action opportunities (Mintzberg 1973).

Comprehensive analyses have the value of driving long-range, strategic actions, and consequently show a clear dependency of implementation on, and distinction from, the design cycle. At the same time, the bigger the uncertainty (related not only to a possible lack of data but also to the high complexity of the problem or phenomenon to be handled), the smaller the chance of carrying out a comprehensive analysis.

Therefore, goals and objectives cannot be defined clearly, and policy making shifts from a planning towards an adaptive mode. Inevitably, this shift reduces the distance between design and implementation, transforming policy design into a more experimental activity that turns learning from implementation into food for design within adaptive dynamics (Fig. 1.3).

Fig. 1.3 Real time management vs policy making

Consistently with the discussion on the adopted time-frame perspective, it should be clear that a merging of policy design and implementation corresponds to the situation described for the medium term: within an adaptive mode for decisions, policy making can clearly become experimental.

4 Conclusions: Beyond the Evidence-Based Model

Evidence-based policy making is surely the key conceptual reference when trying to grasp the potential that the growing availability of data and related technologies offers to policy making. As is clear from the previous paragraphs, the concept has been widely discussed in the literature and can be considered the key antecedent of experiment-driven policy making.

Experiments may refer both to policy strategies and to policy measures. They can reduce the risk of trial-and-error approaches while exploiting the learning-in-action opportunity to improve, adapt, and adjust the policy while making it, in order to increase its capacity to affect the context in an evolving manner.

Differently from Mintzberg's considerations (1973), the merging of policy design and implementation does not represent a sort of inevitable, but not preferred, option to adopt when a comprehensive analysis is not possible. In the era of data availability, this merging can be looked at as an opportunity to create policies while verifying the policies themselves throughout their interactions with their contexts.

The growing availability of diverse and rich data sets represents an opportunity for evidence to be transformed into a more valuable resource than what it was intended to be by the supporters of evidence-based policy making: not only, or not necessarily, a means to support the scientific rationality of the decision making process, but rather a driver of reflection and learning through action.