Algorithmic decision-making and policy analytical capacity
Based on the preceding considerations, ADM systems have no place at the very heart of democratic politics: the formation of a political will and the setting of guiding goals and values for a society. They can, however, contribute to decision-making at the level of choosing and implementing policy measures for realizing previously defined objectives that emerged from the democratic process. The task is then to find the best solution to a given problem or to maximize a certain good. Knowledge and evidence play an important role in dealing with such means–ends relations and in increasing the effectiveness and efficiency of policy decisions (Sanderson 2002).
Human cognitive capacities for instrumental problem-solving alone hardly suffice for reaching good solutions to complex problems: human problem-solving capabilities are usually small relative to the complexity of the problems faced, and inferring an optimal solution from given information about a multifaceted problem is compromised by various psychological biases and by reliance on heuristics (see, e.g., Elster 2007; Tversky and Kahneman 1991; for an application in the field of politics see Stolwijk and Vis 2020). In sum, as Simon (1990, p. 7) has noted, human rational behavior “is shaped by a scissors whose two blades are the structure of task environments and the computational capabilities of the actor.” ADM may help to overcome these limitations, as extensive information processing for evidence-based decision-making promises to improve policy outcomes—or at least to avoid policy failures (Howlett 2009, p. 157). The source of such failures, according to Howlett (2009, p. 161), is often a lack of what he calls “policy analytical capacity,” which involves, as a core component, adequate forms of information management. This analytical capacity stands for “the amount of basic research a government can conduct or access, its ability to apply statistical methods, applied research methods, and advanced modelling techniques to this data and employ analytical techniques […] in order to gauge broad public opinion and attitudes […] and to anticipate future policy impacts” (Howlett 2009, p. 162). Here, the capacities of ADM systems may help to avoid policy failures because they can synthesize information into actionable knowledge (Höchtl et al. 2016; Kettl 2016). By performing scoring and classification tasks used for predictions, risk assessments, and data-driven policy simulations, ADM systems can assign scores to policy choices that indicate their expected success or failure.
They very much suit the idea that “governments can better learn from experience and both avoid repeating the errors of the past as well as better apply new techniques to the resolution of old and new problems” (Howlett 2009, p. 154).
Specifically, one can feed ADM systems data about policy instruments adopted in different contexts and about their consequences so that they “learn” relations between policy instrument choice and policy outcomes. By systematically harnessing data on the whole range of features that describe a decision situation, one might use ADM systems in a way similar to medical analyses of drug effects: processing huge datasets about the use of drugs, including in combination with other drugs, to uncover unknown treatments for diseases as well as undesirable drug interactions (Costa 2014). This could work equally well for policy instrument choices—which are akin to “treatments” for identified societal problems. Furthermore, cumulative data and experience allow for developing ever more refined models of the expected effects of policy instrument choices.
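The logic of “learning” instrument–outcome relations can be made concrete with a minimal sketch. All contexts, instruments, and outcomes below are entirely hypothetical, and a deployed system would rely on far richer features and statistical models rather than raw success frequencies; the sketch only illustrates how records of past adoptions could be turned into scores for a new decision situation:

```python
from collections import defaultdict

# Hypothetical historical records: (decision context, instrument adopted, success?)
HISTORY = [
    ("high_unemployment", "wage_subsidy", True),
    ("high_unemployment", "wage_subsidy", True),
    ("high_unemployment", "training_program", False),
    ("high_unemployment", "training_program", True),
    ("low_unemployment", "training_program", True),
    ("low_unemployment", "wage_subsidy", False),
]

def success_rates(history):
    """Estimate the success rate of each (context, instrument) pair from past adoptions."""
    counts = defaultdict(lambda: [0, 0])  # (context, instrument) -> [successes, trials]
    for context, instrument, succeeded in history:
        counts[(context, instrument)][0] += int(succeeded)
        counts[(context, instrument)][1] += 1
    return {key: successes / trials for key, (successes, trials) in counts.items()}

def rank_instruments(history, context):
    """Score the instruments observed in a given context by estimated success rate."""
    rates = success_rates(history)
    scored = [(inst, rate) for (ctx, inst), rate in rates.items() if ctx == context]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# For the hypothetical high-unemployment context, the sketch ranks wage
# subsidies first: [("wage_subsidy", 1.0), ("training_program", 0.5)]
```

Even this toy version shows where discretion enters: someone must decide which features describe a “context” and what counts as a “success,” choices the data themselves do not settle.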
To some extent, implemented ADM systems already realize these capabilities. For instance, the US Food and Drug Administration (FDA) has piloted a system that informs regulatory activity. The algorithmic tool processes vast numbers of adverse-event reports in order to detect and address undesirable drug effects after a drug has been launched on the market. The agency aims to use these results to adaptively inform its rulemaking and policy guidance.
Hence, insights into the suitability of policy instruments can inform higher-level political decision-making. Conceivably, strong evidence that certain instruments are unlikely to achieve the desired effects may deter political actors from pursuing a certain course toward a policy objective. Notably, algorithmic models currently inform policy choices largely in risk assessment, resource planning, and fiscal planning. These are decision areas that are comparatively well structured, where impacts are clearly measurable, and where outcomes and trade-offs can be quantified.
These conditions are even more likely to be present at the operative level of administrative decision-making, where, to date, most ADM tools implemented by government bodies are found. Taking the example of New Zealand as an advanced adopter, the government’s Algorithm Assessment Report (2018) lists over 30 systems used by ministries and agencies. These applications either partly automate administrative processes or produce insights that improve public services and allow a more efficient use of resources. Examples include forecasting future service needs, performing recidivism risk assessments in criminal justice, and identifying cases of tax fraud.
For some applications adopted by government bodies, it is easy to see how they may inform policy decisions at a higher level of decision-making. The German city of Mannheim, for instance, monitors and analyzes the educational success of students by tracing the effects of various socio-demographic features and uses the insights obtained from these analyses to inform municipal policy decisions. In a similar vein, some Danish municipalities predict localized needs for assistance among the elderly, which may then aid policy planning.
All in all, the kind of knowledge that ADM systems may produce to support decisions could indeed enhance the analytical capacity of government and help to avoid policy failure. Through detailed monitoring of performance in a policy area and the registration of policy instrument choices, they can reveal what works better in some situations than in others.
Obstacles to using ADM systems for evidence-based policy choices
Although the information-processing capacity of ADM systems implies a strong potential to foster better decision-making about adequate policy instruments, several barriers remain that ultimately limit the value of ADM systems for realizing policy goals. The mere existence of policy-analytical capacity and evidence does not straightforwardly lead to better decision-making. This is because, even where objectives are pre-defined—or consensual—the choice of means to realize these objectives is not entirely a matter of evidence. Only in rather idealized, rationalist models of policy-making do information and evidence directly guide policy decisions. Indeed, the policy-making process has been described as rather messy and as one in which learning processes hardly occur (Cohen et al. 1972). Political actors may, furthermore, not have an interest in following the available evidence to best solve a policy problem (Kogan 1999; Sanderson 2002). Kettl (2016) and van der Voort et al. (2019) note that the use of Big Data analytics, and the insights it may yield, will not improve policy making per se, because decision support based on ADM systems will also be subject to the conflicting motives of political decision makers. More importantly, however, other considerations that loom large in the political realm often supersede the instrumental motive of obtaining and using evidence to best deal with a given policy problem. Knowledge and evidence that may serve to attain a policy goal are likely to be evaluated in terms of whether they are in line with ideological goals and will yield political gains.
For instance, if an ADM system identifies that certain tax provisions will lead to more tax evasion, whereas the use of a different instrument would dampen it, this may go against the ideological view of policy actors who favor those tax provisions. These actors will therefore want to dismiss that kind of evidence, while others might use it for their ideological goals. In the same vein, policy actors may adhere to certain policy paradigms in the form of beliefs about what adequate solutions to given policy problems are (Hall 1993). Accordingly, they will interpret and evaluate evidence from a specific perspective and dismiss it to the degree that it goes against such a paradigm.
As van der Voort et al. (2019) write, political actors may even try to interfere in the collection, processing, and interpretation of information in order to obtain results that support, or at least do not contradict, their ideological views. The authors describe an instructive case in which political goals overshadowed other concerns: an informational dashboard for the city of Milan that synthesizes information about the city’s urban and informational landscape. While the data collection and processing linked to this dashboard could have produced evidence for optimizing services provided by the city, political actors, in view of the upcoming Expo 2015, partly interfered with the system to make the city appear in the desired light (van der Voort et al. 2019, pp. 35–36). Political stakes and goals thus trumped the goal of obtaining accurate evidence.
Altogether, this means that ADM systems can become simply another form of expert knowledge that political actors may leverage to support their views. This is possible because, even at the level of policy instrument choice and implementation, ambiguity is reintroduced and met with a considerable degree of discretion (see also Lodge and Mennicken 2019; Veale and Brass 2019). Specifically, there are no definite answers when it comes to important technical properties of the adopted information systems: What exactly should be optimized, and how? What constitutes acceptable performance, and by which formalizable measure should it be assessed? This means that even when objectives are clearly defined, it is not clear per se what “good” decision-making means once it must be translated into an ADM system (Veale and Brass 2019).
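How much hinges on the chosen performance measure can be shown with a small, purely illustrative sketch. The two “decision rules” and their error counts below are hypothetical; the point is only that which rule counts as “best” flips with the metric, so the choice of metric is itself a normative decision:

```python
# Hypothetical evaluation of two candidate decision rules on the same 100 cases,
# summarized as (true positives, false positives, true negatives, false negatives).
RULES = {
    "rule_a": (40, 20, 30, 10),  # catches more true cases, but raises more false alarms
    "rule_b": (30, 5, 45, 20),   # cautious: fewer hits, far fewer false alarms
}

def recall(tp, fp, tn, fn):
    """Share of true cases caught -- rewards aggressive rules."""
    return tp / (tp + fn)

def false_positive_rate(tp, fp, tn, fn):
    """Share of negatives wrongly flagged -- rewards cautious rules."""
    return fp / (fp + tn)

def best_rule(rules, metric, lower_is_better=False):
    """Pick the 'best' rule; the answer depends entirely on the chosen metric."""
    scored = {name: metric(*counts) for name, counts in rules.items()}
    pick = min if lower_is_better else max
    return pick(scored, key=scored.get)

# Optimizing recall selects rule_a (0.8 vs 0.6); minimizing the false-positive
# rate selects rule_b (0.1 vs 0.4).
```

Whether catching more cases justifies burdening more innocent parties is exactly the kind of question that cannot be settled inside the system.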
Moreover, even if there is a clear notion of what counts as the proper standard of “good” decision-making, the evidence that ADM systems produce ultimately depends on the data used—data which are necessarily shaped by social practices and forms of knowing. If there are sedimented relationships in the data—and in the social reality they represent—these can be learned and reproduced by algorithmic systems and lead to unfair discrimination, as has been shown for ADM systems used in, e.g., criminal justice risk assessment, applicant recruiting, and content filtering (e.g., Eubanks 2017; Noble 2018). In predictive policing, for instance, an ADM system uses police control and arrest data to assess the risk of crime in city areas and to guide police patrols accordingly. If there is already a practice of unfair discrimination in policing and arresting certain social groups, an algorithmic system will likely learn and reproduce it—unless this is explicitly addressed and corrected for.
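The mechanism by which biased data reproduce biased decisions can be illustrated with a stylized sketch. The two areas and all numbers are invented: by assumption, the underlying crime rate is identical in both areas, but one has been patrolled twice as often, so twice as many arrests were recorded there. A risk score built on raw arrest counts then mirrors the patrolling practice, not the underlying reality; adjusting for unequal observation is one possible (and deliberately simplistic) correction:

```python
# Hypothetical records for two city areas. By assumption the true crime rate is
# identical, but area_a was patrolled twice as often as area_b.
ARRESTS = {"area_a": 40, "area_b": 20}
PATROLS = {"area_a": 200, "area_b": 100}

def naive_risk_scores(arrests):
    """Risk proportional to raw arrest counts -- reproduces the patrol bias."""
    total = sum(arrests.values())
    return {area: count / total for area, count in arrests.items()}

def exposure_adjusted_scores(arrests, patrols):
    """One simple correction: score by arrests per patrol instead of raw counts."""
    rates = {area: arrests[area] / patrols[area] for area in arrests}
    total = sum(rates.values())
    return {area: rate / total for area, rate in rates.items()}

# The naive score marks area_a as twice as risky (2/3 vs 1/3) purely because it
# was observed more; after adjusting for exposure, both areas score 0.5.
```

If the naive scores then direct more patrols to area_a, the next round of data inherits an even larger gap, which is the feedback loop the text describes.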
From this it follows that existing structures of knowing shape what kind of evidence and what quality of decision-making ADM tools can and will produce. As critical algorithm studies have highlighted, algorithmic systems are never neutral technical entities. Not only are they designed by humans and thus, deliberately or unwittingly, incorporate the values and assumptions of their developers (Mittelstadt et al. 2016, p. 7; Noble 2018, pp. 1–2); they are also implemented in a societal context in which they may acquire pre-existing biases.