
1 Introduction

The Sustainable Development Goals (SDGs) mark an important new era in global governance. As previous chapters have already discussed, since the mid-2000s it has become increasingly clear that global monitoring initiatives must be transformed in order to address grand challenges. Crises such as climate change and the recent pandemic have shown clearly that, although the role of International Organisations (IOs) is key, unless national governments take on the challenge of addressing these emergencies head on, the legitimacy and ultimate success of such efforts is questionable. This has driven paradigmatic-level change (Best, 2014) in the global governance of shared challenges, whereby the onus of responsibility for both decisions and their consequences has shifted onto countries, rather than resting merely with IOs. The design of monitoring programmes and the accountability for their successful introduction and implementation are no longer a matter of IOs’ expertise only, but rather are open to negotiation and consensus-building among all actors.

In other words, monitoring regimes have become subject to both technocratic and democratic logics, and the SDGs—as will be shown in this chapter—are a prime example of this change. The SDGs and their epistemic infrastructure managed to become so extensive precisely because of the types of linkages (or the ‘second order’ of epistemic infrastructure we outlined in the Introduction) that bind country-level decision-makers more closely to the global structures of International Organisations. Country participation is one of the foundational principles of the SDGs, and the priority to ‘leave no one behind’ explicitly requires all participating actors to be involved. This process of ‘democratisation’ is not only a matter of equity but also a matter of political buy-in to the infrastructure of measurement within the SDGs.

This, of course, poses a set of fresh challenges for IOs, since the new participatory and country-led paradigm mandates that the SDGs are subject to dual technocratic and democratic legitimacy (Krick, 2018). On the one hand, the monitoring programme derives legitimacy from the technocratic logic of quantification and associated values such as objectivity, expert advice and evidence-based policymaking (Merry, 2016). According to this logic, numbers are powerful as they allow for standardisation and monitoring according to set benchmarks (Hansen & Porter, 2012). The focus on technocratic legitimacy is grounded in the rationalisation (and consequently de-politicisation) of public policy, whereby decision-making is seen as more effective when it is devoid of political pressures (Jasanoff, 2011). On the other hand, the democratic logic surrounding performance exercises sees the value of their embeddedness in national and local politics and highlights the role of social control and transparency over the decision-making process. Furthermore—particularly drawing on STS work on co-production (Jasanoff, 2004) and new modes of knowledge production such as Mode-2 science (Nowotny et al., 2001) and post-normal science (Funtowicz & Ravetz, 1993)—scholars have argued that democratic modes of knowledge production, and the opening up of the processes of both evidence- and decision-making to the public, not only strengthen their legitimacy but also improve their quality, as they mobilise multiple viewpoints, values and forms of politics and draw on different knowledge systems, going beyond a narrow expert view (Bandola-Gill et al., 2022).

Consequently, the SDGs and their epistemic infrastructures are subject to lines of accountability that might be in contradiction with one another. Focusing on the democratisation of number-based governance is particularly challenging, as it poses a challenge to the rules of ‘mechanistic objectivity’ (Porter, 1995) and might require trade-offs between the professional and scientific standards of statistical reasoning and the political priorities of the usability of the indicators (Bandola-Gill, 2021). The move towards democratisation had important consequences for the processes of developing and implementing the indicators for the SDGs (particularly in contrast to the IO-led MDGs) and also significantly changed the role and position of expertise in the processes of quantification (more on that in Chap. 8—see also Fontdevila & Grek, 2020). This chapter explores this tension in depth by focusing on how technocracy and democracy are navigated within the processes of implementation of the SDGs. This is not only a question of contradictory logics; rather, it prompts a more fundamental discussion of the nature of quantification in contemporary global governance, where the power of numbers is no longer taken for granted on the basis of their inherent epistemic and political qualities.

2 The UN and the ‘Participatory Turn’

As the Millennium Development Goals were progressing, it became increasingly clear that the goals would not be achieved (Fukuda-Parr, 2017). Even though the causes of the apparent under-delivery of the MDGs were varied, the commonly discussed reason was their top-down structure and lack of engagement—and consequently buy-in—from the countries involved. This apparent failure was taken on board in the design of the SDGs and led to a paradigmatic-level change in how the indicator framework for the SDGs was designed and implemented. This change is evident in actors’ accounts of the set-up of the monitoring exercise:

There it was [the MDGs], a very clubby affair. It was basically just us agencies sitting and talking together and all that and very well-meaning of course, but I guess it was a tad elitist in the sense that there are 20 people in a room versus 200. […] So, just that type of dialogue and all that we didn’t have before the SDGs, and also dialogue with countries. At first, the countries were very much, naturally—they were very annoyed at the international agencies being in the front seat and them being in the back seat. This is a country-led process and it was completely flipped and then there was the discomfort with that also, because how can we have you measure something that you are judging your own progress by; it’s like you grading your own paper. But I think, so the entente has been reached and there is, I think the statistical world will be better for it. (World Bank, 15)

This quote clearly illustrates two key tensions embedded in this new paradigm of engagement in the global monitoring system. On the one hand, it breaks with the Western-centric, elitist view of development and promotes more equal participation by the countries most affected by these systems. On the other hand, such ‘opening up’ risks methodological challenges, as it necessarily involves countries in the politics of measurement in a much more direct way.

The participatory turn of the UN monitoring system occurred not merely at the level of procedural ‘behind the scenes’ politics; rather, it was embedded in the key document establishing the SDGs. The flagship document of the Rio Conference (see: Chap. 1)—The Future We Want—was at its core a political declaration of the inclusion of different voices in the governance of the SDGs. For example:

We reaffirm the key role of all levels of government and legislative bodies in promoting sustainable development. We further acknowledge efforts and progress made at the local and sub-national levels and recognize the important role that such authorities and communities can play in implementing sustainable development, including by engaging citizens and stakeholders and providing them with relevant information, as appropriate, on the three dimensions of sustainable development. We further acknowledge the importance of involving all relevant decision-makers in the planning and implementation of sustainable development policies. (UN General Assembly, 2012, p. 8)

As evident in this quotation, the inclusion of not only policymakers but also a range of other stakeholders (such as civil society and national representatives) was seen as necessary for the SDGs’ success. This—importantly—was a paradigm set for all countries, not only rich funders from the North. The Future We Want (UN General Assembly, 2012) explicitly discusses the involvement of developing countries as equal and necessary participants in sustainable development governance. As indicated in the following:

We reaffirm the importance of broadening and strengthening the participation of developing countries in international economic decision-making and norm-setting, and in this regard take note of recent important decisions on reform of the governance structures, quotas and voting rights of the Bretton Woods institutions, better reflecting current realities and enhancing the voice and participation of developing countries, and reiterate the importance of the reform of the governance of those institutions […]. (UN General Assembly, 2012, p. 19)

These political declarations went even further in ‘Transforming Our World’ (UN General Assembly, 2015), the cornerstone document establishing the SDGs as a political programme. From the outset, the SDGs were an initiative relying on the participation of stakeholders:

All countries and all stakeholders, acting in collaborative partnership, will implement this plan. We are resolved to free the human race from the tyranny of poverty and want and to heal and secure our planet. We are determined to take the bold and transformative steps which are urgently needed to shift the world on to a sustainable and resilient path. As we embark on this collective journey, we pledge that no one will be left behind. (UN General Assembly, 2015, p. 1)

The journey set up in the SDGs is a collective one, giving everyone a stake in progressing towards and ultimately realising the set of ambitious goals. Furthermore, as was the case in The Future We Want, this new partnership paradigm is rooted in solidarity with the poorest:

The scale and ambition of the new Agenda requires a revitalized Global Partnership to ensure its implementation. We fully commit to this. This Partnership will work in a spirit of global solidarity, in particular solidarity with the poorest and with people in vulnerable situations. It will facilitate an intensive global engagement in support of implementation of all the Goals and targets, bringing together Governments, the private sector, civil society, the United Nations system and other actors and mobilizing all available resources. (UN General Assembly, 2015, p. 10)

Here again, the document positions the SDGs as a monitoring programme produced with developing countries as key partners. Furthermore, the document posits the partnership as being one of a wider spectrum of such collaborations, involving national actors, the private sector and civil society. Thus, the SDGs become a participatory monitoring tool, requiring ‘buy-in’ in the broadest sense and pointing to consensus-building as the key underpinning principle of the new framework. Taken together, these two documents clearly show the underpinning logic of the SDGs as one of participation across institutional boundaries but also—and perhaps more importantly—across previously traditional lines of power and influence.

Of course, this does not mean that the introduction of this ‘participatory’ approach to statistics was straightforward—quite the opposite. The production of statistics is, in the end, not only a process aimed at consensus but arguably a predominantly technical process following a specific set of methodologies. One way in which these democratic and technocratic ideals were married was in the concept of ‘country ownership’. As a concept, it did not yield all decision-making power to countries; rather, it attempted (not always successfully) to integrate political buy-in into the production of methodologically robust indicators. The technical group responsible for indicator development—the IAEG-SDGs—set country ownership as one of its key goals. This positions this highly technical body as a broker of connections rather than merely a methodological ombudsman:

The role of the IAEG-SDGs members should include consultation and coordination within their own national statistical system, and should also include reaching out to the countries in their respective region and sub-regions. (IAEG-SDGs, 2015a, p. 2)

This point is further repeated in the discussion, as reported:

During the discussion under this agenda item members of the IAEG-SDGs commented on the relationship between national, regional and global indicators, the need to ensure national ownership of the global indicator framework, the importance of statistical frameworks. (IAEG-SDGs, 2015a, p. 10)

The choice of ‘ownership’ as the focus for securing meaningful country participation is interesting here: on the one hand, it is malleable enough to bring together both the technocratic and the democratic logics of the SDGs. On the other, focusing on ‘ownership’ does not surrender the technocratic standards of indicator development, but still communicates the need for countries to adopt the measurement and policy prioritisation of the indicators as political projects.

3 Democratic and Technocratic Logics in Action

The political declarations outlined in the key SDG documents and structures materialised in the ways the relationship between countries and IOs was designed and put in place. The principles of participation and technocracy—even though contradictory—were predominantly discussed as indivisible. While some discrepancy was to be expected at the level of political declarations, the translation of these principles into specific measurement processes led to tensions and contradictions, particularly in practices occurring at the intersection of the work of experts and that of national policymakers and civil servants.

Three settings were particularly prone to this dual logic of technocracy and democracy: (1) practices of securing country ‘buy-in’ to the monitoring frameworks; (2) practices of producing indicators at the country level, requiring navigation between the overall standards of reporting and local politics; and finally, (3) practices of producing and using ‘imperfect numbers’. Across these three types of settings, explained in detail in the remainder of this chapter, the problem of navigating democratic and political accountability became not only visible but also actionable (Fontdevila & Grek, 2020; Grek, 2020). It required maintaining sufficient levels of technocratic accountability to reap the benefits of the ‘epistemic virtue’ (Daston & Galison, 2007) of numbers (such as standardisation, objectivity and universality), whilst combining such virtue with important political calculations, such as securing consensus-building, collective action and the political acceptability of both the entire SDG framework and the specific numbers produced in the process.

3.1 Country Buy-In

One of the difficulties was the process of securing and maintaining country buy-in to both the monitoring framework of the SDGs and the specific indicators. The interviewed experts across the organisations saw this as a crucial process—at times even more important than the more technical process of developing the indicators themselves. This process (not unlike other quantified practices, see Chap. 3 on harmonisation) happened in a two-way manner, on the global and country levels.

On the global level, the key objective was to secure buy-in to the SDG framework itself. Here, the process of ‘buy-in’ was essentially a process of consensus-building. This required (at least for some indicators) negotiation across the countries and IOs—and often even across the IOs themselves. An example is the negotiation of one of the most distinctive indicators within the SDG framework: indicator 1.2.2 on multidimensional poverty. This indicator was met with opposition from two directions: the countries as well as the World Bank.

The World Bank’s opposition was made on technocratic grounds. The key argument was against measuring this target using an index that aggregates different dimensions of poverty into one number. For example, as indicated during the second meeting of the IAEG-SDGs:

The selection of appropriate indicators for global monitoring depends on the interpretation of SDG target 1.2. If reducing poverty in every dimension is the major concern, then a dashboard approach—measuring each dimension of poverty/deprivation separately—would be an appropriate way to monitor progress at the global level. However, this would add a significant number of additional indicators to the framework, and since the SDG framework as a whole is a dashboard approach, introducing a smaller dashboard for SDG 1.2 could be confusing. Moreover, if the interest is to monitor the change in all dimensions of poverty using a single statistic, then there is an argument for considering a composite indicator—such as the MPI and others, which can be disaggregated to obtain the proportion of men, women, and children in poverty as required by the target. (IAEG-SDGs, 2015b, p. 3)

Multiple interviewees in the World Bank likened the Multidimensional Poverty Index (MPI) to a car dashboard: instead of tracking the level of oil or fuel separately, the MPI was trying to ‘summarise’ how the whole car works using a single figure. This argument was made from a methodological standpoint and as such was based primarily on technical considerations.

The second source of opposition came from countries that rejected what they saw as a limiting unitary approach to the measurement of multidimensional poverty. In particular, there was opposition to introducing the Global MPI, as countries felt it did not capture their country-defined concepts of poverty. The stakes were high, as a lack of ‘buy-in’ to the measure was seen as risking its complete removal from the framework. Eventually, the process succeeded, thanks to negotiations with the relevant countries and compromises. As recalled by a United Nations Development Programme (UNDP) expert:

A lot of it was about advocating for the measure in the SDG framework and that’s an indicator that had some political pushback. So, in the initial meetings of the Inter-agency and Expert Group on SDG Indicators, our position was to advocate for it and to convey to the Member States the sense that if it was adopted as an indicator there would be significant support behind it, organisational support standing behind it and we did have partners with us that have been supporting that indicator, notably Oxford University, so OPHI in Oxford with Sabina, but as well as UNICEF, which has done quite a bit in also supporting countries in measuring multidimensional poverty. UNDP of course has developed a measure, that was way before my time, but UNDP has developed a measure and has been supporting the measure in quite a few countries and, to me, it’s a measure that’s important because it links quite closely with the policy priorities of a country, because it’s just not a metric, but it’s a metric that’s about those policy issues that matter for a country and for the wellbeing of citizens in a country. (UNDP, 2)

Although the global MPI measure was eventually not introduced, the measure of multidimensional poverty that was included in the end was based on country-defined measures, without any custodian IO agency taking responsibility for the measure; instead, countries themselves were to act as custodians of the MPI. The MPI thus represents a clear case of the degree of political and technical compromise that the international community was prepared to accept, since this is the only target within the SDG framework that does not have an IO responsible for its implementation. This solution (advocated, e.g., by Mexico) was seen as a consensus allowing countries to employ their own definitions and approaches to multidimensional poverty without top-down methodological guidelines. This consensus—even though it secured the buy-in—was not seen as optimal from the perspective of the technocratic logic, as it risked producing measures that are not robust, comparable or indeed methodologically correct. As highlighted by an academic consultant:

There are I can’t remember how many, 369 targets in the SDGs? 368 of those targets have an international organization whose job it is to help national governments produce those measures and statistics. The 1 target out of the 369, where no international organization have any responsibilities to multi-dimensional poverty measure, I don’t know why that is, but UNICEF does a bit of help, the World Bank does a bit of help. It’s no one’s job. Of course, you then get World Bank, UNDP, UNICEF, proposing different things, which is less than helpful if you’re a national statistical office in Tuvalu or somewhere. Of course, the regional organizations try and also suggest different things, and academics suggest different things. We do hope he does. You end up with a nice selection, and no one [source]. […] There’s a lack of technical help and expertise from international and UN organizations to governments to do something which they’ve never done before. Most countries have no experience in producing multi-dimensional poverty measures. They have, obviously, a statistical office that has professional ideas on what the quality should be of a national statistic. When they look at a lot of the proposed measures, they don’t conform to their professional standards. That’s a problem, so they’ll resist. (Academic, 1)

On the country level, the process of securing buy-in to the indicators was equally complex. The twofold logic of the SDGs—drawing on both technocratic and democratic accountabilities—had to be navigated in the process of producing measurements for specific SDG targets (Fontdevila & Grek, 2020). The interviewed experts working at country level were unified in their perception that the production of an indicator is at once a political and a complex methodological process:

I think that the global conversation is a very different one from the country level conversation. The country-level conversation, nobody would disagree at the Bank if you were to say that building up poverty measures is a political process as much as a technical one. And then, and you see if you go into the details on how poverty lines are constructed across the world in which the Bank provided support, there’s lots of variability in those. Whether some items are included or not included. How things are, whether prices are deflated across the country or not. All of those are like because in that particular country that country wanted this versus the other. (World Bank, 4)

Interestingly, even though the interviewees were under no illusions about the politics of the measurement process around the MPI, they—with surprising uniformity—saw the central raison d’être of the expert collaboration as providing ‘technical assistance’ to countries. As described by one of the interviewees:

Typically, it’s technical support, technical assistance that we provide to countries to compute MPI. We do that, UNDP does it, UNICEF as well does it quite a bit and it’s technical support, it’s capacity building and we’ve done some workshops, we’ve trained them on how to compute MPI and we’ve done also in collaboration with UNICEF some joint sessions to work together. At the country level, there is quite a number of countries where UNDP country offices work with governments and we’re helping compute the MPI. (OPHI, 1)

Therefore, the key goal of the IOs working on the country level was to support the technical process of measurement and provide the capacity and knowledge base to fulfil the monitoring requirements. However, behind this ‘basic’ function was a much more subtle process of producing quantified knowledge that is politically acceptable (Bandola-Gill, 2021). Country ‘ownership’ therefore became a matter of navigating the tension between the usability of the indicators and their technical robustness: that is, producing indicators that reflect reality but are also acceptable to governments, fit the existing data structures, reflect trends and generally behave as all such globally comparable numbers do:

I think it boils down to how you can work with your counterparts to, in a way, get them to accept that this is what the evidence says, but also understand that they don’t only have technical considerations, they have other considerations and work with them in terms of well how can this be useful to you. Maybe it’s not the news you expected, but it’s still the news, so what does this mean, you know, is there something that can be done proactively about this and so on. But it’s not always easy and we have faced situations where the government didn’t want to publish the numbers and the numbers have been not published or have been published with a delay. (World Bank, 5)

Here again, the strategy was to combine the technical advice (e.g., within the country report) with more specific political work, such as giving governments a ‘heads up’ about failing performance, exploring the potential risks, re-framing ‘bad’ numbers and generally performing functions that produce a politically and technically legitimate monitoring exercise.

3.2 Developing Indicators as a Participatory Process

The tension between technocratic and political accountability was further enacted on the country level when indicators were being developed and further adjusted. Similar to the global exercise, the process of developing country-level indicators was increasingly seen—by both experts and governments alike—as a participatory exercise. The indicators were produced through a collaborative process, which included representatives of IOs, government departments, academia and civil society, or even members of specific populations affected by the process of measurement (e.g., the poor). The idea of ‘technical assistance’ provided by the IOs, discussed in the preceding section, went well beyond scientific advice when enacted on the country level; instead, it was a process of negotiation between multiple interests, ideas and objectives. As summarised by one of the interviewees, when it comes to developing the specific measures ‘the process is more important than an outcome’ (UNICEF).

This ‘participatory turn’ in the development of metrics was justified in multiple ways. On the one hand, the interviewees pointed to the issue of quality improvement of the indicator, as ‘user involvement’ might help to identify local issues or challenges which could help to make the indicator more robust. For example, as discussed by the following interviewee:

I think that the challenge is it takes longer and you coordinate more people and they may have different views. But I think that the advantage is that if you think of who are the experts, well the experts are poor people who experience this […] I think it really sensitises you to that. So for example in El Salvador, talk about the challenges, the government made a trial measure with health, education, living standards and work. And then UNDP supported a two-year consultative process where people articulated their own deprivations. And El Salvador at that point was sort of the murder capital of the world, unfortunately. And the government said yes, but that’s not poverty. But then there was an engagement and a dialogue and then by the end of that violence and esparcimiento [were included]. A place for like children to play, for the old people to drink coffee. (OPHI, 1)

Therefore, the interviewees saw these participatory processes as imbuing a technical process of developing an indicator—by its nature focusing on universalistic principles of science—with local meanings, ideas and politics. The predominant perception of this process was that it improved the measurement itself, as it allowed for a closer reflection of the reality of poverty.

On the other hand, interviewees mentioned more ‘political’ benefits of the participatory approach to developing the indicators. In particular, involving the wider spectrum of users was seen as increasing the political value of indicators through two means—by legitimising numbers and by improving their usability in political contexts. This idea was put forward for example by the following interviewee:

With multidimensional poverty what matters more than anything is the process. And if you don’t have the right partners at your table for conversation it’s not going to be useful. It’s not going to be used. You can do an index, you can do it with only NSO, the Institute of Statistics, fine, you do it. You have a new number to report every year but it’s not going to make any change. Unless you have the right people. (UNICEF, 6)

Involving the stakeholders was seen as a way of assuring the legitimacy of the indicator—it was not only technocratically legitimate but also reflected the broader consensus of a wide variety of actors agreeing over the measure (Bandola-Gill, 2021). As indicated in the following quote:

The other kind of participation which is vital is different parts of government and academia. Because by the time the measure is launched you want the government to own it, to be willing to act on it, understand it. You want the field leaders, the key idea, thought leaders in the country to understand it, otherwise when the press release comes, they might be caught off guard and try to discredit it. So, there’s often also consultations with these other actors to make sure, that their input is gained so that measures really builds off their wisdom and knowledge and then also so that they understand it and support it and see how it could be useful to them. (UNDP, 2)

Furthermore—and perhaps even more significantly—involvement of the stakeholders in the indicator development process was seen as a way of assuring the use of an indicator by policymakers. The co-production approach to indicator building was seen by the vast majority of interviewees as the best strategy for making the indicators ‘usable’ in policy, for example by introducing new policy programmes aimed at improving performance on them. Involving government officials in the development of an indicator raised its stakes and was seen as a powerful motivating force for accounting for it in governing practices.

Nevertheless, despite these benefits of participatory approaches to the indicator development process, the production of an indicator was not seen as an entirely user-driven process but rather was constrained by technocratic considerations. During meetings, the main structuring force of the agenda was the design of the indicator; hence the limits of inclusion in the debate were in fact outlined by the methodology. This necessarily led to some conflict over the measurement and a need to navigate democratic ideals alongside technocratic standards. For example, the interviewees mentioned female genital mutilation (FGM) as an issue that many of the stakeholders wanted to add as a dimension of poverty. However, adding FGM to the model was rejected on the methodological grounds that an indicator must be able to improve over time; measuring irreversible procedures would render the indicator unresponsive (at least in the near future). Therefore, even though it was politically (and democratically) important, it was rejected on technocratic grounds. Nevertheless, civil society actors saw the development of an indicator as an access point to government and an opportunity to shape the political agenda. As such, even in cases where technical considerations prevailed, the participatory approach to measurement played a role in de-objectifying measurement (cf. Desrosières, 2015). Here, the goal was to continue the debate and keep issues on the agenda through contested indicator design, rather than either exclude them altogether or ‘naturalise’ them through validation; either option would render them invisible and thus politically and technically less useful.

3.3 The Power of Imperfect Numbers

The final site of navigation between political and technocratic accountability was the production of ‘quality’ numbers. By now it should be clear that the production of numbers does not happen despite politics or against politics; rather, the modes of quantification embedded in the SDGs are from their inception irrevocably shaped by this ‘dual logic’ of technocracy and democracy. The preceding sections have shown that this inherent tension has shaped the practices and principles of producing numbers for the SDGs. In particular, this section explores how the growing focus on the political value of numbers (and consequently a move away from purely technocratic modes of accountability and legitimation) opens up new possibilities and roles for ‘imperfect’ numbers. We look at three types of such numbers: ambiguous numbers, placeholder numbers and provisional numbers, and the different political roles each of these ‘imperfect’ numbers plays.

3.3.1 Ambiguous Numbers

The first type of imperfect numbers is ambiguous numbers. The democratic criteria for the development of the SDGs required a nuanced process of consensus-building and walking a tightrope between country politics and the methodological principles of measurement. Here we can return to indicator 1.2.2 (on multidimensional poverty). The final wording of the indicator was:

The proportion of men, women and children of all ages living in poverty in all its dimensions according to national definitions.

This wording represented a consensus acceptable to both countries and IOs. Nevertheless, the wording of the indicator (combined with the lack of a custodian agency) did not ‘solve’ any existing conflicts over how best to measure multidimensional poverty but rather offered a way of avoiding the conflict altogether. In particular, the wording of this indicator was so open that different actors went as far as interpreting the meaning of indicator 1.2.2 within the SDGs differently. For example, the interviewees within UNICEF interpreted this indicator as a child-centric measure:

It’s not just a disaggregation. However, not everybody reads it that way. So, some people say no, that doesn’t mean that we have to measure child poverty specifically, it means we have to disaggregate child poverty. So, there are some disagreements on how to interpret that particular indicator. We, we meaning the people in UNICEF that work with data, we have no doubt that this is what it is, and this was the intention. […] But the colleagues that have participated in that very adamantly said no, this is what we meant, and this is why it’s different. And there was a lobby, and there is a whole coalition where UNICEF participates with NGOs that lobbied very hard to make this happen. (UNICEF, 8)

Contrary to this perception, the interviewees within the Oxford Poverty and Human Development Initiative (OPHI)/UNDP and the World Bank interpreted this indicator as a disaggregation of household-level data. This ‘interpretive flexibility’ (Sahay & Robey, 1996) of the indicator further increased its breadth to encompass multiple different forms of measurement as acceptable within the SDG framework. Consequently, different countries could continue carrying out existing measurements and IOs could continue promoting their own approaches to measurement while at the same time achieving multi-stakeholder consensus over SDG 1. Collective action in this context was dependent on the ambiguity of numbers: ambiguity acted as a unifier of a diverse field of practices.

Here, the political function of this ‘imperfect’ number was to enable multiple meanings (and consequently values, agendas, ideas, etc.) within one monitoring framework. As such, this ‘strategic ambiguity’ (Sillince et al., 2012) of the indicator enabled it to act as a boundary object in the original meaning of the term (Star, 2010): that is, it allowed for different interpretations and actions between different groups, without necessarily solving the conflict amongst them.

3.3.2 Placeholder Numbers

A slightly different approach to accommodating ‘imperfect’ numbers was to use a number that, although it did not command full support, could still serve as a ‘placeholder’ (the term usually applied to such imperfect numbers by the expert community). These placeholder numbers were used as temporary solutions until consensus over another—and improved—measure could be reached. This meaning of numbers was therefore grounded in the assumption of the changeability of metrics, rather than their stability. Placeholder numbers were important enablers of political action, as they did not halt the political process and allowed actors to move on with other items on the agenda. One example of such a placeholder number was GDP, as summarised by an expert sitting on the IAEG-SDGs:

We’ve said that we could only put things in at particular times because otherwise there are like 2,000 people who would like to have 2,000 indicators more. But we have said that we have the GDP as a placeholder. So, if they would have this fantastic number that they’re talking about we could probably just make a switch, or if that isn’t allowed then we could test because it takes a long time to make this anyway, so we could test it and then we could put it in by 2025 or. Like I’ve said to very many people who want to do things, if you have really good research studies or good analysis, nothing stops you from using the really tiny indicator that you have and when you’re talking about that you just add this analysis and say, talking about sustainability and tourism, then we think that blah-blah-blah and if this thing would happen then it would be fantastic. (National Statistician, 1)

Such reliance on placeholder numbers has important implications. On the one hand, placeholder numbers have a productive role, since the ‘ever-perfectability’ of numbers (Rocha de Siqueira, 2017) is a field of constant negotiation. Using a placeholder does not stop the debate over how to improve the measurement, so it is inherently generative of new ideas, approaches, connections and bodies of expertise, without endangering the technical process; on the contrary, it appears to strengthen its robustness. On the other hand, using placeholder numbers can, perhaps paradoxically, de-politicise numbers. As we know from the literature on evidence-based policymaking (e.g., Weiss, 1979), calls for better-quality or ‘perfect’ evidence are often mobilised by political actors as a delaying tactic, allowing them to defer decision-making and retain the status quo.

3.3.3 Provisional Numbers

The final category of imperfect numbers is provisional numbers. These numbers are produced ad hoc, using methodological shortcuts or approaches that draw on provisional and imperfect datasets in the first place. Provisional numbers are used to ignite political action through their argumentative power: interviewees appeared aware that numbers are convincing even when they are not completely methodologically robust.

The underpinning logic of the provisional number is the argumentative power of numbers. Here, the interviewees acknowledged the importance of having one ‘killer number’—one that could shock or embarrass policymakers:

I think that to me, the most important thing is a clear ‘fact’. So it’s like the fact that children are twice as likely to live in poverty as adults. That’s the thing, and you can represent that in a figure if you want. You can just use the words. You can put it any way you like. But that’s the, I think distilling the essence of complexity down to a real, a simple truth that can make people see things I guess differently than they’ve seen them before, but fundamentally understand something. (UNICEF, 7)

Producing such a convincing and easily travelling ‘fact’ was no easy feat—and such ‘facts’ were needed more often, and more quickly, than they could be produced. Sometimes such a number could simply be ‘guesstimated’ in a meeting, as long as it was used in a politically savvy way:

Other’s interest is: get me a number, even if it’s an imperfect number. Look, if you want to measure it for impact at the level of policy, I completely agree anything could work, even a number that you just make up in the middle of the conversation to impress people. That’s OK. (UNICEF, 2)

Nevertheless, such cases were rare and represented the most obvious examples of such strategies. More often, numerical ‘facts’ were produced using valid methodologies, albeit ones that involved large doses of uncertainty, to produce the key numbers—particularly global ones allowing for cross-country comparison. For example, the World Bank interviewees pointed to the need to negotiate between producing comparable numbers (such as their global poverty number) and navigating the uncertainty of the methodologies employed, such as nowcasting. Here, the provisionality of numbers was more complex, as it was often veiled by the complexity of the statistical approaches used to produce them:

Now, we do have nowcasted numbers and forecasted numbers, so nowcast means if the last surveys are from 2018, we nowcast to the present, the forecast is into the future, but we call it as such. We are very careful to label it as such and so on. So, I think, again, this is also the bar being raised of number, of what we produce and I think it is a good bar to be raised because we also need to, firstly, we need to be responsible. It is because of these pressures [to produce numbers quickly] that the World Bank now produces numbers that are more current than they ever have been, but at the same time if you press it too much then the numbers have no meaning and I think there is a tension there.

But yeah, because if you look back and go back and see how often off we are about the numbers because we are doing the best we can, then there is reason to be sober about it and right now we have actually, the one thing that has improved in the World Bank is that we do a very good job in communicating the numbers and we have learnt a lot from the national accounts folk that they come up with the GDP number and then they constantly keep updating it, going back to the past and updating it. So previously we thought oh, poverty is a tinderbox, very political, we need to be very sure about the numbers and then go out, but now we are a lot more fluid, we say that, OK, it is 9.1% right now and previously the number was 9.2. We do a good job of communicating it and I think what it has done is that it has, while there may be frustration the numbers are dated, there is trust also. (World Bank, 10)

In this context, experts navigated the uncertainty of provisional numbers by arguing, first, that acknowledging it allowed for greater transparency of the measurement process and, second, that provisional numbers created opportunities for action, avoiding the paralysing effects of trying to secure full accuracy. To counter criticism, interviewees pointed to extensive methodological appendixes with clearly stated limitations of their studies. As such, the political argument for the role of provisional numbers was framed in terms of transparency and usability, and thus as further strengthening the measurement process rather than detracting from it.

4 Conclusion

The epistemic infrastructure of the SDGs requires not only an extensive basis of data and indicators and their multiple inscriptions but also linkages between different actors, connecting different parts of the infrastructure. This chapter has explored the expansion of connections and interdependencies grounded in the new governing paradigm of the SDGs—one of participation of all countries in the decision-making and design of this framework. As we have shown, this participatory logic was embedded in the SDGs from their inception, with important consequences for the governance structures of the framework (see also Chap. 2) as well as its implementation at the country level. Consequently, the SDGs are driven by both the technocratic logic of quantification and the democratic logic of participatory governance. This chapter has illustrated how this newly emergent dual logic has resulted in new connections and linkages as well as the creation of new political spaces of governing by numbers. This turn to participation thus played an important political role in communicating equality as the underpinning value of the SDGs as well as—perhaps more importantly—securing buy-in into the epistemic infrastructure of this measurement programme.

One of the underpinning assumptions of the epistemic power of quantification is the separation of the spheres of science and politics (Lahn & Sundqvist, 2017). And yet this positioning of measurement as objective and devoid of politics is increasingly challenged not only on the grounds of ethics but also on the grounds of democracy, effectiveness and efficiency. It is increasingly acknowledged that quantification should strive not only to produce ‘global’ knowledge but also to acknowledge the different contexts in which measurement is done. These broader trends have been enacted and transformed within the Sustainable Development Goals. From their inception, the SDGs were designed to be both a highly technocratic monitoring programme of ‘governing by numbers’ (Miller, 2001) and a participatory project aimed at ensuring the participation of countries and communities. As we have argued in this chapter, this double logic of the SDGs permeated the practices of producing and using data at all levels of governance.

The focus on both democracy and technocracy proved challenging for the experts within the IOs, as neither of the two logics could be completely satisfied. Instead, the experts engaged in a process of ‘sufficing’, navigating both logics and both types of accountability: the technical and the democratic. This balancing act proved difficult, as prioritising either of the two ‘logics’ risked the loss of momentum and support. For example, as indicated in our discussion of different approaches to dealing with imperfect numbers, prioritising methodological practices of mechanistic objectivity (Daston & Galison, 2007) risked stalling collaborative action, politicising it or stopping the political processes aimed at actually fulfilling the targets of the SDGs. The technocratic process could also be mobilised as a delaying practice, shifting the focus from often difficult political decisions to seemingly endless and irreconcilable methodological debates. On the other hand, the baseline legitimacy of the process still rested on the epistemic virtues of numbers (cf. Bandola-Gill, 2021), and focusing entirely on democratic accountability risked inviting ‘stealth’ politics, whereby powerful actors gained disproportionate influence within the ‘participatory’ processes.

This tension between democratic and technocratic modes of producing numbers (and of holding them to account) shaped the nature of quantification itself. Numbers are no longer ‘fixed points’ (cf. Lahn & Sundqvist, 2017) but rather more fluid entities that can be improved, changed and mobilised in different ways. Consequently, the underpinning logic of quantification has changed: it is no longer a process of maximising the quality of numbers (in order to maximise the quality of political processes) but rather a process of multivocality (Bandola-Gill et al., 2021), where the quality of the political process is established by the quality of deliberation rather than by the facts that underpin it.