1 Introduction

One of METRO’s most startling findings was that international organisations are not merely ‘centres of calculation’, i.e. the organisations where numbers are produced. Instead, the majority of the experts we interviewed are former statisticians, whose main role is to act as brokers between different levels of governance and actors in the field. Brokerage in the global public policy arena primarily involves facilitating processes of socialisation and collective puzzling, and creating interdependencies between a diversity of actors, be they experts, donors or national representatives, in order to create conditions of collaboration, mutual trust and agreement. As I will show, alongside the work of measurement, this continuous brokerage work is equally key if consensus on establishing global goals, and on the routes to achieve them, is to be reached.

Through IOs’ bilateral relationships with participant nations and local communities, as well as with other IOs, research agencies, donors and funders, brokerage facilitates the adoption of different identities for different audiences and thus creates conditions of trust for achieving consensus. Second—and perhaps more importantly—brokering work creates the necessary bridge between technocratic and political accountability, essential in global monitoring programmes, giving large IOs further legitimation and symbolic capital as purveyors of technocracy and democracy.

Therefore, the focus of this chapter will be on the role of actors’ socialisation, processes of collective puzzling, interdependency and brokerage in the production of goal-setting in education, in order to achieve consensus. After a brief overview of key literature analysing expertise and its main functions and effects, the chapter will move on to a brief examination of two empirical cases of expert brokering work in the field of global education governance: the first will focus on actors’ socialisation by the OECD through its country reviews of education. The second will explore the UN’s participatory turn, paying specific attention to the work of UNESCO as a trusted education broker through the organisation of technical groupings and meetings for the purposes of measuring the education SDG.

2 The OECD Country Reviews of Education: The Case of Sweden

Adopting a perspective that builds on sociological institutionalism (Lowndes, 2010), IOs are understood as purposive actors who, ‘armed with a notion of progress, an idea of how to create a better life, and some understanding of the conversion process’, have become the ‘missionaries of our time’ (Barnett & Finnemore, 1999, p. 712). However, this does not in itself explain what has transformed the OECD into one of the most powerful agents of transnational education governance. Martens (2007) has contributed to this discussion by suggesting that the ‘comparative turn’, that is, ‘a scientific approach to political decision making’ (2007, p. 42), was the main driver of the OECD’s success in education governance globally. Through its education statistics, reports and studies, the OECD has achieved a brand which most regard as indisputable. Despite a number of critical voices in the field (Brown et al., 2007; Prais, 2003), the OECD’s recommendations are accepted as valid by politicians and scholars alike, ‘without the author seeing any need beyond the label “OECD” to justify the authoritative character of the knowledge contained therein’ (Porter & Webb, 2004).

However, despite this context of the increasing and deepening influence of its quantitative measures in education, the OECD’s ‘Reviews of National Policies for Education’ show that the assumption that the OECD’s influence is a mere result of its ability to decontextualise and compare is only half the truth. Although the significance of the technicisation of many previously political arguments in education cannot be disputed, here I focus on a less discussed, yet important, factor of the OECD’s success: the sustained socialisation of policy actors within national contexts through processes of policy translation and contextual adaptation (Checkel, 2005). As Checkel (2005) suggests, processes of socialisation entail intensive communication, regular meetings, as well as the emergence of mutual trust and shared commitment between actors who are involved in the ‘common project’. Socialisation leads to the construction of a common esprit de corps, defined as the acceptance and internalisation of new norms: ‘the right thing to do’. This, of course, is not always an orderly, observable process. Instead, it is a gradual, multi-layered process that is predominantly governed by a logic of appropriateness, meaning the adoption of institutional rules and norms that ‘regulate the use of authority and power and provide actors with resources, legitimacy, standards of evaluation, perceptions, identities and a sense of meaning’ (Olsen, 1998, p. 96). As I will show, what we observe in many countries around the world is the making of an almost absolute and indisputable consensus on the role and significance of the OECD as a key actor in reshaping the academic, policy and public debate. Observing and evidencing processes of international and national actors’ socialisation, as they take part in these institutional processes, therefore offers an important intellectual tool for making sense of these new realities.

Second, in an attempt to illuminate how socialisation happens, Hugh Heclo’s notion of collective puzzlement (1974), as well as Clarke et al.’s (2015) conceptualisation of how policy moves, are both useful analytical tools. Both sets of ideas help to show how and why it is the coming together of various national and international actors that sustains and reinforces the numbers game, rather than solely the validity or strength of the numbers themselves. Over time (and the allowance of time is crucial here), at least in the field of education, international comparative assessments have created two crucial governing constructs: first, a common language through which diverse actors from the local, national and international ‘levels’ can communicate; second, a new governance system which in effect can be understood as ‘an incremental process reorienting the direction and shape of politics to the degree that (global) political and economic dynamics become part of the organisational logic of national politics and policy-making’ (Ladrech, 1994, p. 69). However, rather than top-down, this is a mutually reinforcing process; Sweden, the case in point in this section, was a key nation in establishing the work of international actors in education, such as the IEA and the OECD, and is very active in relation to European governance in education more generally (Grek & Lindgren, 2015).

In order to pre-empt critique, I do not claim that numbers are not important, or that their spectacle through naming and shaming (Nóvoa & Yariv-Mashal, 2003; Simola, 2005; Carvalho, 2012) is not an indispensable part of the OECD’s success. Nonetheless, the spectacle has a temporal dimension; it surprises and shocks. Thus, spectacles quickly come and go (think, for example, of the embargoed results and the media attention the Programme for International Student Assessment (PISA) receives). Yet what follows the announcement of the results requires steadfast, diligent and zealous face-to-face policy work in order to carry the numbers deeper into the national imaginary and entrench them into the system. The OECD sustains and builds its policy work through the continuous crafting of its relationship with key education actors in other international organisations (Grek, 2014) and within national contexts.

But how do such processes of socialisation take place? One way to understand and analyse them is through the prism of policy learning defined as an ‘updating of beliefs’:

In public policy, we are eminently concerned with beliefs about policies … This process of updating beliefs can be the result of social interaction, appraisals of one’s experience (often of failure) or evidence-based analysis – or most likely a mix of the three. (Dunlop & Radaelli, 2013, p. 600)

Policy learning theory is certainly not new—from the seminal work of Dolowitz and Marsh on policy transfer (1996) and Haas’ work on epistemic communities (1992), to the advocacy coalition framework (Sabatier & Jenkins-Smith, 1993) and the examination of the EU as a learning organisation (Zito & Schout, 2009), the literature on policy learning is large. Here I go a bit further back in time and focus on Hugh Heclo and his writings about governing as collective puzzling:

Politics finds its sources not only in power but also in uncertainty…Governments not only ‘power’… they also puzzle. Policy making is a form of collective puzzlement on society’s behalf; it entails both deciding and not knowing…. (Heclo, 1974, pp. 305–306)

According to Heclo, more so than politicians, it is the work of civil servants that is crucial in the making of policy; they are bestowed with a permanency that politicians do not have, in addition to experience and institutional memory, since ‘to officials has fallen the task of gathering, coding, storing and interpreting policy experience’ (Heclo, 1974, p. 303). However, policy work usually happens through interaction; according to him, ‘it is in interaction (that) these individuals acquire and produce changed patterns of collective action’ (Heclo, 1974, p. 306).

More recently, Clarke and colleagues (2015) suggest that policy is never a finished product, to be observed and transferred in a linear manner. Instead, they suggest that,

When policy moves, it is always translated: that is, it is made to mean something in its new context. Policy is never a singular entity: it is put together – or assembled – from a variety of elements that are always in the process of being re-assembled in new, often surprising ways. (Clarke et al., 2015, p. 1)

Following both Checkel (2005) and Clarke et al. (2015), the OECD education policy work of the last 20 years has achieved a paradigmatic shift in the thinking and framing of education not only thanks to the cold rationality of numbers, but also crucially through the interpretation and adaptation of its recommendations in myriad venues and opportunities where local, national and international actors interact. Freeman sums up beautifully the impact of such iterative processes of collective learning:

This implies that learning is not simply an interpretative act, a process of registering and taking account of the world; it is, in a fundamental way, about creating the world. It is an active process of making sense (Weick, 1995). Similarly, just as we shop in order to discover what we want (and we might think of some kinds of political learning as “policy shopping”), so we read in order to discover what we think, not just what any given author thinks (Brown & Duguid, 2000). What emerges is a conception of learning as an act of imagination, invention and persuasion as much as (or as well as) comprehension, deduction and assimilation. (2008, p. 15, my emphasis)

The next section empirically analyses the case of Sweden and the OECD, in order to show how such processes of socialisation and collective puzzling happen.

2.1 Socialisation and Learning in Governing: The OECD Reviews of National Policies for Education

As the OECD itself suggests, the ‘Reviews of National Policies for Education’ are one of a range of activities that lead to analyses of education policy development and implementation (OECD, 2016). According to the OECD, Ministries, as well as professional groups, researchers and others, are involved in formulating and carrying out the work and in discussing the findings of the OECD review expert group that visits the country; thus, the circle of participating actors is wide and includes both national and international actors (OECD, 2016). The aim of the Reviews is ‘to improve the understanding of issues, implications for education policies and experience with the range of national policy options and strategies’ (OECD, 2016). Recent reviews cover a diversity of countries; for example, the Netherlands, Latvia, South Africa, the Dominican Republic, Russia, Scotland, Bulgaria, Korea, Ireland, Italy, Estonia, Lithuania, Kazakhstan, Chile and many others. Indeed, going back into the OECD archives, it is difficult to identify countries that have not had an OECD review of their education system.

Education policy reviews proceed in several stages: initially, there is preparation and completion of a background report by the country undergoing review, followed by a two-week mission by an external team of reviewers. The external team then prepares and completes the review report. This is presented at a 1 to 1½ day review session at the OECD Education Committee, when the Minister (with input from senior staff) comments on recommendations and conclusions of the review team and responds to questions of other countries’ delegates to the Education Committee (OECD, 2016).

The report of the external review team, edited to take into account the main points raised in the review session, is then published. According to the OECD, the reviews’ scope is usually very broad, with the goal of providing recommendations on ‘effective policy design and implementation’. Generally the analysis covers ‘strengths and weaknesses which are primarily based on OECD’s collected data (from studies such as PISA, or earlier OECD reviews), national research, review visits to the country and OECD’s extended knowledge base’ (OECD, 2016). Finally, the programme of reviews concludes with a follow-up. After a period of about two years, ‘authorities of the country concerned submit a short note to the Education Committee in which they report on progress and developments. Discussion takes place as a regular item in the agenda at a bi-annual meeting of the Education Committee’ (OECD, 2016).

2.2 The OECD Country Review of Sweden (2015) and the Foundation of the Swedish School Commission

The Swedish OECD country review of 2015 was not the first one in the country; another had preceded it in 2011 (Nusche et al., 2011). However, the negative PISA 2012 results, as well as the general downward spiral of Swedish education performance, quickly led the Ministry of Education and Research (MoER) to commission the OECD for yet another report on the country’s education system. The objectives of the review were to

1) identify the main reasons for the decreasing trends in Swedish students’ performance; 2) draw on lessons from PISA and other benchmarking countries/regions with an expert analysis of key aspects of education policy in Sweden; and 3) highlight areas of policy and its implementation which might add further value to Sweden’s efforts to improve student performance. (OECD, 2015, p. 13)

The process followed the usual pattern: a background report prepared by the Swedish government, an OECD pre-visit which defined the key areas for review, an OECD team review visit to Sweden in October 2014, as well as a series of other exchanges with experts and stakeholders in Sweden and internationally (OECD, 2015). The two external experts in the team were Richard Elmore, Gregory R. Anrig Research Professor of Educational Leadership, Harvard Graduate School of Education, and Professor Graham Donaldson, the former Scottish HMI Chief Inspector and the then president of the Standing International Conference of Inspectors (SICI)—Donaldson was one of the chief architects of the self-evaluation model in Scotland.

The OECD visit took place between 13 and 22 October 2014 and involved a number of meetings with key actors such as the Ministry; the Swedish National Agency for Education (Skolverket); the Swedish Schools Inspectorate (Skolinspektionen); the two teacher unions (Lärarförbundet and Lärarnas Riksförbund); academics in education research and teacher education (Stockholms universitet and others); the Swedish Association of Local Authorities and Regions (Sveriges Kommuner och Landsting); and visits to different municipalities and local schools (OECD, 2015).

The report uses quite damning language to describe the state of Swedish education: ‘no other country … saw a steeper decline’ (ibid., p. 7); ‘a school system in need of urgent change’ (ibid., p. 11); ‘a position significantly below the average’ (ibid., p. 27). On the basis of this discursive analysis of the text, which used a language that described a system in crisis, I interviewed key actors who contributed to the report. Their reflections on the process of how the Review was commissioned and its effects were enlightening.

Although they do not themselves use the term socialisation, all interviewees, in their interpretation of the influence of PISA in Sweden, offered a similar story of staggered events that followed one another; of the involvement of an ever wider set of actors; of the importance of the OECD experts in offering suggestions; and of the central role of the establishment of the Swedish School Commission as a forum of meeting, debate and learning for all the actors involved. Indeed, the title of the Commission’s report, ‘Samling för Skolan’ (Gustafsson et al., 2017), denotes precisely the notion of ‘congregation’ or ‘gathering’: the meeting and consensus of different actors around the core of the Commission’s study, which was the OECD numbers themselves. Numbers and data are central in the interviewees’ narratives, but so are the meetings, the debates and the continuous coming together of actors in socialising and learning events.

Interestingly, perhaps simultaneously with the rise of the OECD as the ultimate go-to expert organisation, we observe the slow decline of Swedish education research as valid and trustworthy enough to even take part in the PISA data collection process—instead, Andreas Schleicher acquired an almost divine quality that matches closely the religious adherence to PISA in Sweden:

What has happened, you can go back to 2003, TIMMS and PISA were at MidSweden university, now they are all run by the educational board (Skolverket). And they contract fewer and fewer education researchers for very little time to do some coding, to offer some comments. We were really independent from the government and at the time we were in a lot of the OECD meetings, we were involved. But now it is the educational board which does all that – and they don’t have any researchers, they have project managers but they do not have researchers, they have government bureaucrats…. But when Andreas Schleicher is in Sweden it is like we have a visit from God, it is very strange. (Academic 2)

As a result, education researchers no longer have an alternative voice in Sweden; when they do take part, it is mostly to validate rather than dispute the PISA results:

Today no one can [criticise PISA] really. PISA has in some sense got so much status that I don’t meet many who can say we can contrast PISA – but a lot of people say we need to discuss the implications of PISA. (Academic 2)

Although the academic community appears to have lost its central position in informing policy, there appears to be a much more diverse and horizontal participation of different actors in policymaking, even if it involves a lot of ‘cherry-picking’. Here, speaking about how the OECD report was commissioned, an interviewee, who later became central to its analysis, suggests:

Many Swedish organisations and persons, researchers, people in the professions [were asked to participate] and were listened to – very selectively of course – and did a lot of cherry picking of what they liked to hear and what they didn’t like to hear – this is what politicians do. (Commission member 4)

What is important here are two developments that seem to have dominated the Swedish education policyscape since 2000: the first was the unequivocal rise of the OECD as the gold standard of education research in the country (with the simultaneous downgrading of national education researchers); the second was the rise and broadening of a debate about a system portrayed as being in crisis. This picture, given the history of Sweden as a model European education system throughout the twentieth century, in addition to the success of close neighbours such as Finland, became symbolic of a marked shift in the need to socialise and ‘educate’ all relevant actors about the critical need for change. That process began slowly in the mid-2000s but became cataclysmic after the damning PISA 2012 report. Indeed, it was PISA 2012 that became the primary reason for launching the Swedish School Commission:

It was a response to the OECD report. If I can give you a bit of the timeline: in December 2013 we have the PISA report, a week after that there was a big debate at the parliament about the school crisis. Thereafter Björklund invites the OECD to write the report, even before the report is released, and they organised this school commission with Anna Ekström – now the chair is Jan-Eric Gustafsson. Their task was to study the report of the OECD in order to make a Swedish analysis, do we agree what is the to-do list, but this commission has been criticised as being only in favour of this particular view that the PISA results are the only ones that show the truth about Swedish schools today. (Academic 1)

Indeed, the task of the Commission was set out as follows: ‘partly based on the OECD’s recommendations, the schools commission will submit proposals aimed at improving learning outcomes, teaching and equity in Swedish schools’ (Swedish Government, 2015). The OECD and its recommendations were thus central to this debate and, in many ways, framed it; this then instigated the work of the Commission, which was purposefully staffed by a broad range of actors and met regularly over two years in a process of learning, socialisation and translation of the OECD recommendations into national policy.

The Swedish School Commission met regularly for two years. Its members were asked to look at evidence and draw conclusions about the direction of travel for Swedish education. Interviewees described these meetings as learning opportunities for all participants involved. They described the Commission as broadly reflecting the wider public and policy debate in Sweden and suggested that its priority was to take the time necessary to offer a ‘Swedish solution’, while nonetheless following closely the OECD research and recommendations. Again, in their narratives, they never claim that the OECD data are not central; on the contrary, they describe OECD data as the ‘spine’ that holds them all together. However, they also suggested that a national ‘filtering’ process took place through their meetings, and that this was necessary for the interpretation, adaptation, persuasion and, ultimately, adoption of the OECD perspective.

To conclude this section, the OECD Swedish country review of 2015 and the set-up of the Commission that followed represent an illuminating case of the kind of processes of actors’ socialisation discussed earlier: in this case, the OECD was invited to enter a national system and combine its quantitative knowledge with a more qualitative perspective, gained from a two-week fieldwork visit, discussions with local actors, as well as a detailed background report supplied by the government of the time. If actors’ socialisation and collective puzzling via the spectacle of country rankings was the state of affairs prior to the rise of a field of global public policy with the SDGs, the UN’s participatory turn, to which we turn next, heralded a whole new era in the role of experts as brokers, and particularly in UNESCO’s influence, as will be charted in the chapter’s following sections.

3 UN’s ‘Participatory Turn’: Quantifying While Democratising

As 2015 was approaching, it became increasingly clear that the MDGs would not be achieved (Fukuda-Parr, 2017). One of the main causes of this failure was seen to be the top-down UN structure that governed the goals. In addition to the failure to establish an effective governing architecture, the calls for decolonising and democratising the global public policy arena were also multiplying at the time and gaining increasing momentum. Hence, apart from establishing ambitious goals in themselves, another key ambition was to alter their governing architecture (compared to how the MDGs were organised), by democratising it and allowing countries to have a much stronger say:

There it was [the MDGs], a very clubby affair. It was basically just us agencies sitting and talking together and all that and very well-meaning of course, but I guess it was a tad elitist in the sense that there are 20 people in a room versus 200. […] So, just that type of dialogue and all that we didn’t have before the SDGs, and also dialogue with countries. At first, the countries were very much, naturally – they were very annoyed at the international agencies being in the front seat and them being in the back seat. This is a country-led process and it was completely flipped and then there was the discomfort with that also, because how can we have you measure something that you are judging your own progress by; it’s like you grading your own paper. But I think, so the entente has been reached and there is, I think the statistical world will be better for it. (World Bank 15)

What is vividly illustrated here are two key tensions embedded in setting up the new global monitoring system: on the one hand, there is a clear break with a top-down, global North-centric view of sustainable development and the promotion of more equal and democratic participation by the countries who would be most affected by these systems. On the other hand, proclaiming such an inclusive design in setting up the monitoring system was seen as risking technical challenges and undermining the authority of expertise, as it would necessarily need to involve countries in the politics of measurement in a much more direct way.

What is of interest is that this ‘participatory turn’ of the UN monitoring system did not merely occur at the level of procedural backstage politics; rather, it was embedded in the key document establishing the SDGs. The flagship document of the Rio Conference, The Future We Want, was at its core a political declaration of the inclusivity of different voices in the governance through, but also of, the Sustainable Development Goals. For example:

We reaffirm the key role of all levels of government and legislative bodies in promoting sustainable development. We further acknowledge efforts and progress made at the local and sub-national levels and recognize the important role that such authorities and communities can play in implementing sustainable development, including by engaging citizens and stakeholders and providing them with relevant information, as appropriate, on the three dimensions of sustainable development. We further acknowledge the importance of involving all relevant decision-makers in the planning and implementation of sustainable development policies. (UN General Assembly, 2012, p. 8)

As evident in this quotation, the inclusion of not only policymakers but also a range of other stakeholders (such as civil society and national representatives) was seen as necessary for the success of the Sustainable Development Goals. The Future We Want (UN General Assembly, 2012) explicitly discusses the involvement of developing countries as equal and necessary participants in sustainable development governance, as indicated in the following:

We reaffirm the importance of broadening and strengthening the participation of developing countries in international economic decision-making and norm-setting, and in this regard take note of recent important decisions on reform of the governance structures, quotas and voting rights of the Bretton Woods institutions, better reflecting current realities and enhancing the voice and participation of developing countries, and reiterate the importance of the reform of the governance of those institutions in. (UN General Assembly, 2012, p. 19)

These political declarations went even further in ‘Transforming Our World’ (UN General Assembly, 2015), the cornerstone document establishing the SDGs as a political programme. From the outset, the SDGs were an initiative relying on the participation of stakeholders:

All countries and all stakeholders, acting in collaborative partnership, will implement this plan. We are resolved to free the human race from the tyranny of poverty and want and to heal and secure our planet. We are determined to take the bold and transformative steps which are urgently needed to shift the world on to a sustainable and resilient path. As we embark on this collective journey, we pledge that no one will be left behind. (UN General Assembly, 2015, p. 1)

Therefore, the SDGs journey was proclaimed as a collective one, making it everyone’s stake to progress and ultimately realise the set of ambitious goals. Furthermore, again, as was the case in ‘The Future We Want’, this new partnership paradigm is rooted in solidarity with the poorest:

The scale and ambition of the new Agenda requires a revitalized Global Partnership to ensure its implementation. We fully commit to this. This Partnership will work in a spirit of global solidarity, in particular solidarity with the poorest and with people in vulnerable situations. It will facilitate an intensive global engagement in support of implementation of all the Goals and targets, bringing together Governments, the private sector, civil society, the United Nations system and other actors and mobilizing all available resources. (UN General Assembly, 2015, p. 10)

Here again, the document positions the SDGs as a monitoring programme produced with developing countries as key partners. Furthermore, the document posits the partnership as one of a wider spectrum of such collaborations, involving national actors, the private sector and civil society. Thus, the SDGs become a participatory monitoring tool, requiring ‘buy-in’ in the broadest sense in order to achieve consensus, the latter being the key underpinning principle of the new framework. Taken together, these two documents clearly show how the SDGs fundamentally changed the role and practice of expertise: what we observe here is that, alongside the need for technical knowledge, experts are required to work closely with country representatives and a range of other actors in order to achieve agreement on the goals. This extends the kinds of qualities that expert work involves: apart from needing to be statistically and technically highly competent, IO experts would need to also be successful in persuading other actors and securing support and buy-in, so as to be allowed to push on with their work. Surprisingly, what we found in the METRO project was that while most of the experts involved in these processes were former statisticians, and therefore had the technical capacity to understand the issues involved, the vast majority of the work required of them was to foster relationships and broker agreement between a very wide range of actors with diverse ideas and interests.

These are some of the reasons why the introduction of this ‘participatory’ approach to statistics was not straightforward. The production of globally comparable statistics is a very complex and demanding technical process that cannot always be adjusted and made to fit with actors’ disagreements and political persuasions. One way in which these tensions were resolved, at least rhetorically, was the UN’s devising of the concept of ‘country ownership’. As a concept, it did not yield all the decision-making power to countries, but rather it attempted (not always successfully) to integrate political buy-in into the production of methodologically robust indicators. Thus, the political declarations outlined in the key SDG documents and structures materialised in the ways the relationship between countries and IOs was designed and put in place. The principles of participation and technocracy, even though contradictory, were predominantly discussed as indivisible. While at the level of political declarations some level of discrepancy was to be expected, the translation of these principles into specific measurement processes led to tensions and contradictions, particularly in various practices occurring at the intersection of the work of experts and national policymakers and civil servants. In fact, the technical group responsible for the indicator development, the IAEG-SDGs, set up ‘country ownership’ as one of its key goals. Hence, it is clear how this highly technical body is also required to act as a broker of relationships and consensus-making with participant countries, rather than merely do statistical work:

The role of the IAEG-SDGs members should include consultation and coordination within their own national statistical system, and should also include reaching out to the countries in their respective region and sub-regions. (IAEG-SDGs, 2015a, p. 2)

This point is further repeated in the discussion, as reported:

During the discussion under this agenda item members of the IAEG-SDGs commented on the relationship between national, regional and global indicators, the need to ensure national ownership of the global indicator framework, the importance of statistical frameworks. (IAEG-SDGs, 2015a, p. 10)

The choice of focus on ‘ownership’ in relation to securing meaningful country participation is interesting here: on the one hand, it is malleable enough to appear to resolve the technocratic and democratic tensions of the SDGs. On the other, focusing on ‘ownership’ does not completely surrender the quality standards of indicator development, but communicates the need for countries to negotiate measurement as both a technical process and a political process of deciding on policy prioritisation. Thus, experts are required to maintain sufficient levels of technocratic accountability to reap the benefits of the ‘epistemic virtue’ (Daston & Galison, 2007) of numbers (such as standardisation, objectivity and universality), while combining their technical capital with navigating important political calculations, such as securing consensus, promoting collective action and ensuring the political acceptability of the monitoring process and all the decisions needing to be taken therein. As I will discuss in the next section, this is the kind of work that the UNESCO Institute for Statistics did, as it brought to bear not only its epistemic authority in the field but also its reputation as a trusted IO in the eyes of countries of the Global South in particular.

4 Experts or Brokers? UNESCO Institute for Statistics as a Trusted Actor

As discussed in the previous section, the SDGs captured the imagination of a wide set of actors in the field, since they purposefully allowed multiple ‘entry points’ into their world: on the one hand, they emphasised the use of technocratic and management principles to create an objectified and measurable field, while also proclaiming to be bottom-up, grass-roots and transformative, distinct from older Western-liberal ideas and practices (Waldmüller et al., 2019). Such an open framing allowed the SDGs to move and adapt much faster than previous monitoring exercises, no doubt partly due to the malleability and flexibility of the monitoring framework itself. Thus, the scope and complexity of the SDGs lend themselves to a focus on the structures and interlinkages between data, actors and politics, precisely the processes evidenced through the making of SDG 4.

Thus, this section turns the spotlight onto these key people, who, although occupying technical positions, have also been given a strong mandate towards achieving consensus among the wide diversity of participants in what is often referred to as the ‘indicator debate’. Here I focus on the struggles, tensions, as well as the transformations of actors’ epistemic, highly technical knowledge capital into a set of practices that focused primarily on brokerage and on achieving consensus. Crucially, such brokerage practices do not lessen the significance of quantification and specifically the ‘indicator debate’; instead, they promote the monitoring agenda and, through their unfolding, co-opt a wide range of actors.

In order to demonstrate the workings of brokerage and consensus-making in practice, it is useful to discuss the work of two key indicator groupings, namely the Technical Cooperation Group (TCG) and the Global Alliance to Monitor Learning (GAML). While this section derives some data from the extensive online documentation of their regular meetings, it is primarily based on the voices of the actors themselves, as key active participants in these groups and their meetings.

Although 2015, the year the SDGs were launched, seemed like the dawn of a new era for the global education community, it left a number of issues open—among them the so-called indicator debate. The Education 2030 Agenda (Incheon Declaration, 2015) had established four levels of indicators (global, thematic, regional and national). The first level included up to 11 global indicators, negotiated in a series of meetings of the Inter-Agency Expert Group on SDG Indicators (IAEG-SDGs).Footnote 3 In light of the unequal development of these indicators,Footnote 4 and often the unavailability of data and lack of coverage on a global scale, the IAEG-SDGs implemented a 3-tier classification tool,Footnote 5 which categorised indicators depending on their robustness according to internationally established methodologies and standards and the regularity of data production at country level. Importantly, the IAEG-SDGs also identified a number of custodian agencies deemed responsible for the development and refinement of such indicators. In the case of education (SDG 4), the UNESCO Institute for Statistics (UIS) became the responsible entity for 9 out of 11 indicators and was tasked with sharing the responsibility for the other 2 with UNICEF and the OECD. Given the initial classification of a number of metrics as tier 2 and tier 3 indicators (i.e. indicators for which data are not regularly produced by countries or for which measurement standards are not yet available), their refinement and production rapidly became a priority for UIS, who, as discussed below, perceived their organisational legitimacy and reputation as being closely tied to achieving both technical solutions and the consensus of the participant actors about the pertinence and suitability of the indicators under consideration.

Given the complexity of the endeavour, but also in order to guarantee the participation of a wide range of stakeholders, two ad hoc mechanisms/working platforms were created with a view to advancing the development and production of SDG 4 global and thematic indicators. One was the ‘Technical Cooperation Group on the Indicators for SDG 4’ (TCG) and the other was the ‘Global Alliance to Monitor Learning’ (GAML).

The former was established in 2016, conceived as a space for discussion as well as a technical platform to support UIS in the implementation of the thematic indicator framework. The TCG is composed of regionally representative UNESCO Member States, as well as representatives of different IOs (UNESCO, UNICEF, OECD and the World Bank), civil society organisations and the co-chair of the Education 2030 Steering Committee.

GAML, on the other hand, was also created in 2016, originally defined as an ‘umbrella initiative to monitor and track progress towards all learning-related Education 2030 targets’ (UIS, 2016, p. 49) and tasked with the development of tools, methodologies and shared standards to measure learning outcomes in the context of SDG 4. Its membership is open to any individual or organisation willing to contribute to the work of GAML and includes IOs, civil society organisations, a variety of technical partners and assessment organisations, and representatives of United Nations (UN) Member States. Similar to the TCG, GAML by definition operates in an open and participatory manner, with decisions made through consensus.

Importantly, these platforms did not emerge in a vacuum. On the contrary, and as briefly discussed in the previous chapter, both were born out of already existing initiatives launched during the run-up to the approval of the Education 2030 Agenda. More specifically, the TCG represents a continuation of the Technical Advisory Group (TAG) established in 2014, chaired by UNESCO and including experts from a range of education-related multilateral agencies. GAML, in turn, was a successor to the Learning Metrics Task Force (LMTF), launched in 2012 and envisaged as a multi-stakeholder partnership co-convened by the Center for Universal Education (CUE) at the Brookings Institution and UIS. For both the TCG and GAML, there is extensive online documentation of their regular meetings, but they only really come alive in the voices of individual participants. According to two interviewees,

The [technical advisory] group that led the developments, that gave us the current indicators for SDG 4 was a precursor of TCG and GAML. And it was essentially the same composition … that group basically just renamed itself as the TCG. They worked on the indicators that were then adopted. (UIS 1)Footnote 6

The Learning Metrics Taskforce was another space where conversations were held and where I think most of the big actors in the global education policy space were somehow represented. I mean if you look at institutions involved … so I think that also contributed to a consensus building that’s become really hard to resist. (Civil society 1)

Nevertheless, despite a certain path dependency, the (re)formation of both the TCG and GAML entailed a procedural shift vis-à-vis their precursors. Both initiatives were explicitly set up in the understanding that they would be subject to a transparency mandate and were expected to operate in a democratic, equitable and inclusive manner. In this sense, both platforms are subject to a dual form of accountability—they are held responsible for the success of their technical work, but also judged in terms of the quality and inclusive character of their deliberations. To put it differently, both spaces are characterised by an inbuilt tension between the technical and the political accountability of the whole endeavour. An analysis of the work of the two indicator groups therefore offers a productive canvas for mapping the struggles of actors’ positionings and their efforts to produce consensus in the field. This tension is primarily evidenced through the centrifugal force of technocracy versus the perceived need for SDG 4’s inclusivity. As a number of my interviewees suggested, this has created a set of quality assurance problems that, although extant before, were never quite as prominent as in the case of the Education 2030 Agenda (SDG 4):

I think the way that UIS and GAML are trying to manage this is, say, if countries want to submit their national assessment data, that’s fine, no problem. But I think we’re trying to manage by then saying, but we are going to put your data through a quality control process. And that’s going to tell us technically how strong the data are. It doesn’t mean we’re not going to publish it, but the data may be published with an asterisk or footnote or in a slightly different way to signal to the viewer of the data that this is not exactly the same as say a TIMSS score or a PIRLS scoreFootnote 7 … it’s let’s get them in the door, let’s just get it started, get people used to the habit of data on learning, then gradually raise the bar. (World Bank 1)

However, not all IOs share this tendency to prioritise inclusion over data robustness:

There is no clear-cut answer to this, I think it’s a very difficult dilemma. But it also reveals a very different approach between UNESCO and OECD on how to respond to this. And we have encountered this not just in the outcomes metric but also in a way with general statistics. The UNESCO philosophy being we need to be open, we need to accept the constraints and at the end of the day it’s better to have something than to have nothing. At the OECD I have taken a very different approach to this. For me, the most precious currency is trust. If I know that policymakers do not trust the data or don’t trust them to be comparable, the whole thing is of very little value to me, because I want these things to be actually having an impact on this. So basically UNESCO and OECD use the same data source on the administrative side at least. With UNESCO the tables are full; for the OECD we have half of the cells filled with an M, which means actually these data aren’t good enough. (OECD 1)

The latter quotation is exceptionally telling in terms of the work that numbers do in the construction of coalitions of actors, even when the data are not there at all. This is precisely the political function of numbers; even in their absence, they create the conditions for consensus and coalition-building. On the one hand, UNESCO appears to be using statistical data as the means to mobilise an ever greater number of countries to participate in the global measurement operation. UNESCO’s primary ‘currency’ is the inclusion and equal participation of all actors in the policy process; it uses numbers as a symbolic emblem of the organisation’s belief in more democratic and transparent processes of transnational education policymaking and monitoring. The OECD and the World Bank, on the other hand, still appear immovable from their core technocratic tenet of ‘trust in numbers’; they use peer pressure to encourage countries to conform and participate. The ‘M’ used to denote missing data is not an empty cell; it symbolises in many ways the peer pressure and governing function that numbers have.

However, I do not wish to present these organisations as being in any way monolithic. The majority of actors I interviewed, even ones from the same organisation, often gave divergent views of their organisation’s approach to data robustness and validity. However, it is precisely the contentious issue of the conflict between the technical and the political accountability of the monitoring tool which has been the breeding ground for the emergence of the ‘metrological field’ that governs transnational education. This field is inhabited by individual actors who assume different positions, sometimes following the culture of the organisation that employs them, but also—indeed often—not. These actors use their accumulated epistemic capital in order to transform it into a brokering device that facilitates their visibility, authority and legitimation in the field. More often than not, my interviewees cited their own career trajectories, values, frustrations or aspirations as the reasons which led them to take the position they had assumed; these positions are not permanent and solid. They often change in the face of developments in the ‘field’, i.e. the positionality, advancement and withdrawal of other actors involved in it. This conditions not only how they act, but also how they present themselves; style and substance can be of equal weight here.

4.1 The Role of Meetings

Observing the process and practice of these groups’ gatherings is perhaps the most telling material evidence of how meetings contribute to the production of consensus around numbers and the policy directions that accompany them. Anthropologist Clifford Geertz’s idea of the ‘poetics of power’ (Geertz, 1980) is useful for unravelling the thick layer of dramaturgy coating this apparently technocratic regime. Several of my interviewees suggested that most meetings are performative events, which follow a certain ritual, allowing enough free space to conclude with some loose decisions that determine the agenda for the follow-up meeting. There is a clear-cut distinction between participants from the Global North, whose presence and contributions dominate the meetings, and representatives from countries of the Global South, who most of the time have a very passive presence, if any at all. This of course does not negate the agency and power of participants from the Global South, especially in relation to exploiting their own perceived weak positioning in order to accomplish specific goals. Thus, the space of the meetings becomes the visual manifestation of those who carry symbolic capital (and exercise authority) and those who do not—though, ironically enough, this very lack of symbolic capital also becomes a source of strategic positioning, since the latter group’s agreement to the proposed agenda is required for the process to move on.

Further, the ambiguity and informality of the process, despite being an issue for some in the room, become a valuable, malleable tool in ensuring participation, while at the same time also pushing on with a specific, pre-determined agenda:

This was a big argument in the [removed for anonymity purposes] meeting two years ago. Because initially the [removed for anonymity purposes] ended up being a meeting of all the different actors in the assessment field, and they were kind of fighting among each other trying to frame their own assessment as the best for any SDG monitoring efforts. And this of course means that you have quite a lot of conflict of interest in the room. So one of the things that we as [removed for anonymity purposes] tried to say quite early on was that for this to be a well-functioning body that can actually do some work we would need to be quite clear on how decisions are made. And are we working on consensus basis, how do we deal with the fact that so many people have a conflict of interest; who will draw conclusions; if there’s voting, with what numbers would something have to be supported for it to be carried? And this was a frustration that grew as every session basically just ended with a broad sweeping, this was a very good discussion, thanks guys. And it was never really clear what anything would result in. (Civil society 2)

Interestingly, however, frustration and discord about the lack of transparency are not sufficient reasons to disassociate oneself from these alliances; being present at the discussions even when one is at the receiving end is still considered more valuable than not participating in such meetings. This kind of peer pressure, the discourse of crisis and the need for active involvement despite failings and malfunctions, trumps any hesitations about the process itself. In fact, as we see below, the process is informal enough to invite the complainant to try and sort out their own complaint:

[removed for anonymity purposes] then proposed that what if we had a strategic coordination committee that could approve the agendas in advance and that could try to ensure that this works well. And then [removed for anonymity purposes] was of course invited to be part of this, which was a clever move because we had probably been, if not the, at least one of the most critical voices in the room. So we had a dilemma and ended up actually agreeing to be part of this committee … I think what we struggle with is the fact that we know that just by being in the room, we are giving an indirect blessing of what the [removed for anonymity purposes] is doing. And at the same time, if we are not in the room, then we have no access to the conversations. We don’t know what’s going on. So we still feel like somehow we have to be in the room. (Civil society 1)

Chairing the meetings, although seemingly a simple administrative task, can also play an important role not only in how a meeting is run, but also in the conclusions that are drawn. Some of my interviewees suggested that the choice of chairpersons is strategic and aims to avoid (or prompt) specific actions, or to steer the problem to where the IOs want it to be:

Then the chair of the session will basically just wrap up, often without any reference to how things will move on. And this is enabled by the fact that [removed for anonymity purposes] is very seldom chairing the sessions herself, but she asks other people to chair. So you would have for instance the representative of the Australian Department for Trade and whatever it’s called, foreign affairs and trade, I guess, chair a session. And then he will not in a way be expected to do the wrap-up in terms of follow-up, but he’s really just brought in as the one who’s facilitating the session. And then [removed for anonymity purposes] is doing a concluding statement of some sort, where she often has a PowerPoint and she would outline the next steps. But it’s always completely unclear how the critical input from the group is really going to feed in or shape things. (Civil society 4)

Finally, as suggested earlier, the SDGs have prioritised consensus-making and the ‘democratisation’ of data as key to any process going forward. Given the power asymmetries and the often informal management of the process, such efforts are frequently interpreted as a symbolic gesture that might even threaten the doxa of ‘trust in numbers’—they are, in a sense then, a ‘hetero-doxy’, a necessary deviation from accepted standards that sustains the ambivalence and multiplicity of the field.

It is precisely this conflict between methodological robustness and democratic participation that seems to set the SDG 4 wheels in motion. Although IOs seem to have the epistemic capital to drive the process, the need for participant countries to agree and approve suggests that other forms of capital are key, too. This leverage that participant nations and other civil society organisations have can be seen as problematic at times:

These are the professional standards of the measurement community. These are the instruments that are going to help you align your assessment with what good practice is, we’re recommending you use it. But it feels like we’ve gone into a whole second phase of, and now we’re going to get all the countries together, and we’re going to get everyone’s buy-in. And it keeps going back to this theme of democracy and voice. Is there such a thing as too much democracy, too much consultation? At what stage do you say we just have to run with this, this is what it is? (World Bank 1)

To conclude, one of the greatest difficulties of analysing expertise within the transnational metrological field is the field’s complexity, dynamism and multiplicity. Although quantification has dominated global governance as the new unequivocal doxa of planning the future, it is precisely its open imbrication in political struggles that has transformed it into a powerful governing tool: the work of expert brokerage has been key in achieving this balance and this transformational power.

5 Discussion

The sociology of quantification has richly explained the ways that the work of counting is a deeply political process, despite its claims to rationality and objectivity (Merry, 2016). Indeed, quantification needs to ‘de-politicise’ in order to claim its legitimacy and authority; this is the main reason ‘why International Organisations hate politics’, according to the recent book by Louis and Maertens (2021). Indeed, there have been plenty of detailed accounts of the processes of technicisation that social problems often undergo, in order for experts to render them technical, and thus factual and neutral, and distinct from obstructive political struggles and ideologies (Wood & Flinders, 2014). Similarly, Diane Stone uses the term ‘scientization’ to describe the processes of transforming social issues into problems amenable to the scientific cause-effect relationship; the latter is seen as authoritative enough to control or even reduce uncertainty and risk (Broome et al., 2018; Stone, 2017). Of course, there is nothing apolitical in such processes of rendering social problems as technical issues; on the contrary, technicisation is deeply political work that involves decisions about what to count and what to ignore, which variables to disaggregate and which not (some of these wilful acts of performing ignorance were already discussed in Chapter 3), and how much to spend on collecting and analysing information.

This chapter discussed the political work of technicisation as a project that does not ‘land’ in policy contexts as a top-down agenda, sent from some unknown ‘centre of calculation’, but one that is open to contestation and negotiation with participant countries. One of the underpinning assumptions of the epistemic power of quantification and its influence has been the separation of the spheres of science and politics (Lahn & Sundqvist, 2017). Yet, this positioning of measurement as objective and devoid of politics is increasingly challenged not only on the grounds of ethics, democracy, effectiveness and efficiency, but also on its ability to win ‘hearts and minds’. It is increasingly acknowledged that quantification should strive not only to produce ‘global’ knowledge but also to acknowledge the different contexts in which measurement is being done. Thus, through the discussion of the empirical cases of the OECD’s education country reviews, the UN’s participatory turn, as well as the function and role of technical groupings and their meetings in the production of the education SDG, I showed how expert work is not merely the technical and statistically robust process that quantification promises; it has come to deliver a function that politically is even more significant and necessary if global comparative metrics, and their associated policy goals, are to be achieved: the work of socialisation, interdependence and brokerage that many of the experts METRO interviewed are asked to perform in order for the global goals machinery to move, one cog at a time.

First, through the case of the OECD’s country reviews, I showed that, rather than simply offering what has been seen as fast policy solutions (Lewis & Hogan, 2019), the OECD painstakingly enters national sites and works with local actors to create conditions of belonging; that is, it creates conditions fruitful for collective puzzlement, socialisation and policy translation, as Heclo (1974) and Clarke et al. (2015) suggest. The set-up of the Swedish School Commission with a remit to study the OECD report in detail and offer recommendations for reform could not have been a better example of quantification as a Janus-faced process of the simultaneous de- and re-politicisation of the problem of perceived under-performance in education. Although IOs are the usual suspects in the scholarship that focuses on the production of global comparative metrics, I showed how national actors were equally central in supporting, sustaining and even strengthening these processes. Indeed, some of the interviewees, even when critical of the OECD’s work, were ready to acknowledge that the OECD sparked a debate that would not have happened otherwise. Progressively, since the mid-2000s, the OECD became an undisputed expert organisation in Sweden, and indeed, as a couple of interviewees suggested, a ‘production force’. Close and sustained work with the Ministry, in combination with touching a nerve with the Swedish public (with quotes by Schleicher, such as ‘Swedish schools having lost their soul’Footnote 8), were key ingredients of this success. In the case of the OECD and Sweden then, ironically perhaps, ‘governing at a distance’ (Cooper, 1998) appears to require a strange sense of proximity: arguably, these conditions of actors’ socialisation and policy translation are necessary for the kind of paradigmatic policy shift that quantification has led to in global public policy.

Through processes of collective puzzling and social learning, experts bring forward a new mode of regulation which draws on and supports the ‘data dream’ by providing it—at least in some systems—with what it lacked before: a sense of belonging and ‘ownership’ of the project. This is how international reform agendas enter national policy spaces and shape them through slow, continuous and consensual build-up of the new, common esprit de corps—the inescapable ‘right thing to do’ (Meyer, 2005). It is these processes of learning and socialisation that embed the international much deeper into the national consciousness, one often traumatised by the exposure that the damning global comparative data may bring.

The work of expertise, therefore, has undergone changes that may be seen as emblematic of a paradigm shift not only in the regulation of education, but also in regulation per se. Crucially, as I showed, socialisation and the learning it produces do not merely entail the learning of facts. They are constitutive, generating or strengthening trust, commitments, identifications and loyalties—embodying, as Hunter has fittingly described, ‘the connective tissue of governing itself’ (Hunter quoted in Newman, 2012).

Indeed, the UN’s participatory turn is evidence of very similar tendencies and processes taking place, with the involvement of national actors, at global sites of measurement and decision-making. However important the socialisation of national actors in these processes is, it has to be matched and strengthened by the expansion of connections and interdependencies grounded in the new governing paradigm of the global goals, as exemplified in the production and measurement of the SDGs. As the chapter discussed, this participatory logic was embedded in the SDGs from their inception, with important consequences for the governance structures of the framework, as well as for its implementation at the country level. Consequently, one of the key global IOs, the UN—and subsequently all the IOs that work with it—was driven not only by the technocratic logic of quantification, but also by the demands of participatory governance.

From their inception, and in contrast to the MDGs and all other global monitoring programmes that preceded them, the SDGs were designed to be both a highly technocratic monitoring programme of ‘governing by numbers’ (Miller, 2001) and a participatory project aimed at assuring the participation of countries and communities. Such a double focus on both democracy and technocracy has been challenging for the experts within the IOs, as the technical decisions that had to be made depended on a concurrent process of their de-politicisation as technical matters requiring data solutions and their re-politicisation as issues that brought participants together in search of consensus. Such a balancing act required careful expert brokering work, as prioritising one over the other would risk the loss of momentum and support: for example, as the case of the MPI indicator showed, prioritising methodological practices of mechanistic objectivity (Daston & Galison, 2007) risked stalling collaborative action, politicising it or stopping the political processes aimed at actually fulfilling the targets of the SDGs. Alternatively, technical considerations often had to be mobilised as tools of distraction, especially when difficult political decisions had to be made and no consensus was in sight.

This context, where the stability of objectivity was replaced by the fluidity of the continuous consultation processes, shaped to a large degree the new expert work of brokering. What METRO found was that numbers are no longer ‘fixed points’ (cf. Lahn & Sundqvist, 2017) but fluid entities that could always be improved, changed and mobilised in different ways. Experts had to mobilise not only their technical and epistemic capital but also a range of capitals at their disposal, such as the use of evocative language, beautiful data, marathon sessions and Global South participants flown around the world to attend—and thus legitimise—yet another meeting.

Finally, the case of the SDG4 is illustrative of what expert brokering work involves. For a start, although SDG 4 could be seen as a prime example of a transnational soft regulatory instrument (in the tradition of ‘soft’ law, i.e. best practices, expert standards, rankings, ratings, audits, quality assurance and the like), as I showed, it is also substantially different from other quantification exercises. The construction of SDG 4 represents a leap in the practice of transnational soft regulation because, although prescriptive, it also appears as transparent, pluralistic, open and developmental—consensus-making is prioritised by experts and data collection and validation processes are required to be ‘democratic’. I have described the ways in which the centrifugal forces of technical and political accountability have given shape to expert actors’ positionings and political work within it.

Nevertheless, these processes are not smooth and linear. As the empirical material shows, they involve antagonistic relationships among all the actors involved, and increasingly so, given the universal aspirations of the agenda and its claims to ‘democratise’ data monitoring for all the participant nations. Lack of resources creates enormous frustrations and limitations; in many ways, it necessitates the use of pre-existing data. This creates pressures in the relationships of the four major IOs (UNESCO, OECD, UNICEF and the World Bank), since they have to coordinate their work in a context restricted not only by limited budget availability but also by attacks on their expertise.

At the heart of this chapter are the paradoxes and the multiple ambivalences that quantification brings to transnational governance. On the one hand, numbers are necessary for the construction of discursive coalitions of actors who are not known to each other or have not collaborated before. Indicator frameworks and all other subsets of numerical work create what Bourdieu (1977) terms a ‘linguistic market’. While some actors have the epistemic purchase to own and control most of this market, many others, as we have seen, are there, knowingly and willingly, to consume this lingua franca of numbers and transport it back home. On the other hand, numbers’ underlying use as the new doxa of transnational governance legitimates a whole series of informal and ad hoc arrangements, all accepted and all approved in the name of the multiple global crises and the need to construct as broad a consensus as possible; in this fluid and dynamic arena, even bad-quality data, or no data at all, would do.

As we have seen above, the SDGs identified a specific failure of all previous statistical large-scale projects to bear fruit and developed a manifest governing programme to influence the behaviour of participating actors—and by ‘participating’ I do not mean only national ‘generalists’ but also highly technical elite experts who are now asked to expand their set of skills and adapt to this new governing reality. It may be that interventions still appear restricted to pushing (and largely financing) the statistical capacity for nations to produce data for governing; nevertheless, this step is seen as (and indeed is) key in achieving ‘transformative’ change. In terms of expert work, it creates different types of contributions (from the highly technical to the politically strategic and diplomatic) that require horizontal relationships between actors that are not fixed but are continuously negotiated and shared. Thus, expert work facilitates the emergence of a global public policy field that transcends the national/international/state/non-state divides.

To conclude, the ambiguity of numbers which describe and simultaneously prescribe allows participant actors to perform their function as transnational actors who can simultaneously take part in collective decision-making and maintain their own particular register of the meeting and its aims and decisions. In this metrological space or field, objective relations are structured by the distribution of economic, epistemic or cultural resources ‘which are or may become active, effective, like aces in a game of cards, in the competition for the appropriation of scarce goods of which this social universe is the site’ (Bourdieu, 1989, p. 17). The perceived weak positioning of Global South actors in the process is a telling example of this.

As a result, the non-existence of any ‘rules of the game’ in this field is often seen in the literature as an ‘institutional void’ (Hajer, 2003), where actors have to make up the rules and processes as they go along. Thus, quantification is key in the production of transnational governance, as it represents the unfolding development of ‘product and process’, constantly moving with that which it seeks to move. Instead of analysing expertise as solely a process of depoliticising social problems through the imposition of a measurement agenda, we observe a process of re-politicisation of policy problems by making them knowable and actionable through expert brokers. Expert work in this context represents an ‘act of performative magic’ (Bourdieu, 2000, p. 243) that is vital to the building of global public policy, as experts attend to and navigate the contested ideas and values which infuse the everyday realities inhabited by all participant actors. Expert work as both ‘product and process’, therefore, despite all its contestations and failings, shapes that which it classifies, generates particular scripts of action and reconfigures policy problems and issues in ways that invite certain possibilities for deliberation and allow the production of ‘consensus by data’.