1 Introduction

This chapter discusses the rise of competition over measurement, which has come to structure the relationships between IOs. The production of data to support comparative assessment and evaluation is one of IOs’ key organisational remits; therefore, they have vested interests in promoting the implementation of their measures over those of others. Consequently, what we observe in the global governance of education is not merely ‘governing by numbers’, but rather a navigation of the market of measurement; this can often lead to conflicts and controversies over statistical data collection, as well as to new partnerships and collaborations. Thus, it becomes obvious that it is not merely epistemic authority that governs the production of quantification. Rather, a market logic affects the way data are constructed, collected and compared. In this setting, measures are not assessed merely on their epistemic qualities—for example, how well they capture the reality of higher education—but rather on their ‘market share’, i.e. the number of countries and agencies agreeing to participate in and contribute to the work of measurement.

This chapter studies two empirical cases of ‘measurement markets’. The first examines the rise of the ‘learning assessment market’ that has emerged around the measurement of learning outcomes for SDG4. The second case moves away from the transnational space of the SDGs in order to analyse a case much closer to home: the rise of higher education (HE) in Europe as a field of measurement and competition. Whereas most literature on HE focuses on the competition between European universities in terms of rankings and research power, here we focus on a different kind of struggle; namely, the interdependence of higher education quality assurance agencies as they struggle for position and purpose in the dense space of quality assurance in education in the EU. As we will see, since the foundation of the EU, higher education has been central to Europeanisation, a process that intensified with the Bologna Process (1999). However, as with Europeanisation itself, the work of HE quality assurance has transformed into an organic, living entity, taking root and growing in unexpected ways.

2 The Market of Learning Assessments

Previous chapters in this book have already discussed the conflictual character of the negotiations surrounding the focus of the education sustainable development goal (SDG4). However, apart from the competitions and struggles created by building the architecture of decision-making towards SDG4, there has also been substantial contestation around the selection of measures for the different indicators, once the latter were decided upon. As Clara Fontdevila eloquently describes, ‘by the mid-2010’s, there were several cross-national assessments (CNAs) in place, but no consolidated methodology to equate and harmonize them’ (2023, p. 6). Although multiple other national datasets and providers eventually also came into the picture, in this section we focus specifically on learning assessments, given the significance they acquired after the success of PISA, but also, as we will see, given their multiplicity and the competition over which of them would dominate the ‘market’ (Table 1).

Table 1 The market of learning assessments (Fontdevila, 2023)

Such diversity of measurements of learning outcomes (especially when contrasted with the lack of data for other indicators, such as citizenship or gender equity) reflects the ‘learning turn’ and the emphasis on outputs rather than education inputs, as described in Chapter 2. What is of interest in this section, however, is that the technical challenges of harmonising these tools were not hidden, but discussed openly by IO actors in two UNESCO World Education Blog posts in 2019. Interestingly, in these two blogs, their authors, Silvia Montoya, head of the UNESCO Institute for Statistics, and Luis Crouch, Senior Economist at RTI International, reflect on the problems of the ‘learning assessment market’ (Montoya & Crouch, 2019a):

Measuring learning outcomes is key to the Sustainable Development Goal for education (SDG 4). There are about a dozen indicators that measure learning outcomes. Data for these indicators are provided via a market. It may seem odd to think so but think about it for a moment: there are data producers, there are data consumers (countries, policymakers, international agencies and researchers), and there are goods and services exchanged for money (prices) to produce the assessment data. (Montoya & Crouch, 2019a)

Here, two key actors in the making of the SDG4 are open about the ways that learning assessments have become an industry with ‘sellers’ and ‘buyers’, as well as money changing hands in the process. The authors discuss the difficulties of navigating these measures and trying to work with them in order to make them comparable. Interestingly, they reflect not only on the challenge of their own work of commensuration, but also on the ‘market’ itself, which they characterise as a failing one, since it apparently does not adhere to any of the rules that well-functioning markets do:

While the specifics of a market will obviously vary, there are two central questions: does it allocate resources efficiently and equitably? In this blog, we ask this of the learning assessment market, and find the answers fall short…. with learning assessments, there is product differentiation. In fact, no important ‘product’ sold in the learning assessment market is the same as any other, and organizations purposefully differentiate. Some assessments are about skills needed for the labour market, others are curriculum-based. Some are designed for primary education, others focus on lower secondary. Some are citizen-led, others are government-led. And so on. (Montoya & Crouch, 2019a)

Although the core idea of the quotation above is that an efficient market would require some uniformity rather than differentiation, learning assessment producers intentionally differentiate their products so that each appears to offer a tool that is unique and most closely meets the needs of the country ‘buying’ it. According to Montoya and Crouch, such differentiation goes against the rules of market efficiency. Nonetheless, in the competition to produce expert knowledge, it makes good sense: expert organisations need to differentiate their goods in order to compete in the very dense space of data production for governance. As necessary as alliances and collaborations may be, so is retaining a unique branding and contribution. Indeed, most conflicts between IOs arise when, on the one hand, they agree to collaborate, while on the other they ‘push’ for their own data instruments and tools, with the World Bank being seen as the usual perpetrator of such moves.

There are also significant barriers for possible competitors to enter the market because it is costly to build a set of good learning assessment questions. New providers typically emerge only to provide a differentiated product. For example, there are assessments serving different geographies (such as initiatives in East Asia) or offering different ways of administering and engaging with the community (citizen-led assessments) as well as different education levels (e.g. the Collegiate Learning Assessment, a higher education standardized test in the United States). (Montoya & Crouch, 2019a)

Another common issue is the differentiation of measures that emerges through the regionalisation of assessments and the difficulty of aligning them in the production of global data. In order for data to be seen as useful, data producers create assessments that allow countries to compare themselves with their neighbours, rather than with countries on the other side of the globe. Similarly, data producers decide on the focus of the assessment depending on need: as the table above shows, most assessments focus on the measurement of literacy and numeracy, while fewer focus on skills and problem-solving.

There is also price discrimination. Not all countries pay the same. There is some negotiation on price and different levels of subsidies. There is also intermediation. Prices in many cases are negotiated between third party payers (e.g. development partners) and the producer. This can be a good thing in some ways (e.g. the poor pay less) but it also results in non-transparency of prices. (Montoya & Crouch, 2019a)

Finally, this last quotation is a reminder of the costs of producing such learning assessments and the ways these are distributed and differentiated depending on historical and political ties, zones of influence and donor choices. A common gap in the examination of expert knowledge production is its cost—the cost to produce it and the cost to buy it. As Crouch and Montoya suggest, the lack of transparency around costs leads to further competition, lack of trust and continued high prices, especially for those countries that may struggle most to pay for these data: ‘For example, countries are often led to believe that by joining an international assessment they will benefit from economies of scale. Yet why is it that the fees never seem to go down as the pool of participants grows?’ (Montoya & Crouch, 2019b).

Although the two blogs are only a small snapshot into the world of the ‘learning assessment markets’, the choice of language in both blogs is telling: there are mentions of the need to construct ‘consumer guides’, ‘efficiencies’ and the need to ‘provide more transparent price information’. Crouch and Montoya suggest that ‘the processes whereby consumers and producers interact is a black box’ and thus propose the creation of ‘physical marketplaces’ (Montoya & Crouch, 2019b):

Most of us like touching and feeling things we buy. If we are buying a bicycle or car, it is sensible to try it—even if we end up making the final purchase online. The learning assessment market should offer the same experience—a place where users, producers, and international organizations can meet and make sales pitches. (Montoya & Crouch, 2019b)

Global learning data are therefore not produced entirely on epistemic grounds; they are, as we have seen, a matter of political choices over time, as well as a ‘product’ of stark competition in the market of measurement, where data producers have to ‘make sales pitches’ to promote their measures over those of others. However ‘physical’ this market of measurement may be, there are limitations and visual warnings offered to ‘shoppers’, too (Fig. 1):

Fig. 1
A screenshot from the World Education Blog. The text on the left reads shopping for a learning assessment should not be like haggling for vegetables. The right side has a photograph of vegetable sellers.

Image from the World Education Blog ‘The Learning Assessment Market: pointers for countries’ (Montoya & Crouch, 2019b)

Moving away from the transnational space of the production of expertise, the next sections will focus on the case of the quality assurance market in higher education in Europe.

3 The Case of the Quality Assurance Market in HE in Europe

The aim of this section is to analyse the growth and complexity of Quality Assurance (QA) in higher education (HE) in Europe, as a way of understanding the multifaceted and continuously developing market of measuring and quality ‘assuring’ universities in Europe. Indeed, the rise of a complex epistemic infrastructure (Tichenor et al., 2022)—with new materialities and actors—has led to the development of intricate webs of education actors and data that have strengthened the emergence of a European education policy space. In fact, the latter is not an imagined space any longer, either to be embraced or resisted; it has become the officially announced and strategically drawn European Education Area, a single and unified EU policy arena, and thus a strategic area of interest that has to be ‘softly’ governed via a multiplicity of measures and agencies.

None of these developments are, of course, new. Since the turn of the century, a powerful device for the construction of the European education policy space has been the incessant generation of statistical data to monitor performance (Grek, 2016; Lawn, 2011; Lawn & Grek, 2012). The datafication of education policy (Grek et al., 2020) both accompanied and partly drove a fixation on notions of quality assurance and evaluation (Ozga et al., 2011). Indeed, recent decades have seen the notion of ‘quality’ become central to attempts to control and develop both public and private institutions, as is evident in the proliferation of terms such as ‘quality assurance’, ‘quality enhancement’, ‘audit’ and ‘quality monitoring’ (Jarvis, 2014). While industrialisation brought the idea of quality assurance to the fore, as the means by which to ensure that mass-produced goods could withstand an ‘objective’ quality test against a set of pre-determined criteria, after the 1980s and the rise of New Public Management, ‘quality’ acquired a double meaning. It now relates not only to the quality of products or services but also, crucially, represents a key criterion for judging how organisations are run. ‘Quality gurus’ emerge and quality assurance processes travel from organisation to organisation (Power, 2003). Quality must be measured quantitatively and at all times, and it represents the means through which organisations can be compared and become ‘known’ to citizens/consumers. In the case of transnational policy spaces and political projects, like the EU, quality and all its associated measurement processes, such as those of ‘quality assurance’, become a main mode of ‘soft’ governance (Lawn, 2011), operating through the setting of common benchmarks and standards and the promotion of constant self-regulation as a way to learn and to align oneself with international ‘best practice’.

Since the 1990s, this ‘soft governance’ turn has led to the creation and expansion of a European-level quality assurance market in higher education (Gornitzka & Stensaker, 2014). QA is often imagined as an instrument for greater internal mobility in Europe, while also advertising and guaranteeing the quality of European skilled labour and knowledge products, in line with European Union goals of becoming the world’s most advanced knowledge economy. In the following sections, I first examine shifts in HE quality assurance, in the form of standards, data and reports. Second, I explore the market of quality assurance actors involved in European QA and measurement processes, their interdependencies and their contestations; I examine European actors, such as ENQA and EQAR, but also the influence of global ones, such as the OECD. Finally, I reflect upon what these explorations reveal about how a market of QA measurement in Europe has evolved over the last two decades, what the position of the Bologna Process has been in these dynamics, and how QA has become a central feature and driver not only of the Europeanisation of HE per se, but also of the construction of a market of measurement as a whole.

3.1 Europeanisation as a Concept and as a Research Conundrum

As discussed extensively elsewhere (Lawn & Grek, 2012), a focus on QA, alongside the expansion in data production and use, and its capacity to flow across Europe (and beyond), illustrates a shift from merely using data to provide a ‘state optic for governing’ (Scott, 1998) to the fabrication of European education as a legible, governable policy space. In Europeanising Education, Martin Lawn and I describe the ways that policy actors positioned as ‘policy brokers’, that is, people located in some sense at the interface between the national and the European, ‘translate’ the meaning of national data into policy terms in the European arena, while at the same time continuously interpreting European developments in the national space. Adopting the term ‘brokers’ here, I do not intend to paint a picture of national–transnational exchanges in which policy brokers operate as frontier guards and members of European organisations act as carriers of a European policy agenda. Instead, I understand Europe to be fluid and changing, and itself swept by international pressures, simultaneously located in and produced by the global, the idea of the European and the national. In order to capture this constantly moving, liquid and undefined European education space, we start the analysis from slightly more stable ground: its past. Education policy activity in the European Union (EU) can historically be classified in several ways; for example, the Treaty of Rome (1957), the Single European Act (1987) and the Maastricht (1992), Amsterdam (1997) and Lisbon (2009) Treaties could be seen as marking five stages (1957–1987, 1987–1992, 1992–1997, 1997–2009 and 2009–) (Ollikainen, 1999; Shaw, 1999). The European Education Policy Space was not determined merely by the fairly stable geographical boundaries of a common market: as early as the 1960s, it became a shared project and a space of meaning, constructed around common cultural and educational values.
Indeed, from the 1960s to 1970s, the discourse of a common culture and shared histories was slowly being produced as a cluster of facts and myths about the European ‘imagined community’ rising from the ashes of a destructive Second World War. Education policymaking for the ‘people’s Europe’ took the forms of cultural cooperation, student mobility, harmonisation of qualification systems and vocational training (European Commission, 2006). It did not constitute a purely discursive construct, adding to the list of European myths. It was concretised and pursued through Community programmes, such as COMETT and ERASMUS, involving large numbers of people and travelling ideas (European Commission, 2006). Its impact was arguably limited in relation to the ways European education systems constructed their curricula and tools of governance; subsidiarity was the rule. However, regardless of its relatively limited effects, the project of a ‘people’s Europe’ had a clear ambition: to create a distinct European identity and culture—and to use these resources to enable the governing of a shared cultural and political space.

This brief reminder of the foundational characteristics of Europeanisation is important here for two reasons: first, it helps to throw into relief the defining events that turned the European education space from a rather idealistic project of cultural cohesion into a much sharper competitive reality; and second, it enables us to understand how, when and why the discourse of QA entered this space, and with what impact. For example, research reveals the many points of origin identified by national policy actors in relation to policy requirements that demand data collection—these may originate in Europe or in the wider world of the OECD, the United Nations Educational, Scientific and Cultural Organization (UNESCO) or the World Bank. Indeed, for the most part, the source of pressures and requirements does not seem to be of great concern. Instead, policy actors focus on ensuring successful outcomes, on producing ‘world-best’ education through the production and use of data: securing competitive performance is the language of high quality and standards. In the aftermath of a global pandemic, and the ‘protectionist’, primarily national, policy responses that ensued, there are even greater difficulties in identifying a distinctive European Education Policy Space, as policy actors interpret their brokering as a fusion of European and global influences that places pressure on systems to demonstrate success in terms of measurable outcomes. Such developments suggest that the ‘Europe’ of a collective project of shared trajectories, values and aspirations is less visible than in the past, and focus attention on the kind of space of governance to which the growth of data flows in Europe gives rise. Looked at in this way, we can see that the governing project of a ‘people’s Europe’ is slowly being turned into a project of individualisation—the production of a Europe of individuals, striving to accomplish the next set of goals, indicators and benchmarks.
This project is made possible by the existence of networks through which data may flow, and as I will show, through the competition of a range of measures and monitoring tools that connect individual student performance to national and transnational indicators of performance. Furthermore, the use of these particular technologies of governing signals a shift from the attempted fabrication of Europe through shared narratives and projects to its projection. By this I mean a shift from the production of Europe through the recording and transmission of its existing characteristics and capacities to the moulding of the future through QA processes that shape and project the individual and the nation forward into lifelong engagement with Europe as the most competitive knowledge economy in the world. It is within this conceptual context that we will turn to an examination of higher education and the impact of QA measures in shaping the field for at least two decades now.

3.2 Quality Assurance in Higher Education in Europe

Although the story of efforts for the convergence of higher education in Europe goes as far back as the inception of the European political project in the early 1970s, it was the Bologna Declaration of 1999 that instituted a process that fundamentally reshaped European higher education (Curaj et al., 2018; Enders & Westerheijden, 2014; Schriewer, 2009). While the precise objectives have evolved over time in connection with the work of the Bologna Follow-Up Group (BFUG) and, in particular, the Ministerial Conferences of members, the main goals of the process have concentrated on mobility between, and the compatibility of, higher education systems and the pursuit of quality in higher education (Bergan, 2019). In practical terms, the drive towards these objectives has included a focus on the structuring of systems in accordance with the three-cycle approach (Bachelor, Master’s, Doctorate); the creation of an EHEA Qualifications Framework; and the development of common standards and processes for QA (Bergan & Deca, 2018; Brøgger, 2019). This drive resulted in the announcement of an education space of enhanced mobility and competitiveness, the European Higher Education Area (EHEA) in 2010. Extending beyond the borders of the European Union, the EHEA’s 49 country members—joined by the European Commission and a range of stakeholder organisations—have all agreed to pursue the goals of the Bologna Process, altering their HE systems to facilitate the mobility of students and staff between EHEA members and to enhance the employability of graduates (Barrett, 2017).

These processes of reform have been accompanied by the creation of a wealth of academic and practitioner publications describing the evolution of Bologna and the EHEA, evaluating the strengths and weaknesses of the EHEA, and prescribing future directions for development. The mammoth edited volumes on higher education within the EHEA by Curaj et al. (2012, 2015, 2018) are a clear example of this body of literature. However, gaining analytical purchase on the transformations within HE since the initiation of the Bologna Process, requires stepping outside of an ‘insider’s perspective’ (Dale, 2007) and viewing the developments in their historical and political context. Corbett (2012), for example, highlights how European higher education cooperation and governance have changed with the onset of the Bologna process and the creation of the EHEA, with new European policy arenas being created where there had been relatively little European-level action. As both Dale (2007) and Corbett (2011) indicate, all this reflects wider transformations in the role of the university in the era of knowledge economies and, in particular, the notion of a Europe of Knowledge (Corbett, 2012; Dale, 2007). The rapid adoption of the push for Bologna reforms and the EHEA—with 45 countries involved by 2005 (Bergan, 2019)—speaks to their political resonance for this changing context.

One of the most significant dimensions of change associated with the Bologna Process has been the role and influence of the European Union in HE. The European Commission’s scope of action in education is restricted by the subsidiarity principle, but the Bologna Process has provided a means for the Commission to fulfil its supporting obligations and to work around such limitations (Brøgger, 2016; Capano & Piattoni, 2011). Despite being initially positioned outside the Bologna Process and the development of the EHEA, the European Commission has come to occupy a central role in driving the agenda (Dakowska, 2019; Robertson, 2008). Keeling describes, for example, how the European Commission began to dominate the higher education discourse in the 2000s, with the Commission’s involvement in the language politics around research policy and the Bologna Process contributing significantly to ‘the development of a widening pool of “common sense” understandings, roughly coherent lines of argument and “self-evident” statements of meaning about higher education in Europe’ (2006, p. 209). As Magalhães et al. (2012) explain, part of this process of European consolidation has been the ability of the European Commission to bring together, or to articulate (Veiga, 2019), multiple agendas and discourses in ways that expand the legitimacy of European-level action in the EHEA and in higher education more broadly. Of particular importance was the drawing together of the development of the EHEA with the economic agenda of the Lisbon Strategy, which sought ‘to make the Union the most competitive and dynamic knowledge economy in the world’ (Krejsler et al., 2012), and with the Modernisation Agenda for universities, inspired by the New Public Management school of thinking (Enders & Westerheijden, 2014).
Crucial here also is the ability of the Commission to allocate funding to support activities that align with its conception of what the EHEA should be, especially given the lack of overall EHEA funding (Bergan, 2019). As I will show, we observe an organic growth of actors and datasets in the field of quality assurance in Europe, boosted by the growth and expansion of quantification in policymaking; in this context, quality assurance does not represent merely a tool of governing higher education but has also become a market of measurement, with universities being proclaimed as carriers of ‘global Europe’ and the ‘European way of life’ (EC, 2022).

3.3 Quality, Data and Standards

This section explores major shifts that have occurred in the production of data for quality assurance and measurement in European higher education since the inception of the Bologna Process. I focus principally on three key documents and datasets—the Standards and Guidelines for Quality Assurance in the EHEA (ESG) (developed by ENQA, see below), the European Quality Assurance Register for Higher Education (EQAR), and the European Tertiary Education Register (ETER)—which together illustrate the increase in both the scope and the complexity of the changing QA architecture. Crucially, these three developments have helped create the foundations, and interconnections, that facilitate further diversification and expansion of the QA market of measures.

One of the first organisations to emerge in connection with the initial Bologna developments was the European Association for Quality Assurance in Higher Education (ENQA). ENQA is a stakeholder organisation whose membership consists principally of quality assurance agencies (QAAs). QAAs perform reviews of higher education institutions and programmes, making them key actors in higher education systems. In addition to serving as the main representative of this key constituency in European higher education, ENQA has also taken a lead on developing the underlying infrastructure of QA in Europe (Ala‐Vähälä & Saarinen, 2009). In 2003, the Bologna Ministerial Communique called on ENQA, alongside the European Students’ Union (ESU, previously ESIB), the European University Association (EUA) and the European Association of Institutions in Higher Education (EURASHE)—the other members of what came to be called the E4—to develop an agreed set of standards, procedures and guidelines on QA (E4, 2011). This followed a recognition in the Berlin Communique of the Bologna Process that the ‘quality of higher education has proven to be at the heart of the setting up of a European Higher Education Area’, with Ministers stressing the ‘need to develop mutually shared criteria and methodologies on quality assurance’ (p. 3).

The outcome of the ENQA-led process was the 2005 creation of the European Standards and Guidelines (ESG), which were adopted as part of the Bologna Process’s Bergen Communique and which were framed as a step towards greater consistency in QA across the EHEA and enhanced trust and qualification recognition between different contexts. The ESG outline standards and guidelines for different types of QA processes and the different actors involved in them. The standards set out broad and basic requirements in order for institutions and QAAs to be compliant with the ESG, such as that ‘institutions should have formal mechanisms for the approval, periodic review and monitoring of their programmes and awards’ (Standard 1.2). The guidelines provide ‘additional information about good practice and in some cases explain in more detail the meaning and importance of the standards’, although it was not ‘considered appropriate to include detailed “procedures”’ (p. 11) in the guidelines. The first part of the ESG focuses on internal QA processes within higher education institutions, the second on QA by external actors (i.e. QAAs) and the third on QAAs themselves. For external QA processes, for example, ESG compliance requires that ‘Any formal decisions made as a result of an external quality assurance activity should be based on explicit published criteria that are applied consistently’ (Standard 2.3), while for QAAs it is required, for instance, that ‘Agencies should have clear and explicit goals and objectives for their work, contained in a publicly available statement’ (Standard 3.5). Backed by the force of their collective acceptance by the Bologna Process members, these standards and guidelines make claims about how we can come to know the presence or absence of ‘quality’.

As suggested by the lack of specification for the ‘mechanisms’, ‘criteria’ or ‘goals and objectives’ mentioned in the three standards above, a key characteristic of the ESG is the openness and ambiguity of the standards and guidelines (Brøgger & Madsen, 2021; Gornitzka & Stensaker, 2014). In part, this appears to be a response to the tension present throughout European-level education initiatives between the drive to harmonise practices to facilitate integration and mobility, and the political and practical realities of Europe’s varied set of education systems. Therefore, while the ESG work towards the ‘establishment of a widely shared set of underpinning values, expectations and good practice in relation to quality and its assurance’, the report states that diversity and variety are ‘generally acknowledged as being one of the glories of Europe’ and correspondingly ‘sets its face against a narrow, prescriptive and highly formulated approach to standards’. Keeping a studied ambiguity in the formulation of the ESG likely serves as a means of ensuring its acceptability to a wider range of European states and education systems. Rather than a strict standardisation, this might be seen as ‘setting the outer borders within which there is scope for diversity’, as one actor in the space put it for the Bologna framework at large (SB int.). Such a description of the role and function of the ESG fits particularly well with the conceptualisation of this space as an epistemic infrastructure; that is, building the conditions and structures that, at a later stage and possibly by other actors, can be ‘filled’ with new inscriptions and procedures that will make the infrastructure intelligible and useful and grow it anew. Indeed, as another interviewee articulated, the balancing act of the ESG has been to have it ‘prescriptive enough in order to induce the change needed, but also general enough to have so many countries being able to work with it’ (CG int.).
Perhaps because of this breadth and ambiguity, the ESG have been one of the most successful harmonising elements of the Bologna Process (Bergan, 2019). Pointing to the transformative power of the ESG, the same interviewee commented, for instance, that QAAs can push reforms with governments by saying that ‘we have the standards, and we have all the colleagues in Europe that are doing it like this, and then we have to align’ (CG int.); peer pressure is therefore strong, and it is one of the most influential qualities of governing by data.

In 2015, the ESG were updated to reflect changes that had occurred with respect to other elements of Bologna, such as qualifications frameworks, as well as broader shifts, for example towards student-centred learning (ESG, 2015). While compliance is by no means universal, the initial success of the ESG encouraged this evolution and expansion in scope. The inclusion of a new item in the ESG, or indeed a changed interpretation, makes it more likely that members will adjust their systems to incorporate the new directions. As one interviewee put it, ‘people think that if they put something more in the ESGs that it has the chance to be really implemented tomorrow in those… 49 countries which are members’ (CG int.). In recent years, there have been plenty of prompts for further amendments to the ESG in connection with, for example, the popularisation of micro-credentials and the spread of digital learning approaches associated with the COVID-19 pandemic and other longer-term trends. While initially presented as a simple, technical instrument for QA practices, the ESG can be seen here to act as a governing instrument in higher education (Stensaker et al., 2010), with the potential for alterations to and expansions of the material infrastructure of the ESG to reflect new strategic choices and policy trends and, crucially, to induce corresponding changes in the education systems of member countries.

Further, the foundational nature of the ESG can be seen in the case of the European Quality Assurance Register (EQAR), which was created in 2008 to follow up on one of the recommendations of the initial ESG report. EQAR was the first legal entity to be created through the Bologna Process, and it functions, in some ways, as a guardian of the ESG. QAAs apply to be part of the register, and thus to be legitimated as trustworthy agents, and are only listed if they are judged to be compliant with the ESG by EQAR’s Register Committee (EQAR, 2020). Through this process, EQAR transforms ‘QA agencies in Europe into QA agencies of Europe’ (Hartmann, 2017, p. 319). Register decisions are made on the basis of external reviews of QAAs that are generally coordinated by ENQA, who, along with the rest of the E4, are founding members of EQAR. The existence and effective functioning of EQAR and, to some extent, ENQA depend, therefore, on the ESG. Part of the power of EQAR and ENQA, however, is their ability, emerging from their recognised responsibility for carrying out the above duties, to create procedures and systems for interpreting the ESG so as to decide on compliance. The way in which these internal procedures and systems operate has the potential to affect which QAAs are labelled as EQAR registered, with acceptance on the register opening doors to performing QA activities in different countries. It also affects which higher education institutions and programmes are recognised as being vetted by an EQAR-registered agency, which can influence, for example, how qualifications are recognised (or not) as students and graduates move between contexts.

Through its register processes and reporting procedures, EQAR has been a key driving force behind an impressive expansion in the epistemic infrastructure around QA in Europe. As well as the reports prepared for admission onto the register and periodic renewal, EQAR also requires reports whenever a registered agency adjusts its practices in a way that might have an impact on its compliance with the ESG. A major expansion in EQAR’s data flows and capabilities came in 2017 when EQAR launched the Database of External Quality Assurance Results (DEQAR). DEQAR collects and collates data not just on the QAAs that are part of the register but also, through the reports submitted by those QAAs, on the institutions and programmes that those QAAs have reviewed (EQAR, 2021). As of 1 June 2022, DEQAR contained nearly 74,000 reports on over 3000 institutions. The foundational structure of the ESG is, again, key here, as one interviewee described: ‘DEQAR of course is also very closely related to the ESG standards for higher education and I think you couldn’t expand it to another sector or copy it or replicate it into another sector without having a similar kind of agreed European standard available…. If you don’t have an agreed standard, then what is the meaning of being in a database, what does it stand for?’ (CT int.). The processes of harmonisation connected with the ESG, therefore, have allowed data produced across countries, QAAs and HE institutions to be transformed into European data and metrics.

The market relating to QA does not exist in isolation but is interlinked with other infrastructures and projects. Examining a third key development, the creation and growth of the European Tertiary Education Register (ETER), provides an example of such interlinkages and helps illustrate the increasing complexity of the market of quality measures in European higher education. ETER started as an academic project funded by the European Commission, which has also supported EQAR and ENQA. ETER sought to respond to an absence in the higher education data infrastructure in Europe, as one interviewee put it: ‘a core function of ETER is to provide a list of institutions. You might think it’s a stupid task, but such a list did not exist before ETER in Europe’ (BL int.). Although conceptually simple, the creation of the register requires important processes of categorisation, standardisation and commensuration, which have built on existing data standards while establishing new ones. The significance of simply having such a register available should not be underestimated: an underlying standardised way of recognising and recording institutions and their characteristics is extremely valuable for the potential interoperability of different higher education data systems in Europe. Crucially, the existence of such a dataset opens the door for more extensive analysis of the state of European education through the use of the student, graduate, financial and other data collected for each institution in the database.

Through the DEQAR Connect project, funded by the European Commission, the quality assurance and measurement infrastructures provided by ETER and DEQAR have been linked together. As an interviewee described, DEQAR uses ‘ETER as an underlying data source of basic institutional information’, noting further that EQAR ‘only added the quality assurance related information to it’ (CT int.). Working in combination with ETER’s infrastructure on institutions allows EQAR to now present information on, for example, the proportion of a country’s students that are studying at an institution that has been reviewed by an EQAR-registered agency. This represents a significant expansion of the data that EQAR can provide and also moves EQAR closer to dealing with higher education institutions rather than just QAAs. Furthermore, in addition to being connected with ETER, DEQAR data is now being integrated into the workflows of national recognition centres, which offer authoritative advice and guidance to higher education institutions on the recognition of qualifications and assessments (CT int.). This points to an important linking of national-level infrastructure for qualification recognition with European-level infrastructure concerning QA.

As well as being significant in their own right, these three developments are illustrative of the broader European-level proliferation of data infrastructure that supports the construction of a ‘market’ that measures quality in higher education (Gornitzka & Stensaker, 2014). Other spaces for discussions on QA have also been built, principally the European Quality Assurance Forum, which generates materialities in the form of reports, minutes, presentations and more. Furthermore, in 2012, Eurydice took charge of the Bologna implementation reports, as they came to be called, which have become a central vehicle for evaluating movement across EHEA countries on the core commitments of Bologna, including on QA and recognition. Significantly, Eurydice draws on data and insights from a range of actors in European higher education in order to compile the reports, including EQAR and ENQA, pointing to the significance of the interlinkages between actors, the second layer of this market of measurement, to which we will turn next.

3.4 The Market of Actors

No market could have expanded to the extent and complexity that quality assurance in European universities has over the last 20 years without the efforts of a range of key actors. Following on from the previous discussion, this section focuses on the ways the market of measurement in HE quality assurance has extended to include a range of organisations that are creating new interdependencies and alliances but also new conflicts over policy influence and direction.

One of the more established actors in the field is the aforementioned ENQA. It has seen its influence grow substantially during the last decade, moving from being one of the many stakeholders in the Bologna Process to a much more strategic and policy-oriented role. ENQA was established in 2000 as the European Network for Quality Assurance in Higher Education, only to be renamed as an ‘Association’ four years later (Ala‐Vähälä & Saarinen, 2009). Although its remit from its inception has been to ‘represent QAAs in the EHEA’, to ‘support them nationally’ and to ‘provide them with services and networking’, in recent years its role has expanded well beyond this remit. While it is primarily a stakeholder organisation, ENQA has developed a significant role in driving policy concerning QA and is trying to steer the field in new ways (Sarakinioti & Philippou, 2020). ENQA has played a key role in creating, updating and disseminating the ESG, as explained previously. However, according to its strategic plan 2021–2025, ENQA is also pursuing ‘knowledge-based development’ and exploring ‘new ways of quality assurance’ by becoming a forum for ‘…facilitating the discussion on any changes in higher education and its provision and the consequences these changes may entail’ (ENQA, 2021).

Such a broad strategic vision in terms of shaping the field has become a significant aspect of ENQA’s work. ENQA sees its role as a policy actor, strategically placed in close proximity to the European Commission: ‘ENQA is based in Brussels for a reason. So it’s mostly the director that is based there, who is joining different types of activities, meetings with the Commission’ (CG int.). ENQA derives its status from, on the one hand, its established connection with the BFUG through being a consultative member and, on the other, the sheer number of organisations it represents:

The weight of ENQA is being given by the members. So when you go to a table, when it’s about higher education policy, and then there you represent 55 quality assurance agency members which are compliant with the ESGs from 40 countries, and then you also represent 55 affiliates also from outside Europe, that also makes you an important network. (CG int.)

Indeed, the expansion of ENQA’s work and influence to other world regions has given it particular momentum. Not only does this increase networking, but also, crucially, it promotes European higher education’s standing as a global actor:

There are other networks of quality assurance agencies from all over the globe, African, Asian, United States, so the collaboration with those networks is important. So what we try to do is to learn from each other, but also our objective is to promote the European standards and guidelines, because of course we believe they are good. (CG int.)

A particularly revealing example of complex interdependencies and contestations in the QA market is the relationship of ENQA with its sister organisation, EQAR. Throughout our examination of the two organisations, there has often been the potential for confusion—not only by us as researchers but crucially also in the field itself—about the distinctions between the two organisations and their work (Huisman et al., 2012). This seems to spark an inclination to differentiate one’s own organisation from the other as a way of sustaining the need for the continued existence of both, especially in a field beset by complaints about duplication of effort and over-reliance on bureaucratic form-filling:

EQAR is not the one that is developing the policies or providing services to the members or representing them. They are just a register…. Of course, if for example there is a discussion on revising the ESGs of course they will be involved. But maybe you know that EQAR was founded by the E4. So ENQA is the founding member let’s say of EQAR. So they are our kids in a way. (CG int.)

EQAR actors, however, do not necessarily see themselves as ‘just a register’. They are also an organisation whose role has evolved and grown, such that EQAR can now suggest that it can and should influence policy on QA in more fundamental ways:

I would say that the role evolved over this, well, now nearly 15 years in two ways. On the one hand, let’s say, from the very beginning EQAR was a very technical and bureaucratic organisation….But then I think very soon EQAR also became involved as an organisation that, let’s say, informs the policymaking discussions in the Bologna Process, because of course the governments and other stakeholders were keen to also have EQAR there as an organisation that can give some input and share the expertise that we make and gather from this work of registering agencies, of reviewing which agencies comply with the ESG and so on. And that has become or that has grown little by little over the years and now also there is quite some work done on maintaining a knowledge base on our website, on analysing what is happening in quality assurance in Europe. (CT int.)

More interesting than the micro-disagreements over how the hierarchy or the dependencies among these organisations work, or the extent to which their missions overlap, is that the growth and expansion of the market of measures (and the agencies that produce them) is seen as ‘organic’. For EQAR, an example of this spiralling of work into different directions and branches is the establishment of DEQAR. On the one hand, DEQAR was described as an ‘obvious’ or ‘not that far-fetched idea’. On the other, however, it has been portrayed as ‘a major change of our role in these 13 years…[since] …now we are dealing with the level of higher education institutions by having a database of them and that’s of course quite a big difference for our work’ (CT int.). In other words, although EQAR’s primary role was to work with QAAs, the expansion brought about by DEQAR means that EQAR now has links not only with QAAs but also with European universities themselves. Such organic growth and expansion of QA activities is extraordinary and goes well beyond what the Bologna Process set out to achieve. ENQA and EQAR perform much more than the technical role of inspecting HE institutions on the basis of the ESG. Sitting at the BFUG table as experts in QA and representatives of QAAs, they contribute to shaping the future and strategic direction of the EHEA. Furthermore, the market of measurement remains intact since, while maintaining their networking function and their allyship with other QA organisations, they continue to preserve their own unique contribution and presence in the field.

A second key actor in the broader field of measuring and evaluating quality in European higher education is the Organisation for Economic Co-operation and Development (OECD). Although the OECD is best known in the field of education for the establishment and success of the Programme for International Student Assessment (PISA), less known but equally significant is its work in other areas, especially higher education. Examples of this work in the European space abound. The Labour Market Relevance and Outcomes of Higher Education project (OECD, 2022) is one such example; it receives substantial support from the European Commission, with the main participant nations being Austria, Hungary, Portugal and Slovenia. The project explores issues such as the emergence of ‘alternative credentials’ and the use of ‘big data’ to understand graduate skills and digitalisation in higher education. Previous project participants were Norway, Mexico and four US states. The global nature and reach of the OECD is a valuable resource in the efforts to establish the EHEA as a global player. The ability of the OECD to offer comparative data from other competitor world regions is one of the main reasons for its increasing involvement with issues of quality in HE. In its work to extend the comparisons and the evidence base beyond Europe, the OECD has made use of connections with the ETER project, as explained by an OECD interviewee:

We've been involved in that process for, I think, at least on the advisory board since the project’s inception… And part of what we’re doing at the moment is trying to develop the similar data source that draws on ETER, but also draws on other national data collections that are available for the non-EU/OECD countries.

Establishing such international comparisons and linking the European HE quality processes with those of other OECD countries is a key endeavour for both the European Commission and the OECD (Grek, 2014, 2016; Sorensen & Robertson, 2020). Correspondingly, this is a well-established collaborative relationship that has grown substantially over the years, as another interviewee shares: ‘We’re involved, they invite us to all their working meetings and likewise we invite them to ours’ (DC int.). Of course, it is not only the Commission that benefits from the OECD’s expertise. This is a two-way relationship that influences and is advantageous to both. The OECD benefits from the data the Commission generates as well as, crucially, from the funding available from the Commission for its work:

We actually use quite heavily the EU’s surveys, for example, the Labour Force Survey and, similarly, labour force surveys in the non-EU countries to assess a range of labour market outcomes of higher education…. So we run this policy survey and when we’re running that, we would obviously take into account work that the European Commission has done previously. We would consult with them on that to make sure that we’re not, first of all, duplicating the work, because obviously we all have to be efficient, and then also to make sure that what we’re producing makes sense from their perspective as well as from the perspective of our member countries. (GG int.)

In terms of the kind of collaboration and work that the OECD offers, some interviewees stressed the OECD’s independence and expert function, as compared with a more politicised Commission, while others emphasised the benefits of continued dialogue, with the Commission setting the strategic direction and the funding and the OECD responding to these policy priorities. Similar to other areas of education policy, the relationship between the OECD and the European Commission is a ‘symbiotic relationship’, reaping substantial benefits for both organisations:

The European Union is definitely a voice in our, you know, through the European countries that sit in the Group of National Experts. Certainly we would understand very well the priorities of the European Union. (GG int.)

There has evolved a division of labour and a symbiotic relationship between the Commission and the OECD over the last 15 years or so…They’re the people with the wallet! So, in a sense, we’re more likely to be working within the framework of problems and priorities that they’ve identified, so it might be the case that the Commission will say to us, we’re really concerned about digitalisation in higher education, in which case we would say, oh, well, we agree, we think that’s a really important topic and we could support you in a couple of different ways, here, we’ll give you a couple of examples of what we might do. But there, you see how it’s a dialogue. (TW int.)

Finally, as the above suggests, the European Commission is a powerful actor in European quality assurance and evaluation processes. Through its membership of the BFUG, but more importantly, through its provision of funding and its convening power (Brøgger, 2019; Cone & Brøgger, 2020; Dakowska, 2020), the Commission has been able to influence the education policy direction as a whole, both inside and outside Bologna, and, thus, has often been the driving force behind the building of QA infrastructure. The way in which the European Commission coordinates the higher education space is subtle, yet, over time, it is effective in generating change in actors. The ‘pull’ of the Commission—through its funding, networks, data and indicators, and dominant discourses—changes the field in which European higher education actors operate such that it is that bit more likely that their next step will be in the preferred policy direction of the Commission, resulting, with enough time, in substantial movement in that direction. Note that this does not suggest that the Commission drags other actors along, that actors cannot or do not move away from the Commission’s preferred policy direction, or that the Commission is entirely alone in trying to stack the odds in its favour. Instead, it accounts for how—by making it that bit easier to move with and towards the Commission, due in no small part to it being the ‘wallet’ in the field—the Commission can softly direct the evolution over time of the infrastructure around QA and measurement in European higher education.

To further its ambitions, the Commission is planning major developments to continue the evolution of the QA market of measures, including steps towards its regulation and management. The 2022 Communiqué, for instance, proposes the creation of a European Quality Assurance and Recognition System. This system will build a European space ‘where the quality of qualification is assured, the qualifications are digitised and recognised automatically across Europe, doing away with the bureaucracy that hinders mobility, access to further learning and training or entering the labour market’. It seems that, yet again, the discourse and practices of quality assurance are being put to work for the fabrication of the ideal common Europe, where universities do not just promote a ‘European way of life’ but also bring it to fruition. The practices around quality assurance and evaluation, therefore, do not simply represent technical processes by which mobility in Europe is facilitated. Instead, quality becomes a central governing device in an expansive and ever-changing data infrastructure, through which new strategic directions are drawn, new and old actors are interlinked and the construction of a market of measurement continues apace.

4 Discussion

Building on a rich set of documents and interview data, this chapter analysed how the production of expert knowledge for policy has given rise to a functional and ever-expanding market of measurement over the last two decades.

A first prominent characteristic of the market of measurement is the ‘organic’ growth of its data and processes, emerging as they did from opportunities perceived and seized by particular actors at particular moments, rather than being clearly tied to a pre-planned strategic progression. In both the case of the learning assessment markets and that of quality assurance in higher education in Europe, we observe the development of a market of measures through the balancing out of supply and demand, as well as through the initiatives of new actors as they saw opportunity in the field, both in terms of real returns and—perhaps primarily—as ways to establish an influential position in a rapidly growing field. While the creation of DEQAR and its linking together with ETER, for example, certainly fit within the broad strategic vision of the Bologna Process, they came about because actors sought to make use of their assets, i.e. the volumes of reports they had processed and the collaborations they had invested in. This finding chimes with literature that suggests the power of numbers to ‘acquire a life of their own’ (Fourcade, 2016).

Similarly, this organic growth of the market of measurement also points to the role of its multi-layered temporality: on the one hand, the foundational nature of learning assessments as the essential building blocks of any global learning data illustrates the potential for market development to have a sequential temporality, where the completion of one block allows for the building of the next. On the other hand, however, the regular, cyclical collection of national and international assessment data speaks to different rhythms of market change and operation; both sequential and cyclical change operate simultaneously, like clock cogs. To explain further, while parts of the market of measurement operate with a future orientation, others adopt cyclical and repetitive processes that establish a ritual and a way of folding new data into the infrastructure. It is through analysing these events as part of a dynamic market of measurement, rather than as discrete policy events, that these temporalities become visible.

In addition, the market of measurement creates competition and differentiation between actors. On the one hand, we examined the relationship between EQAR and ENQA, for example, and their struggles over the relative positions of the two in the QA market and the policy space that it helps to constitute. On the other hand, we also discussed how data producers do not construct uniform assessments, but prefer to differentiate their products geographically or substantively. Over time this has led to attempts at enhanced ‘market brand’ differentiation between data producers and IOs in order to more clearly delineate their respective positions in the market of measurement. One important aspect of the changing roles and positions of actors in this space is the new forms of expertise that the establishment and maintenance of these interlinkages and competitions require. The experts working in this field are no longer merely statisticians and data scientists. Increasingly, as we saw in this and previous chapters, what is needed is a new type of expert: expert brokers who can produce ‘sales pitches’ and can persuade others that their measurement product is better than the next one. These insights, namely on organic growth, temporality, and competition and differentiation, paint a picture of a fluid and significantly multi-polar space of expert knowledge production—an enhanced market of measurement that has, by now, grown too large to fail.