Introduction

Ratings and rankings (R&R) have assumed considerable importance in the (global) governance of education, especially of universities. Ratings related to the quality of research and teaching are increasingly used by national governments to allocate public funding where peer reviewers find “quality” and “excellence.” Similarly, (inter)national rankings are, more than ever, used to construct a sense of “scarcity of reputation” on a global scale (Brankovic et al. 2018), leading universities to invest considerably in their brands, driven by images of distinction to attract material and symbolic resources and talented individuals (Drori et al. 2016). Much of the literature has explored the impact of rankings on the policymaking level, on university strategic action, or on students’ choices (see Dill and Soo 2005; Clarke 2007; Bowman and Bastedo 2009; Hazelkorn 2011; Collins and Park 2016). Far less attention has been devoted to understanding consequences for disciplines and schools or other university subunits. Exceptional and important accounts include studies of the effects of rankings on law schools in the USA (Espeland and Sauder 2009, 2016) or on business schools (Wedlin 2006; Rasche et al. 2014). Rankings are often viewed as instruments that foster surveillance and normalization (Espeland and Sauder 2009), thus changing the perceptions of legal education due to the internalization of forms of control and the imposition of a process of normalization based on comparisons of performance (Espeland and Sauder 2016). Similarly, rankings “discipline” business schools by enhancing the visibility of individuals’ performances, by defining “normal” behavior, and by shaping how people understand themselves and the world around them (Rasche et al. 2014). Wedlin (2006) uncovers rankings as classification mechanisms that shape and structure fields and establish their boundaries. How the key actors of other multidisciplinary fields, such as education, strategically behave under ratings and rankings and are impacted by them has not been researched in depth; we pursue this here.

Contrary to more established, prestigious academic fields such as law or, more recently, business studies, education has long struggled to bolster its legitimacy within the higher education system, especially in research universities and in English-speaking countries (see, e.g., Lagemann 2000). This challenge often resulted from the incorporation of teacher education into the university. Further, its variable traditions as a multidisciplinary field that focuses on the study of educational processes and practice via numerous, sometimes conflicting, disciplinary lenses make it more challenging to grasp holistically (Lawn and Furlong 2009; McCulloch 2017). Tensions between the academic field and the field of practice (Biesta 2011) have called into question not only the scientific legitimacy of the field but also the status of education as an academic discipline within the higher education system and universities themselves (Furlong 2013). Given the rise of “ranking regimes” (Gonzales and Núñez 2014) and “performative accountability” (Oancea 2008) in higher education, we investigate here how such developments have affected the behavior of university subunits, focusing on 75 Schools of Education (SoE) in the UK. The UK is a particularly insightful case because it has, since the mid-1980s, developed an encompassing system of evaluation that has in turn generated much of the power of ratings and rankings (Marques et al. 2017).

We examine the construction of the institutional environment in which Schools of Education are embedded and their evolving strategic behavior in competing for symbolic resources and legitimacy within the organizational field. Concretely, how have third parties intensified competition, linking SoE as competitors within their institutional environment, via now-ubiquitous ratings and rankings? While research ratings are produced by the Research Excellence Framework (REF)—previously: Research Assessment Exercise (RAE)—under the jurisdiction of the Higher Education Funding Councils (HEFCs), research rankings are produced by media organizations, such as Times Higher Education or The Guardian, for their own profit.

Firstly, we conceptualize the conversion process of ratings into rankings as an influential change process that has directly and indirectly affected the institutional environment of higher education (Meyer and Rowan 1977; Scott 1992; Meyer et al. 2007) in which Schools of Education are embedded. To understand the strategic behavior of university subunits, we conceptualize Schools of Education as “organizational strategic actors” (Krücken and Meier 2006; Ramirez 2013; Seeber et al. 2015) that compete, not only for material resources, but especially for symbolic resources, such as reputation and legitimacy. They do so internally within their specific organizational structures, vying for status in disciplinarily stratified organizations, and externally within the stratified UK higher education system as a whole.

Secondly, we discuss the study's data and methods. We conducted 22 expert interviews with members of the field of educational research in the UK. Moreover, we analyze the “organizational vocabulary” (Meyer and Rowan 1977) and the uses made of ratings and rankings (R&R) results by the majority of UK SoE (n = 75) via their websites.

Thirdly, we reconstruct the development of the highly competitive institutional environment of SoE by charting the introduction and institutionalization of the rating system of the UK’s research evaluation system, and the role of media organizations in (re)framing competition and driving its marketization. We then show how R&R are used differentially by SoE according to their position in the Times Higher Education Research Intensity 2014 GPA rank order. We next highlight how such a competitive institutional environment is reshaped by R&R that are utilized by the SoE to bolster their inter-organizational and intra-organizational competitive advantage. R&R also trigger strategic behavior that is visible in several effects, ranging from changes in research management structures to the establishment of new internal practices. Finally, we discuss the results and their implications.

Institutional environment and strategic actorhood in higher education: a neo-institutional perspective

Anchoring our endeavor theoretically in neo-institutional thinking, we conceptualize the conversion of research evaluation ratings into rankings as a crucial shift in the institutional environment of contemporary higher education. In the case of UK-based Schools of Education, we investigate the rising influence and impact of R&R. As defined by Scott (1992), institutional environments are characterized “by the elaboration of rules and requirements to which individual organizations must conform in order to receive legitimacy and support” (p. 132). Following the seminal work of Power (1997) that charted the rise of an “audit society,” in which accountability and evaluation have become ever more ubiquitous, numerous variations of this argument have explained change in the governance of higher education systems. Notions of “performative accountability” (Oancea 2008), “ranking regimes” (Gonzales and Núñez 2014), or “governing by numbers” (Ball 2017) all identify the worldwide transformation of higher education into a more “accountable” sector, whose outputs are explicitly measured and evaluated through numerous forms of R&R.

The concept of “organizational field” has been used in institutional theory to understand the relationship between institutional environments and organizations (Aldrich and Ruef 2006; Scott 2013). Rather than analyzing one individual organization in relation to its institutional environment, the concept of organizational field takes as its unit of analysis the totality of relevant actors (DiMaggio and Powell 1991). In this context, organizational fields can be defined as a group of organizations—embedded in the same institutional framework (cultural-cognitive blueprints, norms, and rules and regulations)—that compete for the same resources and legitimacy (DiMaggio and Powell 1991; Scott 1992; Wedlin 2006; Brankovic 2018). Resources are not only material assets, such as the research funding distributed on the basis of RAE/REF ratings after each round of evaluation, but also symbolic assets, such as reputation or prestige, whose distribution is increasingly determined by rankings (Bastedo and Bowman 2010). But, as DiMaggio and Powell (1991) point out, every organization must take into account other organizations, because they compete not only for resources within their environment, but also for political power and institutional legitimacy.

We extend such arguments to UK Schools of Education, as subunits within the complex organizational structures of universities, that must compete both for inter-organizational and intra-organizational legitimacy. In so doing, diverse R&R provide measures and details of performance and reputation at various levels. Therefore, we look at the production of ratings by the UK higher education funding bodies and the production of rankings by media organizations as the “third parties that set the framing competition” (Hasse and Krücken 2013: 183). Such R&R link Schools of Education, inter-organizationally and intra-organizationally, in a competition for both material and symbolic resources and legitimacy.

For instance, in her examination of business school rankings, Wedlin (2006) shows how rankings can be perceived as arenas for boundary-work in organizational fields. She conceptualizes rankings as classification mechanisms that contribute to building perceptions of which organizations belong to the field and which do not—uncovering how field boundaries are subject to struggle and conflict. Thus, rankings as classification mechanisms are influential in field formation as they not only differentiate but also stratify, identifying (non-)members and creating models that business schools attempt to emulate. Indeed, in their foundational text of sociological neo-institutionalism, Meyer and Rowan (1977) argue that institutional environments lead to homogenization mirrored in the formal structures of organizations. Both the labels in organizational charts and the organizational vocabulary used in official documents are sound indicators of homogenization within a field. Higher education scholarship has evidenced signs of homogenization among universities (see Hüther and Krücken 2016), including in the vocabulary usually found in mission statements (Kosmützky and Krücken 2015) or welcome addresses (Huisman and Mampaey 2015).

Nevertheless, such accounts have shown not only homogenization but also differentiation. In fact, the adoption of the same vocabulary (isonymism) does not necessarily mean that organizations implement the same practices (isopraxism) or imply isomorphism in structural form (Erlingsdóttir and Lindberg 2005). Such aspects highlight the dynamic nature of competitive organizational fields (Wedlin 2006; Brankovic et al. 2018) and also show how universities are increasingly considered organizational actors with the “image of an integrated, goal-oriented entity that is deliberately choosing its own actions and that can thus be held responsible for what it does” (Krücken and Meier 2006: 241). While inter-organizational stratification between universities produced by rankings has received considerable attention (Bloch et al. 2018), intra-organizational segmentation among university subunits has yet to be studied comprehensively. Recent studies have shed light on how R&R produce differences in resources and status among universities’ subunits and trigger strategic behavior (Cantwell and Taylor 2013; Rosinger et al. 2016). Therefore, looking at the case of UK Schools of Education, we consider the strategic behavior of university subunits, which deserves further analysis.

Previous research on the institutionalization of the research evaluation system in the UK demonstrated how the ideas of “quality,” “excellence,” and “impact” have been—gradually, but incontrovertibly—embedded in the institutional environment of UK higher education, including in the multidisciplinary field of education (Marques et al. 2017; Zapp et al. 2018). Schofield et al. (2013) find that older UK universities tend to stress the international dimension and the caliber of their staff, while the younger ones rely more on regional familiarity and student experience to mark their qualities. Moreover, while highly reputed universities have long enjoyed high status and can thus more easily build an international brand of “excellence” in a digital age, less well-reputed universities must struggle and tend to have more national, local, and intra-institutional brand orientations, even when they share the same high valuation of “excellence”; this reveals global tendencies of homogenization alongside national organizational differentiation (Mampaey and Huisman 2016). We extend such arguments to UK Schools of Education, uncovering how universities’ disciplinary-based organizational subunits embedded in a stratified institutional environment present themselves to the world. We expect to find both homogenization and differentiation as the scarce good of reputation is (re)distributed within disciplinary hierarchies in universities and within a highly stratified higher education system.

Methods and data

For numerous reasons, the UK higher education system provides an important case study. Its universities still enjoy strong institutional autonomy (Shattock 2012) within a highly competitive environment. Its stratification reflects a high level of marketization, exemplifying the growing convergence towards a market-oriented model (Dobbins and Knill 2014), and it developed the first and strongest of the research evaluation systems that have since spread across Europe (Marques et al. 2017; Zapp et al. 2018). Anchoring our study in neo-institutional theorizing and examining the specific case of UK Schools of Education, our analysis is guided by two questions: (1) What influence and impact do ratings and rankings have in framing a competitive institutional environment for UK Schools of Education? (2) How do Schools of Education use ratings and rankings to bolster their competitive advantage, and what kinds of strategic behavior do such ratings and rankings trigger?

We conducted 22 in-depth expert interviews with members of the field of educational research. While one important interview was conducted with a member of a UK Higher Education Funding Council, the remaining 21 interviews were conducted with academics who assumed boundary-spanning activities within schools and universities. Our purposive sample was selected based upon two main criteria: (a) seniority in the field and (b) the boundary-spanning nature of the interviewee’s career. The first criterion required field members who had been subjected to evaluation in at least two research assessment exercises and who could therefore provide information about the long-lasting effects of the research evaluation system in the Schools in which they have developed, or currently develop, their professional activities. The second criterion relates to the boundary-spanning roles that academics embody (Tushman and Scanlan 1981), leading us to interview not only those directly impacted by the funding instrument but also those involved in peer review or management positions within the field, including participation in RAE/REF panels as peer reviewers, directors of research, and directors of departments or schools. Because we are interested in exploring the formation of the competitive institutional environment in which SoE are embedded, the questions addressed the experience of the field members over their career trajectories, often in different units and organizations. These interviews were transcribed and analyzed using MAXQDA, following the phases of thematic analysis according to Braun and Clarke (2006). Four major themes were created: institutional environment (67 codes), ratings (86 codes), rankings (23 codes), and strategic behavior (75 codes). Each interview is anonymously coded as field member: FM1 to FM21 for academics and FM22 for the staff member of the Higher Education Funding Council.

To complement the interviews, which provided personal retrospective data, we analyzed the websites of the 75 UK Schools of Education that participated in REF 2014. We conducted content analysis of their use of REF ratings, REF-based media rankings, and all vocabulary used to describe their research and/or organizational structures (usually found on the front page, the research page, and/or the “About Us” page). For each School, we collected such information and analyzed the data using MAXQDA. Here, four major themes were created: ratings (91 codes) for explicit references to any form of rating, rankings (59 codes) for explicit references to any type of ranking, where (48 codes) to understand where Schools of Education declare their results, and organizational vocabulary (194 codes) to understand the ways that Schools of Education present themselves and their research to the world. To ensure reliability, two members of our research team applied the 392 codes to our dataset. While our initial aim was to look only for the use of REF ratings and REF-based media rankings, we soon perceived that several Schools made explicit references to different forms of national ratings and national and global rankings, combining results to bolster their profile. Thus, we decided to include these as subcodes in the coding matrix. This initial empirical finding suggested the need to chart differential usage and to analyze the conversion process of ratings into rankings.

As mentioned before, the UK higher education system is well known for its strong and relatively stable prestige and status distinctions (Scott 2001). Nevertheless, recent studies have challenged such propositions, for instance, through the study of the changing student body (Tight 2007) or the presence of a wide range of variant or alternative models of the university—not only the globally recognized Oxbridge (Tight 2009)—as the UK shifted from system differentiation towards institutional diversification (Scott 2009). Boliver’s (2015) results confirm the increasingly blurred boundaries within the UK higher education system, which, she argues, nevertheless exhibits four distinctive clusters based on analysis of research income, organization size, percentage of postgraduates, and RAE 2008 scores: a stark division between older and newer universities is still evident, as is the consolidated position of Oxford and Cambridge as the most elite tier among older universities; the difference in teaching between older and newer universities is less pronounced (indicating that newer universities affirm their mission as teaching-led universities); the Russell Group universities do not form an elite group distinct from their older counterparts; and, finally, new universities form a fourth cluster with fewer resources, attended by less socioeconomically advantaged students. Such studies confirm the maintained binary division between old and new universities, while at the same time emphasizing the institutional diversification among research-led and teaching-focused universities.

Our sample is composed of roughly half older and half newer universities (those organizations whose status as university was conferred by the Further and Higher Education Act 1992 or later). Analyzing their position in the Times Higher Education Research Intensity 2014 GPA rank order, we evaluate their organizational vocabularies and their uses of R&R to uncover patterns of homogenization and differentiation among 75 SoE. We next turn to the results.
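To make this analytic step concrete, the following is a minimal, purely illustrative sketch of the kind of cross-tabulation that underlies Fig. 1, relating coded website vocabulary to league-table position. The data, column names, and rank bands are hypothetical; the study itself coded websites in MAXQDA rather than with the workflow shown here.

```python
# Illustrative sketch only: cross-tabulating coded website vocabulary against
# league-table position, with hypothetical data. The study coded websites in
# MAXQDA; the names and figures below are assumptions for illustration.
import pandas as pd

# Hypothetical coded data: one row per School of Education (SoE)
coded = pd.DataFrame({
    "soe": ["A", "B", "C", "D"],
    "rank_2014": [3, 18, 42, 71],                   # position in the 2014 GPA rank order
    "uses_leader": [True, True, False, True],       # website mentions "leader"
    "uses_excellence": [True, False, True, True],   # website mentions "excellence"
})

# Group SoE into broad bands of the rank order (1-20, 21-40, ...)
coded["band"] = pd.cut(coded["rank_2014"], bins=[0, 20, 40, 60, 80],
                       labels=["1-20", "21-40", "41-60", "61-80"])

# Share of SoE in each band whose website uses each term
summary = coded.groupby("band", observed=False)[["uses_leader", "uses_excellence"]].mean()
print(summary)
```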

Strategic behavior among UK Schools of Education within a competitive and stratified institutional environment

Here, we show the results related to the shaping of the competitive and stratified institutional environment for UK universities and the resulting shifts in strategic behavior of Schools of Education. The first part is based on the expert interviews, while the second also derives findings from the website analysis of the uses of ratings and rankings by the 75 SoE.

Shaping the competitive institutional environment for Schools of Education

Within UK higher education, the interviewees were unanimous in flagging two decisive moments marking the intensification of competition. The first relates to the turning point in the funding arrangements of the higher education sector; the second concerns the introduction of a research evaluation system that has continuously evolved, becoming stronger and more formalized. The Further and Higher Education Act (1992) replaced the previous UK-wide funding body, the Universities Funding Council, and created bodies to fund higher education in England and Wales, with impact on the funding arrangements in Scotland and in Northern Ireland. Moreover, the legislation ended the binary division between universities and polytechnics by granting the latter university status. The second turning point refers to the introduction of the research evaluation system and its research rating system designed to distribute research funding according to criteria and indicators of quality judged by peer review (for a comprehensive study of the evolution of the research evaluation system, see Marques et al. 2017). Its genesis can be traced to 1986, under the umbrella of the Universities Funding Council, but the first UK-wide exercise was conducted in 1992, under the jurisdiction of the funding bodies in the UK’s four nations. Previously, funding had been allocated in block grants solely to universities, without distinguishing funds for research from funds for teaching.

The institutionalization of this research evaluation system and the inclusion of new organizations (new universities) as competitors for research funding are understood as critical junctures, the genesis of rising competition within UK higher education as an institutional environment in which research is increasingly viewed as a commodity:

“…suddenly it introduced a competitive element in terms of relating funding to outputs of different kinds… it introduced a whole sector of new practices related to the judgment of quality related to indicators or measures. This reduction of quality to a 1, 2, 3, 4 measurement framework. It was very alien. But had an enormous impact in the way people thought about their work, which intensified over the different RAE exercises. And it explains fundamentally a translation of research into a commodity. It was a process of commodification. Where the research was increasingly judged not in terms of intrinsic worth, but in terms of its performative measures and its income generation. So, during this period, from the 1990s, income generation was becoming increasingly important, and the research was one of the ways income has been generated” (FM10).

Throughout the institutionalization of the exercise (RAE 1996, RAE 2001, RAE 2008, and REF 2014), numerous changes restructured the system to formalize and standardize procedures among panels and between disciplines or units of assessment (see Marques et al. 2017). Two important moments in the intensification of competition within the field are the changes in the rating scale and the government’s decision, in 2003, to concentrate the allocation of funding in the top-rated universities, exacerbating stratification. In RAE 1996 and RAE 2001, the research evaluation system allocated research funding based on a seven-point rating scale (1, 2, 3b, 3a, 4, 5, 5*), attributing to each department an overall profile. From RAE 2008 onwards, the rating is distributed in percentages for each point (e.g., 3 = 25%), based on a rating scale that ranges from 0 to 4: “unclassified” (below the standard of nationally recognized work), 1* as “nationally recognized,” 2* as “internationally recognized,” 3* as “internationally excellent,” and 4* as “world-leading” (REF 2014). Before 2003, departments that achieved a 3a were awarded funding, while in the following years only departments rated as 4 (RAE 2001) and 3 (RAE 2008 and REF 2014) were granted research funding. This pressured universities to make sure that they reached at least the rating that would secure the allocation of some funding:

“Unfortunately, the government said we had to move to only rewarding the very best. That didn’t actually shift much money around, and it had the drawback of telling people that what we classified as two-star research wasn’t worth funding. And that’s very unfortunate... I would be very proud if I was doing two-star research, you are still doing stuff that moves the discipline forward, which is quite satisfactory. However, according to the government’s decision you concentrate more money on the best” (FM22).

“The idea is that originally, you know, research was rated as one, two, three, four, five, and even two-rated institutions got some money. And what’s been happening over the past 20-30 years is that this funding has become more and more selective. So now only the very, very top institutions are getting any REF-funded money” (FM16).

A second development was the introduction of a transparency policy in 2001 to ensure that RAE/REF submissions and results are available to the wider public, which led to the use of rating results by media organizations. While Higher Education Funding Councils (HEFCs) in the UK produce ratings to inform and legitimate the allocation of funds, media organizations produce rankings to make profits, often through sales of newspapers, magazines, guidebooks, and online sources. Also, since 2001, the Times Higher Education has created RAE league tables (for RAE 2001) and “tables of excellence” (for RAE 2008 and for REF 2014), contributing to the mediatization of these scientific evaluations. In the RAE 2001 league table, only universities were ranked. In RAE 2008, not only universities themselves, but also individual “Units of Assessment” were ranked, through the calculation of a grade point average (GPA) for each unit, department, and, finally, university. In REF 2014, the same rationale was applied, with the addition of a new category called “research power” to break ties between universities with the same GPA. This figure is calculated by multiplying a unit’s GPA by the total number of full-time equivalent (FTE) staff submitted to the REF. Similarly, The Guardian newspaper has published RAE/REF rankings since 2001. For the last exercise, The Guardian used the “power rankings,” created by Research Fortnight, to determine the total funding allocation for each organization.
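To illustrate the arithmetic behind these tables, the following is a minimal sketch with hypothetical figures: a quality profile (the percentage of a submission rated at each star level) is converted into a GPA, which is then multiplied by the submitted FTE staff to obtain the “research power” tie-breaker. The profile and FTE values below are invented for illustration.

```python
# A minimal sketch with hypothetical figures: deriving a GPA from a REF quality
# profile and the "research power" tie-breaker (GPA x submitted FTE staff).
# The profile percentages and FTE below are invented for illustration.

def gpa(profile: dict) -> float:
    """GPA = sum over star levels of (star level x share of outputs at that level)."""
    return sum(star * (pct / 100) for star, pct in profile.items())

# Hypothetical quality profile: percentage of the submission at each star level (0-4)
profile = {4: 30, 3: 45, 2: 20, 1: 5, 0: 0}
fte_submitted = 40  # hypothetical full-time equivalent staff submitted

unit_gpa = gpa(profile)                     # 4*0.30 + 3*0.45 + 2*0.20 + 1*0.05 = 3.00
research_power = unit_gpa * fte_submitted   # 3.00 * 40 = 120.0

print(f"GPA: {unit_gpa:.2f}; research power: {research_power:.1f}")
```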

The gradual extension of media companies’ scrutiny to rank not only universities’ performance but also that of departments and schools can be understood as one of the “shaping forces” that exacerbate the competitiveness of the environment in which universities and schools are embedded:

“But be aware that big tables are constructed by the journalists. Now, it’s not our job, I am afraid, to make it easy for anyone in the league tables to win… I think in practice there’s a project system as well and it is more extreme in most ways than the system we run. All people have got to do is publish ideally four, it might produce less, outputs. If you publish four decent outputs, there is no pressure doing anything else” (FM22).

“Oh, enormously, enormously [media rankings on the intensification of competition]. What happens is, it makes these sorts of rankings dominate everything, because you’ve got these rankings out there. So, I think it’s had a huge impact” (FM17).

Competition is explained less by the anticipation of consumers’ needs than by the “third parties that set the framing competition, thereby linking potential competitors to each other” (Hasse and Krücken 2013: 183). Here, we identify peer review panels and the Higher Education Funding Councils, which produce ratings, and media organizations, which produce rankings, as the third parties that link Schools of Education as direct competitors in an organizational field. We next turn to the strategic behavior through which Schools of Education seek to gain resources and achieve legitimacy within the field. How do SoE react within such an increasingly competitive institutional environment?

All leaders? Institutional effects of R&R and the strategic behavior of Schools of Education

Presenting our findings, we analyze here the strategic behavior of Schools of Education, examining particularly the “organizational vocabulary” that they use to describe their activities and their use of diverse ratings and rankings, to show how they compete for material and symbolic resources, such as reputation and legitimacy, within the field. We complement such results with the interviews. While we identify isonymism and, to a certain extent, isopraxism, we also distinguish important differences between the 75 SoE that participated in the last REF. Figure 1 shows the organizational vocabulary used among UK Schools of Education, taking into consideration their position in the Times Higher Education REF 2014: Subject ranking on intensity – 2014 GPA rank order, which is produced based on the results of the REF ratings. The top 20 is composed solely of older universities, and only four new universities are within the top 40. The lower positions, by contrast, are filled only by new universities.

Fig. 1 Organizational vocabulary among UK Schools of Education according to their position in the Times Higher Education REF 2014: Subject ranking on intensity – 2014 GPA rank order (n = 75)

Despite such stratification of the SoE, the results show that no matter the position in the league tables, SoE make use of terms of singularity to describe their organizational capacity in the competitive environment in which they are embedded. Therefore, designations such as “leader” (52%), “excellence” (44%), “quality” (35%), “reputation” (32%), “impact” (16%), “innovative” (16%), “top” (16%), “strong” (13%), or “thriving” (8%) can be found as important designators used by SoE ranging from the highest to the lowest positions. This result shows isonymism—adoption of the same organizational vocabulary—among SoE in their attempts to gain symbolic resources. Even with the common use of certain adjectives to show their “marks of distinction” and prestige, there is variation in their use. SoE that occupy the highest positions in the table place special emphasis on affirming their position as “leaders,” their “reputation,” their classification in the table as “top” organizations, or their status as “world-class” schools. In contrast, SoE positioned in the lowest ranks rather place strong emphasis on the “quality” and “excellence” of their activities, especially in relationship to teaching, and the “thriving” status of their work. Such results show that while those in the highest positions make these references to maintain their position, those in the lowest refer to their developmental trajectory (on the move) and future status. In fact, interviewees stated that post-1992 universities understand the evaluation as a way to improve their standing, to prove that they are “REF-able,” and ultimately, to increase their inter-organizational legitimation in the field as research providers within a highly stratified higher education system:

“They are pretty much all very enthusiastic about the system because they all manage to get reputation. They all got some area, where something has happened, they may not earn much money, but at universities like that the research money is a drop in the ocean, compared to the student money. They are not in the research game for money. They are in it for reputation” (FM22).

Another important result relates to the homogenization in the use of words such as “excellence,” “impact,” and “quality” that are so entrenched in the REF vocabulary. Despite variations in use of certain terms by SoE, “excellence,” “impact,” and “quality” are designators that are evenly used by SoE, no matter their position in the table, reflecting the adherence of SoE to the research evaluation system and its considerable and growing impact on UK higher education since the mid-1980s. Figure 2 provides a clear picture of how “marks of distinction” are translated into concrete results and usages. Overall, 68% of our sample (51 SoE) makes explicit reference to some form of R&R, while the rest make no reference to them. Moreover, 37% of those SoE that do make reference to R&R opt to refer to both forms. We observe that no matter the position in the league table, references to both ratings and rankings are found.

Fig. 2 References to R&R according to the position of Schools of Education in the Times Higher Education REF 2014: Subject ranking on intensity – 2014 GPA rank order (n = 75)

Most importantly, among those SoE that mention R&R, the vast majority behave similarly in using the REF definitions of the starred levels for the outputs produced (88%): “nationally recognized,” “internationally recognized,” “internationally excellent,” and “world-leading” are used far more often than other references, such as the scale itself (1*, 2*, 3*, 4*) (20%), “impact” (33%), or the vitality and sustainability of the unit under the “environment” criterion (22%):

“This gives our Unit of Assessment a Grade Point Average (GPA) of 3.3 which means that we are ranked 5th in the field of education nationally and ranked joint 1st in the UK for world-leading research impact” (Durham, 1–20).

“Our educational research was rated highly in the UK-wide Research Excellence Framework (REF), with 76% of our research rated as internationally recognised, internationally excellent, or world-leading. Overall, Edge Hill U. was listed as the biggest improver in the league table published in The Times (+33 places)” (Edge Hill, 40-60).

Yet simultaneously with such homogenization, important differences also deserve mention. One important difference is that the SoE ranked in the highest positions make explicit references both to REF-based media rankings (85%) and to the REF rating (80%), which means that almost every single SoE placed in the first 20 places shows its “mark of distinction” in the research evaluation system either by stating its position in the ranking, its rating, or both. A second key difference is that other forms of ratings, such as those of the Office for Standards in Education (OFSTED), related to the quality of teacher education provision, are mentioned by organizations across the board, except by those SoE ranked between the 20th and 40th positions. This shows that SoE in the highest but also in the lowest positions strategically show their “marks of distinction” in both teaching and research activities, while others only identify their achievements in research. A third noteworthy difference is related to the use of rankings. Seven out of nine rankings referred to were produced by media organizations with a national market and scope, such as The Times Good University Guide (17%), the Complete University Guide (14%), the National Student Survey (11%), the Guardian University Guide (8%), the Russell Group Ranking (3%), and the Good Teacher Training Guide (3%). Only two global rankings were mentioned: the QS World University Rankings (17%) and The World Ranking (6%). This shows the contextualized nature of education as a nation-oriented discipline and the organizational field relationships of SoE within their national or regional scope of action. Moreover, SoE placed in the highest positions use more comprehensive international or national rankings, while those in middling or lowest positions mention rankings related to teaching activities. The emphasis placed on certain rankings can be seen as an indicator of an SoE’s orientation towards more teaching-led or research-led, and more international or national, profiles: “Of course the REF ranking is important, but our aim is the QS ranking” (FM15).

As we have seen, both forms (R&R) play an important role in defining the image(s) of SoE, either to maintain “marks of distinction” or, in the case of new universities, to strive for higher marks. But it is not only newer universities that struggle for legitimacy in comparison to their older counterparts. The multidisciplinary field of education must also compete for intra-organizational resources and reputation in comparison to other schools and units of assessment within the same university. Moreover, several interviewees reported that being highly rated/ranked helps SoE to gain institutional legitimacy (FM3, FM19, FM20) within their university:

“…and we came X (within the first 20 places), which was, for us, very low, very low... The majority of the departments of X were on the top ten or top five, on top” (FM17).

“Education departments are not well regarded in universities. Education is a field that has a lot of academic snobbery within and across universities. Quite often, universities are looking to close their Education departments and being highly ranked has made a difference. I’ve found that in X, I’ve found that here and certainly here in the university I’ve found that all sorts of doors are opened... So, that’s really, really helpful” (FM3).

Ratings and rankings “…are very sticky results. They are very tenacious, since they stay with you for a very long time. Even after the next exercise…” (FM2). The desire to remain in or to achieve the top positions is perceived as something that creates tension and triggers strategic behavior among Schools of Education. Such strategies can range from internal practices in the management of research production to more profound changes to the overall structures of departments regarding research management. Regarding the latter, despite the fact that the RAE/REF did not directly trigger major restructuring, it has certainly influenced the arguments for re-organizing structures, such as separating teaching from research, setting up new structures entirely dedicated to research (management), or attracting research income to sustain jobs (FM4, FM20):

“What has been built was a really imaginative and potentially transformative structure to envision what education can look like within the university… that does lead to significant resourcing in order to perform to its potential within the time and the space of the next Research Excellence Framework” (FM20).

Regarding informal practices, members of the field identified the set-up of internal peer review assessments, including a more focused strategy devised to help colleagues’ career progression through co-writing processes as well as coaching (FM3, FM14, FM16, FM21). Perhaps most interesting is a convergence of results about the set-up of internal evaluations in which senior scholars read and rate the papers to be submitted for evaluation—in essence, performing internal quality control prior to the official submission (FM2, FM6, FM10, FM21). This exemplifies isopraxism—the implementation of similar practices across organizations. Therefore, field members have become more strategic in the selection of which staff members’ publications should be submitted from one exercise to another, hoping to maximize the possible ratings of publications and diminish potential threats to the overall quality profile of their SoE:

“And then we would have a very highly advanced data section and we were looking at everything, looking at CVs and stuff. The chair of the faculty serves as a kind of generally friendly interrogation of ‘ok, what are you doing?’ … The fundamental question is who do you submit, that’s the first thing, really. So, you have to assess everybody’s work… Every university, I think, will have some system of trying to rate what they think the panel will rate” (FM21).

A third strategic feature mentioned refers to recruitment policies and procedures influenced by RAE/REF and to decision-making when hiring researchers: “So, I think in any appointment you make on the research, on the academic side, it was always having in mind REF, as one of the criteria” (FM3). There is evidence of attempts to create specific profiles or at least to hire people with certain characteristics, such as strong quantitative skills (FM4, FM16). Thus, “universities were pushed to buy researchers” (FM10), leading to a transfer market of knowledge producers (FM4, FM8, FM9, FM16, FM20):

“What happened is that you basically got a transfer market for professors set up. So, a university realised that if it had very highly productive professors, the money coming back as a result of the research assessment exercise, and the extra research funding that comes with it, you actually make a profit on that professor” (FM8).

Finally, a more recent strategy relates to the “impact” evaluation criterion implemented in the REF 2014 exercise, one of the main concerns for the next exercise in 2021, in which this dimension will count for 25%. Despite the recent inclusion of this criterion, several field members referred to the perceived effects that the notion of “impact” has already triggered within SoE: strategic behavior in selecting which case studies to have assessed and how to write them, and a reorientation of research towards questions and designs that enhance the usability of research results in other societal spheres:

“Institutions are constantly on the lookout for potential impact case studies. They start to draft them and refine them from very early on, so when they come to the next research assessment, whenever it is, they’ve got several they can choose from and will make, you know, political choices about which ones to strengthen even more. It has changed a lot of behavior” (FM2).

Discussion and conclusions

Our analysis of the shaping of the competitive institutional environment in which UK Schools of Education are embedded highlighted the “third parties that set the framing competition” (Hasse and Krücken 2013: 183) among them. In the context of SoE in the UK, these third parties are the Higher Education Funding Councils that organize the rating of research quality and the media organizations that convert these research ratings into rankings to be marketed for their own profits. The former set the frame of competition related to material resources—research allocation based on indicators of quality derived through peer review. The latter, as classification mechanisms (Wedlin 2006), contribute to the on-going stratification of higher education by sorting SoE into high, middle, or low positions, increasing the competition and the pressures for SoE to maintain their status and/or strive for reputation.

By analyzing the organizational vocabulary of SoE and the uses that they make of R&R, we observe their embeddedness in the highly competitive institutional environment of UK higher education and their strategic behavior in negotiating this environment. Concretely, we showed that no matter the position of the SoE in league tables, the majority of SoE define themselves as “leaders” or among the “top,” with an acquired “reputation” in their research and/or teaching activities within a “strong” or “thriving” environment. While the analysis of the “organizational vocabulary” (Meyer and Rowan 1977) reflects isonymism—the adoption of similar terms (Erlingsdóttir and Lindberg 2005)—among SoE, we also find variation among SoE that reflects the stratified and increasingly competitive institutional environment in which they operate.

Perhaps the most interesting result is related to the uniformity—isonymism—among SoE in the use of REF-based vocabulary such as “excellence,” “quality,” and “impact” in their organizational descriptions. Such homogeneity is also found in their strategic use of myriad R&R, despite the clear positioning of SoE in the league tables. On one hand, such results show that SoE make use of their “marks of distinction” in order to secure or strive for reputation within their environment, particularly visible in the use of the REF rating definitions of the starred levels to characterize the quality of research they produce. “World-leading,” “internationally excellent,” or “internationally recognized” are used to characterize not only the product but also the producer. Through this “repackaging of university performance” (Sidhu et al. 2011) in narratives of prestige, words such as these have officially become part of the self-characterization of the SoE.

We also found considerable variation in the use of such forms, especially concerning the scope of activities (teaching or research) and the regional, national, or international levels referenced, confirming the findings of the study of UK universities’ welcome addresses by Mampaey and Huisman (2016). While the SoE at or near the top are older and well-established schools, those closer to the bottom are younger, with less time to have grown their programs, recruited renowned faculty members, and established their reputations. Several important variations we noted confirm prior research: organizations at the top make more use of rankings, while those in the middle positions make strong references to the REF ratings, indicating a striving for as-yet unattained reputation and prestige (Schofield et al. 2013). Still others explain their position with such statements as: “We are one of the youngest universities in the UK but we are already leading the way in adding value to society, which we call social impact” (Northampton, ranked 65th). Indeed, the next cycle of the REF in 2021 will award fully 25% for “impact,” potentially reshuffling the ratings awarded and, in turn, the rankings based upon them. This exemplifies the dynamic nature of the competitive institutional environment in which SoE are embedded. Such reshaping effects have also been demonstrated in the field of business schools (Wedlin 2006). Moreover, the analysis has confirmed that SoE compete for legitimacy not only inter-organizationally—newer universities attempt to strive for reputation in relationship to older and more established ones—but also intra-organizationally, by attempting to attain more legitimacy and counteract the often-lamented marginality of education within larger university structures. Considering that universities strategically allocate “quality-related” research funding among their subunits, we call for future research on the effects of ratings, rankings, and research evaluation on the on-going organizational segmentation of universities—and the resultant unequal concentration of resources—discussed in recent literature (Cantwell and Taylor 2013; Rosinger et al. 2016).

Despite on-going critiques of the REF, the competitive setting of ratings and rankings not only creates “marks of distinction” but also triggers strategic behavior among SoE, within the confines of a highly stratified institutional environment that emphasizes organizational actorhood. Rankings as mechanisms of classification have the capacity to make or break reputations and reallocate status, triggering tensions and exacerbating struggles within the field (Wedlin 2006). Through the interviews, we identified contemporary strategic behaviors. These ranged from changes in the structures of schools concerning research management (separation of teaching and research and the setting up of new structures for research management) to internal practices (internal peer review assessments, co-writing and coaching, the setting up of internal evaluations, recruitment policies, and the selection of case studies to show “impact”). While we observe isonymism and, to a certain extent, isopraxism among SoE, our data do not yet confirm isomorphic consequences in terms of structural changes among disciplinary units within the universities studied. Longitudinal analyses of such organizational structures to assess isomorphism across and between organizations are necessary. Such a research avenue could also contribute to our understanding of organizational segmentation and on-going stratification, and of the strategic behavior of universities’ subunits as they strive to maintain and gain status.

Overall, these results demonstrate how universities’ subunits such as SoE became “organizational strategic actors” (Krücken and Meier 2006; Hasse and Krücken 2013) that expand their internal management capacities and appear as integrated and goal-oriented units jockeying for position. Therefore, we also consider that the concept of universities as “organizational strategic actors” may be extended to university subunits, and we call attention to needed explorations of the horizontal and vertical relationships between such subunits, especially in highly competitive environments. This study contributes to our understanding of contemporary higher education system dynamics by uncovering how ratings are converted into rankings and how this process in turn triggers strategic behavior that impacts universities and (multi-)disciplinary subunits, such as Schools of Education, within them. It also calls attention to the need to study disciplinary subunits, as it emphasizes the importance of contextual frames of competition and their effects on inter-university and intra-university stratification and reputation alike.