In this section, we present the results concerning the shaping of a competitive and stratified institutional environment for UK universities and the resulting shifts in the strategic behavior of Schools of Education. The first part draws on the expert interviews, while the second also draws on the website analysis of the uses of ratings and rankings by the 75 SoE.
Shaping the competitive institutional environment for Schools of Education
Within UK higher education, the interviewees were unanimous in flagging two decisive moments marking the intensification of competition. The first relates to a turning point in the funding arrangements of the higher education sector; the second concerns the introduction of a research evaluation system that has continuously evolved, becoming stronger and more formalized. The Further and Higher Education Act (1992) replaced the previous UK-wide funding body, the Universities Funding Council, and created bodies to fund higher education in England and Wales, with knock-on effects on the funding arrangements in Scotland and Northern Ireland. Moreover, the legislation ended the binary division between universities and polytechnics by granting the latter university status. The second turning point refers to the introduction of the research evaluation system and its research rating system, designed to distribute research funding according to criteria and indicators of quality judged by peer review (for a comprehensive study of the evolution of the research evaluation system, see Marques et al. 2017). Its genesis can be traced to 1986, under the umbrella of the University Grants Committee, but the first UK-wide exercise was conducted in 1992, under the jurisdiction of the funding bodies of the UK’s four nations. Previously, funding had been allocated in block grants solely to universities, without distinguishing funds for research from funds for teaching.
The institutionalization of this research evaluation system and the inclusion of new organizations (the new universities) as competitors for research funding are understood as critical junctures: the genesis of rising competition within UK higher education as an institutional environment in which research is viewed increasingly as a commodity:
“…suddenly it introduced a competitive element in terms of relating funding to outputs of different kinds… it introduced a whole sector of new practices related to the judgment of quality related to indicators or measures. This reduction of quality to a 1, 2, 3, 4 measurement framework. It was very alien. But had an enormous impact in the way people thought about their work, which intensified over the different RAE exercises. And it explains fundamentally a translation of research into a commodity. It was a process of commodification. Where the research was increasingly judged not in terms of intrinsic worth, but in terms of its performative measures and its income generation. So, during this period, from the 1990s, income generation was becoming increasingly important, and the research was one of the ways income has been generated” (FM10).
Throughout the institutionalization of the exercise (RAE 1996, RAE 2001, RAE 2008, and REF 2014), numerous changes restructured the system to formalize and standardize procedures among panels and between disciplines or units of assessment (see Marques et al. 2017). Two moments were especially important in intensifying competition within the field: changes in the rating scale, and the government’s decision, in 2003, to concentrate the allocation of funding in the top-rated universities, exacerbating stratification. In RAE 1996 and RAE 2001, the research evaluation system allocated research funding based on a seven-point rating scale (1, 2, 3b, 3a, 4, 5, 5*), assigning each department a single overall grade. From RAE 2008 onwards, each department instead receives a quality profile indicating the percentage of its submission at each point (e.g., 3* = 25%) of a rating scale that ranges from 0 to 4: “unclassified” (below the standard of nationally recognized work), 1* as “nationally recognized,” 2* as “internationally recognized,” 3* as “internationally excellent,” and 4* as “world-leading” (REF 2014). Before 2003, departments that achieved a 3a were awarded funding, whereas in the following years only departments rated 4 or above (RAE 2001) and research rated 3* or above (RAE 2008 and REF 2014) were granted research funding. This pressured universities to ensure that they reached at least the rating that would secure the allocation of some funding:
“Unfortunately, the government said we had to move to only rewarding the very best. That didn’t actually shift much money around, and it had the drawback of telling people that what we classified as two-star research wasn’t worth funding. And that’s very unfortunate... I would be very proud if I was doing two-star research, you are still doing stuff that moves the discipline forward, which is quite satisfactory. However, according to the government’s decision you concentrate more money on the best” (FM22).
“The idea is that originally, you know, research was rated as one, two, three, four, five, and even two-rated institutions got some money. And what’s been happening over the past 20-30 years is that this funding has become more and more selective. So now only the very, very top institutions are getting any REF-funded money” (FM16).
A second development was the introduction of a transparency policy in 2001 to ensure that RAE/REF submissions and results are available to the wider public, which led to the use of the rating results by media organizations. While the Higher Education Funding Councils (HEFCs) in the UK produce ratings to inform and legitimate the allocation of funds, media organizations produce rankings to make profits, often through sales of newspapers, magazines, guidebooks, and online sources. Since 2001, Times Higher Education has created RAE league tables (for RAE 2001) and “tables of excellence” (for RAE 2008 and REF 2014), contributing to the mediatization of these scientific evaluations. In the RAE 2001 league table, only universities were ranked. In RAE 2008, not only universities themselves but also individual “units of assessment” were ranked, through the calculation of a grade point average (GPA) for each unit, department, and, finally, university. In REF 2014, the same rationale was applied, with the addition of a new category called “research power” to break ties between universities with the same GPA; it is calculated by multiplying a university’s GPA by the total number of full-time-equivalent (FTE) staff submitted to the REF. Similarly, The Guardian newspaper has published RAE/REF rankings since 2001. For the last exercise, The Guardian used the “power rankings,” created by Research Fortnight, to determine the total funding allocation for each organization.
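To illustrate how these league-table metrics combine (a minimal sketch; the quality profile and staff numbers below are hypothetical, not drawn from any actual submission), the GPA weights each starred level by the share of the submission awarded that level, and research power scales the GPA by the submitted staff:

\[
\mathrm{GPA} = \sum_{s=0}^{4} s \, p_s, \qquad \text{research power} = \mathrm{GPA} \times \mathrm{FTE},
\]

where \(p_s\) is the proportion of the submission rated at level \(s\). For a hypothetical profile of 30% at 4*, 40% at 3*, 25% at 2*, and 5% at 1*, the GPA is \(4(0.30) + 3(0.40) + 2(0.25) + 1(0.05) = 2.95\); with 40 FTE staff submitted, the research power would be \(2.95 \times 40 = 118\).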
The gradual expansion of media companies’ scrutiny to rank the performance not only of universities but also of departments and schools can be understood as a set of “shaping forces” that exacerbate the competitiveness of the environment in which universities and schools are embedded:
“But be aware that big tables are constructed by the journalists. Now, it’s not our job, I am afraid, to make it easy for anyone in the league tables to win… I think in practice there’s a project system as well and it is more extreme in most ways than the system we run. All people have got to do is publish ideally four, it might produce less, outputs. If you publish four decent outputs, there is no pressure doing anything else” (FM22).
“Oh, enormously, enormously [media rankings on the intensification of competition]. What happens is, it makes these sorts of rankings dominate everything, because you’ve got these rankings out there. So, I think it’s had a huge impact” (FM17).
Competition is thus explained less by the anticipation of consumers’ needs than by the “third parties that set the framing competition, thereby linking potential competitors to each other” (Hasse and Krücken 2013: 183). Here, we identify the peer review panels and Higher Education Funding Councils, which produce ratings, and the media organizations, which produce rankings, as the third parties that link Schools of Education as direct competitors in an organizational field. We turn next to the strategic behavior of Schools of Education in gaining resources and achieving legitimacy within the field. How do SoE react within such an increasingly competitive institutional environment?
All leaders? Institutional effects of R&R and the strategic behavior of Schools of Education
In presenting our findings, we analyze the strategic behavior of Schools of Education, examining in particular the “organizational vocabulary” they use to describe their activities and their uses of diverse ratings and rankings, to show how they compete for material and symbolic resources, such as reputation and legitimacy, within the field. We complement these results with the interview data. While we identify isonymism and, to a certain extent, isopraxism, we also distinguish important differences between the 75 SoE that participated in the last REF. Figure 1 shows the organizational vocabulary used among UK Schools of Education, taking into consideration their position in the Times Higher Education “REF 2014: Subject ranking on intensity – 2014 GPA rank order,” which is produced from the results of the REF ratings. The top 20 is composed solely of older universities, and only four new universities appear within the top 40. The lower positions, by contrast, are filled only by new universities.
Despite such stratification of the SoE, the results show that no matter their position in the league tables, SoE use terms of singularity to describe their organizational capacity in the competitive environment in which they are embedded. Designations such as “leader” (52%), “excellence” (44%), “quality” (35%), “reputation” (32%), “impact” (16%), “innovative” (16%), “top” (16%), “strong” (13%), or “thriving” (8%) serve as important designators for SoE from the highest to the lowest positions. This result shows isonymism—the adoption of the same organizational vocabulary—among SoE in their attempts to gain symbolic resources. Yet even with the common use of certain adjectives to display their “marks of distinction” and prestige, there is variation in their use. SoE that occupy the highest positions in the table place special emphasis on affirming their position as “leaders,” their “reputation,” their classification in the table as “top” organizations, or their status as “world-class” schools. In contrast, SoE positioned in the lowest ranks instead place strong emphasis on the “quality” and “excellence” of their activities, especially in relation to teaching, and the “thriving” status of their work. Such results show that while those in the highest positions make these references to maintain their position, those in the lowest refer to their developmental trajectory (on the move) and future status. In fact, interviewees stated that post-1992 universities understand the evaluation as a way to improve their standing, to prove that they are “REF-able,” and ultimately, to increase their inter-organizational legitimation in the field as research providers within a highly stratified higher education system:
“They are pretty much all very enthusiastic about the system because they all manage to get reputation. They all got some area, where something has happened, they may not earn much money, but at universities like that the research money is a drop in the ocean, compared to the student money. They are not in the research game for money. They are in it for reputation” (FM22).
Another important result relates to the homogenization in the use of words such as “excellence,” “impact,” and “quality” that are deeply entrenched in the REF vocabulary. Despite variations in the use of certain terms by SoE, “excellence,” “impact,” and “quality” are designators used evenly by SoE, no matter their position in the table, reflecting the adherence of SoE to the research evaluation system and its considerable and growing impact on UK higher education since the mid-1980s. Figure 2 provides a clear picture of how “marks of distinction” are translated into concrete results and usages. Overall, 68% of our sample (51 SoE) make explicit reference to some form of R&R, while the rest make no reference to them. Moreover, 37% of those SoE that do make reference to R&R refer to both forms. We observe that no matter the position in the league table, references to both ratings and rankings are found.
Most importantly, among those SoE that mention R&R, the vast majority behave similarly in using the REF definitions of starred levels for their outputs (88%): “nationally recognized,” “internationally recognized,” “internationally excellent,” and “world-leading” are used far more often than other references, such as the scale itself (1*, 2*, 3*, 4*) (20%), “impact” (33%), or the vitality and sustainability of the unit under the “environment” criterion (22%):
“This gives our Unit of Assessment a Grade Point Average (GPA) of 3.3 which means that we are ranked 5th in the field of education nationally and ranked joint 1st in the UK for world-leading research impact” (Durham, 1–20).
“Our educational research was rated highly in the UK-wide Research Excellence Framework (REF), with 76% of our research rated as internationally recognised, internationally excellent, or world-leading. Overall, Edge Hill U. was listed as the biggest improver in the league table published in The Times (+33 places)” (Edge Hill, 40–60).
Yet alongside such homogenization, important differences also deserve mention. One is that the SoE ranked in the highest positions make explicit references both to REF-based media rankings (85%) and to the REF rating (80%), meaning that almost every SoE placed in the first 20 positions displays its “mark of distinction” in the research evaluation system by stating its position in the rankings, its rating, or both. A second key difference is that other forms of rating, such as that of the Office for Standards in Education (OFSTED), related to the quality of teacher education provision, are mentioned by organizations across the board, except by those SoE ranked between 20th and 40th position. This shows that SoE in the highest but also in the lowest positions strategically display their “marks of distinction” in both teaching and research activities, while others only identify their achievements in research. A third noteworthy difference relates to the use of rankings. Seven of the nine rankings referred to were produced by media organizations with a national market and scope, such as The Times Good University Guide (17%), the Complete University Guide (14%), the National Student Survey (11%), the Guardian University Guide (8%), the Russell Group Ranking (3%), and the Good Teacher Training Guide (3%). Only two global rankings were mentioned: the QS World University Rankings (17%) and the THE World Ranking (6%). This reflects the contextualized nature of education as a nation-oriented discipline and the organizational field relationships of SoE within their national or regional scope of action. Moreover, SoE placed in the highest positions use more comprehensive international or national rankings, while those in middling or lowest positions mention rankings related to teaching activities. The emphasis placed on certain rankings can thus be seen as an indicator of an SoE’s orientation towards a more teaching-led or research-led, and a more international or national, profile: “Of course the REF ranking is important, but our aim is the QS ranking” (FM15).
As we have seen, both forms (R&R) play an important role in defining the image(s) of SoE, whether to maintain “marks of distinction” or, in the case of new universities, to strive for higher marks. But it is not only newer universities that struggle for legitimacy in comparison to their older counterparts. The multidisciplinary field of education must also compete for intra-organizational resources and reputation against other SoE and against other units of assessment within their own universities. Indeed, several interviewees reported that being highly rated/ranked helps SoE gain institutional legitimacy within their university (FM3, FM19, FM20):
“…and we came X (within the first 20 places), which was, for us, very low, very low... The majority of the departments of X were on the top ten or top five, on top” (FM17).
“Education departments are not well regarded in universities. Education is a field that has a lot of academic snobbery within and across universities. Quite often, universities are looking to close their Education departments and being highly ranked has made a difference. I’ve found that in X, I’ve found that here and certainly here in the university I’ve found that all sorts of doors are opened... So, that’s really, really helpful” (FM3).
This is because ratings and rankings “…are very sticky results. They are very tenacious, since they stay with you for a very long time. Even after the next exercise…” (FM2). The desire to reach or remain in the top positions is perceived as something that creates tension and triggers strategic behavior among Schools of Education. Such strategies range from internal practices in the management of research production to more profound changes to the overall structures of departments for research management. Regarding the latter, although the RAE/REF did not directly trigger major restructuring, it has certainly influenced the arguments for reorganizing structures, such as separating teaching from research, setting up new structures entirely dedicated to research (management), or attracting research income to sustain jobs (FM4, FM20):
“What has been built was a really imaginative and potentially transformative structure to envision what education can look like within the university… that does lead to significant resourcing in order to perform to its potential within the time and the space of the next Research Excellence Framework” (FM20).
Regarding informal practices, members of the field identified the set-up of internal peer review assessments, including a more focused strategy devised to help colleagues’ career progression through co-writing processes as well as coaching (FM3, FM14, FM16, FM21). Perhaps most interesting is a convergence of results about the set-up of internal evaluations in which senior scholars read and rate the papers to be submitted for evaluation—in essence, performing internal quality control prior to the official submission (FM2, FM6, FM10, FM21). This exemplifies isopraxism—the implementation of similar practices across organizations. Therefore, field members have become more strategic in the selection of which staff members’ publications should be submitted from one exercise to another, hoping to maximize the possible ratings of publications and diminish potential threats to the overall quality profile of their SoE:
“And then we would have a very highly advanced data section and we were looking at everything, looking at CVs and stuff. The chair of the faculty serves as a kind of generally friendly interrogation of ‘ok, what are you doing?’ … The fundamental question is who do you submit, that’s the first thing, really. So, you have to assess everybody’s work… Every university, I think, will have some system of trying to rate what they think the panel will rate” (FM21).
A third strategic feature mentioned refers to recruitment policies and procedures influenced by the RAE/REF and to decision-making when hiring researchers: “So, I think in any appointment you make on the research, on the academic side, it was always having in mind REF, as one of the criteria” (FM3). There is evidence of SoE creating specific profiles, or at least hiring people with certain characteristics, such as strong quantitative skills (FM4, FM16). Thus, “universities were pushed to buy researchers” (FM10), leading to a transfer market of knowledge producers (FM4, FM8, FM9, FM16, FM20):
“What happened is that you basically got a transfer market for professors set up. So, a university realised that if it had very highly productive professors, the money coming back as a result of the research assessment exercise, and the extra research funding that comes with it, you actually make a profit on that professor” (FM8).
Finally, a more recent strategy relates to the evaluation criterion of “impact” implemented in the REF 2014 exercise, one of the main concerns for the next exercise in 2021, in which this dimension will count for 25%. Despite the recent inclusion of this criterion, several field members referred to the perceived effects that the notion of “impact” has triggered within SoE: strategic behavior in selecting which cases to have assessed and how to write them, and a reorientation of research towards questions and designs that enhance the usability of research results in other societal spheres:
“Institutions are constantly on the lookout for potential impact case studies. They start to draft them and refine them from very early on, so when they come to the next research assessment, whenever it is, they’ve got several they can choose from and will make, you know, political choices about which ones to strengthen even more. It has changed a lot of behavior” (FM2).