Blunt facts?

Over the course of the past century, rankings have proliferated dramatically. Loosely understood as a specific way of quantifying and presenting performance comparisons (Werron & Ringel, 2017), rankings can nowadays be found in a growing number of sectors and at every level, from the local to the global. Nation-states, businesses, cities, restaurants, and artists are among the many entities whose performances are regularly compared, quantified, and rank-ordered. Not only are rankings becoming more common and in some cases quite salient, but they are also becoming more diverse and elaborate, with some of them notably resource-intensive to produce and sustain. At the same time, keeping track of the latest ranking results, and not least of the requirements for participation and data submission, has become ever more demanding. And in some domains, rankings have become ubiquitous to such an extent that they are now an object of sustained interest for key stakeholders and scholars alike. Higher education is one of those domains.

Rankings of higher education institutions, whether we refer to them as college or university rankings, have been around for more than a century (Hammarfelt et al., 2017; Myers & Robe, 2009). But despite this long history, it was not until the 1980s and the first editions of the college rankings published by US News & World Report that rankings attained the status of a phenomenon beyond narrow policy and otherwise specialized circles, in the United States at least (Espeland & Sauder, 2007). In a somewhat similar fashion, the first so-called global rankings of universities in the early 2000s would prompt worldwide interest, effectively making them a hotly debated topic across policy, administrative, and scholarly circles (Brankovic et al., 2022; O’Connell, 2013; Paradeise & Filliatreau, 2016). About a decade later, Sarah Amsler would observe:

We are drowning in words about rankings: how they emerged, how to design them, how to theorize them, their classifications and comparisons, the extent of their effectiveness for different purposes, why to critique them, why to defend them, improvements that will make them methodologically robust, etc. Writing about rankings has become a global business. What more can possibly be said? (Amsler, 2014, p. 155, emphasis added)

One could have expected at this point that the scholarly conversation on rankings was about to reach saturation, leading to a decline in interest and overall attention. This, however, is not what happened. If “writing about rankings,” as Amsler put it, was a global business a decade ago, that business has only grown in the years since. Notably, throughout the second decade of this century, rankings evolved further and became more complex to produce and maintain, but arguably also more of a business-as-usual feature of everyday higher education affairs. By this point, the Academic Ranking of World Universities (ARWU, also commonly known as the Shanghai ranking), the rankings produced by Quacquarelli Symonds (QS) and Times Higher Education (THE), and later also U-Multirank, had become common references in higher education management, policy, and research. And although rankings as such have ceased to be inherently newsworthy for many people in and around higher education, they have continued to make international and national news the world over (Barats, 2020; Shahjahan et al., 2022). Scholars have kept writing about them, it seems with even greater intensity.

A striking characteristic of much of the literature on rankings in higher education—especially the works published in journals specializing in education, higher education, and science studies—is that it largely takes rankings for granted. Questions such as why and how rankings have become so pervasive are rarely addressed as an empirical puzzle. Rather, regardless of their attitude toward academic performance evaluation (critical, neutral, or affirmative), scholars typically assume that rankings have origins and drivers beyond higher education, and that they are loosely connected, sometimes even causally, with globalization, managerialism, neoliberalism, and recent geopolitical and other broader trends (e.g., Collins & Park, 2015; Hazelkorn, 2016; Locke, 2014; Lynch, 2014; Peters, 2019). This writing is often permeated with a sense that rankings were, and still are, one way or the other, inevitable. It is further notable that a great deal of research is more or less explicitly evaluative and normative: university rankings are judged—and often deemed inadequate—for their fitness for purpose, the soundness of their methodologies, their transparency, or their performative and other kinds of effects on higher education (e.g., see the recent volumes edited by Hazelkorn & Mihut, 2021b; Stack, 2021; Welch & Li, 2021). Without any doubt, the research on rankings to date has greatly contributed to a better understanding of the dynamics surrounding rankings in higher education. And yet, we do not seem to be any closer to explaining their ubiquity—and to some extent also their legitimacy—than we were a decade ago.

To address this problem, we propose seeing the proliferation and normalization of rankings in higher education as part of the larger process of institutionalization (Colyvas & Jonsson, 2011). The very claim that rankings are becoming institutionalized, however unproblematic it may seem, requires qualification. Broadly speaking, and for our purposes here, we see institutionalization as a process in which a phenomenon, such as rankings, becomes progressively embedded in a community’s belief systems, norms, and practices (Berger & Luckmann, 1966; DiMaggio & Powell, 1983; Jepperson, 1991; Zucker, 1977). Its existence becomes taken for granted and, as such, is rarely questioned by members of the community. In suggesting that this could be the case with rankings in higher education, we do not mean to say that rankings are not criticized or that alternatives to existing rankings are not discussed. What we mean is that even for critical scholars a world without them may seem too far off. The widespread sentiment, shared by proponents and opponents alike, that rankings are inevitable in fact attests that this may indeed be the case. Despite the ongoing criticism directed at specific rankings and their producers, for most actors in higher education the idea that higher education institutions operate within hierarchies, based on some notion of quality or performance, seems to be a legitimate one. In view of such a widespread consensus, it is not surprising that a rank given to a university often assumes the status of a “blunt fact”—one that universities can hardly ignore (Sauder & Espeland, 2009, p. 77).

Observing rankings through the lens of institutionalization, however, does not mean that future developments can take only one direction, namely that of further or deeper institutionalization. Institutionalization is, as a rule, an uneven and patchy process of varying depth, and not least a reversible one (Oliver, 1992). Applying this lens therefore allows us to open up a space for asking questions about the nature of the process. Doing so requires going beyond the issues of rankings’ legitimacy as such and their alignment with higher education’s historical tendency—in some contexts at least—to vertically differentiate between higher education institutions based on some notion of quality, performance, or prestige (Brankovic, 2018; Clark, 1983). Institutionalization, particularly in a strong form, would arguably mean that rankings themselves become an institution and are as such integrated into the modes of reproduction, and not only “contingent on alignment with existing cultural and cognitive frames” (Colyvas & Jonsson, 2011, p. 45). Neither taken-for-grantedness nor legitimacy is a sufficient condition for considering rankings permanent features of higher education. For this to be the case, the “chronic” (re)production, legitimation, and maintenance of rankings would also need to be embedded in higher education’s frames, rules, and routines (Colyvas & Jonsson, 2011; Jepperson, 1991). By implication, for the institutionalization of rankings to be considered a process unfolding on a global scale, this embeddedness would need to be evidenced not only in some parts of the world, but progressively all over the world. Whether this is indeed happening, and not least the degree to which it may be happening, is an empirical question, and addressing it requires moving beyond the usual accounts of how and why rankings have become pervasive in higher education.

In examining this process, we start with a simple point: in order to question the institutionalization of rankings as a social phenomenon, we need to take seriously both the broader social, political, and historical conditions and the conditions specific to higher education, and treat them as objects of empirical investigation. From this it follows that we need to observe not only broad institutional developments, but also how these shape and are shaped by dynamics at the social-organizational and even individual levels of the social order (Jepperson & Meyer, 2011); and, relatedly, not only how rankings diffuse, but also how they “stick” (Colyvas & Jonsson, 2011, p. 30). This requires turning our gaze to the ensemble that possibly fosters the institutionalization of rankings: not only the organizations producing them, but also policy makers, universities and their administrators, publishers, researchers, consultants, and many others (as illustrated or empirically shown in, e.g., Hallett & Ventresca, 2006; Scott et al., 2000; Zilber, 2002), some of whom may even be unaware of their contribution. For us scholars who study rankings, addressing this question by definition also implies turning the analytical gaze towards ourselves, given that we too—like the rankings we write (and rant) about—are invested in knowledge claims about higher education. Last but not least, it means being reflexive about our position(s) as participants in higher education, usually employed at the very institutions evaluated by the rankings we seek to study objectively. This is one way in which the challenges of researching university rankings differ from those of researching, for example, rankings of prisons or hospitals: “we” are both observers and parties concerned.

Once we recalibrate our lens to capture the material and discursive conditions that undergird rankings, and not least the spatial and temporal ones, it becomes apparent that before us lie not mysterious and immovable forces, but very real people, processes, and structures. It is when we start observing interactions, routines, practices, and actors, and not least how they evolve, that we are in a position to see rankings—and the organizational status hierarchies (re)produced through them—as an artifice whose institutionalization is not a linear path dependency, but a practical accomplishment that we still know too little about. The question then becomes how rankings proliferate, persist, and become embedded, which urges us to take a more comprehensive view of the multiplicity of contributing and potentially entangled factors. Once we treat the ubiquity and legitimacy of rankings as empirical questions, grand narratives such as globalization and neoliberalism—while surely important macro-conditions—become unsatisfactory as standalone explanations.

Continuities, interdependencies, and engagement

The special issue The Institutionalization of University Rankings: Continuities, Interdependencies, Engagement aims to broaden the burgeoning scholarly conversation on rankings in higher education, specifically by shifting attention towards empirical questions about the underlying conditions of their ubiquity, in both higher education and higher education research. The issue brings together scholars of diverse backgrounds who have different interests with regard to rankings and higher education. In the spirit of the journal, they draw on a range of epistemic traditions, including sociology, history, organization and management studies, critical theory, and of course higher education and science studies. The seven articles, each from a different angle and with a different theoretical or empirical focus, aim to contribute to a better understanding of what university rankings are, where they come from, how they operate, and how they have become increasingly entangled in the processes and structures of higher education and science.

In the remainder of this section, we organize the contributions along three themes: continuities, interdependencies, and engagement. Specifically, these themes draw attention to (a) how rankings, their relevance, and the purposes and meaning(s) assigned to them evolve over time (continuities); (b) what relationships and structural entanglements emerge around rankings and keep them going (interdependencies); and (c) how the institutionalization of rankings is fostered through interaction with different audiences (engagement). These are not discrete categories and are better viewed as strategies for evidencing the multiplicity of conditions that enable the sustenance and proliferation of rankings in higher education. Under each theme, we discuss a selection of the articles included in the special issue, highlighting how each connects to continuities, interdependencies, or engagement. Notably, and as we show later on, none of the articles is limited to a single theme; all are stand-alone works whose contributions to the study of rankings also extend beyond this special issue. We proceed by discussing each theme in turn.

Continuities

There is a tendency in the literature to narrate the history of college and university rankings as a succession of events, usually centering on the publication of various “successful” rankings and their respective methodologies (e.g., Hazelkorn & Mihut, 2021a; Myers & Robe, 2009; Usher, 2017). Paired with the enduring interest in rankings’ effects, this tendency has led to foregrounding certain rankings, in particular the more recent ones, at the expense of the context in which present and past rankings were produced and the circumstances that enabled (or impeded) their impacts. Left unchecked, however, this tendency carries with it the risk of promoting narratives of history that are overly deterministic and simplified. Furthermore, taking specific historical events and neatly delineated eras for granted, instead of questioning them, can easily magnify change and novelty at the expense of continuity and more gradual transitions. After all, questioning the institutionalization of rankings requires not only a better understanding of why some rankings succeed or perish, in the sense given above, but also an appreciation of the historical contingencies in their evolution.

A striking example of a continuous and gradually intensifying interest in rankings stretching over many decades is found in the history of rankings in the United States. In their contribution, titled “The emergence of university rankings: a historical-sociological account,” Wilbers and Brankovic (2021) take a closer look at this process, with the aim of better understanding the circumstances under which ranking universities based on repeated observations became a widely accepted way of discussing quality and excellence. The authors argue that the increasing attention to rankings during the 1960s and 1970s—across administrative, policy, and scientific circles—was possible in large part due to a shift in the understanding of what it meant to perform as a university. The crux of this understanding, which found a fitting expression in the zero-sum ranking table, is that the performance of one organization can be determined only in relation to the performances of other organizations. By casting light on the often-neglected developments of the 1960s and 1970s, the article also contributes to a fuller understanding of the historical context leading up to the 1980s—the decade in which US News & World Report would emerge on the college ranking scene.

China is a somewhat contrasting case. Here, too, we see a prolonged interest in rankings, but one pursued chiefly by the country’s central government. In their article “The politics of university rankings in China,” Ahlers and Christmann-Budian (2023) argue that in China, university rankings are an organic part of the state’s top-down science, technology, and innovation policy structure, rather than primarily a tool for international comparison, as they are sometimes portrayed. From the first national ranking, published by the ministry for science and technology in the 1980s, through the first international ranking by Shanghai Jiao Tong University in the early 2000s, to the present day, the discourse on rankings in China has been largely shaped by national concerns. Western rankings and other performance indicators have been increasingly challenged, not least on the grounds that they are insufficiently relevant to the Chinese context. Ahlers and Christmann-Budian conclude that, unlike in most countries, where international rankings tend to exert a more direct influence on universities, in China this effect has always been thickly mediated by the government’s strategic interest in higher education and science—an interest that long precedes the first global rankings.

While both articles speak to continuities, they are also revealing of the interdependencies between those credited with producing a ranking and the other parties contributing to it, including supporters, sponsors, and critics, thus offering a more nuanced picture of what makes a ranking—and its effect(s)—possible and indeed likely. Clearly, and as the two cases also illustrate, the constellation of actors and the nature of their involvement vary across countries as well as over time. This complicates the usual understanding of agency when it comes to university rankings, in which the role of the so-called “rankers” tends to be over-dramatized. Yet, as we shall argue in the following section, considering agency as distributed rather than unitary potentially has much to offer to the study of rankings.

Interdependencies

The best-known university rankings are nowadays complex undertakings. To produce a ranking, major ranking organizations rely on the continuous participation of multiple third parties, including universities, their administrative staff, and individual faculty members. Universities supplying data and academics completing reputation surveys are examples of this participation. The complexity also becomes evident when we observe that rankers are embedded in an increasingly dense network of organizations that collect, supply, and compile data for various rankings (Chen & Chan, 2021; Krüger, 2020; Williamson, 2021). Finally, the complexity is evident in the fact that some rankers are engaged in a host of additional and supplementary activities: for instance, they organize events, sell consulting services to universities and governments, and actively engage with various other audiences. However, research tends to treat rankings almost exclusively as a doing of the organizations owning the major rankings, such as THE and QS, whereby the role and agency of other actors, such as higher education institutions, are often neglected (as also recently argued by Locke, 2021). Understanding how rankings become (or do not become) institutionalized requires broadening this scope by, among other things, paying close attention to distributed agencies and, therefore, to the interdependencies between rankers and other actors in higher education.

In their study “The power in managing numbers: changing interdependencies and the rise of ranking expertise,” Chun and Sauder (2022) investigate universities’ ranking management departments in South Korea. The authors note that these units have become increasingly influential because they have turned the management of rankings, within and across universities, into a valuable new form of expertise. This new kind of expertise, the authors argue, has reshaped key interdependencies both within universities and between universities and external constituents, and has further led to changed work routines and new organizational practices. Drawing on the insights from their study, Chun and Sauder challenge the usual line of argument in which rankings are credited with generating competition, calling attention to how rankings also lead to new forms of cooperation. They thus conclude:

A key recipe for successful rankings is to incorporate multiple actors to collectively build expertise in the management of rankings. Rankings maintain and extend their influence over higher education through proliferating relational ties and interdependencies among universities, rankings, and other external parties. (Chun & Sauder, 2022, p. 17)

Interdependencies often arise in relation to resources, which are especially important to consider in the case of commercially driven ranking organizations. In addition to playing the role of impartial arbiters in creating rankings of higher education institutions, these organizations also act as businesses that sell services, including advertising and consulting, to those same institutions. In his contribution to the special issue, titled “Does conflict of interest distort global university rankings?” Igor Chirikov (2022) tackles the relationship between these two roles. He sets out to determine empirically whether this conflict of interest leads to privileging certain universities over others, examining the effect that Russian universities’ contracting with QS has had on their ranking outcomes. The findings suggest that, independent of changes in institutional characteristics, universities with frequent QS-related contracts improved their ranks more than their competitors did. Much like the recent study by Jacqmin (2021), which analyzes the relationship between advertising on THE’s website and the THE rankings, Chirikov’s work urges us to pay closer attention to the nature of the linkages and (inter)dependencies that emerge between ranking organizations, higher education institutions, and other parties directly and indirectly involved in sustaining rankings.

As both Chun and Sauder’s and Chirikov’s studies demonstrate, rankings are made possible through ongoing collaboration and cooperation between different parties, whose interests and orientations are variously aligned. In his contribution to the special issue, “How university rankings are made through globally coordinated action: a transnational institutional ethnography in the sociology of quantification,” Gary Barron (2022) is specifically concerned with the distributed nature of the production of rankings. He conceptualizes data and infrastructure work as globally coordinated action, whereby individual members of academic and administrative staff become part of that infrastructure as they work with the data on an ongoing basis across countless physical sites. Barron notes that the contribution of higher education institutions to the production of rankings is not uniform; rather, it assumes different modalities, centered on individuals and organizational units doing routine work, often without a full grasp of the overarching network of relationships of which they form a vital part.

Studying interdependencies, and not least how they evolve, holds promise for improving our understanding not only of how rankings produce new relationships in higher education, but also of how they strengthen and transform existing ones. The importance of relationships between actors has also been recognized in the recent work by Engwall, Edlund, and Wedlin (2023) on the spread of evaluations, including rankings, in academia, and in the work by Brankovic, Ringel, and Werron (2022) on the role of boundary work in the legitimation of university rankings. Moreover, one can easily see the relevance of studying interactional and relational dynamics for advancing our understanding of continuities vis-à-vis rankings, and of history more generally. And, as we shall see in the following section, interdependencies are not only structural but also very much discursive, and are even actively initiated and sustained by rankers and their audiences.

Engagement

One important aspect of rankings is the fact that they are public comparisons of performance (Brankovic et al., 2018). Even though this has been highlighted in some of the classic works on the subject (e.g., Espeland & Sauder, 2007), research largely takes the public character of rankings for granted. Hence, what precisely follows from this character is not well understood. Upon closer observation, we note that, as a function of their public character, rankings engage various expert and non-expert audiences for which they serve—or are believed to serve—different needs. As different as these audiences are, what they have in common is that they need, or are perceived to need, orientation, legitimation, or status signals (Esposito & Stark, 2019; Hamann & Schmidt-Wellenburg, 2020). Yet how rankings invoke, engage, and influence their audiences, and how this engagement contributes to or challenges their institutionalization, is something we have very little insight into.

Reaching out to and catering for various audiences has always marked rankers’ efforts to secure attention and legitimacy. The growing importance of social media in public life has made it an attractive space for organizations of all kinds to, on the one hand, promote their causes, services, or products and, on the other, foster engagement across diverse audiences. In their article “The ‘LOOMING DISASTER’ for higher education: how commercial rankers use social media to amplify and foster affect,” Riyad Shahjahan, Adam Grimm, and Ryan Allen (2021) critically examine the social media activities of THE and QS. Analyzing THE’s Twitter feed and QS’ Facebook page, the authors show how these organizations use storytelling to frame and sell their products and services. Both organizations, the authors find, use social media platforms to mobilize collective emotional states and actions, in particular the sense of precarity that comes with feelings of uncertainty, insecurity, anxiety, and/or competition. The article highlights social media’s uniqueness as an affective infrastructure, given its cost-effectiveness, ease of consumption, the broad and quick outreach it allows, and not least its interactive nature.

In their contribution “The discursive resilience of university rankings,” Hamann and Ringel (2023) survey the discursive environment of rankings and distinguish two modes of critique: a rather fundamental mode that draws attention to the negative effects of rankings, and a more technical mode that is concerned with their methodological shortcomings. Rankers either counter this criticism with alternative narratives, or confidently highlight rankings’ scientific proficiency and stress that rankings can always be developed and improved further. Crucially, the confident responses to criticism also include attempts to engage critics, inviting them into a productive conversation about how rankings could be developed further. The ensuing, seemingly never-ending conversation between rankers and critics about how to arrive at more rigorous assessments helps endow university rankings with what the authors refer to as “discursive resilience.” The contribution not only emphasizes that critique is an important element of the institutionalization of university rankings, but also, in a more general sense, sensitizes us to discursive dynamics that emerge organically and therefore unfold “behind the backs” of rankers and their critics.

Both contributions bring to the fore the importance of the digital sphere, in particular social media, as a site of engagement and interaction, which has so far attracted little interest from research on rankings (for some exceptions, see Lim, 2021; Stack, 2016). Rankers seem to take their social media and other types of online engagement quite seriously, which urges us to look beyond their ranking tables if we are to grasp a fuller picture of how a particular effect is produced. The recent study by Hansen and Van den Bossche (2022), which tracks rhetorical change in Times Higher Education’s rankings coverage, vividly captures how subtle yet potent this effect can be. Altogether, these insights indicate that dynamics in the public domain—whether digital, in print, or in person—are worth observing when asking questions about how rankers invoke, engage, and influence their audiences.

What more can possibly be said?

The overarching takeaway of the special issue is that the institutionalization of rankings is driven not only by macro-societal trends, but also by the ongoing engagement between different parties, both inside and outside of higher education, in multiple arenas and spheres. How this plays out, however, has not been made an object of sustained empirical or theoretical interest in higher education studies, certainly not on a par with the interest in describing and interpreting the effects or methodologies of rankings. We therefore wish to reiterate that, if we are to better understand how rankings in higher education become a taken-for-granted social fact, as well as how their taken-for-grantedness is challenged, greater theoretical and empirical attention needs to be paid to the dynamics at the social-organizational and individual levels of the social order. We propose that this attention be directed to questions concerning continuities, interdependencies, and audience engagement.

The collection of articles included in this issue is a step in the direction of what we see as an important theoretical and empirical challenge. As noted, the three themes are not mutually exclusive, nor do the contributions to the special issue shed light on only one at a time. Rather, they closely relate to each other and are very much intertwined. As rankings evolve, they shape and reshape interdependencies between actors and take up the expectations of different audiences. Continuity is, after all, another way of acknowledging at least some degree of path dependency at work. Rankings build resilience in part because their engagement of different audiences is ongoing and gradually normalized, and in part because their production is the work of many diverse actors, who repeatedly contribute to the collective endeavor from one ranking cycle to the next. These considerations merit further investigation. Ultimately, as with the study of institutionalization processes more generally, puzzling over the institutionalization of rankings invites us to consider equally the temporal and the spatial, the material and the discursive.

Echoing the calls made by some of the authors in the special issue, we believe that there is a great deal we can learn about rankings by systematically comparing continuities, interdependencies, and engagement across different parts of the world and at different levels. Comparative studies have so far advanced our understanding of a range of phenomena in higher education (Kosmützky & Nokkala, 2014; Kosmützky et al., 2020; Teichler, 2014). They also hold considerable promise for unraveling the antecedents and consequences of continuities, but also of discontinuities; of interdependencies and the absence thereof; and, finally, of engagement as well as disengagement. Furthermore, there is much to be learned about rankings in higher education by extending our scope and comparing their institutionalization with that of other devices of evaluation (Hamann et al., 2023). Nor need comparative research be limited to higher education; it can also take into account rankings in other domains (for an illustration, see Brankovic et al., 2021). After all, we should not forget that rankings in higher education (and higher education as such, for that matter) are part of a larger societal field in which various quantitative indicators have come to wield significant influence in recent years (de Rijcke et al., 2016; Erkkilä & Piironen, 2018; Mennicken & Espeland, 2019; Pardo-Guerra, 2022). In view of this, empirically identified similarities and differences between rankings and other kinds of quantification-based comparisons—within and beyond higher education—could be fruitfully exploited to further our understanding of rankings in higher education.

In closing, we hope that this special issue will deepen our collective appreciation of the complexity of rankings as a social phenomenon and inspire those working on the topic to consider some of the perspectives and insights offered here. We equally hope that it will stimulate us all to further question the taken-for-grantedness of rankings in higher education as well as in higher education research. Not because we believe that rankings are all bad or harmful (if this were indeed so, things would be, we suspect, far simpler), but because it is our task as scholars to question everything in society, including—and perhaps especially—our own assumptions and beliefs.