Abstract
Global university rankings influence students’ choices and higher education policies throughout the world. When rankers not only evaluate universities but also provide them with consulting, analytics, or advertising services, they are vulnerable to conflicts of interest that may distort their rankings. The paper assesses the impact of contracting with rankers on university ranking outcomes using a difference-in-difference research design. The study matches data on the positions of 28 Russian universities in QS World University Rankings between 2016 and 2021 with information on contracts these universities had for services from QS—the company that produces these rankings. The study compares the fluctuations in QS rankings with data obtained from the Times Higher Education rankings and data recorded by national statistics. The results suggest that universities with frequent QS-related contracts gained 0.75 standard deviations (~140 positions) in QS World University Rankings and 0.9 standard deviations in reported QS faculty-student ratio scores over 5 years, independent of changes in their institutional characteristics. The observed distortions could be explained by university rankers’ self-serving bias that benefits both rankers and prestige-seeking universities and reinforces the persistence of rankings in higher education.
Introduction
University rankings aspire to maintain a high level of credibility and influence within the higher education sector (Altbach, 2012; Hazelkorn, 2015; Lim, 2018). The credibility of rankings is based on a belief that rankers provide impartial information for prospective students, university administrators, and policy makers. It is implied that all universities are evaluated equitably, or at least according to a uniform set of criteria. However, rankers face conflicts of interest when, in addition to objectively evaluating universities’ performance, they offer these universities fee-based analytical, consulting, and advertising services. These conflicts of interest potentially interfere with the objectivity of measures used in rankings and may provide some universities with advantages that are unrelated to their institutional quality. Biased measures could misinform prospective students, universities, governments, and funders about the global standings of universities and countries.
There is robust evidence from other sectors of the economy that conflicts of interest distort evaluations. In the financial sector, the business model of issuer-paid credit rating agencies was found to lead to biased evaluations that contributed to the financial crisis of 2008–2009 (Kashyap & Kovrijnykh, 2016). In the environmental monitoring industry, third-party auditors who are chosen and paid by the firms that they audit were found to systematically underreport plant emissions (Duflo et al., 2013). In sports, football coaches participating in the USA Today Coaches Poll distort their rankings to reflect their own team’s reputation and financial interests (Kotchen & Potoski, 2011). In the medical sector, incentives or gifts provided to physicians by pharmaceutical companies lead to biases in prescribing and professional behavior (Wazana, 2000). Is the industry of university rankings different?
University rankers claim that conflicts of interest do not affect the ranking process and that internal procedures are in place to prevent consulting/advertising services from influencing ranking outcomes (Bailey, 2015; Redden, 2013). Most rankers claim to conduct external audits and establish advisory boards to make sure that their processes are transparent and unbiased. But rankers fail to provide any systematic evidence to the public that university rankings are unaffected by conflicts of interest. Rankers do not disclose which universities bought advertising, analytics, or consulting services from them and how much they paid for these services.
The lack of evidence on whether conflicts of interest bias university rankings is a major omission, considering that rankings influence students’ choices and divert considerable resources (both financial and managerial) from universities and governments (Sauder & Espeland, 2009). Unlike other sources of biases in global university rankings (see Bowman & Bastedo, 2011; Ioannidis et al., 2007; Marginson, 2014; Selten et al., 2020), the conflicts of interest seemingly built into the university ranking system have rarely been discussed in the academic literature.
The current study is the first attempt to empirically assess the impact of contracting with rankers on university outcomes in global university rankings. This study uses unique matched data on the progress of 28 Russian universities in QS World University Rankings from 2016 to 2021 and the contracts these universities had for services from QS between 2013 and 2020. Russian universities are required by law to publicly report all large procurement contracts on the Russian portal of government procurements.
QS World University Rankings was chosen for this study over other global rankings for three reasons. First, Quacquarelli Symonds (QS) offers universities a wider array of services than other rankers. For example, it offers a fee-based rating system, QS Stars, that evaluates universities and awards them from 0 to 5+ “gold stars.” The stars indicate the “quality” of the university and appear on the website next to the university name, including in the main QS World University Rankings table. Second, Russian universities had a much larger number of contracts with QS than with any other ranker. Third, QS has been frequently called out by media outlets and higher education experts for having and not reporting conflicts of interest (Bailey, 2015; Redden, 2013; Stack, 2016).
University rankings and conflicts of interest
Major global university rankings—QS, Times Higher Education (THE), U.S. News & World Report, and the Academic Ranking of World Universities (“Shanghai”) rankings—are produced by profit-seeking organizations. Considering that over the years rankings have become both more diverse and resource-intensive to produce, ranking companies face greater pressures to generate profits in an increasingly competitive field (Brankovic et al., 2018; Lim, 2018). In the prevailing business model for this industry, lists of rankings are publicly available and usually “sold” at a loss in order to generate revenue from other business activities, which generally involve the sale of additional data and services to both the public and ranked universities (Lim, 2021). For instance, THE and U.S. News offer a substantial amount of subscription-based content for the public, consisting of news and reports on higher education and, in the case of U.S. News, additional ranking data and information about universities for prospective students. Additionally, all four rankers mentioned above offer services to universities that include advertising, access to additional ranking data and analytics, consulting (typically for branding, quality improvement, and reporting strategies), and access to events and workshops. As such, a state of mutual resource dependency exists between universities, which rely upon favorable rankings to generate tuition and other types of revenues, and rankers, which depend on the additional revenues generated by selling consulting, premium data, and analytics to universities.
How do resource dependencies of university rankers affect the way rankings are produced? An emerging stream of literature on “ranking entrepreneurship”—a perspective that focuses on the production side of rankings (Rindova et al., 2018)—suggests that rankers employ a variety of strategies to promote a sense of constant and intense global competition in higher education in order to position their analytics, consultancy or advertising services as part of the “solution” that helps universities to be more competitive (Brankovic et al., 2018; Lim, 2018; Shahjahan et al., 2022; Stack, 2016). For example, rankings are published on a regular basis every year to create the perception that higher education is a dynamic competitive field (Brankovic et al., 2018). Rankers further support that perception by framing higher education policy problems in terms of the precarity of global prestige, and then mobilize the resulting anxiety and fear of low ranking to sell their services and analytical tools to university administrators (Shahjahan et al., 2022).
However, the current academic literature on university rankings rarely discusses a more direct implication of rankers’ dependence on resources generated from selling services to ranked universities—namely, potential biases resulting from conflicts of interest. When firms or individuals both evaluate (or, in this case, rank) other organizations and receive payments from them, a conflict arises between objective evaluation and evaluation that benefits the client (Duflo et al., 2013). The scholarship on rankings contains only descriptive accounts suggesting that conflicts of interest may undermine the work of university rankers (e.g., Stack, 2016); these accounts offer no empirical evidence of the impact of conflicts of interest on ranking outcomes. One notable exception is a recent study by Jacqmin (2021), which showed that advertising in the Times Higher Education magazine is associated with an improvement in THE World University Rankings. However, that study’s design does not make it possible to identify whether the improvement is driven by the commercial relations between the ranker and universities or by other factors.
The literature on corporate audit could provide useful guidance for empirical studies of conflicts of interest in university rankings. Auditors are paid by firms and tasked to independently examine firms’ financial statements, so that external users (e.g., investors) can make sure that these statements are valid, reliable, and accurate (Bazerman et al., 1997). Findings from the audit industry are relevant because university rankings are considered to be a part of the general “audit culture” in higher education, where principles and techniques of financial accounting shape university governance (Shore & Wright, 2015). On a more operational level, the work of university rankers includes an important auditing component: one of their core activities is to ensure that the data that comes from universities or third-party data providers is accurate and aligns with the established methodology of the rankings.
There are at least two major takeaways from the literature on the impact of conflicts of interest on the work of auditors. First, most studies agree that conflicts of interest distort auditors’ evaluations (Bazerman et al., 2002; Clements et al., 2012; Ishaque, 2021; Moore et al., 2006). The negative impact of conflicts of interest on the quality of audit reports is pervasive: for example, the business model of issuer-paid credit rating agencies (where corporations or governments who seek funds pay for credit ratings) has been identified as a critical factor that led to inaccurate ratings and eventually the financial crisis of 2008–2009 (Kashyap & Kovrijnykh, 2016). If a university ranking has a conflict of interest that results in a biased ranking, it may negatively impact student educational choices, immigration procedures (in countries that rely on rankings to determine eligibility points for work visas), and even hiring decisions by universities and other employers.
Second, distortions in corporate audits are mainly rooted in unconscious self-serving bias and structural aspects of the accounting profession, rather than in corruption and fraud. As Bazerman, Loewenstein, and Moore explain in a paper titled “Why Good Accountants Do Bad Audits” (2002), cases of deliberate corruption and unethical behavior exist among auditors, but these cases do not explain errors in audits at scale. Rather, it is “because of the often subjective nature of accounting and the tight relationships between accounting firms and their clients, even the most honest and meticulous of auditors can unintentionally distort the numbers in ways that mask a company’s true financial status” (p. 3). Researchers argue that self-serving bias—when individual perceptions of a situation are altered by one’s role in that situation—is the main factor behind distorted audit reports. Self-serving biases thrive in accounting because this type of work is characterized by ambiguity (the possibility of interpreting information in different ways), attachment (auditors are interested in developing long-term relationships with clients that allow them to sell additional consulting services), and approval (auditors do not produce their own accounting but instead merely endorse or reject the client’s accounting) (Bazerman et al., 2002).
In the context of university rankings, ambiguity, attachment, and approval are also important characteristics of rankers’ work. Universities worldwide operate in different institutional environments, and establishing equivalency between seemingly identical categories (e.g., faculty full-time equivalent) leaves ample room for interpretation. As shown by Lim (2018), rankers also invest substantial resources to establish and maintain relationships with university leaders to strengthen their credibility in the field.
The remainder of the paper first describes the setting of the study and the range of services provided by QS to universities and then builds upon insights from the corporate audit literature to analyze the impact of conflicts of interest on the rankings of Russian universities in QS World University Rankings.
Setting: QS World University Rankings and Russian universities
QS was founded in 1990 as a business connecting university admissions offices with prospective international students (Shahjahan et al., 2022). QS clients included major business schools and universities around the world. In 2004, while continuing with its core business, QS joined forces with the Times Higher Education Supplement to publish the QS-THES global university rankings, for which QS curated data collection for the rankings while THE was responsible for the structure and commentary of the rankings (Stack, 2016). In 2009, the two companies split up and created their own separate rankings. QS describes its World University Rankings on its website as “the world’s most popular source of comparative data about university performance” and reports that its website was viewed 149 million times in 2019 (About QS, 2021).
QS World University Rankings methodology evaluates universities across six domains (QS World University Rankings – Methodology, 2021): academic reputation (40% of the overall score), employer reputation (10%), faculty-student ratio (20%), citations per faculty (20%), international faculty ratio (5%), and international student ratio (5%). In addition to its main product, World University Rankings, QS publishes ten additional rankings that include those by academic department, region, and graduates’ employability.
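For orientation, here is a minimal sketch of how these published weights would combine indicator scores into an overall score, assuming a simple weighted sum; QS’s exact normalization of indicator scores is not public, and all names and values below are illustrative.

```r
# Illustrative aggregation of indicator scores under the stated 2021 weights.
# QS's actual normalization of indicator scores is not reproduced here.
weights <- c(academic_reputation = 0.40, employer_reputation = 0.10,
             faculty_student = 0.20, citations_per_faculty = 0.20,
             intl_faculty = 0.05, intl_students = 0.05)

overall_score <- function(scores) sum(weights * scores[names(weights)])

# A hypothetical university that is strong on faculty-student ratio
overall_score(c(academic_reputation = 30, employer_reputation = 25,
                faculty_student = 90, citations_per_faculty = 20,
                intl_faculty = 50, intl_students = 40))  # 41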
QS is a successful “edu-business” that generated €19.8 million (~ $22 m) in revenue and €432 thousand (~ $480 k) in operating profit in 2015 and approximately €46 m (~ $51.5 m) in revenue in 2019 (Shahjahan et al., 2022). QS provides universities with an array of analytics, consulting, marketing, and advertising services (QS Intelligence Unit | Services, 2021) as well as software solutions (QS Unisolution | About Us, 2021). Analytics services include access to extended data points about a university’s performance in QS rankings (with an option to benchmark against competitors), academic reputation tracking, and employer reputation tracking. Consulting services offer advice on performance improvement of universities and higher education systems. Market insight services provide direct feedback from prospective students by conducting focus groups in various regions.
QS Stars is one of QS’s most controversial products, due to its high potential for creating a conflict of interest. QS Stars is an assessment product that evaluates the client university across at least eight categories (e.g., teaching, research, and internationalization). Depending on the number of points achieved in this evaluation, each university is then awarded a certain number of “gold stars,” ranging from 0 to 5+. The star result is then displayed on the university profile on the QS website and in the ranking tables. Universities are required to pay for the initial audit as well as an annual fee that allows them to use QS Stars graphics in their promotional materials.
Although the relative resource dependence of various rankers is difficult to quantify, given the lack of publicly available disaggregated financial reporting, the structure of QS business activity suggests a greater dependence upon universities relative to other rankers. Compared to other commercial rankers like Times Higher Education and U.S. News & World Report, QS does not seem to offer substantial subscription-based content for the general public, so revenues are more heavily derived from universities and governments. ARWU revenue mainly comes from universities as well, but unlike QS, ARWU creates its rankings using publicly available data and therefore does not rely upon universities to collect or supply ranking data. The resource dependence of QS, coupled with the ambiguities in rankings methodology, potentially increases its risk of exhibiting self-serving bias towards client universities.
The study focuses on contracts between QS and Russian universities from 2013 to 2020. This period is characterized by large state investments in the global competitiveness of Russian universities (Chirikov, 2018). These investments included, for example, additional funding for university infrastructure and large grants to attract scholars from abroad to work at Russian universities. The major governmental program, the Russian Excellence Initiative, launched in 2013, provided a group of 21 Russian universities with additional resources (up to 20% of their annual budget) to achieve “global excellence” and improve their standings in global university rankings. The initial goal of the project was to have five Russian universities in the top 100 of global university rankings by 2020.
Every year, the Russian government evaluated the progress of universities in global rankings. The results of the evaluation determined the allocation of additional funding for the next year. Performance in global rankings was very high on the agenda at Russian universities during that period, not only among the 21 universities participating in the Russian Excellence Initiative but also among others that were aspiring to be included in the program. The pressure from the government and media prompted universities to develop elaborate strategies for advancing in rankings, establish special units responsible for “ranking relations,” and purchase services from rankers, including QS.
Data and methods
Sample and data sources
The sample consists of the 28 Russian universities that appear in the 2021 QS World University Rankings (QS World University Rankings, 2021). Russia was selected as the sampling frame due to the ready availability of data on university contracts with rankers and additional measures of institutional characteristics. The data for this study were compiled from multiple publicly available sources. Global university ranking data and scores for 2016 to 2021, including those for 28 Russian universities, were collected from QS World University Rankings and THE World University Rankings (THE World University Rankings, 2021).[1] Self-reported institutional data (including the number of students and faculty and university income), which all Russian universities are required to report, were obtained from the Russian Ministry of Higher Education and Science (MHES) (Monitoring of Universities’ Effectiveness, 2021). Finally, information about QS-related contracts of Russian universities from 2013 to June 2020, including the monetary value of each contract, was obtained from the Russian portal of government procurements (Russian Portal of Government Procurements, 2021).
Contracts were identified as QS-related contracts only if they explicitly mentioned services provided by QS (advertising, license fees, consulting, access to ranking databases, and participation in QS-sponsored events). The contracts included spending for advertising on the QS website www.topuniversities.com, participation/co-sponsorship of events organized by QS, access to extended analytics, consultancy services, and license fees to participate in QS Stars. This sampling procedure was used to identify 128 contracts initiated by 38 universities. It is possible, however, that this procedure may undercount the number of contracts that financially benefit QS, as some universities may have additionally contracted with individual consultants associated with QS or used private firms or non-profits serving as agents without mentioning QS explicitly in the contract language.
Under the 128 QS-related contracts initiated between 2013 and 2020, Russian universities spent $3,871,378 on QS services. It is challenging to estimate how much of this amount went directly to QS: some contracts were complex and included, for instance, travel and accommodations for participants in addition to registration fees collected by QS. More than 90% of the contracts (116 of 128) were initiated between 2015 and 2020; the median contract value was slightly over $12,000. Of the 38 universities contracting for QS services, 28 are represented in the main QS World University Rankings in 2021, while the remaining 10 participate only in other QS rankings, such as the BRICS (Brazil, Russia, India, China, and South Africa) rankings, the EECA (Emerging Europe & Central Asia) rankings, or rankings by subject.
The analysis is limited to the 28 universities that participate in the main QS World University Rankings. In total, 22 of these 28 universities (78%) had 94 QS-related contracts and spent $2,857,880 on QS-related services over this 8-year period.
Following the literature on conflicts of interest in corporate audits, which suggests that strong relationships between auditors and firms are more likely to lead to biased reports, I analyze differences in the average 5-year change in rankings between two groups: “frequent clients,” meaning universities that used QS consulting and advertising services on five or more occasions, and “occasional or non-clients,” meaning universities that used these services four times or fewer. The number of contracts was chosen over the contracts’ monetary value as the measure of the relationship between universities and a ranker for two reasons. First, the number of contracts better indicates a continuing relationship between a ranker and a university. Second, the number of contracts is a more accurate measure of universities’ engagement with QS because, as noted above, the monetary value of contracts does not always represent the exact amount received by QS. The cut-off point of five or more contracts differentiates universities in the top quartile (25%) by number of contracts with QS from universities in the remaining lower quartiles (see the sketch below). In the results section, I also explore whether the results are robust to other cut-off points for defining QS “frequent clients.”
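A minimal sketch of this grouping rule follows; the data frame `df` and its `n_contracts` column are illustrative names, not the study’s actual code.

```r
# Grouping rule used throughout the analysis (sketch; names are illustrative)
df$frequent <- df$n_contracts >= 5   # "frequent clients": 5+ QS-related contracts

# The 5+ cut-off approximates the top-quartile boundary by contract count
quantile(df$n_contracts, probs = 0.75)
```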
Key variables
The study has two outcome variables of interest: (1) change in the position (ranking) of Russian universities in QS World University Rankings from 2016 to 2021 and (2) change in the QS faculty-student ratio score from 2016 to 2021 for the same universities. As noted in the previous section, faculty-student ratio is a major component of QS World University Rankings that accounts for 20% of the overall score. Russian universities report very high faculty-student ratios: 12 Russian universities appear in the world’s top 100 of the 2021 QS rankings list sorted by faculty-student ratio scores (and only one Russian university appears in the overall top 100). The average score for the faculty-student ratio component for Russian universities is also higher than their average scores for any other component (e.g., academic reputation or citations per faculty).
Eight of the 28 universities in the sample were not included in the 2016 rankings but were included in the 2021 rankings. For the five of these universities that were first ranked in 2017–2019, I assigned the first available score. For the remaining three universities, first ranked after 2019, I imputed the average 2016 scores of universities in the same 2021 rank range (from the global dataset of 1,003 universities). If a university was ranked within a band of institutions (e.g., 501–550), I assigned it the band’s middle rank (525). All student-faculty ratio scores provided by QS were recorded as numbers and did not include intervals.
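A minimal sketch of the band-to-rank conversion is shown below; the helper name is hypothetical, and the midpoint is rounded down to match the 525 example above.

```r
# Converting a published rank band to a single rank, as described above
band_to_rank <- function(band) {
  parts <- as.numeric(strsplit(band, "[-\u2013]")[[1]])  # split on hyphen or en dash
  if (length(parts) == 1) return(parts)  # exact rank given
  floor(mean(parts))                     # midpoint of the band, rounded down
}
band_to_rank("501\u2013550")  # 525
```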
Research design
The research design includes two parts: (1) descriptive analysis of the QS-related contracts and their association with the ranking progress and (2) difference-in-difference analysis to estimate the impact of QS-related contracts on the progress in QS World University Rankings.
For the descriptive analysis, I first explore differences in ranking progress between two groups of universities: QS “frequent clients” and “occasional or non-clients.” I use the nonparametric Mann–Whitney U test to assess whether ranking progress was equal in the two groups, and a permutation test to examine whether the observed differences between the groups were due to chance (the permutation test was implemented using the infer package in R). For the permutation test, I calculated the mean difference between the “frequent clients” and “occasional or non-clients” groups and contrasted it against a probability density function generated from the mean differences of 10,000 permutations within the same sample. In each permutation, the group label was randomly reassigned for each university.
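For concreteness, here is a base-R sketch equivalent to the described procedure; the study itself used the infer package, and the `progress` and `frequent` columns are illustrative names.

```r
# Permutation test of the group difference in ranking progress (sketch)
set.seed(1)
obs_diff <- mean(df$progress[df$frequent]) - mean(df$progress[!df$frequent])

perm_diffs <- replicate(10000, {
  shuffled <- sample(df$frequent)  # randomly reassign group labels
  mean(df$progress[shuffled]) - mean(df$progress[!shuffled])
})

# Two-sided p value: share of permuted mean differences at least as extreme
mean(abs(perm_diffs) >= abs(obs_diff))
```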
As a robustness check, I also use two OLS regression models, one unadjusted and one covariate-adjusted, to estimate the association between the actual number of QS-related contracts each university had and its ranking progress. The adjusted model includes university income per student as a covariate to account for differences in the resources available to universities.
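A minimal sketch of the two regression specifications, with illustrative column names:

```r
# Unadjusted and covariate-adjusted OLS models relating contract counts
# to ranking progress (sketch; column names are illustrative)
m_unadjusted <- lm(qs_progress ~ n_contracts, data = df)
m_adjusted   <- lm(qs_progress ~ n_contracts + income_per_student, data = df)
summary(m_adjusted)  # coefficient on n_contracts: rank positions per extra contract
```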
For the difference-in-difference analysis, I investigate whether the number of QS-related contracts impacts universities’ progress in QS rankings. To do so, I estimate two difference-in-difference models. The first model compares outcome differences (change in ranks) between QS “frequent clients” and “occasional or non-clients” in both QS World University Rankings and THE World University Rankings (which was selected for comparison due to the relative infrequency of THE contracts with Russian universities).
The first model assumes that THE rankings and QS rankings reflect institutional and reputational changes in their scores in a similar way. The methodology of THE World University Rankings published by Times Higher Education is similar to that of QS and includes 13 indicators measuring teaching excellence, research output, citations, international outlook, and industry income (Times Higher Education Rankings Methodology, 2021). Five of the six metrics that QS uses to construct its rankings are shared with THE rankings: academic reputation (THE conducts its own survey), faculty-student ratio, citations per faculty, international faculty ratio, and international student ratio. A longitudinal analysis of both rankings by Selten et al. (2020) indicates that the top-400 results of QS and THE are highly correlated over the years (Spearman’s correlation coefficient > 0.8). Twenty-five of the 28 Russian universities participate in both QS and Times Higher Education rankings.
The second model examines changes in faculty-student ratio scores of QS “frequent clients” and “occasional or non-clients” by comparing QS rankings for 2016–2021 to both THE rankings from the same period (specification 2a) and the faculty-student ratios computed from data reported to the Russian Ministry of Higher Education and Science for 2015 to 2020 (specification 2b).[2] The second model assumes that changes in faculty-student ratios are reflected in a similar way in QS faculty-student ratio scores and in the ratios reported to THE rankings and the Russian MHES. QS calculates the faculty-student ratio score by dividing the number of full-time equivalent (FTE) faculty by the number of FTE students (QS Intelligence Unit | Faculty Student Ratio, 2021). In the 2021 rankings, QS assigned universities a score from 2.2 to 100 relative to other universities. THE calculates a student-to-staff ratio in a similar way, by dividing the FTE number of students by the FTE number of academic staff; I inverted these values into staff-student ratios for comparability. Faculty-student ratios based on the MHES data are calculated by dividing the number of full-time and part-time permanent faculty (regardless of time worked) by the number of FTE students. Although the calculation methods differ slightly, these three metrics should be highly correlated.
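The sketch below puts the three measures on a comparable footing; the function names and inputs are illustrative, and QS’s rescaling of raw ratios to a 2.2–100 score relative to other universities is not reproduced here.

```r
# Three faculty-student measures compared in model 2 (sketch)
qs_raw_ratio <- function(fte_faculty, fte_students) fte_faculty / fte_students
the_ratio    <- function(students_per_staff) 1 / students_per_staff  # invert THE's figure
mhes_ratio   <- function(permanent_faculty, fte_students) permanent_faculty / fte_students
```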
The impact of QS-related contracts on changes in university ranking can be estimated by comparing the magnitude of changes (in overall rankings by THE and QS and in faculty-student ratios indicated by MHES, THE, and QS) for “frequent clients” universities (treatment group) to that for “occasional or non-clients” universities (control group). As such, changes in both groups in THE/MHES data are considered the baseline against which changes in QS are compared. The parallel trend assumption is justified given that QS/THE rankings (and QS/THE/MHES measures of faculty-student ratios) reflect institutional change in a similar way (see above). A similar difference-in-difference research design was previously used by Hanushek and Woessmann (2006) to investigate the effects of school tracking on student achievement across countries.
Both difference-in-difference models are estimated with the following equation:

$$Y_{ir} = \alpha_0 + \alpha_1 A_i + \alpha_2 B_r + \beta I_{ir} + \varepsilon_{ir}$$

where $Y_{ir}$ is the change of university $i$ in rankings/faculty-student ratio $r$, $A_i$ is a dummy for the control group (“occasional or non-clients”), $B_r$ is a dummy for the type of rankings/faculty-student ratio scores (QS = 1), and $I_{ir}$ is a dummy for treatment observations (“frequent clients”) in QS rankings. The difference-in-difference coefficient $\beta$ indicates the treatment effect.
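This specification can be estimated with lm() on data stacked in long form, one row per university-by-source change score; a sketch with illustrative variable names follows.

```r
# Estimating the equation above on long-form data (sketch; names illustrative):
# `change` is the standardized 2016-2021 change score per university x source
long$A <- as.integer(!long$frequent)                       # A_i: control-group dummy
long$B <- as.integer(long$source == "QS")                  # B_r: QS = 1, THE/MHES = 0
long$I <- as.integer(long$frequent & long$source == "QS")  # I_ir: frequent clients in QS
did <- lm(change ~ A + B + I, data = long)
coef(did)["I"]  # beta, the difference-in-difference estimate
```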
Results
Universities that had frequent QS-related contracts improved their standings in QS rankings, on average, by 191 rank positions from 2016 to 2021, while universities that never or seldom contracted for QS services improved their standings by 74 positions (Table 1 and the corresponding Supplementary Figure S2, left). The difference in ranking progress between the two groups is statistically significant (W = 39, p = 0.0236).
The difference remains statistically significant when the three universities with imputed 2016 QS ranking scores (first ranked after 2019) are excluded. A difference in ranking progress of this size between two randomly formed groups of universities would be very unlikely in this sample (permutation test p = 0.035; 95% CI = 6.5 to 227; for a visualization of the permutation test see Supplementary Figure S1). The difference between QS “frequent clients” and “occasional or non-clients” also remains statistically significant if progress in rankings is defined as the ratio of 2021 ranks to 2016 ranks (rather than the difference between 2016 and 2021 ranks).
On average, QS “frequent clients” universities also improved their faculty-student ratio scores by 14 points during the same period while QS “occasional or non-clients” did not (Table 1 and corresponding Supplementary Figure S3 (left)).
These results are robust both to alternative definitions of QS “frequent clients” and “occasional or non-clients” (shifting the cut-off point for “frequent clients” within 0.5 standard deviations (± 2 contracts) of the current cut-off of 5+ contracts) and to the exclusion of the 8 universities with imputed 2016 scores.
As shown in Table 1, there is no statistically significant difference between “frequent clients” and “occasional or non-clients” universities in the change in THE rankings from 2016 to 2021 (see also Supplementary Fig. 2 (right)), the change in THE faculty-student ratios (see also Supplementary Fig. 3 (center)), or the change in MHES faculty-student ratios (see also Supplementary Fig. 3 (right)). “Frequent clients” paid, on average, more than three times as much under their QS-related contracts as “occasional or non-clients.”
As shown in Fig. 1, the number of QS-related contracts is positively correlated with the progress in QS rankings. For every additional QS-related contract, the progress of Russian universities in QS rankings improves by approximately 10 positions. This association is statistically significant in both unadjusted and covariate-adjusted regression models (Table 2).
Finally, both difference-in-difference models indicate that frequent QS-related contracts led to substantially better progress in QS World University Rankings (Table 3 and Figs. 2 and 3). First, Russian universities with frequent QS-related contracts improved their positions by 0.75 standard deviations (approximately 140 ranks) more than the same universities would have without frequent QS-related contracts (p = 0.029; 95% CI = 0.08 to 1.42). The results are robust to variations in the sample due to the imputed missing 2016 QS scores for 8 universities. Second, the QS faculty-student ratio scores of Russian universities with frequent QS-related contracts increased by 0.94–0.96 standard deviations (approximately 14 score points) more than they would have without frequent QS-related contracts (comparison with THE ranking data: p = 0.077, CI = − 0.11 to 2.02; comparison with MHES data: p = 0.093, CI = − 0.16 to 2.04). An improvement of 14 faculty-student ratio score points—for example, from 20 to 34—represents an improvement of 174 positions in the list of 1,003 universities sorted by faculty-student ratio in the 2021 QS World University Rankings.
Discussion and conclusion
This study suggests that universities that use services provided by ranking companies more frequently may improve their positions in global university rankings over time, regardless of improvements in the institutional characteristics measured in these rankings (e.g., these universities may increase their faculty-student ratio score in rankings without an actual change in the faculty-student ratio). The findings are consistent with studies on the impact of conflicts of interest in other sectors of the economy (Duflo et al., 2013; Kashyap & Kovrijnykh, 2016; Kotchen & Potoski, 2011; Wazana, 2000) and provide the first evidence of its kind for higher education settings. The findings suggest that university ranking outcomes may be affected by rankers’ business models and resource dependencies. In its business model, QS emphasizes fee-based services for universities and governments more than any other major ranker. When a ranker’s resources heavily depend on clients that are directly interested in improving their evaluations, it becomes much harder to balance the goal of increasing profits against the interests of those who rely on the rankings for fair and impartial advice. The study illustrates that there is a considerable risk that universities providing rankers with a substantial flow of resources will be ranked higher—and, since rankings are a zero-sum game, other universities will be ranked lower.
Russian universities with frequent QS-related contracts progressed better in both overall scores and in faculty-student ratio scores. Notably, faculty-student ratio score is the most heavily weighted indicator (20% of the overall score) that can be “improved” relatively easily by adjusting the way data is reported. Despite all universities having strong incentives to minimize the number of students and maximize the number of faculty when submitting their data, QS “frequent clients” showed more improvement on this measure than their “occasional or non-clients” counterparts. This finding suggests that rankings may be distorted at the stage when rankers make decisions whether to accept or reject data on faculty-student ratios from universities and that rankers may be more likely to accept inflated scores from their “frequent clients” than from other universities.
The literature on corporate audits (Bazerman et al., 2002; Ishaque, 2021; Moore et al., 2006) provides further insights into the mechanism that may lead to distortions in rankings. As with auditors, university rankers may be vulnerable to unconscious self-serving bias when they evaluate data submitted by universities that are also frequent clients. Similarly, the same three aspects identified in auditors’ work—ambiguity, approval, and attachment—may contribute to the self-serving bias of rankers. First, there is a large degree of ambiguity in the definitions of the data that universities submit to rankers. Even seemingly “objective” indicators such as FTE students and FTE faculty can be calculated in multiple ways at the institutional level. The need for these indicators to be internationally comparable increases ambiguity by possibly compelling rankers to adopt very broad definitions of these concepts.
Second, despite rankers collecting some of their own data (e.g., academic reputation surveys), they are dependent on the data submitted by the universities. For example, for indicators such as faculty-student ratio, rankers just approve or reject the data calculated at the university level. Although data are checked for accuracy by comparing current-year data to previous-year data, rankers may use their judgment (rather than fixed criteria) to determine whether submitted data should be accepted or rejected.
Third, attachment plays a crucial role in rankers’ work (Lim, 2018). Rankers are interested in maintaining friendly business relationships with universities and thus are motivated to approve data submitted by “frequent clients.” Managers are evaluated negatively on lost clients and face pressure to accept small inaccuracies in order to grow their business (Bazerman et al., 1997). Rankers may also feel more accountable to their “frequent clients” than to the faceless prospective students who use rankings to make enrollment decisions (Bazerman et al., 2002; Moore et al., 2006). As rankers become more permissive in accepting data from frequent client universities, these universities in turn may come to learn reporting strategies that target existing loopholes in the ranking system and exploit rankers’ greater permissiveness. Taken together, these factors could contribute to rankers interpreting the data submitted by “frequent client” universities differently than data from other types of institutions.
There is one alternative explanation of the observed empirical results that is worth discussing. One can hypothesize that there is an unobserved characteristic of the QS “frequent clients” (e.g., leadership style) that led both to their frequent QS-related contracts and to their reporting of inflated faculty-student ratios. However, this alternative explanation is unlikely because such an unobserved characteristic should have affected the reporting of inflated faculty-student ratios not only to QS but also to other rankers, including Times Higher Education, which was not found here to be the case.
Self-serving bias is one potential mechanism behind the significant distortions in rankings observed in this study. It also ultimately contributes to the persistence of rankings in higher education by providing rankers with a flow of resources from prestige-hungry universities that seek immediate returns on their spending on rankers’ services. Rankings have become a very elaborate, data-intensive product that requires substantial resources to produce (Lim, 2021). To obtain these resources, rankers have established a multi-million-dollar market that allows universities to elevate their organizational status and prestige. As competition in higher education intensifies, universities seek to affiliate with intermediaries that can advance their status relative to others (Brankovic, 2018). Ranking companies have become powerful intermediaries in the higher education sector by providing higher-status universities with affirmation of their status and lower-status universities with opportunities to advance theirs (Brankovic, 2018).
Universities purchase services from rankers in expectation of immediate results because their performance in rankings is often tied to additional funding, coming from the governments and/or increased tuition. Self-serving bias helps rankers to meet these expectations by allowing universities to quickly advance in the status hierarchy without substantial improvements in institutional quality. As such, both universities and rankers benefit from the self-serving bias that further contributes to the pervasiveness and expansion of university rankings.
Self-serving bias also helps rankers to create additional opportunities to elevate institutional prestige. QS stands out among rankers for its entrepreneurial approach. Not only has the company expanded its offering of rankings to accommodate the growing demand from universities and governments (by region, subject, age, employability, cities, and systems), but it also offers universities other fee-based services to elevate their standings (e.g., QS Stars). It capitalizes upon the prestige of higher-status universities by providing a variety of services to status-seeking universities that will place them “closer to the front row.”
The findings also provide insights into the strategic actions that universities take to influence ranking outcomes. As suggested by Rindova et al. (2018), organizations may seek to influence constituents who provide their subjective assessments of organizations to the rankers. Since university rankings include reputation surveys, universities often attempt to generate more favorable rankings by influencing the nomination of survey participants. QS solicits nominations for experts from universities themselves, so universities engage in the complex task of identifying experts, contacting them, and asking for their consent to be included in the rankers’ database for the QS Global Academic Survey. Additionally, as demonstrated here, some universities take a more direct approach by contracting with rankers themselves in efforts to improve their rankings. These findings complement accounts of other types of organizations engaging proactively with rankers, as part of an organization’s attempts to influence assessment criteria and elevate their own position (Pollock et al., 2018).
Finally, findings contribute to a discussion on the effects of “excellence initiatives” in higher education and, in particular, the Russian Excellence Initiative—the Project 5–100 (Agasisti et al., 2020; Froumin & Lisyutkin, 2018; Matveeva et al., 2021; Oleksiyenko, 2021). There is some evidence that Project 5–100 was instrumental in improving research performance, academic reputation, and internationalization of Russian universities (Agasisti et al., 2020; Matveeva et al., 2021). The paper extends these accounts by discussing possible unintended consequences of using rankings as performance measures. The paper suggests that using rankings as performance measures could re-channel university efforts and resources from institutional improvement to engaging with rankers and inflating rankings scores.
The major limitation of this study is its relatively small sample (n = 28). Small samples may undermine the internal and external validity of the findings. However, larger samples are currently not available to researchers because rankers do not disclose their financial relations with universities. Because the sample includes all Russian universities participating in the QS World University Rankings in 2021, these findings may be generalizable to universities in countries that are similarly situated relative to the global higher education landscape. Another limitation of the study is that it explores only one measure of the relationship between a ranker and universities—the number of contracts—and does not consider the monetary value of these contracts. Larger contracts could potentially have a more significant impact on a ranker’s decisions. However, the available data on the monetary value of contracts do not make it possible to identify how much money went directly to QS or was spent on other services.
Further studies are needed to address these limitations. In particular, further studies could focus on determining country-specific and university-specific factors that may affect the relationship between universities, rankers, and conflicts of interest. When data on the monetary value of contracts between rankers and universities are available to researchers, further studies can explore how the size of contracts affects progress in rankings.
Notes
1. The 2016 QS World University Rankings data were provided by the authors of Selten et al. (2020). The 2021 QS World University Rankings data were downloaded from the QS website in January 2021 and matched with the 2016 data. Note that in February 2021 QS updated its website and extended the 2021 rankings to 1,186 universities (including four more Russian universities in the 1001+ category). The study uses the original dataset of 1,003 universities.
2. The 2015–2020 period in the MHES Monitoring of Universities’ Effectiveness corresponds to the 2016–2021 period in both THE and QS rankings.
References
About QS. (2021). Top universities. https://www.topuniversities.com/about-qs
Agasisti, T., Shibanova, E., Platonova, D., & Lisyutkin, M. (2020). The Russian Excellence Initiative for higher education: A nonparametric evaluation of short-term results. International Transactions in Operational Research, 27(4), 1911–1929. https://doi.org/10.1111/itor.12742
Altbach, P. G. (2012). The globalization of college and university rankings. Change: The Magazine of Higher Learning, 44(1), 26–31. https://doi.org/10.1080/00091383.2012.636001
Bailey, T. (2015). University rankings: The institutions that are paying to be good. The New Economy. https://www.theneweconomy.com/business/university-rankings-the-institutions-that-are-paying-to-be-good
Bazerman, M., Loewenstein, G., & Moore, D. A. (2002). Why good accountants do bad audits. Harvard Business Review. https://hbr.org/2002/11/why-good-accountants-do-bad-audits
Bazerman, M., Morgan, K., & Loewenstein, G. (1997). The impossibility of auditor independence. MIT Sloan Management Review, 38, 89–94.
Bowman, N. A., & Bastedo, M. N. (2011). Anchoring effects in world university rankings: Exploring biases in reputation scores. Higher Education, 61(4), 431–444. https://doi.org/10.1007/s10734-010-9339-1
Brankovic, J. (2018). The status games they play: Unpacking the dynamics of organisational status competition in higher education. Higher Education, 75(4), 695–709. https://doi.org/10.1007/s10734-017-0169-2
Brankovic, J., Ringel, L., & Werron, T. (2018). How rankings produce competition: The case of global university rankings. Zeitschrift Für Soziologie, 47(4), 270–288. https://doi.org/10.1515/zfsoz-2018-0118
Chirikov, I. (2018). The Sputnik syndrome: How Russian universities make sense of global competition in higher education. In A. Oleksiyenko, Q. Zha, I. Chirikov, & J. Li (Eds.), International status anxiety and higher education: Soviet legacy in China and Russia (pp. 259–280). Springer: CERC Studies in Comparative Education Series.
Clements, C. E., Neill, J. D., & Stovall, O. S. (2012). Inherent conflicts of interest in the accounting profession. Journal of Applied Business Research (JABR), 28(2), 269–276. https://doi.org/10.19030/jabr.v28i2.6848
Duflo, E., Greenstone, M., Pande, R., & Ryan, N. (2013). Truth-telling by third-party auditors and the response of polluting firms: Experimental evidence from India. The Quarterly Journal of Economics, 128(4), 1499–1545. https://doi.org/10.1093/qje/qjt024
Froumin, I., & Lisyutkin, M. (2018). State and world-class universities: Seeking a balance between international competitiveness, local and national relevance. In Y. Wu, Q. Wang, & N. C. Liu (Eds.), World-class universities: Towards a global common good and seeking national and institutional contributions (pp. 243–260). Brill.
Hanushek, E. A., & Woessmann, L. (2006). Does educational tracking affect performance and inequality? Differences-in-differences evidence across countries. The Economic Journal, 116(510), C63–C76.
Hazelkorn, E. (2015). Rankings and the reshaping of higher education: The battle for world-class excellence. Springer.
Ioannidis, J. P., Patsopoulos, N. A., Kavvoura, F. K., Tatsioni, A., Evangelou, E., Kouri, I., Contopoulos-Ioannidis, D. G., & Liberopoulos, G. (2007). International ranking systems for universities and institutions: A critical appraisal. BMC Medicine, 5(1), 30. https://doi.org/10.1186/1741-7015-5-30
Ishaque, M. (2021). Managing conflict of interests in professional accounting firms: A research synthesis. Journal of Business Ethics, 169(3), 537–555. https://doi.org/10.1007/s10551-019-04284-8
Jacqmin, J. (2021). Do ads influence rankings? Evidence from the higher education sector. Education Economics, 29(5), 509–526. https://doi.org/10.1080/09645292.2021.1918642
Kashyap, A. K., & Kovrijnykh, N. (2016). Who should pay for credit ratings and how? The Review of Financial Studies, 29(2), 420–456. https://doi.org/10.1093/rfs/hhv127
Kotchen, M., & Potoski, M. (2011). Conflicts of interest distort public evaluations: Evidence from the top 25 ballots of NCAA football coaches (No. w17628). National Bureau of Economic Research. https://doi.org/10.3386/w17628
Lim, M. A. (2018). The building of weak expertise: The work of global university rankers. Higher Education, 75(3), 415–430. https://doi.org/10.1007/s10734-017-0147-8
Lim, M. A. (2021). The business of university rankings: The case of the Times Higher Education. In E. Hazelkorn (Ed.), Research handbook on university rankings: History, methodology, influence and impact (pp. 444–453). Edward Elgar Publishers.
Marginson, S. (2014). University rankings and social science. European Journal of Education, 49(1), 45–59. https://doi.org/10.1111/ejed.12061
Matveeva, N., Sterligov, I., & Yudkevich, M. (2021). The effect of Russian University Excellence Initiative on publications and collaboration patterns. Journal of Informetrics, 15(1), 101110. https://doi.org/10.1016/j.joi.2020.101110
Monitoring of universities’ effectiveness. (2021). http://indicators.miccedu.ru/monitoring/
Moore, D. A., Tetlock, P. E., Tanlu, L., & Bazerman, M. H. (2006). Conflicts of interest and the case of auditor independence: Moral seduction and strategic issue cycling. Academy of Management Review, 31(1), 10–29. https://doi.org/10.5465/amr.2006.19379621
Oleksiyenko, A. V. (2021). World-class universities and the Soviet legacies of administration: Integrity dilemmas in Russian higher education. Higher Education Quarterly, 76(2), 385–398.
Pollock, N., D’Adderio, L., Williams, R., & Leforestier, L. (2018). Conforming or transforming? How organizations respond to multiple rankings. Accounting, Organizations and Society, 64, 55–68. https://doi.org/10.1016/j.aos.2017.11.003
QS Intelligence Unit | Faculty Student Ratio. (2021). http://www.iu.qs.com/university-rankings/indicator-faculty-student/
QS Intelligence Unit | Services. (2021). http://www.iu.qs.com/services/
QS Unisolution | About Us. (2021). https://www.qs-unisolution.com/about-us/
QS World University Rankings. (2021). Top Universities. https://www.topuniversities.com/university-rankings/world-university-rankings/2021
QS World University Rankings – Methodology. (2021). Top Universities. https://www.topuniversities.com/qs-world-university-rankings/methodology
Redden, E. (2013). Scrutiny of QS rankings. Inside Higher Ed. https://www.insidehighered.com/news/2013/05/29/methodology-qs-rankings-comes-under-scrutiny
Rindova, V. P., Martins, L. L., Srinivas, S. B., & Chandler, D. (2018). The good, the bad, and the ugly of organizational rankings: A multidisciplinary review of the literature and directions for future research. Journal of Management, 44(6), 2175–2208. https://doi.org/10.1177/0149206317741962
Russian portal of government procurements. (2021). https://zakupki.gov.ru/
Sauder, M., & Espeland, W. N. (2009). The discipline of rankings: Tight coupling and organizational change. American Sociological Review, 74(1), 63–82. https://doi.org/10.1177/000312240907400104
Selten, F., Neylon, C., Huang, C.-K., & Groth, P. (2020). A longitudinal analysis of university rankings. Quantitative Science Studies, 1(3), 1109–1135. https://doi.org/10.1162/qss_a_00052
Shahjahan, R. A., Sonneveldt, E. L., Estera, A. L., & Bae, S. (2022). Emoscapes and commercial university rankers: The role of affect in global higher education policy. Critical Studies in Education, 63(3), 275–290.
Shore, C., & Wright, S. (2015). Audit culture revisited: Rankings, ratings, and the reassembling of society. Current Anthropology, 56(3), 421–444. https://doi.org/10.1086/681534
Stack, M. (2016). Global university rankings and the mediatization of higher education. Palgrave Macmillan UK. https://doi.org/10.1057/9781137475954
THE World University Rankings. (2021). Times Higher Education (THE). https://www.timeshighereducation.com/world-university-rankings/2021/world-ranking
Times Higher Education Rankings Methodology. (2021). Times Higher Education Rankings Methodology. https://www.timeshighereducation.com/world-university-rankings/world-university-rankings-2021-methodology
Wazana, A. (2000). Physicians and the pharmaceutical industry: Is a gift ever just a gift? JAMA, 283(3), 373–380. https://doi.org/10.1001/jama.283.3.373
Acknowledgements
Thanks to Philip Altbach; Zachary Bleemer; George Blumenthal; Jelena Brankovic; Bridget Costello; John Douglass; Julien Jacqmin; Miguel Lim; Joshua Lin; Sergey Malinovskiy; Daria Platonova; Jake Soloff; seminar participants at UC Berkeley, HSE University Moscow, Oxford University, and the University of Hong Kong; session participants at the 46th Annual ASHE Conference; and two anonymous reviewers for helpful comments and suggestions. All errors that remain are my own.
Ethics declarations
Conflict of interest
The author declares no competing interest.