Abstract
Over the past decade, universities in the People's Republic of China have notably progressed in international rankings. Most of the existing literature on this development describes the adoption of university rankings in China as a recent import of a global institution, and as being driven by a governmental agenda that seeks to bolster the country's competitiveness and overall status on the world stage, including in the academic realm. The wider domestic environment that determines Chinese universities' participation in the global ranking competition is usually left out of the picture. As this article demonstrates, university rankings and other performance indicators have been an organic part of Chinese science and higher education policy and a prominent element in state-directed national reform and development planning processes since at least the 1980s. In addition to the crucial role of the state and a lack of university autonomy, what further distinguishes the case of China from other countries in the rankings is a strong and accepted tradition of utilizing quantification, competition, and rating as political tools. This, we argue, is another reason why Chinese universities were able to insert themselves into the ranking race relatively seamlessly and with some quick successes. Yet, after decades of following so-called "Western" standards and indicators for academic performance and reputation evaluation, domestic policy is changing again and taking a seemingly nationalist turn which may bring about some changes in the practice and significance of university rankings in China—and potentially beyond, as we discuss in conclusion.
Introduction
The People's Republic of China (PRC) has firmly established itself in global university rankings. International observers often point out the astonishing speed at which, over the past decade, a number of the country's universities have entered and climbed the most prominent league tables (see e.g. Baty, 2021; Marginson, 2022).Footnote 1 The dominant narrative that fascinates analysts of global academia is that Chinese elite universities in particular are rapidly "on the rise" in international rankings and play an ever more significant role on the world stage.
It is commonly known that continuously improving Chinese universities' position in global rankings has become an official and vigorously pursued goal for actors not only in the educational but also in the political sector in China (Perry, 2020). Yet, the political dimension of university rankings, or rather, their domestic political environment in China is usually not studied as a crucial factor. As a review of the relatively small body of literature on university rankings in China shows (see below), published research usually focuses almost exclusively on the evolution of higher education policies in general, which include the ambition to elevate national elite universities to "world class institutions" (世界一流大学), and on university management in China and how and whether it is geared to compete in the global ranking game. Strikingly, most of the existing literature treats university rankings primarily as an isolable phenomenon and as a story of rather simple diffusion (see e.g., Allen, 2017). The emergence of university rankings in China and their importance for the country are further seen as a type of local adoption, or reinterpretation, of a global model of reputation cultivation, comparison, and competition (see e.g., Rhoads et al., 2014; Marginson, 2017). In this perspective, international rankings are mainly an external requirement, a material and normative pressure which Chinese universities, and China's global ambitions overall, cannot escape (see e.g., Mok & Kang, 2021).
There are certain aspects, however, of the way in which China joined and now navigates the global ranking game that can get lost in studies that concentrate only on universities as individual organizations and on the global institutionalization and technicalities of university rankings. It is here that this article ties in with a slightly different perspective, one that focuses more on the wider socio-political context in which Chinese universities' participation in rankings takes place. We claim that this approach can add interesting nuances and contribute further to our understanding of what determines Chinese universities' presence in global rankings in the past, present, and future.
We argue that one cannot simply take the career of university rankings in China as a given, or as a natural side effect of the global commercialization of higher education. Neither is it sufficient to point to the fact that Chinese universities, once they joined the race, were merely using efficient ways of directing their impressive manpower to catch up in all regards. Instead, we propose that one needs to understand the political context in which universities are embedded in China and in which rankings evolved as an internal strategic tool for enforced infrastructural development of the wider research and education system and for resource allocation. In our assessment, this context results from: (1) the dominance of the Chinese state, including in universities and regarding their positioning domestically and globally, and (2) the ambitions and comprehensiveness of top-down policy planning in the fields of research and higher education over (at least) the past four decades. It is through such a contextualization, we contend, that a global assessment and comparison of the making and meaning of university rankings in China becomes more substantial. This contextualization also provides a basis for assessing the apparent changes in the policy and politics of approaching global rankings that are currently promoted by the Chinese leadership. Not least, the case of China can contribute some interesting stimuli for general research and theory-building on university ranking as a phenomenon in world society and complement existing empirics that are usually almost exclusively derived from the OECD context.
This article, which builds on an analysis of available primary and secondary sources in English and Chinese, is structured as follows: the next section highlights some aspects in the study of the Chinese case and in the general literature on university rankings that motivate and inform the approach of this study. We then briefly describe Chinese universities’ function in the overall state and policy structure and outline how they were increasingly put under quantitative pressure to perform. We trace how the ever more intense application of science indicators after the start of China’s reform era was coupled with the emergence of university rankings, first national and then international ones. The subsequent part looks at the practices and tangible effects of university rankings as a deliberately employed distributional and incentivizing mechanism in China over time, and ends with a brief exploration of the domestic discourse on rankings. In conclusion, we point to recent policy changes concerning the standards of academic performance evaluation as well as the shift towards a strategy of more scientific and technological self-sufficiency pushed by the PRC’s political leadership. Both appear to herald some challenges for the current practice and meaning of university rankings in China and potentially beyond.
State of the field: university rankings and their Chinese characteristics
University rankings have become a measuring device for academic performance and have transformed universities around the globe into organizations that heavily compete with one another (see e.g., Brankovic et al., 2018). Chinese universities seem to be no exception, as the small (but growing) body of international literature on this topic documents. Most works, especially in the field of higher education studies, analyze what led to China’s success with regard to the different indicators tested in the rankings, for example, the development in scientific disciplines which formed the basis for the ever-increasing publication output and recognition of Chinese universities (see e.g., Allen, 2017; Chen, 2019). Other studies point to shifting trends in university governance, including the incentive system created to enhance performance, or the attempts at making campuses in China more attractive for international students and staff (see e.g., Pan, 2009; Rhoads et al., 2014).
Another frequently investigated topic is the nominal power and, at the same time, factual vagueness of the concept of "world-class" in Chinese higher education policies (see e.g., Allen, 2021; Ngok & Guo, 2008). In addition, some observers explore why the global reputation of Chinese universities has risen so dramatically, given that there are documented cases of misconduct and reasons to "mistrust" science coming out of China, including a tendency to "fake" data and other information (see e.g., Lin, 2013). There are also suggestions for alternative rankings, which deliberately take into account these problematic issues, such as the Academic Freedom Index, in which the PRC ranks among the countries at the bottom (Kinzelbach et al., 2021). Not least, some observers conclude that in spite of the rapid rise of China's universities in global rankings to date, too many deficits in their national environment (missing control of research norms, lack of academic freedom, the current pandemic-induced closure of the country, etc.) will eventually slow and impede progress and keep them from attaining real first-class status (see e.g., Altbach, 2016; Fischer, 2021).
Although the uncommon socio-political environment for China's universities is touched upon as a variable when China's rise up the league tables and its chances of continuous success in the global competition are described, this environment is not usually studied systematically when documenting the emergence and use of rankings in China. In general, university rankings are usually seen as some type of international regime. This was the case with the THE and QS rankings, which are based on benchmarks initially defined within the academic and higher education systems (see e.g., Wilbers & Brankovic, 2021) and then utilized by commercial – often media – organizations that, as so-called "third parties," serve as the main evaluators and producers of rating tables (see e.g., Brankovic et al., 2018). More specifically, university rankings are often described as a new form of comparison and competition that emerged in an international space and then in turn affected individual countries as the hosts of the ranked universities and thereby as another measurement unit. Here, nation states are usually treated as objects of evaluation, but rarely are they (or better: their governments) also seen as co-producers or causes of such evaluations. Can this logic really hold in a case such as that of China, with its all-pervasive Party-state structure that permeates any societal organization in the country? This seems to be a question surprisingly unexplored in the relevant literature that treats universities as (world) organizations in the (global) higher education system when studying rankings.
Furthermore, according to the general sociological literature, university rankings achieve global convergence and standardization in higher education and research (Esposito & Stark, 2019; Heintz, 2010; Pfeffer & Stichweh, 2015). This facet seems to be corroborated by the integration and ascent of Chinese universities in global rankings. In the existing literature on the Chinese case, however, this diffusion often appears as one-directional, with China merely an adopter or importer of the ideas and practices of university rankings. In the words of Ryan M. Allen, China has picked up on the international trend of commensuration, meaning the "quantification of abstract ideas into smaller, easier-to-define measurements …, which often manifest into ranking structures used for direct comparison to other systems" (Allen, 2017: 1–2), which for him displays the tendencies of neoliberal research and education organizations worldwide. The story usually told is therefore rather straightforward: China was initially motivated to use rankings to measure the standing of Chinese universities in comparison to others around the world and as a push to catch up with them. Drawing on university ranking practices observable elsewhere, especially in the U.S., China thereby created the first global league table, the Shanghai Ranking, in 2003. Since then, more international rankings have been produced and have come to dominate the game, and Chinese universities became fixated on them as the embodiment of the "world-class universities" ideal and continuously tried to improve their performance in the "global reputation race" (Mok & Kang, 2021: 374).
While this is certainly what happened on the surface, to view university rankings merely as an isolated or an imported tool for China's university reforms excludes important nuances of the history of university rankings in China. Looking deeper into the political context of university rankings, by contrast, can further our understanding of the evolution of the Chinese academic system as a whole. A glimpse at the history of science and education policy-making in China quickly reveals that mechanisms of quantitative performance planning and evaluation, including listing and ranking, were in use in the PRC long before the inclusion of China in the relevant international systems of counting. The targeted use of structural information for the advancement of a systematic science policy, be it in certain scientific fields, specific institutions, or different regions, and especially for the application of related indicators for the sake of resource allocation (Hornbostel, 1997: 18), can be observed in China at least since the start of the Reform and Opening period in the late 1970s. University rankings—first domestic, then international—helped to reduce complexity and furthered the standardization and comparability of available information (Esposito & Stark, 2019; Heintz, 2010; Ringel, 2021). As such they apparently proved to be one very efficient tool for pushing China's reformed science, technology, and innovation (STI) policies.
Moreover, there also seems to be value in considering the tradition and omnipresent practices of performance quantification and ranking as a societal feature and, especially, as a political tool in China. Besides the convincing yet abstract references to Confucianism and other hints at the long historical or cultural continuities that are sometimes invoked in descriptions of the Chinese system of higher education and research (Cao, 2014b; Marginson, 2011, 2016; Perry, 2020), further attention should be given to the enduring practices of comprehensive planning and disciplining, to which universities, as public institutions, have been subjected ever since the founding of the PRC (Han & Xu, 2019; Schulte, 2019).
Finally, it is a common assumption that there is limited observable controversy about university rankings in China, as in Asia generally (Stichweh, 2023). Debates like those in Western contexts, which often build on the notion that the quantification of the performance of research and teaching institutions also represents external control over them and a challenge to their autonomy, are apparently nonexistent in China. Whether this is because of the omnipresence of performance evaluation and the state per se, the dominance of a techno-nationalist ideology (Greenhalgh & Zhang, 2020) that encourages striving and competition, or simply a lack of a comprehensive public discourse on this topic, remains largely unexplored in the literature.
Adding perspectives: the role of the state in the global institutionalization of rankings
Although not able to address all of the puzzles and gaps just highlighted, in this article we will analyze the position of universities in China vis-à-vis the state and its pervasive system of performance planning and evaluation and thereby explain the evolution and application of university rankings in the country. More than just presenting a case study, we demonstrate that our analysis can also supplement general approaches to hypothesizing and theory-building about university rankings, not least, by highlighting the value of contextualizing the wider societal embedding of this practice in the modern systems of higher education and science. The case of China points to the peculiar and strong role of the state in encouraging and steering rankings. This may be a feature that is quite naturally expected in a one-party autocracy, but it may be worth examining the state factor also in other settings. It appears that even in OECD countries a state’s ambition to nudge, steer, or utilize rankings—and the overall societal attitude towards this ambition—differs widely (see e.g., Hazelkorn, 2009).
Telling the story of university rankings in China as one in which they are a political tool for governance and technocratic policy-making in the fields of science and education rather than a game played by individual universities on a neoliberal global higher education market (Lynch, 2014) can also contribute to sociological analyses at a more macro level. As was hinted above, this story speaks to the literature on science as a global function system and the university as a world organization (see e.g., Stichweh, 1996), and the global diffusion and isomorphism of higher education systems in the world polity (see e.g., Meyer et al., 1992). Both strands in the literature claim that, parallel to the diffusion and growing similarity of representations of global ideas and norms, there also exists internal differentiation of these global systems, brought about, for example, by the local variations of universities as bearers of these differences. Literature on the national characteristics of science and higher education in China has in fact described the differences and interactions of the national and the global system and traced, for instance, how China uses its "global engagement to build national capability as well as vice versa" (Marginson, 2022: 907). As can be shown, China produced its national STI statistics, including university rankings, in close accordance with leading international examples, adapting them to local conditions and developing them further (Christmann-Budian, 2013). Whether this is to be interpreted simply as "diffusion" or as more sophisticated "innovation" (Bound et al., 2013) is probably a question of theoretical preference.
It seems at least equally interesting, considering the characteristics of the Chinese case summarized above, to ask what happens when parameters for performance measurement developed for an autonomous system of science (and relatively autonomous universities) are deliberately imported into a context in which this basic requirement seems to be absent. Will this in the long run prove a reductio ad absurdum, or does the Chinese adoption of university rankings, and its global repercussions, have the potential to significantly alter the original idea and acceptance of international academic league tables altogether? While it goes beyond the limits of this publication to explore these questions to the fullest, a few related observations will be presented in the last part of this article.
Universities and the ubiquitous state structure and planning in China
Control and contribution
In studies of university rankings in other contexts (Brankovic et al., 2018; Yudkevich et al., 2019), universities are usually treated as individual (and autonomous) actors that possess and display considerable agency with regard to joining and navigating the ranking game. Different from most of their global peers, however, universities in China not only depend heavily on the state; they are also intrinsically entangled with it in terms of their internal organization and overall socio-political structural embedding. Beyond providing crucial financial resources as in most other countries, the Chinese state interacts in various ways with a university's organization. As Han and Xu (2019) comprehensively describe, the state's main instruments include first and foremost the so-called "dual governance" of the university, which means that the Communist Party upholds a structure in the university mirroring all crucial administrative levels and bodies and retains ultimate decision-making power at all these levels.Footnote 2 Further instruments include the involvement in the appointment of leadership positions, the centralized planning of student access to higher education through a nation-wide entrance exam and ensuing selection mechanisms and uptake quotas co-administered by the government and universities, as well as discipline structures and a monitoring of the curriculum and organization of ideological and military training on campus (see also Doyon & Tsimonis, 2022; Xu et al., 2021; Sui, 2019). Despite commercialization and internationalization processes that Chinese universities went through in recent decades (Mao & Yan, 2015), these Party-state entanglements with the university organization never ceased; at most, the state's influence shifted from direct to partly indirect forms (Han & Xu, 2019: 941–942). As part of this, universities are now steered by an incentive structure created by the government's policy, the Party's mandates, and the overall status and function assigned to them within the country's higher education and science landscape (Marginson, 2016; Pan, 2009; Schulte, 2019).
Occupying a peculiar position, as described meticulously by Cao (2014a), Chinese universities are part of an "institutional division of labor" and an integral element of the larger infrastructure of science, research, and higher education under heavy governmental steering. Especially when it comes to scientific research, this implies that organizations, including universities, are given a mission, "designated … usually by the Party-state" (p. 120). As a result, a "rigid, hierarchical institutional structure" exists in China's science system, with a few "key" (重点) universities—besides the Chinese Academy of Sciences institutes—as the elite core, which are also the focal points for the allocation of research funding and mission-oriented research (Cao, 2014a: 121).Footnote 3
With the launch of the Reform and Opening era and the new focus on the performance and productivity of research and tertiary education, central political planning for and the steering of universities fully took off. For most of the government's important programs in the science and higher education sector, this included detailed guidelines, down to the level of the individual university, in terms of regulations for program planning, personnel evaluations, budget allocations, etc. (Christmann-Budian, 2013). Getting every institution on board with the new development plans in the 1980s was a complex task for the government, as the previously existing guidance and structures to promote scientific research within the higher education sector were highly fragmented (Saich, 1989: 73–74). A centralized technocratic science administration in Beijing was tasked with bringing the new plans to fruition. The government's increasing funding since the mid-1980s first focused almost exclusively on the "key universities." Subsequent choices for an expansion of the programs were "based on the track record of the universities and the quality of their staff members" and expected to maximize the returns on investments (Saich, 1989: 74).
For this purpose, science statistics and university rankings were crafted (see below) to create the ability to analyze and compare scientific and educational output down to specific levels. At the same time, the central administration, in accordance with the established practice of comprehensive planning and steering in Chinese politics (Naughton, 1995), set numerical targets concerning the expansion of the structures of research and teaching and their output, to be achieved within specific timeframes. Regular performance assessment thus became immediately linked to a mandate for performance. Nonetheless, reform era planning and the steering of universities in China became different from the older, static Soviet-style practices of a command economy with unrealistic goals, rigid guidelines for implementation, and few means to fully assess the (failure of) outcomes (Heilmann & Melton, 2013). More technocratic and sophisticated data and plans came to be developed and employed, and the leeway and discretion of the relevant actors regarding how to achieve the set targets grew. Altogether, quantitative planning and performance evaluation with room for localized concretization and adjustments, a backbone of reform era policy-making in China (Landry, 2009), was ever more intensely and aptly applied by the government in the field of STI policy and thereby also for institutions of research and higher education (State Council of the People's Republic of China, 2006; Suttmeier, 1989; Zhi and Pearson, 2017).
Gathering data for ranking research and education organizations in China
The government's need for a better foundation for the assessment and steering of China's research and education capacities contributed significantly to the rapid development of a number of science indicators in China throughout the 1980s. The Chinese government adopted the general global trend of science quantification, emulated models from around the world, and learned swiftly from the recommendations of the relevant international organizations, such as the OECD or UNESCO (Drori et al., 2003: 108 ff.). This trend only accelerated as science and technology development came to be regarded as the undergirding of China's economic competitiveness, including the promotion of its version of a "knowledge economy" and a National Innovation System (NIS). A key (and centrally promoted) milestone in the process of measuring the Chinese science system was the introduction of the statistical series "Science-Technology Indicators" (科技指标; see e.g. Keji Zhibiao, 1990). Since 1990, the indicators have been published every two years by the National Research Centre for Science and Technology for Development (NRCSTD, 中国科学技术促进发展研究中心; now CASTED, the Chinese Academy of Science and Technology for Development), a specialized entity under the Chinese Ministry of Science and Technology (MoST) and the Chinese government's most relevant think tank for science policy surveys and related statistics. Based on the OECD's original set of indicators, the Chinese "Science-Technology Indicators" came to represent the most comprehensive form of science and technology statistics in China, integrating data collected from all relevant government departments and organizations. Ever since, China's government science statistics have evolved in parallel with the international development of science and innovation indicators, but their general weighting as well as their application have changed over the past decades. In 2011, CASTED developed the annual National Innovation Index Report. The report was not only modelled on the Global Innovation Index published by the World Intellectual Property Organization (WIPO) in terms of its name, but it also produced rankings of research organizations based on an analysis of multifactorial data concerning China's innovation potential. Against this background, the field of scientometrics in China has grown significantly in scope and significance since the 1990s.
As part of this general trend to quantify and rate performance in the STI sector, university rankings developed in China in various forms. National university rankings had formally been issued in the USA since 1983, in the form of the "U.S. News & World Report" ranking (Wilbers & Brankovic, 2021). In 1987, the Beijing-based Science and Technology Daily, the official newspaper of the Chinese MoST, also began to publish designated university rankings (大学排名). These rankings comprised about 90 universities nationwide.Footnote 4 Shortly after the turn of the millennium, there were already around 100 university rankings in China, issued by about 20 institutions. In 2003, the first Academic Ranking of World Universities (ARWU) was created by Shanghai Jiaotong University (Huang, 2015), ostensibly upon governmental initiative. It was based on six indicators meant to capture research performance: the number of alumni and, counted separately, faculty members with Nobel Prizes or Fields Medals; the number of researchers with high citation scores according to Thomson Reuters; the number of articles published in Nature or Science; the number of articles in the Science Citation Index Expanded and Social Sciences Citation Index; and the institution's per capita performance. Thus, unlike other rankings then and now, the ARWU relied primarily on a quantitative assessment of research capacity metrics rather than on qualitative peer review (Allen, 2017). The ARWU was the world's first continuous global ranking of universities (Marginson, 2014), although the THE-QS World University Ranking, initiated in 2004, soon received more worldwide attention.
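To illustrate how a composite league-table score of this kind is typically aggregated, the following is a simplified sketch rather than a reconstruction of the exact ARWU calculation: the weights shown correspond to the commonly reported ARWU weighting of its six indicators (10% alumni awards, 20% staff awards, 20% highly cited researchers, 20% Nature/Science articles, 20% indexed articles, 10% per capita performance), and normalizing each indicator to the top-scoring institution condenses the published procedure.

\[
S_i \;=\; 100 \sum_{k=1}^{6} w_k \, \frac{x_{ik}}{\max_j x_{jk}}, \qquad (w_1, \ldots, w_6) = (0.10,\ 0.20,\ 0.20,\ 0.20,\ 0.20,\ 0.10),
\]

where \(x_{ik}\) denotes institution \(i\)'s raw value on indicator \(k\) and \(S_i\) its composite score; in the published tables, total scores are then typically rescaled so that the best-performing institution receives 100.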
Over time, China's main ambition thus changed from domestic data collection and planning to an orientation toward, and emulation of, internationally advanced organizations and, finally, to creating a matrix to gauge and push the global competitiveness of Chinese universities. That is why China needed international rankings and not merely national ones. An overview of the quality of the world's universities also came in handy for Chinese students eligible and able to study overseas, which was promoted on a large scale with generous support from the Chinese government (Zweig, 2018). Furthermore, what stood out was the deliberate, state-led, top-down enforcement of the scaling up of STI measurement and university rating in China. Unlike in North American and especially European debates (Sigurdson, 2004: 6–17), there was little to no public discussion of the pros and cons of alternative qualitative assessment methods, such as peer evaluation. Until recently, there had been no input or open dissent from any subordinate units in the science policy hierarchy or from the science system itself. The adoption of global key performance indicators and rankings for science and higher education occurred somewhat eclectically in China and was mostly limited to technicalities, without a simultaneous reflection on the broader underlying principles that guided these practices in other contexts.
Making rankings work (for state policy) in China
Rankings, rewards, and the politicized stratification of universities in China
The orientation towards international scientometric indicators and rankings, combined with a national system of performance evaluation and rewards, introduced a remarkable dynamic into the development of Chinese universities over the past decades. Rankings and quantitative indicators gradually became more decisive for individual universities' rise into, or fall from, political grace in China, as is reflected in various (elite) funding programs. Counting and rating therefore crucially assisted the state's distribution of resources and incentives for development.
"Project 211," launched in 1995, was the first major central government program of the reform era to promote Chinese higher education and bring it to the international level—notably at a time when the ARWU did not yet exist. According to criteria that were not made transparent, around one hundred Chinese universities were initially selected for preferential funding through this program. These universities were to be developed into top universities of "international standard" by the turn of the century. The chosen universities were mostly concentrated in Beijing, Shanghai, and the eastern coastal region of the country, and they were expected to achieve high international standards in both teaching and research and thus serve as role models for other universities in China (Staiger, 2002). Furthering this effort, the "985 Program" followed in 1998, focusing on a smaller group of circa 30 universities, selected from among the approximately one hundred 211 universities, that were to become "world leading institutions." The selection process proved to be quite complex, and while there were openly communicated selection criteria, e.g., the ability to demonstrate research achievements through awards won, third-party funding, and modern management structures, it is reported that regional interests and power-political wrangling also played a role (Staiger, 2003).
In 2009, authorities officially declared a so-called “China Nine (C9) League.” Representing the top echelon of the 985 program, these universities were to continuously receive governmental support based on their leading positions in domestic higher education rankings as well as, essentially, on the attainment of more prominent places in the (now existing) international rankings. Thereby attracting increasing attention beyond China’s borders, they were also supposed to form a network as China’s “Ivy League” and recruit the world’s most talented students. Quickly, these universities boosted their scientific output, especially in terms of publications, further improving their positions in international university rankings. The government’s intense investment in the formation of high-performing elite institutions apparently paid off.
To further deepen previous efforts to join the world's top universities, the Chinese government in October 2015 launched another program, the "Double First-Class Initiative" (双一流) (Tan et al., 2017). This program combined two tracks: an institutional track for elite universities ("World First Class University," 世界一流大学, comprising the majority of the institutions of the former 985 program), and a "sub-scheme" which concentrated on the promotion of leading disciplines or research areas ("First Class Academic Discipline Construction," 一流学科建设) (Zhao, 2018).Footnote 5 This combined approach was reminiscent of the German "Excellence Initiative," which started in 2005 and was closely observed by China (Liu et al., 2019). Although the responsible central authorities, the Ministry of Education (MoE), the Ministry of Finance (MoF), and the National Development and Reform Commission (NDRC), again did not provide detailed information regarding the selection criteria and process for the Double First-Class Project, it was reported that both domestic expert evaluations and the most well-known foreign world university rankings were considered (Zhao, 2018). Interestingly, however, when the second phase of the Double First-Class program was announced at the beginning of 2022,Footnote 6 the government publicly emphasized again that rankings would not play a role in selecting the additional seven universities for special funding in this round (Sharma, 2022a). This was probably because these universities were chosen strategically to improve the spatial and structural distribution of the program across the country. What is more, after the initial launch of the program, there was some critical debate around international rankings as a benchmark, as we discuss below.
Altogether, Chinese universities steadily rose in international university rankings after 2000, and those that belonged to the most elite domestic institutions were also the most successful globally. Although other factors may ultimately have played the most decisive role in the state's selection for preferential support and promotion of certain universities,Footnote 7 references to global measurement standards were used to confer an "aura of legitimacy" and objectivity (Perry, 2020: 14) to this strategy. The highly centralized mechanism of resource distribution thereby quickly yielded a "Matthew Effect" (Merton, 1973: 439 ff.) with Chinese characteristics: strong universities that improved their ranking positions also received more resources and were thus likely to further increase their output and attain new evaluation and ranking successes. The practices of measuring, ranking, and modeling therefore tangibly reinforced macro-structural asymmetries in China, such as the traditional imbalance in favor of the eastern Chinese regions (Borsi et al., 2022). Only recently were other criteria introduced to gradually counteract extreme rating and the over-concentration of resources in the Chinese university landscape.
Assignment of ranking targets and cascading compliance
The Chinese government not only used metrics and rankings for its internal decision-making and resource distribution, but it also defined targets for universities on how to perform in international rankings, continuously pushing them to adapt their internal structures to these goals. While in the first two decades of reforms the focus was mainly on building up excellent national infrastructures for scientific research and higher education, the logic of cascading state-led planning and support, fueling of competition, performance evaluation, and rating was subsequently adopted in full as the strategy to become a global science power with world-class universities (Rhoads et al., 2014). The benchmarks for that were set mostly externally, and "in return for massive state financial investment, the universities introduced an elaborate system of evaluation and compensation" (Perry, 2020: 14) for the purpose of meeting these criteria. To an ever-increasing extent, the government incentivized Chinese universities to outdo one another in the domestic competition for the most rank-able output and for fulfilling the state-set goals on a global scale.
A glance at publicly available information, including policy plans and evaluation guidelines at the subnational level or universities' development outlines, indeed shows how pervasive this structure is today: provincial governments set specific goals to have a certain number of universities achieving Double First-Class status and a certain number of university-based disciplines entering World First-Class status (as designated by the central Ministry of Education) by 2030 (see e.g., Sciping, 2019a). Universities list international ranking placements as achievements and ambitions, but some also formulate specific targets, such as Northwestern Polytechnical University (西北工业大学) in Xi'an, which writes in its development plan that it "aims at having the school overall ranked among the top 100 worldwide, while promoting the disciplines of aviation, aerospace, and navigation sciences to be ranked among the top ten worldwide by 2050" (Sciping, 2019b). And although it is difficult, if not impossible, to retrieve information on the detailed criteria, mechanisms, and effects of these various internal procedures, it is clear that the government evaluates universities and university leaders, that the organizations themselves establish internal evaluation practices, and that all this vigorously drives universities to perform in rankings. Thus, while Chinese universities are pushed to become aligned with global principles of higher education governance and research collaboration, domestically, comprehensive political scientometrics and multi-level performance evaluation and rewards bind the universities closely to the state's STI policy structures.
Apparently, universities in China have no choice but to comply with this extremely pervasive performance evaluation and streamlining mechanism. Beyond the material factors involved, an interesting but understudied precondition for the rapid adoption of world rankings in China and compliance with all its consequences might, arguably, also be found in the striking prevalence of statistics, evaluation, and competition in Chinese government and politics, if not in society overall (see e.g., Liu, 2009). Facets of politically utilized rating can be found throughout history, in the structures of China’s imperial bureaucracy, the mechanisms of the PRC’s planned socialist economy after 1949, as well as in China’s contemporary modes of digitalized technocracy (see e.g., De Weerdt, 2007; Kipnis, 2008; Kostka, 2019). Consequences of these evaluations in all domains of life can include promotion or demotion as well as sometimes public praise and awards or “shaming” (Mei & Pearson, 2014). Frequently, this includes league tables of those evaluated that are at times publicly displayed,Footnote 8 which only strengthens the “fame or blame” mechanism involved. However, while the general practice of evaluations, ranking and rewarding as such seems to be largely accepted in China as a political and social-engineering tool, in recent years more and more debates have emerged about the unintended and detrimental consequences of the ranking hype and the criteria applied in academic performance assessments.
Consequences and controversies: current calls for more qualitative and eventually “Chinese” indicators
University rankings' global importance for the continuing production of reputation and competition (Ringel et al., 2021) is met by recurring debates about their scientific and practical validity. In the case of China, where decisions on governmental funding for universities are also dependent on their success in international rankings, as described above, criticism has recently grown louder as well. The forced ambition to rank as highly as possible and to rapidly improve a university's position in these league tables, it is argued, has produced tunnel vision among actors in China and, worse, encouraged large-scale data manipulation for this purpose (Wang & Guo, 2019).
One example is the criticism of bibliometric methods, in particular the focus on citation rates in journals with a high impact factor and their manipulability, not least by universities. For example, in a 2004 study, it was already stated that Chinese "PhD students are expected to publish at least one article in a journal listed in Thomson's Science Citation Index, the main citation database" in use (Wilsdon & Keeley, 2007). Universities paid monetary bonuses to staff for articles published in top international journals (MIT Technology Review, 2017). It is also well documented that a huge market for ghostwritten or "artificial" scientific publications and forged certificates of all kinds as well as citation cartels emerged in China (Hvistendahl, 2013; Qiu, 2010: 143). The pressure to meet these numerical requirements at—seemingly—all costs began to affect all areas and institutions of the Chinese scientific system. For instance, a study by the China Association of Science and Technology in 2009 already revealed that, among the around 30,000 scientists surveyed, more than half of the respondents were aware of offenses in the context of plagiarism and other variants of fraud concerning scientific publications, 43% considered scientific misconduct in China to be serious, and around 30% expressed understanding for this kind of maneuvering due to the tough requirements in the domestic science system (Chen, 2009). Sociologist Cao Cong even states that, caught between the hypercompetitive system and the political rules at home and international principles of science and research ethics, "Chinese scientists have been confused and frustrated as to what norms or values they are expected to observe" (2014a: 150).Footnote 9
As a reaction to these developments, since around 2010, a more critical debate has evolved in academic circles and among the political elite in China. Interestingly enough, the debate became more public when it was announced that results of international university rankings would be utilized for the selection of universities for the Double First-Class/C9 program in 2015/16. Altogether, the Janus-faced nature of the government's strategy and universities' compliance came into focus: On the one hand, rankings of different universities were considered informative and a convenient yardstick in the implementation of the elite university programs. On the other hand, Chinese policymakers, including former premier Wen Jiabao (cit. in Cao, 2014b: 157) and now CCP secretary general Xi Jinping, proclaimed that quantification and international rankings should be heeded but not overly relied upon. More important than numbers, Xi recently remarked, is the image of the university in people's minds, which must be built up gradually (Wang & Guo, 2019).
Notably, decisions in the earlier phases of Chinese science and higher education policy also came into question again, including the overreliance on science indicators in general. As a first step, policies for scientific performance evaluation issued by the MoST and the MoE took a new direction, spelled out in two documents published in 2020.Footnote 10 The ministries' announcements propagate a return to very vague "original academic goals," which is supposed to supersede the decades-long focus on quantitative performance benchmarks. The policy guidelines emphasize that a balance between internationalization and global cooperation on the one hand and domestic requirements and local relevance on the other ought to be the focus of Chinese research and higher education policies from now on. Somewhat more concretely, the documents call for the strengthening of qualitative peer review instead of one-dimensional, macro-level quantitative evaluation of scientific outputs. In particular, the "worshipping of SCI" and impact factors should end, and publications ought to be evaluated qualitatively and in a limited number, for example, when applying for positions or funding (Zhang & Sivertsen, 2020).Footnote 11 Cash-for-publication practices by universities ought to be abolished. In addition, the guidelines call for greater use of Chinese citation indexes (e.g., the "CSCI") and suggest that publications in the Chinese language and by Chinese publishers should be given more weight (Li, 2020).Footnote 12 Also in 2020, some new study centers were established around the country with a mission to develop explicitly Chinese rankings in order to increasingly decouple the evaluation of Chinese universities from "foreign standards," especially in the humanities and social sciences—one of the most visible of such institutions is the Evaluation Research Center at Renmin University in Beijing (中国人民大学评价研究中心).Footnote 13
Furthermore, at the time of writing this article, the MoE suddenly announced that Renmin University, Nanjing University, and Lanzhou University would no longer participate in overseas rankings, nor would they provide data to ranking agencies anymore (Sharma, 2022b). While the news immediately attracted a lot of media attention and appeared to be a major change of course, there has been no follow-up development, and the three universities concerned are, in fact, cases which have recently been rather unsuccessful in improving their international standing. The move can therefore be seen as merely a way of protecting them from the aforementioned public "shaming" game, but it could also be a way of testing the waters for future shifts in this regard. Not least, official statements also stressed how particularly and valuably "Chinese" these institutions are and how little these "qualities" could be reflected by imported assessment criteria (Sohu, 2022).
It remains unclear so far to what extent the new evaluation guidelines will be implemented on the ground and whether the impetus for more independence from "Western" rating standards will grow stronger, but it will certainly be worth following what these tendencies could potentially mean for China's status in university rankings, or better, for the gathering of the data necessary for producing global rankings. Is it a normative initiative to announce the end of "publish or perish"? Would it result in China detaching from the citation indexes, the most renowned science awards, and other indicators that are the basis for defining a university's ranking status? Could it ultimately lead to an alternative Chinese model of scientific performance evaluation? And would this represent real diversification or just a tilting toward nationalism? While it is too early to answer any of these questions, a significant turn away from the conventional standards of performance evaluation and international ranking within the Chinese research and higher education bureaucracy and the academic community still seems rather unlikely in the short to mid-term future.
Conclusion
In this article, we studied the emergence and utilization of university rankings in the PRC in relation to their wider societal, especially political, environment. By taking this broader perspective, it becomes clear that the career of university rankings in China is fundamentally interrelated with the state's top-down science and innovation policies and politics and not only an imported measuring device for international comparison in the higher education sector. From the start of the reform era in the 1980s, China adopted quantitative benchmarks for scientific performance measurement already in use internationally and integrated them into a highly centralized domestic political structure of which universities are a core part and which was essentially steered by performance targets and assessments. Chinese universities thus became subject to dual evaluation, globally and domestically. Against the background of a long tradition of quantitative evaluation procedures and fierce competition for scarce public resources in the PRC, this created a forceful incentive structure pushing Chinese universities to participate in the global ranking game. The rise and consolidation of Chinese institutions in the most prominent rankings in recent years is also the result of the intensive and targeted efforts of Chinese science policy strategists and university managers to promote precisely the methods and indicators required for success in these rankings.
Despite sporadic critical debates, science indicators and rankings continue to be excessively instrumentalized in China. Public actors can be assumed to be socially accustomed to a sort of ubiquitous quantitative performance measurement. Criticism in China therefore does not usually address the principle of ranking, or the resulting elite formation and extreme stratification, per se. Debates center instead on questions of method and on whether the indicators used should become more locally adjusted and "relevant for China" and should challenge the predominant "Western" standards employed in the most popular current rankings. However, such an alternative ranking model has not yet evolved, and for the time being a highly functional and accommodating approach to international university rankings persists in China.
Generalizability and inspiration for future research
From a macro view, we would further contend that the case of China displays interesting divergences from the common description of the global diffusion and local function(ing) of rankings. Outside of China, rankings emerged (or were adopted) more organically and in dialogue with different actors, bridging the commercial, political, and academic fields, and a certain mistrust towards (national) state involvement usually prevails (see, for example, the history of the US case; Wilbers & Brankovic, 2021). In this regard alone, the practice of rankings in China differs markedly.
More explicitly, studying the Chinese case, we have identified the following decisive factors in our analysis of how university rankings are set up nationally and where, arguably, crucial distinctions can be located: (a) the structure of the science system and in particular the degree of university autonomy, (b) the degree to which science and higher education policy (even policy making in general) is tied to quantitative benchmarking, and (c) the choice by policy makers between nationally designed rankings and rankings produced elsewhere. Future comparative research should reveal the configuration of these factors in other contexts and countries, ideally beyond the OECD world, to test how unique or extrapolatable our observations of the case of China are in this respect.
So far, the greatest divergence of the Chinese case can be found in the dominant role of the government and its deliberate and strategic interweaving of domestic policy goals with the benchmarks and pressures that are produced by an orientation toward the established international ranking scene. While in the usual story of rankings, universities play the main part, in the Chinese case, the state takes on this role. The latter can be described, as we have attempted in this article, as an active subject that is involved in university ranking for domestic policy purposes and has chosen to partly outsource this work to international ranking vehicles. It remains to be seen whether this decision will be reconsidered soon under the influence of increasingly isolationist tendencies among some relevant actors in China. As determined as the government was in deciding two decades ago that Chinese universities should participate in global rankings, it may soon decide that they should withdraw from them—or, at least, become more assertive and selective in their participation.
Interestingly, against the background of increasing geopolitical tensions and discourses of “decoupling,” there seems to be a tendency, not only in China but also in other autocracies, to (at least nominally) challenge the current international academic system and, in particular, some of its practices of producing comparability, reputation, and competition. For example, the government and parts of the academic elite in Russia are re-framing the Russian Federation’s recent exclusion from the European Bologna Process, a university cooperation and credit transfer system, after its invasion of Ukraine, as a development they very much welcome because it would boost the return to traditional national values and structures in education and research and, as it is emphasized, end the country’s subjugation to foreign assessments (Vorotnikov, 2022; Forschungsstelle Osteuropa, 2022).
The Chinese case is therefore a fascinating topic for further in-depth study, as well as one that could inspire future comparative work on the re-calibration of university rankings as one representation of the increasingly dynamic global systems of science and higher education in the twenty-first century.
Notes
Since global rankings became widely accepted in the early 2000s, and China set up the Shanghai World University Ranking in 2003, the number of Chinese elite universities in the global top 100 has at least doubled in the two most prominent rankings outside of China, the Times Higher Education (THE) and the Quacquarelli Symonds (QS) World University Rankings (from only two, or three respectively, to now six).
Especially since general secretary Xi Jinping's recent efforts to strengthen Party structures across all societal domains, this again means that the Party body always trumps the corresponding administrative unit within the organization and that Party functionaries should always have the last word in decision-making processes (CCP Central Committee, 2021).
Before the reform era, research and education were two separate domains, with universities being responsible only for the latter, until "academic standards rose and research became part of the graduate curricula of institutions of higher education in the early 1980s" (Orleans, 1989: 110).
For some more details on these earlier versions of domestic rankings, see e.g., Yang (1998).
At the level of disciplines, placements in the international Essential Science Indicators (ESI) ranking are mentioned as the main benchmark in relevant Chinese policy documents.
A more in-depth examination of, for example, the application of the Chinese state’s Science and Technology Indicators to university science in domestic ratings over the course of several years revealed inconsistencies that make the informative value of this data source for Chinese scientific performance evaluation appear somewhat dubious in some areas (Christmann-Budian, 2013: 223 ff.).
See also Gao Xuesong and Zheng Yongyan’s (2020) more recent, fascinating study of this dilemma in the Chinese social science and humanities disciplines, with a special focus on the role of rankings.
See MoE, 2020: http://www.moe.gov.cn/srcsite/A16/moe_784/202002/t20200223_423334.html; and MoST, 2020: https://news.sciencenet.cn/htmlnews/2020/2/436125.shtm.
More specifically, indicators based on the Web of Science should not be directly applied in evaluation and funding at any level anymore (Li, 2020).
Earlier, it was already reported that the CCP encourages Chinese universities to treat domestic political consultancy work and policy papers, as well as ideology-promoting and “politically correct” media articles by scientists and scholars, as equivalent to academic publications in career evaluation procedures (Sharma, 2017).
See the center’s website and an explanation of its mission (in Chinese) here: http://erc.ruc.edu.cn/gk/zxjj/index.htm.
References
Ahlers, A. L. (2014). Rural policy implementation in contemporary China: New Socialist Countryside. Routledge.
Allen, R. M. (2017). A comparison of China’s “Ivy League” to other peer groupings through global university rankings. Journal of Studies in International Education, 21(5), 395–411. https://doi.org/10.1177/1028315317697539
Allen, R. M. (2021). Commensuration of the globalised higher education sector: How university rankings act as a credential for world-class status in China. Compare A Journal of Comparative and International Education, 51(6), 920–938. https://doi.org/10.1080/03057925.2019.1686607
Altbach, P. (2016). China’s glass ceiling and feet of clay. University World News, 19 February, https://www.universityworldnews.com/post.php?story=20160217143711361.
Baty, P. (2021). Asian universities are on the rise. This is what it means for the rest of the world. World Economic Forum, 8 July, https://www.weforum.org/agenda/2021/07/asian-universities-on-the-rise-education-rankings-learning/.
Borsi, M. T., Valerio Mendoza, O. M., & Comim, F. (2022). Measuring the provincial supply of higher education institutions in China. China Economic Review, 71, 101724. https://doi.org/10.1016/j.chieco.2021.101724
Bound, K., Saunders, T., Wilsdon, J., & Adams, J. (2013). China’s absorptive State: research, innovation and the prospects for China-UK collaboration Project Report. Nesta.
Brankovic, J., Ringel, L., & Werron, T. (2018). How rankings produce competition: The case of global university rankings. Zeitschrift für Soziologie, 47(4), 270–288. 10/gk6g3v.
Cao, C. (2014a). China’s scientific elite. Routledge.
Cao, C. (2014b). The universal values of science and China’s Nobel Prize pursuit. Minerva, 52(2), 141–160. 10/f55sns.
Chen, J. (2009). ‘Disgraceful’ researchers chastise their peers. China Daily, 11 July, http://www.chinadaily.com.cn/cndy/2009-07/11/content_8414502.htm.
Chen, X. (2019). High monetary rewards and high academic article outputs: Are China’s research publications policy driven? The Serials Librarian, 77(1–2), 49–59. 10/c9b6.
Chinese Communist Party (CCP) Central Committee. (2021). 中国共产党普通高等学校基层组织工作条例 (Regulations on the work of grass-roots organizations of the Communist Party of China in general higher education institutions), rev. 22 April, http://www.gov.cn/zhengce/2021-04/22/content_5601428.htm.
Christmann-Budian, S. (2013). Chinesische Wissenschaftspolitik seit den 1990er Jahren [Chinese science policy since the 1990s]. Dissertation, Free University of Berlin, https://refubium.fu-berlin.de/handle/fub188/6423.
De Weerdt, H. (2007). Competition over content: Negotiating standards for the civil service examinations in Imperial China (1127–1279). Brill.
Doyon, J., & Tsimonis, K. (2022). Apathy is not enough: Changing modes of student management in post-Mao China. Europe-Asia Studies, 74(7), 1123–1146. https://doi.org/10.1080/09668136.2022.2089349
Drori, G. S., Meyer, J. W., Ramirez, F. O., & Schofer, E. (Eds.). (2003). Science in the modern world polity: Institutionalization and globalization. Stanford University Press.
Esposito, E., & Stark, D. (2019). What’s observed in a rating? Rankings as orientation in the face of uncertainty. Theory, Culture & Society, 36(4), 3–26. https://doi.org/10.1177/0263276419826276
Fischer, K. (2021). Nationalism revived: China’s universities under president Xi. In J. A. Douglass (Ed.), Neo-nationalism and universities Populists, autocrats, and the future of higher education (pp. 160–201). Johns Hopkins University Press.
Forschungsstelle Osteuropa [Research center for Eastern Europe]. (2022). Der Bologna-Prozess in Russland nach Beginn des russisch-ukrainischen Krieges [The Bologna Process in Russia after the start of the Russian-Ukrainian War]. Russland-Analysen [Russia analyses], 422, https://www.laender-analysen.de/russland-analysen/422/der-bologna-prozess-in-russland-nach-beginn-des-russisch-ukrainischen-krieges/.
Gao, X. A., & Zheng, Y. (2020). ‘Heavy mountains’ for Chinese humanities and social science academics in the quest for world-class universities. Compare: A Journal of Comparative and International Education, 50(4), 554–572. https://doi.org/10.1080/03057925.2018.1538770
Greenhalgh, S., & Zhang, L. (Eds.). (2020). Can science and technology save China? Cornell University Press. https://www.jstor.org/stable/10.7591/j.ctvq2w1d
Han, S., & Xu, X. (2019). How far has the state ‘stepped back’: An exploratory study of the changing governance of higher education in China (1978–2018). Higher Education, 78(5), 931–946. https://doi.org/10.1007/s10734-019-00378-4
Hazelkorn, E. (2009). Attitudes to rankings: Comparing German, Australian and Japanese experiences. In S. Kaur, M. Sirat, & W. G. Tierney (Eds.), Quality assurance and university rankings in higher education in the Asia Pacific: challenges for Universities and Nations. Penerbit Universiti Sains Malaysia and National Higher Education Research Institute.
Heilmann, S., & Melton, O. (2013). The reinvention of development planning in China, 1993–2012. Modern China, 39(6), 580–628. https://doi.org/10.1177/00977004134975
Heintz, B. (2010). Nummerische Differenz. Überlegungen zu einer Soziologie des (nummerischen) Vergleichs. Zeitschrift für Soziologie, 39(3), 162–181. https://doi.org/10.1515/zfsoz-2010-0301
Hornbostel, S. (1997). Wissenschaftsindikatoren: Bewertungen in der Wissenschaft. Westdeutscher Verlag.
Huang, F. (2015). Building the world-class research universities: A case study of China. Higher Education, 70, 203–215. https://doi.org/10.1007/s10734-015-9876-8
Hvistendahl, M. (2013). China’s publication bazaar. Science, 342(6162), 1035–1039. 10/gfdcct.
Keji Zhibiao (科技指标). (1990). 中国科学技术指标1990 (China science and technology indicators, 1990). Beijing: 中国科学技术促进发展研究中心 (National Research Centre for Science and Technology for Development).
Kinzelbach, K., Saliba, I., Spannagel, J., & Quinn, R. (2021). Free universities: Putting the Academic Freedom Index into action. GPPi Report, 11 March, https://www.gppi.net/2021/03/11/free-universities.
Kipnis, A. B. (2008). Audit cultures: Neoliberal governmentality, socialist legacy, or technologies of governing? American Ethnologist, 35(2), 275–289. https://doi.org/10.1111/j.1548-1425.2008.00034.x
Kostka, G. (2019). China’s social credit systems and public opinion: Explaining high levels of approval. New Media & Society, 21(7), 1565–1593. https://doi.org/10.1177/1461444819826402
Landry, P. F. (2009). Decentralized authoritarianism in China. The Communist Party’s control of local elites in the Post-Mao era. Cambridge University Press.
Li, S. Q. (2020). The end of publish or perish? China’s new policy on research evaluation. Observations, 1. https://doi.org/10.17617/2.3263127.
Lin, S. (2013). Why serious academic fraud occurs in China. Learned Publishing, 26(1), 24–27. 10/dm9t.
Liu, X. (2009). The mirage of China: Anti-humanism, narcissism, and corporeality of the contemporary world. Berghahn Books.
Liu, Q., Turner, D., & Jing, X. (2019). The “Double First-Class Initiative” in China: Background, implementation, and potential problems. Beijing International Review of Education, 1, 92–108. https://doi.org/10.1163/25902547-00101009
Lynch, K. (2014). New managerialism, neoliberalism and ranking. Ethics in Science and Environmental Politics, 13(2), 141–153.
Mao, D., & Yan, F. (2015). Five systematic transformations and their impacts on academic profession in China. Chinese Education and Society, 48(4), 248–264. https://doi.org/10.1080/10611932.2015.1119542
Marginson, S. (2011). Higher education in East Asia and Singapore: Rise of the Confucian model. Higher Education, 61(5), 587–611. https://doi.org/10.1007/s10734-010-9384-9
Marginson, S. (2014). University rankings and social science. European Journal of Education, 49, 45–59. https://doi.org/10.1111/ejed.12061
Marginson, S. (2016). The role of the state in university science: Russia and China compared. Centre for Global Higher Education Working Paper Series, 9, https://www.researchcghe.org/perch/resources/publications/wp9.pdf.
Marginson, S. (2017). The world-class multiversity. Global commonalities and national characteristics. Frontiers of Education in China, 12(2), 233–260. https://doi.org/10.1007/s11516-017-0018-1
Marginson, S. (2022). ‘All things are in flux’: China in global science. Higher Education, 83(4), 881–910. https://doi.org/10.1007/s10734-021-00712-9
Mei, C., & Pearson, M. M. (2014). Killing a chicken to scare the monkeys? Deterrence failure and local defiance In China. The China Journal, 72, 75–97. https://doi.org/10.1086/677058
Merton, R. K. (1973). The sociology of science. Theoretical and empirical investigations. University of Chicago Press.
Meyer, J. W., Ramirez, F. O., & Soysal, Y. N. (1992). World expansion of mass education, 1870–1980. Sociology of Education, 65(2), 128–149. https://doi.org/10.2307/2112679
MIT Technology Review (n/a). (2017). The truth about China’s cash-for-publication policy. Tech Policy - MIT Technology Review, 12 July, https://www.technologyreview.com/2017/07/12/150506/the-truth-about-chinas-cash-for-publication-policy/.
Mok, K. H., & Kang, Y. (2021). A critical review of the history, achievements and impacts of China’s quest for world-class university status. In E. Hazelkorn & G. Mihut (Eds.), Research Handbook on University Rankings (pp. 366–381). Edward Elgar.
Naughton, B. (1995). Growing out of the plan: Chinese economic reform, 1978–1993. Cambridge University Press.
Ngok, K., & Guo, W. (2008). The quest for world class universities in China: Critical reflections. Policy Futures in Education, 6(5), 545–557. 10/fgmcfc.
Orleans, L. A. (1989). Reforms and innovations in the utilization of China’s scientific and engineering talent. In D. F. Simon & M. Goldman (Eds.), Science and technology in Post-Mao China (pp. 89–117). Harvard University Press.
Pan, S.-Y. (2009). University autonomy, the state and social change in China. Hong Kong University Press.
Perry, E. J. (2020). Educated acquiescence: How academia sustains authoritarianism in China. Theory and Society, 49(1), 1–22. 10/gh4swk.
Pfeffer, T., & Stichweh, R. (2015). Systems theoretical perspectives on higher education policy and governance. In J. Huisman et al. (Eds.), The Palgrave international handbook of higher education policy and governance (pp. 152–175). Palgrave Macmillan.
Qiu, J. (2010). Publish or perish in China. Nature, 463, 142. https://doi.org/10.1038/463142a
Rhoads, R. A., Wang, X., Shi, X., & Chang, Y. (2014). China’s rising research universities: A new era of global ambition. Johns Hopkins University Press.
Ringel, L. (2021). Challenging valuations: How rankings navigate contestation. Zeitschrift Für Soziologie, 50(5), 289–305. https://doi.org/10.1515/zfsoz-2021-0020
Ringel, L., Espeland, W., Sauder, M., & Werron, T. (2021). Worlds of rankings. Research in the Sociology of Organizations, 74, 1–23. https://doi.org/10.1108/S0733-558X20210000074026
Saich, T. (1989). China’s science policy in the 80’s. Humanities Press International.
Schulte, B. (2019). Innovation and control: Universities, the knowledge economy and the authoritarian state in China. Nordic Journal of Studies in Educational Policy, 5(1), 30–42. https://doi.org/10.1080/20020317.2018.1535732
Sciping (科塔学术). (2019a). Overview of “Double First-Class” construction programs in different jurisdictions (各地“双一流”建设方案综述), 24 March, https://www.sciping.com/27275.html.
Sciping (科塔学术). (2019b). Northwestern Polytechnic University First-class university construction program (西北工业大学一流大学建设方案), 24 March, https://www.sciping.com/27275.html.
Sharma, Y. (2017). Universities told to credit propaganda as publication. University World News, 26 September, https://www.universityworldnews.com/post.php?story=2017092619370021.
Sharma, Y. (2022a). More universities become “world class” to meet China’s ambitions. University World News, 17 February, https://www.universityworldnews.com/post.php?story=20220217082128415.
Sharma, Y. (2022b). Three major universities quit international rankings. University World News, 11 May, https://www.universityworldnews.com/post.php?story=20220511170923665.
Sigurdson, J. (2004). China becoming a technological superpower: A narrow window of opportunity. Scandinavian Working Papers in Economics, 194, http://swopec.hhs.se/eijswp/papers/eijswp0194.pdf.
Sohu (n.a.). (2022). 中国人民大学决定今年起不再参与世界大学排名, 原因曝光! (Renmin University of China decided to no longer participate in world university rankings from this year onward – now the reason is exposed!), https://www.sohu.com/a/545361503_121333743.
Staiger, B. (2002). Hochschulen: „Projekt 211“. China Aktuell, 9, 1003–1004.
Staiger, B. (2003). Spitzenuniversitäten: “Projekt 985.” China Aktuell, 5, 561.
State Council of the People’s Republic of China (2006). 国家中长期科学和技术发展规划纲要 (The state’s medium- to long-term plan for the development of science and technology, MLP), document no. 9, http://www.gov.cn/gongbao/content/2006/content_240244.htm.
Stichweh, R. (1996). Science in the system of world society. Social Science Information, 35(2), 327–340. https://doi.org/10.1177/053901896035002009
Stichweh, R. (2023). The university as a world organization. In P. Mattei et al. (Eds.), The Oxford Handbook of Education and Globalization. Oxford University Press.
Sui, C. (2019). Chinese universities’ first course is nationalism 101. Foreign Policy, 25 October, https://foreignpolicy.com/2019/10/25/chinese-universities-nationalism-mainland-china-hong-kong/.
Suttmeier, R. P. (1989). Science, technology, and China’s political future – A framework for analysis. In D. F. Simon & M. Goldman (Eds.), Science and technology in Post-Mao China (pp. 375–396). Harvard University Press.
Tan, C., Zheng, K., & Xiao, W. (2017). “Shuang yiliu” kaiju (“双一流”开局, The kickoff of the “double world-class project”), Southern Weekend (南方周末), 28 September, http://www.infzm.com/content/129344.
Vorotnikov, E. (2022). Russians to consider pulling out of Bologna Process. University World News, 15 April, https://www.universityworldnews.com/post.php?story=20220415114832118.
Wang, G. H., & Guo, W. L. (2019). “双一流”建设的问题审视和发展路 (Reviewing the problems of “Double First-Class” construction and the way forward). 理论月刊 (Theory Monthly), 3, 153–160. https://doi.org/10.14180/j.cnki.1004-0544.2019.03.022
Wilbers, S., & Brankovic, J. (2021). The emergence of university rankings: A historical-sociological account. Higher Education, Online First. https://doi.org/10.1007/s10734-021-00776-7
Wilsdon, J., & Keeley, J. (2007). China: The next science superpower? London: Demos. https://digital-library.theiet.org/content/journals/10.1049/et_20070301
Xu, L., Zhao, X., & Starkey, H. (2021). Ideological and political education in Chinese Universities: Structures and practices. Asia Pacific Journal of Education, online first. https://doi.org/10.1080/02188791.2021.1960484
Yang, R. (1998). Ranking universities in China: Same game, different context. International Higher Education, 13, 15–16. https://doi.org/10.6017/ihe.1998.13.6444
Yudkevich, M., Altbach, P. G., & Rumbley, L. E. (2019). Citius, altius, fortius: Global university rankings as the “Olympic Games” of higher education? Intelligent Internationalization, 43, 27–30. https://doi.org/10.1163/9789004418912_005
Zhang, L., & Sivertsen, G. (2020). The new research assessment reform in China and its implementation. Scholarly Assessment Reports, 2(1), 3. https://doi.org/10.29024/sar.15
Zhao, L. (2018). China’s world-class 2.0: Towards more institutionalized and participatory policymaking? The Copenhagen Journal of Asian Studies, 36(1), 5–27. https://doi.org/10.22439/cjas.v36i1.5510
Zhi, Q., & Pearson, M. M. (2017). China’s hybrid adaptive bureaucracy: The case of the 863 program for science and technology. Governance, 30(3), 407–424. https://doi.org/10.1111/gove.12245
Zweig, D. (2018). Internationalizing China: Domestic Interests and Global Linkages. Cornell University Press.
Acknowledgements
We are very grateful to the anonymous reviewers of this article and the three editors of this special issue, whose comprehensive and precise remarks greatly improved our study, and to Fiona Bewley for the meticulous (and even last-minute) language editing of the manuscript.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Ethics declarations
Conflict of interest
The authors declare no competing interests.