Introduction

Internationalization indicators such as the share of international faculty and international students are included in global rankings because rankers believe that these indicators represent institutional competitiveness in the global market. However, there are criticisms that English-speaking systems are advantaged by these internationalization indicators while non-English-speaking countries are disadvantaged (e.g., Marginson & van der Wende, 2007; Teichler, 2011; Gantman, 2012). In practice, higher education institutions (hereafter, HEIs) respond to the global ranking games differently to improve their global ranking status (e.g., Dowsett, 2020; Lee et al., 2020). One university might put weight on an international outlook while another might put more weight on research productivity. However, improving the international outlook is not simple because these scores depend on how “international” is defined (Teichler, 2015; Huang & Welch, 2021). This study addresses how much the international outlook status of higher education systems varies depending on the definition of “international” faculty.

The term “international faculty” can be defined in various ways, as extensively discussed by Teichler (2015) and Kim and Jiang (2021). One approach is to define international faculty by foreign-born status, which can include a large number of immigrants crossing national borders. A second approach is to define international faculty by the “citizenship” of current employment. Foreign-born status is based on where a person was born and corresponds to “nationality”, while citizenship refers to whether a person has met the legal requirements to be a legal resident of the country where they are living (US Immigration Office, Jan. 28, 2021). It is relatively easy to obtain data from immigration offices for both the foreign-born and citizenship-based approaches, and most ranking data are based on these approaches. The Times Higher Education (hereafter, THE) defines international faculty by the nationality criterion (THE, 2021) while the Quacquarelli Symonds (hereafter, QS) defines it by citizenship (QS, 2021). A third approach is an academic training-based approach, where international faculty are defined by the country of their education, i.e., the country where they earned their first higher education degree (hereafter, bachelor degree), the country where they earned their doctoral degree, or the country where they undertook their post-doctoral studies.

Each definition has its own focus and emphasis. Foreign-born status focuses on the birthplace, citizenship places more emphasis on the visa status of faculty members, while academic degrees (post-doc., doctoral, and bachelor) give more emphasis to academic training (e.g., Kim et al., 2011). Different definitions of international faculty lead to complexity in measuring international faculty (Teichler, 2015). Among these approaches, rankers such as THE and QS use birthplace (this study uses birthplace and nationality interchangeably) and citizenship rather than academic training because they believe that citizenship represents the degree of internationalization of HEIs. However, birthplace or citizenship might not represent the global competitiveness of HEIs because some foreign-born or foreign-citizen faculty are local in terms of their academic training. Therefore, one question raised by this research is how much an international faculty outlook score measured by citizenship differs from a score measured by academic training.

There are many criticisms of global rankings that evaluate the quality of universities on the basis of clumsy indicators (Dill et al., 2005; Altbach, 2006; Marginson et al., 2007; Kehm & Stensaker, 2009; Aguillo et al., 2010; Shin et al., 2011; van Raan et al., 2011; Brankovic et al., 2018; Pietrucha, 2018; Safon, 2019; Safon et al., 2021). However, there has been little academic discussion or research on the issue of measuring international faculty in relation to global rankings using empirical data. This study addresses this issue by comparing the changes in international outlook at the system level when different measures of international faculty are applied. Global rankings such as THE and QS assess the share of international faculty by nationality or citizenship, but these data do not provide insight into faculty members’ academic training. Fortunately, international comparative surveys such as the Changing Academic Profession (hereafter, CAP) provide data on faculty members’ academic training as well as their nationality and citizenship, drawing from 25,282 academic staff across 19 higher education systems (Teichler et al., 2013). However, a methodological challenge is how to combine institutional-level data (ranking data) and national-level data (the CAP data). This study transformed the international outlook scores of individual HEIs into a national average international outlook score, then analyzed possible changes in rankings by adopting different definitions of international faculty, academic training in particular.

This study developed a hypothetical ranking for individual HEIs that makes it possible to trace changes in the ranking status of individual HEIs when different definitions of international faculty are applied. This simulation shows how much ranking status changes at the individual HEI level and at the higher education system level according to the different definitions of international faculty. In addition, this study proposes possible directions for updating the international outlook measure. For this research, the study proposed two questions:

  • Research question 1: How much does ranking status change when different definitions of international faculty are applied to global rankings?

  • Research question 2: Are there similarities among the higher education systems that are under- (or over-)estimated by the current measure of international faculty?

Research background

This section overviews various definitions of international faculty, followed by how these definitions are measured, and how the measures are weighted in global rankings.

Definitions of international faculty

The underlying logic of using international faculty as a ranking indicator is that having more international faculty represents the global competitiveness of individual HEIs because global talent moves around the world searching for better workplaces (Altbach, 2009; Yudkevich et al., 2017). Researchers use different terms such as foreign faculty, international faculty, expatriate faculty, foreign-born faculty, mobile academics, and immigrant academics depending on their research contexts (e.g., Kim et al., 2011; Teichler, 2015; Huang & Welch, 2021). These different terms highlight different dimensions of international faculty. For example, “foreign” faculty highlights a faculty member who is “not a local”; “expatriate” faculty refers to academics who are developing their career outside of their home country (Trembath, 2016); “foreign-born” refers to academics’ birthplace; and “mobile” academics refers to their frequent mobility across countries (Teichler, 2015). This study focuses on three concepts of international faculty, defined as birthplace-based, citizenship-based, or academic training-based.

In research practice, international faculty can be defined according to the research context. Current global ranking mechanisms use the term birthplace and/or citizenship to define international faculty members. THE defines international faculty as “those defined based on nationality differs from the country where the institution is based” (THE, 2021), and QS states “The term “international” should be determined by citizenship. For EU countries, this includes all foreign nationals, even if from another EU state. In Hong Kong SAR and Macau SAR, this includes professors from Mainland China” (QS, 2021). This is because nationality or citizenship is relatively easy to count and the data are publicly available through immigration offices. However, the term international faculty as determined by nationality or citizenship has some limitations as a measure of institutional competitiveness of HEIs. For example, some countries like the UK with its many former colonial states have higher numbers of international faculty as a baseline. In addition, citizenship depends on the immigration policy of a country rather than its academic competitiveness. Australia, Canada, the USA (before the Trump presidency), or the UK (before Brexit) have (or used to have) flexible immigration policies that favored naturalization for skilled workers (Barbaric & Jones, 2016; Huang & Welch, 2021; Kim & Jiang, 2021). Because of this, the number of foreign citizens might not necessarily represent the academic competitiveness of HEIs.

Given these limitations, academic training might be a better proxy for measuring international faculty as an indicator of global rankings. This is because competitive higher education systems such as in the USA, the UK and Australia recruit talented academics from globally competitive markets and not from locally trained human resources. Considering this, recruiting their faculty globally implies that their HEIs are competitive and open to global talent. Globally mobile academics might bring different areas of specialization, new networks, and even different cultures to their host universities (e.g., Tung, 2008; Teichler, 2015). These advantages are in line with the benefits a host university might gain through the internationalization of higher education as discussed in de Wit (2017) and Altbach and Knight (2007).

This study proposes an alternative definition of international faculty on the basis of academic training. Academic training is already an important factor in faculty hiring policy where states seek to minimize faculty inbreeding by encouraging the hiring of candidates who are graduates of other universities. Some countries discourage the hiring of their own graduates because academic inbreeding is widespread across higher education systems such as those in Portugal, Russia, Japan, and Korea (Horta et al., 2011; Altbach et al., 2015; Shin et al., 2016). Borrowing the terminology from academic inbreeding, this study defines international faculty as “faculty members who earned their academic degree from a university outside the country where they are currently employed”. The higher education degree could be a bachelor or doctoral degree. A master degree is not considered in this study because it is part of the doctoral degree in many countries (Shin et al., 2018).

According to the Changing Academic Profession (CAP) data, defining international outlook by nationality or citizenship is quite different from defining it by academic training at the higher education system level (Teichler et al., 2013). According to the CAP data, non-English speaking systems such as Argentina, Brazil, Italy, Malaysia, Mexico, Portugal and South Korea are underestimated in the international faculty rate when it is measured by nationality or citizenship because these systems demonstrate a much higher international faculty rate when international faculty is measured by academic training. This suggests that the current definition of international faculty in THE and QS favors some countries while understating the extent of international faculty in others, mostly non-English speaking systems. If academic training is a valid measure of the institutional competitiveness of HEIs, as this study proposes, the follow-up analysis shows how much ranking status fluctuates according to the different measures used.

Composition of ranking indicators and sensitivity of ranking status

The THE ranking indicators consist of five areas—teaching (30%), research (30%), citations (30%), industry income (2.5%), and international outlook (7.5%). Similarly, the QS ranking assigns 10% to international outlook (5% for international faculty, and 5% for international students). However, the weighting assigned may or may not represent the actual contribution of each area because the contribution depends on the variance of each indicator (Hou & Jacob, 2017). For example, if there is a high variance in teaching scores, then a small increase in teaching performance may not affect total scores because there is a relatively large gap between HEIs in the area of teaching. Thus, a high weight does not necessarily mean that HEIs are inclined to focus more on the highly weighted indicators. In this context, HEIs might want to pay more attention to the indicators where they can easily and rapidly catch up with competing HEIs—possibly within one or two years.
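The interplay between weight and score movement can be illustrated with a small numerical sketch (the weights follow the THE scheme above; the indicator scores are hypothetical):

```python
# THE indicator weights as described in the text.
THE_WEIGHTS = {
    "teaching": 0.30,
    "research": 0.30,
    "citation": 0.30,
    "industry_income": 0.025,
    "international_outlook": 0.075,
}

def total_score(scores):
    """Weighted sum of indicator scores (0-100 scale)."""
    return sum(THE_WEIGHTS[k] * v for k, v in scores.items())

uni_a = {"teaching": 70, "research": 65, "citation": 80,
         "industry_income": 50, "international_outlook": 40}
# Same university, but 20 points higher on the 7.5%-weighted indicator.
uni_b = dict(uni_a, international_outlook=60)

gap = total_score(uni_b) - total_score(uni_a)
print(round(gap, 2))  # -> 1.5: a 20-point jump in outlook moves the total by 1.5
```

A large improvement on a lightly weighted indicator thus shifts the total only slightly, which is why HEIs weigh the effort of improving an indicator against its realistic payoff.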

HEIs often use a simulation to develop their strategy for institutional development in global rankings (e.g., Hazelkorn, 2007; Locke, 2011). In most cases, an institutional researcher analyzes research and citation indicators to develop a strategy for ranking games (e.g., Dowsett, 2020). In addition, current ranking mechanisms are based on these simulations and there have been academic disputes on the calculation methods as well as the definitions of individual indicators (Locke et al., 2008). Most rankers do not calculate publication scores simply by counting the number of papers. Rather, most rankers normalize the number of articles, and also citation counts (e.g., Leydesdorff & Bornmann, 2011; Waltman et al., 2011). Reflecting this complexity, the Leiden Ranking provides various scores according to different measures of the research and citation indicators. These simulation efforts have contributed to a ranking mechanism based on a more scientific methodology.

Despite this, the area of international outlook has not evolved much from when global rankings first emerged. With economic globalization, academic mobility has been considered a core factor of social development and innovation. International mobility actually has a longer history and there are well known lessons from the past. For example, the aggressive study abroad policy during the Meiji period in Japan has been considered a key to its social and technological innovation (Kashioka, 1982) while the Spanish King Philip II’s policy of forbidding study abroad has been criticized as contributing to the decline of Spanish science (Goodman, 1983). Although there are arguments that ranking status simply represents the dominant local language rather than international competitiveness (e.g., Marginson & van der Wende, 2007; Teichler, 2011; van Raan et al., 2011), this argument has not been supported with empirical analysis using a solid methodology. HEIs that hire academics trained in countries other than the current host country might better represent institutional openness and institutional diversity (Altbach & Knight, 2007). For example, many Hong Kong universities that tried to recruit foreign scholars from globally competitive markets are positioned as key international players, especially in Asian higher education (Mok, 2005). Hiring “locally trained” international academics does not represent institutional openness or diversity because these academics are already a part of the higher education system in the host country (e.g., Pustelnikovaite, 2021). From that point of view, academic training might be a better proxy measure of international outlook in terms of institutional competitiveness. However, very little data support this view. The following method section describes how to simulate this using empirical data.

Research method

This section briefly explains the data that this study is based on and introduces the analytical strategy for addressing the research questions.

Data

The data for this study come from four major resources—THE ranking data, UNESCO data, national statistical data, and the CAP data. This study uses THE global ranking data because QS did not disclose scores by indicator before its 2019 ranking. The THE global ranking provides data on the rankings of individual HEIs with scores on international outlook, teaching, research, citations, and industry income. The UNESCO data provide data on international students at the national level. The data on international faculty at the national level were collected from various sources such as national statistical reports, national data systems and several books on case studies of internationalization (Appendix 1). The CAP data provide international faculty by academic training (bachelor degree and doctoral degree) as well as by birthplace and citizenship of current employment. The CAP data were collected during 2007/2008 from 19 higher education systems through common survey items. Because the CAP data are based on 2007/2008, this study analyzed the THE Global Ranking in 2011/2012. These ranking data are the oldest available on the Internet with a list of the top 400 universities. Although this study is based on international faculty data from different time periods, we believe that this is not a serious issue for this simulation because the proportion of international faculty by nationality or citizenship does not change much within a short time period. For example, international outlook scores over the last 10 years have not changed a great deal, as shown in the correlation analysis in Appendix 2.

The 2015 UNESCO data were used in this study because they include most higher education systems included in the THE top 400 rankings, and the 2015 data included cases from 18 of the 19 CAP participating countries (Australian data were released in 2015 and German data in 2013; only Argentina was omitted). No university in Argentina, Malaysia or Mexico was ranked in the THE 2011/2012 top 400; therefore, this comparative study covers 16 of these 19 higher education systems, which include 310 HEIs, or 77.11% of the HEIs in the THE 2011/2012 top 400 rankings. In the final analysis, this study analyzed 281 out of these 310 universities because the industry income scores of 29 universities were not presented in the THE data. The data underlying our analysis are summarized in Table 1.

Table 1 HEIs Analyzed and their International Faculty and International Students

International outlook scores in the THE ranking are calculated based on the number of international faculty members, international students and international collaboration, but the raw scores are not released to the public. The THE ranking provides only a total international outlook score without releasing each indicator’s raw score. Because of this, researchers outside of the ranking institute can access only the total international outlook score, rather than the separate scores for international faculty and international students. However, we can estimate how much the international outlook represents international faculty and students by comparing the national average of international outlook scores with the national average of international faculty in national statistics and of international students in the UNESCO data.

The THE ranking normalizes indicator scores using a Z score, which is then turned into a cumulative probability score to assign scores on each indicator. To analyze the correlations between different measures, this study transformed these measures (the rate of international faculty in national statistical data, the rate of international students in UNESCO data, and the rate of international faculty by different measures in the CAP) into percentile ranks. The correlations between international outlook in THE, international faculty in national statistical data, international students in UNESCO data, and different measures of international faculty in the CAP are presented in Table 2. The UNESCO data on international students and national data on international faculty are highly and significantly correlated with international outlook in THE.
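As an illustration of this transformation step, the sketch below converts raw national rates into percentile ranks using the common midpoint convention for ties (the exact convention used in the study is not reported, so this is an assumption; the rates are hypothetical):

```python
def percentile_rank(values):
    """Percentile rank (0-100) of each value within the list,
    using the midpoint convention for tied values."""
    n = len(values)
    ranks = []
    for v in values:
        below = sum(1 for x in values if x < v)
        equal = sum(1 for x in values if x == v)
        ranks.append(100.0 * (below + 0.5 * equal) / n)
    return ranks

# Hypothetical international faculty rates (%) for four systems.
rates = [5.0, 12.0, 3.0, 30.0]
print(percentile_rank(rates))  # -> [37.5, 62.5, 12.5, 87.5]
```

Putting all measures on a common percentile scale makes the correlations in Table 2 comparable even though the underlying rates come from differently scaled sources.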

Table 2 Correlations across different measures of international outlooks

In addition, this study extended correlation analysis to different measures of international faculty among 16 higher education systems in the CAP study. Table 2 shows that international outlook scores are highly and significantly correlated with international faculty members measured by their bachelor degree and their citizenship, but the correlation is low for doctoral degrees in the CAP data. Moreover, there is a high correlation between the international outlook scores and the number of international students in the UNESCO data. This correlation analysis shows that the international outlook of the national average among the top 281 HEIs is highly correlated with that of the CAP (except for doctoral degree in the CAP) and UNESCO data.

Analysis and clustering of higher education systems

This study analyzed the changes in ranking status by applying different measures of international faculty members: birthplace, citizenship of current employment, and two measures of academic training (bachelor degree and doctoral degree). For this analysis, the study first produced a ranking of the 281 HEIs (hereafter, “reference ranking”) by replacing international outlook scores in THE with the national average of international students in UNESCO data and the international faculty rate in national statistics. The reference ranking assigned 50% to international faculty and the other 50% to international students within the international outlook score. This reference ranking has a high correlation (0.996) with the original THE ranking and serves as our reference for assessing the hypothetical rankings developed for the simulation. Each hypothetical ranking is developed by plugging in the rate of international students in the UNESCO data and of international faculty in the CAP to produce rankings for the 281 HEIs included in this study. For this analysis, we assume that international collaboration does not vary and remains fixed across HEIs. Hypothetical ranking 1 is based on international faculty by birthplace; hypothetical ranking 2 is based on current citizenship of employment; hypothetical ranking 3 is based on the country where faculty earned their bachelor degree; and hypothetical ranking 4 is based on the country of their doctoral degree. This procedure enables us to estimate how much the global rankings fluctuate under different measures of international faculty. The procedure is summarized in Fig. 1.

Fig. 1
figure 1

Analytical procedure. (1) The reference ranking produces rankings for 281 higher education institutions by plugging in national data from official statistics (international students and international staff) for each individual university. (2) Each hypothetical ranking produces rankings for the 281 higher education institutions by plugging in the CAP data (international staff) under different definitions (birthplace, citizenship and two academic training measures)
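The re-ranking step can be sketched as follows (the universities, scores and the outlook field names `outlook_citizenship` / `outlook_phd` are hypothetical; the weights follow the THE scheme described earlier, with the non-outlook indicators held fixed as in the simulation):

```python
# Sketch of the re-ranking simulation with hypothetical data.
WEIGHTS = {"teaching": 0.30, "research": 0.30, "citation": 0.30,
           "industry_income": 0.025, "international_outlook": 0.075}

def rerank(universities, outlook_key):
    """Recompute total scores after swapping in an alternative
    international outlook measure, then rank (1 = best)."""
    totals = {}
    for name, ind in universities.items():
        totals[name] = (
            sum(WEIGHTS[k] * ind[k] for k in WEIGHTS
                if k != "international_outlook")
            + WEIGHTS["international_outlook"] * ind[outlook_key]
        )
    ordered = sorted(totals, key=totals.get, reverse=True)
    return {name: i + 1 for i, name in enumerate(ordered)}

unis = {
    "U1": {"teaching": 70, "research": 65, "citation": 80,
           "industry_income": 50,
           "outlook_citizenship": 20, "outlook_phd": 60},
    "U2": {"teaching": 68, "research": 70, "citation": 78,
           "industry_income": 55,
           "outlook_citizenship": 55, "outlook_phd": 25},
}
print(rerank(unis, "outlook_citizenship"))  # U2 leads under citizenship
print(rerank(unis, "outlook_phd"))          # the order flips under Ph.D. training
```

Even in this two-university toy case, swapping the outlook measure reverses the ranking, which is the kind of fluctuation the simulation quantifies across the 281 HEIs.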

The global ranking by citizenship uses the measure in the current THE international outlook, and the gaps between the birthplace, citizenship, and two academic training measures show which systems are underestimated (or overestimated) by current measures of international faculty. This study analyzes the changes in the ranking status of individual HEIs as numeric values, so that simulation results can be further analyzed and discussed. This study also classified the 16 higher education systems based on the four different international faculty measures by applying k-means cluster analysis. K-means cluster analysis is a non-hierarchical clustering method that calculates a centroid for each cluster and divides n observations into k clusters by assigning each observation to the nearest cluster center. One-way analysis of variance (ANOVA) was applied to determine whether the differences between clusters are statistically significant. These analytical approaches show how much ranking status increases or decreases when different measures of international faculty are applied.
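A minimal sketch of the k-means step (Lloyd’s algorithm in plain Python; the two-dimensional profiles are hypothetical, whereas the study clusters 16 systems on four measures):

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    """Component-wise mean of a non-empty list of points."""
    return tuple(sum(v) / len(pts) for v in zip(*pts))

def kmeans(points, k, iters=100, seed=0):
    """Lloyd's algorithm: assign each point to its nearest centroid,
    recompute the centroids, repeat until stable."""
    centroids = random.Random(seed).sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: dist2(p, centroids[c]))
            clusters[i].append(p)
        new = [mean(c) if c else centroids[i] for i, c in enumerate(clusters)]
        if new == centroids:
            break
        centroids = new
    return centroids, clusters

# Hypothetical (citizenship rate, Ph.D.-abroad rate) profiles:
# two "stable" systems and two "fluctuating" systems.
profiles = [(30.0, 8.0), (32.0, 10.0), (2.0, 45.0), (3.0, 50.0)]
centroids, clusters = kmeans(profiles, k=2)
# The stable and fluctuating profiles separate into two clusters of two.
```

In the study itself, k = 3 was selected with the Elbow method by inspecting how the total within-cluster sum of squares falls as k grows.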

Results

This section summarizes major findings on how ranking status changes by applying different measures of international faculty and whether some systems are systematically over or underestimated by the different measures of international faculty.

Changes of international outlook score and rankings

We constructed nationwide reference scores and hypothetical international outlook scores to examine changes in international outlook scores under different measures of international faculty. Table 3 shows differences in international outlook scores across different measures of international faculty. The international outlook score for each country was calculated by averaging the international outlook scores of that country’s universities ranked within the THE top 281. Australia shows the highest international outlook score, followed by the UK and Hong Kong. The standard deviation of international outlook scores is between 2.69 and 16.43, and nine countries have a standard deviation under 10.00, indicating that the deviation between universities within a country is not very large. On the other hand, this analysis found that the differences in international outlook scores between countries (SD: 19.94) are bigger than those between universities within a country.

Table 3 International outlook scores across different measures of international faculty

This simulation found that international outlook scores fluctuate considerably according to the measure of international faculty used, whether birthplace, citizenship status or academic training. For example, scores for international faculty by birthplace and by citizenship status differ as shown in Table 1, with the gaps calculated. The largest gap is in Australia, at 26% points (37.9–11.9%), followed by Canada at 19.9% points (31.9–12.0%), and Hong Kong at 12.5% points (30.2–42.7%). In addition, scores for international faculty by academic training differ considerably in some systems from the birthplace-based or current citizenship-based scores. Table 3 shows the gaps between the citizenship-based measure and the Ph.D. training-based one. According to this result, the gap between the two measures is largest in Korea (43.6% points), followed by Italy (33.8% points), Portugal (32.4% points), and China (20.8% points). This finding implies that some systems are systematically underestimated by the selection of a specific measure of international faculty.

Systemic differences in the changes to global rankings

This study further analyzed similarities and differences in international faculty outlook scores across the 16 higher education systems by k-means cluster analysis. The number of clusters, k = 3, was determined as the optimal choice using the Elbow method, which identifies the point at which adding further clusters yields little reduction in the total within-cluster sum of squares. Table 4 presents the results of the k-means cluster analysis, which is based on similarities and differences between groups on the four measures of international faculty. The cluster analysis shows that international outlook scores in five higher education systems are very stable across different measures of international faculty while the scores fluctuate in the rest of the systems.

Table 4 Cluster of 16 higher education systems and their profiles

In brief, the five systems of Australia, Canada, Hong Kong, Norway, and the UK show relatively stable scores across different measures of international faculty, while the measure fluctuates considerably in the five systems of Brazil, China, Italy, Portugal, and South Korea. The remaining six systems of Finland, Germany, Japan, Netherlands, South Africa, and the USA fall between these two clusters. The five stable systems are not much impacted by the different measures of international faculty whereas the other systems are. Interestingly, other than Norway, the systems with stable international outlook scores have higher education systems based on the UK model. On the other hand, the five systems with considerable fluctuation in their rankings are in non-English speaking countries.

In addition, this study performed an ANOVA to examine whether the four international faculty measures differ according to cluster. There are statistically significant differences in the four measures of international faculty across the three clusters.

These differences might be closely associated with the internationalization policy of each higher education system. The systems with high fluctuation (Brazil, Italy, Portugal, and South Korea) encourage their researchers to study abroad. As a result, these systems report a higher rate of doctoral training abroad while hiring relatively few foreign-born academics or foreign citizens. Compared to these systems, the five systems in the low-fluctuation cluster (Australia, Canada, Hong Kong, Norway, and the UK) prefer to recruit their researchers from the global market and report a high rate of international academics. Compared to these two types, the six systems in the mid-fluctuation cluster (Finland, Germany, Japan, Netherlands, South Africa, the USA) locally train their faculty members, with some faculty from abroad earning their doctoral degrees within these systems. For example, the share of “temporary visa holders”, which refers to non-citizens or non-resident status holders, among doctoral degree recipients in the USA was 32% in 2015 (US National Science Foundation, 2017).

Discussion

This study found that international outlook scores in THE are significantly impacted when different measures of international faculty are used. Among the four measures included in this study, we found that international faculty measured by the country of doctoral studies produced quite different outlook scores from those of the nationality-based or citizenship-based measures and thus produced different ranking statuses. This study found that major English-speaking systems such as the UK, Canada, and Australia hire a large percentage of foreign citizens as faculty members, but many of these foreign citizens earned their doctoral degrees in the country of current employment. In contrast, Italy, Portugal, Korea, and Brazil hire a large number of faculty members who hold local citizenship, but many of them earned their doctoral degrees abroad.

This finding implies that the current measure of international faculty favors major English-speaking systems while other systems are disadvantaged. The current internationalization indicators are particularly favorable to the English-speaking systems influenced by the British educational system (Luque-Martinez & Faraoni, 2020). This finding suggests a possible bias in international outlook scores and global rankings. Although there have been continuous debates and discussions on research measures such as publication and citation indicators (e.g., Lee et al., 2020; Safon & Docampo, 2021), we tend to accept international outlook measures as given. However, there are different definitions of international faculty in research and policy practices and the measures differ accordingly (Kim et al., 2011; Teichler, 2015). In this regard, one critical issue is whether citizenship status is a valid measure of international outlook.

Hiring non-citizen faculty indicates how open the systems and/or HEIs are to different cultural backgrounds, as discussed in Altbach and Knight (2007). Without this culture, HEIs might be unable to recruit the global talent that is key to global competitiveness, as discussed in world-class university initiatives (e.g., Altbach, 2009). This is because diversity in thought and ideas brings fresh ideas for breakthrough research (Altbach & Yudkevich, 2017; Yudkevich et al., 2017). However, hiring foreign citizenship holders does not necessarily produce “diversity” in thought and ideas if those international faculty are “naturalized” after they have been academically trained in the local system (Kim et al., 2011). In this case, international faculty are not very different from local faculty members.

Instead, hiring faculty who earned their doctoral degree abroad might be a better measure of global outlook. The international mobility of trained individuals returning to their home countries might improve academic performance there through academic exchange, language, and culture (Saxenian, 2005). Returnees from study abroad also maintain social and professional relationships with their domestic and international colleagues and actively collaborate and disseminate research across both countries. Similarly, in policy and institutional practice, a study visit abroad is one of the requirements for researchers in Switzerland (Sautier, 2021), and in the Netherlands a large number of researchers are returnees from study abroad (de Jonge, 2021).

In addition, the motivation for hiring international faculty differs across higher education systems. A large proportion of international faculty are affiliated with science, technology, engineering, and mathematics (STEM) fields in the USA, the UK, Australia, and the Netherlands; for example, about 77% of international faculty in Australian universities are affiliated with the sciences, engineering, and IT (Welch, 2021). In Japan, Korea, Taiwan, and Hong Kong, by contrast, a large proportion of international faculty are in the arts and humanities and the social sciences, fields with a lower rate of scientific publication (Chang, 2021; Chen, 2021; Huang, 2021; Shin, 2021). In Taiwan in 2015, for example, 60% of international faculty members belonged to the arts and humanities and the social sciences (Chang, 2021). These different recruitment patterns imply that some systems recruit international faculty for higher research productivity while others do not.

Because of these hiring practices, the share of international faculty has plateaued after rapid growth over the last couple of decades in Japan, Korea, and Taiwan (e.g., Huang et al., 2019; Chang, 2021; Shin, 2021). It seems unlikely that the share of international faculty will increase in these systems in the short term. Nevertheless, the current measure of international faculty stimulates HEIs to hire faculty with a foreign passport.

In light of these differences across systems, citizenship status might not be a valid measure of international outlook, and it is necessary to redefine international outlook beyond birthplace, nationality, or citizenship. Internationalization in higher education is related to recovering the cosmopolitan nature of higher education and improving its quality through mutual learning and varied international experiences (Knight, 2014). This study proposes that a more comprehensive construction of internationalization is needed than the model based solely on nationality or citizenship. One possible approach is to measure international faculty by combining citizenship-based and doctoral training-based data.

In research and policy practice, researchers often use academic training as a measure of international faculty. For example, Sheehan and Welch (1996) counted international faculty in terms of their academic training in their study of Australia. Kim et al. (2011) considered both birthplace and academic training to complement the limits of relying only on citizenship status. In addition, the OECD (2017) defines international scientists by the country of their first scientific publication, which is usually their doctoral thesis or related articles, in its Science, Technology and Industry Scoreboard. We believe the proposed approach is relevant for measuring institutional competitiveness while minimizing the systemic bias introduced by relying on a citizenship-based measure alone.

Conclusion

Global rankers selected the international faculty indicator as a proxy for the global competitiveness of HEIs. The THE describes its rationale for including an international outlook as follows: "The ability of a university to attract undergraduates, postgraduates and faculty from all over the planet is key to its success on the world stage" (THE, Dec. 9, 2021). The QS describes this ability as global "brand" power. This "ability" and "brand power" may represent one dimension of the institutional competitiveness of HEIs, but the logic of treating international faculty recruitment as a proxy for institutional competitiveness does not hold in some systems, because internationalization is approached differently across higher education systems. Nevertheless, the current measure of international faculty stimulates HEIs, especially in non-English-speaking systems, to hire foreign citizens to enhance their global rankings. However, hiring "foreign" passport holders brings other issues for both the host universities and the invited international faculty members (Gress & Shin, 2020).

To mitigate these side effects of measuring international faculty by citizenship alone, this study recommended combining citizenship-based and academic training-based measures. Global rankers have been updating their indicators and measures, especially for crediting publications and citations, to reflect scientific evidence by researchers, as described on their websites. However, relatively few revisions have been made to the teaching and international outlook scores. Updating the teaching quality measure is challenging because it involves serious issues of measurement and data collection. Updating the international faculty measure, however, is not particularly complicated in terms of data collection and analysis, as this study has shown.

This study highlighted an alternative approach to defining international faculty and raised issues of possible under- and overestimation of the international faculty rate across different higher education systems. However, this study has several limitations arising from the data on which the simulation is based. First, this study used relatively old data because of data constraints; follow-up research with more recent data might be more persuasive. Second, the CAP data were collected by an international comparative research project in which each country team was responsible for its own sampling and data collection. Consequently, there may be some selection bias, and generalizability may be limited. Third, this study focused on defining international faculty and did not pay much attention to their internationalization activities and outputs, such as international collaboration. Some global rankings, such as the SCImago Institutions Rankings, the Leiden Ranking, and the University Ranking by Academic Performance (URAP), use international collaboration as a measure of internationalization. An interesting area for future research is how international co-authorship is associated with the rate of international faculty members at each HEI.