Introduction

Web 2.0 tools have become increasingly important in the research community (Ahmad, 2011; Grabner-Kräuter, 2009; He et al., 2014). Social networks based on Web 2.0 not only distribute entertainment; they can also facilitate effective scholarly communication (Chugh & Ruhi, 2018; Görögh et al., 2017; Greifeneder et al., 2018; Harris & Rea, 2019; Hung & Yuen, 2010; Zaidieh, 2012). The current international academic environment is supported by many academic social networks (ASNs) used by both early career and experienced researchers (Mason, 2020), and their usage has increased significantly over the past decade (Jordan, 2019). ASNs are important tools for disseminating research outputs and interacting with other scientists (Kadriu, 2013; Noorden, 2014), and they have accelerated the process of sharing knowledge (Asnafi, 2017). Specialized social networking platforms have attracted the attention of researchers and scientists because such tools can be used effectively not only for building a portfolio, developing international scientific networks, sharing ideas and communicating conveniently (Al-Aufi & Fulton, 2014; Greifeneder et al., 2018; Manco, 2019; Meishar-Tal & Pieterse, 2017; Muscanell & Utz, 2017), but also for making comparisons and assessing research and researcher impact (Erdt et al., 2016; Hoffmann et al., 2016; Sugimoto et al., 2017; Yan & Zhang, 2019), as well as for promoting research institutions and developing regions (Asmi & Margam, 2018; Meishar-Tal & Pieterse, 2017; McGillivray & Astell, 2019). Researchers have analysed the usefulness and benefits of ASNs, but have also criticized their disadvantages and defects (Jacsó, 2005; Jamali, 2017; Kraker & Lex, 2015; Thelwall & Kousha, 2017). Many studies have confirmed the benefits of ASNs for both researchers and research institutions, indicating that these tools are now necessary to improve the international visibility of emerging academic systems (Mason, 2020) and to measure many areas of scholarly activity, such as research, integration, application, teaching and co-creation (Nicholas et al., 2015). Measuring the influence of articles and research through citations is transparent but slow; as a consequence, articles and research teams must wait a long time before being truly recognized, and altmetric tools are therefore needed to measure their real-time impact (Warren et al., 2017; Williams, 2017).

The major known international university rankings include the QS World University Rankings, the Times Higher Education World University Rankings, and the Academic Ranking of World Universities (ARWU). ARWU is one of the most influential classical world university rankings (Shanghairanking, 2021). Indicators of university performance are also commonly provided by academic social networking platforms. There are many specialized platforms for scientists: Academia.edu, ResearchGate, Google Scholar, Scopus, Web of Science, ResearcherID or ORCID. They offer various features, such as convenient mechanisms for sharing data, and provide useful indicators for researcher and research unit evaluation (Herman & Nicholas, 2019; Waltman, 2016). At present, one of the most popular ASNs is ResearchGate (RG), with over 16 million users worldwide. It is used to share, discover, and discuss research (ResearchGate, 2020). RG is recognized by the vast majority of researchers (Van Noorden, 2014). It offers many metrics useful for researcher level evaluation: RG Score, Total Research Interest, citations, h-index, recommendations, reads, research items, projects, questions, answers, following and followers (Shrivastava & Mahajan, 2015; Wiechetek, 2019; Wiechetek et al., 2020). These metrics may be used for recruitment, promotion and broad researcher evaluation (Orduna-Malea et al., 2017; Yu et al., 2016; Nicholas, Clark, et al., 2016; Martín-Martín, 2016; Thelwall & Kousha, 2017; Gumpenberger et al., 2016; Jordan, 2015). ResearchGate also provides university level metrics: the total RG Score, the number of publications and the number of researchers, which can be used to compare research units, faculties, or universities as a whole, and to assist with the process of organizational level evaluation. It appears that RG metrics may be an effective altmetric indicator for active researchers, but they may also be used to measure the scientific outputs of research units.

Higher education units are complicated organizations with a “complex production process”, therefore it is difficult to find simple, up-to-date metrics that reflect their performance in a holistic manner (Johnes, 2016). Institutional performance may be improved with the use of specialized social networking platforms. Research conducted by Ali et al. indicates a tendency for lower-ranked institutions to obtain a lower RG Score (Ali et al., 2017). Research also confirms that academic social networking sites provide useful metrics which realistically mirror the research activity of institutions (Yan & Zhang, 2018); therefore, ASNs may serve as a source of indicators for the evaluation of research institutions. The visibility of institutions on ResearchGate is closely correlated with the number of academic staff, so some analyses indicate that the total RG indicators should not be interpreted as strong indicators of university productivity (Lepori et al., 2018). However, Thelwall and Kousha have provided evidence that ResearchGate indicators correlate moderately well with major university rankings (Thelwall & Kousha, 2015). Many of these indicators are also stable across different platforms (Martín-Martín et al., 2016), and in many cases universities that occupy the top positions in one university ranking occupy similar places in other rankings based on bibliometric indicators (Borrego, 2017). New metrics may add richness and differentiation to scientific impact assessment (Jamali et al., 2016). If we assume that universities are collections of researchers, we may also assume that the aggregated metrics of individual researchers are useful for the evaluation of research units.

ResearchGate generates many benefits and much added value for researchers and research units, but it has also been criticized at times. Thelwall and Kousha revealed that the RG portal is dominated by recent articles and that some disciplines, such as arts and humanities, health professions and decision sciences, are underrepresented (Thelwall & Kousha, 2015). It has been alleged that specialized social networks have facilitated copyright infringement (Jamali, 2017). Some RG users have falsified data in their publications, and the platforms do not have the tools required to fight this issue (Meier & Tunger, 2018a). Finally, researchers have addressed the shortcomings of the RG Score: inconsistency with other metrics (Copiello & Bonifaci, 2019), a lack of transparency, the incorporation of the journal impact factor into single-researcher evaluation, obscure indicators (Kraker & Lex, 2015; Meier & Tunger, 2018b), the neglect of certain scholarly activities (Nicholas et al., 2016b) and a reflection of the social networking environment rather than publication impact (Copiello & Bonifaci, 2018). These criticisms must also be considered; university rankings based on ASN metrics may therefore have some drawbacks.

Scholarly social networks offer many metrics that could support the assembly of richer and more up-to-date frameworks for assessing article or researcher impact, an approach that has been called scientometrics 2.0 (Priem & Hemminger, 2010). Altmetric impact measures are based mainly on social media activity and are the latest addition to the evaluation toolbox (Bar-Ilan et al., 2018). These methods will play an important role in the future and may revolutionize the analysis of the impact of scholarly work (Galligan & Dyas-Correia, 2013; González-Valiente et al., 2016). However, at present few articles address a systematic comparison between RG scores and ARWU. It may be assumed that the reputation of a university is largely made up of the combined reputations of its individual employees; therefore, in this paper an alternative university ranking system based on ResearchGate metrics was developed and compared with the Academic Ranking of World Universities.

In this article the authors present a systematic comparison between RG metrics and ARWU. The rest of the paper is organized as follows: in Sect. 2, the authors describe the method used to collect the data and the alternative rankings that were developed. Section 3 presents the results and includes descriptive statistics of the total RG metrics for the analysed universities, an analysis of the developed rankings, and a multiple linear regression model. Finally, Sect. 4 presents the discussion, and Sect. 5 presents the conclusions, limitations and implications.

Research

The research questions

The ResearchGate metrics of the best universities in the world (according to the ARWU ranking) were collected and analysed to answer the following research questions:

  • Q1: Which parameters are comparable in the ARWU and ResearchGate based evaluation system?

  • Q2: What are the values of the university level RG metrics of the world’s top universities?

  • Q3: Are rankings based on the RG metrics positively correlated with ARWU, and can they be used for developing alternative university rankings?

  • Q4: Can we predict a university's position in the ARWU ranking based on RG metrics?

  • Q5: What are the differences between ARWU and RG based ranks by continent?

  • Q6: What groups of ARWU universities can be identified using RG metrics?

To answer the research questions, the authors developed a web scraping tool. The collected data were then analysed using IBM SPSS Statistics. Descriptive statistics, Spearman's rho correlation, k-means clustering, and multiple linear regression were used.

The tool

In order to establish whether the metrics provided by the ResearchGate platform could be useful for the evaluation of research organizations, the authors collected the Total RG Score, the number of publications and the number of researchers for the thousand universities listed in ARWU 2019.

ResearchGate provides three metrics that can be used to characterize research units: the total RG Score (Total RG), the number of members (researchers) and the number of publications. The RG Score is one of the fundamental metrics used in our analysis. It is occasionally accused of a lack of transparency; however, many publications and our previous research (Wiechetek & Mastalerz, 2019) indicate that the metric correlates closely with very transparent indicators such as the numbers of citations and reads. The analysis and observations made by the authors also indicate that the biggest impact on the RG Score value comes from publications (not only their quantity but also their quality), while the level of activity on the portal has a smaller impact. Some research also indicates that researcher level rankings based on the RG Score correlate moderately with other academic rankings and therefore reflect traditional academic capital (Thelwall & Kousha, 2015) as well as platform engagement and seniority (Hoffmann et al., 2016); it may therefore be used as an indicator of research performance (Yu et al., 2016).

In order to download these metrics for all universities in ARWU, two R scripts were implemented. The authors used R version 3.6.1 supplemented with RStudio, an integrated development environment. The web scraping technique was used to obtain data from the ARWU web site and the RG platform. At present, the automatic retrieval of data from the Web for academic and business research projects is common practice (Krotov & Silva, 2018). Most internet search engines use this technique to find specific information on the Internet (Gold & Latonero, 2017). Specialized social network portals also use these tools to gather information about the new publications of their users. Many scientific papers present research based on data collected using web scraping; Google Scholar lists over 24 thousand articles addressing this technique published after 2016. Web scraping offers many new possibilities; however, it can also lead to ethical concerns that are still a “grey area” (Krotov & Silva, 2018; Sugiura et al., 2017).

The script was based on the rvest library (Wickham, 2016) for R, which supports the task of scraping information from web pages. The operation of the script may be described using the following steps (an illustrative R sketch of the loop is provided after the list):

  1. Loading the names of the universities and their RG profiles from the CSV file.

  2. Creating the output file for storing the downloaded metrics (if necessary).

  3. Computing the new starting point for scraping if the previous collection process was interrupted.

  4. In the main loop, for each university on the list:

     a. The hyperlink to the university profile on RG is created.

     b. The full HTML page with the university profile data is retrieved.

     c. Metric values are extracted from the HTML page.

     d. If the metrics are not found, special information is written instead of the metric values.

     e. A data line with the university name, metric values and timestamp is prepared and written to the output file.
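The following R sketch illustrates the loop above. It is not the authors' original script: the institution URL pattern, the CSS selector and the file layout are illustrative assumptions, since the actual ResearchGate page structure is not reproduced here.

```r
# Illustrative sketch of the scraping loop (rvest for R); the URL
# pattern, CSS selector and file columns are assumptions.
library(rvest)

universities <- read.csv("arwu_rg_profiles.csv", stringsAsFactors = FALSE)  # step 1
out_file <- "rg_metrics.csv"
if (!file.exists(out_file)) {                                               # step 2
  writeLines("university;rg_score;profiles;publications;timestamp", out_file)
}
start <- length(readLines(out_file))            # step 3: header is line 1, so this
                                                # equals the next university index
for (i in start:nrow(universities)) {           # step 4: main loop
  url  <- paste0("https://www.researchgate.net/institution/",
                 universities$profile[i])                        # 4a: build hyperlink
  page <- tryCatch(read_html(url), error = function(e) NULL)     # 4b: fetch HTML page

  if (is.null(page)) {
    metrics <- rep("NOT_FOUND", 3)                               # 4d: special marker
  } else {
    metrics <- page %>% html_nodes(".institution-metric") %>%    # 4c: extract metrics
      html_text()
  }

  line <- paste(universities$name[i], paste(metrics, collapse = ";"),
                format(Sys.time(), "%Y-%m-%d %H:%M:%S"), sep = ";")
  write(line, file = out_file, append = TRUE)                    # 4e: append data line
}
```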

The script contained a mechanism for resuming unfinished data collection: if the process was interrupted by the user or by the RG server (e.g. a 429 Too Many Requests error), after a restart the tool continued collecting from where it had stopped. To avoid being blocked by the server, a sleep function was used at the end of each pass of the script to pause the process for some time. The script was used both to collect the RG profile names of the ARWU universities and to collect the values of all university level metrics, in two iterations. The first iteration collected the university profile names where they could be derived from the full university name. In the second iteration, after the missing profile names had been completed manually, the metrics of the top 1000 ARWU universities were collected.
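A minimal sketch of the throttling step described above; the randomized delay bounds are assumptions, as the paper states only that a sleep function paused the process for some time.

```r
# Pause between requests so the RG server does not respond with
# HTTP 429 Too Many Requests; the delay bounds are assumptions.
polite_pause <- function(min_s = 3, max_s = 8) {
  Sys.sleep(runif(1, min = min_s, max = max_s))
}
# Called at the end of each pass of the scraping loop:
# polite_pause()
```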

The CSV file containing the acquired data was then imported into Microsoft Excel to be supplemented. For further analysis and visualization, additional columns labelled Country, Continent and Country Code were added. The spreadsheet was used to create alternative university rankings based on RG metrics and their combinations. The authors prepared several rankings based on the absolute and average values of the RG metrics.

A Microsoft Excel file containing the created rankings was imported into IBM SPSS Statistics 25 to perform statistical analysis and visualization. Descriptive statistics, correlation analysis and multiple linear regression were performed to check whether rankings based on the ResearchGate metrics are highly and positively correlated with the ARWU rank and can therefore be used for real-time university evaluation.

Variables

The collected dataset consists of the main ResearchGate metrics (the Total RG Score, the number of profiles and the number of publications) for all universities presented in the Academic Ranking of World Universities 2019. The data were collected using a web scraping technique in December 2019. The rankings used in the analysis are presented in Table 1.

Table 1 Characteristics of the rankings used in the analysis

The first rankings developed were based on the absolute values of the Total RG Score, the number of members on the RG platform and the number of publications, as well as on the average number of publications per employee and the average RG Score per researcher. The last five rankings were based on combinations of RG metrics.
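The rankings can be derived from the three metrics with simple rank transformations, as in the sketch below. It assumes a data frame `df` with columns named after the metrics; only R_RG_Pub is named in the text (publications multiplied by the Total RG Score), so the other ranking names are hypothetical.

```r
# Rankings from absolute metric values: rank 1 = highest value
df$R_TotalRG <- rank(-df$total_rg_score, ties.method = "min")
df$R_Members <- rank(-df$profiles,       ties.method = "min")
df$R_Pubs    <- rank(-df$publications,   ties.method = "min")

# Rankings from average (per-profile) values
df$R_PubsPerProfile <- rank(-(df$publications   / df$profiles), ties.method = "min")
df$R_RGPerProfile   <- rank(-(df$total_rg_score / df$profiles), ties.method = "min")

# Example of a combined ranking: publications multiplied by Total RG Score
df$R_RG_Pub <- rank(-(df$publications * df$total_rg_score), ties.method = "min")
```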

Results

ARWU 2019 universities and their RG metrics

To answer the research question about the university level metrics (Q2), descriptive statistics were calculated. The analysed metrics varied widely among the universities. The mean Total RG Score was almost 33,567, the average number of profiles per university was slightly above 6484, and the average number of publications per university was nearly 48,481. The descriptive statistics of the RG metrics for the ARWU top 1000 universities are presented in Table 2.

Table 2 Descriptive statistics of the RG metrics of top ARWU 2019 universities

The median is lower than the mean for all of the analysed metrics, so the distributions are asymmetrical with a positive skew. For the Total RG Score, the number of profiles and the number of publications, the skewness values are 2.169, 1.839 and 2.425 respectively. The minimum value of the Total RG Score was over three hundred times lower than the maximum value. The least well represented university on RG had only 82 profiles, while the most represented research organization had over 46,000 affiliated researchers on RG. The relative standard deviation indicates that, for all the metrics, the values are widely spread around the mean (RSD > 86%).
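The reported descriptives can be reproduced for each metric along the lines of the sketch below; the skewness formula is the simple moment estimator, so its values may differ slightly from the bias-corrected estimator used by SPSS.

```r
# Descriptive statistics for one RG metric, including skewness and the
# relative standard deviation (RSD); `x` is a numeric metric column.
describe_metric <- function(x) {
  x <- x[!is.na(x)]
  c(mean     = mean(x),
    median   = median(x),
    min      = min(x),
    max      = max(x),
    skewness = mean((x - mean(x))^3) / sd(x)^3,  # moment estimator
    rsd_pct  = 100 * sd(x) / mean(x))            # RSD as a percentage
}
# describe_metric(df$total_rg_score)
```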

The positive correlations between the three metrics should be noted. The Total RG Score is higher when the number of researchers (rho = 0.897) and the number of publications (rho = 0.897) are higher. The number of researchers is also positively related to the number of publications (Fig. 1).

Fig. 1 The relationship between the RG metrics of the ARWU top 1000 universities

Comparison of alternative RG rankings with ARWU

The dispersion charts (Fig. 2) indicate that the least related rankings are those based on the average RG Score per university profile and the average number of publications per RG profile. The rankings based on the absolute values of the ResearchGate metrics (Total RG Score, number of publications, number of researchers) are more convergent.

Fig. 2 Comparison of alternative RG rankings with ARWU

Both for rankings based on a single metric and for those developed using a combination of RG metrics, the match is better for the first 300 universities. For the middle and last universities in ARWU we observe a large degree of scatter.

Spearman's rho was used to determine whether rankings based on the RG metrics are positively correlated with ARWU (Q3). The analysis shows a statistically significant positive correlation between ARWU 2019 and all of the rankings based on summary ResearchGate metrics for research units (Table 3).

Table 3 The relationship between RG metrics for ARWU 1000 universities
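A sketch of the correlation step in R (the study itself used SPSS); the column names follow the earlier hypothetical data frame.

```r
# Spearman's rho between the ARWU position and one RG-based ranking
cor.test(df$arwu_rank, df$R_RG_Pub, method = "spearman", exact = FALSE)

# Pairwise Spearman correlations among the three raw metrics (cf. Fig. 1)
cor(df[, c("total_rg_score", "profiles", "publications")],
    method = "spearman", use = "complete.obs")
```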

Predicting the position of the university

Let $x_1$, $x_2$ and $x_3$ denote the summary RG Score, the number of university RG profiles and the number of items published on RG by the profiles affiliated to the university. The position in the rank based on RG metrics (RG_ARWU) can then be modelled as follows:

$$RG\_ARWU = a \cdot x_1 + b \cdot x_2 + c \cdot x_3 + E$$
(1)

Multiple linear regression (MLR) was used to create a model for predicting the position of a university in the ARWU rank from the summary metrics provided for research units by the ResearchGate portal (Q4). The results of the MLR model are shown in Table 4.

Table 4 Results of the MLR model

According to the MLR analysis, the final model allowing for an estimation of the position of the university in the world university ranking based on the RG university level metrics is as follows:

$$RG\_ARWU = -0.00295 \cdot \text{RG\_score} - 0.00460 \cdot \text{profiles} - 0.00185 \cdot \text{publications} + 677.04413$$
(2)

The developed model fits the analysed data well (F = 345.248, p < 0.001). The multiple correlation of all the predictors with the place in ARWU is 0.714. According to the model, 51% of the variability of the position in the ranking is explained by the variability of the three ResearchGate metrics.
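The model of Eq. (2) corresponds to an ordinary least squares fit. A hedged R equivalent of the SPSS analysis is sketched below; coefficients obtained this way may differ marginally from the reported ones, and the column names are assumptions.

```r
# Multiple linear regression of the ARWU position on the three RG metrics
model <- lm(arwu_rank ~ total_rg_score + profiles + publications, data = df)
summary(model)   # R-squared ~ 0.51; multiple R = sqrt(R-squared) ~ 0.714

# Estimated ARWU position for a single university
predict(model, newdata = df[1, ])
```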

ARWU members by country and continent

In the ARWU top 1000 rank, the majority were European (38.5%) and Asian (28.9%) universities. The fewest universities came from Oceania, South America and Africa; each of these continents accounted for ≤ 4.1% of the total number of universities. The very top positions were occupied by North American universities. The most represented countries were the United States (20.6%), China (13.2%), the United Kingdom (6.1%) and Germany (5.1%); all of the remaining countries (27.2%) had a representation of less than 5% each.

Considering the continents, we should note that North America has the most universities in the top 400, Asia has the majority in the last 500, while European universities are evenly dispersed from the first to the last position (Fig. 3). South American universities fare better in the RG rankings than in ARWU. Asian research units, especially those in the lower 500 group on the list, are better represented in ARWU than by the absolute indicators provided by RG; however, when we consider the average RG Score and the average number of publications per profile, they should be placed higher. North American universities, especially those in the top 200 group, are placed higher by ARWU than by the average values of the RG metrics. Finally, research units from Oceania should, according to the average RG Score and the average number of publications, be placed lower in the ranking scheme.

Fig. 3 Number of universities by country and continent

A histogram analysis (Fig. 4) was used to check the differences between the ARWU and RG based ranks by continent (Q5). The analysis indicates that, according to the ARWU rank, the top positions are occupied mainly by universities from North America and Europe. The middle places are taken by both European and Asian academic centres. Africa, Oceania and South America have few universities on the ranking list; however, while African and South American universities occupy the last positions, most universities from Oceania occupy positions in the top 400. When we consider the ranking list based on the total university RG Score, we observe that European universities are better represented than in ARWU (most of them are placed in the first 500), while Asian institutions are poorly represented in the high and medium positions. African units also move to better positions.

Fig. 4 Comparison of ARWU and RG based ranks by continent

American and Oceanian universities occupy very similar places in both the ARWU and RG Score based rankings. When we consider the total number of affiliated profiles, American and Asian universities decline in the rankings: Asian universities fall from the middle positions to the last places, while American ones decline from the top positions to the middle ones. The universities that benefit the most are located in Africa and South America; these universities rise from the last places to the top positions.

The third university level metric available on the RG platform is the total number of publications. The alternative ranking based on this metric looks very similar to ARWU, with two exceptions: North American universities rise from the top to the very top positions, and European academic centres rise from the middle towards the top positions. A large difference may be noted between ARWU and the rankings based on the average values of the RG Score and the number of publications. When we consider the average RG Score, African and Oceanian universities fall to the last positions, North American universities decline from the top to the middle, and European institutions fall from the middle to the last places; the only academic centres which gain according to this ranking are located in Asia. When considering the average number of publications, we may also note significant differences between the ARWU and RG based rankings. African and Oceanian universities lose their positions in the middle of the ranking list, European academic centres fall from the top positions to the middle ones, and North American centres likewise decline from the top positions to the middle. South American universities occupy similar positions. The main beneficiaries are Asian academic centres, which rise from the last and middle places to the very top positions.

Groups of ARWU universities according to the ResearchGate metrics

The k-means clustering method was used to identify groups of universities presented in the ARWU ranking (Q6). Before the clustering was performed, the metrics were standardized. Based on the three university level RG metrics (Total RG Score, number of affiliated researchers and number of publications), the authors identified three groups of universities (Fig. 5).

Fig. 5 Identified groups of ARWU universities, k-means clustering
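A sketch of the clustering step in R; the random seed and the nstart value are assumptions, as they are not reported in the paper.

```r
# Standardize the three metrics, then run k-means with k = 3
set.seed(2019)                     # assumed seed, for reproducibility only
z  <- scale(df[, c("total_rg_score", "profiles", "publications")])
km <- kmeans(na.omit(z), centers = 3, nstart = 25)
table(km$cluster)                  # cluster sizes; the paper reports 66/304/630
```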

The F test indicated that all three analysed RG metrics were statistically significant for developing the clusters. The clustering procedure identified the following groups. Cluster 1 can be labelled “Leaders” (n = 66). It consists mostly of universities with high values of all three analysed metrics, mainly universities from the top 100 of the ARWU rank; however, this cluster also includes four universities ranked at the end of the second hundred of the ranking. Within this group there is a relatively large spread, with several universities having very high metric values. Cluster 2, the “Middlers” (n = 304), can be characterized as a set of universities with medium values of all three metrics. The group is moderately consistent; within it, the universities with lower (medium) values of all RG metrics are more similar to one another. Members of this cluster occupy various positions in the ARWU rank, from 6 to 900. Finally, the third identified cluster is the most consistent and can be described as “Closing”. It contains the largest number of universities (n = 630). The values of the RG metrics of the universities assigned to this cluster are the lowest, but the most similar. Its members mostly occupy places beyond the top hundred.

The descriptive statistics of the three identified groups (Table 5) indicate the large distance between the clusters. The median Total RG Score for the Leaders is two and a half times higher than for the Middlers and seven times higher than for the Closing cluster.

Table 5 Descriptive statistics of the identified groups of universities

The Leaders also have many more researchers and publications than the Middlers and Closing clusters. The clustering analysis indicates that the university level metrics provided by RG can be used to identify and evaluate different types of research units. Such analysis can be used to find a research unit that could serve as a benchmark, but also to evaluate the progress of a university and assess the distance between research units. The analysis indicates that, according to the RG metrics, it is not easy to move from the Closing cluster to the Middlers or Leaders groups; however, the differences between universities within the Closing cluster are not large, so changing places in the ranking inside this cluster is possible with a relatively small improvement in the values of the RG metrics.

Also of interest are the cases where the developed clusters do not match the ARWU position. In the Leaders cluster we can notice thirteen universities that occupy places at the end of ARWU (beyond the first 700). They are mostly Asian and European universities and were incorporated into the cluster because of the huge number of profiles on RG. In the Closing cluster we can also observe nine universities that, according to ARWU, should occupy places in the top 150. These are mostly universities from North America and Europe. They were assigned to the Closing cluster mostly because of the small number of ResearchGate profiles affiliated with them; they may be small universities, or only a small proportion of their employees may have an RG account.

Discussion

The analysis of the ARWU methodology indicates that the ranking is based mostly on staff and alumni performance. The ResearchGate portal provides three university level metrics: the Total RG Score, the number of publications and the number of researchers. The analysis and observations made by the authors indicate that the most significant impact on the RG Score value comes from publications (not only their quantity but also their quality), while activity on the portal has less influence. Some scientists also indicate that researcher level rankings based on the RG Score correlate moderately with other academic rankings and therefore reflect not only traditional academic capital (Thelwall & Kousha, 2015) but also platform engagement and seniority (Hoffmann et al., 2016); it may therefore be an indicator for measuring researcher performance (Yu et al., 2016). In our opinion, the most important and valuable asset of a university is its people, particularly researchers, and therefore the aggregation of researcher level metrics may be used to build university level indicators useful for developing a new metric which can support traditional academic rankings. In this respect the RG based ranking is similar to ARWU: both use, among other indicators, the number of citations and the number of published papers (Q1).

The descriptive statistics showed that the RG metrics of the ARWU top 1000 universities are very diverse. The ranking includes both big universities with thousands of researchers and hundreds of thousands of publications, and small units with only about a hundred employees active on ResearchGate. We can therefore conclude that not only big universities can be the best ones. However, the top ARWU positions are occupied mainly by large organizations with many researchers and publications (Q2).

We may observe a strong positive correlation between ARWU and the three rankings based on RG metrics, so they may be used to prepare alternative university rankings (Q3). The closest correlation is between ARWU and the alternative ranking (R_RG_Pub) based on the number of publications multiplied by the Total RG Score value (rho = 0.808), followed by the correlation between ARWU and the number of publications (rho = 0.802). Surprisingly, the correlations between ARWU and the average number of publications per profile and the average RG Score per profile are weak (rho ≤ 0.241). Considering all ARWU top 1000 universities, we may conclude that the rankings based on absolute ResearchGate metric values are more similar to the Academic Ranking of World Universities. According to the correlation analysis, universities that wish for more recognition on ResearchGate should motivate their employees to create and maintain RG profiles. One important factor is possessing a large number of publications and making them available through the platform. Finally, the quality of publications is reflected in a high RG Score value, which is also an important factor in research unit evaluation. Taking into account the level of similarity between the ResearchGate metrics-based rankings and the reference ARWU, we can state that university level RG metrics reflect the impact of research organizations.

The developed model based on multiple linear regression shows that the position in the university ranking can be partially estimated from the summary RG Score, the number of profiles and the number of affiliated publications provided by the ResearchGate portal (Q4). The least significant metric in the model is the number of affiliated ResearchGate profiles. However, 49% of the variability of the position in the ARWU rank must be explained by other factors, which stem from the “complex production process” of universities (Johnes, 2016). The RG metrics can reflect only a few areas of university activity related to research and international cooperation; a holistic university evaluation should also include integration, application, teaching, business cooperation, adaptation to market requirements and many other considerations.

Comparing the ARWU and RG based ranks by continent, we may state that alternative rankings based on absolute RG metrics are more similar to ARWU and more accurately reflect the impact of a university than rankings developed with the use of average university level RG indicators. According to these rankings, universities in developing countries, especially those located in Africa and South America, gain the most (compare Asmi & Margam, 2017; Meishar-Tal & Pieterse, 2017; McGillivray & Astell, 2019). The situation of the academic centres in Asia worsens, while the positions of the North American and European academic centres look similar to ARWU. The Asian universities gain when we consider indicators per RG user (Q5).

The k-means clustering method allowed the ARWU top 1000 universities to be divided into three different clusters: Leaders, Middlers and Closing. The analysis indicates that the universities at the end of the ranking are more similar according to the RG metrics, while in the Leaders cluster the differences are huge. The clustering also confirmed the previous analysis, indicating that universities from the top, middle and last positions in ARWU can form coherent clusters (Leaders, Middlers and Closing) consisting of universities with similar metric values (Q6).

The performed analysis has several practical implications. We may state that rankings based on RG metrics could be useful for university evaluation (Borrego, 2017; Priem & Hemminger, 2010; Thelwall & Kousha, 2015; Yan & Zhang, 2018), because they produce results similar to those of ARWU. Of course, the similarity is not perfect. The reason may be the stress placed on social network activity (Copiello & Bonifaci, 2018), which does not reflect all scholarly activities (Nicholas, Herman, et al., 2016). However, if university authorities wish to obtain a rapid and inexpensive analysis based on real-time and stable data (Martín-Martín et al., 2016), they can use the metrics provided by ResearchGate. Of course, this solution also opens up opportunities for manipulation. Manipulation may come from researchers themselves, who may infringe copyright, fake profile data or generate scores not through research but through social network activity (Jamali, 2017; Meier & Tunger, 2018a; Kraker & Lex, 2015; Thelwall & Kousha, 2015); it may also come from university authorities using specially selected metrics. For example, to show universities from developing countries in a better light, the absolute values of the RG indicators could be used; to enhance the promotion of Asian universities, the average values of the university level metrics could be used. To avoid manipulation, the evaluation process should have a clear methodology and be based on input data that is consistent with reality and describes the broad aspects of the functioning of the university.

Conclusions

ResearchGate is one of the leading social networks for researchers (Jamali et al., 2016); it therefore attracts the attention not only of researchers but also of research unit managers who want to build scientific networks and promote their achievements all over the world. The aim of the article was to perform a systematic comparison between RG metrics and ARWU. In some respects the RG based ranking is similar to ARWU: both are based mostly on the research outcomes of staff.

The implemented web scraping tool was used to download the RG metrics of the ARWU top 1000 universities. The calculated alternative university ranks were compared with the Academic Ranking of World Universities (ARWU, 2019). Based on the analysis performed, we may conclude that the metrics provided by scholarly social networks can be used not only for single researcher evaluation but also for benchmarking research units and entire universities. Of course, specialized monitoring institutions offer more precise rankings that consider many areas of university operation, such as research, education, business cooperation and internationalization. However, these rankings are prepared and published quite rarely, mainly once a year. The tools and evaluation procedure presented in the article confirm that the metrics provided by specialized social networks are easily accessible and provide up to date information about the impact of research conducted by university staff; therefore, they may be used to develop useful university ranks and new, internet-based evaluation systems.

The fact that RG metrics correlate with rankings is known in the literature. Existing research (e.g. Ali et al., 2017; Lepori et al., 2018; Ortego, 2015; Ravenscroft et al., 2017) presents other points of view on the problem described here compared to our research. Additionally, we did not find published works directly related to the ideas of this paper. We hope that our research complements the different perspectives on university rankings and on the presentation and evaluation of their scores. There is also a direct connection between our research and university practice, which may allow university managers and administrators to improve their universities' activity in comparison with others. Our conclusions are based both on data analysis and on personal business and academic experience connected with university management, and we are aware of the possibility of using the research results in university practice to raise an institution's competitive position in the global network of academic entities.

Modern universities should look for new solutions to assist them in the task of monitoring and evaluating their performance and progress. New data sources (social media) and technologies (e.g. big data), as well as Web 2.0 and Web 3.0 tools, provide a substantial opportunity for economical and rapid data collection and analysis. As stated in the introduction, we agree that the RG Score is not an ideal indicator (Jacsó, 2005; Jamali, 2017; Kraker & Lex, 2015; Thelwall & Kousha, 2017); however, many researchers have confirmed the benefits of specialized social networks both for researchers and research institutions and their usefulness in increasing international visibility, integration and knowledge co-creation (Orduna-Malea et al., 2017; Yu et al., 2016; Nicholas, Clark, et al., 2016; Martín-Martín, 2016; Thelwall & Kousha, 2017; Gumpenberger et al., 2016; Jordan, 2015).

The ideal tool for university evaluation does not exist; even traditional, well-known rankings like ARWU are sometimes criticized for relying mostly on awards and prizes, for dependence on university size, and for problems with transparency and aggregation methodology. Furthermore, education and research systems are evolving, and therefore new sources of information and evaluation methods should be developed and used. The major concern is still the lack of transparency of the RG Score, which uses an unknown algorithm to calculate its value (Copiello, 2019; Kraker & Lex, 2015); this may discourage some parties from using it to build university rankings. Secondly, the authors assumed that the Academic Ranking of World Universities (ARWU) is a reliable and true measure of research quality, while in fact many problems with university rankings can be identified (Hazelkorn, 2009). The correlation analysis showed that rankings based on the Total RG Score, the number of profiles and the number of affiliated publications available on ResearchGate are closely correlated with the ARWU rank; therefore, university authorities (despite the lack of transparency of the RG Score) should motivate staff to create profiles on scholarly social networks like RG (Gaitán-Angulo et al., 2019), not only to disseminate their research results widely but also to improve the university metrics which can be used for global research unit evaluation.

The main contribution of this paper is to show that traditional methods of constructing university rankings can be supplemented with rapid, economical and effective methods based on metrics provided by popular specialized social networking platforms. This may be the next step towards constructing new and more effective indicators for measuring university performance, based on both quantitative and qualitative digital data. Of course, more in-depth research in this area is required to find better solutions; however, the ResearchGate metrics seem to be a good starting point.