Introduction

It is common practice in scientometrics and in the research evaluation of institutions to rely on the staff’s productivity and impact (Altanopoulou et al., 2012; Edgar & Geare, 2013). Productivity is a scientometric indicator based on publication numbers, and impact is an indicator based on citations. Research on departments or institutions belongs to the meso-level of scientometrics (Rousseau et al., 2018, p. 247). If we want to conduct research evaluation on the meso-level in a scientifically sound way, we must have detailed and justified answers to the following questions: How do we determine who is part of the institution in a specific time interval? How do we collect and quantitatively describe the complete set of an institution’s publications and their citations? And how do we count publications and citations for multi-authored papers, given that the counting methods greatly influence the results, for instance, on rankings of institutions (Lin et al., 2013)? In contrast to descriptions and evaluations of single researchers on the micro-level, scientometric studies on the meso-level face the challenge of capturing the exact number of staff members in the considered time frame (Toutkoushian et al., 2003) as the “underlying production capability” (Thelwall & Fairclough, 2017, p. 1142) to guarantee a fair ranking (Vavryčuk, 2018), if something like that is ever possible.

But, stop! What exactly does “productivity” mean? In economics and business administration, productivity “describes the relationship between output and the inputs required to generate that output” (Schreyer & Pilat, 2001, p. 128). Productivity in economics is, therefore, no simple output measure but a relation between the amount of output and the respective input. In research, the output is the number of publications. The input is the totality of factors necessary to generate the output, such as the number of researchers (including their qualifications and academic grades, their positions, their salaries, and their working hours), the technical and administrative staff, the institution’s equipment (laboratories, computers, etc.), and other cost sources (for instance, rents, accessories, and supplies). In scientometric analyses, it would be very difficult or even impossible to collect data on all cost factors, due to the effort involved or for data privacy reasons (e.g., regarding the salaries of individual researchers). This is why we do not relate productivity to monetary input. This clear limitation can only be overcome if sufficient cost data are at hand (see, for instance, Abramo et al., 2010, who had access to some Italian cost data).

So we have to reduce the complex productivity analysis to labor (or workforce) productivity. With the term “labor productivity” we follow a common definition in economics, namely the “amount of output per unit of labor” (Samuelson & Nordhaus, 2009, p. 116). Labor productivity is the “performance” of a unit of labor, e.g., of an employee, a department, or a company (Mihalic, 2007, p. 104). It is determined by measuring the quantitative output which a unit of labor produces in a certain period of time. In economics, the concept of labor productivity is a simple measure relating output to input; it is by no means a criterion of the quality or the workload of a unit of labor, and it does not say anything about the produced goods. Therefore, labor productivity should be applied in combination with other indicators.

In scientometrics, we can define “labor productivity” as the relation between the number of publications (as output) and the number of an institution’s scientific staff (as input) in a given time frame (say, a year) (so did, e.g., Akbash et al., 2021, and Abramo & D’Angelo, 2014); it is a size-independent measure (Docampo & Bessoule, 2019). For Abramo et al. (2010, p. 606), labor productivity “is a fundamental indicator of research efficiency.” As we cannot collect cost data and consider neither the qualifications nor the academic ranks of the researchers, we rate all researchers and their labor input as uniform (Abramo et al., 2010). However, we add “labor impact” to our toolbox in order to arrive at a more comprehensive picture: labor (or workforce) impact is the number of citations of the staff’s publications per researcher (Abramo & D’Angelo, 2016a, 2016b).

As not all researchers work full-time, and some of them do not work the entire year, it is necessary to calculate full-time equivalents (FTE) (Kutlača et al., 2015, p. 250). For example, a full-time researcher working all year round counts as 1.0 FTE, a half-time employee as 0.5 FTE, and a full-time faculty member working for three months as 0.25 FTE. With FTE, a size-independent comparison between institutions with different scientific workforces, and therefore a fairer ranking, becomes possible. That is why we calculate a research institution’s labor productivity as publications per FTE and its relative impact as citations per FTE. Additionally, we work with different counting methods (i.e., summarized whole counting, aggregated whole counting, and fractional counting). The effect of changing the counting method in the calculation of publications or citations per FTE has not been analyzed in much detail in the scientometric research literature.
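
To make the FTE arithmetic and the two per-FTE indicators concrete, here is a minimal sketch in Python (the class and function names are ours; the study itself stored its data and performed most calculations in Microsoft Access, see the Methods section):

```python
from dataclasses import dataclass

@dataclass
class Employment:
    """One employment spell of a researcher within the evaluation window."""
    fraction: float  # share of a full-time position, e.g. 0.5 for half-time
    months: int      # months employed within the year (1..12)

    def fte(self) -> float:
        # A full-time researcher employed all year counts 1.0 FTE,
        # a half-time employee 0.5 FTE, three full-time months 0.25 FTE.
        return self.fraction * self.months / 12

def labor_productivity(publications: float, total_fte: float) -> float:
    """Publications per FTE (size-independent productivity indicator)."""
    return publications / total_fte

def labor_impact(citations: float, total_fte: float) -> float:
    """Citations per FTE (size-independent impact indicator)."""
    return citations / total_fte

# The three FTE examples from the text:
print(Employment(1.0, 12).fte())  # 1.0
print(Employment(0.5, 12).fte())  # 0.5
print(Employment(1.0, 3).fte())   # 0.25
```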

Research output (the absolute number of publications) and impact (the absolute number of citations) are size-dependent indicators; they cover the total contribution and impact of a research unit to science, without relating it to the size of the research unit. Labor productivity (the number of publications per FTE) and labor impact (the number of citations per FTE) are size-independent indicators, which are “about the contribution of a research unit to science relative to the size of the unit” (Waltman et al., 2016).

If a research institution employs different staff groups, e.g., professors, post-docs, and pre-docs, individual researchers have different labor costs. In Austria, for example, according to the collective agreement for university employees, a professor’s salary is around 30 to 80% (80 to 240%) higher than that of a post-doc (pre-doc) (Kollektivvertrag, 2022, §§47–49). The differences are often even greater in practice, since many professors are offered a salary above the collective agreement in appointment negotiations. Various studies have shown that (full) professors are also more productive: Abramo et al. (2011) studied the performance differences between full, associate, and assistant professors in Italy; Ventura and Mombrú (2006) published a similar study on full and associate professors in Uruguay; finally, Blackburn et al. (1978) analyzed the productivity of researchers by age and sex, by their activity in the department, and by the work environment, especially the reputation of the university, all showing that there are indeed differences. Data from our project also exhibit differences in the publication productivity of professors and other staff members (Reichmann & Schlögl, 2021). For the concrete calculation of the labor costs of every staff member, it is necessary to have access to salary data, which was not possible in our case due to the protection of personal data. Therefore, we had to abstain from calculating labor productivity in terms of money and focus on counting full-time equivalents. For counting publications with multiple authors, we can use whole counting (each publication counts “1” for every co-author), fractional counting (each publication counts 1/n for each of its n co-authors), or other counting methods (Gauffriau, 2017), which we skip here. For counting citations, both of these counting methods are also possible (Leydesdorff & Shin, 2011). If field normalization is necessary, not all citations count “1” or “1/n”; the exact value depends on the concrete normalization factor.

Gauffriau et al. (2007) also discuss whole counting versus fractional counting. Waltman and van Eck (2015) recommend fractional counting, especially at the levels of countries and research organizations and whenever the study requires field normalization. Gauffriau (2021) presents a comprehensive review of (no less than 32) counting methods in scientometrics. Different counting methods may lead to different results when it comes to ranking units of analysis as, e.g., individual scientists, institutions, or countries (Gauffriau et al., 2008). Following Gauffriau and Larsen (2005, p. 85), counting methods are “decisive for rankings based on publication and citation studies.”

If one applies whole counting at the meso-level (and also at the macro-level, for example, in a comparison by countries), we have to distinguish between summarizing the counts of researchers of the same institution (or, at the macro-level, of the same country) and aggregating those counts. Summarizing (called “complete counting” in the terminology of Gauffriau et al., 2007) means adding up the figures for every co-author, including co-authors from the same institution. Assuming that two co-authors belong to one institution (both with the whole count of “1”), this method leads to a value of “2” for the institution, which is very problematic, as it is duplicate counting at the meso-level. In contrast, aggregating always considers the affiliation of the co-authors and counts “1” in our example. To show the differences between summarizing and aggregating, we will calculate both values for our example institutions. For Gauffriau et al. (2007), whole counting with summarization is not permissible at the meso-level, as such an indicator is “non-additive.” However, could summarized values be useful when compared with aggregated values? In the case of co-authorship at the institutional level, a slight difference between the summarized and the aggregated whole counting values indicates predominantly external co-authorship (i.e., collaboration with authors from other institutions), while a large difference indicates predominantly internal co-authorship (i.e., collaboration with other members of the same institution) (see also Gauffriau et al., 2007).

The following example clarifies the three counting methods applied in this study: (1) whole counting with summarizing, (2) whole counting with aggregating, and (3) fractional counting (M. Gauffriau, personal communication, 2022-06-30). Consider an article written by three researchers X, Y, and Z. X works at institution A, and Y and Z work at institution B.

Counting method (1): summarized whole counting

Author X from institution A: 1 credit, Author Y from institution B: 1 credit, Author Z from institution B: 1 credit.

Institution A: 1 credit, Institution B: 1 + 1 = 2 credits.

Counting method (2): aggregated whole counting

Institution A: 1 credit, Institution B: 1 credit.

Counting method (3): fractional counting

Author X from institution A: 1/3 credit, Author Y from institution B: 1/3 credit, Author Z from institution B: 1/3 credit.

Institution A: 1/3 credit, Institution B: 2/3 credits.

In the case of aggregated whole counting (counting method 2), we look directly at the institution; therefore, there are no values for individual authors.
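
The three counting methods are straightforward to operationalize. The following minimal sketch (in Python; the function names are ours, and each publication is assumed to be given as a list of (author, institution) pairs) reproduces the credits of the example above, including the summarizing/aggregating distinction discussed earlier:

```python
from collections import Counter

def summarized_whole(publications):
    """Counting method (1): every author contributes 1 credit to their
    institution; co-authors from the same institution are added up."""
    credits = Counter()
    for authors in publications:
        for _, institution in authors:
            credits[institution] += 1
    return credits

def aggregated_whole(publications):
    """Counting method (2): each institution in the by-line receives
    exactly 1 credit per publication, regardless of how many of its
    members are co-authors."""
    credits = Counter()
    for authors in publications:
        for institution in {inst for _, inst in authors}:
            credits[institution] += 1
    return credits

def fractional(publications):
    """Counting method (3): each of the n co-authors receives 1/n; an
    institution's credit is the sum over its authors."""
    credits = Counter()
    for authors in publications:
        share = 1 / len(authors)
        for _, institution in authors:
            credits[institution] += share
    return credits

# The worked example: X at institution A; Y and Z at institution B.
paper = [("X", "A"), ("Y", "B"), ("Z", "B")]
print(summarized_whole([paper]))  # Counter({'B': 2, 'A': 1})
print(aggregated_whole([paper]))  # Counter({'B': 1, 'A': 1})
print(fractional([paper]))        # B: 2/3, A: 1/3
```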

Based on what we have discussed above, we formulate three research questions (RQs):

RQ1:

To what extent do indicators of production and impact (absolute numbers of publications and citations) on the one hand and labor productivity and labor impact (numbers of publications and citations per FTE) on the other hand differ at the meso-level? Are the rankings of the institutions affected?

RQ2:

To what extent do indicators using aggregated whole counting and indicators using fractional counting differ at the meso-level, again for production/impact and labor productivity/labor impact? And, in turn, are the rankings of the institutions affected?

RQ3:

Can the difference or the quotient between summarizing and aggregating whole counting values indicate an institution’s internal or external co-authorship preference?

We will count values for an institution’s output and its absolute impact, i.e., absolute numbers of publications and citations (whole and fractional counting), as well as its labor productivity and relative impact (values per FTE, again with whole and fractional counting). For whole counting, we differentiate between summarizing and aggregating. In the end, we arrive at twelve different scientometric indicators at the meso-level (Table 1): two more classical indicator sets counting absolute numbers of publications and citations (sets 1 and 2) and two new indicator sets relating the numbers of publications and citations to FTE (sets 3 and 4). How will values and rankings change when we vary the indicators? We will show differences in values and rankings using the example of two research departments.

Table 1 Scientometric indicators on the meso-level

Methods

Our research method is case study research (Flyvbjerg, 2006; Hays, 2004). We analyze two paradigmatic institutions of information science in German-speaking countries (Friedländer, 2014): the Department of Information Science at Heinrich Heine University Düsseldorf in Germany (Gust von Loh & Stock, 2008) and the Institute for Information Science and Information Systems at Karl Franzens University Graz in Austria (Reichmann et al., 2021; Reichmann & Schlögl, 2022), since 2020 part of the Dept. of Operations and Information Systems, for the ten years from 2009 to 2018. The research topics of both departments are similar; they work on information systems, especially mobile systems, science communication, citation analysis, university libraries, and scientific journals (Graz), as well as on social media, information literacy, informational (smart) cities, information behavior, information retrieval, and knowledge organization (Düsseldorf). While the information scientists in Graz have a stronger focus on information systems, their colleagues from Düsseldorf do more research on information services (including social media services) (Dorsch et al., 2017). However, there are significant differences in the total number of researchers (in favor of Düsseldorf) and in the number of professors (in favor of Graz). The mean number of co-authors of the publications from Graz is 2.02; for Düsseldorf, the corresponding value is 2.69, so the counting method for co-authored publications matters considerably. At the beginning of our research, we were confronted with two problems: How do we get a complete list of all staff members, and how do we get a complete list of their publications and of the citations of these publications? At this scientometric level, it is essential to guarantee the completeness of all indicators to avoid misleading figures or incomparable data sets (Reichmann & Schlögl, 2021; Reichmann et al., 2022).

The evaluated institutions and their members

What is the unit of evaluation at the meso-level? How do we determine who is part of the institution? As the units of evaluation, we chose a department and an institute of Information Science. Personnel records would be the authoritative source for the scientific staff’s employment, including the period and the extent (hours per week) of employment; however, those official documents are confidential and by no means open data. So we had to use published information (e.g., on personal or institutional websites) and ask our colleagues personally (in 2021). We considered scientific staff in temporary projects and research assistants only if they were employed and held an academic degree. Furthermore, we excluded technical and administrative staff. In some scientific disciplines, technical staff are mentioned in the articles’ by-lines; this is only rarely the case in information science.

Both full-time and part-time positions were considered, and we also captured the months per year the researchers were employed. We did not consider visiting scholars. Pseudonyms (in our case study, “Mathilde B. Friedländer”) were resolved. Our lists of the institutions’ members covered 26 researchers from Düsseldorf and eight from Graz.

The evaluated institutions’ publications

How can we gather a complete set of all research publications of the institution’s members? As all sources are incomplete (Hilbert et al., 2015), we worked not only with publication data from Web of Science (WoS), Scopus, and Google Scholar but also with personal and institutional publication lists (Dorsch, 2017; Dorsch & Frommelius, 2015; Dorsch et al., 2018). However, we had to rely on the major multidisciplinary bibliographic information services for citation data. For all 26 researchers in Düsseldorf and all eight researchers in Graz, we searched WoS, Scopus, and Google Scholar for publications and citations.

The country of the institution proved to be essential for the method of collecting publication data. Due to Austria’s university regulations, including the duty to report the institutions’ intellectual capital statements (German: Wissensbilanzen) annually (§ 13 (6) Universitätsgesetz, 2002), we had an excellent institutional repository for the institution in Graz. There is no such regulation in Germany, so we had to collect the publication data for Düsseldorf from the researchers’ personal publication lists on their websites.

In the end, we evaluated all publication data critically—partly in co-operation with the researchers. For data storage, retrieval, and most parts of the calculations, we used Microsoft Access.

Results

Tables 2 and 3 exhibit the values of our twelve meso-level indicators for the two paradigmatic Information Science institutions. In the 10 years analyzed, the institution in Düsseldorf published 345 documents and the institution in Graz 228. Over the entire 10 years, Düsseldorf’s labor input was 114.3 FTE, while Graz’s was only 51.3 FTE. Papers of Düsseldorf’s institution were cited 705 times in WoS, 1532 times in Scopus, and 5491 times in Google Scholar; Graz’s papers received 204 (WoS), 294 (Scopus), and 879 (Google Scholar) citations (all data as of September 2021). In this paper, we only consider the citation figures from WoS.

Table 2 Scientometric indicators for the Information Science Dept. in Düsseldorf
Table 3 Scientometric indicators for the Information Science and Information Systems Institute in Graz

Production and impact versus labor productivity and labor impact (RQ1)

Düsseldorf’s production (Agg(P)) over the observed 10 years covers 345 publications, while Graz’s is 228 papers. Düsseldorf’s 345 documents received (as of September 2021) 705 citations (Agg(C)) in WoS, and the Graz papers received 204 citations. Due to the non-additivity of the researchers’ whole counting values, applying Sum(P) and Sum(C) does not make sense at the meso-level. (However, we will use Sum(P) and Sum(C) to answer RQ3.) So, Düsseldorf published 117 more documents than their colleagues from Graz and received 501 more citations in WoS. In an institutional ranking, Düsseldorf comes first and Graz a distant second, with only 66.1% of Düsseldorf’s production and 28.9% of Düsseldorf’s absolute impact (Table 4).

Table 4 Düsseldorf and Graz in comparison

Now we turn to labor productivity and labor impact. Düsseldorf’s labor productivity (Agg(P)/FTE) equals 3.02, while Graz’s is 4.44. As can be seen, the ranking of the institutions changes dramatically. On average, each member of the Information Science Dept. in Düsseldorf published (per FTE) 3.0 papers every year and each information scientist from Graz 4.4 papers, i.e., 1.42 papers more than a researcher from Düsseldorf. So Graz reaches 147.0% of Düsseldorf’s labor productivity. Düsseldorf’s labor impact (Agg(C)/FTE) is 6.17 and still higher than that of Graz (3.98), but in contrast to the absolute impact, the difference between the two institutions decreases. Concerning absolute impact, Graz shows only about 28.9% of Düsseldorf’s value (204 citations in Graz in relation to 705 in Düsseldorf), but concentrating on labor impact, Graz reaches 64.5% of Düsseldorf’s value (3.98 in relation to 6.17).

Considering our case study, we arrive at a clear result for answering RQ1: There are indeed significant differences between production and absolute impact indicators on the one side and labor productivity and labor impact on the other. Also, the ranking order of the institutions changes for some indicators. As size-independent indicators, labor productivity and labor impact have benefits for scientometrics and allow for an additional view on research institutions.

Whole counting versus fractional counting (RQ2)

Whole counting (here, of course, using only the aggregated values) does not consider the number of co-authors, but fractional counting does. In our case, we calculated the share of one author using the simple formula 1/n, where n is the number of co-authors. How do the example institutions’ whole and fractional counting values differ? As both institutions publish co-authored papers, the fractional counting values are necessarily lower than the whole counting values.

Considering fractional counting of publications (Sum(PFC)), Düsseldorf published 271.7 papers (or—better—publication points) compared to 345 when applying whole counting. They received 385.0 citation points (Sum(CFC)) instead of 705 for whole counting. Using fractional counting (Sum(PFC)), Graz arrived at 173.3 publication points (for 228 publications) and 86.0 citation points (Sum(CFC)) (derived from 204 citations). Comparing both institutions under the lens of fractional counting, Düsseldorf ranks first for publication points (a difference of 98.4 in favor of Düsseldorf) and citation points (a difference of 299 points).

Regarding fractional counting of labor productivity and labor impact, Düsseldorf has a value of 2.38 for labor productivity and 3.37 for labor impact, while Graz comes to 3.38 for labor productivity and 1.68 for labor impact. Again, the institutions change their ranking positions when we differentiate between production and labor productivity. However, these changes are due to the differences between production and labor productivity as well as between absolute impact and labor impact, not to the differences between whole and fractional counting, which are indeed observable (and all in favor of Düsseldorf) but rather small (Table 4). The difference between whole and fractional counting depends on the institutions’ culture of co-operating within the institution and with external researchers.
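
The per-FTE values reported for RQ1 and RQ2 follow directly from the totals given above; the following minimal verification sketch (Python; the variable names are ours) reproduces them:

```python
# Totals reported above (publications 2009-2018; citations: WoS, Sept. 2021).
institutions = {
    "Duesseldorf": {"fte": 114.3, "agg_p": 345, "agg_c": 705,
                    "p_fc": 271.7, "c_fc": 385.0},
    "Graz":        {"fte": 51.3,  "agg_p": 228, "agg_c": 204,
                    "p_fc": 173.3, "c_fc": 86.0},
}

for name, d in institutions.items():
    print(name,
          round(d["agg_p"] / d["fte"], 2),  # labor productivity:       3.02 / 4.44
          round(d["agg_c"] / d["fte"], 2),  # labor impact:             6.17 / 3.98
          round(d["p_fc"] / d["fte"], 2),   # fractional productivity:  2.38 / 3.38
          round(d["c_fc"] / d["fte"], 2))   # fractional impact:        3.37 / 1.68
```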

Internal or external co-authorship (RQ3)

Summarizing the whole counting values of individual researchers from the same institution does not make sense if a research analysis takes place at the institutional level. However, we can use the differences and quotients between the summarized and the aggregated values as scientometric indicators of internal co-operation at the meso-level.

If we look at Table 5, we see much larger values for Düsseldorf than for Graz for all our indicators, i.e., for production (publications) and absolute impact (citations) as well as for labor productivity and labor impact. For production, the difference indicates the number of internal co-authorships, and the quotient the average number of internal authors per publication. If all co-authors are from other institutions, the difference between summarizing and aggregating is zero, and the quotient equals 1. The higher these values, the more internal co-authors (counted twice or multiple times) are involved. Table 5 shows that internal co-authorship is relatively high in Düsseldorf: there are 289 internal co-authorships, and one paper is, on average, co-authored by 1.84 internal staff members. Internal co-operation is much lower in Graz, with only 30 internal co-authorships; on average, a paper is co-published by 1.13 colleagues from Graz. The average number of authors (internal and external) per paper is 2.69 for Düsseldorf and 2.02 for Graz. To calculate the proportion of internal co-authorship, 1 must first be subtracted from each of these values. Accordingly, the proportion of internal co-authorships is approximately 50% for Düsseldorf (0.84/1.69 × 100) and roughly 13% for Graz (0.13/1.02 × 100).

Table 5 Absolute differences and quotients between summarizing and aggregating whole counting values for the Information Science Institutions in Düsseldorf and Graz

In the case of absolute impact, the difference informs about the (additional) number of citations from internally co-authored papers, while the quotient gives the factor by which the citations increase due to the internal authorship (and, as a consequence, multiple counting). Note, however, that the citations of an internally co-published paper are counted more than twice if it is co-authored by more than two staff members. According to Table 5, Düsseldorf received 427 citations for internally co-published papers, which is 61% of the total citations. Graz attracted only two citations for internally co-authored papers, which is only 1% of the total citations received.

Labor productivity relates the internal publication output to 1 FTE. Accordingly, in Düsseldorf, 1 FTE is, on average, involved in 2.53 internal co-authorships, while in Graz, one FTE has, on average, 0.58 internal co-authorships. The difference between Düsseldorf and Graz is even greater for labor impact: in Düsseldorf, internally co-authored papers are (repeatedly) cited 3.74 times per FTE on average; the corresponding value for Graz is only 0.04.
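
The figures discussed in this subsection can be reproduced from the reported numbers of internal co-authorships, the mean numbers of authors per paper, and the FTE totals; a minimal sketch (Python; the variable names are ours):

```python
# Inputs reported above: aggregated publication counts, the number of
# internal co-authorships (= Sum(P) - Agg(P), Table 5), the mean number
# of authors per paper, and the FTE totals.
data = {
    "Duesseldorf": {"agg_p": 345, "internal": 289, "mean_authors": 2.69, "fte": 114.3},
    "Graz":        {"agg_p": 228, "internal": 30,  "mean_authors": 2.02, "fte": 51.3},
}

for name, d in data.items():
    quotient = (d["agg_p"] + d["internal"]) / d["agg_p"]  # 1.84 / 1.13
    # Internal co-authors beyond the first author, relative to all
    # authors beyond the first author:
    internal_share = (quotient - 1) / (d["mean_authors"] - 1)  # ~0.50 / ~0.13
    per_fte = d["internal"] / d["fte"]  # internal co-authorships per FTE: 2.53 / 0.58
    print(name, round(quotient, 2), round(internal_share, 2), round(per_fte, 2))
```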

Can the difference or the quotient (as a percentage) between summarizing and aggregating whole counting values indicate an institution’s extent of internal or external co-operation? Based on our case study, the answer is “yes,” as we found clear indications for more internal co-operation in Düsseldorf and more external co-operation in Graz.

Discussion

Main results

In this article, we discussed the influence of applying absolute publication and citation numbers compared to numbers per FTE and the influence of whole versus fractional counting on evaluating a research institution. Our method was case study research. For two exemplary Information Science institutions, we collected data on the extent and period of employment of the institutions’ researchers, data on their publications, and finally, data on citations of those publications.

While absolute numbers of publications and citations are size-dependent measures and work as indicators for research production (output) and research impact, publications per FTE and citations per FTE are size-independent indicators for labor productivity and labor impact. Labor productivity and labor impact may be captured by whole counting as well as by fractional counting.

The choice of indicators and counting methods depends on the purpose of a scientometric study. The purpose of our study is a comparative analysis of two Information Science institutions. Both institutions work in the same scientific field but differ in the number of faculty members and in their research co-operation strategies (different numbers and affiliations of co-authors). Therefore, for our case study, the most appropriate indicators at the meso-level are fractional counting values for labor productivity (number of publications per FTE) and labor impact (number of citations per FTE), for the following reasons:

  • Values per FTE take the size of an institution into account and normalize the publication and citation counts by the employee years invested in the institution. Comparisons between institutions of different sizes thus become possible.

  • Fractional counting seems to be fairer than whole counting, a proposition predominantly accepted in scientometrics (see, e.g., Perianes-Rodriguez et al., 2016; Leydesdorff & Park, 2017). This is particularly true if the co-operation behavior differs (different co-authorship numbers, stronger internal/external co-operation).

  • Fractional counting and values per FTE make sense for publication and citation measures, as both depend on the size of an institution and the extent of co-operation.

When using size-dependent counting methods, Düsseldorf takes first place for all indicators. When moving to the size-independent productivity indicator (i.e., publications per FTE), Graz is on top. And when switching to fractional counting for this indicator, Graz is number one again. The change in the rankings when using size-independent instead of size-dependent indicators is due to the higher number of faculty in Düsseldorf; the difference in the rankings between whole and fractional counting results from the institutions’ co-operation strategies, whereby Graz relies more on external research partners than Düsseldorf does. Thus, specific characteristics of the two institutions lead to changes in the scientometric portrayal of their publication activities, including the very important positioning in a ranking. Labor productivity and labor impact are valuable size-independent indicators for the scientometric evaluation of research institutions. They do not replace size-dependent indicators but complement them.

As a by-product of our research, we found that, in particular, the ratios between the sum of the institution’s researchers’ publication and citation numbers (at the micro-level) and the aggregated values (at the meso-level) are a hint concerning the co-operation policy of an institution. The greater the ratio, the more an institution is oriented towards internal publication teams. Additionally, we may count the number of all authors A of all the institution’s publications (#A), the number of internal co-authors (#AInt), and the number of external co-authors (#AExt). The quotients #AInt/#A and #AExt/#A form indicators of internal and external co-operation prevalence. Such indicators provide the basis for further investigations of research co-operations (Silva et al., 2019), for instance, on the extent of the labor impact of internal versus external co-authorship.
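
A minimal sketch of these two quotients (Python; the function name is ours, and publications are again assumed to be given as lists of (author, affiliation) pairs):

```python
def cooperation_prevalence(publications, institution):
    """Quotients #AInt/#A and #AExt/#A over an institution's publications,
    where each publication is a list of (author, affiliation) pairs."""
    total = internal = 0
    for authors in publications:
        for _, affiliation in authors:
            total += 1
            internal += (affiliation == institution)
    return internal / total, (total - internal) / total

# The example paper from the Introduction: X at A; Y and Z at B.
paper = [("X", "A"), ("Y", "B"), ("Z", "B")]
print(cooperation_prevalence([paper], "B"))  # (2/3 internal, 1/3 external)
```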

Limitations, outlook, and recommendations

A limitation of our study is the small database. However, the aim was not to present a lot of data on many institutions but to draw attention to particular scientometric problems at the meso-level and to indicators that promise a solution to FTE-related challenges. Of course, there should be more extensive studies with much more data. However, the more data are used, the greater the problems become with collecting complete institutional or even country-wide publication sets, especially concerning personal data and realistic citation sets.

In this paper, we only considered citation impact (Abramo, 2018) and skipped other impacts, such as on social media, which are partially captured by altmetrics. Likewise, for the calculation of fractional counting, we only considered the formula 1/n and bypassed alternative calculation methods (Gauffriau, 2017). We did not apply field-specific citation normalization and ignored different document types (e.g., research papers vs review articles).

The indicators for labor productivity, labor impact, and internal/external co-operation orientation work not only at the level of single research institutions (as in our example) but also for all other meso-level institutions (e.g., universities) and, additionally, for all entities at the macro-level (e.g., countries or world regions); i.e., they can be applied to any entity where size matters. Much more study is needed here as well.

In our study, complete data sets regarding the researchers and their publications formed the basis for the calculations. The data set for citations is incomplete, as we only considered citation numbers from WoS. It can be (even very) problematic to collect all data on researchers’ employment if there are no trustworthy sources and strict data protection laws apply. It is also difficult to collect all publication data. But with the union of data from WoS, Scopus, Google Scholar, field-specific services (e.g., ACM Digital Library), other information services such as Dimensions, and, very importantly, personal or institutional publication lists, this should be realizable. Since the required data are known to the analyzed institutions, the optimal solution for ensuring complete data sets, both in terms of employment data and publication data, would be for the institutions to self-report and archive these data annually.
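
As an illustration of such a union, here is a deliberately simple sketch (Python; the records are invented and the matching key is crude; in practice, de-duplication requires critical evaluation, partly in co-operation with the researchers, as described in the Methods section):

```python
import re

def title_key(title: str) -> str:
    """Deliberately crude matching key: lower-case, alphanumerics only."""
    return re.sub(r"[^a-z0-9]", "", title.lower())

def union_of_sources(*source_lists):
    """Merge publication records from several sources, de-duplicating by
    DOI where present and by the normalized title otherwise."""
    merged = {}
    for records in source_lists:
        for rec in records:
            key = rec.get("doi") or title_key(rec["title"])
            merged.setdefault(key, rec)  # keep the first version seen
    return list(merged.values())

# Invented example records from two sources:
wos = [{"doi": None, "title": "A First Paper"}]
lists = [{"doi": None, "title": "A first paper."},
         {"doi": None, "title": "A Second Paper"}]
print(len(union_of_sources(wos, lists)))  # 2: the first two records collapse
```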

Changing the counting method in the calculation of (1) production and impact (absolute numbers of publications and citations) and (2) labor productivity and labor impact (numbers of publications and citations per FTE) has not been analyzed in detail in scientometrics so far. Since there are indeed differences between “raw” publication and citation values and the numbers of publications and citations per FTE and also considerable differences in the ranking of institutions, the introduction of FTEs into scientometrics seems to be very promising and forward-looking.