Abstract
The h-index has received enormous attention as an indicator that measures the quality of researchers and organizations. We investigate to what degree authors can inflate their h-index through strategic self-citations with the help of a simulation. We extended Burrell’s publication model with a procedure for placing self-citations, following three different strategies: random self-citations, recent self-citations, and h-manipulating self-citations. The results show that authors can considerably inflate their h-index through self-citations. We propose the q-index as an indicator of how strategically an author has placed self-citations, which serves as a tool to detect possible manipulation of the h-index. The results also show that the best strategy for a high h-index is publishing papers that are highly cited by others. Productivity also has a positive effect on the h-index.
Introduction
In the competitive academic world, it is necessary to assess the quality of researchers and their organizations. The allocation of resources and individual careers depend on it. In the UK, for example, the use of bibliometric indicators for the national Research Assessment Exercise (RAE) has long been discussed [17]. Efforts have been made to make such assessments as objective as possible. While productivity can be measured relatively easily by counting publications, assessing the impact of research has often relied on counting the citations received. In 2005, Hirsch [9] proposed the h-index, which tries to bring productivity and impact into balance.
It is hard to overestimate the effect that his proposal had on the field of scientometrics. Google Scholar lists 1,130 citations for his original paper as of June 16th, 2010. Some authors even divide the research field into a pre- and post-Hirsch period [14]. A wealth of extensions and modifications has since been proposed [12, 13], and new indicators have been developed, such as the g-index [4]. These modifications and new indicators are intended to improve the original h-index. The arrival of the Publish or Perish software made the calculation of these diverse indicators accessible to a more general public.
An elaborate review of the benefits and problems of the h-index is available [5]. We will focus on the problem of self-citations, which has polarized the research community. On the one hand, self-citations can be considered a natural part of scientific communication; on the other, some condemn them as a means to artificially inflate bibliometric indicators. Besides this fundamental divide about the role and importance of self-citations, there are also practical issues. Reliably filtering self-citations is currently only practical in highly consistent data sets, such as the Web of Science. But even Thomson Reuters had to introduce the ResearcherID to identify unique researchers, in particular when researchers share the same name. The Web of Science might be a useful data set for traditional disciplines, such as physics, but its coverage is insufficient for research fields in which conference proceedings play an important role [11]. Google Scholar (GS) offers the widest coverage of academic communication, but filtering self-citations from its results is currently not reliably possible. And maybe we should not even filter them, since they form an organic part of the citation process [8] and make up as much as 36% of all citations [1]. This might make it difficult to sharpen the h-index by excluding self-citations, as has already been proposed [15]. It has also been demonstrated that results from GS can be manipulated through mock publications [10].
Still, it would be useful to be able to distinguish between authors who cite their previous work to clarify its relationship with the paper at hand and authors who strategically cite their own papers even when they are not directly relevant to the current paper.
One method of strategically manipulating the h-index is the following: first cite the paper(s) that currently have as many citations as the h-index, and then proceed downwards from there. Let’s look at an example to illustrate this strategy. Figure 1 shows the citation profile of an example author who published 60 papers, sorted by the citations they have received. He currently has an h-index of 20, which is visualized by the diagonal line. This means that he has at least 20 papers that have each been cited at least 20 times. We will refer to the paper that has the fewest citations while still contributing to the h-index as the h-paper. In this case it is paper number 20. If he cited the h-paper and paper number 21, which currently have 20 citations each, his h-index would increase to 21: he would have 21 papers with at least 21 citations each. By investing only two self-citations, this author could inflate his h-index by one. A more subtle strategy would be to only cite papers that currently have fewer citations than the author’s h-paper, since citing already highly cited papers is unlikely to increase the h-index quickly.
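The strategy above can be sketched in a few lines of code. This is our own minimal illustration (the function names and data layout are not from any bibliometrics library): given a citation profile, it computes the h-index and selects the targets for μ strategic self-citations, starting at the h-paper and proceeding downwards.

```python
def h_index(citations):
    """h-index of a list of citation counts."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
    return h

def unfair_targets(citations, mu):
    """0-based indices (into the descending-sorted profile) of the mu papers
    an 'unfair' author would self-cite: the h-paper first, then downwards."""
    profile = sorted(citations, reverse=True)
    h = h_index(profile)
    start = max(h - 1, 0)
    return list(range(start, min(start + mu, len(profile))))

# The example from the text: papers 20 and 21 both have 20 citations.
profile = [50] * 19 + [20, 20] + [5] * 39    # already in descending order
print(h_index(profile))                      # 20
for t in unfair_targets(profile, 2):         # papers 20 and 21 (indices 19, 20)
    profile[t] += 1                          # place the two self-citations
print(h_index(profile))                      # 21
```

Two self-citations suffice here because both target papers already sit exactly at the h-threshold.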
Given that up to 36% of all citations are self-citations, the potential inflation of bibliometric indicators could be enormous. We therefore focus on the following research questions:
(1) How much can authors inflate their h-index through strategic self-citations?
(2) How can we detect strategic self-citations?
(3) What influence do the authors’ productivity, quality, career length, and proportion of self-citations have on their h-index?
Method
To investigate how far the h-index can be inflated, we need to consider extreme authors who focus all their self-citations on increasing their h-indexes. We are currently not aware of a sufficient number of such extreme authors to appropriately answer all our research questions with data from real authors. The only exception might be Ike Antkare, a mock scientist who only cites himself [10]. We therefore did not base our analyses on existing authors, but focused on simulated authors.
Wolfgang Glänzel and his coauthors [6, 7] proposed a stochastic model for the publishing and citation process. However, here we make use of a more recent stochastic model proposed by Burrell [3], which is better suited for our simulation. The main result of this model, given in Eq. 1, is the expected number of papers that receive at least \(n\) citations by time \(T\):

\[
\theta T \, B\!\left(\frac{T}{T+\alpha};\; n,\, \nu\right) \qquad (1)
\]

with \(B(x; a, b)\) the regularized incomplete beta function defined as

\[
B(x; a, b) = \frac{\Gamma(a+b)}{\Gamma(a)\,\Gamma(b)} \int_0^x t^{a-1} (1-t)^{b-1} \, dt
\]

and \(\Gamma(a)\) the gamma function. The model depends on quality parameters which characterize a certain author:
\(T\): the time passed since the start of the researcher’s career
\(\theta\): the productivity (mean number of publications per unit time)
\(\nu\): the shape parameter of the citation distribution (a gamma distribution), which is related to its height
\(1/\alpha\): the scale parameter of the citation distribution (a gamma distribution), which is related to its width
\(\frac{\nu}{\alpha}\): the mean citation rate (average number of citations per paper per year)
\(n\): the number of citations
Equation 1 can be considered the average citedness of papers from authors of a given quality (see Fig. 2). The graph has the expected shape for citation profiles and appears to match reality. It follows the well-documented skew [16], and hence we assume face validity of the model. We invert the expression in Eq. 1 to create a theoretical citation profile for an example author characterized by these quality parameters.
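A numerical sketch of this expected citation profile is straightforward. The closed form θT·B(T/(T+α); n, ν) is our reading of Burrell’s result, and the regularized incomplete beta is computed here by simple midpoint integration rather than a library routine, so the block stays self-contained:

```python
import math

def reg_inc_beta(x, a, b, steps=20000):
    """Regularized incomplete beta function B(x; a, b), computed by
    midpoint-rule integration (accurate enough for illustration)."""
    total = 0.0
    dt = x / steps
    for k in range(steps):
        t = (k + 0.5) * dt
        total += t ** (a - 1) * (1.0 - t) ** (b - 1)
    beta = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return total * dt / beta

def expected_papers(n, T, theta, nu, alpha):
    """Expected number of papers with at least n citations by time T (Eq. 1)."""
    return theta * T * reg_inc_beta(T / (T + alpha), n, nu)

# Archetypical author from the text: theta = 3, T = 20, nu = 3, alpha = 2.
profile = [expected_papers(n, 20, 3, 3, 2) for n in range(1, 61)]
print(profile[0], profile[-1])
```

The resulting curve decreases monotonically in n and never exceeds the total output θT, reproducing the skewed shape discussed above.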
We used Burrell’s model to simulate the publication process of an average author defined through the parameters mentioned above. We added one parameter μ, defined as the number of self-citations per paper. For practical reasons, we defined μ as a constant, but it is conceivable that the number of self-citations changes over the course of a scientist’s career. We implemented three different self-citation strategies:
(1) the author makes μ strategic self-citations using the method described above (unfair condition)
(2) the author cites his μ most recent papers (fair condition)
(3) the author randomly cites μ of his papers (random condition)
The fair condition is based on the observation that the number of self-citations is highest for new papers and declines rapidly over time [1]. The random condition provides a baseline against which we can compare the other two conditions. The simulation, implemented in Mathematica, consists of a main loop that cycles over the published papers p from 1 to θ × T, performing the following steps:
(1) calculate the author’s current \(h_p\)-index
(2) calculate the citations received from other researchers through Burrell’s model
(3) place μ self-citations through one of the three strategies described above
(4) calculate indicators, such as the q-index described below, for the current state
(5) sum the citations from others and the self-citations
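The loop above can be re-implemented compactly. Our simulation was written in Mathematica; the sketch below is a simplified Python analogue in which the citations-from-others step draws each paper’s latent citation rate from a gamma distribution and accrues Poisson counts, rather than using Burrell’s full model, so the absolute h-index values are illustrative only:

```python
import math
import random

def h_index(citations):
    """h-index of a list of citation counts."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
    return h

def poisson(lam):
    """Knuth's Poisson sampler (stdlib only)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def simulate(strategy, T=20, theta=3, mu=3, nu=3, alpha=2, seed=1):
    """Toy version of the simulation loop for one author."""
    random.seed(seed)
    rates, cites = [], []                      # per-paper rate and count
    for p in range(theta * T):
        # publish a new paper with latent citation rate ~ Gamma(nu, 1/alpha)
        rates.append(random.gammavariate(nu, 1.0 / alpha))
        cites.append(0)
        # citations from others over one publication interval (1/theta year)
        for i in range(p + 1):
            cites[i] += poisson(rates[i] / theta)
        # place mu self-citations according to the chosen strategy
        order = sorted(range(p + 1), key=lambda i: -cites[i])
        h = h_index(cites)
        if strategy == "unfair":               # at and just below the h-paper
            start = max(h - 1, 0)
            targets = order[start:start + mu]
        elif strategy == "fair":               # the mu most recent papers
            targets = list(range(max(p - mu, 0), p))
        else:                                  # random papers
            targets = random.sample(range(p + 1), min(mu, p + 1))
        for t in targets:
            cites[t] += 1
    return h_index(cites)

for s in ("unfair", "fair", "random"):
    print(s, simulate(s))
```

With a fixed seed the run is reproducible, so the three strategies can be compared paper by paper.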
Results
To answer the question of how much authors can inflate their h-index through self-citations, we first present an archetypical author. He publishes three papers per year over a total of 20 years and makes three self-citations per paper. Figure 3 shows how the h-index develops over the period of publishing each of the 60 papers. After 20 years, the author would have an h-index of 19 if he had used the unfair strategy, while a random self-citation strategy would have resulted in an h-index of only 14. Through the strategic placement of his self-citations, he was able to inflate his h-index by 5.
If we now look at the citation profile of the unfair author, we notice a humpback around the h-paper, which in this case is the 19th paper (see Fig. 4). An author with a random self-citation strategy does not have such a humpback. This comes as no surprise, since the humpback is a direct result of self-citing papers close to the h-paper.
To assess the size of the humpback, we propose the q-index. Its name was inspired by Quasimodo, a fictional character in Victor Hugo’s novel “The Hunchback of Notre-Dame”, whose severely hunched back reminded us of the humpback in the citation profile. In comparison to the penalty system proposed by Burrell [2], the q-index does not decrease the citation count, but introduces a stand-alone indicator for self-citation behavior.
The q-index can be calculated as follows. First, sort all papers (i = 1…p) of an author or organization, given a certain number of already published papers p, according to their citations in descending order: \(c_{p,i}\). This creates the well-known citation profile shown in Fig. 1, which is characterized by the h-index \(h_p\). For each self-citation of a paper that has equal or fewer citations than the \(h_p\)-paper, the author receives a q-score. This q-score is calculated by dividing 1 by the number of different citation scores between the \(h_p\)-paper and the paper that receives the self-citation. If the author cites the \(h_p\)-paper(s), the score is \(\frac{1}{1}\). If he cites paper(s) that have the next fewer citations, he receives a score of \(\frac{1}{2}\), and so forth. Papers i that have the same citation score \(c_{p,i}\) as the previous one receive the same q-score. The formal definition is given by:

\[
q_{p,i} = \begin{cases} \dfrac{1}{a_{p,i}} & \text{if } c_{p,i} \le c_{p,h_p} \\ 0 & \text{otherwise} \end{cases}
\]

with \(a_{p,i}\) given by

\[
a_{p,i} = \left|\left\{\, c_{p,j} : h_p \le j \le i \,\right\}\right|,
\]

the number of distinct citation scores from the \(h_p\)-paper down to paper i.
Note that we only take into account the q-scores for the actually cited papers i; therefore, the summed q-score that an author receives for publishing a new paper p can only range between 0 and μ.
Let’s take an example to illustrate the q-scores. Figure 5 shows the citation profile of our archetypical unfair author. The x-axis lists the q-scores that this author receives for citing his own papers. Notice that the author does not receive any q-score for self-citing papers that have more citations than the \(h_p\)-paper. These papers are to the left of the diagonal h-line. Citing them does not directly inflate the h-index, and they are therefore not considered when calculating q-scores. Also notice that papers with the same number of citations receive the same q-scores. Their order can be assumed to be random, and hence it would not be fair to give them different q-scores.
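The q-score rule described above translates directly into code. This is our own illustration (not part of the original Mathematica implementation): papers above the h-paper score zero, ties share the same score, and the score is the reciprocal of the number of distinct citation counts from the h-paper down to the cited paper.

```python
def h_index(citations):
    """h-index of a list of citation counts."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
    return h

def q_score(citations, cited_index):
    """q-score for one self-citation of the paper at cited_index
    (0-based index into the descending-sorted citation profile)."""
    profile = sorted(citations, reverse=True)
    h = h_index(profile)
    if h == 0:
        return 0.0
    c_h = profile[h - 1]                # citations of the h-paper
    if profile[cited_index] > c_h:
        return 0.0                      # citing above the h-paper scores nothing
    # distinct citation scores from the h-paper down to the cited paper
    distinct = len(set(profile[h - 1:cited_index + 1]))
    return 1.0 / distinct

profile = [50] * 19 + [20, 20] + [10, 5, 5]   # h-index = 20
print(q_score(profile, 19))   # cites the h-paper: 1/1 -> 1.0
print(q_score(profile, 21))   # next-lower score (10 citations): 1/2 -> 0.5
print(q_score(profile, 22))   # 5 citations: 1/3
print(q_score(profile, 5))    # above the h-paper: 0.0
```

Note that index 20 (the second paper with 20 citations) also scores 1.0, matching the rule that tied papers receive identical q-scores.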
We plotted the q-scores in the order in which the papers were published (see Fig. 6). If the author publishes a new paper that cites three of his own papers, the three q-scores he receives are summed. The paper index on the x-axis thereby defines the order in which the papers were published. Initially, all three self-citing strategies produce the same q-scores. This comes as no surprise, since the fourth published paper can only cite its three predecessors. Only from the fifth paper onwards can the author choose which paper not to cite. A few papers later, we find significant differences between the three self-citation conditions. The unfair author receives high q-scores with very little spread, since he always cites very close to the \(h_p\)-paper.
The author with a fair self-citing strategy receives lower and lower q-scores (see Fig. 6). This can be explained by the fact that the total number of publications grows much faster than the h-index. The proportion of papers that have fewer citations than the \(h_p\)-paper (to the right of the \(h_p\)-paper) to the papers that have equal or more citations (from the \(h_p\)-paper to the left) is increasing (see Fig. 7). The new papers that the fair author cites lie further and further from the \(h_p\)-paper and hence attract lower and lower q-scores.
An author with a random self-citation strategy has a much higher spread in his q-scores, but they also appear to decrease. The growing number of papers with fewer citations than the \(h_p\)-paper can also explain this trend. The papers in this long tail cause lower and lower q-scores (see Fig. 7).
We propose the q-index as the sum of the q-scores the author received for each self-citation s = 1…μ in published paper j, to a paper in the citation profile indexed by \(i_{j,s}\). This sum is normalized by the number of published papers p:

\[
q_p = \frac{1}{p} \sum_{j=1}^{p} \sum_{s=1}^{\mu} q_{j,\, i_{j,s}}
\]
The normalization by p ensures that the q-index remains approximately constant over all published papers if an author consistently cites according to the unfair scheme. This linear behavior can be seen from the unnormalized q-index in Fig. 8 for the unfair condition, while in the fair and random conditions it flattens out and remains in general far below the unnormalized q-index of the unfair condition (see Fig. 8). Interestingly, the curves for the fair and random conditions are very close to each other; it might be difficult to distinguish between authors that use these two strategies. The q-index’s range follows as:

\[
0 \le q_p \le \mu
\]
The q-index should be accompanied by the standard deviation of the summed q-scores. For our archetypical author, the q-indexes at p = 60 are available in Table 1. The q-indexes of the fair and random conditions are within one standard deviation of each other. We may therefore conclude that the q-index is not able to detect a significant difference between these two conditions. The q-index for the unfair condition is approximately ten standard deviations away from the q-index of the random condition and approximately four standard deviations away from that of the fair condition. It is very unlikely that the observed difference is due to chance. To test this hypothesis, we performed the non-parametric Mann–Whitney test, since we cannot assume a normal distribution of the data. The distributions in the random and unfair conditions differed significantly (Mann–Whitney U = 41.5, n _{1} = n _{2} = 60, P < 0.01, two-tailed).
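The test statistic itself can be reproduced with a short routine. This is a generic rank-sum implementation with midrank handling for ties (in practice one would use a statistics package such as scipy.stats.mannwhitneyu):

```python
def mann_whitney_u(xs, ys):
    """Mann-Whitney U statistic; tied values share the average (mid)rank."""
    pooled = sorted((v, g) for g, vals in ((0, xs), (1, ys)) for v in vals)
    values = [v for v, _ in pooled]
    rank_sum_x = 0.0
    i = 0
    while i < len(values):
        j = i
        while j < len(values) and values[j] == values[i]:
            j += 1
        midrank = (i + 1 + j) / 2.0        # average of ranks i+1 .. j
        for k in range(i, j):
            if pooled[k][1] == 0:          # value belongs to sample xs
                rank_sum_x += midrank
        i = j
    n1, n2 = len(xs), len(ys)
    u1 = rank_sum_x - n1 * (n1 + 1) / 2.0
    return min(u1, n1 * n2 - u1)           # report the smaller U, as is usual

# Two clearly separated samples give U = 0:
print(mann_whitney_u([1, 2, 3, 4], [10, 11, 12, 13]))  # 0.0
```

For the comparison in the text, the per-paper summed q-scores of the random and unfair conditions would be passed as the two samples.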
Next, we were interested in how the different parameters of Burrell’s model influence the development of the h-index. We started by varying the productivity θ from one to eighteen papers per year, which seem plausible minimum and maximum values. Of course, a director of a research institute who insists on coauthorship of every paper produced in his/her institute may exceed these boundary conditions, but the analysis of honorary authorship is not the focus of our study. The other parameters remained at their stereotypical settings of career length T = 20 and mean citation rate \({\frac{\nu} {\alpha}}={\frac{3} {2}}\) with ν = 3 and α = 2. Figure 9 shows that the h-index quickly increases for 0 < θ < 5 and then slowly flattens. An author who publishes six papers per year will have more than double the h-index of an author who publishes only one paper per year. The unfair strategy benefits in particular from an increased productivity, since more published papers also mean more self-citations.
The next parameter we varied is the career length T, between 1 and 40 years, which again seemed plausible boundary conditions. The remaining parameters were set to the stereotypical values of θ = 3 and mean citation rate \({\frac{\nu} {\alpha}}={\frac{3} {2}}\) with ν = 3 and α = 2. Figure 10 shows a linear increase of the h-index for all three conditions. The h-index increases by approximately one per year.
We varied the number of self-citations per paper μ from one to ten, which appeared to be reasonable limits. The other parameters remained at their stereotypical settings. The results displayed in Fig. 11 show that μ has a smaller effect on the h-index than θ and T. In the fair and random conditions, increasing μ results in only a mild increase in the h-index. In the unfair condition, the h-index grows with μ, but again less than with θ and T. The small effect size is also visible in absolute terms. With ten self-citations per paper, an unfair author can only reach an h-index of around 30, while he can reach 50 with a publication rate of 18 papers per year.
Next, we changed the shape parameter ν and the scale parameter α of the citation distribution, keeping in mind that \({\frac{\nu} {\alpha}}\) is the mean citation rate, which defines how many citations a paper receives from other researchers. We increased the values of ν and α from one to ten, which appeared to be reasonable boundary conditions. The other parameters remained at their stereotypical settings. Figure 12 shows that an increasing value of ν increases the number of citations from others, which in turn negates the advantage of strategic self-citations. For sufficiently large ν, there is no longer any difference between the unfair condition and the other two conditions. For authors whose work is highly esteemed by others, strategic self-citations have little positive effect. Burrell offered a similar result in his Fig. 4(a), where he kept α at 5 and increased ν from 5 to 500.
When the value of α increases, the mean citation rate drops, which has the opposite effect of increasing ν. And indeed, Fig. 13 shows that the h-index decreases with increasing α. The gap between the unfair condition and the other two conditions widens, indicating that making strategic self-citations becomes increasingly beneficial.
To assess how strong the effect of productivity, career length, number of self-citations, and mean citation rate is on the h-index, we calculated the average change in the h-index as:

\[
\Delta_k = h_k - h_{k-1}
\]

where \(h_k\) is the h-index when the parameter (θ, T, μ, ν, α) is k, ranging from 2 to the maximum of the respective parameter. The average \(\Delta_k\) and its standard deviation are displayed in Table 2. The mean citation rate has the strongest impact on the h-index. An increase of ν by one increases the h-index on average by four, and an increase in α of one decreases the h-index by around two. The second strongest effect stems from the productivity of the author. By publishing one more paper per year, the author’s h-index increases by approximately 1.5. With every year passed, the h-index increases on average by one. The number of self-citations only has a strong effect for authors that place them strategically. For all other authors, it has the smallest benefit.
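The averaging over a parameter sweep can be expressed directly (a trivial helper of our own, shown for completeness):

```python
def average_delta(h_values):
    """Average per-step change of the h-index over a parameter sweep,
    i.e. the mean of h_k - h_{k-1} for k = 2 .. max."""
    deltas = [h_values[k] - h_values[k - 1] for k in range(1, len(h_values))]
    return sum(deltas) / len(deltas)

print(average_delta([5, 7, 9, 12]))  # (2 + 2 + 3) / 3
```

Applied to the h-indexes recorded while sweeping one parameter (with the others fixed), this yields the per-parameter averages reported in Table 2.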
Conclusions
The results of our simulation show that authors can significantly inflate their h-index, and possibly also other indices, by strategically citing their own publications. Calculating the q-index helps identify such behavior, and plotting the individual q-scores over the sequence of published papers allows us to gain additional insights into the publication history of an author. The q-index also allows us to run standard statistical tests for cases that are ambiguous. The unfair author in our study is an extreme example, and real authors might apply more subtle strategies to manipulate their h-index. The q-index also conveniently ranges from 0 to μ, which makes it easy to interpret. Our simulation is able to provide the benchmark of a random self-citation behavior, against which real authors’ q-indexes can be compared.
Overall, we can conclude that the unfair self-citation strategy is mainly useful for authors who are less productive and attract fewer citations from others. The most effective method to increase one’s h-index is to produce work that is highly cited by others. The next best strategy is to be productive. The length of the career has only a moderate influence. On average, authors can increase their h-index by one per year, as predicted by Hirsch [9].
This study does have some limitations. We have to acknowledge that our simulation has not yet been verified by comparing its results to data from real authors. However, we hope to test the q-index on real authors in the next phase of our project.
The application of the q-index to real data may prove to be difficult, since it requires knowledge of each publication, including the date of each received citation. While this is relatively easy to accomplish in a simulation, data from the real world tends to be incomplete and occasionally ambiguous. A first application could use the well-structured data from the Web of Science; in a second phase, attempts could be made to parse the results from Google Scholar.
We also have to consider that the mean citation rate does not model the size of the research community in which a certain author may operate. This potentially influential factor is not part of Burrell’s model and hence we are unable to make any judgements about it. We are also not able to make any judgements about the differences between scientific disciplines.
In essence, we showed that the h-index is vulnerable to manipulation through self-citations. We propose the q-index as a metric to judge how strategic an author’s self-citations have been. In addition, we showed that the best way to increase one’s h-index is to write interesting papers. This might be no surprise, but sometimes it is necessary to state the obvious.
References
Aksnes, D. (2003). A macro study of self-citation. Scientometrics, 56(2), 235–246.
Burrell, Q. L. (2007). Should the h-index be discounted? ISSI Newsletter, 3S, 65–67.
Burrell, Q. L. (2007). Hirsch’s h-index: A stochastic model. Journal of Informetrics, 1(1), 16–25.
Egghe, L. (2006). Theory and practise of the g-index. Scientometrics, 69(1), 131–152. doi:10.1007/s11192-006-0144-7.
García-Pérez, M. (2009). A multidimensional extension to Hirsch’s h-index. Scientometrics, 81(3), 779–785.
Glänzel, W., & Schoepflin, U. (1994). A stochastic model for the ageing of scientific literature. Scientometrics, 30(1), 49–64.
Glänzel, W., & Schubert, A. (1995). Predictive aspects of a stochastic model for citation processes. Information Processing and Management, 31(1), 69–80.
Glänzel, W., Thijs, B., & Schlemmer, B. (2004). A bibliometric approach to the role of author self-citations in scientific communication. Scientometrics, 59(1), 63–77.
Hirsch, J. E. (2005). An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences of the United States of America, 102(46), 16569–16572.
Labbe, C. (2010). Ike Antkare one of the great stars in the scientific firmament. ISSI Newsletter, 6(2), 48–52.
Meho, L. I., & Yang, K. (2007). Impact of data sources on citation counts and rankings of LIS faculty: Web of Science versus Scopus and Google Scholar. Journal of the American Society for Information Science and Technology, 58(13), 2105–2125. doi:10.1002/asi.20677.
Van Noorden, R. (2010). Metrics: A profusion of measures. Nature, 465, 864–866.
Panaretos, J., & Malesios, C. (2009). Assessing scientific research performance and impact with single indices. Scientometrics, 81(3), 635–670.
Prathap, G. (2010). Is there a place for a mock h-index? Scientometrics, 84(1), 153–165.
Schreiber, M. (2007). Self-citation corrections for the Hirsch index. Europhysics Letters, 78(3), 30002.
Seglen, P. O. (1992). The skewness of science. Journal of the American Society for Information Science, 43(9), 628–638.
Silverman, B. W. (2009). Comment: Bibliometrics in the context of the UK research assessment exercise. Statistical Science 24(1), 15–16.
Open Access
This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
Bartneck, C., & Kokkelmans, S. (2011). Detecting h-index manipulation through self-citation analysis. Scientometrics, 87, 85–98. https://doi.org/10.1007/s11192-010-0306-5