Key performance indicators

A challenging question in peer-reviewed science is how to distribute resources judiciously amongst a large number of competing researchers. What are the "key performance indicators" that should be used to evaluate scientists who pursue similar research interests? One popular approach is to ask how many times a person has published articles in journals with a high impact factor (IF). Several "quirks" in the way that a journal's IF is calculated have prompted many individuals to question whether this number reliably reflects the citation frequency of research articles published in the journal [1]. Recently, a scientist's H-index (HI) [2] has been suggested as a more informative measure of his/her scientific productivity [1].

H-index and total citations

The predictive value of the HI does have limitations [3]. However, in a 2007 survey of Retrovirology editorial board members, it was noted that an individual's HI correlated well with the absolute frequency with which his/her published papers were cited in the scientific literature [1]. A mid-October 2008 update of the 2007 survey, using numbers from the Scopus database http://www.scopus.com, continues to support this correlation (Table 1). Thus, within a well-delimited field of research, a scientist's HI and his/her total citations appear to be reasonably quantitative peer-measures, seemingly superior to colloquial banter about "high impact" papers. It should be noted that different databases measure HI over varying time periods, and the resulting values are not directly comparable. In general, an HI increases with the length of time over which it is measured; hence, older scientists would usually be expected to sport higher HI values than their younger counterparts.
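For readers unfamiliar with the metric, Hirsch's definition is simple: a scientist has index h if h of his/her papers have each been cited at least h times. A minimal sketch of that calculation (my own illustration, not part of the survey methodology) is:

```python
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts:
    the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

# Example: 4 papers with at least 4 citations each, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

Because the computation depends only on citation counts at the moment it is run, the same publication record yields different h values at different times, which is exactly why cross-database and cross-era comparisons are problematic.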

Table 1 H-index and citation frequencies of selected Retrovirology editorial board members.

A time for a mentoring-index?

Scientists do research and also mentor younger colleagues. Good mentoring should be a significant consideration of one's contribution to science. The HI might measure research productivity, but currently there does not appear to be a "mentoring index" (MI). Accepting that mentoring is an important component of a scientist's career, one could propose to construct a MI as a composite value based on the current HI of individuals who trained with a given mentor during an earlier period. For example, a MI for scientist X reflecting his/her mentoring influence during the 1991 to 1995 period could be calculated from the sum of today's HI for all the first authors from his/her laboratory on papers published during 1991 to 1995 with scientist X as the last author. As an example, for Kuan-Teh Jeang (KTJ) during the 1991–1995 period, there were eight different first authors who listed the same laboratory affiliation as KTJ and who published papers with KTJ as the last author. The eight individuals, (with current HI in parentheses) A. Gatignol (14), B. Berkhout (38), B. Dropulic (9), O.J. Semmes (27), Y.N. Chang (5), F. Majone (5), A. Joshi (2) and L.M. Huang (19), provide a total HI of 14 + 38 + 9 + 27 + 5 + 5 + 2 + 19 = 119. If one divides 119 by 8, a MI of 14.9 for KTJ is derived. This number could be used for comparing KTJ to others for mentoring contributions during a defined period (e.g. 1991 to 1995). Of course, comparisons are meaningful only when done amongst appropriate peer groups. A focus on using the HI of previous trainees in evaluating established scientists could encourage the development of long-lasting mentoring relationships that continue even after the trainees have departed the mentors' laboratories.
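The MI arithmetic described above amounts to a simple mean. A short sketch, using the HI values quoted in the text for the KTJ 1991–1995 example:

```python
def mentoring_index(trainee_h_indices):
    """Proposed MI: the mean of the current h-indices of a mentor's
    former first authors from a defined training period."""
    return sum(trainee_h_indices) / len(trainee_h_indices)

# Current HI values for the eight 1991-1995 first authors listed above
# (Gatignol, Berkhout, Dropulic, Semmes, Chang, Majone, Joshi, Huang).
trainee_hi = [14, 38, 9, 27, 5, 5, 2, 19]
print(mentoring_index(trainee_hi))  # → 14.875, i.e. 119 / 8
```

One could imagine variants (e.g. a median, to dampen the influence of a single highly cited trainee), but the mean is the composite proposed here.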

Frequency of citation versus frequency of access

The above discussions of HI, MI, citation frequencies, and impact factor presume the primacy of citations as a measure of scientific value. What if this presumption is off-the-mark? Is there another value that could be considered? In other areas of communication (book publishing, music distribution) where citation metrics are irrelevant, the number of readers (copies of books sold) and listeners (albums sold or songs downloaded) is used to gauge impact. In the modern internet era, the frequency of "hits" or accesses to portals such as YouTube or Facebook quantitatively gauges relative importance. In this respect, should the frequency of accesses to online Open Access scientific articles similarly matter? To begin to explore this question, I examined the top 15 "all time" most highly accessed papers at Retrovirology http://www.retrovirology.com/mostviewedalltime. In this dataset, four 2006 papers (excluding a meeting report, [4]) were identified that have been accessed 23,634; 8,592; 8,304; and 7,902 times respectively [5], [6], [7], [8]. These four highly accessed papers have been cited to date 14, 13, 15, and 14 times, placing them in the top 15% of cited Retrovirology papers published in 2006. On the other hand, the four Retrovirology papers published during 2006 that are currently the most frequently cited [9], [10], [11], [12] (cited 27, 23, 21, 20 times) are not the four that are the most highly accessed. Thus, high readership does appear to accompany high citation frequency, but high citation frequency does not require the highest readership. This pattern suggests that Open Access readers encompass those who simply read and those who read and also write papers that cite other papers. Citation numbers measure the latter group, while access numbers measure both groups. Arguably, it is unclear why a published paper's influence on one group (citations) should count while its less well-tabulated impact on the second group (accesses) should not. The relative merits of citations versus accesses require further validation.
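The asymmetry in the figures quoted above can be stated plainly: every one of the four most-cited 2006 papers out-cites every one of the four most-accessed ones, yet none of the most-cited papers tops the access list. A small illustrative check (my own framing, using only the numbers given in the text):

```python
# Citations received by the four most-accessed 2006 Retrovirology
# papers (refs [5]-[8]) versus the four most-cited ones ([9]-[12]).
top_accessed_citations = [14, 13, 15, 14]
top_cited_citations = [27, 23, 21, 20]

# The two groups do not overlap in citation counts: the least-cited
# of the most-cited group still exceeds the best of the most-accessed.
assert min(top_cited_citations) > max(top_accessed_citations)
print(max(top_accessed_citations), min(top_cited_citations))  # → 15 20
```

The gap is consistent with the interpretation above: access counts aggregate pure readers and citing authors, whereas citation counts capture only the latter.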


Acknowledgements

I thank Mark Wainberg, Andrew Lever, and Ben Berkhout for critical readings of this editorial, and Christina Bezon for assistance with Table 1. The values shown in Table 1 are to be viewed as illustrative examples and are not to be regarded as fully accurate. The views expressed are the author's personal opinion and do not represent the position of the author's employer, the National Institutes of Health, USA. Research in KTJ's laboratory is supported by NIAID Intramural funds.