Impact factors, citation distributions and journal stratification
This past July, an international team of researchers and publishers published a proposal that academic journals share their citation distributions, to encourage authors, publishers and institutions to look beyond a single numerical metric for an entire journal as a proxy for the research quality of the individual articles in it. We embrace this effort and include the citation distribution that contributed to this Journal’s 2015 Thomson Reuters Impact Factor.
A journal impact factor (JIF) is a simple ratio. The numerator is the number of citations a journal receives in a particular calendar year to ‘citable items’ with a publication date in the previous two years (the ‘citation window’). The denominator is the number of citable items in that citation window. Citable items include reviews and original research, which Thomson Reuters classifies as ‘articles’. Editorials, such as this one, are classified as ‘editorial material’ and are not counted in the denominator. Although the classification protocol has been defined, there is still a lot of grey area,1 particularly in publications that fall somewhere between scientific journals and society membership magazines.2
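The ratio described above can be sketched in a few lines; the citation and item counts below are invented for illustration:

```python
# Sketch of the JIF ratio described above; all numbers are hypothetical.
def journal_impact_factor(citations_received, citable_items):
    """Citations received in the JIF year to items published in the
    two-year citation window, divided by the citable items in it."""
    return citations_received / citable_items

# A journal whose 2013-2014 citable items drew 1200 citations in 2015,
# from 400 citable items, would have a 2015 JIF of 3.0.
print(journal_impact_factor(1200, 400))  # 3.0
```

The classification of items matters because only ‘citable items’ enter the denominator; moving an item between categories changes the ratio without changing the journal.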
Percentage of citable items published in four materials science journals with fewer citations than the value of their 2015 Thomson Reuters journal impact factor

| Journal | Number of citable items (articles) | % citable items (articles) below JIF |
|---|---|---|
| Journal of Materials Research | — | 65.2 % (65.6 %) |
| Journal of Materials Science | — | 69.3 % (70.5 %) |
| — | — | 69.9 % (69.4 %) |
| — | — | 69.3 % (71.0 %) |
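The percentages in the table can be reproduced from a journal’s citation distribution. The counts below are made up, but show the right-skew that puts most items below the JIF:

```python
# Hypothetical citation counts for one journal's citable items in the
# two-year window; real distributions are similarly right-skewed.
citations = [0, 0, 1, 1, 2, 2, 3, 4, 5, 40]

jif = sum(citations) / len(citations)         # the JIF is just the mean
below = sum(1 for c in citations if c < jif)  # items cited less than the JIF
print(f"JIF = {jif:.1f}; {100 * below / len(citations):.0f}% of items below it")
# JIF = 5.8; 90% of items below it
```

A single heavily cited item drags the mean well above the typical article, which is why roughly two-thirds of items sit below the JIF in all four journals above.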
If all journals have a similar citation distribution regardless of the size of their JIF, then what is the problem with using this metric? The danger is the prejudice that this oversimplification engenders. By judging the quality of an unread article from the journal in which it appeared, we risk ignoring relevant work and falling behind in our own field. When a JMS Editor asked one contributor why he had not cited a relevant paper, his reply was ‘it was not in a high-impact journal’!
First, there is a steady increase in impact factors. The median impact factor has gone up by 0.6 (66 %) to 1.6 over the decade, while the mean impact factor has increased by 1.2 (76 %) to 2.9 in the same period.5
Second, the journals in groupings with JIFs below the median value (the ‘poor’) have seen their JIFs increase by less than this and, in some cases, drop. This is like having a savings account with an interest rate below the inflation rate: although the balance looks bigger, it is worth less than it was. For the journals with impact factors above 4 (the ‘rich’), which make up about a sixth of the titles in the ‘Materials Science, Multidisciplinary’ category, the story is the opposite. Five of these titles have seen their impact factors at least double in the last 10 years, and each group has increased faster than the mean in absolute terms.
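The savings-account analogy can be made concrete with invented numbers, taking the category mean from the decade figures above (roughly 1.7 rising to 2.9):

```python
# Hypothetical 'poor' journal over the same decade: its JIF grows in
# nominal terms, but shrinks relative to the rising category mean.
mean_then, mean_now = 1.7, 2.9  # category mean JIF, start and end of decade
jif_then, jif_now = 0.5, 0.7    # assumed JIF for a below-median journal

print(f"nominal growth: {100 * (jif_now - jif_then) / jif_then:.0f}%")  # 40%
print(f"share of mean, then: {jif_then / mean_then:.2f}")               # 0.29
print(f"share of mean, now:  {jif_now / mean_now:.2f}")                 # 0.24
```

The journal’s JIF rose by 40 %, yet its standing relative to the category mean fell: the balance looks bigger while its value shrinks.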
The scale in Fig. 3 is logarithmic: each grouping represents an impact factor increase of a third. This obscures the underlying stratification of JIFs in absolute, rather than relative, terms. At the high end, the gap is vast: the 2015 JIF for Nature Materials (red column) is 20 units higher than that of Advanced Materials (fuchsia column), even though the two are second-nearest neighbours.
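Groupings in which each bin edge is a third larger than the last form a simple geometric progression; the lowest edge of 0.1 below is an assumption for illustration, not taken from Fig. 3:

```python
# Bin edges growing by a third each step (ratio 4/3), starting from an
# assumed lowest edge of 0.1; Fig. 3's actual edges may differ.
edges = [0.1]
while edges[-1] < 50:
    edges.append(edges[-1] * 4 / 3)

print([round(e, 2) for e in edges[:6]])  # [0.1, 0.13, 0.18, 0.24, 0.32, 0.42]
```

On such a scale a bin near 0.1 and a bin near 30 occupy equal widths, which is exactly how a 20-unit gap at the top of the ranking can look unremarkable on the chart.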
The glorification of the JIF also provides an incentive for researchers to oversell their results. Following this practice impoverishes our talent base and sets a poor example for the next generation of researchers. We are tacitly directing early career researchers to follow hot topics, rather than asking interesting questions regardless of the flavour of the month. Or, as our Editor-in-Chief has put it, the current situation is ‘like kindergarten kids playing soccer: you know where the ball is because that’s where all the kids are’.
Targeting journals based on impact factor, rather than remit or quality of review, increases the peer-review burden for editors and researchers alike. We all know that it is easy to resubmit the same article, often unmodified, to the next journal down the impact factor cascade and have another roll of the dice with the editors and referees. I suspect most academics feel they are approaching ‘peer-review burnout’ because of this. We see the effects in the diminishing quality of reviews from time-strapped researchers.6
So how do we get around this well-established problem? My next editorial will look at some of the emerging tools that give other measures of impact on an article level and an author level, including CASRAI’s CRediT taxonomy initiative and Project COUNTER.
For example, none of the articles published in 2013 and 2014 in Acta Materialia is classified as a review, including the journal’s top-cited article from that period, which includes the phrase ‘which we summarise in this concise review’ in its abstract.
The archetype in the ‘Materials Science, Multidisciplinary’ category is the MRS Bulletin. The distinction between journal and magazine is especially blurred in titles such as Science and Nature where news, book reviews, and commentaries comprise about half the items and more than a quarter of the published pages.
An extreme example of this is Acta Crystallographica A, whose 2009 and 2010 JIFs were boosted from around 2 to around 50 on the basis of a single review of a programme used to refine the X-ray structures of small molecules. The article, which has nearly 50,000 citations to date, pushed the journal to second spot in the JIF league table for those two years, just behind CA: A Cancer Journal for Clinicians.
In the USA, it works out that 65% of income goes to 35% of people, based on the 2013 Gini index from the World Bank. http://data.worldbank.org/indicator/SI.POV.GINI accessed 2016-08-03.
The number of titles in the category ‘Materials Science, Multidisciplinary’ grew markedly over the survey period, from 175 in 2006 to 270 in 2015. For the bottom panel of Fig. 3, the plots of mean impact factor with time cover fewer journals for the older values than for the newer ones.
Some publishers mitigate this through internal transfers to related journals from the same company.
- 1. Larivière V, Kiermer V, MacCallum CJ, McNutt M, Patterson M, Pulverer B, Swaminathan S, Taylor S, Curry S (2016) A simple proposal for the publication of journal citation distributions. bioRxiv 062109. doi:10.1101/062109
- 2. Davis P (2016) Can Scopus deliver a better journal impact metric? https://scholarlykitchen.sspnet.org/2016/03/07/can-scopus-deliver-a-better-journal-impact-metric/ Accessed 3 Aug 2016
- 3. McVeigh ME, Mann SJ (2009) The journal impact factor denominator: defining citable (counted) items. JAMA 302:1107–1109. doi:10.1001/jama.2009.1301