Neuroradiology, Volume 55, Issue 7, pp 803–806

The impact factor

Editorial

DOI: 10.1007/s00234-013-1227-9


During the recent 51st Annual Meeting & The Foundation of the ASNR Symposium 2013 in San Diego, the Program Chair/President-Elect of the ASNR, Mauricio Castillo, organized a scientific session on “Scientific Publishing Update” on Monday, May 20, 2013. Mauricio Castillo, Editor-in-Chief of the AJNR (American Journal of Neuroradiology), spoke about his recently published AJNR article, “Peer Review Systems” [1]. Herbert Y. Kressel, Editor-in-Chief of Radiology, spoke on “Is What’s Published False?”, based in part on a recent publication in Radiology [2]. My own topic was “Metrics: Now and Future.” Since, as Editor-in-Chief, I think this subject may interest the readers of this journal, I would like to share my thoughts about metrics.

Definition of bibliometrics

Bibliometrics, or metrics, is a set of methods for quantitatively analyzing the scientific literature. Several general resources are available, all based on a citation index: a database that links publications through their citations. There are two major multidisciplinary citation databases for counting and aggregating citations. The first is the citation index of the Institute for Scientific Information (ISI), now part of Thomson Reuters Scientific and accessible via Web of Science, one of the Web of Knowledge databases (http://thomsonreuters.com/products_services/science/free/essays/impact_factor/). The second is Scopus, a subscription database. Other players in the field are Google Scholar and the SCImago Journal & Country Rank.

The impact factor

One of the most widely used metrics is the impact factor (IF). It is a quantitative tool to evaluate, rank, and compare journals and measures the frequency with which the “average article” in a journal has been cited during a particular period [3]. The IF is calculated by dividing the number of current-year citations to the articles a journal published during the previous 2 years by the number of those articles. To calculate a journal’s 2012 IF, A is the number of times articles published in 2010 and 2011 were cited by indexed journals in 2012, B is the number of articles (“citable items”) published in 2010 and 2011, and A / B is the 2012 IF. This method of calculation should eliminate the bias of large versus small journals and of frequently versus less frequently issued ones (because the citation count is divided by the number of articles), as well as of older versus newer journals (since the calculation is not cumulative over the years).
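Written as a formula, with a worked example whose numbers are invented purely for illustration:

    \mathrm{IF}_{2012} = \frac{A}{B}
      = \frac{\text{citations in 2012 to items published in 2010--2011}}
             {\text{citable items published in 2010--2011}}

    \text{e.g., } \frac{540 \text{ citations}}{200 \text{ citable items}} = 2.700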

The purposes of the IF are the following [3]:
  • To provide quantitative evidence for publishers and editors for positioning their journal in relation to competitors in the same field or subfield

  • To provide librarians with a tool for managing their collections

  • To serve advertisers in evaluating a journal’s potential

  • To help authors identify the most appropriate, influential journals in which to publish

  • To help researchers discover where to find the current reading list in their respective fields

As an example, the 2012 IF of Neuroradiology is 2.700, that of the AJNR 3.167, and that of Radiology 6.339. In the category “Radiology, Nuclear Medicine & Medical Imaging,” Neuroradiology ranks 34th, the AJNR 24th, and Radiology 2nd out of 120 ranked journals.

Since its introduction by Eugene Garfield in 1972, the IF has long been a subject of debate, even by its inventor [3, 4]. The IF is highly discipline dependent, making comparisons between subject categories (e.g., biomedicine and the natural sciences) difficult. The underlying statistical model may be incorrect, making the result poorly reproducible. More practically, the 2-year time window may be too short, since the peak of citation impact often lies close to this 2-year limit. The IF is not normalized to the document types published (reviews, original articles, case reports, etc.). The reasons why authors cite a given paper are unclear, highly variable, and insufficiently examined. Finally, the IF can be strongly influenced by editorial policy decisions. It is the latter two issues that will be discussed here.

Reasons for citation

Krell [5] examined why we choose a specific article to reference when writing a paper, and the results are provocative. Of course, we want to cite our own articles or those of our collaborators and friends in order to promote careers (“you cite me, I cite you”). Some articles quickly become “standard references” in their field and are therefore frequently cited; we may cite them, without even having read them, just because they are frequently cited. We also tend to cite high-status authors in a search for credibility, to avoid criticism, or to curry favor. Along the same lines, we find it attractive to cite articles from journals with high IFs, especially if we submit our own article to these journals. Finally, we tend to cite only journals available in our own language, which of course limits our horizon.

Editorial policies to influence the IF

Several editorial policies can be used to try to increase the IF [6, 7]. Editors can invite authors to publish review articles, which are generally cited more often, or can publish controversial issues that provoke discussion and thus citation, although neither type of publication necessarily contributes to scientific progress. Neuroradiology recently introduced “Continuing Education” papers, in which an author reviews the literature but adds personal data, so that there is in fact some input for science. Publishing papers that are expected to be highly cited at the beginning of the calendar year can also increase the number of citations accrued over the 2-year time span.

Next, editors can limit the “citable items.” For instance, editors may avoid or ban “case reports,” which are rarely cited. This was decided for the AJNR in 2011 and for Neuroradiology in 2012. Editors can also encourage publication of “letters to the editor,” “editorials,” “technical notes,” or “short reports” (one author, no abstract, 1,000 words, two figures, two references). These types of articles do not enter the IF denominator, yet they can be cited and thus enter the numerator; in this way, they yield “citations for free” (illustrated in the sketch below). Self-citation is theoretically allowed up to 20 % by the ISI. Some editors comment themselves on articles in the same printed issue, or ask specialists in the field for comments or correspondence with immediate reference to the article. Editors might advise reviewers to suggest citing recently published articles the authors overlooked. Imposing self-citation (“coercive citation”) as a requirement before accepting a paper is, of course, malpractice. That said, Frank-Thorsten Krell formulated the following wise conclusion in 2010 [5]: “As long as an editor does not force authors to cite irrelevant papers from their own journal, I consider it as a matter of caretaking for the journal and its authors if an editor brings recent papers to the authors’ attention.”
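As a minimal sketch of this numerator/denominator asymmetry, consider the following Python fragment; all counts are invented purely for illustration.

    # Hypothetical counts, invented purely for illustration.
    citable_items = 200          # original articles and reviews: enter the denominator
    citations_to_citable = 540   # census-year citations to those items: enter the numerator
    citations_to_letters = 30    # census-year citations to letters, editorials, etc.

    # Non-citable items stay out of the denominator, but citations to them
    # still count in the numerator: "citations for free".
    if_without_free = citations_to_citable / citable_items
    if_with_free = (citations_to_citable + citations_to_letters) / citable_items

    print(f"IF without free citations: {if_without_free:.3f}")  # 2.700
    print(f"IF with free citations:    {if_with_free:.3f}")     # 2.850

A mere 30 extra citations to non-citable items thus lifts the hypothetical IF from 2.700 to 2.850 without adding a single citable item.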

The IF and online first

Nowadays, articles become available online very rapidly, whereas print publication remains slow. The ISI database still indexes by the official print publication date, but citations can start accruing as soon as an article is available online, so articles can be read and cited for longer within the 2-year window. The ideal scenario is an article published online in January: even if it appears in print in, say, July, it profits from the maximal 2-year citation window. Moreover, Tort et al. demonstrated that an increase in the “online-to-print lag” leads to an increase in IF [8]. Increasing online-to-print lags might therefore become another instrument of active editorial policy.

The IF and assessment of researchers

Increasingly, the IF is used for the academic evaluation of researchers or research programs: for selection, tenure, and promotion of researchers, and for awarding research grants or even government funding of entire institutions. This practice is increasingly criticized. First, it is well known that 90 % of the citations to a journal come from fewer than 25 % of its articles; thus, uncited authors profit from the cited ones. Furthermore, participation in a single highly influential publication can generate a large number of citations even if an author contributed little to the paper. The IF therefore cannot be used to assess the value of individual papers, and it is certainly not a reliable assessment tool for researchers [9, 10].

To measure the impact of a scientist, the h-index is a good alternative [11]. It is based on a scientist’s most cited papers and the number of citations they have received: a scientist has an h-index of h if h of his or her papers have at least h citations each and the remaining papers have no more than h citations each. The h-index thus combines quantity (number of publications) and quality (number of citations). It is insensitive both to an accidental excess of uncited articles and to one or a few extremely highly cited articles. Automated calculators are available on Web of Knowledge, Scopus, and Google Scholar. Several variants of the h-index have been created to address specific criticisms [12], but these are beyond the scope of this editorial.
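The definition translates directly into a few lines of code; the following Python sketch, using hypothetical citation counts, shows the calculation.

    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Hypothetical example: six papers cited 10, 8, 5, 4, 3, and 0 times.
    # Four papers have at least four citations each, so the h-index is 4.
    print(h_index([10, 8, 5, 4, 3, 0]))  # prints 4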

Thomson Reuters impact factor and open access

The overall impression is that open access (OA) articles receive substantially more citations and that OA journals build up an IF very quickly. Yet, the literature on the influence of OA publication on the IF is confusing and contradictory [13, 14]. Björk and Solomon [15] compared the build-up of the IF in 610 OA journals with that in 7,609 subscription journals. They found that in medicine and health, OA journals founded in the last 10 years receive approximately as many citations as subscription journals launched during the same period. Gargouri et al. [16] found little difference and concluded that the process of science is driven not by access but by discovery.

Increasingly, the scientific literature is becoming more widely accessible in multiple ways. Some journals are entirely accessible to the general public at no charge. Others are publicly accessible in large part or for most issues. Still others are accessible to the vast majority of their intended audience under broad license agreements, even if they are not accessible to the general public. For instance, subscription to the journal Neuroradiology is mandatory for all members of the ESNR and is included in the membership fee. Therefore, our answer to OA might perhaps be to open the content of the journal quickly (i.e., after 1 year) to the general public, or at least to the medical community, in order to reap the possible benefit of OA on the IF.

Alternative indices to the IF

Several new indices or adaptations of the IF were created to overcome its drawbacks.
  • The immediacy index is the number of citations received in a given year by articles published in that same year, divided by the number of those articles; it is thus able to detect “hot” articles.

  • The 5-year IF calculates the IF over 5 instead of 2 years. This is particularly useful for fields where citations need more time as seems to be the case for Skeletal Radiology [17, 18].

  • The i10-index simply counts the number of articles with at least ten citations.

  • The SP-index incorporates paper numbers per year, IF of the journals, and citation number at any point in time [19].

  • The Eigenfactor score includes citations to articles published in the last 5 years, excludes self-citations, and weights citing journals on the basis of their own citation counts [20]. The article influence (AI) score is the Eigenfactor score divided by the number of articles; both can easily be calculated at www.eigenfactor.org. An AI score greater than 1.00 indicates an above-average influence. The AI score of Neuroradiology is 0.901, of the AJNR 1.112, and of Radiology 2.280. (Both this score and the immediacy index are written out more formally below.)
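For concreteness, these two indices can be written compactly; this is a paraphrase of the descriptions above, not an official formula sheet.

    \text{Immediacy}_{y} = \frac{\text{citations in year } y \text{ to articles published in year } y}{\text{articles published in year } y}
    \qquad
    \mathrm{AI} = \frac{\text{Eigenfactor score}}{\text{number of articles}}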

Metrics: the future

The future of metrics probably lies in emerging web-based alternatives (“cybermetrics” or “altmetrics”). An excellent overview of these alternatives can be found in an article by Roemer and Borchardt in C&RL News [21]. Altmetrics is a central hub of information on new metrics. ImpactStory compares metrics and assigns categories such as “highly cited/recommended or discussed.” Publish or Perish is a downloadable program that can calculate numerous metrics. The leading OA repository is the Public Library of Science (PLoS). It offers “Article-Level Metrics” that track the influence of individual articles from initial download to mentions in social media and blogs, along with internal metrics such as comments, notes, and ratings. Even though only PLoS articles benefit from this at present, it might serve as an example for future publishers.

Conclusions

Although heavily criticized, the IF maintains its place in ranking and comparing journals. For the evaluation of a single article or a researcher, the h-index and its variations are better suited. With new metrics or variations on existing metrics, major efforts are being made to evaluate the real “influence” or “impact” of a paper rather than the simple number of citations.

Web-based alternatives (“cybermetrics”) offer attractive features for evaluating the real influence or visibility of individual articles, not only in the specialized literature but also in social media. Finally, I wish to underline that being the Editor-in-Chief of a journal does not make one a specialist in metrics, but simply a privileged and critical user.

Conflict of interest

The author is the Editor-in-Chief of Neuroradiology.

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

Department of Radiology, UZ Leuven, Leuven, Belgium
