The impact factor of a scientific journal divides the number of citations received during a (citation) year by the number of papers published in the preceding period, normally the 2 years before the year of the citation count. For almost 20 years, impact factors have been widely criticised when the impact factor of a journal is translated to its constituent papers [1–4]. The reason is that the distribution of citations over the papers of a journal is heavily skewed [2, 4]: there is, therefore, a large difference between the average number of citations (the impact factor) and the median number of citations. The explanation for this discrepancy is incomplete, but relates to, among other things, the number of scientists active in the same field. Efforts have been made to relate obtained citations to the source of the citations, i.e. the citing authors and their field, including their citation behaviour [5–8], but there is inhomogeneity in citation density below the level of a scientific journal [9] and thus far no satisfactory solution to this problem has been found. Relating citations to the medical subject headings (MeSH) of bibliometric retrieval systems may be a step forward [10], but this would still depend on the categorisation of science by others than the publishing authors. True as this may be, it does not mean that impact factors are unimportant for scientific journals, their owners, publishers, editors, reviewers and prospective authors. They are also of substantial importance to those who judge applications for research grants and academic positions.
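The gap between the average and the median in a skewed distribution can be illustrated with a minimal sketch (the citation counts below are hypothetical, chosen only to mimic the typical long-tailed pattern):

```python
from statistics import mean, median

# Hypothetical citation counts for ten papers from one journal:
# a few highly cited papers dominate a long tail of rarely cited ones.
citations = [0, 0, 1, 1, 2, 2, 3, 5, 12, 34]

average = mean(citations)   # 6.0 — the journal-level figure, impact-factor style
typical = median(citations) # 2.0 — what a typical paper in the journal receives
```

Two highly cited papers triple the average, while the median is untouched; this is why a journal-level impact factor says little about any individual paper in it.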
There has, in my opinion, never been a serious analysis of the difference in significance between high citation numbers and, for example, high television viewing figures. Frequent citation may mean that research is excellent, but it may also simply mean that many other scientists are interested in the same topic (see above). The first interpretation is, of course, more attractive to frequently cited authors. But not only to authors: editors have become active in pushing the impact factors of their own journals in ways that start to overstep the bounds of reasonable ethical behaviour [11, 12]. Still, the deviation of a paper's temporal citation profile from the average profile of all other papers may be more indicative of scientific importance, as we suggested in a recent issue of this journal [13].
The Netherlands Heart Journal received an impact factor for the first time in its history in 2009. It was 1.39 (after truncation of the third decimal). Eugene Garfield, the founder/owner of the Institute for Scientific Information (at present part of Thomson Reuters), has repeatedly stated that it is of course nonsense to calculate impact factors to an accuracy of three decimals when, in general, far fewer than 1000 published papers are involved. This fake accuracy merely serves to prevent too many ex aequo rankings among the more than 8000 journals in the Journal Citation Reports, one of the yearly products of Thomson Reuters. Figure 1 shows that the impact factor of the Netherlands Heart Journal remained constant at around 1.4 between 2009 and 2011. Figure 2 shows the citations per paper in the years following the year of publication; the abscissa gives the year of citation. Year 1 is the year of publication, and dividing the number of citations in that year by the number of papers published in the same year yields the ‘immediacy index’. The impact factor is calculated by averaging the citations to papers published in the 2 years preceding the year during which citations are counted. In Fig. 2 this means that, to arrive at the impact factor for 2009, one takes the weighted average of citations in year 3 for papers published in 2007 and in year 2 for papers published in 2008. Likewise, for the impact factor of 2011, one calculates the weighted average of citations in year 3 for papers published in 2009 and in year 2 for papers published in 2010 (marked with an arrow in Fig. 2). Thus, citations during year 4 and later no longer contribute to the 2-year impact factor, although citation during these years may be more frequent than during the preceding years.
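The weighted average described above reduces to a single fraction. A minimal sketch, with hypothetical counts (the parameter names and numbers are mine, for illustration only):

```python
def impact_factor(cites_prev1, papers_prev1, cites_prev2, papers_prev2):
    """Two-year impact factor for citation year Y: citations received in Y
    by papers published in Y-1 and Y-2, divided by the number of papers
    published in Y-1 and Y-2. This is the paper-count-weighted average of
    the two per-year citation rates."""
    return (cites_prev1 + cites_prev2) / (papers_prev1 + papers_prev2)

# Hypothetical counts for citation year 2011: papers from 2010 are in
# year 2 of their citation life, papers from 2009 in year 3.
print(impact_factor(cites_prev1=90, papers_prev1=60,
                    cites_prev2=78, papers_prev2=60))  # → 1.4
```

Note that pooling the counts weights each publication year by its number of papers, which is why a small cohort of papers with a high citation rate moves the impact factor less than a large one.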
The impact factor for 2012 was unknown at the time of submission of this article. However, it can be deduced from Fig. 3, which shows the citations obtained during 2012 to papers published in 2010 and 2011, divided by the total number of papers, that a minor decrease (compare with Fig. 1) is anticipated (see the thin line in Fig. 3). At the same time, comparing the thick line with the thin line shows that the citations obtained thus far in 2013 to papers published in 2011 and 2012 are well above the citations obtained during 2012 to papers published in 2010 and 2011. The circle at week 22 has been duplicated at week 52, because it estimates the impact factor for 2013. This means that an impact factor well above 2.00 is foreseen for the first time in the history of the Netherlands Heart Journal. Despite the limitations of the impact factor (see above), this is good news for anyone interested in this journal, in particular for those considering submitting their work.
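One simple way to turn a mid-year citations-per-paper figure into a year-end estimate is to assume the cumulative citation curve keeps the same shape as the previous year's. This scaling rule and all numbers below are my own assumptions for illustration; they are not the article's method, which duplicates the week-22 point directly:

```python
def projected_if(midyear_rate, prev_midyear_rate, prev_final_if):
    """Project a year-end impact factor from a mid-year citations-per-paper
    rate, scaling by last year's mid-year-to-final growth. Assumes the
    citation curve has the same shape in both years (an assumption for
    illustration, not the journal's published method)."""
    return midyear_rate * (prev_final_if / prev_midyear_rate)

# Hypothetical numbers: 0.88 citations per paper by week 22 of 2013,
# versus 0.70 at week 22 of 2012, a year that finished at 1.75.
print(projected_if(0.88, 0.70, 1.75))  # roughly 2.2
```

Any such projection inherits the usual caveat: citation rates within a year are not uniform, so the earlier in the year the snapshot is taken, the less reliable the scaled estimate becomes.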
References
Seglen PO. From bad to worse: evaluation by Journal Impact. Trends Biochem Sci. 1989;14:326–7.
Opthof T. Sense and nonsense about the impact factor. Cardiovasc Res. 1997;33:1–7.
Seglen PO. Why the impact factor of journals should not be used for evaluating research. BMJ. 1997;314:498–502.
Opthof T, Coronel R, Piper HM. Impact factors: no totum pro parte by skewness of citation. Cardiovasc Res. 2004;61:201–3.
Leydesdorff L, Opthof T. Scopus’s Source Normalized Impact per Paper (SNIP) versus the Journal Impact Factor based on fractional counting of citations. JASIST. 2010;61:2365–9.
Moed HF. Measuring contextual citation impact of scientific journals. J Informetr. 2010;4:265–77.
Waltman L, Van Eck NJ, Van Leeuwen TN, et al. Some modifications to the SNIP journal impact indicator. J Informetr. 2013;7:272–85.
Leydesdorff L, Bornmann L, Mutz R, et al. Turning the tables in citation analysis one more time: Principles for comparing sets of documents. JASIST. 2011;62:1370–81.
Opthof T. Differences in citation frequency of clinical and basic science papers in cardiovascular disease. Med Biol Eng Comput. 2011;49:613–21.
Leydesdorff L, Opthof T. Citation analysis with medical subject headings (MeSH) using the Web of Knowledge: a new routine. JASIST. 2013;64:1076–80.
Wilhite AW, Fong EA. Coercive citation in academic publishing. Science. 2012;335:542–3.
Opthof T. Inflation of impact factors by journal self-citation in cardiovascular science. Neth Heart J. 2013;21:163–5.
Opthof T, Janse MJ, Kléber AG, et al. The works of Dirk Durrer (1918–1984). Neth Heart J. 2012;20:430–3.
Open Access: This article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
Opthof, T. The impact factor of the Netherlands Heart Journal in 2013. Neth Heart J 21, 319–321 (2013). https://doi.org/10.1007/s12471-013-0443-6