Already in the early days of its existence, I have been harassed by the Impact Factor. In 1976, I moved from the safe haven of a University of Technology, where at that time publications were not so important, to a Faculty of Medicine. Suddenly, it was imperative not only to publish but also to publish in “journals that count.” As a biomedical engineer, I enjoy working in a medical faculty. However, the type of work I am attracted to does not appear to result in a large flow of papers with many citations. I am proud of my more than 150 papers that can be found in PubMed and my Hirsch factor of 30, but these numbers do not compare to those of my colleagues from internal medicine, immunology, genetics, or epidemiology. I have never considered this a result of a difference in the quality of our research output, but rather a result of research field specificity.

At the EMBEC conference in Antwerp, November 2008, and the World Conference of the IUPESM in Munich, September 2009, I presented how Biomedical Engineering careers are currently influenced by these indices. In particular, I pointed out the weak position of our profession when it comes to the indices used to judge scientific quality. On both occasions, there was a sizable audience, especially of younger biomedical engineers, showing great eagerness to become acquainted with the concept of quality indices. Indeed, several countries, among which the Netherlands and the UK, aim to steer research on the basis of general or self-derived quality indices. For our young scientists, particularly in biomedical engineering, it is therefore important to understand the pitfalls of these indices and the conditions that influence their value. We do not address here the question whether steering of research has ever resulted in success; the history of science has shown that it does not. Nevertheless, policy makers love these indices, since they allow them to make judgments without insight into the potential and nature of a given research area. There is a real danger here, since policy makers tend to design new indices without proper public debate or justification in scientific journals.

Well-known indices are the journal impact factor, IF, and the Hirsch factor, hF. The Hirsch factor is the largest number h such that h of the papers considered have each been cited at least h times [3]. The hF can be applied to individual scientists, journals, or other well-defined entities. The journal impact factor for a given year equals the average number of citations received in that year by the papers the journal published in the two preceding years.
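As a minimal illustration of these two definitions (a sketch with invented citation counts, not data from any real journal or scientist), both indices can be computed in a few lines:

```python
def hirsch_factor(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def impact_factor(citations_received, citable_items):
    """Citations received in a given year to papers published in the two
    preceding years, divided by the number of those papers."""
    return citations_received / citable_items

# Invented numbers, purely for illustration
print(hirsch_factor([25, 18, 12, 9, 7, 6, 2, 1]))  # -> 6
print(impact_factor(360, 120))                      # -> 3.0
```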

Thomson Reuters, the company that publishes the journal impact factors in June–July each year, classifies the journals according to discipline (footnote 1). For the category Medicine, General and Internal, the IF of the top 20 journals varies between 52 and 2.8; for Biochemistry and Molecular Biology, between 41 and 8.2; and for Engineering, Biomedical, between 11 and 1.5. Hence, someone working in internal medicine has the opportunity to score higher impact factors than a biomedical engineer. In other words, internal medicine offers a more comfortable base for publishing in high-IF journals.

I have rather mixed feelings about the application of these indices to policy making, since they are biased by so many factors [1, 4]. For example, the top Hirsch factors in medicine are higher than those in physics [4]. Would physics be a science demanding less brainpower of the physicist than medicine demands of the MD–Ph.D.? Hence, there must be a discipline dependency. In addition, the personal history of a scientist influences the Hirsch factor. It seems trivial to simply divide the Hirsch factor by a scientific age, e.g., the number of years since the Ph.D., as has been proposed before. However, a career move may seriously affect the rate of publication over the different periods of one’s scientific life. Similarly, a change of scientific area, a rather healthy occurrence I might say, may easily introduce a period in which the rate of publication is low. At my age, the evolution of a discipline’s traditions also plays a role, since the push for publications was not that great decades ago; appreciation was based on different factors. I have colleagues who took pride in having as few authors on a paper as possible: the Ph.D. student and the supervisor, with perhaps one or two more at most. That policy, which decades ago was seen as a mark of the supervisor’s quality, is detrimental under present quality indices based on “the more the better.” Today, publications with up to 20 authors are common.

Gender is always an issue. Only female colleagues deliver babies and are subjected to all the emotions related to that. Female colleagues who try to have children but fail go through periods of emotional stress that affect the production of papers, especially when they work in an environment that has no understanding of this problem. Furthermore, the gender issue is much more than just the time lost to reproduction. Females are cited less than male colleagues working in the same area [7]. The simplest demonstration of the gender issue is the low number of females in leading positions. This seems to be the case everywhere, although obviously there are regional differences; within Europe there is large diversity between countries. My country, the Netherlands, is one of the worst performers in this respect, as measured by the number of females in higher academic ranks [2].

There are many more factors of influence one can think of. However, the first question should be: do we need an index that ranks scientists? Kai Simons, the president of the European Life Scientist Organization in 2008, stated: “There are no numerical shortcuts for evaluating research quality. What counts is the quality of a scientist’s work wherever it is published. That quality is ultimately judged by scientists…” [5].

One may wonder whether these numerical scores are the right instruments to stimulate creativity and the development of independently thinking scientists. The fear is justified that research steered by such measures will reduce diversity in science, and that research groups will eventually pursue the same scientific ideas. Obviously, I am not against the indices as such, since they are good material for reflection on one’s own performance. Of course, also in our discipline, Biomedical Engineering, we should aim high in pursuing original ideas with high impact. MBEC, too, is aiming at increasing its IF, and successfully so, by stimulating authors to submit their best work and to improve the writing of their manuscripts [6]. However, ranking scientists within a multidisciplinary institution has serious side effects. The top is obviously happy and gains leverage with the board of directors for financial support, but for the scientists who, due to bias, end up at the bottom of the list, these indices have a strongly demotivating influence. Moreover, how can I motivate my young Ph.D. students to pursue a career in the medical faculty and hospital when it is clear that, due to bias, they will have a hard time rising to the top of the institution? I tell them not to care, that motivation has to come from love of the profession, and I explain that working in a medical environment is indeed rewarding. Also, our policy makers should appeal to creativity and not to the numerical value of the indices that can be obtained.