Peer-reviewed publication

Peer-reviewed publications are a major product of present-day research. Besides conference presentations, teaching and industrial project implementation, they are one of the main outputs of research work. Many, mostly statistical, systems exist for expressing the influence of publications, such as the Impact Factors (2- and 5-year), Total Cites, Immediacy Index, Cited Half-life, Eigenfactor® Score and Article Influence® Score from the ISI Web of Knowledge, Thomson Reuters (2013, admin-apps.webofknowledge.com/JCR), the SJR from SCOPUS, Elsevier (2014, www.scopus.com) and various h-indexes (Hirsch 2005). For instance, the h-index is the largest number h such that the author has published h papers each cited at least h times; the h10-index, used by Google Scholar (2014, www.scholar.google.com) as the i10-index, is the number of papers with at least ten citations, and the h1-index is the number of papers with at least one citation. A key issue is that the counted references should be independent, i.e. self-citations should be excluded. This can sometimes be rather tricky, as many authors share the same surname and even the same initials (see e.g. Li, Liu, Zhang, Wang and Kim in Asia, or Smith, Brown and Johansson in the Western hemisphere). Perhaps a kind of DAI (digital author identifier), an equivalent of the DOI (The DOI® System, ISO 26324 Digital Object Identifier System, 2014, www.doi.org), will be needed in the near future to make all these statistics credible.
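As a rough illustration of how such citation-based indices are computed, consider the following minimal sketch; the function names and the example citation counts are hypothetical and not taken from any of the services cited above.

```python
# Illustrative sketch only; the citation counts below are made-up example data.

def h_index(citations):
    """Largest h such that the author has h papers cited at least h times."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def papers_with_at_least(citations, threshold):
    """Number of papers with at least `threshold` citations (h10: threshold=10, h1: threshold=1)."""
    return sum(1 for c in citations if c >= threshold)

citations = [25, 14, 10, 9, 4, 2, 0]          # hypothetical per-paper citation counts
print(h_index(citations))                     # -> 4
print(papers_with_at_least(citations, 10))    # -> 3 (h10-type count)
print(papers_with_at_least(citations, 1))     # -> 6 (h1-type count)
```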

Measurement tools

Each of these measurement tools has pros and cons. The number of citations and the citation practices differ considerably across research fields, e.g. bioengineering, chemical engineering and mathematics. Review-type papers and review journals naturally attract more citations. Is a research work that still attracts citations ten years after publication more valuable than a paper that gained 50 citations in its first two years and was then totally forgotten? Consequently, researchers feel pressure to publish quickly and in journals with a high Impact Factor. Peer-reviewed journals, however, find it increasingly hard to recruit willing reviewers to meet the demand.

R-Index

Related to this problem, a very interesting recent paper has appeared suggesting a new R-index (Logan 2014). The author suggests that the review process would be improved if we had reviewer metrics as well as author metrics. The R-index can be defined as the number of reviews the author has provided over his or her academic career. Logan correctly states that if it is possible to track total publication numbers and total citations, why not also track the total number of reviews?

There can be various R-indices: the R-factor, the number of reviews an author has provided over the author's academic career; R5, over five years; R2, over two years; and R1, over a calendar or running year. Logan, however, develops the idea even further: one peer-reviewed publication typically requires three reviews, meaning that 10 publications need about 30 reviews. A representative R-index should therefore be the number of reviews provided divided by the number of papers published. It is an appealing idea.
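A minimal sketch of this ratio, assuming it is simply the reviews provided divided by the papers published; the function name and sample figures are illustrative and not taken from Logan (2014).

```python
# Illustrative sketch of the review-to-publication ratio described above.

def r_index(reviews_provided, papers_published):
    """Reviews provided divided by papers published (assumes papers_published > 0)."""
    return reviews_provided / papers_published

# With roughly three reviews needed per published paper, an author with
# 10 papers "owes" about 30 reviews; delivering them gives a ratio of 3.0.
print(r_index(reviews_provided=30, papers_published=10))  # -> 3.0
# Fewer than one review per paper (a ratio below 1) would place the author
# in the R² Club discussed below.
print(r_index(reviews_provided=5, papers_published=10))   # -> 0.5
```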

Of course, the quality of a paper's content is not captured by quantitative measures alone. Some reviewers prefer to respond to requests from journals of perceived quality, often those with high impact factors. Although reviewers known to be poor usually do not get re-invited, the number of papers being submitted these days is saturating the capacity of good reviewers. As a result, editors may be tempted to sacrifice quality to cope with the load. A similar problem exists in publication assessment and evaluation: an experienced author can profile a paper to attract more citations regardless of its deeper scientific merit.

Logan’s paper has several other great suggestions which are worth considering. Also worth studying and discussing is the shameful R² Club, defined by an R-index that decreases when you square it, or more simply an R-index below 1 (squaring a number smaller than one makes it smaller). To sum up, if the R-index and/or R-factor were introduced, it would make an editor’s life easier, and authors would also benefit from a faster reviewing procedure, especially since editors in many cases have to re-invite reviewers several times.

Reviewing

Reviewing is a professional courtesy and is not rewarded, except for minor incentives such as a month’s free access to www.sciencedirect.com and/or Scopus, or a £5 token to be used for IChemE publications. Most reviewers have those perks fully covered by their institutions anyway.

Yet the real benefit of reviewing is getting access to research results, both their excellence and their pitfalls, ahead of publication. Just as real learning occurs when a person starts to teach, reviewing can significantly deepen a person’s understanding. Young researchers in particular can benefit from reviewing by learning the techniques of presenting ideas and structuring a manuscript attractively. Even the most exciting research, if not properly presented, can lose its attractiveness and even the opportunity to be published.

The process of review also reveals the reviewer’s personality, research ability, knowledge and management style. Potential reviewers can consider the following questions to better their skills:

  • Are we well organised?

    How long does it take to respond to a request and do the review?

  • Are we efficient?

    Can we handle numerous invitations at the same time and still discharge normal workplace duties?

  • Are we good-natured or sour personalities?

    Is it possible to help the authors of weak manuscripts without being unfriendly?

  • Are we ready to help others by delivering as soon as possible?

    Reviewing takes the same amount of time regardless of when it is done; delaying it may actually make it more time-consuming.

Managerial abilities

  • How long does it take to reply?

  • We can reply positively, reply negatively or ignore the request; each of those actions says something about us.

  • It is no shame to decline an invitation when we are overloaded, but how long does it take us to decline?

  • Are we really so overloaded, or rather unwilling to take on some extra professional responsibility?

Personality

  • Is our review sour, patronising or offensive, or does it try to be helpful and suggest real improvements?

  • Is our review fair, or are we trying to push some other agenda?

  • Are we ready to spend sufficient time to provide really honest feedback?

Research abilities

  • A review reveals rather well the reviewer’s understanding of the subject.

  • Does the review provide real evaluation and suggestions?

Ability to formulate

  • Are the opinion, assessment and recommendations clear enough?

  • Are the strong points appreciated and the weaknesses spotted? Is the review just touching on formalities or language corrections? Quite frequently reviewers, who cannot be real language experts when the journal does not publish in their mother tongue, still try to act as language editors rather than scientific reviewers.

Reviewing can be assigned as a task to a PhD student or a young researcher, as a way to assess their ability to impress those in a position to offer a job. A reviewer who is prompt, thoughtful and articulate in his or her review can easily be distinguished from one who is tardy, perfunctory and careless. The difference usually reveals character traits such as honesty, willingness to work hard, helpfulness, timeliness, knowledge and thoughtfulness.

Conclusions

Reviewing is a highly important, demanding and, in the end, also a rewarding task. It is a pity that a considerable number of researchers try to refuse it with various excuses. Every editor has met excuses ranging from the “little boy” type to rather sophisticated ones, where the invited reviewer could probably have nearly completed the review in the time needed to formulate the elaborate excuse. Unfortunately, some of those who possess the most comprehensive skills and experience consider themselves above the reviewing crowd. On the other hand, there are well-known professors who are enormously busy but still do their best to share their experience with the authors of peer-reviewed manuscripts, providing them with valuable feedback.

For this reason, the introduction of some kind of R-index/R-factor, and of the R² Club on the other hand, seems a very desirable idea. The present databases already hold all the necessary information and are waiting to be exploited. Some leading universities have already been assessing not only publication but also reviewing activities in some way. Perhaps it is time for a wider discussion of these issues.