Peer Review System Modernization and Alternative Models

In the late 1990s and early 2000s, end-to-end online submission and peer review systems such as ScholarOne’s Manuscript Central, Aries Systems’ Editorial Manager, and others became available to scholarly publishers and journal editorial offices. Many of the tasks involved in the peer review process could now be automated, making it possible to conduct multiple rounds of peer review within months rather than years.

With this increased efficiency, it became feasible for alternative versions of peer review to develop. To encourage candid feedback and comments from those refereeing submitted articles, journals began to use “blinded” forms of review, in which those reviewing the manuscripts remain anonymous to the authors. Other types of blinded peer review also became prevalent. Double-blind peer review, in which neither the author’s nor the reviewer’s identity is known to the other party, was, and still is, a common practice at many journals. Some journals took this a step further with a triple-blind system, in which even the editors are not privy to the identities of the authors. The impetus behind all of these blind or anonymous systems was to eliminate perceived bias among those evaluating the research submitted for consideration. These methods are not without flaws, however, and there have been many criticisms of this model of peer review, along with calls to change it or move away from it entirely.

Another result of these technological advances and the widespread adoption of the World Wide Web was a new method of delivering research to readers. This helped fuel the Open Access (OA) movement, which in turn brought about new approaches to the peer review system.

The most significant of these may have been the approach taken by the Public Library of Science (PLoS). Founded in 2001, PLoS has grown to become one of the world’s largest open access publishers. In 2006, PLoS launched PLoS ONE, a journal which covers research from any discipline within the fields of science and medicine. While still considered a peer reviewed journal, PLoS ONE uses a peer review process based on the philosophy that it will publish all papers judged to be technically sound, without regard to the originality or ground-breaking nature of the work [1]. As such, while historically prestigious journals practicing a more traditional peer review system typically accept 10–15 % of all articles submitted, PLoS ONE accepts 65–70 % of all papers submitted. PLoS ONE is also unusual in that there is no Editor-In-Chief with oversight of the overall journal. Submissions deemed suitable are assigned to an “Academic Editor” who oversees the peer review process. Upon acceptance, the author pays an “author publication charge” (APC). Since open access journals do not have a subscription-based revenue stream, APCs, along with grants and charitable donations, provide the bulk of an open access publisher’s revenue.

Currently there are over 9,000 open access journals in existence and many of them follow a system similar to PLoS One [2].

Criticisms of Peer Review

Despite efforts to reduce or eliminate bias and conflicts of interest within the peer review system, many still feel the traditional method of evaluation is flawed and should be changed. Unethical practices by authors and publishers along with honest failures of the system, which garner much attention, only add fuel to the fire.

Unethical Practices

Between 2000 and 2005, Elsevier, one of the largest scientific journal publishers in the world, published six “fake” peer reviewed journals [3]. These journals were sponsored by pharmaceutical companies but produced to look as if they were legitimate peer reviewed publications. In 2009, The Scientist revealed that Elsevier’s Australian division had produced six publications: the Australasian Journal of General Practice, the Australasian Journal of Neurology, the Australasian Journal of Cardiology, the Australasian Journal of Clinical Pharmacy, the Australasian Journal of Cardiovascular Medicine, and the Australasian Journal of Bone & Joint Medicine. After word of these “fake” journals was made public, Michael Hansen, CEO of Elsevier’s Health Sciences Division, issued a statement admitting that the publisher had produced a “series of sponsored article compilation publications, on behalf of pharmaceutical clients, that were made to look like journals and lacked the proper disclosures.”

Some are concerned that an open-access publishing model in which authors pay to have their work published via “author publication charges” (APCs) has created an opening for unethical journals to publish papers with little or no quality control in order to increase revenue. The introduction of the APC model has raised questions about “predatory” publishers who lower their editorial standards, or have no standards at all, in order to attract authors willing to pay to have their work published without too much scrutiny [4].

In 2009, The Open Information Science Journal (TOISCIJ), a journal that claims to enforce peer review, accepted a completely nonsensical manuscript, apparently for the sole purpose of collecting the APC from the author. The “authors” of this paper used a software program that generates grammatically correct but “context-free” (i.e., nonsensical) papers to create an article, complete with figures, tables, and references. The resulting “article” looks legitimate unless someone actually reads it and realizes that the text makes no sense whatsoever [5]. Unfortunately, this is just one of many examples.

Jeffrey Beall, a research librarian at the University of Colorado Denver, has developed his own blacklist of what he calls “predatory open-access journals.” These predatory publishers exist only to exploit the author-pays model for profit. As researchers come under increasing pressure to have their work published, these bogus journals have emerged to take advantage of desperate and inexperienced authors [6]. There were 20 publishers on his list in 2010; now there are more than 300. He estimates that there are as many as 4,000 predatory journals today, at least 25 % of the total number of open-access journals [7].

Recently, Science magazine published an article by John Bohannon entitled “Who’s Afraid of Peer Review?” which shed further light on the problem of predatory publishers. Dozens of open-access journals targeted in an elaborate Science sting accepted a spoof research article, raising questions about peer-review practices in much of the open-access world [8].

Regardless of real or perceived flaws in the system or the type of peer review system used, the majority of those who participate in the process feel that scholarly peer review is a valuable and necessary part of academic publishing.

Necessity of Scholarly Peer Review

When conducted properly, the peer review process in scholarly publishing acts as a mechanism to validate research and as a form of quality control, filtering out weak studies and helping to improve submitted work. The fundamental aim of peer review is to ensure that research publications are scientifically sound and enable others to reproduce the work [9]. Again, while the peer review system has its share of critics, industry surveys clearly show that the large majority of those involved in the research community feel it is a necessary and valued step in the scholarly publishing process.

Industry Surveys/Perceptions

In 2007, the British Academy issued a report which adamantly supported the UK’s traditional system of peer review as the best way of controlling research quality [10]. The report, which was based on the findings of a seven-member working group, concludes that “Peer review remains an essential, if imperfect, practice for the humanities and social sciences,” and states that there are no better alternatives to peer review (Radnofsky).

A 2009 survey of over 3,000 global academics details attitudes and perceptions among the research community towards the peer review process. The report shows that the overwhelming majority (93 %) disagree that peer review is unnecessary. The large majority (85 %) agreed that peer review greatly helps scientific communication, and most (83 %) believe that without peer review there would be no control [11]. The same report shows that researchers overwhelmingly (90 %) said the main area of effectiveness of peer review was in improving the quality of the published paper. In their own experience as authors, 89 % of those who responded to the survey said that peer review had improved their last published paper, both in terms of language and presentation and in terms of correcting scientific errors.

Another survey, conducted by the Research & Business Intelligence Department of Taylor & Francis in 2013, had nearly 15,000 respondents from all over the world. The results showed that 79 % of respondents felt that a peer review system which provided “A rigorous assessment of the merit and novelty of my article with constructive comments for its improvement, even if this takes a long time” was always or often the preferred system [12].

In 2011 the House of Commons Science and Technology Committee conducted an inquiry into peer review in scientific research. As a result of this inquiry U.K. parliamentarians concluded that, despite many criticisms and little evidence of its effectiveness, the traditional practice of having research articles evaluated by anonymous colleagues before publication is valued by the community and shouldn’t be completely abandoned [9].

Role of Academe

While the primary motivation for any researcher to publish their work should be to share knowledge with their peers and the public at large, other factors have developed within the academic structure over time. For researchers employed by universities or research organizations, the goal of achieving tenure or gaining promotion has come to be seen as the ultimate accomplishment. Many Deans and tenure committees place a high value not only on how often a tenure candidate has been published, but on whether the publications were in “high impact” peer reviewed journals. A candidate’s publication record is an increasingly important criterion for awarding tenure, and having articles published in first-tier journals or other national and international publishing outlets is most desirable in obtaining tenure [13]. As Michael Munger states in the Liberty Guide Handbook [14], “the anonymous referee process guarantees that multiple other people have looked at this paper and thought it was good enough to publish. So, if you have lots of refereed journal articles, it means (a) you write a lot, and (b) a disinterested person, with no reason to know you or like you, thought the work was good enough to publish. The reason, in short, that people who publish lots of journal articles usually get tenure is this: they made it easy for the Dean’s review committee to evaluate the file. It is easier to measure that which can be quantified.”

As a result of the incentive structure which has developed in academe, the two ecosystems of scholarly publishing and academic institutions have become intertwined. The academic reward system is structured to encourage quality scholarship primarily in the form of publications: formal contributions to the knowledge base in specific fields, which are intended to be widely read and acknowledged by others in those fields [15]. Scholarly publications are produced by researchers as part of their jobs, and at most universities and research organizations publications count significantly toward salary and job security [16]. Peer reviewed scholarly publications have evolved from materials researchers used to further their own knowledge into a tool used by university Deans and tenure committees to evaluate the worth of a researcher’s output. The quality and extent of academic publications in recognized academic or professional journals typically are a primary measure of a scholar’s value and evidence of eligibility for promotion and tenure [15].

The Participants/Roles in the Scholarly Peer Review Process

While there are many variations on the peer review process and those who participate in reviewing articles submitted for review, there are some roles which are fairly consistent throughout. Regardless of the field of research, the access model, or the type of blinded system used, most scholarly peer reviewed journals have individuals who fill the following roles:

  • Editor-In-Chief or Overseeing Editor.

  • Associate or Section Editor(s).

  • Reviewers/Referees.

For a metric to measure all journal peer review in a consistent and comparable way, these roles must be clearly defined so that a weighted value may be applied to each participant. Standardization is required for the metric to have any real-world applicability. As such, we define the roles as follows:

Editor-In-Chief/Overseeing Editor

An Editor-In-Chief (EIC) or “Overseeing” Editor may be compared to the captain of a ship. The EIC focuses on the mission of the journal and keeps the ship (the publication) on course by ensuring that the content accepted for publication is in line with that mission, that the editorial board members and reviewers are up to standard and performing their duties adequately and ethically, and that tasks are completed on time and effectively. The editor of a journal, in conjunction with the publisher, chooses the philosophical direction of the publication [17]. A high quality EIC helps to build the community behind a journal and its audience. While most “traditional” journals have an EIC (sometimes more than one person fulfills this role as co-editors), some of the newer “mega-journals” such as PLoS ONE, PeerJ, and F1000 have eliminated the role of the EIC and instead assign submitted manuscripts to individual editors who focus on one specialized area of the publication. One reason these journals have elected to eliminate the position of EIC may be financial. Journals without editors-in-chief and expert editors may be able to run less expensively because they offer reduced service compared to journals which utilize a full complement of editors. These journals offer less robust peer review: they offer some validation, but no ranking of relevance or importance, both of which are vital for clinicians, researchers, and scientists looking to save time and separate the best from the rest [18].

Associate or Section Editors

An Associate Editor (AE) or Section Editor generally reports to the EIC or Overseeing Editor and is responsible for handling articles which fall into a specific category. Often the AE is the one who screens a submission prior to sending it out for full review. If the article is deemed to be suitable for the journal and appropriate for further review, the AE handles the selection of qualified reviewers or referees for a paper. Depending on the journal, the AE may make a decision on an article, or they may make a recommendation to the EIC, who then enters the final decision. Depending on the size of a journal and the number of submissions received per year, a journal may or may not have Associate or Section Editors. Many journals do not use AEs while others might have a dozen or more.

Reviewers/Referees

If Editors and Associate Editors are the pilots and co-pilots of the ship, then Reviewers may be thought of as the engine that keeps things running. Without reviewers the entire scholarly journal system would collapse. Regardless of access model, both traditional and open access scholarly publishers rely on the work of reviewers to comment on and evaluate the articles submitted for publication. Most journals send articles out to two to four individuals who have experience in the field of research being discussed. Again, depending on the journal, Reviewers are asked to evaluate the technical soundness of the work, the impact the results may have on the field, and any errors they believe they have spotted. Reviewers might also ask the authors to clarify certain sections of the research paper or ask questions about how the work was carried out. Typically, original submissions are sent back to the authors to be revised and resubmitted, so referees who participated in the original review process will see the work again when it is resubmitted to the journal. Because of the time consuming and complex nature of performing quality peer review, top quality referees are in high demand. It is the norm that a research article will undergo three to four (or sometimes more, depending on the paper and topic area) rounds of review before the reviewers are satisfied that the author has addressed all of their concerns and a final decision is entered.

Decision Making

Once all required reviews have been submitted by the reviewers/referees, either the Associate Editor or the Editor-In-Chief will read and evaluate the comments of the reviewers, conduct their own review and make a decision on the submitted article. Again, depending on the journal, criteria for acceptance may vary. Traditional scholarly journals typically accept a very low percentage (<20 %) of work submitted to them, whereas some open access journals which are only concerned with the technical soundness of the research may accept approximately 60–70 % of submissions for publication.

Rationale Behind Need for Metric

It is estimated that approximately 1.5–2 million peer-reviewed papers across 24,000+ journals were published in 2012. The National Science Board estimates the average annual growth of the indexes within the Web of Science to be 2.5 % [19]. It is clear that researchers and readers have more information than ever to sort through, but less time to do so. The peer review process is increasingly under fire, and questions about trust abound. How do users determine what content to select? Before an article is published it has, in theory, passed through a quality peer-review process. As discussed, this process is meant to ensure that every article published meets the highest standards demanded by the scientific method.

As a practical matter, however, some journals claim to be peer-reviewed when in fact they are not, or their review process is weak. New “mega journals,” some of which have eliminated the role of the Editor-In-Chief, have emerged. Other journals which employ less rigorous peer review standards have also entered the market. They operate in this manner in order to survive in the competitive and evolving industry of modern scientific publishing, as well as to benefit from “pay to publish” business models.

Existing Metrics and the Opportunity to Fill a Void

There have been efforts in the industry to provide a way to measure the quality and/or importance of a journal or a published article, but all of these methods rely on citation networks and are lagging indicators. After an article or journal is published it takes many months, often years, to see if the published research is cited. The Impact Factor (IF) is one such measure used to assess a journal or article, but it primarily measures popularity over time, not quality or importance. Journal metrics measure the performance and/or impact of scholarly journals. Each metric has its own particular features, but in general they all aim to provide rankings and insight into journal performance based on citation analysis. They start from the basic premise that a citation to a paper is a form of endorsement, and the most basic analysis can be done by simply counting the number of citations that a particular paper attracts: more citations to a specific paper mean that more people consider that paper to be important [20]. Other metrics are also used when attempting to measure the quality of a journal or the articles published in it.

The Immediacy Index is an attempt to measure how topical and urgent a work is. It is calculated by taking the number of citations the articles in a journal receive in a given year and dividing by the number of articles published that year [21]. The Cited Half-Life attempts to measure how long content is referred to after publication. The Cited Half-Life is more important for fields in which citations start to flow in slowly after a significant lag time, such as the social sciences or mathematics and computer science [21]. There are other metrics as well, such as the Aggregate Impact Factor, Eigenfactor, SNIP, SJR, and more. But again, these measures apply only to journals, not to individual articles or researchers. Even newer attempts to measure the impact of research articles in a more immediate manner (such as altmetrics) give no indication of the peer review process which occurred prior to publication. The extent of peer review is unknown and cannot be known from any of these methods, and none of them offers any immediate indication useful for newly published research.
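The Immediacy Index thus reduces to a simple ratio. As a minimal illustration (in Python, with invented figures rather than data from any real journal):

```python
def immediacy_index(citations_in_year: int, articles_in_year: int) -> float:
    """Citations received in a given year by a journal's articles
    published that same year, divided by the number of articles
    published that year."""
    return citations_in_year / articles_in_year

# A journal that published 100 articles in 2012 which drew
# 150 citations during 2012 has an Immediacy Index of 1.5.
print(immediacy_index(150, 100))  # 1.5
```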

A measure of the thoroughness of a peer review of an article, or a peer review evaluation score, could help a scientist or researcher locate the thoroughly reviewed articles and avoid the inferior ones. Such a measure could also help legitimize and raise the status of a journal. Thus, it would be desirable to have a method and system for appraising the extent to which articles in a publication have been examined by means of a peer-review process.

This paper proposes a new system, pre-SCORE, that would provide a metric which not only indicates how many individuals examined an article prior to publication, but also the level of expertise of those involved.

pre-SCORE will let users know that new material has been vetted. The goal of pre-SCORE is to represent the quality of the peer review process, based on the belief that a strong peer review process usually results in a more trustworthy final product. In most cases a high pre-SCORE should indicate high quality peer review.

As previously mentioned, IF and other metrics are lagging indicators, while pre-SCORE is a leading indicator, providing users with information about potential interest and quality 2–3 years before the IF and other metrics do.

Benefits for Authors

Researchers submitting work on which they have spent months, sometimes years, will gain further assurance that they are submitting their article to a trustworthy journal run by a legitimate publisher. They will have additional confidence that an appropriate number of reviewers will be selected to evaluate their research and that the reviews will be diligent and of high quality. This may prove especially important for younger researchers and foreign authors.

Benefits for Publishers/Journals

Publishers and journals also benefit from making use of the new pre-SCORE metric. By increasing the level of transparency related to their peer review process, they establish increased legitimacy and build trust within the research community. In a sense, by using the Peer Review Evaluation Score and services offered by a neutral, independent source, journals are themselves being reviewed and evaluated while displaying forthrightness regarding the content they publish. Newer journals which do not yet have a history of citations or an Impact Factor can earn a status of authenticity sooner and set themselves apart from those of the “predatory” ilk.

Benefits for Readers

It is clear that there is more research and scholarly content to search through than ever before, and the amount of data available only continues to grow. The fact that researchers also have less time than ever to weed through all of this material makes for a difficult situation. Using the pre-SCORE, readers would have the ability to filter out material which has not been properly vetted. pre-SCORE would be one more tool available to aid in the discovery of quality published research.

Benefits for Libraries/Consortia

Libraries/consortia have had budgets drastically cut over the last several years. They are under increasing pressure to provide sources and materials for their institutions’ researchers, but have smaller budgetary resources to do so. By using pre-SCORE, libraries and consortia would have further assurance that their limited funds are being spent wisely.

pre-SCORE will provide readers, libraries, and others with an immediate indicator based on real-world metadata, provided by the journal, that is used to calculate this new metric. The presence of the pre-SCORE metric will assure users that the article and/or journal has been legitimately peer reviewed.

Peer Review Evaluation Score (pre-SCORE)

Basic pre-SCORE Algorithm

At the most basic level: S = [A + B + C]/√V

  • S = Peer Review Evaluation Score

  • A = (X*E)

  • B = (Y*F)

  • C = (Z*G)

  • X = the number of “Overseeing” Editors (EIC or the Editor who has journal oversight)

  • Y = the number of Associate or Sub-Editors (AE)

  • Z = the number of reviewers

  • E = the numeric value (0.4) assigned to X

  • F = the numeric value (0.3) assigned to Y

  • G = the numeric value (0.2) assigned to Z

  • V = the version of the paper being reviewed (original submission = 1, revision 1 = 2, revision 2 = 3, and so on).

Expanding on the basic concept, let R equal a participant’s h-index (an index that measures the productivity and impact of a scientist or scholar). The score S is then computed as a function of E, F, G, X, Y, Z, V, and R.

  • Re1, Re2, Re3, etc. is the h-index of each EIC or “Overseeing” Editor.

  • Ra1, Ra2, Ra3, etc. is the h-index of each Associate/Sub-Editor.

  • Rr1, Rr2, Rr3, etc. is the h-index of each of the reviewers.

Each participant’s h-index is multiplied by the weighted value assigned to their role (E, F, or G), and the score is calculated according to the following equation (shown here for one EIC, one AE, and two reviewers):

S = [(Re1 * E) + (Ra1 * F) + (Rr1 * G) + (Rr2 * G)]/√V

We have now calculated a metric which not only indicates how many individuals examined an article prior to publication, but also reflects the level of expertise of those involved.
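The calculation can be expressed compactly in code. The following Python sketch assumes the definitions above (role weights E, F, G and divisor √V); the function and variable names are our own, and it uses exact square roots, whereas the worked example later in this paper rounds √2 and √3 to 1.4 and 1.7.

```python
import math

# Standard weighted values assigned to each role (E, F, G above).
EIC_WEIGHT = 0.4       # Editor-In-Chief / Overseeing Editor
AE_WEIGHT = 0.3        # Associate / Section Editor
REVIEWER_WEIGHT = 0.2  # Reviewer / Referee

def pre_score_round(eic_h, ae_h, reviewer_h, version):
    """Score for one round of review: each participant's h-index is
    multiplied by their role weight, summed, and divided by the
    square root of the version number V (original submission = 1)."""
    weighted = (sum(h * EIC_WEIGHT for h in eic_h)
                + sum(h * AE_WEIGHT for h in ae_h)
                + sum(h * REVIEWER_WEIGHT for h in reviewer_h))
    return weighted / math.sqrt(version)

def pre_score(rounds):
    """Final pre-SCORE: the sum over all rounds of review, where
    `rounds` is a list of (eic_h, ae_h, reviewer_h) tuples giving
    the h-indexes of the participants in each round."""
    return sum(pre_score_round(eic, ae, rev, v)
               for v, (eic, ae, rev) in enumerate(rounds, start=1))
```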

Standard Weighted Value of Process Participants

As explained in the previous sections there are participants who play various roles within the scholarly peer review process. The highest weighted value (0.4) is placed on the role of the EIC or “overseeing” editor because the individual in this role has the ultimate responsibility in determining what a journal accepts for publication.

Just below the EIC in terms of weighted value (0.3) in the pre-SCORE formula is the Associate/Section or Sub-Editor. These types of editors oversee specific sections within a journal, but not the overall journal content.

Finally, reviewers or referees are assigned a value of 0.2 within the calculation.

These values must be standardized across all journals or the metric will be meaningless. EIC/Overseeing Editors cannot have a value of 0.4 at one journal and 0.5 at another. The same holds true for all other roles.

The value of each role is included for each revision of the article in which that participant takes part in the review process. Typically, as the review process is extended, the concerns of various reviewers are met and they drop out of the process. Additionally, earlier rounds of review are generally more rigorous than subsequent examinations, so while the initial round carries full weight (divided by √1 = 1), each following round is divided by the square root of its round number (round 2 by √2 ≈ 1.4, round 3 by √3 ≈ 1.7, and so on) to give a realistic balance to the final metric.

Inclusion of H-Index

When setting out to evaluate the peer review process while still respecting the desire for anonymity, there were two goals: to indicate how many “eyeballs” looked at a paper prior to acceptance, and what “type” of “eyeballs.” The basic algorithm helps to answer the first question. By incorporating the h-index of each individual we can address the second. In 2005, J.E. Hirsch, a professor of physics at the University of California, San Diego, proposed the index h, defined as the number of papers with citation number ≥ h, as a useful index to characterize the scientific output of a researcher [22]. As such, the h-index is a viable measure of the level of expertise an individual has within a scholarly field. A higher pre-SCORE will indicate that multiple individuals, individuals with high h-indexes, or both examined an article prior to acceptance.

Some studies indicate that reviewers who are earlier in their careers produce higher quality peer review than more senior reviewers, who may have higher h-indexes [23]. A more recent study, published in 2010 in the Annals of Emergency Medicine, seems to support this idea [24] (Callaham). While the studies on this subject are fairly limited, in relation to the pre-SCORE concept it would be a simple matter to replace the h-index with the m-index. The m-index is defined as h/n, where n is the number of years since the scientist’s first published paper; it is also called the m-quotient [25].
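As a concrete illustration of the two indexes, the following sketch computes both from a list of per-paper citation counts; the figures are invented for the example.

```python
def h_index(citation_counts):
    """Largest h such that the researcher has h papers with at
    least h citations each (Hirsch's definition)."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def m_index(citation_counts, years_since_first_paper):
    """m-index (m-quotient): the h-index divided by the number of
    years since the researcher's first published paper."""
    return h_index(citation_counts) / years_since_first_paper

# A researcher whose papers are cited [10, 8, 5, 4, 3] times has
# h = 4; ten years after the first paper, the m-index is 0.4.
print(h_index([10, 8, 5, 4, 3]))      # 4
print(m_index([10, 8, 5, 4, 3], 10))  # 0.4
```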

Examples

An analysis of manuscripts submitted to and accepted by peer reviewed journals shows how the pre-SCORE is calculated. The metadata available when a paper is processed via an online submission and peer review system such as ScholarOne Manuscripts or Aries Systems’ Editorial Manager contains all of the information necessary to determine the pre-SCORE.

One paper examined was submitted in January 2011 and underwent three rounds of review before ultimately being accepted in December of the same year. The EIC has an h-index of 34. The AE has an h-index of 53. Three external reviewers took part in the first round of evaluation: Reviewer 1 has an h-index of 42, Reviewer 2 an h-index of 29, and Reviewer 3 an h-index of 18. H-indexes were determined using the Thomson Reuters Web of Knowledge database.

Each participant examined the submitted article during round 1, resulting in the following calculation:

S = [(Re1 * E) + (Ra1 * F) + (Rr1 * G) + (Rr2 * G) + (Rr3 * G)]/√V

or

S1 = [(34 * 0.4) + (53 * 0.3) + (42 * 0.2) + (29 * 0.2) + (18 * 0.2)]/√1

S1 = [13.6 + 15.9 + 8.4 + 5.8 + 3.6]/1

S1 = 47.3

The paper was sent back to the authors and was revised and resubmitted. All participants again evaluated the article so all variables remain the same with the exception of √1 being adjusted to √2:

S2 = [(34 * 0.4) + (53 * 0.3) + (42 * 0.2) + (29 * 0.2) + (18 * 0.2)]/√2

S2 = [13.6 + 15.9 + 8.4 + 5.8 + 3.6]/1.4

S2 = 33.8

The paper is then returned to the authors and again revised and resubmitted. The AE examines the article and, satisfied that all of the reviewers’ concerns have been addressed, returns it to the EIC with a recommendation to accept the paper for publication. The EIC reviews all previous comments, re-reads the paper, and decides to accept the article:

S3 = [(34 * 0.4) + (53 * 0.3)/√3

S3 = [13.6 + 15.9]/1.7

S3 = 17.4

This process repeats as needed for each round of peer review. In this example the final pre-SCORE for the paper is the sum of all rounds of review or:

S = S1 + S2 + S3

S = 47.3 + 33.8 + 17.4 = 98.5
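The arithmetic of this example can be verified directly. The short script below uses the rounded square roots from the text (√2 ≈ 1.4, √3 ≈ 1.7) and recovers the same round scores and total.

```python
E, F, G = 0.4, 0.3, 0.2        # role weights
eic, ae = 34, 53               # h-indexes of the EIC and the AE
reviewers = [42, 29, 18]       # h-indexes of the three reviewers

full_panel = eic * E + ae * F + sum(h * G for h in reviewers)
s1 = round(full_panel / 1.0, 1)          # round 1: all participants
s2 = round(full_panel / 1.4, 1)          # round 2: all participants
s3 = round((eic * E + ae * F) / 1.7, 1)  # round 3: editors only

print(s1, s2, s3)              # 47.3 33.8 17.4
print(round(s1 + s2 + s3, 1))  # 98.5
```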

Several other papers were also analyzed with resulting scores ranging from 52.7 to 98.5.

Issue Level Measurement

In addition to providing a measurement for each individual article, the metric is easily extended so that each issue of a journal is rated with a pre-SCORE value, using the average of the scores of the articles contained within the issue. For example:

  • An issue contains twelve (12) articles.

  • Two of these are a “Letter From The Editor” and a “Book Review,” neither of which is peer reviewed.

  • The remaining ten (10) articles have pre-SCORE values of 98.5, 95, 101.2, 103, 92.5, 88, 114, 110.3, 104.7, and 82.

  • The average of these ten values is 98.92.

  • To account for individual articles which may score unusually high or low, a standard deviation is incorporated, resulting in an issue level pre-SCORE of 97.1618.

Annual Measurement

A measurement of yearly performance can likewise be extrapolated by simply averaging a journal’s output over the year. For example:

  • A journal produces one issue every other month for a total of six (6) issues per year.

  • The six issues have individual pre-SCORE ratings of 82.4, 84.6, 85, 90.2, 92, and 83.5, for a total of 517.7.

  • Dividing this total by the number of issues per year (in this case 6) results in an annual pre-SCORE measurement for this journal of 86.3.
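Both aggregations are plain averages, as the brief sketch below shows; because the text does not fully specify how the standard deviation is folded into the issue-level figure, the sketch reports only the unadjusted averages.

```python
# Issue level: average over the ten peer reviewed articles.
article_scores = [98.5, 95, 101.2, 103, 92.5, 88,
                  114, 110.3, 104.7, 82]
print(round(sum(article_scores) / len(article_scores), 2))  # 98.92

# Annual level: average over the journal's six issues.
issue_scores = [82.4, 84.6, 85, 90.2, 92, 83.5]
print(round(sum(issue_scores) / len(issue_scores), 1))      # 86.3
```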

Real-World Use

For real-world use to be practical and easily achievable across many thousands of journals, a system must be available by which the calculations can be performed quickly and automatically. Because most journals today use an online system for submission and peer review, the creation of such a system is possible. These online systems capture all of the necessary information pertaining to the roles which participate in the peer review process. This information is tagged within the system XML and can be exported and processed by software created to calculate the pre-SCORE metric in the following manner:

  • The journal submission/peer review system creates an export batch or report containing metadata with all appropriate values. Upon acceptance for publication, this export runs and sends all required metadata to the pre-SCORE server.

  • The pre-SCORE software pulls each participant’s h-index from Google Scholar, Thomson Reuters, SCOPUS, or another source via API.

  • The metadata and h-indexes are analyzed by pre-SCORE’s software and the score is calculated.

  • The pre-SCORE is passed on as needed via API and made available in search results, on-page displays, article metrics, and so on.
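A hypothetical sketch of this pipeline follows. The export format, the h-index lookup, and all names are illustrative assumptions rather than any vendor’s actual interface.

```python
import math

ROLE_WEIGHTS = {"eic": 0.4, "ae": 0.3, "reviewer": 0.2}

def fetch_h_index(participant_id: str) -> int:
    """Placeholder for an API lookup against Google Scholar,
    Web of Science, SCOPUS, or another bibliometric source."""
    raise NotImplementedError("wire up a real h-index API here")

def pre_score_from_export(rounds):
    """Compute the pre-SCORE from exported peer review metadata.
    `rounds` is a list of review rounds, each a list of
    (role, participant_id) pairs captured by the submission system."""
    total = 0.0
    for version, participants in enumerate(rounds, start=1):
        weighted = sum(ROLE_WEIGHTS[role] * fetch_h_index(pid)
                       for role, pid in participants)
        total += weighted / math.sqrt(version)
    return total
```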

Potential pre-SCORE Adoption and Integration

There are currently several existing initiatives within the scholarly publishing community where the addition and availability of the pre-SCORE metric would fit and act as an added benefit to all parties involved.

  • Article Level Metrics: articles now include measures of online usage, citations from the scholarly literature, social bookmarks, blog coverage, and the Comments, Notes, and “Star” ratings made on the article by engaged users.

  • Crossmark: Crossref’s Crossmark initiative was developed to distinguish between different versions of a publication on the web and can include information on peer review. Currently, Crossmark informs users whether or not an article was peer reviewed. Adding the pre-SCORE metric would provide another layer of verification and assurance to the user.

  • Web of Science/Knowledge: Thomson Reuters’ citation indexing and search service provides bibliographic content and tools to access, analyze, and manage research information. Existing metrics within the citation indexing services include the Impact Factor, Eigenfactor, total citations, and more. Again, the addition of pre-SCORE to this family of metrics would enhance understanding of the quality of a journal at the article, issue, or annual level.

pre-SCORE would be a natural fit in all of these instances.

Future Analyses of pre-SCORE Compared to Citation Rates

One future analysis which may be worth exploring is whether there is a correlation between the thoroughness of the peer review process conducted prior to an article’s acceptance, Impact Factor, and future citation rates. By tracking the citation rates of articles which have a pre-SCORE, trends can be established over time which may confirm that a more rigorous peer review process does indeed result in higher quality published work.

Conclusion

The history of the peer review process and the scholarly journal dates back hundreds of years. It was established in an effort to ensure that research conducted for the betterment of all has passed through the scientific method. In spite of advances in technology which now allow the peer review process to be conducted in a more timely and efficient manner, there are still many who feel that the system is flawed or in need of improvement. However, several surveys show that the majority of those who participate in and are served by the scholarly peer review process strongly believe it is a crucial and important aspect within the academic publishing ecosystem. Scholarly publishers and journals cannot exist without the researchers who provide their work as content, and those same researchers require a respected, neutral third party to evaluate and distribute their findings in the best possible manner. Quality scholarly journals provide this service, and a key aspect of these services is a legitimate, methodical peer review process. Other than relying on the “brand” or reputation of a scholarly journal or publisher there has never been any method by which the legitimacy and thoroughness of the peer review evaluation of newly published work could be known.

While many metrics which evaluate scholarly publications exist, all of the current metrics evaluate research over time. Traditional metrics such as Impact Factor, Immediacy Index, Cited Half-Life and others rely on the counting of citations which generally take years to accumulate. Even newer metrics such as article level metrics or “altmetrics,” which report some more timely information, such as sharing and discussion via social media outlets, do not give any indication of what transpired prior to the publication of an article. There are no existing values which give any signal of the potential value of new research, or which corroborate the level to which an article was evaluated before publication.

High profile instances of apparent failures of the peer review system, as well as the emergence of unethical publishers and authors, have made the need to confirm legitimate peer review more relevant than ever. Such a method of verification is necessary for the success of all parties involved.

With the need for scholarly publishers to establish trust greater than ever, there exists the potential for rapid adoption of pre-SCORE as the standard in this area by all members of the scholarly community. With millions of peer reviewed articles published each year, the need for such a metric is very clear. Again, the proposed metric, the “Peer Review Evaluation Score” (pre-SCORE), would be one more tool available to authors, publishers, readers, and libraries to aid in the discovery of quality published research.