At the most basic level: S = [A + B + C]/√V
S = Peer Review Evaluation Score
A = (X*E)
B = (Y*F)
C = (Z*G)
X = the number of “Overseeing” Editors (EIC or the Editor who has journal oversight)
Y = the number of Associate or Sub-Editors (AE)
Z = the number of reviewers
E = the numeric value (0.4) assigned to X
F = the numeric value (0.3) assigned to Y
G = the numeric value (0.2) assigned to Z
V = the version number of the paper under review (1 for the original submission, 2 for the first revision, and so on).
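The basic formula can be sketched in a few lines of Python. This is a hypothetical helper, not part of the pre-SCORE proposal itself; the role weights 0.4/0.3/0.2 and the √V divisor come directly from the definitions above:

```python
import math

# Standard role weights from the definitions above (E, F, G).
E, F, G = 0.4, 0.3, 0.2

def pre_score_basic(x, y, z, v):
    """Basic pre-SCORE: S = [A + B + C]/sqrt(V).

    x = number of overseeing editors (EIC), y = number of associate
    editors, z = number of reviewers, v = version number (1 for the
    original submission, 2 for the first revision, and so on).
    """
    a = x * E  # A = X*E
    b = y * F  # B = Y*F
    c = z * G  # C = Z*G
    return (a + b + c) / math.sqrt(v)

# One EIC, one AE, three reviewers, original submission:
print(pre_score_basic(1, 1, 3, 1))  # 0.4 + 0.3 + 0.6, i.e. approximately 1.3
```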
Expanding on the basic concept, let R equal a participant’s h-index (an index that measures the productivity and impact of a scientist or scholar). The score S is then computed as a function of E, F, G, X, Y, Z, V, and R.
Re1, Re2, Re3, etc. are the h-indexes of the EICs or “Overseeing” Editors.
Ra1, Ra2, Ra3, etc. are the h-indexes of the Associate/Sub-Editors.
Rr1, Rr2, Rr3, etc. are the h-indexes of the reviewers.
The score is calculated according to the following equation, with one C-weighted term per reviewer:
S = [(Re1 * A) + (Ra1 * B) + (Rr1 * C) + (Rr2 * C) + …]/√V
We have now defined a metric which indicates not only how many individuals examined an article prior to publication, but also the level of expertise of those involved.
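Assuming one overseeing editor and one associate editor per paper (so their h-indexes are weighted simply by 0.4 and 0.3, with 0.2 per reviewer), the expanded calculation can be sketched as follows. The function name and signature are illustrative:

```python
import math

def pre_score_round(h_eic, h_ae, h_reviewers, version):
    """Weighted pre-SCORE for one round of review.

    Each participant's h-index is multiplied by the weight of their
    role (0.4 for the EIC, 0.3 for the AE, 0.2 per reviewer), and the
    sum is divided by the square root of the version number.
    """
    total = h_eic * 0.4 + h_ae * 0.3 + sum(h * 0.2 for h in h_reviewers)
    return total / math.sqrt(version)

# Round 1 of the worked example later in the text:
print(pre_score_round(34, 53, [42, 29, 18], 1))  # approximately 47.3
```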
Standard Weighted Value of Process Participants
As explained in the previous sections, there are participants who play various roles within the scholarly peer review process. The highest weighted value (0.4) is placed on the role of the EIC or “overseeing” editor because the individual in this role has the ultimate responsibility for determining what a journal accepts for publication.
Just below the EIC in terms of weighted value (0.3) in the pre-SCORE formula is the Associate/Section or Sub-Editor. These types of editors oversee specific sections within a journal, but not the overall journal content.
Finally, reviewers or referees are assigned a value of 0.2 within the calculation.
These values must be standardized across all journals; otherwise the metric is meaningless. EIC/Overseeing editors cannot have a value of 0.4 at one journal and 0.5 at another. The same holds true for all other roles.
The value of each role is included for every revision of the article in which that participant takes part in the review process. Typically, as the review process is extended, the concerns of various reviewers are met and they drop out of the process. Additionally, earlier rounds of review are generally more rigorous than subsequent examinations. So while the initial round carries full weight (divisor 1), each following round is divided by the square root of the round number (√2 ≈ 1.4 for round 2, √3 ≈ 1.7 for round 3, and so on), giving a realistic balance to the final metric.
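The dampening described above is simply division by the square root of the round number; the divisors 1.4 and 1.7 used in the text are √2 and √3 rounded to one decimal place:

```python
import math

# Per-round divisors for the first four rounds of review.
for review_round in range(1, 5):
    divisor = math.sqrt(review_round)
    print(f"round {review_round}: divide by {divisor:.1f}")
```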
Inclusion of H-Index
When setting out to evaluate the peer review process while still respecting the desire for anonymity, there were two goals: to indicate how many “eyeballs” looked at a paper prior to acceptance, and also what “type” of “eyeballs.” The basic algorithm helps to answer the first question. By incorporating the h-index of each individual we can attempt to address the second. In 2005, J.E. Hirsch, a professor of physics at the University of California, San Diego, proposed the index h, defined as the number of papers with citation number ≥ h, as a useful index to characterize the scientific output of a researcher. As such, the h-index is a viable measure of the level of expertise an individual has within the scholarly field. A higher pre-SCORE will indicate that multiple individuals, individuals with high h-indexes, or both examined an article prior to acceptance.
Some studies indicate that reviewers who are earlier in their careers produce higher-quality peer review than more senior reviewers, who may have higher h-indexes. A more recent study, published in 2010 in the Annals of Emergency Medicine, seems to support this idea (Callaham). While the studies on this subject are fairly limited, in relation to the pre-SCORE concept it would be a simple matter to replace the h-index with the m-index. The m-index, also called the m-quotient, is defined as h/n, where n is the number of years since the scientist’s first published paper.
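Swapping the m-index in for the h-index would be a one-line change; a minimal sketch, assuming career length is measured in years since first publication:

```python
def m_index(h_index, years_since_first_paper):
    """m-index (m-quotient): the h-index divided by career length n in years."""
    return h_index / years_since_first_paper

# A hypothetical researcher with h = 40 over a 20-year career:
print(m_index(40, 20))  # 2.0
```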
An analysis of manuscripts submitted to and accepted by peer-reviewed journals shows how the pre-SCORE is calculated. The metadata available when a paper is processed through an online submission and peer review system such as ScholarOne Manuscripts or Aries Systems’ Editorial Manager contains all of the information necessary to determine the pre-SCORE.
One paper examined was submitted in January 2011 and underwent three rounds of review before ultimately being accepted in December of the same year. The EIC has an h-index of 34. The AE has an h-index of 53. Three external reviewers took part in the first round of evaluation. Reviewer 1 has an h-index of 42. Reviewer 2 has an h-index of 29. Reviewer 3 has an h-index of 18. Each h-index was determined using the Thomson Reuters Web of Knowledge database.
Each participant examined the submitted article during round 1, resulting in the following calculation:
S = [(Re1 * A) + (Ra1 * B) + (Rr1 * C) + (Rr2 * C) + (Rr3 * C)]/√V
S1 = [(34 * 0.4) + (53 * 0.3) + (42 * 0.2) + (29 * 0.2) + (18 * 0.2)]/√1
S1 = [13.6 + 15.9 + 8.4 + 5.8 + 3.6]/1
S1 = 47.3
The paper was sent back to the authors, revised, and resubmitted. All participants again evaluated the article, so all variables remain the same with the exception of the divisor, which changes from √1 to √2:
S2 = [(34 * 0.4) + (53 * 0.3) + (42 * 0.2) + (29 * 0.2) + (18 * 0.2)]/√2
S2 = [13.6 + 15.9 + 8.4 + 5.8 + 3.6]/1.4
S2 = 33.8
The paper is then returned to the authors and again revised and resubmitted. The AE examines the article and, satisfied that all of the reviewers’ concerns have been addressed, returns it to the EIC with a recommendation to accept the paper for publication. The EIC reviews all previous comments, re-reads the paper, and decides to accept the article:
S3 = [(34 * 0.4) + (53 * 0.3)]/√3
S3 = [13.6 + 15.9]/1.7
S3 = 17.4
This process repeats as needed for each round of peer review. In this example, the final pre-SCORE for the paper is the sum of all rounds of review:
S = S1 + S2 + S3
S = 47.3 + 33.8 + 17.4 = 98.5
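The three rounds above can be reproduced in a short script. Note that the text rounds each divisor (√2 → 1.4, √3 → 1.7) and each round’s score to one decimal place before summing; the same conventions are replicated here so the figures match:

```python
import math

WEIGHTS = {"eic": 0.4, "ae": 0.3, "reviewer": 0.2}

# (h-index, role) for every participant in each round of review.
rounds = [
    [(34, "eic"), (53, "ae"), (42, "reviewer"), (29, "reviewer"), (18, "reviewer")],  # round 1
    [(34, "eic"), (53, "ae"), (42, "reviewer"), (29, "reviewer"), (18, "reviewer")],  # round 2
    [(34, "eic"), (53, "ae")],                                                        # round 3
]

total = 0.0
for version, participants in enumerate(rounds, start=1):
    divisor = round(math.sqrt(version), 1)  # 1.0, 1.4, 1.7 as in the text
    score = sum(h * WEIGHTS[role] for h, role in participants) / divisor
    total += round(score, 1)  # per-round scores: 47.3, 33.8, 17.4

print(round(total, 1))  # 98.5
```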
Several other papers were also analyzed with resulting scores ranging from 52.7 to 98.5.
Issue Level Measurement
In addition to providing a measurement for each individual article, the approach is easily extended so that each issue of a journal receives a pre-SCORE: simply average the pre-SCOREs of the articles the issue contains. For example:
An issue contains twelve (12) articles.
Two of the twelve, a “Letter From The Editor” and a “Book Review,” are not peer reviewed.
The remaining ten (10) articles have pre-SCORE values of 98.5, 95, 101.2, 103, 92.5, 88, 114, 110.3, 104.7, and 82.
Averaging these values gives an issue-level pre-SCORE of 98.92.
In order to account for individual articles which may be unusually high or low, a standard deviation is reported alongside the issue-level pre-SCORE.
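Using the ten peer-reviewed articles above, the issue average and its standard deviation can be computed directly. The text does not state which form of standard deviation is intended; the population form is assumed here:

```python
import statistics

# pre-SCOREs of the ten peer-reviewed articles in the issue.
article_scores = [98.5, 95, 101.2, 103, 92.5, 88, 114, 110.3, 104.7, 82]

issue_mean = statistics.mean(article_scores)  # issue-level pre-SCORE
issue_sd = statistics.pstdev(article_scores)  # population standard deviation

print(round(issue_mean, 2))  # 98.92
print(round(issue_sd, 2))    # 9.35
```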
Extrapolating a measurement of yearly performance is also possible, again by simple averaging over a journal’s annual output. For example:
A journal produces one issue every other month for a total of six (6) issues per year.
The pre-SCOREs of the individual issues are 82.4, 84.6, 85, 90.2, 92, and 83.5, for a total of 517.7.
Dividing this by the number of issues per year (in this case, six) results in an annual pre-SCORE of 86.3 for this journal.
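The annual roll-up is the same averaging step; a quick check using the six issue scores above:

```python
import statistics

# Issue-level pre-SCOREs for one year of a bimonthly journal.
issue_scores = [82.4, 84.6, 85, 90.2, 92, 83.5]

annual = statistics.mean(issue_scores)
print(round(annual, 1))  # 86.3
```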