Peer review represents one of the foundations of modern scholarly communication. Scrutiny by peers, who assess the merits of research and recommend whether it exhibits sufficient rigor and novelty to warrant publication, is intended to reduce the risk of publishing research that is sloppy, erroneous or, at worst, fabricated. Peer review thus helps improve the reporting of research and weeds out work that does not meet the research community’s standards for research production.

Traditionally, peer review uses forms of blinded review where parties involved remain anonymous to reduce bias in the evaluation process. The most extensive form of blinded review, triple blind, anonymizes the process so that the author(s), reviewer(s) and the handling editor(s) are not aware of each other’s identities. A more common implementation is double blind peer review, where the author(s) and reviewer(s) are not aware of each other’s identities. To ensure author anonymity, authors must remove all content that might identify them to any reviewer. Single blind review is also commonly practiced, where reviewers are aware of the identities of the authors, but the authors do not know who has reviewed their manuscript. The question arises whether blinded peer review reduces bias and results in a more objective review. For authors, blinded reviews are like a black box. Blinding of reviewer identities may allow reviewers to use their anonymity to deliver more critical reviews or to write reviews that lack rigor because authors and readers will not know who the reviewers are. On the other hand, requiring reviewers to identify themselves may encourage greater accountability or could cause reviewers to blunt their criticisms (van Rooyen et al. 1999).

The open science movement has endeavored to increase the transparency of the production of scientific knowledge and to make products of scientific inquiry more broadly available. The most visible aspect of the open science movement to date has been open access (OA), where the products of scholarship are made freely available through open access journals or repositories. More recently, efforts have extended to the availability of open data and software, where datasets are shared and re-used. One of the last components of open science to be adopted is open peer review (OPR), where aspects of the peer review process, which have traditionally been hidden or anonymous, are made public.

Debate about the benefits of and concerns about OPR has been evident in scholarly communication. Malone (1999) believed that a fully open system increases responsibility and accountability and protects all parties more equitably: “Openness in peer review may be an idea whose time has come. What do you think?” (p. 151). At the 2016 Annual Meeting of the Association for Information Science and Technology, a panel of well-known scientists and editors engaged in a conversation and debate with conference attendees on the emerging open peer review innovation in the era of open science (Wang and Wolfram 2016). Similarly, at the 8th Peer Review Congress (2017), leaders in academic publishing held a panel on “Transparency in Peer Review.” The panelists discussed the various shades, or spectrum, of transparency in open peer review practices. Also touched upon was the lack of transparency in research proposal reviews, especially for private foundations. Attendees at the Congress raised another important question: “Should there also be transparency in reviewing reports of rejected manuscripts if they are a part of the scholarly ecosystem?” Launched in 2015, Peer Review Week (2017) set its theme for 2017 as Transparency in Review. Clobridge (2016) compared the benefits and challenges of OPR for authors, reviewers, and readers. She also cited three major players in OPR: PeerJ, F1000Research, and ScienceOpen. She noted that “Open peer review, while still a relatively new phenomenon, is catching the interest of many researchers and appears to be gaining momentum as part of the next wave of open knowledge and open science” (p. 62).

Will OPR become a more common scholarly practice like open access and open data in open science? Further research is needed to understand the concept of OPR and its diverse implementations by publishers as well as the perceptions and attitudes of scientists as authors and reviewers. The purpose of this study is to conduct a thorough search for and analysis of current OPR journals to address the following research questions:

  1. What is the current state of OPR?

  2. What has been the trend for OPR adoption?

  3. Who are the early adopters of OPR?

     a. Which disciplines have adopted OPR?

     b. Which publishers are the front runners or leaders in OPR adoption?

  4. How transparent are the emerging OPR implementations?

     a. Do these journals adopt open reports?

     b. Do these journals adopt open identities?

Literature review

In the era of digital open science, OA journals have mushroomed on the Web. Do these journals provide access to quality research? Does this openness extend to peer review and, if so, how is peer review conducted by these OA journals? In a sting-operation experiment, Science correspondent John Bohannon (2013) found that of the 304 versions of a fabricated paper with flawed research submitted to 304 OA journals, 255 submissions received a decision (the mean for acceptance was 40 days; the mean for rejection was 24 days). Surprisingly, 157 journals accepted a version of the paper. Was this reflected in the peer reviews? Only 36 reviews recognized the paper’s scientific problems, whereas “about 60% of the final decisions occurred with no sign of peer review” (p. 64). Rupp et al. (2019) concluded “although predatory publishing did not exist ten years ago, today, it represents a major problem in academic publishing” (p. 516). There is an “apparent rise in scientific fraud” (Naik 2011) as well as peer review fraud. A “peer review ring” scandal resulted in the retraction of 60 articles at once by a prestigious journal (Barbash 2014). BioMed Central discovered fake peer reviewers involved in 50 manuscripts and took action to investigate and retract 43 papers (Lawrence 2015). Haven et al. (2019) report from their survey and focus group that “Biomedical researchers and social science researchers were primarily concerned with sloppy science and insufficient supervision. Natural sciences and humanities researchers discussed sloppy reviewing and theft of ideas by reviewers, a form of plagiarism” (Abstract, Results).

The mainstream peer review systems in scientific and scholarly communication typically operate anonymously (Kriegeskorte 2012). This established, blind peer review model for journals has been criticized as being a flawed process (Smith 2006) or a broken system (Belluz et al. 2016). Peer review bias and unfairness exist to varying degrees in different disciplines (Lee et al. 2013; Rath and Wang 2017). Is there a way to restore the trust in peer review for scientific and scholarly publishing? Pioneers and innovators believe that transparency is the key (Fennell et al. 2017).

OPR initiatives and practices

A small number of pioneering journals have been offering forms of OPR since the turn of the century. Launched in 2001, the journal Atmospheric Chemistry and Physics was among the first OA OPR journals (Pöschl and Koop 2008), along with 36 journals published by BioMed Central.

More than 10 years ago, Nature conducted a four-month trial of a hybrid model in which manuscripts underwent formal closed review by referees and were posted to a preprint site for open review by community readers. The exploratory results showed limited use in improving the process (Opening up peer review 2007). In January 2016, Nature Communications started a new OPR trial in which authors could decide on a blind or open review model at submission time and have their review reports published upon acceptance of the manuscript, while reviewers could decide whether to remain anonymous or sign the review reports (Nature 2015). One year into the trial, 60% of the 787 published papers had open reports (Nature 2016). Four years later, Nature announced that it would add eight Nature Research journals to the trial project beginning in February 2020. The announcement reports that in 2018, 70% of the trial journal articles published open reports; 98% of the authors who published their reviewer reports responded that they would do so again. Over the four years, 80% of papers had at least one referee named, which seemed to corroborate the results of a 2017 survey of Nature referees: the majority favored experimenting with alternative and more transparent models (Nature 2020).

F1000 beta-tested an open research platform as F1000Research in 2012. Articles submitted to F1000Research are published within 6–14 days, followed by a fully transparent peer review process during which each reviewer’s recommendation and report are published alongside the article. The process is not moderated by an editor. A key difference of this post-publication OPR model is that F1000Research does not make decisions on acceptance or rejection. Instead, it applies an indexing threshold based on the review results: a minimum of two approvals, or one approval plus two approvals with reservations, from reviewers. Another distinct feature is that the review process is totally transparent and open in real time, with both open identities and open reports.
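The indexing threshold just described is a simple decision rule and can be sketched in a few lines. The function and recommendation labels below are our own illustration of the rule as described, not F1000Research’s actual code:

```python
def passes_indexing_threshold(recommendations):
    """Sketch of the F1000Research-style indexing threshold: an article is
    indexed once it receives at least two 'approved' recommendations, or
    one 'approved' plus two 'approved with reservations'.
    (Illustrative only; labels and function name are hypothetical.)"""
    approved = recommendations.count("approved")
    reservations = recommendations.count("approved_with_reservations")
    return approved >= 2 or (approved >= 1 and reservations >= 2)

# Two approvals meet the threshold; one approval alone does not.
print(passes_indexing_threshold(["approved", "approved"]))  # True
print(passes_indexing_threshold(["approved"]))              # False
```

Note that under this reading, reservations alone never suffice: at least one unqualified approval is always required.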

Choosing a middle ground, PeerJ launched a new optional OPR journal in 2013; as of this writing, 80% of authors have chosen open reports, and 40% of reviewers have signed review reports. Adopting a similar model, the publisher MDPI first announced optional post-publication OPR in 2014 for the journal Life, and by 2018 all of its journals had adopted optional OPR. Rittman (2018) reports that 23% of MDPI journal papers published at least one review with open identities. The percentages of papers with open reports for the 14 early OPR MDPI journals are: Publications (60%), Dentistry (52%), Medical Sciences (51%), Quantum Beam Science (48%), Life (46%), Brain Sciences (44%), J (43%), Behavioral Sciences (41%), Economies (40%), Cosmetics (39%), Administrative Sciences (38%), Condensed Matter (37%), Animals (34%) and Atoms (33%). EMBO Press reports that currently 95% of their authors choose to publish review reports alongside their papers (EMBO Press 2020).

Another option for open reports, in addition to appearing alongside the article (e.g., PeerJ) or in a stand-alone volume (e.g., Elsevier), is for reviewers to deposit their review reports with a research partnership service such as Publons. Here the decision to publish reports is made by the reviewers rather than the authors or publishers, given that Publons was created to credit reviewers and authenticate their claims. Recently, Wiley partnered with Publons for its OPR initiatives, with 40 participating journals (Wiley 2018). Wiley’s prestigious journal Clinical Genetics was the pioneering journal for this initiative (Graf 2019). In early 2020, Wiley added 10 more titles to expand this initiative (Moylan 2020).

OPR research

As an innovation in peer review, OPR pursues transparency and openness to improve the process (Wang et al. 2016a, b). Transparency in peer review was rigorously studied by researchers for the journal BMJ in the 1990s, before the first journals implemented OPR. These early studies, which examined the effect of making reviewer identities known to authors or of posting reviewer names with the paper, concluded that these practices had no effect on the quality of the reviews (Godlee et al. 1998; van Rooyen et al. 1999). Walsh et al. (2000) conducted a controlled trial in the British Journal of Psychiatry to investigate whether open peer review was feasible. Of the 322 reviewers, 245 (76%) agreed to sign their reviews. A total of 408 unsolicited manuscripts of original research were randomly assigned to the two groups of reviewers. To evaluate the reviews, a seven-item instrument was used to compare their quality: importance of research question, originality, methodology, presentation, constructiveness of comments, substantiation of comments, and interpretation of results; in addition, the tone of the review was rated. With cautious notes, the researchers reported that the signed reviews were more courteous and of higher quality than unsigned reviews. Bornmann et al. (2012) compared the reviewer comments of a closed peer review journal and an open peer review journal. They found that the reviewer comments in the open review journal were significantly longer than those in the closed review journal.

Since then, a few studies have investigated author and reviewer attitudes towards OPR, characteristics of open reviews and methods of OPR adoption by existing and new journals. In 2012, Elsevier began a pilot OPR project of selected trial journals (Mehmani and van Rossum 2015). A survey of editors, authors, and reviewers of the five participating trial journals was conducted in 2015 to assess the impact of open review (Mehmani 2016). Forty-five percent of the reviewers revealed their identities. The majority of the reviewers (95%) commented that publishing review reports had no influence on their recommendations. Furthermore, 33% of the editors identified overall improvement in the review quality, and 70% of these editors said that the open review reports were more in-depth and constructive. Only a small proportion of the authors indicated that they would prefer not to publish in open review journals. Mehmani reported high usage of review reports by counting the clicks to the review reports, which indicated the value of open review to the readers.

At a webinar sponsored by Elsevier to discuss how to improve transparency in peer review, Agha (2017) reported on the experience of two Elsevier pilot OPR journals (International Journal of Surgery and Annals of Medicine and Surgery) that published peer reviewer reports as supplemental volumes. He concluded: “60% of the authors like it or like it a lot and 35% are more likely to publish because of it.” Bravo et al. (2019) observed and analyzed Elsevier’s pilot project of five OPR journals from 2015 to 2017. In order to compare referee behavior before and after OPR, the dataset included 9220 submissions and 18,525 reviews from 2010 to 2017. They found “that publishing reviewer reports did not significantly compromise referees’ willingness to review, recommendations, or turn-around time. Younger and non-academic scholars were more willing to accept invitations to review and provided more positive and objective recommendations. Male referees tended to write more constructive reports during the pilot. Only 8.1% of referees agreed to reveal their identity in the published report.” (Abstract). The authors also published review reports alongside their paper. Wang et al. (2016a, b) analyzed the optional OPR journal PeerJ’s publicly available reports for the first three years of the journal (2013–2016). They found that the majority of the papers (74%) published during this time period had open reports; 43% of which had open identities.

If transparency in peer review is the key to tackling the various issues facing the current peer review system, will authors and reviewers embrace OPR? Several large-scale surveys have collected data on attitudes towards OPR, with diverse findings. Mulligan et al. (2013) found that only 20% of respondents were in favor of making the identity of the reviewers known to authors of the reviewed manuscripts; 25% of respondents were in favor of publishing signed review reports. In 2016, the OpenAIRE consortium conducted a survey of OPR perceptions and attitudes by inviting respondent participation through social media, distribution lists and publishers’ newsletters. Of the 3062 valid responses, 76% of respondents reported having taken part in an OPR process as an author, reviewer or editor. The survey results show that the respondents were more willing to support open reports (59%) than open identities (31%). The majority of the respondents (74%) believed that reviewers should be given the option to make their identities open (Ross-Hellauer et al. 2017). Another survey of European researchers, conducted by the European Union’s OpenUP Project in 2017, received 976 valid responses. The results of this survey also show that respondents support open reports (39%) more than open identities (29%). This survey also reports a gender difference in support for open identities (i.e., 35% of female researchers versus 26% of male researchers) (Görögh et al. 2019).

A recent survey by ASAPbio (2018) asked authors and reviewers in the life sciences about their perspectives on OPR. Of the 358 authors, the majority were comfortable (20.67%) or very comfortable (51.96%) with publishing their recent paper’s peer reviews with referees’ names; when asked about the same reviews to be published without referees’ names, the number dropped but still represented the majority: 19.56% were comfortable and 37.71% were very comfortable. Of the 291 reviewers, the majority would be comfortable (32.30%) or very comfortable (40.21%) with posting their last peer review anonymously given the opportunity to remove or redact appraisals or judgments of importance; regarding signing the same review, 28.15% of respondents were comfortable and 32.30% were very comfortable. These results suggest that the majority of the authors are willing to publish their papers’ review reports, with a preference for signed reviews; the majority of the reviewers are willing to have their review reports published without sensitive information, with a preference for anonymity.

The analysis of nearly 2600 responses to Wiley’s 2019 Open Research Survey indicates that the respondents’ preferred peer review models are double-blind (79%), transparent (44%), and single-blind (34%). Twenty-eight percent of the respondents were not aware of the transparent review model (Moylan 2019).

OPR conceptualization and implementation

Despite the growing interest in OPR, there still is no uniform definition of OPR or generally agreed upon best implementation model. Ford (2013) reviewed the literature on the topic to define and characterize OPR. Acknowledging the diverse views of OPR, she states “the process incorporates disclosure of authors’ and reviewers’ identities at some point during an article’s review and publication” (p. 314). She further characterized OPR by openness (i.e., signed review, disclosed review, editor-mediated review, transparent review, and crowd-sourced/public review), and timing (pre-publication, synchronous, and post-publication).

Ross-Hellauer (2017) conducted a systematic literature review and identified seven elements based on 22 definitions of OPR. Of the seven elements, open identities and open reports are considered core elements to recognize OPR journals. The other five elements in the order of frequency of occurrences include open participation, open interaction, open pre-review manuscripts, open final-version commenting, and open platforms/decoupled review. These elements formed a framework for two surveys conducted by OpenAIRE (Ross-Hellauer et al. 2017) and OpenUP (Görögh et al. 2019). Similarly, Tennant et al. (2017) provided a comprehensive review of journals’ peer review practices from the past to the present, which they published in the OPR journal F1000Research. Taking a much broader perspective, they examined the pros and cons of open reviews, including public commentary and staged publishing.

Fresco-Santalla and Hernandez-Perez (2014) illustrated how OPR has been manifested by different journals: open reviews (for all or specific papers), signed reviews (obligatory, pre- or post-publication), readership access to review reports (required or optional) and readership comments (pre- or post-publication). Wang and Tahamtan (2017) identified 155 OPR journals, of which the majority were in medicine and related fields. They also found considerable variation in how these journals implemented OPR. According to Tattersall (2015), there were ten leading OPR platforms.


This research focuses on the two core elements of OPR journals that Ross-Hellauer (2017) identified: (1) open identities, where reviewer names were made public; (2) open reports, where the original reviews or integrated reviews were publicly available. In addition, we considered when a journal adopted OPR, the journal’s discipline coverage, and its publisher. For included OPR journals, authors’ rebuttals were not considered in this study, nor were open comments from registered or unregistered readers. This study did not include journals that implemented only one of the following OPR elements in Ross-Hellauer (2017): open participation, open interaction, open pre-review manuscripts, open final-version commenting and open platforms/decoupled review.

Data collection

Although a few journal directory sources attempt to identify OPR (e.g., the Directory of Open Access Journals and Transpose), there is no established standard for describing aspects of OPR systematically. Journal records are submitted by users, and the schemas are open to interpretation. To identify relevant OPR journals, we used multiple search strategies and tracked different sources. The Directory of Open Access Journals (DOAJ) indexes more than 14,500 journals and nearly 4.8 million articles. From the results of the advanced search for journals with the filter set to “open peer review,” we retrieved 133 OPR journals. Some DOAJ entries for journals were blogs rather than venues for the publication of research and were thus excluded. Each of the journals was accessed to verify whether it publishes open identities or open reports; those misclassified were removed from the dataset. Several websites about peer review and scientific publishing were periodically scanned to keep current on OPR developments: ASAPbio (Accelerating Science and Publication in biology); the International Congress on Peer Review and Scientific Publication; Peer Review Week. Transpose, a database of journal policies on peer review and pre-printing, was a particularly rich source for identifying candidate journals, but many records were not verified by the publishers or editors, and many duplicated or erroneous records had to be corrected by checking the original journals.

Data verification and cleaning

This study used two criteria to select OPR journals, open identities and open reports; at least one of the two core elements had to be implemented to qualify as an OPR journal. Data from different sources needed to be transformed and verified. As of 23 November 2019, the Transpose database listed 294 OPR journals that adopted open identities and 232 OPR journals that publish open reports, many of which were misclassified, perhaps due to the crowdsourcing nature of the database and the record contributors’ varying ability to distinguish OA from OPR. Unexpectedly, the publisher field was another source of confusion. For example, the newly launched journal Geochronology listed the European Geosciences Union (EGU) as the publisher, while the journal’s website named Copernicus Publications. Therefore, each OPR journal’s website was visited to verify the data. Some journals (e.g., several journals published by Copernicus Publications and journals by Kowsar) indicated in their editorial policies that they follow OPR. To identify the year a journal started or transitioned to OPR, we accessed issues of the journals to find open reports or open identities in the published articles. If none of the articles had published review reports or reviewer identities as of December 2019, the journal was excluded. Further efforts were made to search the websites of publishers of known OPR journals to identify additional OPR journals that were not indexed in DOAJ or Transpose. For example, Transpose listed 10 OPR journals for Wiley, but Wiley’s website news pointed to an Excel file of 40 OPR trial journals. We also searched newsletters and lists related to peer review, from which we identified OPR adoption, for example, by PLOS in 2019.

Identification of the year a journal began OPR could be a difficult and time-consuming task if a journal did not provide the precise date it adopted OPR. In these cases, we manually checked each issue to find the earliest OPR article. If a journal publisher clearly posted information about when OPR was adopted on their editorial or peer review policy page, we used that year (e.g., Kowsar and Wiley).

In this paper, we updated the dataset reported in Wolfram et al. (2019), which was collected in 2018 and consisted of 20 publishers and 174 OPR journals. The final dataset for this expanded study includes 38 publishers and 617 OPR journals as of December 2019. Data were stored in an Excel spreadsheet and were analyzed using cross-tabulations, queries, and qualitative assessment of relevant journal content. Stored information included: journal metadata, year of first OPR use, publisher (name and country of headquarters), policy for reviewer identity, policy for report availability, and high-level journal discipline.
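Cross-tabulations of the kind described above can be reproduced with standard tools. The snippet below is a minimal pandas sketch using invented journal records; the column names and policy labels are our own illustration, not the study’s actual coding scheme:

```python
import pandas as pd

# Hypothetical journal-level records; the policy labels mirror the categories
# discussed in the paper (mandated/optional/anonymous identities,
# mandated/optional/not-available reports), but the data are invented.
journals = pd.DataFrame({
    "open_identities": ["mandated", "optional", "optional", "anonymous"],
    "open_reports":    ["mandated", "optional", "mandated", "not_available"],
})

# A Table 5-style cross-tabulation: counts of each policy combination.
combo_counts = pd.crosstab(journals["open_identities"], journals["open_reports"])
print(combo_counts)
```

The same `crosstab` call, applied to the full 617-journal dataset, would yield the combination counts reported in Table 5.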


Descriptive data

The growth of OPR adoption—measured either by existing or new journals—is summarized in Fig. 1 by broad discipline. The journals were classified into six broad topical areas using a modified form of the DOAJ classification scheme to determine which disciplinary areas have adopted OPR. Most journals did not report when they adopted OPR or if they have always used OPR. First OPR usage was confirmed by searching early issues of the journals to identify when OPR practices began. In many cases, OPR adoption coincided with the first journal issue.

Fig. 1
figure 1

Growth of OPR journals by discipline groups

The early adopters of OPR can be traced back to the beginning of the 2000s. The journals Atmospheric Chemistry and Physics and European Cells & Materials each implemented a different OPR model, although both launched their first issues in 2001. Similarly, 36 OPR journals published by BioMed Central implemented another model in the same year. Since then, there has been steady growth in the number of journals that have adopted OPR, most noticeably in the Medical and Health Sciences, and more recently, in the Natural Sciences over the past 10 years. This growth has increased dramatically since 2017, in which time the total number of OPR journals has more than doubled. The disciplinary distribution of OPR journals appears in Table 1. For each discipline group, its first OPR year and number of articles suggest how OPR is being adopted. Medical and Health Sciences had the most early adopters.

Table 1 Adoption of OPR by discipline group over time

A summary of the most prolific publishers contributing to OPR and their headquarters’ countries appears in Table 2. Although many journals today attract an international audience and are managed by international teams of researchers, the prevalence of OPR journals associated with publishers based in Europe stands out. Twenty-four of the 38 (63.2%) identified publishers are based in Europe and account for 445 of the 617 titles (72.1%). Although these publishers are based in Europe, many of the journals they publish may originate from other areas of the world (e.g., Kowsar). Furthermore, 500 of the OPR journals (81.0%) are published by only five publishers (MDPI, SDI, BioMed Central, Frontiers Media S.A., Kowsar). This points to the important role that publishers have played to date in the promotion of OPR.

Table 2 Adoption of OPR by publishers

OPR transparency in current practice

A fundamental principle of OPR is transparency. This includes open identities and/or open reports. Publishers and editors of journals adopted different levels of transparency, where one or both of the transparency elements may be optional or required (e.g., EMBO Press 2020). Table 3 reports the adoption of open reports based on the broad discipline of the journals. The percentage of mandatory open reports is highest in the Medical and Health Sciences (64.0%), and second highest in the Multidisciplinary category (50.0%). Mandatory open reports are much lower for Humanities (14.3%) and Technology (5.7%), where optional open reports are more common. The availability of mandated or optional open identities was much more common across all disciplines, with only 9 journals (8 from the Natural Sciences and 1 from Medical and Health Sciences) requiring anonymity. Summary data for open identity adoption by discipline appear in Table 4.

Table 3 Adoption of open reports by discipline
Table 4 Adoption of open identities by discipline

Open identities may be mandated, optional (decided by the reviewer) or anonymous. Similarly, open reports may be mandated, optional (decided by the author or editor), or not available. The frequency of each combination appears in Table 5. When reviewers remain anonymous and their reports are not made available, this is traditional blind peer review (the lower right cell). The vast majority of OPR journals (608 or 98.5%) either require reviewers to identify themselves (268 or 43.4%) or allow reviewers to choose whether to identify themselves (340 or 55.1%). Similarly, 536 (86.9%) of the journals either require reports to be open (274 or 44.4%) or allow authors or editors to choose whether to make the reports open (259 or 42.3%). Only 189 (30.6%) journals require both open identities and open reports.

Table 5 Who decides about open identities and open reports

Transparency of the emerging OPR implementation approaches

The current OPR landscape is complex and exhibits a variety of configurations, ranging from opening some aspects of the established blind-review process to a fully transparent process. Although there is no simple way to define the emerging OPR practices, a descriptive framework focusing on how open identities and open reports are fulfilled during the review process, and on what end products are made openly accessible, is depicted in Fig. 2.

Fig. 2
figure 2

Process–product approaches

At the implementation level, an OPR journal needs to decide:

  1. Who makes decisions: reviewer, author, and editor/journal;

  2. When the decision is made for a specific core element: pre-, post-, or concurrent process;

  3. What is contained in open reports: original reports, a consolidated letter, or invited commentaries by reviewers who made significant contributions to the paper’s revision;

  4. Where the open reports can be accessed.

These four factors can potentially define the level of transparency that a journal puts into practice for OPR. For example, F1000Research is the most transparent OPR journal because its peer review process is totally open; both referee identities and review comments are instantly accessible alongside the manuscript while it is being reviewed and revised. In contrast, the OPR journals published by Frontiers publish only each paper’s reviewer names, a minimum level of open identity; the process and the main product remain largely closed to the readers for whom the articles are published.

The emerging models varied in terms of transparency. Figure 3 shows four representative implementations:

  1. Frontiers’ OPR journals publish only referee identities alongside articles, without open reports, as an open identities-only model;

  2. PeerJ provides optional open identities to referees and optional open reports to authors, representing a range of journals adopting this model;

  3. BMC’s OPR journals publish both open identities and open reports alongside articles;

  4. F1000Research, the first of its kind, makes the review process itself open in addition to open identities and open reports. As post-publication OPR, F1000Research has no acceptance or rejection decision to be made as a result of peer review, but an article will not be indexed in any bibliographic databases without passing, within a defined timeframe, the threshold of two approved (✔✔) or one approved (✔) plus two approved with reservations (??).

Fig. 3 OPR models as implemented by publishers
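F1000Research's indexing threshold is a simple decision rule and can be sketched directly. The function below is an illustrative reconstruction of the rule as described above; the function name and rating labels are our own choices, not F1000Research's API:

```python
def passes_f1000_threshold(ratings):
    """Return True if an article meets the indexing threshold described
    in the text: two 'approved' ratings, or one 'approved' plus two
    'approved with reservations'. Illustrative sketch only."""
    approved = ratings.count("approved")
    reservations = ratings.count("approved_with_reservations")
    return approved >= 2 or (approved >= 1 and reservations >= 2)
```

For instance, an article with one "approved" and one "approved with reservations" rating would remain unindexed until a further positive rating arrives.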


This study represents the first comprehensive investigation of the scope and depth of OPR adoption in the open science era. Since the BMJ experimented with open reviews more than 20 years ago, adoption of OPR has grown from 38 journals in 2001 to at least 617 journals by the end of 2019. Figure 1 demonstrates steady growth in the number of OPR journals over time, led by journals in the Medical and Health Sciences and the Natural Sciences, with much faster growth since 2017. This growth has been prompted by a small number of publishers. The remaining disciplines have been much slower and later to adopt OPR. The Humanities have different scholarship cultures than the Natural Sciences and have been slow to adopt open access overall (Eve 2017; Gross and Ryan 2015).

Several publishers have served as pioneers and early promoters of OPR. The five publishers with the most OPR journals, which have led the way (MDPI, SDI, BioMed Central, Frontiers Media S.A., and Kowsar), have adopted different implementations of OPR. BioMed Central, one of the earliest OPR journal publishers in this study, and SDI require both open reports and open identities. Kowsar requires open reports but makes referee identities optional. MDPI makes open reports and open identities optional for authors and reviewers, respectively. Frontiers Media S.A. requires open identities but does not provide open reports for its OPR journals.

More than 60% of the publishers in this study, publishing more than 70% of the OPR journals identified, are based in Europe, signifying Europe's leading role in the OPR movement. This strong European effort is also seen in the larger open science movement, where organizations such as OpenAIRE and OpenUP are investigating all aspects of the movement, including OPR. Eleven of the identified publishers are based in the United States, indicating a growing interest in adopting OPR outside of Europe. Publishers based in countries other than those of the most prolific publishers have been slower to adopt forms of OPR, as evidenced by those countries' single-publisher representation.

Multiple OPR practices, showing different levels of transparency in implementation, emerge from the analysis of the data. The level of transparency can be characterized along a continuum. The most transparent model is the concurrent open review process exemplified by F1000Research, where reviewers' identities and reports are instantly available alongside manuscripts, which are published upon submission following an initial format check. Another model that promotes total transparency, exemplified by many BioMed Central journals, provides access after acceptance to the complete report history and author exchanges, as well as open identities, alongside the published articles. Several intermediate implementations allow authors and/or reviewers to participate in open review decisions during the process: mandated open reports with optional open identities (e.g., Kowsar journals), mandated open reports without open identities (e.g., the journal Ledger), and optional open reports with optional open identities (e.g., PeerJ). The most limited implementation, used by the Frontiers Media S.A. journals, is a closed review process in which the published articles include only the names of the reviewers.

Two recommendations arise from the findings:

  1. Publishers should make their OPR information (policies, open reports, open identities) more accessible and should more prominently display their OPR status and adoption. This information was sometimes buried and difficult to locate.

  2. A repository or registry of OPR journals that provides key elements relevant to OPR is needed. Information contained in sources such as DOAJ and Transpose is limited and frequently incorrect.


The adoption of the OPR innovation is growing. This growth has been largely spurred by a small number of publishers, primarily based in Europe. To date, OPR has been adopted mostly by journals in the Medical and Health Sciences and the Natural Sciences. However, the number of OPR journals remains a very small percentage of scholarly journals overall. The fact that there are multiple approaches to the adoption of OPR indicates there is no consensus at present regarding best practices. The highest level of OPR transparency includes open identities along with open reports, but only a minority of the OPR journals identified have adopted complete transparency.

Limitations of the present research must be recognized. Currently, there is no universal way to identify journals that adopt OPR. Our approach was to cast a broad net, using multiple sources to identify candidate OPR journals, which is time-consuming and often hit-or-miss. It is possible that we have missed OPR journals not indexed by the databases searched or not published by the publishers already in our dataset, even though we expanded our searches to those OPR publishers to ensure inclusion. Similarly, given the growth in the number of OPR journals over the past couple of years, the findings presented here represent a snapshot as of late 2019; the OPR landscape is changing quickly. As with any indexing source, there may also be a regional or language bias, such that additional OPR journals may not be evident owing to a lack of familiarity with the publication language. Although most publishers post annual reports with metric data including the number of articles, citation counts, Journal Impact Factor, rejection rate, etc., they lack annual OPR metrics on the number or percentage of articles with optional open reports and open identities; both are essential to document OPR adoption.

The next phase of this research is examining open report contents using text mining approaches to determine if there are quantitative and qualitative differences in the open reviews based on the OPR approaches used. A scoring instrument is being developed and tested to measure different models.