Introduction

Peer review, as a quality assurance tool, has become an essential part of the academic publishing process. Though traditional peer review has contributed significantly to academic research and publishing, it is not without problems: it harms creative research projects that do not seem to align with consensus (Wang et al., 2012); it gives excessive weight to the achievements and reputation of the author whose work is being reviewed (Li, 2008); it is not conducive to producing sound reviews for marginal interdisciplinary studies (Li, 2008); and it lacks effective monitoring and feedback mechanisms during the review process (Decoursey, 2006). To address these problems, open peer review, a more overt and public mode of peer review, has been proposed.

Research on and development of peer review have been ongoing for more than 30 years. In the 1980s and 1990s, major efforts were devoted to studying the drawbacks of the peer review modes then in practice and the necessity of establishing a better mode (Fang, 1998). Opposing the traditional blind peer review mode, many scholars emphasized disclosing the identities of reviewers to ensure academic fairness. At the beginning of the twenty-first century, the publication of the Declaration of the Budapest Open Access Initiative (2002) gave birth to Open Science. Afterwards, the Berlin Declaration (2003) and the Bethesda Statement on Open Access Publishing (2003), together with the BOAI (2022), established the fundamental principles of Open Access. As part of Open Science, open peer review has become an important research focus. Studies have concentrated on validating its feasibility through empirical studies and experiments, as well as on testing the extent to which open peer review is accepted (Abraham, 2012).

In recent years, many academic journals and publishing platforms have adopted open peer review as a means of quality control for academic publications, e.g., ACP (Atmospheric Chemistry and Physics), Biology Direct, F1000, Peerage of Science, Publons, and The Winnower. Their platforms have been built on open network system principles and methods to improve peer review efficiency (ACP, 2022; Bornmann et al., 2011; Kowalczuk et al., 2013).

The degree of “openness” in open peer review practice varies from journal to journal: some, insisting on the original idea of open peer review, open up every segment of the peer review process; others, based on their own understandings of open peer review and for practical reasons, open up only some of the important segments. Based on these practice modes, we attempt to identify the key factors that play decisive roles in how open peer review is applied, so as to provide theoretical support for its effective promotion in the future.

Literature review

Open peer review

Theories of open peer review

The academic keyword “open peer review” first appeared in 1982, in connection with Peer-Review Practices of Psychological Journals: The Fate of Published Articles, Submitted Again (Peters & Ceci, 1982). However, its exact definition was not established at that time, and different researchers have since discussed the features of open peer review based on their own understandings of “openness”. Ross-Hellauer (2017) treated open peer review as an inclusive academic keyword. Through multiple rounds of searching on this keyword, he constructed a corpus containing 122 open peer review traits and, by iterative analysis, extracted seven key features: open identity, open review reports, open participation, pre-publication review, post-publication review, open interaction between authors and reviewers, and open review platforms. These features are mutually independent yet sometimes overlapping; together they form the initial concept of open peer review (Ross-Hellauer et al., 2017). One drawback of that research, however, is that only keywords such as “open review” and “open peer review” were considered, while associated keywords such as “post-publication peer review” were not included, which resulted in a gap on the peer review process timeline.

Emily Ford (2013) approached the concept of open peer review through a literature review. She grouped its features into two categories: review process and review timing. The review process category comprises signed review, open review process, editorial-mediated review, transparent review, and crowdsourced review; the review timing category comprises pre-publication review, synchronous review, and post-publication review. Her research filled the gap left by Ross-Hellauer (2017), but overlapping and vague definitions remained.

Addressing the limitations of both studies, Meng and Zhang characterized open peer review into different levels using quadrant analysis. They defined “openness” through three major classification features: publication of contents, scope of participation, and time of publication, each of which extends into second-level sub-features. Publication of contents includes publication of identity, publication of review reports, and editors as media. Scope of participation includes interested participants and public composition. Time of publication includes pre-publication, post-publication, and synchronous with research and development. For each second-level sub-feature, the authors also listed the journals that have implemented the corresponding feature in their peer review processes (Meng & Zhang, 2019).

Open peer review experiments and developments

There have been significant controversies in theoretical studies of open peer review, and the results of journals’ open peer review experiments have been barely satisfactory. Among the numerous experiments, the one run by Nature in 2006 is the most famous (Greaves et al., 2006; Qian, 2009). That experiment started on June 1, 2006 and ended on September 30, 2006, lasting four months. During this period, Nature received 1369 articles for peer review. The authors of 71 of them, roughly 5% of the total, were willing to participate in the comparatively new mode of peer review. During the experiment, the articles that underwent open peer review also underwent traditional peer review: editors put the articles on open servers to attract public comments, and the open commenting period was the same length as, and ran simultaneously with, traditional peer review (Greaves et al., 2006). Nature took a negative attitude towards the experiment’s results, but it kept the relevant web pages, which asked for comments and suggestions from scholars willing to participate.

Reflecting on Nature’s 2006 experiment, we find that: first, many authors were uninterested in the experiment, being more receptive to traditional peer review; second, some authors worried that open peer review might lead to plagiarism of their work and thus put them in difficult positions regarding patent determination; third, some potential reviewers thought open peer review had only limited value, as its reviews were overly polite and not constructive or critical enough. Qian (2009) believes that open peer review failed to achieve the expected effect because of defects in the scheme design. Based on the feedback, researchers summarized the reasons for the failure as follows: (1) reviewers feared negative consequences from opening (revealing) their identities; (2) there was a lack of an appropriate incentive mechanism; (3) the experiment was so short that review experts were unwilling to participate in an experiment that would soon be finished.

Similarly, The Medical Journal of Australia conducted two rounds of open peer review experiments, in March 1996 and October 1998. The second round ended in outright failure, which was later attributed to two major causes: (1) digitized journals were not yet popular, and there were no suitable tools for managing an open peer review process; (2) the editors were unfamiliar with the “brand-new” peer review process and were unable to balance the powers of the various participants. When other journals conducted experiments on open peer review, they tried to avoid these problems by adding to or changing the existing traditional peer review process (Zhang & Zhang, 2009).

To test the effects of publishing review reports on reviewers’ willingness to participate, on types of recommendation, on review duration, and on review style, Elsevier conducted an open peer review experiment using 9220 articles and 18,525 review reports from its journals.
Compared with the previous experiments, the data in this experiment came from five journals in different areas, making it the first empirical study of open peer review across journals from multiple disciplines (Bravo et al., 2019). The results showed that publishing peer review reports does not compromise the normal functioning of peer review, and that young and non-academic scholars are more inclined to participate in peer review and to provide positive, objective comments and suggestions.

Though open access, open data, and open peer review are all included in the open science framework of FOSTER (2021), as shown in Fig. 1, the acceptance and practical development of open peer review have lagged far behind those of open access and open data, and also behind those of the open metrics that belong to the same category of the framework. The practical problems in the development of open peer review include the vagueness of its academic concepts (Liu & Liu, 2017; Ross-Hellauer, 2017; Wang, 2018), insufficient research on relevant theories (Yao et al., 2020), unsatisfactory experimental results (Greaves et al., 2006; He & Fu, 2020), and the unwillingness of reviewers to participate (Ross-Hellauer et al., 2017; Tattersall, 2015; Walsh, 2000; Zhang, 2015). Beyond these academic problems, open peer review also faces multiple complicated practical problems, such as psychological factors (Tennant et al., 2017), incentive factors (Sun et al., 2016), and recognition of research contributions (Wang, 2018); it is thus not simply a copyright problem (Feng, 1993; Jiang, 2002) that can be settled by negotiation or hard rules.

Fig. 1 FOSTER open science framework (FOSTER, 2021)

Transparent peer review

The concept of transparent peer review has been implemented by some journals in their routine peer review processes, mostly by staff working at the various academic publishers; researchers on peer review, however, have not given the term a detailed definition. In documents published jointly by Wiley, Publons, and ScholarOne (Wiley, 2021), “transparency” is characterized in two respects: (1) all peer review deliverables produced during the process are open; (2) publicizing the identities of reviewers is optional (Barros et al., 2019). It can be argued, however, that in open peer review both reviewer identity and review contents should be open: neither should be optional. Though the documents mention no other features of open peer review, they show that the major difference between open and transparent peer review lies in whether revealing (opening) a reviewer’s identity is required. Open identity is essential to open peer review; intuitively, for the majority of researchers unfamiliar with open peer review, open identity simply is open peer review. Among the 122 open peer review traits collected by Ross-Hellauer (2017), open identity is associated with 90% (110/122), ranking first among all features. It is also because of the importance of open identity that the development of open peer review has hit its largest obstacle. Ford (2013) pointed out that dropping the feature of open identity would be detrimental to the eventual development of open peer review. In 2017, Ross-Hellauer (2017) conducted a survey on open peer review and summarized in detail the results obtained from 2994 respondents who had encountered open peer review practices in their routine work. In the survey, 73.9% of respondents (2175/2994) agreed or strongly agreed that review experts have the right to decide whether to open their identities, while 67.2% (1858/2767) thought that forcibly opening reviewers’ identities would lower their willingness to conduct peer reviews.

Open peer review vs. transparent peer review

Though Nature did not obtain ideal results from its first open peer review experiment, it continued its efforts to improve the peer review process. Ten years after that first experiment, in January 2016, an experiment on transparent peer review was conducted by Nature Communications (Nature, 2019), a journal from Nature. Compared with the earlier experiment, Nature Communications made reviewer anonymity the default option, with reviewers free to open their identities if they wished, while authors were required to choose whether to open the files produced in the review process. During the experiment, the authors of 60% of the 787 articles chose transparent peer review and chose to publish the review reports, a participation rate far above that of the first experiment (5%). We argue that the success of the 2016 experiment relative to the 2006 one can be attributed to three causes: (1) making reviewer anonymity the default option removed reviewers’ resistance to participation; (2) before the experiment, Nature Communications motivated reviewers by making clear that it would publish the review reports, which can be seen as a kind of academic incentive augmenting the reviewers’ reputations; (3) after a decade of developments in open science, open access, open data, and preprints, scholars across disciplines had become much more accepting of open peer review than before. The experiment results also show that scholars from disciplines where open science is comparatively well developed were the most willing to sign their review reports (Nature, 2015).

The existing experiments on and practices of open peer review are, in fact, closer to transparent peer review than to open peer review in the strict sense. For example, Atmospheric Chemistry and Physics (ACP) divides its peer review process into two phases. The first phase is a rapid peer review that only assures the basic quality of a submitted article. The article is then sent to Atmospheric Chemistry and Physics Discussions (ACPD) for a second phase of in-depth discussion and review, during which a reviewer may publish signed or unsigned review content. If the article passes the second phase, it is published by ACP, and the related review contents and the authors’ replies to them are published by ACPD (Zhang et al., 2011). Peerage of Science, a peer review platform, uses a similar process. When an article enters the platform, every author who has previously published on Peerage of Science can comment on it as a reviewer, with editors monitoring the comments and providing basic quality control for both the article and the comments. In the second phase, reviewers also review and rate the comments provided by the other reviewers in the first phase, and the authors revise the article according to the collective comments received. Articles that successfully pass the second phase are then chosen for publication by the editors (Hettyey et al., 2012). The two processes share the same key feature: all comments produced during review are public, while opening a reviewer’s identity is optional, which is exactly the deciding feature of transparent peer review (He & Fu, 2021).

There is no universally agreed-upon definition of open peer review. Typically, though, central to open peer review is the idea that at some point in the review or publishing process the author and the reviewer are mutually aware of each other’s identities. Moreover, in open peer review, review reports and reviewers’ identities may be published together with the articles they concern, and some open peer review journals also publish all early versions of an article. Transparent peer review emphasizes publishing an article with all of its review reports but does not require publishing the reviewers’ identities; its purpose is rather to open scientific review to a wider audience and invite that audience to participate. Journals usually use two methods to improve the objectivity of peer review: opening reviewers’ identities to authors (the open identity mode), and publishing articles together with their review reports (the open review report mode). Both serve to establish an accountability mechanism for peer review.

Elizabeth Moylan defined transparent peer review as publishing an article with its review reports while hiding the identities of the reviewers, and open peer review as publishing an article with its review reports as well as the identities of the reviewers. There is also another type of transparent peer review in which a reviewer’s identity is shared but his/her review reports are not (e.g., Nature) (Meadows, 2017). Within the scope of this research, the major difference between open and transparent peer review lies in whether a reviewer’s identity is open. As mentioned before, Ross-Hellauer (2017) analyzed 122 open peer review traits and found that open identity accounted for 90% of them, ranking first. In fact, transparent peer review takes multiple forms and is therefore a dynamic concept that references the features of open peer review. It should be regarded as a concept derived from open peer review, and it has successfully promoted participation by scholars as potential reviewers, many of whom had previously been discouraged by open peer review’s requirement of open reviewer identity.

Research problems and ideas

Rapid technological developments have made the widespread application of open peer review possible. Hindered by certain factors, however, open peer review has not been widely adopted, while transparent peer review, a combined result of open science and open peer review, has been taken up by a number of journals. From an empirical viewpoint, we therefore want to identify and validate the key factors that affect the practice modes of open peer review.

Open peer review can be classified into two practice modes: completely open and partially open. We therefore set up a dependent variable called “type of open peer review”. According to the definitions given above, seven factors are relevant to “type of open peer review” in practice: open identity, open review reports, open participation, pre-publication review, post-publication review, public interaction between authors and reviewers, and open review platforms. These seven factors can be split, generalized, and combined into five measurable indices serving as independent variables: (1) whether authors’ identities are open; (2) whether reviewers’ identities are open; (3) whether review reports are open; (4) interactive discussion (generalized from open participation, public interaction between authors and reviewers, and open review platforms); (5) order of review and publication (generalized from pre-publication review and post-publication review).

Through data analysis, we aim to identify the salient correlations between the dependent variable (open peer review practice mode) and the independent variables (author identity, reviewer identity, review report, interactive discussion, order of review and publication). The independent variables exhibiting high correlations are the key factors affecting the practice modes of open peer review.

Empirical analysis

Research methodology

Multiple correspondence analysis

Multiple Correspondence Analysis (MCA) is also called relevancy analysis and R-Q factor analysis (Baidu Encyclopedia, 2022). It is a multivariate statistical analysis technique for categorical variables. By analyzing a contingency table composed of qualitative variables, it can reveal the differences between the categories of the same variable and the correspondences between the categories of different variables. The fundamental idea behind Correspondence Analysis (CA) is to represent the proportional structure of the rows and columns of a contingency table as a set of points in a low-dimensional space. CA is suitable for analyses involving two categorical variables; MCA extends it to more than two. We combined MCA with Optimal Scale Regression Analysis (OSRA) for our analysis: an optimal scale transformation is carried out for each variable to highlight, as much as possible, the correlational differences between its categories and the categories of other variables, and then the calculation proceeds according to the standard multiple correspondence analysis method (Zhang & Dong, 2018).
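To make the computation concrete, the following is a minimal sketch of MCA as correspondence analysis of a one-hot indicator matrix, using only numpy and pandas. The journal records and column names are fabricated for illustration; they do not reproduce our dataset, and SPSS was the tool actually used for the results reported below.

```python
import numpy as np
import pandas as pd

# Toy records coded in the spirit of Table 2 (values fabricated).
df = pd.DataFrame({
    "reviewer_identity": ["open", "anonymous", "open", "anonymous", "open"],
    "review_report":     ["open", "withheld",  "open", "open",      "withheld"],
    "opr_type":          ["complete", "partial", "complete", "partial", "partial"],
})

# Indicator (one-hot) matrix: rows = journals, columns = variable categories.
dummies = pd.get_dummies(df)
Z = dummies.to_numpy(dtype=float)

# Correspondence analysis of the indicator matrix (the core of MCA).
P = Z / Z.sum()                      # correspondence matrix
r = P.sum(axis=1, keepdims=True)     # row masses
c = P.sum(axis=0, keepdims=True)     # column masses
S = (P - r @ c) / np.sqrt(r @ c)     # standardized residuals
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

print("inertia per dimension:", np.round(sv**2, 3))  # cf. the inertia values of Table 4

# Principal coordinates of the categories on the first two dimensions,
# analogous to the scatter points of Fig. 3.
coords = (Vt.T * sv) / np.sqrt(c.T)
print(pd.DataFrame(coords[:, :2], index=dummies.columns, columns=["dim1", "dim2"]))
```

Categories of different variables that land close together in this low-dimensional space are the candidate correspondences read off the map.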

Optimal scale regression analysis

Ordinary linear regression places strict requirements on data. With categorical variables, linear regression cannot accurately reflect the distances between different values. For example, the dependent and independent variables mentioned above (type of open peer review; whether author identities are open; whether reviewer identities are open; whether review reports are open) are all nominal: their values differ in neither size, order, nor trend. Applying linear regression to them directly would strip them of their connotational significance.

Optimal scale regression solves this problem. It quantifies the categories (values) of categorical variables, converting them into a numerical form suitable for statistical analysis. The optimal scale regression method thus greatly improves the handling of categorical variables and removes the restrictions they place on model selection, expanding the applicability of regression analysis (Cao & Yang, 2019).

Optimal scale transformation specifically addresses how to quantify categorical variables in statistical modeling. Its basic idea is to analyze, within the model to be fitted, how strongly each category of a categorical variable influences the value of the dependent variable. On the premise that the relationships between the transformed variables remain linear, a nonlinear transformation is iterated repeatedly to find the best quantitative score for each category of the original variable; these scores then replace the original categories in the model for subsequent analysis. In short, optimal scale regression is regression analysis on numerically quantified categorical variables (Zhang & Dong, 2018).
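As a rough illustration of this quantify-then-regress idea, here is a toy alternating-least-squares sketch in Python, restricted to the nominal scaling level. It is not SPSS’s CATREG algorithm, only the idea behind it; the data and names are fabricated, and with purely binary indicators, as in our study, the quantification essentially reduces to a recoding.

```python
import numpy as np
import pandas as pd

def optimal_scale_regression(df, y_col, x_cols, n_iter=100):
    """Toy alternating least squares: regress, then re-quantify categories."""
    z = lambda s: (s - s.mean()) / s.std(ddof=0)  # standardize a Series
    # Initial quantifications: arbitrary integer codes, standardized.
    q = {col: z(pd.Series(pd.factorize(df[col])[0], index=df.index, dtype=float))
         for col in [y_col] + list(x_cols)}
    y = q[y_col].to_numpy()
    for _ in range(n_iter):
        X = np.column_stack([q[col].to_numpy() for col in x_cols])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        for j, col in enumerate(x_cols):
            # Partial residual: the part this predictor alone must explain.
            partial = pd.Series(y - (X @ beta - beta[j] * X[:, j]), index=df.index)
            # Nominal scaling: a category's score is its conditional mean.
            q[col] = z(partial.groupby(df[col]).transform("mean"))
            X[:, j] = q[col].to_numpy()
    return beta

# Fabricated 1/2 codes in the spirit of Table 1.
df = pd.DataFrame({
    "opr_type":          [1, 2, 1, 2, 2, 1, 2, 1],
    "reviewer_identity": [1, 2, 1, 2, 2, 1, 2, 2],
    "review_report":     [1, 2, 1, 1, 2, 1, 2, 2],
})
beta = optimal_scale_regression(df, "opr_type", ["reviewer_identity", "review_report"])
print(beta)  # the reviewer-identity coefficient dominates, cf. Table 7
```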

Discriminant analysis

Discriminant Analysis (DA) is a classification method: it establishes discriminant rules from “training samples” whose categories are known and uses those rules to classify data whose categories are unknown. Linear Discriminant Analysis (LDA) is a DA method and a classical algorithm for pattern recognition, introduced by Belhumeur into pattern recognition and AI research in 1996. Its basic idea is to project high-dimensional pattern samples onto the vector space best suited for recognition, extracting categorical information and reducing dimensionality, such that after projection the between-class distances are as large as possible and the within-class distances as small as possible (Zhang & Dong, 2018).
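For readers who want to reproduce this kind of classification outside SPSS, a minimal LDA sketch with scikit-learn follows. The six journal records are fabricated (coded 1/2 as in Table 1), so the fitted weights are purely illustrative.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Columns: reviewer identity, review report (1 = open, 2 = otherwise).
X = np.array([[1, 1], [1, 2], [2, 1], [2, 2], [2, 1], [1, 2]])
# Type of open peer review: 1 = complete, 2 = partial.
y = np.array([1, 1, 2, 2, 1, 2])

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
print(lda.coef_, lda.intercept_)  # weights of the linear discriminant function
print(lda.predict([[1, 1]]))      # classify a journal with both indicators open
```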

Data source

The Directory of Open Access Journals (DOAJ) is an indexing service for Open Access Journals (OAJs), excluding preprints. DOAJ was initially supported by Lund University and is now a community-curated online directory. The number of OAJs it indexes is very large, and it is currently considered the best OAJ indexing service. One of its advantages is that the quality of its indexed journals is assured and certified by international experts following a rigorous assessment process. In addition, the electronic and paper versions of these OAJs are published synchronously, and access to them is free of charge. We therefore use DOAJ as our data source. As of August 3, 2021, DOAJ listed 176 OAJs employing open peer review, 166 of which (94.32%) had effective and retrievable journal data.
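For readers who wish to retrieve similar records, the sketch below queries DOAJ’s public search API with Python’s requests library. The endpoint path and the bibjson field names follow DOAJ’s published journal schema as we understand it and should be checked against the current API documentation; the query string is only an example.

```python
import requests
from urllib.parse import quote

# Search DOAJ's journal index (verify the API version/path in DOAJ's docs).
query = quote('"open peer review"')
resp = requests.get(f"https://doaj.org/api/search/journals/{query}",
                    params={"pageSize": 100})
resp.raise_for_status()

for rec in resp.json().get("results", []):
    bib = rec.get("bibjson", {})
    # 'editorial.review_process' is the field we expect to describe review modes.
    print(bib.get("title"), "-", bib.get("editorial", {}).get("review_process"))
```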

Initializations of indicator variables

There are six indicator variables in total; their value assignment rules are given below (see also Table 1).

(1) Type of open peer review: According to the definition of open peer review, if author identity, reviewer identity, and review reports are all open by a journal, the journal is considered completely open (assigned the value 1); otherwise it is considered partially open (assigned the value 2).

(2) Reviewer identity: Open peer review opens the identities of reviewers by default, in which case the value 1 is assigned. If, in practice, a reviewer can choose to remain anonymous, this indicator is assigned the value 2.

(3) Author identity: If author identity is open, the value is 1. If an author can choose to remain anonymous, the value is 2.

(4) Review report: If an OAJ publishes all review reports, the value is 1. If publication of the reports can be withheld at the request of the author(s) and/or reviewer(s), the value is 2.

(5) Interactive discussions: If an OAJ has a forum set up for interactive discussions between authors and reviewers, the value is 1; otherwise, the value is 2.

(6) Order of review and publication: If publication comes after review, the value is 1. If review comes after publication, the value is 2.

Table 1 Value assignments for open peer review indicators

For the 176 OAJs in DOAJ, we crawled the web to gather the corresponding data and assigned values to their indicator (categorical) variables accordingly, as shown in Table 2; a coding sketch is given after the table.

Table 2 Features of the 176 OA Journals Employing Open Peer Review (Excerpt)
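A minimal sketch of this coding step, assuming pandas, is shown below. The raw column names and the per-journal values are illustrative stand-ins for our crawled data, not authoritative descriptions of the journals named.

```python
import pandas as pd

# Hypothetical crawled attributes (names and values for illustration only).
raw = pd.DataFrame({
    "journal": ["ACP", "Biology Direct", "F1000Research"],
    "reviewer_identity_open": [False, True, True],
    "author_identity_open":   [True, True, True],
    "report_published":       [True, True, True],
    "has_forum":              [True, False, True],
    "review_before_pub":      [True, True, False],
})

code = lambda flag: 1 if flag else 2  # 1 = open/yes, 2 = otherwise (Table 1)
coded = pd.DataFrame({
    "journal":                  raw["journal"],
    "reviewer_identity":        raw["reviewer_identity_open"].map(code),
    "author_identity":          raw["author_identity_open"].map(code),
    "review_report":            raw["report_published"].map(code),
    "interactive_discussion":   raw["has_forum"].map(code),
    "order_review_publication": raw["review_before_pub"].map(code),
})

# "Completely open" (code 1) requires reviewer identity, author identity,
# and review reports all to be open; otherwise "partially open" (code 2).
open_cols = ["reviewer_identity", "author_identity", "review_report"]
coded["opr_type"] = (coded[open_cols] == 1).all(axis=1).map({True: 1, False: 2})
print(coded)
```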

Research results

Multiple correspondence analysis results

Table 3 provides the iteration history, which shows that the results converge after the 14th iteration. Table 4 summarizes the eigenvalue and inertia values for each dimension. An inertia value stands for the volume of information a dimension carries; the volumes of information carried by the two dimensions are 0.383 and 0.245, respectively. The value of Cronbach’s Alpha reflects the degree of discrimination: the larger the value, the more discriminating the dimension. The first dimension (α = 0.678) is thus more discriminating than the second (α = 0.385).

Table 3 Iteration history
Table 4 Model summary

Figure 2 is the discrimination measurement diagram, which shows each variable’s degree of discrimination in the two dimensions as scatter coordinates. For the four variables “type of open peer review”, “reviewer identity”, “review report”, and “author identity”, the discrimination degree in the first dimension is greater than in the second; among them, “type of open peer review” lies closest to “reviewer identity”. “Interactive discussion” and “order of review and publication” are better discriminated in the second dimension.

Fig. 2 Discrimination measurement graph

Figure 3 is the multiple correspondence analysis diagram. The principles for reading it are: categories of the same variable lying in roughly the same area relative to the origin (0, 0) have similar properties; categories of different variables lying in roughly the same direction from the origin may be correlated; and the quadrant method is used to investigate the relationships between groups of scattered points. According to these principles, the following observations can be made.

(1) “Discussion after real-name registration” is related to “post-publication review”.

(2) “Partial open peer review” is related to “voluntary anonymity” of author identity, “voluntary anonymity” of reviewer identity, and “not open” of review report, which corresponds with the features of partially open peer review. “Partial open peer review” and “voluntary anonymity” of reviewer identity are the closest pair on the graph, indicating that the two are highly correlated, whereas “voluntary anonymity” of author identity lies farthest from “partial open peer review”, indicating a very limited correlation.

(3) “Pre-publication review” is related to “non-interactive discussions” and lies close to the origin, showing that it is a commonly used practice mode of open peer review.

(4) “Complete open peer review” is related to “open” author identity, “open” reviewer identity, and “open” review report; the latter three are essentially the defining features of completely open peer review. Moreover, “complete open peer review” and “open” reviewer identity almost overlap on the graph, indicating that the two are very highly related.

Fig. 3 Multiple correspondence analysis graph

Based on the above analysis, whether reviewer identity and review reports are open are two important factors affecting the practice modes of open peer review, with reviewer identity appearing to be the more important. Most open peer review practices adopt pre-publication review without interactive discussion.

Optimal scale regression analysis results

Table 5 shows the goodness of fit of the optimal scale regression model: the adjusted R-square value is 0.89, indicating that the model explains 89% of the variance, a good fit. Table 6 is the ANOVA table, used to test the significance of the regression equation. It shows a Sig. value of 0.000, very close to zero, indicating that a multivariate linear optimal scale regression equation can be built.

Table 5 Model summary
Table 6 ANOVA

Table 7 presents the coefficients and significance test results for the multiple linear optimal scale regression function. “Reviewer identity” has the largest coefficient (0.917) and passes the significance test, meaning it has the most significant influence on “type of open peer review”. The other variable to pass the significance test is “order of review and publication”, but its coefficient is very small, showing that it has some, but very limited, influence on the dependent variable.

$$\text{Type of Open Peer Review} = 0.014 \times \text{Author Identity} + 0.917 \times \text{Reviewer Identity} + 0.086 \times \text{Review Report} + 0.031 \times \text{Order of Review and Publication} + 0.004 \times \text{Interactive Discussions}.$$
Table 7 Coefficients

Table 8 shows the correlations and tolerance of the independent variables. The “importance” indicator reflects the degree of influence of an independent variable on the dependent variable. “Reviewer identity” is by far the most important for “type of open peer review”, with a value of 0.966; the second and third most important are “review report” and “order of review and publication”, but their importance values are very small.

Table 8 Correlations and tolerance

Discrimination analysis results

Table 9 shows the results of the stepwise selection method, which screens for the independent variables that have significant impacts on the discrimination results. It retains the two independent variables, “reviewer identity” and “review report”, that significantly affect the discrimination probability of “type of open peer review” (Sig < 0.05).

Table 9 Variables entered/removed

Table 10 shows the eigenvalue (8.295) and related information for the canonical discriminant function. The function carries 100% of the information in the research data, and its canonical correlation coefficient is 0.945, indicating a high correlation. Table 11 shows the Wilks’ lambda test results for the constructed discriminant function, which is statistically significant.

Table 10 Eigenvalues
Table 11 Wilks’ Lambda

Table 12 shows the structure matrix of the discriminant function: only “reviewer identity” has a very strong positive correlation with the discriminant score for “type of open peer review”.

Table 12 Structure matrix

Table 13 shows the non-standardized discriminant score function for “type of open peer review”, constructed as follows; “author identity”, “interactive discussion”, and “order of review and publication” were excluded from the discriminant function by the stepwise selection method. The function shows that the discriminant score for open peer review type depends only on the two variables reviewer identity and review report, and the coefficient for reviewer identity is especially large (5.912), which again indicates that whether reviewer identity is open is perhaps the most critical factor deciding the type of open peer review.

$$\text{Discriminant Score}(\text{Type of Open Peer Review}) = 5.912 \times \text{Reviewer Identity} + 0.937 \times \text{Review Report} - 9.872.$$
Table 13 Canonical discriminant function coefficients
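Applying the fitted function is simple arithmetic; the sketch below evaluates it for the two extreme coding patterns of Table 1. Which sign corresponds to which group depends on the group centroids in the SPSS output, so the side labels in the comments are our reading rather than part of the fitted model.

```python
def discriminant_score(reviewer_identity: int, review_report: int) -> float:
    """Non-standardized discriminant function from Table 13 (codes per Table 1)."""
    return 5.912 * reviewer_identity + 0.937 * review_report - 9.872

print(discriminant_score(1, 1))  # -3.023: both indicators open
print(discriminant_score(2, 2))  #  3.826: neither indicator fully open
```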

Discussions and conclusions

Whether or not a reviewer’s identity is open is the most important factor affecting open peer review practice modes

Though the definition of open peer review includes multiple requirements, such as opening identities, opening review reports, offering interactive discussions, and the order of review and publication, the data analysis results in the last section reveal that: (1) the multiple correspondence analysis shows that “type of open peer review” is far more strongly related to “reviewer identity” than to the other variables; (2) in the optimal scale regression function and the discriminant function, the regression coefficient of “reviewer identity” passed the significance test and was the largest, indicating that it has the biggest influence on “type of open peer review”; (3) together, these results show that whether “reviewer identity” is open is the most important factor affecting the practice modes of open peer review.

Transparent peer review is the major practice mode for open peer review

Open peer review requires open identities, and reviewers who worry about the possible negative consequences of this requirement may resist completely open peer review. Transparent peer review respects a reviewer’s right to remain anonymous and has therefore received better support from reviewers. In practice, transparent peer review has become the predominant practice mode of open peer review, as validated by our analysis results.

Blockchain can be used to alleviate the psychological uneasiness of the reviewers who are reluctant to open their identities

Open peer review can be characterized by transparency, interactivity, verifiability, effective supervision (through accountability), and scientific selectivity. Blockchain offers hash encryption, distributed storage, consensus mechanisms, smart contracts, and incentive mechanisms. These features can be used to address several tensions in open peer review, e.g., decentralization versus effective supervision, open sharing versus privacy protection, and review efficiency versus review quality.

Carrying out open peer review requires many scholars to volunteer as reviewers. However, many scholars are reluctant to participate as reviewers precisely because of the open identity requirement: they worry about the various possible negative consequences of revealing their identities. It is therefore difficult to engage them as reviewers in open peer review.

We can use blockchain techniques to encrypt reviewers’ identities and help them remain anonymous. With their privacy concerns addressed in this way, we can proceed to publish their review reports, and they can take part in online interactive communication. In this way, open peer review is transformed into transparent peer review. As reviewers gradually become comfortable with it and realize the value of (and benefits brought by) openness, their reluctance to participate in fully open peer review may gradually fade. Blockchain thus serves to prepare and train scholars as potential reviewers for open peer review.

We think blockchain techniques can combine the advantages of double-blind peer review and open peer review. In the first phase, review is done double-blind: digital signatures can be used for identity verification, assuring the privacy of the review experts (Liu & Huang, 2021). Apart from the necessary review information, authors and reviewers cannot identify each other, and the texts they produce are anonymized. After the reviews are finished, the second phase moves to an “open” peer review system, where all review comments and scores are published and all deliverables and the whole process are effectively supervised (Zhi & Ren, 2022). Blockchain techniques can thus ensure the openness, transparency, and effective supervision of a peer review process and its comments, balancing privacy protection and open data while ensuring data security. In addition, blockchain’s incentive mechanisms (tokens, e.g., digital currencies like Bitcoin) can be leveraged to motivate scholars, as potential reviewers, to participate in open peer review (He & Fu, 2020).
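To illustrate one ingredient of such a design, here is a minimal, stdlib-only Python sketch of pseudonymous reviewer handles: a keyed hash lets a platform confirm that the same reviewer stands behind several published reports without exposing who that reviewer is. All names, keys, and fields are hypothetical; a real system would add key management, digital signatures, and an actual distributed ledger.

```python
import hashlib
import hmac
import os

# Platform-held secret key (illustrative; never published on the ledger).
PLATFORM_KEY = os.urandom(32)

def reviewer_handle(orcid: str) -> str:
    """Derive a stable pseudonym from a reviewer's ORCID via a keyed hash."""
    return hmac.new(PLATFORM_KEY, orcid.encode(), hashlib.sha256).hexdigest()[:16]

def ledger_entry(orcid: str, report_text: str) -> dict:
    """A ledger record: the report is public, the identity only pseudonymous."""
    return {
        "reviewer": reviewer_handle(orcid),  # pseudonym, stable per reviewer
        "report_hash": hashlib.sha256(report_text.encode()).hexdigest(),
        "report": report_text,               # the open review content
    }

entry = ledger_entry("0000-0002-1825-0097", "Sound methods; please clarify Table 2.")
print(entry["reviewer"], entry["report_hash"][:12])
```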

Open peer review remains the traditional “academic gatekeeper”

“Order of review and publication” has an evident influence on “type of open peer review”, but the degree of influence is not high; it is therefore a secondary factor affecting open peer review practice modes. Few journals have adopted “post-publication review”. Most journals using open peer review employ “pre-publication review”, which shows that they still treat open peer review as the traditional “academic gatekeeper” controlling the quality of publications.

Publishing review reports can incentivize reviewers and promote the development of open peer review

“Review report”, another important factor affecting open peer review practice modes, can function as an incentive for scholars to participate in open peer review as reviewers. Formalized reviewer comments can be attached at the end of a published paper and assigned their own DOIs. In this way, the comments and their reviewers become known to the paper’s potential readers, and the comments themselves can be cited. Associating the DOIs of the comments with the reviewers’ ORCIDs makes the related review reports appear on the reviewers’ professional profiles, augmenting their reputation and recognition in academia; this can also serve as an additional evaluation criterion in funding and/or promotion applications (He & Fu, 2020).

Summary

This paper presents an empirical study of the factors affecting the practice modes of open peer review, identifying and verifying the primary and secondary influencing factors. Our major contributions are: (1) using a categorical value assignment method, we conducted quantitative analyses of qualitative factors; (2) the multiple correspondence analysis chart was used to show the relevant influencing factors of open peer review and the internal correlations between their categories, while optimal scale regression and discriminant analysis were used to reveal how strongly each factor influences the choice of open peer review practice mode; (3) by taking the open peer review journals in DOAJ as our major data source and obtaining additional data through web crawling, we ensured that the collected data are trustworthy and reliable. In open peer review practice, few indicators (variables) can be directly observed and easily measured. In the future, we will try to find and employ more observable indicators to study the developmental characteristics of open peer review practice modes.