Correlations between submission and acceptance of papers in peer review journals

This paper provides a comparative study of the seasonal influence on editorial decisions for papers submitted to two peer-reviewed journals: a specialized one, the Journal of the Serbian Chemical Society (JSCS), and an interdisciplinary one, Entropy. Dates of electronic submission for about 600 papers to JSCS and 2500 to Entropy have been recorded over 3 recent years. Time series of either accepted or rejected papers are subsequently analyzed. We take either the editors' or the authors' viewpoint into account, thereby considering magnitudes and probabilities. In this sample, it is found that there are distinguishable peaks and dips in the time series, demonstrating preferred months for the submission of papers. It is also found that papers are more likely to be accepted if they are submitted during a few specific months, these depending on the journal. The probability of having a paper rejected also appears to be seasonally biased. In view of clarifying reports with contradictory findings, we discuss previously proposed conjectures for such effects, like holiday effects and desk rejection by editors. We conclude that, in this sample, the type of journal, specialized or multidisciplinary, seems to be the decisive criterion for distinguishing the outcome rates.


Introduction
In the peer review process, two "strategic" questions have to be considered: on one hand, for editors, what is the load due to the number (and what is the relative frequency) of papers submitted at some time during a year? On the other hand, for authors, is there any bias in the probability of acceptance of their (by assumption high quality) paper when submitted in a given month, because of the (being polite) mood of editors and/or reviewers? A study of such a time concentration (and dispersion) of submitted papers and their subsequent acceptance (or rejection) seems appropriate from a scientometrics point of view, in line with recently found "effects" known through the media, like coercive citations or faked research reports.
In fact, the question of paper submission timing is of renewed interest nowadays in informetrics and bibliometrics due to the flurry of new journals published by electronic means. Moreover, the paper acceptance rate is of great concern to authors, who may feel much bias at some times. Needless to say, the peer review process is sometimes slow, with reasons found in editors' and reviewers' workload, whence the difficulty of finding reviewers. Tied to such questions are, on one hand, the open access policy and the submission fees imposed by publishers, but on the other hand also doubts or constraints about the efficiency of managing the peer review of scientific manuscripts, from the perspective of editors [1] and of authors [2]. Thus, one may wonder if there is some "seasonal" or "day of the week" effect.
Very recently, Boja et al. [3], in this journal, showed that "the day of the week when a paper is submitted to a peer reviewed journal correlates with whether that paper is accepted", when looking at a huge set of cases for high Impact Factor journals. However, there was no study of rejected papers.
From the seasonal point of view, Shalvi et al. [4] earlier discussed the monthly frequency of electronic submissions to two psychology journals, Psychological Science (PS) and Personality and Social Psychology Bulletin (PSPB), over 4 and 3 years respectively. Shalvi et al. [4] found a discrepancy between the "submission-per-month" and "acceptance-per-month" patterns for PS, but not for PSPB. More papers were submitted to PS during "summer months", but no seasonal bias effect (based on a χ²(11) test for percentages) was found in the subsequent acceptances; nevertheless, the percentage of accepted papers submitted in Nov. and Dec. was found to be very low. In contrast, many papers were submitted to PSPB during "winter months", followed by a dip in April, but the percentage of published papers was found to be greater if the submission to PSPB occurred in [Aug.-Sept.-Oct.]. Moreover, a marked "acceptance success dip" occurred if the submission was in "winter months". The main difference between such patterns was conjectured to stem from different rejection policies, i.e., employing desk rejections or not.
Later, Schreiber [5] examined submissions to a specialized journal, Europhysics Letters (EPL), over 12 years. He observed that the number of submitted manuscripts had been steadily increasing while the number of accepted manuscripts had grown more slowly. He claimed to find no statistical effect. However, from Table 2 in [5], there is a clearly visible maximum in the number of submissions in July, more than 10% over the yearly mean, and a marked dip in submissions in February, even taking into account the "small length" of that month. Examining the acceptance rate (roughly ranging between 45 and 55%, according to the month of submission), he concluded that strong fluctuations can be seen between different months. One detects a maximum in July and a minimum in January for the most recent years.
Alikhan et al. [6] had a similar concern: they compiled submissions, in 2008, to 20 journals pertaining to dermatology. It was found that May was the least popular month, while July was the most popular month. We have estimated χ² ≈ 36.27 from the Fig. 1 data in Alikhan et al. [6], thereby suggesting a far from uniform distribution. There is no information on acceptance rates in [6].
Other papers have appeared purporting to discuss seasonal or similar effects, drawing conclusions from fluctuations but finding no effect, based on standard deviation arguments instead of χ² tests. Yet, it should be obvious to the reader that a χ² test performs better for finding whether a distribution is uniform or not, which is our research question. In contrast, a test based on the standard deviation and the confidence interval can only support claims about some percentage deviation of (month) outliers; furthermore, such studies tacitly assume normality of the (submission or acceptance) fluctuation distributions, which is far from being the case. The skewness and kurtosis of the distributions, which are mandatory complements, are usually not provided in such "fluctuation studies".
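Since the χ² uniformity test recurs throughout this study, a minimal sketch may help the reader; the monthly counts below are purely hypothetical, and the 95% critical value of χ² with 11 degrees of freedom (12 months) is ≈ 19.68.

```python
def chi2_uniform(counts):
    """Chi-squared statistic of observed monthly counts against a uniform year."""
    expected = sum(counts) / len(counts)  # same expected count for every month
    return sum((o - expected) ** 2 / expected for o in counts)

# Hypothetical monthly submission counts, Jan..Dec (illustration only)
counts = [50, 48, 55, 40, 35, 47, 70, 52, 65, 50, 45, 38]
chi2 = chi2_uniform(counts)

CRITICAL_95 = 19.68  # chi-squared, 11 d.o.f., 95% level
uniform_rejected = chi2 > CRITICAL_95  # True: a monthly effect beyond chance
```

With these illustrative counts, χ² ≈ 23.4 > 19.68, so the uniform-distribution null hypothesis would be rejected, exactly the kind of conclusion drawn from Tables 1-8 below.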
In order to contribute answers to the question of "monthly bias", we have been fortunate to get access to data on submitted, and later either accepted or rejected, papers for a specialized (chemistry) scientific journal and for a multidisciplinary journal. Two coauthors of the present report, ON and AD, are Sub-Editor and Manager of the Journal of the Serbian Chemical Society (JSCS). One coauthor, MA, is a member of the editorial board of Entropy. It will be seen that comparing features of these two journals allows one to lift part of the veil on the apparent discrepancies reported in other cases.
Thus, here below, we explore the fate of papers submitted for peer review during a given month, including their publication fate. We find that, in the cases at hand, fluctuations of course occur from one year to another. However, for JSCS, submission peaks occur in July and September, while far fewer papers are submitted in May and December. A marked dip in submissions occurs in August for Entropy, while the largest numbers of submissions occur in October and December.
However, if the number of submitted papers is relevant for editors and handling machines, the probability of acceptance (and rejection) is of much concern to authors. Relative to the number of submitted papers, it is shown that more papers are accepted for publication if they are submitted in January (and February), but fewer if submitted in December, for JSCS; the highest rejection rates occur for papers submitted in December and March. For Entropy, the acceptance rate is lowest in June and December, but is high for papers submitted during the spring months, February to May. Statistical tests, e.g., χ² and confidence intervals, are provided to ensure the validity of the findings.
In view of the different desk rejection policies, and in order to discuss the effect of such policies as in [6], we examine a possible specific determinant for the JSCS data: possible effects due to religious or holiday bias (in Serbia) are commented upon.

Data
The JSCS and Entropy peer review processes are both mainly managed electronically, whence the editorial work is only weakly tied to the editors' working days 1. Among the T_s = 913 papers submitted to JSCS, T_a = 424 were finally accepted, while T_r = 474 were (reviewer or subsequently editor) rejected, i.e., 52%; T_dr = (42 + 81 + 79 =) 202 papers were desk rejected, without going through the peer review process, i.e., 22.1%. For completeness, let it be recorded that several papers were rejected because the authors did not reply to the reviewers' remarks in due time, and a few submissions were withdrawn. (Thus, T_a + T_r = 424 + 474 = 898, slightly below T_s = 913 because of these withdrawn submissions.) The time series of the positive fate, thus acceptance, of papers submitted in a specific month is also shown in Fig. 1.

The Journal of the Serbian Chemical Society
The statistical characteristics 2 of the N distributions for JSCS are given in Table 1 - Table 4.

Entropy
Entropy covers research on all aspects of entropy and information studies. The journal home page is http://www.mdpi.com/journal/entropy. Among the T_s = 2573 submitted papers, T_a = 1250 were finally accepted for publication. The time series of the positive fate, thus acceptance, of papers submitted in a specific month is also shown in Fig. 2.
The statistical characteristics of the N distributions for Entropy are given in Table 5 -Table 8.

Data analysis
The most important value to discuss is the calculated χ², for checking whether or not the distribution is uniform over the whole year. Notice that we can discuss the data not only by comparing different years, but also through the cumulated data, C_s, C_a, C_r, and C_dr, as if all years were "equivalent". For further analysis, we provide the statistical characteristics of the cumulated distributions in Table 1 - Table 8.
We have also taken into account that months have different numbers of days, normalizing all months as if they were 31 days long (including the special case of February in 2016). The fact that the number of papers then appears not to be an integer is not a drastic point; more importantly, such a data manipulation does not disagree at all with our following conclusions. Thus, we do not report results from such "data normalization". The coefficient of variation (CV ≡ σ/µ) is always quite small, indicating that the data is reliable beyond statistical sampling errors. Each coefficient of variation 3, C_s or C_a or C_r or C_dr, for the cumulated data is lower than the other CVs; this a posteriori points to the (statistical) interest of accumulating data for each month of different years, beside looking at the more dispersed data over a long time span.
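The month-length normalization and the coefficient of variation just described can be sketched as follows; the counts are again hypothetical, and the non-leap-year day lengths are assumed for illustration.

```python
DAYS = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]  # non-leap year

def normalize_to_31_days(counts, days=DAYS):
    """Rescale each monthly count as if every month were 31 days long.

    The results need not be integers, which is harmless for the analysis.
    """
    return [c * 31 / d for c, d in zip(counts, days)]

def coefficient_of_variation(values):
    """CV = sigma / mu, using the population standard deviation."""
    mu = sum(values) / len(values)
    var = sum((v - mu) ** 2 for v in values) / len(values)
    return (var ** 0.5) / mu

# Hypothetical monthly submission counts, Jan..Dec
counts = [50, 45, 55, 48, 40, 47, 70, 52, 65, 50, 45, 38]
normalized = normalize_to_31_days(counts)
cv = coefficient_of_variation(normalized)  # small CV: data reliable beyond sampling error
```

Note that a constant daily submission rate would yield identical normalized counts for every month, i.e., CV = 0, which is the baseline against which the observed small but nonzero CVs are read.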

JSCS Data analysis
Next, observe the summary of statistical characteristics in Table 1 - Table 4; they show that the distributions are positively skewed, except those for the submitted papers, which are negatively skewed. The kurtosis of each distribution is usually negative, except for the anomalous cases, N, whence for the latter case over the whole series. It can be concluded that the distributions are quite asymmetric, far from Gaussian, but rather peaked.
Almost all measured values fall within the classical confidence interval ]µ − 2σ, µ + 2σ[. However, in five cases, a few extreme values fall above the upper limit, as can be deduced from the Tables.
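The shape diagnostics and the ]µ − 2σ, µ + 2σ[ screening used above can be sketched as follows, again on hypothetical counts (here with an artificially extreme July).

```python
def moments(values):
    """Mean, population standard deviation, skewness, and excess kurtosis."""
    n = len(values)
    mu = sum(values) / n
    sigma = (sum((v - mu) ** 2 for v in values) / n) ** 0.5
    skew = sum(((v - mu) / sigma) ** 3 for v in values) / n
    kurt = sum(((v - mu) / sigma) ** 4 for v in values) / n - 3  # 0 for a Gaussian
    return mu, sigma, skew, kurt

def outlier_months(counts):
    """Indices (0 = Jan) of months outside the interval ]mu - 2*sigma, mu + 2*sigma[."""
    mu, sigma, _, _ = moments(counts)
    return [i for i, c in enumerate(counts) if not (mu - 2 * sigma < c < mu + 2 * sigma)]

# Hypothetical monthly counts with one extreme month (July)
counts = [50, 48, 55, 40, 35, 47, 95, 52, 65, 50, 45, 38]
extremes = outlier_months(counts)  # July falls above the upper limit
```

A single inflated month is enough to produce both a positive skewness and a value above µ + 2σ, mirroring the "few extreme values fall above the upper limit" observation made for the Tables.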
Finally, notice that all χ² values reported in Table 1 - Table 4 are much larger than the 95% critical value: they markedly allow one to reject the null hypothesis, i.e., a uniform distribution, in each examined case. Thus a monthly effect exists beyond statistical errors for all N_s, N_a, N_r and N_dr cases.

Entropy Data analysis
In the case of Entropy data, the CV is usually low, and much lower than in the case of JSCS. The skewness and kurtosis are not systematically positive or negative. The number of outliers outside the confidence interval is also "not negligible"; this is hinted at by the number of maximum and minimum values falling outside the confidence interval, yet "not too far" from the relevant interval border. Nevertheless, this implies that the distribution behaviors are influenced by the number of data points, to a larger extent for Entropy than for JSCS.
Nevertheless, notice that all χ² values reported in Table 5 - Table 8 are also much larger than the 95% critical value: they markedly allow one to reject the null hypothesis, i.e., a uniform distribution, in each examined case. A monthly anomaly effect exists beyond statistical errors for N_s and N_a; it is weaker for the N_r and N_dr cases. The large χ² values obviously point to distinguishable peaks and dips, thereby markedly supporting the view of a monthly bias effect for N_s and N_a.

Discussion
Let us first recall that the journals examined here have different aims; one is a specialized journal, the other an interdisciplinary journal. To our knowledge, this is the first time that a journal with such "broadness" is considered within the question of monthly bias. It seems that one should expect an averaging effect due to the varied constraints on research schedules pertaining to different topics and data sources. One subquestion is whether focused research influences the timing of paper submission, and later acceptance (or rejection). One would expect more bias in the JSCS case than in the Entropy case. Comparing journals (in psychology) with different "specializations", Shalvi et al. [4] had indeed found different behaviors. Let us observe what anomalies are found in the present cases.

JSCS
Comparing months in 2012, 2013 and 2014, it can be noticed that the most similar months (those least changing position in the decreasing order of "importance") are Dec., May and June for the lowest submission rate, while Sept. and July remain at the top of the month list, for the highest submission rate; see the figures. A specific deduction seems implied: there is a steady academic production of papers right before and after holidays, but a quiet (production and) submission of papers before holidays. This finding of the relevance of July production is rather similar to that found for most other journals, except PSPB [4].
Concerning the May dip anomaly, one may recall that in most countries (including Serbia), lectures and practical work at faculties end by June; since many authors (professors, assistants) are very engaged with students at that time, May is probably not a month in which they are focused on writing papers; they rather "prefer" finishing regular duties. In fact, corroborating this remark, it has been observed that most papers submitted to JSCS come from academic researchers [8].
A huge peak in January 2013 is intriguing. We searched for whether something special occurred around January 2013; it was checked that the submission system worked properly: there was no special clogging a month before. Moreover, there were no special invitations or calls for articles for a special issue. The peak can therefore only be correlated to that found for PS. From such a concordance, it seems that more quantitative correlation aspects could be searched for through available data.
Notice that on a month rank basis, for 2013 and 2014, the Kendall τ coefficient is −0.0303 for submitted papers, but −0.3030 for accepted papers; concerning the correlation between the cumulated N_s and N_a, the Kendall τ coefficient is −0.2121.
Two other points related to JSCS are discussed in Sect. 5.1 and 6: (i) the possible influence of the desk rejection policy, a conjecture of Shalvi et al. [4] for distinguishing patterns, and (ii) the acceptance and rejection rates, which are tied to the submission patterns, but also pertain to the "entrance barrier" (editor load mood) conjecture proposed by Schreiber [5].

Entropy
In the case of Entropy, the cumulated measure (over the 3 years examined here) points to more frequent submission in December, and a big dip in August. From a more general viewpoint, more papers are submitted during the last 3 months of the year. A marked contrast occurs for the accepted papers, for which a wide dip exists over 4 months, from June till September. The discussion of desk rejection and of the better chance of acceptance is also found in Sect. 5.1 and 6.
Notice that for the correlation between the cumulated N_s and N_a, the Kendall τ coefficient is 0.4242.
Finally, comparing the cumulated numbers of submitted and accepted papers to JSCS and to Entropy, and ranking the months accordingly, the Kendall τ coefficients are weakly negative: −0.333 and −0.1818, respectively.
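The Kendall τ rank correlations quoted above compare the orderings of the 12 months in two series; a minimal τ-a implementation (no tie correction), applied to two purely hypothetical month rankings, reads:

```python
def kendall_tau(x, y):
    """Kendall tau-a for two equal-length sequences without ties."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1    # pair ordered the same way in both series
            elif s < 0:
                discordant += 1    # pair ordered oppositely
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical month ranks (1 = busiest month ... 12 = quietest) for two years
ranks_year1 = [1, 4, 2, 6, 11, 9, 3, 8, 5, 7, 10, 12]
ranks_year2 = [2, 5, 1, 8, 10, 12, 4, 6, 3, 9, 7, 11]
tau = kendall_tau(ranks_year1, ranks_year2)
```

τ = +1 for identical rankings and −1 for fully reversed ones, so the small negative values reported in the text indicate nearly unrelated (slightly opposed) month orderings.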

Seasonal desk rejection by editor
Often controversial or scorned, the desk rejection patterns at JSCS and Entropy can now be discussed; Table 4 and Table 8 provide the relevant data, respectively. Notice that for either JSCS or Entropy we do not discuss the reasons why editors (and reviewers) reject papers; these reasons are outside the present considerations; see [9,10,11] for some information. Let us consider JSCS first. It can be observed that "only" 160/596 ≈ 27% of papers are desk rejected; this is interestingly compared to the ("many") papers rejected after peer review: 325/596 ≈ 0.55 for JSCS; the ratio is ∼ 1/2. The highest desk rejection rate occurs for papers submitted in Nov., while the lowest is for those submitted in May. Conclusions: there seems to be no holiday effect on the editorial workflow, as the months most often containing holidays (January, July and August) exhibit no special statistical anomaly, with respect to either submission or decision rate as compared to other months, for JSCS. Yet, the χ² is quite large (∼16.55; see Table 4). Thus, the seasonal effect might have another origin. The Entropy N_dr data distribution is even more uniform (χ² ∼ 6.52; see Table 8). If any, some seasonal effect on N_dr might occur during winter time.

Entrance barrier editor load effect
Schreiber [5] considers that an entrance barrier can be set up by editors due to their work load. We understand such a bias as resulting from an accumulation of submitted papers at some time thereafter correlated to a large rate of desk rejection. One can without much restriction assume that the correlation has to be observed for zero month-time lag, since both journals are usually prone to replying quickly to authors.
A visual comparison of the correlation between the number of desk rejected papers and the number of submitted papers during a given month, distinguishing 2013 from 2014 for JSCS, and 2014, 2015 and 2016 for Entropy, is shown in Fig. 4. For JSCS, the number of desk rejected papers is roughly proportional to the number N_s during a given month, ∼25%, a value already noticed, except near N_s ∼ 30, when N_dr can be as large as 30-50%. However, both in fall 2013 and in spring-summer 2014, there are months for which N_s is large but N_dr is low, casting doubt on an editor load barrier effect.
For Entropy, it turns out that there are two clusters, separated by borders at N_s ∼ 70 and N_dr ∼ 20. When N_s ≥ 70, the number of desk rejected papers increases markedly. That was surely the case in 2015.
Conclusions: JSCS or Entropy editors may raise some entrance barrier due to overload, whatever the season.

Optimal submission month for later paper acceptance

The above data and discussion on the number of papers is relevant for editors and for the automatic handling of papers. Of course, this holds partially true as well for authors, who do not want to overload editorial desks with many submissions at a given time, since authors expect a rather quick (and positive) decision on their submission. However, another point is of great interest for authors, somewhat bearing on the reviewer and desk editor mood. The most relevant question regarding a possible seasonal bias, for authors, is whether a paper has a greater chance to be accepted if submitted during a given month. Thus, the probability of acceptance, the so-called "acceptance rate", is a relevant variable to be studied. The relative number (i.e., monthly percentage) of papers accepted or rejected after submission in a specific month is easily obtained from the figures. The months (mo) can be ranked, e.g., in decreasing order of importance, according to such a relative probability (thereafter called p_a) of having a paper accepted if submitted in a given month (m) to JSCS or to Entropy in given years; see Table 9. One can easily obtain the corresponding p_r of rejected papers; see Table 10. This holds true for any yearly time series leading to some p_a. One could also consider the corresponding cumulated data over each specific time interval. A comment on the matter is postponed to the Appendix.
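The monthly probabilities p_a and p_r, and the ranking of months used in Table 9 and Table 10, can be sketched as follows; the counts are hypothetical, and withdrawals are ignored for simplicity.

```python
def monthly_rates(submitted, accepted, rejected):
    """Return (p_a, p_r) per month, relative to that month's submissions."""
    p_a = [a / s for a, s in zip(accepted, submitted)]
    p_r = [r / s for r, s in zip(rejected, submitted)]
    return p_a, p_r

def rank_months(p):
    """Month indices (0 = Jan) sorted by decreasing probability, as in Table 9."""
    return sorted(range(len(p)), key=lambda m: p[m], reverse=True)

# Hypothetical monthly counts, Jan..Dec
submitted = [60, 50, 55, 48, 40, 47, 70, 52, 65, 50, 45, 58]
accepted  = [35, 27, 25, 22, 18, 20, 28, 22, 30, 24, 20, 20]
rejected  = [s - a for s, a in zip(submitted, accepted)]  # ignoring withdrawals here

p_a, p_r = monthly_rates(submitted, accepted, rejected)
diff = [a - r for a, r in zip(p_a, p_r)]  # the p_a - p_r signal of Fig. 5
best_month = rank_months(p_a)[0]          # 0 = January in this illustration
```

Note that p_a − p_r is positive exactly when more than half of a month's (non-withdrawn) submissions are eventually accepted, which is the reading used when comparing JSCS and Entropy below.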

JSCS case
The relevant percentage differences between the numbers of accepted and rejected papers at JSCS in 2013 and 2014 are given in Fig. 5. From this difference-in-probability perspective, it does not seem advisable for authors to submit their paper to JSCS in March or December. They should rather submit their papers in January, with some non-negligible statistical chance of acceptance for submissions in February or October.

Entropy case
For Entropy, an equivalent calculation of p_a − p_r can be made, from aggregating the data in Fig. 2 over a 12-month interval, leading to Fig. 5. The best percentage of accepted papers occurs if the papers are submitted from January till May (with a steady increase, in fact) and in October and November; in contrast, the percentage of papers submitted in December is the largest of the year, while the probability of acceptance is the lowest for such papers. Moreover, a marked dip in acceptance probability occurs if the papers are submitted during the summer months [June-Sept.], as seen in Fig. 5, suggesting that authors avoid such months for submission to Entropy.

Warnings and Discussion
For fully testing seasonal effects, one might argue that one should correlate the acceptance/rejection outcome to the hemisphere, and/or to the nationality of authors, and observe the influence of co-authors 4.
We apologize for not having searched for the affiliations (in the southern or northern hemisphere, since seasons differ) of authors submitting to Entropy; we expect that such a "hemisphere effect", if it exists, is hidden in the statistical error bar of the sample, 1/√N_s ∼ 4%. The nationalities of authors (and reviewers) of JSCS in the period Nov. 2009 - Oct. 2014 have been discussed by Nedic and Dekanski [8]; see Fig. 3 and Fig. 2 respectively in [8]. For completeness, let us mention that data on papers submitted, accepted, rejected, or withdrawn at JSCS, from mainly Serbian authors and from outside Serbia, in given years, can be found in Table 11. From such a reading, it appears that JSCS editors are fair, not biased in favor of or against papers whose corresponding author is from Serbia.
At this level, more importantly, a comparison resulting from the observation of Fig. 5 points to a marked difference between a specialized journal and a multidisciplinary one, at least from the editorial aim and the peer reviewers' points of view. The difference between the probability of acceptance and that of rejection, on a monthly basis, i.e., p_a − p_r, has an astoundingly different behavior: the p_a − p_r value is positive over only 3 months for JSCS, but is always positive for Entropy. This can be interpreted in terms of peer review control. Recall that the percentage of desk rejection is approximately the same for JSCS and Entropy, but the peer review rejection rate is much higher (∼55%) for JSCS, in contrast with a ∼20% reviewer rejection rate for Entropy. In terms of seasonal effect, one has a positive value in January (and February) for JSCS, but a positive effect in the spring and fall months for Entropy. We consider that such a spread is likely due to the multidisciplinary nature of the latter journal, reducing the strong monthly and seasonal bias on the fate of a paper.

Conclusion
Two remarks seem of interest for attempting some understanding of these different findings. On one hand, statistical procedures (either χ² or confidence interval bounds µ ± 2σ) need not lead to identical conclusions: both can point to deviations, but the former indicates the presence (or absence) of peaks and dips with respect to the uniform distribution, while the latter points to statistical deviations when the distribution of residuals is expected to be Gaussian-like. In the latter case, an extension of the discussion including skewness and kurtosis is mandatory [12]. We have pointed out such departures from Gaussianity. The second remark is that the monthly and/or seasonal bias, in view of the contrasts found here between the chemistry journal and the multidisciplinary one, might not be mainly due to desk rejection effects, as proposed by Shalvi et al. [4], but rather pertains to the peer reviewers having different statuses within the journals' spread of aims.
In so doing, by considering two somewhat "modest, but reliable" journals 5, we have demonstrated seasonal effects in paper submission and in subsequent acceptance. The seasonal bias effect is stronger in the specialized journal. Recall that one can usually read when an accepted paper was submitted, but the missing set, the submission dates of rejected papers, is usually unknown. Due to our editorial status, we have been able to provide a statistical analysis of such information. Our findings and behavioral hypotheses markedly take the scientific work environment into account, and point, in the present cases, to seasonal bias effects mainly due to authors in the submission/acceptance stage of the peer review process.
In order to go beyond our observations, we are aware that more data must be made available by editors and/or publishers. Avoiding debatable hypotheses on the quality of papers, ranking of journals, fame of editors, open access publicity, submission fees, publication charges, and so on, we may suggest more work on time lag effects, beyond Mrowinski et al. [13,14], in order to better pinpoint the role of both editors' and reviewers' quality and concern. In so doing, it might be wise to consider some ARCH-like modeling of seasonal effects, as has been done for observing the day-of-the-week effect in paper submission/acceptance/rejection in peer review journals [15]. This suggestion of ARCH econometric-like modeling is further supported by arguments in related bibliometric studies. Indeed, one could develop a Black-Scholes-Schrödinger-Zipf-Mandelbrot model framework for studying seasonal effects, instead of the coauthor core score as in [16].

Table 9: Months (mo) ranked in decreasing order of importance according to the probability p_a. (The statistical characteristics in the last two columns slightly differ from each other, because the time span is taken as N. mo = 12 or 36 months, respectively.)