
Sequential order as an extraneous factor in editorial decision


Abstract

Does the sequential order in which manuscripts are submitted to an academic journal have any effect on the editorial decision? As an extraneous factor, the order of submission has no relation to a manuscript’s content. However, an editor facing a list of new submissions could be subject to decision fatigue or order bias, which would in turn affect the editorial decision. Empirical analysis of nearly 10,000 (first) submissions to a leading academic journal shows that manuscripts that were submitted earlier on a given day were up to 7% more likely to be desk rejected, with no order effect on the likelihood of rejection after peer review.


Notes

  1. The incentive, apparently, is sufficiently strong. Some editors attempt to ‘game’ the processing time statistics by issuing ‘reject and resubmit’ decisions, which results in revised submissions being recorded as new submissions. This reduces the average time from submission to publication, with the added advantage of an increased rejection rate, which makes the journal appear more selective.

  2. This and other editorial software-related points were clarified through an e-mail exchange with the software provider’s technical support team and an analysis of their technical documentation.

  3. The format was changed in 2009 by reordering parts of the manuscript ID and using a two-digit year representation.

  4. For subjects with daily checking of responses, the submission order can also serve as a good proxy for the order in which manuscripts were viewed by the editor.

  5. Dietrich (2008) discusses an intriguing possibility of geography-determined bias due to differences in time zones, but finds no empirical support for this hypothesis. See “Appendix” for additional discussion of the rank exogeneity.

  6. A Hausman specification test suggests that the random effects estimator is consistent, but given the more restrictive identifying assumptions of random effects, the estimation uses fixed effects. The random effects estimates are very similar in sign and significance, with a slightly smaller magnitude; see the sample table included in the “Appendix”.

  7. The data used in this paper does not allow distinguishing decision fatigue from a weaker order effect in which the prominent positions are just the first one or two entries.

  8. Shugan (2007) provides an example of the editorial calculus on p. 594. Assuming, for example, that an article in a journal gets two citations on average, “... missing one highly cited article causes substantial opportunity cost. In this example, publishing one article that gets 300 citations compensates for publishing 149 articles that get zero citations”.

  9. There is no information on the content of referee recommendations, but it is assumed that the editor follows their advice. Hence, the referee recommendations can be proxied by the final decision on the manuscript.

  10. Assuming that the first and last positions are the prominent ones.

  11. Variations to the backlog accumulation procedure were also considered, including a model of the editor’s working days that assumed a fixed work pattern within any year–month combination. For example, if the editor worked on a Tuesday in January 2010, then every Tuesday in that month was considered a working day, even if no activity was observed in the decision data on that day. These modifications did not lead to any meaningful changes in the estimates.

References

  • Ausloos, M., Nedic, O., & Dekanski, A. (2016). Day of the week effect in paper submission/acceptance/rejection to/in/by peer review journals. Physica A: Statistical Mechanics and its Applications, 456, 197–203.

  • Ausloos, M., Nedic, O., Dekanski, A., Mrowinski, M. J., Fronczak, P., & Fronczak, A. (2017). Day of the week effect in paper submission/acceptance/rejection to/in/by peer review journals. II. An ARCH econometric-like modeling. Physica A: Statistical Mechanics and its Applications, 468, 462–474.

  • Berger, J. (2016). Does presentation order impact choice after delay? Topics in Cognitive Science, 8, 670–684. doi:10.1111/tops.12205.

  • Danziger, S., Levav, J., & Avnaim-Pesso, L. (2011). Extraneous factors in judicial decisions. Proceedings of the National Academy of Sciences, 108(17), 6889–6892. doi:10.1073/pnas.1018033108. http://www.pnas.org/content/108/17/6889.abstract

  • Dietrich, J. (2008). Disentangling visibility and self-promotion bias in the arXiv: astro-ph positional citation effect. Publications of the Astronomical Society of the Pacific, 120(869), 801.

  • Feenberg, D. R., Ganguli, I., Gaule, P., & Gruber, J. (2017). It’s good to be first: Order bias in reading and citing NBER working papers. Review of Economics and Statistics, 99(1), 32–39.

  • Gans, J. S., & Shepherd, G. B. (1994). How are the mighty fallen: Rejected classic articles by leading economists. Journal of Economic Perspectives, 8(1), 165–179.

  • Hamermesh, D. S. (2017). Citations in economics: Measurement, uses and impacts. Journal of Economic Literature (forthcoming). https://www.aeaweb.org/articles?id=10.1257/jel.20161326&&from=f.

  • Johnston, S. C., Lowenstein, D. H., Ferriero, D. M., Messing, R. O., Oksenberg, J. R., Hauser, S. L., et al. (2007). Early editorial manuscript screening versus obligate peer review: A randomized trial. Annals of Neurology, 61(4), A10–A12.

  • Kwan, J., Stein, L., Delshad, S., Johl, S., Park, H., Martinez, B., et al. (2016). Does “decision fatigue” impact manuscript acceptance? An analysis of editorial decisions by The American Journal of Gastroenterology. The American Journal of Gastroenterology, 111, 1511–1512.

  • McAfee, R. P. (2010). Edifying editing. The American Economist, 55(1), 1–8.

  • Mrowinski, M. J., Fronczak, A., Fronczak, P., Nedic, O., & Ausloos, M. (2016). Review time in peer review: Quantitative analysis and modelling of editorial workflows. Scientometrics, 107(1), 271–286.

  • Shalvi, S., Baas, M., Handgraaf, M. J., & De Dreu, C. K. (2010). Write when hot-submit when not: Seasonal bias in peer review or acceptance? Learned Publishing, 23(2), 117–123.

  • Shugan, S. M. (2007). The editor’s secrets. Marketing Science, 26(5), 589–595. http://www.jstor.org/stable/40057081.

  • Stewart, A. F., Ferriero, D. M., Josephson, S. A., Lowenstein, D. H., Messing, R. O., Oksenberg, J. R., et al. (2012). Fighting decision fatigue. Annals of Neurology, 71(1), A5–A15.


Acknowledgements

I would like to thank the technical support staff, Aleksandar Crnjanski and Dusica Zegarac Dougherty (Clarivate Analytics’ ScholarOne Manuscripts) and Radha Ganesan (Elsevier’s Editorial System), for clarifications on the editorial software, and (in alphabetical order) Daniel S. Hamermesh, James Hartley, Frank Pisch, Arthur Robson, as well as an editor and two anonymous reviewers for comments and suggestions.

Author information

Correspondence to Sultan Orazbayev.

Appendix: robustness of the results

Exogeneity of rank and test for specification errors

A possible explanation for the main results is that there is another variable that simultaneously results in a higher rank and a lower quality of the submitted manuscript (in turn leading to a higher likelihood of desk rejection). In a different context, Dietrich (2008) suggested that the geographic location of the submitting authors could lead to a correlation between their position on a list and other factors affecting the (perceived) quality of their work. For example, if the journal editor is based in Kazakhstan, then submissions from Europe or the United States, which on average produce manuscripts of ‘higher quality’ (at least in terms of desk rejection probability), will be received late in the day. In this case, there would be a correlation between higher rank and lower likelihood of desk rejection.

The data from Journal X does not include manuscript-specific information, such as the number of authors, their identities (which could be linked to prior work) or their location. As a result, it is not possible to control for a geographical bias in the ranking. However, two steps were taken to address this concern.

First, for a small number of manuscripts that were eventually accepted (after multiple stages of review), it is possible to use the submission and acceptance dates to match the computer-generated first submission IDs to the Digital Object Identifiers of the published manuscripts. After matching DOIs to (accepted) manuscript IDs, it is possible to examine whether the sequential rank on the day of first submission has any correlation with proxies for quality: the publication’s citation count or the number of downloads. Table 5 shows that sequential rank on the first submission day is not a predictor of manuscript quality.

Table 5 Bibliometric measures of (published) manuscript quality and the sequential order on the day of first submission
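A minimal sketch of this matching step, assuming hypothetical data frames and column names (`manuscript_id`, `submission_date`, `acceptance_date`, `doi` and `citations` are illustrative, not the journal’s actual fields): accepted manuscripts are joined on the (submission date, acceptance date) pair, and only unambiguous one-to-one matches are kept.

```python
import pandas as pd

# Hypothetical editorial records of accepted first submissions.
editorial = pd.DataFrame({
    "manuscript_id": ["JX-101-10", "JX-245-10"],
    "submission_date": ["2010-03-01", "2010-06-15"],
    "acceptance_date": ["2010-11-20", "2011-02-03"],
})

# Hypothetical published-article metadata carrying DOIs and citations.
published = pd.DataFrame({
    "doi": ["10.1000/xyz-1", "10.1000/xyz-2"],
    "submission_date": ["2010-03-01", "2010-06-15"],
    "acceptance_date": ["2010-11-20", "2011-02-03"],
    "citations": [12, 3],
})

# Join on the (submission date, acceptance date) pair.
matched = editorial.merge(published, on=["submission_date", "acceptance_date"])

# Keep only unambiguous matches: drop any date pair shared by
# more than one manuscript.
matched = matched.drop_duplicates(subset=["submission_date", "acceptance_date"],
                                  keep=False)
print(matched[["manuscript_id", "doi", "citations"]])
```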

The second step is to test for a potential specification error using a link test. If the model is properly specified, then the square of the predicted value should have no explanatory power. This is confirmed using the ‘linktest’ command in Stata after each estimation: all squared predicted values are insignificant. Prediction tables could not be generated given the fixed effects specification.
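For readers outside Stata, a minimal sketch of the same idea in Python (the data is simulated; the two-stage construction mirrors the logic of ‘linktest’, but this is an illustration rather than the paper’s code):

```python
import numpy as np
import statsmodels.api as sm

# Simulated binary outcome with two predictors (purely illustrative).
rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=(n, 2))
y = (x @ np.array([0.8, -0.5]) + rng.logistic(size=n) > 0).astype(int)

# Stage 1: fit the model and recover the linear predictor ("hat").
fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
hat = fit.fittedvalues  # X @ beta on the linear-predictor scale

# Stage 2: refit on hat and hat squared; a significant squared term
# signals a specification error (this mirrors Stata's 'linktest').
link = sm.Logit(y, sm.add_constant(np.column_stack([hat, hat ** 2]))).fit(disp=0)
print(link.summary())  # inspect the p-value on the squared term (x2)
```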

Measurement of rank: accuracy of manuscript ID-based approach

The rank variable is imputed from the manuscript ID-based ordering. This approach is a good proxy for the time of submission at long time scales: at the annual frequency, the correlation between the manuscript ID-based rank and the time of submission-based rank is 0.99 for both Pearson and Spearman measures. However, as the time scale gets shorter, the accuracy of this approach decreases. This is caused by the following feature of the editorial software: whenever an author modifies their submission, the original manuscript ID is retained, but the date of ‘first submission’ is updated to the modification date. For example, if an author submitted a manuscript in August 2010 and was assigned the ID ‘JX-001-10’, but decided to modify the submission in January 2011, then the manuscript retains its ID with a ‘new’ submission date in January 2011.
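A sketch of how the two orderings can be compared at a chosen time scale (the data frame, column names and the `rank_agreement` helper are hypothetical, not from the paper):

```python
import pandas as pd
from scipy.stats import pearsonr, spearmanr

# Hypothetical submission log: true timestamps and the sequential number
# embedded in the manuscript ID (modifications can scramble the order).
df = pd.DataFrame({
    "submitted_at": pd.to_datetime([
        "2010-01-04 09:15", "2010-01-04 14:02", "2010-01-06 11:30",
        "2010-01-11 08:45", "2010-01-13 16:20", "2010-01-14 10:05",
    ]),
    "id_number": [101, 103, 102, 104, 106, 105],
})

def rank_agreement(df, freq):
    """Correlate time-based and ID-based ranks within each period."""
    period = df["submitted_at"].dt.to_period(freq)
    time_rank = df.groupby(period)["submitted_at"].rank()
    id_rank = df.groupby(period)["id_number"].rank()
    return pearsonr(time_rank, id_rank)[0], spearmanr(time_rank, id_rank)[0]

for freq in ["Y", "W"]:  # annual and weekly time scales
    print(freq, rank_agreement(df, freq))
```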

Since the sequential nature of manuscript IDs is exploited as a proxy for the submission order, such submission modifications introduce noise into the measurement of rank at high-resolution time scales. For example, on a weekly scale, the correlation between time of submission-based and manuscript ID-based ranks is 0.92 for Pearson and 0.90 for Spearman. Without knowing the exact submission time, it is not possible to check how accurate this approach is on a daily scale. However, since the correlation decreased by only about 10% when the time scale was shortened by a factor of roughly 50 (annual to weekly), and assuming that the decline in correlation is proportional, the accuracy on a daily scale is estimated to be well above 0.8 for the Spearman correlation. This allows using the manuscript ID-based approach as a reasonable proxy for the actual time of submission.
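One way to make the proportional-decline assumption concrete, treating the drop in correlation as proportional to the logarithm of the shortening factor (this functional form is an illustration, not the paper’s stated formula):

$$\Delta_{\mathrm{weekly}\rightarrow \mathrm{daily}} \approx \Delta_{\mathrm{annual}\rightarrow \mathrm{weekly}} \times \frac{\ln 7}{\ln 52} \approx 0.09 \times 0.49 \approx 0.04,$$

which would put the daily Spearman correlation at roughly \(0.90 - 0.04 \approx 0.86\), consistent with the ‘well above 0.8’ estimate.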

Subject heterogeneity

Submissions to different subject fields vary in volume: some subjects are more popular and, as a result, have a larger number of editors to handle the submissions. To check whether the order effect varies with the number of subject editors, Eq. 1 was modified by interacting the subject field dummies \(I_j\) with the manuscript rank:

$$\begin{aligned} {\text{Prob}} (x_{ij,t}=1)= \Lambda \left( \alpha + \beta \times I_j \times R_{ij,t}+ \gamma _{j,t} \right) . \end{aligned} \qquad (2)$$
Fig. 3: Marginal effect of a change in rank. Notes: the figure shows the marginal effects of the rank measures (lines are given in the following order: rank, first rank, last rank); each marginal effect was calculated by running ‘xtlogit, fe’ with one rank measure at a time; the sample includes only subjects with a large number of submissions; thick and thin lines represent the 90 and 95% confidence intervals, respectively.

Figure 3 shows the marginal effects of the rank measures across subjects. There is some heterogeneity in the rank effects, with strong patterns for subjects with 3–5 editors (subjects 6, 8 and 12). Subjects with 1–2 editors have low submission volumes, which can partly explain the large confidence intervals. Note also that the default setting in the editorial software is to sort incoming manuscripts in the order of submission, but an editor can reverse that order with a single click. Individual editors who do not need to coordinate with colleagues could have specific preferences in processing the submissions, which would reduce the accuracy of the rank measure (e.g. if the editor checks only once or twice a week).
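A sketch of how Eq. 2 can be estimated, using statsmodels’ conditional (fixed effects) logit as a stand-in for Stata’s ‘xtlogit, fe’; the simulated panel and variable names are hypothetical:

```python
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

# Hypothetical panel: one row per first submission, grouped into
# subject-day cells (the fixed effect gamma_{j,t} in Eq. 2).
rng = np.random.default_rng(1)
n, n_subjects = 2000, 4
df = pd.DataFrame({
    "subject": rng.integers(0, n_subjects, n),
    "day": rng.integers(0, 250, n),
})
df["rank"] = df.groupby(["subject", "day"]).cumcount() + 1
df["desk_reject"] = (rng.random(n) < 0.30 + 0.02 * df["rank"]).astype(int)
df["cell"] = df["subject"].astype(str) + "-" + df["day"].astype(str)

# Interact the subject dummies with rank (the beta * I_j * R_ijt term);
# the cell fixed effects are conditioned out by the estimator.
X = pd.get_dummies(df["subject"], prefix="subj").mul(df["rank"], axis=0)
model = ConditionalLogit(df["desk_reject"], X.astype(float), groups=df["cell"])
print(model.fit().summary())  # one rank coefficient per subject
```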

Measurement of rank: backlog accumulation

Each editor is likely to have their own schedule and time preferences, so the daily rank might be an inaccurate measure of the order in which manuscripts are viewed, due to the accumulation of manuscript backlogs. The editor’s typical schedule could also vary over the sample period, since the sample spans several academic terms. To visualise editorial activity, the decisions on all submissions (including revisions) were aggregated to the subject field–day level, and the relative intensity of activity on each day was calculated; see Fig. 4. Panel (a) shows that a subject with a large number of editors (needed to process a larger number of submissions) had activity spread fairly evenly over the working week, while panel (b), for a subject with just one editor, shows more concentrated activity. This approach relies crucially on the number and types of decisions: panel (c) shows the activity of a specific editor based on data for published manuscripts. That intensity is based only on decisions to accept, and hence is unlikely to fully capture all of the journal-related activities.

Fig. 4: Intensity of daily activity for selected subject fields and individual editors. a Subject 8 (5 editors). b Subject 15 (1 editor). c Editor A. Notes: the figures show imputed work patterns of editors based on the dates of decisions; light blue (light grey in black-and-white print) indicates days with no activity (during weeks with at least one active day), coloured lines (grey in black-and-white print) correspond to the intensity specified on the legend bar, and white spaces indicate weeks with no activity; in panels a and b the activity is based on information on all submissions (including revisions); in panel c the activity is derived from accepted manuscripts only (due to data limitations).
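A sketch of how such an intensity grid can be constructed (the decision log is simulated; the weekday-by-week layout follows the format of Fig. 4’s panels):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Simulated decision log: one row per editorial decision, dates only.
rng = np.random.default_rng(2)
decisions = pd.DataFrame({"date": pd.Timestamp("2010-01-04")
                          + pd.to_timedelta(rng.integers(0, 180, 400), unit="D")})

# Aggregate to daily counts and normalise to a relative intensity.
daily = decisions.groupby("date").size()
daily = daily.reindex(pd.date_range(daily.index.min(), daily.index.max()),
                      fill_value=0)
intensity = daily / daily.max()

# Lay the series out as a weekday-by-week grid.
grid = pd.DataFrame({"week": intensity.index.isocalendar().week.values,
                     "weekday": intensity.index.weekday,
                     "intensity": intensity.values})
heat = grid.pivot_table(index="weekday", columns="week", values="intensity")
plt.imshow(heat, aspect="auto", cmap="Greys")
plt.xlabel("week of year"); plt.ylabel("day of week")
plt.title("Relative intensity of editorial decisions")
plt.show()
```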

Using this decision activity as a proxy for the editor’s activity in processing new submissions, manuscript backlogs were estimated by accumulating first submissions over the editor’s inactive days. The resulting manuscript ranking is highly correlated with the daily ranking (\(R_i\)): Pearson 0.77, Spearman 0.61. It was expected that this adjusted ranking would improve the estimation for single-editor subjects; however, the rank variable is not significant when the sample is restricted to single-editor subjects. This could indicate that the decision data do not accurately reflect the timing of first-submission review in the single-editor subject areas, so that the rank variable is measured with considerable noise. It could, of course, also indicate that, due to the low submission volume, a single editor has more flexibility in integrating submission review into their existing work schedule, reducing any decision fatigue or order effect (see footnote 11).
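A minimal sketch of the backlog accumulation step, assuming a hypothetical decision log from which active days are inferred: submissions arriving on inactive days are carried forward and ranked within the pile on the editor’s next active day.

```python
import pandas as pd

# Hypothetical inputs: arrival dates of first submissions, and the days
# on which any editorial decision was recorded (a proxy for active days).
df = pd.DataFrame({"submitted": pd.to_datetime(
    ["2010-01-04", "2010-01-05", "2010-01-05", "2010-01-06", "2010-01-08"])})
active_days = pd.to_datetime(["2010-01-04", "2010-01-06", "2010-01-08"])

# Carry each submission forward to the editor's next active day.
df["viewed_on"] = df["submitted"].map(
    lambda d: next((a for a in active_days if a >= d), pd.NaT))

# Backlog-adjusted rank: position within the pile on each active day.
df["adj_rank"] = df.groupby("viewed_on").cumcount() + 1
print(df)
```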

Large number of submissions

During the sample period the journal had three special issues and a call for papers for another one (whose deadline fell beyond the sample period). A potential concern is that the rank effects could be driven by a number of high-quality submissions that were prepared in advance and submitted well before the deadline, while manuscripts of lower quality were submitted just before or on the deadline. This would introduce a correlation between manuscript quality and imputed rank. The data does not allow distinguishing a special issue submission from a regular submission; moreover, special issues could include submissions from multiple subjects. To reduce the potential bias caused by these episodes of large submission inflows, two sample restrictions were checked. The first restriction excluded all submissions in the months containing a special issue deadline (excluding submissions on the deadline day only could bias the sample by excluding potentially lower-quality submissions). The second restriction excluded all submissions on days with more than 5 submissions. The results from these estimations are consistent with the main results.
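A sketch of the two restrictions, using a simulated submission log and a hypothetical list of special issue deadlines:

```python
import pandas as pd

# Simulated submission log (one row per first submission) and a
# hypothetical list of special issue deadlines.
df = pd.DataFrame({"submitted": pd.to_datetime(
    ["2010-03-15", "2010-03-31", "2010-04-02"] + ["2010-05-10"] * 6)})
special_issue_deadlines = pd.to_datetime(["2010-03-31"])
deadline_months = {(d.year, d.month) for d in special_issue_deadlines}

# Restriction 1: drop every submission in a month containing a deadline
# (dropping only the deadline day itself could bias the sample).
keep = ~df["submitted"].map(lambda d: (d.year, d.month) in deadline_months)
sample1 = df[keep]

# Restriction 2: drop all submissions on days with more than 5 arrivals.
daily_n = df.groupby(df["submitted"].dt.normalize())["submitted"].transform("count")
sample2 = df[daily_n <= 5]
print(len(df), len(sample1), len(sample2))
```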

Table 6 Restricting the sample to 2–5 manuscripts per day
Table 7 Removing submissions during months with a deadline for a special issue

Random effects specification

Using a random effects specification moderates the magnitude of the coefficients, but the signs and significance are similar to the fixed effects specification (Tables 6, 7, 8).

Table 8 Estimation with random effects specification


Cite this article

Orazbayev, S. Sequential order as an extraneous factor in editorial decision. Scientometrics 113, 1573–1592 (2017). https://doi.org/10.1007/s11192-017-2531-7

