Looking at the fate of manuscripts previously rejected by a journal is important for evaluating the quality of peer review and a journal’s position and prestige in the publishing market (e.g., Bornmann 2011), for a number of reasons. First, considering the common features of rejected manuscripts can help to identify authors’ possible misperceptions of the journal’s scope and reveal coordination problems between authors and journals, such as authors misunderstanding the target of their work or journals communicating policy signals that confuse authors. Second, this analysis can reveal evidence of different sources of editorial or reviewer bias (e.g., Siler et al. 2015; Weller 2002). Third, while examining rejected manuscripts means focusing mainly on the selection side of peer review, tracing the trajectory of rejected submissions after the original peer review can show whether peer review also improves the quality of rejected manuscripts for future publication (e.g., Walker and Rocha da Silva 2015).
Previous studies of the fate of rejected manuscripts looked, first, at how rejection delays publication and at the relationship between the impact factor of the rejecting journal and that of the journal that eventually published the manuscript. The aim was to understand authors’ publication strategies, e.g., trying a highly ranked journal first and then successively less prestigious ones, and to estimate potential editorial bias. Some studies have also tried to estimate whether peer review helps authors to increase the quality of their rejected submissions by examining what authors modified when targeting subsequent journals.
For instance, in one of the pioneering works in the field, Chew (1991) examined the fate of 254 rejected manuscripts that had been submitted in the first 5 months of 1986 to the American Journal of Roentgenology, an influential journal of diagnostic radiology. He reconstructed the trajectory of these manuscripts 45–54 months after the journal’s rejection and found that the mean time lapse between rejection and later publication was 15 months and that most of the rejected manuscripts were published in journals with a lower impact factor. Ray, Berkwits, and Davidoff (2000) performed a similar study of 350 manuscripts that were rejected by the Annals of Internal Medicine in 1993–1994. Of these, 69% were eventually published, mostly in specialty journals of lower impact, after a mean of 18 months. They also found that time to publication had a weak negative correlation with the impact factor of the journal in which the article was published (correlation coefficient −0.15, p = 0.007). Similar results were found by Opthof et al. (2000) in a study of 644 manuscripts that were rejected by Cardiovascular Research in 1995–1996, by Nemery (2001) in a study of Occupational and Environmental Medicine in 1995–1997, and by Liesegang, Shaikh, and Crook (2007) in a study of the American Journal of Ophthalmology in 2002–2003. Investigating 366 manuscripts that were rejected by the Journal of Vascular and Interventional Radiology in 2004, Silberzweig and Khorsandi (2008) found that by 2007, 58% had been published in other journals, with the initial rejection delaying publication by a mean of 15.5 months.
McDonald, Cloft, and Kallmes (2007) examined a sample of manuscripts that were rejected by the American Journal of Neuroradiology during 2004, considering submission type (i.e., major study, technical note or case report), publication delay, publishing journal type (i.e., neuroradiology, general radiology, or clinical neuroscience journal) and impact factor. They found that of the 554 rejected submissions, 315 (56%) were subsequently published in 115 different journals, with the Journal of Neuroradiology publishing the greatest number of articles (37 [12%]). The mean publication delay was 15.8 ± 7.5 months, and the mean impact factor of the journals subsequently publishing rejected manuscripts was 1.8 ± 1.3, compared with 2.5 for the American Journal of Neuroradiology. Only 24 manuscripts (7.5%) were subsequently published in journals with higher impact factors than the rejecting journal. A later study by the same authors (McDonald et al. 2009), using Scopus data on articles published in the American Journal of Roentgenology, showed that rejected articles that subsequently found a publication outlet received fewer citations than the average article in the rejecting journal, and that the number of citations received was correlated with the impact factor of the publishing journal. Citation counts were higher in the case of technical reports and when the publishing journals were close in subject matter to the American Journal of Roentgenology.
Wijnhoven and Dejong (2010) examined 926 manuscripts rejected by the British Journal of Surgery and found that 609 (65.8%) were published in 198 different journals, mostly in subspecialty surgical and non-surgical journals, with a mean time lapse of 13.8 months. Only 14 manuscripts (2.3%) were eventually published in journals with a higher impact factor than the British Journal of Surgery. Similar results were found by Khosla et al. (2011) in a study of 371 manuscripts that were rejected by Radiology in 2005–2006, although here the mean delay was 17.3 months, and by Hall and Wilcox (2007) in a retrospective online survey of a sample of authors rejected by Epidemiology in 2002. In general, authors reported that manuscripts rejected by the first journal were ultimately submitted to journals of lower impact, confirming the hypothesis that authors first try prestigious journals and then move to less prestigious ones.
An example of the analysis of potential editorial bias is Vinther and Rosenberg (2011), who found that publication of previously rejected submissions could be less likely if the rejecting journal was a non-English-language journal while the subsequent target was an English-language outlet. More recently, Holliday et al. (2015) looked at 500 manuscripts submitted to the International Journal of Radiation Oncology-Biology-Physics in 2010–2012 and tried to estimate whether rejected manuscripts had been penalized by bias related to the gender, country, academic status or prestige of the submitting authors. While they found no significant difference in acceptance rates by gender or academic status of the submitting authors, there were significant differences related to the submitting author’s country and h-index. By July 2014, 71.7% of the rejected manuscripts had been published in a PubMed-listed journal. Confirming previous results, the publishing journals had lower impact factors, and the published versions of the rejected manuscripts received fewer citations than articles published by the rejecting journal.
More interesting, especially for understanding whether peer review helps to increase the quality of rejected manuscripts for future publication, is Armstrong et al. (2008), who examined 489 manuscripts rejected by the Journal of the American Academy of Dermatology in 2004–2005. They looked at whether the authors of rejected manuscripts adopted, in their final publications, the changes suggested by the original journal’s reviewers. Among the 101 subsequently published manuscripts for which full texts were available, 82% of the authors incorporated at least one change suggested by the original reviewers, and these manuscripts were eventually published in journals with higher impact factors than those that did not incorporate any reviewer suggestions (p = 0.0305). A more in-depth study of Angewandte Chemie International Edition by Bornmann, Weymuth and Daniel (2010), who applied a content analysis to referee reports on 1899 manuscripts reviewed in 2000, confirmed a relation between the original peer review and the later publication of rejected manuscripts. While 94% of the 1021 rejected manuscripts were published more or less unchanged in another journal, previously rejected manuscripts were more likely to be published in journals with higher impact factors when reviewers had made no negative comments on important aspects of the submission, such as the relevance of the contribution and the research design.
However, given that evaluation and publication time delays are strongly field-dependent and that the publishing market is highly stratified and segmented from field to field, these studies may only be relevant to the practices and informal norms of research in medicine and related fields. It is not clear whether the findings are context-specific or reflect more general trends. Furthermore, these studies were constrained by a limited time frame, typically following papers for only a couple of years after rejection. This may be sufficient in fields such as medicine, but not in others, e.g., computer science, the social sciences and the humanities, where there are more types of publication outlet, including conference proceedings and books, and longer publication trajectories (e.g., Powell 2016).
To fill this gap, we reconstructed the fate of manuscripts rejected by the Journal of Artificial Societies and Social Simulation (from now on, JASSS), an open access, online interdisciplinary journal for the exploration and understanding of social processes by means of computer simulation. We examined 14 years of submissions so as to capture the longer publication trajectories that are typical in the social sciences. We analysed the types of publication that eventually resulted from rejected manuscripts and the journals in which later versions of some of these manuscripts were published. We then measured the impact factor of the journal that published each rejected manuscript and counted the citations that the article eventually received, and used these measures to estimate whether JASSS lost important contributions when it rejected manuscripts.
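As an illustration of the kind of comparison just described, the following minimal Python sketch shows how publication delay, impact factor gaps and citation counts might be computed from a table of rejected manuscripts. The file name, column names and the placeholder impact factor value are assumptions for illustration only; they do not describe our actual dataset or JASSS’s impact factor.

```python
# Illustrative sketch only: assumes a hypothetical CSV with one row per
# rejected manuscript and the columns named below.
import pandas as pd

JOURNAL_IMPACT_FACTOR = 1.0  # placeholder, not the rejecting journal's actual impact factor

df = pd.read_csv(
    "rejected_manuscripts.csv",
    parse_dates=["rejection_date", "publication_date"],
)

# Delay in months between rejection and eventual publication elsewhere
df["delay_months"] = (df["publication_date"] - df["rejection_date"]).dt.days / 30.44

# Whether the publishing journal has a higher impact factor than the rejecting journal
df["published_in_higher_if"] = df["publishing_journal_if"] > JOURNAL_IMPACT_FACTOR

summary = {
    "n_published_elsewhere": int(df["publication_date"].notna().sum()),
    "mean_delay_months": round(df["delay_months"].mean(), 1),
    "share_in_higher_if_journal": round(df["published_in_higher_if"].mean(), 3),
    "mean_citations": round(df["citations"].mean(), 1),
}
print(summary)
```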
The rest of the paper is structured as follows. The "Data" section presents our dataset, including data from the journal management system and data we extracted from other sources. The "Results" section presents the results. The "Discussion and conclusions" section discusses the main limitations of our study and suggests measures to make this type of analysis easier.