Background

Writing scientific papers is hard work, particularly for inexperienced authors. Publishing research is nevertheless crucial, because it spreads scientific knowledge and increases the recognition of researchers. In the past, when the submission process was carried out by post, publishing research usually took a long time. Nowadays, the rapid development of information technology and the spread of the internet should in principle reduce the delay between submission and publication through online submission (Govender et al. 2008; Kalcioglu et al. 2015). However, despite the increase in the number of journals publishing research, the workload of journals has risen, reflecting the growth in scientific output (Kalcioglu et al. 2015). As a result, both the rejection rate and the submission-to-publication times for accepted manuscripts are likely to increase in the future (Ellison 2002; https://www.authorea.com/users/2013/articles/36067-publication-cycle-a-study-of-the-public-library-of-science-plos/_show_article). This trend could be particularly problematic, because longer publication times postpone the diffusion of new knowledge and the implementation of new interventions.

Choosing the right journal is an early step in planning research, because journals differ greatly in scope, balance of topics, audience, quality and influence (Huth 1999). Journals also differ widely in how their editors decide what to accept and what to reject, though several criteria are likely to be applied by most editors: relevance of the paper, importance of the message, novelty, scientific validity and usefulness of the paper for the journal (Huth 1999). These many differences in editorial procedures make it difficult to predict the time needed to receive a response from a journal. Moreover, besides these factors related to editorial policy and journal workload, the quality of the study as well as characteristics of the author(s), the study and/or the journal could all influence publication speed. For example, although editors matter for publication speed, reviewers are probably responsible for most of the time a paper spends between submission and acceptance.

Publication speed can be viewed as a way to estimate publication efficiency and journal quality (Chen et al. 2013; Dióspatonyi et al. 2001; Amat 2008), and may be split into two parts: the time from first submission to acceptance (the acceptance time), a measure of the time needed for peer reviewing and revising manuscripts, and the time from acceptance to publication (the publication time per se), a measure of the time needed for editing, proofreading and publishing manuscripts (Chen et al. 2013). The acceptance time is considered the responsibility of the editor (time for decision, time for finding reviewers), the reviewers (time for peer reviewing) and the authors (time for revising), whereas the publication time is essentially the responsibility of the journal (priority given to the manuscript, number of manuscripts waiting for publication and production system) (Palese et al. 2013).

A number of bibliometric studies from a wide range of disciplines, including biomedicine, have tried to quantify acceptance and/or publication times (Govender et al. 2008; Kalcioglu et al. 2015; Chen et al. 2013; Amat 2008; Palese et al. 2013; Björk and Solomon 2013; Dong et al. 2006; Hopewell et al. 2007; Manzoli et al. 2014; Shah et al. 2016; Stamm et al. 2007; Jefferson et al. 2016; Garg 2016), and to analyze their association with journal characteristics [discipline (Björk and Solomon 2013; Dong et al. 2006; Shah et al. 2016; Garg 2016), journal impact factor (Kalcioglu et al. 2015; Chen et al. 2013; Shah et al. 2016), open access publishing (Björk and Solomon 2013; Dong et al. 2006), journal size (Björk and Solomon 2013)], paper characteristics [study design (Kalcioglu et al. 2015; Palese et al. 2013; Stamm et al. 2007), online submission (Govender et al. 2008), online posting before effective online or print publication (Amat 2008), advance online publication (Chen et al. 2013; Palese et al. 2013; Shah et al. 2016), statistical significance of the results in clinical trials (Hopewell et al. 2007; Jefferson et al. 2016)] or trends over time (Kalcioglu et al. 2015; Shah et al. 2016).

To our knowledge, no detailed data are available for general biomedical journals, though some of them were included in studies that compared publication speed across disciplines (Björk and Solomon 2013; Dong et al. 2006; Shah et al. 2016) or according to the statistical significance of the results in clinical trials (Manzoli et al. 2014; Jefferson et al. 2016). Using data from studies conducted in other disciplines is not recommended, because publication speed has been shown to vary considerably across scientific fields (Björk and Solomon 2013; Dong et al. 2006; Shah et al. 2016; Garg 2016); for example, in Björk and Solomon's study, submission-to-publication times were approximately twice as long in business and economy as in chemistry (Björk and Solomon 2013), whereas in Garg's study they were twice as long in earth sciences as in chemistry (Garg 2016). In addition, to our knowledge, no study so far has examined in depth the association between a range of author, paper and journal characteristics and publication speed.

Therefore, the two objectives of our study were (1) to quantify the acceptance, publication and total delay times of manuscripts submitted to general medical journals (general internal medicine and primary care journals), and (2) to identify which factors were associated with publication speed.

Methods

Identification of studies

The 2015 impact factor lists of journals publishing in the fields of general internal medicine and primary care were obtained using the Journal Citation Reports (JCR), a product of the ISI Web of Knowledge. In JCR, these two categories are referred to as "medicine, general internal" and "primary health care". These impact factors were re-checked using bioxbio.com, which provides detailed information on a large number of scientific journals. Two investigators (CR and PHG) were asked to retrieve, from each of the 9 highest impact factor journals in general internal medicine and the 9 highest impact factor journals in primary care, 45 original papers published in 2016 (between 1 January and 31 December) that reported both submission and publication dates. These papers were randomly selected using simple randomization based on computer-generated random numbers. Commentaries, editorials, brief reports, correspondence, case reports and non-systematic reviews were excluded. The total number of papers retrieved was 781 (45 were expected from each journal, but this number was not reached for some journals, owing to a low number of papers published in 2016 and/or because submission, acceptance and/or publication dates were not available).
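As an illustration, such a selection can be reproduced with a few lines of code. The following is a minimal Python sketch, not the actual procedure used in the study (the paper does not specify the selection tool, and the function name and identifiers are hypothetical).

```python
import random

def sample_papers(paper_ids, n=45, seed=2016):
    """Simple random sample of up to n eligible papers from one journal."""
    rng = random.Random(seed)       # computer-generated random numbers
    if len(paper_ids) <= n:         # journals with fewer than n eligible
        return list(paper_ids)      # papers contribute everything they have
    return rng.sample(paper_ids, n)

# e.g. sample_papers(['paper-%d' % i for i in range(120)])
```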

Data collection (extraction of the data)

The two investigators extracted the following data from the papers: (1) author characteristics: the gender, number of publications (using Web of Science [v 5.25.1] with the full name and affiliation reported in the article) and place of affiliation of the first author; (2) paper characteristics: the dates of submission, acceptance and first publication, the form of first publication (online, i.e. the paper was published online before appearing in print, or print), the type of access (open or not), the number of authors, the number of participants in the study, the study design (systematic review with or without meta-analysis, randomized controlled trial, non-randomized and/or non-controlled trial, cohort study, cross-sectional study, case–control study, qualitative study, ecological study, mixed study or other design) and, for trials and reviews, the study result [positive if the result regarding the main objective, as defined in the introduction section of the abstract, was statistically significant (i.e. p value < 0.05 unless defined otherwise in the paper), or negative if the result was statistically non-significant (i.e. p value ≥ 0.05 unless defined otherwise in the paper)]; (3) journal characteristics: the journal discipline (general internal medicine or primary care), the 2015 impact factor, the location of the journal, the number of papers published in 2016 and the publication model (open access, hybrid open access or traditional subscription model). Note that if an article was published both online and in print, the publication date refers to the earlier of the two.

Inter-rater variability between the two investigators was assessed on a random sample of 15% of the papers included in the study (for the remaining papers, data extraction was therefore carried out by a single investigator). Agreement was higher than 95% for all variables except study design, for which it was only 80%. It was therefore decided that the design of all studies would be assessed with the support of the main investigators (PS, JPF, HM), except when the design was clearly stated in the paper. Doubts and disagreements were resolved by discussion and consensus within the study team.
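For concreteness, the raw percent agreement reported here can be computed per variable as below; this is a minimal sketch with hypothetical ratings, not the authors' actual extraction records.

```python
def percent_agreement(ratings_a, ratings_b):
    """Proportion (%) of papers on which the two investigators agree."""
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return 100 * matches / len(ratings_a)

# Hypothetical study-design ratings over the double-extracted subsample.
design_a = ['cohort', 'RCT', 'cross-sectional', 'cohort', 'qualitative']
design_b = ['cohort', 'RCT', 'case-control',    'cohort', 'qualitative']
print(percent_agreement(design_a, design_b))   # 80.0
```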

Sample size justification and statistical analyses

The sample size was estimated in order to detect a 10-day difference in time (from first submission to acceptance, and to publication) between two groups of observations of equal size. Assuming, following Hozo's method (Hozo et al. 2005), that the standard deviation is approximately three quarters of the interquartile range (for a normal distribution), which was available from the literature, the estimated standard deviation was 48 days. With a Type I error rate of 5%, a Type II error rate of 20% (i.e. 80% power) and an effect size of 0.208 (10/48), the required sample size was 724.
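This calculation can be checked with any standard two-sample power routine. The sketch below uses Python's statsmodels rather than PASS, so the resulting total (about 726) differs slightly from the reported 724 owing to differences in approximation between routines; the IQR value is an assumption, back-derived from the stated standard deviation of 48 days.

```python
from statsmodels.stats.power import TTestIndPower

iqr_days = 64                 # assumed: implied by SD = 48 and Hozo's rule
sd_days = 0.75 * iqr_days     # Hozo: SD ~ 3/4 of the IQR under normality
effect_size = 10 / sd_days    # 10-day difference -> Cohen's d ~ 0.208

n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80, alternative='two-sided')
print(n_per_group)            # ~363 per group, i.e. ~726 in total;
                              # PASS version 13 reported 724
```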

For each paper included in the study, we computed the time from first submission to acceptance (the acceptance time), from acceptance to first publication (the publication time) and, overall, from first submission to publication (the total delay time). We summarized the data with medians and interquartile ranges (IQRs), because the distributions of these three outcome variables were clearly asymmetric.
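The computation amounts to simple date arithmetic. The following is a minimal Python sketch with made-up dates and assumed column names, not the study dataset.

```python
import pandas as pd

# Hypothetical example: one row per paper, with submission, acceptance and
# first-publication dates (column names are assumptions).
df = pd.DataFrame({
    'submitted': pd.to_datetime(['2015-06-01', '2015-09-15', '2016-01-10']),
    'accepted':  pd.to_datetime(['2015-10-04', '2016-02-01', '2016-05-20']),
    'published': pd.to_datetime(['2016-01-06', '2016-04-15', '2016-07-30']),
})

df['acceptance_time'] = (df['accepted'] - df['submitted']).dt.days
df['publication_time'] = (df['published'] - df['accepted']).dt.days
df['total_delay_time'] = (df['published'] - df['submitted']).dt.days

for col in ['acceptance_time', 'publication_time', 'total_delay_time']:
    q1, median, q3 = df[col].quantile([0.25, 0.50, 0.75])
    print(f'{col}: median {median:.0f} days, IQR {q3 - q1:.0f} days')
```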

Due to a failure to fully satisfy the assumptions of linear regression (linearity between predictors and outcomes, normality and homoscedasticity of the residuals) (https://stats.idre.ucla.edu/stata/webbooks/reg/chapter2/stata-webbooksregressionwith-statachapter-2-regression-diagnostics/), we transformed the outcome variables, taking the square root of the submission-to-acceptance times and the natural logarithm of the acceptance-to-publication and submission-to-publication times, to bring them closer to a Gaussian distribution, as checked with histograms (https://stats.idre.ucla.edu/stata/webbooks/reg/chapter1/regressionwith-statachapter-1-simple-and-multiple-regression/). In addition, to avoid assuming a linear relationship between numerical predictors and the outcomes, we categorized all numerical predictor variables into three categories, choosing cutoffs by dividing the sample into terciles and rounding to the nearest whole number (number of publications, number of authors, impact factor) or to the nearest ten or hundred (number of participants, number of papers published).
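The transformations and the tercile cut might look as follows in Python; the data and the 'impact_factor' column are hypothetical, and the study additionally rounded the tercile cutoffs as described above.

```python
import numpy as np
import pandas as pd

# Toy data on the day scale (hypothetical values).
df = pd.DataFrame({'acceptance_time': [125, 139, 131],
                   'publication_time': [94, 74, 71],
                   'total_delay_time': [219, 213, 202],
                   'impact_factor': [19.7, 2.2, 1.1]})

# Transform the outcomes toward normality.
df['sqrt_acceptance'] = np.sqrt(df['acceptance_time'])
df['log_publication'] = np.log(df['publication_time'])
df['log_total'] = np.log(df['total_delay_time'])

# Cut a numerical predictor into terciles (cutoffs then rounded by hand).
df['if_tercile'] = pd.qcut(df['impact_factor'], q=3,
                           labels=['low', 'middle', 'high'])
```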

Then, we used simple linear regressions to assess the association between the outcomes and the author, paper and journal characteristics, and computed for each variable the predicted differences in mean times (in square-root days or log days), with 95% CIs. We also performed multiple linear regressions. All available covariates were included in the multivariate model, except country and continent (because of multicollinearity) and study result (because of the very low number of observations); we then used a non-automatic backward stepwise procedure to remove covariates with a p value higher than 0.2. We relied only on data-driven criteria to guide this procedure, because we did not identify, from theory or published research, important factors to force into the models. Multicollinearity was checked using variance inflation factors (VIF) and tolerance values (1/VIF) (https://stats.idre.ucla.edu/stata/webbooks/reg/chapter2/stata-webbooksregressionwith-statachapter-2-regression-diagnostics/). As individual observations were not independent of each other (observations from the same journal are likely to be more similar to one another than to observations from other journals), we used random effects models (multilevel models) to account for intra-cluster correlation (http://www.philender.com/courses/linearmodels/notes3/cluster.html; Katz 2006).
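The analyses were run in Stata; the sketch below illustrates the same random-intercept idea in Python with synthetic data (all variable names and effect sizes are made up), together with a VIF check on the fixed-effects design matrix.

```python
import numpy as np
import pandas as pd
import patsy
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Synthetic data: 780 papers clustered within 18 journals.
rng = np.random.default_rng(0)
n = 780
journal = rng.integers(0, 18, n)
online = rng.integers(0, 2, n)                      # online-first vs print
sqrt_acceptance = (11.0 - 0.3 * online              # made-up fixed effect
                   + rng.normal(0, 1, 18)[journal]  # journal random intercept
                   + rng.normal(0, 2, n))           # residual noise
df = pd.DataFrame({'sqrt_acceptance': sqrt_acceptance,
                   'online': online, 'journal': journal})

# Random-intercept (multilevel) model: papers grouped by journal.
model = smf.mixedlm('sqrt_acceptance ~ online', data=df, groups=df['journal'])
print(model.fit().summary())

# Multicollinearity check: VIF (tolerance = 1/VIF) per predictor column.
X = patsy.dmatrix('online', data=df, return_type='dataframe')
vifs = {col: variance_inflation_factor(X.values, i)
        for i, col in enumerate(X.columns) if col != 'Intercept'}
print(vifs)
```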

Finally, after back-transforming the data, we computed the predicted differences in median times for each variable.
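A sketch of this back-transformation step, with hypothetical fitted values rather than the study's actual estimates: predictions from the square-root model are squared, those from the log models exponentiated, and differences are then read off in days.

```python
import numpy as np

# Hypothetical fitted values on the transformed scales.
sqrt_fit_a, sqrt_fit_b = 11.5, 10.8                       # square-root days
acceptance_diff = sqrt_fit_b**2 - sqrt_fit_a**2           # ~ -15.6 days

log_fit_a, log_fit_b = 4.22, 2.96                         # log days
publication_diff = np.exp(log_fit_b) - np.exp(log_fit_a)  # ~ -48.7 days

print(acceptance_diff, publication_diff)
```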

The sample size was estimated with PASS Sample Size Software version 13. All other analyses were carried out with STATA version 12. Statistical significance was set at a two-sided p value of ≤ 0.05.

Results

Figure 1 shows the flowchart of the study. The two investigators reviewed 781 papers published in 2016 (398 in general internal medicine and 383 in primary care); this figure represents 50% of the total number of papers published in 2016 by the 18 journals included in the study (1561 papers published).

Fig. 1 Flowchart of the study

Table 1 lists these 18 journals, stratified by discipline (general internal medicine or primary care) and sorted by 2015 impact factor. Impact factors ranged from 19.7 (British Medical Journal) to 1.1 (Primary Health Care Research and Development).

Table 1 List of the 18 journals included in the study, stratified by discipline (general internal medicine or primary care) and sorted by 2015 impact factor

Table 2 shows the first author’s main socio-demographic characteristics. There were slightly more male authors (52%); nearly half of the authors (48%) had their place of affiliation in Europe (mainly in the UK), 25% in Asia (mainly in Taiwan, South Korea and China), 18% in North America (mainly in the US) and 8% in Oceania (mainly in Australia). Only 1% reported being affiliated with institutions in South America and Africa. Their median number of publications was 7 with a large spread (min 1, max 908).

Table 2 First authors’ main socio-demographic characteristics (N = 781)

Figures 2, 3 and 4 show the histograms of acceptance, publication and total delay times. Overall, the median acceptance time of the 781 papers was 123 days (IQR 111, min 1, max 922), the median publication time was 68 days (IQR 88, min 2, max 802) and the median total delay time was 224 days (IQR 156, min 24, max 1034).

Fig. 2 Histogram of acceptance time in days of 781 papers published in general medical journals

Fig. 3 Histogram of publication time in days of 781 papers published in general medical journals

Fig. 4 Histogram of total delay time in days of 781 papers published in general medical journals

The Appendix (transformed scale) and Table 3 (original scale) show the associations between acceptance, publication and total delay times, and first author, paper and journal characteristics. In multivariate analysis, online publication was strongly associated with reduced publication time (difference: − 183 days, p value < 0.001) and total delay time (difference: − 93 days, p value < 0.001); there was less evidence of association with the first author's place of affiliation (difference between papers submitted from English-speaking countries and other countries: acceptance time: − 15 days, p value 0.03; total delay time: − 6 days, p value 0.05) and with impact factor (difference in total delay time between papers submitted to journals with impact factor > 2.2 and < 1.6: − 86 days, and between impact factor > 2.2 and 1.6–2.2: − 101 days, p value 0.04).

Table 3 Associations between submission, publication and total delay times of 781 papers published in general medical journals, and first author, paper and journal characteristics (original scale)

Discussion

Main findings

We found that the overall median acceptance, publication and total delay times were respectively 123, 68 and 224 days. We also found that, in multivariate analysis, online publication was strongly associated with reduced total delay time.

Comparison with existing literature

Considering that the overall median/mean acceptance, publication and total delay times in our study were respectively 123/153, 68/105 and 224/258 days, our results compare favorably with figures from studies of ophthalmology journals (median acceptance and publication times: 133 and 100 days) (Chen et al. 2013), food research journals (mean times: 169, 192 and 348 days) (Amat 2008) and nursing journals (median acceptance and publication times: 146 and 116 days for online publication, 146 and 175 days for paper publication) (Palese et al. 2013), whereas delays were shorter in otorhinolaryngology journals (mean times: 123, 94 and 220 days) (Kalcioglu et al. 2015).

Thanks to the spread of the internet in recent decades, researchers have been able to submit their research online, and online submission of manuscripts has been shown to be more efficient than paper submission in terms of acceptance time (Govender et al. 2008). We found that online publication was more efficient in terms of publication and total delay times, confirming studies of ophthalmology, nursing and Indian biomedical journals (Chen et al. 2013; Palese et al. 2013; Shah et al. 2016). The fact that online publication was not linked to acceptance time in our study is a logical finding, since online publication should affect only the publication process, not the peer review process. The predicted differences between online and paper publication (− 183 days for publication time and − 93 days for total delay time) are meaningful.

We found a weak association with impact factor: journals with higher impact factors were more likely to have shorter total delay times. The literature on this topic is conflicting: Kalcioglu et al. (2015) showed quite the opposite, with higher impact factor otorhinolaryngology journals more likely to have longer acceptance and publication times, whereas two studies of ophthalmology and Indian biomedical journals found no association (Chen et al. 2013; Shah et al. 2016). Our finding might be explained by the fact that high-impact journals generally have more resources, part of which may be used to identify problems in the peer-review process and to develop strategies to improve publication speed. Since the association with impact factor was weak, however, these differences could be due to chance and may represent spurious associations.

Finally, we found a weak association with the first author's location: first authors affiliated with institutions in English-speaking countries were published more promptly than those affiliated with institutions in other countries. Note that the predicted differences are not practically meaningful (15 days for acceptance time and 6 days for total delay time). In addition, since the association with the first author's place of affiliation was weak, these differences could be due to chance.

It has been shown that trials with positive results were, in general, more likely to be published (publication bias), or to be published sooner (time-lag bias), than trials with negative or null results (Chen et al. 2013; Hopewell et al. 2007; Manzoli et al. 2014); in a systematic review, Hopewell et al. (2007) found that trials with positive results tended to be published on average 1–3 years earlier than those with null or negative results. The fact, however, that we found no association with study results could be explained by differences in outcomes (Hopewell’s study assessed time from enrollment, from completion of follow-up and from approval by ethics committee to publication). Our findings are in line with a recent bibliometric study of clinical trials published in four high-impact general medical journals that did not find any time-lag bias (time from trial completion to publication) (Jefferson et al. 2016), which could reflect improved research practices, in particular due to the recent initiatives advocating that all trials be registered and reported (Manzoli et al. 2014; Jefferson et al. 2016).

We found no association with study design, which contradicts Palese's study, which showed that systematic reviews, with or without meta-analyses, had the shortest total delay times (Palese et al. 2013). These contradictory results may be related to the fact that Palese's study computed time from the end of data collection.

Perspectives

The journal impact factor has been used for many years to evaluate the merit of individual researchers and to compare the relative importance of journals within a field, those with higher impact factors usually being considered more important (Callaway 2016; Seglen 1997). Numerous criticisms have been made of its use and the way it is calculated, in particular that (1) the journal impact factor measures use by other researchers rather than scientific quality, (2) it is not really representative of individual journal articles, (3) it is calculated in a way that introduces bias, and (4) citation habits vary considerably between research fields (Callaway 2016; Seglen 1997). Despite these criticisms, many researchers base their submission decisions on it (Salinas and Munch 2015; Calcagno et al. 2012).

However, publication speed should also be part of the decision when deciding where to publish research (Salinas and Munch 2015). Indeed, knowledge diffusion, usually by publishing in peer-reviewed journals, is often a slow process, and the amount of published research is known to affect researchers’ individual careers and the funding of new projects (Palese et al. 2013; Gagnon 2011). In addition, there is a need to make new scientific knowledge available as soon as possible, because publication delays could affect patient outcomes (Chen et al. 2013). For example, if a recent trial showed that a treatment was effective (or ineffective) to treat a given condition, doctors should have prompt access to these results for the benefit of their patients. As suggested by our study, online publication could offer researchers a real potential for speeding up the publishing process.

Interestingly, a recent survey of researchers (n = 1038) showed that publication speed was the third most important factor in their choice of journal, after the paper's fit with the subject area of the journal and the importance of the journal as measured by the impact factor (Solomon and Björk 2012). Though the peer review process is generally slow, the vast majority of researchers consider it essential to the communication of research, in particular because it helps to improve the quality of published papers (Mulligan et al. 2013).

There is currently a trend in the biomedical sciences to publish preprints prior to submission (Peiperl and PLOS Medicine Editors 2018; Oakden-Rayner et al. 2018). Indeed, the long delays associated with formal publication in peer-reviewed scientific journals have led researchers to seek faster ways to disseminate their results within the scientific community. A preprint is a version of a scientific paper that is uploaded by its author to a centralized online repository (a preprint server) but has not yet been peer-reviewed for publication. Unlike formal publications, preprints can be made available within a few days and therefore accelerate the dissemination of new knowledge (Peiperl and PLOS Medicine Editors 2018; Oakden-Rayner et al. 2018). In addition, they are freely accessible to the scientific community and allow authors to receive early feedback, giving them the opportunity to make corrections before submitting their manuscript (Peiperl and PLOS Medicine Editors 2018; Oakden-Rayner et al. 2018).

Limitations

Some limitations need to be pointed out when considering our results. First, we limited our study to journals of general internal medicine and primary care; our results are not necessarily generalizable to journals in other disciplines. Second, several high-impact journals were not included in our study because of missing data (dates of submission and/or publication unavailable), which also limits the generalizability of our results. Third, we limited data extraction to a single year (2016); it would, however, have been interesting to assess trends, because the expansion of the internet might reduce publication times through online submission and publication or, conversely, increase them as a result of the growing workload (more papers submitted to scientific journals). Finally, double data extraction was carried out for only 15% of the papers; however, we believe the risk of information bias is low, because inter-rater agreement was higher than 95% for all variables except study design, and the latter was assessed with the support of the main investigators.

Conclusion

Knowledge diffusion is often a slow process. This study provides insight into the acceptance, publication and total delay times in general medical journals. Researchers interested in reducing publication delays when submitting their research to general medical journals should focus on journals with online publication as well as on alternative forms of dissemination such as preprints.