
Scientometrics

Volume 119, Issue 2, pp 1037–1058

Factors associated with publication speed in general medical journals: a retrospective study of bibliometric data

  • Paul Sebo
  • Jean Pascal Fournier
  • Claire Ragot
  • Pierre-Henri Gorioux
  • François R. Herrmann
  • Hubert Maisonneuve
Open Access Article

Abstract

We aimed to assess publication speed of manuscripts submitted to general medical journals and to explore the link with various author, paper and journal characteristics. In this retrospective study of bibliometric data we retrieved 45 randomly selected papers published in 2016 from each of the highest impact factor journals of general internal medicine (n = 9) and primary care (n = 9). Only journals reporting submission and publication dates were included. The following data were extracted: first author (gender, place of affiliation, number of publications), paper (submission and publication dates, online publication, open access, number of authors, number of participants, study design, study results) and journal characteristics (impact factor, number of papers published). We computed for each paper the submission-to-acceptance, acceptance-to-publication and submission-to-publication times. We performed linear regression with random effects models to identify the associations with predictors, adjusting for intra-cluster correlations. A total of 781 papers were included. The overall median submission-to-acceptance time was 123 days (interquartile range 111, min 1, max 922), acceptance-to-publication time 68 days (interquartile range 88, min 2, max 802) and submission-to-publication time 224 days (interquartile range 156, min 24, max 1034). In multivariate analysis, online publication was strongly associated with reduced submission-to-publication time (difference: − 93 days, p value < 0.001). This study provides insight into the submission-to-acceptance, acceptance-to-publication and submission-to-publication times in general medical journals. Researchers interested in reducing publication delays should focus on journals with online publication.

Keywords

Publication speed · Publication delay · Acceptance time · Publication time · Retrospective study · General medical journals

Background

Writing scientific papers is hard work, in particular for inexperienced authors. Publishing research is nevertheless crucial, because it disseminates scientific knowledge and increases the recognition of researchers. In the past, when the submission process was carried out by post, it usually took a long time to publish research. Nowadays, the rapid development of information technology and the spread of the internet, with online submission, should in principle reduce the delay between submission and publication (Govender et al. 2008; Kalcioglu et al. 2015). However, despite the increase in the number of journals publishing research, the workload of journals has risen, driven by the growth in scientific output (Kalcioglu et al. 2015). As a result, both the rejection rate and the submission-to-publication times for accepted manuscripts are likely to increase in the future (Ellison 2002; https://www.authorea.com/users/2013/articles/36067-publication-cycle-a-study-of-the-public-library-of-science-plos/_show_article). This trend could be particularly problematic, because longer publication times could postpone the diffusion of new knowledge or the implementation of new interventions.

Choosing the right journal is an early step in planning research, because journals differ greatly in scope, balance of topics, audience, quality and influence (Huth 1999). Journals also differ widely in how their editors decide what to accept and what to reject, though several criteria are likely to be applied by most editors: relevance of the paper, importance of the message, novelty, scientific validity and usefulness of the paper for the journal (Huth 1999). Many differences in editorial procedures make it difficult to predict the time needed for receiving a response from the journal. Moreover, besides these factors which are related to editorial policy and journal workload, the quality of the study as well as characteristics related to the author(s), the study and/or the journal could all have an influence on the publication speed. For example, although editors are important for publication speed, it is likely that reviewers are ultimately responsible for most of the time a paper spends between submission and acceptance.

Publication speed could be viewed as a way to estimate publication efficiency and journal quality (Chen et al. 2013; Dióspatonyi et al. 2001; Amat 2008), and may be split into two parts: the time taken from first submission to acceptance (the acceptance time), which is a measure of the time needed for peer reviewing and revising manuscripts, and the time from acceptance to publication (the publication time per se), which is a measure of the time needed for editing, proofreading and publishing manuscripts (Chen et al. 2013). The acceptance time is considered the responsibility of the editor (time for decision, time for finding reviewers), the reviewers (time for peer reviewing) and the authors (time for revising), whereas the publication time is essentially the responsibility of the journal (priority given to the manuscript, amount of manuscripts waiting for publication and production system) (Palese et al. 2013).

A number of bibliometric studies from a wide range of disciplines including biomedicine tried to quantify acceptance and/or publication times (Govender et al. 2008; Kalcioglu et al. 2015; Chen et al. 2013; Amat 2008; Palese et al. 2013; Björk and Solomon 2013; Dong et al. 2006; Hopewell et al. 2007; Manzoli et al. 2014; Shah et al. 2016; Stamm et al. 2007; Jefferson et al. 2016; Garg 2016), and to analyze the link with some journal characteristics [discipline (Björk and Solomon 2013; Dong et al. 2006; Shah et al. 2016; Garg 2016), journal impact factor (Kalcioglu et al. 2015; Chen et al. 2013; Shah et al. 2016), open access publishing (Björk and Solomon 2013; Dong et al. 2006), journal size (Björk and Solomon 2013)], paper characteristics [study design (Kalcioglu et al. 2015; Palese et al. 2013; Stamm et al. 2007), online submission (Govender et al. 2008), online posting before effective online or print publication (Amat 2008), advance online publication (Chen et al. 2013; Palese et al. 2013; Shah et al. 2016), statistical significance of the results in clinical trials (Hopewell et al. 2007; Jefferson et al. 2016)] or trend over time (Kalcioglu et al. 2015; Shah et al. 2016).

To our knowledge, there are no detailed data available for general biomedical journals, though some of them were included in studies where publication speed was compared across disciplines (Björk and Solomon 2013; Dong et al. 2006; Shah et al. 2016) or according to the statistical significance of the results in clinical trials (Manzoli et al. 2014; Jefferson et al. 2016). Extrapolating from studies conducted in other disciplines is not advisable, because publication speed has been shown to vary considerably across scientific fields (Björk and Solomon 2013; Dong et al. 2006; Shah et al. 2016; Garg 2016); for example, in Björk and Solomon’s study submission-to-publication times were approximately twice as long in business and economics as in chemistry (Björk and Solomon 2013), whereas in Garg’s study they were twice as long in earth sciences as in chemistry (Garg 2016). In addition, to our knowledge, no study so far has examined in depth the association between a range of author, paper and journal characteristics and publication speed.

Therefore, the two objectives of our study were (1) to quantify the acceptance, publication and total delay times of manuscripts submitted to general medical journals (general internal medicine and primary care journals), and (2) to identify which factors were associated with publication speed.

Methods

Identification of studies

The 2015 impact factor lists of journals publishing in the fields of general internal medicine and primary care were obtained using the JCR (Journal Citation Reports), a product of ISI Web of Knowledge. In JCR, these two categories are referred to as “primary health care” and “medicine, general internal”. These impact factors were re-checked using bioxbio.com, which gives detailed information on a large number of scientific journals. Two investigators (CR and PHG) were asked to retrieve 45 original papers published in 2016 (between 1 January and 31 December) from each of the 9 highest impact factor journals of general internal medicine and the 9 highest impact factor journals of primary care; only journals reporting both submission and publication dates were eligible. These papers were randomly selected using simple randomization based on computer-generated random numbers. Commentaries, editorials, brief reports, correspondence, case reports and non-systematic reviews were excluded. The total number of papers retrieved was 781 (45 were expected for each journal, but this number was not reached for some journals due to a low number of papers published in 2016 and/or because submission, acceptance and/or publication dates were not available).
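
The simple random selection described above can be sketched as follows; the function name, seed and paper identifiers are hypothetical illustrations, not the investigators' actual script:

```python
import random

def select_papers(paper_ids, n=45, seed=2016):
    """Randomly select up to n papers from one journal's 2016 output.

    Simple randomization driven by computer-generated random numbers;
    if a journal published fewer than n eligible papers, all are kept.
    """
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    if len(paper_ids) <= n:
        return list(paper_ids)
    return rng.sample(paper_ids, n)

# Hypothetical journal with 148 eligible papers published in 2016
papers = [f"paper-{i:03d}" for i in range(1, 149)]
sample = select_papers(papers)
print(len(sample))  # 45
```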

Data collection (extraction of the data)

The two investigators extracted the following data from the papers: (1) author characteristics: the gender, number of publications (using Web of Science [v 5.25.1] with the full name and affiliation reported in the article) and place of affiliation of the first author; (2) paper characteristics: the dates of submission, acceptance and first publication, the form of first publication (online, i.e. paper published online before being published in print, or print), the type of access (open or not), the number of authors, the number of participants in the study, the study design (systematic review with or without meta-analysis, randomized controlled trial, non-randomized and/or non-controlled trial, cohort study, cross-sectional study, case–control study, qualitative study, ecological study, mixed study or other design), and, for trials and reviews, the study result [positive if the result regarding the main objective, as defined in the introduction section of the abstract, was statistically significant (i.e. p value < 0.05 unless defined otherwise in the paper) or negative if the result was statistically non-significant (i.e. p value ≥ 0.05 unless defined otherwise in the paper)]; (3) journal characteristics: the journal discipline (general internal medicine or primary care), the 2015 impact factor, the location of the journal, the number of papers published in 2016 and the publication model (open access, hybrid open access or traditional subscription model). If an article was published both online and in print, the publication date refers to the earlier of the two.

Inter-rater variability between the two investigators was assessed on a random sample of 15% of the papers included in the study (for the remaining papers, data extraction was therefore carried out by a single investigator). Agreement was higher than 95% for all variables except study design, where it was only 80%. It was therefore decided that the design of all studies would be assessed with the support of the main investigators (PS, JPF, HM), except when the design was clearly stated in the paper. Doubts and disagreements were resolved by discussion and consensus within the study team.

Sample size justification and statistical analyses

The sample size was estimated in order to detect a 10-day difference in time (taken from the first submission to acceptance and to publication) between two groups of observations of equal size. Assuming, according to Hozo’s method (Hozo et al. 2005), that the standard deviation is approximately three quarters (for a normal distribution) of the interquartile range, which was available from the literature, the estimated standard deviation would be 48 days. Taking a Type I error rate of 5%, and a Type II error rate of 20%, with an effect size of 0.208 (10/48), the sample size required equals 724.
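
The figures above can be reproduced with a standard normal-approximation sample-size formula for comparing two means; the study used PASS for the actual calculation, so this sketch only mirrors the arithmetic described in the text:

```python
from math import ceil
from statistics import NormalDist

def two_group_total_n(diff_days, sd_days, alpha=0.05, power=0.80):
    """Total N to detect a mean difference between two equal-size groups
    (normal approximation to the two-sample comparison of means)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # two-sided Type I error rate of 5%
    z_beta = z(power)            # Type II error rate of 20%
    d = diff_days / sd_days      # standardized effect size (10/48 = 0.208)
    n_per_group = ceil(2 * ((z_alpha + z_beta) / d) ** 2)
    return 2 * n_per_group

# SD of 48 days, per Hozo's rule (~3/4 of the IQR reported in the literature)
print(two_group_total_n(10, 48))  # 724
```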

We computed for each paper included in the study the time taken from the first submission to acceptance (the acceptance time), from acceptance to first publication (the publication time), and, overall, from the first submission to publication (the total delay time). We calculated medians and IQRs to summarize the data, because these three outcome variables were clearly asymmetric.
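
For a single (hypothetical) paper, the three intervals are plain date differences:

```python
from datetime import date

def publication_intervals(submitted, accepted, published):
    """Return (acceptance time, publication time, total delay) in days."""
    acceptance = (accepted - submitted).days    # submission -> acceptance
    publication = (published - accepted).days   # acceptance -> publication
    total = (published - submitted).days        # submission -> publication
    return acceptance, publication, total

# Hypothetical dates chosen to match the study's median acceptance
# and publication times (123 and 68 days)
print(publication_intervals(date(2016, 1, 4), date(2016, 5, 6), date(2016, 7, 13)))
# (123, 68, 191)
```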

Due to a failure to fully satisfy the assumptions of linear regression (linearity between predictors and outcomes, normality and homoscedasticity of the residuals) (https://stats.idre.ucla.edu/stata/webbooks/reg/chapter2/stata-webbooksregressionwith-statachapter-2-regression-diagnostics/), we transformed the outcome variables, taking the square root of the submission-to-acceptance times, and the natural logarithm of the acceptance-to-publication and submission-to-publication times, in order to be closer to a Gaussian distribution as checked by histograms (https://stats.idre.ucla.edu/stata/webbooks/reg/chapter1/regressionwith-statachapter-1-simple-and-multiple-regression/). In addition, to address the Gaussian assumption, we categorized all numerical predictive variables into three categories, choosing cutoffs by dividing the sample into terciles and rounding off to the nearest whole number (number of publications, number of authors, impact factor) or to the nearest ten or hundred (number of participants, number of papers published).
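
A minimal sketch of these two steps, on invented numbers, might look like this (the actual cutoffs in the study were derived from the real sample):

```python
import math
from statistics import quantiles

# Hypothetical right-skewed acceptance times (days)
times = [20, 35, 50, 60, 80, 95, 120, 150, 200, 320, 510, 900]

# Transformations used for the outcomes
sqrt_times = [math.sqrt(t) for t in times]  # submission-to-acceptance
log_times = [math.log(t) for t in times]    # the two other outcomes

# Tercile categorization of a numerical predictor, cutoffs rounded
# to the nearest whole number (e.g. first author's publication count)
n_publications = [1, 2, 3, 4, 6, 7, 9, 12, 15, 30, 120, 900]
lo, hi = [round(c) for c in quantiles(n_publications, n=3)]
groups = [0 if x < lo else 1 if x <= hi else 2 for x in n_publications]
print(lo, hi, groups)
```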

Then, we used simple linear regressions to identify the link between the outcomes and the author, paper and journal characteristics, and computed for each variable the predicted differences in mean times (in square root days or log days), with estimated 95% CI. We also performed multiple linear regressions. All available covariates were included in the multivariate model, except country and continent (because of multicollinearity) and study result (because of the very low number of observations); then we used a non-automatic backward stepwise procedure so as to remove any covariates associated with a p value higher than 0.2. We only used data-driven criteria to guide the procedure, because we did not identify important factors, based on theory or published research, to include in the models. Multicollinearity was checked using variance inflation factors (VIF) and tolerance values (1/VIF) (https://stats.idre.ucla.edu/stata/webbooks/reg/chapter2/stata-webbooksregressionwith-statachapter-2-regression-diagnostics/). As individual observations were not independent of each other (observations coming from the same journal being likely to be more similar to each other than to observations in other journals), we used random effects models (multilevel models) to adjust for intra-cluster correlations (http://www.philender.com/courses/linearmodels/notes3/cluster.html; Katz 2006).
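
The rationale for the random-effects models is that papers from the same journal resemble each other more than papers from different journals; this within-journal clustering can be quantified by the one-way intraclass correlation. The study fitted multilevel models in Stata, so the pure-Python ICC below is only an illustration on made-up, balanced data:

```python
def icc_oneway(clusters):
    """One-way random-effects ICC(1) for balanced clusters: the share
    of total variance attributable to between-cluster differences."""
    n = len(clusters)    # number of clusters (journals)
    k = len(clusters[0]) # observations (papers) per cluster
    grand = sum(sum(c) for c in clusters) / (n * k)
    means = [sum(c) / k for c in clusters]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)  # between
    msw = sum((x - m) ** 2
              for c, m in zip(clusters, means)
              for x in c) / (n * (k - 1))                     # within
    return (msb - msw) / (msb + (k - 1) * msw)

# Three hypothetical journals, three papers each: journals differ far
# more than papers within a journal, so the ICC is close to 1
print(round(icc_oneway([[1, 2, 3], [11, 12, 13], [21, 22, 23]]), 3))  # 0.99
```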

Finally, we computed, after back transformation of the data, the predicted differences in median times for each variable.
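
The text does not spell out the back-transformation formulas; one standard approach, assumed here, converts a model coefficient on the transformed scale into a difference in median days relative to a reference median:

```python
import math

def log_scale_diff(ref_median_days, beta_log):
    """Implied difference in median days when log(time) changes by
    beta_log: the median is multiplied by exp(beta_log)."""
    return ref_median_days * (math.exp(beta_log) - 1)

def sqrt_scale_diff(ref_median_days, beta_sqrt):
    """Same idea for the square-root scale used for acceptance time."""
    return (math.sqrt(ref_median_days) + beta_sqrt) ** 2 - ref_median_days

# Hypothetical coefficient of -0.6 log-days applied to the 224-day
# median total delay time
print(round(log_scale_diff(224, -0.6), 1))  # -101.1
```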

The sample size was estimated with PASS Sample Size Software version 13. All other analyses were carried out with Stata version 12. Statistical significance was set at a two-sided p value of ≤ 0.05.

Results

Figure 1 shows the flowchart of the study. The two investigators reviewed 781 papers published in 2016 (398 in general internal medicine and 383 in primary care); this figure represents 50% of the total number of papers published in 2016 by the 18 journals included in the study (1561 papers published).
Fig. 1

Flowchart of the study

Table 1 lists these 18 journals, stratified by discipline (general internal medicine or primary care) and sorted by 2015 impact factor. Impact factors ranged from 19.7 (British Medical Journal) to 1.1 (Primary Health Care Research and Development).
Table 1

List of the 18 journals included in the study, stratified by discipline (general internal medicine or primary care) and sorted by 2015 impact factor

Journal | Country | 2015 impact factor | Publication model (open access, hybrid open access or traditional subscription) | Papers published in 2016 | Papers included in the study
General internal medicine
 British Medical Journal | UK | 19.697 | Open access | 148 | 45
 BMC Medicine | UK | 8.005 | Open access | 116 | 44
 European Journal of Clinical Investigation | Netherlands | 2.687 | Hybrid open access | 91 | 45
 International Journal of Medical Sciences | USA | 2.232 | Open access | 113 | 39
 International Journal of Clinical Practice | USA | 2.226 | Hybrid open access | 97 | 45
 Journal of the Formosan Medical Association | Taiwan | 2.018 | Open access | 93 | 45
 Archives of Medical Science | Poland | 1.812 | Open access | 124 | 45
 Korean Journal of Internal Medicine | South Korea | 1.679 | Open access | 86 | 45
 American Journal of the Medical Sciences | USA | 1.575 | Hybrid open access | 82 | 45
Primary care
 Annals of Family Medicine | USA | 5.087 | Open access | 50 | 45
 British Journal of General Practice | UK | 2.741 | Open access | 103 | 45
 Journal of the American Board of Family Medicine | USA | 1.989 | Open access | 61 | 45
 BMC Family Practice | UK | 1.641 | Open access | 153 | 45
 Scandinavian Journal of Primary Health Care | UK | 1.556 | Open access | 53 | 45
 European Journal of General Practice | UK | 1.364 | Open access | 30 | 30
 Australian Journal of Primary Health | Australia | 1.152 | Hybrid open access | 63 | 45
 Atencion Primaria | Spain | 1.098 | Open access | 59 | 45
 Primary Health Care Research and Development | UK | 1.090 | Hybrid open access | 39 | 38

Table 2 shows the first authors’ main socio-demographic characteristics. There were slightly more male authors (52%); nearly half of the authors (48%) had their place of affiliation in Europe (mainly in the UK), 25% in Asia (mainly in Taiwan, South Korea and China), 18% in North America (mainly in the USA) and 8% in Oceania (mainly in Australia). Only 2% were affiliated with institutions in South America (1.2%) or Africa (0.8%). Their median number of publications was 7, with a large spread (min 1, max 908).
Table 2

First authors’ main socio-demographic characteristics (N = 781)

Characteristics | N (%)
Gender (male) | 401 (52.3)
Place of affiliation |
 Europe | 374 (48.0)
  UK | 102 (13.1)
  Spain | 55 (7.0)
  Netherlands | 44 (5.6)
 Asia | 192 (24.6)
  Taiwan | 56 (7.2)
  South Korea | 49 (6.3)
  China | 45 (5.8)
 North America | 138 (17.7)
  USA | 122 (15.6)
 Oceania | 61 (7.8)
  Australia | 60 (7.7)
 South America | 9 (1.2)
 Africa | 6 (0.8)

Characteristics | Median (IQR) | Min–max
Total number of publications | 7 (18) | 1–908

Figures 2, 3 and 4 show the histograms of acceptance, publication and total delay times. Overall, the median acceptance time of the 781 papers was 123 days (IQR 111, min 1, max 922), the median publication time was 68 days (IQR 88, min 2, max 802) and the median total delay time was 224 days (IQR 156, min 24, max 1034).
Fig. 2

Histogram of acceptance time in days of 781 papers published in general medical journals

Fig. 3

Histogram of publication time in days of 781 papers published in general medical journals

Fig. 4

Histogram of total delay time in days of 781 papers published in general medical journals

The “Appendix” (transformed scale) and Table 3 (original scale) show the associations between acceptance, publication and total delay times and first author, paper and journal characteristics. In multivariate analysis, online publication was strongly associated with reduced publication time (difference: − 183 days, p value < 0.001) and total delay time (difference: − 93 days, p value < 0.001); there was weaker evidence of an association with the first author’s place of affiliation (difference between papers submitted from English-speaking countries and other countries: acceptance time: − 15 days, p value 0.03; total delay time: − 6 days, p value 0.05) and with impact factor (difference in total delay time between papers submitted to journals with impact factor > 2.2 and < 1.6: − 86 days, and between impact factor > 2.2 and 1.6–2.2: − 101 days, p value 0.04).
Table 3

Associations between acceptance, publication and total delay times of 781 papers published in general medical journals, and first author, paper and journal characteristics (original scale)

Each characteristic row lists the univariate and multivariate p values for the three outcomes; each category row lists N and the predicted univariate differences in median time (days), with 0 marking the reference category.

Characteristics | N | Δ submission–acceptance | p (uni) | p (multi)a,b | Δ acceptance–publication | p (uni) | p (multi)a,c | Δ submission–publication | p (uni) | p (multi)a,d
Author characteristics
 First author’s gender | | | 0.82 | 0.95 | | 0.93 | 0.79 | | 0.99 | 0.85
  Male | 401 | 0 | | | 0 | | | 0 | |
  Female | 366 | 1.4 | | | 0.3 | | | 0.1 | |
 First author’s number of publications | | | 0.71 | 0.70 | | 0.11 | 0.30 | | 0.74 | 0.68
  < 5 | 299 | 3.9 | | | 0 | | | 0.7 | |
  5–15 | 231 | 5.8 | | | 5.9 | | | 6.1 | |
  > 15 | 245 | 0 | | | 5.4 | | | 0 | |
 First author’s place of affiliation (continent) | | | 0.29 | § | | 0.23 | § | | 0.32 | §
  Europe | 374 | 15.7 | | | 7.0 | | | 16.1 | |
  North America | 138 | 0 | | | 0 | | | 0 | |
  Other | 268 | 13.4 | | | 8.2 | | | 19.0 | |
 First author’s place of affiliation (country) | | | 0.29 | § | | 0.26 | § | | 0.43 | §
  USA | 122 | 0 | | | 0 | | | 0 | |
  UK | 102 | 5.6 | | | 9.7 | | | 8.8 | |
  Other | 556 | 14.6 | | | 4.9 | | | 15.4 | |
 First author’s place of affiliation (language) | | | 0.03 | 0.03 | | 0.88 | 0.96 | | 0.06 | 0.05
  English-speaking country | 315 | 0 | | | 0 | | | 0 | |
  Other | 465 | 17.1e | | | 0.6 | | | 17.8f | |
Paper characteristics
 Paper published online | | | 0.78 | 0.72 | | < 0.001 | < 0.001 | | < 0.001 | < 0.001
  Yes | 677 | 5.7 | | | 0 | | | 0 | |
  No | 104 | 0 | | | 179.2g | | | 112.4h | |
 Open access | | | 0.68 | 0.97 | | 0.87 | 0.57 | | 0.27 | 0.22
  Yes | 593 | 6.0 | | | 1.2 | | | 19.7 | |
  No (including hybrid journals) | 188 | 0 | | | 0 | | | 0 | |
 Number of authors | | | 0.63 | 0.61 | | 0.16 | 0.06 | | 0.81 | 0.88
  < 5 | 252 | 7.1 | | | 0 | | | 3.6 | |
  5–7 | 300 | 5.4 | | | 0.9 | | | 0 | |
  > 7 | 229 | 0 | | | 6.4 | | | 5.3 | |
 Number of participants | | | 0.09 | 0.10 | | 0.15 | 0.44 | | 0.81 | 0.68
  < 100 | 177 | 0 | | | 6.7 | | | 0 | |
  100–1000 | 256 | 14.8 | | | 1.2 | | | 5.6 | |
  > 1000 | 348 | 14.5 | | | 0 | | | 4.9 | |
 Study design | | | 0.60 | 0.55 | | 0.26 | 0.95 | | 0.65 | 0.83
  Systematic review | 58 | 10.8 | | | 0 | | | 8.3 | |
  Experiment | 81 | 0 | | | 11.0 | | | 19.1 | |
  Cross-sectional, cohort or case–control | 414 | 7.3 | | | 2.3 | | | 8.2 | |
  Qualitative | 123 | 4.0 | | | 1.2 | | | 0 | |
  Other or multiple designs | 72 | 19.6 | | | 7.3 | | | 17.3 | |
 Study result for primary endpoint | | | 0.95 | § | | 0.61 | § | | 0.82 | §
  Positive | 68 | 0 | | | 0 | | | 0 | |
  Negative | 28 | 0.9 | | | 5.5 | | | 5.2 | |
Journal characteristics
 Journal discipline | | | 0.02 | 0.35 | | 0.71 | 0.52 | | 0.20 | 0.98
  General internal medicine | 398 | 0 | | | 0 | | | 0 | |
  Primary care | 383 | 56.0 | | | 10.0 | | | 52.0 | |
 2015 impact factor | | | 0.03 | 0.73 | | 0.49 | 0.59 | | 0.03 | 0.04
  < 1.6 | 248 | 69.5 | | | 19.8 | | | 85.9i | |
  1.6–2.2 | 225 | 5.2 | | | 40.4 | | | 100.8i | |
  > 2.2 | 308 | 0 | | | 0 | | | 0 | |
 Number of papers published in 2016 | | | 0.003 | 0.44 | | 0.55 | 0.93 | | 0.20 | 0.68
  < 60 | 203 | 93.0 | | | 32.9 | | | 94.7 | |
  60–100 | 315 | 18.0 | | | 26.1 | | | 27.5 | |
  > 100 | 263 | 0 | | | 0 | | | 0 | |

aAll the variables listed in the table were included in the multivariate analysis, except country and continent (multicollinearity) and study result (very few observations)

bLanguage, journal discipline, impact factor, number of articles and number of participants included in the final model

cAdvanced online publication and number of authors included in the final model

dLanguage, advanced online publication and impact factor included in the final model

eAdjusted difference in median time between English-speaking countries and other countries: − 14.8 days

fAdjusted difference in median time between English-speaking countries and other countries: − 5.6 days

gAdjusted difference in median time between online and paper publication: − 183.3 days

hAdjusted difference in median time between online and paper publication: − 92.5 days

iAdjusted difference in median time between impact factor > 2.2 and < 1.6: − 85.7 days; between impact factor > 2.2 and 1.6–2.2: − 101.3 days

§Not selected in the multivariate model

Discussion

Main findings

We found that the overall median acceptance, publication and total delay times were respectively 123, 68 and 224 days. We also found that, in multivariate analysis, online publication was strongly associated with reduced total delay time.

Comparison with existing literature

Considering that the overall median/mean acceptance, publication and total delay times in our study were respectively 123/153, 68/105 and 224/258 days, our results compare favorably with figures from studies targeting ophthalmology journals (median acceptance and publication times: 133 and 100 days) (Chen et al. 2013), food research (mean times: 169, 192 and 348 days) (Amat 2008) and nursing journals (median acceptance and publication times: 146 and 116 days for online publication, 146 and 175 days for paper publication) (Palese et al. 2013), whereas delays were shorter in otorhinolaryngology journals (mean times: 123, 94 and 220 days) (Kalcioglu et al. 2015).

The spread of the internet over the last decades has given researchers the opportunity to submit their research online, and online submission of manuscripts has been shown to be more efficient than paper submission in terms of acceptance time (Govender et al. 2008). We found that online publication was more efficient in terms of publication and total delay times, confirming studies of ophthalmology, nursing and Indian biomedical journals (Chen et al. 2013; Palese et al. 2013; Shah et al. 2016). That online publication was not linked to acceptance time in our study is a logical finding, since online publication should only affect the publication process, not peer review. The predicted differences between online and paper publication (− 183 days for publication and − 93 days for total delay times) are meaningful.

We found a weak association with impact factor: journals with higher impact factors were more likely to have shorter total delay times. The literature on this topic is conflicting: Kalcioglu et al. (2015) showed quite the opposite, with otorhinolaryngology journals with higher impact factors more likely to have longer acceptance and publication times, whereas two other studies targeting ophthalmology and Indian biomedical journals found no association (Chen et al. 2013; Shah et al. 2016). Our finding might be explained by the fact that high-impact journals generally have more resources; these resources may be partly used to identify problems in the peer-review process and to develop strategies to improve publication speed. Since the association with impact factor was weak, however, these differences could be due to chance and may represent spurious associations.

Finally, we showed a weak association with the first author’s location: authors affiliated with institutions in English-speaking countries were published more promptly than those affiliated with institutions in other countries. Note that the predicted differences are small (15 days for acceptance and 6 days for total delay times). In addition, since the association with the first author’s place of affiliation was weak, these differences could be due to chance.

It has been shown that trials with positive results are, in general, more likely to be published (publication bias), or to be published sooner (time-lag bias), than trials with negative or null results (Chen et al. 2013; Hopewell et al. 2007; Manzoli et al. 2014); in a systematic review, Hopewell et al. (2007) found that trials with positive results tended to be published on average 1–3 years earlier than those with null or negative results. That we found no association with study results could be explained by differences in outcomes (Hopewell’s study assessed time from enrollment, from completion of follow-up and from approval by the ethics committee to publication). Our findings are in line with a recent bibliometric study of clinical trials published in four high-impact general medical journals that found no time-lag bias (time from trial completion to publication) (Jefferson et al. 2016), which could reflect improved research practices, in particular the recent initiatives advocating that all trials be registered and reported (Manzoli et al. 2014; Jefferson et al. 2016).

We found no association with study design, thereby contradicting Palese’s study, which showed that systematic reviews, with or without meta-analyses, had the shortest total delay time (Palese et al. 2013). These contradictory results may be related to the fact that Palese’s study computed the time from the end of data collection.

Perspectives

The journal impact factor has been used for many years to evaluate the merit of individual researchers and to compare the relative importance of journals within a given field, those with higher impact factors usually being considered more important (Callaway 2016; Seglen 1997). Numerous criticisms have been made of its use and the way it is calculated, in particular that (1) the journal impact factor measures scientific use by other researchers rather than scientific quality, (2) it is not really representative of individual journal articles, (3) it is calculated in a way that introduces bias, and (4) there are important variations in citation habits between research fields (Callaway 2016; Seglen 1997). Despite these criticisms, many researchers base their submission decisions on the impact factor (Salinas and Munch 2015; Calcagno et al. 2012).

However, publication speed should also inform the decision of where to publish research (Salinas and Munch 2015). Indeed, knowledge diffusion, usually through publication in peer-reviewed journals, is often a slow process, and the amount of published research is known to affect researchers’ individual careers and the funding of new projects (Palese et al. 2013; Gagnon 2011). In addition, there is a need to make new scientific knowledge available as soon as possible, because publication delays could affect patient outcomes (Chen et al. 2013). For example, if a recent trial showed that a treatment was effective (or ineffective) for a given condition, doctors should have prompt access to these results for the benefit of their patients. As suggested by our study, online publication could offer researchers real potential for speeding up the publishing process.

Interestingly, a recent survey of researchers (n = 1038) showed that publication speed was the third most important factor in their choice of journal, after the paper’s fit with the subject area of the journal and the importance of the journal as measured by the impact factor (Solomon and Björk 2012). Though the peer review process is generally slow, the vast majority of researchers consider it essential to the communication of research, in particular because it helps to improve the quality of published papers (Mulligan et al. 2013).

There is currently a trend in biomedical sciences to publish preprints prior to submission (Peiperl and PLOS Medicine Editors 2018; Oakden-Rayner et al. 2018). Indeed, the large delays associated with formal publication in peer-reviewed scientific journals have led researchers to find faster ways to disseminate their results within the scientific community. A preprint is a version of a scientific paper that is uploaded by its author to a centralized online repository (preprint server), but has not yet been peer-reviewed for publication. Unlike formal publication, preprints can be made available within a few days and therefore accelerate the dissemination of new knowledge (Peiperl and PLOS Medicine Editors 2018; Oakden-Rayner et al. 2018). In addition, they are freely accessible to the scientific community and allow authors to receive early feedback, which gives them the opportunity to make corrections before submitting their manuscript (Peiperl and PLOS Medicine Editors 2018; Oakden-Rayner et al. 2018).

Limitations

Some limitations need to be pointed out when considering our results. First, we limited our study to journals of general internal medicine and primary care; our results are not necessarily generalizable to journals in other disciplines. Second, several high-impact journals were not included in our study because of missing data (dates of submission and/or publication unavailable), which also limits the generalizability of our results. Third, we limited data extraction to only 1 year (2016); it would have been interesting to assess trends, because the expansion of the internet might reduce publication times through online submission and publication, or conversely, increase them as a result of the growing workload (more papers submitted to scientific journals). Finally, double data extraction was carried out for only 15% of the papers; however, we believe the risk of information bias is low, because inter-rater concordance was higher than 95% for all variables except study design, and the latter was assessed with the support of the main investigators.

Conclusion

Knowledge diffusion is often a slow process. This study provides insight into the submission-to-acceptance, acceptance-to-publication and submission-to-publication times in general medical journals. Researchers interested in reducing publication delays when submitting their research to general medical journals should focus on journals with online publication, as well as on alternative forms of dissemination such as preprints.

Notes

Acknowledgements

We would like to warmly thank Amir Moussa, our research assistant. We would also like to thank Bernard Cerutti for methodological support, and Dagmar Haller, Arabelle Rieder and Leonardo Silvestri for their support and assistance throughout the study.

Author contributions

PS, JPF and HM were involved in the conception of the study. CR and PHG were involved in the data collection. PS and FH were involved in the data analysis. PS, JPF, CR, PHG and HM were involved in the data interpretation. PS drafted the first version of the manuscript. All authors read and approved the final manuscript. PS can be contacted for access to the dataset underlying the current analysis.

Compliance with ethical standards

Ethical approval

Not required (under Swiss law, informed consent is required when collecting personal health data, not when collecting bibliometric data).

Supplementary material

Supplementary material 1: 11192_2019_3061_MOESM1_ESM.xls (XLS 213 kb)

References

  1. Amat, C. B. (2008). Editorial and publication delay of papers submitted to 14 selected Food Research journals. Influence of online posting. Scientometrics, 74, 379–389.
  2. Björk, B.-C., & Solomon, D. (2013). The publishing delay in scholarly peer-reviewed journals. Journal of Informetrics, 7, 914–923.
  3. Calcagno, V., Demoinet, E., Gollner, K., Guidi, L., Ruths, D., & de Mazancourt, C. (2012). Flows of research manuscripts among scientific journals reveal hidden submission patterns. Science, 338, 1065–1069.
  4. Callaway, E. (2016). Beat it, impact factor! Publishing elite turns against controversial metric. Nature, 535, 210–211.
  5. Chen, H., Chen, C. H., & Jhanji, V. (2013). Publication times, impact factors, and advance online publication in ophthalmology journals. Ophthalmology, 120, 1697–1701.
  6. Dióspatonyi, I., Horvai, G., & Braun, T. (2001). Publication speed in analytical chemistry journals. Journal of Chemical Information and Computer Sciences, 41, 1452–1456.
  7. Dong, P., Loh, M., & Mondry, A. (2006). Publication lag in biomedical journals varies due to the periodical’s publishing model. Scientometrics, 69, 271–286.
  8. Ellison, G. (2002). The slowdown of the economics publishing process. Journal of Political Economy, 110, 947–993.
  9. Gagnon, M. L. (2011). Moving knowledge to action through dissemination and exchange. Journal of Clinical Epidemiology, 64, 25–31.
  10. Garg, K. C. (2016). Publication delay of manuscripts in periodicals published by CSIR-NISCAIR. Current Science, 111, 1924.
  11. Govender, P., Buckley, O., McAuley, G., O’Brien, J., & Torreggiani, W. C. (2008). Does online submission of manuscripts improve efficiency? JBR-BTR, 91, 231–234.
  12. Hopewell, S., Clarke, M., Stewart, L., & Tierney, J. (2007). Time to publication for results of clinical trials. Cochrane Database of Systematic Reviews, MR000011.
  13. Hozo, S. P., Djulbegovic, B., & Hozo, I. (2005). Estimating the mean and variance from the median, range, and the size of a sample. BMC Medical Research Methodology, 5, 13.
  14. Huth, E. J. (1999). Writing and publishing in medicine (3rd ed.). Baltimore: Williams & Wilkins.
  15. Jefferson, L., Fairhurst, C., Cooper, E., Hewitt, C., Torgerson, T., Cook, L., et al. (2016). No difference found in time to publication by statistical significance of trial results: A methodological review. JRSM Open, 7, 2054270416649283.
  16. Kalcioglu, M. T., Ileri, Y., Karaca, S., Egilmez, O. K., & Kokten, N. (2015). Research on the submission, acceptance and publication times of articles submitted to international otorhinolaryngology journals. Acta Informatica Medica, 23, 379–384.
  17. Katz, M. H. (2006). Multivariable analysis: A practical guide for clinicians (2nd ed.). Cambridge: Cambridge University Press.
  18. Manzoli, L., Flacco, M. E., D’Addario, M., Capasso, L., De Vito, C., Marzuillo, C., et al. (2014). Non-publication and delayed publication of randomized trials on vaccines: survey. BMJ, 348, g3058.
  19. Mulligan, A., Hall, L., & Raphael, E. (2013). Peer review in a changing world: An international study measuring the attitudes of researchers. Journal of the Association for Information Science and Technology, 64, 132–161.
  20. Oakden-Rayner, L., Beam, A. L., & Palmer, L. J. (2018). Medical journals should embrace preprints to address the reproducibility crisis. International Journal of Epidemiology, 47, 1363–1365.
  21. Palese, A., Coletti, S., & Dante, A. (2013). Publication efficiency among the higher impact factor nursing journals in 2009: A retrospective analysis. International Journal of Nursing Studies, 50, 543–551.
  22. Peiperl, L., & PLOS Medicine Editors. (2018). Preprints in medical research: Progress and principles. PLoS Medicine, 15, e1002563.
  23. Publication cycle: A study of the Public Library of Science (PLOS) [Internet]. Authorea [cited December 9, 2018]. https://www.authorea.com/users/2013/articles/36067-publication-cycle-a-study-of-the-public-library-of-science-plos/_show_article.
  24. Regression with clustered data [Internet] [cited December 29, 2017]. http://www.philender.com/courses/linearmodels/notes3/cluster.html.
  25. Regression with Stata chapter 1: Simple and multiple regression [Internet]. IDRE Stats [cited December 29, 2017]. https://stats.idre.ucla.edu/stata/webbooks/reg/chapter1/regressionwith-statachapter-1-simple-and-multiple-regression/.
  26. Regression with Stata chapter 2: Regression diagnostics [Internet]. IDRE Stats [cited December 29, 2017]. https://stats.idre.ucla.edu/stata/webbooks/reg/chapter2/stata-webbooksregressionwith-statachapter-2-regression-diagnostics/.
  27. Salinas, S., & Munch, S. B. (2015). Where should I send it? Optimizing the submission decision process. PLoS ONE, 10, e0115451.
  28. Seglen, P. O. (1997). Why the impact factor of journals should not be used for evaluating research. BMJ, 314, 498–502.
  29. Shah, A., Sherighar, S. G., & Bhat, A. (2016). Publication speed and advanced online publication: Are biomedical Indian journals slow? Perspectives in Clinical Research, 7, 40–44.
  30. Solomon, D. J., & Björk, B.-C. (2012). Publication fees in open access publishing: Sources of funding and factors influencing choice of journal. Journal of the Association for Information Science and Technology, 63, 98–107.
  31. Stamm, T., Meyer, U., Wiesmann, H.-P., Kleinheinz, J., Cehreli, M., & Cehreli, Z. C. (2007). A retrospective analysis of submissions, acceptance rate, open peer review operations, and prepublication bias of the multidisciplinary open access journal Head & Face Medicine. Head and Face Medicine, 3, 27.

Copyright information

© The Author(s) 2019

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. Primary Care Unit, Faculty of Medicine, University of Geneva, Geneva, Switzerland
  2. Department of General Practice, Faculty of Medicine, University of Nantes, Nantes, France
  3. Collège universitaire de médecine générale, Université de Lyon, Lyon, France
  4. Department of Internal Medicine, Rehabilitation and Geriatrics, Geneva University Hospitals and University of Geneva, Geneva, Switzerland
