Journal of General Internal Medicine, Volume 33, Issue 7, pp 1002–1003

Peer Review of Abstracts Submitted to an Internal Medicine National Meeting: Is It a Predictor of Future Publication?

  • Cecilia Scholcoff
  • Payal Sanghani
  • Wilkins Jackson
  • Heidi M. Egloff
  • Adam P. Sawatsky
  • Jeffrey L. Jackson
Concise Research Reports


Keywords: publication rate · research abstract · scientific conference · medicine · research


Scientific meetings are often the first step to sharing new research, but journal publication of that research is vital for dissemination. Prior studies are mixed about what specific factors are associated with subsequent high-impact publication of abstracts submitted to scientific meetings.1,2 While peer review by medical journals is reasonably successful in selecting high-impact articles,3,4 the evidence is less clear for peer review of abstracts, which contain less information and are potentially more difficult to assess. In addition, abstract reviewers for scientific meetings often have 10–20 diverse submissions to review. Peer review of the Society of General Internal Medicine (SGIM) meeting submissions has demonstrated internal consistency for clinical vignettes,5 but poor interrater reliability for scientific abstracts.6 In this study, we hypothesized that abstract acceptance predicts eventual publication and that those publications will have higher impact compared to publications resulting from rejected abstracts.


We conducted a retrospective study of scientific abstracts submitted to the SGIM 2009 Annual Meeting. Submitted abstracts underwent unblinded peer review in four domains: importance of question, appropriateness of methods, validity of conclusion, and quality of writing. Each abstract was evaluated by 4–6 reviewers who were volunteer members of SGIM. Each item was scored from 0 to 7; the decision to accept an abstract was based on averaged scores from these items. We determined publication rates for all submitted abstracts by searching abstract authors and title keywords in MEDLINE and Google Scholar through August 2017, matching title and content to the original submission. We assessed impact by determining the number of citations for each article using Web of Science for 2 years after publication. We used analysis of variance or t tests for continuous variables and chi-square for categorical ones (Stata v14.2, StataCorp, College Station, TX).
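The scoring procedure described above (four domain items, each rated 0–7 by several reviewers, then averaged) can be sketched as follows. This is an illustrative example only; the reviewer scores are hypothetical, not data from the study.

```python
from statistics import mean

# Hypothetical scores (0-7) from three volunteer reviewers across the
# four SGIM domains: importance, methods, validity, writing.
reviews = [
    (5, 4, 5, 6),
    (6, 5, 4, 5),
    (4, 4, 5, 4),
]

# The acceptance decision was based on the score averaged over items and reviewers.
overall = mean(mean(r) for r in reviews)
print(f"{overall:.2f}")  # 4.75
```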


Among 702 scientific abstract submissions, 584 (83%) were accepted, 377 as posters and 207 as oral presentations (Table 1). Overall, the abstract review rating scores averaged 4.7 (range, 2.2 to 6.6). There was a stepwise increase in scores from rejected to poster to oral presentations (Table 1). Among the 702 submissions, 323 (46%) were eventually published. There was no difference in the likelihood of publication between accepted and rejected abstracts (OR 1.3, 95% CI 0.9–1.9), although oral abstracts were more likely to be published than rejections or posters (OR 1.5, 95% CI 1.1–2.0). While there was no difference in the article impact factor between rejected abstracts and poster presentations (p = 0.78), oral presentations had greater impact (10.6 vs 6.9, p = 0.02) than either poster or rejected abstracts. There was a weak correlation between the reviewers’ abstract score and the impact factor of published articles (Pearson’s rho = 0.19).
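As a check, the accepted-versus-rejected odds ratio reported above can be reproduced from the publication counts in Table 1 (277/584 accepted abstracts published vs. 50/123 rejected abstracts published). The sketch below uses a standard Wald confidence interval on the log odds ratio; the original analysis was run in Stata, so this Python version is a reconstruction, not the authors' code.

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Wald 95% CI for a 2x2 table:
    a = group 1 with event, b = group 1 without event,
    c = group 2 with event, d = group 2 without event."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Table 1 counts: accepted 277 of 584 published, rejected 50 of 123 published
or_, lo, hi = odds_ratio_ci(277, 584 - 277, 50, 123 - 50)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR 1.32 (95% CI 0.89-1.95)
```

Rounded to one decimal place, this matches the reported OR 1.3 (95% CI 0.9–1.9).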
Table 1
Peer Review Scores, Publication Rate, and Impact by Status of Submission

| | Rejected abstracts (n = 123) | Accepted abstracts (n = 584) | P value (rejected vs. accepted) | Poster abstracts (n = 377) | Oral abstracts (n = 207) | P value (rejected vs. poster vs. oral) |
|---|---|---|---|---|---|---|
| Average peer review score (SD) | 3.6 (0.56) | 4.8 (0.42) | < 0.00001 | 4.5 (0.38) | 5.3 (0.47) | < 0.0005 |
| Publication rate (n, %) | 50 (41%) | 277 (47%) | | 167 (44%) | 110 (53%) | |
| Article impact (SD) | 7.3 (12.5) | 8.1 (13.1) | | 6.8 (11.3) | 10.6 (16.3) | |


Our study demonstrates limited evidence of the predictive validity of peer review to identify abstracts likely to result in eventual publication. Additionally, peer review scores correlated only weakly with article impact factor. The stepwise increase in ratings from rejected to poster to oral presentation is not surprising since decisions about whether to accept or reject and whether to give oral or poster presentations were based on these ratings. To encourage meeting attendance, the SGIM maximizes opportunity for poster presentations. This could explain why there was no difference in publication or impact between rejected abstracts and poster presentations. It is also possible that an oral acceptance leads to greater institutional attention and coaching for the presenter and therefore the potential for greater motivation and support to publish after the meeting.

Our study has several limitations. First, the peer review process for acceptance of meeting abstracts was not blinded, making it susceptible to bias. Next, we did not contact authors regarding publication and may have missed other published studies. Lastly, it is possible, although unlikely based on previous work, that an abstract was published after the 8-year time frame of our study.4

Our results suggest that abstracts selected for oral presentation by the peer review system often produce high-impact publications, partially vindicating the process. Just as important, many rejected abstracts are subsequently published with similar impact factors to abstracts that were accepted as posters, which should encourage authors to seek publication of rejected abstracts.


Compliance with Ethical Standards

Conflict of Interest

The authors declare that they do not have a conflict of interest.


  1. de Meijer VE, Knops SP, van Dongen JA, Eyck BM, Vles WJ. The fate of research abstracts submitted to a national surgical conference: a cross-sectional study to assess scientific impact. Am J Surg. 2016;211(1):166–71.
  2. Egloff HM, West CP, Wang AT, Lowe KM, Varayil JE, Beckman TJ, Sawatsky AP. Publication rates of abstracts presented at the Society of General Internal Medicine annual meeting. J Gen Intern Med. 2017;32(6):673–8.
  3. Bormann L, Daniel HD. The usefulness of peer review for selecting manuscripts for publication: a utility analysis taking as an example a high-impact journal. PLoS One. 2010;28:e11344.
  4. Jackson JL, Srinivasan M, Rea J, Fletcher KE, Kravitz RL. The validity of peer review in a general medicine journal. PLoS One. 2011;6(7):e22475.
  5. Newsom J, Estrada CA, Panisko D, Willett L. Selecting the best clinical vignettes for academic meetings: should the scoring tool criteria be modified? J Gen Intern Med. 2012;27(2):202–6.
  6. Rubin HR, Redelmeier DA, Wu AW, Steinberg EP. How reliable is peer review of scientific abstracts? Looking back at the 1991 annual meeting of the Society of General Internal Medicine. J Gen Intern Med. 1993;8(5):255–8.

Copyright information

© Society of General Internal Medicine (outside the USA) 2018

Authors and Affiliations

  • Cecilia Scholcoff (1, 2)
  • Payal Sanghani (1, 2)
  • Wilkins Jackson (3)
  • Heidi M. Egloff (4)
  • Adam P. Sawatsky (5)
  • Jeffrey L. Jackson (1, 2)

  1. Zablocki Veterans Affairs Medical Center, Milwaukee, USA
  2. Medical College of Wisconsin, Milwaukee, USA
  3. University of Wisconsin-Milwaukee, Milwaukee, USA
  4. University of Michigan Health System, Ann Arbor, USA
  5. Mayo Clinic, Rochester, USA
