Journal of General Internal Medicine, Volume 30, Issue 8, pp 1172–1177

Association Between Study Quality and Publication Rates of Medical Education Abstracts Presented at the Society of General Internal Medicine Annual Meeting

  • Adam P. Sawatsky
  • Thomas J. Beckman
  • Jithinraj Edakkanambeth Varayil
  • Jayawant N. Mandrekar
  • Darcy A. Reed
  • Amy T. Wang
Original Research

ABSTRACT

Background

Studies reveal that 44.5 % of abstracts presented at national meetings are subsequently published in indexed journals, with lower rates for abstracts of medical education scholarship.

Objective

We sought to determine whether the quality of medical education abstracts is associated with subsequent publication in indexed journals, and to compare the quality of medical education abstracts presented as scientific abstracts versus innovations in medical education (IME).

Design

Retrospective cohort study.

Participants

Medical education abstracts presented at the Society of General Internal Medicine (SGIM) 2009 annual meeting.

Main Measures

Publication rates were measured using database searches for full-text publications through December 2013. Quality was assessed using the validated Medical Education Research Study Quality Instrument (MERSQI).

Key Results

Overall, 64 (44 %) medical education abstracts presented at the 2009 SGIM annual meeting were subsequently published in indexed medical journals. The MERSQI demonstrated good inter-rater reliability (intraclass correlation range, 0.77–1.00) for grading the quality of medical education abstracts. MERSQI scores were higher for published versus unpublished abstracts (9.59 vs. 8.81, p = 0.03). Abstracts with a MERSQI score of 10 or greater were more likely to be published (OR 3.18, 95 % CI 1.47–6.89, p = 0.003). MERSQI scores were higher for scientific versus IME abstracts (9.88 vs. 8.31, p < 0.001). Publication rates were higher for scientific abstracts (42 [66 %] vs. 37 [46 %], p = 0.02) and oral presentations (15 [23 %] vs. 6 [8 %], p = 0.01).

Conclusions

The publication rate of medical education abstracts presented at the 2009 SGIM annual meeting was similar to reported publication rates for biomedical research abstracts, but higher than rates previously reported for medical education abstracts at other meetings. Higher MERSQI scores were associated with subsequent publication, suggesting that attention to measures of quality, such as sampling, instrument validity, and data analysis, may improve the likelihood that medical education abstracts will be published.

KEY WORDS

medical education; medical education research; quality; publication

INTRODUCTION

Publication in indexed journals is a standard criterion for scholarly recognition and academic promotion.1 While abstracts presented at national meetings are important for disseminating research findings, limitations include small audiences, the absence of rigorous peer review, and abbreviated presentation of data. A recent Cochrane review found that only 44.5 % of abstracts presented at national meetings were subsequently published in indexed journals.2 This study included a broad range of biomedical abstracts from over 20 medical specialties, pharmacy, and dentistry, but did not evaluate medical education research. Three studies that have examined publication rates for medical education research abstracts revealed rates of 33 to 35 %, lower than those for biomedical research abstracts.3–5

Publication in peer-reviewed journals is the cornerstone for promotion at academic medical centers. This has been widely recognized as a challenge for the academic advancement of clinician-educators, whose primary roles are teaching and providing clinical care.6 Accordingly, several studies have demonstrated that clinician-educators are less likely than clinician-investigators to achieve high academic rank.7,8 While many academic centers have developed clinician-educator tracks with different promotion criteria,9 half of division directors in general internal medicine still consider publication of original research as very important for the academic advancement of clinician-educators.10

Of the 79 reports included in the Cochrane review on abstract publication rates,2 only three have directly assessed the association between abstract quality and subsequent publication.11–13 Callahan found a statistically significant association between higher quality, as defined by a review panel using an adapted quality measure instrument, and rates of publication (RR 1.46 [1.20, 1.79]).11 Pooling of the three studies, however, did not show an association between abstract quality and publication likelihood (RR 1.24 [0.97, 1.58]).2 In the medical education literature, only surrogates of abstract quality—including presentation format (oral vs. poster), type of scholarship (research vs. education innovation), study design, and author characteristics (level of training, number of publications)—have been assessed, demonstrating that oral versus poster presentations, multi-center versus single-center studies, and scientific versus innovation in medical education abstracts have been associated with increased odds of publication.4,5 Using the validated Medical Education Research Study Quality Instrument (MERSQI), Reed et al. demonstrated that the quality of medical education manuscripts predicted acceptance in peer-reviewed journals.14 However, it is unknown whether this measure of quality is associated with subsequent publication of medical education abstracts in indexed journals.

Therefore, we endeavored to determine the following: 1) peer-reviewed publication rates of medical education abstracts presented at an annual meeting of the Society of General Internal Medicine (SGIM), 2) whether abstract quality as determined by the MERSQI was associated with subsequent publication, and 3) differences in publication rates and quality of scientific abstracts versus innovations in medical education (IME).

METHODS

We conducted a retrospective study of abstracts accepted to the 2009 SGIM annual meeting. Based on previous literature, the 2009 meeting was chosen to allow sufficient time for abstracts to be published.2,4 We included all scientific and IME abstracts that described medical education research as outlined by the 2006 Consensus Conference on Educational Scholarship,15 and included education at the undergraduate, graduate, and continuing medical education levels. We excluded abstracts that focused on patient education or general medical research.

Based on data from previous research,2–5 we identified several explanatory variables that could affect subsequent publication. Incorporating these variables, we abstracted data on the type of research (quantitative vs. qualitative), study population (undergraduate [UME], graduate [GME], or continuing medical education [CME]), presentation type (oral vs. poster), and submission category (scientific research vs. IME). We used the MERSQI, a previously validated tool with strong content, criterion, and predictive validity evidence for quantitative research, to assess the quality of quantitative medical education abstracts.14,16,17 The MERSQI contains 10 items (overall score range 5–18) within six domains of study quality: study design, sampling, data type, validity of assessments, data analysis, and outcomes. The complete MERSQI can be found in the original MERSQI validation study.16 To standardize our application of the MERSQI, three authors (AS, TB, and AW) independently rated five abstracts, resolved differences by consensus, and reached satisfactory agreement. Prior to determination of publication status, two authors (AS and AW) independently rated the remaining abstracts. Raters were blinded to publication data but not to abstract authors and institutions. Inter-rater reliability for each of the MERSQI items was measured using intraclass correlation coefficients (ICC).18
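An inter-rater reliability calculation of this kind can be sketched in code. Below is a minimal pure-Python implementation of a two-way random-effects intraclass correlation, ICC(2,1); the function name, the specific ICC form, and the example ratings are illustrative assumptions, since the paper does not report its software or the exact ICC model used:

```python
def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `scores` is a list of rows, one row per rated subject (abstract),
    each row holding one score per rater.
    """
    n = len(scores)        # number of subjects
    k = len(scores[0])     # number of raters
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(row[j] for row in scores) / n for j in range(k)]

    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    ss_err = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical MERSQI totals from two raters on four abstracts:
print(round(icc_2_1([[9, 9], [11, 12], [7, 7], [10, 10]]), 2))  # → 0.97
```

Identical ratings across raters yield an ICC of 1.0; disagreement between raters lowers the coefficient toward 0.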

The primary outcome was full-article publication in indexed medical journals. We also assessed time to publication. To identify subsequent full publication in an indexed journal, two authors (AS and JE) independently searched PubMed, ISI Web of Knowledge, and Google Scholar for full-text publications through December 2013, using a combination of first, second, and last authors’ names as well as keywords from the title. The title, authors, methodology, and results of the published article were compared to the original abstract to confirm matches. Disagreements between independent reviewers were reviewed by a third author (AW), and consensus was reached through discussion. Time to publication (months) was determined as the length of time from the presentation of the abstract in April 2009 to the month when the full article was published.

Total MERSQI scores and time to publication were described using means and standard deviations. In comparisons of MERSQI scores with independent variables, p values for dichotomous variables (e.g., publication, abstract type, presentation type) were determined using the two-sided t test. For variables with more than two categories (e.g., study population), p values were calculated using the Kruskal–Wallis test. To determine a quality threshold for the MERSQI that might predict subsequent publication, we examined previous studies showing average MERSQI scores for published studies of 9.6,14 9.95,16 and 9.94.17 A study by Reed et al. demonstrated an average MERSQI score of 10.7 (SD 2.5) for published studies, compared to 9.0 (SD 2.4) for rejected studies.14 We therefore used a threshold MERSQI score of 10 or greater, calculated the odds of publication, and assessed statistical significance with the chi-squared test. For comparisons of published versus unpublished abstracts and scientific versus IME abstracts, p values were determined using the two-sided t test.
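The threshold analysis reduces to an odds ratio on a 2 × 2 table of publication counts. A minimal sketch with a Wald 95 % confidence interval follows; the cell counts are hypothetical, since the paper does not report the table at this level of detail:

```python
import math

def odds_ratio(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95 % CI for a 2x2 table:

                 published  unpublished
    MERSQI >= 10     a          b
    MERSQI < 10      c          d
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 10/15 high-scoring vs. 4/24 low-scoring abstracts published.
or_, lo, hi = odds_ratio(10, 5, 4, 20)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # → 10.0 2.19 45.64
```

A confidence interval that excludes 1.0, as in the study's reported OR of 3.18 (95 % CI 1.47–6.89), indicates a statistically significant association between the score threshold and publication.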

This study was approved by the Mayo Clinic Institutional Review Board.

RESULTS

Of 651 scientific abstracts reviewed, 144 met the criteria for medical education scholarship. Overall, 64 (44 %) medical education abstracts that were presented at the 2009 SGIM annual meeting were subsequently published in indexed medical journals. The mean time to publication was 21 months. The majority of abstracts were quantitative (120 [83 %]) and involved GME (75 [52 %]) and UME (44 [31 %]), with only a minority involving CME (7 [5 %]) or mixed populations (18 [13 %]). The average MERSQI score for all quantitative medical education abstracts was 9.15 (SD 1.91, range 5–14; Table 1). Inter-rater reliability (ICC range, 0.77–1.00) was substantial for the MERSQI item “outcomes” and almost perfect for all other items (Table 2).19
Table 1

Characteristics of Education Abstracts Presented at the SGIM National Meeting and MERSQI Scores

| Category | Variable | N* | Mean MERSQI score | SD | p value |
| --- | --- | --- | --- | --- | --- |
| MERSQI score | Total | 119 | 9.15 | 1.91 | NA |
| MERSQI domain | Design | 119 | 1.47 | 0.47 | NA |
| | Sampling | 119 | 1.24 | 0.68 | |
| | Data | 119 | 2.05 | 1.02 | |
| | Validity | 119 | 0.29 | 0.59 | |
| | Analysis | 119 | 2.26 | 0.63 | |
| | Outcome | 119 | 1.84 | 0.60 | |
| Publication | Yes | 52 | 9.59 | 1.84 | 0.03 |
| | No | 67 | 8.81 | 1.91 | |
| Education specialty | UME | 35 | 8.73 | 2.04 | 0.29 |
| | GME | 67 | 9.31 | 1.75 | |
| | CME | 4 | 10.25 | 2.90 | |
| | Mixed | 13 | 9.15 | 2.02 | |
| Abstract type | Scientific | 64 | 9.88 | 1.84 | <0.001 |
| | IME | 55 | 8.31 | 1.64 | |
| Presentation | Oral | 20 | 10.62 | 1.74 | <0.001 |
| | Poster | 99 | 8.87 | 1.84 | |
| Author | No publications | 8 | 8.75 | 1.36 | 0.54 |
| | At least one publication | 111 | 9.18 | 1.94 | |

*The N for calculating MERSQI corresponds to the number of abstracts describing quantitative studies, excluding one study that presented conflicting data.

MERSQI Medical Education Research Study Quality Instrument, UME undergraduate medical education, GME graduate medical education, CME continuing medical education, IME innovations in medical education

Table 2

Inter-rater Reliability of MERSQI Scores

| MERSQI item | Intraclass correlation coefficient |
| --- | --- |
| Study design | 0.99 |
| No. of institutions studied | 0.86 |
| Response rate | 0.99 |
| Type of data | 0.92 |
| Internal structure | 0.88 |
| Content | 1.00 |
| Relationship to other variables | 1.00 |
| Appropriateness of data analysis | 1.00 |
| Complexity of analysis | 0.96 |
| Outcomes | 0.77 |

MERSQI Medical Education Research Study Quality Instrument

Overall, MERSQI scores were higher for published versus unpublished abstracts (9.59 vs. 8.81, p = 0.03), with a significant difference in the domain of data analysis (2.42 vs. 2.14, p = 0.01) and non-significant trends towards higher scores in the study design and outcomes domains (Table 3). Abstracts with a MERSQI score of 10 or greater were three times as likely to be published as abstracts with a MERSQI score of less than 10 (OR 3.18, 95 % CI 1.47–6.89, p = 0.003). Higher rates of publication were also found for scientific abstracts (42 [66 %] vs. 37 [46 %], p = 0.02) and for oral presentations (15 [23 %] vs. 6 [8 %], p = 0.01). Abstracts presented as oral presentations also had higher MERSQI scores compared to poster presentations (10.62 vs. 8.87, p < 0.001).
Table 3

Published Versus Non-Published Medical Education Abstracts Presented at the SGIM National Meeting

| Category | Published abstracts (n = 64) | Non-published abstracts (n = 80) | p value |
| --- | --- | --- | --- |
| Quantitative | 53 (83 %) | 67 (84 %) | 0.88 |
| Population: UME | 17 (27 %) | 27 (34 %) | |
| Population: GME | 33 (52 %) | 42 (53 %) | |
| Population: CME | 3 (5 %) | 4 (5 %) | |
| Population: Mixed/other | 11 (17 %) | 7 (9 %) | |
| Abstract type: Scientific | 42 (66 %) | 37 (46 %) | 0.02 |
| Presentation: Oral | 15 (23 %) | 6 (8 %) | 0.01 |

| MERSQI domain | Published abstracts (n = 52)* | Non-published abstracts (n = 67) | p value |
| --- | --- | --- | --- |
| Study design | 1.57 (0.52) | 1.40 (0.40) | 0.05 |
| Sampling | 1.29 (0.74) | 1.16 (0.57) | 0.32 |
| Type of data | 2.08 (1.00) | 2.03 (1.03) | 0.80 |
| Validity | 0.25 (0.59) | 0.33 (0.59) | 0.47 |
| Data analysis | 2.42 (0.50) | 2.14 (0.69) | 0.01 |
| Outcomes | 1.94 (0.60) | 1.75 (0.59) | 0.09 |
| Total | 9.59 (1.84) | 8.81 (1.91) | 0.03 |

*The N for calculating MERSQI corresponds to the number of abstracts describing quantitative studies, excluding one study that presented conflicting data.

MERSQI Medical Education Research Study Quality Instrument, UME undergraduate medical education, GME graduate medical education, CME continuing medical education

Scientific medical education abstracts were more likely to be published than IME abstracts (42 [53 %] vs. 22 [34 %], p = 0.02). Similarly, MERSQI scores were higher for scientific abstracts than for IME abstracts (9.88 vs. 8.31, p < 0.001). Specifically, scientific abstracts scored significantly higher than IME abstracts in the following MERSQI domains: sampling (number of institutions studied and response rate), validity of evaluation instrument (content and internal structure), and data analysis (appropriateness and complexity of analysis) (Table 4).
Table 4

Scientific Versus Innovations in Medical Education Abstracts Presented at the SGIM National Meeting

| Category | Scientific abstracts (n = 79) | Innovations in medical education (n = 65) | p value |
| --- | --- | --- | --- |
| Quantitative | 65 (82 %) | 55 (85 %) | 0.71 |
| Population: UME | 24 (30 %) | 20 (31 %) | |
| Population: GME | 39 (49 %) | 36 (55 %) | |
| Population: CME | 4 (5 %) | 3 (5 %) | |
| Population: Mixed/other | 12 (15 %) | 6 (9 %) | |
| Published | 42 (53 %) | 22 (34 %) | 0.02 |
| Time to publication in months (SD) | 18.74 (9.88) | 25.41 (10.87) | 0.01 |

| MERSQI domain | Scientific abstracts (n = 64)* | Innovations in medical education (n = 55) | p value |
| --- | --- | --- | --- |
| Study design | 1.50 (0.50) | 1.44 (0.42) | 0.46 |
| Sampling | 1.42 (0.77) | 1.03 (0.49) | 0.001 |
| Type of data | 2.06 (1.01) | 2.04 (1.04) | 0.89 |
| Validity | 0.47 (0.71) | 0.09 (0.29) | <0.001 |
| Data analysis | 2.52 (0.59) | 1.96 (0.54) | <0.001 |
| Outcomes | 1.91 (0.59) | 1.75 (0.60) | 0.17 |
| Total | 9.88 (1.84) | 8.31 (1.64) | <0.001 |

*The N for calculating MERSQI corresponds to the number of abstracts describing quantitative studies, excluding one study that presented conflicting data.

MERSQI Medical Education Research Study Quality Instrument, UME undergraduate medical education, GME graduate medical education, CME continuing medical education

DISCUSSION

Our study demonstrated a peer-reviewed journal publication rate of 44 % for medical education abstracts presented at the 2009 SGIM annual meeting. We also found a positive association between abstract quality, as determined by MERSQI score, and subsequent publication in indexed journals. Furthermore, abstracts presented as scientific abstracts had higher publication rates and were of higher quality than those presented as IME.

Rates of peer-reviewed publication for medical education abstracts in this study were similar to previously reported rates for traditional biomedical research,2 and were higher than rates reported for medical education scholarship presented at the Association of American Medical Colleges' Research in Medical Education (RIME) conference (37 %),4 the Canadian Conference on Medical Education (CCME; 32 %),4 the Clerkship Directors in Internal Medicine (CDIM) national meeting (35 %),3 and the Council on Medical Student Education in Pediatrics (COMSEP) annual meeting (34 %).5 There may be several reasons for these differences in publication rates. First, variations in the calls for abstracts, requests for structured (vs. unstructured) abstracts, and conference formats might affect the quality of abstracts and, therefore, the likelihood of subsequent publication. The three previous reports noted were from dedicated medical education conferences, whereas the SGIM annual meeting has a broader audience, with an emphasis on biomedical research, and only a small proportion of accepted abstracts (144/651 [22 %] in our study) address medical education scholarship. The potential competition between biomedical and education research abstracts submitted to SGIM may drive submission and acceptance of higher-quality medical education research abstracts than at meetings that solicit only medical education research.

The varying rates of publication may reflect differences in the time frames over which abstracts were identified.3–5 The report on COMSEP abstracts spanned many years and showed a growing percentage of publication over time, which would affect the overall average publication rate.5 The report on CDIM abstracts also spanned many years (1995–2005), although the results did not demonstrate a significant increase in publication rates after 2002.3 Moreover, the CDIM abstract report is 10 years old, and any increase in quality may have occurred only more recently.3 Similarly, the quality of SGIM abstracts may have increased over the years, but our study findings would reflect only the most recent (and potentially higher) publication rates and quality scores.

There may be other differences in abstract quality between our study and previous reports. The report on COMSEP abstracts had a smaller proportion of research abstracts (42 %),5 and the CDIM abstract report contained a large percentage of purely descriptive studies (56 %),3 both of which might reflect lower abstract quality. This was not true, however, for the report on RIME and CCME abstracts, in which a higher proportion of accepted abstracts were categorized as research (67 %), yet the publication rate was still lower (39 %).4

We also found that IME abstracts, which are largely reports of novel ideas in curriculum development, had lower quality scores and lower publication rates than scientific medical education abstracts. The SGIM call for abstracts defines IME as “innovative scholarly activities in medical education that are currently in progress or that have been completed.”20 While it might seem that a work in progress would not be fairly adjudicated by the MERSQI, the submission structure includes categories of setting and participants, description, and evaluation.20 Most MERSQI domains are not affected by whether the abstract is a work in progress or a completed project; the exception is response rate reporting, which could yield up to an additional point for a response rate over 74 %. Our findings are consistent with the results of Reed et al., who examined MERSQI scores for full manuscripts of both original research and education innovations.14 In that study, the average MERSQI score was 10.3 (SD 2.2) for original articles, compared to 8.3 (SD 2.7) for educational innovations, comparable to the difference of 1.57 in our study.14 That study also demonstrated a lower acceptance rate of 23 % for educational innovations, compared to 42 % for scientific medical education submissions. Furthermore, Reed et al. found that educational innovations scored significantly lower in the same domains that we identified (sampling, validity, and data analysis).14 These might be areas to focus on when designing curricular innovations in order to enhance the likelihood of publication. The consistency between our work and previous studies suggests that these are appropriate areas for improvement in educational innovations, regardless of whether they are in developmental stages or have been completed, and that the differences are not an artifact of the conference abstract format.

We suspect that IME abstracts scored lower because of goals inherent in educational innovations: they are often designed as specific curricula or novel solutions to common educational problems, intended to enhance local teaching environments rather than necessarily to produce peer-reviewed publications.

We found a significant and meaningful difference in MERSQI scores of 0.80 (9.6 [SD 1.8] vs. 8.8 [SD 1.9]) between published and unpublished abstracts. This is comparable to the difference of 1.7 (10.7 [SD 2.5] vs. 9.0 [SD 2.4]) between accepted and rejected articles submitted to a Journal of General Internal Medicine medical education special issue.14 In that study, a 1.0-point increase in the MERSQI score was associated with more than a 30 % higher likelihood of publication. Another study, in which the MERSQI was applied to a broad sample of published medical education research, found that higher MERSQI scores were associated with higher 3-year citation rates (0.8-point increase in score per 10 citations) and a higher impact factor of the journal of publication (1.0-point increase in score per 6-unit increase in impact factor).16

In the Cochrane review of 79 studies on abstract publication rates, many different surrogates were used to measure study quality: oral versus poster presentation (12 reports), sample size (7 reports), randomized or controlled trials versus other study designs (9 reports), and multi- versus single-center studies (5 reports). Only three of these studies assessed overall study quality with an assessment tool.11–13 Our study utilized the MERSQI, a validated tool designed specifically for assessing the quality of medical education research.

The MERSQI has not been previously validated for measuring the quality of abstracts of medical education research. Prior studies using quality measures either developed their own instrument or modified an existing instrument that was originally validated on full manuscripts.11–13 Our study presents new evidence of validity for the utilization of MERSQI to rate abstract quality. We demonstrated content validity (required abstract components correspond to existing MERSQI domains), internal structure (excellent inter-rater reliability), and correlations to other variables (significant differences in MERSQI scores for published vs. unpublished abstracts and abstracts selected for oral vs. poster presentation). This initial strong validity evidence is promising, although additional research is needed to further explore the validity of MERSQI for evaluating abstracts, including comparing MERSQI scores of abstracts with subsequently published articles.

Improving the quality of medical education research will serve both individual clinician-educators and the field of medical education as a whole. Publication in peer-reviewed journals is likely to advance the promotion of clinician-educators and to facilitate the dissemination of effective teaching strategies among medical educators.1,8,9 Research has shown that consumers of medical education literature find the greatest value in novel, provocative research findings and methodologically sound research; other important features include relevance, feasibility, and connection to a conceptual framework.21 Use of the MERSQI may enhance methodological quality in evaluating curricula and performing medical education research.22

There are several limitations to our study. First, the MERSQI has not been previously validated for grading abstracts, which by their nature omit information found in full-length manuscripts. Abstracts are therefore more likely to receive lower scores, especially when elements such as descriptions of instrument validity and statistical analyses are missing. Nonetheless, we demonstrated excellent inter-rater reliability and correlation with other markers of study quality, including scientific abstract versus IME submission and oral versus poster presentation. Second, some aspects of the MERSQI, such as sampling and study design, may not be included in IME abstracts, which would tend to lower IME abstract quality scores. Lastly, the authors were blinded to publication status when determining MERSQI scores, but not to authors or institutions, which may have influenced scoring. To reduce this bias, two reviewers scored all abstracts independently and in duplicate.

CONCLUSIONS

In this study, we demonstrated an association between a validated measure of study quality and subsequent full-text publication of medical education research abstracts. We also presented evidence of validity for use of the MERSQI to measure the quality of medical education research abstracts. The journal publication rate of medical education abstracts presented at the 2009 SGIM annual meeting was similar to previously reported publication rates for biomedical research abstracts, but was higher than publication rates for medical education abstracts presented at other scientific meetings. These findings suggest that attention to measures of quality, such as study design, sampling, type of data, instrument validity, data analysis, and outcomes, may improve the likelihood that medical education abstracts will be published in indexed journals. Our findings also indicate that education innovation projects that incorporate multiple institutions, validated evaluation instruments, and appropriate statistical analyses are more likely to be published.

Notes

Acknowledgments

This study was presented at the Society of General Internal Medicine annual meeting on April 24, 2014, in San Diego, CA.

Conflict of Interest

The authors each declare that they have no conflict of interest.

REFERENCES

  1. Atasoylu AA, Wright SM, Beasley BW, Cofrancesco J Jr, Macpherson DS, Partridge T, Thomas PA, Bass EB. Promotion criteria for clinician-educators. J Gen Intern Med. 2003;18(9):711–6.
  2. Scherer RW, Langenberg P, von Elm E. Full publication of results initially presented in abstracts. Cochrane Database Syst Rev. 2007;2:MR000005.
  3. Papp KK, Baker EA, Dyrbye LN, Elnicki DM, Hemmer PA, Mechaber AJ, Mintz M, Durning SJ. Analysis and publication rates of Clerkship Directors in Internal Medicine (CDIM) annual meeting abstracts 1995–2005. Teach Learn Med. 2011;23(4):342–6.
  4. Walsh CM, Fung M, Ginsburg S. Publication of results of abstracts presented at medical education conferences. JAMA. 2013;310(21):2307–9.
  5. Smith S, Kind T, Beck G, Schiller J, McLauchlan H, Harris M, Gigante J. Further dissemination of medical education projects after presentation at a pediatric national meeting (1998–2008). Teach Learn Med. 2014;26(1):3–8.
  6. Levinson W, Rubenstein A. Integrating clinician-educators into academic medical centers: challenges and potential solutions. Acad Med. 2000;75(9):906–12.
  7. Thomas PA, Diener-West M, Canto MI, Martin DR, Post WS, Streiff MB. Results of an academic promotion and career path survey of faculty at the Johns Hopkins University School of Medicine. Acad Med. 2004;79(3):258–64.
  8. Beasley BW, Simon SD, Wright SM. A time to be promoted. The Prospective Study of Promotion in Academia. J Gen Intern Med. 2006;21(2):123–9.
  9. Fleming VM, Schindler N, Martin GJ, DaRosa DA. Separate and equitable promotion tracks for clinician-educators. JAMA. 2005;294(9):1101–4.
  10. Yeh HC, Bertram A, Brancati FL, Cofrancesco J Jr. Perceptions of division directors in general internal medicine about the importance of and support for scholarly work done by clinician-educators. Acad Med. 2015;90(2):203–8.
  11. Callahan ML, Wears RL, Weber EJ, Barton C, Young G. Positive-outcome bias and other limitations in the outcome of research abstracts submitted to a scientific meeting. JAMA. 1998;280(3):254–7.
  12. Chalmers I, Adams M, Dickersin K, Hetherington J, Tarnow-Mordi W, Meinert C, Tonascia S, Chalmers TC. A cohort study of summary reports of controlled trials. JAMA. 1990;263(10):1401–5.
  13. Timmer A, Cole JH, Macarthur C, Brasher P, Hailey D, Sutherland LR. Acceptance by the AGA is the single best indicator of subsequent full publication of research. Gastroenterology. 1998;114:G0185.
  14. Reed DA, Beckman TJ, Wright SM, Levine RB, Kern DE, Cook DA. Predictive validity evidence for medical education research study quality instrument scores: quality of submissions to JGIM’s Medical Education Special Issue. J Gen Intern Med. 2008;23(7):903–7.
  15. Simpson D, Fincher RM, Hafler JP, Irby DM, Richards BF, Rosenfeld GC, Viggiano TR. Advancing educators and education by defining the components and evidence associated with educational scholarship. Med Educ. 2007;41(10):1002–9.
  16. Reed DA, Cook DA, Beckman TJ, Levine RB, Kern DE, Wright SM. Association between funding and quality of published medical education research. JAMA. 2007;298(9):1002–9.
  17. Reed DA, Beckman TJ, Wright SM. An assessment of the methodologic quality of medical education research studies published in The American Journal of Surgery. Am J Surg. 2009;198(3):442–4.
  18. Portney LG, Watkins MP. Foundations of clinical research: applications and practice. Norwalk: Appleton & Lange; 1993:509–16.
  19. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33(1):159–74.
  20. Society of General Internal Medicine. IME submission information. Available at: http://www.sgim.org/meetings/annual-meeting/call-for-abstracts-vignettes-ime-cpi/ime-submission-info. Accessed 18 Feb 2015.
  21. Sullivan GM, Simpson D, Cook DA, Delorio NM, Andolsek K, Opas L, Philbert I, Yarris LM. Redefining quality in medical education research: a consumer’s view. J Grad Med Educ. 2014;6(3):424–9.
  22. Cook DA, Levinson AJ, Garside S. Method and reporting quality in health professions education research: a systematic review. Med Educ. 2011;45(3):227–38.

Copyright information

© Society of General Internal Medicine 2015

Authors and Affiliations

  • Adam P. Sawatsky (1)
  • Thomas J. Beckman (1)
  • Jithinraj Edakkanambeth Varayil (1)
  • Jayawant N. Mandrekar (2)
  • Darcy A. Reed (3)
  • Amy T. Wang (4)

  1. Division of General Internal Medicine, Mayo Clinic, Rochester, USA
  2. Division of Biomedical Statistics and Informatics, Mayo Clinic, Rochester, USA
  3. Division of Primary Care Internal Medicine, Mayo Clinic, Rochester, USA
  4. Division of General Internal Medicine, Los Angeles Biomedical Research Institute at Harbor-UCLA Medical Center, Torrance, USA
