
Intra-rater repeatability of a structured method of selecting abstracts for the annual EURAPS scientific meeting

  • Original Paper
  • Published in: European Journal of Plastic Surgery

Abstract

To date, surprisingly little attention has been given to the process by which abstracts submitted to biomedical meetings are selected, and there are no reports on the intra-rater reliability of such a selection. We therefore set out to determine the intra-rater repeatability of the selection, by multiple reviewers, of abstracts submitted to a plastic surgical scientific meeting. We prospectively analysed repeated structured ratings of five abstracts, each of the 202 abstracts submitted to the annual scientific meeting of the European Association of Plastic Surgeons (EURAPS) having been rated by three blinded reviewers. The outcome measures were the intra-class correlation coefficient of the score and repeated score of the five abstracts, and the kappa statistic of the dichotomy of acceptance of the top two rated abstracts versus rejection of the remaining three. Both were calculated for the set of repeated scores of each individual reviewer, as well as for the set of totals of the scores of all reviewers. The median of the reviewers’ individual intra-class correlation coefficients was 0.86 (range, −0.56 to 0.92). Six out of 10 reviewers rated the same two abstracts as top abstracts during both reviews, resulting in kappa statistics ranging from −0.15 to 1.0 (median, 0.59). The median intra-class correlation coefficient of the joined scores was 0.93 (range, 0.92–0.97), and the kappa statistic for the joined top-rated abstracts was 1.0. The excellent repeatability of the ranking and dichotomy of abstracts based on the joined scores of multiple peer reviewers gives confidence in EURAPS’ structured method of abstract selection.
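The two statistics named above can be made concrete with a short computation. The sketch below, in Python, computes a one-way random-effects intra-class correlation, ICC(1,1), and Cohen’s kappa for the accept/reject dichotomy. The scores, the two-occasion design, and the “accept the top two of five” cut-off are illustrative assumptions for the example only; the paper’s raw scores are not reproduced here, and the abstract does not state which ICC form was used.

```python
import numpy as np

def icc_1_1(ratings):
    """One-way random-effects intra-class correlation, ICC(1,1).

    ratings: (n_subjects, k_repeats) array, e.g. the first and second
    score a single reviewer gave each of the five repeated abstracts.
    (Assumption: the abstract does not specify which ICC form was used.)
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    subj_means = ratings.mean(axis=1)
    # Between- and within-subject mean squares from a one-way ANOVA.
    msb = k * np.sum((subj_means - grand) ** 2) / (n - 1)
    msw = np.sum((ratings - subj_means[:, None]) ** 2) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

def cohens_kappa(a, b):
    """Cohen's kappa for two categorical judgements of the same items."""
    a, b = np.asarray(a), np.asarray(b)
    cats = np.union1d(a, b)
    # Joint distribution of the two judgements over all categories.
    joint = np.array([[np.mean((a == x) & (b == y)) for y in cats]
                      for x in cats])
    p_obs = np.trace(joint)                        # observed agreement
    p_exp = joint.sum(axis=1) @ joint.sum(axis=0)  # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical first and second scores by one reviewer for the five
# repeated abstracts (rows = abstracts, columns = rating occasions).
scores = np.array([[24, 26],
                   [31, 30],
                   [18, 20],
                   [27, 27],
                   [22, 25]])
print(f"ICC(1,1) = {icc_1_1(scores):.2f}")

# Dichotomy as in the abstract: accept the top two of the five on each
# occasion (ties would accept more than two; ignored in this sketch).
accept_1 = scores[:, 0] >= np.sort(scores[:, 0])[-2]
accept_2 = scores[:, 1] >= np.sort(scores[:, 1])[-2]
print(f"kappa    = {cohens_kappa(accept_1, accept_2):.2f}")
```

Run per reviewer, this would mirror the individual statistics reported above; summing the three reviewers’ scores per abstract before ranking and dichotomising would mirror the “joined” analysis.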



Author information

Correspondence to J. J. Hage.


About this article

Cite this article

van der Steen, L.P.E., Hage, J.J., Kon, M. et al. Intra-rater repeatability of a structured method of selecting abstracts for the annual EURAPS scientific meeting. Eur J Plast Surg 29, 111–114 (2006). https://doi.org/10.1007/s00238-006-0061-2

