
Overview of the CLEF eHealth Evaluation Lab 2018

  • Hanna Suominen
  • Liadh Kelly
  • Lorraine Goeuriot
  • Aurélie Névéol
  • Lionel Ramadier
  • Aude Robert
  • Evangelos Kanoulas
  • Rene Spijker
  • Leif Azzopardi
  • Dan Li
  • Jimmy
  • João Palotti
  • Guido Zuccon
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11018)

Abstract

In this paper, we provide an overview of the sixth annual edition of the CLEF eHealth evaluation lab. CLEF eHealth 2018 continues our evaluation resource building efforts to ease and support patients, their next of kin, clinical staff, and health scientists in understanding, accessing, and authoring eHealth information in a multilingual setting. This year’s lab offered three tasks: Task 1 on multilingual information extraction, extending last year’s task on French and English corpora to French, Hungarian, and Italian; Task 2 on technologically assisted reviews in empirical medicine, building on last year’s pilot task in English; and Task 3 on Consumer Health Search (CHS) in mono- and multilingual settings, building on the 2013–17 Information Retrieval tasks. In total, 28 teams took part in these tasks (14 in Task 1, 7 in Task 2, and 7 in Task 3). Herein, we describe the resources created for these tasks, outline the evaluation methodology adopted, and provide a brief summary of this year’s participants and the results obtained. As in previous years, the organizers have made the data and tools associated with the lab tasks available for future research and development.

Keywords

Evaluation · Entity linking · Information retrieval · Health records · Information extraction · Medical informatics · Systematic reviews · Total recall · Test-set generation · Text classification · Text segmentation · Self-diagnosis

Acknowledgements

The CLEF eHealth 2018 evaluation lab has been supported in part by (in alphabetical order) the ANU, the CLEF Initiative, the Data61/CSIRO, and the French National Research Agency (ANR), under grant CABeRneT ANR-13-JS02-0009-01. We are also thankful to the people involved in the annotation, query creation, and relevance assessment exercise. Last but not least, we gratefully acknowledge the participating teams’ hard work. We thank them for their submissions and interest in the lab.

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Hanna Suominen (1, 2)
  • Liadh Kelly (3)
  • Lorraine Goeuriot (4)
  • Aurélie Névéol (5)
  • Lionel Ramadier (5)
  • Aude Robert (6)
  • Evangelos Kanoulas (7)
  • Rene Spijker (8)
  • Leif Azzopardi (9)
  • Dan Li (7)
  • Jimmy (10)
  • João Palotti (11, 12)
  • Guido Zuccon (10)
  1. University of Turku, Turku, Finland
  2. The Australian National University (ANU), Data61/Commonwealth Scientific and Industrial Research Organisation (CSIRO), and University of Canberra, Canberra, Australia
  3. Maynooth University, Maynooth, Ireland
  4. Univ. Grenoble Alpes, CNRS, Grenoble INP, LIG, Grenoble, France
  5. LIMSI, CNRS UPR 3251, Université Paris-Saclay, Orsay, France
  6. INSERM - CépiDc, 80 rue du Général Leclerc, Le Kremlin-Bicêtre Cedex, France
  7. Informatics Institute, University of Amsterdam, Amsterdam, Netherlands
  8. Cochrane Netherlands and UMC Utrecht, Julius Center for Health Sciences and Primary Care, Utrecht, Netherlands
  9. Computer and Information Sciences, University of Strathclyde, Glasgow, UK
  10. Queensland University of Technology, Brisbane, Australia
  11. Vienna University of Technology, Vienna, Austria
  12. Qatar Computing Research Institute, Doha, Qatar
