Learning-to-Rank and Relevance Feedback for Literature Appraisal in Empirical Medicine

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11018)


Constantly expanding medical libraries contain immense amounts of information, including evidence from healthcare research. Gathering and interpreting this evidence can be both challenging and time-consuming for researchers conducting systematic reviews. Technology-assisted review (TAR) aims to support this process by finding as much relevant information as possible with the least effort. Toward this goal, we present an incremental learning method that ranks previously retrieved documents, automating the process of title and abstract screening. Our approach combines a learning-to-rank model trained across multiple reviews with a model focused on the given review, trained incrementally from relevance feedback. The classifiers use as features several similarity metrics between the documents and the research topic, such as Levenshtein distance, cosine similarity, and BM25, as well as vectors derived from word embedding methods such as Word2Vec and Doc2Vec. We evaluate our approach on the dataset provided by Task 2 of CLEF eHealth 2017 and empirically compare it with the other approaches that participated in the task.
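The similarity features named in the abstract can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation: the tokenization, the BM25 parameters, and the combining weights in `score` are all hypothetical, chosen only to show how per-document feature values could feed a ranking.

```python
import math
from collections import Counter

def levenshtein(a, b):
    # Classic dynamic-programming edit distance between two strings.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def cosine(a_tokens, b_tokens):
    # Cosine similarity over raw term-frequency vectors.
    ca, cb = Counter(a_tokens), Counter(b_tokens)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def bm25(query, doc, corpus, k1=1.2, b=0.75):
    # Okapi BM25 score of `doc` (token list) for `query`,
    # with document frequencies taken from `corpus`.
    n = len(corpus)
    avgdl = sum(len(d) for d in corpus) / n
    freqs = Counter(doc)
    score = 0.0
    for t in set(query):
        df = sum(1 for d in corpus if t in d)
        idf = math.log((n - df + 0.5) / (df + 0.5) + 1)
        f = freqs[t]
        score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc) / avgdl))
    return score

def score(topic_tokens, doc_tokens, corpus, w=(1.0, 0.1)):
    # Hypothetical weighted sum of two features; in a TAR setting the
    # weights would instead be learned, and updated from reviewer feedback.
    return (w[0] * cosine(topic_tokens, doc_tokens)
            + w[1] * bm25(topic_tokens, doc_tokens, corpus))
```

Ranking the candidate abstracts by `score` in descending order would then present the most topic-similar documents to the reviewer first; each screening decision provides a relevance label that an incremental learner can use to refine the weights.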


Keywords: Learning to rank · Relevance feedback · Technology-assisted reviews · Empirical medicine



Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

Aristotle University of Thessaloniki, Thessaloniki, Greece
