Improving Ranking for Systematic Reviews Using Query Adaptation

  • Conference paper
  • In: Experimental IR Meets Multilinguality, Multimodality, and Interaction (CLEF 2019)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 11696)

Abstract

Identifying relevant studies for inclusion in systematic reviews requires significant effort from human experts, who manually screen large numbers of studies. The problem is made more difficult by the growing volume of medical literature, and Information Retrieval techniques have proved useful in reducing this workload. Reviewers are often interested in particular types of evidence, such as Diagnostic Test Accuracy studies. This paper explores the use of query adaptation to identify particular types of evidence and thereby reduce the workload placed on reviewers. A simple retrieval system that ranks studies using TF-IDF weighted cosine similarity was implemented. The log-likelihood, chi-squared and odds-ratio lexical statistics, together with relevance feedback, were used to generate sets of terms that indicate evidence relevant to Diagnostic Test Accuracy reviews. Experiments using a set of 80 systematic reviews from the CLEF 2017 and CLEF 2018 eHealth tasks demonstrate that the approach improves retrieval performance.
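The ranking approach described in the abstract can be sketched as follows. This is a minimal illustration of TF-IDF weighted cosine similarity ranking, not the paper's actual implementation; the corpus, tokenisation and query below are illustrative assumptions.

```python
# Sketch: rank documents against a query by TF-IDF weighted cosine similarity.
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build sparse TF-IDF vectors (dicts) for a list of tokenised documents."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))  # document frequency
    idf = {t: math.log(n / df[t]) for t in df}
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: tf[t] * idf[t] for t in tf})
    return vectors, idf

def cosine(u, v):
    """Cosine similarity between two sparse vectors represented as dicts."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Illustrative toy corpus (not from the paper).
docs = [
    "diagnostic test accuracy of troponin assays".split(),
    "systematic review of inflammation biomarkers".split(),
    "randomised trial of a new drug".split(),
]
vecs, idf = tfidf_vectors(docs)

query = "diagnostic test accuracy".split()
qtf = Counter(query)
qvec = {t: qtf[t] * idf.get(t, 0.0) for t in qtf}

# Rank document indices by similarity to the query, highest first.
ranking = sorted(range(len(docs)), key=lambda i: cosine(qvec, vecs[i]), reverse=True)
```

In the paper's setting the "documents" would be study titles and abstracts, and the query would be adapted with the statistically selected terms before ranking.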


Notes

  1. https://github.com/leifos/tar.
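The log-likelihood statistic named in the abstract for selecting indicative terms can be sketched as below. This is a minimal version of Dunning's G² keyness score computed from a 2x2 contingency table; the counts in the example are illustrative assumptions, not data from the paper.

```python
# Sketch: Dunning's log-likelihood (G2) keyness score for one term.
import math

def log_likelihood(a, b, c, d):
    """G2 score for a term from a 2x2 contingency table:
    a = term frequency in the target corpus (e.g. relevant studies),
    b = term frequency in the reference corpus,
    c = total tokens in the target corpus,
    d = total tokens in the reference corpus."""
    e1 = c * (a + b) / (c + d)  # expected frequency in target corpus
    e2 = d * (a + b) / (c + d)  # expected frequency in reference corpus
    g2 = 0.0
    if a:
        g2 += a * math.log(a / e1)
    if b:
        g2 += b * math.log(b / e2)
    return 2 * g2

# A term that is over-represented in the target corpus scores highly,
# making it a candidate for adapting the query.
score = log_likelihood(50, 5, 1000, 10000)
```

Ranking all candidate terms by this score (or by chi-squared or odds-ratio, computed from the same table) yields the sets of terms used to adapt the query.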


Author information

Correspondence to Amal Alharbi.



Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Alharbi, A., Stevenson, M. (2019). Improving Ranking for Systematic Reviews Using Query Adaptation. In: Crestani, F., et al. (eds.) Experimental IR Meets Multilinguality, Multimodality, and Interaction. CLEF 2019. Lecture Notes in Computer Science, vol. 11696. Springer, Cham. https://doi.org/10.1007/978-3-030-28577-7_9


  • DOI: https://doi.org/10.1007/978-3-030-28577-7_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-28576-0

  • Online ISBN: 978-3-030-28577-7

  • eBook Packages: Computer Science (R0)
