Task-oriented search for evidence-based medicine

Abstract

Research on how clinicians search shows that they pose queries according to three common clinical tasks: searching for diagnoses, searching for treatments and searching for tests. We hypothesise, therefore, that structuring an information retrieval system around these three tasks would be beneficial when searching for evidence-based medicine (EBM) resources in medical digital libraries. Task-oriented (diagnosis, test and treatment) information was extracted from free-text medical articles using a natural language processing pipeline. This information was integrated into a retrieval and visualisation system for EBM search that allowed searchers to interact with the system via task-oriented filters. The effectiveness of the system was empirically evaluated using TREC CDS, a gold-standard collection of medical articles and queries designed for EBM search. Task-oriented information was successfully extracted from 733,138 articles taken from a medical digital library. Task-oriented search improved the quality of search results and reduced searcher workload. An analysis of how the different tasks affected retrieval showed that searching for treatments was the most challenging, and that the task-oriented approach improved search for treatments. The greatest workload savings were observed when searching for treatments and tests. Overall, taking different clinical tasks into account can improve search for those tasks. Results differed across task types, making systems that adapt to the clinical task type desirable. A future user study would help quantify the actual cost savings.
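The task-oriented filtering described above can be sketched in a few lines of Python. This is a minimal illustration only, not the authors' implementation: the `tasks` annotation field and the example articles are invented, standing in for the task types (diagnosis, test, treatment) that the NLP pipeline extracts per article.

```python
def task_filter(results, task):
    """Keep only articles whose extracted task annotations include `task`."""
    return [r for r in results if task in r["tasks"]]

# Toy result list: each article carries the clinical tasks that a
# pipeline like the one described above might extract from its text.
articles = [
    {"id": "a1", "tasks": {"diagnosis", "test"}},
    {"id": "a2", "tasks": {"treatment"}},
    {"id": "a3", "tasks": {"diagnosis"}},
]

print([r["id"] for r in task_filter(articles, "diagnosis")])  # ['a1', 'a3']
```

In the actual system such a filter would be applied at retrieval time (e.g. as a filter clause on the search index) rather than by post-filtering a result list, but the selection logic is the same.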


Figures 1–9 are included in the full article (thumbnails omitted from this preview).

Notes

  1. Thirty topics from TREC 2014 and thirty topics from TREC 2015.

  2. The full body was not included as it contained large amounts of HTML formatting that QuickUMLS could not interpret.

  3. Elasticsearch version 2.2.0: https://www.elastic.co/downloads/past-releases/elasticsearch-2-2-0.

  4. We used the default snippet generation provided by Elasticsearch.

  5. Formally, \(\text{precision}@n = \frac{|\text{Rel} \cap \text{Ret}_n|}{|\text{Ret}_n|}\), where \(\text{Rel}\) is the set of relevant documents and \(\text{Ret}_n\) is the set of top \(n\) retrieved documents.

  6. Formally, \(\text{recip. rank} = \frac{1}{\text{rank}}\), where \(\text{rank}\) is the rank position of the first correct result in a ranked list of results.
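The two metrics in notes 5 and 6 can be made concrete with a short Python sketch. This is an illustration of the definitions only, not the evaluation code used in the paper; the document identifiers are invented.

```python
def precision_at_n(relevant, ranked, n):
    """Fraction of the top-n retrieved documents that are relevant."""
    top_n = ranked[:n]
    return len(set(relevant) & set(top_n)) / len(top_n)

def reciprocal_rank(relevant, ranked):
    """1 / rank of the first relevant document (0.0 if none retrieved)."""
    for rank, doc in enumerate(ranked, start=1):
        if doc in relevant:
            return 1 / rank
    return 0.0

# Example: documents d2 and d5 are relevant for some query.
rel = {"d2", "d5"}
run = ["d1", "d2", "d3", "d4", "d5"]
print(precision_at_n(rel, run, 5))  # 0.4
print(reciprocal_rank(rel, run))    # 0.5 (first relevant result at rank 2)
```

Tools such as trec_eval compute these (and many other) measures over full TREC runs; the sketch above is just the per-query arithmetic.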


Author information

Corresponding author

Correspondence to Bevan Koopman.

About this article

Cite this article

Koopman, B., Russell, J. & Zuccon, G. Task-oriented search for evidence-based medicine. Int J Digit Libr 19, 217–229 (2018). https://doi.org/10.1007/s00799-017-0209-7

Keywords

  • Information retrieval
  • Evidence-based medicine
  • Task-oriented search
  • Clinical decision support