Information Retrieval Journal, Volume 19, Issue 1–2, pp 149–173

Enhancing web search in the medical domain via query clarification

  • Luca Soldaini
  • Andrew Yates
  • Elad Yom-Tov
  • Ophir Frieder
  • Nazli Goharian
Medical Information Retrieval

Abstract

The majority of Internet users search for medical information online; however, many lack an adequate medical vocabulary. Because they are unfamiliar with the appropriate medical expressions describing their condition, users may have difficulty finding the most authoritative and useful information and, consequently, are unable to adequately satisfy their information need. We investigate the utility of bridging the gap between layperson and expert vocabularies: our approach adds the most appropriate expert expression to queries submitted by users, a task we call query clarification. We evaluated the impact of query clarification using three different synonym mappings in two task-based retrieval studies, in which users were asked to answer medically-related questions using interleaved results from a major search engine. Our results show that the proposed system was preferred by users and helped them answer medical questions correctly more often, with up to a 7 % increase in correct answers over an unmodified query. Finally, we introduce a supervised classifier that selects the most appropriate synonym mapping for each query, which further increased the fraction of correct answers (12 %).
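To make the idea in the abstract concrete, the following is a minimal sketch, not the authors' implementation: the synonym mappings, the clarify helper, the feature set, and the toy training data are all hypothetical placeholders. It illustrates appending an expert expression to a layperson query and using a supervised classifier to choose which synonym mapping to apply.

    # Minimal sketch of query clarification; mappings, features, and labels
    # below are illustrative placeholders, not the paper's actual resources.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical layperson-to-expert synonym mappings (e.g., one from a
    # consumer health vocabulary, one from a medical thesaurus, one empty).
    MAPPINGS = {
        "mapping_a": {"heart attack": "myocardial infarction"},
        "mapping_b": {"heart attack": "acute coronary syndrome"},
        "mapping_c": {},  # leave the query unmodified
    }

    def clarify(query, mapping):
        """Append the expert expression for any layperson phrase in the query."""
        expansions = [expert for lay, expert in mapping.items() if lay in query.lower()]
        return query if not expansions else query + " " + " ".join(expansions)

    # Toy training data: queries labeled with the mapping assumed to help most.
    train_queries = ["heart attack symptoms", "chest pain after exercise"]
    train_labels = ["mapping_a", "mapping_c"]

    # Supervised selector over the available mappings; here the features are
    # simple query n-grams, whereas the paper's classifier uses richer signals.
    selector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    selector.fit(train_queries, train_labels)

    query = "heart attack symptoms"
    best_mapping = selector.predict([query])[0]
    print(clarify(query, MAPPINGS[best_mapping]))
    # e.g. "heart attack symptoms myocardial infarction"

In this sketch the clarified query is what would be sent to the search engine, so the user's original wording is preserved and only the expert expression is added.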

Keywords

Query clarification · Personalized search · Medical informatics · Health search


Copyright information

© Springer Science+Business Media New York 2015

Authors and Affiliations

  • Luca Soldaini (1)
  • Andrew Yates (1)
  • Elad Yom-Tov (2)
  • Ophir Frieder (1)
  • Nazli Goharian (1)

  1. Information Retrieval Lab, Computer Science Department, Georgetown University, Washington, DC, USA
  2. Microsoft Research, Herzeliya, Israel
