Aggregation on Learning to Rank for Consumer Health Information Retrieval

  • Hua Yang
  • Teresa Gonçalves
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 1126)

Abstract

Consumers increasingly turn to general-purpose search engines for health information, yet these engines remain far from effective at handling complex consumer health queries. Learning to Rank (L2R) techniques are a prominent and effective approach to this problem. In this paper, we investigate aggregation over field-based L2R models. Rather than combining all potential features into a single list to train one L2R model, we propose training a set of L2R models, each using features extracted from a single document field, and then applying rank aggregation methods to combine the results obtained from each model. Extensive experimental comparisons against state-of-the-art baselines on the considered data collections confirm the effectiveness of the proposed approach.
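To make the aggregation step concrete, below is a minimal sketch of score-based fusion over field-based rankers. The abstract does not specify which aggregation rules the paper uses, so this is only an illustration under assumptions: it uses the CombSUM rule with min-max normalization (a standard combination from the metasearch literature), and the field names (title, body, url) and score values are invented for the example.

```python
import numpy as np

def min_max_normalize(scores):
    """Rescale a score vector to [0, 1] so scores from different
    field-based models are comparable before fusion."""
    lo, hi = scores.min(), scores.max()
    if hi == lo:
        return np.zeros_like(scores, dtype=float)
    return (scores - lo) / (hi - lo)

def combsum(per_field_scores):
    """CombSUM fusion: sum the normalized scores that each
    field-based L2R model assigns to every candidate document."""
    return sum(min_max_normalize(s) for s in per_field_scores)

# Toy example: three hypothetical field-based L2R models each score
# the same five candidate documents for one query.
title_scores = np.array([2.1, 0.3, 1.7, 0.9, 1.2])
body_scores  = np.array([0.5, 1.9, 1.1, 0.2, 2.4])
url_scores   = np.array([1.0, 0.4, 0.8, 1.6, 0.7])

fused = combsum([title_scores, body_scores, url_scores])
ranking = np.argsort(-fused)  # document indices, best first
print(ranking)
```

Rank-based rules such as Borda count are drop-in replacements for the fusion function: instead of summing normalized scores, each model contributes points according to the positions it assigns to documents.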

Keywords

Learning to Rank · Rank aggregation · Consumer health · Information Retrieval

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Computer Science Department, University of Évora, Évora, Portugal
  2. School of Computer Science, Zhongyuan University of Technology, Zhengzhou, China