Applied Intelligence, Volume 49, Issue 3, pp 1185–1199

ERR.Rank: An algorithm based on learning to rank for direct optimization of Expected Reciprocal Rank

  • Elham Ghanbari
  • Azadeh Shakery

Abstract

Learning to rank (LTR) is a machine-learning-based ranking technique that constructs a ranking model to sort objects in response to a query; it is used in many applications, especially in information retrieval. LTR ranking models are generally evaluated using information retrieval measures. Listwise approaches are among the most important learning-to-rank algorithms, and a subset of them tries to optimize the evaluation measures directly. These measures depend only on the positions of the documents in the ranking and are discontinuous and non-convex with respect to the scores of the ranking function. Moreover, the majority of evaluation measures used by current listwise techniques ignore the relationship between a document at a given position and the documents ranked above it. To overcome this problem, we propose a new listwise algorithm that directly optimizes the Expected Reciprocal Rank (ERR) measure. ERR considers the importance of a document at a given position to depend on the documents ranked higher than it. Our algorithm uses a probabilistic framework to optimize the expected value of ERR and employs a boosting approach with gradient descent to find the optimal ranking function. The proposed algorithm is compared with state-of-the-art algorithms; the results obtained on the LETOR 3.0 benchmark dataset indicate that it outperforms the baselines.
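For context, ERR (Chapelle and Metzler, CIKM 2009) models a user who scans the ranked list from the top and stops at the first satisfying document: the document at rank i satisfies the user with probability R_i = (2^{g_i} - 1) / 2^{g_max}, where g_i is its graded relevance label and g_max is the largest grade, and ERR is the expected value of the reciprocal of the stopping rank. The following Python sketch computes this definition for a single ranked list; the function name and the default maximum grade are illustrative choices, and this is the evaluation measure only, not the ERR.Rank optimization algorithm itself.

    def expected_reciprocal_rank(grades, max_grade=4):
        """Compute ERR for one ranked list of graded relevance labels.

        grades: the relevance grade g_i of the document at each rank
        (rank 1 first); max_grade: the largest possible grade.
        A sketch of the measure from Chapelle and Metzler (2009),
        not the ERR.Rank optimization algorithm from this paper.
        """
        err = 0.0
        p_reach = 1.0  # probability the user reaches this rank
        for rank, g in enumerate(grades, start=1):
            r_i = (2 ** g - 1) / 2 ** max_grade  # probability of stopping here
            err += p_reach * r_i / rank          # contribution of stopping at this rank
            p_reach *= 1.0 - r_i                 # user is not satisfied, continues
        return err

    # Example: a highly relevant document at rank 1 dominates the score.
    print(expected_reciprocal_rank([4, 0, 2]))  # ~0.941 with max_grade=4

Because each term is discounted by the probability that every higher-ranked document failed to satisfy the user, a document's contribution depends on what is ranked above it; this is the dependence between positions that the proposed algorithm exploits and that position-only measures ignore.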

Keywords

Learning · Ranking · Listwise methods · Expected Reciprocal Rank measure · Optimization

Acknowledgements

This research was in part supported by a grant from the Institute for Research in Fundamental Sciences (No. CS1397-4-55).


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran
  2. School of Computer Science, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
