Co-Learning Ranking for Query-Based Retrieval

  • Min Peng
  • Jiajia Huang
  • Jiahui Zhu
  • Li Zhou
  • Hui Fu
  • Yanxiang He
  • Fei Li
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8180)

Abstract

In this paper, we propose a novel blending ranking model, named Co-Learning ranking, in which the two ranked results produced by two basic rankers interact with each other fully and are then combined linearly with a pair of appropriate weights. Specifically, in the interaction process, a reinforcement strategy is proposed to boost the performance of each ranked result. In addition, an automatic combination method is designed to detect the better-performing ranked result and assign it a higher weight automatically. The Co-Learning ranking model is applied to the document ranking problem in query-based retrieval and evaluated on the TAC 2009 and TAC 2011 datasets. Experimental results show that our model achieves higher precision than the basic ranked results and better stability than a plain linear combination.
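
The sketch below is only a rough illustration of the blending idea described above, not the authors' algorithm: it assumes each basic ranker outputs a relevance score per document, normalizes the two score lists, lets them exchange feedback for a few rounds (a stand-in for the reinforcement strategy), and finally mixes them with a weight pair (w, 1 − w). The function names, the feedback rate `alpha`, and the weight `w` are hypothetical placeholders for the automatic choices made in the paper.

```python
from typing import Dict

def normalize(scores: Dict[str, float]) -> Dict[str, float]:
    """Min-max normalize scores into [0, 1] so the two rankers are comparable."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {doc: (s - lo) / span for doc, s in scores.items()}

def reinforce(own: Dict[str, float], other: Dict[str, float], alpha: float = 0.3) -> Dict[str, float]:
    """Refresh one ranker's scores with feedback from the other (illustrative only)."""
    return {doc: (1 - alpha) * s + alpha * other.get(doc, 0.0) for doc, s in own.items()}

def co_learning_blend(scores_1: Dict[str, float],
                      scores_2: Dict[str, float],
                      w: float = 0.6,
                      rounds: int = 2) -> Dict[str, float]:
    """Blend two rankers' scores: mutual reinforcement, then a weighted linear mix.

    `w` stands in for the automatically detected weight of the better-performing
    ranker; here it is simply a fixed parameter.
    """
    s1, s2 = normalize(scores_1), normalize(scores_2)
    for _ in range(rounds):  # interaction phase: both updates use the previous round's scores
        s1, s2 = reinforce(s1, s2), reinforce(s2, s1)
    docs = set(s1) | set(s2)
    return {d: w * s1.get(d, 0.0) + (1 - w) * s2.get(d, 0.0) for d in docs}

# Toy usage: two rankers score three documents for one query.
ranker_a = {"d1": 0.9, "d2": 0.4, "d3": 0.1}
ranker_b = {"d1": 0.2, "d2": 0.8, "d3": 0.5}
blended = co_learning_blend(ranker_a, ranker_b)
print(sorted(blended, key=blended.get, reverse=True))  # documents ordered by blended score
```

In the paper both the interaction and the weight choice are driven automatically by the ranked results themselves; in this sketch they are fixed parameters to keep the example short.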

Keywords

Co-Learning Ranking · Interactive Learning · Automatic Combination · Ranked Result

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Min Peng (1)
  • Jiajia Huang (1)
  • Jiahui Zhu (1)
  • Li Zhou (1)
  • Hui Fu (1)
  • Yanxiang He (1)
  • Fei Li (1)
  1. Computer School, Wuhan University, Wuhan, China
