A Study on Novelty Evaluation in Biomedical Information Retrieval

  • Xiangdong An
  • Nick Cercone
  • Hai Wang
  • Zheng Ye
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7608)

Abstract

In novelty information retrieval, we expect novel passages to be ranked higher than redundant ones, and relevant passages higher than irrelevant ones. Accordingly, we desire an evaluation algorithm that respects these expectations. In TREC 2006 & 2007, a novelty performance measure, called the aspect-based mean average precision (MAP), was introduced to the Genomics Track to evaluate the novelty of ranked medical passages. In this paper, we demonstrate that this measure does not necessarily yield a higher score for rankings that better honor the above expectations. We propose an improved measure that reflects these expectations more precisely, and present supporting evidence.
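
To make the expected behavior concrete, the following minimal Python sketch computes an aspect-based average precision over a ranked list of passages. The set-valued passage annotations and the convention that a passage counts as novel when it contributes at least one previously unseen aspect are illustrative assumptions for this sketch, not the official TREC Genomics definition of the measure.

    def aspect_average_precision(ranking):
        """Sketch of an aspect-based average precision.

        `ranking` is a list of sets: the gold-standard aspects carried
        by each retrieved passage, in ranked order (an empty set marks
        an irrelevant passage). A passage is treated as novel if it
        contributes at least one aspect not seen earlier in the ranking
        (an illustrative convention, not the official TREC definition).
        """
        seen = set()
        novel_hits = 0
        precision_sum = 0.0
        for rank, aspects in enumerate(ranking, start=1):
            if aspects - seen:  # passage contributes an unseen aspect
                novel_hits += 1
                precision_sum += novel_hits / rank
                seen |= aspects
        return precision_sum / novel_hits if novel_hits else 0.0

    # Ranking a redundant passage above a novel one scores lower than
    # ranking the novel passage first:
    print(aspect_average_precision([{"a"}, {"a"}, {"b"}]))  # ~0.833
    print(aspect_average_precision([{"a"}, {"b"}, {"a"}]))  # 1.0

Under these assumptions, the ranking that places the novel passage ahead of the redundant repeat receives the higher score, which is exactly the behavior the paper argues a novelty evaluation measure should guarantee.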

Keywords

Mean Average Precision · Relevant Passage · Medical Passage · Passage Retrieval · Information Retrieval Evaluation

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Xiangdong An (1)
  • Nick Cercone (1)
  • Hai Wang (2)
  • Zheng Ye (1)
  1. York University, Toronto, Canada
  2. Saint Mary’s University, Halifax, Canada
