How Complementary Are Different Information Retrieval Techniques? A Study in Biomedicine Domain

  • Xiangdong An
  • Nick Cercone
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8404)


In this paper, we present an empirical study of the runs submitted to the TREC Genomics Track, a forum for information retrieval (IR) research in biomedicine. Based on the evaluation criteria provided by the track, we investigate how much relevant information is typically lost from a run, how well the relevant nominees are actually ranked with respect to their level of relevancy, and how they are distributed among the irrelevant ones in a run. We also examine whether relevancy or the level of relevancy plays the more important role in performance evaluation. Answering these questions may give us insight into, and help us improve, current IR technologies. The study reveals that recognizing relevancy is more important than recognizing the level of relevancy. It indicates that, on average, more than 60% of relevant information is lost from each run, with respect to either the amount of relevant information or the number of aspects (subtopics, novelty, or diversity), which suggests substantial room for performance improvement. The study shows that the runs submitted by different groups are quite complementary, which implies that ensemble IR could significantly improve retrieval performance. The experiments illustrate that a run performs “good” or “bad” mainly due to its top 10% of rankings; the rest of the run contributes only marginally to its performance.
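The observation that submitted runs are complementary suggests combining them. As a minimal illustration (not the method studied in this paper), reciprocal rank fusion is one standard way to merge ranked lists from different IR systems; the document ids and the constant `k` below are illustrative assumptions.

```python
# Hypothetical sketch: reciprocal rank fusion (RRF) of several ranked runs.
# A document's fused score is the sum over runs of 1 / (k + rank), so documents
# ranked highly by many complementary systems rise to the top of the fusion.
def rrf_fuse(runs, k=60):
    """Combine ranked lists of document ids (best first) into one fused ranking.

    runs: list of ranked lists; k: a smoothing constant (60 is a common choice).
    """
    scores = {}
    for run in runs:
        for rank, doc_id in enumerate(run, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Sort document ids by fused score, highest first.
    return sorted(scores, key=scores.get, reverse=True)

# Example with two hypothetical, partially overlapping runs.
run_a = ["d1", "d2", "d3"]
run_b = ["d3", "d1", "d4"]
print(rrf_fuse([run_a, run_b]))  # d1 and d3 benefit from appearing in both runs
```

Because each run contributes only reciprocal-rank scores, no score calibration across systems is needed, which is what makes this kind of fusion attractive for heterogeneous submitted runs.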





Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  • Xiangdong An (1)
  • Nick Cercone (1)

  1. Department of Electrical Engineering & Computer Science, York University, Toronto, Canada
