An n-Gram and Initial Description Based Approach for Entity Ranking Track

  • Meenakshi Sundaram Murugeshan
  • Saswati Mukherjee
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4862)


Proper query formation takes center stage in the Entity Ranking track of INEX. Both subtasks, namely Entity Ranking and List Completion, benefit immensely when the given query is expanded with more relevant terms, improving the effectiveness of the search engine. This paper stresses the correct identification of “Meaningful n-grams” in the given title, and the proper selection of the “Prominent n-grams” among them, as the most important step in query formation, which in turn improves the effectiveness of the overall Entity Ranking tasks. We also exploit the Initial Descriptions (IDES) of Wikipedia articles to rank the retrieved answers by their similarity to the given topic. The List Completion task is further aided by related Wikipedia articles, which boost the scores of the retrieved answers.


Keywords: Entity Ranking, List Completion, n-gram checking





Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Meenakshi Sundaram Murugeshan (1)
  • Saswati Mukherjee (1)

  1. Department of Computer Science and Engineering, College of Engineering, Guindy, Anna University, Chennai, India
