Using Relevance Feedback in Expert Search

  • Craig Macdonald
  • Iadh Ounis
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4425)

Abstract

In enterprise settings, expert search is considered an important task. In this task, the user has an expertise need: for instance, they require assistance from someone knowledgeable about a topic of interest. An expert search system addresses this “expertise need” by suggesting people with expertise relevant to the topic of interest. In this work, we apply an expert search approach that does not explicitly rank candidates in response to a query, but instead ranks candidates implicitly, by taking into account a ranking of documents with respect to the query topic. Pseudo-relevance feedback, also known as query expansion, has been shown to improve retrieval performance in ad-hoc search tasks. In this work, we investigate to what extent query expansion can be applied in an expert search task to improve the accuracy of the generated ranking of candidates. We define two approaches for query expansion: the first is based on the initial ranking of documents for the query topic; the second is based on the final ranking of candidates. The aims of this paper are two-fold: firstly, to determine whether query expansion can be successfully applied in the expert search task, and secondly, to ascertain whether either of the two forms of query expansion provides robust, improved retrieval performance. We perform a thorough evaluation contrasting the two query expansion approaches in the context of the TREC 2005 and 2006 Enterprise tracks.
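The two feedback approaches described above can be illustrated with a minimal Python sketch. This is not the paper's implementation (the authors' experiments use the Terrier platform with Divergence from Randomness weighting models): the term-overlap document scorer, the frequency-based selection of expansion terms (standing in for a model such as Bo1), the CombSUM-style aggregation of document scores into candidate scores, and all data structures and function names below are simplifying assumptions made purely for illustration.

```python
from collections import Counter, defaultdict

# Toy corpus and candidate profiles (hypothetical data for illustration).
# documents: doc_id -> list of terms; candidate_docs: candidate -> associated doc_ids
documents = {
    "d1": "expert search enterprise retrieval".split(),
    "d2": "query expansion relevance feedback retrieval".split(),
    "d3": "enterprise intranet expert finding".split(),
}
candidate_docs = {"alice": {"d1", "d2"}, "bob": {"d3"}}

def score_documents(query, documents):
    """Rank documents by a simple term-frequency overlap score
    (a stand-in for a real weighting model such as DFR PL2)."""
    scores = {}
    for doc_id, terms in documents.items():
        s = sum(terms.count(t) for t in query)
        if s > 0:
            scores[doc_id] = s
    return sorted(scores.items(), key=lambda kv: -kv[1])

def rank_candidates(doc_ranking, candidate_docs):
    """Implicitly rank candidates by summing the scores of their
    associated documents (a CombSUM-style aggregation)."""
    cand_scores = defaultdict(float)
    for doc_id, score in doc_ranking:
        for cand, docs in candidate_docs.items():
            if doc_id in docs:
                cand_scores[cand] += score
    return sorted(cand_scores.items(), key=lambda kv: -kv[1])

def expansion_terms(doc_ids, documents, query, n_terms=3):
    """Pick the most frequent non-query terms from the feedback documents
    (a crude stand-in for a DFR term-weighting model like Bo1)."""
    counts = Counter(t for d in doc_ids for t in documents[d] if t not in query)
    return [t for t, _ in counts.most_common(n_terms)]

def expert_search_with_qe(query, documents, candidate_docs,
                          feedback="documents", k=2):
    doc_ranking = score_documents(query, documents)
    if feedback == "documents":
        # Approach 1: feedback documents come from the initial document ranking.
        fb_docs = [d for d, _ in doc_ranking[:k]]
    else:
        # Approach 2: feedback documents are the profiles of the
        # top-ranked candidates in the final candidate ranking.
        top_cands = [c for c, _ in rank_candidates(doc_ranking, candidate_docs)[:k]]
        fb_docs = [d for c in top_cands for d in candidate_docs[c]]
    expanded = list(query) + expansion_terms(fb_docs, documents, query)
    return rank_candidates(score_documents(expanded, documents), candidate_docs)

print(expert_search_with_qe(["expert", "search"], documents, candidate_docs))
print(expert_search_with_qe(["expert", "search"], documents, candidate_docs,
                            feedback="candidates"))
```

The first call expands the query from the top-ranked documents; the second draws expansion terms from the documents associated with the top-ranked candidates, mirroring the paper's document-based and candidate-based feedback variants respectively.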


Copyright information

© Springer Berlin Heidelberg 2007

Authors and Affiliations

  • Craig Macdonald
  • Iadh Ounis
  1. Department of Computing Science, University of Glasgow, G12 8QQ, UK
