Evaluation of Pseudo Relevance Feedback Techniques for Cross Vertical Aggregated Search

  • Hermann Ziak
  • Roman Kern
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9283)

Abstract

Cross-vertical aggregated search is a special form of meta search, where multiple search engines from different domains and with varying behaviour are combined to produce a single search result for each query. Such a setting poses a number of challenges, among them the question of how to best evaluate the quality of the aggregated search results. We devised an evaluation strategy together with an evaluation platform in order to conduct a series of experiments. In particular, we are interested in whether pseudo relevance feedback helps in such a scenario. Therefore, we implemented a number of pseudo relevance feedback techniques based on knowledge bases, where the knowledge base is either Wikipedia or a combination of the underlying search engines themselves. While conducting the evaluations we gathered a number of qualitative and quantitative results and gained insights into how different users compare the quality of search result lists. With regard to pseudo relevance feedback, we found that using Wikipedia as a knowledge base generally provides a benefit, except for entity-centric queries, which target single persons or organisations. Our results will help steer the development of cross-vertical aggregated search engines and will also help guide large-scale evaluation strategies, for example those based on crowdsourcing techniques.
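
As a concrete illustration of the kind of pseudo relevance feedback query expansion described above, the following is a minimal sketch that expands a query with terms drawn from a knowledge base such as Wikipedia. The search callable, the stop-word list and the plain term-frequency weighting are illustrative assumptions, not the weighting scheme used in the paper.

    # Minimal pseudo relevance feedback sketch (illustrative only).
    # Assumes a hypothetical search(query, k) function that returns the
    # top-k text snippets from a knowledge base such as Wikipedia.
    from collections import Counter
    import re

    STOPWORDS = {"the", "a", "an", "and", "or", "of", "in", "to", "is", "for"}

    def expand_query(query, search, k_docs=5, n_terms=3):
        """Append the n_terms most frequent non-query terms found in the
        top k_docs pseudo-relevant snippets to the original query."""
        counts = Counter()
        query_terms = set(query.lower().split())
        for snippet in search(query, k_docs):
            for term in re.findall(r"[a-z]+", snippet.lower()):
                if term not in STOPWORDS and term not in query_terms:
                    counts[term] += 1
        expansion = [term for term, _ in counts.most_common(n_terms)]
        return query + " " + " ".join(expansion)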

Keywords

Relevance Feedback, Query Term, Query Expansion, Result List, Normalized Discounted Cumulative Gain
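
The keyword list above mentions normalized discounted cumulative gain (nDCG), a standard measure for comparing ranked result lists using graded relevance judgements, which is relevant to the evaluation of result-list quality discussed in the abstract. The following minimal sketch shows how it can be computed; the function names and the example grades are illustrative, not taken from the paper.

    # Minimal nDCG computation sketch (illustrative only).
    import math

    def dcg(relevances):
        """Discounted cumulative gain of graded relevance judgements,
        ordered as ranked by the system (rank 1 first)."""
        return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

    def ndcg(relevances, k=None):
        """Normalized DCG: the DCG of the given ranking divided by the DCG
        of the ideal (descending) ordering of the same relevance grades."""
        ideal_dcg = dcg(sorted(relevances, reverse=True)[:k])
        return dcg(relevances[:k]) / ideal_dcg if ideal_dcg > 0 else 0.0

    # Example: a result list with graded relevance judgements 3, 2, 0, 1
    print(ndcg([3, 2, 0, 1]))  # ~0.985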


Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Know-Center GmbH, Graz, Austria
