Simulating Simple and Fallible Relevance Feedback

  • Feza Baskaya
  • Heikki Keskustalo
  • Kalervo Järvelin
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6611)

Abstract

Much of the research in relevance feedback (RF) has been performed under laboratory conditions using test collections and either test persons or simple simulation. These studies have given mixed results. The design of the present study is unique. First, the initial queries are realistically short queries generated by real end-users. Second, we perform a user simulation with several RF scenarios. Third, we simulate human fallibility in providing RF, i.e., incorrectness in feedback. Fourth, we employ graded relevance assessments in the evaluation of the retrieval results. The research question is: how does RF affect IR performance when initial queries are short and feedback is fallible? Our findings indicate that very fallible feedback is no different from pseudo-relevance feedback (PRF) and not effective on short initial queries. However, RF with empirically observed fallibility is as effective as correct RF and able to improve the performance of short initial queries.
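
The fallibility model at the heart of the study can be sketched in code. The following is a minimal illustration, not the authors' implementation: a simulated user judges the top-k documents of an initial result list and, with some error probability, returns a flipped (incorrect) binary judgment. The function and parameter names (simulate_fallible_feedback, error_rate) are assumptions introduced here for clarity.

    import random

    def simulate_fallible_feedback(ranked_docs, true_relevance, k=10,
                                   error_rate=0.2, seed=42):
        """Simulate a fallible user's binary judgments on the top-k results.

        ranked_docs    -- document ids in ranked order
        true_relevance -- dict mapping doc id to a graded relevance level
                          (0 = non-relevant, higher levels = more relevant)
        error_rate     -- probability that a single judgment is flipped
        """
        rng = random.Random(seed)
        judgments = {}
        for doc in ranked_docs[:k]:
            truly_relevant = true_relevance.get(doc, 0) > 0
            if rng.random() < error_rate:
                judgments[doc] = not truly_relevant  # fallible: label flipped
            else:
                judgments[doc] = truly_relevant      # correct judgment
        return judgments

    if __name__ == "__main__":
        ranking = ["d3", "d7", "d1", "d9", "d2"]
        qrels = {"d3": 2, "d7": 0, "d1": 1, "d9": 0, "d2": 3}  # graded levels
        print(simulate_fallible_feedback(ranking, qrels, k=5, error_rate=0.3))

Under this model, error_rate = 0 corresponds to correct RF, while judging every top-k document relevant regardless of its true label corresponds to PRF; intermediate, empirically observed error rates give the fallible-RF scenarios compared above.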

Keywords

Relevance feedback · Fallibility · Simulation

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Feza Baskaya (1)
  • Heikki Keskustalo (1)
  • Kalervo Järvelin (1)

  1. Department of Information Studies and Interactive Media, University of Tampere, Finland
