Abstract
Relevance feedback techniques assume that users provide relevance judgments for the top k (usually 10) documents and then re-rank using a new query model based on those judgments. Even though this is effective, there has been little research recently on this topic because requiring users to provide substantial feedback on a result list is impractical in a typical web search scenario. In new environments such as voice-based search with smart home devices, however, feedback about result quality can potentially be obtained during users’ interactions with the system. Since there are severe limitations on the length and number of results that can be presented in a single interaction in this environment, the focus should move from browsing result lists to iterative retrieval and from retrieving documents to retrieving answers. In this paper, we study iterative relevance feedback techniques with a focus on retrieving answer passages. We first show that iterative feedback is more effective than the top-k approach for answer retrieval. Then we propose an iterative feedback model based on passage-level semantic match and show that it can produce significant improvements compared to both word-based iterative feedback models and those based on term-level semantic similarity.
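To make the iterative loop concrete, the following is a minimal sketch of one classic instantiation of iterative relevance feedback: Rocchio updates over raw term-frequency vectors, with a single passage shown and judged per interaction. This illustrates the general loop the abstract describes, not the paper's passage-level semantic match model; the corpus, weights, and judge function are invented for the example.

```python
from collections import Counter
import math

def tf_vec(text):
    """Raw term-frequency vector as a Counter (no IDF, for simplicity)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(v * b.get(t, 0.0) for t, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rocchio_update(q, p, relevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Move the query vector toward a relevant passage, away from a
    non-relevant one; negative weights are clipped to zero."""
    w = beta if relevant else -gamma
    out = Counter()
    for t, v in q.items():
        out[t] += alpha * v
    for t, v in p.items():
        out[t] += w * v
    return Counter({t: v for t, v in out.items() if v > 0})

def iterative_feedback(query, passages, judge, rounds=3):
    """Show the top unseen passage each round, obtain a judgment from
    `judge`, and re-rank with the updated query. Returns the indices
    of the passages shown, in order."""
    q = tf_vec(query)
    vecs = [tf_vec(p) for p in passages]
    shown, history = set(), []
    for _ in range(rounds):
        ranked = sorted((i for i in range(len(passages)) if i not in shown),
                        key=lambda i: cosine(q, vecs[i]), reverse=True)
        if not ranked:
            break
        top = ranked[0]
        shown.add(top)
        history.append(top)
        q = rocchio_update(q, vecs[top], judge(top))
    return history

# Toy demo: the "user" judges a passage relevant if it mentions feedback.
passages = [
    "rocchio updates the query vector using relevance judgments",
    "smart home devices answer voice queries one result at a time",
    "the stock market closed higher on friday",
    "iterative feedback re-ranks after each judged answer passage",
]
history = iterative_feedback("relevance feedback for answer retrieval",
                             passages,
                             judge=lambda i: "feedback" in passages[i])
```

In the top-k setting all k judgments arrive at once before a single re-rank; here each judgment immediately reshapes the ranking for the next interaction, which is what makes the approach suitable for one-result-at-a-time voice interfaces.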
Notes
- 1. This dataset is publicly available at https://ciir.cs.umass.edu/downloads/PsgRobust/.
- 2.
- 3. On PsgRobust, BM25 and Rocchio underperform QL, RM3, and Distillation, respectively, by a large margin. Because its labels are generated from retrieval with SDM, this collection favors approaches in the language modeling (LM) framework over those in the vector space model (VSM) framework.
- 4. We also tried the true-RF version of BM25-PRF-GT [24], a generalized translation model of BM25 based on word embeddings and Rocchio. Due to its inferior performance on our dataset, we do not include those experiments here.
- 5.
- 6. The reason ERM does not perform well is shown in Sect. 6.3, where we discuss its performance difference on the two tasks.
References
Aalbersberg, I.J.: Incremental relevance feedback. In: Proceedings of the 15th Annual International ACM SIGIR Conference, pp. 11–22. ACM (1992)
Allan, J.: Incremental relevance feedback for information filtering. In: Proceedings of the 19th Annual International ACM SIGIR Conference, pp. 270–278. ACM (1996)
Bi, K., Ai, Q., Croft, W.B.: Revisiting iterative relevance feedback for document and passage retrieval. arXiv preprint arXiv:1812.05731 (2018)
Brondwine, E., Shtok, A., Kurland, O.: Utilizing focused relevance feedback. In: Proceedings of the 39th International ACM SIGIR Conference, pp. 1061–1064. ACM (2016)
Chen, M.: Efficient vector representation for documents through corruption. arXiv preprint arXiv:1707.02377 (2017)
Cirillo, C., Chang, Y., Razon, J.: Evaluation of feedback retrieval using modified freezing, residual collection, and test and control groups. Scientific Report No. ISR-16 to the National Science Foundation (1969)
Croft, W.B., Metzler, D., Strohman, T.: Search Engines: Information Retrieval in Practice. Addison-Wesley, Reading (2010)
Dai, A.M., Olah, C., Le, Q.V.: Document embedding with paragraph vectors. In: NIPS Deep Learning Workshop (2015)
Dehghani, M., Azarbonyad, H., Kamps, J., Hiemstra, D., Marx, M.: Luhn revisited: significant words language models. In: Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, pp. 1301–1310. ACM (2016)
Grossman, M.R., Cormack, G.V., Roegiest, A.: TREC 2016 total recall track overview. In: TREC (2016)
Habernal, I., et al.: New collection announcement: focused retrieval over the web. In: Proceedings of the 39th International ACM SIGIR Conference, pp. 701–704. ACM (2016)
Harman, D.: Relevance feedback revisited. In: Proceedings of the 15th Annual International ACM SIGIR Conference, pp. 1–10. ACM (1992)
Iwayama, M.: Relevance feedback with a small number of relevance judgements: incremental relevance feedback vs. document clustering. In: Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 10–16. ACM (2000)
Jones, G., Sakai, T., Kajiura, M., Sumita, K.: Incremental relevance feedback in Japanese text retrieval. Inf. Retrieval 2(4), 361–384 (2000)
Krovetz, R.: Viewing morphology as an inference process. In: Proceedings of the 16th Annual International ACM SIGIR Conference, pp. 191–202. ACM (1993)
Lavrenko, V., Croft, W.B.: Relevance-based language models. In: ACM SIGIR Forum, vol. 51, pp. 260–267. ACM (2017)
Le, Q., Mikolov, T.: Distributed representations of sentences and documents. In: Proceedings of the 31st International Conference on Machine Learning (ICML-14), pp. 1188–1196 (2014)
Maron, M.E., Kuhns, J.L.: On relevance, probabilistic indexing and information retrieval. J. ACM (JACM) 7(3), 216–244 (1960)
Metzler, D., Croft, W.B.: A Markov random field model for term dependencies. In: Proceedings of the 28th Annual International ACM SIGIR Conference, pp. 472–479. ACM (2005)
Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013)
Mikolov, T., Sutskever, I., Chen, K., Corrado, G.S., Dean, J.: Distributed representations of words and phrases and their compositionality. In: Advances in Neural Information Processing Systems, pp. 3111–3119 (2013)
Manning, C.D., Raghavan, P., Schütze, H.: Introduction to Information Retrieval. Cambridge University Press, Cambridge (2008)
Ponte, J.M., Croft, W.B.: A language modeling approach to information retrieval. In: Proceedings of the 21st Annual International ACM SIGIR Conference, pp. 275–281. ACM (1998)
Rekabsaz, N., Lupu, M., Hanbury, A., Zuccon, G.: Generalizing translation models in the probabilistic relevance framework. In: Proceedings of the 25th ACM CIKM Conference, pp. 711–720. ACM (2016)
Robertson, S.E., Jones, K.S.: Relevance weighting of search terms. J. Assoc. Inf. Sci. Technol. 27(3), 129–146 (1976)
Robertson, S.E., Walker, S., Jones, S., Hancock-Beaulieu, M.M., Gatford, M., et al.: Okapi at TREC-3. NIST Special Publication SP 109, 109 (1995)
Rocchio, J.J.: Relevance feedback in information retrieval. In: The Smart Retrieval System-experiments in Automatic Document Processing (1971)
Ruthven, I., Lalmas, M.: A survey on the use of relevance feedback for information access systems. Knowl. Eng. Rev. 18(2), 95–145 (2003)
Salton, G., Buckley, C.: Improving retrieval performance by relevance feedback. J. Am. Soc. Inf. Sci. 41, 288–297 (1990)
Salton, G., Wong, A., Yang, C.S.: A vector space model for automatic indexing. Commun. ACM 18(11), 613–620 (1975)
Smucker, M.D., Allan, J., Carterette, B.: A comparison of statistical significance tests for information retrieval evaluation. In: Proceedings of the 16th ACM CIKM Conference, pp. 623–632. ACM (2007)
Sun, F., Guo, J., Lan, Y., Xu, J., Cheng, X.: Learning word representations by jointly modeling syntagmatic and paradigmatic relations. In: ACL, vol. 1, pp. 136–145 (2015)
Yang, G.H., Soboroff, I.: TREC 2016 dynamic domain track overview. In: TREC (2016)
Yang, L., et al.: Beyond factoid QA: effective methods for non-factoid answer sentence retrieval. In: ECIR (2016)
Zamani, H., Croft, W.B.: Embedding-based query language models. In: Proceedings of the 2016 ACM ICTIR, pp. 147–156. ACM (2016)
Zamani, H., Croft, W.B.: Relevance-based word embedding. In: Proceedings of the 40th International ACM SIGIR Conference. SIGIR 2017 (2017)
Zhai, C., Lafferty, J.: Model-based feedback in the language modeling approach to information retrieval. In: Proceedings of the Tenth CIKM Conference, pp. 403–410. ACM (2001)
Acknowledgments
This work was supported in part by the Center for Intelligent Information Retrieval and in part by NSF IIS-1715095. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor.
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Bi, K., Ai, Q., Croft, W.B. (2019). Iterative Relevance Feedback for Answer Passage Retrieval with Passage-Level Semantic Match. In: Azzopardi, L., Stein, B., Fuhr, N., Mayr, P., Hauff, C., Hiemstra, D. (eds) Advances in Information Retrieval. ECIR 2019. Lecture Notes in Computer Science(), vol 11437. Springer, Cham. https://doi.org/10.1007/978-3-030-15712-8_36
Print ISBN: 978-3-030-15711-1
Online ISBN: 978-3-030-15712-8