Studying Catastrophic Forgetting in Neural Ranking Models

  • Conference paper
Advances in Information Retrieval (ECIR 2021)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12656)

Abstract

Several deep neural ranking models have been proposed in the recent IR literature. While their transferability to a single target domain, represented by a dataset, has been widely addressed using traditional domain adaptation strategies, the question of their cross-domain transferability is still under-studied. We study here to what extent neural ranking models catastrophically forget old knowledge acquired from previously observed domains after acquiring new knowledge, leading to a performance decrease on those domains. Our experiments show that the effectiveness of neural IR ranking models is achieved at the cost of catastrophic forgetting, and that a lifelong learning strategy using a cross-domain regularizer successfully mitigates the problem. Using an explanatory approach built on a regression model, we also show the effect of domain characteristics on the rise of catastrophic forgetting. We believe that these results can be useful for both theoretical and practical future work in neural IR.
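The cross-domain regularizer mentioned in the abstract belongs to a family of lifelong learning penalties whose best-known instance is elastic weight consolidation (EWC) by Kirkpatrick et al. [22]: a quadratic term anchors the parameters that mattered on previously seen domains while the model trains on a new one. The sketch below illustrates that mechanism only, not the paper's actual implementation; `theta_old`, `fisher`, and the new-domain loss are hypothetical toy values.

```python
# Illustrative EWC-style sketch (not the paper's exact method).
# Penalty: 0.5 * lam * sum_i F_i * (theta_i - theta_old_i)^2

def ewc_grad(theta, theta_old, fisher, lam):
    """Gradient of the quadratic anchor toward the old-domain parameters."""
    return [lam * f * (t - t0) for t, t0, f in zip(theta, theta_old, fisher)]

# Parameters fitted on the "old" domain, with a diagonal Fisher estimate
# marking how important each one was there (high = strongly protected).
theta_old = [1.0, -2.0, 0.5]
fisher = [10.0, 0.1, 5.0]

def new_domain_grad(theta):
    # Hypothetical new-domain loss sum_i (theta_i - 3)^2,
    # which pulls every parameter toward 3.0.
    return [2.0 * (t - 3.0) for t in theta]

theta = list(theta_old)
lam, lr = 1.0, 0.1
for _ in range(500):  # plain gradient descent on loss_new + penalty
    step = [gn + ge for gn, ge in zip(new_domain_grad(theta),
                                      ewc_grad(theta, theta_old, fisher, lam))]
    theta = [t - lr * s for t, s in zip(theta, step)]

# High-Fisher parameters (indices 0 and 2) stay near their old-domain
# values; the low-Fisher one (index 1) is free to adapt to the new domain.
```

In a real neural ranker, the Fisher diagonal would be estimated from squared gradients of the ranking loss over old-domain batches rather than set by hand; the toy values above only make the protection effect visible.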


Notes

  1. According to Pan and Yang [40], a domain consists of two components: a feature space and a marginal probability distribution over it.

  2. We consider the definition of transfer learning in [40]. Please note that several other definitions exist [13].

  3. In our work, different domains refer to different datasets characterized by different data distributions w.r.t. their source and content, as defined in [40].

References

  1. Adomavicius, G., Zhang, J.: Impact of data characteristics on recommender systems performance. ACM Trans. Manage. Inf. Syst. 3(1), 1–17 (2012)

  2. Asghar, N., Mou, L., Selby, K.A., Pantasdo, K.D., Poupart, P., Jiang, X.: Progressive memory banks for incremental domain adaptation. arXiv preprint arXiv:1811.00239 (2020)

  3. Bajaj, P., et al.: MS MARCO: a human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268 (2016)

  4. Bengio, Y.: Deep learning of representations for unsupervised and transfer learning. In: UTLW2011, pp. 17–37 (2011)

  5. Cai, H., Chen, H., Zhang, C., Song, Y., Zhao, X., Yin, D.: Adaptive parameterization for neural dialogue generation. In: EMNLP-IJCNLP, pp. 1793–1802 (2019)

  6. Chen, Z., Liu, B.: Lifelong machine learning. Synth. Lect. Artif. Intell. Mach. Learn. 12(3), 1–207 (2018)

  7. Cohen, D., Mitra, B., Hofmann, K., Croft, W.B.: Cross domain regularization for neural ranking models using adversarial learning. In: ACM SIGIR, pp. 1025–1028 (2018)

  8. Dehghani, M., Zamani, H., Severyn, A., Kamps, J., Croft, W.B.: Neural ranking models with weak supervision. In: Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 65–74 (2017)

  9. Deldjoo, Y., Di Noia, T., Di Sciascio, E., Merra, F.A.: How dataset characteristics affect the robustness of collaborative recommendation models. In: ACM SIGIR, pp. 951–960 (2020)

  10. Díaz-Rodríguez, N., Lomonaco, V., Filliat, D., Maltoni, D.: Don’t forget, there is more than forgetting: new metrics for Continual Learning. arXiv preprint arXiv:1810.13166 (2018)

  11. French, R.M.: Catastrophic forgetting in connectionist networks. Trends in Cogn. Sci. 3(4), 128–135 (1999)

  12. Goodfellow, I.J., Mirza, M., Da, X., Courville, A.C., Bengio, Y.: An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv preprint arXiv:1312.6211 (2014)

  13. Gulrajani, I., Lopez-Paz, D.: In search of lost domain generalization. In: International Conference on Learning Representations (2021). https://openreview.net/forum?id=lQdXeXDoWtI

  14. Guo, J., Fan, Y., Ai, Q., Croft, W.B.: A deep relevance matching model for ad-hoc retrieval. In: CIKM 2016, pp. 55–64. Association for Computing Machinery (2016)

  15. Hancock, B., Bordes, A., Mazare, P.E., Weston, J.: Learning from dialogue after deployment: feed yourself, chatbot! In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 3667–3684 (2019)

  16. Hofstätter, S., Rekabsaz, N., Eickhoff, C., Hanbury, A.: On the effect of low-frequency terms on neural-IR models. In: SIGIR, pp. 1137–1140 (2019)

  17. Hui, K., Yates, A., Berberich, K., de Melo, G.: PACRR: a position-aware neural IR model for relevance matching. arXiv preprint arXiv:1704.03940 (2017)

  18. Quiñonero-Candela, J., Sugiyama, M., Schwaighofer, A., Lawrence, N.D.: Dataset shift in machine learning. The MIT Press (2009)

  19. Jha, R., Lovering, C., Pavlick, E.: When does data augmentation help generalization in NLP? arXiv preprint arXiv:2004.15012 (2020)

  20. Kemker, R., McClure, M., Abitino, A., Hayes, T.L., Kanan, C.: Measuring catastrophic forgetting in neural networks. In: AAAI-18, pp. 3390–3398 (2018)

  21. Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)

  22. Kirkpatrick, J., et al.: Overcoming catastrophic forgetting in neural networks. Proc. Natl. Acad. Sci. 114(13), 3521–3526 (2017)

  23. Lee, C., Cho, K., Kang, W.: Mixout: effective regularization to finetune large-scale pretrained language models. In: 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26–30, 2020. OpenReview.net (2020). https://openreview.net/forum?id=HkgaETNtDB

  24. Lee, S.W., Kim, J.H., Jun, J., Ha, J.W., Zhang, B.T.: Overcoming catastrophic forgetting by incremental moment matching. In: NIPS2017, Curran Associates Inc., Red Hook, NY, USA, pp. 4655–4665 (2017)

  25. Li, J., Miller, A.H., Chopra, S., Ranzato, M., Weston, J.: Learning through dialogue interactions by asking questions. In: ICLR 2017 (2017)

  26. Li, Z., Hoiem, D.: Learning without forgetting. IEEE Trans. Pattern Anal. Mach. Intell. 12, 2935–2947 (2018)

  27. Lin, J., Efron, M.: Overview of the TREC-2013 microblog track. In: Text REtrieval Conference (TREC), Gaithersburg, Maryland, USA (2013)

  28. Liu, T.Y.: Learning to rank for information retrieval. Found. Trends Inf. Retr. 3(3), 225–331 (2009)

  29. Liu, X., Gao, J., He, X., Deng, L., Duh, K., Wang, Y.: Representation learning using multi-task deep neural networks for semantic classification and information retrieval. In: NAACL HLT 2015, pp. 912–921 (2015)

  30. MacAvaney, S.: OpenNIR: a complete neural ad-hoc ranking pipeline. In: WSDM 2020 (2020)

  31. MacAvaney, S., Yates, A., Cohan, A., Goharian, N.: CEDR: contextualized embeddings for document ranking. In: ACM SIGIR, pp. 1101–1104 (2019)

  32. d’Autume, C.D.M., Ruder, S., Kong, L., Yogatama, D.: Episodic memory in lifelong language learning. arXiv preprint arXiv:1906.01076 (2019)

  33. Mazumder, S., Ma, N., Liu, B.: Towards a continuous knowledge learning engine for chatbots. arXiv preprint arXiv:1802.06024 (2018)

  34. Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. In: ICLR 2013 (2013)

  35. Mitra, B., Craswell, N.: An introduction to neural information retrieval. Found. Trend Inf. Retrieval 13(1), 1–126 (2018)

  36. Mitra, B., Craswell, N.: An updated duet model for passage re-ranking. arXiv preprint arXiv:1903.07666 (2019)

  37. Mosbach, M., Andriushchenko, M., Klakow, D.: On the stability of fine-tuning bert: misconceptions, explanations, and strong baselines. arXiv preprint arXiv:2006.04884 (2020)

  38. Mou, L., Meng, Z., Yan, R., Li, G., Xu, Y., Zhang, L., Jin, Z.: How transferable are neural networks in NLP applications? In: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 479–489 (2016)

  39. Onal, K.D., et al.: Neural information retrieval: at the end of the early years. Inf. Retrieval J. 21, 111–182 (2017)

  40. Pan, S.J., Yang, Q.: A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 22(10), 1345–1359 (2010)

  41. Parisi, G.I., Kemker, R., Part, J.L., Kanan, C., Wermter, S.: Continual lifelong learning with neural networks: a review. Neural Netw. 113, 54–71 (2019)

  42. Roller, S., Boureau, Y.L., Weston, J., Bordes, A., Dinan, E., Fan, A., Gunning, D., Ju, D., Li, M., Poff, S., et al.: Open-domain conversational agents: current progress, open problems, and future directions. arXiv preprint arXiv:2006.12442 (2020)

  43. Rusu, A.A., et al.: Progressive neural networks. arXiv preprint arXiv:1606.04671 (2016)

  44. Soboroff, I., Ounis, I., Macdonald, C., Lin, J.J.: Overview of the TREC-2012 microblog track. In: Proceedings of The Twenty-First Text REtrieval Conference, TREC 2012. NIST Special Publication, vol. 500, p. 298 (2012)

  45. Thompson, B., Gwinnup, J., Khayrallah, H., Duh, K., Koehn, P.: Overcoming catastrophic forgetting during domain adaptation of neural machine translation. In: NAACL, pp. 2062–2068 (2019)

  46. Wang, L.L., et al.: CORD-19: the COVID-19 open research dataset. ArXiv (2020)

  47. Wen, S., Itti, L.: Overcoming catastrophic forgetting problem by weight consolidation and long-term memory. arXiv preprint arXiv:1805.07441 (2018)

  48. Wiese, G., Weissenborn, D., Neves, M.: Neural domain adaptation for biomedical question answering. In: CoNLL 2017, pp. 281–289 (2017)

  49. Xiong, C., Dai, Z., Callan, J., Liu, Z., Power, R.: End-to-end neural ad-hoc ranking with kernel pooling. In: ACM SIGIR, pp. 55–64 (2017)

  50. Xu, H., Liu, B., Shu, L., Yu, P.S.: Lifelong domain word embedding via meta-learning. In: IJCAI-18, pp. 4510–4516 (2018)

  51. Yang, W., Lu, K., Yang, P., Lin, J.: Critically examining the "neural hype": weak baselines and the additivity of effectiveness gains from neural ranking models. In: ACM SIGIR, pp. 1129–1132 (2019)

  52. Yang, W., Xie, Y., Tan, L., Xiong, K., Li, M., Lin, J.: Data augmentation for BERT fine-tuning in open-domain question answering. arXiv preprint arXiv:1904.06652 (2019)

  53. Yosinski, J., Clune, J., Bengio, Y., Lipson, H.: How transferable are features in deep neural networks? In: NIPS2014, pp. 3320–3328 (2014)

Acknowledgement

We would like to thank projects ANR COST (ANR-18-CE23-0016) and ANR JCJC SESAMS (ANR-18-CE23-0001) for supporting this work.

Author information

Correspondence to Jesús Lovón-Melgarejo.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Lovón-Melgarejo, J., Soulier, L., Pinel-Sauvagnat, K., Tamine, L. (2021). Studying Catastrophic Forgetting in Neural Ranking Models. In: Hiemstra, D., Moens, M.-F., Mothe, J., Perego, R., Potthast, M., Sebastiani, F. (eds) Advances in Information Retrieval. ECIR 2021. Lecture Notes in Computer Science, vol. 12656. Springer, Cham. https://doi.org/10.1007/978-3-030-72113-8_25

  • DOI: https://doi.org/10.1007/978-3-030-72113-8_25

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-72112-1

  • Online ISBN: 978-3-030-72113-8

  • eBook Packages: Computer Science (R0)
