Enriching Word Embeddings for Patent Retrieval with Global Context

  • Sebastian Hofstätter
  • Navid Rekabsaz
  • Mihai Lupu
  • Carsten Eickhoff
  • Allan Hanbury
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11437)

Abstract

The training and use of word embeddings for information retrieval has recently gained considerable attention, showing competitive performance across various domains. In this study, we explore the use of word embeddings for patent retrieval, a domain that is particularly challenging for methods based on distributional semantics. We hypothesize that the limited effectiveness previously reported for semantic approaches, and in particular word embeddings (word2vec Skip-gram), stems from the inherently short window context, which is too narrow for the model to capture the full complexity of the patent domain. To address this limitation, we jointly draw on local and global contexts for embedding learning, in two ways: (1) adapting the Skip-gram model’s vectors using global retrofitting, and (2) filtering word similarities using global context. We measure patent retrieval performance with the BM25 and LM Extended Translation models and observe significant improvements over three baselines.
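Both adaptation steps follow recognizable patterns, although the paper's exact procedure is not reproduced here. Below is a minimal sketch in Python under stated assumptions: vectors holds trained Skip-gram vectors, neighbours is a similarity graph assumed to be derived from global (document-level) context, and local_sim/global_sim map word pairs to similarity scores. None of these names, parameter values, or the threshold come from the paper. The retrofitting update mirrors the general scheme of Faruqui et al. (2015), here fed with global-context neighbours; the second helper keeps only word pairs whose global similarity clears a threshold.

    import numpy as np

    def retrofit(vectors, neighbours, alpha=1.0, beta=1.0, iterations=10):
        """Iterative retrofitting update in the spirit of Faruqui et al. (2015).

        vectors:    word -> np.ndarray, the original Skip-gram vectors
        neighbours: word -> set of words; assumed here to come from a
                    global-context similarity graph (hypothetical input)
        """
        adapted = {w: v.copy() for w, v in vectors.items()}
        for _ in range(iterations):
            for word, nbrs in neighbours.items():
                nbrs = [n for n in nbrs if n in adapted]
                if word not in adapted or not nbrs:
                    continue
                # pull the vector towards its global-context neighbours while
                # keeping it anchored to the original local-context vector
                neighbour_sum = beta * sum(adapted[n] for n in nbrs)
                adapted[word] = (neighbour_sum + alpha * vectors[word]) \
                                / (beta * len(nbrs) + alpha)
        return adapted

    def filter_similarities(local_sim, global_sim, threshold=0.5):
        """Keep a (w1, w2) pair only when the global-context similarity
        corroborates the local Skip-gram similarity (hypothetical helper)."""
        return {pair: sim for pair, sim in local_sim.items()
                if global_sim.get(pair, 0.0) >= threshold}

    # toy usage with made-up vectors and a single neighbour edge
    vecs = {"valve": np.array([0.9, 0.1]), "piston": np.array([0.8, 0.2])}
    adapted = retrofit(vecs, {"valve": {"piston"}})

A translation-based retrieval model, such as the LM Extended Translation model named in the abstract, would then draw its term-to-term similarities only from the filtered set.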

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Sebastian Hofstätter (1)
  • Navid Rekabsaz (2)
  • Mihai Lupu (3)
  • Carsten Eickhoff (4)
  • Allan Hanbury (1, 5)
  1. TU Wien, Vienna, Austria
  2. Idiap Research Institute, Martigny, Switzerland
  3. Research Studios Austria, Vienna, Austria
  4. Brown University, Providence, USA
  5. Complexity Science Hub, Vienna, Austria