Multiple Keyphrase Generation Model with Diversity

  • Shotaro Misawa
  • Yasuhide Miura
  • Motoki Taniguchi
  • Tomoko Ohkuma
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11437)

Abstract

Encoder–decoder models have achieved high performance when applied to keyphrase generation. However, the keyphrases these models generate for a source text tend to be similar to one another because each keyphrase is generated independently. To improve diversity, we propose a model that generates keyphrases iteratively, conditioning each keyphrase on the previously generated one. Experimental results indicate that our model generates more diverse keyphrases while achieving performance superior or comparable to that of conventional models.
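The abstract describes the core idea only at a high level: keyphrases are produced one after another, and each new keyphrase is conditioned on what was generated before it. The following is a minimal sketch of that idea, not the authors' implementation; the module names, vector sizes, the greedy decoding loop, and the mean-pooled "previous keyphrase summary" are all illustrative assumptions.

```python
# Illustrative sketch: iterative keyphrase generation in which the decoder
# also receives a summary of the previously generated keyphrase, so that
# later keyphrases can diverge from earlier ones. Hyperparameters and the
# conditioning mechanism are assumptions, not the paper's actual model.
import torch
import torch.nn as nn

VOCAB, EMB, HID, BOS, EOS, MAX_LEN = 1000, 64, 128, 1, 2, 8


class IterativeKeyphraseGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.encoder = nn.GRU(EMB, HID, batch_first=True)
        # The decoder input is the token embedding concatenated with a
        # summary vector of the keyphrase generated in the previous step.
        self.decoder = nn.GRUCell(EMB + HID, HID)
        self.out = nn.Linear(HID, VOCAB)

    def encode(self, src_ids):
        # src_ids: (1, src_len) -> final encoder state of shape (1, HID)
        _, h = self.encoder(self.embed(src_ids))
        return h.squeeze(0)

    def generate(self, src_ids, num_keyphrases=3):
        doc_state = self.encode(src_ids)
        prev_summary = torch.zeros(1, HID)   # no previous keyphrase yet
        keyphrases = []
        for _ in range(num_keyphrases):
            h = doc_state.clone()
            tok = torch.tensor([BOS])
            phrase, states = [], []
            for _ in range(MAX_LEN):
                inp = torch.cat([self.embed(tok), prev_summary], dim=-1)
                h = self.decoder(inp, h)
                states.append(h)
                tok = self.out(h).argmax(dim=-1)
                if tok.item() == EOS:
                    break
                phrase.append(tok.item())
            keyphrases.append(phrase)
            # Summary of the keyphrase just produced; it conditions the next one.
            prev_summary = torch.stack(states).mean(dim=0)
        return keyphrases


if __name__ == "__main__":
    model = IterativeKeyphraseGenerator()
    src = torch.randint(3, VOCAB, (1, 20))   # toy source document
    print(model.generate(src))
```

In this sketch, feeding `prev_summary` into every decoder step is what lets the model "consider" earlier output; the paper itself evaluates this kind of conditioning against models that generate each keyphrase independently.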

Keywords

Keyphrase generation · Diversity · Attention

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Shotaro Misawa (1)
  • Yasuhide Miura (1)
  • Motoki Taniguchi (1)
  • Tomoko Ohkuma (1)
  1. Fuji Xerox Co., Ltd., Yokohama, Japan