A Targeted Retraining Scheme of Unsupervised Word Embeddings for Specific Supervised Tasks

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10235)

Abstract

This paper proposes a simple retraining scheme that purposefully adjusts unsupervised word embeddings for specific supervised tasks, such as sentence classification. Unlike current methods, which fine-tune word embeddings on the training set during the supervised learning procedure, our method treats the task labels as implicit context information with which to retrain the word embeddings, so that every word required by the intended task obtains a task-specific representation. Moreover, because our method is independent of the supervised learning process, it carries less risk of over-fitting. We validate our method on various sentence classification tasks; the improvements in accuracy are remarkable when only scarce training data are available.
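To make the label-as-context idea concrete, the sketch below shows one plausible reading of it: each training sentence's class label is injected as a pseudo-word, and a word2vec-style model is retrained over the augmented corpus so that words are learned partly from their task labels. The toolkit (gensim), the __LABEL_*__ token scheme, and the retrain_with_labels helper are illustrative assumptions, not the paper's actual implementation.

# A minimal sketch, assuming a word2vec-style retrainer (gensim here;
# the paper does not prescribe a toolkit). Each sentence's class label
# is injected as an extra pseudo-word so that, during retraining, words
# are also predicted from (and predict) their task label.
from gensim.models import Word2Vec

def retrain_with_labels(sentences, labels, dim=100):
    """sentences: list of token lists; labels: parallel list of class ids."""
    augmented = [
        [f"__LABEL_{y}__"] + tokens          # label token acts as implicit context
        for tokens, y in zip(sentences, labels)
    ]
    # Skip-gram training over the augmented corpus; the label pseudo-word
    # falls inside the context window of the nearby words.
    model = Word2Vec(augmented, vector_size=dim, window=5, min_count=1, sg=1)
    return model.wv                          # task-adjusted word vectors

# Toy usage: two sentences from a binary sentiment task.
wv = retrain_with_labels([["great", "movie"], ["dull", "plot"]], [1, 0])
print(wv["great"].shape)

Because the retraining objective here is unsupervised (the label enters only as a context token, not as a loss target), this sketch also reflects the paper's claim that the adjustment step stays independent of the downstream supervised learner.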

Keywords

Word embedding · Unsupervised learning · Task-specific

Notes

Acknowledgments

This work was supported by the 111 Project of China under Grant no. B08004, the National Natural Science Foundation of China (61273217, 61300080), and the Ph.D. Programs Foundation of the Ministry of Education of China (20130005110004).


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

1. School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing, China
