Text Classification Research Based on Improved Word2vec and CNN

  • Mengyuan Gao
  • Tinghui Li
  • Peifang Huang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11434)


Traditional classification algorithms often suffer from high feature dimensionality and data sparseness when classifying short texts. This paper proposes a text-feature matrix model that combines the neural-network language model word2vec with the document topic model Latent Dirichlet Allocation (LDA). The matrix model not only effectively represents the semantic features of words but also conveys contextual features, enhancing the feature-expression ability of the model. The feature matrix is input into a convolutional neural network (CNN) for convolution and pooling, and text classification experiments are performed. The experimental results show that the proposed matrix model achieves a better classification effect than traditional text classification methods based on word2vec and CNN alone, improving the three evaluation indicators of precision, recall, and F1 by 8.4%, 8.9%, and 8.6%, respectively.
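The core idea above is to give each word a row that carries both its word2vec embedding and the document's LDA topic distribution, so the CNN sees semantic and topical context together. A minimal sketch of that matrix construction, using small random vectors as stand-ins for a trained word2vec model and a fixed distribution as a stand-in for LDA output (all dimensions and values here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

np.random.seed(0)

# Hypothetical stand-ins for trained models: word2vec would supply one
# dense vector per word; LDA one topic distribution per document.
EMB_DIM, N_TOPICS = 4, 3
vocab = ["deep", "learning", "text", "classification"]
word_vecs = {w: np.random.randn(EMB_DIM) for w in vocab}  # word2vec stand-in
doc_topics = np.array([0.7, 0.2, 0.1])                    # LDA stand-in


def build_feature_matrix(tokens, word_vecs, doc_topics):
    """Concatenate each word's embedding with the document's topic
    distribution, giving one row per token: [word2vec | LDA]."""
    rows = [np.concatenate([word_vecs[t], doc_topics]) for t in tokens]
    return np.stack(rows)  # shape: (len(tokens), EMB_DIM + N_TOPICS)


doc = ["text", "classification", "deep", "learning"]
features = build_feature_matrix(doc, word_vecs, doc_topics)
print(features.shape)  # (4, 7)
```

The resulting matrix has one row per token, so it can be fed directly to a 1-D convolution layer the way a plain word2vec embedding matrix would be; the extra topic columns are what distinguish this representation from the word2vec-only baseline.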


Keywords: Text classification · Word2vec · LDA · CNN



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Guangxi Normal University, Guilin, China
