Multimodal Fusion with Global and Local Features for Text Classification

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10634)

Abstract

Text classification is a crucial task in natural language processing. Owing to the characteristics of text structure, achieving the best result remains an ongoing challenge. In this paper, we propose an ensemble model that outperforms the state of the art. We first utilize a rule-based n-gram approach to extend the corpus. Then two different kinds of features, global word dependencies and local semantic features, are extracted by a gated recurrent unit and a global-average-pooling model, respectively. To take advantage of the complementarity of the global and local features, decision-level fusion is applied to combine them. We evaluate our model on several public datasets covering sentiment analysis, ontology classification, and text categorization. Experimental results show that our model effectively learns representations for language modeling and achieves the best accuracy on the text categorization tasks.
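As a rough illustration of the architecture the abstract describes, the sketch below combines a GRU branch (global word dependencies) with a global-average-pooling branch (local semantic features) and fuses them at the decision level by averaging the two branches' class posteriors. All layer sizes, the class count, and the fusion weight are hypothetical assumptions for illustration, not values taken from the paper.

# Minimal PyTorch sketch of the two-branch model with decision-level fusion.
# Dimensions, class count, and fusion weight are illustrative assumptions.
import torch
import torch.nn as nn

class TwoBranchClassifier(nn.Module):
    def __init__(self, vocab_size=30000, embed_dim=128, hidden_dim=128,
                 num_classes=5, fusion_weight=0.5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Global branch: a GRU captures long-range word dependencies.
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.global_head = nn.Linear(hidden_dim, num_classes)
        # Local branch: global average pooling over word embeddings
        # yields a bag-of-words-style semantic feature.
        self.local_head = nn.Linear(embed_dim, num_classes)
        self.fusion_weight = fusion_weight

    def forward(self, token_ids):                        # (batch, seq_len)
        emb = self.embed(token_ids)                      # (batch, seq_len, embed_dim)
        _, h_n = self.gru(emb)                           # h_n: (1, batch, hidden_dim)
        global_logits = self.global_head(h_n.squeeze(0))
        local_logits = self.local_head(emb.mean(dim=1))  # global average pooling
        # Decision-level fusion: weighted average of class posteriors.
        p_global = torch.softmax(global_logits, dim=-1)
        p_local = torch.softmax(local_logits, dim=-1)
        return self.fusion_weight * p_global + (1 - self.fusion_weight) * p_local

model = TwoBranchClassifier()
probs = model(torch.randint(1, 30000, (4, 50)))          # 4 documents, 50 tokens each

Averaging posteriors rather than concatenating hidden features keeps the two branches independently trainable, which is one common way to realize decision-level fusion.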

Keywords

Text classification · Semantic feature · Global average pooling · Global feature

Acknowledgements

This work is supported by the Special Funds of the National Natural Science Foundation of China (Grant No. 51227803).

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

School of Computer Engineering and Science, Shanghai University, Shanghai, China
