
Guiding Approximate Text Classification Rules via Context Information

  • Wai Chung Wong
  • Sunny Lai
  • Wai Lam
  • Kwong Sak Leung
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11292)

Abstract

Human experts can often easily write a set of approximate rules based on their domain knowledge to support automatic text classification. While such approximate rules can perform classification at a general level, they are not effective at handling the diverse and specific situations that arise within a particular category. Given a set of approximate rules and a moderate amount of labeled data, existing incremental text classification learning models can tackle this problem through continuous rule refinement. However, these models do not consider the context information that is inherent in the data. We propose a framework comprising rule embeddings and context embeddings derived from data, which enhances the adaptability of approximate rules by taking context information into account. We conduct extensive experiments, and the results demonstrate that our proposed framework outperforms existing models on some benchmark datasets, indicating that learning the context of rules is constructive for improving text classification performance.
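The abstract describes the framework only at this level of detail, but the core idea, scoring each hand-written rule against a learned representation of the surrounding text, can be illustrated concretely. The following is a minimal PyTorch sketch under stated assumptions: the BiLSTM context encoder, the mean-pooled context vector, and the bilinear rule-context scorer are illustrative stand-ins, not the authors' actual architecture, and the class name `ContextGuidedRules` and all dimensions are hypothetical.

```python
import torch
import torch.nn as nn

class ContextGuidedRules(nn.Module):
    """Hypothetical sketch: gate each approximate rule by the context
    of the input text, so a matched rule can be strengthened or
    suppressed depending on where it fires. The encoder choice,
    pooling, and bilinear scoring are assumptions for illustration."""

    def __init__(self, vocab_size, num_rules, emb_dim=100, ctx_dim=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        # One learned embedding per human-written approximate rule.
        self.rule_emb = nn.Embedding(num_rules, ctx_dim)
        # Context encoder: a BiLSTM over the document tokens.
        self.encoder = nn.LSTM(emb_dim, ctx_dim // 2,
                               batch_first=True, bidirectional=True)
        # Bilinear interaction between rule and context embeddings.
        self.score = nn.Bilinear(ctx_dim, ctx_dim, 1)

    def forward(self, tokens, rule_hits):
        # tokens:    (batch, seq_len) token ids
        # rule_hits: (batch, num_rules) float 0/1 mask of rules whose
        #            surface pattern matched the document
        h, _ = self.encoder(self.word_emb(tokens))
        context = h.mean(dim=1)                       # (batch, ctx_dim)
        rules = self.rule_emb.weight                  # (num_rules, ctx_dim)
        b, r = rule_hits.shape
        ctx = context.unsqueeze(1).expand(b, r, -1)
        rul = rules.unsqueeze(0).expand(b, r, -1)
        # Context-dependent confidence for each rule on each document.
        conf = self.score(ctx.reshape(-1, ctx.size(-1)),
                          rul.reshape(-1, rul.size(-1))).view(b, r)
        return torch.sigmoid(conf) * rule_hits        # gate unmatched rules
```

In a design of this kind, the gated rule confidences could feed a standard linear layer over the category labels, so that a rule's vote is weighted by the document's context rather than applied uniformly wherever its pattern matches.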

Keywords

Rule embedding · Context embedding · Text classification


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Wai Chung Wong (1)
  • Sunny Lai (2, 3)
  • Wai Lam (1)
  • Kwong Sak Leung (2, 3)
  1. Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Shatin, Hong Kong
  2. Department of Computer Science and Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong
  3. Institute of Future Cities, The Chinese University of Hong Kong, Shatin, Hong Kong
