Comparison of the Novel Classification Methods on the Reuters-21578 Corpus

  • Conference paper
  • Published in: Multimedia and Network Information Systems (MISSI 2018)

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 833)

Abstract

The paper evaluates novel boosting methods applied to the commonly used Multinomial Naïve Bayes classifier. The evaluation is performed on the Reuters-21578 corpus, which consists of 10,788 documents in 90 categories. All experiments use the tf-idf weighting model and the one-versus-the-rest strategy. The AdaBoost, XGBoost, and Gradient Boost algorithms are tested, and the impact of feature selection is examined as well. The evaluation is carried out with commonly used metrics: precision, recall, F1, and precision-recall break-even points. The novel aspect of this work is that all considered boosted methods are compared to each other and to several classical methods (Support Vector Machines and a Random Forests classifier). The results are much better than in the classic Joachims paper and slightly better than those obtained with the maximum-discrimination method for feature selection. This is important because for the past 20 years most works have focused on how results change upon modification of parameters. Surprisingly, the result obtained with a feed-forward neural network is comparable to Bayesian optimization over boosted Naïve Bayes, despite the medium size of the corpus. We plan to extend these results by using word embedding methods.
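The evaluation pipeline described above can be sketched with scikit-learn: tf-idf features feeding a one-versus-the-rest Multinomial Naïve Bayes baseline, scored with precision, recall, and F1. This is a minimal illustration under stated assumptions, not the authors' code — the toy corpus and topic labels stand in for Reuters-21578, and the paper's boosted variants (AdaBoost, XGBoost, Gradient Boost) would wrap or replace the base classifier.

```python
# Minimal sketch of the evaluation setup: tf-idf weighting, a
# one-vs-rest Multinomial Naive Bayes baseline, and micro-averaged
# precision/recall/F1. The tiny corpus below is an illustrative
# stand-in for the Reuters-21578 documents and topic labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import precision_recall_fscore_support

docs = [
    "wheat crop harvest grain",   # topic: grain
    "grain wheat export tonnes",  # topic: grain
    "crude oil barrel price",     # topic: crude
    "opec crude oil output",      # topic: crude
]
labels = ["grain", "grain", "crude", "crude"]

# tf-idf weighting model
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)

# one-vs-rest strategy around the Multinomial Naive Bayes baseline
clf = OneVsRestClassifier(MultinomialNB())
clf.fit(X, labels)
pred = clf.predict(X)  # in-sample predictions, for illustration only

precision, recall, f1, _ = precision_recall_fscore_support(
    labels, pred, average="micro"
)
```

On Reuters-21578 one would instead use the standard ModApte train/test split and report per-category as well as micro-averaged scores.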


References

  1. Banerjee, S., Majumder, P., Mitra, M.: Re-evaluating the need for modelling term-dependence in text classification problems. CoRR abs/1710.09085 (2017)

  2. Chen, T., Guestrin, C.: XGBoost: a scalable tree boosting system. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 785–794 (2016)

  3. Freund, Y., Schapire, R.: A decision theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci. 55, 119–139 (1997)

  4. Friedman, J.H.: Greedy function approximation: a gradient boosting machine. Ann. Stat. 29(5), 1189–1232 (2001)

  5. Ji, Y., Smith, N.A.: Neural discourse structure for text categorization. In: ACL 2017, Vancouver, Canada, pp. 996–1005 (2017)

  6. Joachims, T.: Text categorization with support vector machines: learning with many relevant features. In: ECML, pp. 137–142 (1998)

  7. Lewis, D.D., Yang, Y., Rose, T., Li, F.: RCV1: a new benchmark collection for text categorization research. J. Mach. Learn. Res. 5, 361–397 (2004)

  8. Liang, H., Sun, X., Sun, Y., Gao, Y.: Text feature extraction based on deep learning: a review. EURASIP J. Wirel. Commun. Netw. 2017(1), 211 (2017)

  9. Manning, C.D., Raghavan, P., Schütze, H.: Introduction to Information Retrieval. Cambridge University Press, New York (2008)

  10. Yogatama, D., Kong, L., Smith, N.A.: Bayesian optimization of text representations. In: EMNLP, pp. 2100–2105 (2015)

  11. Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. In: The International Conference on Learning Representations (ICLR), San Diego (2015)

  12. Salakhutdinov, R., Hinton, G.E.: Semantic hashing. Int. J. Approx. Reason. 50(7), 969–978 (2009)

  13. Yang, Y., Liu, X.: A re-examination of text categorization methods. In: Proceedings of the 22nd Annual International ACM SIGIR Conference (1999)

  14. Tang, B., Kay, S., He, H.: Toward optimal feature selection in Naive Bayes for text categorization. IEEE Trans. Knowl. Data Eng. 28(9), 2508–2521 (2016)

Acknowledgements

We acknowledge the Poznan University of Technology grant (04/45/DSPB/0185).

Author information

Corresponding author: Czesław Jędrzejek

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Zdrojewska, A., Dutkiewicz, J., Jędrzejek, C., Olejnik, M. (2019). Comparison of the Novel Classification Methods on the Reuters-21578 Corpus. In: Choroś, K., Kopel, M., Kukla, E., Siemiński, A. (eds) Multimedia and Network Information Systems. MISSI 2018. Advances in Intelligent Systems and Computing, vol 833. Springer, Cham. https://doi.org/10.1007/978-3-319-98678-4_30
