Online Extreme Entropy Machines for Streams Classification and Active Learning

  • Wojciech Marian Czarnecki
  • Jacek Tabor
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 403)

Abstract

When dealing with large, evolving datasets one needs machine learning models able to adapt to a growing amount of information. In particular, stream classification is a research topic where classifiers must be able to rapidly change their solutions, yet behave stably after many changes in the training set structure. In this paper we show how the recently proposed Extreme Entropy Machine can be trained in an online fashion, supporting not only adding/removing points to/from the model but even changing the size of the internal representation on demand. In particular, we show how one can build a well-conditioned covariance estimator in an online scenario. All these operations are guaranteed to converge to the optimal solutions given by their offline counterparts.
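The abstract mentions building a well-conditioned covariance estimator online. As an illustration only (this is not the paper's exact construction), the sketch below combines Welford-style incremental mean/scatter updates, which support both adding and removing points exactly, with identity-target shrinkage in the spirit of Ledoit-Wolf regularization; the class name and the fixed shrinkage intensity are assumptions for the example.

```python
import numpy as np

class OnlineShrinkageCovariance:
    """Incremental well-conditioned covariance estimator (illustrative sketch).

    Maintains a running mean and scatter matrix so that adding or removing
    a point reproduces the batch estimate exactly; the returned covariance
    is shrunk toward a scaled identity, keeping it well-conditioned.
    """

    def __init__(self, d, shrinkage=0.1):
        self.n = 0
        self.mean = np.zeros(d)
        self.scatter = np.zeros((d, d))  # sum of centered outer products
        self.shrinkage = shrinkage       # fixed intensity (an assumption here)

    def add(self, x):
        # Welford update: scatter gains (x - old_mean)(x - new_mean)^T,
        # which is symmetric since new_mean lies on the segment [old_mean, x].
        self.n += 1
        delta = x - self.mean
        self.mean = self.mean + delta / self.n
        self.scatter += np.outer(delta, x - self.mean)

    def remove(self, x):
        # Exact downdate: invert the corresponding add() step.
        delta = x - self.mean
        self.mean = (self.n * self.mean - x) / (self.n - 1)
        self.scatter -= np.outer(x - self.mean, delta)
        self.n -= 1

    def covariance(self):
        d = self.mean.shape[0]
        sample = self.scatter / self.n       # empirical covariance (ddof=0)
        mu = np.trace(sample) / d            # scale of the identity target
        # Convex combination with mu*I bounds the condition number.
        return (1 - self.shrinkage) * sample + self.shrinkage * mu * np.eye(d)
```

Because both operations update the same sufficient statistics, the estimator after any sequence of additions and removals coincides with the offline estimate computed on the surviving points, mirroring the offline-equivalence guarantee the abstract describes.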

Keywords

Online learning · Extreme entropy machine · Stream classification · Covariance estimation

Notes

Acknowledgments

The work of the first author was partially funded by National Science Centre Poland grant no. 2013/09/N/ST6/03015, and the work of the second author by National Science Centre Poland grant no. 2014/13/B/ST6/01792.

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Faculty of Mathematics and Computer Science, Jagiellonian University, Krakow, Poland