Overview of One-Pass and Discard-After-Learn Concepts for Classification and Clustering in Streaming Environment with Constraints

  • Chidchanok Lursinsap
Chapter
Part of the Unsupervised and Semi-Supervised Learning book series (UNSESUL)

Abstract

With the advancement of internet technology and sensor networks, a tremendous amount of data is being generated beyond our imagination. These data contain valuable and possibly relevant information for various fields of application. Learning from these data online with current neural learning techniques is not simple, owing to several technical constraints: data overflow, uncontrollable learning epochs, arbitrary class drift, and a dynamically imbalanced class ratio. Recently, we have attempted to tackle this neural learning problem in a non-stationary environment. In this article, we summarize the new concept of One-Pass-Learning-and-Discard, together with two new neural network structures, called the Malleable Hyper-ellipsoid and Hyper-cylinder, recently introduced to cope with supervised as well as unsupervised learning under the constraints of data overflow, polynomial time and space complexity of the learning process, arbitrary class drift, finite life of data, and a dynamically imbalanced class ratio. Both structures are rotatable, translatable, and expandable according to the distribution and location of a data cluster.
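To make the One-Pass-Learning-and-Discard idea concrete, the following is a minimal illustrative sketch (not the chapter's actual algorithm) of a hyper-ellipsoidal micro-cluster summary: each incoming datum updates a running mean and covariance once and is then discarded, so memory stays constant regardless of stream length, and the ellipsoid's axes rotate and expand with the data. The class name, threshold, and update rule here are assumptions for illustration; the incremental covariance update is Welford-style.

```python
import numpy as np

class HyperEllipsoid:
    """Illustrative discard-after-learn micro-cluster summary.

    Each datum updates the running mean and covariance in one pass
    and can then be thrown away; only O(d^2) state is retained.
    """

    def __init__(self, dim, radius=3.0):
        self.n = 0
        self.mean = np.zeros(dim)
        self.cov = np.zeros((dim, dim))
        self.radius = radius  # Mahalanobis acceptance threshold (assumed)

    def contains(self, x):
        """Membership test: squared Mahalanobis distance vs. threshold."""
        d = x - self.mean
        c = self.cov + 1e-6 * np.eye(len(d))  # regularize for invertibility
        return d @ np.linalg.solve(c, d) <= self.radius ** 2

    def learn(self, x):
        """One-pass update; x may be discarded after this call."""
        self.n += 1
        delta = x - self.mean
        self.mean = self.mean + delta / self.n
        # Rank-1 covariance update: the ellipsoid's orientation and
        # extent follow the evolving data distribution.
        self.cov = self.cov + (np.outer(delta, x - self.mean) - self.cov) / self.n
```

A datum near the cluster centre is accepted by `contains`, while a distant one is rejected, which is the basic decision used when routing stream points to existing micro-clusters or spawning new ones.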

Notes

Acknowledgement

This work was supported by the Thailand Research Fund under grant number RTA6080013.


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Chidchanok Lursinsap
  1. Advanced Virtual and Intelligent Computing Center, Department of Mathematics and Computer Science, Faculty of Science, Chulalongkorn University, Bangkok, Thailand
  2. The Royal Institute of Thailand, Bangkok, Thailand