Self-Adaptive Parameters Optimization for Incremental Classification in Big Data Using Neural Network

  • Simon Fong
  • Charlie Fang
  • Neal Tian
  • Raymond Wong
  • Bee Wah Yap
Part of the International Series on Computer Entertainment and Media Technology book series (ISCEMT)


Big Data is being touted as the next big thing, posing technical challenges to both academic research communities and commercial IT deployments. Two root characteristics of Big Data are unbounded data streams and the curse of dimensionality. Data sourced from streams accumulate continuously, making traditional batch-based model-induction algorithms infeasible for real-time data mining. In the past, many methods have been proposed for incremental data mining by modifying classical machine learning algorithms such as artificial neural networks. In this paper we propose an incremental supervised-learning process for neural networks over data streams. The process is coupled with a parameter-optimization module that searches for the best combination of input-parameter values on a given segment of the data stream. The drawback of this optimization is its heavy time consumption. To relieve this limitation, a loss function is proposed that looks ahead for the occurrence of concept drift, one of the main causes of performance deterioration in data-mining models; optimization is then skipped intermittently along the way to save computation costs. Computer simulation is conducted to confirm the merits of this incremental optimization process for neural networks.
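The loop the abstract describes (segment-by-segment incremental updates, with the costly parameter search triggered only when a look-ahead loss suggests concept drift) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the single-layer perceptron, the 0/1 segment loss, the `drift_threshold`, and the learning-rate grid `lr_grid` are all simplifying assumptions standing in for the paper's neural network and its optimized parameter set.

```python
def train_sgd(model, segment, lr, epochs=5):
    # Incremental update of a one-layer perceptron (hypothetical stand-in
    # for the paper's neural network). segment is a list of (x, y) pairs.
    w, b = model
    for _ in range(epochs):
        for x, y in segment:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return (w, b)

def segment_loss(model, segment):
    # 0/1 loss on a segment: the "look-ahead" signal for concept drift.
    w, b = model
    wrong = sum(1 for x, y in segment
                if (1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0) != y)
    return wrong / len(segment)

def incremental_learn(stream_segments, drift_threshold=0.2,
                      lr_grid=(0.01, 0.1, 0.5)):
    # drift_threshold and lr_grid are assumed knobs for illustration.
    model = ([0.0, 0.0], 0.0)   # weights and bias for 2 input features
    lr = lr_grid[0]
    for segment in stream_segments:
        if segment_loss(model, segment) > drift_threshold:
            # Drift suspected: run the (expensive) parameter search,
            # keeping the candidate with the lowest segment loss.
            candidates = [(train_sgd(model, segment, r), r) for r in lr_grid]
            model, lr = min(candidates,
                            key=lambda c: segment_loss(c[0], segment))
        else:
            # No drift signal: skip optimization, do a cheap update only.
            model = train_sgd(model, segment, lr)
    return model
```

The key cost saving is in the `else` branch: when the look-ahead loss stays below the threshold, the grid search over parameter values is skipped entirely and only a single incremental pass is performed.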


Neural network · Incremental machine learning · Classification · Big data · Parameter optimization



The authors are thankful for the financial support from the research grant “Rare Event Forecasting and Monitoring in Spatial Wireless Sensor Network Data,” Grant no. MYRG2014-00065-FST, offered by the University of Macau, FST, and RDAO.



Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Simon Fong 1 (corresponding author)
  • Charlie Fang 1
  • Neal Tian 1
  • Raymond Wong 2
  • Bee Wah Yap 3

  1. Department of Computer and Information Science, University of Macau, Macau SAR, China
  2. School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
  3. Faculty of Computer and Mathematical Sciences, Universiti Teknologi MARA, Selangor, Malaysia
