
Non-iterative online sequential learning strategy for autoencoder and classifier

  • Original Article
  • Published in: Neural Computing and Applications

Abstract

Artificial neural network training algorithms aim to optimize the network parameters with respect to a pre-defined cost function. Gradient-based training algorithms support iterative learning and have gained immense popularity for training many kinds of neural networks end-to-end; however, gradient-based training is time-consuming. Another family of training algorithms is based on the Moore–Penrose inverse and is considerably faster than gradient-based methods. Nevertheless, most of these algorithms are non-iterative and therefore do not naturally support mini-batch learning. This work extends two non-iterative Moore–Penrose inverse-based training algorithms to enable online sequential learning: a single-hidden-layer autoencoder training algorithm and a sub-network-based classifier training algorithm. We further present an approach that uses the proposed autoencoder for self-supervised dimension reduction and then uses the proposed classifier for supervised classification. Experimental results show that the proposed approach achieves satisfactory classification accuracy on many benchmark datasets with extremely low time consumption (up to 50 times faster than a support vector machine on the CIFAR-10 dataset).
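The abstract describes the method only at a high level. As a concrete illustration of the general family of techniques it refers to, the sketch below shows (i) an ELM-style autoencoder whose output weights are obtained in closed form with the Moore–Penrose pseudoinverse and reused as a dimension-reduction projection, and (ii) a single-hidden-layer classifier whose output weights are updated mini-batch by mini-batch with a recursive least-squares rule (the standard OS-ELM-style sequential update). This is a minimal NumPy sketch of the idea, not the authors' exact algorithm; all names (`elm_autoencoder`, `OSPseudoinverseClassifier`, `partial_fit`) are our own illustrative choices.

```python
import numpy as np

def elm_autoencoder(X, n_hidden, rng):
    """ELM-style autoencoder: a random hidden layer followed by output
    weights solved in closed form with the Moore-Penrose pseudoinverse.
    With n_hidden < n_features, X @ beta.T is a reduced representation."""
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))   # random input weights
    b = rng.standard_normal(n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                            # hidden activations
    beta = np.linalg.pinv(H) @ X                      # solves H @ beta ~= X
    return beta

class OSPseudoinverseClassifier:
    """Single-hidden-layer classifier trained online: the output weights
    stay consistent with all data seen so far via the recursive
    least-squares update used by OS-ELM-style methods."""

    def __init__(self, n_inputs, n_hidden, n_classes, rng, reg=1e-3):
        self.W = rng.standard_normal((n_inputs, n_hidden))
        self.b = rng.standard_normal(n_hidden)
        self.P = np.eye(n_hidden) / reg               # ~ (reg * I)^-1 initially
        self.beta = np.zeros((n_hidden, n_classes))

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def partial_fit(self, X, T):
        """One sequential update on a mini-batch; T holds one-hot targets."""
        H = self._hidden(X)
        K = np.linalg.inv(np.eye(X.shape[0]) + H @ self.P @ H.T)
        self.P = self.P - self.P @ H.T @ K @ H @ self.P
        self.beta = self.beta + self.P @ H.T @ (T - H @ self.beta)

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)
```

A pipeline in the spirit of the paper would first learn `beta = elm_autoencoder(X_train, n_hidden, rng)`, project the data with `Z = X_train @ beta.T`, and then feed `Z` to `partial_fit` one mini-batch at a time; because every weight solve is a pseudoinverse or a small matrix inversion, no gradient iterations are needed.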



Data availability

All the datasets used in this work are publicly available.

Notes

  1. http://www.mathworks.com/matlabcentral/fileexchange/38310-deeplearning-toolbox.

  2. http://www.cad.zju.edu.cn/home/dengcai/Data/DimensionReduction.html.


Funding

Not applicable.

Author information


Corresponding author

Correspondence to Hui Zhang.

Ethics declarations

Conflict of interest

We declare that we have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

A. Paul and P. Yan are co-first authors.


About this article


Cite this article

Paul, A.N., Yan, P., Yang, Y. et al. Non-iterative online sequential learning strategy for autoencoder and classifier. Neural Comput & Applic 33, 16345–16361 (2021). https://doi.org/10.1007/s00521-021-06233-x



  • DOI: https://doi.org/10.1007/s00521-021-06233-x
