
H-BLS: a hierarchical broad learning system with deep and sparse feature learning


A Correction to this article was published on 28 October 2023


Abstract

The broad learning system (BLS) is an emerging machine learning algorithm with high efficiency and good approximation capability. It has been shown that BLS can learn hundreds of times faster than traditional deep learning algorithms while providing comparable or even better generalization performance. Owing to its efficiency and powerful learning ability, BLS is attracting increasing attention from the machine learning community and can be considered an alternative to deep learning in some situations. However, because of its shallow structure, the feature learning of BLS is insufficient, which may limit its learning performance. To address this issue, this paper proposes a novel hierarchical broad learning system (H-BLS) with deep and sparse feature learning. Unlike the original BLS, which performs feature learning with a single-layer function mapping, H-BLS adopts a hierarchical feature learning framework with a multi-layer, multi-group structure to extract high-level, rich feature information from the original input, thereby improving the feature representation capability of the model. Meanwhile, in the hierarchical feature learning process of H-BLS, a new l1-constrained sparse autoencoder is embedded in each layer of the framework for feature reconstruction, eliminating redundancy in the input and generating sparser, more compact feature representations, which further enhances learning performance. The learning ability of the proposed H-BLS is first evaluated on ten commonly used regression data sets, and the experimental results show that H-BLS outperforms several representative learning algorithms, including SVM, LSSVM, ELM, BLS, and two recently proposed BLS variants. Moreover, H-BLS also shows advantages over state-of-the-art methods in classification accuracy and training time on image classification problems.
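The paper's exact architecture and update rules are given in the full text. As a rough orientation only, below is a minimal sketch of the general recipe the abstract describes: groups of randomly mapped feature nodes refined by an l1-constrained sparse autoencoder (solved here with ISTA, a standard proximal-gradient method, as one plausible choice), stacked layer by layer, followed by enhancement nodes and the usual BLS ridge-regression readout. All function names, dimensions, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(x, t):
    """Proximal operator of the l1 norm (elementwise soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def l1_sparse_autoencoder(Z, X, lam=1e-3, n_iter=50):
    """ISTA sketch for  min_W ||Z @ W - X||_F^2 + lam * ||W||_1.
    Following the standard BLS recipe, W.T is reused as a refined sparse
    feature mapping of the input X."""
    step = 1.0 / (np.linalg.norm(Z, 2) ** 2 + 1e-12)  # 1 / Lipschitz constant
    W = np.zeros((Z.shape[1], X.shape[1]))
    for _ in range(n_iter):
        W = soft_threshold(W - step * (Z.T @ (Z @ W - X)), step * lam)
    return W

def feature_layer(X, n_nodes=20, n_groups=3, lam=1e-3):
    """One hierarchical layer: several groups of random feature nodes, each
    refined by the l1 sparse autoencoder and concatenated into the output."""
    groups = []
    for _ in range(n_groups):
        Wr = rng.standard_normal((X.shape[1], n_nodes)) * 0.1
        Z = X @ Wr                              # random linear mapping
        Ws = l1_sparse_autoencoder(Z, X, lam)   # sparse reconstruction weights
        groups.append(np.tanh(X @ Ws.T))        # sparser, more compact features
    return np.hstack(groups)

def hbls_train(X, Y, n_layers=2, n_enh=100, C=1e-2):
    """Stacked feature layers -> enhancement nodes -> closed-form ridge readout."""
    Z, feats = X, []
    for _ in range(n_layers):                   # deep, layer-by-layer learning
        Z = feature_layer(Z)
        feats.append(Z)
    F = np.hstack(feats)
    We = rng.standard_normal((F.shape[1], n_enh)) * 0.1
    A = np.hstack([F, np.tanh(F @ We)])         # feature + enhancement nodes
    # ridge-regularized pseudoinverse: the usual one-shot BLS output solution
    return np.linalg.solve(A.T @ A + C * np.eye(A.shape[1]), A.T @ Y)
```

Note that this sketch returns only the output weights; a complete implementation would also retain the per-layer sparse mappings and the enhancement weights so that test data can be transformed through the same pipeline.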



Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant No. 61603326) and the research fund of the Jiangsu Provincial Key Constructive Laboratory for Big Data of Psychology and Cognitive Science (Grant Nos. 72591962004G and 72591862007G).

Author information


Corresponding author

Correspondence to Wei Guo.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The original online version of this article was revised: there were mistakes in Tables 8 and 10 and in Algorithm 1.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Guo, W., Chen, S. & Yuan, X. H-BLS: a hierarchical broad learning system with deep and sparse feature learning. Appl Intell 53, 153–168 (2023). https://doi.org/10.1007/s10489-022-03498-0
