
Tensor extreme learning design via generalized Moore–Penrose inverse and triangular type-2 fuzzy sets

  • Original Article
  • Published in Neural Computing and Applications

Abstract

A tensor-based extreme learning machine, referred to as the tensor-based type-2 extreme learning machine (TT2-ELM), is proposed. In contrast to work on the ELM, the regularized ELM (RELM), the weighted regularized ELM (WRELM) and the least squares support vector machine (LS-SVM), which are among the most widely used learning algorithms for regression problems, TT2-ELM adopts a tensor structure to construct the ELM for type-2 fuzzy sets, and the Moore–Penrose inverse of a tensor is used to obtain the tensor regression result. No further type-reduction method is needed to obtain the corresponding type-1 fuzzy sets, so the type-2 fuzzy structure can be seamlessly incorporated into the ELM scheme. Experiments are carried out on two Sinc functions, a nonlinear system identification problem and four real-world regression problems. The results show that TT2-ELM achieves generalization performance competitive with the ELM, RELM, WRELM and LS-SVM on small- and moderate-scale data sets.
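For context, the standard ELM that TT2-ELM extends fixes the hidden-layer weights and biases at random and solves only the output weights in closed form through the Moore–Penrose pseudo-inverse. The following is a minimal MATLAB sketch of that type-1 baseline, not the authors' TT2-ELM implementation; the function name elm_train, the sigmoid activation and all variable names are illustrative assumptions.

```matlab
% Minimal single-hidden-layer ELM for regression (illustrative sketch,
% not the paper's code). X: N x d training inputs, T: N x 1 targets,
% L: number of hidden nodes.
function [beta, W, b] = elm_train(X, T, L)
    d = size(X, 2);
    W = 2 * rand(d, L) - 1;            % random input weights in [-1, 1]
    b = rand(1, L);                    % random hidden biases
    H = 1 ./ (1 + exp(-(X * W + b)));  % sigmoid hidden-layer output matrix
                                       % (implicit expansion of b, R2016b+)
    beta = pinv(H) * T;                % output weights via Moore-Penrose inverse
end
```

Prediction reuses the frozen hidden layer: Tnew = 1 ./ (1 + exp(-(Xnew * W + b))) * beta. TT2-ELM replaces the matrix H and pinv(H) with a hidden-layer tensor built from triangular type-2 membership grades and its generalized Moore–Penrose inverse, so the regression solution keeps the same one-shot least-squares form.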


Notes

  1. TPROD efficiently computes any type of tensor product between two multi-dimensional arrays; it can be downloaded from MathWorks's File Exchange. A minimal sketch of one such contraction, the Einstein product, follows these notes.

  2. TT2-ELM's MATLAB source code can be downloaded directly from https://github.com/zggl/TriT2ELM.
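To make the tensor operation in Note 1 concrete, the sketch below computes the Einstein product of two even-order tensors by unfolding them into matrices, multiplying, and folding back. This is an illustrative reimplementation under assumed fourth-order shapes, not TPROD's actual interface.

```matlab
% Einstein product C = A *_2 B for fourth-order tensors (illustrative).
% A: I1 x I2 x K1 x K2, B: K1 x K2 x J1 x J2, C: I1 x I2 x J1 x J2, with
% C(i1,i2,j1,j2) = sum over k1,k2 of A(i1,i2,k1,k2) * B(k1,k2,j1,j2).
function C = einstein_product(A, B)
    [I1, I2, K1, K2] = size(A);
    [~, ~, J1, J2] = size(B);
    Am = reshape(A, I1 * I2, K1 * K2);    % unfold A (column-major)
    Bm = reshape(B, K1 * K2, J1 * J2);    % unfold B
    C = reshape(Am * Bm, I1, I2, J1, J2); % contract over (k1, k2), fold back
end
```

Because this unfolding turns the Einstein product into ordinary matrix multiplication, the Moore–Penrose inverse of A with respect to the Einstein product can be obtained as reshape(pinv(Am), K1, K2, I1, I2); this is the identification that reduces the tensor regression in TT2-ELM to a familiar least-squares solve.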


Acknowledgements

This work is supported by the National Natural Science Foundation of China (Nos. 11771111 and 61603126) and the Natural Science Foundation of Heilongjiang Province (Grant No. QC2016094).

Author information

Corresponding author

Correspondence to Sharina Huang.

Ethics declarations

Conflict of interest

All authors declare that they have no conflict of interest.


About this article


Cite this article

Huang, S., Zhao, G. & Chen, M. Tensor extreme learning design via generalized Moore–Penrose inverse and triangular type-2 fuzzy sets. Neural Comput & Applic 31, 5641–5651 (2019). https://doi.org/10.1007/s00521-018-3385-5
