Optimally Selected Minimal Learning Machine

  • Conference paper
  • First Online:
Intelligent Data Engineering and Automated Learning – IDEAL 2018 (IDEAL 2018)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 11314)

Abstract

This paper introduces a new approach to selecting reference points (RPs) for the minimal learning machine (MLM) in classification tasks. A critical issue in the MLM training process is the selection of the RPs from which the distances are computed. In its original formulation, the MLM selects the RPs randomly from the data. We propose a new method, called optimally selected minimal learning machine (OS-MLM), to select the RPs. Our proposal relies on the multiresponse sparse regression (MRSR) ranking method, which sorts the patterns by relevance, and then applies the leave-one-out (LOO) criterion to choose an appropriate number of reference points. In the simulations we carried out, our proposal achieved a smaller number of reference points with accuracy equivalent, or even superior, to the original MLM and its variants.
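
For readers who want a concrete picture of the pipeline in the abstract, the Python sketch below outlines one possible organization of it. It is an illustrative sketch only, not the authors' implementation: the function names (mlm_fit, mlm_predict, rank_candidates, os_mlm_select) are ours, the ranking step uses a crude correlation score as a stand-in for MRSR (which ranks candidates along a multiresponse LARS path), the classification rule is the common nearest-prototype shortcut in distance space, and the leave-one-out error is computed by a naive hold-one-out loop in which the held-out sample is removed from the regression rows but, for simplicity, not from the reference set.

    import numpy as np
    from scipy.spatial.distance import cdist

    def mlm_fit(X, Y, R, T):
        """Regress output-space distances on input-space distances (MLM core)."""
        Dx = cdist(X, R)                        # distances from inputs to input RPs
        Dy = cdist(Y, T)                        # distances from outputs to output RPs
        B, *_ = np.linalg.lstsq(Dx, Dy, rcond=None)
        return B

    def mlm_predict(X_new, R, T, B, n_classes):
        """Classify by the one-hot prototype whose distances to the output RPs
        best match the estimated output-space distances."""
        Dy_hat = cdist(X_new, R) @ B            # estimated distances to output RPs
        D_proto = cdist(np.eye(n_classes), T)   # exact prototype-to-RP distances
        return np.argmin(cdist(Dy_hat, D_proto), axis=1)

    def rank_candidates(X, Y):
        """Stand-in ranking: score each candidate RP by how strongly its input
        distance column co-varies with the output distances."""
        Dx, Dy = cdist(X, X), cdist(Y, Y)
        return np.argsort(-(Dx.T @ Dy).sum(axis=1))

    def os_mlm_select(X, Y, max_k=None):
        """Grow the ranked RP set and keep the prefix with the lowest LOO error."""
        N, S = Y.shape
        ranking = rank_candidates(X, Y)
        max_k = max_k or N
        best_k, best_err = 1, np.inf
        for k in range(1, max_k + 1):
            R, T = X[ranking[:k]], Y[ranking[:k]]
            mistakes = 0
            for i in range(N):                  # hold each sample out in turn
                keep = np.arange(N) != i
                B = mlm_fit(X[keep], Y[keep], R, T)
                pred = mlm_predict(X[i:i + 1], R, T, B, S)[0]
                mistakes += int(pred != np.argmax(Y[i]))
            err = mistakes / N
            if err < best_err:
                best_k, best_err = k, err
        return ranking[:best_k], best_err

With X of shape (N, d) and Y one-hot coded with shape (N, S), os_mlm_select(X, Y) returns the indices of the selected reference points together with the estimated LOO classification error.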

Supported by Federal Institute of Ceará and Federal University of Ceará.


Notes

  1. An S-level qualitative variable is represented by a vector of S binary variables, or bits, only one of which is on at a time. Thus, the j-th component of an output vector \(\varvec{y}\) is set to 1 if the pattern belongs to class j and 0 otherwise (see the short sketch after these notes).

  2. The use of the term “optimally” follows our inspiration, the optimally pruned extreme learning machine method [16].
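
As a small illustration of the 1-of-S coding in note 1, the snippet below (ours, not from the paper; the helper name one_hot and the 0-based class indices are assumptions) builds the binary output vectors:

    import numpy as np

    def one_hot(labels, n_classes):
        """Encode integer class labels as 1-of-S binary output vectors."""
        Y = np.zeros((len(labels), n_classes))
        Y[np.arange(len(labels)), labels] = 1.0
        return Y

    print(one_hot([0, 2, 1], 3))
    # [[1. 0. 0.]
    #  [0. 0. 1.]
    #  [0. 1. 0.]]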

References

  1. Alcin, O., Sengur, A., Qian, J., Ince, M.: OMP-ELM: orthogonal matching pursuit-based extreme learning machine for regression. J. Intell. Syst. 24(1), 135–143 (2015)

  2. Alencar, A.S.C., et al.: MLM-rank: a ranking algorithm based on the minimal learning machine. In: 2015 Brazilian Conference on Intelligent Systems, BRACIS 2015, Natal, Brazil, 4–7 November 2015, pp. 305–309. IEEE (2015)

  3. Allen, D.M.: The relationship between variable selection and data agumentation and a method for prediction. Technometrics 16(1), 125–127 (1974)

  4. Coelho, D.N., Barreto, G.D.A., Medeiros, C.M.S., Santos, J.D.A.: Performance comparison of classifiers in the detection of short circuit incipient fault in a three-phase induction motor. In: 2014 IEEE Symposium on Computational Intelligence for Engineering Solutions, CIES 2014, Orlando, FL, USA, 9–12 December 2014, pp. 42–48. IEEE (2014)

  5. de Sousa, L.S., Dias, M.L.D., Rocha Neto, A.R.: Máquinas de vetores-suporte de mínimos quadrados esparsas via recozimento simulado [Sparse least squares support vector machines via simulated annealing]. In: Simpósio Brasileiro de Automação Inteligente (SBAI). SBA, Rio Grande do Norte, Brasil, October 2015

  6. Demšar, J.: Statistical comparisons of classifiers over multiple data sets. J. Mach. Learn. Res. 7, 1–30 (2006)

  7. Efron, B., Hastie, T., Johnstone, I., Tibshirani, R.: Least angle regression. Ann. Stat. 32(2), 407–499 (2004)

  8. Florêncio, J.A.V., Dias, M.L.D., da Rocha Neto, A.R., de Souza Júnior, A.H.: A fuzzy C-means-based approach for selecting reference points in minimal learning machines. In: Barreto, G.A., Coelho, R. (eds.) NAFIPS 2018. CCIS, vol. 831, pp. 398–407. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-95312-0_34

  9. Huang, G., Zhu, Q., Siew, C.K.: Extreme learning machine: theory and applications. Neurocomputing 70(1–3), 489–501 (2006)

  10. Lichman, M.: UCI machine learning repository (2013)

  11. Luo, J., Vong, C., Wong, P.: Sparse Bayesian extreme learning machine for multi-classification. IEEE Trans. Neural Netw. Learn. Syst. 25(4), 836–843 (2014)

  12. MacKay, D.J.: Bayesian interpolation. Neural Comput. 4(3), 415–447 (1992)

  13. Marinho, L.B., Almeida, J.S., Souza, J.W.M., de Albuquerque, V.H.C., Filho, P.P.R.: A novel mobile robot localization approach based on topological maps using classification with reject option in omnidirectional images. Expert Syst. Appl. 72, 1–17 (2017)

  14. Marquardt, D.W.: An algorithm for least-squares estimation of nonlinear parameters. J. Soc. Ind. Appl. Math. 11(2), 431–441 (1963)

  15. Mesquita, D.P.P., Gomes, J.P.P., Junior, A.H.S.: Ensemble of efficient minimal learning machines for classification and regression. Neural Process. Lett. 46, 1–16 (2017)

  16. Miche, Y., Sorjamaa, A., Bas, P., Simula, O., Jutten, C., Lendasse, A.: OP-ELM: optimally pruned extreme learning machine. IEEE Trans. Neural Netw. 21(1), 158–162 (2010)

  17. Pati, Y.C., Rezaiifar, R., Krishnaprasad, P.S.: Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition. In: Proceedings of 27th Asilomar Conference on Signals, Systems and Computers, vol. 1, pp. 40–44, November 1993

  18. da Silva Vieira, D.C., da Rocha Neto, A.R., Rodrigues, A.W.D.O.: Sparse least squares support vector regression via multiresponse sparse regression. In: 2016 International Joint Conference on Neural Networks, IJCNN 2016, Vancouver, BC, Canada, 24–29 July 2016, pp. 3218–3225. IEEE (2016)

  19. Similä, T., Tikka, J.: Multiresponse sparse regression with application to multidimensional scaling. In: Duch, W., Kacprzyk, J., Oja, E., Zadrożny, S. (eds.) ICANN 2005. LNCS, vol. 3697, pp. 97–102. Springer, Heidelberg (2005). https://doi.org/10.1007/11550907_16

  20. de Souza Junior, A.H., Corona, F., Barreto, G.D.A., Miché, Y., Lendasse, A.: Minimal learning machine: a novel supervised distance-based approach for regression and classification. Neurocomputing 164, 34–44 (2015)

  21. Suykens, J.A.K., Vandewalle, J.: Least squares support vector machine classifiers. Neural Process. Lett. 9(3), 293–300 (1999)

  22. Tipping, M.E.: Sparse Bayesian learning and the relevance vector machine. J. Mach. Learn. Res. 1, 211–244 (2001)

  23. Valyon, J., Horvath, G.: A sparse least squares support vector machine classifier. In: Proceedings of IEEE International Joint Conference on Neural Networks, vol. 1, pp. 543–548 (2004)

Author information

Corresponding authors

Correspondence to Átilla N. Maia or Madson L. D. Dias.

Copyright information

© 2018 Springer Nature Switzerland AG

About this paper

Cite this paper

Maia, Á.N., Dias, M.L.D., Gomes, J.P.P., da Rocha Neto, A.R. (2018). Optimally Selected Minimal Learning Machine. In: Yin, H., Camacho, D., Novais, P., Tallón-Ballesteros, A. (eds.) Intelligent Data Engineering and Automated Learning – IDEAL 2018. IDEAL 2018. Lecture Notes in Computer Science, vol. 11314. Springer, Cham. https://doi.org/10.1007/978-3-030-03493-1_70

  • DOI: https://doi.org/10.1007/978-3-030-03493-1_70

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-03492-4

  • Online ISBN: 978-3-030-03493-1

  • eBook Packages: Computer Science, Computer Science (R0)
