
Ensemble of Efficient Minimal Learning Machines for Classification and Regression

Neural Processing Letters

Abstract

Minimal Learning Machine (MLM) is a recently proposed supervised learning algorithm with performance comparable to most state-of-the-art machine learning methods. In this work, we propose ensemble methods for classification and regression using MLMs. The goal of ensemble strategies is to produce models that are more robust and accurate than a single classifier or regression model. Despite its successful application, the MLM requires solving a computationally intensive optimization problem as part of its test procedure (out-of-sample estimation). This cost becomes even more noticeable in ensemble learning, where multiple models are used. Aiming to provide fast alternatives to the standard MLM, we also propose the Nearest Neighbor Minimal Learning Machine and the Cubic Equation Minimal Learning Machine to handle classification and single-output regression problems, respectively. The experimental assessment conducted on real-world datasets shows that ensembles of fast MLMs perform comparably to, or better than, reference machine learning algorithms.
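To make the mechanics concrete, here is a minimal Python sketch (assuming NumPy and SciPy) of the MLM distance-regression step and the two fast out-of-sample estimators described above. All function names, the random selection of reference points, and the root-selection rule are illustrative assumptions, not the authors' published implementation.

```python
# Minimal sketch of MLM training plus the two fast test-time estimators.
# Assumes NumPy/SciPy; details are illustrative, not the paper's exact code.
import numpy as np
from scipy.spatial.distance import cdist

def mlm_fit(X, Y, n_ref, seed=None):
    """Distance-regression step of an MLM: solve D_x B ~= D_y in the
    least-squares sense for n_ref randomly chosen reference points."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=n_ref, replace=False)
    R, T = X[idx], Y[idx]                  # reference inputs / outputs
    Dx = cdist(X, R)                       # N x K input-space distances
    Dy = cdist(Y, T)                       # N x K output-space distances
    B, *_ = np.linalg.lstsq(Dx, Dy, rcond=None)
    return R, T, B

def nn_mlm_predict(x, R, T, B):
    """NN-MLM (classification): return the output of the reference point
    with the smallest estimated output-space distance, so no numerical
    optimization is needed at test time."""
    delta_hat = cdist(x[None, :], R) @ B   # 1 x K estimated distances
    return T[np.argmin(delta_hat)]

def cubic_mlm_predict(x, R, T, B):
    """Cubic MLM (single-output regression): the stationarity condition
    of the multilateration cost J(y) = sum_k ((y - t_k)^2 - d_k^2)^2 is
    a cubic in y, so the estimate is a closed-form real root of J'(y)."""
    t = T.ravel()
    d2 = (cdist(x[None, :], R) @ B).ravel() ** 2    # estimated d_k^2
    # J'(y)/4 = K y^3 - 3(sum t) y^2 + (3 sum t^2 - sum d^2) y
    #           + (sum d^2 t - sum t^3) = 0
    coeffs = [len(t),
              -3.0 * t.sum(),
              3.0 * (t ** 2).sum() - d2.sum(),
              (d2 * t).sum() - (t ** 3).sum()]
    roots = np.roots(coeffs)
    real = roots.real[np.abs(roots.imag) < 1e-8]
    if real.size == 0:                     # numerical safeguard
        real = roots.real
    cost = [(((y - t) ** 2 - d2) ** 2).sum() for y in real]
    return real[int(np.argmin(cost))]
```

A bagging-style ensemble in the spirit of the paper would fit several such models, for instance on bootstrap resamples or with different random reference sets, and combine their predictions by majority vote (classification) or averaging (regression); the exact combination rules used in the paper are not spelled out in this abstract.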



Acknowledgements

The authors would like to thank the Brazilian National Council for Scientific and Technological Development (CNPq) for the financial support (Grant 456837/2014-0).

Author information


Corresponding author

Correspondence to João P. P. Gomes.


Cite this article

Mesquita, D.P.P., Gomes, J.P.P. & Souza Junior, A.H. Ensemble of Efficient Minimal Learning Machines for Classification and Regression. Neural Process Lett 46, 751–766 (2017). https://doi.org/10.1007/s11063-017-9587-5
