
Multilayer Perceptron: NSGA II for a New Multi-objective Learning Method for Training and Model Complexity

  • Conference paper
  • First Online:
Lecture Notes in Real-Time Intelligent Systems (RTIS 2017)

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 756)


Abstract

The multi-layer perceptron (MLP) has proved its efficiency in several fields, such as pattern and voice recognition. Unfortunately, classical MLP training suffers from poor generalization. In this respect, we propose a new constrained multi-objective training model that satisfies two objectives. The first is the learning objective: minimizing the perceptron error. The second is the complexity objective: optimizing the number of weights and neurons. The proposed model provides a balance between multi-layer perceptron learning and model complexity in order to achieve good generalization. The model is solved using an evolutionary approach, the Non-Dominated Sorting Genetic Algorithm (NSGA II). This approach yields a good representation of the Pareto set for the MLP network, from which a model with improved generalization performance is selected.

K. Senhaji—Ph.D. student, Laboratory of Modeling and Scientific Computing, USMBA, Fez, Morocco.
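The bi-objective formulation described in the abstract can be made concrete with a short sketch. The following Python snippet is a minimal illustration, not the authors' implementation: it assumes a single-hidden-layer MLP with a binary neuron mask, mean-squared error as the learning objective, a simple active-parameter count as the complexity objective, and the Pareto-dominance test that NSGA-II uses to rank candidates. All function and variable names are illustrative.

```python
# Minimal sketch (illustrative only) of the bi-objective MLP training idea:
# objective 1 = training error, objective 2 = structural complexity,
# candidates compared by Pareto dominance as in NSGA-II's first rank.
import numpy as np

def mlp_forward(X, W1, b1, W2, b2, mask):
    # Hidden layer with tanh activation; `mask` (0/1 per neuron) switches
    # hidden units on or off, which is how complexity is controlled here.
    H = np.tanh(X @ W1 + b1) * mask
    return H @ W2 + b2

def objectives(candidate, X, y):
    # Objective 1: mean squared training error.
    # Objective 2: number of active parameters (neurons and their weights).
    W1, b1, W2, b2, mask = candidate
    err = float(np.mean((mlp_forward(X, W1, b1, W2, b2, mask) - y) ** 2))
    active = int(mask.sum())
    n_params = active * (W1.shape[0] + W2.shape[1]) + active + W2.shape[1]
    return err, n_params

def dominates(fa, fb):
    # Pareto dominance: no worse in every objective, strictly better in one.
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 3))
    y = rng.normal(size=(50, 1))

    def random_candidate(hidden=8):
        return (rng.normal(size=(3, hidden)), np.zeros(hidden),
                rng.normal(size=(hidden, 1)), np.zeros(1),
                (rng.random(hidden) > 0.3).astype(float))

    population = [random_candidate() for _ in range(20)]
    scores = [objectives(c, X, y) for c in population]
    # First non-dominated rank, i.e. the Pareto front NSGA-II would keep.
    front = [s for s in scores
             if not any(dominates(t, s) for t in scores if t is not s)]
    print("Pareto-front (error, n_params) pairs:", front)
```

In the full NSGA-II loop, this dominance ranking, combined with crowding-distance selection, drives successive generations of crossover and mutation over the encoded weights and masks, and the final model is then picked from the resulting Pareto front.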



Author information


Corresponding author

Correspondence to Kaoutar Senhaji.


Copyright information

© 2019 Springer International Publishing AG, part of Springer Nature

About this paper


Cite this paper

Senhaji, K., Ramchoun, H., Ettaouil, M. (2019). Multilayer Perceptron: NSGA II for a New Multi-objective Learning Method for Training and Model Complexity. In: Mizera-Pietraszko, J., Pichappan, P., Mohamed, L. (eds) Lecture Notes in Real-Time Intelligent Systems. RTIS 2017. Advances in Intelligent Systems and Computing, vol 756. Springer, Cham. https://doi.org/10.1007/978-3-319-91337-7_15

