
Optimizing a Higher Order Neural Network Through Teaching Learning Based Optimization Algorithm

  • Conference paper
  • First Online:
Computational Intelligence in Data Mining—Volume 1

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 410)

Abstract

Higher order neural networks have attracted increasing attention owing to their greater computational capability and better learning and storage capacity than traditional neural networks. In this work, a novel attempt has been made to optimize the performance of a higher order neural network (in particular, the Pi-Sigma neural network) for classification. The recently developed population-based teaching learning based optimization (TLBO) algorithm is used for efficient training of the network. The performance of the model has been benchmarked against several well-recognized optimized models, all tested on five well-known real-world benchmark datasets. The simulation results demonstrate favorable classification accuracy for the proposed model compared with the others. Statistical tests further indicate that the proposed model trains quickly and produces stable, reliable results.
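To make the training scheme concrete, the sketch below shows how a TLBO loop (teacher phase followed by learner phase) could optimize the weight vector of a small Pi-Sigma network for binary classification. It is a minimal illustration rather than the authors' exact procedure: the number of summing units, the sigmoid output, the mean-squared-error fitness, and the population and iteration settings are assumptions chosen for the example.

```python
import numpy as np

def pi_sigma_output(weights, X, n_units):
    """Pi-Sigma network: K linear summing units combined by a product unit,
    then squashed by a sigmoid. `weights` holds K * (n_features + 1) values."""
    n_features = X.shape[1]
    W = weights.reshape(n_units, n_features + 1)
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])      # append bias column
    sums = Xb @ W.T                                     # (n_samples, K) summing-unit outputs
    z = np.clip(np.prod(sums, axis=1), -60.0, 60.0)     # product unit, clipped for stability
    return 1.0 / (1.0 + np.exp(-z))

def fitness(weights, X, y, n_units):
    """Mean squared error between network output and 0/1 class labels."""
    return np.mean((pi_sigma_output(weights, X, n_units) - y) ** 2)

def tlbo_train(X, y, n_units=3, pop_size=30, iters=200, seed=0):
    """Train Pi-Sigma weights with a basic TLBO loop (teacher + learner phases)."""
    rng = np.random.default_rng(seed)
    dim = n_units * (X.shape[1] + 1)
    pop = rng.uniform(-1.0, 1.0, (pop_size, dim))       # population of candidate weight vectors
    fit = np.array([fitness(p, X, y, n_units) for p in pop])
    for _ in range(iters):
        teacher, mean = pop[fit.argmin()], pop.mean(axis=0)
        for i in range(pop_size):
            # Teacher phase: move the learner toward the teacher, away from the class mean.
            tf = rng.integers(1, 3)                      # teaching factor in {1, 2}
            cand = pop[i] + rng.random(dim) * (teacher - tf * mean)
            f = fitness(cand, X, y, n_units)
            if f < fit[i]:
                pop[i], fit[i] = cand, f
            # Learner phase: interact with a random peer and move toward the better of the two.
            j = rng.integers(pop_size)
            while j == i:
                j = rng.integers(pop_size)
            step = pop[i] - pop[j] if fit[i] < fit[j] else pop[j] - pop[i]
            cand = pop[i] + rng.random(dim) * step
            f = fitness(cand, X, y, n_units)
            if f < fit[i]:
                pop[i], fit[i] = cand, f
    best = fit.argmin()
    return pop[best], fit[best]
```

In use, tlbo_train would be called on the training portion of each benchmark dataset, and the returned weight vector would be thresholded at 0.5 on the network output to obtain class labels for the test samples.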



Acknowledgments

This work is supported by the Department of Science and Technology (DST), Ministry of Science and Technology, New Delhi, Govt. of India, under grant No. DST/INSPIRE Fellowship/2013/585.

Author information

Corresponding author

Correspondence to Janmenjoy Nayak.


Copyright information

© 2016 Springer India

About this paper

Cite this paper

Nayak, J., Naik, B., Behera, H.S. (2016). Optimizing a Higher Order Neural Network Through Teaching Learning Based Optimization Algorithm. In: Behera, H., Mohapatra, D. (eds) Computational Intelligence in Data Mining—Volume 1. Advances in Intelligent Systems and Computing, vol 410. Springer, New Delhi. https://doi.org/10.1007/978-81-322-2734-2_7


  • DOI: https://doi.org/10.1007/978-81-322-2734-2_7

  • Published:

  • Publisher Name: Springer, New Delhi

  • Print ISBN: 978-81-322-2732-8

  • Online ISBN: 978-81-322-2734-2

  • eBook Packages: Engineering, Engineering (R0)
