
Implementation of a Gate Neural Network Based on Combinatorial Logic Elements

  • Conference paper
  • In: Advances in Neural Computation, Machine Learning, and Cognitive Research (NEUROINFORMATICS 2017)

Part of the book series: Studies in Computational Intelligence (SCI, volume 736)


Abstract

Mathematical models based on continuous mathematics currently dominate the construction of modern digital devices, while the discrete basis receives comparatively little attention. However, when building efficient computing devices, the degree of compatibility between the mathematical apparatus and the computing platform used to implement it cannot be ignored. In the field of artificial intelligence, this problem becomes pressing in the development of specialized computers based on the neural network paradigm. In this paper, the drawbacks of existing approaches to constructing a neural network basis are analyzed. A new method for constructing a neural-like architecture based on discrete trainable structures is proposed to improve the compatibility of artificial neural network models with the digital basis of programmable logic devices and general-purpose processors. A model of a gate neural network built on the mathematical apparatus of Boolean algebra is developed. Unlike formal neural network models, the proposed network operates with the concepts of discrete mathematics. Formal representations of the gate network are derived, and a learning algorithm is proposed.
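
To give a sense of what a discrete trainable structure in a Boolean basis can look like, the following is a minimal Python sketch. It is an illustration only, not the paper's gate model or learning algorithm: it assumes a single two-input gate "neuron" represented by its 4-bit truth table and trained by exhaustive enumeration of all 16 two-input Boolean functions, choices made here purely for clarity.

    # Illustrative sketch only (not the paper's method): a trainable
    # two-input Boolean "gate neuron" stored as a 4-entry truth table.
    from itertools import product


    def gate_output(table, a, b):
        """Evaluate a 2-input gate given its truth table (tuple of 4 bits)."""
        return table[(a << 1) | b]


    def train_gate(samples):
        """Exhaustively search all 16 two-input Boolean functions and return
        the truth table with the fewest misclassified samples.

        samples: iterable of ((a, b), target) pairs with binary values."""
        best_table, best_errors = None, None
        for table in product((0, 1), repeat=4):  # all 16 possible gates
            errors = sum(gate_output(table, a, b) != t for (a, b), t in samples)
            if best_errors is None or errors < best_errors:
                best_table, best_errors = table, errors
        return best_table, best_errors


    if __name__ == "__main__":
        # Teach the gate the XOR function from its complete truth table.
        xor_samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
        table, errors = train_gate(xor_samples)
        print("learned truth table:", table, "errors:", errors)

Enumeration is tractable here only because a two-input gate has 16 possible functions; it stands in for whatever discrete learning rule a real gate network would use, and it maps directly onto the lookup-table resources of programmable logic.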



Author information

Corresponding author: Taras Mikhailyuk



Copyright information

© 2018 Springer International Publishing AG

About this paper

Cite this paper

Mikhailyuk, T., Zhernakov, S. (2018). Implementation of a Gate Neural Network Based on Combinatorial Logic Elements. In: Kryzhanovsky, B., Dunin-Barkowski, W., Redko, V. (eds) Advances in Neural Computation, Machine Learning, and Cognitive Research. NEUROINFORMATICS 2017. Studies in Computational Intelligence, vol 736. Springer, Cham. https://doi.org/10.1007/978-3-319-66604-4_4


  • DOI: https://doi.org/10.1007/978-3-319-66604-4_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-66603-7

  • Online ISBN: 978-3-319-66604-4

  • eBook Packages: Engineering, Engineering (R0)
