A Novel Algorithm to Train Multilayer Hardlimit Neural Networks Based on a Mixed Integer Linear Program Model

  • Conference paper
Advances in Computational Intelligence (IWANN 2015)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 9095)


Abstract

In a previous work we showed that hardlimit multilayer neural networks have more computational power than sigmoidal multilayer neural networks [1]. In 1969 Minsky and Papert showed the limitations of the single perceptron, which can only solve linearly separable classification problems [2], and since at that time no algorithm was known for finding the weights of a multilayer hardlimit perceptron, research on neural networks stagnated until the eighties and the advent of the Backpropagation algorithm. Nevertheless, since the sixties several algorithms had been proposed for implementing logical functions with threshold elements, or hardlimit neurons, which could have been adapted to classification problems with multilayer hardlimit perceptrons; the stagnation of neural network research might thus have been avoided. Although the problem of training a hardlimit neural network is NP-complete, our algorithm, based on mathematical programming, namely a mixed integer linear programming (MILP) model, takes only a few seconds to train the two-input XOR function and a simple logical function of three variables with two minterms. Since any linearly separable logical function can be implemented by a perceptron with integer weights, by varying the weights between -1 and 1 we found all 10 possible solutions for the implementation of the two-input XOR function, and all 14 and 18 possible solutions, respectively, for the implementation of two logical functions of three variables, using a two-layer architecture with two neurons in the first layer. We describe our MILP model and explain why it consumes a large amount of computational resources: even a small hardlimit neural network translates into a MILP model larger than 1 GB, requiring a more powerful computer than a common 32-bit PC. Reducing these computational requirements is the main objective of our near-future work on the MILP model; we will also try a nonlinear version of the algorithm, based on a MINLP model, which should consume less memory.
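The paper's exact MILP formulation is given in the full text and was solved with ILOG CPLEX [6]. As a rough illustration of the general idea only, the sketch below encodes the training of a 2-2-1 hardlimit network on the two-input XOR function as a mixed integer linear feasibility problem, with every weight and bias restricted to an integer in {-1, 0, 1}: binary variables stand for the hidden-neuron outputs, big-M constraints linearise the hardlimit activation, and auxiliary variables linearise the product of an output weight with a hidden output. This is an assumed formulation written for the open-source PuLP/CBC toolchain, not the authors' CPLEX model; the variable names, the big-M constant and the margin EPS are illustrative choices.

import pulp

# Training data: two-input XOR.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [0, 1, 1, 0]
H = 2            # number of hidden hardlimit neurons
M, EPS = 4, 0.5  # big-M bound and margin enforcing the strict "< 0" branch

prob = pulp.LpProblem("xor_hardlimit_milp", pulp.LpMinimize)

# Integer weights and biases in {-1, 0, 1} (first layer w, b; output layer v, c).
w = [[pulp.LpVariable(f"w_{j}_{i}", -1, 1, cat="Integer") for i in range(2)]
     for j in range(H)]
b = [pulp.LpVariable(f"b_{j}", -1, 1, cat="Integer") for j in range(H)]
v = [pulp.LpVariable(f"v_{j}", -1, 1, cat="Integer") for j in range(H)]
c = pulp.LpVariable("c", -1, 1, cat="Integer")

prob += pulp.lpSum(v)  # arbitrary objective: any feasible point is a trained network

for p, (x, y) in enumerate(zip(X, Y)):
    h = [pulp.LpVariable(f"h_{p}_{j}", cat="Binary") for j in range(H)]  # hidden outputs
    z = [pulp.LpVariable(f"z_{p}_{j}", -1, 1) for j in range(H)]         # z_j = v_j * h_j
    for j in range(H):
        s = pulp.lpSum(w[j][i] * x[i] for i in range(2)) + b[j]
        # Hardlimit activation: h_j = 1  <=>  s >= 0 (big-M linearisation).
        prob += s >= -M * (1 - h[j])
        prob += s <= M * h[j] - EPS * (1 - h[j])
        # Linearise the bilinear term z_j = v_j * h_j (v_j integer, h_j binary).
        prob += z[j] <= h[j]
        prob += z[j] >= -h[j]
        prob += z[j] <= v[j] + (1 - h[j])
        prob += z[j] >= v[j] - (1 - h[j])
    out = pulp.lpSum(z) + c  # net input of the output hardlimit neuron
    # Force the network output to match the target for this pattern.
    if y == 1:
        prob += out >= 0
    else:
        prob += out <= -EPS

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status])
print("hidden weights:", [[round(pulp.value(w[j][i])) for i in range(2)] for j in range(H)])
print("hidden biases: ", [round(pulp.value(b[j])) for j in range(H)])
print("output weights:", [round(pulp.value(v[j])) for j in range(H)],
      " output bias:", round(pulp.value(c)))

One valid solution within these bounds is, for example, hidden weights (1, -1) and (-1, 1) with zero biases and output weights (-1, -1) with bias 1. The sketch returns a single feasible network; enumerating every solution, as the paper does to count the 10 XOR implementations, would additionally require excluding each found weight assignment with a no-good cut and re-solving.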


References

  1. Barahona da Fonseca, J.: Are Rosenblatt multilayer perceptrons more powerfull than sigmoidal multilayer perceptrons? From a counter example to a general result. In: Proceedings of ESANN 2013, Ciaco, Belgium, pp. 345–350 (2013)

  2. Minsky, M., Papert, S.: Perceptrons. MIT Press, Cambridge, MA (1969)

  3. Gonzalez, R., Lawler, E.L.: Two-level threshold minimization. In: Proceedings of the Sixth Annual Symposium on Switching Circuit Theory and Logical Design, 41–4 (1965)

  4. Orponen, P.: Computational Complexity of Neural Networks: A Survey. Technical Report NC-TR-94-010, Department of Computer Science, University of London, England (1994)

  5. Blum, A.L., Rivest, R.L.: Training a 3-node neural network is NP-complete. Lecture Notes in Computer Science, vol. 661 (1993)

  6. ILOG: CPLEX Solver (2005)

Author information

Correspondence to Jose B. da Fonseca.

Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

da Fonseca, J.B. (2015). A Novel Algorithm to Train Multilayer Hardlimit Neural Networks Based on a Mixed Integer Linear Program Model. In: Rojas, I., Joya, G., Catala, A. (eds.) Advances in Computational Intelligence. IWANN 2015. Lecture Notes in Computer Science, vol. 9095. Springer, Cham. https://doi.org/10.1007/978-3-319-19222-2_40

  • DOI: https://doi.org/10.1007/978-3-319-19222-2_40

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-19221-5

  • Online ISBN: 978-3-319-19222-2
