Voltage-Mode Neural Network for the Solution of Linear Equations

Part of the Studies in Computational Intelligence book series (SCI, volume 508)


This chapter starts with a discussion of the applicability of the standard Hopfield Neural Network (HNN) to the task of solving linear equations. It is demonstrated that the HNN is not suitable for such a task. Thereafter, a detailed study of the application of the Non-linear Synapse Neural Network (NOSYNN) to the task of solving systems of simultaneous linear equations is presented. The number of decision variables in the set of linear equations to be solved governs the number of neurons. For an n-variable system of equations, n neurons connected through an interconnected non-linear feedback structure comprising n comparators are needed. By virtue of the non-linear feedback, a new energy function involving transcendental terms is obtained. This transcendental energy function is fundamentally different from the standard quadratic form associated with the Hopfield network and its variants. Along with the analysis of the NOSYNN-based linear equation solver circuit and a proof of its energy function, it is also shown that the stable state of the network corresponds exactly to the solution of the given system of linear equations. Proper working of the network is ascertained by PSPICE simulations as well as actual hardware implementations on a breadboard using standard laboratory components.
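The dynamics described above can be sketched numerically. In the following minimal sketch (all names and parameter values are illustrative assumptions, not the chapter's circuit), each comparator is modeled as a high-gain tanh acting on one equation's residual, and the neuron states descend the resulting transcendental energy E(x) = (1/β) Σⱼ log cosh(β(Aⱼ·x − bⱼ)), whose gradient is Aᵀ tanh(β(Ax − b)); the stable state is the solution of Ax = b:

```python
import numpy as np

def solve_nosynn_sketch(A, b, beta=50.0, dt=1e-3, steps=20000):
    """Toy NOSYNN-style solver for A x = b (illustrative, not the chapter's circuit).

    Comparators are modeled as tanh(beta * residual); the state follows
    gradient descent on the transcendental energy
    E(x) = (1/beta) * sum(log(cosh(beta * (A x - b)))).
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.zeros(A.shape[1])                    # neuron outputs (decision variables)
    for _ in range(steps):
        r = A @ x - b                           # comparator inputs: equation residuals
        x -= dt * (A.T @ np.tanh(beta * r))     # nonlinear feedback drives r toward 0
    return x

A = [[2.0, 1.0],
     [1.0, 3.0]]
b = [5.0, 10.0]
print(solve_nosynn_sketch(A, b))   # close to the exact solution [1, 3]
```

Because log cosh of an affine function is convex, the Euler-integrated descent settles at the point where every residual (and hence every comparator input) is zero, mirroring the claim that the network's stable state coincides with the solution of the linear system.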


Keywords: Linear equation · Energy function · Operational amplifier · Convergence time · Hopfield neural network



Copyright information

© Springer India 2014

Authors and Affiliations

  1. Department of Electronics Engineering, Aligarh Muslim University, Aligarh, India
