Automatic generation of C++ code for neural network simulation

  • Stephan Dreiseitl
  • Dongming Wang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 686)


Coding neural network simulators by hand is often a tedious and error-prone task. In this paper, we seek to remedy this situation by presenting a code generator that produces efficient C++ simulation code for a wide variety of backpropagation networks. We define a high-level, Maple-like language for specifying such networks. This language is compiled into C++ code segments that are then linked against pre-existing generic backpropagation code to form an executable simulator. Our generator allows the specification of arbitrary network topologies (with the restriction of full connections between layers) and weight-change formulae, while the activation rule and error-propagation rule remain fixed. With this tool, future research on learning rules for backpropagation networks can be made more efficient by eliminating routine work and producing code that is guaranteed to be error-free.


Keywords: Learning Rule, Symbolic Computation, Computer Algebra System, Layer Variable, Target Code





Copyright information

© Springer-Verlag Berlin Heidelberg 1993

Authors and Affiliations

  • Stephan Dreiseitl¹
  • Dongming Wang¹
  1. Research Institute for Symbolic Computation, Johannes Kepler University, Linz, Austria
