Inversion in time

  • Sebastian Thrun
  • Alexander Linden
Part II: Theory, Algorithms
Part of the Lecture Notes in Computer Science book series (LNCS, volume 412)


Inversion of multilayer synchronous networks is a method that tries to answer questions like “What kind of input will give a desired output?” or “Is it possible to get a desired output under special input/output constraints?”.

We will describe two methods of inverting a connectionist network. First, we extend inversion via backpropagation (Linden/Kindermann [4], Williams [11]) to recurrent (Elman [1], Jordan [3], Mozer [5], Williams/Zipser [10]) and time-delayed (Waibel et al. [9]) networks.
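To illustrate inversion via backpropagation, the sketch below performs gradient descent in input space while keeping the weights frozen. The network (a hand-wired 2-2-1 XOR net), the learning rate, and the step count are hypothetical choices for this sketch, not taken from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical hand-wired 2-2-1 XOR network (hidden units act as OR and NAND).
W1 = np.array([[ 6.0,  6.0],
               [-4.0, -4.0]])
b1 = np.array([-3.0, 6.0])
W2 = np.array([8.0, 8.0])
b2 = -12.0

def forward(x):
    h = sigmoid(W1 @ x + b1)
    return sigmoid(W2 @ h + b2)

def invert(target, x0, lr=0.1, steps=1000):
    """Search for an input that yields `target` by gradient descent
    on the input alone; the trained weights stay fixed."""
    x = x0.copy()
    for _ in range(steps):
        h = sigmoid(W1 @ x + b1)           # forward pass
        y = sigmoid(W2 @ h + b2)
        dy = (y - target) * y * (1.0 - y)  # backprop of E = (y - t)^2 / 2
        dh = dy * W2 * h * (1.0 - h)       # ... through the hidden layer
        x -= lr * (W1.T @ dh)              # ... down to the input itself
    return x

x = invert(0.9, np.array([0.5, 0.5]))
print(forward(x))  # close to the target 0.9
```

The same update rule carries over to recurrent and time-delayed networks by backpropagating through the unrolled time steps instead of a single forward pass.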

Second, we introduce a new inversion method for proving the non-existence of an input combination under special constraints, e.g. in a subspace of the input space. This method works by iteratively excluding invalid activation values. It can be a helpful way to judge the properties of a trained network.
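A minimal sketch of the exclusion idea, again on a hypothetical hand-wired XOR network (the interval-arithmetic formulation below is our own illustration, not the paper's exact algorithm): propagate an input box forward as interval bounds; if the resulting output interval misses the desired output range, every input in that box is excluded at once:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical hand-wired 2-2-1 XOR network.
W1 = np.array([[ 6.0,  6.0],
               [-4.0, -4.0]])
b1 = np.array([-3.0, 6.0])
W2 = np.array([[8.0, 8.0]])
b2 = np.array([-12.0])

def affine_bounds(W, b, lo, hi):
    """Elementwise bounds of W @ x + b over the box lo <= x <= hi."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def output_bounds(lo, hi):
    l, h = affine_bounds(W1, b1, lo, hi)
    l, h = sigmoid(l), sigmoid(h)   # sigmoid is monotone, so bounds carry over
    l, h = affine_bounds(W2, b2, l, h)
    return sigmoid(l)[0], sigmoid(h)[0]

def excluded(lo, hi, t_lo, t_hi):
    """True when no input in the box can reach an output in [t_lo, t_hi]."""
    o_lo, o_hi = output_bounds(lo, hi)
    return o_hi < t_lo or o_lo > t_hi

# Near (1, 1) the XOR output is provably low, so output >= 0.8 is excluded:
print(excluded(np.array([0.9, 0.9]), np.array([1.0, 1.0]), 0.8, 1.0))  # True
```

Subdividing the input subspace into such boxes and excluding them one by one yields a non-existence proof for the whole region once every box is ruled out.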

We conclude with simulation results for three different tasks: XOR, Morse signal decoding, and handwritten digit recognition.


Keywords: connectionist systems, backpropagation, inversion, recurrent neural networks, digit recognition



References

  [1] J. L. Elman, Finding Structure in Time. CRL Technical Report 8801, Center for Research in Language, University of California, San Diego, 1988
  [2] G. E. Hinton, Connectionist Learning Procedures. Technical Report CMU-CS-87-115, Pittsburgh, 1987
  [3] M. I. Jordan, Serial Order: A Parallel Distributed Processing Approach. ICS Report 8604, Institute for Cognitive Science, University of California, 1986
  [4] A. Linden, J. Kindermann, Inversion of Multilayer Nets. Proceedings of the First International Joint Conference on Neural Networks, Washington, 1989
  [5] M. C. Mozer, A Focused Back-Propagation Algorithm for Temporal Pattern Recognition. Technical Report CRG-TR-88-3, 1988
  [6] B. A. Pearlmutter, Learning State Space Trajectories in Recurrent Neural Networks. Technical Report CMU-CS-88-191, 1988
  [7] F. J. Pineda, Generalization of backpropagation to recurrent neural networks. Physical Review Letters, 59(19):2229–2232, 1987
  [8] D. E. Rumelhart, J. L. McClelland, Parallel Distributed Processing. The MIT Press, 1986
  [9] A. Waibel, H. Sawai, K. Shikano, Modularity and Scaling in Large Phonemic Networks. Technical Report TR-I-0034, ATR Interpreting Telephony Research Laboratories, 1988
  [10] R. J. Williams, D. Zipser, A Learning Algorithm for Continually Running Fully Recurrent Networks. ICS Report 8805, 1988
  [11] R. J. Williams, Inverting a Connectionist Network Mapping by Backpropagation of Error. Proceedings of the 8th Annual Conference of the Cognitive Science Society, Lawrence Erlbaum, 1986

Copyright information

© Springer-Verlag Berlin Heidelberg 1990

Authors and Affiliations

  • Sebastian Thrun (1)
  • Alexander Linden (1)

  1. Gesellschaft für Mathematik und Datenverarbeitung mbH, St. Augustin, West Germany
