Inversion in time
Inversion of multilayer synchronous networks is a method that tries to answer questions like “What kind of input will produce a desired output?” or “Is it possible to obtain a desired output under special input/output constraints?”.
We describe two methods of inverting a connectionist network. Firstly, we extend inversion via backpropagation (Linden/Kindermann, Williams) to recurrent (Elman, Jordan, Mozer, Williams/Zipser) and time-delayed (Waibel et al.) networks.
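The core idea of inversion via backpropagation is to freeze the trained weights and run gradient descent on the *input* instead, propagating the output error one layer further back onto the input activations. A minimal sketch, assuming a tiny 2-3-1 sigmoid network with hypothetical (randomly chosen, stand-in) weights rather than the trained networks used in the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical, already-trained weights of a small 2-3-1 feed-forward net
# (random stand-ins for illustration; a real run would use trained weights).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 3))  # input -> hidden
W2 = rng.normal(size=(3, 1))  # hidden -> output

def forward(x):
    h = sigmoid(x @ W1)
    return sigmoid(h @ W2), h

def invert(target, x0, lr=0.1, steps=1000):
    """Gradient descent on the input: the weights stay frozen and the
    output error is backpropagated one layer further, onto x itself."""
    x = x0.copy()
    for _ in range(steps):
        y, h = forward(x)
        d_out = (y - target) * y * (1.0 - y)    # dE/dz at the output unit
        d_hid = (d_out @ W2.T) * h * (1.0 - h)  # dE/dz at the hidden layer
        x -= lr * (d_hid @ W1.T)                # dE/dx: update the input
    return x

x0 = np.array([0.5, 0.5])      # starting guess in input space
target = np.array([0.9])       # desired output
x_inv = invert(target, x0)
```

Because only the input is updated, the procedure finds *one* input producing (approximately) the desired output; like any gradient method it can get stuck in local minima, so several starting points are usually tried.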
Secondly, we introduce a new inversion method for proving the non-existence of an input combination under special constraints, e.g. in a subspace of the input space. This method works by iterative exclusion of invalid activation values. It might be a helpful way to judge the properties of a trained network.
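One way to see the exclusion idea is interval arithmetic on a single sigmoid unit: if the inputs are confined to a subspace, every reachable output lies in a computable interval, and any desired output outside that interval provably cannot be produced. The sketch below uses a hypothetical AND-like unit (weights and bounds are illustrative assumptions, not the paper's algorithm, which iterates such exclusions over a whole network):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained weights of a single sigmoid unit (AND-like gate).
w = np.array([4.0, 4.0])
b = -6.0

def output_interval(x_lo, x_hi):
    """Tightest output range when each input x_i is confined to
    [x_lo[i], x_hi[i]]: a positive weight takes the lower bound for
    the pre-activation minimum and the upper bound for the maximum
    (and vice versa for negative weights)."""
    z_lo = b + sum(wi * (lo if wi >= 0 else hi)
                   for wi, lo, hi in zip(w, x_lo, x_hi))
    z_hi = b + sum(wi * (hi if wi >= 0 else lo)
                   for wi, lo, hi in zip(w, x_lo, x_hi))
    return sigmoid(z_lo), sigmoid(z_hi)  # sigmoid is monotone

# Restrict the inputs to the subspace [0, 0.25] x [0, 0.25].
lo, hi = output_interval([0.0, 0.0], [0.25, 0.25])
# Every reachable output lies in [lo, hi]; a desired output >= 0.5
# falls outside it, so no input in this subspace can produce it.
```

Here the bound is exact because each input appears only once; in a multilayer network the same intersection step is repeated layer by layer, excluding activation values that cannot occur.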
We conclude with simulation results for three different tasks: XOR, Morse signal decoding, and handwritten digit recognition.
Keywords: connectionist systems, backpropagation, inversion, recurrent neural networks, digit recognition
- J. L. Elman, Finding Structure in Time. Technical Report CRL 8801, Center for Research in Language, University of California, San Diego, 1988
- G. E. Hinton, Connectionist Learning Procedures. Technical Report CMU-CS-87-115, Pittsburgh, 1987
- M. I. Jordan, Serial Order: A Parallel Distributed Processing Approach. Technical Report ICS 8604, Institute for Cognitive Science, University of California, 1986
- A. Linden, J. Kindermann, Inversion of Multilayer Nets. Proceedings of the First International Joint Conference on Neural Networks, Washington, 1989
- M. C. Mozer, A Focused Back-Propagation Algorithm for Temporal Pattern Recognition. Technical Report CRG-TR-88-3, 1988
- B. A. Pearlmutter, Learning State Space Trajectories in Recurrent Neural Networks. Technical Report CMU-CS-88-191, 1988
- F. J. Pineda, Generalization of backpropagation to recurrent neural networks. Physical Review Letters, 59(19):2229–2232, 1987
- D. E. Rumelhart, J. L. McClelland, Parallel Distributed Processing. The MIT Press, 1986
- A. Waibel, H. Sawai, K. Shikano, Modularity and Scaling in Large Phonemic Networks. Technical Report TR-I-0034, ATR Interpreting Telephony Research Laboratories, 1988
- R. J. Williams, D. Zipser, A Learning Algorithm for Continually Running Fully Recurrent Networks. ICS Report 8805, 1988
- R. J. Williams, Inverting a Connectionist Network Mapping by Backpropagation of Error. Proceedings of the 8th Annual Conference of the Cognitive Science Society, Lawrence Erlbaum, 1986