Multi-Neural Network Hardware and Software Architecture: Application of the Divide To Simplify (DTS) Paradigm
In this paper we present the implementation of a data-driven method, called DTS (Divide To Simplify), that dynamically builds a multi-neural-network architecture. The architecture we propose solves a complex problem by splitting it into several easier sub-problems. We have previously presented a software version of the DTS multi-neural-network architecture. The main idea of the DTS approach is to use a set of small, specialized mapping neural networks, or Slave Neural Networks (SNNs), guided by a prototype-based neural network, or Master Neural Network (MNN). In this paper, the MNN manages a set of hardware digital neural networks. Learning is performed in a few milliseconds, and we obtain a very good classification rate on the two-spirals benchmark problem.
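To make the master/slave decomposition concrete, the following NumPy sketch illustrates the DTS idea on the two-spirals benchmark under stated simplifications: a k-means-style prototype layer stands in for the Kohonen/ZISC-036 prototype-based master, and tiny one-hidden-layer perceptrons stand in for the slave networks. All names here (make_two_spirals, MasterNN, SlaveMLP) are hypothetical illustrations, not the paper's actual implementation.

```python
# Minimal sketch of a DTS-style master/slave decomposition (assumptions noted above).
import numpy as np

def make_two_spirals(n_per_class=200, noise=0.0, rng=None):
    """Generate a two-spirals dataset in the style of Lang & Witbrock."""
    rng = rng or np.random.default_rng(0)
    t = np.linspace(0.25, 3.0, n_per_class) * np.pi   # angle parameter
    r = t / (3.0 * np.pi)                              # radius grows with angle
    x1 = np.stack([r * np.cos(t), r * np.sin(t)], axis=1)
    x2 = -x1                                           # second spiral: point reflection
    X = np.vstack([x1, x2]) + noise * rng.standard_normal((2 * n_per_class, 2))
    y = np.hstack([np.zeros(n_per_class), np.ones(n_per_class)])
    return X, y

class MasterNN:
    """Prototype-based 'master': partitions the input space with k-means-style
    prototypes and routes each sample to one slave (a stand-in for the Kohonen map)."""
    def __init__(self, n_prototypes=8, iters=50, rng=None):
        self.k, self.iters = n_prototypes, iters
        self.rng = rng or np.random.default_rng(1)
    def fit(self, X):
        self.protos = X[self.rng.choice(len(X), self.k, replace=False)].copy()
        for _ in range(self.iters):
            a = self.assign(X)
            for j in range(self.k):
                if np.any(a == j):
                    self.protos[j] = X[a == j].mean(axis=0)
        return self
    def assign(self, X):
        d = ((X[:, None, :] - self.protos[None, :, :]) ** 2).sum(-1)
        return d.argmin(axis=1)                        # nearest prototype index

class SlaveMLP:
    """Tiny one-hidden-layer 'slave' network trained by plain gradient descent
    on the local sub-problem delegated to it by the master."""
    def __init__(self, hidden=8, lr=0.3, epochs=2000, rng=None):
        rng = rng or np.random.default_rng(2)
        self.W1 = rng.standard_normal((2, hidden)) * 0.5
        self.b1 = np.zeros(hidden)
        self.W2 = rng.standard_normal(hidden) * 0.5
        self.b2 = 0.0
        self.lr, self.epochs = lr, epochs
    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)
        return 1.0 / (1.0 + np.exp(-(self.h @ self.W2 + self.b2)))
    def fit(self, X, y):
        for _ in range(self.epochs):
            p = self.forward(X)
            g = (p - y) / len(X)                       # gradient of mean cross-entropy
            gh = np.outer(g, self.W2) * (1 - self.h ** 2)
            self.W2 -= self.lr * (self.h.T @ g)
            self.b2 -= self.lr * g.sum()
            self.W1 -= self.lr * (X.T @ gh)
            self.b1 -= self.lr * gh.sum(0)
        return self
    def predict(self, X):
        return (self.forward(X) > 0.5).astype(int)

# Divide to simplify: the master splits the spirals into local patches, and each
# slave only has to separate the (nearly linearly separable) piece it is given.
X, y = make_two_spirals()
master = MasterNN(n_prototypes=8).fit(X)
regions = master.assign(X)
slaves = {j: SlaveMLP().fit(X[regions == j], y[regions == j])
          for j in np.unique(regions)}
pred = np.empty(len(X), dtype=int)
for j, nn in slaves.items():
    pred[regions == j] = nn.predict(X[regions == j])
print(f"training accuracy: {(pred == y).mean():.3f}")
```

The point of the decomposition is visible here: each slave sees only a local patch of the spirals, so a very small network suffices where a single monolithic network would struggle. In the hardware version described in the paper, the slave role is played by ZISC-036 digital neural networks managed by the MNN.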
Key words: Divide To Simplify · Kohonen Self-Organizing Maps · Multi-Neural Network Systems · Cooperative and parallel architecture · IBM Zero Instruction Set Computer ZISC-036