Implementing Neural Networks with the Associative String Processor
The resurgence of activity in neural computation is stimulated by two factors: the increasing frequency with which traditional computational paradigms prove inefficient for fuzzy problems of large dimensionality (e.g. pattern recognition and associative information retrieval), and recent technological advances. Indeed, with the huge strides in VLSI and WSI technologies and the emergence of electro-optics, massively parallel systems that were unrealisable only a few years ago are coming within reach.
This paper details the efficient implementation of two neural network models (Hopfield's relaxation model and the back-propagation model) on the ASP (Associative String Processor), a massively parallel, programmable, fault-tolerant architecture that efficiently supports low-MIMD/high-SIMD and other parallel computation paradigms.
The paper describes the mapping of the two neural networks onto the ASP, details the steps required to execute the network computations, and reports the performance of the ASP implementations, which achieved computational rates of Giga-interconnections/sec (i.e. 10^9 interconnections per second).
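To make the relaxation model concrete, the following is a minimal, illustrative sketch (not the ASP implementation itself) of one synchronous Hopfield relaxation step and Hebbian weight storage, written in plain NumPy; on the ASP, each neuron's weighted sum would instead be computed by the string of associative processing elements in SIMD fashion, and an N-neuron fully connected update performs N^2 connection evaluations, which is the "interconnections" count underlying the interconnections-per-second rate quoted above.

```python
import numpy as np

def store_patterns(patterns):
    """Hebbian storage: W is the sum of outer products of the
    bipolar (+1/-1) patterns, normalised by N, with zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def hopfield_step(W, s):
    """One synchronous relaxation step: s' = sign(W @ s).
    Every neuron evaluates all N incoming connections, so one
    step costs N^2 'interconnections'."""
    return np.where(W @ s >= 0.0, 1.0, -1.0)

# Store one 8-neuron bipolar pattern, corrupt one bit, and relax.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1]], dtype=float)
W = store_patterns(patterns)
probe = patterns[0].copy()
probe[0] = -probe[0]          # flip one bit of the stored pattern
s = hopfield_step(W, probe)   # one step recovers the stored pattern
```

The network relaxes toward the stored pattern because the flipped bit contributes only one wrong term to each neuron's weighted sum, which the seven correct terms outvote.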
Keywords: Hidden Node, Synaptic Weight, Virtual Node, Hopfield Model, Parallel Computer Architecture