
Abstract

A massively parallel architecture called the mesh-of-appendixed-trees (MAT) is shown to be suitable for processing artificial neural networks (ANNs). Both the recall and the learning phases of the multilayer feedforward ANN model with backpropagation are considered. The MAT structure is refined to produce two special-purpose array processors, FMAT1 and FMAT2, for efficient ANN computation. This refinement reduces circuit area and increases hardware utilization. FMAT1 is a simple structure suitable for the recall phase. FMAT2 requires little extra hardware but supports learning as well. A major characteristic of the proposed neurocomputers is high performance: it takes O(log N) time to process a neural network with N neurons in its largest layer. The proposed architecture is shown to achieve the best number of connections processed per unit time when compared to several major techniques in the literature. Another important feature of the approach is its ability to pipeline multiple input patterns, which further improves performance.
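The O(log N) bound comes from accumulating each neuron's weighted sum with a tree of adders rather than a sequential loop. The following Python sketch is ours, not the paper's (the names `tree_reduce_sum` and `recall_layer` are hypothetical); it illustrates the log-depth reduction underlying the recall phase of a single layer:

```python
import math

def tree_reduce_sum(values):
    """Sum values by combining disjoint pairs, mimicking the log-depth
    accumulation a hardware tree of adders performs."""
    vals = list(values)
    steps = 0
    while len(vals) > 1:
        # One parallel step: all disjoint pairs combine simultaneously;
        # a leftover element (odd count) is carried to the next step.
        pairs = [vals[i] + vals[i + 1] for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2:
            pairs.append(vals[-1])
        vals = pairs
        steps += 1
    return vals[0], steps

def recall_layer(weights, x, activation=math.tanh):
    """Recall phase for one layer: neuron j computes
    activation(sum_i weights[j][i] * x[i]). In the array processor,
    all products and all levels of the tree sums run in parallel, so a
    layer with N inputs finishes in O(log N) steps rather than O(N)."""
    outputs = []
    for row in weights:
        total, _ = tree_reduce_sum(w * xi for w, xi in zip(row, x))
        outputs.append(activation(total))
    return outputs

# A layer with N = 8 inputs needs only log2(8) = 3 reduction steps.
_, steps = tree_reduce_sum([1.0] * 8)
print(steps)  # 3
```

In the hardware the paper describes, every product and every reduction level is computed by dedicated nodes in parallel, so the number of reduction steps, not the per-step work in this sequential sketch, is what models the latency.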

Additional information

The authors acknowledge the support of the NSF and State of Louisiana grant NSF/LEQSF (1992–96)-ADP-04.

Cite this article

Malluhi, Q.M., Bayoumi, M.A. & Rao, T.R.N. Tree-based special-purpose array architectures for neural computing. Journal of VLSI Signal Processing 11, 245–262 (1995). https://doi.org/10.1007/BF02107056
