Artificial neural networks on reconfigurable meshes

  • Jing-Fu Jenq
  • Wing Ning Li
Workshop on Biologically Inspired Solutions to Parallel Processing Problems
Albert Y. Zomaya, The University of Western Australia; Fikret Ercal, University of Missouri-Rolla; Stephan Olariu, Old Dominion University
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1388)

Abstract

Artificial neural networks (ANNs) have been used successfully in applications such as pattern recognition, image processing, automation, and control. The majority of today's applications use feedforward ANNs trained with backpropagation. In this paper, two methods for learning a P-pattern, L-layer ANN on an n x n RMESH are presented. One requires O(nL) memory but is conceptually simpler to develop; the other uses a pipelined approach that reduces the memory requirement to O(L). Both algorithms take O(PL) time and are optimal for the RMESH architecture.
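For readers unfamiliar with the computation being mapped onto the mesh, the sketch below shows sequential backpropagation training of an L-layer feedforward network over P patterns, which is the per-pattern, per-layer work the paper's RMESH algorithms distribute. It is a minimal illustration only: the sigmoid activation, squared-error loss, per-pattern weight updates, and all sizes and learning rates are assumptions chosen for the example, not details taken from the paper, and no RMESH parallelization is attempted here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(patterns, targets, layer_sizes, lr=0.5, epochs=1000):
    """Sequential backpropagation over P patterns and L weight layers.

    Illustrative assumptions: sigmoid units, squared-error loss,
    per-pattern (online) updates, small random weight initialization.
    """
    rng = np.random.default_rng(0)
    W = [rng.normal(scale=0.5, size=(layer_sizes[i], layer_sizes[i + 1]))
         for i in range(len(layer_sizes) - 1)]
    b = [np.zeros(layer_sizes[i + 1]) for i in range(len(layer_sizes) - 1)]
    for _ in range(epochs):
        for x, t in zip(patterns, targets):          # loop over the P patterns
            # Forward pass: propagate the input through the L layers.
            acts = [x]
            for w, bias in zip(W, b):
                acts.append(sigmoid(acts[-1] @ w + bias))
            # Output-layer error term for squared error with sigmoid units.
            delta = (acts[-1] - t) * acts[-1] * (1.0 - acts[-1])
            # Backward pass: push the error back layer by layer.
            for i in range(len(W) - 1, -1, -1):
                grad_W = np.outer(acts[i], delta)
                grad_b = delta
                # Compute the previous layer's delta before updating W[i].
                delta = (delta @ W[i].T) * acts[i] * (1.0 - acts[i])
                W[i] -= lr * grad_W
                b[i] -= lr * grad_b
    return W, b
```

A toy run on the four XOR patterns (P = 4, a 2-2-1 network, so two weight layers; whether it converges depends on the random initialization):

```python
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
W, b = train(X, T, layer_sizes=[2, 2, 1], lr=0.5, epochs=5000)
```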

Keywords

Artificial Neural Networks, Reconfigurable Mesh Algorithms, Parallel Algorithms

Copyright information

© Springer-Verlag Berlin Heidelberg 1998

Authors and Affiliations

  • Jing-Fu Jenq, CS Department, Tennessee State University, USA
  • Wing Ning Li, CS Department, University of Arkansas, USA
