
Exploiting the Inherent Parallelism of Artificial Neural Networks to Achieve 1300 Million Interconnects per Second

  • Alexander Singer

Abstract

An artificial neural network implementation on the Connection Machine is presented that performs 1300 million interconnects per second. The implementation exploits training-set parallelism and is discussed within the framework provided by the inherent parallelism of ANNs.
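
Training-set parallelism, as the abstract uses the term, means replicating the network across processors, letting each replica process a different training pattern, and summing the per-pattern weight gradients before each update. As a minimal sketch of that idea, the NumPy fragment below uses the batch axis in place of the Connection Machine's SIMD processors; the network shape, learning rate, and variable names are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

# Training-set parallelism, sketched with NumPy: the batch axis stands in
# for the Connection Machine's processors, one training pattern per
# (virtual) processor. Network size and learning rate are illustrative.
rng = np.random.default_rng(0)

n_in, n_hidden, n_out, n_patterns = 4, 8, 2, 64
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))   # replicated weights
W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))

X = rng.normal(size=(n_patterns, n_in))   # one input pattern per processor
T = rng.normal(size=(n_patterns, n_out))  # one target pattern per processor

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(100):
    # Forward pass: every pattern is propagated simultaneously.
    H = sigmoid(X @ W1)
    Y = sigmoid(H @ W2)

    # Backward pass (squared-error loss), still independent per pattern.
    dY = (Y - T) * Y * (1.0 - Y)
    dH = (dY @ W2.T) * H * (1.0 - H)

    # The one cross-pattern step: sum the per-pattern weight gradients.
    gW2 = H.T @ dY
    gW1 = X.T @ dH

    # Batch (per-epoch) weight update, as in standard backpropagation.
    W1 -= 0.1 * gW1
    W2 -= 0.1 * gW2
```

The only step requiring communication between patterns is the gradient summation (the `H.T @ dY` and `X.T @ dH` reductions); on a machine with one pattern per processor this would be realized as a parallel reduction across processors, with everything else proceeding independently.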

Keywords

Artificial Neural Network · Training Pattern · Connectionist School · Forward Pass · Artificial Neural Network Algorithm



Copyright information

© Springer Science+Business Media Dordrecht 1990

Authors and Affiliations

  • Alexander Singer
  1. Thinking Machines Corporation, Cambridge, USA
