
Future parallel computers

  • Philip C. Treleaven
Invited Addresses
Part of the Lecture Notes in Computer Science book series (LNCS, volume 237)

Abstract

There is currently a veritable explosion of research into novel computer architectures, especially parallel computers. In addition, an increasing number of interesting parallel computer products are appearing. The design motivations cover a broad spectrum: (i) parallel UNIX systems (e.g. SEQUENT Balance), (ii) Artificial Intelligence applications (e.g. Connection Machine), (iii) high performance numerical Supercomputers (e.g. Cosmic Cube), (iv) exploitation of Very Large Scale Integration (e.g. INMOS Transputer), and (v) new technologies (e.g. Optical computers). This short paper gives an overview of these novel parallel computers and discusses their likely commercial impact.
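The split between global-memory machines such as the SEQUENT Balance and message-passing machines such as the Cosmic Cube can be caricatured in a few lines. The sketch below is illustrative only and is not from the paper: both functions compute the same sum, one through a shared result array, the other through nodes that communicate only by sending messages over a channel.

```python
import threading
import queue

def shared_memory_sum(data, n_workers=4):
    """Shared-memory style: workers write partial sums into one global array."""
    partial = [0] * n_workers
    def worker(i):
        partial[i] = sum(data[i::n_workers])  # each worker owns one slot
    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(partial)

def message_passing_sum(data, n_nodes=4):
    """Message-passing style: nodes share no state; results travel as messages."""
    channel = queue.Queue()
    def node(chunk):
        channel.put(sum(chunk))  # the only communication is this message
    threads = [threading.Thread(target=node, args=(data[i::n_nodes],)) for i in range(n_nodes)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(channel.get() for _ in range(n_nodes))
```

In the shared-memory version, correctness depends on workers not touching each other's slots; in the message-passing version, isolation is structural, which is why that organisation scales to machines with no global memory at all.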

Keywords

Parallel Computer, Global Memory, Computer Architecture, Very Large Scale Integration, Artificial Intelligence Application



Copyright information

© Springer-Verlag Berlin Heidelberg 1986

Authors and Affiliations

  • Philip C. Treleaven
  1. University College London, London
