
International Journal of Parallel Programming, Volume 22, Issue 1, pp 47–77

Machines and models for parallel computing

  • Jack B. Dennis

Abstract

It is widely believed that superscalar and superpipelined extensions of RISC style architecture will dominate future processor design, and that needs of parallel computing will have little effect on processor architecture. This belief ignores the issues of memory latency and synchronization, and fails to recognize the opportunity to support a general semantic model for parallel computing. Efforts to extend the shared-memory model using standard microprocessors have led to systems that implement no satisfactory model of computing, and present the programmer with a difficult interface on which to build parallel computing applications. A more satisfactory model for parallel computing may be obtained on the basis of functional programming concepts and the principles of modular software construction. We recommend that designs for computers be built on such a general semantic model of parallel computation. Multithreading concepts and dataflow principles can frame the architecture of these new machines.
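The appeal of the functional model argued for in the abstract is that pure, side-effect-free operations depend only on their inputs, so they may be evaluated in any order, or concurrently, without locks or memory-coherence protocols. As a hypothetical illustration (not drawn from the article itself), a pure function mapped over independent data yields the same result whether evaluated sequentially or in parallel:

```python
from concurrent.futures import ThreadPoolExecutor

def f(x):
    # A pure function: the result depends only on the argument,
    # so every evaluation is independent of every other.
    return x * x + 1

data = list(range(8))

# Sequential evaluation.
sequential = [f(x) for x in data]

# Parallel evaluation: safe precisely because f has no side
# effects, so no synchronization between workers is required.
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(f, data))

assert sequential == parallel
```

This determinacy regardless of evaluation order is the property that dataflow and functional architectures exploit at the hardware level.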

Key words

Parallel computing; semantic models; computer architecture; memory coherence; multithreaded dataflow; functional programming



Copyright information

© Plenum Publishing Corporation 1994

Authors and Affiliations

  • Jack B. Dennis, MIT Laboratory for Computer Science, Cambridge
