Scalable computing

  • W. F. McColl
Chapter
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1000)

Abstract

Scalable computing will, over the next few years, become the normal form of computing. In this paper we present a unified framework, based on the BSP model, which aims to serve as a foundation for this evolutionary development. A number of important techniques, tools and methodologies for the design of sequential algorithms and programs have been developed over the past few decades. In the transition from sequential to scalable computing we will find that new requirements such as universality and predictable performance will necessitate significant changes of emphasis in these areas. Programs for scalable computing, in addition to being fully portable, will have to be efficiently universal, offering high performance, in a predictable way, on any general purpose parallel architecture. The BSP model provides a discipline for the design of scalable programs of this kind. We outline the approach and discuss some of the issues involved.
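The "predictable performance" the abstract promises rests on the BSP cost model (Valiant, ref. [21]): a superstep in which each processor performs at most w local operations and routes an h-relation, on a machine with communication gap g and synchronisation latency l, costs w + g·h + l. A minimal sketch of this accounting — the function names and example numbers are illustrative, not taken from the paper:

```python
# Sketch of the BSP cost model: one superstep costs
#   max local work + g * max h-relation + l,
# and a program costs the sum of its superstep costs.

def superstep_cost(w_per_proc, h_per_proc, g, l):
    """Cost of one superstep: slowest processor's work, plus
    communication for the largest h-relation, plus the barrier."""
    return max(w_per_proc) + g * max(h_per_proc) + l

def program_cost(supersteps, g, l):
    """Total predicted cost: sum over supersteps of (w + g*h + l)."""
    return sum(superstep_cost(w, h, g, l) for w, h in supersteps)

# Two supersteps on 4 processors; g and l are architecture parameters,
# so the same program yields a prediction for any BSP machine.
steps = [([100, 90, 110, 95], [10, 12, 8, 11]),
         ([200, 210, 190, 205], [5, 4, 6, 5])]
print(program_cost(steps, g=4, l=50))  # → 492
```

Because g and l are the only machine-dependent quantities, the same cost expression can be re-evaluated for each target architecture — which is the sense in which BSP programs are "efficiently universal".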

Keywords

Directed Acyclic Graph, Message Passing, Remote Memory, Sequential Computing, Bulk Synchronous Parallel


References

  1. A. Aggarwal, A. K. Chandra, and M. Snir. Communication complexity of PRAMs. Theoretical Computer Science, 71:3–28, 1990.
  2. R. H. Arpaci, D. E. Culler, A. Krishnamurthy, S. G. Steinberg, and K. Yelick. Empirical evaluation of the CRAY-T3D: A compiler perspective. In Proc. 22nd Annual International Symposium on Computer Architecture, June 1995.
  3. R. H. Bisseling and W. F. McColl. Scientific computing on bulk synchronous parallel architectures. Technical Report 836, Dept. of Mathematics, University of Utrecht, December 1993. Short version appears in Proc. 13th IFIP World Computer Congress, Volume I (1994), B. Pehrson and I. Simon, Eds., Elsevier, pp. 509–514.
  4. B. Bollobás. Random Graphs. Academic Press, 1985.
  5. A. W. Burks, H. H. Goldstine, and J. von Neumann. Preliminary discussion of the logical design of an electronic computing instrument. Part 1, Volume 1. The Institute for Advanced Study, Princeton, 1946. Report to the U.S. Army Ordnance Department. First edition, 28 June 1946. Second edition, 2 September 1947. Also appears in Papers of John von Neumann on Computing and Computer Theory, W. Aspray and A. Burks, editors. Volume 12 in the Charles Babbage Institute Reprint Series for the History of Computing, MIT Press, 1987, 97–142.
  6. T. Cheatham, A. Fahmy, D. C. Stefanescu, and L. G. Valiant. Bulk synchronous parallel computing — a paradigm for transportable software. In Proc. 28th Hawaii International Conference on System Science, January 1995.
  7. D. E. Culler, A. Dusseau, S. C. Goldstein, A. Krishnamurthy, S. Lumetta, T. von Eicken, and K. Yelick. Parallel programming in Split-C. In Proc. Supercomputing '93, pages 262–273, November 1993.
  8. A. Geist, A. Beguelin, J. Dongarra, W. Jiang, R. Manchek, and V. Sunderam. PVM: Parallel Virtual Machine — A Users' Guide and Tutorial for Networked Parallel Computing. MIT Press, Cambridge, MA, 1994.
  9. M. Gereb-Graus and T. Tsantilas. Efficient optical communication in parallel computers. In Proc. 4th Annual ACM Symposium on Parallel Algorithms and Architectures, pages 41–48, 1992.
  10. A. M. Gibbons and P. Spirakis, editors. Lectures on Parallel Computation, volume 4 of Cambridge International Series on Parallel Computation. Cambridge University Press, Cambridge, UK, 1993.
  11. J. W. Hong and H. T. Kung. I/O complexity: The red-blue pebble game. In Proc. 13th Annual ACM Symposium on Theory of Computing, pages 326–333, 1981.
  12. G. Manzini. Sparse matrix vector multiplication on distributed architectures: Lower bounds and average complexity results. Inform. Process. Lett., 50(5):231–238, June 1994.
  13. W. F. McColl. General purpose parallel computing. In Gibbons and Spirakis [10], pages 337–391.
  14. W. F. McColl. Special purpose parallel computing. In Gibbons and Spirakis [10], pages 261–336.
  15. W. F. McColl. BSP programming. In G. E. Blelloch, K. M. Chandy, and S. Jagannathan, editors, Specification of Parallel Algorithms. Proc. DIMACS Workshop, Princeton, May 9–11, 1994, volume 18 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, pages 21–35. American Mathematical Society, 1994.
  16. Message Passing Interface Forum. MPI: A message-passing interface standard. Technical report, May 1994.
  17. R. Miller. A library for bulk-synchronous parallel programming. In Proc. British Computer Society Parallel Processing Specialist Group workshop on General Purpose Parallel Computing, December 1993. A revised and extended version of this paper is available by anonymous ftp from ftp.comlab.ox.ac.uk in directory /pub/Packages/BSP along with the Oxford BSP Library software distribution.
  18. S. Rao, T. Suel, T. Tsantilas, and M. Goudreau. Efficient communication using total-exchange. In Proc. 9th International Parallel Processing Symposium, 1995.
  19. D. Skillicorn. Foundations of Parallel Programming, volume 6 of Cambridge International Series on Parallel Computation. Cambridge University Press, Cambridge, UK, 1994.
  20. A. M. Turing. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, Series 2, 42:230–265, 1936. Corrections, ibid., 43 (1937), 544–546.
  21. L. G. Valiant. A bridging model for parallel computation. Communications of the ACM, 33(8):103–111, 1990.
  22. L. G. Valiant. General purpose parallel architectures. In J. van Leeuwen, editor, Handbook of Theoretical Computer Science: Volume A, Algorithms and Complexity, pages 943–971. North Holland, 1990.
  23. L. G. Valiant. A combining mechanism for parallel computers. In F. Meyer auf der Heide, B. Monien, and A. L. Rosenberg, editors, Parallel Architectures and Their Efficient Use. Proceedings of the First Heinz Nixdorf Symposium, Paderborn, November 1992. Lecture Notes in Computer Science, Vol. 678, Springer-Verlag, Berlin, 1993, pages 1–10.

Copyright information

© Springer-Verlag Berlin Heidelberg 1995

Authors and Affiliations

  • W. F. McColl — Programming Research Group, Oxford University Computing Laboratory, Oxford, England