Efficient Management of Parallelism in Object-Oriented Numerical Software Libraries

  • Satish Balay
  • William D. Gropp
  • Lois Curfman McInnes
  • Barry F. Smith

Abstract

Parallel numerical software based on the message passing model is enormously complicated. This paper introduces a set of techniques to manage the complexity, while maintaining high efficiency and ease of use. The PETSc 2.0 package uses object-oriented programming to conceal the details of the message passing, without concealing the parallelism, in a high-quality set of numerical software libraries. In fact, the programming model used by PETSc is also the most appropriate for NUMA shared-memory machines, since they require the same careful attention to memory hierarchies as do distributed-memory machines. Thus, the concepts discussed are appropriate for all scalable computing systems. The PETSc libraries provide many of the data structures and numerical kernels required for the scalable solution of PDEs, offering performance portability.



Copyright information

© Springer Science+Business Media New York 1997

Authors and Affiliations

  • Satish Balay
  • William D. Gropp
  • Lois Curfman McInnes
  • Barry F. Smith

  1. Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, USA
