An Introduction to Parallel Programming Using MPI

Part of the book series: Undergraduate Topics in Computer Science (UTICS)

Abstract

The most common type of high-performance parallel computer is a distributed memory computer: a computer that consists of many processors, each with its own memory, which can access the data stored by other processors only by passing messages across a network. This chapter serves as an introduction to the Message Passing Interface (MPI), a widely used library for writing parallel programs on distributed memory architectures. Although the MPI libraries contain many different functions, basic code may be written using only a very small subset of them. This chapter provides a basic guide to these commonly used functions, so that you will be able to write simple MPI programs, edit MPI programs written by other programmers, and understand the function calls made when using a scientific library built on MPI.
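
As a taste of what such a subset looks like, here is a minimal "hello world" sketch written against MPI's standard C API (the file name hello.cpp and the printed text are illustrative, not taken from the chapter):

    #include <iostream>
    #include <mpi.h>

    int main(int argc, char* argv[])
    {
        // Start up the MPI machinery; every MPI program begins with this.
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  // this process's identifier
        MPI_Comm_size(MPI_COMM_WORLD, &size);  // total number of processes

        std::cout << "Hello from process " << rank
                  << " of " << size << std::endl;

        // Shut the MPI machinery down before exiting.
        MPI_Finalize();
        return 0;
    }

Such a program is typically compiled with an MPI wrapper compiler and launched on several processes, for example: mpicxx hello.cpp -o hello, then mpirun -np 4 ./hello.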

Notes

  1. The Portable, Extensible Toolkit for Scientific Computation (PETSc, pronounced “pet see”) is a library providing functionality for the solution of linear and nonlinear systems of equations on both sequential and parallel architectures.

  2. Several programming libraries give the programmer access to a distributed shared memory computer, in which machines connected over a network behave as if they were part of one contiguous system. At the time of writing, however, these libraries have not seen widespread use.

  3. MPI implementations vary in how they return console output from the individual processes to the console from which the program was launched. Even when flush is called on the cout stream, the MPI machinery may still be buffering the output, as the sketch below illustrates.
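
     A minimal sketch, assuming a standard MPI launcher (the message text is illustrative): an explicit flush empties only the process-local buffer, so the interleaving of lines from different processes remains implementation-defined.

        #include <iostream>
        #include <mpi.h>

        int main(int argc, char* argv[])
        {
            MPI_Init(&argc, &argv);
            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            // std::flush empties this process's own buffer, but the MPI
            // machinery gathering output from all processes may still
            // buffer or reorder the lines before they reach the console.
            std::cout << "Process " << rank << " reporting\n" << std::flush;

            MPI_Finalize();
            return 0;
        }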

  4. MPI::ANY_TAG and MPI::ANY_SOURCE are the C++ names for these wild-card values. Many codes use the interchangeable C names, MPI_ANY_TAG and MPI_ANY_SOURCE, as in the sketch below.
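
     A minimal sketch of a wild-card receive, using the C names (the squared-rank payload is purely illustrative):

        #include <iostream>
        #include <mpi.h>

        int main(int argc, char* argv[])
        {
            MPI_Init(&argc, &argv);
            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            if (rank == 0)
            {
                // Accept one message from every other process, in
                // whatever order the messages happen to arrive.
                for (int i = 1; i < size; i++)
                {
                    int value;
                    MPI_Status status;
                    MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE,
                             MPI_ANY_TAG, MPI_COMM_WORLD, &status);
                    // The status object records the actual sender.
                    std::cout << "Received " << value << " from process "
                              << status.MPI_SOURCE << std::endl;
                }
            }
            else
            {
                int value = rank * rank;
                MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
            }

            MPI_Finalize();
            return 0;
        }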

  5. Note that these are the C++ object names for these types. They are also known by their synonymous C names, such as MPI_CHAR, MPI_INT and MPI_DOUBLE (standard MPI defines no MPI_BOOL in C; the C counterpart of C++'s bool is MPI_CXX_BOOL).
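
     For instance, a C++ double is described to MPI by MPI_DOUBLE; a hedged sketch (run on at least two processes):

        #include <mpi.h>

        int main(int argc, char* argv[])
        {
            MPI_Init(&argc, &argv);
            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            double x = 3.14;
            if (rank == 0)
            {
                // The datatype argument tells MPI how to interpret
                // the bytes in the buffer: here, one C++ double.
                MPI_Send(&x, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
            }
            else if (rank == 1)
            {
                MPI_Recv(&x, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            }

            MPI_Finalize();
            return 0;
        }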

  6. There are a few standard ways of getting data to file from a parallel program: concentration, where one process does all the writing, as suggested above; round-robin, where processes take it in turns to open and close the same file; parallel file libraries, such as MPI's parallel I/O interface MPI-IO; and separate files, where each process writes its data to a different place, to be re-assembled later. The choice of output method depends largely on the structure and size of the data. A sketch of the concentration approach is given below.
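
     A minimal sketch of the concentration strategy (the file name results.dat and the local value stand in for real data):

        #include <fstream>
        #include <mpi.h>

        int main(int argc, char* argv[])
        {
            MPI_Init(&argc, &argv);
            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            double local_result = 10.0 * rank;  // placeholder data

            if (rank == 0)
            {
                // Concentration: only process 0 touches the file,
                // collecting one value from each process in rank order.
                std::ofstream output("results.dat");
                output << local_result << "\n";
                for (int i = 1; i < size; i++)
                {
                    double value;
                    MPI_Recv(&value, 1, MPI_DOUBLE, i, 0, MPI_COMM_WORLD,
                             MPI_STATUS_IGNORE);
                    output << value << "\n";
                }
            }
            else
            {
                MPI_Send(&local_result, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
            }

            MPI_Finalize();
            return 0;
        }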

Author information

Correspondence to Jonathan Whiteley.

Copyright information

© 2012 Springer-Verlag London Limited

About this chapter

Cite this chapter

Pitt-Francis, J., Whiteley, J. (2012). An Introduction to Parallel Programming Using MPI. In: Guide to Scientific Computing in C++. Undergraduate Topics in Computer Science. Springer, London. https://doi.org/10.1007/978-1-4471-2736-9_11

  • DOI: https://doi.org/10.1007/978-1-4471-2736-9_11

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-4471-2735-2

  • Online ISBN: 978-1-4471-2736-9

  • eBook Packages: Computer Science (R0)
