An Introduction to Parallel Programming Using MPI

  • Joe Pitt-Francis
  • Jonathan Whiteley
Part of the Undergraduate Topics in Computer Science book series (UTICS)


The most common type of high-performance parallel computer is a distributed memory computer: a computer consisting of many processors, each with its own memory, where a processor can access data stored by another processor only by passing messages across a network. This chapter serves as an introduction to the Message Passing Interface (MPI), a widely used library for writing parallel programs on distributed memory architectures. Although the MPI library contains many functions, basic programs can be written using only a very small subset of them. By providing a guide to these commonly used functions, this chapter will enable you to write simple MPI programs, modify MPI programs written by other programmers, and understand the function calls made when using a scientific library built on MPI.





Copyright information

© Springer-Verlag London Limited 2012

Authors and Affiliations

  1. Department of Computer Science, University of Oxford, Oxford, UK
