Encyclopedia of Parallel Computing

2011 Edition
Editor: David Padua

PGAS (Partitioned Global Address Space) Languages

  • George Almasi
Reference work entry
DOI: https://doi.org/10.1007/978-0-387-09766-4_210

Definition

PGAS (Partitioned Global Address Space) is a programming model suited to both shared- and distributed-memory parallel machines, e.g., machines consisting of many (up to hundreds of thousands of) CPUs.

Shared memory in this context means that the entire memory space is available to every processor in the system (although access times to different banks of this memory may differ from processor to processor). Distributed memory is scattered across processors; access to another processor’s memory usually goes through a network.

A PGAS system, therefore, consists of the following components:
  • A set of processors, each with attached local storage. Parts of this local storage can be declared private by the programming model and are then not visible to other processors.

  • A mechanism by which at least a part of each processor’s storage can be shared with others. Sharing can be implemented through the network device with system software support, or through hardware shared memory with cache...



Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

  • George Almasi, T.J. Watson Research Center, IBM, Yorktown Heights, USA