Languages and Compilers for Parallel Computing

Volume 5898 of the series Lecture Notes in Computer Science pp 31-49

Hardware Support for OpenMP Collective Operations

  • Soohong P. Kim, School of ECE, Purdue University
  • Samuel P. Midkiff, School of ECE, Purdue University
  • Henry G. Dietz, Department of ECE, University of Kentucky


Efficient implementation of OpenMP collective operations (e.g. barriers and reductions) is essential for good performance from OpenMP programs. State-of-the-art on-chip networks and block-based cache coherence protocols used in shared memory Chip MultiProcessors (CMPs) are inefficient for implementing these collective operations. The performance of CMPs can be seriously degraded by the multitude of memory requests and coherence messages required to implement collective operations. To provide efficient support for OpenMP collective operations, this paper presents a CMP-AFN architecture and Instruction Set Architecture (ISA) extensions that augment a conventional shared-memory CMP with a tightly-integrated Aggregate Function Network (AFN) that implements low-latency collectives without using or interfering with the memory hierarchy. For a modest increase in circuit complexity, traffic within a CMP’s internal network is dramatically reduced, improving the performance of caches and reducing power consumption. Full system simulations of 16-core CMPs show a CMP-AFN outperforms the reference design significantly, eliminating more than 60% of memory accesses and more than 70% of private L1 data cache misses in both the EPCC OpenMP microbenchmarks and SPEC OMP benchmarks.