Delegation-Based MPI Communications for a Hybrid Parallel Computer with Many-Core Architecture
Many-core architectures have drawn much attention in the HPC community on the road to the Exascale era. Many ongoing research activities worldwide use GPUs or the Many Integrated Core (MIC) architecture from Intel. Many-core CPUs can greatly improve computing performance; however, they are poorly suited to the heavy communication and I/O that MPI operations generally require.
We have been focusing on the MIC architecture as the many-core component of a hybrid parallel computer used in conjunction with multi-core CPUs. We propose a delegation mechanism for scalable MPI communications: MPI operations issued on the many-core CPUs are delegated to, and carried out on, the multi-core CPUs. This architecture also minimizes memory consumption on both the many-core and the multi-core CPUs by deploying multi-layered MPI communicator information. We evaluated the delegation mechanism on an emulated hybrid computing environment; this paper presents our design and its performance evaluation on that environment.
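As a rough illustration of the delegation idea (a minimal sketch, not the paper's actual implementation; all names here are hypothetical), the following Python example models compute workers, standing in for many-core ranks, that do not perform communication operations themselves but instead enqueue requests to a single delegate thread, standing in for a process on the multi-core host, which executes the operations on their behalf and returns results through futures:

```python
import queue
import threading
from concurrent.futures import Future


class DelegatedComm:
    """Sketch of a delegation mechanism: compute threads enqueue
    communication requests instead of performing them directly; a
    single delegate thread executes them and publishes the results."""

    def __init__(self):
        self._requests = queue.Queue()
        self._delegate = threading.Thread(target=self._serve, daemon=True)
        self._delegate.start()

    def _serve(self):
        # Delegate loop: take each queued request, perform the operation,
        # and hand the result back through the request's future.
        while True:
            op, payload, fut = self._requests.get()
            if op == "stop":
                break
            # A real implementation would issue an MPI call here; we
            # simulate a reduce-style summation for illustration only.
            fut.set_result(sum(payload))

    def delegate_reduce(self, values):
        """Called from a compute thread: hand the operation to the
        delegate and immediately return a future for its result."""
        fut = Future()
        self._requests.put(("reduce", values, fut))
        return fut

    def shutdown(self):
        self._requests.put(("stop", None, None))


comm = DelegatedComm()
# Four simulated compute ranks each delegate one reduction.
futures = [comm.delegate_reduce([rank, rank + 1]) for rank in range(4)]
results = [f.result() for f in futures]
comm.shutdown()
print(results)  # -> [1, 3, 5, 7]
```

The design point this sketch captures is that the issuing cores only pay the cost of enqueueing a small request record, while the communication-heavy work is concentrated on the side better suited to it.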
Keywords: many-core architecture, MPI, inter-core communication, delegation, multi-layered MPI communicator information