
Process Mapping for MPI Collective Communications

  • Jin Zhang
  • Jidong Zhai
  • Wenguang Chen
  • Weimin Zheng
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5704)

Abstract

Due to non-uniform communication costs in modern parallel computers, mapping virtual parallel processes to physical processors (or cores) in an optimized way is an important problem for achieving scalable performance. Existing work uses profile-guided approaches to automatically optimize mapping schemes that minimize the cost of point-to-point communications. However, these approaches cannot handle collective communications and may produce sub-optimal mappings for applications that use them.

In this paper, we propose an approach called OPP (Optimized Process Placement) to handle collective communications. OPP transforms each collective communication into a series of point-to-point operations according to the collective's implementation in the communication library. Existing approaches can then be used to find mapping schemes that are optimized for both point-to-point and collective communications.

We evaluated our approach with micro-benchmarks covering all MPI collective communications, the NAS Parallel Benchmark suite, and three other applications. Experimental results show that the optimized process placements generated by our approach achieve significant speedups.
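
To make the decomposition step concrete, the following sketch (not taken from the paper; the process count, message size, and function names are illustrative) shows how a broadcast implemented as a binomial tree, as in MPICH-style libraries, can be expanded into the point-to-point messages it generates. The messages are accumulated into a communication matrix that an existing point-to-point placement tool (such as MPIPP) could consume. Other libraries may use different algorithms, so in practice the expansion must follow the library actually in use.

    #include <stdio.h>
    #include <string.h>

    #define NPROCS 8   /* illustrative process count */

    /* comm_matrix[src][dst] accumulates the bytes sent from src to dst */
    static long comm_matrix[NPROCS][NPROCS];

    /* Record the point-to-point traffic generated by a binomial-tree
     * broadcast rooted at `root` that delivers `bytes` bytes to every rank.
     * This mirrors a common MPICH-style implementation and is only a
     * sketch of the decomposition idea, not OPP's actual code. */
    static void decompose_bcast(int root, long bytes)
    {
        for (int mask = 1; mask < NPROCS; mask <<= 1) {
            for (int rank = 0; rank < NPROCS; rank++) {
                int rel = (rank - root + NPROCS) % NPROCS; /* rank relative to root */
                /* In the round with distance `mask`, ranks that already hold
                 * the data (rel < mask) forward it to rel + mask, if present. */
                if (rel < mask && rel + mask < NPROCS) {
                    int dst = (rel + mask + root) % NPROCS;
                    comm_matrix[rank][dst] += bytes;
                }
            }
        }
    }

    int main(void)
    {
        memset(comm_matrix, 0, sizeof(comm_matrix));
        decompose_bcast(0, 1024);   /* one 1 KB broadcast rooted at rank 0 */

        /* The resulting matrix can be merged with profiled point-to-point
         * traffic and handed to a placement tool. */
        for (int i = 0; i < NPROCS; i++) {
            for (int j = 0; j < NPROCS; j++)
                printf("%6ld ", comm_matrix[i][j]);
            printf("\n");
        }
        return 0;
    }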

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Jin Zhang (1)
  • Jidong Zhai (1)
  • Wenguang Chen (1)
  • Weimin Zheng (1)
  1. Department of Computer Science and Technology, Tsinghua University, China
