
ClustMap: A Topology-Aware MPI Process Placement Algorithm for Multi-core Clusters

  • K. B. Manwade
  • D. B. Kulkarni
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 673)

Abstract

Many high-performance computing applications use MPI (Message Passing Interface) for communication, so the performance of the MPI library directly affects the performance of those applications. Various techniques, such as reducing communication latency, increasing bandwidth, and improving scalability, are available for speeding up message passing. In a multi-core cluster environment, communication latency can be further reduced by topology-aware process placement. This technique involves three steps: discovering the communication pattern of the MPI application (the application topology), discovering the architecture of the underlying multi-core cluster (the system topology), and mapping processes to cores. In this paper, we propose the novel "ClustMap" algorithm for the third step, which maps the application topology onto the system topology. The experimental results show that the proposed algorithm outperforms existing process placement techniques.
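This page does not reproduce the ClustMap pseudocode, but the mapping step it addresses can be illustrated with a generic greedy heuristic: given a process-to-process communication matrix (the application topology) and a core-to-core distance matrix (the system topology), repeatedly place the most heavily communicating unplaced process on the free core that minimizes its communication-weighted distance to the processes already placed. The sketch below is a minimal, self-contained illustration of that idea only; the comm and dist matrices, the seeding choice, and the greedy rule are illustrative assumptions, not the ClustMap algorithm itself.

```c
#include <stdio.h>
#include <limits.h>

#define N 4  /* number of processes == number of cores (illustrative) */

/* Bytes exchanged between process pairs (application topology). */
static const long comm[N][N] = {
    {0, 900, 10, 10},
    {900, 0, 10, 10},
    {10, 10, 0, 800},
    {10, 10, 800, 0},
};

/* Hop distance between cores (system topology): cores 0,1 share a
   socket, cores 2,3 share a socket, cross-socket hops cost more. */
static const int dist[N][N] = {
    {0, 1, 3, 3},
    {1, 0, 3, 3},
    {3, 3, 0, 1},
    {3, 3, 1, 0},
};

int main(void)
{
    int core_of[N];      /* core_of[p] = core assigned to process p */
    int used[N] = {0};   /* core already taken? */
    int placed[N] = {0}; /* process already mapped? */

    /* Seed: put process 0 on core 0 (an arbitrary assumed choice). */
    core_of[0] = 0; used[0] = 1; placed[0] = 1;

    for (int step = 1; step < N; ++step) {
        /* Pick the unplaced process with the most traffic to placed ones. */
        int best_p = -1; long best_vol = -1;
        for (int p = 0; p < N; ++p) {
            if (placed[p]) continue;
            long vol = 0;
            for (int q = 0; q < N; ++q)
                if (placed[q]) vol += comm[p][q];
            if (vol > best_vol) { best_vol = vol; best_p = p; }
        }
        /* Put it on the free core minimizing communication-weighted distance
           to the processes already placed. */
        int best_c = -1; long best_cost = LONG_MAX;
        for (int c = 0; c < N; ++c) {
            if (used[c]) continue;
            long cost = 0;
            for (int q = 0; q < N; ++q)
                if (placed[q]) cost += comm[best_p][q] * dist[c][core_of[q]];
            if (cost < best_cost) { best_cost = cost; best_c = c; }
        }
        core_of[best_p] = best_c; used[best_c] = 1; placed[best_p] = 1;
    }

    for (int p = 0; p < N; ++p)
        printf("process %d -> core %d\n", p, core_of[p]);
    return 0;
}
```

On these toy matrices the heuristic keeps the two heavily communicating pairs (0,1) and (2,3) on the same socket, which is the effect topology-aware placement aims for. A real implementation would obtain comm by profiling the application and dist from a hardware-discovery tool such as hwloc.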

Keywords

High-performance computing · Topology-aware process placement · System topology · MPI application topology

Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  1. Department of Computer Science and Engineering, Ph.D. Research Center, Walchand College of Engineering, Sangli, India
  2. Shivaji University, Kolhapur, India
  3. Department of Information Technology, Walchand College of Engineering, Sangli, India
