
Improving the Reliability and the Performance of CAPE by Using MPI for Data Exchange on Network

  • Van Long Tran
  • Éric Renault
  • Viet Hai Ha
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9395)

Abstract

CAPE (Checkpointing Aided Parallel Execution) has been shown to be a high-performance and compliant OpenMP implementation for distributed-memory systems. CAPE uses checkpoints to automatically distribute the jobs of OpenMP parallel constructs to distant machines and to automatically collect the results computed on those machines back to the master machine. However, in the current version, data exchange over the network relies on hand-coded sockets, which require time to establish connections between machines for each parallel construct. Furthermore, this technique is not fully reliable, due to the risk of port conflicts and to the stream-based nature of the data exchange. This paper presents the impact of using MPI to improve the reliability and the performance of CAPE. Both the socket and the MPI implementations are analyzed and discussed, and performance evaluations are provided.
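
To make the contrast concrete, the sketch below shows, in plain MPI point-to-point calls, how a master could distribute checkpoint buffers to workers and collect the resulting checkpoints afterwards: the MPI runtime sets up all connections once at MPI_Init, rather than per parallel construct as with the socket approach. This is a minimal illustration only; the buffer size, message tags and the merge step are placeholders and do not reflect CAPE's actual implementation.

    /* Illustrative sketch only: master/worker checkpoint exchange with MPI.
     * CKPT_SIZE, the tags and the merge step are hypothetical placeholders. */
    #include <mpi.h>
    #include <string.h>

    #define CKPT_SIZE 1024                     /* assumed checkpoint buffer size */

    int main(int argc, char **argv)
    {
        int rank, size;
        char buf[CKPT_SIZE];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {                       /* master */
            memset(buf, 0, CKPT_SIZE);         /* stands in for the initial checkpoint */
            for (int w = 1; w < size; w++)     /* distribute one job per worker */
                MPI_Send(buf, CKPT_SIZE, MPI_CHAR, w, 0, MPI_COMM_WORLD);
            for (int w = 1; w < size; w++) {   /* collect the result checkpoints */
                MPI_Recv(buf, CKPT_SIZE, MPI_CHAR, w, 1, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                /* merge buf into the master's memory image here */
            }
        } else {                               /* worker */
            MPI_Recv(buf, CKPT_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            /* execute the assigned portion of the parallel construct here */
            MPI_Send(buf, CKPT_SIZE, MPI_CHAR, 0, 1, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }

With sockets, by contrast, each parallel construct must open ports and establish connections before any data can flow, which is the source of the latency and reliability issues described above.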

Keywords

CAPE · OpenMP · MPI · High-performance computing · Parallel programming


Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Institut Mines-Telecom – Telecom SudParis, Évry, France
  2. College of Education, Hue University, Hue, Vietnam
