The New Multidevice Architecture of MetaMPICH in the Context of Other Approaches to Grid-Enabled MPI

  • Boris Bierbaum
  • Carsten Clauss
  • Martin Pöppe
  • Stefan Lankes
  • Thomas Bemmerl
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4192)

Abstract

MetaMPICH is an MPI implementation that allows different computing resources to be coupled into a heterogeneous computing system called a meta computer. Such a coupled system may consist of multiple compute clusters, MPPs, and SMP servers using different network technologies such as Ethernet, SCI, and Myrinet. Several other MPI libraries with similar goals are available. We present the three most important of them and contrast their features and abilities with one another and with MetaMPICH. We especially highlight the recent advances made to MetaMPICH, namely the development of the new multidevice architecture for building a meta computer.
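
Because MetaMPICH is an MPI implementation, applications are written against the standard MPI interface, while the library is responsible for the heterogeneous interconnects underneath. The following minimal C sketch is our own illustration, not code from the paper; it uses only standard MPI calls and is the kind of unmodified program that an MPI implementation for coupled clusters is meant to run, with the choice of network device (e.g. Ethernet, SCI, or Myrinet) left to the library rather than the application.

    /* Illustrative sketch: an ordinary MPI program in C.
       Nothing here is specific to MetaMPICH; the point is that the
       application sees only the standard MPI interface, while the MPI
       implementation selects the transport for each pair of processes. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char host[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Get_processor_name(host, &len);

        /* Each process reports where it runs; in a meta computer the
           hosts may belong to different clusters. */
        printf("rank %d of %d on %s\n", rank, size, host);

        if (size > 1) {
            if (rank == 0) {
                int value = 42;
                /* Point-to-point traffic may cross cluster boundaries;
                   the implementation routes it over the available network. */
                MPI_Send(&value, 1, MPI_INT, size - 1, 0, MPI_COMM_WORLD);
            } else if (rank == size - 1) {
                int value;
                MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                printf("rank %d received %d from rank 0\n", rank, value);
            }
        }

        MPI_Finalize();
        return 0;
    }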

Keywords

MPI · Grid Computing · Meta Computing · MPICH · Multidevice Architecture

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Boris Bierbaum (1)
  • Carsten Clauss (1)
  • Martin Pöppe (1)
  • Stefan Lankes (1)
  • Thomas Bemmerl (1)

  1. Chair for Operating Systems, RWTH Aachen University, Germany