An extension to MPI for distributed computing on MPPs
We present a tool that allows an MPI application to run on several MPPs without any changes to the application code. PACX (PArallel Computer eXtension) provides the user with a distributed MPI environment that supports most of the important functionality of standard MPI. It is therefore well suited for use in metacomputing.
We show how PACX configures two MPPs into a single virtual machine. We present the underlying communication management, which uses the highly optimized vendor MPI for internal communication and standard protocols for external communication. We then describe the performance of PACX for several basic message-passing calls, covering latency, bandwidth, synchronization, and global communication.