Improving the Dynamic Creation of Processes in MPI-2

  • Márcia C. Cera
  • Guilherme P. Pezzi
  • Elton N. Mathias
  • Nicolas Maillard
  • Philippe O. A. Navaux
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4192)

Abstract

The MPI-2 standard has been implemented for a few years in most MPI distributions. Like MPI-1.2, it leaves it up to the user to decide when and where processes must be run. Yet the dynamic creation of processes enabled by MPI-2 makes it harder to handle their scheduling manually. This paper presents a scheduler module, implemented with MPI-2, that determines on-line (i.e. during the execution) on which processor a newly spawned process should be run. The scheduler can apply a basic Round-Robin mechanism, or use load information to apply a list scheduling policy, to MPI-2 programs with dynamic creation of processes. A brief presentation of the scheduler is given, followed by experimental evaluations on three test programs: the Fibonacci computation, the N-Queens benchmark, and a computation of prime numbers. Even with the basic mechanisms that have been implemented, a clear gain is obtained in run-time and load balance, and consequently in the number of processes that can be run by the MPI program.
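The two placement policies the abstract describes can be sketched in isolation from MPI itself. The following is a hypothetical simplification in Python; the class `Scheduler` and the method names are illustrative, not the paper's actual API. In a real MPI-2 program, the chosen host would typically be passed to `MPI_Comm_spawn` through an `MPI_Info` object with the "host" key.

```python
class Scheduler:
    """Chooses a host for each newly spawned MPI process.

    Illustrative sketch only: the real scheduler module runs alongside
    the MPI-2 application and feeds its decision to MPI_Comm_spawn.
    """

    def __init__(self, hosts):
        self.hosts = list(hosts)
        self.next_idx = 0                        # Round-Robin cursor
        self.load = {h: 0 for h in self.hosts}   # processes placed per host

    def pick_round_robin(self):
        # Basic mechanism: cycle through the hosts regardless of load.
        host = self.hosts[self.next_idx]
        self.next_idx = (self.next_idx + 1) % len(self.hosts)
        self.load[host] += 1
        return host

    def pick_least_loaded(self):
        # List scheduling policy: place the new process on the host
        # with the smallest current load.
        host = min(self.hosts, key=lambda h: self.load[h])
        self.load[host] += 1
        return host


sched = Scheduler(["node0", "node1", "node2"])
print([sched.pick_round_robin() for _ in range(4)])
# Round-Robin wraps around: ['node0', 'node1', 'node2', 'node0']
print(sched.pick_least_loaded())
# node1 and node2 are tied at load 1; min() returns the first: 'node1'
```

The paper's list scheduler would use measured load information (e.g. CPU load averages) rather than a simple spawn count, but the selection step is the same greedy rule.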

Keywords

Load Balance · Task Graph · List Schedule · Load Information · Dynamic Creation

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Márcia C. Cera (1)
  • Guilherme P. Pezzi (1)
  • Elton N. Mathias (1)
  • Nicolas Maillard (1)
  • Philippe O. A. Navaux (1)

  1. Instituto de Informática, Universidade Federal do Rio Grande do Sul, Porto Alegre, Brazil