An Architecture for Reconfigurable Iterative MPI Applications in Dynamic Environments

  • Kaoutar El Maghraoui
  • Boleslaw K. Szymanski
  • Carlos Varela
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3911)

Abstract

With the proliferation of large-scale dynamic execution environments such as grids, the need has emerged for efficient and scalable adaptation strategies for long-running parallel and distributed applications. Message passing interfaces were initially designed with a traditional machine model in mind, one that assumes homogeneous and static environments. It is therefore inevitable that long-running message passing applications will require support for dynamic reconfiguration to maintain high performance under varying load conditions. In this paper we describe a framework that provides iterative MPI applications with reconfiguration capabilities. Our approach is based on integrating MPI applications with a middleware that supports process migration and large-scale distributed application reconfiguration. We present our architecture for reconfiguring MPI applications and verify our design with a heat diffusion application in a dynamic setting.
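To make the targeted application class concrete, the sketch below (not taken from the paper) shows a minimal iterative 1-D heat diffusion solver written in C with standard MPI. The function middleware_check_reconfigure and the CHECK_EVERY interval are hypothetical placeholders for the point at which a middleware such as the one described here could checkpoint, migrate, or rebalance a process between iterations; every other call is plain MPI.

    /*
     * Minimal sketch: iterative 1-D heat diffusion (Jacobi) in MPI.
     * middleware_check_reconfigure() is a HYPOTHETICAL hook standing in
     * for a reconfiguration middleware; it is not an API from the paper.
     */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define LOCAL_N     1000   /* interior points owned by each rank        */
    #define STEPS       10000  /* total iterations                          */
    #define CHECK_EVERY 100    /* how often to poll the (hypothetical) hook */

    /* Hypothetical middleware hook: a real implementation would expose the
     * local state so the middleware can migrate this process if needed. */
    static void middleware_check_reconfigure(double *u, int n, int step) {
        (void)u; (void)n; (void)step;   /* no-op in this sketch */
    }

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* local array with two ghost cells: u[0] and u[LOCAL_N+1] */
        double *u    = calloc(LOCAL_N + 2, sizeof(double));
        double *unew = calloc(LOCAL_N + 2, sizeof(double));
        if (rank == 0) u[0] = 100.0;            /* fixed left boundary  */
        /* right boundary stays 0.0 from calloc on the last rank        */

        int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
        int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

        for (int step = 0; step < STEPS; step++) {
            /* exchange ghost cells with neighboring ranks */
            MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                         &u[LOCAL_N + 1], 1, MPI_DOUBLE, right, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Sendrecv(&u[LOCAL_N], 1, MPI_DOUBLE, right, 1,
                         &u[0], 1, MPI_DOUBLE, left, 1,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            /* Jacobi update of the interior points */
            for (int i = 1; i <= LOCAL_N; i++)
                unew[i] = 0.5 * (u[i - 1] + u[i + 1]);
            memcpy(&u[1], &unew[1], LOCAL_N * sizeof(double));

            /* iteration boundary: a natural point for a middleware to
             * checkpoint, migrate, or rebalance this process */
            if (step % CHECK_EVERY == 0)
                middleware_check_reconfigure(u, LOCAL_N, step);
        }

        if (rank == 0) printf("done after %d steps\n", STEPS);
        free(u); free(unew);
        MPI_Finalize();
        return 0;
    }

Polling only at iteration boundaries keeps the state that must be captured small and well defined, which is what makes iterative applications a natural fit for migration-based reconfiguration.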

Keywords

Message Passing Interface · Virtual Topology · Iterative Application · Dynamic Load Balance · Application Entity

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Kaoutar El Maghraoui (1)
  • Boleslaw K. Szymanski (1)
  • Carlos Varela (1)

  1. Rensselaer Polytechnic Institute, Troy, USA
