
Task migration and fine grain parallelism on distributed memory architectures

  • Yvon Jégou
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1277)

Abstract

The most successful compilation techniques for distributed memory architectures are based on static analysis of memory accesses: loop iterations with similar behavior on the parallel memories are combined into coarse-grain parallel tasks. For irregularly structured applications, however, the memory behavior of each iteration of a parallel loop is data-dependent and cannot be predicted at compile time, so the only exploitable parallelism is fine-grain. We show that the task migration paradigm, because it generates parallel and asynchronous execution of a large number of small tasks, allows direct exploitation of these irregularly structured problems on distributed memory architectures.

Keywords

Task migration · Distributed memory · Fine grain · Irregular code



Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • Yvon Jégou — IRISA / INRIA, Campus de Beaulieu, Rennes Cedex, France
