Challenges and Issues of Supporting Task Parallelism in MPI

  • Márcia C. Cera
  • João V. F. Lima
  • Nicolas Maillard
  • Philippe O. A. Navaux
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6305)

Abstract

Task parallelism deals with extracting the potential parallelism of irregular structures, which vary according to the input data, through the definition of abstract tasks and their dependencies. Shared-memory APIs, such as OpenMP and TBB, support this model and achieve performance through efficient task scheduling. In this work, we provide arguments favoring the support of task parallelism in MPI. We explain how native MPI can be used to define tasks, their dependencies, and their runtime scheduling, and we discuss the resulting performance issues. Our preliminary experiments show that it is possible to implement efficient task-parallel MPI programs and to increase the range of applications covered by the MPI standard.

Keywords

Matrix Multiplication, Task Parallelism, Potential Parallelism, Thread Building Block, Runtime Schedule

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Márcia C. Cera (1)
  • João V. F. Lima (1)
  • Nicolas Maillard (1)
  • Philippe O. A. Navaux (1)
  1. Universidade Federal do Rio Grande do Sul, Brazil