Challenges and Issues of Supporting Task Parallelism in MPI
Task parallelism extracts the potential parallelism of irregular structures, which vary with the input data, by defining abstract tasks and their dependencies. Shared-memory APIs, such as OpenMP and TBB, support this model and achieve performance through efficient task scheduling. In this work, we provide arguments favoring the support of task parallelism in MPI. We explain how native MPI can be used to define tasks, their dependencies, and their runtime scheduling. We also discuss performance issues. Our preliminary experiments show that it is possible to implement efficient task-parallel MPI programs and to increase the range of applications covered by the MPI standard.
Keywords: Matrix Multiplication · Task Parallelism · Potential Parallelism · Thread Building Block · Runtime Schedule