Communicating Multiprocessor-Tasks

  • Jörg Dümmler
  • Thomas Rauber
  • Gudula Rünger
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5234)

Abstract

The use of multiprocessor tasks (M-tasks) has been shown to be successful for mixed task and data parallel implementations of algorithms from scientific computing. The approach often leads to an increase in scalability compared to a pure data parallel implementation, but it restricts the data exchange between M-tasks to the beginning or the end of their execution, where data and control dependencies between M-tasks are expressed.

In this article, we propose an extension of the M-task model to communicating M-tasks (CM-tasks), which allow communication between M-tasks during their execution. In particular, we present and discuss the CM-task programming model, programming support for designing CM-task programs, and experimental results. Internally, a CM-task comprises communication and computation phases. The data exchange between different CM-tasks can exploit optimized communication patterns, e.g., orthogonal realizations of the communication. This can be used to further increase the scalability of many applications, including time-stepping methods, which use a similar task structure in each time step. This is demonstrated for solution methods for ordinary differential equations.
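To make the execution scheme concrete, the following sketch arranges two CM-tasks with plain MPI: two disjoint process groups alternate computation phases with a pairwise data exchange over an inter-communicator while both tasks are running. This is a minimal illustration under stated assumptions, not the authors' CM-task interface or the Tlib library; the group layout, vector length, step count, and message tag are invented for the example.

```c
/*
 * Minimal sketch of the CM-task idea in plain MPI (not the authors'
 * CM-task API or the Tlib library): two CM-tasks run on disjoint
 * process groups and exchange intermediate data *during* execution
 * via an inter-communicator. N, STEPS, the group layout, and the tag
 * are illustrative assumptions.
 */
#include <mpi.h>
#include <stdio.h>

#define N     4   /* local vector length per process (assumption) */
#define STEPS 3   /* number of time steps (assumption) */

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size % 2 != 0) { /* the sketch assumes two equally sized groups */
        if (rank == 0) fprintf(stderr, "run with an even number of processes\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* Split the processes into two groups, one per CM-task. */
    int task = (rank < size / 2) ? 0 : 1;
    MPI_Comm group;
    MPI_Comm_split(MPI_COMM_WORLD, task, rank, &group);

    /* Build an inter-communicator connecting the two groups; the remote
     * leader is the first global rank of the other group. */
    int remote_leader = (task == 0) ? size / 2 : 0;
    MPI_Comm inter;
    MPI_Intercomm_create(group, 0, MPI_COMM_WORLD, remote_leader, 42, &inter);

    int grank;
    MPI_Comm_rank(group, &grank);

    double local[N], remote[N];
    for (int i = 0; i < N; i++) local[i] = (double)rank;

    for (int step = 0; step < STEPS; step++) {
        /* Computation phase: internal to each CM-task. */
        for (int i = 0; i < N; i++) local[i] += 1.0;

        /* Communication phase between the two running CM-tasks: processes
         * with the same group rank exchange their vectors pairwise. */
        MPI_Sendrecv(local, N, MPI_DOUBLE, grank, 0,
                     remote, N, MPI_DOUBLE, grank, 0,
                     inter, MPI_STATUS_IGNORE);

        /* Use the received data in the next computation phase. */
        for (int i = 0; i < N; i++) local[i] = 0.5 * (local[i] + remote[i]);
    }

    if (rank == 0) printf("local[0] after %d steps: %f\n", STEPS, local[0]);

    MPI_Comm_free(&inter);
    MPI_Comm_free(&group);
    MPI_Finalize();
    return 0;
}
```

A pure M-task program would instead have to finish both tasks and redistribute the data before the next step could start; removing exactly this restriction is what the CM-task model provides.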

Keywords

Execution Order · Data Parallelism · Parameter List · Parallel Programming Model · Multiprocessor Task

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Jörg Dümmler, Chemnitz University of Technology
  • Thomas Rauber, University of Bayreuth
  • Gudula Rünger, Chemnitz University of Technology
