TOPPER: An Integrated Environment for Task Allocation and Execution of MPI Applications onto Parallel Architectures

  • Dimitris Konstantinou
  • Nectarios Koziris
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2563)

Abstract

Although the use of parallel computing systems has expanded significantly in recent years, the available processing elements are often not fully exploited, owing to interprocessor communication overhead. In this paper we present an integrated software environment for optimizing the performance of parallel programs on multiprocessor architectures. TOPPER can efficiently allocate the tasks of a parallel application to the nodes of a multiprocessing machine, using several algorithms for task clustering, cluster merging and physical mapping. The programmer describes the application’s task computation and communication requirements, together with the available multiprocessor network, as two graphs of similar form. TOPPER aims to minimize the application’s overall execution time by proposing an efficient task allocation. For MPI programs, TOPPER is even more powerful, since the application is executed automatically on the target machine with the computed task mapping.
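The pipeline the abstract describes (cluster communicating tasks, then map clusters onto processors) can be illustrated with a minimal sketch. The graph representation, the merge threshold, and the round-robin mapping below are illustrative assumptions for demonstration only, not TOPPER's actual algorithms.

```python
# Illustrative clustering-then-mapping sketch; the data format and the
# greedy heuristics are assumptions, not TOPPER's actual algorithms.

def cluster_tasks(comm, threshold):
    """Merge tasks whose pairwise communication cost exceeds `threshold`
    into one cluster (a crude union-find stand-in for task clustering)."""
    parent = {}

    def find(t):
        parent.setdefault(t, t)
        while parent[t] != t:
            parent[t] = parent[parent[t]]  # path compression
            t = parent[t]
        return t

    for (a, b), cost in comm.items():
        if cost > threshold:
            parent[find(a)] = find(b)      # union heavily-communicating tasks

    clusters = {}
    for t in {t for edge in comm for t in edge}:
        clusters.setdefault(find(t), []).append(t)
    return list(clusters.values())

def map_clusters(clusters, processors):
    """Physical mapping placeholder: assign clusters to processors
    round-robin, largest cluster first."""
    ordered = sorted(clusters, key=len, reverse=True)
    return {tuple(sorted(c)): processors[i % len(processors)]
            for i, c in enumerate(ordered)}

# Toy task graph: edges weighted by communication volume.
comm = {("t0", "t1"): 10, ("t1", "t2"): 1, ("t2", "t3"): 8}
clusters = cluster_tasks(comm, threshold=5)   # t0+t1 merge, t2+t3 merge
mapping = map_clusters(clusters, ["p0", "p1"])
```

A real tool would replace both heuristics with the clustering and mapping algorithms cited in the paper, and would also weigh the processor graph's link distances when placing clusters.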

Keywords

Task Allocation, Parallel Application, Task Graph, Integrated Environment, Processor Graph

Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Dimitris Konstantinou (1)
  • Nectarios Koziris (1)
  1. Computing Systems Lab, Computer Science Division, Department of Electrical & Computer Engineering, National Technical University of Athens, Zografou Campus, Zografou, Greece