Scheduling Task Graphs for Execution in Dynamic SMP Clusters with Bounded Number of Resources

  • Lukasz Masko
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3911)


The paper presents an algorithm for scheduling parallel tasks in a parallel architecture based on multiple dynamic SMP clusters, in which processors can be switched between shared memory modules at runtime. Memory modules and processors are organized in computational System-on-Chip (SoC) modules of a fixed size, interconnected by a local communication network implemented in Network-on-Chip (NoC) technology. Processors located in the same SoC module can communicate using data transfers on the fly. A number of such SoC modules can be connected by a global interconnection network to form a larger infrastructure. The presented algorithm schedules initial macro dataflow program graphs for such an architecture with a given number of SoC modules, assuming a fixed module size. First, it distributes program graph nodes among processors. Then it transforms and schedules computations and communications to exploit processor switching and reads on the fly. Finally, it divides the whole set of processors into subsets of a given size, which are then mapped to separate SoC modules.
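The three phases outlined above can be illustrated with a minimal sketch. The heuristics used here (greedy earliest-start placement of graph nodes, and communication-affinity packing of processors into fixed-size modules) are illustrative assumptions, not the paper's exact algorithm; the middle phase (rescheduling for processor switching and reads on the fly) is omitted, as it depends on architectural details beyond this summary.

```python
from collections import Counter

# Hedged sketch of phases 1 and 3 of a scheduling pipeline like the one
# described above. Heuristics and names are assumptions for illustration.

def distribute(nodes, deps, n_procs):
    """Phase 1: greedily assign program graph nodes (in topological
    order) to the processor giving the earliest start time."""
    finish = {}                    # node -> finish time
    proc_free = [0] * n_procs      # processor -> time it becomes free
    placement = {}
    for node in nodes:
        ready = max((finish[d] for d in deps.get(node, [])), default=0)
        p = min(range(n_procs), key=lambda q: max(proc_free[q], ready))
        start = max(proc_free[p], ready)
        finish[node] = start + 1   # unit execution time for simplicity
        proc_free[p] = finish[node]
        placement[node] = p
    return placement

def partition(placement, deps, module_size):
    """Phase 3: group processors into fixed-size SoC modules, packing
    processors that communicate most heavily into the same module."""
    traffic = Counter()
    for node, preds in deps.items():
        for d in preds:
            a, b = placement[d], placement[node]
            if a != b:             # inter-processor edge -> traffic
                traffic[frozenset((a, b))] += 1
    remaining = set(placement.values())
    modules, current = [], []
    while remaining:
        if not current:
            current.append(remaining.pop())
        else:
            # pull the processor with most traffic to the current module
            best = max(remaining, key=lambda q: sum(
                traffic[frozenset((q, r))] for r in current))
            remaining.remove(best)
            current.append(best)
        if len(current) == module_size:
            modules.append(current)
            current = []
    if current:
        modules.append(current)
    return modules

# Example: a diamond-shaped macro dataflow graph a -> (b, c) -> d
deps = {"b": ["a"], "c": ["a"], "d": ["b", "c"]}
placement = distribute(["a", "b", "c", "d"], deps, n_procs=4)
modules = partition(placement, deps, module_size=2)
```

The partitioning step mirrors the paper's final phase: processor pairs with heavy mutual communication end up inside one SoC module, where data transfers on the fly are possible, while light traffic is relegated to the global interconnect.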


Keywords: Data Transfer, Shared Memory, Memory Module, Data Cache, Program Graph


References

  1. Hwang, J.-J., Chow, Y.-C., Anger, F.D., Lee, C.-Y.: Scheduling precedence graphs in systems with interprocessor communication times. SIAM Journal on Computing 18(2) (1989)
  2. Yang, T., Gerasoulis, A.: DSC: Scheduling parallel tasks on an unbounded number of processors. IEEE Transactions on Parallel and Distributed Systems 5(9) (1994)
  3. Tudruj, M., Masko, L.: Program execution control for communication on the fly in dynamic shared memory processor clusters. In: Proceedings of the 3rd International Conference on Parallel Computing in Electrical Engineering, PARELEC 2002, Warsaw, September 2002. IEEE Computer Society Press, Los Alamitos (2002)
  4. Tudruj, M., Masko, L.: Communication on the fly and program execution control in a system of dynamically configurable SMP clusters. In: 11th Euromicro Conference on Parallel, Distributed and Network-based Processing, Genova, Italy, February 2003, pp. 67–74. IEEE CS Press, Los Alamitos (2003)
  5. Rowen, C.: Engineering the Complex SOC: Fast, Flexible Design with Configurable Processors. Prentice Hall PTR, Englewood Cliffs (2004)
  6. Masko, L.: Atomic operations for task scheduling for systems based on communication on-the-fly between SMP clusters. In: 2nd International Symposium on Parallel and Distributed Computing, ISPDC 2003, Ljubljana, October 2003. IEEE Computer Society Press, Los Alamitos (2003)
  7. Masko, L.: Program graph scheduling for dynamic SMP clusters with communication on the fly. In: International Symposium on Parallel and Distributed Computing, ISPDC 2004, Cork, July 5–7, 2004. IEEE Computer Society Press, Los Alamitos (2004)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Lukasz Masko
  1. Institute of Computer Science of the Polish Academy of Sciences, Warsaw, Poland
