International Journal of Parallel Programming, Volume 34, Issue 5, pp. 459–487

Scalable and Structured Scheduling

Paul Feautrier

Scheduling a program (i.e., constructing a timetable for the execution of its operations) is one of the most powerful methods for automatic parallelization. A schedule gives a blueprint for constructing a synchronous program, suitable for an ASIC or a VLIW processor. However, constructing a schedule entails solving a large linear program. Even if one accepts the (experimental) fact that the Simplex algorithm is almost always polynomial, the scheduling time is of the order of a large power of the program size. Hence, the method does not scale well. The present paper proposes two methods for improving the situation. First, a large program can be divided into smaller units (processes), which can be scheduled separately. This is structured scheduling. Second, one can use projection methods for solving linear programs incrementally. This is especially efficient if the dependence graph is sparse.


Keywords: structured scheduling, automatic parallelization, scalability
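The "projection methods" mentioned in the abstract are, in this line of work, Fourier–Motzkin-style elimination: a variable is removed from a system of affine inequalities by combining its lower and upper bounds, so a scheduling linear program can be reduced one unknown at a time. The sketch below is purely illustrative (the function name `eliminate` and the dense-row encoding are ours, not the paper's):

```python
# Fourier-Motzkin elimination: project a polyhedron given by rows
# (coeffs, const), each meaning  sum(coeffs[j]*x[j]) + const >= 0,
# onto the remaining variables by eliminating variable k.
# Illustrative sketch of the projection idea, not the paper's algorithm.

from fractions import Fraction

def eliminate(constraints, k):
    """Return constraints with variable k projected out."""
    lower, upper, rest = [], [], []
    for (c, b) in constraints:
        if c[k] > 0:
            lower.append((c, b))      # row gives a lower bound on x_k
        elif c[k] < 0:
            upper.append((c, b))      # row gives an upper bound on x_k
        else:
            rest.append((c, b))       # x_k does not occur
    out = list(rest)
    for (cl, bl) in lower:
        for (cu, bu) in upper:
            # Scale the two rows so the coefficients of x_k cancel.
            a, d = Fraction(cl[k]), Fraction(-cu[k])
            c = [d * cl[j] + a * cu[j] for j in range(len(cl))]
            out.append((c, d * bl + a * bu))
    return out

# Example: {x >= 0, y >= 0, 3 - x - y >= 0}; eliminating y (index 1)
# leaves the projection onto x.
P = [([1, 0], 0), ([0, 1], 0), ([-1, -1], 3)]
proj = eliminate(P, 1)
```

On this example the projection is {x ≥ 0, 3 − x ≥ 0}, i.e. 0 ≤ x ≤ 3. The method is attractive exactly when few rows mention the eliminated variable, which is the sparse-dependence-graph situation the abstract points to; in the dense case, the quadratic blow-up of lower × upper combinations makes it impractical.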





Copyright information

© Springer Science+Business Media, Inc. 2006

Authors and Affiliations

LIP, Project Compsys, École Normale Supérieure de Lyon, INRIA, Lyon, France
