On the optimality of linear schedules

  • Weijia Shang
  • Jose A. B. Fortes


An algorithm can be modeled as a set of indexed computations, and a schedule is a mapping of the algorithm index space into time. Linear schedules are a special class of schedules that are described by a linear mapping and are commonly used in many systolic algorithms. Free schedules cause computations of an algorithm to execute as soon as their operands are available. If one computation uses data generated by another computation, then a data dependence exists between these two computations, which can be represented by the difference of their indices (called a dependence vector). Many important algorithms are characterized by the fact that their data dependences are uniform, i.e., the values of the dependence vectors are independent of the indices of the computations. In some applications it is of interest to find an optimal linear schedule with respect to the execution time of a specific computation of the given algorithm. This paper addresses the problem of identifying optimal linear schedules for uniform dependence algorithms, so that the execution time of a specific computation of the algorithm is minimized, and proposes a procedure to solve this problem based on the mathematical solution of a linear optimization problem. Linear schedules are also compared with free schedules. The comparison indicates that optimal linear schedules can be as efficient as free schedules, the best schedules possible, and identifies a class of algorithms for which this is always true.
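The notions in the abstract can be made concrete with a small sketch. The following Python snippet uses an illustrative 2-D uniform dependence algorithm (the index set, dependence vectors, and all function names are assumptions for illustration, not taken from the paper): it checks that a linear schedule π is causal (π·d ≥ 1 for every dependence vector d), computes the number of steps π uses over the index space, and compares it against the free schedule, which runs each computation as soon as its operands are ready.

```python
from functools import lru_cache
from itertools import product

# Illustrative 2-D uniform dependence algorithm (not from the paper):
# index set J = {0, ..., N-1}^2 with uniform dependence vectors D.
N = 6
D = ((1, 0), (0, 1))

def linear_schedule_valid(pi, deps):
    # A linear schedule pi is causal iff pi . d >= 1 for every dependence d,
    # so each computation runs strictly after the computations it depends on.
    return all(sum(p * di for p, di in zip(pi, d)) >= 1 for d in deps)

def linear_time(pi, point):
    # Step assigned to index point j by the linear schedule: t(j) = pi . j.
    return sum(p * x for p, x in zip(pi, point))

@lru_cache(maxsize=None)
def free_time(point):
    # Free schedule: execute as soon as all operands are available,
    # i.e., one step after the latest predecessor inside the index space.
    preds = [tuple(x - dx for x, dx in zip(point, d)) for d in D]
    preds = [p for p in preds if all(x >= 0 for x in p)]
    return 0 if not preds else 1 + max(free_time(p) for p in preds)

pi = (1, 1)  # a candidate linear schedule for this algorithm
assert linear_schedule_valid(pi, D)

points = list(product(range(N), repeat=2))
times = [linear_time(pi, j) for j in points]
t_linear = max(times) - min(times)          # steps used by the linear schedule
t_free = max(free_time(j) for j in points)  # steps used by the free schedule
print(t_linear, t_free)
```

For this example, π = (1, 1) uses 2(N−1) steps, which equals the free-schedule length, in line with the paper's observation that optimal linear schedules can be as efficient as free schedules for some classes of algorithms.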


Keywords: Execution Time, Parallel Processing, Linear Programming Problem, Specific Computation, Systolic Array





Copyright information

© Kluwer Academic Publishers 1989

Authors and Affiliations

  • Weijia Shang (1)
  • Jose A. B. Fortes (1)
  1. School of Electrical Engineering, Purdue University, West Lafayette
