Subspace scheduling and parallel implementation of non-systolic regular iterative algorithms

  • V. P. Roychowdhury
  • T. Kailath
Article

Abstract

The study of Regular Iterative Algorithms (RIAs) was introduced in a seminal paper by Karp, Miller, and Winograd in 1967. In more recent years, the study of systolic architectures has led to a renewed interest in this class of algorithms, and the class of algorithms implementable on systolic arrays (as commonly understood) has been identified as a precise subclass of RIAs. In this paper, we study the dependence structure of RIAs that are not systolic; examples of such RIAs include matrix pivoting algorithms and certain forms of numerically stable two-dimensional filtering algorithms. It has been shown that the hyperplanar scheduling used for systolic algorithms cannot in general be applied to schedule and implement non-systolic RIAs. Based on the analysis of a so-called computability tree, we generalize the concept of hyperplanar scheduling and determine linear subspaces in the index space of a given RIA such that all variables lying in the same subspace can be scheduled at the same time. This subspace scheduling technique is shown to be asymptotically optimal, and formal procedures are developed for designing processor arrays that are compatible with our scheduling schemes. Explicit formulas for the schedule of a given variable are determined whenever possible; subspace scheduling is also applied to obtain lower-dimensional processor arrays for systolic algorithms.
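
For readers less familiar with the hyperplanar (linear) scheduling that subspace scheduling generalizes, the sketch below illustrates the basic feasibility condition: a scheduling vector λ is admissible for an RIA with constant dependence vectors only if λ·d ≥ 1 for every dependence vector d, so that the variable at index point p can be computed at time λ·p. This is an illustrative toy under our own assumptions, not a procedure from the paper; the function name, the brute-force search, and the search bound are hypothetical, and the second example only gestures at why opposing dependence directions (loosely, the kind of structure that arises in non-systolic RIAs such as pivoting algorithms once broadcasts are localized) admit no single hyperplane schedule.

```python
# Illustrative sketch (not from the paper): brute-force search for a
# hyperplanar (linear) schedule of an RIA with constant dependence vectors,
# i.e. the systolic case that the paper's subspace scheduling generalizes.
from itertools import product

def find_hyperplane_schedule(dep_vectors, bound=3):
    """Return an integer scheduling vector lam with lam . d >= 1 for every
    dependence vector d, or None if none exists within the search bound."""
    dim = len(dep_vectors[0])
    for lam in product(range(-bound, bound + 1), repeat=dim):
        if all(sum(l * d for l, d in zip(lam, dep)) >= 1 for dep in dep_vectors):
            return lam
    return None

# Matrix multiplication viewed as a 3-D RIA: the variables propagate along
# the unit directions, so the dependence vectors are the unit vectors and
# (1, 1, 1) is a valid hyperplane schedule.
print(find_hyperplane_schedule([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))  # (1, 1, 1)

# Opposing dependence directions rule out any hyperplane schedule, since
# lam . d >= 1 and lam . (-d) >= 1 cannot hold simultaneously; scheduling
# such dependence structures is what subspace scheduling addresses.
print(find_hyperplane_schedule([(1, 0, 0), (-1, 0, 0)]))  # None
```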

Keywords

Leaf Node · Systolic Array · Index Point · Directed Cycle · Processor Array

References

  1. R.M. Karp, R.E. Miller, and S. Winograd. The organization of computations for uniform recurrence equations. JACM, 14:563–590, 1967.
  2. H.V. Jagadish, S.K. Rao, and T. Kailath. Multi-processor architectures for iterative algorithms. Proceedings of the IEEE, 75, No. 9:1304–1321, Sept. 1987.
  3. S.K. Rao. Regular Iterative Algorithms and their Implementation on Processor Arrays. PhD thesis, Stanford University, Stanford, California, 1985.
  4. S.K. Rao and T. Kailath. Regular iterative algorithms and their implementations on processor arrays. Proc. IEEE, 76, No. 2:259–282, March 1988.
  5. S.Y. Kung. VLSI Array Processors. Prentice Hall Series, 1987.
  6. S.K. Rao. Systolic Arrays and their Extensions. Prentice Hall Series, to appear, 1988.
  7. V.P. Roychowdhury, S.K. Rao, L. Thiele, and T. Kailath. On the localization of algorithms for VLSI processor arrays. 1988 Workshop on VLSI Signal Processing, pages 459–470, Nov. 1988.
  8. V.P. Roychowdhury, L. Thiele, S.K. Rao, and T. Kailath. On the localization of algorithms for VLSI processor arrays. Submitted to IEEE Trans. Computers, October 1988.
  9. H.T. Kung. Let's design algorithms for VLSI systems. In Proc. Caltech Conf. on VLSI, pages 65–90, Jan. 1979.
  10. H.T. Kung. Why systolic architectures? IEEE Computer Magazine, 15:37–46, Jan. 1982.
  11. H.T. Kung and C.E. Leiserson. Systolic arrays for VLSI. In Sparse Matrix Proceedings, pages 245–282. Society for Industrial and Applied Mathematics, Philadelphia, 1978.
  12. D.I. Moldovan. On the analysis and synthesis of VLSI algorithms. IEEE Trans. Computers, C-31:1121–1126, Nov. 1982.
  13. D.I. Moldovan. On the design of algorithms for VLSI systolic arrays. Proc. IEEE, pages 113–120, Jan. 1983.
  14. J.A.B. Fortes. Algorithm Transformations for Parallel Processing and VLSI Architectures. PhD thesis, University of Southern California, Los Angeles, Dec. 1983.
  15. P. Quinton. The systematic design of systolic arrays. Technical report, INRIA, Paris, 1983.
  16. P.R. Capello and K. Steiglitz. Unifying VLSI array design with linear transformations of space-time. Advances in Computing Research, 2:23–65, 1984.
  17. V.P. Roychowdhury and T. Kailath. Study of parallelism in regular iterative algorithms. Submitted to SIAM Journal on Computing, Dec. 1988.
  18. V.P. Roychowdhury. Derivation, Extensions and Parallel Implementation of Regular Iterative Algorithms. PhD thesis, Department of Electrical Engineering, Stanford University, Stanford, California, December 1988.
  19. W.M. Waite. Path detection in multi-dimensional iterative arrays. JACM, 14, 1967.
  20. C.H. Papadimitriou and K. Steiglitz. Combinatorial Optimization: Algorithms and Complexity. Prentice Hall, 1982.
  21. M. Behzad, G. Chartrand, and L. Lesniak-Foster. Graphs and Digraphs. Prindle, Weber and Schmidt International Series, 1979.
  22. V.P. Roychowdhury and T. Kailath. Regular processor arrays for matrix algorithms with pivoting. Submitted to CACM, Feb. 1988.
  23. V.P. Roychowdhury and T. Kailath. Regular processor arrays for matrix algorithms with pivoting. Int. Conference on Systolic Arrays, pages 237–246, May 1988.
  24. D.I. Moldovan and J.A.B. Fortes. Partitioning and mapping of algorithms into fixed size systolic arrays. IEEE Trans. Computers, No. 1:1–12, January 1986.

Copyright information

© Kluwer Academic Publishers 1989

Authors and Affiliations

  • V. P. Roychowdhury (1)
  • T. Kailath (1)
  1. Information Systems Laboratory, Stanford University, Stanford, California, USA
