Relocalization of Data Dependences in Partitioned Affine Indexed Algorithms

  • Uwe Eckhardt
  • René Schüffny
  • Renate Merker


Localization is an essential step in array synthesis: it transforms an algorithm so that communication occurs only between neighboring processor modules. Its costs are an increase in allocated memory and in the total amount of communication within the architecture. In general, localization is applied to the whole algorithm before, or in mutual dependence with, scheduling and allocation. If resource constraints are taken into account, scheduling and allocation are modified by partitioning methods. Once partitioning has been applied, localization should be revised, since partitioning renders it partially unnecessary: synthesis from completely localized code leads to very inefficient utilization of the memory system, to high implementation and communication costs, and hence to high power consumption. Relocalization is an approach that restricts localization to those parts of a partitioned algorithm where it is necessary.
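As a hypothetical illustration of the localization step described above (this example is not taken from the paper), consider matrix multiplication C = A·B in affine-indexed form: the read of a[i][k] is shared by every j (a broadcast), and b[k][j] by every i. Localization replaces each broadcast by propagation variables that carry the operand from neighbor to neighbor, so every read refers only to the immediately preceding index point, at the cost of extra memory. A minimal sketch, with the propagation arrays `ap` and `bp` introduced purely for illustration:

```python
# Hypothetical sketch of localization for C = A * B (not the paper's notation):
# broadcasts of a along j and of b along i are replaced by uniform,
# nearest-neighbour propagation variables ap and bp.

N = 3
a = [[i + k for k in range(N)] for i in range(N)]
b = [[k * j + 1 for j in range(N)] for k in range(N)]

# Extra memory allocated by localization: one propagation value per
# index point (i, j, k) instead of one shared operand per (i, k) / (k, j).
ap = [[[0] * N for _ in range(N)] for _ in range(N)]  # ap[i][j][k]
bp = [[[0] * N for _ in range(N)] for _ in range(N)]  # bp[i][j][k]
c = [[0] * N for _ in range(N)]

for i in range(N):
    for j in range(N):
        for k in range(N):
            # localized (uniform) dependences: read only the neighbor
            ap[i][j][k] = a[i][k] if j == 0 else ap[i][j - 1][k]
            bp[i][j][k] = b[k][j] if i == 0 else bp[i - 1][j][k]
            c[i][j] += ap[i][j][k] * bp[i][j][k]

# Reference: the original, non-localized form with global reads.
ref = [[sum(a[i][k] * b[k][j] for k in range(N)) for j in range(N)]
       for i in range(N)]
assert c == ref
```

Relocalization, in these terms, would keep the broadcasts inside a partition (where they are served cheaply from shared memory) and introduce propagation variables only across partition boundaries.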


Keywords: Memory Location · Data Dependence · Systolic Array · System Design Automation · Array Architecture





Copyright information

© Springer Science+Business Media Dordrecht 2001

Authors and Affiliations

  • Uwe Eckhardt (1)
  • René Schüffny (1)
  • Renate Merker (1)
  1. Institute of Circuits and Systems, TU Dresden, Germany
