Synthesis and equivalence of concurrent systems

  • Björn Lisper
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 226)


A framework for the synthesis of synchronous concurrent systems with local memory is developed. Given an output specification of the system, a cell action structure can be derived. This structure can be mapped into a communication structure, a model of the events in the target hardware together with constraints on the communication possible between events, which yields a schedule for the cell actions. Communication structures are also interesting in their own right: transformations defined on them can be used to show equivalence between different computational networks. As an example, the equivalence between two specific communication structures is proved, and it is shown that an FFT algorithm can be implemented on both.
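To make the space-time view of cell actions concrete, the following is a minimal illustrative sketch, not the paper's actual construction: cell actions are treated as points of an index space with data dependences, and a linear schedule assigns each action a time step and a cell, subject to the strict-partial-order condition that every action runs after the actions it depends on. The problem (a small matrix-multiplication accumulation), the timing vector, and all names are assumptions chosen for illustration.

```python
# Illustrative sketch only: cell actions as index points with dependences,
# scheduled by a linear timing function and a projection onto cells.
# The dependences below (an NxNxN accumulation) are assumed for illustration,
# not taken from the paper.

from itertools import product

N = 3  # assumed problem size

# Cell actions: one multiply-accumulate per index point (i, j, k).
actions = list(product(range(N), repeat=3))

# Data dependences of the accumulation: (i, j, k) depends on (i, j, k-1).
def predecessors(p):
    i, j, k = p
    return [(i, j, k - 1)] if k > 0 else []

# A linear schedule: time step and cell for each action.
def time_of(p):
    i, j, k = p
    return i + j + k            # timing vector (1, 1, 1), assumed

def cell_of(p):
    i, j, k = p
    return (i, j)               # project along k onto a 2-D cell array

# Causality check: every dependence must point strictly forward in time
# (the strict-partial-order condition on the schedule).
for p in actions:
    for q in predecessors(p):
        assert time_of(q) < time_of(p), f"dependence {q} -> {p} violated"

# Group actions by time step: this gives a schedule for the cell actions.
schedule = {}
for p in actions:
    schedule.setdefault(time_of(p), []).append((cell_of(p), p))

for t in sorted(schedule):
    print(f"t={t}: " + ", ".join(f"cell{c} runs {p}" for c, p in schedule[t]))
```

Any timing function that assigns strictly increasing times along every dependence would pass the same check; different choices correspond to different ways of laying the same computation out in space and time, i.e. to different target structures for the same cell actions.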


Keywords: Communication Structure, Output Specification, Systolic Array, Fast Fourier Transform Algorithm, Strict Partial Order





Copyright information

© Springer-Verlag Berlin Heidelberg 1986

Authors and Affiliations

  • Björn Lisper
    NADA, Royal Institute of Technology, Stockholm, Sweden
