Architectures for Statically Scheduled Dataflow

  • Edward Ashford Lee
  • Jeffrey C. Bier
Part of The Springer International Series in Engineering and Computer Science book series (SECS, volume 149)


When dataflow program graphs can be statically scheduled, little run-time overhead (software or hardware) is necessary. This paper describes a class of parallel architectures consisting of von Neumann processors and one or more shared memories, where the order of shared-memory accesses is determined at compile time and enforced at run time. The architecture is extremely lean in hardware, yet for a set of important applications it can perform as well as any shared-memory architecture. Dataflow graphs can be mapped onto it statically. Furthermore, it supports shared data structures without the run-time overhead of I-structures. A software environment has been constructed that automatically maps signal processing applications onto a simulation of such an architecture, where the architecture is implemented using Motorola DSP96002 microcomputers.
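The central mechanism described above can be illustrated with a small sketch: a compile-time-determined global sequence of shared-memory accesses, enforced at run time by making each processor wait for its turn. The class, names, and turn-taking mechanism below are hypothetical illustrations, not the paper's DSP96002 hardware design, which orders accesses without this software overhead.

```python
import threading

class OrderedSharedMemory:
    """Enforce a compile-time access order at run time: each access
    blocks until it is that processor's slot in the precomputed
    global sequence (a software sketch of the idea only)."""

    def __init__(self, order):
        self.order = order          # e.g. [0, 1, 0, ...]: processor ids, in schedule order
        self.pos = 0                # next slot in the schedule
        self.mem = {}               # the shared memory itself
        self.cv = threading.Condition()

    def access(self, proc, fn):
        """Perform fn(mem) when the schedule says it is proc's turn."""
        with self.cv:
            self.cv.wait_for(lambda: self.order[self.pos] == proc)
            result = fn(self.mem)   # the actual read or write
            self.pos += 1
            self.cv.notify_all()
            return result

# Compile-time schedule: processor 0 writes, then processor 1 reads.
shm = OrderedSharedMemory([0, 1])
out = []
t1 = threading.Thread(target=lambda: out.append(shm.access(1, lambda m: m["x"])))
t1.start()                                          # proc 1 blocks: not its turn yet
shm.access(0, lambda m: m.__setitem__("x", 42))     # proc 0 writes first
t1.join()
print(out)  # [42] -- the read is guaranteed to see the write
```

Because the access order is fixed before execution, no semaphores or tag matching are needed per data item; the single global sequence is the only synchronization.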

Static (compile-time) scheduling is possible for a subclass of dataflow program graphs where the firing pattern of actors is data-independent. This model is suitable for digital signal processing and some other scientific computation. It supports recurrences, manifest iteration, and conditional assignment. However, it does not support true recursion, data-dependent iteration, or conditional evaluation. An effort is under way to weaken the constraints of the model and to determine the implications for hardware design.
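For the data-independent subclass described above (synchronous dataflow), a static schedule starts from the balance equations: for each arc, the tokens produced per schedule period must equal the tokens consumed. A minimal sketch of solving for the smallest integer firing counts, assuming a connected and consistent graph (the function name and edge encoding are illustrative):

```python
from fractions import Fraction
from functools import reduce
from math import lcm

def repetitions(actors, edges):
    """Solve the balance equations prod * q[src] == cons * q[dst]
    for the smallest positive integer firing counts q.
    edges: list of (src, dst, tokens_produced, tokens_consumed)."""
    q = {actors[0]: Fraction(1)}    # fix one actor; propagate the rest
    pending = list(edges)
    while pending:
        unresolved = []
        for src, dst, prod, cons in pending:
            if src in q and dst not in q:
                q[dst] = q[src] * prod / cons
            elif dst in q and src not in q:
                q[src] = q[dst] * cons / prod
            elif src not in q and dst not in q:
                unresolved.append((src, dst, prod, cons))
        if len(unresolved) == len(pending):
            raise ValueError("graph is not connected")
        pending = unresolved
    # Scale the rational solution up to the least integer solution.
    scale = reduce(lcm, (f.denominator for f in q.values()), 1)
    return {a: int(f * scale) for a, f in q.items()}

# Toy graph: A produces 2 tokens per firing, B consumes 3 per firing.
print(repetitions(["A", "B"], [("A", "B", 2, 3)]))  # {'A': 3, 'B': 2}
```

Firing A three times and B twice moves exactly 6 tokens across the arc, so buffer sizes stay bounded and the whole period can be scheduled at compile time; data-dependent iteration breaks this because the rates are no longer known constants.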


Keywords: Shared Memory · Finite Impulse Response Filter · Static Schedule · Data Flow Graph · FIFO Queue




Copyright information

© Springer Science+Business Media New York 1991

Authors and Affiliations

  • Edward Ashford Lee (1)
  • Jeffrey C. Bier (1)

  1. U. C. Berkeley, Berkeley
