ForkLight: A control-synchronous parallel programming language

  • Christoph W. Keßler
  • Helmut Seidl
Track C3: Computational Science
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1593)


ForkLight is an imperative, task-parallel programming language for massively parallel shared memory machines. It is based on ANSI C, follows the SPMD model of parallel program execution, provides a sequentially consistent shared memory, and supports dynamically nested parallelism. While no assumptions are made on uniformity of memory access time or instruction-level synchronicity of the underlying hardware, ForkLight offers a simple but powerful mechanism for coordination of parallel processes in the tradition and notation of PRAM algorithms: Beyond its asynchronous default execution mode, ForkLight offers a mode for control-synchronous execution that relates the program's block structure to parallel control flow.
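To make the group concept concrete, here is a schematic sketch in the notation of the predecessor language Fork95 (reference 15 below); the keyword spellings (`fork`, `farm`) and the group/rank variables (`@`, `$`) follow Fork95 and are meant only to illustrate how block structure and parallel control flow relate, not as verified ForkLight syntax:

```
void example( int n )
{
    fork ( 2;             /* split the current group into 2 subgroups */
           @ = $ % 2; )   /* subgroup index @ chosen by processor rank $ */
    {
        /* each subgroup executes this block control-synchronously;  */
        /* the subgroups rejoin at the implicit barrier at block end */
        process_part( @, n );
    }
    farm {
        /* asynchronous region: no synchronization inside */
        local_work();
    }
}
```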

We give a scheme for compiling ForkLight to C with calls to a very small set of basic shared memory access operations such as atomic fetch&add. This yields portability across parallel architectures and exploits the local optimizations of their native C compilers. Our implementation is publicly available; performance results are reported. We also discuss translation to OpenMP.







Copyright information

© Springer-Verlag 1999

Authors and Affiliations

  • Christoph W. Keßler (FB IV-Informatik, Universität Trier, Trier, Germany)
  • Helmut Seidl (FB IV-Informatik, Universität Trier, Trier, Germany)
