Expressing fine-grained parallelism using concurrent data structures

  • Suresh Jagannathan
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 574)


A major criticism of concurrent data structures has been that efficient implementations are difficult to construct. To some extent, these criticisms have been valid, particularly in the context of fine-grained concurrency. Previous implementations of tuple-space languages, for example, have by and large ignored issues of runtime scheduling and storage management, and have not fully addressed the implications of using semantics-based compile-time analysis to build optimal representations of tuple-space structures.
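To make the tuple-space model concrete, the following is a minimal, hypothetical sketch of the classic Linda-style operations (`out` to deposit a tuple, `in` to withdraw a matching one); the names, the `None`-as-wildcard matching rule, and the locking scheme are illustrative assumptions, not the representation the paper derives:

```python
import threading

class TupleSpace:
    """Minimal shared tuple-space sketch (Linda-style out/in).

    Illustrative only: a real implementation would specialize the
    representation per tuple shape, which is the kind of optimization
    the type analysis discussed above makes possible.
    """

    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):
        """Deposit a tuple into the space and wake any blocked readers."""
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def _match(self, pattern):
        # A field of None acts as a wildcard; other fields must match exactly.
        for tup in self._tuples:
            if len(tup) == len(pattern) and all(
                p is None or p == t for p, t in zip(pattern, tup)
            ):
                return tup
        return None

    def inp(self, pattern):
        """Withdraw a matching tuple, blocking until one is available."""
        with self._cond:
            while (tup := self._match(pattern)) is None:
                self._cond.wait()
            self._tuples.remove(tup)
            return tup
```

A generic structure like this performs linear-scan matching under a single lock; knowing the type structure of the tuples stored in a given space is what lets a compiler replace it with a cheaper specialized representation (e.g. a queue or a hash table keyed on the constant fields).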

This paper has focused on both concerns. Type analysis is used to infer structural properties of concurrent data structures. Generating efficient representations for tuple-spaces becomes tractable once their type structure is derived. A runtime kernel that permits deferred evaluation of thread objects makes it possible for programs to generate many fine-grained processes; resources for these processes are allocated only when the values they produce are required. Thus, the creation of active threads is dictated wholly by a program's runtime dynamics.
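The deferred-evaluation idea can be sketched as a thread object that is cheap to create (just a closure) and is bound to a real execution resource only when its value is first demanded; the class and method names below are hypothetical illustrations, not the paper's kernel interface:

```python
import threading

class LazyThread:
    """Deferred thread object: created cheaply, scheduled only on demand.

    Hypothetical sketch. Creating a LazyThread allocates no execution
    resource; an actual thread is started only on the first touch().
    """

    def __init__(self, thunk):
        self._thunk = thunk          # suspended computation
        self._result = None
        self._started = False
        self._done = threading.Event()

    def _run(self):
        self._result = self._thunk()
        self._done.set()

    def touch(self):
        """Demand the value: allocate a real thread on first touch, then wait."""
        if not self._started:
            self._started = True
            threading.Thread(target=self._run).start()
        self._done.wait()
        return self._result
```

Under this scheme a program may generate many fine-grained LazyThread objects, but operating-system threads exist only for those whose values are actually touched, so the population of active threads tracks the program's runtime dynamics rather than its static process structure.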

We expect these techniques to make concurrent data structures a natural linguistic device with which to exploit fine-grained parallelism efficiently in a variety of applications. Implementation of these ideas is currently underway.




Copyright information

© Springer-Verlag Berlin Heidelberg 1992

Authors and Affiliations

  • Suresh Jagannathan, NEC Research Institute, Princeton
