
Lazy thread and task creation in parallel graph-reduction

  • Manuel M. T. Chakravarty
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1467)

Abstract

Lightweight threads can be used to cover the latencies that arise in distributed-memory implementations of declarative programming languages. To keep the cost of thread management low, this paper proposes two techniques: first, distinguishing locally scheduled threads from globally distributed tasks; and second, creating both threads and tasks lazily. The paper focuses on integrating these techniques into compiled graph reduction, an aspect neglected by previous work; in particular, previous approaches prohibit both tail-call optimization and the push-enter model of function evaluation. A hedged sketch of the lazy-creation idea follows the abstract.
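
The paper's own mechanism targets compiled graph reduction (the STG machine); as a rough, single-processor illustration of the general idea of lazy task creation (in the spirit of Mohr, Kranz, and Halstead [16], not the paper's exact scheme), the following self-contained C sketch pushes a cheap task frame on a "spawn" and promotes a frame to a real task only when a simulated work request arrives. All names (`spawn_lazy`, `serve_work_request`, `sync_spawn`, `task_frame`) are invented for this sketch, and the remote steal is simulated sequentially.

```c
/* A minimal, single-processor simulation of lazy task creation.
 * A "spawn" merely pushes a cheap task frame onto a local deque;
 * a frame is promoted to a real task only when a (simulated) work
 * request arrives.  All names are invented for this sketch. */
#include <assert.h>
#include <stdio.h>

typedef long (*thunk_fn)(long);   /* a suspended computation */

typedef struct {
    thunk_fn code;                /* what to run */
    long     arg;
    long     result;              /* filled in if the frame is stolen */
    int      stolen;              /* promoted to a task and executed? */
} task_frame;

#define DEQUE_CAP 128
static task_frame deque[DEQUE_CAP];
static int top = 0;               /* pending frames live in deque[0..top) */

/* "spawning" costs only a frame push -- no thread, no task yet */
static void spawn_lazy(thunk_fn f, long a) {
    assert(top < DEQUE_CAP);
    deque[top++] = (task_frame){ f, a, 0, 0 };
}

/* a work request promotes the OLDEST pending frame into a task;
 * only now do we pay the full cost of task creation */
static void serve_work_request(void) {
    for (int i = 0; i < top; i++) {
        if (!deque[i].stolen) {
            deque[i].stolen = 1;                        /* promote */
            deque[i].result = deque[i].code(deque[i].arg);
            return;
        }
    }
}

/* join with the most recent spawn: if nobody stole the frame, run it
 * inline (the common, cheap case); otherwise take the result that the
 * "remote" task left behind */
static long sync_spawn(void) {
    task_frame fr = deque[--top];
    return fr.stolen ? fr.result : fr.code(fr.arg);
}

static long fib(long n) {
    if (n < 2) return n;
    spawn_lazy(fib, n - 1);       /* potential parallelism, almost free */
    if (n % 3 == 0)               /* pretend a work request arrives here */
        serve_work_request();
    long b = fib(n - 2);
    return sync_spawn() + b;
}

int main(void) {
    printf("fib(10) = %ld\n", fib(10));   /* prints fib(10) = 55 */
    return 0;
}
```

The design point the sketch tries to convey: in the common case a spawn costs one frame push and one inline call, and the expensive task machinery is paid for only on an actual work request, which takes the oldest (largest-grain) frame first.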

Keywords

Graph Reduction · Case Expression · Return Continuation · Task Frame · Work Request

References

[1] Guy E. Blelloch, Phillip B. Gibbons, and Yossi Matias. Provably efficient scheduling for languages with fine-grained parallelism. In Proceedings of the Symposium on Parallel Algorithms and Architectures, pages 1–12, 1995.
[2] Robert D. Blumofe, Christopher F. Joerg, Bradley C. Kuszmaul, Charles E. Leiserson, Keith H. Randall, and Yuli Zhou. Cilk: An efficient multithreaded runtime system. In Proceedings of the 5th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, pages 207–216, 1995.
[3] Manuel M. T. Chakravarty. Integrating multi-threading into the Spineless Tagless G-machine. In David N. Turner, editor, Functional Programming, Glasgow 1995. Springer-Verlag, London, 1996.
[4] Manuel M. T. Chakravarty. On the Massively Parallel Execution of Declarative Programs. PhD thesis, Technische Universität Berlin, Fachbereich Informatik, 1997.
[5] Manuel M. T. Chakravarty, Yike Guo, Martin Köhler, and Hendrik C. R. Lock. Goffin: Higher-order functions meet concurrent constraints. Science of Computer Programming, 30(1–2):157–199, 1998.
[6] David E. Culler, Seth Copen Goldstein, Klaus Erik Schauser, and Thorsten von Eicken. TAM—a compiler controlled threaded abstract machine. Journal of Parallel and Distributed Computing, 18:347–370, 1993.
[7] Cormac Flanagan and Rishiyur S. Nikhil. pHluid: The design of a parallel functional language implementation on workstations. In Proceedings of the ACM SIGPLAN International Conference on Functional Programming. ACM Press, 1996.
[8] Ian Foster, Carl Kesselman, and Steven Tuecke. The Nexus approach to integrating multithreading and communication. Journal of Parallel and Distributed Computing, 37(1):70–82, 1996.
[9] Guang R. Gao, Lubomir Bic, and Jean-Luc Gaudiot, editors. Advanced Topics in Dataflow Computing and Multithreading. IEEE Computer Society Press, 1995.
[10] Seth Copen Goldstein, Klaus Erik Schauser, and David E. Culler. Enabling primitives for compiling parallel languages. In Boleslaw K. Szymanski and Balaram Sinharoy, editors, Languages, Compilers and Run-Time Systems for Scalable Computers, chapter 12, pages 153–168. Kluwer Academic Publishers, 1996.
[11] Seth Copen Goldstein, Klaus Erik Schauser, and David E. Culler. Lazy threads: Implementing a fast parallel call. Journal of Parallel and Distributed Computing, 1997. Submitted; revised version of [10].
[12] Robert H. Halstead. Multilisp: A language for concurrent symbolic computation. ACM Transactions on Programming Languages and Systems, 7(4):501–538, 1985.
[13] Herbert H. J. Hum et al. A design study of the EARTH multiprocessor. In Lubomir Bic, Wim Böhm, Paraskevas Evripidou, and Jean-Luc Gaudiot, editors, Proceedings of the IFIP WG 10.3 Working Conference on Parallel Architectures and Compilation Techniques (PACT '95), pages 59–68. ACM Press, 1995.
[14] David A. Kranz, Robert H. Halstead, and Eric Mohr. Mul-T: A high-performance parallel Lisp. In ACM SIGPLAN '89 Conference on Programming Language Design and Implementation, pages 81–90. ACM Press, 1989.
[15] A. Krishnamurthy, D. E. Culler, A. Dusseau, S. C. Goldstein, S. Lumetta, T. von Eicken, and K. Yelick. Parallel programming in Split-C. In Bob Borchers, editor, Proceedings of the Supercomputing '93 Conference, pages 262–273. IEEE Computer Society Press, November 1993.
[16] Eric Mohr, David A. Kranz, and Robert H. Halstead, Jr. Lazy task creation: A technique for increasing the granularity of parallel programs. IEEE Transactions on Parallel and Distributed Systems, 2(3):264–280, July 1991.
[17] Simon L. Peyton Jones. Implementing lazy functional languages on stock hardware: the Spineless Tagless G-machine. Journal of Functional Programming, 2(2), 1992.
[18] P. W. Trinder, K. Hammond, H.-W. Loidl, and S. L. Peyton Jones. Algorithm + strategy = parallelism. Journal of Functional Programming, 1998. To appear.
[19] P. W. Trinder, K. Hammond, J. S. Mattson Jr., A. S. Partridge, and S. L. Peyton Jones. GUM: A portable parallel implementation of Haskell. In Proceedings of Programming Language Design and Implementation, 1996.

Copyright information

© Springer-Verlag Berlin Heidelberg 1998

Authors and Affiliations

  • Manuel M. T. Chakravarty
    Institute of Information Sciences and Electronics, University of Tsukuba, Japan
