A foundation for advanced compile-time analysis of Linda programs

  • N. Carriero
  • D. Gelernter
Analysis of Explicitly Parallel Programs
Part of the Lecture Notes in Computer Science book series (LNCS, volume 589)

Abstract

Efficient implementations of Linda must address two potentially expensive properties of tuple space: associative access and uncoupled communications. Current compile-time analyses have significantly reduced the cost of associative access to tuple space. We propose a set of new analyses that help tackle uncoupling, as well as establish a more general framework for optimizations. We relate these analyses to new optimization strategies and give example applications of the latter.
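
For concreteness, the two properties can be illustrated with a small executable sketch. The Python model below is illustrative only (Linda itself embeds the out/in/rd operations in a host language such as C; this TupleSpace class is an assumption of the sketch, not the paper's implementation): out deposits a tuple without naming a receiver, and a pattern-directed withdrawal matches tuples associatively by field values.

    # Minimal, illustrative model of a tuple space (not Linda's real runtime).
    class TupleSpace:
        def __init__(self):
            self.tuples = []

        def out(self, *tup):
            # Deposit a tuple; the producer continues without waiting for,
            # or even knowing, a consumer (uncoupled communication).
            self.tuples.append(tup)

        def inp(self, *pattern):
            # Non-blocking withdrawal: remove and return the first tuple
            # that matches the pattern associatively. None stands in for a
            # formal (wildcard) field; actual values must match exactly.
            for i, tup in enumerate(self.tuples):
                if len(tup) == len(pattern) and all(
                    p is None or p == f for p, f in zip(pattern, tup)
                ):
                    return self.tuples.pop(i)
            return None

    ts = TupleSpace()
    ts.out("count", 5)            # the producer never names a consumer
    print(ts.inp("count", None))  # associative match -> ('count', 5)

Both costs are visible here: the linear scan in inp is the price of associative access that existing compile-time analyses reduce, while the fact that out returns immediately, with producer and consumer never naming each other or needing to overlap in time, is the uncoupling that the analyses proposed in this paper target.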



Copyright information

© Springer-Verlag Berlin Heidelberg 1992

Authors and Affiliations

  • N. Carriero¹
  • D. Gelernter¹

  1. Yale University, USA
