LISP and Symbolic Computation, Volume 2, Issue 3–4, pp 179–396

The interprocedural analysis and automatic parallelization of Scheme programs

  • Williams Ludwell Harrison III


Lisp and its descendants are among the most important and widely used of programming languages. At the same time, parallelism in the architecture of computer systems is becoming commonplace. There is a pressing need to extend the technology of automatic parallelization that has become available to Fortran programmers of parallel machines to the realm of Lisp programs and symbolic computing. In this paper we present a comprehensive approach to the compilation of Scheme programs for shared-memory multiprocessors. Our strategy has two principal components: interprocedural analysis and program restructuring. We introduce procedure strings and stack configurations as a framework in which to reason about interprocedural side-effects and object lifetimes, and develop a system of interprocedural analysis, using abstract interpretation, that is used in the dependence analysis and memory management of Scheme programs. We introduce the transformations of exit-loop translation and recursion splitting to treat the control structures of iteration and recursion that arise commonly in Scheme programs. We propose an alternative representation for s-expressions that facilitates the parallel creation and access of lists. We have implemented these ideas in a parallelizing Scheme compiler and run-time system, and we complement the theory of our work with "snapshots" of programs during the restructuring process, and some preliminary performance results of the execution of object codes produced by the compiler.
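As a rough illustration of the idea behind recursion splitting (a loose sketch in Python with invented names, not the paper's Scheme implementation), a linear recursion can be separated into a "downward" loop that unwinds the pending calls and an "upward" loop that replays the deferred combining steps; when the combining operator is associative, the second loop becomes a candidate for parallel reduction or parallel prefix computation.

```python
def total_recursive(xs):
    # Linear recursion: the combining work (+) happens on the way back up,
    # after the recursive call returns.
    if not xs:
        return 0
    return xs[0] + total_recursive(xs[1:])

def total_split(xs):
    # Downward pass: unwind the recursion, recording one "frame"
    # (here, just the pending operand) per would-be call.
    frames = []
    rest = xs
    while rest:
        frames.append(rest[0])
        rest = rest[1:]
    # Base case of the original recursion.
    acc = 0
    # Upward pass: replay the deferred combining steps in reverse order.
    # Because + is associative, this loop is the part a parallelizer can
    # turn into a parallel reduction or prefix computation.
    for v in reversed(frames):
        acc = v + acc
    return acc
```

The two functions compute the same result; the point of the split form is that each loop, unlike the original recursion, exposes its iteration space to dependence analysis.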


Keywords: Interprocedural Analysis, Abstract Interpretation, Automatic Parallelization, Program Transformation, Parallel Processing, Symbolic Computation, Lisp, Scheme




References

  1. Butterfly Parallel Processor Overview. BBN Laboratories Inc., Cambridge, Massachusetts (1985).
  2. FX/Series Architecture Manual. Alliant Computer Systems Corporation, Acton, Massachusetts (January 1986).
  3. Harrison III, Williams Ludwell. Compiling Lisp for Evaluation on a Tightly Coupled Multiprocessor. Technical Report 565, Center for Supercomputing Research and Development, University of Illinois at Urbana-Champaign (March 1986).
  4. Harrison III, Williams Ludwell and Padua, David A. Parcel: project for the automatic restructuring and concurrent evaluation of Lisp. In Proceedings of the 1988 International Conference on Supercomputing, Association for Computing Machinery (July 1988).
  5. Steele Jr., Guy L. RABBIT: a Compiler for Scheme. Technical Report AI Memo 474, Massachusetts Institute of Technology (May 1978).
  6. Steele Jr., Guy L. Common Lisp: the Language. Digital Press (1984).
  7. Steele Jr., Guy L. and Hillis, W. D. Connection Machine Lisp: fine-grained parallel symbolic processing. In Proceedings of the 1986 Conference on Lisp and Functional Programming (August 1986) 279–297.
  8. Steele Jr., Guy L. and Sussman, Gerald Jay. The Revised Report on Scheme. Technical Report AI Memo 452, Massachusetts Institute of Technology (January 1978).
  9. Abelson, Harold and Sussman, Gerald J. Structure and Interpretation of Computer Programs. The MIT Electrical Engineering and Computer Science Series, MIT Press, Cambridge, Massachusetts (1985).
  10. Aho, Alfred V. and Ullman, Jeffrey D. Principles of Compiler Design. Addison-Wesley Publishing Company, Reading, Massachusetts (1979).
  11. Allison, Lloyd. A Practical Introduction to Denotational Semantics. Cambridge Computer Science Texts 23, Cambridge University Press, Cambridge (1986).
  12. Banerjee, Utpal D. Data Dependence in Ordinary Programs. Master's thesis, University of Illinois at Urbana-Champaign (November 1976).
  13. Banerjee, Utpal D. Speedup of Ordinary Programs. PhD thesis, University of Illinois at Urbana-Champaign (October 1979).
  14. Burke, Michael. An Interval-Based Approach to Exhaustive and Incremental Interprocedural Data Flow Analysis. Technical Report RC 12702 (#58665), IBM T.J. Watson Research Center (September 1987).
  15. Burn, G. L. Abstract Interpretation and the Parallel Evaluation of Functional Languages. PhD thesis, Imperial College, University of London (March 1987).
  16. Church, Alonzo. The Calculi of Lambda-Conversion. Princeton University Press, Princeton, New Jersey (1941).
  17. Cousot, P. and Cousot, R. Abstract interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints. In Conference Record of the Fourth ACM Symposium on Principles of Programming Languages (January 1977) 238–252.
  18. Cousot, P. and Cousot, R. Systematic design of program analysis frameworks. In Conference Record of the Sixth ACM Symposium on Principles of Programming Languages (January 1979) 269–282.
  19. Cytron, Ronald G. and Ferrante, Jeanne. What's in a name? or the value of renaming for parallelism detection and storage management. In Proceedings of the 1987 International Conference on Parallel Processing (August 1987) 19–27.
  20. Darlington, John and Burstall, Richard M. A system which automatically improves programs. Acta Informatica, 6, 41 (1976).
  21. Gabriel, Richard P. Performance and Evaluation of Lisp Systems. MIT Press, Cambridge, Massachusetts (1985).
  22. Gabriel, Richard P. and McCarthy, John. Queue-based multiprocessing Lisp. In Proceedings of the 1984 Conference on Lisp and Functional Programming (January 1984) 25–44.
  23. Gifford, D. K., Jouvelot, P., Lucassen, J. M., and Sheldon, M. A. FX-87 Reference Manual. Technical Report MIT/LCS/TR-407, Massachusetts Institute of Technology (January 1987).
  24. Halstead, Robert H. Multilisp: a language for concurrent symbolic computation. ACM Transactions on Programming Languages and Systems, 7, 4 (October 1985) 501–538.
  25. Hansen, W. J. Compact list representation: definition, garbage collection and system implementation. Communications of the ACM, 12, 9 (September 1969).
  26. Hecht, M. S. Flow Analysis of Computer Programs. Elsevier North-Holland (1977).
  27. Hillis, W. Daniel. The Connection Machine. MIT Press, Cambridge, Massachusetts (1985).
  28. Hudak, Paul and Young, Jonathan. A collecting interpretation of expressions (without powerdomains). In Conference Record of the Fifteenth ACM Symposium on Principles of Programming Languages (January 1988).
  29. Kranz, D., Kelsey, R., Rees, J., Hudak, P., Philbin, J., and Adams, N. Orbit: an optimizing compiler for Scheme. In Proceedings of the SIGPLAN 1986 Symposium on Compiler Construction (July 1986) 162–175.
  30. Kuck, David J., Davidson, Edward S., Lawrie, Duncan H., and Sameh, Ahmed H. Supercomputing today and the Cedar approach. Science, 231 (February 1986) 967–974.
  31. Kuck, David J., Kuhn, Robert H., Leasure, Bruce, and Wolfe, Michael J. The structure of an advanced vectorizer for pipelined processors. In Fourth International Computer Software and Applications Conference (October 1980).
  32. Ladner, R. E. and Fischer, M. J. Parallel prefix computation. Journal of the ACM (October 1980) 831–838.
  33. Larus, J. and Hilfinger, P. N. Restructuring Lisp programs for concurrent execution (summary). In Conference Record of the ACM SIGPLAN Symposium on Parallel Programming (1988).
  34. Marti, J. and Fitch, J. The Bath concurrent Lisp machine. In EUROCAM '83 (Lecture Notes in Computer Science), Springer-Verlag (1983).
  35. McGehearty, P. F. and Krall, E. J. Potentials for parallel execution of Common Lisp programs. In Proceedings of the 1986 International Conference on Parallel Processing (1986) 696–702.
  36. Midkiff, Samuel P. and Padua, David A. Compiler algorithms for synchronization. IEEE Transactions on Computers, C-36, 12 (December 1987) 1485–1495.
  37. Midkiff, Samuel P. and Padua, David A. The Further Concurrentization of Parallel Programs. Technical Report, Center for Supercomputing Research and Development, University of Illinois at Urbana-Champaign (1988).
  38. Miller, James Slocum. MultiScheme: A Parallel Processing System Based on MIT Scheme. PhD thesis, Massachusetts Institute of Technology (1987).
  39. Padua, David A. and Wolfe, Michael J. Advanced compiler optimizations for supercomputers. Communications of the ACM, 29, 12 (December 1986).
  40. Pfister, G. F., Brantley, D. A., et al. The IBM research parallel processor prototype (RP3). In Proceedings of the 1985 International Conference on Parallel Processing (1985) 764–771.
  41. Rees, J., Clinger, W., et al. Revised revised revised report on the algorithmic language Scheme. SIGPLAN Notices, 21, 12 (December 1986) 37–76.
  42. Roads, C. B. 3600 Technical Summary. Symbolics Corporation, Cambridge, Massachusetts (February 1983).
  43. Scott, Dana S. Domains for Denotational Semantics. Technical Report, Carnegie-Mellon University (June 1982).
  44. Stoy, J. E. Denotational Semantics: the Scott-Strachey Approach to Programming Language Theory. MIT Press (1977).
  45. Triolet, Remi. Contributions to Automatic Parallelization of Fortran Programs with Procedure Calls. PhD thesis, University of Paris VI (1984).
  46. Wegman, Mark and Zadeck, Kenneth. Constant propagation with conditional branches. In Conference Record of the Twelfth ACM Symposium on Principles of Programming Languages (January 1985) 291–299.
  47. Wolfe, Michael J. Optimizing Supercompilers for Supercomputers. PhD thesis, University of Illinois at Urbana-Champaign (October 1982).

Copyright information

© Kluwer Academic Publishers 1989

Authors and Affiliations

  • Williams Ludwell Harrison III
  1. Center for Supercomputing Research and Development, University of Illinois at Urbana-Champaign, 305 Talbot Laboratory, Urbana
