Effective symbolic analysis to support parallelizing compilers and performance analysis

  • Thomas Fahringer
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1225)


Existing parallelizing compilers have been shown to detect and exploit only a small portion of the parallelism in real application programs. A core problem is the difficulty of collecting enough information from a program to exploit the underlying multiprocessor architecture. Symbolic analysis is the key component that lets compilers parameterize codes by the number of processors and by array sizes, and that enables performance analyzers to examine programs containing unknowns. We present a variety of novel symbolic analysis techniques that are crucial for a new generation of parallelizing compilers and performance analysis tools: they handle non-linear array index functions and complex loop bounds, and cope with unknown problem, array, and machine sizes, which is critical for programming languages such as High Performance Fortran. The techniques include counting the solutions to a system of constraints, computing lower and upper bounds of symbolic expressions, and comparing symbolic expressions. We have implemented all of these techniques as part of a state-of-the-art parallelizing compiler and a performance estimator. Examples demonstrate the effectiveness of our symbolic analysis.
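To make the three techniques named in the abstract concrete, here is a minimal sketch, not the paper's actual algorithms: counting the solutions to a small constraint system by brute force (where a symbolic analyzer would derive a closed form), and computing lower/upper bounds of symbolic expressions via interval arithmetic in order to compare them. All names and the interval representation are illustrative assumptions.

```python
# Illustrative sketch only; the paper's actual techniques are more general.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """Lower/upper bound of a symbolic expression over given variable ranges."""
    lo: int
    hi: int

    def __add__(self, other: "Interval") -> "Interval":
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other: "Interval") -> "Interval":
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

def compare(a: Interval, b: Interval) -> str:
    """Decide a <= b or a >= b purely from the bounds; else give up."""
    if a.hi <= b.lo:
        return "<="
    if a.lo >= b.hi:
        return ">="
    return "unknown"

def count_solutions(n: int) -> int:
    """Brute-force count of the constraint system 1 <= i <= j <= n.
    A symbolic analyzer would instead derive the closed form n*(n+1)/2."""
    return sum(1 for i in range(1, n + 1) for j in range(i, n + 1))

# Example: with n in [1, 100], the bounds of 2*n + 1 are [3, 201].
n = Interval(1, 100)
two, one = Interval(2, 2), Interval(1, 1)
print(two * n + one)                               # Interval(lo=3, hi=201)
print(compare(Interval(1, 10), Interval(20, 30)))  # "<="
print(count_solutions(10), 10 * 11 // 2)           # both 55
```

Interval bounds are deliberately conservative: correlated expressions (e.g., n versus n + 1 over the same range) may compare as "unknown", which is exactly why the paper's stronger symbolic comparison techniques matter.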





Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • Thomas Fahringer
    Institute for Software Technology and Parallel Systems, University of Vienna, Vienna, Austria
