Optimizing for a multiprocessor: Balancing synchronization costs against parallelism in straight-line code

  • Peter G. Hibbard
  • Thomas L. Rodeheffer
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 137)

Abstract

This paper reports on the status of a research project to develop compiler techniques for optimizing programs for execution on an asynchronous multiprocessor. We adopt a simplified model of a multiprocessor, consisting of several identical processors that all share access to a common memory. Synchronization must be done explicitly, using two special operations that take time comparable to the cost of data operations. Our treatment differs from other attempts to generate code for such machines in that we treat the necessary synchronization overhead as an integral part of the cost of a parallel code sequence. We are particularly interested in heuristics that can be used to generate good code sequences, and in local optimizations that can then be applied to improve them. Our current efforts are concentrated on generating straight-line code for high-level algebraic languages.
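
To make the cost model concrete, the following sketch estimates when splitting a computation across two processors pays off once synchronization is charged like any other operation. It is our illustration, not the authors' algorithm: the unit costs OP_COST and SYNC_COST and the helper worth_splitting are assumptions chosen to match the model described above.

    # A minimal sketch of the cost trade-off (our illustration, with assumed
    # unit costs). In the model, an explicit synchronization operation takes
    # time comparable to a data operation, so it must be counted in the cost
    # of any parallel code sequence.

    OP_COST = 1    # assumed cost of one data operation
    SYNC_COST = 1  # assumed, comparable cost of one signal or wait operation

    def seq_cost(n_ops):
        """Cost of evaluating n_ops data operations on a single processor."""
        return n_ops * OP_COST

    def par_cost(left_ops, right_ops):
        """Cost of evaluating two independent subexpressions on two
        processors: one side signals its result and the other waits for
        it, so each side is charged one synchronization operation."""
        return max(left_ops * OP_COST + SYNC_COST,
                   right_ops * OP_COST + SYNC_COST)

    def worth_splitting(left_ops, right_ops):
        """True if the parallel code sequence beats the sequential one."""
        return par_cost(left_ops, right_ops) < seq_cost(left_ops + right_ops)

    print(worth_splitting(1, 1))  # False: the handshake eats the parallel gain
    print(worth_splitting(5, 5))  # True: enough work to amortize the handshake

With these equal unit costs, a balanced two-way split wins as soon as each side carries at least two operations; cheaper synchronization would lower that threshold.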

We compare the code generated by two heuristics and observe how local optimization schemes can gradually improve its quality. We are implementing our techniques in an experimental compiler that will generate code for Cm*, a real multiprocessor that has several of the characteristics of our model computer.
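
The abstract does not spell out which two heuristics are compared, so the sketch below is only an illustration of the kind of comparison involved: it list-schedules a small, hypothetical dependence graph on two processors under two plausible priorities, the textual order of the straight-line code versus critical-path length, charging an assumed unit cost whenever an operand has to be signalled across processors. The graph, the unit costs, and both priority functions are our assumptions, not the paper's.

    # List scheduling with explicit synchronization charges (our sketch; the
    # example graph, the unit costs, and both heuristics are assumptions).
    from functools import lru_cache

    SYNC = 1    # assumed cost of the signal/wait pair when a value crosses processors
    NPROCS = 2  # identical processors sharing a common memory

    # A hypothetical straight-line computation as a dependence graph:
    # node -> list of operands it must wait for.
    preds = {
        "b1": [], "b2": [],                  # independent side work
        "a1": [], "a2": ["a1"],              # a four-operation chain...
        "a3": ["a2"], "a4": ["a3"],          # ...that dominates the schedule
        "b3": ["b1", "b2"],
        "root": ["a4", "b3"],                # final combining operation
    }
    succs = {n: [] for n in preds}
    for n, ps in preds.items():
        for p in ps:
            succs[p].append(n)

    @lru_cache(maxsize=None)
    def cp_length(n):
        """Longest dependence chain from n to the end of the computation."""
        return 1 + max((cp_length(s) for s in succs[n]), default=0)

    def makespan(priority):
        """Greedy list scheduling: repeatedly take the ready operation with
        the highest priority and put it on the processor where it finishes
        earliest, charging SYNC for every cross-processor operand."""
        done, finish, proc = set(), {}, {}
        free = [0] * NPROCS  # time at which each processor next falls idle
        while len(done) < len(preds):
            ready = [n for n in preds
                     if n not in done and all(p in done for p in preds[n])]
            n = max(ready, key=priority)
            start, pr = min(
                (max([free[q]] + [finish[p] + (SYNC if proc[p] != q else 0)
                                  for p in preds[n]]), q)
                for q in range(NPROCS))
            finish[n], proc[n] = start + 1, pr
            free[pr] = start + 1
            done.add(n)
        return max(finish.values())

    order = list(preds)  # the textual order of the straight-line code
    print("textual-order priority: ", makespan(lambda n: -order.index(n)))  # 6
    print("critical-path priority: ", makespan(cp_length))                  # 5
    print("sequential code:        ", len(preds))                           # 8

On this example the critical-path priority keeps the long chain moving and finishes in 5 steps against 6 for textual order (and 8 sequentially), illustrating the kind of quality difference between heuristics that local optimizations then try to narrow.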


Copyright information

© Springer-Verlag 1982
