Optimizing for a multiprocessor: Balancing synchronization costs against parallelism in straight-line code
- Peter G. Hibbard
- Thomas L. Rodeheffer
This paper reports on the status of a research project to develop compiler techniques to optimize programs for execution on an asynchronous multiprocessor. We adopt a simplified model of a multiprocessor, consisting of several identical processors, all sharing access to a common memory. Synchronization must be done explicitly, using two special operations that take a period of time comparable to the cost of data operations. Our treatment differs from other attempts to generate code for such machines because we treat the necessary synchronization overhead as an integral part of the cost of a parallel code sequence. We are particularly interested in heuristics that can be used to generate good code sequences, and local optimizations that can then be applied to improve them. Our current efforts are concentrated on generating straight-line code for high-level, algebraic languages.
We compare the code generated by two heuristics and observe how local optimization schemes can gradually improve its quality. We are implementing our techniques in an experimental compiler that will generate code for Cm*, a real multiprocessor having several of the characteristics of our model machine.
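The cost tradeoff described above can be illustrated with a toy simulator (our own sketch, not the paper's implementation). It models the paper's machine: identical processors sharing memory, with explicit SIGNAL/WAIT synchronization operations whose cost equals that of a data operation. The instruction encoding and flag names are hypothetical.

```python
# Minimal sketch of the model machine's cost accounting (an assumption for
# illustration, not the paper's compiler). Every data operation, SIGNAL, and
# WAIT takes one time unit; a WAIT cannot complete before its matching
# SIGNAL has completed.

OP_COST = 1  # uniform cost of data ops and synchronization ops

def makespan(schedules):
    """Return the finish time of per-processor instruction lists.

    Each instruction is ("op",), ("signal", flag), or ("wait", flag).
    For this sketch, processors must be listed so that every SIGNAL is
    processed before the WAIT that depends on it.
    """
    signal_done = {}   # flag -> time at which its SIGNAL completed
    finish_times = []
    for insns in schedules:
        t = 0
        for insn in insns:
            if insn[0] == "wait":
                # stall until the flag is set, then pay the WAIT cost
                t = max(t, signal_done[insn[1]]) + OP_COST
            else:
                t += OP_COST
                if insn[0] == "signal":
                    signal_done[insn[1]] = t
        finish_times.append(t)
    return max(finish_times)

# Straight-line expression with 7 data operations, e.g.
# (a+b)*(c+d) + (e+f)*(g+h), run serially on one processor:
serial = [[("op",)] * 7]

# Split across two processors: P1 computes one product and signals;
# P0 computes the other, waits, and performs the final addition.
parallel = [
    [("op",), ("op",), ("op",), ("signal", "f")],           # P1
    [("op",), ("op",), ("op",), ("wait", "f"), ("op",)],    # P0
]

print(makespan(serial))    # serial cost: 7 units
print(makespan(parallel))  # parallel cost: 6 units
```

The parallel version saves only one time unit of the three that an overhead-free split would save, because the SIGNAL/WAIT pair costs two data operations; this is the kind of balance between synchronization cost and exposed parallelism that the paper's heuristics must weigh.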
- Book Title: International Symposium on Programming
- Book Subtitle: 5th Colloquium, Turin, April 6–8, 1982, Proceedings
- Pages: 194–211
- Series Title: Lecture Notes in Computer Science
- Publisher: Springer Berlin Heidelberg