International Symposium on Programming

Volume 137 of the series Lecture Notes in Computer Science pp 194-211


Optimizing for a multiprocessor: Balancing synchronization costs against parallelism in straight-line code

  • Peter G. Hibbard
  • Thomas L. Rodeheffer




This paper reports on the status of a research project to develop compiler techniques for optimizing programs to execute on an asynchronous multiprocessor. We adopt a simplified model of a multiprocessor: several identical processors, all sharing access to a common memory. Synchronization must be done explicitly, using two special operations that take time comparable to the cost of a data operation. Our treatment differs from other attempts to generate code for such machines in that we treat the necessary synchronization overhead as an integral part of the cost of a parallel code sequence. We are particularly interested in heuristics that generate good code sequences, and in local optimizations that can then be applied to improve them. Our current efforts concentrate on generating straight-line code for high-level algebraic languages.
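The cost model described above can be illustrated with a minimal sketch. The operation names, costs, and helper functions below are assumptions for illustration only, not taken from the paper; the one grounded point is that synchronization operations cost roughly as much as data operations, so they must be charged against any speedup from parallelism.

```python
# Sketch of a cost model in which synchronization overhead is an
# integral part of the cost of a parallel code sequence.
# All names and unit costs here are illustrative assumptions.

DATA_OP_COST = 1   # assumed cost of one data operation
SYNC_OP_COST = 1   # sync ops take time comparable to data operations

def sequential_cost(n_ops):
    """Cost of running n_ops data operations on a single processor."""
    return n_ops * DATA_OP_COST

def parallel_cost(chunks, syncs):
    """Cost of splitting the work into per-processor chunks that must
    synchronize `syncs` times: the longest chunk dominates, and every
    synchronization operation adds to the schedule length."""
    return max(chunks) * DATA_OP_COST + syncs * SYNC_OP_COST

# 12 independent operations split evenly across 2 processors,
# joined by one signal/wait pair (2 synchronization operations):
seq = sequential_cost(12)             # 12 time units
par = parallel_cost([6, 6], syncs=2)  # 6 + 2 = 8 time units
worth_it = par < seq                  # parallelism wins despite sync cost
```

A code generator using such a model would reject a parallel decomposition whenever the added synchronization operations push `parallel_cost` above `sequential_cost`, which is the balance the paper's heuristics aim to strike.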

We compare the code generated by two heuristics and observe how local optimization schemes gradually improve its quality. We are implementing our techniques in an experimental compiler that will generate code for Cm*, a real multiprocessor that shares several characteristics with our model machine.