Working with Sequential Versions

  • Yosi Ben-Asher
Part of the Undergraduate Topics in Computer Science book series (UTICS)


Here we propose a programming methodology for developing efficient parallel programs. The core idea is that the parallel program is obtained from its sequential version through a sequence of gradual changes, as follows:
  • Most of the development stages are carried out sequentially, in gradual steps.

  • Each stage can be debugged sequentially, so that many bugs are found and removed before the parallel program is tested. Debugging a parallel program is harder than debugging a sequential one, since for a parallel program all possible execution orders must be considered.

  • Dependence analysis of the sequential program can reveal potential ways to expose more parallelism in the desired parallel program.
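The steps above can be sketched as follows. This is a minimal illustration (all names hypothetical, not taken from the chapter): a sequential version is written and verified first, and dependence analysis then tells us which loop may be parallelized. The first loop carries a dependence (each iteration reads the previous result) and stays sequential; the second loop has independent iterations and can be mapped to parallel workers. Because both versions compute the same result, each stage can be debugged sequentially before the parallel version is tested.

```python
# Sketch of the sequential-to-parallel methodology (hypothetical example).
from concurrent.futures import ThreadPoolExecutor

def seq_version(b):
    n = len(b)
    a = [0] * n
    # Loop-carried dependence: a[i] reads a[i-1], so this loop must
    # remain sequential.
    for i in range(1, n):
        a[i] = a[i - 1] + b[i]
    # No loop-carried dependence: each c[i] depends only on a[i].
    c = [0] * n
    for i in range(n):
        c[i] = a[i] * a[i]
    return c

def par_version(b):
    n = len(b)
    a = [0] * n
    for i in range(1, n):          # kept sequential (dependence)
        a[i] = a[i - 1] + b[i]
    # The independent loop is parallelized; the result must match the
    # sequential version, which is checked during development.
    with ThreadPoolExecutor() as pool:
        c = list(pool.map(lambda x: x * x, a))
    return c
```

In practice each such transformation is validated by comparing the outputs of the two versions on test inputs, so parallel-only bugs are confined to the final step.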





Copyright information

© Springer-Verlag London 2012

Authors and Affiliations

  1. Department of Computer Science, University of Haifa, Haifa, Israel
