Scalable parallel computers and scalable parallel codes: From theory to practice
The evolution of parallel programming languages is toward implicit parallelism and toward virtual parallelism: explicit coding for parallelism is to be avoided, and coding for the physical machine size is a low-level programming practice to be overcome as soon as possible. Our examples indicate that this may not be possible in general, although it may well be a realistic alternative for many numerical codes with simple structure. Much emphasis is now placed on data-parallel languages, where parallelism is implied by the use of aggregate operations on data aggregates (mostly array operations on data arrays); parallelism is derived either from parallel execution of these aggregate operations or from a data partition. Our examples imply that control parallelism, where parallelism is derived from the explicit user allocation of operations to (virtual or physical) processors, is necessary to express certain algorithms.
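The distinction between the two styles can be sketched as follows (a minimal illustration in Python, which is not the paper's setting; the function names and the two-worker split are illustrative assumptions):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

data = np.arange(8)

# Data parallelism: parallelism is implied by an aggregate operation
# on a data aggregate -- a single array expression, with no explicit
# mention of processors or machine size.
doubled = data * 2

# Control parallelism: the programmer explicitly allocates distinct
# operations to (virtual) processors -- here, two worker threads each
# assigned its own half of the reduction.
def sum_left(x):
    return int(x[: len(x) // 2].sum())

def sum_right(x):
    return int(x[len(x) // 2 :].sum())

with ThreadPoolExecutor(max_workers=2) as pool:
    left = pool.submit(sum_left, data)    # task placed on one worker
    right = pool.submit(sum_right, data)  # a different task on another
    total = left.result() + right.result()
```

In the data-parallel expression the compiler or runtime is free to choose the partition; in the control-parallel version the allocation of work to workers is fixed by the program text, which is exactly the expressiveness the paper argues some algorithms require.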