The idea behind data parallel programming is to perform global operations over large data structures, with the individual operations on singleton elements of the data structure performed simultaneously. In the simplest case, this means that a loop over an array is replaced by a constant-time aggregate operation. To introduce parallelism, the programmer thinks about the organisation of data structures rather than the organisation of processes. This leads directly to two of the most appealing benefits of data parallelism:
The program can be quite explicit about parallelism, through the choice of suitable data structure operations, while at the same time being structured like an ordinary sequential program. Thus data parallelism allows efficient use of a parallel machine's resources, while providing a straightforward programming style that avoids many of the difficulties of task-oriented concurrent programming.
The parallelism can be scaled up simply by increasing the size of the data structure, without reorganising the algorithm. Typical data parallel programs can therefore use far more processors than typical task parallel programs.
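The loop-to-aggregate replacement described above can be sketched as follows. This is an illustrative example only, using NumPy as the aggregate-operation library (an assumption; the text does not name any particular language or library). The aggregate form expresses one whole-array operation whose per-element work could, in principle, be performed simultaneously, and scales to larger arrays without any change to the code:

```python
import numpy as np

# Sequential style: an explicit loop visiting singleton elements one by one.
def scale_loop(xs, k):
    out = []
    for x in xs:
        out.append(k * x)
    return out

# Data parallel style: a single aggregate operation over the whole array.
# The multiplication of each element is independent of the others, so a
# data parallel machine may perform them all at once.
def scale_aggregate(xs, k):
    return k * np.asarray(xs)

print(scale_loop([1, 2, 3, 4], 10))                # [10, 20, 30, 40]
print(scale_aggregate([1, 2, 3, 4], 10).tolist())  # [10, 20, 30, 40]
```

Note that both versions compute the same result; only the organisation differs. The loop fixes an order of evaluation, while the aggregate form leaves the order unspecified, which is what lets the parallelism scale with the data structure size.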
Keywords: Sorting · Prefix · Verse · Cuted