Introduction: Logic Programming and Parallel Processing
The technology of sequential computers has been pushed nearly to its limits, and there is a growing realization that parallel computers are the way to high-performance computing. There are three approaches to running programs in parallel. The first is to use existing sequential (imperative) languages extended with constructs for parallelism. This approach (e.g. Ada [B82], Occam [I84]) makes writing software very difficult, since the programmer must explicitly manage the parallel processes. The second approach is to use compilers that automatically parallelize sequential programs [AK87]. Automatic parallelization of sequential programs is a very hard task, and, in general, it cannot exploit all the available parallelism in a program. The third approach, which we believe to be the most promising, is to use declarative languages: because declarative languages disallow explicit control structures and side-effects, programs written in them can be parallelized implicitly, much more easily than those written in imperative languages.
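As a small illustration of this implicit parallelism (a sketch of our own, not taken from the cited works), consider the following Prolog clause for the naive Fibonacci function. The two recursive subgoals share no unbound variables once N1 and N2 are computed, so an AND-parallel logic programming system may evaluate them concurrently without any annotation from the programmer:

```prolog
fib(0, 0).
fib(1, 1).
fib(N, F) :-
    N > 1,
    N1 is N - 1,
    N2 is N - 2,
    % fib(N1, F1) and fib(N2, F2) are independent goals:
    % neither binds a variable the other reads, so a parallel
    % implementation can solve them simultaneously.
    fib(N1, F1),
    fib(N2, F2),
    F is F1 + F2.
```

An imperative version of the same computation would require the programmer to spawn and synchronize processes explicitly; here the opportunity for parallelism is visible directly in the program's logical structure.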