Abstract
The technology of sequential computers has been pushed nearly to its limits, and there is a growing realization that parallel computers are the route to high-performance computing. There are three approaches to running programs in parallel. The first is to use existing sequential (imperative) languages extended with constructs for parallelism. This approach (e.g. Ada [B82], Occam [I84]) makes software writing very difficult, since the programmer must explicitly manage the parallel processes. The second is to use compilers that automatically parallelize sequential programs [AK87]. Automatic parallelization of sequential programs is a very hard task and, in general, cannot exploit all the parallelism available in a program. The third approach, which we believe to be the most promising, is to use declarative languages: programs written in these languages can be implicitly parallelized, much more easily than those written in imperative languages, since declarative languages disallow explicit control structures and side-effects.
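The key property behind implicit parallelization is that side-effect-free computations yield the same answers regardless of evaluation order. As a minimal illustrative sketch (not from the chapter; the goals and pool size are hypothetical), independent pure goals can be evaluated sequentially or concurrently with identical results:

```python
from concurrent.futures import ThreadPoolExecutor

# A hypothetical pure (side-effect-free) goal: its result depends
# only on its argument, never on evaluation order.
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# Three independent goals, analogous to independent subgoals in a
# declarative program.
goals = [10, 12, 15]

# Sequential evaluation.
sequential = [fib(n) for n in goals]

# Implicit parallel evaluation: the runtime may run the goals
# concurrently; purity guarantees the same answers.
with ThreadPoolExecutor(max_workers=3) as pool:
    parallel = list(pool.map(fib, goals))

assert sequential == parallel  # same results either way
print(sequential)
```

An imperative program with shared mutable state would not admit this transformation without explicit synchronization, which is exactly the burden the declarative approach removes from the programmer.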
“Contrariwise,” continued Tweedledee, “if it was so, it might be; and if it were so, it would be: but as it isn’t, it ain’t. That’s logic.”
Lewis Carroll, Through the Looking Glass, Chapter IV
Copyright information
© 1994 Springer Science+Business Media New York
Cite this chapter
Gupta, G. (1994). Introduction: Logic Programming and Parallel Processing. In: Multiprocessor Execution of Logic Programs. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-2778-7_1
Publisher Name: Springer, Boston, MA
Print ISBN: 978-1-4613-6200-5
Online ISBN: 978-1-4615-2778-7