Abstract
It is common for parallel programs to give slightly different results when run on different numbers of processors, or even on the same number of processors with a different data decomposition. Although this variation is usually not significant in itself, it can mask programming errors by providing an alternative explanation for varying results. The obvious remedies perform the affected sections of the code in serial; this approach may require very large amounts of memory on the processor performing the serial calculation, and may extend the run-time of the program to an extent that is unacceptable even for debugging purposes.
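To make the phenomenon concrete: one common source of decomposition-dependent results is a global floating-point reduction, since addition is not associative in finite precision. The sketch below is illustrative only (not taken from the chapter); it simulates a contiguous block decomposition of a sum across a varying number of "processors":

```python
import math

def decomposed_sum(vals, nprocs):
    """Simulate a parallel reduction: each 'processor' sums a
    contiguous block of the data, then the partial sums are combined."""
    chunk = math.ceil(len(vals) / nprocs)
    partials = [sum(vals[i:i + chunk]) for i in range(0, len(vals), chunk)]
    return sum(partials)

# Values chosen so that rounding differs between decompositions:
# the large positive and negative terms may or may not cancel before
# the small terms are absorbed.
vals = [1e16, 1.0, -1e16, 1.0]

print(decomposed_sum(vals, 1))  # serial summation order
print(decomposed_sum(vals, 2))  # two-processor decomposition
```

With two "processors" the large positive and negative values fall into different partial sums, so the rounding pattern, and hence the final result, changes even though the data are identical.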
© 1999 Springer Science+Business Media New York
Booth, S. (1999). Decomposition Independence in Parallel Programs. In: Allan, R.J., Guest, M.F., Simpson, A.D., Henty, D.S., Nicole, D.A. (eds) High-Performance Computing. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-4873-7_12
Print ISBN: 978-1-4613-7211-0
Online ISBN: 978-1-4615-4873-7