2.8 Summary
In this chapter, a brief explanation of how to write simple parallel MPI/FORTRAN codes has been presented. Through two simple examples (given in Sections 2.2–2.3), it should be clear that writing parallel MPI/FORTRAN code is essentially the same as writing sequential FORTRAN 77 or FORTRAN 90 code, with some specially added MPI/FORTRAN statements. These special MPI/FORTRAN statements are needed for parallel computation and inter-processor communication (such as sending/receiving messages, and summing/merging each processor's results). Powerful unrolling techniques associated with "dot-product" and "saxpy" operations have also been explained. These unrolling strategies should be incorporated into application codes to substantially improve computational speed.
Copyright information
© 2006 Springer Science+Business Media, Inc.
Cite this chapter
(2006). Simple MPI/FORTRAN Applications. In: Finite Element Methods: Parallel-Sparse Statics and Eigen-Solutions. Springer, Boston, MA. https://doi.org/10.1007/0-387-30851-2_2
Print ISBN: 978-0-387-29330-1
Online ISBN: 978-0-387-30851-7