Performance Comparison of Parallel Programming Environments for Implementing AIAC Algorithms
AIAC algorithms (Asynchronous Iterations, Asynchronous Communications) are a particular class of parallel iterative algorithms. As previous works have shown, their asynchronous nature makes them more efficient than their synchronous counterparts in numerous cases. The first goal of this article is to compare several parallel programming environments in order to determine whether one of them is best suited to efficiently implementing AIAC algorithms. The main comparison criterion is the performance achieved in a grid-computing context on two classical scientific problems. We also take into account two secondary criteria: ease of programming and ease of deployment. The second goal of this study is to extract from this comparison the important features that a parallel programming environment must have in order to be suited to the implementation of AIAC algorithms.
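To make the AIAC idea concrete, the following is a minimal sketch (not taken from the paper) of an asynchronous block-Jacobi solver: each worker thread repeatedly updates its own components of the iterate using whatever values of the shared vector are currently visible, without synchronizing sweeps with the other workers. The function name `async_jacobi`, the thread-based decomposition, and the stopping test are illustrative assumptions, not the authors' implementation.

```python
import threading
import numpy as np

def async_jacobi(A, b, num_workers=2, tol=1e-8, max_sweeps=10_000):
    """Asynchronous Jacobi sketch: workers update their block of x using
    possibly stale values of the other components (the AIAC principle in
    miniature). Converges for strictly diagonally dominant A."""
    n = len(b)
    x = np.zeros(n)                      # shared iterate, no barriers
    blocks = np.array_split(range(n), num_workers)
    stop = threading.Event()

    def worker(rows):
        for _ in range(max_sweeps):
            if stop.is_set():
                return
            for i in rows:
                # Jacobi update for component i; x[j] may be stale
                x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
            if np.linalg.norm(A @ x - b) < tol:
                stop.set()               # local convergence detection

    threads = [threading.Thread(target=worker, args=(blk,)) for blk in blocks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return x

# Strictly diagonally dominant system, so the asynchronous iteration converges
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
x = async_jacobi(A, b)
```

In a real AIAC implementation the workers would be distributed processes exchanging messages asynchronously rather than threads sharing memory, but the essential point is the same: no global synchronization point separates iterations.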