Measuring the Scalability of Heterogeneous Parallel Systems

  • Alexey Kalinov
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3911)

Abstract

A parallel algorithm cannot be evaluated apart from the architecture on which it is implemented, so we define a parallel system as the combination of a parallel algorithm and a parallel architecture. This paper extends the well-known isoefficiency scalability metric to heterogeneous parallel systems. Based on this extension, we evaluate the scalability of SUMMA (Scalable Universal Matrix Multiplication Algorithm) on a parallel architecture with a homogeneous communication system that supports simultaneous point-to-point communications. Two data distribution strategies are considered: (i) homogeneous, in which data are distributed evenly among processors; and (ii) heterogeneous, in which data are distributed among processors in proportion to their performance. It is shown that, under certain assumptions, both strategies yield the same scalability of the heterogeneous parallel system. This theoretical result is corroborated by experiment.
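
For illustration only (this is not the paper's implementation): the classical isoefficiency analysis asks how fast the problem size must grow with the number of processors p to keep the efficiency E = T_1 / (p * T_p) constant, and the paper generalizes this question to processors of different speeds. The short Python sketch below contrasts the two data distribution strategies named in the abstract by splitting the rows of an n x n matrix either evenly or in proportion to relative processor speeds; the function names and the speed values are hypothetical assumptions.

    # Hypothetical sketch, not the paper's code: row distribution for an n x n matrix.

    def even_distribution(n, num_procs):
        """Strategy (i): homogeneous distribution, rows split as evenly as possible."""
        base, extra = divmod(n, num_procs)
        return [base + (1 if i < extra else 0) for i in range(num_procs)]

    def proportional_distribution(n, speeds):
        """Strategy (ii): rows split in proportion to relative processor speeds."""
        total = sum(speeds)
        rows = [int(n * s / total) for s in speeds]
        leftover = n - sum(rows)            # rows lost to rounding down
        for i in sorted(range(len(speeds)), key=lambda i: -speeds[i])[:leftover]:
            rows[i] += 1                    # hand extras to the fastest processors
        return rows

    if __name__ == "__main__":
        speeds = [4.0, 2.0, 1.0, 1.0]                    # hypothetical relative speeds
        print(even_distribution(1000, len(speeds)))      # [250, 250, 250, 250]
        print(proportional_distribution(1000, speeds))   # [500, 250, 125, 125]

The two calls produce different per-processor loads, which is exactly the contrast the paper studies: under its assumptions, both partitionings nevertheless lead to the same scalability of the heterogeneous parallel system.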

Keywords

Problem Size, Parallel Algorithm, Parallel System, Parallel Architecture, Primary Memory

References

  1. Grama, A., Gupta, A., Kumar, V.: Isoefficiency: Measuring the Scalability of Parallel Algorithms and Architectures. IEEE Parallel & Distributed Technology 1(3), 12–21 (1993)
  2. van de Geijn, R., Watts, J.: SUMMA: Scalable Universal Matrix Multiplication Algorithm. Concurrency: Practice and Experience 9(4), 255–274 (1997)
  3. Kalinov, A.: Heterogeneous two-dimensional block-cyclic data distribution for solving linear algebra problems on heterogeneous networks of computers. Programming and Computer Software 25(2), 3–11 (1999). Translated from Programmirovanie 25(2)
  4. Quinn, M.: Parallel Programming in C with MPI and OpenMP. McGraw-Hill, New York (2004)
  5. Pastor, L., Bosque, J.L.: An efficiency and scalability model for heterogeneous clusters. In: Proceedings of Cluster 2001, Newport Beach, CA, USA, October 8-11, 2001, pp. 427–434. IEEE Computer Society, Los Alamitos (2001)
  6. Kalinov, A.: Scalability Analysis of Matrix-Matrix Multiplication on Heterogeneous Clusters. In: Proceedings of 3rd ISPDC/HeteroPar 2004, Cork, Ireland, July 5-7, 2004, pp. 303–309. IEEE CS Press, Los Alamitos (2004)
  7. Dongarra, J., van de Geijn, R., Walker, D.: Scalability Issues Affecting the Design of a Dense Linear Algebra Library. Journal of Parallel and Distributed Computing 22, 523–537 (1994)
  8. Beaumont, O., Boudet, V., Petitet, A., Rastello, F., Robert, Y.: A Proposal for a Heterogeneous Cluster ScaLAPACK (Dense Linear Solvers). IEEE Trans. Computers 50(10), 1052–1070 (2001)
  9. Dovolnov, E., Kalinov, A., Klimov, S.: Natural Block Data Decomposition for Heterogeneous Clusters. In: Proceedings of 17th International Parallel and Distributed Processing Symposium (IPDPS 2003), Nice, France, April 2003. IEEE CS Press (CD-ROM)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Alexey Kalinov (1)
  1. Institute for System Programming, Russian Academy of Sciences, Moscow, Russia