On the complexity of the generalized block distribution

  • Michelangelo Grigni
  • Fredrik Manne
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1117)

Abstract

We consider the problem of mapping an array onto a mesh of processors in such a way that locality is preserved. When the computational work associated with the array is distributed in an unstructured way, the generalized block distribution has been recognized as an efficient way of achieving an even load balance while at the same time imposing a simple communication pattern.

In this paper we consider the problem of computing an optimal generalized block distribution. We show that this problem is NP-complete even for very simple cost functions. We also classify a number of variants of the general problem.
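The optimization problem described above can be made concrete with a small sketch. In a generalized block distribution, the rows and columns of the array are each cut into contiguous intervals, processor (i, j) of the mesh receives the cross product of the i-th row interval and the j-th column interval, and the cost of a distribution is the load of the heaviest block. The function and sample weights below are illustrative assumptions, not code or data from the paper:

```python
def block_cost(weights, row_cuts, col_cuts):
    """Cost of one generalized block distribution.

    weights: 2D list of nonnegative work weights.
    row_cuts / col_cuts: sorted interior cut positions splitting the
    rows (resp. columns) into contiguous intervals.
    Returns the maximum total weight assigned to any processor block.
    """
    m, n = len(weights), len(weights[0])
    r_bounds = [0] + list(row_cuts) + [m]
    c_bounds = [0] + list(col_cuts) + [n]
    worst = 0
    for ri in range(len(r_bounds) - 1):
        for ci in range(len(c_bounds) - 1):
            # Load of the block given to processor (ri, ci).
            load = sum(weights[r][c]
                       for r in range(r_bounds[ri], r_bounds[ri + 1])
                       for c in range(c_bounds[ci], c_bounds[ci + 1]))
            worst = max(worst, load)
    return worst

# Example: a 4x4 array mapped onto a 2x2 processor mesh by cutting
# rows and columns after index 2.
w = [[1, 0, 0, 2],
     [0, 3, 1, 0],
     [2, 0, 0, 1],
     [0, 1, 4, 0]]
print(block_cost(w, [2], [2]))  # heaviest block has load 5
```

Finding the row and column cuts that minimize this cost is exactly the optimization problem shown NP-complete in the paper; evaluating a given distribution, as above, is straightforward.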

Keywords

Load balancing, parallel data structures, scheduling and mapping

Copyright information

© Springer-Verlag Berlin Heidelberg 1996

Authors and Affiliations

  • Michelangelo Grigni, Department of Mathematics and Computer Science, Emory University, Atlanta, USA
  • Fredrik Manne, Department of Informatics, University of Bergen, Bergen, Norway