Support for Irregular Computations in Massively Parallel PIM Arrays, Using an Object-Based Execution Model

  • Hans P. Zima
  • Thomas L. Sterling
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1800)

Abstract

The emergence of semiconductor fabrication technology that allows a tight coupling of high-density DRAM and CMOS logic on the same chip has led to the important new class of Processor-In-Memory (PIM) architectures. Furthermore, large arrays of PIMs can be arranged into massively parallel architectures. In this paper, we outline the salient features of PIM architectures and discuss macroservers, an object-based model for such machines. Subsequently, we specifically address the support for irregular problems provided by PIM arrays. The discussion concludes with a case study illustrating an approach to sparse matrix-vector multiplication.
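For orientation, the sketch below shows a conventional compressed sparse row (CSR) matrix-vector multiply in C. It is only meant to illustrate the indirection-heavy, irregular access pattern that the paper's case study maps onto a PIM array via macroservers; it is not the paper's macroserver-based formulation, and the function and variable names are ours.

/* Minimal CSR sparse matrix-vector multiply, y = A*x.
 * Shown as a conventional reference kernel; the paper's approach
 * distributes this computation across PIM nodes, which is not shown here. */
#include <stdio.h>

void spmv_csr(int n, const int *row_ptr, const int *col_idx,
              const double *val, const double *x, double *y)
{
    for (int i = 0; i < n; i++) {              /* one matrix row per iteration */
        double sum = 0.0;
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; k++)
            sum += val[k] * x[col_idx[k]];     /* gather x through column indices */
        y[i] = sum;
    }
}

int main(void)
{
    /* 3x3 example matrix with 4 nonzeros (hypothetical data) */
    int    row_ptr[] = {0, 1, 3, 4};
    int    col_idx[] = {0, 0, 2, 1};
    double val[]     = {2.0, 1.0, 3.0, 4.0};
    double x[]       = {1.0, 2.0, 3.0};
    double y[3];

    spmv_csr(3, row_ptr, col_idx, val, x, y);
    for (int i = 0; i < 3; i++)
        printf("y[%d] = %g\n", i, y[i]);
    return 0;
}

The indirect access x[col_idx[k]] is the part that makes the computation irregular: the reference pattern is data-dependent, which is exactly the kind of memory behavior a PIM array with in-memory processing is intended to handle well.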

Keywords

Sparse Representation, Memory Chip, Address Translation, Work Distribution, Active Thread

Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

  • Hans P. Zima (1, 2)
  • Thomas L. Sterling (1)
  1. CACR, California Institute of Technology, Pasadena, USA
  2. Institute for Software Science, University of Vienna, Austria