Parallel High-Performance Matrix Computations in MaTRiX++

  • Tim Collins
  • James Browne
Part of The Springer International Series in Software Engineering book series (SOFT, volume 2)

Abstract

The MaTRiX++ system is a high-performance matrix computation environment. It supports the construction of efficient implementations for the complex matrix structures that arise in modern scientific and engineering applications. The foundation of the system is a theory of hierarchical matrix computations that defines hierarchical representations for structured matrices and recursive implementations for matrix operations. Because application matrices are often very large, parallel machines are frequently required to obtain reasonable execution times. Without suitable support, constructing efficient parallel implementations is a tedious and error-prone task.
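To make the idea of hierarchical representations with recursive operations concrete, the following is a minimal sketch, not the actual MaTRiX++ API: a matrix is either a dense leaf block or a 2x2 grid of submatrices, and a matrix-vector product recurses on that structure. All type and function names here are our own illustrative choices.

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Hypothetical sketch (not MaTRiX++ syntax): a square matrix is either a
// dense leaf block or a 2x2 grid of equally sized submatrices.
struct HMatrix {
    int n = 0;                           // dimension (n x n)
    std::vector<double> dense;           // leaf storage, row-major
    std::unique_ptr<HMatrix> sub[2][2];  // non-null => structured node

    bool leaf() const { return !sub[0][0]; }
};

// Convenience constructor for a 2x2 dense leaf.
std::unique_ptr<HMatrix> leaf2(double a, double b, double c, double d) {
    auto m = std::make_unique<HMatrix>();
    m->n = 2;
    m->dense = {a, b, c, d};
    return m;
}

// y += A * x on index ranges [row, row+A.n) and [col, col+A.n).
// The recursion mirrors the hierarchical structure of the matrix.
void matvec(const HMatrix& A, const std::vector<double>& x,
            std::vector<double>& y, int row = 0, int col = 0) {
    if (A.leaf()) {
        for (int i = 0; i < A.n; ++i)
            for (int j = 0; j < A.n; ++j)
                y[row + i] += A.dense[i * A.n + j] * x[col + j];
        return;
    }
    int h = A.n / 2;  // each submatrix is h x h
    for (int bi = 0; bi < 2; ++bi)
        for (int bj = 0; bj < 2; ++bj)
            matvec(*A.sub[bi][bj], x, y, row + bi * h, col + bj * h);
}
```

Because each operation descends the same structure as the representation, specialized structures (block-diagonal, block-triangular, nested forms) reuse one recursive implementation.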

In this chapter we discuss additions to the MaTRiX++ language interface and compilation model that support applications written for distributed-memory architectures. The language extensions provide methods for describing hierarchical data and computation distribution over the separate memories and processors of the target architecture. The compilation approach uses this distribution information to construct threaded, data-driven single-program multiple-data (SPMD) processor programs. The system's high level of abstraction encourages rapid prototyping without sacrificing efficiency.
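One common way to describe a data distribution of the kind the abstract mentions is an ownership map from matrix blocks to processors. The sketch below shows a block-cyclic map over a logical processor grid; this is an illustrative assumption on our part, not the MaTRiX++ extension syntax. Given such a map, an SPMD compiler can generate one program in which each processor executes only the operations on blocks it owns and exchanges the rest.

```cpp
#include <cassert>

// Illustrative distribution descriptor (names are ours, not MaTRiX++
// syntax): block (i, j) of a blocked matrix is owned by one processor in
// a pr x pc logical grid, assigned cyclically in each dimension.
struct BlockCyclic {
    int pr, pc;  // processor grid dimensions (rows x columns)

    // Linear rank of the processor that owns block (i, j).
    int owner(int i, int j) const { return (i % pr) * pc + (j % pc); }
};
```

For example, on a 2x2 grid, blocks along a row alternate between two owners, which balances work for row-oriented operations while keeping each block in exactly one memory.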

Keywords

Matrix Computation · Matrix Type · Matrix Operation · Compilation Model · Language Interface



Copyright information

© Springer Science+Business Media New York 1996

Authors and Affiliations

  • Tim Collins (1)
  • James Browne (1)
  1. Department of Computer Sciences, The University of Texas at Austin, Austin, USA