Matrix algebra and applicative programming

  • David S. Wise
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 274)

Abstract

The broad problem of matrix algebra is taken up from the perspective of functional programming. A key question is how arrays should be represented in order to admit good implementations of well-known efficient algorithms, and whether functional architecture sheds any new light on these or other solutions. The question bears directly on disarming the "aggregate update" problem.
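
To make the "aggregate update" problem concrete: a pure update of one element of a flat n-element array must copy all n cells so that the old version survives, whereas a vector stored as a binary tree (the d=1 case of the trees advocated below) shares every untouched subtree and rebuilds only the O(log n) path to the changed leaf. A minimal Haskell sketch, ours rather than the paper's:

    -- A vector as a binary tree; Fork caches the size of its left half.
    data Vec a = Tip a                     -- single element
               | Fork Int (Vec a) (Vec a)  -- left-half size, two halves

    -- Pure update: only the spine from the root to the changed leaf is
    -- rebuilt; the sibling subtree at each step is shared, not copied.
    update :: Vec a -> Int -> a -> Vec a
    update (Tip _)      _ x = Tip x
    update (Fork m l r) i x
      | i < m     = Fork m (update l i x) r        -- right half shared
      | otherwise = Fork m l (update r (i - m) x)  -- left half shared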

The major thesis is that 2^d-ary trees should be used to represent d-dimensional arrays; examples are matrix operations (d=2) and a particularly interesting vector (d=1) algorithm. Sparse and dense matrices are represented homogeneously, but at some overhead that appears tolerable; encouraging results are reviewed and extended. A Pivot Step algorithm is described that offers optimal stability at no extra cost for searching. The new results include proposed sparseness measures for matrices, improved performance of stable matrix inversion through repeated pivoting while deep within a matrix-tree (extendible to solving linear systems), and a clean matrix derivation of the vector algorithm for the fast Fourier transform (sketched below). Running code is offered in the appendices.
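
To illustrate the thesis for d=2, here is a minimal quadtree sketch in Haskell; the names and conventions are assumptions of ours, not the running code of the appendices. An explicit Zero constructor gives sparse and dense matrices one homogeneous type, and arithmetic dismisses zero blocks in constant time:

    -- Square matrices of order 2^k as quadtrees.
    data QT a = Zero                -- all-zero block, stored in O(1)
              | Leaf a              -- 1x1 scalar block
              | Node (QT a) (QT a)  -- NW NE
                     (QT a) (QT a)  -- SW SE

    -- Smart constructor: an all-zero node collapses back to Zero, so
    -- sparsity produced during arithmetic is recovered.
    node :: QT a -> QT a -> QT a -> QT a -> QT a
    node Zero Zero Zero Zero = Zero
    node nw ne sw se         = Node nw ne sw se

    -- Addition: zero blocks vanish without being traversed.
    add :: Num a => QT a -> QT a -> QT a
    add Zero y = y
    add x Zero = x
    add (Leaf x)       (Leaf y)       = Leaf (x + y)
    add (Node a b c d) (Node e f g h) =
      node (add a e) (add b f) (add c g) (add d h)
    add _ _ = error "add: blocks of unequal order"

    -- Multiplication by the 2x2 block recurrence; the four quadrant
    -- results are independent, hence natural units for parallelism.
    mul :: Num a => QT a -> QT a -> QT a
    mul Zero _ = Zero
    mul _ Zero = Zero
    mul (Leaf x)       (Leaf y)       = Leaf (x * y)
    mul (Node a b c d) (Node e f g h) =
      node (add (mul a e) (mul b g)) (add (mul a f) (mul b h))
           (add (mul c e) (mul d g)) (add (mul c f) (mul d h))
    mul _ _ = error "mul: blocks of unequal order"

The pointer and recursion overhead of this representation, relative to a flat dense array, is the kind of overhead the abstract judges tolerable.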

This work is particularly important because of the significance of this family of problems: progress would be of simultaneous use in decomposing algorithms over traditional vector multiprocessors, and would motivate practical interest in highly parallel functional architectures.
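
As background for the FFT result claimed above: any such vector algorithm rests on the radix-2 splitting of a size-n transform into two size-n/2 transforms over the even- and odd-indexed elements. A list-based Haskell sketch of that recursion (the paper's own contribution, a matrix derivation of the tree algorithm, is not reproduced here):

    import Data.Complex

    -- Radix-2 decimation-in-time FFT; the input length must be a power
    -- of two. The even/odd split mirrors descending one level of a
    -- binary tree vector.
    fft :: [Complex Double] -> [Complex Double]
    fft [x] = [x]
    fft xs  = zipWith (+) ys ts ++ zipWith (-) ys ts
      where
        (evens, odds) = deinterleave xs
        ys = fft evens                 -- transform of even-indexed half
        zs = fft odds                  -- transform of odd-indexed half
        n  = length xs
        ts = [ cis (-2 * pi * fromIntegral k / fromIntegral n) * z
             | (k, z) <- zip [0 ..] zs ]   -- twiddle factors applied

    -- Split a list into its even- and odd-indexed elements.
    deinterleave :: [a] -> ([a], [a])
    deinterleave (x : y : rest) =
      let (es, os) = deinterleave rest in (x : es, y : os)
    deinterleave xs = (xs, [])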

CR Categories and Subject Descriptors

C.1.2 [Multiple Data Stream Architectures (Multiprocessors)]: Array and vector processors; Parallel processors
D.1.1 [Applicative (Functional) Programming Techniques]
G.1.3 [Numerical Linear Algebra]: Sparse and very large systems
E.1 [Data Structures]: Trees
F.2.1 [Numerical Algorithms and Problems]: Computation of fast Fourier transform

General Terms

Algorithms 

Copyright information

© Springer-Verlag Berlin Heidelberg 1987

Authors and Affiliations

  • David S. Wise
    1. Computer Science Department, Indiana University, Bloomington
