Efficient Gram-Schmidt orthogonalisation on an array processor

  • M. Clint
  • J. S. Weston
  • J. B. Flannagan
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 854)


The orthogonalisation of sets of vectors is an important operation in many applications of linear algebra. Among these are the computation of subsets of the eigenvalues and eigenvectors of symmetric matrices, the solution of least squares problems, and the computation of the singular value decomposition. A standard method for orthogonalising sets of vectors is the modified Gram-Schmidt (MGS) algorithm. With the advent of a variety of novel-architecture computers, it is essential to investigate how efficiently standard numerical algorithms may be implemented on them. In this paper the performances, on an array processor, of a straightforward implementation of MGS and of a new parallel version of the algorithm are compared. The array processor used is the DAP 510. It is shown that the use of special facilities provided by the machine considerably improves the performance of both formulations of the algorithm and that, for this machine, the performance of the straightforward implementation is better than that of the new version. However, for some particularly important applications in which the lengths of the vectors involved are systematically extended or reduced, the performance of the new version of the algorithm is significantly better than that of the standard one.
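For orientation, the classical sequential form of the MGS algorithm discussed in the abstract can be sketched as below. This is a minimal NumPy illustration of the textbook algorithm, not the paper's DAP 510 array-processor formulation; the function name `mgs` is chosen here for illustration.

```python
import numpy as np

def mgs(A):
    """Modified Gram-Schmidt orthogonalisation of the columns of A.

    Returns Q (orthonormal columns) and upper-triangular R with A = Q @ R.
    Unlike classical Gram-Schmidt, each column is orthogonalised against
    q_k as soon as q_k is formed, which improves numerical stability.
    """
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    Q = A.copy()
    R = np.zeros((n, n))
    for k in range(n):
        # Normalise the k-th column to obtain q_k.
        R[k, k] = np.linalg.norm(Q[:, k])
        Q[:, k] /= R[k, k]
        # Immediately remove the q_k component from every remaining column.
        for j in range(k + 1, n):
            R[k, j] = Q[:, k] @ Q[:, j]
            Q[:, j] -= R[k, j] * Q[:, k]
    return Q, R
```

The inner loop over the remaining columns is the part that parallel formulations such as the one studied in the paper distribute across processing elements, since the updates to the columns `j > k` are mutually independent.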


Keywords: Orthogonalisation · Modified Gram-Schmidt · parallelism · array processors





Copyright information

© Springer-Verlag Berlin Heidelberg 1994

Authors and Affiliations

  • M. Clint (1)
  • J. S. Weston (2)
  • J. B. Flannagan (1)

  1. Department of Computer Science, The Queen's University of Belfast, Belfast, UK
  2. Department of Computing Science, The University of Ulster at Coleraine, Coleraine, UK
