Programming Discipline on Vector Computers: “Vectors” as a Data-Type and Vector Algorithms

  • Alain Bossavit


Vector machines perform well on vectors. The purpose of this paper is not primarily to explain how and why; where we do, it is only insofar as necessary to arrive at a functional characterization of this class of machines. Two concepts seem basic in this respect: extension and regular representation. "Extension", defined below, is a functor by which a scalar arithmetic operation is extended to vectors. "Regularity" means that vectors should be stored at memory locations in arithmetic progression. "Vector operations" are, first, the vector extensions of scalar ones, and next, all operations compatible with the regularity constraint. All of this is explained and justified at length in the first part of this paper, with many references to the CRAY-1, so this part may be read as a presentation of vector computers, especially the CRAY. But the real point is to define an abstract model of vector computers, rich enough to take their special characteristics into account, yet simple enough not to obfuscate the essential ideas behind vector algorithms.
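The two notions in the abstract can be illustrated with a small sketch (the names below are ours, not the paper's): "extension" lifts a scalar operation to an elementwise operation on whole vectors, and a "regular" vector is one whose elements sit at memory addresses in arithmetic progression, i.e., a base address plus a constant stride.

```python
# Hypothetical sketch of "extension" and "regular representation";
# identifiers are illustrative, not taken from the paper.

def extend(op):
    """'Extension': lift a scalar binary operation to a vector operation
    acting elementwise on two vectors of equal length."""
    def vector_op(xs, ys):
        assert len(xs) == len(ys)
        return [op(x, y) for x, y in zip(xs, ys)]
    return vector_op

vadd = extend(lambda a, b: a + b)   # the vector extension of scalar '+'

class RegularVector:
    """'Regular representation': a view of memory cells whose addresses
    form an arithmetic progression base, base+stride, base+2*stride, ..."""
    def __init__(self, memory, base, stride, length):
        self.memory, self.base = memory, base
        self.stride, self.length = stride, length
    def __len__(self):
        return self.length
    def __getitem__(self, i):
        if not 0 <= i < self.length:
            raise IndexError(i)
        return self.memory[self.base + i * self.stride]

mem = list(range(100))                       # a toy linear memory
v = RegularVector(mem, base=10, stride=3, length=4)   # cells 10, 13, 16, 19
print(list(v))                               # [10, 13, 16, 19]
print(vadd([1, 2, 3], [10, 20, 30]))         # [11, 22, 33]
```

The stride-based view is why such machines favor vectors laid out this way: a single (base, stride, length) triple describes the whole operand stream to the hardware.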


Keywords: Dependence Graph, Clock Period, Linear Recurrence, Vector Computer, Vector Operation





Copyright information

© Plenum Press, New York 1985

Authors and Affiliations

  • Alain Bossavit
    1. Etudes et Recherches, Electricité de France, Clamart, France
