
Mathematical Programming, Volume 56, Issue 1–3, pp. 1–30

A hierarchical algorithm for making sparse matrices sparser

  • S. Frank Chang
  • S. Thomas McCormick

Abstract

If A is the (sparse) coefficient matrix of linear equality constraints, for what nonsingular T is Â = TA as sparse as possible, and how can it be efficiently computed? An efficient algorithm for this Sparsity Problem (SP) would be a valuable pre-processor for linearly constrained optimization problems. In this paper we develop a two-pass approach to solve SP. Pass 1 builds a combinatorial structure on the rows of A which hierarchically decomposes them into blocks; this determines the structure of the optimal transformation matrix T. In Pass 2, we use the information about T as a road map to do block-wise partial Gauss-Jordan elimination on A. Two block-aggregation strategies are also suggested that could further reduce the time spent in Pass 2. Computational results indicate that this approach to increasing sparsity produces significant net reductions in simplex solution time.
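
The abstract only outlines the two passes, so the following sketch may help make the idea concrete. It is a minimal illustration, not the authors' Hierarchical Algorithm: it shows the kind of row operation Pass 2 performs, a partial Gauss-Jordan step restricted to one block of rows, with the multipliers recorded in a nonsingular T so that TA has fewer nonzeros than A. The function name, the choice of block and pivot, and the toy matrix are all invented for illustration; in the paper, Pass 1's hierarchical decomposition is what supplies this information.

```python
# Illustrative sketch only -- not the paper's Hierarchical Algorithm.
# It demonstrates the basic mechanism of Pass 2: row operations, recorded
# in a nonsingular T, that make T @ A sparser than A.
import numpy as np

def eliminate_within_block(A, block_rows, pivot_row, pivot_col, tol=1e-12):
    """Zero column `pivot_col` in `block_rows` (except `pivot_row`) by
    subtracting multiples of the pivot row.  Returns (T, T @ A), where T
    records the row operations and has unit diagonal, hence is nonsingular."""
    m = A.shape[0]
    T = np.eye(m)
    piv = A[pivot_row, pivot_col]
    if abs(piv) < tol:
        raise ValueError("pivot entry is (numerically) zero")
    for i in block_rows:
        if i != pivot_row and abs(A[i, pivot_col]) > tol:
            T[i, pivot_row] = -A[i, pivot_col] / piv
    return T, T @ A

if __name__ == "__main__":
    # Toy example (invented): rows 0-2 form one block sharing columns 0-1,
    # so one elimination step against row 0 makes rows 1 and 2 sparser.
    A = np.array([[1., 2., 0., 0.],
                  [1., 2., 3., 0.],
                  [1., 2., 0., 4.],
                  [0., 0., 5., 6.]])
    T, TA = eliminate_within_block(A, block_rows=[0, 1, 2],
                                   pivot_row=0, pivot_col=0)
    print("nonzeros before:", np.count_nonzero(A))   # 10
    print("nonzeros after :", np.count_nonzero(TA))  # 6
```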

Keywords

Linear equality, mathematical method, computational result, equality constraint, efficient algorithm

Copyright information

© The Mathematical Programming Society, Inc. 1992

Authors and Affiliations

  • S. Frank Chang, GTE Laboratories, Inc., Waltham, USA
  • S. Thomas McCormick, Management Science Division, Faculty of Commerce and Business Administration, The University of British Columbia, Vancouver, Canada
