Numerical experiments with partially separable optimization problems
In this paper, we present numerical experiments with an algorithm that exploits the partial separability of an optimization problem. This research is motivated by the very large number of minimization problems in many variables that have this property. The results discussed cover both the unconstrained and bound-constrained cases, as well as the numerical estimation of gradient vectors. We show that exploiting the underlying structure can lead to efficient algorithms, especially when the problem dimension is large.
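To make the notion of partial separability concrete, the following sketch (an illustration under our own assumptions, not code from the paper) shows an objective f(x) = Σᵢ fᵢ(x) in which each element function fᵢ depends on only two of the n variables; the full gradient is then assembled from small element gradients, which is the structural property such algorithms exploit:

```python
import numpy as np

# Illustrative partially separable test function (chained Rosenbrock-style
# elements): f(x) = sum_i [ 100*(x[i+1] - x[i]^2)^2 + (1 - x[i])^2 ].
# Each element function touches only the variable pair {i, i+1}.

def element_value_grad(x, i):
    """Value and two-component gradient of the i-th element function."""
    t = x[i + 1] - x[i] ** 2
    val = 100.0 * t ** 2 + (1.0 - x[i]) ** 2
    g = np.empty(2)
    g[0] = -400.0 * t * x[i] - 2.0 * (1.0 - x[i])  # derivative w.r.t. x[i]
    g[1] = 200.0 * t                               # derivative w.r.t. x[i+1]
    return val, g

def f_and_grad(x):
    """Assemble f(x) and its full gradient from the element contributions.

    Because each element involves only a few variables, an algorithm can
    store and update small element Hessians instead of a dense n-by-n
    matrix, and can estimate element gradients by finite differences in
    only those few variables."""
    n = len(x)
    f = 0.0
    grad = np.zeros(n)
    for i in range(n - 1):
        val, g = element_value_grad(x, i)
        f += val
        grad[i] += g[0]
        grad[i + 1] += g[1]
    return f, grad

x = np.ones(10)  # the minimizer of this particular test function
f, g = f_and_grad(x)
print(f, np.linalg.norm(g))  # both are 0 at x = ones
```

The same element-by-element decomposition is what makes finite-difference gradient estimation cheap in the partially separable setting: each element gradient requires perturbing only the handful of variables that element actually uses.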