Nongradient minimization methods for parallel processing computers, part 2
This paper analyzes the mathematical behavior of nongradient parallel minimization algorithms. The convergence of parallel synchronous iterative procedures corresponding to linearly independent direction methods and to mutually conjugate direction methods is discussed. For the latter, finite termination on quadratic objective functions and convergence on sufficiently smooth nonquadratic objective functions are proved.
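The finite-termination property mentioned above can be illustrated with a small sketch (not the paper's parallel algorithm; a standard sequential construction for intuition): on a quadratic f(x) = ½xᵀAx − bᵀx with symmetric positive definite A, exact line searches along n mutually A-conjugate directions reach the minimizer in exactly n steps.

```python
import numpy as np

# Sketch: finite termination of conjugate-direction minimization on a quadratic.
# All names and the test problem below are illustrative assumptions.

rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # symmetric positive definite Hessian
b = rng.standard_normal(n)           # linear term of f(x) = 0.5 x'Ax - b'x

# Build mutually A-conjugate directions by Gram-Schmidt conjugation
# of the coordinate basis: d_i' A d_j = 0 for i != j.
dirs = []
for i in range(n):
    d = np.eye(n)[i].copy()
    for dj in dirs:
        d -= (d @ A @ dj) / (dj @ A @ dj) * dj
    dirs.append(d)

# One exact line search along each conjugate direction.
x = np.zeros(n)
for d in dirs:
    alpha = d @ (b - A @ x) / (d @ A @ d)   # minimizer of f(x + alpha d)
    x = x + alpha * d

# After n steps, x coincides with the exact minimizer A^{-1} b.
print(np.allclose(x, np.linalg.solve(A, b)))
```

The same n line searches could in principle be distributed across processors, which is the setting the paper studies; the sketch only demonstrates the underlying quadratic-termination property.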
Key Words: Parallel algorithms, unconstrained optimization, nongradient methods