Primal-Dual Nonlinear Rescaling Method for Convex Optimization

  • R. Polyak
  • I. Griva

Abstract

In this paper, we consider a general primal-dual nonlinear rescaling (PDNR) method for convex optimization with inequality constraints. We prove the global convergence of the PDNR method and estimate the error bounds for the primal and dual sequences. In particular, we prove that, under the standard second-order optimality conditions, the error bounds for the primal and dual sequences converge to zero at a linear rate. Moreover, for any given ratio 0 < γ < 1, there is a fixed scaling parameter kγ > 0 such that each PDNR step shrinks the primal-dual error bound by at least a factor γ for any k ≥ kγ. The PDNR solver was tested on a variety of NLP problems, including the constrained optimization problems (COPS) set. The results obtained show that the PDNR solver is numerically stable and achieves high accuracy. Moreover, for most of the problems solved, the number of Newton steps is practically independent of the problem size.
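
The core of the PDNR approach is the nonlinear rescaling (NR) scheme: each constraint c_i(x) ≥ 0 is replaced by the equivalent constraint (1/k)ψ(k c_i(x)) ≥ 0, where ψ is a smooth concave transformation with ψ(0) = 0 and ψ'(0) = 1, and approximate minimization of the rescaled Lagrangian alternates with the multiplier update λ_i ← λ_i ψ'(k c_i(x)). Below is a minimal sketch of that outer scheme under the modified log-barrier transformation ψ(t) = log(1 + t) of Ref. 13. The toy problem, the Nelder-Mead inner solver, and all parameter values are illustrative assumptions, not the authors' implementation; the paper's PDNR method instead applies Newton steps to the primal-dual system rather than fully minimizing the rescaled Lagrangian at each step.

```python
# A minimal sketch of the nonlinear rescaling (NR) multiplier scheme that
# underlies PDNR, using psi(t) = log(1 + t) (psi(0) = 0, psi'(0) = 1).
# Problem data, solver choice, and parameters below are illustrative
# assumptions, not the authors' PDNR implementation.
import numpy as np
from scipy.optimize import minimize

def nr_method(f, c, x0, k=10.0, outer_iters=20, tol=1e-6):
    """Approximately solve  min f(x)  s.t.  c(x) >= 0  (componentwise)."""
    x = np.asarray(x0, dtype=float)
    lam = np.ones(len(c(x)))                      # initial multiplier estimate

    for _ in range(outer_iters):
        def nr_lagrangian(z):
            t = 1.0 + k * c(z)                    # barrier domain: t > 0
            if np.any(t <= 0.0):
                return np.inf                     # reject points outside it
            # L(z, lam, k) = f(z) - (1/k) * sum_i lam_i * psi(k * c_i(z))
            return f(z) - np.sum(lam * np.log(t)) / k

        # Approximate primal minimizer of the NR Lagrangian (warm-started).
        x = minimize(nr_lagrangian, x, method="Nelder-Mead",
                     options={"xatol": 1e-9, "fatol": 1e-9}).x
        lam_new = lam / (1.0 + k * c(x))          # lam_i * psi'(k * c_i(x))
        if np.max(np.abs(lam_new - lam)) < tol:   # multipliers stabilized
            return x, lam_new
        lam = lam_new
    return x, lam

# Toy problem: min x1^2 + x2^2  s.t.  x1 + x2 >= 2; solution (1, 1), lam* = 2.
x, lam = nr_method(lambda z: z @ z,
                   lambda z: np.array([z[0] + z[1] - 2.0]),
                   x0=[2.0, 0.0])
print(x, lam)   # expect approximately [1. 1.] and [2.]
```

On this toy problem the multipliers settle at λ* = 2 within a handful of outer iterations, with the error shrinking by a roughly constant factor per step, which is the linear-rate behavior the abstract describes.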

Nonlinear rescaling · duality · Lagrangian · primal-dual methods · multiplier methods

References

  1. Wright, S., Primal-Dual Interior-Point Methods, SIAM, Philadelphia, Pennsylvania, 1997.
  2. Megiddo, N., Pathways to the Optimal Set in Linear Programming, Progress in Mathematical Programming: Interior-Point and Related Methods, Edited by N. Megiddo, Springer-Verlag, New York, NY, pp. 131–158, 1989.
  3. Mehrotra, S., On the Implementation of a Primal-Dual Interior-Point Method, SIAM Journal on Optimization, Vol. 2, pp. 575–601, 1992.
  4. Potra, F., and Wright, S., Interior-Point Methods, Technical Report, Department of Mathematics and Statistics, University of Maryland Baltimore County, 1999.
  5. Lustig, I., Marsten, R., and Shanno, D., Computational Experience with a Globally Convergent Primal-Dual Predictor-Corrector Algorithm for Linear Programming, Mathematical Programming, Vol. 66, pp. 123–135, 1994.
  6. Vanderbei, R., Symmetric Quasidefinite Matrices, SIAM Journal on Optimization, Vol. 5, pp. 100–113, 1995.
  7. Zhang, Y., Solving Large-Scale Linear Programs by Interior-Point Methods under MATLAB Environment, Department of Computational and Applied Mathematics, Rice University, 1996.
  8. Zhang, Y., Tapia, R., and Dennis, J., On the Superlinear and Quadratic Convergence of Primal-Dual Interior-Point Linear Programming Algorithms, SIAM Journal on Optimization, Vol. 2, pp. 304–324, 1992.
  9. Forsgren, A., and Gill, P., Primal-Dual Interior Methods for Nonconvex Nonlinear Programming, SIAM Journal on Optimization, Vol. 8, pp. 1132–1152, 1998.
  10. Gay, D., Overton, M., and Wright, M., A Primal-Dual Interior Method for Nonconvex Nonlinear Programming, Technical Report 97-4-08, Computing Science Research Center, Bell Laboratories, Murray Hill, New Jersey, 1997.
  11. Nocedal, J., and Wright, S., Numerical Optimization, Springer, New York, NY, 1999.
  12. Shanno, D. F., and Vanderbei, R. J., An Interior-Point Algorithm for Nonconvex Nonlinear Programming, Computational Optimization and Applications, Vol. 13, pp. 231–252, 1999.
  13. Polyak, R., Modified Barrier Functions, Mathematical Programming, Vol. 54, pp. 177–222, 1992.
  14. Polyak, R., and Teboulle, M., Nonlinear Rescaling and Proximal-Like Methods in Convex Optimization, Mathematical Programming, Vol. 76, pp. 265–284, 1997.
  15. Polyak, R., Griva, I., and Sobieski, J., The Newton Log-Sigmoid Method in Constrained Optimization, A Collection of Technical Papers, 7th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, Vol. 3, pp. 2193–2201, 1998.
  16. Polyak, R., Log-Sigmoid Multipliers Method in Constrained Optimization, Annals of Operations Research, Vol. 101, pp. 427–460, 2001.
  17. Polyak, R., Nonlinear Rescaling vs. Smoothing Technique in Convex Optimization, Mathematical Programming, Vol. 92A, pp. 197–235, 2002.
  18. Bondarenko, A. S., Bortz, D. M., and Moré, J. J., COPS: Large-Scale Nonlinearly Constrained Optimization Problems, Technical Report ANL/MCS-TM-237, Mathematics and Computer Science Division, Argonne National Laboratory, 1999.
  19. Auslender, A., Cominetti, R., and Haddou, M., Asymptotic Analysis for Penalty and Barrier Methods in Convex and Linear Programming, Mathematics of Operations Research, Vol. 22, pp. 43–62, 1997.
  20. Chen, C., and Mangasarian, O., Smoothing Methods for Convex Inequalities and Linear Complementarity Problems, Mathematical Programming, Vol. 71, pp. 51–69, 1995.
  21. Fiacco, A., and McCormick, G., Nonlinear Programming: Sequential Unconstrained Minimization Techniques, SIAM Classics in Applied Mathematics, SIAM, Philadelphia, Pennsylvania, 1990.
  22. Kort, B. W., and Bertsekas, D. P., Multiplier Methods for Convex Programming, Proceedings of the IEEE Conference on Decision and Control, San Diego, California, pp. 428–432, 1973.
  23. Ben-Tal, A., Yuzefovich, I., and Zibulevsky, M., Penalty/Barrier Multiplier Method for Minimax and Constrained Smooth Convex Problems, Research Report 9/92, Optimization Laboratory, Faculty of Industrial Engineering and Management, Technion, Israel, 1992.
  24. Breitfeld, M., and Shanno, D., Experience with a Modified Log-Barrier Method for Nonlinear Programming, Annals of Operations Research, Vol. 62, pp. 439–464, 1996.
  25. Ng, E., and Peyton, B. W., Block Sparse Cholesky Algorithms on Advanced Uniprocessor Computers, SIAM Journal on Scientific Computing, Vol. 14, pp. 1034–1056, 1993.
  26. Polyak, R., and Griva, I., Nonlinear Rescaling in Constrained Optimization: Primal-Dual Approach, Technical Report SEOR-02-05, George Mason University, Fairfax, Virginia, 2002; see http://mason.gmu.edu/~rpolyak/papers.html.
  27. Smale, S., Newton's Method Estimates from Data at One Point, The Merging of Disciplines in Pure, Applied, and Computational Mathematics, Edited by R. E. Ewing, Springer, New York, NY, pp. 185–196, 1986.
  28. Tikhonov, A., and Arseniev, V., Méthodes de Résolution de Problèmes Mal Posés, MIR, Moscow, Russia, 1974.
  29. Debreu, G., Definite and Semidefinite Quadratic Forms, Econometrica, Vol. 20, pp. 295–300, 1952.
  30. Kantorovich, L. V., Functional Analysis and Applied Mathematics, Uspekhi Matematicheskikh Nauk, Vol. 3, pp. 89–185, 1948.
  31. Nesterov, Yu., and Nemirovsky, A., Self-Concordant Functions and Polynomial-Time Methods in Convex Programming, CEMI Academy of Sciences, Moscow, Russia, 1989.
  32. Polyak, B. T., Introduction to Optimization, Optimization Software, New York, NY, 1987.

Copyright information

© Plenum Publishing Corporation 2004

Authors and Affiliations

  • R. Polyak (1)
  • I. Griva (1)

  1. Department of Systems Engineering and Operations Research and Mathematical Sciences Department, George Mason University, Fairfax, Virginia
