
Annals of Operations Research, Volume 46, Issue 1, pp 157–178

Error bounds and convergence analysis of feasible descent methods: a general approach

  • Zhi-Quan Luo
  • Paul Tseng
Part I: Surveys

Abstract

We survey and extend a general approach to analyzing the convergence and the rate of convergence of feasible descent methods that does not require any nondegeneracy assumption on the problem. This approach is based on a certain error bound for estimating the distance to the solution set and is applicable to a broad class of methods.
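As a concrete illustration (a minimal sketch, not taken from the paper itself), gradient projection is one member of the feasible descent family, and the natural residual ‖x − P(x − ∇f(x))‖ is a typical computable quantity that error bounds of this kind relate to the distance from x to the solution set. The box-constrained quadratic instance below is hypothetical.

```python
import numpy as np

# Hypothetical instance: minimize f(x) = 0.5 x'Qx + c'x over the box [0, 1]^n.
# Each gradient-projection step moves along the negative gradient and projects
# back onto the feasible set, so every iterate stays feasible.

def project(x, lo=0.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def natural_residual(x, grad):
    """||x - P(x - grad)||: the computable residual that error bounds of the
    kind surveyed here use to estimate the distance to the solution set."""
    return np.linalg.norm(x - project(x - grad))

def gradient_projection(Q, c, x0, step, tol=1e-8, max_iter=10_000):
    """Feasible descent iteration: x <- P(x - step * grad f(x))."""
    x = x0
    for _ in range(max_iter):
        g = Q @ x + c
        if natural_residual(x, g) <= tol:
            break
        x = project(x - step * g)
    return x

# Small strongly convex example; step < 2 / lambda_max(Q) ensures descent.
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
c = np.array([-1.0, -0.5])
x_star = gradient_projection(Q, c, x0=np.zeros(2), step=0.4)
```

For strongly convex quadratics the residual bounds the distance to the (unique) solution up to a constant factor, which is what drives linear convergence rates in analyses of this type.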

Keywords

Error bound · Linear convergence · Feasible descent methods



Copyright information

© J.C. Baltzer AG, Science Publishers 1993

Authors and Affiliations

  • Zhi-Quan Luo (1)
  • Paul Tseng (2)
  1. Department of Electrical and Computer Engineering, McMaster University, Hamilton, Canada
  2. Department of Mathematics, GN-50, University of Washington, Seattle, WA, USA
