
Primal Methods

Chapter in Linear and Nonlinear Programming, part of the book series International Series in Operations Research & Management Science (ISOR, volume 228).



In this chapter we initiate the presentation, analysis, and comparison of algorithms designed to solve constrained minimization problems. The four chapters that treat such problems roughly correspond to the following classification scheme: for a constrained minimization problem with n variables and m constraints, methods can be devised that work in spaces of dimension n − m, n, m, or n + m, and each of the following chapters corresponds to methods in one of these spaces. The methods of the different chapters thus represent quite different approaches and are founded on different aspects of the theory. There are, however, strong interconnections among them, both in their final implemented form and in their performance. Indeed, a common theme soon emerges: the rates of convergence of most practical algorithms are determined by the Lipschitz constants and the structure of the Hessian of the Lagrangian, much as the structure of the Hessian of the objective function determines the rates of convergence of a wide assortment of methods for unconstrained problems. Thus, although the various algorithms of these chapters differ substantially in their motivation, they are ultimately governed by a common set of principles.
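As a concrete, purely illustrative instance of a method that works in the full n-dimensional space of the original variables, the following sketch applies projected gradient descent (in the spirit of the gradient projection methods treated in this chapter) to a small equality-constrained quadratic. The problem data `c`, `a`, `b`, the step size, and the iteration count are all made up for illustration; this is a sketch, not the chapter's algorithm.

```python
import numpy as np

def project_affine(x, a, b):
    """Orthogonal projection of x onto the hyperplane {x : a^T x = b}."""
    return x - a * (a @ x - b) / (a @ a)

def projected_gradient(c, a, b, step=0.5, iters=100):
    """Minimize 0.5*||x - c||^2 subject to a^T x = b by projected gradient steps."""
    x = project_affine(np.zeros_like(c), a, b)  # start from a feasible point
    for _ in range(iters):
        grad = x - c                            # gradient of 0.5*||x - c||^2
        x = project_affine(x - step * grad, a, b)
    return x

# Hypothetical problem data for the sketch.
c = np.array([3.0, 1.0])
a = np.array([1.0, 1.0])
b = 2.0
x_star = projected_gradient(c, a, b)
# For this problem the closed-form minimizer is x* = c - a*(a^T c - b)/(a^T a),
# i.e. [2, 0], which the iteration approaches geometrically.
```

Every iterate stays feasible because each gradient step is followed by an exact projection back onto the constraint set, which is the defining feature of a primal method of this kind.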



  1.

    Actually a more standard procedure is to define the pseudoinverse \(\overline {\mathbf {L}}_{k}^{\dagger }\), and then \(\mathbf {z}=\overline {\mathbf {L}}_{k}^{\dagger }{\mathbf {y}}_{k}\).

  2.

The exact solution is obviously symmetric about the center of the chain, and hence the problem could be reduced to one with ten links and only one constraint. However, this symmetry disappears if the first constraint value is specified as nonzero. Therefore, for generality, we solve the full chain problem.
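The pseudoinverse step described in footnote 1 can be sketched with NumPy; the particular matrix and right-hand side below are hypothetical stand-ins for \(\overline{\mathbf{L}}_k\) and \(\mathbf{y}_k\).

```python
import numpy as np

# Hypothetical data standing in for L_k (2x3, full row rank) and y_k.
L_k = np.array([[1.0, 2.0, 0.0],
                [0.0, 1.0, 1.0]])
y_k = np.array([1.0, 2.0])

# z = L_k^dagger y_k, the pseudoinverse solution from footnote 1.
z = np.linalg.pinv(L_k) @ y_k

# For a full-row-rank L_k this is the minimum-norm solution of L_k z = y_k.
assert np.allclose(L_k @ z, y_k)
```

When the system is underdetermined, as here, the pseudoinverse selects the solution of smallest Euclidean norm among all vectors satisfying the constraints.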





Copyright information

© 2021 Springer Nature Switzerland AG


Cite this chapter

Luenberger, D.G., Ye, Y. (2021). Primal Methods. In: Linear and Nonlinear Programming. International Series in Operations Research & Management Science, vol 228. Springer, Cham.
