Computational Optimization and Applications

Volume 51, Issue 1, pp 125–157

Adaptive constraint reduction for convex quadratic programming

  • Jin Hyuk Jung
  • Dianne P. O’Leary
  • André L. Tits

Abstract

We propose an adaptive, constraint-reduced, primal-dual interior-point algorithm for convex quadratic programming with many more inequality constraints than variables. We reduce the computational effort by assembling, instead of the exact normal-equation matrix, an approximate matrix from a well-chosen index set that includes the indices of the constraints that appear most critical. Starting with a large portion of the constraints, our proposed scheme excludes more unnecessary constraints at later iterations. We provide proofs of global convergence and of the quadratic local convergence rate of an affine-scaling variant. Numerical experiments on random problems, on a data-fitting problem, and on a problem in array pattern synthesis show the effectiveness of the constraint reduction in decreasing the time per iteration without significantly affecting the number of iterations. We note that a similar constraint-reduction approach can be applied to algorithms of Mehrotra’s predictor-corrector type, although no convergence theory is supplied.
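
To make the matrix-assembly step concrete, the sketch below forms a constraint-reduced normal-equation matrix for a QP written as minimize ½xᵀHx + cᵀx subject to Ax ≥ b, with slacks s = Ax − b and dual variables λ, so that the full normal-equation matrix is H + AᵀS⁻¹ΛA. This is a minimal illustration of the idea described in the abstract, not the paper’s algorithm: the "smallest slack" selection rule stands in for the adaptive rule developed in the paper, and all names (reduced_normal_matrix, lam, q) are illustrative.

```python
import numpy as np

def reduced_normal_matrix(H, A, lam, s, q):
    """Assemble M_Q = H + A_Q^T diag(lam_Q / s_Q) A_Q over a working set Q
    of q constraints (chosen here, for illustration, as those with the
    smallest slacks)."""
    Q = np.argsort(s)[:q]                    # stand-in for the paper's adaptive choice of Q
    D_Q = lam[Q] / s[Q]                      # positive diagonal scaling lam_i / s_i
    A_Q = A[Q, :]
    # H + sum_{i in Q} (lam_i / s_i) a_i a_i^T, costing O(q n^2) instead of O(m n^2)
    return H + A_Q.T @ (D_Q[:, None] * A_Q)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m, q = 5, 2000, 60                    # many more constraints than variables
    H = np.eye(n)                            # convex quadratic term
    A = rng.standard_normal((m, n))          # constraint matrix, rows a_i^T
    lam = rng.uniform(0.1, 1.0, m)           # strictly positive dual variables
    s = rng.uniform(0.1, 1.0, m)             # strictly positive slacks
    M_Q = reduced_normal_matrix(H, A, lam, s, q)
    print(M_Q.shape)                         # (n, n)
```

With |Q| much smaller than m, the dominant per-iteration cost of forming the normal-equation matrix drops from O(m n^2) to O(|Q| n^2), which is the source of the per-iteration savings reported in the abstract's experiments.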

Keywords

Convex quadratic programming · Constraint reduction · Column generation · Primal-dual interior-point method

Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

  • Jin Hyuk Jung (1)
  • Dianne P. O’Leary (2)
  • André L. Tits (3)

  1. Department of Computer Science, University of Maryland, College Park, USA
  2. Department of Computer Science and Institute for Advanced Computer Studies, University of Maryland, College Park, USA
  3. Department of Electrical and Computer Engineering and the Institute for Systems Research, University of Maryland, College Park, USA
