Approximation accuracy, gradient methods, and error bound for structured convex optimization

Full Length Paper, Series B

Mathematical Programming

Abstract

Convex optimization problems arising in applications, possibly as approximations of intractable problems, are often structured and large scale. When the data are noisy, it is of interest to bound the solution error relative to the (unknown) solution of the original noiseless problem. Related to this is an error bound for the linear convergence analysis of first-order gradient methods for solving these problems. Example applications include compressed sensing, variable selection in regression, TV-regularized image denoising, and sensor network localization.
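The first-order gradient methods the abstract refers to can be illustrated on the simplest structured problem it mentions, l1-regularized least squares from compressed sensing. The following is a minimal proximal-gradient (ISTA-type) sketch, not code from the paper; the function names and step-size choice are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Proximal gradient method for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    Uses the constant step size 1/L, where L = ||A||_2^2 is the Lipschitz
    constant of the gradient of the smooth part.
    """
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)          # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)  # prox step
    return x
```

Error bounds of the kind studied in the paper are what guarantee that such iterations converge linearly on structured problems even though the objective is not strongly convex.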



Author information


Corresponding author

Correspondence to Paul Tseng.

Additional information

This work was supported by National Science Foundation grant DMS-0511283.

P. Tseng was invited to speak on this paper at ISMP 2009. Tragically, Paul went missing on August 13, 2009, while kayaking in China. The guest editors thank Michael Friedlander (mpf@cs.ubc.ca) for carrying out minor revisions to the submitted manuscript as suggested by the referees.

Rights and permissions

Reprints and permissions

Cite this article

Tseng, P. Approximation accuracy, gradient methods, and error bound for structured convex optimization. Math. Program. 125, 263–295 (2010). https://doi.org/10.1007/s10107-010-0394-2

