Exploiting problem structure in optimization under uncertainty via online convex optimization

  • Nam Ho-Nguyen
  • Fatma Kılınç-Karzan
Full Length Paper, Series A

Abstract

In this paper, we consider two paradigms that are developed to account for uncertainty in optimization models: robust optimization (RO) and joint estimation-optimization (JEO). We examine recent developments on efficient and scalable iterative first-order methods for these problems, and show that these iterative methods can be viewed through the lens of online convex optimization (OCO). The standard OCO framework has seen much success for its ability to handle decision-making in dynamic, uncertain, and even adversarial environments. Nevertheless, our applications of interest call for further flexibility in OCO, which we obtain via three simple modifications to standard OCO assumptions: we introduce two new concepts, weighted regret and online saddle point problems, and study the possibility of making lookahead (anticipatory) decisions. Our analyses demonstrate that these flexibilities introduced into the OCO framework have significant consequences whenever they are applicable. For example, in the strongly convex case, minimizing unweighted regret has a proven optimal bound of \(O(\log(T)/T)\), whereas we show that a bound of \(O(1/T)\) is possible when we consider weighted regret. Similarly, for the smooth case, considering 1-lookahead decisions results in an \(O(1/T)\) bound, compared to \(O(1/\sqrt{T})\) in the standard OCO setting. Consequently, these OCO tools are instrumental in exploiting structural properties of functions and result in improved convergence rates for RO and JEO. In certain cases, our results for RO and JEO match the best known or optimal rates in the corresponding problem classes without data uncertainty.
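
For concreteness, the following display sketches the weighted regret notion informally; the notation here (decision set \(X\), convex losses \(f_t\), decisions \(x_t\), positive weights \(\theta_t\)) is assumed for illustration, and the precise definitions appear in the paper body:

\[ \sum_{t=1}^{T} \theta_t f_t(x_t) \;-\; \min_{x \in X} \sum_{t=1}^{T} \theta_t f_t(x). \]

Uniform weights \(\theta_t \equiv 1\) recover standard regret. The claim above is that, for strongly convex losses, suitably increasing weights (e.g., \(\theta_t \propto t\)) admit an \(O(1/T)\) bound on the average weighted regret, in contrast to the optimal \(O(\log(T)/T)\) bound for the unweighted notion.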

Mathematics Subject Classification

90C25 (Convex programming) · 90C06 (Large-scale problems)


Acknowledgements

The authors wish to thank the review team for their constructive feedback that improved the presentation of the material in this paper. This research is supported in part by NSF Grant CMMI 1454548.


Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature and Mathematical Optimization Society 2018

Authors and Affiliations

  1. Tepper School of Business, Carnegie Mellon University, Pittsburgh, USA
