
Generalized Gauss inequalities via semidefinite programming


A sharp upper bound on the probability of a random vector falling outside a polytope, based solely on the first and second moments of its distribution, can be computed efficiently using semidefinite programming. However, this Chebyshev-type bound tends to be overly conservative since it is determined by a discrete worst-case distribution. In this paper we obtain a less pessimistic Gauss-type bound by imposing the additional requirement that the random vector’s distribution must be unimodal. We prove that this generalized Gauss bound still admits an exact and tractable semidefinite representation. Moreover, we demonstrate that both the Chebyshev and Gauss bounds can be obtained within a unified framework using a generalized notion of unimodality. We also offer new perspectives on the computational solution of generalized moment problems, since we use concepts from Choquet theory instead of traditional duality arguments to derive semidefinite representations for worst-case probability bounds.




Footnote 1. If the noise is Gaussian, then minimum distance decoding is the same as maximum likelihood decoding.

Footnote 2. An Intel(R) Xeon(R) CPU E5540 @ 2.53 GHz machine.

Footnote 3. Historically, in their famous monograph on the limit distributions of sums of independent random variables, Gnedenko and Kolmogorov [12] used a false theorem due to Lapin stating a projection property for unimodal distributions. Chung highlighted the mistake in his English translation of the monograph.


References

  1. Bertsimas, D., Popescu, I.: On the relation between option and stock prices: a convex optimization approach. Oper. Res. 50(2), 358–374 (2002)

  2. Bertsimas, D., Popescu, I.: Optimal inequalities in probability theory: a convex optimization approach. SIAM J. Optim. 15(3), 780–804 (2005)

  3. Bienaymé, I.J.: Considérations à l’appui de la découverte de Laplace sur la loi de probabilité dans la méthode des moindres carrés. Comptes Rendus de l’Académie des Sciences 37, 159–184 (1853)

  4. Billingsley, P.: Convergence of Probability Measures. Wiley Series in Probability and Statistics. Wiley, New York (2009)

  5. Birnbaum, Z., Raymond, J., Zuckerman, H.: A generalization of Tshebyshev’s inequality to two dimensions. Ann. Math. Stat. 18(1), 70–79 (1947)

  6. Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge (2004)

  7. Chebyshev, P.: Des valeurs moyennes. Journal de Mathématiques Pures et Appliquées 12(2), 177–184 (1867)

  8. Cheng, J., Delage, E., Lisser, A.: Distributionally Robust Stochastic Knapsack Problem. Technical report, HEC Montréal (2013)

  9. Delage, E., Ye, Y.: Distributionally robust optimization under moment uncertainty with application to data-driven problems. Oper. Res. 58(3), 595–612 (2010)

  10. Dharmadhikari, S., Joag-Dev, K.: Unimodality, Convexity, and Applications. Probability and Mathematical Statistics, vol. 27. Academic Press, London (1988)

  11. Gauss, C.: Theoria combinationis observationum erroribus minimis obnoxiae, pars prior. Commentationes Societatis Regiae Scientiarum Gottingensis Recentiores 33, 321–327 (1821)

  12. Gnedenko, B., Kolmogorov, A.: Limit Distributions for Sums of Independent Random Variables (trans: Chung, K.L.). Addison-Wesley, Cambridge (1968)

  13. Grundy, B.: Option prices and the underlying asset’s return distribution. J. Financ. 46(3), 1045–1069 (1991)

  14. Hadamard, J.: Sur les problèmes aux dérivées partielles et leur signification physique. Princet. Univ. Bull. 13, 49–52 (1902)

  15. Isii, K.: On sharpness of Tchebycheff-type inequalities. Ann. Inst. Stat. Math. 14(1), 185–197 (1962)

  16. Lanckriet, G., El Ghaoui, L., Bhattacharyya, C., Jordan, M.: A robust minimax approach to classification. J. Mach. Learn. Res. 3, 555–582 (2003)

  17. Lasserre, J.: Global optimization with polynomials and the problem of moments. SIAM J. Optim. 11(3), 796–817 (2001)

  18. Lo, A.: Semi-parametric upper bounds for option prices and expected payoffs. J. Financ. Econ. 19(2), 373–387 (1987)

  19. Löfberg, J.: YALMIP: a toolbox for modeling and optimization in MATLAB. In: Proceedings of the CACSD Conference, pp. 284–289 (2004)

  20. Markov, A.: On Certain Applications of Algebraic Continued Fractions. Ph.D. thesis, St. Petersburg (1884)

  21. Meaux, L., Seaman Jr., J., Boullion, T.: Calculation of multivariate Chebyshev-type inequalities. Comput. Math. Appl. 20(12), 55–60 (1990)

  22. Natarajan, K., Song, M., Teo, C.P.: Persistency model and its applications in choice modeling. Manag. Sci. 55(3), 453–469 (2009)

  23. Nesterov, Y., Nemirovskii, A.: Interior-Point Polynomial Algorithms in Convex Programming. Studies in Applied and Numerical Mathematics, vol. 13. SIAM, Philadelphia (1994)

  24. Olshen, R., Savage, L.: A Generalized Unimodality. Technical report 143, Stanford University (1969)

  25. Phelps, R.: Lectures on Choquet’s Theorem. Lecture Notes in Mathematics, vol. 1757. Springer, Berlin (2001)

  26. Pólik, I., Terlaky, T.: A survey of the S-lemma. SIAM Rev. 49(3), 371–418 (2007)

  27. Popescu, I.: A semidefinite programming approach to optimal-moment bounds for convex classes of distributions. Math. Oper. Res. 30(3), 632–657 (2005)

  28. Rockafellar, R., Wets, R.B.: Variational Analysis. Grundlehren der mathematischen Wissenschaften, vol. 317. Springer, Berlin (1998)

  29. Rogosinski, W.: Moments of non-negative mass. Proc. R. Soc. Lond. Ser. A Math. Phys. Sci. 245(1240), 1–27 (1958)

  30. Sellke, T.: Generalized Gauss–Chebyshev inequalities for unimodal distributions. Metrika 43(1), 107–121 (1996)

  31. Shapiro, A.: On duality theory of conic linear problems. Nonconvex Optim. Appl. 57, 135–155 (2001)

  32. Shapiro, A., Kleywegt, A.: Minimax analysis of stochastic problems. Optim. Methods Softw. 17(3), 523–542 (2002)

  33. Smith, J.: Generalized Chebychev inequalities: theory and applications in decision analysis. Oper. Res. 43(5), 807–825 (1995)

  34. Stellato, B.: Data-Driven Chance Constrained Optimization (2014). doi:10.3929/ethz-a-010266857

  35. Tütüncü, R., Toh, K., Todd, M.: Solving semidefinite–quadratic–linear programs using SDPT3. Math. Progr. 95(2), 189–217 (2003)

  36. Van Parys, B., Kuhn, D., Goulart, P., Morari, M.: Distributionally Robust Control of Constrained Stochastic Systems. Technical report, ETH Zürich (2013)

  37. Vandenberghe, L., Boyd, S., Comanor, K.: Generalized Chebyshev bounds via semidefinite programming. SIAM Rev. 49(1), 52–64 (2007)

  38. Vorobyov, S., Chen, H., Gershman, A.: On the relationship between robust minimum variance beamformers with probabilistic and worst-case distortionless response constraints. IEEE Trans. Signal Process. 56(11), 5719–5724 (2008)

  39. Xu, H., Caramanis, C., Mannor, S.: Optimization under probabilistic envelope constraints. Oper. Res. 60(3), 682–699 (2012)

  40. Yamada, Y., Primbs, J.: Value-at-risk estimation for dynamic hedging. Int. J. Theor. Appl. Financ. 5(4), 333–354 (2002)

  41. Ye, Y.: Interior Point Algorithms: Theory and Analysis. Wiley Series in Discrete Mathematics and Optimization. Wiley, New York (2011)

  42. Yu, Y.L., Li, Y., Schuurmans, D., Szepesvári, C.: A general projection property for distribution families. In: Advances in Neural Information Processing Systems, pp. 2232–2240 (2009)

  43. Zymler, S., Kuhn, D., Rustem, B.: Distributionally robust joint chance constraints with second-order moment information. Math. Progr. 137(1–2), 167–198 (2013)

  44. Zymler, S., Kuhn, D., Rustem, B.: Worst-case value-at-risk of non-linear portfolios. Manag. Sci. 59(1), 172–188 (2013)


Author information

Correspondence to Bart P. G. Van Parys.

Appendix: Unimodality of ambiguity sets

In Sect. 4 the notion of \(\alpha \)-unimodality was mainly used as a theoretical tool for bridging the gap between the Chebyshev and Gauss inequalities. The purpose of this section is to familiarize the reader with some of the properties and practical applications of \(\alpha \)-unimodal ambiguity sets. This section borrows heavily from the standard reference on unimodal distributions [10] and the technical report [24] where \(\alpha \)-unimodality was first proposed.

Since some properties of \(\alpha \)-unimodal ambiguity sets depend not only on \(\alpha \) but also on the dimension \(n\), we now make this dependence explicit and denote the set of all \(\alpha \)-unimodal distributions supported on \({{\mathbb {R}}}^n\) by \({\fancyscript{P}}_{\alpha , n}\). Taking \(B = {\mathbb {R}}^n\) in Definition 7 shows that \({\fancyscript{P}}_{\alpha , n} = \emptyset \) for all \(\alpha < 0\), while taking \(B\) to be a neighborhood of the origin shows that \({\fancyscript{P}}_{0, n} = \{ \delta _0 \}\); this justifies the condition \(\alpha > 0\) required in Definition 7. As the \(\alpha \)-unimodal ambiguity sets enjoy the nesting property \({\fancyscript{P}}_{\alpha , n} \subseteq {\fancyscript{P}}_{\beta , n}\) whenever \(\alpha \le \beta \le \infty \), we may define the \(\alpha \)-unimodality value of a generic ambiguity set \({\fancyscript{P}}\) as the smallest \(\alpha \) for which \({\fancyscript{P}} \subseteq {\fancyscript{P}}_{\alpha , n}\); by [24, Lemma 1] the infimum over \(\alpha \) is always achieved, so this minimum is well defined. Table 1 reports the \(\alpha \)-unimodality values of some common distributions (that is, singleton ambiguity sets).

Table 1 \(\alpha \)-Unimodality values of some common distributions
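The defining monotonicity condition can be probed numerically. The sketch below assumes the uniform distribution on \([0, 1]\) (star-unimodal about the origin, so \(\alpha = n = 1\)) and an arbitrarily chosen interval \(B = [0.2, 0.5]\); the map \(t \mapsto t^\alpha \, {\mathbb {P}}(B/t)\) should then be non-decreasing on \((0, \infty)\):

```python
import numpy as np

def uniform_mass(lo, hi):
    # P([lo, hi]) under the uniform distribution on [0, 1]
    return max(0.0, min(hi, 1.0) - max(lo, 0.0))

a, b = 0.2, 0.5   # hypothetical Borel set B = [a, b]
alpha = 1.0       # uniform on [0, 1] is star-unimodal about 0, so alpha = n = 1

ts = np.linspace(0.05, 3.0, 400)
f = np.array([t**alpha * uniform_mass(a / t, b / t) for t in ts])

# alpha-unimodality: t^alpha * P(B/t) must be non-decreasing in t
assert np.all(np.diff(f) >= -1e-12)
```

Here \(f(t)\) vanishes for small \(t\) (the rescaled set \(B/t\) escapes the support), grows linearly, and then plateaus at \(t^{\alpha}\,\cdot\, 0.3/t = 0.3\) once \(B/t \subseteq [0,1]\).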

The notion of \(\alpha \)-unimodality was originally introduced after the discovery that the projections of star-unimodal distributions need not be star-unimodal (Footnote 3). Indeed, a situation in which star-unimodality is not preserved under a projection is visualized in Fig. 4. A correct projection property for \(\alpha \)-unimodal distributions is given in the following theorem.

Theorem 5

(Projection property [10]) If the random vector \(\xi \in {{\mathbb {R}}}^n\) has a distribution \({{\mathbb {P}}} \in {\fancyscript{P}}_{\alpha , n}\) for some \(0 < \alpha \le \infty \), and \(A\) is a linear transformation mapping \({\mathbb {R}}^n\) to \({\mathbb {R}}^m\), then the distribution of \(A\xi \) belongs to \({\fancyscript{P}}_{\alpha , m}\).

The projection property of Theorem 5 has great theoretical and practical value because it justifies, for instance, the identity

$$\begin{aligned} \sup _{{{\mathbb {P}}} \in {\fancyscript{P}}_{\alpha , n}(\mu , S)} ~~ {{\mathbb {P}}}( A \xi \notin \varXi ) \quad = \quad \sup _{{{\mathbb {P}}} \in {\fancyscript{P}}_{\alpha , m}(A \mu , A S A^{\top })} ~~ {{\mathbb {P}}}( \xi \notin \varXi )\,, \end{aligned}$$

which can be useful for dimensionality reduction. See [42] for concrete practical applications of this identity. Additionally, Theorem 5 allows us to interpret the elements of \({\fancyscript{P}}_{\alpha , n}\) for \(\alpha \in {\mathbb {N}}\) as projections of star-unimodal distributions on \({\mathbb {R}}^\alpha \).
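The moment bookkeeping behind this identity is elementary: the reduced ambiguity set is parameterized by \(A\mu\) and \(ASA^{\top }\), and for any sample these projected moments coincide exactly with the empirical moments of the projected sample. A minimal sketch (the matrix \(A\) and the surrogate Gaussian sample below are hypothetical choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical projection A : R^3 -> R^2
A = np.array([[1.0, 1.0,  0.0],
              [0.0, 1.0, -1.0]])

X = rng.normal(size=(10_000, 3))   # surrogate sample of xi in R^3
mu = X.mean(axis=0)                # empirical mean
S = X.T @ X / len(X)               # empirical second-moment matrix

Y = X @ A.T                        # projected sample A xi
mu_proj = Y.mean(axis=0)
S_proj = Y.T @ Y / len(Y)

# Moments transform linearly: exactly A mu and A S A^T
assert np.allclose(mu_proj, A @ mu)
assert np.allclose(S_proj, A @ S @ A.T)
```

The nontrivial content of the identity is thus not the moment transformation but the preservation of \(\alpha \)-unimodality under the projection, which is exactly what Theorem 5 supplies.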

Fig. 4

The univariate distribution visualized in the right panel of the figure is not star-unimodal despite being the marginal projection of the uniform distribution on the star-shaped set \(\left\{ \xi \in {\mathbb {R}}^2\ : \ 0\le \xi _1\le 1, 0\le \xi _2\le 1 \right\} \cup \left\{ \xi \in {\mathbb {R}}^2\ : \ {-1}\le \xi _1\le 0, ~{-1}\le \xi _2\le 0 \right\} \), shown in the left panel, under the linear transformation \(A:{\mathbb {R}}^2 \rightarrow {\mathbb {R}}\), \((\xi _1, \xi _2) \mapsto \xi _1+\xi _2\)
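The counterexample of Fig. 4 is easy to reproduce by Monte Carlo. The sketch below samples the uniform distribution on the two-square star-shaped set and confirms that the projected distribution \(\xi _1 + \xi _2\) carries far less mass near \(0\) than near its modes at \(\pm 1\), so it cannot be star-unimodal about the origin:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Uniform on [0,1]^2 U [-1,0]^2: pick one square, then a point inside it
signs = rng.choice([-1.0, 1.0], size=n)
xi = signs[:, None] * rng.uniform(0.0, 1.0, size=(n, 2))
s = xi.sum(axis=1)   # the projection (xi1, xi2) -> xi1 + xi2

# A density that is star-unimodal about 0 cannot put more mass in a window
# of fixed width away from 0 than in the same window at 0
near_zero = np.mean(np.abs(s) < 0.1)
near_one = np.mean(np.abs(s - 1.0) < 0.1)
assert near_one > 2 * near_zero
```

In fact the density of \(\xi _1 + \xi _2\) vanishes at the origin and peaks at \(\pm 1\), so the empirical ratio is roughly twenty to one rather than merely two to one.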

We can now envisage a further generalization of the definition of \(\alpha \)-unimodality. Specifically, we can denote a distribution \({{\mathbb {P}}}\) as \(\phi \)-unimodal if \(\phi (t) {\mathbb {P}} (B/t)\) is non-decreasing in \(t\in (0, \infty )\) for every Borel set \(B\in {\fancyscript{B}}({\mathbb {R}}^n)\), while the set of all \(\phi \)-unimodal distributions on \({{\mathbb {R}}}^n\) is denoted by \({\fancyscript{P}}_{\phi , n}\). This is indeed a natural generalization as it reduces to the definition of \(\alpha \)-unimodality for \(\phi (t)=t^\alpha \). However, as shown in [24], the notions of \(\phi \)-unimodality and \(\alpha \)-unimodality are in fact equivalent in the sense that \({\fancyscript{P}}_{\alpha , n} = {\fancyscript{P}}_{\phi , n}\) for

$$\begin{aligned} \alpha = \inf _{t>0} \frac{t}{\phi (t+0)} \frac{\mathrm {d}\phi (t)}{\mathrm {d}t}, \end{aligned}$$

where \(\frac{\mathrm {d}\phi (t)}{\mathrm {d}t}\) represents the lower right Dini derivative of \(\phi (t)\).
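A quick numerical sanity check (a sketch, with \(\alpha = 2.5\) chosen arbitrarily) confirms that the \(\phi \)-unimodality index recovers \(\alpha \) when \(\phi (t) = t^\alpha \), approximating the right-hand Dini derivative by a forward difference:

```python
import numpy as np

alpha_true = 2.5                    # arbitrary test value
phi = lambda t: t ** alpha_true     # phi(t) = t^alpha recovers alpha-unimodality

ts = np.linspace(0.1, 10.0, 200)
h = 1e-7
# forward difference approximates the right-hand Dini derivative phi'(t+);
# the quantity t * phi'(t+) / phi(t) then equals alpha for every t
ratio = ts * (phi(ts + h) - phi(ts)) / (h * phi(ts))
assert abs(ratio.min() - alpha_true) < 1e-4
```

For the power function the infimand is constant in \(t\), so the infimum is attained everywhere; for general \(\phi \) the infimum genuinely selects the worst-case growth rate.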


Cite this article

Van Parys, B.P.G., Goulart, P.J. & Kuhn, D. Generalized Gauss inequalities via semidefinite programming. Math. Program. 156, 271–302 (2016). https://doi.org/10.1007/s10107-015-0878-1


Mathematics Subject Classification

  • 90C15 Stochastic programming
  • 90C22 Semidefinite programming