Abstract
This short addendum consists of two sections. The first provides proofs that were omitted in Ahmadi-Javid (J. Optim. Theory Appl., 2012) for the sake of brevity, and also demonstrates that the dual representation of the entropic value-at-risk, given there for bounded random variables, holds for all random variables whose moment-generating functions exist everywhere. The second section provides a few corrections.
1 Supplementary Proofs
In this section, we begin by providing detailed proofs for some of the statements made in [1], which may be helpful to readers who are less familiar with convex optimization. Then we discuss the dual representation of the entropic value-at-risk (EVaR) for any random variable whose moment-generating function exists everywhere.
The following lemma proves the convexity of cumulant-generating functions, which is used in the proof of Lemma 3.1 of [1].
Lemma 1.1
The cumulant-generating function \(\ln M_X(t)\) is convex in X.
Proof
Without loss of generality, we can set t=1. It then suffices to show that, for any two random variables X and Y with finite \(M_X(1)\), \(M_Y(1)\) and any λ∈[0,1],

\[\ln M_{\lambda X + (1-\lambda)Y}(1) \le \lambda \ln M_X(1) + (1-\lambda)\ln M_Y(1).\]

Defining \(W := e^X/\mathrm{E}(e^X)\) and \(V := e^Y/\mathrm{E}(e^Y)\), this inequality is equivalent to \(\mathrm{E}(W^\lambda V^{1-\lambda}) \le 1\), which follows immediately from the inequality of weighted arithmetic and geometric means for the non-negative random variables W and V, i.e., \(W^\lambda V^{1-\lambda} \le \lambda W + (1-\lambda)V\). □
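As a numerical illustration of Lemma 1.1 (not part of the proof), the following sketch checks the convexity of \(\ln M_X(1)\) in X on a small finite probability space; the sample-space size, seed, and distributions are arbitrary choices.

```python
# Illustrative check of Lemma 1.1: ln M_X(1) = ln E(e^X) is convex in X.
# The 6-point probability space and normal draws are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
p = rng.random(6); p /= p.sum()          # probabilities of a 6-point space
X = rng.normal(size=6)                   # two arbitrary random variables
Y = rng.normal(size=6)

def cgf(Z):                              # ln M_Z(1) = ln E(e^Z)
    return np.log(np.sum(p * np.exp(Z)))

for lam in np.linspace(0.0, 1.0, 11):
    lhs = cgf(lam * X + (1 - lam) * Y)
    rhs = lam * cgf(X) + (1 - lam) * cgf(Y)
    assert lhs <= rhs + 1e-12            # convexity in X, as Lemma 1.1 states
```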
The following lemma proves a statement used in the proof of Theorem 3.1 of [1].
Lemma 1.2
The function \(\inf_{t>0}\{\kappa_\alpha(X,t)\}\) is convex in X for all α∈ ]0,1].
Proof
For all ε>0 and \(X,Y \in\mathbf{L}_{M^{+}}\), the continuity of \(\kappa_\alpha(X,t)\) in t>0 implies the existence of \(t_1, t_2 > 0\) such that

\[\kappa_\alpha(X, t_1) \le \inf_{t>0}\{\kappa_\alpha(X,t)\} + \varepsilon, \qquad \kappa_\alpha(Y, t_2) \le \inf_{t>0}\{\kappa_\alpha(Y,t)\} + \varepsilon.\]

Since \(\kappa_\alpha(X,t)\) is convex in (X,t) by Lemma 3.1 of [1], we further find that, for all λ∈[0,1],

\[\inf_{t>0}\{\kappa_\alpha(\lambda X + (1-\lambda)Y, t)\} \le \kappa_\alpha\bigl(\lambda X + (1-\lambda)Y,\ \lambda t_1 + (1-\lambda)t_2\bigr) \le \lambda \kappa_\alpha(X, t_1) + (1-\lambda)\kappa_\alpha(Y, t_2) \le \lambda \inf_{t>0}\{\kappa_\alpha(X,t)\} + (1-\lambda)\inf_{t>0}\{\kappa_\alpha(Y,t)\} + \varepsilon.\]

As this holds for all ε>0, the proof is completed by taking the limit ε↓0. □
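The convexity asserted by Lemma 1.2 can also be observed numerically. The sketch below (illustrative only; the grid over t, the sample space, and α=0.1 are arbitrary choices) approximates \(\mathrm{EVaR}_{1-\alpha}(X) = \inf_{t>0} t\ln(\alpha^{-1}M_X(t^{-1}))\) by a grid search and checks the convexity inequality.

```python
# Illustrative check of Lemma 1.2: inf_{t>0} k_a(X,t) with
# k_a(X,t) = t*ln(M_X(1/t)/a) is convex in X on a finite probability space.
# The grid search over t is a crude stand-in for the exact infimum.
import numpy as np

rng = np.random.default_rng(1)
p = rng.random(8); p /= p.sum()          # 8-point probability space
X, Y = rng.normal(size=8), rng.normal(size=8)
alpha = 0.1
ts = np.geomspace(0.05, 50, 4001)        # grid approximating t > 0

def evar(Z):
    vals = [t * (np.log(np.sum(p * np.exp(Z / t))) - np.log(alpha)) for t in ts]
    return min(vals)                     # approximate inf_t k_a(Z, t)

for lam in (0.25, 0.5, 0.75):
    mix = evar(lam * X + (1 - lam) * Y)
    assert mix <= lam * evar(X) + (1 - lam) * evar(Y) + 1e-3
```

The small tolerance absorbs the grid-discretization error; for the exact infimum the inequality is strict as stated in the lemma.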
The following lemma proves the validity of the last identities used in the proofs of Theorems 3.3 and 5.1 of [1].
Lemma 1.3
Let g:ℝ→[0,∞] be a non-negative closed convex function with g(1)=0 and \(\operatorname{dom} g = [\gamma_1, \gamma_2]\), where \(\gamma_1 < 1 < \gamma_2\) and \(\gamma_1, \gamma_2 \in [-\infty,\infty]\). Then the following identity holds for any β>0:

\[\sup_{Q \ll P:\, H_g(P,Q) \le \beta} \mathrm{E}_Q(X) = \inf_{t \ge 0}\, \sup_{Q \ll P}\bigl\{\mathrm{E}_Q(X) - t\bigl(H_g(P,Q) - \beta\bigr)\bigr\},\]

where \(H_g(P,Q) := \mathrm{E}_P\bigl(g\bigl(\frac{dQ}{dP}\bigr)\bigr)\).
Proof
By denoting \(Y = \frac{dQ}{dP}\), which is a non-negative random variable with mean equal to 1, the above identity can be rewritten as

\[\sup_{t \ge 0} L(t) = \inf_{Y \in S:\, \mathrm{E}_P(g(Y)) \le \beta} \{-\mathrm{E}_P(XY)\},\]

where \(L(t) = \inf_{Y \in S}\{-\mathrm{E}_P(XY) + t(\mathrm{E}_P(g(Y)) - \beta)\}\) is the Lagrangian associated with the optimization problem on the right-hand side, and \(S = \{Y \in \mathbf{L}_1 : \mathrm{E}_P(Y) = 1,\ \max\{\gamma_1, 0\} \le Y \le \gamma_2\ \text{a.e.}\}\). Hence, it suffices to show that the optimal duality gap for the right-hand side optimization problem is zero. This follows by showing that the generalized Slater constraint qualification [2] holds for this problem, i.e., that there exists \(\hat{Y} \in \mathbf{L}_1\) satisfying \(\mathrm{E}_P(\hat{Y}) = 1\), \(\max\{\gamma_1, 0\} < \hat{Y} < \gamma_2\) a.e., and \(\mathrm{E}_P(g(\hat{Y})) < \beta\). As we assumed \(\gamma_1 < 1 < \gamma_2\), and \(g(1) = 0 < \beta\), the solution \(\hat{Y} = 1\) a.e. fulfills these conditions, and the proof is complete. □
Remark 1.1
We can also show the validity of the identity in Lemma 1.3 when g is not non-negative over its whole domain. In that case, letting c(x−1) be a supporting hyperplane to the epigraph of g at the point (1,0), it suffices to replace g with the non-negative function \(\tilde{g}(x) = g(x) - c(x-1)\), for which \(H_{\tilde{g}}(P,Q) = H_g(P,Q)\) because \(\mathrm{E}_P(\frac{dQ}{dP} - 1) = 0\). Moreover, if γ₁=1 or γ₂=1, the validity of the identity in Lemma 1.3 is clear, because the constraint \(\mathrm{E}_P(\frac{dQ}{dP}) = 1\) then implies \(\frac{dQ}{dP} = 1\) a.e. or, equivalently, Q=P. Finally, note that the proof for the case β=0 is also straightforward, given that γ₁<1<γ₂ and g is non-negative. Indeed, in this case, the constraint \(H_g(P,Q) \le 0\) is feasible if and only if g is zero over the interval \(\operatorname{dom} g = [\gamma_1,\gamma_2]\); the constraint \(H_g(P,Q) \le 0\) is then redundant, since it is equivalent to \(\gamma_1 \le \frac{dQ}{dP} \le \gamma_2\) a.e., which is already imposed by the set S.
Here we give the detailed proof of Proposition 4.2 of [1].
Proof of Proposition 4.2 of [1]
To use Theorem 4.1 of [1], we first need to reformulate problem (8) of [1] by using the CVaR representation given in (2) of [1] and adding the additional constraint lC≤−t≤uC. The resulting problem can be rewritten in the form of problem (6) of [1] as follows:

\[\min_{\boldsymbol{x} \in \mathbf{X}} \mathrm{E}\bigl(F(\boldsymbol{x}, \boldsymbol{\xi})\bigr),\]

where x=(w<sup>T</sup>,t)<sup>T</sup>, ξ=R, X=W×[−uC,−lC] and \(F(\boldsymbol{x},\boldsymbol{\xi} ) = t + \alpha^{ - 1}[ -\sum_{i = 1}^{n} w_{i}R_{i} - t ]_{ +}\). Then we need to find D and L. In this case, \(D = \mathop{\sup}_{\boldsymbol{x},\boldsymbol{x}' \in \mathbf{X}}\| \boldsymbol{x} - \boldsymbol{x}' \| = C\sqrt{2 + B^{2}}\), where the maximum value is attained for x=(C,0,…,0,−uC)<sup>T</sup> and x′=(0,C,…,0,−lC)<sup>T</sup>, or other similar pairs of points. To determine the Lipschitz constant L, we have

\[\begin{aligned}
|F(\boldsymbol{x}, \boldsymbol{z}) - F(\boldsymbol{x}', \boldsymbol{z})| &\le |t - t'| + \alpha^{-1}\Bigl| \Bigl[-\sum_{i=1}^{n} w_i z_i - t\Bigr]_+ - \Bigl[-\sum_{i=1}^{n} w'_i z_i - t'\Bigr]_+ \Bigr| \\
&\le |t - t'| + \alpha^{-1}\Bigl(\Bigl|\sum_{i=1}^{n} (w_i - w'_i) z_i\Bigr| + |t - t'|\Bigr) \\
&\le |t - t'| + \alpha^{-1}\bigl(\|\boldsymbol{w} - \boldsymbol{w}'\|\,\|\boldsymbol{z}\| + |t - t'|\bigr) \\
&\le \|\boldsymbol{x} - \boldsymbol{x}'\| + \alpha^{-1}\sqrt{\|\boldsymbol{z}\|^{2} + 1}\,\|\boldsymbol{x} - \boldsymbol{x}'\| \\
&\le \bigl(1 + \alpha^{-1}\sqrt{n\max\{u^{2}, l^{2}\} + 1}\bigr)\|\boldsymbol{x} - \boldsymbol{x}'\|,
\end{aligned}\]

where, in the last two inequalities, we used the Cauchy–Schwarz inequality and the fact that \(\boldsymbol{z} \in S_{\boldsymbol{\xi}} = S_{\boldsymbol{R}} = [l,u]^{n}\). This shows that \(L = 1 + \alpha^{ - 1}\sqrt{n\max \{u^{2},l^{2} \} + 1}\). □
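The Lipschitz bound above can be spot-checked numerically. The sketch below is illustrative only: the values of n, α, l, and u are made up, and random points stand in for the feasible sets.

```python
# Illustrative spot-check that L = 1 + (1/alpha)*sqrt(n*max(u^2,l^2) + 1)
# bounds the Lipschitz constant of F(x,z) = t + (1/alpha)*[-sum_i w_i z_i - t]_+
# over z in [l, u]^n. All parameter values below are made up.
import numpy as np

rng = np.random.default_rng(2)
n, alpha, l, u = 5, 0.05, -0.3, 0.4
L = 1 + np.sqrt(n * max(u**2, l**2) + 1) / alpha

def F(w, t, z):
    return t + max(-np.dot(w, z) - t, 0.0) / alpha

for _ in range(1000):
    w1, w2 = rng.normal(size=n), rng.normal(size=n)
    t1, t2 = rng.normal(), rng.normal()
    z = rng.uniform(l, u, size=n)                     # z in S_R = [l, u]^n
    gap = abs(F(w1, t1, z) - F(w2, t2, z))
    dist = np.sqrt(np.sum((w1 - w2)**2) + (t1 - t2)**2)
    assert gap <= L * dist + 1e-9                     # Lipschitz bound holds
```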
The next theorem proves that the dual representation in Theorem 3.3 of [1], which is given there for bounded random variables, also holds for \(X \in \mathbf{L}_M\).
Theorem 1.1
(Dual representation of the EVaR)
For \(X \in \mathbf{L}_M\) and any α∈ ]0,1],

\[\mathrm{EVaR}_{1-\alpha}(X) = \sup_{Q \in \Im} \mathrm{E}_Q(X),\]

where \(\Im = \{Q \ll P : D_{KL}(Q \parallel P) \le -\ln\alpha\}\).
Proof
By virtue of the result given in Sect. 5.4 of [3], for any \(X \in \mathbf{L}_M\) and t>0, we have

\[t \ln M_X(t^{-1}) = \sup_{Q \in \mathfrak{D}} \bigl\{\mathrm{E}_Q(X) - t\,D_{KL}(Q \parallel P)\bigr\},\]

where h denotes the conjugate of the Young function associated with \(\mathbf{L}_M\) and

\[\mathfrak{D} = \Bigl\{Q \ll P : \exists c > 0 : \mathrm{E}_P\Bigl(h\Bigl(c\tfrac{dQ}{dP}\Bigr)\Bigr) < \infty\Bigr\}.\]

Hence, similarly as in the proofs of Lemma 1.3 and Theorem 3.3 of [1], and together with Remark 1.1, we can show

\[\mathrm{EVaR}_{1-\alpha}(X) = \inf_{t>0}\bigl\{t\ln\bigl(\alpha^{-1}M_X(t^{-1})\bigr)\bigr\} = \sup_{Q \in \Im''} \mathrm{E}_Q(X),\]

where \(\Im'' = \{ Q \ll P:D_{KL}( Q \parallel P ) \le - \ln\alpha,\ \exists c >0:\mathrm{E}_{P}( h( c\frac{dQ}{dP} ) ) < \infty\}\). However, one can see that the constraint \(D_{KL}(Q \parallel P) \le -\ln\alpha\) implies \(\mathrm{E}_{P}( h( c\frac{dQ}{dP} ) ) < \infty\) with c=1, because the relative entropy can be rewritten as \(D_{KL}(Q \parallel P ) =\mathrm{E}_{P}( e( \frac{dQ}{dP} ) )\) with \(e(x) = x\ln x - x + 1\), x>0. Hence, the two sets \(\Im\) and \(\Im''\) are actually identical. This completes the proof. □
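On a finite sample space, the equality of the two sides of Theorem 1.1 can be verified directly. The sketch below is illustrative, not part of the proof: it uses a uniform reference measure P and the standard fact (assumed here) that the supremum over the KL ball is attained on the exponentially tilted family \(Q_\theta \propto P e^{\theta X}\), with θ chosen so that the KL constraint binds.

```python
# Illustrative check of Theorem 1.1 on a 10-point space with uniform P:
# inf_{t>0} t*ln(M_X(1/t)/alpha) should equal
# sup{E_Q(X) : D_KL(Q||P) <= -ln(alpha)}.
import numpy as np

rng = np.random.default_rng(3)
p = np.full(10, 0.1)                     # uniform reference measure P
X = rng.normal(size=10)
alpha = 0.2
beta = -np.log(alpha)                    # KL budget; note -ln(0.2) < ln(10)

def tilt(th):                            # exponentially tilted measure Q_th
    w = th * X
    q = p * np.exp(w - w.max())          # stabilized against overflow
    return q / q.sum()

def kl(q):                               # D_KL(Q || P), with 0*log(0) = 0
    m = q > 0
    return np.sum(q[m] * np.log(q[m] / p[m]))

# D_KL(Q_th || P) increases in th >= 0 (X non-constant), so bisect for the
# th at which the KL constraint binds.
lo, hi = 0.0, 1.0
while kl(tilt(hi)) < beta:
    hi *= 2.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if kl(tilt(mid)) < beta else (lo, mid)
q = tilt(0.5 * (lo + hi))
dual = np.sum(q * X)                     # sup_{Q in the KL ball} E_Q(X)

ts = np.geomspace(0.005, 100, 8001)      # grid approximating inf over t > 0
primal = min(t * (np.log(np.sum(p * np.exp(X / t))) - np.log(alpha)) for t in ts)

assert abs(primal - dual) < 1e-3         # the two representations agree
```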
2 Corrections
This section corrects a few errors found in [1]. Since the numbering system of [1] was changed for publication, there are now two lemmas numbered 3.1: one precedes Theorem 3.1 of [1] and the other precedes Theorem 3.3 of [1]. The former of these lemmas is only used in the proof of Theorem 3.1 of [1], while the latter is used both in the proof of Theorem 3.3 of [1] and in the statement of Lemma 5.1 of [1]. Furthermore, at the end of the proof of the former, there are a few unnecessary parts that should be removed. The corrected proof is as follows.
Proof of Lemma 3.1 of [1]
We must show that, for all λ∈[0,1], \(X,Y \in\mathbf{L}_{M^{ +}}\) and \(t_1, t_2 > 0\),

\[\kappa_\alpha\bigl(\lambda X + (1-\lambda)Y,\ \lambda t_1 + (1-\lambda)t_2\bigr) \le \lambda \kappa_\alpha(X, t_1) + (1-\lambda)\kappa_\alpha(Y, t_2),\]

which, since \(\kappa_\alpha(X,t) = t\ln(\alpha^{-1}M_X(t^{-1}))\) and the term \(-t\ln\alpha\) is linear in t, is equivalent to

\[(\lambda t_1 + (1-\lambda)t_2)\,\ln M_{\lambda X + (1-\lambda)Y}\Bigl(\frac{1}{\lambda t_1 + (1-\lambda)t_2}\Bigr) \le \lambda t_1 \ln M_X(t_1^{-1}) + (1-\lambda)t_2 \ln M_Y(t_2^{-1}).\]

Denoting \(t = \lambda t_1 + (1-\lambda)t_2\) and \(w = \lambda t_1/t\), the left-hand side of the above inequality can be expressed as

\[t \ln M_{\lambda X + (1-\lambda)Y}(t^{-1}) = t \ln \mathrm{E}\bigl(e^{w(X/t_1) + (1-w)(Y/t_2)}\bigr).\]

Then, by using the known fact that the cumulant-generating function is convex (Lemma 1.1), this yields

\[t \ln \mathrm{E}\bigl(e^{w(X/t_1) + (1-w)(Y/t_2)}\bigr) \le t\bigl(w \ln M_X(t_1^{-1}) + (1-w)\ln M_Y(t_2^{-1})\bigr) = \lambda t_1 \ln M_X(t_1^{-1}) + (1-\lambda)t_2 \ln M_Y(t_2^{-1}).\]

This completes the proof. □
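The joint convexity established by this corrected proof can also be observed numerically. The sketch below is illustrative only; the sample space, seed, α, and the tested (t₁, t₂) pairs are arbitrary choices.

```python
# Illustrative check of Lemma 3.1 of [1]: k_a(X,t) = t*ln(M_X(1/t)/a)
# is jointly convex in (X, t) on a finite probability space.
import numpy as np

rng = np.random.default_rng(4)
p = rng.random(6); p /= p.sum()          # 6-point probability space
X, Y = rng.normal(size=6), rng.normal(size=6)
alpha = 0.1

def kappa(Z, t):                         # k_a(Z, t) = t*ln(M_Z(1/t)/alpha)
    return t * (np.log(np.sum(p * np.exp(Z / t))) - np.log(alpha))

for lam in (0.3, 0.5, 0.8):
    for t1, t2 in ((0.5, 2.0), (1.0, 3.0)):
        lhs = kappa(lam * X + (1 - lam) * Y, lam * t1 + (1 - lam) * t2)
        rhs = lam * kappa(X, t1) + (1 - lam) * kappa(Y, t2)
        assert lhs <= rhs + 1e-12        # joint convexity in (X, t)
```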
In the first sentence coming after the proof of Proposition 4.2 of [1], the term “the feasible set of problem (8)” must be replaced by “the feasible set of the problem obtained from problem (8) by using equation (2).” Note that the problem obtained from problem (8) of [1] by using equation (2) of [1] is as follows:

\[\min_{\boldsymbol{w} \in \mathbf{W},\, t \in \mathbb{R}} \Bigl\{ t + \alpha^{-1}\,\mathrm{E}\Bigl(\Bigl[-\sum_{i=1}^{n} w_i R_i - t\Bigr]_+\Bigr) \Bigr\}.\]
References
Ahmadi-Javid, A.: Entropic value-at-risk: A new coherent risk measure. J. Optim. Theory Appl. (2012), this issue
Jeyakumar, V., Wolkowicz, H.: Generalizations of Slater’s constraint qualification for infinite convex programs. Math. Program., Ser. B 57, 85–101 (1992)
Cheridito, P., Li, T.: Risk measures on Orlicz hearts. Math. Finance 19, 189–214 (2009)
Ahmadi-Javid, A. Addendum to: Entropic Value-at-Risk: A New Coherent Risk Measure. J Optim Theory Appl 155, 1124–1128 (2012). https://doi.org/10.1007/s10957-012-0014-9