### Proof of Lemma 1

There is a sequence \((M_n)_{n \in \mathbb{N}} \subset \mathrm{SDF}_\beta(M)\) such that \(\lim_{n \rightarrow \infty} \mathbb{E}(M_nX)^+ / \mathbb{E}(M_nX)^- = \mathrm{SGLR}_\beta^M(X)\). Since \(\mathrm{Var}(M_n) \le \mathrm{Var}(M)+\beta\), the sequence is bounded in \(L^2\). Thus, the sequence of laws \(\mu_n:=P((M,M_n,X) \in \cdot)\) is tight and hence, by Prokhorov's theorem, relatively compact in the weak topology; i.e. there is a subsequence \((M,M_{k_n}, X)_{n \in \mathbb{N}}\) which converges in law to a triple of random variables \((\tilde{M}, M^*, \tilde{X})\) such that \((\tilde{M}, \tilde{X})\) has the same law as \((M,X)\) and

$$\begin{aligned} \mathbb{E}M^* &= \lim_{n \rightarrow \infty} \mathbb{E}M_{k_n}=1, \quad \mathbb{E}(M^* \tilde{X})^\pm = \lim_{n \rightarrow \infty} \mathbb{E}(M_{k_n}X)^\pm , \\ \mathrm{Var}(M^*) &\le \liminf_{n \rightarrow \infty} \mathrm{Var}(M_{k_n}) \le \mathrm{Var}(M) + \beta . \end{aligned}$$

By the Portmanteau theorem (applied to the closed set \(\{0\}\)),

$$\begin{aligned} P(\tilde{M} - M^*=0) \ge \limsup_{n \rightarrow \infty} P(M - M_{k_n}=0) \ge 1-\beta . \end{aligned}$$

Upon possibly extending the underlying probability space, a copy of \(M^*\) can be realized on the same probability space as \(M\), hence becoming an element of \(\mathrm{SDF}_\beta(M)\).

Now assume (A1). For any \(M' \in \mathrm{SDF}_\beta^+(M)\), using that \(P(X=x_j)=1/T\) for each \(j\),

$$\begin{aligned} \frac{\mathbb{E}(M'X)^+}{\mathbb{E}(M'X)^-} &= \frac{\sum_{j=1}^T x_j^+ \mathbb{E}[M'\,|\,X=x_j]\, P(X=x_j)}{\sum_{j=1}^T x_j^- \mathbb{E}[M'\,|\,X=x_j]\, P(X=x_j)} \nonumber \\ &= \frac{\sum_{j=1}^T x_j^+ \mathbb{E}[M'\,|\,X=x_j] }{\sum_{j=1}^T x_j^- \mathbb{E}[M'\,|\,X=x_j] }. \end{aligned}$$

(2)
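As a sanity check of the cancellation in (2), the following minimal Python sketch (with hypothetical payoffs `x` and conditional expectations `m`, neither of which appears in the argument itself) computes the ratio with and without the uniform weights \(P(X=x_j)=1/T\):

```python
# Sketch: under (A1) the Gain-Loss-Ratio depends on M' only through the
# conditional expectations m_j = E[M' | X = x_j]; with P(X = x_j) = 1/T
# the weights cancel between numerator and denominator.
# All numerical values below are hypothetical.

def gain_loss_ratio(x, m, probs=None):
    """Ratio of expected weighted gains to expected weighted losses."""
    T = len(x)
    if probs is None:
        probs = [1.0 / T] * T  # uniform law of X, as in assumption (A1)
    gains = sum(max(xj, 0.0) * mj * pj for xj, mj, pj in zip(x, m, probs))
    losses = sum(max(-xj, 0.0) * mj * pj for xj, mj, pj in zip(x, m, probs))
    return gains / losses

x = [2.0, -1.0, 0.5, -0.5]   # hypothetical payoffs x_1, ..., x_T
m = [0.8, 1.2, 0.9, 1.1]     # hypothetical values E[M' | X = x_j]

r_weighted = gain_loss_ratio(x, m)
r_unweighted = (sum(max(xj, 0.0) * mj for xj, mj in zip(x, m))
                / sum(max(-xj, 0.0) * mj for xj, mj in zip(x, m)))
assert abs(r_weighted - r_unweighted) < 1e-12  # the factor 1/T cancels
```
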

This shows that the Gain–Loss-Ratio depends on \(M'\) only through the values \(\mathbb{E}[M' \,|\, X=x_j]\), \(j=1, \ldots , T\). Assuming that the conditioning events have nonzero probability, the law of total expectation separates these further into

$$\begin{aligned} \mathbb{E}[M' \,|\, X=x_j] = \mathbb{E}[M' \,|\, X=x_j, M'\ne M]\, P(M'\ne M \,|\, X=x_j) + \mathbb{E}[M \,|\, X=x_j, M'=M]\, P(M'=M \,|\, X=x_j). \end{aligned}$$
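A quick numerical check of this conditional decomposition, on a hypothetical four-point conditional space \(\{X=x_j\}\) (two outcomes in \(\{M' \ne M\}\), two in \(\{M'=M\}\); the values are illustrative only):

```python
# Law of total expectation on a hypothetical finite space: split
# E[M' | X = x_j] over the events {M' != M} and {M' = M}.

mprime = [0.6, 1.4, 1.0, 1.2]          # values of M' on {X = x_j}
differs = [True, True, False, False]   # membership in {M' != M}
n = len(mprime)

e_total = sum(mprime) / n              # E[M' | X = x_j], outcomes equally likely
p_diff = sum(differs) / n              # P(M' != M | X = x_j)
e_on_diff = sum(v for v, d in zip(mprime, differs) if d) / sum(differs)
e_on_same = sum(v for v, d in zip(mprime, differs) if not d) / (n - sum(differs))

# Total expectation = weighted average of the two conditional expectations.
assert abs(e_total - (e_on_diff * p_diff + e_on_same * (1.0 - p_diff))) < 1e-12
```
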

In order to optimize \(M'\) in the sense of a minimal Gain–Loss-Ratio, one has to decrease its value at large positive \(x_j\) and/or increase its value at large negative \(x_j\), subject to the constraints on expectation and variance. It follows from the law of total variance that the variance of \(M'\) is minimal if \(M'=\mathbb{E}[M' \,|\, X=x_j, M' \ne M]\) holds on the sets \(\{X=x_j, M'\ne M\}\), compared to the case where \(M'\) is not constant on these sets. By the above considerations, the Gain–Loss-Ratio remains the same irrespective of whether \(M'\) is constant on these sets or not. Hence, the optimal \(M'\) is constant on such sets.
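The variance argument can be illustrated on a hypothetical finite sample space: replacing \(M'\) by its conditional mean on a set preserves the expectation (and hence the Gain–Loss-Ratio), while the variance cannot increase.

```python
# Sketch of the total-variance argument: flattening M' to its conditional
# mean on a set A (standing in for {X = x_j, M' != M}) keeps the overall
# mean fixed and can only decrease the variance.
# The four equally likely outcomes below are hypothetical.

values = [0.6, 1.4, 1.0, 1.0]        # M' on four equally likely outcomes
in_A = [True, True, False, False]    # membership in the set A

def mean(v):
    return sum(v) / len(v)

def var(v):
    mu = mean(v)
    return sum((x - mu) ** 2 for x in v) / len(v)

cond_mean = mean([v for v, a in zip(values, in_A) if a])
flattened = [cond_mean if a else v for v, a in zip(values, in_A)]

assert mean(flattened) == mean(values)   # expectation preserved
assert var(flattened) <= var(values)     # variance cannot increase
```
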

Observe furthermore that the value in (2) depends only on the *probabilities* of the sets \(\{X=x_j, M' \ne M\}\), not on their explicit realisation as subsets of \(\varOmega\). Thus, the following defines an \((M, X, U)\)-measurable random variable which is an element of \(\mathrm{SDF}_\beta^+(M)\) and has the same Gain–Loss-Ratio as \(M^*\). Let \(p_j:=P(X=x_j, M^* \ne M)\), \(m_j^*:= \mathbb{E}[M^* \,|\, X=x_j, M^* \ne M]\) and define

Turning to the last assertion, we may as before choose a sequence \(\big((m_i^n)_{i=1}^{Tk}\big)_n\) such that the associated Gain–Loss-Ratios converge to the minimal one. Since each \(m_i^n \ge 0\) and \(\frac{1}{Tk}\sum_{i=1}^{Tk} m_i^n =1\), the numbers \(m_i^n\) are uniformly bounded (by \(Tk\)), and hence there is a convergent subsequence, the limit of which we denote by \((m_i^*)_{i=1}^{Tk}\). It is then readily checked that \((m_i^*) \in d\mathrm{SDF}_\beta^+(M)\), and that

$$\begin{aligned} d\mathrm{SGLR}_{\beta ,k}^M(X) = \frac{\sum_{i=1}^{Tk} m_i^* x_{(i \text{ mod } T)}^+}{\sum_{i=1}^{Tk} m_i^* x_{(i \text{ mod } T)}^-} . \end{aligned}$$
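The discretized ratio above can be sketched numerically. In this hypothetical Python example (indices 0-based, so `i % T` directly selects the payoff; all values are illustrative), a candidate vector \((m_i)_{i=1}^{Tk}\) of nonnegative values averaging to 1 is paired with the payoffs \(x_{(i \text{ mod } T)}\):

```python
# Sketch of the discretized Gain-Loss-Ratio: each of the Tk grid values
# m_i weights the payoff x_{(i mod T)}; gains go to the numerator,
# losses to the denominator. All numerical values are hypothetical.

def discretized_ratio(m, x):
    T = len(x)
    num = sum(mi * max(x[i % T], 0.0) for i, mi in enumerate(m))
    den = sum(mi * max(-x[i % T], 0.0) for i, mi in enumerate(m))
    return num / den

m = [1.5, 0.5, 1.0, 1.0]   # Tk = 4 candidate values
x = [2.0, -1.0]            # T = 2 payoffs

# Feasibility constraints from the text: m_i >= 0 and (1/Tk) * sum = 1.
assert all(mi >= 0.0 for mi in m)
assert abs(sum(m) / len(m) - 1.0) < 1e-12

ratio = discretized_ratio(m, x)  # (1.5*2 + 1.0*2) / (0.5*1 + 1.0*1) = 10/3
```
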

\(\square\)

Now we are in a position to prove Proposition 1.