Adapted Wasserstein Distances and Stability in Mathematical Finance

Assume that an agent models a financial asset through a measure Q with the goal of pricing / hedging some derivative or optimizing some expected utility. Even if the model Q is chosen in the most skilful and sophisticated way, she is left with the possibility that Q does not provide an exact description of reality. This leads us to the following question: will the hedge still be somewhat meaningful for models in the proximity of Q? If we measure proximity with the usual Wasserstein distance (say), the answer is no: models which are similar wrt Wasserstein distance may provide dramatically different information on which to base a hedging strategy. Remarkably, this can be overcome by considering a suitable adapted version of the Wasserstein distance which takes the temporal structure of pricing models into account. This adapted Wasserstein distance is most closely related to the nested distance pioneered by Pflug and Pichler. It allows us to establish Lipschitz properties of hedging strategies for semimartingale models in discrete and continuous time. Notably, these abstract results are sharp already for Brownian motion and European call options.

1. Introduction

1.1. Outline. Assume that a reference measure P is used to model the evolution of a financial asset X with the purpose of hedging a financial claim or maximizing some expected utility. We do not expect the model P to capture reality in an absolutely accurate way. However, supposing that P is close enough to reality (described by a probability Q), we would still hope that a strategy which is developed for P leads to reasonable results.
A main goal of this paper is to establish this intuitive idea rigorously, based on a new notion of adapted Wasserstein distance AW_p between semimartingale measures. To fix ideas, we provide a first example of the results we are after. Theorem 1.1. Let P, Q be continuous semimartingale models for the asset price process X, and assume that the payoff C(X) of a (path-dependent) derivative is given by an L-Lipschitz function C. Assume that a predictable trading strategy H = (H_t)_{t∈[0,T]} with |H| ≤ k and an initial endowment m ∈ R constitute a P-superhedge of C(X), i.e. C(X) ≤ m + (H • X)_T, P-almost surely.
Then there is a predictable G such that m, G constitute an "almost" Q-superhedge:

E_Q[(C(X) − m − (G • X)_T)^+] ≤ 6(k + L) · AW_1(P, Q).    (1.1)

While the adapted Wasserstein distance will be defined in abstract terms (see (1.4)), it relates directly to the model parameters for 'simple' models. In particular, if P, Q are Brownian models with different volatilities, then the distance between these models is just the difference of these volatilities. Moreover, the bound in (1.1) (as well as further Lipschitz bounds given below) is already sharp in such a simple setting and for C a European call option.
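The quantitative message of Theorem 1.1, that the hedging error degrades continuously in the model misspecification, can be illustrated numerically. The following sketch (a Monte Carlo toy, not the paper's construction; the Black–Scholes setting and all parameter values are our own illustrative assumptions) delta-hedges a call at volatility sig_p while the paths actually follow a model with volatility sig_q, and estimates the expected shortfall E_Q[(C(X) − m − (H • X)_T)^+].

```python
import math
import random
from statistics import NormalDist

_N = NormalDist().cdf  # standard normal cdf

def bs_price(s, k, sig, t):
    """Black-Scholes call price (zero rates)."""
    if t <= 0.0 or sig <= 0.0:
        return max(s - k, 0.0)
    d1 = (math.log(s / k) + 0.5 * sig * sig * t) / (sig * math.sqrt(t))
    return s * _N(d1) - k * _N(d1 - sig * math.sqrt(t))

def bs_delta(s, k, sig, t):
    """Black-Scholes call delta (zero rates); used to build the sigma_P hedge."""
    if t <= 0.0 or sig <= 0.0:
        return 1.0 if s > k else 0.0
    d1 = (math.log(s / k) + 0.5 * sig * sig * t) / (sig * math.sqrt(t))
    return _N(d1)

def mean_shortfall(sig_p, sig_q, s0=1.0, k=1.0, T=1.0, n=50, paths=4000, seed=7):
    """Monte Carlo estimate of E_Q[(C(X) - m - (H . X)_T)^+] when the sigma_P
    delta hedge (with endowment m = sigma_P price) is run on sigma_Q paths."""
    rng = random.Random(seed)
    dt = T / n
    m = bs_price(s0, k, sig_p, T)
    total = 0.0
    for _ in range(paths):
        s, gains = s0, 0.0
        for i in range(n):
            h = bs_delta(s, k, sig_p, T - i * dt)
            # exact log-Euler step of dX = sig_q X dW under Q
            ds = s * (math.exp(-0.5 * sig_q * sig_q * dt
                               + sig_q * math.sqrt(dt) * rng.gauss(0.0, 1.0)) - 1.0)
            gains += h * ds
            s += ds
        total += max(max(s - k, 0.0) - m - gains, 0.0)
    return total / paths
```

For sig_q = sig_p only the time-discretization error remains; the shortfall then grows as sig_q moves away from sig_p, in line with the Lipschitz bound (1.1).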
Below we will provide a number of results with a similar flavour to Theorem 1.1. E.g. we will provide versions where the hedging error is controlled in terms of risk measures, and we will show that a Lipschitz bound of the type (1.1) applies (with bigger constants) if the same trading strategy H is applied in the model P as well as in the model Q. Importantly, we establish that comparable Lipschitz continuity results apply to utility maximization.
We emphasize that familiar concepts such as the Lévy-Prokhorov metric or the usual Wasserstein distance do not appear suitable to derive results comparable to Theorem 1.1. E.g. in the vicinity of financially meaningful models there are models which admit arbitrarily high arbitrage even for bounded strategies; similar phenomena appear wrt completeness / incompleteness. Instead we introduce an adapted Wasserstein distance AW_p which takes the temporal structure of semimartingale models into account. These distances are conceptually closely related to the nested distance as pioneered by Pflug and Pichler [50,51,52]; see [1,27,18] for first articles linking this type of distance to finance. We describe these contributions more closely in Section 2 below.
Throughout, Ω denotes either the path space R^T or the path space C([0, T], R). The first setting shall be referred to as the discrete time case, and the second as the continuous time case.¹ In the first case we denote by I = {1, . . . , T} the time-index set, and in the second I = [0, T]. Throughout the article we will provide definitions and results without specifying which of the two cases we are referring to: this means that the definitions / results apply in both cases. Only occasionally will we consider one case specifically, and in this situation we will state this explicitly.
We interpret Ω as the set of all possible evolutions (in time) of the 1-dimensional asset price. Importantly, mutatis mutandis, all our results (except Lemma 3.3 and Example 3.4) remain true for multi-dimensional asset price processes (corresponding to Ω = (R d ) T / Ω = C([0, T ], R d )). We chose to go for the 1-dimensional version to simplify notation.
The mappings X, Y : Ω → Ω denote the canonical processes (i.e. the identity map), and we make the convention that on Ω × Ω the process X denotes the first coordinate and Y the second one. The spaces Ω and Ω × Ω are endowed with the maximum norm and the corresponding Borel σ-field. In continuous time, the space Ω is endowed with the right-continuous filtration generated by X; in discrete time we use the plain filtration generated by X; and in any case Ω × Ω is endowed with the product of these filtrations. The set Cpl(P, Q) of couplings between probability measures P, Q consists of all probability measures π on Ω × Ω such that X(π) = P and Y(π) = Q. A Monge coupling is a coupling of the form π = (Id, T)(P) for some Borel mapping T : Ω → Ω that transports P to Q, i.e. satisfies T(P) = Q. Given a metric d on Ω and p ≥ 1, the p-Wasserstein distance of P, Q is

W_p(P, Q) := inf_{π ∈ Cpl(P,Q)} E_π[d(X, Y)^p]^{1/p}.    (1.2)

In many cases of practical interest the infimum in (1.2) remains unchanged if one minimizes only over Monge couplings, cf. [53].
Before defining the adapted Wasserstein distance between measures P and Q on Ω, let us hint at why distances related to weak convergence are not suitable for the results we have in mind. Assume for example that we are interested in a utility maximization problem in two periods and that Figure 1 describes the laws P, Q of two traded assets. Clearly they are very close in Wasserstein distance, as follows from considering the obvious Monge coupling induced by T : Ω → Ω, T(P) = Q depicted in Figure 1. At the same time, the outcome of utility maximization is certainly very different. Similarly, P is a martingale measure while Q allows for arbitrage. The clear reason for that is the different structure of information available at time 1. To exhibit why the Wasserstein distance does not reflect this different structure of information, let us review the transport condition T(P) = Q. We rephrase it as

(Y_1, Y_2) = (T_1(X_1, X_2), T_2(X_1, X_2)), where T = (T_1, T_2).    (1.3)

While this condition is of course perfectly natural in mass transport, (1.3) almost seems like cheating when viewed from a probabilistic perspective: the map T_1 should not be allowed to consider the future value X_2 in order to determine Y_1. To define an adapted version of the Wasserstein distance, the 'process' (T_i)_{i=1,2} should be taken to be adapted in order to account for the different information structures of P and Q.

¹ Indeed, the arguments in the discrete and the continuous case use the same set of ideas, but the presentation is significantly less technical in the discrete case, which was an important reason to include the discrete case in the paper.
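The effect behind Figure 1 can be reproduced with a minimal discrete sketch. In the toy models below (the numbers are our own choice, meant only to mimic the figure), P reveals nothing at time 1 while Q already reveals the sign of the second step up to a perturbation eps; the plain W_1 distance between the path laws is of order eps, while the adapted (nested) W_1 distance stays of order 1.

```python
# Toy two-period models in the spirit of Figure 1 (eps is hypothetical).
eps = 0.01
# paths (x1, x2) with probabilities
P = [((0.0, 1.0), 0.5), ((0.0, -1.0), 0.5)]   # nothing revealed at time 1
Q = [((eps, 1.0), 0.5), ((-eps, -1.0), 0.5)]  # second step visible at time 1

def path_dist(a, b):
    """l1 distance between two-step paths."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def w1_paths(mu, nu):
    """Plain W1 between two uniform two-atom path measures: by Birkhoff's
    theorem it suffices to compare the two permutation couplings."""
    (a0, _), (a1, _) = mu
    (b0, _), (b1, _) = nu
    return 0.5 * min(path_dist(a0, b0) + path_dist(a1, b1),
                     path_dist(a0, b1) + path_dist(a1, b0))

def w1_line(mu, nu):
    """W1 on the real line: integral of |F_mu - F_nu| (mu, nu lists of (x, w))."""
    events = sorted([(x, w, +1) for x, w in mu] + [(x, w, -1) for x, w in nu])
    total, diff, prev = 0.0, 0.0, events[0][0]
    for x, w, sign in events:
        total += abs(diff) * (x - prev)
        diff += sign * w
        prev = x
    return total

# Nested (adapted) W1: here P's first marginal is the point mass at 0, so the
# time-1 coupling is forced, and one then pays the W1 distance between the
# conditional laws of the second step.
condP = [(1.0, 0.5), (-1.0, 0.5)]                 # law of X2 given X1 = 0
condQ = {eps: [(1.0, 1.0)], -eps: [(-1.0, 1.0)]}  # law of Y2 given Y1
nested = sum(w * (abs(0.0 - y1) + w1_line(condP, condQ[y1]))
             for y1, w in [(eps, 0.5), (-eps, 0.5)])
plain = w1_paths(P, Q)
```

Here plain evaluates to eps = 0.01, while nested evaluates to 1 + eps = 1.01: the adapted distance sees the different information structure, the plain one does not.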
Naturally our official definition of adapted Wasserstein distances will not refer to adapted Monge transports but rather to couplings which are 'adapted' in an appropriate sense. Following Lassalle [42], we call such couplings (bi-)causal. Since the definition below may appear a bit technical at first glance, the following may be reassuring: in the discrete time setting and for absolutely continuous measures P, the weak closure of the set of adapted Monge couplings, i.e. π = (Id, T)(P) for T adapted, is precisely the set of all causal couplings, see [39]. Definition 1.2 ((bi-)causal couplings). For a coupling π of P, Q ∈ P(Ω) denote by π(dω, dη) = P(dω)π_ω(dη) a regular disintegration w.r.t. P. The set Cpl_C(P, Q) of causal couplings consists of all π ∈ Cpl(P, Q) such that for all t ∈ I and A ∈ F_t the map ω ↦ π_ω(A) is F_t-measurable up to P-null sets. The set of all bi-causal couplings Cpl_BC(P, Q) consists of all π ∈ Cpl_C(P, Q) such that also S(π) ∈ Cpl_C(Q, P), where S : Ω × Ω → Ω × Ω, S(ω, η) := (η, ω).
In discrete time, a coupling π is causal if and only if, for every t and every Borel set A ⊆ R^t,

π((Y_1, . . . , Y_t) ∈ A | X_1, . . . , X_T) = π((Y_1, . . . , Y_t) ∈ A | X_1, . . . , X_t);

that is, at time t, given the past (X_1, . . . , X_t) of X, the distribution of (Y_1, . . . , Y_t) does not depend on the future (X_{t+1}, . . . , X_T) of X.
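For couplings with finitely many atoms, this causality condition can be checked mechanically. The sketch below (two-period toy data of our own choosing) verifies it by comparing conditional laws: the anticipative Monge coupling from the Figure 1 discussion fails the test, while a product coupling passes.

```python
from collections import defaultdict

def is_causal(pi, tol=1e-12):
    """Check the discrete causality condition for a two-period coupling pi,
    given as a dict {((x1, x2), (y1, y2)): prob}: the conditional law of Y1
    given (X1, X2) must coincide with the conditional law of Y1 given X1."""
    full, margf = defaultdict(float), defaultdict(float)   # condition on (x1, x2)
    part, margp = defaultdict(float), defaultdict(float)   # condition on x1 only
    for ((x1, x2), (y1, _)), p in pi.items():
        full[(x1, x2, y1)] += p
        margf[(x1, x2)] += p
        part[(x1, y1)] += p
        margp[x1] += p
    return all(abs(p / margf[(x1, x2)] - part[(x1, y1)] / margp[x1]) <= tol
               for (x1, x2, y1), p in full.items())

eps = 0.01
# Anticipative Monge coupling: Y1 is decided by the future value X2.
monge = {((0.0, 1.0), (eps, 1.0)): 0.5,
         ((0.0, -1.0), (-eps, -1.0)): 0.5}
# Independent (product) coupling: trivially causal.
prod = {((0.0, x2), (y1, y1 * 100.0)): 0.25
        for x2 in (1.0, -1.0) for y1 in (eps, -eps)}
```

Under monge, π(Y_1 = eps | X_1 = 0, X_2 = 1) = 1 while π(Y_1 = eps | X_1 = 0) = 1/2, which is exactly the "cheating" ruled out by causality.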
Replacing couplings by bi-causal couplings in (1.2) one arrives at the nested distance as introduced by Pflug and Pichler [49,50]. Since our goal is to compare also semimartingale models in continuous time, we will work with an adapted Wasserstein distance that is defined slightly differently. (Notably, the two distances are equivalent for probabilities on R^N. We will elaborate in Remark 3.6 below why the definition in (1.4) is more appropriate for our purposes even in discrete time.) Denote by SM(Ω) the set of all probabilities P on (the Borel σ-field of) Ω under which the canonical process X is a semimartingale, and for p ∈ [1, ∞) by SM^p(Ω) the subset thereof for which

E_P[[M]_T^{p/2} + |A|^p_{1-var}] < ∞.

Here X = M + A denotes the unique continuous semimartingale decomposition of X under P into a continuous martingale M starting in zero and a continuous adapted process A of finite variation, [·] is the quadratic variation and |·|_{1-var} the first variation norm. Of course all adapted processes are semimartingales in the discrete time case. The adapted Wasserstein distance is then defined by

AW_p(P, Q) := inf_{π ∈ Cpl_BC(P,Q)} E_π[[M]_T^{p/2} + |A|^p_{1-var}]^{1/p},    (1.4)

where X − Y = M + A denotes the semimartingale decomposition of X − Y under π.
It is shown in Lemma 3.1 that AW_p is well-defined (i.e. that X − Y is a semimartingale under every bi-causal coupling) and in Lemma 3.2 that AW_p in fact defines a metric. Remark 1.4. In the continuous time setup, the adapted Wasserstein distance also admits an alternative representation. In Section 3.1 below we will give explicit formulae for the adapted Wasserstein distance in the case of semimartingale measures described by simple SDEs.
1.3. Stability of Superhedging. For the rest of this article, fix some k ∈ R_+ and let H_k be the set of all predictable processes H = (H_t)_{t∈I} which are uniformly bounded by k, i.e. |H_t| ≤ k for all t ∈ I. For every p ≥ 1, write b_p for the 'upper' Burkholder–Davis–Gundy (BDG) constant, cf. Remark 3.12 below. In particular it is known that b_1 ≤ 6 and that b_2 = 2.
Our first main result concerns the stability of superhedging and constitutes a stronger version of Theorem 1.1 stated above. Theorem 1.5. Let P, Q ∈ SM^1(Ω), H ∈ H_k and let C : Ω → R be Lipschitz with constant L. Then the hedging error under Q is bounded by the distance of P and Q plus the hedging error under P in the following sense: there exists G ∈ H_k such that the weak hedging inequality (WHI) holds. Assume in addition that H_t : Ω → R is Lipschitz with constant L̃ for every t ∈ I. Then we can take G = H and obtain the strong hedging inequality (SHI), where β := 2√(2b_1) L̃ min{AW_2(P, δ_0), AW_2(Q, δ_0)}.
Importantly, it is in general impossible to transfer a superhedge under P into an exact superhedge under Q. This occurs already in a one-period framework and is not a by-product of our definition of the adapted Wasserstein distance; see Remark 5.2. A similar reasoning makes it necessary to consider only trading strategies bounded by k; see Remark 5.3.
It is worthwhile to compare the two inequalities (WHI) and (SHI):
(S) In a certain sense the 'strong hedging inequality' (SHI) seems to be the more relevant assertion: after all, a trader does not know that the model Q (rather than the model P) describes reality, and hence she might (somewhat stubbornly) stick to the initial plan of hedging her risk according to the strategy H. The inequality (SHI) then allows one to quantify the losses due to this model error.
(W) However, the 'weak hedging inequality' (WHI) also has a particular merit: suppose that a trader W starts with the prior belief that the asset price evolves according to a Black–Scholes model with volatility σ_1 but soon after time 0 realizes that a volatility σ_2 (where σ_2 ≠ σ_1) yields a more adequate description of reality. If the witty trader W makes an accurate guess about the correct model and updates her trading strategy accordingly, her losses can be controlled through the tighter bound in (WHI).
In Theorem 4.2 we provide a version of Theorem 1.5 where (·)^+ is replaced by a convex, strictly increasing loss function l : R → R_+.
Another way to gauge the effectiveness of an almost-superhedge is by means of risk measures. We postpone the general formulation to Theorem 4.3 and first present a version that appeals to the average value at risk AVaR^P_α. Recall that for a random variable Z : Ω → R,

AVaR^P_α(Z) := inf_{m ∈ R} { m + (1/α) E_P[(Z − m)^+] }

is the average value at risk at level α ∈ (0, 1) under the model P. We then have Theorem 1.6. Assume that C : Ω → R is Lipschitz with constant L. Then a bound analogous to (WHI) holds; if moreover H_t is Lipschitz with constant L̃ for every t ∈ I and β is the constant defined in Theorem 1.5, then a bound analogous to (SHI) holds. The interpretation of this result is similar to that of Theorem 1.5: as AVaR^P_α(·) is translation invariant, one may pass to the infimum over H ∈ H_k, and the right-hand side then constitutes a relaxed version of the superhedging price.
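For equally weighted samples, the average value at risk can be computed directly from the standard Rockafellar–Uryasev variational representation. The sketch below (our own convention: Z is a loss, large values are bad, and the numbers are illustrative) cross-checks it against the average of the worst α-fraction of outcomes.

```python
def avar(sample, alpha):
    """AVaR_alpha via the Rockafellar-Uryasev formula
    inf_m { m + E[(Z - m)^+] / alpha } for an equally weighted sample.
    The objective is convex and piecewise linear with kinks at the sample
    points, so minimizing over those points is exact."""
    n = len(sample)
    return min(m + sum(max(z - m, 0.0) for z in sample) / (alpha * n)
               for m in sample)

def tail_average(sample, alpha):
    """Average of the worst alpha-fraction of outcomes (alpha * n integral)."""
    k = round(alpha * len(sample))
    return sum(sorted(sample)[-k:]) / k
```

For alpha * n integral the two computations agree, and the translation invariance used in the text, AVaR(Z + c) = AVaR(Z) + c, is immediate from the formula.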
Notably, the explicit calculations of the adapted Wasserstein distance given in Section 3.1 imply that Theorem 1.6 (and similarly Theorem 1.5) is sharp. Example 1.7 (Hedging in a Brownian framework). Consider a European call option C(X) = (X_T − K)^+, where for simplicity K = 0. Moreover, let P^σ be Wiener measure with constant volatility σ ≥ 0. Then for every σ, σ̃ ≥ 0, k ≥ 1, and α ∈ (0, 1) the bound of Theorem 1.6 is attained up to a constant factor (we defer the proof of this fact to Section 4). This shows that the estimate in Theorem 1.6 is tight (up to constants), in the sense that it is essentially impossible to improve on the probability metric AW_1.
We make the important remark that Glanzer, Pflug, and Pichler [27] use the nested distance to control acceptability prices in discrete time models in a Lipschitz fashion. Specifically, in a discrete one-period framework, [27, Proposition 3] and Theorem 1.6 yield almost the same assertion: in this setup, the only difference is that [27, Proposition 3] does not specify a Lipschitz constant and does not assume uniform boundedness of the admissible hedging strategy. (However, the latter seems to be in conflict with our Remark 5.3 below.)
1.4. Stability of Utility Maximization. We move on to consider the continuity of utility maximization. Let U : R → R be a concave, increasing utility function, and denote by U′ the left-continuous version of its derivative. We have Theorem 1.8. Let C : Ω → R be Lipschitz continuous and assume that there exists a suitable integrability bound for U′. Then the utility maximization problem is Lipschitz continuous with respect to the adapted Wasserstein distance. The failure of usual Wasserstein distances to guarantee stability of utility maximization is illustrated in Remark 5.1.
1.5. Structure of the paper. In Section 2 we briefly review the literature related to this paper. In Section 3 we establish some basic properties of the adapted Wasserstein distance, discuss the choice of cost function, and give some examples. Moreover, we derive a contraction principle (Theorem 3.10) which relates the adapted Wasserstein distance to a 'weak' transport distance (in the sense of Gozlan et al. [29]). This result forms the basis for the proofs of the results mentioned in the introduction, as well as certain extensions of these results; see Section 4. Finally, we conclude with some remarks in Section 5.

Literature
The articles closest in spirit to ours are [1,18,27]. Acciaio, Zalashko and one of the present authors consider in [1] an object related to the adapted Wasserstein distance in continuous time in connection with utility maximization, enlargement of filtrations and optimal stopping. Glanzer, Pflug, and Pichler [27] prove a deviation inequality for the so-called nested distance in a discrete time framework, and consider acceptability pricing over an ambiguity set described through the nested distance. Bion-Nadal and Talay [18] study via PDE arguments a continuous-time optimization problem which is related to the adapted Wasserstein distance.
The concept of causal couplings, and optimal transport over causal couplings, has been recently popularized by Lassalle [42] although precursors can be found in the works [58,55]. This notion is central to the recent articles [1,9,7,8].
The idea of strengthening weak convergence of measures in order to account for the temporal evolution has some history. Indeed several authors have independently introduced different approaches to address this challenge: The seminal unpublished work by Aldous [2] introduces the notion of extended weak convergence for the study of stability of optimal stopping problems. The principal idea is not to compare the laws of processes directly, but rather the laws of the corresponding prediction processes. Independently, Hellwig [30] introduces the information topology for the stability of equilibrium problems in economics. Roughly, two probability measures on a product of finitely many spaces X_1 × . . . × X_N are considered to be close if for each t ≤ N the projections onto the first t coordinates as well as the corresponding (regular) conditional disintegrations are close. Unrelated to these developments, Pflug and Pichler [49,50,51] have introduced the nested distances for the stability of stochastic programming in discrete time. The nested distance is the obvious role model for the adapted Wasserstein distances considered in this article and (as mentioned above) for a fixed number of time steps and p ≥ 1 they are equivalent. Yet another idea to account for the temporal evolution of processes would be to symmetrise the causal transport cost W_c(P, Q) defined by Lassalle [42] by taking the maximum or sum of W_c(P, Q) and W_c(Q, P); this was pointed out by Soumik Pal.
In parallel work [6], the four authors of the present article investigate the relations between these concepts in detail. Remarkably, in discrete time all of the concepts mentioned above (adapted Wasserstein distances, extended weak convergence, the information topology, nested distances, and symmetrised causal transport costs) define the same topology.
The question of stability in mathematical finance has been studied from different perspectives over the years. Notably, starting with the articles of Lyons [43] and Avellaneda, Levy, Paras [5], the area of robust finance has mainly focused on extremal models and hedging strategies which dominate the payoff for every model in a specified class. Following the publication of Hobson's seminal article [33], connections with the Skorokhod embedding problem have been a driving force of the field, see the surveys of Hobson [34] and Obłój [45]. Recently this has been complemented by techniques coming from (martingale) optimal transport; early papers which advance this viewpoint include [35,14,26,15,19,24,22,13]. The literature on 'local' misspecification of volatility, in a sense more closely related to the present article, appears sparser. El Karoui, Jeanblanc, and Shreve [25] establish in a stochastic volatility framework that if the misspecified volatility dominates the true volatility, then the misspecified price of call options dominates the real price; see also the elegant account of Hobson [36]. More recently, the question of pricing and hedging under uncertainty about the volatility of a reference local volatility model is studied by Herrmann, Muhle-Karbe, and Seifried [32] (see also [31]). Less plausible models are penalized through a mean square distance to the volatility of the reference model, and the authors obtain explicit formulas for prices and hedging strategies in the limit of small uncertainty aversion. Becherer and Kentia [12] derive worst-case good-deal bounds under model ambiguity which concerns drift as well as volatility. Indeed, discussions with Dirk Becherer motivated us to consider also models with drift in our results on stability of superhedging. The behaviour of the superhedging price in a ball (wrt various notions of distance) around a reference model is studied in depth by Obłój and Wiesel [46] for a d-dimensional asset and one time period.
A notable implication of this work is that it yields a coherent way to measure model uncertainty (in the sense of Cont's influential article [23]): Fix a subset M_0 of the set M of all consistent models, i.e. martingale measures which are consistent with benchmark instruments whose price can be observed on the market. Given M_0, the model uncertainty associated to a derivative f can be gauged through

ρ_{M_0}(f) := sup_{Q ∈ M_0} E_Q[f] − inf_{Q ∈ M_0} E_Q[f].

The worst-case approach typically pursued in robust finance then yields ρ_{M_0}(f) for M_0 = M, but it appears equally natural to take M_0 to be an infinitesimal ball around a reference model. This approach was first carried out by Drapeau, Obłój, Wiesel and one of the present authors [11] in a one-period framework. Our results indicate that the adapted Wasserstein distance provides a way to extend this to a multi-period setup; we intend to pursue this further in future work.
On a different note, much work has been done regarding the convergence of discrete time models to their continuous time analogues. Due to the vastness of this literature we refer the reader to the book [54] for references. Finally, in more recent times and starting from the works of Kardaras and Žitković, the stability of utility maximization has been studied in [38,40,41,44,57], among others.

The adapted Wasserstein distance
The following lemma shows that AW_p is well-defined. Proof (of Lemma 3.1). Let X = M + A be the semimartingale decomposition under P and consider M and A as processes on Ω × Ω via M(ω, η) := M(ω) and A(ω, η) := A(ω). Further let π = P(dω)π_ω(dη) be a bi-causal coupling between P and Q. To show that X = M + A remains the semimartingale decomposition under π, it is enough to show that M is a martingale under π. To that end, let 0 ≤ s ≤ t and let Z : Ω × Ω → R be F_s ⊗ F_s-measurable and bounded. Then the random variable Z′ : Ω → R defined by Z′(ω) := ∫ Z(ω, η) π_ω(dη) is F_s-measurable up to P-null sets, and clearly bounded. Indeed, if Z(ω, η) = Z_1(ω)Z_2(η) for F_s-measurable bounded functions Z_1 and Z_2, then it follows from the definition of bi-causality that Z′ is F_s-measurable modulo P; the general statement then follows from a monotone class argument. Therefore

E_π[Z(M_t − M_s)] = E_P[Z′(M_t − M_s)] = 0,

so M is indeed a martingale under π. Proof (of Lemma 3.2). It is clear that AW_p(P, Q) = AW_p(Q, P) ≥ 0 for all P, Q ∈ SM^p(Ω). Suppose that AW_p(P, Q) = 0. As ‖·‖_∞ ≤ |·|_{1-var}, the BDG inequality applied to the martingale M shows that, whenever π participates in the minimization problem defining AW_p(P, Q), the usual Wasserstein distance between P and Q (defined w.r.t. the ‖·‖_∞-norm) is dominated from above by a constant multiple of AW_p(P, Q), and so P = Q.
If π(dω, dγ) := ∫_Ω Π(dω, dη, dγ) denotes the projection of Π onto the first and third components, then it is clear that the first and second marginals of π are P and R, respectively. Moreover, a disintegration π = π_ω(dγ)P(dω) is obtained by composing the disintegrations of the two given couplings. In particular, for every A ∈ F_t we have that ω ↦ π_ω(A) is F_t-measurable modulo P.
The argument for π = π_γ(dω)R(dγ) is similar, and therefore π is a bi-causal coupling between P and R. Finally, it follows as in the proof of Lemma 3.1 that, if X = M^X + A^X, Y = M^Y + A^Y, and Z = M^Z + A^Z are the semimartingale decompositions under P, Q, and R, then they remain the semimartingale decompositions under Π on Ω³ endowed with the product filtration.
To finish the proof of the triangle inequality, we observe that (M, A) ↦ E[[M]^{p/2}_T + |A|^p_{1-var}]^{1/p} is a norm on the product of these spaces, and we conclude by applying it to the decompositions of X − Y, Y − Z, and X − Z under Π. To conclude the proof, it remains to show that AW_p(P, Q) < ∞ for all P, Q ∈ SM^p(Ω). By Lemma 3.1, we have AW_p(P, δ_0)^p ≤ E_P[[M]^{p/2}_T + |A|^p_{1-var}] < ∞, where X = M + A is the semimartingale decomposition under P. Therefore the triangle inequality implies that AW_p is real-valued on SM^p(Ω).
3.1. Explicit formulae. Consider the SDEs

dX_t = μ_i(t, X_t) dt + σ_i(t, X_t) dW_t,  i ∈ {1, 2}.

Assume that each SDE admits a unique strong solution and denote by P^{μ_i,σ_i} the respective laws. Further assume that
• μ_1 is a function of time only (namely μ_1 : [0, T] → R),
• σ_1, σ_2 ≥ 0 and at least one of them is a function of time only.
The discrete time version of the aforementioned synchronous coupling is given by the Knothe-Rosenblatt rearrangement [9], and a variant of the previous result can also be obtained in the discrete time framework.
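A minimal discrete sketch of the Knothe–Rosenblatt rearrangement for two-period models (the tree encoding and the l1 cost are our own illustrative choices): quantile-couple the time-1 marginals, then quantile-couple the conditional laws of the second step. The resulting coupling is adapted by construction.

```python
def quantile_coupling(mu, nu):
    """Comonotone (quantile) coupling of two finite measures on the line with
    equal total mass; mu, nu are lists of (atom, weight)."""
    mu, nu = sorted(mu), sorted(nu)
    out, i, j = [], 0, 0
    a, b = mu[0][1], nu[0][1]  # remaining mass of the current atoms
    while True:
        m = min(a, b)
        out.append((mu[i][0], nu[j][0], m))
        a -= m
        b -= m
        if a <= 1e-15:
            i += 1
            if i == len(mu):
                break
            a = mu[i][1]
        if b <= 1e-15:
            j += 1
            if j == len(nu):
                break
            b = nu[j][1]
    return out

def knothe_rosenblatt(tree_p, tree_q):
    """Knothe-Rosenblatt rearrangement of two two-period trees encoded as
    {x1: (weight, [(x2, cond_weight), ...])} with normalized conditionals.
    Returns the coupling atoms ((x1,x2),(y1,y2),mass) and its l1 transport cost."""
    first = quantile_coupling([(x1, w) for x1, (w, _) in tree_p.items()],
                              [(y1, w) for y1, (w, _) in tree_q.items()])
    atoms, cost = [], 0.0
    for x1, y1, m in first:
        for x2, y2, m2 in quantile_coupling(tree_p[x1][1], tree_q[y1][1]):
            atoms.append(((x1, x2), (y1, y2), m * m2))
            cost += m * m2 * (abs(x1 - y1) + abs(x2 - y2))
    return atoms, cost

# Toy trees: P reveals nothing at time 1, Q reveals the sign of the second step.
tree_p = {0.0: (1.0, [(-1.0, 0.5), (1.0, 0.5)])}
tree_q = {0.01: (0.5, [(1.0, 1.0)]), -0.01: (0.5, [(-1.0, 1.0)])}
atoms, cost = knothe_rosenblatt(tree_p, tree_q)
```

On this toy pair the adapted cost is 1.01, of order 1 despite the marginals being close in the plain Wasserstein sense.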
Proof. Let π be a feasible coupling for AW_p(P^{μ_1,σ_1}, P^{μ_2,σ_2}) with finite cost. Naturally, for this proof we denote the coordinate processes on Ω × Ω by (X^1, X^2). As before we let X^i = A^i + M^i be the unique continuous Doob–Meyer decomposition of X^i under the P^{μ_i,σ_i}-completion of its right-continuous filtration. Observe that (d/dt)A^1 is a.s. deterministic, by the assumption on μ_1, and that the law of (d/dt)A^2 is independent of the coupling π. Both facts can be derived easily from an identity which, by the Lebesgue differentiation theorem, holds dt × dπ-a.s. As a consequence, the term E_π[|A^1 − A^2|^p_{1-var}] is independent of the coupling π, and so we may ignore it and focus only on the martingale term, where W, Ŵ are independent standard one-dimensional Brownian motions and {σ_{ik} : i, k ∈ {1, 2}} are real-valued processes, both adapted in the enlarged filtered space. In the following we will omit the argument {X^i_s}_{s≤t} from σ_i. By the Cauchy–Schwarz inequality we deduce an almost sure lower bound. As in the beginning of the proof, the right-hand side does not depend on the coupling π, thanks to either σ_i being a function of time only. To conclude, observe that for the synchronous coupling π* we have equality in the above bound.
As an easy consequence we have Example 3.4. For bounded Lipschitz functions μ_1, μ_2, σ_1, σ_2 we denote by P^{μ_i,σ_i} the law of the diffusion

dX_t = μ_i(t, X_t) dt + σ_i(t, X_t) dW_t.

Assume that
• μ_i is independent of the x-variable, for some i ∈ {1, 2}, and
• σ_k is independent of the x-variable, for some k ∈ {1, 2}.
Writing j ∈ {1, 2}\{i} and ℓ ∈ {1, 2}\{k}, we obtain an analogous explicit formula. We now illustrate that in general it is not true that the straightforward synchronous coupling of Lemma 3.3 is optimal. As a consequence, we do not expect a closed-form expression for the adapted Wasserstein distance. A discrete-time version of this observation is discussed in [7, Section 7].
Example 3.5. Consider d = 1, T = 2, and for each c ∈ R introduce μ^c_t(ω) := c·1_{[1,2]}(t)·sign(ω_1) and μ̄^c_t(ω) := −μ^c_t(ω). Assuming that B is a Brownian motion, and for σ ∈ R_+, we introduce two couplings. These couplings share the same marginals and each of them is bi-causal. It is easy to compute the cost E[[M]^{p/2}_T + |A|^p_{1-var}] = (8σ²)^{p/2} for one of them. We conclude that, for each p, there are plenty of pairs (c, σ) such that the "synchronous" coupling π_1 is not optimal between its marginals for the metric AW_p.
Remark 3.6 (The choice of the quadratic variation). In continuous time, a natural choice for the cost function on Ω is the maximum norm. When the adapted distance is defined w.r.t. this cost function, the BDG inequalities imply that this yields an equivalent distance on the set of true martingale measures. However, when considering semimartingales, this cost is too coarse. For example, let (ω_n) be a sequence in Ω which converges to zero in maximum norm but whose first variation tends to infinity. Then P_n := δ_{ω_n} converges to P := δ_0 (when the adapted distance is defined only with the maximum norm); however, none of our optimization problems converge (take a strategy H ∈ H_k for which (H(X) • X)_T ≈ k|ω_n|_{1-var} almost surely).
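A concrete instance of the sequence (ω_n) from Remark 3.6 (our own choice, not from the text) is ω_n(t) = sin(n²t)/n: the sup-norm is of order 1/n while the first variation grows like n. The following sketch checks both facts on a sampling grid.

```python
import math

def omega(n, pts_per_n2=20):
    """Sampled path t -> sin(n^2 t)/n on [0, 1]: sup-norm of order 1/n,
    first variation of order n (about n^2/pi half-oscillations of height 2/n)."""
    m = pts_per_n2 * n * n  # fine enough to resolve every oscillation
    return [math.sin(n * n * k / m) / n for k in range(m + 1)]

def sup_norm(path):
    return max(abs(v) for v in path)

def first_variation(path):
    return sum(abs(b - a) for a, b in zip(path, path[1:]))
```

So δ_{ω_n} → δ_0 for the maximum-norm cost, while the gains (H • X)_T of a strategy tracking the oscillations are of order k · |ω_n|_{1-var} → ∞, exactly the phenomenon the "quadratic plus first variation" cost rules out.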
The above argumentation of course does not work in discrete time, as then all norms are equivalent. However, a revealing reason to stick to "quadratic plus first variation" is that discrete time approximations work well together with quadratic variation, and not with general distances: Consider Ω = C([0, 1]) and, for deterministic continuous σ : [0, 1] → R_+, let P^σ be Wiener measure on Ω with volatility σ. For each N, let P^σ_N be the law of a random walk on {0, 1/N, 2/N, . . . , 1} with independent increments from n/N to (n + 1)/N distributed according to N(0, σ²_{n/N}/N). Then one can compute that

AW_2(P^σ_N, P^{σ′}_N) → AW_2(P^σ, P^{σ′}) as N → ∞.

That is, the discrete approximation preserves the correct distance. However, this situation changes drastically when R^N is equipped with, say, the ℓ¹ norm. Indeed, in this case a computation shows AW_2(P^σ_N, P^{σ′}_N) → ∞ as N → ∞ whenever σ ≠ σ′.
3.2. Stochastic integrals and a contraction principle. We present here the two technical results which underlie the proofs of the main theorems in the article. The first one is Lemma 3.7. Let P, Q ∈ SM(Ω), H ∈ H_k, and let π be a bi-causal coupling between P and Q. Then there exists a process G ∈ H_k such that G_t(Y) = E_π[H_t(X)|Y] for every t, π-almost surely. Moreover, we have

(G(Y) • Y)_T = E_π[(H(X) • Y)_T | Y], π-almost surely.

Proof. Let π = π_η(dω)Q(dη) be a disintegration and define G_t(η) := ∫ H_t(ω) π_η(dω) for every t and η ∈ Ω. By the definition of bi-causal couplings, G_t is measurable w.r.t. the Q-completion of F_{t−1}. It remains to pick versions of G_t which are F_{t−1}-measurable. In continuous time we take G to be the predictable projection of H, under the reference measure π, with respect to the π-completion of the filtration {∅, Ω} ⊗ F^Y. By [1, Lemma C.1] the result is π-indistinguishable from a predictable process under the Q-completion of the filtration F^Y. The t-by-t, π-almost sure equality G_t(Y) = E_π[H_t(X)|Y] is then a consequence of the definition of the predictable projection. The π-almost sure equality (G(Y) • Y)_T = E_π[(H(X) • Y)_T | Y] is established in the remainder of the proof.
The statement is true if, instead of the stochastic integrals, we consider the integrals w.r.t. the finite variation part of Y (either by properties of Riemann–Stieltjes integrals, or directly from the definition of the predictable projection). For this reason we may now assume that Y is itself a martingale.
We first take the following result for granted: if h is bounded and predictable in the filtration of (X, Y), and if g denotes its predictable projection in the filtration of Y under the measure π, then (3.2) holds. We know that there exists a sequence (H_n) of predictable simple processes such that

By Itô isometry the stochastic integrals (H_n • Y)_T converge in L²(π) to (H • Y)_T.
Denoting by G_n the predictable projection of H_n with respect to the Y-filtration, we deduce from (3.2) that the G_n converge as well, so again by Itô isometry (G_n • Y)_T converges in L²(π) to (G • Y)_T. The π-almost sure equality (G_n • Y)_T = E_π[(H_n • Y)_T | Y] follows easily from the bi-causality of the coupling π, and by taking L² limits the desired conclusion is obtained.
To finish the proof we must establish (3.2). This follows from the defining property of the predictable projection upon taking expectations. Our next crucial technical result is given in Theorem 3.10 below. But first we need some preparation. Lemma 3.9. Let P, Q ∈ SM^p(Ω), let π be a bi-causal coupling between P and Q, let H ∈ H_k, and write X − Y = M + A for the semimartingale decomposition under π. Then, for every p ≥ 1, we have an estimate in terms of b_p, the upper constant in the BDG inequality. If further H_t : Ω → R is L-Lipschitz continuous for every t, then we have a corresponding refined estimate. Proof. The elementary inequality (x + y)^p ≤ 2^{p−1}(x^p + y^p) for x, y ≥ 0 together with the BDG inequality and the fact that ‖·‖_∞ ≤ |·|_{1-var} imply the first part. The same arguments yield a bound in terms of E_π[[M]^{p/2}_T + |A|^p_{1-var}], from which the second part follows. To prove the third claim, split the hedging error into two terms. The second term is smaller than 2^{p−1} · 2^{p−1} k^p b_p E_π[[M]^{p/2}_T + |A|^p_{1-var}] by the second part. It remains to estimate E_π[|((H(X) − H(Y)) • X)_T|^p]. Write X = N + B for the semimartingale decomposition of X under P. By Lemma 3.1, the semimartingale decomposition under π is still X = N + B. Moreover, the BDG inequality, the Lipschitz continuity of H, and Hölder's inequality imply the required estimate. Putting all estimates together and interchanging the roles of X and Y yields the claim.
Denote by P_p(R) the set of all Borel probability measures μ on R such that ∫|x|^p μ(dx) < ∞. Moreover, let d_p(μ, ν) be the usual p-Wasserstein distance, and let d^w_p be the weak p-Wasserstein cost, that is,

d_p(μ, ν)^p = inf { ∫ |x − y|^p γ(dx, dy) : γ is a coupling of μ and ν },
d^w_p(μ, ν)^p = inf { ∫ | x − ∫ y γ_x(dy) |^p μ(dx) : γ is a coupling of μ and ν }.
Here γ = µ(dx)γ_x(dy) denotes the disintegration. Note that d^w_p is not symmetric and, as a consequence of Jensen's inequality, we always have d^w_p ≤ d_p. Problems akin to d^w_p(µ, ν) go under the name of 'weak optimal transport' and have recently been introduced by Gozlan et al. in [29]; see also [3,4,8,10,28]. We have

Theorem 3.10 (Contraction). Let P, Q ∈ SM_p(Ω), let π be a bi-causal coupling between P and Q, let C : Ω → R be Lipschitz with constant L, and let H ∈ H_k. Further denote by X − Y = M + A the semimartingale decomposition under π, and let G ∈ H_k be such that (G(Y) • Y)_T = E_π[(H(X) • X)_T | Y]. Then the weak transport cost d^w_p between the laws of C(Y) + (G(Y) • Y)_T under Q and of C(X) + (H(X) • X)_T under P is controlled by L, k, b_p and the characteristics M, A. Now assume in addition that H_t : Ω → R is L-Lipschitz continuous for every t; then the same estimate holds with G replaced by H.

Proof. We start by proving the first claim. Let π be as stated, and define a(X) := C(X) + (H(X) • X)_T as well as b(Y) := C(Y) + (G(Y) • Y)_T. Now let γ := (b(Y), a(X))(π), so that γ is trivially a coupling between b(Y)(Q) and a(X)(P). By assumption, together with the tower property and Jensen's inequality, the weak transport cost of γ can be estimated, and the claim follows from the first and second estimates in Lemma 3.9.
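As a sanity check on the inequality d^w_p ≤ d_p, and on the fact that it can be strict, the following sketch (function name is ours) evaluates both costs in the only possible coupling when µ is a point mass: taking µ = δ_0 and ν = (δ_{−1} + δ_1)/2 gives d_1 = 1 but d^w_1 = 0, since the barycenter of ν is 0.

```python
def costs_from_point_mass(nu_points, nu_weights, x0=0.0):
    # With mu = delta_{x0}, the only coupling gamma sends x0 to nu, so
    # d_1(mu, nu) is E_nu|x0 - y|, while the weak cost only sees the
    # barycenter of the disintegration gamma_{x0} = nu.
    d1 = sum(w * abs(x0 - y) for y, w in zip(nu_points, nu_weights))
    bary = sum(w * y for y, w in zip(nu_points, nu_weights))
    dw1 = abs(x0 - bary)
    return d1, dw1

d1, dw1 = costs_from_point_mass([-1.0, 1.0], [0.5, 0.5])
print(d1, dw1)  # 1.0 0.0
```

The asymmetry of d^w_p is also visible here: exchanging the roles of µ and ν changes which marginal is disintegrated.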
In the second case, where H is additionally Lipschitz, let d(X) := C(X) + (H(X) • X)_T as well as e(Y) := C(Y) + (H(Y) • Y)_T, and set γ := (e(Y), d(X))(π). Then, arguing as before, the claim follows from the first and third estimates of Lemma 3.9.
Remark 3.12. By b_p we denote the smallest real number such that

E[sup_{t≤T} |M_t|^p]^{1/p} ≤ b_p E[[M]_T^{p/2}]^{1/p}

for every martingale M. For p ≥ 2 it was established by Burkholder [20] that b_p = p, but the value of b_p is unknown for p ∈ [1, 2); see [47], [48, page 427]. By [16], b_1 ≤ 6. (The optimal constant in the reverse inequality is known for the trivial case p = 2 and for p = 1. In the latter instance one obtains √3 [21] for general martingales and 1.2727… [56] for continuous martingales, respectively.)
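The constant b_p = p for p ≥ 2 can be probed numerically. The following Monte Carlo sketch (ours, not from the paper) checks the defining inequality for a ±1 random-walk martingale, for which [M]_T equals the number of steps almost surely:

```python
import random

random.seed(0)

def bdg_check(p, n_steps=200, n_paths=2000):
    """Monte Carlo check of E[sup|M|^p]^(1/p) <= p * E[[M]_T^(p/2)]^(1/p)
    for a +-1 random-walk martingale, where [M]_T = n_steps a.s."""
    sup_p = 0.0
    for _ in range(n_paths):
        m, run_max = 0, 0
        for _ in range(n_steps):
            m += random.choice((-1, 1))
            run_max = max(run_max, abs(m))
        sup_p += run_max ** p
    lhs = (sup_p / n_paths) ** (1 / p)
    rhs = p * (n_steps ** (p / 2)) ** (1 / p)  # b_p = p for p >= 2
    return lhs, rhs

lhs, rhs = bdg_check(p=3)
assert lhs <= rhs
```

The bound is far from tight for the random walk, which is consistent with b_p being a worst-case constant over all martingales.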

4. Proofs of the results stated in the introduction and extensions
Thanks to the previous work, the strategy for the proofs boils down to two parts. In a first step, one forgets about the space Ω and focuses only on the continuity of the problem at hand with respect to d_p or d^w_p when image measures on R are plugged in: e.g. in utility maximization this means studying the continuity of µ ↦ ∫ U(x) µ(dx). In a second step, one combines the obtained continuity with the contraction theorem of the previous section.

4.1. Proof of Theorem 1.5. We will need the following elementary estimate.

Lemma 4.1. Let µ, ν ∈ P_1(R) and let f : R → R be convex and Lipschitz with constant L. Then

∫ f dµ − ∫ f dν ≤ L d^w_1(µ, ν).
Proof. Let γ = µ(dx)γ_x(dy) be a coupling of µ and ν. Applying Jensen's inequality to the convex function f we obtain

∫ f dν = ∫∫ f(y) γ_x(dy) µ(dx) ≥ ∫ f(∫ y γ_x(dy)) µ(dx) ≥ ∫ f(x) µ(dx) − L ∫ |x − ∫ y γ_x(dy)| µ(dx).

As γ was arbitrary, this implies the claim.
In fact there is equality in the previous lemma, as shown in [29, Proposition 3.2].
We now turn to the proof of Theorem 1.5. For n > 0 let π_n be a bi-causal coupling which attains the infimum in the definition of AW_1(P, Q) up to a margin of 1/n. (Note that µ_n, ν ∈ P_1(R) as P, Q ∈ SM_1(Ω).) By Lemma 4.1 we obtain (4.1). Suppose first that E_Q[[Y]_T] < ∞ and denote by A the finite variation process associated to Y. Then, as (G_n) is uniformly bounded by k, there exist a predictable G and a sequence of forward-convex combinations of (G_n) which converge in L^2(dQ ⊗ d([Y] + A)) to G. This, (4.1), and the convexity of (·)^+ lead to the desired conclusion. The general case follows by a simple but notationally heavy localization argument.
The proof in the case that G = H and H is Lipschitz follows analogously from the second part of Theorem 3.10.

4.2. Proof of Theorem 1.6. In a first step notice that, for all P, P′ and random variables Z, Z′, it follows as in Lemma 4.1 that AVaR^P_α(Z) − AVaR^{P′}_α(Z′) ≤ d^w_1(Z(P), Z′(P′))/α. Indeed, if γ is a coupling from µ := Z(P) to ν := Z′(P′), then, as x ↦ (x − m)^+/α is convex and 1/α-Lipschitz for every m ∈ R, minimizing over γ yields the claim. The rest of the proof follows the line of argumentation of the proof of Theorem 1.5. Fix P, Q ∈ SM_1(Ω). Assume, for notational simplicity only, that there exists a bi-causal coupling π which attains the infimum in the definition of AW_1(P, Q), and that there exists H* ∈ H_k attaining the infimum in the hedging problem under P; the resulting chain of inequalities is then due to Theorem 3.10. Interchanging the roles of P and Q yields the desired conclusion. The proof of the second estimate follows analogously.
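Since d^w_1 ≤ d_1, the first step implies in particular |AVaR^P_α(Z) − AVaR^{P′}_α(Z′)| ≤ d_1(Z(P), Z′(P′))/α, which can be checked directly on empirical measures: AVaR is computed from the formula AVaR_α(Z) = inf_m (m + E[(Z − m)^+]/α), and d_1 from sorted samples. A sketch (function names are ours):

```python
def avar(sample, alpha):
    """AVaR_alpha(Z) = inf_m ( m + E[(Z - m)^+]/alpha ); for an empirical
    measure the objective is piecewise linear and convex in m, so the
    infimum is attained at a sample point."""
    n = len(sample)
    def obj(m):
        return m + sum(max(z - m, 0.0) for z in sample) / (alpha * n)
    return min(obj(m) for m in sample)

def w1(xs, ys):
    # 1-Wasserstein distance of two equally weighted empirical measures
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

x = [0.1, -1.2, 2.3, 0.7, -0.4]
y = [0.0, -1.0, 2.0, 1.0, -0.5]
alpha = 0.4
gap = abs(avar(x, alpha) - avar(y, alpha))
assert gap <= w1(x, y) / alpha + 1e-12
```

For α = k/n this AVaR is just the average of the k largest sample values, which gives an independent cross-check of the minimization.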

4.3. Proof of Example 1.7. First note that AVaR^P_α(Z) ≥ E_P[Z] for every integrable random variable Z; indeed, this follows from integrating the pointwise inequality x = x + m − m ≤ (x + m)^+/α − m. Therefore, as the Brownian stochastic integral has expectation zero, we conclude that

inf_{H ∈ H_1} AVaR^P_α(C(X) − (H(X) • X)_T) ≥ E_P[C(X)].

On the other hand, define

f(t, x) := ∫ (x + y)^+ N(0, σ²(T − t))(dy),

where N(0, σ²(T − t)) stands for the normal distribution with mean 0 and variance σ²(T − t). Then C(X) = f(T, X_T) and E_P[f(t, X_t) | F_s] = f(s, X_s) for every 0 ≤ s ≤ t ≤ T. Thus, by Itô's formula and the fact that the martingale property forces the finite variation part to vanish, one has f(t, X_t) = f(0, 0) + (H*(X) • X)_t for the predictable trading strategy H*_t := ∂_x f(t, X_t). As further |H*_t| ≤ 1 for every t and f(0, 0) = σ/√(2π), one has

C(X) − (H*(X) • X)_T = f(0, 0) = σ/√(2π), P-almost surely,

so that the infimum above is attained. The proof now follows from the explicit formula for the adapted Wasserstein distance derived in Example 3.4 and the fact that E_P[C(X)] = σ/√(2π).
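The construction in the proof can be reproduced numerically: with σ = T = 1 one has f(0, 0) = σ/√(2π), and the delta hedge H*_t = ∂_x f(t, X_t) = Φ(X_t/(σ√(T − t))) replicates C(X) = X_T^+ up to discretization error. A Monte Carlo sketch (ours; it assumes the Gaussian closed form f(t, x) = xΦ(x/s) + sφ(x/s) with s = σ√(T − t)):

```python
import math
import random

random.seed(1)

def f(t, x, sigma=1.0, T=1.0):
    # price of the call (x + N(0, sigma^2 (T - t)))^+ under Brownian motion
    s = sigma * math.sqrt(T - t)
    if s == 0.0:
        return max(x, 0.0)
    phi = math.exp(-x * x / (2 * s * s)) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1 + math.erf(x / (s * math.sqrt(2))))
    return x * Phi + s * phi

def hedge_error(n_steps=2000, sigma=1.0, T=1.0):
    """Run the delta hedge H*_t = d_x f(t, X_t) = Phi(X_t / s) along one
    Brownian path; f(0,0) + (H* . X)_T should replicate X_T^+."""
    dt = T / n_steps
    x, value = 0.0, f(0.0, 0.0, sigma, T)
    for i in range(n_steps):
        t = i * dt
        s = sigma * math.sqrt(T - t)
        delta = 0.5 * (1 + math.erf(x / (s * math.sqrt(2))))  # d_x f(t, x)
        dx = sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
        value += delta * dx
        x += dx
    return abs(value - max(x, 0.0))

assert abs(f(0.0, 0.0) - 1.0 / math.sqrt(2 * math.pi)) < 1e-12
err = hedge_error()
print(err)  # small, and shrinks as n_steps grows
```

The identity ∂_x f(t, x) = Φ(x/s) confirms |H*_t| ≤ 1, as used in the proof.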

4.4. Proof of Theorem 1.8. Recall that U′(x) ≤ c(1 + |x|^{p−1}) for all x ∈ R and some constant c. Let P, Q ∈ SM_p(Ω) be arbitrary and assume, for notational simplicity only, that there is H* ∈ H_k attaining the supremum under P and that there is a bi-causal coupling π between P and Q which is optimal for AW_p(P, Q). By Lemma 3.7 there is G ∈ H_k with (G(Y) • Y)_T = E_π[(H*(X) • X)_T | Y]; let γ be an (almost) optimal coupling for d^w_p(µ, ν). As U is concave and increasing, we have U(y) − U(x) ≤ U′(min{x, y})|x − y|. Using Jensen's inequality for the concave function U, and Hölder's inequality in the last step, we obtain an estimate in terms of E_γ[|U′(min{x, y})|^q]^{1/q}, where q denotes the conjugate Hölder exponent of p (that is, 1/p + 1/q = 1). As q(p − 1) = p, the growth assumption on U′ implies that |U′(min{x, y})|^q ≤ c̃(1 + |x|^p + |y|^p) for some (new) constant c̃. Then, by Lemma 3.9, we obtain the desired bound with a constant e depending only on c̃ and the p-th moments of P and Q. Exchanging the roles of P and Q and using Theorem 3.10 completes the proof.
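The elementary concavity inequality used above, U(y) − U(x) ≤ U′(min{x, y})|x − y| for concave increasing U, is easy to verify numerically; a sketch with the test function U(x) = 1 − e^{−x} (our choice, not from the paper):

```python
import math
import random

random.seed(2)

U = lambda x: 1.0 - math.exp(-x)   # concave and increasing on all of R
dU = lambda x: math.exp(-x)        # U' is positive and decreasing

# If y >= x, concavity gives U(y) - U(x) <= U'(x)(y - x) = U'(min)|x - y|;
# if y < x, the left-hand side is nonpositive while the right-hand side is >= 0.
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    assert U(y) - U(x) <= dU(min(x, y)) * abs(x - y) + 1e-12
```

The comment in the loop is exactly the two-case argument behind the inequality.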
4.5. Two generalizations. The following two results can be proved by almost the same arguments as used in the proofs of Theorem 1.6 and Theorem 1.8. In particular, the proofs boil down to establishing convergence for image measures with respect to d_p and give no new insight on adapted Wasserstein distances, so we shall skip them.

Let ρ be a law-invariant risk measure, which we directly view as a functional from P_p(R) to the reals. For P ∈ SM_p(Ω) and a random variable Z : Ω → R (such that Z(P) ∈ P_p(R)) we write ρ^P(Z) = ρ(Z(P)). A typical example of a law-invariant risk measure which satisfies ρ(µ) − ρ(ν) ≤ L d^w_p(µ, ν), for some constant L depending on the p-th moments of µ and ν, is the optimized certainty equivalent, introduced to the mathematical finance community in [17]. For a convex, increasing function ℓ : R → R which is bounded from below and satisfies ℓ(x)/x → ∞ as x → ∞, the optimized certainty equivalent is defined via

ρ(µ) := inf_{m ∈ R} ( m + ∫ ℓ(x − m) µ(dx) ).

If ℓ(x) ≤ c(1 + |x|^{p−1}), then it follows that the infimum over m can be taken in some compact set depending on the p-th moment. Due to cash additivity of ρ, the following proposition has the same interpretation as Theorem 1.6.

Proposition 4.3. Assume that ρ : P_p(R) → R satisfies ρ(µ) − ρ(ν) ≤ L d^w_p(µ, ν) for some constant L depending on the p-th moments of µ and ν. Then, for every Lipschitz function C : Ω → R, the mapping P ↦ inf{m ∈ R : there is H ∈ H_k with ρ^P(C(X) − m − (H(X) • X)_T) ≤ 0} is continuous with respect to AW_p.

Finally, let us point out that (though not a convex risk measure) the Value-at-Risk (VaR) would be another natural candidate for which to study continuity. However, as VaR is not continuous w.r.t. weak convergence, continuity of P ↦ inf{m ∈ R : there is H ∈ H_k with VaR^P(C(X) − m − (H(X) • X)_T) ≤ 0} fails already in a one-period model.
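For a concrete instance, take ℓ(x) = e^x − 1 in the optimized certainty equivalent ρ(µ) = inf_m (m + ∫ ℓ(x − m) µ(dx)) (this sign convention is the one used here; conventions vary in the literature). The OCE then reduces to the entropic risk log ∫ e^x µ(dx), and cash additivity ρ(Z + c) = ρ(Z) + c can be checked directly. A grid-search sketch (ours):

```python
import math

def oce(sample, loss, grid):
    """Optimized certainty equivalent inf_m ( m + E[loss(Z - m)] )
    for an empirical measure, approximated by a grid search over m."""
    n = len(sample)
    return min(m + sum(loss(z - m) for z in sample) / n for m in grid)

entropic = lambda x: math.exp(x) - 1.0  # convex, increasing, superlinear
z = [0.3, -0.8, 1.4, 0.1]
grid = [i / 1000 for i in range(-3000, 3001)]

rho = oce(z, entropic, grid)
# for loss e^x - 1 the optimal m is log E[e^Z], and the OCE equals it
closed = math.log(sum(math.exp(v) for v in z) / len(z))
assert abs(rho - closed) < 1e-3

# cash additivity: rho(Z + c) = rho(Z) + c
c = 0.5
rho_shift = oce([v + c for v in z], entropic, [g + c for g in grid])
assert abs(rho_shift - (rho + c)) < 1e-3
```

Taking ℓ(x) = x^+/α instead recovers AVaR_α, consistent with the formula used in the proof of Theorem 1.6.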
Then P and each P_n satisfy the classical no-arbitrage condition, unlike the situation described in Figure 1. While P_n converges to P in the usual Wasserstein distance, one can verify that convergence in nested distance does not hold. For example, in utility maximization of the trivial claim C = 0, we have sup_{H ∈ H_k} E_P[U(C(X) + (H(X) • X)_T)] = U(0) by Jensen's inequality (as X is a martingale under P). For P_n, taking the strategy H* consisting of H*_0 = 0 and H*_1(x) = k sign(x), one gets sup_{H ∈ H_k} E_{P_n}[U(C(X) + (H(X) • X)_T)] ≥ E_{P_n}[U((H*(X) • X)_T)], which stays bounded away from U(0), showing the lack of continuity.
Remark 5.2 (Usual Wasserstein does not work II). As explained in the introduction, the objective in Theorem 1.5 can be seen as a relaxed version of the superhedging problem. The reason to consider this relaxation is not a technical simplification: it is necessary in order to obtain continuity without further assumptions. Indeed, the superhedging problem

inf{ m ∈ R : there is H ∈ H_k such that m + (H • X)_T ≥ C(X), P-almost surely }

is not continuous in P w.r.t. the adapted distance for any k ∈ [0, ∞]. In fact, this already happens in one period, where the adapted and the usual Wasserstein distances coincide. Consider a sequence of measures P_n with full support which converge weakly to a measure P. Then the superhedging price w.r.t. P_n equals the concave envelope of C, while the superhedging price w.r.t. P equals the concave envelope of C restricted to the support of P. For a recent paper on this problem in one period, see the work of Obłój and Wiesel [46].

Remark 5.3 (Uniformly bounded strategies are necessary). Similarly to Remark 5.2, the restriction to trading strategies in H_k (i.e. uniformly bounded strategies) is also not a technical simplification. For example, in a one-period framework, the measures P_ε := (1 − ε)δ_(0,ε) + εδ_(0,−ε) converge to P := δ_(0,0) in every (adapted) Wasserstein distance. However, for small ε > 0 the corresponding values computed under P_ε do not converge to the value under P when optimizing over H ∈ H_∞ := ∪_{k∈N} H_k, the set of all bounded trading strategies.
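The one-period phenomenon of Remark 5.2 can be made concrete. With initial value x0, the superhedging price is inf{m : there is h with m + h(x − x0) ≥ C(x) for all x in the support}, i.e. the concave envelope of C over the support, evaluated at x0. For C(x) = |x| this gives price 1 under any measure with full support in [−1, 1] but price 0 under the limit δ_0. A sketch (function name and grids are ours):

```python
def superhedge_price(payoff, support, x0=0.0, h_grid=None):
    """One-period superhedging price inf{m : exists h with
    m + h*(x - x0) >= payoff(x) for every x in the support},
    i.e. the concave envelope of the payoff over the support, at x0."""
    if h_grid is None:
        h_grid = [i / 100 for i in range(-500, 501)]
    return min(max(payoff(x) - h * (x - x0) for x in support)
               for h in h_grid)

C = abs
full_support = [i / 10 for i in range(-10, 11)]  # stands in for P_n
price_full = superhedge_price(C, full_support)   # concave envelope of |x| at 0
price_limit = superhedge_price(C, [0.0])         # support of the limit P
print(price_full, price_limit)  # 1.0 0.0
```

The jump from 1.0 to 0.0 as the support collapses is exactly the discontinuity asserted in the remark.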