Long Term Average Cost Control Problems Without Ergodicity

We consider a stochastic control problem with time-inhomogeneous linear dynamics and a long-term average quadratic cost functional. We provide sufficient conditions for the problem to be well-posed. We describe an explicit optimal control in terms of a bounded and non-negative solution of a Riccati equation on [0, ∞), without an initial or terminal condition. We show that, in contrast to the time-homogeneous case, in the inhomogeneous case the optimally controlled state dynamics are not necessarily ergodic.


Introduction
Suppose that the dynamics of some controlled state X satisfy a linear SDE, where W is a one-dimensional Brownian motion, α is some square-integrable control process and b, B, c, C are real-valued deterministic bounded functions. We consider the long-term average cost functional J(x, α) = lim sup_{T→∞} (1/T) E[∫_0^T f(t, X_t, α_t) dt] (0.1), where f is a quadratic cost function of the form f(t, x, a) = β_{xx}(t)x² + β_{xa}(t)xa + β_{aa}(t)a² + β_x(t)x + β_a(t)a + β_0(t), with β_{xx}, β_x, β_{xa}, β_{aa}, β_a, β_0 being real-valued, deterministic, left-continuous and bounded functions. The problem of minimizing (0.1) arises in stylized form, e.g., in applications where an agent aims at keeping a state close to a possibly time-dependent target level, and any adjustment of the state position entails costs depending on the adjustment rate α. We refer to the end of Sect. 1 for a description of some more detailed examples.
The homogeneous version of the problem, in which b, B, c, C, β_{xx}, β_x, β_{xa}, β_{aa}, β_a, β_0 are all constant functions, is already well studied in the literature, even in a multidimensional generalization (see, e.g., [3]). The focus of the present article lies on the inhomogeneity of the setting. Our aim is to provide sufficient conditions for the inhomogeneous problem to be well-posed and to derive an explicit formula for an optimal control.
As is well known, the solvability of finite-time inhomogeneous linear-quadratic control problems is strongly linked to the solvability of a related Riccati equation (see, e.g., [20] and [22]), which in dimension one has the form (0.2) (note that U corresponds to 2P in Sect. 2 of [20]). Given a finite time horizon T ∈ (0, ∞), a solution of the problem of minimizing E[∫_0^T f(t, X_t, α_t) dt] can be expressed in terms of the solution of (0.2) with the terminal condition U_T = 0.
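To make the role of the terminal condition concrete, here is a minimal numerical sketch. Since the display of (0.2) is not reproduced above, the code assumes a generic scalar Riccati ODE U'(t) = a(t)U(t)² + 2p(t)U(t) + q(t) with placeholder coefficients a, p, q (not the actual coefficients of (0.2), which depend on b, B, c, C and the β's); it integrates backward from U_T = 0 by an explicit Euler scheme.

```python
import math

def solve_riccati_backward(a, p, q, T, n=100_000):
    """Backward explicit Euler for the scalar Riccati ODE
    U'(t) = a(t)*U(t)**2 + 2*p(t)*U(t) + q(t) with U(T) = 0.
    Returns the values U(t_i) on the uniform grid t_i = i*T/n."""
    dt = T / n
    U = [0.0] * (n + 1)
    for i in range(n, 0, -1):
        t = i * dt
        U[i - 1] = U[i] - dt * (a(t) * U[i] ** 2 + 2 * p(t) * U[i] + q(t))
    return U

# With the hypothetical constant choice a = 1, p = 0, q = -1 the equation
# reads U' = U^2 - 1, and the backward solution from U(T) = 0 is
# U(t) = tanh(T - t), which is bounded and non-negative on [0, T].
U = solve_riccati_backward(lambda t: 1.0, lambda t: 0.0, lambda t: -1.0, T=10.0)
```

In this toy instance the numerical solution stays inside [0, 1), in line with the bounded non-negative solutions singled out in Sect. 2.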
We show that also the problem of minimizing the long-term average cost functional (0.1) can be reduced to the Riccati equation (0.2). The difficulty in the infinite horizon case, however, is that no terminal condition can be imposed. In order to isolate the solution of (0.2) that determines the minimizer of (0.1), we impose the conditions that the solution is non-negative and bounded from above. Probably the most challenging part of the article is to prove that there exists a unique solution of the Riccati equation (0.2) satisfying these boundedness conditions.
Using the unique bounded non-negative solution of (0.2) on [0, ∞), we define a specific control and show, via a classical verification argument, that it is indeed optimal. In contrast to the homogeneous case, the HJB equation characterizing the control problem does depend on time. This is in line with the fact that the optimally controlled state dynamics are, again in contrast to the homogeneous case, not necessarily ergodic.
There are many articles that solve long-term average cost control problems with time-homogeneous state dynamics. We refer to [19] for an early survey. In homogeneous models the optimally controlled state dynamics usually are ergodic. Therefore, the literature frequently refers to such problems as ergodic control problems. One message of the current paper is that long-term average cost control problems can be well-posed, even without ergodicity of the optimally controlled state.
A fundamental topic in the field of control theory with long-term average cost functionals is the convergence of the HJB equations of the finite time problem version to an ergodic PDE. More precisely, assume that the HJB equation of a finite time control problem is given by (0.3), where A is the value set of the controls and L^a denotes the generator of the controlled state dynamics. There are many contributions providing conditions under which (0.3) transforms into an ergodic PDE of the type (0.4) as the time horizon converges to infinity. Notice that a solution of (0.4) consists of a pair (η, v) ∈ R × C[0, ∞). Usually it is assumed that f does not depend on time.
Exceptions are [2], [4] assuming a periodicity in time, and [5] assuming that f depends recursively on the value function divided by time-to-maturity.
[1], [14] consider a homogeneous setting and prove convergence, in some sense, of (0.3) to (0.4) under some state periodicity assumptions. [8], [18], [5] use probabilistic representations in terms of backward stochastic differential equations to establish convergence under dissipativity assumptions guaranteeing that the optimally controlled state is ergodic. [9] consider a system of ergodic BSDEs with dissipative forward part and apply them to a long-term utility maximization problem with regime switching.
We stress that in the present article we do not impose any kind of time periodicity assumption. The only assumption on the state coefficients and the cost coefficients is that they are bounded and left-continuous. As the time horizon converges to infinity, the time dependence in the HJB equation (0.3) does, in general, not disappear, and hence we do not have convergence to (0.4). A time-dependent, but periodic, PDE limit is also described in [4].
Finally, we remark that we do not impose any regularity with respect to time, and hence we cannot transform the setting into a 2-dimensional homogeneous setting with time as a new state variable.

Main Results
In this section we rigorously describe the model and summarize our main results.
Let W be a one-dimensional Brownian motion on a probability space (Ω, F, P). We denote by (F_t)_{t∈[0,∞)} the filtration generated by W, completed by the P-null sets in F.
By a control process α we mean an (F_t)-progressively measurable process α such that for all T ∈ [0, ∞) we have ∫_0^T α_s² ds < ∞. Given a control α, we assume that the state process satisfies the SDE (1.1). Notice that our assumptions imply that for every x ∈ R the SDE (1.1) has a unique solution X^{x,α} with X^{x,α}_0 = x (see, e.g., [13]). Given an initial state x ∈ R, we say that a control α is admissible if sup_{t∈[0,∞)} E[(X^{x,α}_t)²] < ∞; and we denote the set of all admissible controls by A(x).
For the admissible controls α ∈ A(x) we define the limsup long-term average cost functional J̄. We now consider the problem of minimizing J̄(x, α) among all admissible controls. To this end we introduce the value function V̄(x) for all x ∈ R. We show below that V̄ does not depend on x; but since this is a priori not known, we keep the argument x in the definition of V̄. We say that α ∈ A(x) is an optimal control for (1.2) if we have J̄(x, α) = V̄(x). Moreover, we say that α ∈ A(x) is a closed-loop control if there exists a function a : [0, ∞) × R → R such that for all x ∈ R the SDE has a unique solution X^{x,a} and α_t = a(t, X^{x,a}_t), t ∈ [0, ∞). We now summarize our main results. First, we describe an optimal control and the value function in terms of a solution of the Riccati equation (0.2). We show that there exists a unique initial condition such that equation (0.2) has on [0, ∞) a solution that is bounded from above and bounded from below by 0. Proposition 1.2 There exists exactly one non-negative and bounded solution of (0.2) on [0, ∞).
The result of Proposition 1.2 is proved in Sect. 2 as a part of Theorem 2.1. In the following we denote by U ∞ the unique non-negative bounded solution of (0.2) described in Proposition 1.2.
In Sect. 3 we show that there exist constants δ_1, δ_2 > 0 such that a corresponding integral estimate holds. We can thus define a further bounded process φ∞. We next describe a solution of the long-term cost minimization problem in terms of U∞ and φ∞.

Theorem 1.3 The closed-loop control with feedback function
is an optimal control. Moreover, (1.5) holds. Note that (1.5) implies that V̄ does not depend on x. In the following we therefore omit the argument x and interpret V̄ as a constant. Furthermore, the optimal control is not unique: an alteration of the strategy in (1.4) on a compact interval of time would move the process as if it had started in another value, and would generate only bounded additional costs. Changing the starting value and modifying the costs on a compact time interval does not affect the long-term average costs V̄. Thus the altered control is also optimal.
We prove Theorem 1.3 in Sect. 3 as a part of Theorem 3.6. We next compare the problem of minimizing J̄(x, α) with the problem of minimizing the liminf long-term average cost functional J(x, α). We also define the liminf value V(x). One can show that the feedback function (1.4) is also optimal for (1.7) and that V does not depend on x. Moreover, we have V ≤ V̄. In general, V is not equal to V̄. If V < V̄, then X^{∞,x}, the state process controlled with the optimal control α^{∞,x}, is not ergodic, i.e., it does not hold true that the cost time average converges almost surely. More precisely, we have the following. Proof We first show that the family (1/T)∫_0^T f(s, X^{∞,x}_s, α^{∞,x}_s) ds, T ≥ 1, is uniformly integrable. To this end let p ∈ (1, ∞). By Jensen's inequality we have, for some constant K independent of T, a uniform bound on the p-th moments. Hence, by the de la Vallée-Poussin theorem, the family is uniformly integrable. Now suppose that the cost time average converges a.s. Then, due to uniform integrability, we also have convergence in L¹. This, however, contradicts V < V̄. Proposition 1.4 entails, in particular, that if V < V̄, then the distribution of X^{∞,x}_t does not converge to a stationary distribution as t → ∞.
In the homogeneous case where the drift, diffusion and cost functionals do not depend on t, the optimally controlled state X ∞,x is ergodic. The homogeneous case is already well studied in the literature (see e.g. [3]). For the convenience of the reader we briefly explain how our results simplify in the homogeneous case and how they can be extended.

The Homogeneous Case
Suppose that all modelling functions b, B, c, C, β_{xx}, β_x, β_{xa}, β_{aa}, β_a, β_0 are constant. In this case also U∞ and φ∞ are constant, and the optimally controlled state X∞ satisfies the homogeneous SDE (1.10). Property (1.11), sometimes referred to as dissipativity, guarantees that (1.10) possesses a unique stationary distribution π (see, e.g., Theorem 8.3 in [17]; use for example the Lyapunov function W(x) = x²/2). Moreover, if X^{∞,x} denotes the solution of (1.10) with initial condition x ∈ R, then the distribution of X^{∞,x}_t converges to the stationary distribution as t → ∞ (see Remark 8.6 in [17]). This further entails that the long-term time averages of the costs converge.
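As a numerical illustration of this ergodic behaviour (a sketch with hypothetical parameters κ = 1, σ = 1, not derived from the model coefficients), one can simulate a dissipative Ornstein–Uhlenbeck process and check that the time average of X_t² approaches the stationary second moment σ²/(2κ):

```python
import math
import random

random.seed(0)

def ou_time_average(x0, kappa, sigma, T=2000.0, dt=0.01):
    """Euler-Maruyama simulation of dX = -kappa*X dt + sigma dW and
    computation of the time average (1/T) * integral_0^T X_t^2 dt.
    For kappa > 0 the process is ergodic, so the average approaches
    the stationary second moment sigma**2 / (2*kappa)."""
    n = int(T / dt)
    sqdt = math.sqrt(dt)
    x, acc = x0, 0.0
    for _ in range(n):
        acc += x * x * dt
        x += -kappa * x * dt + sigma * sqdt * random.gauss(0.0, 1.0)
    return acc / T

avg = ou_time_average(x0=5.0, kappa=1.0, sigma=1.0)
# stationary second moment for kappa = sigma = 1 is 1/2
```

The time average is close to 1/2 regardless of the initial value x0, which is exactly the ergodicity that fails in the inhomogeneous Example 1.5 below.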

Dissipativity in the Inhomogeneous Case
Observe that the optimally controlled state X∞ satisfies an SDE whose drift involves U∞ and 2β_{aa}(t). By Theorem 2.1 below we obtain that there are constants δ_1, δ_2 > 0 bounding the time-averaged drift. This implies that on large enough time intervals [t_1, t_2] a time-average version of the dissipativity condition (1.11) holds. However, consider B_t = 2·1_{[0,1)}(t) and all other parameters constant; then, for at least a short time, the condition (1.11) is not satisfied.
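The effect can already be seen on the deterministic skeleton of the dynamics. The sketch below uses a hypothetical drift rate B_t = 2 on [0, 1) and B_t = −1 afterwards (not the actual optimally controlled drift): dissipativity fails on [0, 1) and the state first grows, but the negative time-averaged drift eventually dominates.

```python
def flow(x0, B, T, dt=1e-4):
    """Explicit Euler for the linear ODE x'(t) = B(t)*x(t),
    the deterministic skeleton of a state with drift rate B."""
    x, t = x0, 0.0
    while t < T:
        x += dt * B(t) * x
        t += dt
    return x

B = lambda t: 2.0 if t < 1.0 else -1.0   # dissipativity fails on [0, 1)
x_after_1 = flow(1.0, B, 1.0)   # transient growth, roughly e^2
x_after_5 = flow(1.0, B, 5.0)   # decay has taken over, roughly e^(-2)
```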

Example 1.5 Consider the control problem with
Given this sequence we set the coefficients accordingly. First, observe that the function constant equal to 1 is a solution of (0.2). Thus, the larger we choose t_{2k+1}, the closer φ∞_s, s ∈ [t_{2k}, (t_{2k} + t_{2k+1})/2], gets to 2/3. Now we choose t_{2k+1} large enough. We next describe how to choose t_{2k+2}. Observe that the larger we choose t_{2k+2}, the closer φ∞_s, s ∈ [t_{2k+1}, (t_{2k+1} + t_{2k+2})/2], gets to 4/3. Now choose t_{2k+2} large enough. We have thus recursively defined the sequence (t_k)_{k∈N_0}. From (1.5) and (1.8) we now obtain V̄ ≥ 1 and V ≤ 0.
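The mechanism behind Example 1.5 can be illustrated schematically (this is a toy running cost with made-up levels 0 and 2, not the actual cost process of the example): if a bounded cost level alternates on time intervals that grow sufficiently fast, the running averages oscillate forever, so the limsup and liminf averages differ.

```python
# Interval endpoints growing fast enough: 1, 10, 100, 1e4, 1e8, 1e16.
breaks = [1.0, 10.0]
while breaks[-1] < 1e12:
    breaks.append(breaks[-1] ** 2)

def running_average(T):
    """(1/T) * integral_0^T g(s) ds for the piecewise constant cost g
    that is 0 on [0, 1), 2 on [1, 10), 0 on [10, 100), and so on."""
    total, prev, level = 0.0, 0.0, 0.0
    for b in breaks:
        if b >= T:
            return (total + level * (T - prev)) / T
        total += level * (b - prev)
        prev, level = b, 2.0 - level      # alternate 0 <-> 2
    raise ValueError("T beyond the tabulated breaks")

hi = running_average(1e4)   # end of a cost-2 stretch: average close to 2
lo = running_average(1e8)   # end of a cost-0 stretch: average close to 0
```

No time average converges here, although the cost itself is bounded; this is the non-ergodic behaviour of Proposition 1.4 in miniature.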

Comparison with the Finite Time Control Problem
The optimal control in (1.4) has a similar form as the corresponding optimal control for a finite time horizon. If we replace U∞ with U^T and φ∞ with φ^T in (1.4), then we obtain an optimal closed-loop control for the problem of minimizing E[∫_0^T f(t, X^{x,α}_t, α_t) dt] (see, e.g., Theorem 2.4.3 in [20]). Moreover, one can show that U^T and φ^T converge to U∞ and φ∞, respectively, and hence the optimal feedback function of the finite horizon problem converges to a∞ as T → ∞ (see Chapter 4 in [6]).
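This convergence is easy to observe numerically on a model equation (again U' = U² − 1 as a hypothetical stand-in for (0.2), with constant coefficients): integrating backward from U_T = 0 for larger and larger horizons, the value at a fixed time approaches the bounded non-negative solution U∞ ≡ 1.

```python
def U_T_at_zero(T, n=100_000):
    """Finite-horizon solution of the model Riccati ODE
    U'(t) = U(t)**2 - 1 with terminal condition U(T) = 0,
    evaluated at t = 0 via backward explicit Euler."""
    dt = T / n
    u = 0.0
    for _ in range(n):
        u -= dt * (u * u - 1.0)
    return u

vals = [U_T_at_zero(T) for T in (2.0, 5.0, 10.0)]
# the values increase towards the bounded non-negative solution U == 1
```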
We remark that in the finite time linear quadratic control problem with stochastic (more precisely: progressively measurable) coefficients the equation (0.2) becomes a backward stochastic differential equation (BSDE). If the BSDE is solvable, the control problem is well-posed and its solution can be characterized in terms of the BSDE (see, e.g., [15], [12], [11] and [21]).

Applications
We close this section by describing some possible applications of the solution of the long-term average cost minimization problem (1.2).
Inventory management. One can think of the state X as the inventory level of some good. Usually a low inventory level entails shortage costs and a high level increases holding costs. Both costs can be taken into account by the quadratic dependence of f on x. With the control α the inventory manager can continuously adjust the inventory level, in both directions. The quadratic dependence of f on α reflects that level corrections entail costs. The demand for the good and the adjustment costs may be subject to seasonal variations and to some long-term trends, allowed for by the time-inhomogeneity of the cost function f. In this context (1.4) is the policy that minimizes the long-term average inventory costs.
Cash balance management. Companies aim for an optimal cash balance (see, e.g., [7]). On the one hand, they want to avoid being short of cash for meeting obligations. On the other hand they want to avoid holding costs entailed by large cash positions. Any adjustment of the cash position involves transaction costs. The problem of minimizing the long term average overall costs can be formulated as a problem of type (1.2).
Inflation rate regulation. One of the main tasks of central banks is to keep the inflation rate at a healthy level. Both high inflation and deflation can have severe economic implications. Central banks have several tools at their disposal to push the inflation rate in either direction, which, however, also come with side effects of a political or economic nature; for example, bubbles in stock markets or recessions can be unfavorable outcomes. The costs of inflation or deflation, and of the measures against them, can change over time with the circumstances. Furthermore, a central bank should aim at preserving a near-optimal state for a long time without any visible time horizon, which makes the average over time a natural target functional.

Existence and Uniqueness of U ∞
In this section we show the existence and uniqueness of U ∞ , which is defined as the non-negative bounded solution of (0.2). In fact, we show a little more than that, as can be seen in the following theorem, which contains the main result of this section.
Furthermore, there are constants δ_1, δ_2 > 0 such that a corresponding estimate holds for any initial value U_0. We approach this problem by considering a simplified quadratic integral equation, first for constant and then for piecewise constant parameter functions. Finally, we generalize to right-continuous functions and prove Theorem 2.1 via a time-reversal.

Assumption 2.2 Let p, q, a : R → R be deterministic right-continuous functions such that for all s∈
For the following we define the constants Y̲ := p̲ + √(p̲² + q̲) and Ŷ := p̂ + √(p̂² + q̂). Moreover, for all t ∈ R and x ∈ [0, Ŷ] we define Y^{t,x} as the solution of the ODE (2.1). To shorten notation, we often omit the superscripts t and x.

Remark 2.3
Actually it is not necessary for p, q, a to be defined on the complete real axis; being defined on an interval of the form (−∞, K] for some K ∈ R would suffice. Right-continuity is also not necessary. What we actually use is, firstly, in the proof of Proposition 2.10, that p, q, a can be approximated by piecewise constant functions with respect to the ess sup-norm and, secondly, in the proof of Theorem 2.11, that Y^{t,x} is weakly differentiable with respect to its initial value x. However, for simplicity of argument and notation we assume right-continuity.

Lemma 2.4 Let Assumption 2.2 be fulfilled and t ∈ R.
Then for every starting value x ∈ [0, Ŷ] Equation (2.1) has a unique solution Y^{t,x}, which is furthermore bounded by 0 and Ŷ. Proof We define the auxiliary process Ỹ as the unique solution of the Lipschitz ODE (2.2), where T is the truncation operator. For Ỹ_t ≥ 0 we have that Ỹ_s ≥ 0 for all s ∈ [t, ∞), since Ỹ is continuous. By the same argument we also obtain for Ỹ_t ≥ Y̲ that Ỹ_s cannot reach any value below Y̲, and likewise, because Ỹ_t ≤ Ŷ, that Ỹ_s ≤ Ŷ. Thus, the truncation of the quadratic term has no consequence and can be omitted without changing the solution. Hence, Ỹ is also a solution of the untruncated ODE (2.1). Let Z be an arbitrary solution of (2.1). If Z attains Ŷ, then it has a non-positive derivative at that point and hence cannot exceed Ŷ. Similarly, if Z attains zero, then it has a non-negative derivative and hence cannot fall below zero. Consequently, Z is also a solution of the Lipschitz ODE (2.2). However, uniqueness of (2.2) implies Z = Ỹ.
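The invariance argument can be checked numerically for quadratic dynamics of the type used in Sect. 2, Ẏ = −aY² − 2pY − q. With the hypothetical constants a = 1, p = −1, q = −3 the right-hand side equals −(y − 3)(y + 1), so the upper equilibrium is 3 and the interval [0, 3] should be invariant:

```python
def integrate(y0, a, p, q, T=10.0, dt=1e-3):
    """Explicit Euler for y' = -a*y**2 - 2*p*y - q, recording the
    minimum and maximum of the numerical path."""
    y = y0
    path_min = path_max = y
    for _ in range(int(T / dt)):
        y += dt * (-a * y * y - 2 * p * y - q)
        path_min, path_max = min(path_min, y), max(path_max, y)
    return y, path_min, path_max

end0, min0, max0 = integrate(0.0, 1.0, -1.0, -3.0)   # start at the lower bound 0
end3, min3, max3 = integrate(3.0, 1.0, -1.0, -3.0)   # start at the upper bound 3
# both paths stay inside [0, 3] and settle at the upper equilibrium 3
```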
In the following we denote by Y the solution of Equation (2.1).

Remark 2.5
In the proofs of this section we make use of the following hyperbolic identities without explicitly mentioning it: Lemma 2.6 Let Assumption 2.2 be fulfilled, t ∈ R and x ∈ [0, Ŷ]. Furthermore, assume that for some s > t the functions p, q, a are constant on the interval [t, s), i.e. there are p̄, q̄, ā ∈ R such that p_r = p̄, q_r = q̄ and a_r = ā for all r ∈ [t, s). Then

for all r ∈ [t, s]. In particular, Y t,x is monotone on the interval [t, s].
Proof Observe that the dynamics of Y^{t,x} can be reformulated for r ∈ [t, s) as a separable ODE. By the method of separation of variables, the three cases follow. Uniqueness is provided by Lemma 2.4. The remaining monotonicity claim follows from the monotonicity of tanh and coth.
The proofs of the following three lemmas are technical and can be found in the appendix.

Lemma 2.7 Let Assumption 2.2 be fulfilled and
Furthermore, assume that x ∈ [0, Ŷ] and that the functions p, q, a are constant on the interval [t_1, t_2), i.e. there are p̄, q̄, ā ∈ R such that p_r = p̄, q_r = q̄ and a_r = ā for all r ∈ [t_1, t_2). Then, for Y := Y^{t_1,x}, and moreover for Y_t = p̄ + √(p̄² + q̄), we have. Proof See appendix.
Lemma 2.7 gives us the value of the integral in (2.4) when the parameters are constant all the way. Next, we want to find the value of that integral when the process Y goes up and down, ending at the value where it started, which we later call an excursion.

Lemma 2.8 Let Assumption 2.2 be fulfilled. Furthermore, let t_1 ≤ t_2 ≤ t_3 ≤ t_4, x ∈ [0, Ŷ], and p, q, a be constant on [t_1, t_2) and also on [t_3, t_4). Then

Proof See appendix.

Lemma 2.9 Let Assumption 2.2 be fulfilled and assume that on the interval
Proof See appendix.

Proposition 2.10 Let Assumption 2.2 be fulfilled
Then there exist constants δ_1, δ_2 > 0, independent of t_0 and t_1, such that the stated estimate holds. Proof First, we consider the case where p, q, a are piecewise constant. We split the path of Y into excursions (as described in Lemma 2.8) and left-over time intervals which cannot be combined into excursions. Those left-over time intervals have to be such that Y is either monotone decreasing or monotone increasing on each of them. Since 0 ≤ Y ≤ Ŷ (see Lemma 2.4), we get from Lemma 2.9 that the contributions of the left-over monotone intervals in the estimate are bounded by 2Ŷ q̂ Ŷ =: δ_2. Now we set δ_1 as the minimum of the factors that get multiplied with the time increments in Lemma 2.8 and Lemma 2.9. Hence, the result holds uniformly for all piecewise constant functions p, q, a. Since Y depends continuously on a, p and q, for every ε_1 > 0 we can choose piecewise constant approximations ã, p̃, q̃, fulfilling Assumption 2.2 with the same bounds as a, p, q, and generating a process Ỹ such that the approximations are ε_1-close in the ess sup-norm. Hence, for every ε_2 > 0 we can choose ε_1 small enough and obtain the estimate up to an error of ε_2. Thus, the result for piecewise constant functions carries over to all admissible functions a, p and q.

Furthermore, there exists a bounded function V
Moreover, V is the unique process bounded between 0 and Ŷ that solves Equation (2.1).

Remark 2.12
Equation (2.6) means that V can be interpreted as a pullback attractor. More precisely, in the terminology of nonautonomous dynamical systems, the family of singleton sets {V t }, indexed by R, is pullback attracting for the dynamical system associated to (2.1) (see e.g. Definition 3.3 in [10]).

Proof of Theorem 2.11
The first of the two inequalities is given by Proposition 2.10.
For the other one, by introducing the function h(r, x) := −a_r x² − 2p_r x − q_r for (r, x) ∈ [0, ∞) × R and using differentiation in its weak sense, we can write the dynamics of Y^{t,x_0} as Ẏ_s = h(s, Y_s). By standard theory (see, e.g., Theorem 1 in Chapter 2.5 of [16]) it is known that Y^{t,x_0} is also differentiable with respect to its initial value x_0 and that ∂_{x_0}Y^{t,x_0}_s solves a linear differential equation with an explicit exponential solution. Therefore, for some constants δ_1, δ_2 > 0, the corresponding bounds follow by Proposition 2.10. Thus, defining K_1 := e^{2δ_2} and K_2 := 2δ_1 we obtain the claimed inequalities.
It follows that Y^{t,x}_s is a Cauchy sequence for decreasing t and hence converges. Note that the limit does not depend on the initial value x. We denote this limit by V. Furthermore, due to dominated convergence, V inherits the integral equation, which means that V solves Equation (2.1).
Assuming that there is another process U bounded between 0 and Ŷ and solving Equation (2.1), we can find for every ε > 0 and s ∈ R a real t < s such that |V_s − U_s| ≤ ε, which means that V and U are identical. Now we have all necessary tools in order to prove Theorem 2.1.
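The contraction of the derivative with respect to the initial value, which drives the uniqueness argument above, can be checked numerically (again with the hypothetical constant coefficients a = 1, p = −1, q = −3): integrate the ODE together with its variational equation and compare with a finite difference.

```python
def Y_and_dY(t0, t1, x, a, p, q, dt=1e-4):
    """Explicit Euler for y' = -a(s)*y**2 - 2*p(s)*y - q(s) together with
    its variational equation z' = (-2*a(s)*y - 2*p(s))*z, z(t0) = 1, so
    that z approximates the derivative of y(t1) w.r.t. the initial value."""
    s, y, z = t0, x, 1.0
    while s < t1:
        dy = -a(s) * y * y - 2 * p(s) * y - q(s)
        z += dt * (-2 * a(s) * y - 2 * p(s)) * z   # uses y before the update
        y += dt * dy
        s += dt
    return y, z

a = lambda s: 1.0
p = lambda s: -1.0
q = lambda s: -3.0
eps = 1e-6
y1, z = Y_and_dY(0.0, 2.0, 1.0, a, p, q)
y2, _ = Y_and_dY(0.0, 2.0, 1.0 + eps, a, p, q)
fd = (y2 - y1) / eps    # finite-difference approximation of the derivative
# z is small and positive: nearby initial values are contracted together
```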

Verification of the Linear-Quadratic Non-ergodic Control
In this section we first prove a verification result, and then apply it in order to prove Theorem 1.3.
Recall the control problem of Sect. 1 with value function (1.2). To shorten notation we abbreviate the drift and the diffusion coefficient in the state dynamics (1.1) by μ and σ. We show that one can characterize the solution of the control problem in terms of the PDE (3.1). As a terminal condition we impose that there exists η ∈ R such that for all x ∈ R the limsup condition (3.2) holds. Proposition 3.1 a) Let ψ ∈ C^{1,2}([0, ∞) × R) be a function satisfying (3.1). Suppose that there exists η ∈ R such that (3.2) holds true for all x ∈ R. Moreover, suppose that there exists K ∈ [0, ∞) such that for all t ∈ [0, ∞) and x ∈ R we have

(3.3)
and that also the space derivative ∂ x ψ grows at most polynomially in x, uniformly in t.

(3.4)
possesses a unique solution X^{*,x} with sup_{t∈[0,∞)} E[(X^{*,x}_t)²] < ∞; in particular α* is an optimal control.
Proof Let x ∈ R and α ∈ A(x). We write X = X^{x,α} for short in the following. The Itô formula and (3.1) imply (3.6), where M denotes the local martingale part. The assumptions on ψ and on α entail that E[∫_0^T (∂_x ψ(t, X_t)σ(t, X_t))² dt] < ∞, and hence E(M_T) = 0. Therefore, taking expectations on both sides of (3.6) and multiplying with −1/T yields (3.7). Notice (3.8). By assumption (3.2), the first fraction on the RHS of (3.8) vanishes in the limit and, since sup_t E(X_t²) < ∞, so does the second. Thus, from (3.7) we get J̄(x, α) ≥ η. Since α is chosen arbitrarily, we also have inf_{α∈A(x)} J̄(x, α) ≥ η. Now suppose that (3.4) has a unique solution X* = X^{*,x} and that sup_{t∈[0,∞)} E[(X*_t)²] < ∞. Then the control α*_t = a*(t, X*_t), t ≥ 0, belongs to A(x). Notice that the inequalities (3.6) and (3.7) become equalities if we replace X with X*. We thus obtain η = J̄(x, α*). This yields, together with the first part of the proof, the statement (3.5).
A verification result for the liminf cost functional can be shown similarly; one simply needs to replace the limsup in (3.2) by a liminf. Recall that U∞ is the unique non-negative, bounded solution of (0.2) as described in Theorem 2.1.

Lemma 3.2 Let Assumption 1.1 be fulfilled. Then the process φ∞_t, t ∈ [0, ∞), is well defined and bounded uniformly in time.
Proof By Theorem 2.1 we obtain which means that φ ∞ is well defined and bounded.
In the following we use for t ∈ [0, ∞), x ∈ R the definitions

Lemma 3.3 Let Assumption 1.1 be fulfilled. Then there exists an
For the proof of Lemma 3.3 we need the following lemma. Proof That h solves the integral equation is straightforward by weak differentiation. The uniqueness follows since the integral equation is linear in h with bounded coefficients, which makes it a Lipschitz ODE.

Proof of Lemma 3.3 Observe that
Furthermore, using Itô's formula and Jensen's inequality, this implies the corresponding moment bound for all q ∈ (0, 2]. Analogously, for real p ≥ 2 we obtain a corresponding bound. Since δ_1 > 0 and sup_{s∈[0,∞)} C²_s < ∞, we get the bound for every p with 2 ≤ p < 2 + ε. Proof A straightforward calculation yields (3.10). Next, observe that since f is strictly convex in a and the remainder of Equation (3.1) is affine linear in a, the unique minimum is attained where the derivative with respect to a vanishes. Using this, we obtain that a∞ minimizes Equation (3.1). Therefore, plugging the minimizer a∞ into Equation (3.1) and using the result of Equation (3.10), we get the claim. Proof We want to apply Proposition 3.1. Lemma 3.5 already yields that ψ fulfills (3.1) and that a∞ is the corresponding minimizer. Next, observe that the limsup condition (3.2) holds, since U∞ and φ∞ are bounded. Furthermore, for the same reason, ∂_x ψ(t, x) = U∞_t x + φ∞_t is linear in x with a factor that is bounded uniformly in time.
Finally, Lemma 3.3 gives the bounded second moment of the controlled process, which means that all conditions of Proposition 3.1 are fulfilled, yielding the statement.

Conclusion
We have shown that under Assumption 1.1 the problem of minimizing the limsup long-term average cost functional (0.1) is well-posed, and we have described an optimal closed-loop control in terms of the unique bounded and non-negative function U∞. Some further questions arise naturally.
First, is it possible to extend the results to a multi-dimensional setting? Following the same approach, a multi-dimensional Riccati equation on [0, ∞) has to be studied. Notice that some of the comparison arguments of Sect. 2 cannot simply be transferred to a multidimensional setting.
If the drift and diffusion coefficients in the linear state dynamics and the coefficient of the quadratic cost function f are themselves stochastic, then it is natural to assume that the solution of the control problem can be described in terms of a stochastic Riccati equation on [0, ∞). Is it possible to prove existence and uniqueness of a solution and to obtain an optimal control with it?
Finally, we believe that one can generalize the results of Theorem 2.11 to the more general setting where the derivative of Y is a strictly concave function having a strictly negative and a strictly positive zero. Also the starting value of Y can be generalized to be greater than any negative zero of the derivative of Y^{t,x}. A proof of this claim, using abstract arguments instead of the tedious calculations presented in Sect. 2, is left for future research.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.