STRONG FELLER PROPERTY FOR SDES DRIVEN BY MULTIPLICATIVE CYLINDRICAL STABLE NOISE

Abstract. We consider the stochastic differential equation dX_t = A(X_{t−}) dZ_t, X_0 = x, driven by a cylindrical α-stable process Z_t in R^d, where α ∈ (0, 1) and d ≥ 2. We assume that the determinant of A(x) = (a_{ij}(x)) is bounded away from zero and that the coefficients a_{ij}(x) are bounded and Lipschitz continuous. We show that for any fixed γ ∈ (0, α) the semigroup P_t of the process X_t satisfies |P_t f(x) − P_t f(y)| ≤ c t^{−γ/α} |x − y|^γ ||f||_∞ for every bounded Borel function f. Our approach is based on Levi's method.

It is well known that the SDE (1) has a unique strong solution X_t, see e.g. [28, Theorem 34.7 and Corollary 35.3]. By [31, Corollary 3.3], X_t is a Feller process.
Let E^x denote the expectation for the process X starting from x and let B_b(R^d) denote the set of all bounded Borel functions f : R^d → R. The main result of this paper is the following theorem, which gives the strong Feller property of the semigroup P_t.

Theorem 1.1. For any γ ∈ (0, α), τ > 0, t ∈ (0, τ], x, y ∈ R^d and f ∈ B_b(R^d) we have |P_t f(x) − P_t f(y)| ≤ c t^{−γ/α} |x − y|^γ ||f||_∞, where c depends on τ, α, d, η_1, η_2, η_3, γ.

The strong Feller property for SDEs driven by additive cylindrical Lévy processes has been intensively studied recently (see e.g. [30, 36, 11]). The SDE (1) (with multiplicative noise) was studied by Bass and Chen in [1]. They proved existence and uniqueness of weak solutions of the SDE (1) under very mild assumptions on the matrices A(x) (namely, that A(x) is continuous and bounded in x and nondegenerate for each x). In [24] the SDE (1) was studied for diagonal matrices A(x) whose diagonal coefficients are bounded away from zero and infinity and Hölder continuous. Under these assumptions the corresponding transition density p_A(t, x, y) was constructed and Hölder estimates of x → p_A(t, x, y) were obtained. These estimates imply the strong Feller property of the corresponding semigroup.
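For readers who want to experiment numerically, the dynamics of (1) can be simulated by an explicit Euler scheme: over a step of length dt, each coordinate of the cylindrical process Z receives an independent symmetric α-stable increment of scale dt^{1/α}. The sketch below is purely illustrative and not part of the paper's analysis; the coefficient matrix A and all function names are our own choices, picked so as to satisfy the paper's standing assumptions (bounded Lipschitz entries, determinant bounded away from zero).

```python
import numpy as np

def symmetric_stable(alpha, size, rng):
    """Standard symmetric alpha-stable samples via the
    Chambers-Mallows-Stuck representation."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * V) / np.cos(V) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * V) / W) ** ((1.0 - alpha) / alpha))

def euler_path(x0, A, T, n_steps, alpha, rng):
    """Euler scheme for dX_t = A(X_{t-}) dZ_t, where Z is cylindrical:
    each coordinate of Z gets an independent 1-d alpha-stable increment,
    scaled by dt**(1/alpha) (self-similarity of the stable law)."""
    dt = T / n_steps
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        dZ = dt ** (1.0 / alpha) * symmetric_stable(alpha, x.size, rng)
        x = x + A(x) @ dZ
    return x

# toy coefficient matrix (our choice): bounded, Lipschitz entries,
# det A(x) = 1 - 0.01 sin(x_1) cos(x_2), so det is bounded away from zero
def A(x):
    return np.eye(2) + 0.1 * np.array([[0.0, np.sin(x[0])],
                                       [np.cos(x[1]), 0.0]])

rng = np.random.default_rng(1)
x_T = euler_path([0.0, 0.0], A, T=1.0, n_steps=200, alpha=0.7, rng=rng)
```

Since α < 1, the increments are very heavy-tailed, so occasional extremely large jumps of the simulated path are expected and are a feature of the model, not a bug of the scheme.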
The case of non-diagonal matrices A(x), treated in this paper, is much more difficult. The strong Feller property for semigroups generated by solutions of SDEs is often obtained via suitable versions of the Bismut-Elworthy-Li formula. We were not able to obtain such a formula; instead we use Levi's method to construct the semigroup P_t and to prove Theorem 1.1. However, many problems arise in applying this method in the case of non-diagonal matrices A(x), and we had to introduce some new ideas. Below we briefly describe the main steps of our approach.
The first problem with Levi's method in our case is that the standard approximation of the transition density (the so-called "frozen density") does not have good integrability properties. To overcome this we truncate the Lévy measure of the process Z_t in a convenient way. Then, using Levi's method, we construct the transition density (denoted by u(t, x, y)) of the solution of (1) driven by this truncated process. As usual, we represent u(t, x, y) as a series \sum_{n=0}^{\infty} q_n(t, x, y). Typically, in papers using Levi's method, the first step is to obtain precise bounds for q_0(t, x, y) which allow one to estimate q_n(t, x, y) inductively, pointwise. In our case it seems impossible to obtain such precise bounds, hence we prove (see Proposition 3.9) a different kind of result for q_0(t, x, y), which is sufficient for our purposes. The main tools in the proof of Proposition 3.9 are Lemma 3.6 and the estimates (13). These key estimates (13) are proved using techniques and results from [23], [22] and [33]. After constructing the transition density u(t, x, y) we use the technique developed by Knopova and Kulik [19] to show that u(t, x, y) satisfies the appropriate heat equation in the so-called approximate setting. In the next step we construct the semigroup T_t for the solution of the SDE (1) (driven by the non-truncated process). Roughly speaking, this construction is based on adding long jumps to the truncated process. Next we show that u(t, x) := T_t f(x) satisfies the appropriate heat equation in the approximate setting (see Lemma 4.18), which allows us to prove that the constructed semigroup T_t is in fact the semigroup P_t.
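In symbols, the Levi (parametrix) construction sketched above takes the following standard form; this display is our reconstruction of the standard scheme (consistent with the recursion for q_n used in Section 3), not a quotation from the paper. Here L denotes the generator of the truncated process and L^y its frozen version, with frozen density p_y:

```latex
% standard parametrix (Levi) scheme, reconstructed:
u(t,x,y) = p_y(t,\,x-y) + \int_0^t\!\!\int_{\mathbb{R}^d} p_z(t-s,\,x-z)\,q(s,z,y)\,dz\,ds,
\qquad q=\sum_{n=0}^{\infty} q_n,
```

```latex
q_0(t,x,y) = \bigl(L - L^y\bigr)\,p_y(t,\cdot-y)(x),
\qquad
q_{n+1}(t,x,y) = \int_0^t\!\!\int_{\mathbb{R}^d} q_0(t-s,x,z)\,q_n(s,z,y)\,dz\,ds .
```

The inductive bounds for q_n mentioned above are exactly what makes the series for q, and hence the formula for u, converge.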
Our current technique is restricted to the case α ∈ (0, 1). The main difficulty for α ∈ [1, 2) is that in that case one has to effectively estimate the expression p_y(t, x + a_i(x)w) + p_y(t, x − a_i(x)w) − 2 p_y(t, x) instead of p_y(t, x + a_i(x)w) − p_y(t, x), where p_y(t, x) is the frozen density for the truncated process (see Section 3 for the precise definition of p_y(t, x)) and a_i(x) = (a_{1i}(x), . . . , a_{di}(x)). Our crucial estimate (13) allows a suitable estimate of (8) but fails to bound (7) in a way sufficient for our purposes.
It is worth mentioning that the strong Feller property and gradient estimates for semigroups associated with SDEs driven by Lévy processes in R^d with jumps, with absolutely continuous Lévy measures, have been studied for many years (see e.g. [34, 32, 25, 39, 35, 37]).
One may ask about further regularity properties of the semigroup P_t, in particular about the boundedness of the operators P_t : L^1(R^d) → L^∞(R^d), which is related to the boundedness of the transition densities of P_t. It turns out that for some choices of matrices A(x) (satisfying (2), (3), (4)) and for some t > 0 the operators P_t : L^1(R^d) → L^∞(R^d) are not bounded (see Remark 4.23 and Remark 4.24). Nevertheless, we have the following regularity result.
The paper is organized as follows. In Section 2 we study properties of the transition density of a suitably truncated one-dimensional stable process. These properties are crucial in the sequel. In Section 3 we construct the transition density u(t, x, y) of the solution of (1) driven by the truncated process. We also show that it satisfies the appropriate equation in the approximate setting. In Section 4 we construct the transition semigroup of the solution of (1). We also prove Theorems 1.1 and 1.2.

Preliminaries
All constants appearing in this paper are positive and finite. Throughout the paper we fix τ > 0, α ∈ (0, 1), d ∈ N, d ≥ 2, and η_1, η_2, η_3, where η_1, η_2, η_3 appear in (2), (3) and (4). We adopt the convention that constants denoted by c (or c_1, c_2, . . .) may change their value from one use to the next. Throughout the paper, unless explicitly stated otherwise, we understand that constants denoted by c (or c_1, c_2, . . .) depend on τ, α, d, η_1, η_2, η_3; they may also depend on the choice of the constants ε and γ. The standard inner product of x, y ∈ R^d is denoted by xy.
For any t > 0, x ∈ R^d we define the measure σ_t(x, ·) by setting, for any Borel set A ⊂ R^d, P^x denotes the distribution of the process X starting from x ∈ R^d. For any t > 0, x ∈ R^d we have It is well known that the density of the Lévy measure of the one-dimensional symmetric standard α-stable process is given by A_α|x|^{−1−α}, where A_α = 2^α Γ((1 + α)/2)/(π^{1/2} |Γ(−α/2)|). In the sequel we will need to truncate this density. The truncated density will be denoted by μ^{(δ)}, and for x ≥ 2δ we put μ^{(δ)}(x) = 0. Moreover, μ^{(δ)} is defined so that it is weakly decreasing, weakly convex and C^1 on (0, ∞) and satisfies We also define It is well known that g_t^{(δ)} belongs to C^1((0, ∞)) as a function of t and to C^2(R) as a function of x.
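The exact formula for μ^{(δ)} is given in the paper; as a hedged illustration, any density with the properties listed above has the shape

```latex
\mu^{(\delta)}(x) =
\begin{cases}
A_\alpha\, x^{-1-\alpha}, & 0 < x \le \delta,\\[2pt]
\text{a weakly decreasing, weakly convex } C^1 \text{ interpolation}, & \delta < x < 2\delta,\\[2pt]
0, & x \ge 2\delta .
\end{cases}
```

The cutoff point δ for the pure power part is our assumption; only the vanishing beyond 2δ and the monotonicity, convexity and C^1 smoothness on (0, ∞) are stated in the text. Note that C^1 smoothness forces both μ^{(δ)} and its derivative to vanish at 2δ.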

Similar calculations show that
In the sequel we will need a version of the inverse map theorem for a Lipschitz function f : R^n → R^n, n ∈ N. The corresponding theorem is the main result of [9]; however, it is not formulated in a way suitable for our purposes. Below, closely following the arguments from [9], we provide the version we need.
It is well known that the Jacobian matrix J_f(y) of f exists for almost every y. For any y_0 ∈ R^n we define (see Definition 1 in [9]) the generalized Jacobian, denoted ∂f(y_0), as the convex hull of the set of matrices which can be obtained as limits of J_f(y_n) as y_n → y_0.
We denote by B(x, r) the open ball with center x ∈ R^n and radius r > 0. For any matrix M we denote by ||M||_∞ the maximum of the absolute values of its entries.
Lemma 2.7. Let f : R^n → R^n be a Lipschitz map and x ∈ R^n. Suppose that for any y ∈ R^n the generalized Jacobian ∂f(y) consists of matrices which can be represented as M(x) + R, where the matrices M(x), R satisfy the following conditions: there are positive β and η such that ||R||_∞ ≤ η|x − y| and |vM(x)^T| ≥ 2β for every v ∈ R^n, |v| = 1. Then f is injective on B(x, β/(nη)) and we have B(f(x), β^2/(2nη)) ⊂ f(B(x, β/(nη))).

Proof.
Let v be an arbitrary unit vector in R^n. Let M ∈ ∂f(y) and let z = vM(x)^T. Since M^T = M(x)^T + R^T, the scalar product of z and w = vM^T = z + vR^T can be estimated as follows Next, taking w* = z/|z|, we have for |x − y| ≤ β/(nη), Using this fact we can apply Lemma 3 and Lemma 4 of [9] to conclude that which shows that f is injective on the ball B(x, β/(nη)). Next, by similar arguments, we show that which proves that all matrices from the set ∂f(y) have full rank whenever |y − x| ≤ β/(nη).

Construction and properties of the transition density of the solution of (1) driven by the truncated process
The approach in this section is based on Levi's method (cf. [27, 12, 26]). This method was applied in the framework of pseudodifferential operators by Kochubei [20] to construct a fundamental solution of the related Cauchy problem as well as a transition density for the corresponding Markov process. In recent years it has been used in several papers to study transition densities of Lévy-type processes, see e.g. [7, 17, 8, 15, 13, 5, 18, 19, 21]. Levi's method was also used to study gradient and Schrödinger perturbations of fractional Laplacians, see e.g. [4, 6, 38].
We first introduce the generator of the process X_t. We define Kf(x) by the following formula for any Borel function f : R^d → R and any x ∈ R^d such that all the limits on the right-hand side exist. Recall that f(X_t) − f(X_0) − ∫_0^t Kf(X_s) ds is a martingale. Let us fix ε ∈ (0, 1] (it will be chosen later). Recall that for given ε the constant δ is chosen according to Lemma 2.1. For such fixed ε, δ we abbreviate where for any Borel function f : R^d → R and any x ∈ R^d such that all the limits on the right-hand side exist. Our first aim is to construct the heat kernel u(t, x, y) corresponding to the operator L. This will be done by using Levi's method. For each z ∈ R^d we introduce the "freezing" operator L^z. Its coefficients satisfy (2) and (4) with possibly different constants η*_1 and η*_3, but taking maxima we may assume that η*_1 = η_1 and η*_3 = η_3. For any y ∈ R^d, i = 1, . . . , d we put We also denote B_∞ = max{|b_{ij}| : i, j ∈ {1, . . . , d}}. For any t > 0, x, y ∈ R^d we define It may easily be checked that for each fixed y ∈ R^d the function p_y(t, x) is the heat kernel of L^y, that is, For any t > 0, x, y ∈ R^d we also define and In this section we will show that q_n(t, x, y), q(t, x, y), u(t, x, y) are well defined and we will obtain estimates of these functions. First, we derive some simple properties of p_y(t, x) and r_y(t, x).
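For orientation, with the columns a_i(x) = (a_{1i}(x), . . . , a_{di}(x)) of A(x) introduced earlier, the generator of a solution of (1) driven by cylindrical α-stable noise has the following standard form. This display is our reconstruction, consistent with the increments f(x + a_i(x)w) appearing later in the paper; since α < 1, no compensating drift term is needed:

```latex
K f(x) \;=\; A_\alpha \sum_{i=1}^{d}\; \lim_{\epsilon \to 0^+} \int_{\epsilon<|w|}
\bigl( f(x + a_i(x)\,w) - f(x) \bigr)\, \frac{dw}{|w|^{1+\alpha}} .
```

Presumably the operator L corresponds to replacing the density A_α|w|^{−1−α} with its truncated version; the limit in ε reflects the requirement above that all the limits on the right-hand side exist.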
Proof. Of course, we may assume that γ = 1. We have Then the assertion follows easily from Lemma 2.3.
Using the definition of p y (t, x) and properties of g t (x) we obtain the following regularity properties of p y (t, x).
Proof. The estimates follow from Lemma 2.5 and the same arguments as in the proof of (22).
There is a positive ε_0 = ε_0(η_1, η_3, η_4, η_5, d) ≤ 1/(2η_5) such that the map Ψ_x and its Jacobian determinant, denoted by J_{Ψ_x}(w, y), have the property then the Jacobian of Φ_y, denoted by J_{Φ_y}(w, x), has the property Proof. In the proof we assume that the constants c may additionally depend on η_4, η_5. We prove the statement for the map Ψ_x only. Since Next, we observe that, for almost every (w, y), where 1/2 ≤ κ ≤ 2. Let J_{Ψ_x}(w, y) be the Jacobian matrix of the map Ψ_x, which is defined for almost every (w, y). Let ∂Ψ_x(w, y) denote the generalized Jacobian of Ψ_x at the point (w, y). Then from the form of J_{Ψ_x} it is clear that every matrix M ∈ ∂Ψ_x(w, y) can be written as Since ||R||_∞ ≤ c(|w|^2 + |x − y|^2)^{1/2}, we can apply Lemma 2.7 with n = d + 1 to show that on the set {(w, y) : (|w|^2 + |x − y|^2)^{1/2} ≤ β/(c(d + 1))} the map Ψ_x is injective. This fact, combined with (27) and (28), completes the proof. Using the same arguments as in the above proof we can find ε_0 such that all the assertions of Lemma 3.6 hold and, additionally, for |x − y| ≤ ε_0, for almost every y. Moreover, the map Ψ̃_x is injective on B(x, ε_0). We can also find To this end we apply the last assertion of Lemma 2.7.
Let b*_i(x, y) be the functions introduced in Lemma 3.6. We will use the following abbreviations

Corollary 3.8. Assume that 2δ < ε_0, where ε_0 is from Lemma 3.6. Under the assumptions of Lemma 3.6 we have, for t ≤ τ,

Proof. In the proof we assume that the constants c may additionally depend on η_4, η_5.

This implies that
From this we infer that Due to Lemma 3.6, almost surely on Q x , the absolute value of the Jacobian determinant of the map Ψ x is bounded from below and above by two positive constants and Ψ x is an injective transformation.
Observing that the support of the measure μ is contained in [−ε_0, ε_0] and then applying the above change of variables, we have where the last equality follows from the general change of variable formula for injective Lipschitz maps (see e.g. [14, Theorem 3]). Since |ξ| ≤ 1 for (w, ξ) ∈ V_x, we get Applying Lemma 2.6 we have Finally, Similarly we obtain which completes the proof of the first bound. To estimate ∫_{|y−x|≤ε_0} A_l dx we proceed in exactly the same way.
Recall that once ε is fixed, δ is chosen according to Lemma 2.1.
In particular for x, y ∈ R d , t ∈ (0, τ ], |y − x| ≥ ε 0 we have For any t ∈ (0, τ ], y ∈ R d we have Proof. We have For i = 1, . . . , d we put We have q 0 (t, x, y) = R 1 + . . . + R d . It is clear that it is enough to handle R 1 alone. Note that We will use the following abbreviations Note that k 10 = 1 and k i0 = 0, 2 ≤ i ≤ d.

The above inequalities yield
Since R_1 is invariant with respect to permutations of z_2, . . . , z_d we infer that On the other hand, again by Lemma 2.4, For l ≥ 2, a similar argument leads to Observing that |z_1 + w| ≥ |z_1|/2 for |w| ≤ 2δ and |z_1| ≥ 4δ, we conclude that Combining (34) and (35) we arrive at By Lemma 3.2, max_{1≤i≤d} |z_i| ≥ |x − y|/(η_1 d^{3/2}), and by the choice of ε, ε_0, δ we have Finally, for |y − x| ≥ ε_0 we have Next, we prove the bound on the integral (30). Let x ∈ R^d be fixed. Applying the above, for |y − x| ≥ ε_0 we have |q_0(t, x, y)| ≤ c e^{−(ε/ε_0)|x−y|}, which implies that This completes the proof of (30). The estimate (31) is proved in exactly the same way.
Using similar arguments as in the proof of Proposition 3.9 we obtain the following result.
For any δ 1 > 0, We have lim uniformly with respect to x ∈ R d .
Note that the coordinates of the matrix B(y) have partial derivatives for almost every y, bounded uniformly. For almost every y we can compute the absolute value of the Jacobian determinant J_{Ψ̃_x}(y) as J_{Ψ̃_x}(y) = det B(y) + R(x, y), with |R(x, y)| ≤ c|y − x|. Next, Applying (41), (40) and the change of variable formula we obtain Now we can pick, independently of x, positive δ_1 and δ_2 such that B(0, δ_2) ⊂ V_x(δ_1) (see Remark 3.7). Applying the change of variable formula again we obtain This completes the proof that, uniformly with respect to x, lim_{t→0^+} ∫_{|x−y|≤δ_1} p_y(t, x − y) dy = 1, which combined with (38) proves (39).
In the sequel we will use the following standard estimate. For any γ ∈ (0, 1], θ_0 > 0 there exists c = c(γ, θ_0) such that for any θ ≥ θ_0, t > 0 we have

Lemma 3.11. For any t > 0, x ∈ R^d and n ∈ N the kernel q_n(t, x, y) is well defined. For any t ∈ (0, τ], x ∈ R^d and n ∈ N we have For any t ∈ (0, τ], x, y ∈ R^d and n ∈ N we have |q_n(t, x, y)| ≤ c_1 c_2^n t^{n/2−1} (n!)^{−1/2} t^{−d/α}.
Proof. By Proposition 3.9 there is a constant c* such that for any x, y ∈ R^d, t ∈ (0, τ] we have It follows from (42) that there is p ≥ 1 such that for n ∈ N, We define c_1 = pc* ≥ c* and c_2 = 2^{d/α+1} c_1 (2 + p) > c_1.
We will prove (43), (44), (45) simultaneously by induction. They hold for n = 0 by (47), (49), (50) and the choice of c_1. Assume that (43), (44), (45) hold for some n ∈ N; we will show them for n + 1. By the definition of q_n(t, x, y) and the induction hypothesis we obtain Hence we get (45) for n + 1. In particular, this shows that the kernel q_{n+1}(t, x, y) is well defined.
By the definition of q_n(t, x, y), (49) and the induction hypothesis we obtain which proves (43) for n + 1. Similarly we get (44). Now we will show (46). For n = 0 this follows from (48). Assume that (46) holds for some n ∈ N; we will show it for n + 1.
Using our induction hypothesis, (43) and (44), we get, for |x − y| ≥ n + 2, ∫_0^t ∫_{R^d} q_0(t − s, x, z) q_n(s, z, y) dz ds By standard estimates one easily gets where C_1 depends on C.
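To see why the factorial factor in the bound of Lemma 3.11 matters, note that (assuming the reconstructed form |q_n(t, x, y)| ≤ c_1 c_2^n t^{n/2−1} (n!)^{−1/2} t^{−d/α} of that bound) the series defining q converges absolutely:

```latex
\sum_{n=0}^{\infty} |q_n(t,x,y)|
\;\le\; \frac{c_1}{t^{\,1+d/\alpha}} \sum_{n=0}^{\infty} \frac{\bigl(c_2\, t^{1/2}\bigr)^{n}}{(n!)^{1/2}}
\;<\;\infty ,
```

since the ratio of consecutive terms, c_2 t^{1/2}/(n + 1)^{1/2}, tends to 0 as n → ∞; the decay of (n!)^{−1/2} beats any geometric growth, uniformly for t ∈ (0, τ].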
By (21), Corollary 3.3, Proposition 3.10 and Proposition 3.12 we immediately obtain the following result.
Proof. We estimate the term for i = 1. By Lemma 3.1 for γ = 1 we get for w ∈ R Recall that if |w| ≥ 2δ then µ(w) = 0. So we may assume that |w| ≤ 2δ. By Corollary 3.3 we get

Now (58) and (59) follow by the fact that
Proof. The lemma follows easily from Proposition 3.10.
For any t > 0, x, y ∈ R^d we define Clearly we have Now, following ideas from [19], we will define the so-called approximate solutions. For any t ≥ 0, ξ ∈ [0, 1], t + ξ > 0, x, y ∈ R^d we define By the same arguments as in Corollary 3.13 we obtain the following result.
(iii) This follows easily from (ii) and Corollary 3.16.
(iv) Fix f ∈ C_0(R^d). By Lemma 3.4, Corollary 3.3 and the dominated convergence theorem we obtain that Using Lemma 3.4, Corollary 3.3, Proposition 3.12 and the dominated convergence theorem we obtain that for any s ∈ (0, τ) is continuous on [0, 1] × (s, τ] × R^d. Using this, Corollary 3.3, Proposition 3.12 and the dominated convergence theorem we obtain that This and (iii) imply (iv).
By the same arguments as in the proof of Lemma 3.19 (iv) we obtain the following result.

Construction and properties of the semigroup of X t
Let us introduce the following notation. Note that by (19), for any x ∈ R^d and f ∈ B_b(R^d), we have We denote, for any x ∈ R^d and f ∈ B_b(R^d), It is clear that Proof. For any t ∈ (0, τ], x ∈ R^d, by Corollary 3.13 we get Using the same arguments as in Lemma 4.1, for any t ∈ (0, τ], x ∈ R^d, n ∈ N, n ≥ 1, one gets |Ψ_{n,t} f(x)| ≤ c^n t^{n−1} ‖f‖_∞^{1−γ} ‖f‖_1^γ /(n − 1)!, which implies the assertion of the theorem.
Proof. We have Let n ∈ N, n ≥ 1. By (79) we get Hence for s ∈ (0, τ ], by (77), we arrive at Using this and Lemma 3.18 we get Combining this with (85) we obtain This implies (84), which finishes the proof.
Using this and (89) we get the assertion of the lemma.
By Lemma 4.10 and Theorem 4.3 one easily obtains the following result. For any ζ > 0 we have By the dominated convergence theorem and (69) we obtain This and (97) give the assertion of the lemma.
Let t ∈ (0, τ], n ∈ N, i, j ∈ {1, . . . , d}. The fact that ∂/∂x_i Ψ^{(ξ)}_{n,t} f(x) is well defined and continuous as a function of x ∈ R^d follows from (96), Lemmas 3.4 and 3.5, Proposition 3.12 and Lemma 3.19. By the above arguments we also get Using this, Lemma 3.19 and Lemma 4.1 we arrive at (99). By similar arguments we obtain that ∂^2/(∂x_i ∂x_j) Ψ^{(ξ)}_{n,t} f(x) is well defined and continuous as a function of x ∈ R^d and Using this and Lemma 4.1 we get (100). For any ζ > 0 we have Proof. The lemma follows from Lemma 3.22, Proposition 3.10, Lemma 4.10, Lemma 4.1, (75), (76) and the boundedness of N. The next result (positive maximum principle) is based on ideas from [19, Section 4.2]. Its proof is very similar to the proof of [19, Lemma 4.3] and is omitted.
Assume that for each ξ ∈ (0, 1] we have sup_{t∈(0,τ], x∈R^d} |v^{(ξ)}(x, t)| < ∞, and that v^{(ξ)} is C^1 in the first variable and C^2 in the second variable. We also assume that (for any τ > 0) The fact that T_t is a linear, bounded operator follows from the definition of T_t and Lemma 4.1. Fix f ∈ B_b(R^d), R ≥ 1 and k ∈ N, k ≥ 1. By Lemma 4.10 there exists R_k ≥ R such that for any x ∈ B(0, R) we have Put g_{1,k}(x) = 1_{B(0,R_k)}(x) f(x), g_{2,k}(x) = 1_{B^c(0,R_k)}(x) f(x). By standard methods there exists f_k ∈ C_0(R^d) such that Let f ∈ C_0^2(R^d) be such that f ≡ 1 on B(0, 1) ⊂ R^d and let f_n(x) = f(x/n), x ∈ R^d, n ∈ N, n ≥ 1. For any x ∈ R^d we have lim_{n→∞} f_n(x) = 1, lim_{n→∞} Kf_n(x) = 0 and sup_{n∈N, n≥1} (‖f_n‖_∞ ∨ ‖Kf_n‖_∞) < ∞. By Corollary 4.11, for any s, t > 0 and x ∈ R^d, we get Using (106) for f_n and (107) we obtain (iii). (iv) We also have and By Proposition 3.10 there exists τ_1 ∈ (0, τ] such that By Proposition 3.12 we obtain that This implies (v).
We are now in a position to provide the proofs of Theorems 1.1 and 1.2.

By Proposition 4.21, for any function f ∈ C_b^2(R^d), the process f(X̃_t) − f(X̃_0) − ∫_0^t Kf(X̃_s) ds is a (P̃^x, F_t) martingale, where F_t is the natural filtration. That is, P̃^x solves the martingale problem for (K, C_b^2(R^d)). On the other hand, according to [1, Theorem 6.3], the unique solution X of the stochastic equation (1) has the law which is the unique solution to the martingale problem for (K, C_b^2(R^d)). Hence X̃ and X have the same law, so for any t > 0, x ∈ R^d and any bounded Borel set A ⊂ R^d we have where σ_t(x, A) is defined by (9). Using this, (10) and (108) we obtain the assertion.

Proof. First we define A(x_1, x_2). Let κ : [0, ∞) → [0, ∞) be defined by κ(r) = 0 for r ∈ [0, 1], κ(r) = r − 1 for r ∈ (1, 1 + π/4], and κ(r) = π/4 for r > 1 + π/4. It is easy to check that κ(r) = ((r − 1) ∨ 0) ∧ (π/4) and that κ is a Lipschitz function. Now let us introduce standard polar coordinates (r, φ), r ∈ [0, ∞), φ ∈ [0, 2π), by x_1 = r cos φ, x_2 = r sin φ.
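The cutoff function κ used in this construction is elementary and easy to sanity-check numerically; the snippet below (function name ours) verifies the closed form ((r − 1) ∨ 0) ∧ (π/4) and the Lipschitz bound on a grid.

```python
import numpy as np

def kappa(r):
    # kappa(r) = ((r - 1) v 0) ^ (pi/4): zero on [0, 1],
    # a slope-1 ramp on (1, 1 + pi/4], then constant pi/4
    return np.minimum(np.maximum(r - 1.0, 0.0), np.pi / 4)

r = np.linspace(0.0, 3.0, 3001)
k = kappa(r)
# numerical Lipschitz constant on the grid (should be at most 1)
lip = np.max(np.abs(np.diff(k)) / np.diff(r))
```

Since κ is piecewise linear with slopes 0 and 1, its Lipschitz constant is exactly 1, which is what makes the resulting matrix A(x_1, x_2) Lipschitz as required by (3).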
For any f ∈ B_b(R^d), t > 0, x ∈ R^d we have P_t f(x) = T_t f(x) = e^{−λt} \sum_{n=0}^{\infty} Ψ_{n,t} f(x). For our purposes it is enough to study Ψ_{1,t}. We have u(s, z + a_i(z)w, y) f(y) dy ν(w) dw dz ds. (110) By arguments similar to the proof of Theorem 4.22 one can show that for any t > 0, x ∈ R^d and almost all y ∈ R^d we have u(t, x, y) ≥ 0, and for any s, t > 0, x ∈ R^d, f ∈ B_b(R^d) we have U_{t+s} f(x) = U_t(U_s f)(x) (we omit the details here). Put u(s, z + a_i(z)w, y) ν(w) dw dz ds. (111) By (110) we have x, y) f(y) dy.
By Lemma 3.23 and the semigroup property of U_t one can easily show that there exists t_2 > 1 such that for any t ∈ [t_2 − 1, t_2] we have ∫_D u(t, 0, z) dz ≥ c.
By (109) we have T_t = P_t. By Theorem 1.1, x → P_t 1_{B(0,r)}(x) is continuous, so ‖P_t 1_{B(0,r)}‖_∞ ≥ P_t 1_{B(0,r)}(0). Using this and (116) for small r, together with (109), we infer that transition densities p(t, x, y) for X_t exist. We point out that the existence of transition densities is already well known, see [10]. In the above example (in R^2) we showed that the transition density p(t, 0, y), for some t > 0, is an unbounded function. In fact, the following estimate holds for almost every y: p(t, 0, y) ≥ c|y|^{α−1} for |y| ≤ ε_1, where c, ε_1 are some positive constants possibly depending on t. Hence we cannot expect a general result saying that, under our assumptions, the standard estimate p(t, x, y) ≤ C t^{−d/α} holds, as it does for example in the case of diagonal matrices [24], or of matrices satisfying some further regularity assumptions [29]. On the other hand, the assumption α < 1 plays an important role (in R^2), since for α > 1, by the results of [5], the transition density is bounded.