Functional inequalities for two-level concentration

Probability measures satisfying a Poincaré inequality are known to enjoy a dimension-free concentration inequality with exponential rate. A celebrated result of Bobkov and Ledoux shows that a Poincaré inequality automatically implies a modified logarithmic Sobolev inequality. As a consequence, the Poincaré inequality ensures a stronger dimension-free concentration property, known as two-level concentration. We show that a similar phenomenon occurs for the Latała-Oleszkiewicz inequalities, which were devised to uncover dimension-free concentration with rates between exponential and Gaussian. Motivated by the search for counterexamples to related questions, we also develop analytic techniques to study functional inequalities for probability measures on the line with wild potentials.


Introduction
This article is a contribution to the functional approach to concentration inequalities, see, e.g., [18]. We work in the setting of the Euclidean spaces $(\mathbb{R}^d, \langle \cdot, \cdot \rangle, |\cdot|)$, although most of the results extend to more general settings, such as Riemannian manifolds.
First we recall the main functional inequalities which allow one to establish concentration properties. A probability measure µ on R^d satisfies a logarithmic Sobolev inequality if there is a constant C_LS < ∞ such that for all smooth functions f : R^d → R,
\[
\mathrm{Ent}_{\mu}(f^2) \le C_{LS} \int |\nabla f|^2 \, d\mu, \tag{1.1}
\]
where $\mathrm{Ent}_{\mu}(g) = \int g \log g \, d\mu - \int g \, d\mu \, \log \int g \, d\mu$ is the entropy of a nonnegative function g. It is convenient to denote by C_LS(µ) the smallest possible constant C_LS in (1.1). Classically the inequality tensorizes, meaning C_LS(µ^⊗n) = C_LS(µ) for all n, and the standard Gaussian measure satisfies a logarithmic Sobolev inequality. Conversely, measures with a log-Sobolev inequality enjoy a dimension-free concentration inequality with Gaussian rate: for all n ≥ 1 and for all measurable A ⊂ R^{nd} with µ^⊗n(A) ≥ 1/2, it holds for all t > 0 that
\[
\mu^{\otimes n}\big(A + t B_2\big) \ge 1 - e^{-t^2/(K C_{LS})},
\]
where K is a numerical constant and B_2 = B_2^{nd} denotes the Euclidean unit ball (of R^{nd}). This concentration property can also be formulated in terms of deviations of functions, as we will mention later.
The other main property in the field is the Poincaré inequality. A probability measure µ on R^d enjoys a Poincaré inequality if there exists a constant C_P < ∞ such that for all smooth f : R^d → R,
\[
\mathrm{Var}_\mu(f) \le C_P \int |\nabla f|^2 \, d\mu, \tag{1.2}
\]
where $\mathrm{Var}_\mu(f) = \int f^2 \, d\mu - \big(\int f \, d\mu\big)^2$ is the variance of f with respect to µ. Again C_P(µ) denotes the minimal constant for which the inequality holds. The Poincaré inequality tensorizes (C_P(µ^⊗n) = C_P(µ)) and ensures dimension-free concentration properties with exponential rate: for all n, all A ⊂ R^{nd} with µ^⊗n(A) ≥ 1/2, and all t > 0,
\[
\mu^{\otimes n}\big(A + t B_2\big) \ge 1 - e^{-t/(K \sqrt{C_P})}, \tag{1.3}
\]
where K is a numerical constant. The symmetric exponential distribution dν(t) = e^{−|t|} dt/2 on R satisfies a Poincaré inequality, but not the log-Sobolev inequality.
In [22], Talagrand proved a stronger concentration property than the above one: if ν^⊗n(A) ≥ 1/2, then for all t > 0,
\[
\nu^{\otimes n}\big(A + \sqrt{t}\, B_2 + t B_1\big) \ge 1 - e^{-t/C} \tag{1.4}
\]
for some universal constant C, where $B_p = B_p^n = \{x \in \mathbb{R}^n ; \sum_{i=1}^n |x_i|^p \le 1\}$. Talagrand's two-level concentration inequality (1.4) is doubly sharp: for A = (−∞, 0] × R^{n−1} it captures the exponential behaviour of the coordinate marginals of ν^⊗n, while for A = {x; Σᵢ xᵢ ≤ 0}, using that $B_1^n \subset \{x \in \mathbb{R}^n ; \sum_i x_i \le 1\}$ and $B_2^n \subset \{x \in \mathbb{R}^n ; \sum_i x_i \le \sqrt{n}\}$, one gets
\[
\nu^{\otimes n}\Big(\Big\{x ; \sum_i x_i \le \sqrt{tn} + t\Big\}\Big) \ge 1 - e^{-t/C},
\]
which is asymptotically of the right order in t when n → ∞, according to the Central Limit Theorem.

Talagrand's two-level concentration phenomenon was incorporated in the functional approach by Bobkov and Ledoux: they introduced a modified log-Sobolev inequality which implies concentration of the type (1.4). More importantly, they showed that it is implied by the Poincaré inequality (which means that the concentration consequences of that inequality are stronger than (1.3)). More precisely:

Theorem 1.1 (Bobkov-Ledoux [8]). Let µ be a probability measure on R^d which satisfies a Poincaré inequality with constant C_P. Then for any c ∈ (0, C_P^{−1/2}), there exists K(c, C_P) < ∞ such that for all smooth f : R^d → (0, ∞) satisfying pointwise |∇f|/f ≤ c,
\[
\mathrm{Ent}_\mu(f^2) \le K(c, C_P) \int |\nabla f|^2 \, d\mu.
\]

The Central Limit Theorem and an argument of Talagrand [22] roughly imply that if dimension-free concentration in Euclidean spaces occurs, then the rate of concentration cannot be faster than Gaussian, and the measure should be exponentially integrable. In this sense, the Poincaré and log-Sobolev inequalities describe the extreme dimension-free properties. The functional approach to concentration with intermediate rates (between exponential and Gaussian) was developed by Latała and Oleszkiewicz [17].
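The two inclusions invoked above are elementary duality facts: the maximum of $x \mapsto \sum_i x_i$ over $B_1^n$ is 1 (the sup norm of the vector (1, …, 1)), and over $B_2^n$ it is $\sqrt{n}$ (Cauchy-Schwarz). A quick numerical sanity check, sampling points on the boundaries of both balls:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10

# The linear functional x -> sum(x_i) attains at most 1 on the l1 ball B_1^n
# and at most sqrt(n) on the l2 ball B_2^n (by Cauchy-Schwarz). We verify
# this on random points of the unit spheres of both balls.
for _ in range(1000):
    x = rng.standard_normal(n)
    x1 = x / np.sum(np.abs(x))   # a point on the boundary of B_1^n
    x2 = x / np.linalg.norm(x)   # a point on the boundary of B_2^n
    assert np.sum(x1) <= 1 + 1e-12
    assert np.sum(x2) <= np.sqrt(n) + 1e-12

print("inclusions confirmed on random samples")
```

The extreme points are attained at a coordinate vector for $B_1^n$ and at $(1/\sqrt{n}, \ldots, 1/\sqrt{n})$ for $B_2^n$, which is what makes the set $A = \{x; \sum_i x_i \le 0\}$ a sharp test case for (1.4).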
We say that a probability measure µ on R^d satisfies the Latała-Oleszkiewicz inequality with parameter r ∈ [1, 2] if there exists a constant C_{LO(r)} < ∞ such that for every smooth f : R^d → R one has
\[
\sup_{\theta \in (1,2)} \frac{\int f^2 \, d\mu - \big(\int |f|^{\theta} \, d\mu\big)^{2/\theta}}{(2-\theta)^{2(1-1/r)}} \le C_{LO(r)} \int |\nabla f|^2 \, d\mu. \tag{1.5}
\]
Let us stress here that most of the information is encoded in the speed at which (2 − θ)^{2(1−1/r)} vanishes as θ → 2⁻ (by omitting the supremum on the left-hand side of (1.5) and only considering a fixed θ ∈ (1, 2) one gets a significantly weaker inequality). We sometimes omit the dependence on r in the notation when there is no ambiguity about the value of r. For r = 1 the inequality is equivalent to the Poincaré inequality. For r = 2 and the Gaussian measure, such inequalities were first considered by Beckner [7]. Moreover, up to constants, the inequality for r = 2 is equivalent to the log-Sobolev inequality (note in particular that the limit as θ → 2⁻ of the ratio on the left-hand side is the entropy). Latała and Oleszkiewicz proved that the above functional inequality tensorizes and implies dimension-free concentration with rate exp(−t^r): for all n, all A ⊂ R^{nd} with µ^⊗n(A) ≥ 1/2, and all t > 0,
\[
\mu^{\otimes n}\big(A + t B_2\big) \ge 1 - e^{-K (t/\sqrt{C_{LO(r)}})^{r}}
\]
(their proof yields K = 1/3; see also [25] and Section 6 of [3] for an extension to a more general setting). We denote by µ_r the probability measure on the real line with density
\[
d\mu_r(x) = Z_r^{-1} e^{-|x|^r} \, dx, \qquad Z_r = \int_{\mathbb{R}} e^{-|x|^r} \, dx.
\]
For r ∈ (1, 2), Latała and Oleszkiewicz [17] showed that µ_r satisfies the inequality (1.5) with a uniformly bounded constant (in this case d = 1). For these measures, one obtains a dimension-free concentration inequality with a rate corresponding to the tails. Another approach was suggested by Gentil, Guillin, and Miclo [13], which we present now. For r ∈ (1, 2], we say that a probability measure µ on R^d satisfies the modified log-Sobolev inequality with parameter r if there exists a constant C_{mLS(r)} < ∞ such that for every smooth function f : R^d → (0, ∞),
\[
\mathrm{Ent}_\mu(f^2) \le C_{mLS(r)} \int H_{r'}\Big(\frac{|\nabla f|}{f}\Big) f^2 \, d\mu, \tag{1.6}
\]
where $H_{r'}(t) := \max\{t^2, |t|^{r'}\}$ for t ∈ R and r′ ≥ 2 is the dual exponent of r, defined by 1/r + 1/r′ = 1.
This is a natural extension of the modified log-Sobolev inequality of Bobkov and Ledoux, see Theorem 1.1, which appears as the limit case r = 1. Indeed, when r → 1⁺, r′ → ∞, and H_{r′}(t) → t² for |t| ≤ 1 while H_{r′}(t) → ∞ for |t| > 1, which formally recovers the pointwise constraint on |∇f|/f appearing in Theorem 1.1. The modified log-Sobolev inequality tensorizes as follows: if µ satisfies (1.6), then for any positive integer n and every smooth function f : R^{dn} → (0, ∞) one has
\[
\mathrm{Ent}_{\mu^{\otimes n}}(f^2) \le C_{mLS(r)} \sum_{i=1}^n \int H_{r'}\Big(\frac{|\nabla_i f|}{f}\Big) f^2 \, d\mu^{\otimes n},
\]
where ∇ᵢf denotes the partial gradient with respect to the i-th d-tuple of coordinates of R^{dn}. It is proved in [13] that for each r ∈ (1, 2) the measure µ_r satisfies a modified log-Sobolev inequality (1.6) with parameter r. This allows one to recover a two-level concentration inequality of Talagrand [23], extending (1.4): for r ∈ (1, 2), if µ_r^{⊗n}(A) ≥ 1/2, then
\[
\mu_r^{\otimes n}\big(A + \sqrt{t}\, B_2 + t^{1/r} B_r\big) \ge 1 - e^{-t/C}.
\]
In view of Theorem 1.1, it is natural to conjecture that, similarly, the Latała-Oleszkiewicz inequality implies the modified log-Sobolev inequality (and therefore improved two-level concentration), cf. Remark 21 in [6].
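The degeneration of $H_{r'}$ in the limit r → 1⁺ described above can be observed numerically; a small sketch:

```python
import numpy as np

def H(t, rp):
    # H_{r'}(t) = max{t^2, |t|^{r'}}
    return np.maximum(t**2, np.abs(t)**rp)

# As r -> 1+ we have r' -> infinity: inside [-1, 1] the quadratic branch
# always wins, while outside the |t|^{r'} branch blows up, recovering the
# hard constraint |grad f| / f <= c of the Bobkov-Ledoux inequality.
for rp in [4, 10, 100, 300]:
    assert H(0.5, rp) == 0.25       # |t| <= 1: value frozen at t^2
assert H(1.5, 300) > 1e50           # |t| > 1: explodes as r' grows
```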

Main result and organization of the article
We are ready to state our main result. Let us emphasize that it is not restricted to measures on the real line and that the dimension d does not enter into the dependence of constants.
Theorem 2.1. Let r ∈ (1, 2). Let µ be a probability measure on R^d which satisfies the Latała-Oleszkiewicz inequality (1.5) with parameter r and with constant C_{LO(r)}. Then µ satisfies the modified log-Sobolev inequality (1.6) with parameter r and a constant C depending only on C_{LO(r)} and r. More precisely, one can take $C = \alpha(r) \max\{C_{LO(r)}, C_{LO(r)}^{1/r}\}$ for some function α.

The concentration property can also be formulated in terms of functions. As shown in [17], the Latała-Oleszkiewicz inequality (1.5) implies that for any integer n ≥ 1 and every 1-Lipschitz function f : R^{dn} → R one has a dimension-free deviation inequality with rate exp(−t^r). The modified log-Sobolev inequality (1.6) implies, via a modification of Herbst's argument, a stronger deviation inequality. Therefore our main theorem ensures that the Latała-Oleszkiewicz inequality yields such improved concentration. This is the content of the next two corollaries, for which some notation is needed.
For $x = (x_1, \ldots, x_n) \in (\mathbb{R}^d)^n$ and $p \in [1, 2]$, set $\|x\|_{p,2} = \big(\sum_{i=1}^n |x_i|^p\big)^{1/p}$ (here | · | stands for the ℓ₂ norm on R^d; in the notation we suppress the roles of d and n, but they will always be clear from the context).

Corollary 2.2. Let r ∈ (1, 2). Let µ be a probability measure on R^d which satisfies the Latała-Oleszkiewicz inequality (1.5) with parameter r and with constant C_LO. Then there exists a constant C > 0, depending only on C_LO and r, such that for any positive integer n, any smooth f : R^{dn} → R, and all t > 0, the deviation of f above its mean by t is controlled in terms of the pointwise bounds on |∇f| and ‖∇f‖_{r′,2}.

Using standard smoothing arguments one can also obtain a result for not necessarily smooth functions, expressed in terms of their Lipschitz constants.

Corollary 2.3. Let r ∈ (1, 2). Let µ be a probability measure on R^d which satisfies the Latała-Oleszkiewicz inequality (1.5) with parameter r and constant C_LO. Then there exists a constant C > 0, depending only on C_LO and r, such that for any positive integer n the following holds: if f : (R^d)^n → R satisfies $|f(x)-f(y)| \le L_2 |x-y|$ and $|f(x)-f(y)| \le L_{r,2} \|x-y\|_{r,2}$ for all x, y ∈ (R^d)^n, then the corresponding deviation inequality holds for all t > 0.

One can also express concentration in terms of enlargements of sets. Below $B_2^{dn}$ and $B_r^{dn}$ stand for the unit balls in the ℓ₂ and ℓ_r norms on R^{dn}, respectively. Also, let $B_{r,2}$ be the unit ball in the norm $\|\cdot\|_{r,2}$. Observe that for r ∈ (1, 2), $B_r^{dn} \subset B_{r,2} \subset B_2^{dn}$.

Corollary 2.4. Let µ be a probability measure on R^d which satisfies the Latała-Oleszkiewicz inequality (1.5) with parameter r and constant C_LO. Then there exists a constant K > 0, depending only on C_LO and r, such that for any positive integer n, any set A ⊂ R^{dn} with µ^⊗n(A) ≥ 1/2, and any t > 0,
\[
\mu^{\otimes n}\big(A + \sqrt{t}\, B_2^{dn} + t^{1/r} B_{r,2}\big) \ge 1 - e^{-Kt}. \tag{2.1}
\]
In particular,
\[
\mu^{\otimes n}\big(A + \sqrt{t}\, B_2^{dn} + t^{1/r} B_r^{dn}\big) \ge 1 - e^{-Kt}. \tag{2.2}
\]
One can take $K = \frac{1}{32}\min\{1/C, 1/C^{r-1}\}$, where C is taken from Theorem 2.1.

This corollary should be compared with the results of Gozlan [14]. He proved that if a probability measure µ on R^d satisfies the Latała-Oleszkiewicz inequality, then it satisfies a Poincaré-type inequality involving a non-standard length of the gradient (see Corollary 5.17 in [14]), which in turn implies a slightly different type of two-level concentration (see Proposition 2.4 and Proposition 1.2 in [14]).
However, unlike in the above two corollaries, the constants which appear in his formulations do depend on the dimension d of the underlying space (even though they do not depend on n). Namely, if we write $x_i = (x_i^1, \ldots, x_i^d) \in \mathbb{R}^d$ for i = 1, …, n, then [14] yields a constant K > 0 (depending only on C_LO and r) such that for any positive integer n and any set A ⊂ R^{dn} with µ^⊗n(A) ≥ 1/2, a two-level enlargement inequality holds in which factors of d enter (the d in the denominator on the left-hand side comes from Corollary 5.17 of [14], and the d on the right-hand side from Proposition 2.4 therein). In terms of the dependence on d, the resulting bound is weaker than (2.2).

The organization of the rest of the article is the following. In Section 3 we introduce two more functional inequalities. They will serve as intermediate steps between the Latała-Oleszkiewicz and the modified log-Sobolev inequalities. In Section 4 we prove the main result and Corollaries 2.2 and 2.4. The rest of the paper deals with measures on the real line. One motivation is to make progress on a question that we do not fully settle: our main theorem shows an implication between two properties; are they actually equivalent? In Section 5, we recall the known criteria. In Section 6 we consider the weighted log-Sobolev inequalities used by [13] in order to derive (1.6). We show that the two properties are not equivalent.
Workable criteria are available for measures on R whose potential is strictly increasing near ∞. In the final section, we develop an elementary approach to deal with potentials with vanishing derivatives or even decreasing parts. We illustrate this method on the functional inequalities of interest.

Preliminaries: a few more inequalities
We start with the following observation.

Lemma 3.1. If µ satisfies the Latała-Oleszkiewicz inequality (1.5) with constant C_LO, then it satisfies the Poincaré inequality (1.2) with C_P ≤ C_LO.

Proof. By taking θ → 1⁺ in (1.5) we see that (1.2) holds for all positive smooth functions (with constant C_LO). Since the variance is translation invariant, we conclude that (1.2) holds for all smooth functions bounded from below. The general case follows by approximation.
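The limiting computation behind this proof can be spelled out as follows (a standard sketch):

```latex
% As \theta \to 1^+ the denominator in (1.5) tends to 1:
\lim_{\theta\to 1^+} (2-\theta)^{2(1-1/r)} = 1,
% while, for a positive function f, by continuity of
% \theta \mapsto \big(\int f^\theta\, d\mu\big)^{2/\theta} at \theta = 1,
\int f^2\,d\mu-\Big(\int f^{\theta}\,d\mu\Big)^{2/\theta}
\;\xrightarrow[\theta\to 1^+]{}\;
\int f^2\,d\mu-\Big(\int f\,d\mu\Big)^{2}=\operatorname{Var}_\mu(f).
% Hence (1.5) yields
% \operatorname{Var}_\mu(f) \le C_{LO}\int|\nabla f|^2\, d\mu,
% i.e., the Poincare inequality (1.2) with C_P \le C_{LO}.
```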
Remark 3.2. Alternatively, one can deduce the Poincaré inequality from the fact that inequality (1.5) implies dimension-free concentration, combined with the results of [16].
We say that a probability measure µ on R^d satisfies an F_r-Sobolev inequality if there exists C such that for every smooth g : R^d → R,
\[
\int g^2 F_r\Big(\frac{g^2}{\int g^2 \, d\mu}\Big) d\mu \le C \int |\nabla g|^2 \, d\mu, \tag{3.1}
\]
where one can take $F_r(t) = \log_+^{2(1-1/r)}(t)$, suitably modified so that the inequality is tight, i.e., we have equality for constant functions (if g is constant and equal to zero on its support, then the expression 0/0 should be interpreted as 0 here and in (3.3) below). We say that µ on R^d satisfies a defective F_r-Sobolev inequality if there exist B and C such that for every smooth g : R^d → R,
\[
\int g^2 F_r\Big(\frac{g^2}{\int g^2 \, d\mu}\Big) d\mu \le C \int |\nabla g|^2 \, d\mu + B \int g^2 \, d\mu. \tag{3.2}
\]
In [3] Barthe, Cattiaux, and Roberto provided capacity criteria for, among others, the Latała-Oleszkiewicz and F_r-Sobolev inequalities. We refer to Section 5 of [3] for a thorough overview of the topic, and in particular to the diagram on page 1041, which we use as a road map. The following theorem is a direct corollary of the results contained therein (and also of Wang's independent paper [25]).

Theorem 3.3. Let r ∈ (1, 2) and let µ be an absolutely continuous probability measure on R^d which satisfies the Latała-Oleszkiewicz inequality (1.5) with constant C_LO. Then µ satisfies the F_r-Sobolev inequality (3.1) with a constant depending only on r and C_LO.

Sketch of proof. The capacity criteria of [3] imply that if µ satisfies the Latała-Oleszkiewicz inequality (1.5) with some constant C_LO, then the capacity of every set A of measure at most 1/2 is bounded from below in terms of $\mu(A)\log^{2(1-1/r)}(1 + 1/\mu(A))$; Theorem 28 of [3] then gives¹, for every smooth function f : R^d → R, an F_r-Sobolev inequality after a suitable substitution.

Remark 3.4. The latter theorem remains valid even if µ is not absolutely continuous. To see this we use an approximation argument. Let γ_ε be the centered Gaussian measure on R^d with covariance matrix ε Id. For small enough ε > 0, γ_ε satisfies the Latała-Oleszkiewicz inequality with the same constant as µ and hence, by tensorization, so does µ ⊗ γ_ε. Testing the inequality with the function (x, y) → f(x + y), we conclude that µ * γ_ε also satisfies the Latała-Oleszkiewicz inequality (with a constant at most 2C_LO(µ) when ε is small enough). Thus, by Theorem 3.3, µ * γ_ε satisfies the F_r-Sobolev inequality.
We fix a bounded smooth Lipschitz function, take ε → 0, and arrive at the conclusion that µ satisfies the F_r-Sobolev inequality for all bounded smooth Lipschitz functions (we have pointwise convergence, and since the function is Lipschitz and bounded we can use the dominated convergence theorem). Now if f is an arbitrary smooth function such that $\int_{\mathbb{R}^d} |\nabla f|^2 \, d\mu < \infty$, then it suffices to consider the functions f_n = Ψ_n(f), where Ψ_n : R → R is, say, an odd and non-decreasing function which equals the identity on [0, n], is constant on [n + 2, ∞), and in between is defined via Ψ_n(x) = n + ψ(x − n), where ψ : [0, 2] → [0, 1] is smooth and increasing on (0, 2), with ψ(0) = 0, ψ(2) = 1, and with vanishing derivatives at the endpoints. We then use dominated convergence on the right-hand side and monotone convergence on the left-hand side (note that by the Poincaré inequality f is square-integrable).
We need another inequality introduced by Barthe and Kolesnikov in [4]. Let τ ∈ (0, 1); one says that a probability measure µ on R^d satisfies the inequality I(τ) if there exist constants B₁ and C₁ such that inequality (3.3) holds for every smooth f : R^d → R. This inequality is related to the previous ones. The next statement is a quotation of Theorem 4.1 in [4], with a stronger assumption (of a Poincaré inequality instead of a local Poincaré inequality).

¹ The assumption of absolute continuity of µ comes into play at this point. Indeed, the proof of Theorem 28 in [3] relies on a decomposition of R^d into level sets {f² > ρ_k}, for some well-chosen ρ_k (cf. proof of Theorem 20 in [3]), and one needs to know that the sets {f² = ρ_k} ∩ {|∇f| = 0} are negligible.

Theorem 3.5 (Barthe-Kolesnikov [4]). Let r ∈ (1, 2). Let µ be a probability measure satisfying inequality I(2/r′). Then µ satisfies a defective F_r-Sobolev inequality (3.2) and a defective modified log-Sobolev inequality with parameter r.
If in addition µ satisfies a Poincaré inequality, then it satisfies an F_r-Sobolev inequality (3.1) and a modified log-Sobolev inequality with parameter r, (1.6), with constants depending only on the constants of the input inequalities.
We establish a partial converse to the above implication: Theorem 3.6. Let r ∈ (1, 2). Assume that a probability measure µ on R d satisfies the defective F r -Sobolev inequality (3.2) with constants B and C. Then µ satisfies the I(2/r ′ ) inequality (3.3) with some constants B 1 and C 1 which depend only on B, C, and r.
Proof. We reverse the reasoning from the proof of Theorem 4.1 in [4]. Fix a smooth function f such that the right-hand side of (3.3) is finite. We may and do assume that Ent_µ(f²) < ∞. Let Φ denote the Orlicz function underlying (3.3) (which is convex since the function t → t log^{1−2/r′}(e + t) is convex and increasing for t > 0; recall that r′ > 2). Denote by L the Luxemburg norm of f related to Φ:
\[
L = \inf\Big\{\lambda > 0 : \int \Phi\big(f^2/\lambda^2\big) \, d\mu \le 1\Big\}.
\]
Let us first express the right-hand side of this inequality in terms of f. For x ∈ R denote φ(x) := x log^{1/2−1/r′}(e + x²). Then a direct computation expresses the right-hand side as an integral against dµ.
As for the left-hand side of (3.4), it is easy to see that there exists κ₁ = κ₁(r) > 0 such that a pointwise lower bound holds for all y > 0. Applying this inequality with y = f²/L², we arrive at a lower bound for the left-hand side of (3.4). It remains to replace the expression on the left-hand side by Ent_µ(f²) and to estimate L². Finally, it is easy to see that for every ε > 0 there exists κ₂ = κ₂(ε, r) such that a corresponding upper bound holds for all y > 0. Using first the definition of L and the fact that $L^2 \ge \int_{\mathbb{R}^d} f^2 \, d\mu$, and then the above bound (with $y = f^2/\int_{\mathbb{R}^d} f^2 \, d\mu$), we can thus estimate L². Eventually, for ε small enough we combine this bound with (3.6) and simplify the entropy terms (recall that by our assumption Ent_µ(f²) < ∞) in order to reach the claim.
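As a side illustration, the Luxemburg norm used above is defined by an implicit scalar condition, so it can be computed by bisection. The sketch below works with an empirical measure and a generic stand-in Orlicz function (the Φ of the proof carries logarithmic corrections with exponents depending on r′):

```python
import math

def luxemburg(samples, Phi):
    # Luxemburg norm of f with respect to Phi under the empirical measure of
    # `samples`: inf{ lam > 0 : mean Phi(|f_i| / lam) <= 1 }. The map
    # lam -> mean Phi(|f|/lam) is decreasing, so bisection applies.
    def mean_phi(lam):
        return sum(Phi(abs(s) / lam) for s in samples) / len(samples)
    lo, hi = 1e-9, 1.0
    while mean_phi(hi) > 1:
        hi *= 2
    for _ in range(200):
        mid = (lo + hi) / 2
        if mean_phi(mid) > 1:
            lo = mid
        else:
            hi = mid
    return hi

# Sanity check: for Phi(t) = t^2 the Luxemburg norm is the L^2 norm.
samples = [3.0, 4.0]
l2 = math.sqrt(sum(s * s for s in samples) / len(samples))  # sqrt(12.5)
assert abs(luxemburg(samples, lambda t: t * t) - l2) < 1e-6

# A stand-in Orlicz function of the logarithmic flavor used in the proof:
L = luxemburg(samples, lambda t: t * math.log(math.e + t))
assert L >= l2  # the extra log factor makes the norm larger here
```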

Proof of the main result and its corollaries
Proof of Theorem 2.1. Our assumption is that µ satisfies a Latała-Oleszkiewicz inequality with parameter r. Therefore by Lemma 3.1 it also satisfies a Poincaré inequality, and by Theorem 3.3 (and Remark 3.4 if µ is not absolutely continuous) it satisfies a (tight) F_r-Sobolev inequality. From Theorem 3.6, we deduce that µ enjoys an I(2/r′) inequality. Eventually, Theorem 3.5 asserts that the I(2/r′) inequality, together with the Poincaré inequality, implies a (tight) modified log-Sobolev inequality with parameter r. The constants that we obtain in the above inequalities depend only on r and C_{LO(r)}. Proving the claimed dependence on C_{LO(r)} is straightforward, but requires tracking the constants in the various intermediate statements. We omit the details.
Remark 4.1. Let us comment here that for d = 1 it is known that the inequalities (1.5) and (1.6) hold if and only if they hold with the integration with respect to µ on the right-hand side replaced by integration with respect to µ ac , the absolutely continuous part of µ (cf. [9], Appendix of [19], Appendix of [15]).
For the proofs of the corollaries we need one more technical lemma. We denote by $H^*_{r'}(t) := \sup_{s \in \mathbb{R}} \{st - H_{r'}(s)\}$, t ∈ R, the Legendre transform of H_{r′} (we refer to the book [20] for more information on this topic).

Lemma 4.2. Let r ∈ (1, 2). The function $H^*_{r'}$ is given by the formula
\[
H^*_{r'}(t) = \begin{cases} t^2/4 & \text{if } |t| \le 2, \\ |t| - 1 & \text{if } 2 \le |t| \le r', \\ |t|^r / \big(r (r')^{r-1}\big) & \text{if } |t| \ge r'. \end{cases}
\]

Proof. The first part is a straightforward calculation. To prove the second part, first notice that

This allows one to verify the inequality for $H^*_{r'}$ in the interior of the interval. Therefore it is enough to check the inequality at the endpoints of the interval, which we have already done.
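The Legendre transform of $H_{r'}$ can be double-checked numerically: on $|t| \le 2$ the quadratic branch gives $t^2/4$, on $2 \le |t| \le r'$ the supremum sits at the junction point s = 1 and gives $|t| - 1$, and for $|t| \ge r'$ the power branch gives $|t|^r/(r(r')^{r-1})$. A small sketch for the sample value r′ = 4 (so r = 4/3):

```python
import numpy as np

def H(s, rp):
    # H_{r'}(s) = max{s^2, |s|^{r'}}
    return np.maximum(s**2, np.abs(s)**rp)

def H_star(t, rp, s_max=20.0, n=400_001):
    # numerical Legendre transform: sup over s of { s*t - H_{r'}(s) }
    s = np.linspace(-s_max, s_max, n)
    return np.max(s * t - H(s, rp))

rp = 4.0              # r' = 4, hence r = r'/(r'-1) = 4/3
r = rp / (rp - 1)

# quadratic region: H*(t) = t^2/4 for |t| <= 2
assert abs(H_star(1.0, rp) - 0.25) < 1e-3
# linear region: H*(t) = |t| - 1 for 2 <= |t| <= r'
assert abs(H_star(3.0, rp) - 2.0) < 1e-3
# power region: H*(t) = |t|^r / (r * (r')**(r-1)) for |t| >= r'
t = 8.0
assert abs(H_star(t, rp) - t**r / (r * rp**(r - 1))) < 1e-3
```

The three branches agree at the junctions |t| = 2 and |t| = r′, mirroring the two-level structure of $H_{r'}$ itself.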
Proof of Corollary 2.2. A classical argument of Herbst (see, e.g., [18]) allows one to deduce concentration bounds from log-Sobolev inequalities. It was implemented in [6] for modified log-Sobolev inequalities with energy terms H(∇f/f) involving general functions H. We rather follow the calculation of [1], which is better suited to the case H = H_{r′}. Take a function f : (R^d)^n → R with the pointwise gradient bounds of the statement, and let $F(\lambda) = \int_{\mathbb{R}^{dn}} e^{\lambda f(x)} \, d\mu^{\otimes n}$. Then $\lambda F'(\lambda) = \int \lambda f(x) e^{\lambda f(x)} \, d\mu^{\otimes n}$, and hence, since µ satisfies the modified log-Sobolev inequality with some constant C = C(C_LO, r) (by Theorem 2.1) and by the tensorization property, the entropy of $e^{\lambda f}$ is controlled by the corresponding $H_{r'}$ energy, where we used the inequality $\sum_i \max\{a_i^2, |a_i|^{r'}\} \le \sum_i a_i^2 + \sum_i |a_i|^{r'}$. After dividing both sides by λ²F(λ) we can rewrite this as a differential inequality for $\frac{1}{\lambda}\log F(\lambda)$. Since the right-hand side is an increasing function of λ > 0 and $\lim_{\lambda \to 0^+} \frac{1}{\lambda}\log F(\lambda) = \int f \, d\mu^{\otimes n}$, we deduce from the last inequality a bound on the log-Laplace transform of f. Therefore from Chebyshev's inequality we get, for t > 0 and any λ > 0, a deviation bound which can then be optimized with respect to λ. Let U and V denote the two energy terms controlling the λ² and λ^{r′} contributions; using Lemma 4.2 and the definitions of U and V we get the assertion of the corollary.

Proof of Corollary 2.3. Given f as in the statement, consider a smooth regularization f_ε of f (say, by convolution with a smooth approximation of the identity). Since it is smooth, the ℓ₂-norm and the norm ‖·‖_{r′,2} of its gradient can be estimated pointwise by L₂ and L_{r,2}, respectively. Therefore we can apply Corollary 2.2 to f_ε. Moreover, f_ε converges uniformly to f as ε tends to zero. This observation ends the proof of the corollary.
Proof of Corollary 2.4. We follow the approach of Bobkov and Ledoux from Section 2 of [8]. Take a set A ⊂ R^{dn} with µ^⊗n(A) ≥ 1/2. For x = (x₁, …, x_n) ∈ (R^d)^n, denote $F(x) = \inf_{a \in A} G(x - a)$, where G is the auxiliary function introduced at the end of this proof. Take any t > 0 and set f = min{F, t}.
We claim that for all x, y ∈ (R^d)^n, inequality (4.1) holds. Suppose that we already know this. Note that (2t^{1/r′})^r = 2^r t^{r−1}. Also, $\int_{\mathbb{R}^{dn}} f \, d\mu^{\otimes n} \le t/2$, since F = 0 on A and µ^⊗n(A) ≥ 1/2. Consequently, by Corollary 2.3 and (4.1), the deviation bound holds with $K = \frac{1}{32}\min\{1/C, 1/C^{r-1}\}$, where C is the constant with which, by Theorem 2.1, the modified log-Sobolev inequality holds for µ. Since clearly the sublevel set {F < t} is contained in the claimed enlargement of A, this yields the first assertion of the corollary. The second part follows by the inclusion $B_r^{dn} \subset B_{r,2}$.

It remains to prove the claim (4.1). To this end, consider the auxiliary function G and g(x) = min{G, t}. Since g is locally Lipschitz, it suffices to show that the stated gradient bounds hold a.e. Indeed, this will imply that (4.1) holds with g in place of f (note that the norm ‖·‖_{r,2} is dual to the norm ‖·‖_{r′,2}). Since $f(x) = \inf_{a \in A} g(x - a)$ (and the infimum of Lipschitz functions is Lipschitz with the same constant), the same estimates will be inherited by f.
On the open set {G > t} the estimates obviously hold, since g is constant. The set {G = t} is Lebesgue negligible. Thus in what follows it suffices to consider the set {G < t} on which g = G.
If, for some i, |x i | < 1, then If on the other hand |x i | > 1, then Thus, a.e. (the set where |x i | = 1 for some i is negligible), Consequently, on the set {G < t}, it holds a.e.
Therefore the proof is complete.

Criteria for measures on the real line
From now on we restrict to probability measures on the real line. In this setting, more tools are available: for several functional inequalities, workable equivalent criteria exist. They are based on Hardy-type inequalities, of the form
\[
\int_0^{\infty} \Big(\int_0^x f(t) \, dt\Big)^2 d\mu(x) \le C \int_0^{\infty} f^2 \, d\nu.
\]
We refer to [2] for the history of the topic, from the original book of Hardy, Littlewood and Pólya, to the general version by Muckenhoupt. The textbook [2] also mentions that such Hardy inequalities yield the following criterion for Poincaré inequalities on R (where we include a numerical improvement from [19]): Let µ be a probability measure on R, with median m. Let ν be a probability measure on R, and let n denote the density of its absolutely continuous part. Then the (possibly infinite) best constant C_P such that for all smooth f,
\[
\mathrm{Var}_\mu(f) \le C_P \int f'^2 \, d\nu,
\]
satisfies $\max\{B_P^+, B_P^-\} \le C_P \le 4 \max\{B_P^+, B_P^-\}$, where
\[
B_P^+ = \sup_{x > m} \mu([x, +\infty)) \int_m^x \frac{dt}{n(t)}, \qquad B_P^- = \sup_{x < m} \mu((-\infty, x]) \int_x^m \frac{dt}{n(t)},
\]
and where by convention 0 · ∞ = 0. Bobkov and Götze [9] extended the reach of these methods by proving a similar statement for log-Sobolev inequalities of the form
\[
\mathrm{Ent}_\mu(f^2) \le C_{LS} \int f'^2 \, d\nu.
\]
Their result reads as the previous one, with different numerical constants and with $B_P^+$, $B_P^-$ replaced by
\[
B_{LS}^+ = \sup_{x > m} \mu([x, +\infty)) \log\Big(1 + \frac{1}{2\mu([x, +\infty))}\Big) \int_m^x \frac{dt}{n(t)}
\]
and $B_{LS}^-$ defined similarly for x < m. This criterion was later extended to the Latała-Oleszkiewicz inequality (1.5). Let µ be a probability measure on R. Denote by m the median of µ and by n the density of its absolutely continuous part. Barthe and Roberto [5] proved that µ satisfies the Latała-Oleszkiewicz inequality (1.5) if and only if $\max\{B_{LO(r)}^+, B_{LO(r)}^-\} < \infty$, where
\[
B_{LO(r)}^+ = \sup_{x > m} \mu([x, +\infty)) \log^{2(1-1/r)}\Big(1 + \frac{1}{2\mu([x, +\infty))}\Big) \int_m^x \frac{dt}{n(t)}
\]
and $B_{LO(r)}^-$ is defined similarly for x < m, r ∈ (1, 2).
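To illustrate the Poincaré criterion, the Muckenhoupt quantity can be evaluated in closed form for the symmetric exponential measure (taking ν = µ in the criterion); the sketch below checks it numerically:

```python
import numpy as np

# Muckenhoupt quantity B_P^+ for the symmetric exponential measure
# dnu(t) = e^{-|t|} dt / 2 (median m = 0, taking mu = nu):
#   B_P^+ = sup_{x>0}  nu([x, inf)) * integral_0^x dt / n(t),
# where n(t) = e^{-t}/2 for t > 0, so the integral equals 2(e^x - 1),
# nu([x, inf)) = e^{-x}/2, and the product is 1 - e^{-x} -> 1.
x = np.linspace(1e-6, 40.0, 100_000)
product = (np.exp(-x) / 2) * (2 * (np.exp(x) - 1))
B = product.max()
assert abs(B - 1.0) < 1e-6
# The criterion then gives 1 <= C_P <= 4, consistent with the classical
# value C_P = 4 for the two-sided exponential measure.
```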
In the subsequent paper [6], Barthe and Roberto provided a criterion for the modified log-Sobolev inequality (1.6). However, they did not reach a full equivalence. Here is the outline of Theorem 10 in [6]. Let dµ(t) = n(t)dt be a probability measure on R with median m. If µ satisfies the Poincaré inequality with constant C_P and $\max\{B_{mLS(r)}^+, B_{mLS(r)}^-\} < \infty$, where $B_{mLS(r)}^+$ is a Muckenhoupt-type quantity adapted to (1.6) and $B_{mLS(r)}^-$ is defined similarly but with x < m, then µ satisfies the modified log-Sobolev inequality (1.6) with a constant controlled by C_P and $\max\{B_{mLS(r)}^+, B_{mLS(r)}^-\}$. The converse implication is, so far, known only under the following additional assumption: there exists ε > 0 such that a comparison condition (5.5) holds for all x ≠ m. In this case, if µ satisfies the modified log-Sobolev inequality (1.6), then $\max\{B_{mLS(r)}^+, B_{mLS(r)}^-\} < \infty$ and this quantity can be estimated in terms of the constant C_{mLS(r)}, up to constants depending on r and ε. The Poincaré inequality is a classical consequence of the modified log-Sobolev inequality, exactly as in Lemma 3.1.
Even though the above criteria involve simple concrete quantities, it does not seem easy to use them to reprove our main result, Theorem 2.1, for measures on R. However, if one assumes for example that dµ(t) = e^{−V(t)} dt for a sufficiently regular potential V, then one can estimate the quantities $B_{LO(r)}^+$, $B_{mLS(r)}^+$ and show that the Latała-Oleszkiewicz inequality (1.5) is equivalent to the modified log-Sobolev inequality (1.6), and furthermore to an explicit condition on the potential (cf. Remark 21 in [6]).
In the rest of the paper we use and develop one-dimensional criteria in order to study whether the modified log-Sobolev inequality is actually equivalent to other inequalities which are known to imply it.

Weighted vs. modified log-Sobolev inequality
It is known that if a probability measure µ on R d satisfies a certain weighted log-Sobolev inequality (and an integrability condition), then it also satisfies a modified log-Sobolev inequality, see Theorem 3.4 in [12] (in the context of a specific measure on the real line a similar argument appears already in the large entropy case of the proof of Theorem 3.1 from [13]). The goal of this subsection is to show that the converse implication does not hold in general, even for measures on the real line.
First we present a workable criterion for the weighted log-Sobolev inequality.
Proposition 6.1. Let dµ(x) = Z^{−1}e^{−V(x)} dx be a probability measure on the real line. Let V : R → R be even and locally bounded. Assume that in some neighborhood of ∞, the function V is of class C², and that the ratio V″(x)/V′(x)² is suitably controlled at infinity (conditions (i) and (ii) below). Then there exists C < ∞ such that µ satisfies the weighted log-Sobolev inequality (6.1) for every f : R → R.

Remark 6.2. Condition (ii) can be weakened to a smallness condition on the lim sup as x → ∞ of the same ratio.

Proof. By the Bobkov-Götze criterion [9] (see (5.2)), µ satisfies the weighted log-Sobolev inequality if and only if condition (6.2) holds. Of course, it suffices to investigate what happens for x → ∞. Note that, by Assumption (i), the tail µ([x, ∞)) and the weighted Hardy integral admit explicit asymptotic equivalents (here by '∼' we mean that the ratio of both sides tends to 1 as x → ∞; to prove that this is indeed the case it suffices to consider the ratio of the derivatives of both sides). Therefore, (6.2) holds if and only if the resulting expression stays bounded, which, since V′(x) is bounded away from zero as x → ∞, happens under the stated assumptions.

Our example is a modification of the example constructed by Cattiaux and Guillin [11] to prove that the log-Sobolev inequality is strictly stronger than Talagrand's transportation cost inequality.

Proposition 6.3. For r ∈ (1, 2) and max{r/2, r − 1/r} < β − 1 < r − 1/2, define the oscillating potential U_{r,β} as in the construction below, and let µ_{r,β} be the probability measure with density proportional to $e^{-U_{r,\beta}(x)}$. Then µ_{r,β} satisfies the modified log-Sobolev inequality (1.6) and the Latała-Oleszkiewicz inequality (1.5) (with d = 1).
On the other hand, µ r,β does not satisfy the weighted log-Sobolev inequality (6.1).
Proof. Clearly, U′(x) ≥ βx^{β−1}; in particular $\liminf_{x\to\infty} U'(x) > 0$. Moreover, for x > 1, |U″(x)| can be bounded by Mx^r for some constant M = M(r, β). Thus the required asymptotic control holds, since β − 1 > r/2. We are thus in a position to apply workable versions of the criteria for the modified and weighted log-Sobolev inequalities (note that the normalization of µ_{r,β} amounts to adding a constant to the potential U, which does not affect the calculations and reasoning below).
On the other hand, for certain values of x tending to ∞, the relevant criterion quantity blows up, since β − 1 < r − 1/2. Thus, by Proposition 6.1 above, µ_{r,β} cannot satisfy the weighted log-Sobolev inequality.

Remark 6.4. The introduction of [21] suggests that the results of our Theorem 2.1 are contained in [26], namely that it follows from [26] that the F_r-Sobolev inequality (3.1) implies the modified log-Sobolev inequality (1.6). We would like to rectify this: Wang's paper [26] deals with measures with faster decay than Gaussian. He proves that in that setting an appropriate super-Poincaré inequality (or equivalently, an appropriate F-Sobolev inequality) implies a certain weighted log-Sobolev inequality. However, in our setting (measures with tail decay slower than Gaussian), we have an example of a measure which satisfies the modified log-Sobolev inequality (1.6) and the Latała-Oleszkiewicz inequality (1.5) (or equivalently, the F_r-Sobolev inequality (3.1)), but does not satisfy the weighted log-Sobolev inequality (6.1). Therefore Theorem 2.1 cannot be deduced from Wang's paper [26].

7. On potentials with vanishing derivatives

7.1. Motivation. Recall that r ∈ (1, 2) is the parameter associated with the Latała-Oleszkiewicz inequality (1.5) and the modified log-Sobolev inequality (1.6). Throughout this section we consider symmetric probability measures on the real line of the form
\[
d\mu_V(x) = Z^{-1} e^{-V(x)} \, dx,
\]
where V : R → R is even and Z is the normalization constant. It is easy to see that if ε ∈ [0, 1) and the potential V is, for x ∈ R, of the oscillatory type considered below, then µ_V satisfies both the Latała-Oleszkiewicz inequality (1.5) and the modified log-Sobolev inequality (1.6). Indeed, if ε ∈ [0, 1), then $\liminf_{x\to+\infty} V'(x) > 0$ and $\lim_{x\to\infty} V''(x)/V'(x)^2 = 0$, and the claim follows from the simplified versions of the Barthe-Roberto criteria (see (5.6)). This example becomes more interesting for ε = 1: since V′((2k + 1)π) = 0 for any integer k, we cannot apply the simplified asymptotic versions of the criteria. In particular, one would like to know if, for measures with such potentials, the modified log-Sobolev inequality (1.6) and the Latała-Oleszkiewicz inequality (1.5) are valid simultaneously.
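The precise potential is not fully recoverable from the surviving text; as a hypothetical stand-in in the spirit of the potential (x + λ sin x)² discussed below, one can take V(x) = (x + ε sin x)^r for x ≥ 0 and check that its derivative vanishes at the points (2k + 1)π exactly when ε = 1 (since 1 + ε cos x = 1 − ε there):

```python
import math

# Schematic stand-in potential (an assumption, not the paper's exact V):
#   V(x) = (x + eps * sin x)^r,  so that
#   V'(x) = r (x + eps*sin x)^{r-1} (1 + eps*cos x),
# which vanishes at x = (2k+1)*pi exactly when eps = 1.
r = 1.5

def V(x, eps):
    return (x + eps * math.sin(x)) ** r

def dV(x, eps, h=1e-6):
    # central finite difference approximation of V'
    return (V(x + h, eps) - V(x - h, eps)) / (2 * h)

x0 = 3 * math.pi
assert abs(dV(x0, 1.0)) < 1e-4      # eps = 1: critical point at (2k+1)*pi
assert dV(x0, 0.5) > 0.5            # eps < 1: the potential keeps increasing
```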
In the limit case r = 2, Cattiaux [10] proved that, for the corresponding oscillatory potential, µ_V satisfies the classical log-Sobolev inequality if and only if |λ| < 1 (note that this potential differs from (x + λ sin x)² only by a bounded perturbation). He used probabilistic methods which seem to rely on the fact that r = 2. Below we present an analytic approach and obtain an extension of his results.
The threshold r₀(α) in inequalities (1.5) and (1.6) suggests a weaker concentration than the one actually exhibited by the measures ν_α, which is better described by transportation cost inequalities, see [24, 23]. Let α ∈ (1, 2]. Recall that we say that a probability measure µ on the real line satisfies the transport-entropy inequality $T_{\min\{x^2, |x|^\alpha\}}(a)$ if for any probability measure σ on the real line,
\[
T_{\alpha, a}(\mu, \sigma) \le H(\sigma \mid \mu),
\]
where $T_{\alpha, a}$ is the optimal transport cost between the measures µ and σ with respect to the cost function $t \mapsto \min\{(at)^2, |at|^\alpha\}$, i.e.,
\[
T_{\alpha, a}(\mu, \sigma) = \inf_{\pi} \int \min\{(a(x-y))^2, |a(x-y)|^\alpha\} \, d\pi(x, y),
\]
where the infimum runs over the set of couplings between µ and σ, and H(σ|µ) stands for the relative entropy of σ with respect to µ.

Proposition 7.3. Let α ∈ (1, 2]. The measure ν_α satisfies the transport-entropy inequality $T_{\min\{x^2, |x|^\alpha\}}(a)$ with some constant a > 0 depending only on α.
One can also wonder what happens if we allow the potential V to have even bigger oscillations. For α > 1 and λ > 1 one defines analogously a potential V_{α,λ}, and lets ν_{α,λ} be the probability measure with density proportional to exp(−V_{α,λ}).

7.3. Proofs. In the next two proofs we shall omit the subscript α in the notation and write V, ν, and Z instead of V_α, ν_α and Z_α, respectively.

Denote for simplicity β = (α − 1)/3. For x > 0 we have the upper tail bound (7.1), where we used the fact that V is increasing on (0, ∞) and substituted t = u + u^{−β} in the second integral. Note that, by the convexity of the function x → x^α, x > 0, a lower bound with some constant c₁ = c₁(α) > 0 holds (for, say, u ≥ 1). Thus (7.1) implies that for sufficiently large x, the tail estimate (7.2) holds.
As above, denote for simplicity β = (α − 1)/3. For x > 2π we have where we used the fact that V is increasing on (0, ∞) and substituted t = u − u^{−β} in the second integral (l(x) > x − 2π is the unique number such that for some c₂ = c₂(α) > 0 and sufficiently large u > 0. Thus (7.3) implies that for sufficiently large x, for some c₂ = c₂(α) > 0. Thus, for sufficiently large x, For q > 2 the function t → t log^{2/q}(1 + 1/(2t)) is increasing for small enough positive t. Using (7.2) and (7.4), we see that for sufficiently large x > 0, (we omit multiplicative constants not depending on x). Clearly, if 1 < r ≤ r_0(α), then this quantity is bounded as x → ∞, and by the Barthe-Roberto criterion (see (5.3) above) the Latała-Oleszkiewicz inequality with parameter r does hold.
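The monotonicity claim for t → t log^{2/q}(1 + 1/(2t)) near 0 (for q > 2) is easy to check numerically. The following sketch, with an arbitrarily chosen value q = 3, is only an illustration of that claim:

```python
import math

def phi(t, q):
    """The weight t * log^(2/q)(1 + 1/(2t)) appearing in the criterion."""
    return t * math.log(1 + 1 / (2 * t)) ** (2 / q)

# For q > 2, phi should be increasing for small positive t: check on a grid
# of points increasing from 1e-8 to 1e-1 (q = 3 is an arbitrary choice).
q = 3.0
ts = [10 ** (-k) for k in range(8, 0, -1)]
vals = [phi(t, q) for t in ts]
assert all(a < b for a, b in zip(vals, vals[1:]))
```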
Conversely, let us show that if ν enjoys the Latała-Oleszkiewicz inequality with parameter r, then necessarily r ≤ r_0(α). This can be seen by focusing on the points x_k = (2k + 1)π, where V′ vanishes and our estimates can be reversed up to multiplicative constants. Indeed, for k large enough one can find a constant c₃ such that for For k and y as above, Consequently, as the function t → t log^{2/r′}(1 + 1/(2t)) is increasing for small enough positive t, for k sufficiently large we can write: If ν satisfies the Latała-Oleszkiewicz inequality with parameter r, then by the Barthe-Roberto criterion (see (5.3) above) the latter quantity remains bounded from above as k → ∞. This forces α/r′ ≤ β, or equivalently r ≤ r_0(α).
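The relation α/r′ ≤ β with β = (α − 1)/3 pins down the threshold: solving the equality case yields an explicit formula for r_0(α). This short derivation is ours, included for the reader's convenience, and uses only the quantities defined in the text:

```latex
% With r' = r/(r-1), the condition \alpha/r' \le \beta = (\alpha-1)/3 reads
\alpha\Bigl(1 - \frac{1}{r}\Bigr) \le \frac{\alpha-1}{3},
% i.e. 1/r \ge 1 - (\alpha-1)/(3\alpha) = (2\alpha+1)/(3\alpha), so that
r \le r_0(\alpha) := \frac{3\alpha}{2\alpha+1}.
% Sanity check: r_0(2) = 6/5 \in (1,2), and r_0(\alpha) \to 1 as \alpha \to 1^+.
```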
Next we turn to the proof of (i) ⇐⇒ (iii). In view of Theorem 2.1, we just need to prove that if ν satisfies a modified log-Sobolev inequality with parameter r, then necessarily r ≤ r_0(α). We apply the necessity part of the criterion of Barthe-Roberto. It requires Assumption (5.5), which is verified since (7.4) is valid for (r − 1)V instead of V, with different numerical constants. Then we use the fact that the quantity B^+_{mLS}(r) defined in (5.4) is bounded, together with lower bounds on ∫_{x_k}^∞ e^{−V} and ∫_0^{x_k} e^{(r−1)V}. The computations are similar to the ones above for the Latała-Oleszkiewicz inequality, and yield r ≤ r_0(α). We omit the details.
Proof of Proposition 7.3. Fix α ∈ (1, 2] and denote, for t ≥ 0, Let F_ν and F_exp be the cumulative distribution functions of ν and of the symmetric exponential measure with density ½ e^{−|x|}, respectively. By Proposition 7.1, ν satisfies the Latała-Oleszkiewicz inequality with r_0(α) > 1, so it satisfies the Poincaré inequality. Thus, by Theorem 1.1 of [15], in order to prove the assertion it suffices to show that there exists b = b(α) > 0 such that for all x, y ∈ R. Note that for x ≥ 0 we have x = F_ν^{−1}(F_exp(x)) if and only if Thus it suffices to check whether there exists b = b(α) > 0 such that for all x, y ∈ R (recall that ν is symmetric). If |x − y| ≤ 2π, then one can guarantee that (7.5) holds simply by taking b ≤ (2π)^{−1}. Let us therefore consider the case when |x − y| ≥ 2π. We have two cases: 1. x, y are of different signs, 2.
x, y are of the same sign.

Case 1. In the first case we have From the proof of Proposition 7.1 we know that for sufficiently large t > 0 we have (see (7.2)). Thus, for sufficiently large t > 0, Therefore, we can choose b > 0 such that 2^{α−1} b^α t^α ≤ N(t) + ½ holds for all t > 0. Then which is exactly (7.5) in the case when x, y are of different signs.
Case 2. Suppose now that x, y are of the same sign, say x ≥ y + 2π ≥ y ≥ 0. Observe that, for t > 0 and k ∈ N, Thus, for t > 0 and s ≥ 2π, Therefore, for t > 0 and s ≥ 2π, Thus, if we take b ≤ 1/2, then (7.5) holds also for x ≥ y + 2π ≥ y ≥ 0 (we substitute x = t + s, y = t). This finishes the proof.
Moreover, by the convexity of the function x → x^α (x > 0), for h ∈ [0, δ₀] we have Putting together (7.6) and (7.7), we observe that

The classical approach is based on writing e^V = (1/V′) × V′ e^V and on an integration by parts. It works if there exist x₀ and ε > 0 such that for x ≥ x₀, V is of class C², V′ > 0 and V″/(V′)² ≤ 1 − ε. In this case, up to multiplicative constants which depend on ε, for x ≥ x₀, ∫_x^∞ e^{−V} ≈ e^{−V(x)}/V′(x) and ∫_{x₀}^x e^{V} ≈ e^{V(x)}/V′(x). This approach cannot work if V′ vanishes at arbitrarily large values, as was the case for V_α. The approach that we used for ν_α can still be applied in such situations, when the potential is, in some sense, essentially increasing. The key parameter at a point x is a number θ(x) > 0 such that for some constants C ≥ c > 0 (independent of x), In words, V is essentially constant on [x − θ(x), x + θ(x)], but does increase between the left endpoint and the center, and between the center and the right endpoint. For the upper bound, one also needs V to grow at least linearly: V(x + K) ≥ V(x) + c. Under some additional assumptions (e.g., θ′ is small enough compared to c), one gets ∫_x^∞ e^{−V} ≈ θ(x) e^{−V(x)} and ∫_{x₀}^x e^{V} ≈ θ(x) e^{V(x)}. Note that when V′(x) > 0, 1/V′(x) is heuristically the scale at which V moves by 1, which makes a connection with the classical approach. Let us also mention that for measures having the latter properties, the Latała-Oleszkiewicz and the modified log-Sobolev inequalities with parameter r hold simultaneously. Indeed, if θ is bounded from above, then Condition (5.5) is verified. Then the quantities B^+_{LO}(r) and B^+_{mLS}(r) are comparable, since for x large

Let us conclude with a simple observation about potentials which are nonincreasing on infinitely many intervals (variants involving essentially nonincreasing ones can be written).
Lemma 7.5. Let µ be a probability measure on the real line with density proportional to exp(−V (x)) for some locally bounded V : R → R. Suppose that there exists ε > 0 and a sequence of positive real numbers x n → ∞, such that V is nonincreasing on (x n − ε, x n + ε). Then µ does not satisfy the Latała-Oleszkiewicz inequality (1.5) with any parameter r ∈ (1, 2).
The above result should be compared to Proposition 7.4. Observe that measures satisfying the hypotheses of the Lemma may still satisfy a Poincaré inequality. This is the case for the potential V(x) = ⌊|x|⌋ involving the integer part. This potential is constant on every interval [k, k + 1), k ∈ N. Nevertheless, V is a bounded additive perturbation of the potential |x| of the symmetric exponential distribution, hence the associated measure satisfies a Poincaré inequality.
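The bounded-perturbation claim for V(x) = ⌊|x|⌋ can also be verified mechanically. The following sketch is purely illustrative:

```python
import math

def V(x):
    """Potential V(x) = floor(|x|), constant on each interval [k, k+1)."""
    return math.floor(abs(x))

# V differs from the exponential potential |x| by the fractional part of |x|,
# which always lies in [0, 1); hence V is a bounded additive perturbation.
gaps = [abs(x / 100) - V(x / 100) for x in range(-10**4, 10**4 + 1)]
assert all(0 <= g < 1 for g in gaps)

# V is constant on each interval [k, k+1): sample one such interval.
assert V(3.0) == V(3.99) == 3
```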