A Fundamental Condition for Harmonic Analysis in Anisotropic Generalized Orlicz Spaces

Anisotropic generalized Orlicz spaces have been investigated in many recent papers, but the basic assumptions are not as well understood as in the isotropic case. We study the greatest convex minorant of anisotropic Φ-functions and prove the equivalence of two widely used conditions in the theory of generalized Orlicz spaces, usually called (A1) and (M). This provides a more natural and easily verifiable condition for use in the theory of anisotropic generalized Orlicz spaces for results such as Jensen's inequality, which we obtain as a corollary.

In essence, the (A1) condition says that Φ⁺_B can be bounded by Φ⁻_B in small balls B ⊂ R^n in a quantitative way, whereas (M) says that it can be similarly bounded by the greatest convex minorant (Φ⁻_B)_conv of Φ⁻_B (see Definition 3.2). Obviously, the latter is a stronger condition, and it is also more difficult to verify, since the relationship between (Φ⁻_B)_conv and Φ⁻_B may be complicated in the anisotropic case. In the isotropic case Φ⁻_B ≃ (Φ⁻_B)_conv, so (M) and (A1) are equivalent. In the anisotropic case this equivalence does not hold (see Example 4.1), but we are nevertheless able to prove the equivalence of the conditions by a more careful analysis. This result and the techniques introduced in this paper will allow for the development of a theory of anisotropic generalized Orlicz spaces with more natural assumptions. As an example we prove the following Jensen-type inequality. Although the extra assumption in the corollary that the modular of f is at most 1 may seem strange, it arises naturally, for instance, when dealing with local regularity, and it is known that the anisotropic Jensen inequality does not hold without restrictions.
Let us next define precisely the concepts we are using and characterize functions for which the equivalence Φ ≃ Φ_conv holds (Corollary 2.4). In Sect. 3, we define the conditions (A1) and (M) and give preliminary remarks regarding the definitions. Finally, in Sect. 4 we prove the main results mentioned above.

Almost Convexity and the Greatest Convex Minorant
I refer to the monographs [9,18] for background on isotropic and anisotropic generalized Orlicz spaces, respectively. We consider functions Φ : Ω × R^m → [0, ∞]; the capital letter is used to highlight the distinction from the isotropic case in [18], where ϕ : Ω × [0, ∞) → [0, ∞]. We use the equivalence relation Φ ≃ Ψ, which means that there exists β ∈ (0, 1] such that Φ(x, βξ) ≤ Ψ(x, ξ) ≤ Φ(x, ξ/β). Here and in the rest of the paper β denotes a parameter which is given by one or more conditions; if the conditions hold with different β_k, then we can use β := min_k β_k for all the conditions, so that we may just as well use only the one common β. Since the parameter λ is inside Φ in the definition of ‖·‖_Φ, this is the natural way to compare functions (cf. Example 2.2). To ensure that the integral in the modular makes sense and ‖·‖_Φ is a norm, we require some conditions.

Definition 2.1 Let Ω ⊂ R^n be an open set. We say that Φ : Ω × R^m → [0, ∞] is a strong Φ-function, and write Φ ∈ Φ_s(Ω), if the following four conditions hold:

(1) x ↦ Φ(x, ξ) is measurable for every ξ ∈ R^m;
(2) Φ(x, 0) = 0, Φ(x, λξ) → 0 as λ → 0⁺ and Φ(x, λξ) → ∞ as λ → ∞, for a.e. x ∈ Ω and every ξ ≠ 0;
(3) ξ ↦ Φ(x, ξ) is lower semicontinuous for a.e. x ∈ Ω;
(4) ξ ↦ Φ(x, ξ) is convex for a.e. x ∈ Ω.

With these conditions, ‖·‖_Φ is a norm. Note that continuity in ξ follows from convexity if Φ is real-valued, and (3) is only needed to ensure that Φ does not jump to ∞. Note also that this class of strong Φ-functions is broader than that studied in [9], since we do not require upper and lower bounds in terms of N-functions independent of x. For instance, this definition allows for L¹- and L^∞-type growth. In the isotropic case in [18] we relaxed (3) and (4) further, and so used "strong" for this class, even though it is still less restrictive than N-functions.
For the study of Φ-functions depending on the space variable x, we use local approximations with the functions Φ⁺_B and Φ⁻_B from (1.1) [18,22]. However, Φ⁻_B need not be convex even if each Φ(x, ·) is (just think of min{t, t²}). In the isotropic case, ϕ⁻_B nevertheless satisfies the following weaker variant of (4) above: we say that Φ is almost convex if there exists β ∈ (0, 1] such that

Φ(x, β(αξ + α′ξ′)) ≤ αΦ(x, ξ) + α′Φ(x, ξ′)  (W4)

for a.e. x ∈ Ω and all α, α′ ≥ 0 with α + α′ = 1.
Unfortunately, even this does not hold for Φ⁻_B in the anisotropic case (see Example 4.1). The constant β in the almost convexity condition (W4) should be inside the function, since we do not assume that the functions are doubling, or even finite, as the following example illustrates. A constant outside is possible, but too restrictive.
If we choose ξ′ = 0 in the almost convexity condition (W4), then we obtain (aInc)_1:

Φ(x, βsξ)/s ≤ Φ(x, tξ)/t for 0 < s < t.

In the special case β = 1, i.e. for a convex function, we have (Inc)_1:

Φ(x, sξ)/s ≤ Φ(x, tξ)/t for 0 < s < t.

These inequalities mean that the function t ↦ Φ(x, tξ)/t is almost increasing or increasing, hence the notation (aInc)_1 and (Inc)_1. In [18] we showed that these inequalities are useful substitutes for convexity in the isotropic case. In particular, it is easy to see that Φ⁺_B and Φ⁻_B satisfy (aInc)_1 or (Inc)_1 if Φ does. For the anisotropic case the almost convexity is more appropriate, since it also carries information about non-radial behavior.
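The one-dimensional example mentioned above can be checked numerically. The following short script (my own illustration, not from the paper) verifies that f(t) = min{t, t²} fails midpoint convexity while t ↦ f(t)/t is nondecreasing, i.e. (Inc)_1 holds with β = 1:

```python
# Numerical illustration (not the paper's code): the radial function
# f(t) = min(t, t^2) -- the paper's example of a non-convex infimum --
# fails midpoint convexity, yet t -> f(t)/t is nondecreasing, so (Inc)_1 holds.

def f(t):
    return min(t, t * t)

# Midpoint convexity fails between t = 0.5 and t = 1.5:
a, b = 0.5, 1.5
mid = f((a + b) / 2)        # f(1.0) = 1.0
chord = (f(a) + f(b)) / 2   # (0.25 + 1.5)/2 = 0.875
assert mid > chord          # convexity would require mid <= chord

# (Inc)_1: f(s)/s <= f(t)/t whenever 0 < s <= t, since f(t)/t = min(1, t).
ts = [k / 100 for k in range(1, 301)]
ratios = [f(t) / t for t in ts]
assert all(r1 <= r2 + 1e-12 for r1, r2 in zip(ratios, ratios[1:]))
print("min(t, t^2) is not convex but satisfies (Inc)_1")
```
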
Let us denote by Φ_conv the greatest convex minorant of Φ. This function is often denoted by Φ**, since it can be obtained by applying the conjugation operation * twice [9, Corollary 2.1.42], but we will not use this fact here. We next show a connection between the greatest convex minorant and the almost convexity condition. The following is a version of Carathéodory's Theorem from convex analysis. It is probably known, but a proof is included for completeness, since I could not find a reference.
Proof Consider the epigraph of Φ,

E := {(ξ, t) ∈ R^m × R : t ≥ Φ(ξ)}.

By Carathéodory's Theorem (see, e.g., [30, Theorem 2.1.3]), every point in the convex hull of E can be represented as a convex combination of at most m + 2 points from E. Furthermore, we observe that if any of the points is from the interior of E, then the convex combination is also in the interior of the convex hull. Thus the points of the boundary, i.e. the graph of Φ_conv, are given as convex combinations of points in the boundary of E, i.e. on the graph of Φ. Hence

Φ_conv(ξ) = inf{ Σ_{k=1}^{m+2} α_k Φ(ξ_k) : Σ_{k=1}^{m+2} α_k ξ_k = ξ, Σ_{k=1}^{m+2} α_k = 1, α_k ≥ 0 }.

This is the claim, except with one extra point ξ_{m+2}.
However, ξ lies in the convex hull of ξ_1, …, ξ_{m+2} ∈ R^m. Thus by Carathéodory's Theorem in R^m, ξ can be expressed as a convex combination of at most m + 1 of the points ξ_1, …, ξ_{m+2}. By re-labeling if necessary, we obtain a representation with the points (ξ_k)_{k=1}^{m+1}. Since for a boundary point all the points (ξ_k, Φ(ξ_k)) lie on the same hyperplane, we also have the corresponding representation of Φ_conv(ξ) with m + 1 points.

We can now show that Φ ≃ Φ_conv for almost convex functions.

Proof Clearly, Φ_conv ≤ Φ, since Φ_conv is defined as a minorant of Φ. Let i be the smallest integer with 2^i ≥ m + 1 and set α_k := 0 and ξ_k := 0 for k > m + 1. By the almost convexity condition,

Φ(β(αξ + α′ξ′)) ≤ αΦ(ξ) + α′Φ(ξ′).

Iterating this i times, we obtain that

Φ(β^i Σ_{k=1}^{2^i} α_k ξ_k) ≤ Σ_{k=1}^{2^i} α_k Φ(ξ_k).

By Lemma 2.3 and this inequality,

Φ(β^i ξ) ≤ Φ_conv(ξ).

Thus the almost convexity implies that Φ ≃ Φ_conv. If, on the other hand, Φ ≃ Φ_conv with constant β, then we directly obtain

Φ(β(αξ + α′ξ′)) ≤ Φ_conv(αξ + α′ξ′) ≤ αΦ_conv(ξ) + α′Φ_conv(ξ′) ≤ αΦ(ξ) + α′Φ(ξ′).

For almost convex functions we easily obtain a Jensen inequality with an extra constant.
Proof By Corollary 2.4 and Jensen's inequality for the convex function Φ_conv,

Φ(β ⨍_B f dμ) ≤ Φ_conv(⨍_B f dμ) ≤ ⨍_B Φ_conv(f) dμ ≤ ⨍_B Φ(f) dμ.

Definition of and Remarks on Conditions
The (aInc)_1 and almost convexity (W4) conditions connect Φ(x, ξ) for different values of ξ with x fixed. However, more advanced results such as the density of smooth functions in Sobolev spaces require connecting Φ(x, ξ) for different values of x, cf. [6]. This is the purpose of the conditions (A1-Ψ) and (M-Ψ), which generalize (A1) and (M).
However, let us first start with the more elementary condition (A0): there exists β > 0 such that

Φ(x, βξ) ≤ 1 ≤ Φ(x, ξ/β) for a.e. x ∈ Ω and every |ξ| = 1.

This is a weaker variant of the assumption of upper and lower bounds by N-functions m_1 and m_2 used in [1,7–11,17]. This property is inherited by other versions of Φ, such as Φ⁺_B, Φ⁻_B and (Φ⁻_B)_conv, the last since (Φ⁻_B)_conv is defined as the greatest convex minorant of Φ⁻_B. Consequently, we may use (A0) freely for these variants as well. The condition (A1) was introduced in [22] (see also [18,28]) and is essentially optimal for the boundedness of the maximal operator in isotropic generalized Orlicz spaces. It also implies the Hölder continuity of solutions and (quasi)minimizers [5,20,21]. For higher regularity, we introduced in [23] a vanishing-(A1) condition along the same lines. These previous studies apply to the isotropic case, i.e. m = 1. In [24,25] we generalized the (A1)-conditions to the anisotropic case, although only the quasi-isotropic case was considered in the main results.
Chlebicka, Gwiazda, Zatorska-Goldstein and co-authors [1,7–12,17] considered the assumption (M) in the anisotropic case; in the next definition their condition is reformulated to make it easier to compare with the (A1) condition (see also Lemma 3.4); also note that some of the earlier works included additional restrictions in the condition.

Definition 3.2 Let Φ, Ψ ∈ Φ_s(Ω). We say that Φ satisfies (A1-Ψ) or (M-Ψ) if for any K > 0 there exists β > 0 such that

Φ⁺_B(βξ) ≤ Φ⁻_B(ξ) + 1  or  Φ⁺_B(βξ) ≤ (Φ⁻_B)_conv(ξ) + 1,

respectively, when Ψ⁻_B(ξ) ≤ K/μ(B), for all balls B ⊂ R^n with μ(B) ≤ 1 and all ξ ∈ R^m. When Ψ = Φ, we speak of (A1) and (M).

The role of Ψ is to calibrate the almost continuity requirement with the information on the function we are interested in; it was developed from the initial condition (A1) over the course of several studies [5,20,21]. For instance, we showed in [5, Theorem 3.9] that the weak Harnack inequality holds for non-negative supersolutions of div(ϕ′(|∇u|) ∇u/|∇u|) = 0 if the isotropic Φ-function ϕ satisfies (A1-ψ) and the supersolution satisfies u ∈ W^{1,ψ}(Ω), where ψ ∈ Φ_w(Ω) is a potentially different function. Note that this involves a trade-off, since larger ψ means more restriction on u and less restriction on ϕ.
As far as I know, Chlebicka, Gwiazda, Zatorska-Goldstein and co-authors considered (M) only in the cases Ψ(t) := t and Ψ(t) := t^p (i.e. (M-1) and (M-p) in the notation above). However, the next example illustrates why this does not lead to optimal results.

Example 3.3 (Variable exponent double phase) Let ϕ(x, t) := t^{p(x)} + a(x) t^{q(x)},
where a ∈ C^{0,α}(Ω), a ≥ 0 and 1 < p ≤ q. Now the (A1) or (M) conditions reduce to a pointwise relation between p, q and α, whereas the (M-1) and (M-p) versions yield a global relation between the extreme values of the exponents, which is worse and quite unnatural, as the largest value of q is bounded by the smallest value of p.
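For orientation, the following display is my reconstruction of the contrast the example alludes to, not a verbatim quotation from the source: in the double phase literature the condition matching (A1) is the pointwise inequality q(x)/p(x) ≤ 1 + α/n, whereas testing against a single power, as in (M-1) or (M-p), couples the extreme exponents globally:

```latex
% Hedged reconstruction for \varphi(x,t) = t^{p(x)} + a(x)\,t^{q(x)},
% a \in C^{0,\alpha}: pointwise versus global exponent conditions.
\underbrace{\frac{q(x)}{p(x)} \le 1 + \frac{\alpha}{n}}_{\text{pointwise, from (A1)/(M)}}
\qquad\text{versus}\qquad
\underbrace{\frac{\sup_x q(x)}{\inf_x p(x)} \le 1 + \frac{\alpha}{n}}_{\text{global, from (M-1) or (M-}p\text{)}}
```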
As a final remark about the formulation, we note that earlier papers used a form without the "+1" and instead restricted the range of Φ⁻_B. However, if (A0) holds, then these formulations are equivalent. We prove this for (M); the same applies to (A1).
, so the condition of the lemma holds with constant β².

Equivalence of Conditions
In the previous section we introduced and motivated the conditions (A1) and (M) and their variants. We now move on to the main result and consider their relation to one another. Since (Φ⁻_B)_conv ≤ Φ⁻_B, (M-Ψ) implies (A1-Ψ). If ϕ is isotropic and satisfies (aInc)_1, then I showed in [22] that ϕ⁻_B ≃ (ϕ⁻_B)_conv. Hence the two conditions are equivalent in this case. However, as pointed out in [9, Remarks 2.3.14 and 3.7.6], this approach is not possible in the anisotropic case. Since I did not understand the examples implicit in these remarks without consulting the authors, I include here an explicit example based on ideas of Piotr Nayar communicated to me by Iwona Chlebicka.

Let Φ : R^m → [0, ∞] be a strong Φ-function independent of x. Denote K_s := {Φ ≤ s} and observe that it is a convex compact set which includes 0 in its interior. Define

N_s(ξ) := s ‖ξ‖_{K_s}.

Here ‖·‖_{K_s} is the Minkowski functional of the set K_s, first studied by Kolmogorov [26]. The Luxemburg norm ‖·‖_Φ defined previously is another example of a Minkowski functional. Note that N_s is a convex function with {N_s ≤ s} = K_s. Since Φ is convex, Φ(λξ) ≤ λΦ(ξ) for λ ≤ 1. Thus N_s ≤ Φ outside K_s, N_s ≥ Φ in K_s and N_s = Φ on the boundary ∂K_s. In other words, we take the s-level set of Φ and replace Φ outside of it by the function N_s, which grows linearly.
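The Minkowski functional can also be evaluated numerically. The following sketch is my own illustration (the anisotropic sample function Φ(ξ) = |ξ₁| + ξ₂² is a hypothetical choice, not from the paper): it computes ‖ξ‖_{K_s} = inf{λ > 0 : ξ/λ ∈ K_s} by bisection and builds N_s(ξ) = s‖ξ‖_{K_s}.

```python
# Illustration (my own sketch, not the paper's construction). For a sample
# convex function Phi(xi) = |xi_1| + xi_2^2 we evaluate the Minkowski
# functional ||xi||_{K_s} = inf{lam > 0 : xi/lam in K_s}, K_s = {Phi <= s},
# by bisection, and set N_s(xi) := s * ||xi||_{K_s}, which is positively
# 1-homogeneous with {N_s <= s} = K_s.

def phi(xi):
    return abs(xi[0]) + xi[1] ** 2

def minkowski(xi, s, tol=1e-10):
    """Minkowski functional of K_s = {phi <= s} at xi (xi != 0)."""
    lo, hi = 0.0, 1.0
    while phi((xi[0] / hi, xi[1] / hi)) > s:   # grow hi until xi/hi lands in K_s
        hi *= 2
    while hi - lo > tol:                       # bisect for the infimum
        mid = (lo + hi) / 2
        if mid > 0 and phi((xi[0] / mid, xi[1] / mid)) <= s:
            hi = mid
        else:
            lo = mid
    return hi

def n_s(xi, s):
    return s * minkowski(xi, s)

# On the boundary of K_s we have N_s = Phi = s: e.g. xi = (1, 1), s = 2.
assert abs(n_s((1.0, 1.0), 2.0) - 2.0) < 1e-6
# N_s is 1-homogeneous, so it grows linearly along rays:
assert abs(n_s((3.0, 3.0), 2.0) - 3 * n_s((1.0, 1.0), 2.0)) < 1e-5
```
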
As noted above, the minimum of two convex functions need not be even almost convex. However, in the next proposition we show that M_s := min{Φ, N_s} is almost convex, since the two functions are somehow compatible. This will be used to construct a convex minorant of Φ⁻_B. The proposition also demonstrates the utility of the almost convexity condition, as it seems much more difficult to choose N_s so as to make the minimum convex while still being a minorant of Φ⁻_B.
Proof Note that M_s = Φχ_{K_s} + N_sχ_{R^m\K_s}, and let α, α′ > 0 with α + α′ = 1. If ξ, ξ′ ∉ K_s, then the claim follows from the convexity of N_s. If ξ, ξ′ ∈ K_s, then the inequality follows from the convexity of Φ, which holds by assumption. Therefore it suffices to consider the case ξ ∈ K_s and ξ′ ∉ K_s. Define ξ̄ := αξ + α′ξ′ and ζ̄ := ½ξ̄. We will show that

M_s(ζ̄) ≤ C(αM_s(ξ) + α′N_s(ξ′)).  (4.3)
Observe that M_s satisfies (Inc)_1, since N_s and Φ do. By (Inc)_1, (4.3) implies the almost convexity inequality with constant β := 1/(2C), which concludes the proof. We consider two cases to prove (4.3).
Case 2: ζ̄ ∉ K_s. Let ν := ‖ζ̄‖⁻¹_{K_s} < 1. Since K_s is closed, it follows from the definition of ‖·‖_{K_s} that Φ(νζ̄) = s and νζ̄ ∈ ∂K_s. Furthermore, N_s(νζ̄) = s = M_s(νζ̄), and so the claim follows, where we use the previous case for νζ̄ ∈ K_s in the first inequality and (Inc)_1 for the last step.
We are ready to prove the main theorem, i.e. the equivalence of (A1) and (M).

Proof of Theorem 1.2 Set s := 1 + K/μ(B).
Combining the two cases, we find that M_s(ξ) ≤ (1 + 1/K)Φ⁻_B(ξ) + 1 when ξ ∈ K_s. If ξ ∉ K_s, then ν := ‖ξ‖⁻¹_{K_s} < 1. As in Case 2 of the previous proof, νξ ∈ ∂K_s and the same estimate follows, where we use (Inc)_1 of Φ⁻_B in the last step. Note that s/(s − 1) = 1 + μ(B)/K ≤ 1 + 1/K, since we assumed that μ(B) ≤ 1.
In the previous paragraph we have shown that M_s ≤ (1 + 1/K)Φ⁻_B + 1. Therefore, the convex minorant of M_s is also a convex minorant of (1 + 1/K)Φ⁻_B + 1, and we conclude that (M_s)_conv ≤ (1 + 1/K)(Φ⁻_B)_conv + 1. We have thus established (M) with constant (K/(K + 1)) ββ′.

Remark 4.4
From the proof of Proposition 4.2, we see that the almost convexity constant equals 8. From the proof of Corollary 2.4, we see that β = 8^{-i}, where i is the smallest integer with 2^i ≥ m + 1. Thus 2^{-i} > 1/(2(m + 1)), so that β > (2(m + 1))^{-3}. Then we see from the proof of the previous theorem that the constant in (M) can be chosen as (2(m + 1))^{-3} (K/(K + 1)) β, when β is the constant from (A1). In other words, the constants from the two conditions are comparable up to a constant depending on the dimension when K ≥ 1.
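The constant-tracking in the remark can be checked mechanically. This small script is my own sanity check (not from the paper): it verifies that with i the smallest integer satisfying 2^i ≥ m + 1, the constant β = 8^{-i} indeed exceeds (2(m + 1))^{-3}.

```python
# Sanity check (not from the paper) of the constants in Remark 4.4:
# with i the smallest integer such that 2**i >= m + 1, we have
# beta = 8**(-i) = (2**(-i))**3 > (2*(m + 1))**(-3), since 2**i < 2*(m + 1).
import math

for m in range(1, 1000):
    i = math.ceil(math.log2(m + 1))            # smallest i with 2**i >= m + 1
    assert 2 ** i >= m + 1 and 2 ** (i - 1) < m + 1
    beta = 8.0 ** (-i)
    assert beta > (2 * (m + 1)) ** (-3.0)
print("Remark 4.4 constant bound verified for m = 1, ..., 999")
```
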
The assumption Ψ⁻_B(ξ) ≤ K/μ(B) from (A1) is somewhat difficult to verify. In the isotropic case, if we assume that ∫_B ϕ(x, |f|) dμ ≤ 1, then we can conclude from Jensen's inequality that

ϕ⁻_B(β ⨍_B |f| dμ) ≤ ⨍_B ϕ⁻_B(|f|) dμ ≤ (1/μ(B)) ∫_B ϕ(x, |f|) dμ ≤ 1/μ(B).  (4.5)
Thus we may apply (A1) with this choice of ξ. This argument is not possible in the anisotropic case, since (Φ⁻_B)_conv is not comparable to Φ⁻_B. Fortunately, the condition of (M) is easier to use.

Proof of Corollary 1.3 By Theorem 1.2, Φ satisfies (M). Since (Φ⁻_B)_conv is convex, it follows by Jensen's inequality that

(Φ⁻_B)_conv(⨍_B f dμ) ≤ ⨍_B (Φ⁻_B)_conv(f) dμ ≤ ⨍_B Φ(x, f) dμ ≤ 1/μ(B).

Therefore we can use (M) with ξ = ⨍_B f dμ and the previous inequality to conclude the proof.

Acknowledgements I would like to thank Iwona Chlebicka for comments and help in explaining [9, Remarks 2.3.14 and 3.7.6], and the Antti and Jenny Wihuri Foundation for financial support.
Funding Open Access funding provided by University of Turku (UTU) including Turku University Central Hospital.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.