1 Introduction

Several different inequalities bear the name of the great French mathematician Henri Poincaré. One of them asserts that, given a connected bounded open subset \(\Omega \) of \(\mathbb {R}^n\) with Lipschitz boundary and \(p\in [1,\infty )\), there is a constant C such that for any \(u\in W^{1,p}(\Omega )\) we have

$$\begin{aligned} ||u-u_{\Omega }||_p \le C||\nabla u||_p, \end{aligned}$$

where we have set

$$\begin{aligned} u_{\Omega }:=\frac{1}{|\Omega |}\int _{\Omega } u(x)\,\mathrm{{d}}x \end{aligned}$$
(0.1)

and \(|\Omega |\) stands for the Lebesgue measure of \(\Omega \).

In this form, it is sometimes called the Poincaré-Wirtinger inequality. This inequality plays an important role in the theory of partial differential equations. It is well known that it no longer holds if we drop the assumption that \(\Omega \) has Lipschitz boundary. It is actually an interesting problem to study the interplay between the geometry of the singularities of the boundary and this result of analysis; see for example Sections 1.1.11 and 6.4 of [6].

We show in this article that this inequality holds on every subanalytic connected bounded open subset of \(\mathbb {R}^n\), with possibly singular boundary. The idea is to use the techniques that the second author recently developed to study \(L^p\) de Rham cohomology on singular subanalytic varieties [9] (see Theorem 1.4 below). We do not impose any extra ad hoc assumption on the Lipschitz geometry of the boundary.

The geometric properties that we need are not really specific to the subanalytic category: they remain valid, with almost no modification in the proofs, in every polynomially bounded o-minimal structure [2, 4], which means that our theorem could be established for domains that are definable in such a structure. The open sets that are definable in these structures seem to constitute a natural class of singular domains over which to extend the theory of partial differential equations; indeed, the case of semi-algebraic domains, on which effective computations are possible, already suffices for most applications.

We start by recalling the definitions and facts of subanalytic geometry that we will need. Since we do not restrict ourselves to star-shaped domains, the proof of the Poincaré inequality will force us to construct homotopies that are not smooth and, indeed, are only continuous almost everywhere. We thus establish two lemmas devoted to the construction of these homotopies, and then state and prove the main theorem (Theorem 2.3). Although these techniques are very specific to our framework, the two lemmas bear some resemblance to tools developed to investigate much wider classes of metric spaces [7].

We denote by \(||\cdot ||\) the Euclidean norm of \(\mathbb {R}^n\), by \(B(x_0,\varepsilon )\) (resp. \(\overline{B}(x_0,\varepsilon )\)) the open (resp. closed) ball of radius \(\varepsilon >0\) centered at \(x_0\in \mathbb {R}^n\) for this norm, and by \(S(x_0,\varepsilon )\) the corresponding sphere.

2 Subanalytic Sets

We now recall some basic facts about subanalytic sets and functions. We refer to [1] (see also [3, 5, 10]) for proofs and related facts.

Definition 1.1

A subset \(E\subset \mathbb {R}^n\) is called semi-analytic if it is locally defined by finitely many real analytic equalities and inequalities. Namely, for each \(a \in \mathbb {R}^n\), there is a neighborhood U of a, and real analytic functions \(f_{ij}, g_{ij}\) on U, where \(i = 1, \ldots , r, j = 1, \ldots , s_i\), such that

$$\begin{aligned} E \cap U = \bigcup _{i=1}^r\bigcap _{j=1} ^{s_i} \{x \in U : g_{ij}(x) > 0 \text{ and } f_{ij}(x) = 0\}. \end{aligned}$$
(1.1)

The drawback of the semi-analytic category is that it is not preserved by analytic morphisms, even proper ones. To overcome this problem, it is convenient to work with a larger class of sets, the subanalytic sets, which are defined as projections of semi-analytic sets:

Definition 1.2

A subset \(E\subset \mathbb {R}^n\) is subanalytic if each point \(x\in \mathbb {R}^n\) has a neighbourhood U such that \(U\cap E\) is the image under the canonical projection \(\pi :\mathbb {R}^n\times \mathbb {R}^k\rightarrow \mathbb {R}^n\) of some relatively compact semi-analytic subset of \(\mathbb {R}^n\times \mathbb {R}^k\) (where k can depend on x).

Definition 1.3

We say that a mapping \(f:A \rightarrow B\), where \(A \subset \mathbb {R}^n\) and \(B\subset \mathbb {R}^m\) are subanalytic, is subanalytic if its graph is a subanalytic subset of \(\mathbb {R}^{n+m}\). In the case \(B=\mathbb {R}\), we say that f is a subanalytic function.

Subanalytic sets constitute a convenient category in which to study the geometry of semi-analytic sets: this category is stable under union, intersection, complement, and Cartesian product, and the closure of a subanalytic set is subanalytic. Moreover, these sets enjoy many finiteness properties. For instance, bounded subanalytic sets always have finitely many connected components, each of them being subanalytic.

It is also well known that they are \(C^0\) triangulable, in the sense that a subanalytic set is always homeomorphic to a simplicial complex. This implies in particular that, locally, a subanalytic set has the topology of a cone over what is generally called its link. This fact is sometimes referred to as the local conic structure of the topology of subanalytic sets. Germs of subanalytic sets are nevertheless not bi-Lipschitz homeomorphic to cones in general, as shown by the simple example of the cusp \(y^2=x^3\) in \(\mathbb {R}^2\). In particular, the cone property often used in functional analysis (see [6] for the definition), which is of a metric nature, may fail. The theorem below, established in [9], nevertheless unravels the Lipschitz properties of the local conic structure. This Lipschitz conic structure was actually derived from techniques developed to construct triangulations that describe the metric properties of singularities [8] (see also the survey [10]).

In the theorem below, \(x_0* (S(x_0,\varepsilon )\cap X)\) stands for the cone over \(S(x_0,\varepsilon )\cap X\) with vertex at \(x_0\).

Theorem 1.4

(Lipschitz Conic Structure) Let \(X\subset \mathbb {R}^n\) be subanalytic and \(x_0\in X \). For \(\varepsilon >0\) small enough, there exists a Lipschitz subanalytic homeomorphism

$$\begin{aligned} H: x_0* (S(x_0,\varepsilon )\cap X)\rightarrow \overline{B}(x_0,\varepsilon ) \cap X, \end{aligned}$$

satisfying \(H_{| S(x_0,\varepsilon )\cap X}=Id\), preserving the distance to \(x_0\), and having the following metric properties:

  1. (i)

    The natural deformation retraction onto \(x_0\)

    $$\begin{aligned} r:[0,1]\times \overline{B}(x_0,\varepsilon )\cap X \rightarrow \overline{B}(x_0,\varepsilon )\cap X, \end{aligned}$$

    defined by

    $$\begin{aligned} r(s,x):=H(sH^{-1}(x)+(1-s)x_0), \end{aligned}$$

    is Lipschitz. More precisely, there is a constant C such that, for every fixed \(s\in [0,1]\), the mapping \(r_s\) defined by \(x\mapsto r_s(x):=r(s,x)\) is Cs-Lipschitz.

  2. (ii)

    For each \(\delta >0\), the restriction of \(H^{-1}\) to \(\{x\in X:\delta \le ||x-x_0||\le \varepsilon \}\) is Lipschitz and, for each \(s\in (0,1]\), the map \(r_s^{-1}: \overline{B}(x_0,s\varepsilon ) \cap X\rightarrow \overline{B}(x_0,\varepsilon ) \cap X\) is Lipschitz.

Remark 1.5

This theorem is actually valid in every polynomially bounded o-minimal structure, the same proof applying after simply replacing the word “subanalytic” with “definable”.

Example 1.6

The difference between the Lipschitz conic structure of a set and the cone property may be unclear to the reader, and we wish to illustrate it by sketching what the mapping H of the above theorem can be in the example of the cusp

$$\begin{aligned} X:=\{(x,y)\in [0,1]\times \mathbb {R}:|y|\le x^2\} \end{aligned}$$

with \(x_0=(0,0)\). For each \((x,y)\in x_0*(S(0,1)\cap X)\), let

$$\begin{aligned} H(x,y):=(t(x,y) x,\,t^2(x,y)xy), \end{aligned}$$

where

$$\begin{aligned} t(x,y)=\left( \frac{2x^2+2y^2}{x^2+\sqrt{x^4+4x^2y^2(x^2+y^2)}}\right) ^{1/2}. \end{aligned}$$

The choice that we made for t(x, y) ensures that this mapping preserves the distance to the origin. Moreover, a straightforward computation yields that on \(x_0*(S(0,1)\cap X)\) we have \(|\partial t(x,y)|\le \frac{C}{x}\) for some positive constant C, from which it follows that H has bounded first order partial derivatives (the function t is bounded away from zero and infinity on \(x_0*(S(0,1)\cap X)\)).
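Let us spell out, for the reader's convenience, the elementary computation behind the first of these claims (this verification is ours and is not carried out above): writing \(T=t(x,y)^2\), the requirement \(||H(x,y)||^2=Tx^2+T^2x^2y^2=x^2+y^2\) is a quadratic equation in T whose positive root is, for \(xy\ne 0\),

$$\begin{aligned} T=\frac{-x^2+\sqrt{x^4+4x^2y^2(x^2+y^2)}}{2x^2y^2}=\frac{2(x^2+y^2)}{x^2+\sqrt{x^4+4x^2y^2(x^2+y^2)}}, \end{aligned}$$

which is precisely the formula defining t(x, y) above (the degenerate cases \(xy=0\) follow by continuity).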

Let us emphasize here that if we drop the condition that H preserves the distance to the origin and simply require \(\frac{|(x,y)|}{C}\le |H(x,y)| \le C|(x,y)| \) for some constant C (which is sufficient for proving the Poincaré inequality), then it suffices to set \(H(x,y)=(x,xy)\), which leads to much easier computations.
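Indeed, a quick check (which we include for completeness; it is not in the text) shows that this simpler map satisfies the two-sided bound with \(C=\sqrt{2}\): on \(x_0*(S(0,1)\cap X)\), whose points are of the form \(t\cdot (a,b)\) with \(a^2+b^2=1\), \(|b|\le a^2\) and \(t\in [0,1]\), we have \(|y|\le x\le 1\), so that

$$\begin{aligned} x^2\le x^2+x^2y^2=|H(x,y)|^2\le x^2+y^2=|(x,y)|^2\le 2x^2, \end{aligned}$$

whence \(\frac{|(x,y)|}{\sqrt{2}}\le |H(x,y)|\le |(x,y)|\).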

3 Poincaré Inequality

For an open subset \(\Omega \subset \mathbb {R}^n\) and \(p\ge 1\) we denote by

$$\begin{aligned} W^{1,p}(\Omega ):= \{u\in L^p(\Omega ),\; \frac{\partial u}{\partial x_i}\in L^p(\Omega )\; \text{ for } \text{ any } \, 1\le i\le n\}, \end{aligned}$$

the Sobolev space, where \(\frac{\partial u}{\partial x_i}\) are the partial derivatives of u in the sense of distributions. This space, equipped with the norm

$$\begin{aligned} ||u||_{p}+\sum _{i=1}^n||\frac{\partial u}{\partial x_i}||_{p}, \end{aligned}$$

is a Banach space. Here, as usual, \(||\cdot ||_p\) stands for the \(L^p\) norm. It is well known that smooth functions are dense in \(W^{1,p}(\Omega )\), in the sense that \({\mathcal {C}}^\infty (\Omega )\cap W^{1,p}(\Omega )\) is dense in \(W^{1,p}(\Omega )\).

The proof of Theorem 2.3 will require two geometric lemmas that we now present, the first being necessary to establish the second one. Let us recall that since subanalytic mappings are differentiable on an open dense subanalytic subset of their domain, they are always differentiable almost everywhere.

Lemma 2.1

Let \(\Omega \subset \mathbb {R}^n\) be a bounded open connected subanalytic subset. There exists a subanalytic map

$$\begin{aligned} h: \Omega \times [0,1]\rightarrow \Omega , \, (x,s)\mapsto h_s(x) \end{aligned}$$

continuous with respect to the second variable and such that

  1. (1)

    \(h_1(\Omega )\subset B(z,\alpha )\subset \Omega \) for some \(\alpha >0\) and \(z\in \Omega \);

  2. (2)

    \(d_xh_t\) is invertible for almost every \((x,t)\in \Omega \times [0,1]\), and moreover there exists \(C>0\) such that whenever \(d_xh_t\) is invertible, we have \(||d_xh_t^{-1}||\le C\).

Proof

The proof relies on the following two observations.

  1. (a)

    In condition (1), possibly composing h with a homothetic transformation, we can choose \(\alpha \) as small as we wish. Moreover, as \(\Omega \) is connected and subanalytic, any two points of \(\Omega \) can be joined by a subanalytic arc \(\gamma \). Therefore, possibly composing with a translation of the ball \(B(z,\alpha )\) along such an arc \(\gamma \), the point z can be replaced with any element of \(\Omega \).

  2. (b)

    It is enough to construct the desired family of maps on a finite (subanalytic) cover of \(\Omega \). The reason is that, if there exist mappings \(h:U \times [0,1]\rightarrow \Omega \) and \(h':U'\times [0,1]\rightarrow \Omega \), where \(U, U'\subset \Omega \) are subanalytic subsets, satisfying (1) and (2) of the lemma, then we can construct a subanalytic mapping \(h'':(U\cup U')\times [0,1]\rightarrow \Omega \) satisfying (1) and (2) as well. Indeed, for instance, it is enough to set

    $$\begin{aligned} h''(x,t)={\left\{ \begin{array}{ll}h(x,t),&{} x\in U\\ h'(x,t),&{} x\in U'{\setminus } U.\end{array}\right. } \end{aligned}$$

    The obtained mapping \(h''\) is then subanalytic and continuous with respect to t (it may fail to be continuous with respect to x but, being subanalytic, it is smooth on an open dense subanalytic set). By induction, given finitely many sets \(U_1,\ldots ,U_k\) and mappings \(h_i:U_i\times [0,1]\rightarrow \Omega \), \(i=1,\ldots ,k\), we can therefore define a mapping \(h:\big (\bigcup _{i=1}^k U_i\big ) \times [0,1]\rightarrow \Omega \) that has the required properties.

As the set \(\overline{\Omega }\) is compact, by observation (b) above, it suffices to define \(h_s(x)\) on \(\overline{B}(x_0,\varepsilon )\cap \Omega \) for each \(x_0\in \overline{\Omega }\), with \(\varepsilon >0\) small. Of course, the result is clear if \(x_0\in \Omega \). Fix now \(x_0\) in the boundary of \(\Omega \). Given \(s\in [0,1)\), define a function \(\nu _s:[0,1]\rightarrow [s,1]\) by

$$\begin{aligned} \nu _s(u)=s+(1-s)u. \end{aligned}$$

For every \(s<1\), \(\nu _s'(u)\equiv 1-s\), and therefore \(\nu _s\) is a bi-Lipschitz homeomorphism with bi-Lipschitz constant \(L_{\nu _s}\) which remains bounded if s stays bounded away from 1. Observe also that, since we can argue separately on each connected component of \(\overline{B}(x_0,\varepsilon )\cap \Omega \) (thanks to (b)), it is no loss of generality to assume that this set is connected.
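For later reference, let us also record the inverse of \(\nu _s\) explicitly (an elementary computation):

$$\begin{aligned} \nu _s^{-1}(v)=\frac{v-s}{1-s},\qquad v\in [s,1], \end{aligned}$$

so that \(\nu _s^{-1}\) is \(\frac{1}{1-s}\)-Lipschitz; in particular, its Lipschitz constant is at most 2 whenever \(s\le \frac{1}{2}\).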

Let r and \(\varepsilon \) be as in the Lipschitz Conic Structure Theorem (applied to \(\Omega \cup \{x_0\}\) at \(x_0\)). Set \(\rho _\varepsilon (x):=\frac{||x-x_0||}{\varepsilon }\). We claim that for every \(s\in [0,1)\) the mapping \(x \mapsto r_{\nu _s^{-1}(\rho _\varepsilon (x))}(x)\) induces a homeomorphism \(\theta _s\) from \(\overline{B}(x_0,\varepsilon )\cap \Omega {\setminus } \overline{B}(x_0,s\varepsilon )\) onto \(\overline{B}(x_0,\varepsilon )\cap \Omega \). Indeed, since the homeomorphism H given by the Lipschitz Conic Structure Theorem preserves the distance to the point \(x_0\), and because for each s the map \(r_s\) is, up to H, a homothetic transformation (i.e., \(H^{-1}\circ r_s\circ H\) is a homothetic transformation), \(r_s\) maps \(S(x_0,t)\cap \Omega \) bijectively onto \(S(x_0,ts)\cap \Omega \) for all \(s\in (0,1]\) and \(t\in [0,\varepsilon ]\). As \(\nu _s\) is a strictly increasing function, it therefore suffices to check that \(x\mapsto r_{\nu _s^{-1}(\rho _\varepsilon (x))}(x)\) maps \(S(x_0,\varepsilon )\cap \Omega \) onto itself and \(S(x_0,s\varepsilon )\cap \Omega \) onto \(\{x_0\}\), that is to say, that \(\nu _s(1)=1\) and \(\nu _s(0)=s\), which is clear from the definition of \(\nu _s\).
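Let us also make the radial behaviour of \(\theta _s\) explicit (this computation is not in the text but follows directly from the definition of r and the fact that H, and hence \(H^{-1}\), preserves the distance to \(x_0\)): for \(t\in [0,1]\) and \(x\in \overline{B}(x_0,\varepsilon )\cap \Omega \),

$$\begin{aligned} ||r(t,x)-x_0||=||tH^{-1}(x)+(1-t)x_0-x_0||=t\,||H^{-1}(x)-x_0||=t\,||x-x_0||, \end{aligned}$$

so that \(||\theta _s(x)-x_0||=\nu _s^{-1}(\rho _\varepsilon (x))\,||x-x_0||\). Writing \(||x-x_0||=u\varepsilon \) with \(u\in [s,1]\), this radius equals \(\frac{u(u-s)}{1-s}\,\varepsilon \), which increases from 0 to \(\varepsilon \) as u runs over [s, 1].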

Hence, we can define for each \(s\in [0,\frac{1}{2}]\) a mapping

$$\begin{aligned} g_s:\overline{B}(x_0,\varepsilon )\cap \Omega \rightarrow \overline{B}(x_0,\varepsilon )\cap \Omega {\setminus } \overline{B}(x_0,s\varepsilon )\end{aligned}$$
(2.3)

by setting for \(y\in \overline{B}(x_0,\varepsilon )\cap \Omega \)

$$\begin{aligned} g_s(y):=\theta ^{-1}_s(y). \end{aligned}$$

Note that \(g^{-1}_s=\theta _s\) is a Lipschitz mapping, with a Lipschitz constant which can be bounded independently of \(s\le \frac{1}{2}\).

By (b) we may assume that \(S(x_0,\varepsilon )\cap \Omega \) is included in a chart of some coordinate system of \(\Omega \). By induction on n, we therefore know that there is a family of mappings

$$\begin{aligned} \tilde{h}_s:S(x_0,\varepsilon ) \cap \Omega \rightarrow S(x_0,\varepsilon ) \cap \Omega , \quad s\in [0,1] \end{aligned}$$

satisfying \(\tilde{h}_1(S(x_0,\varepsilon ) \cap \Omega )\subset B(a,\tilde{\alpha })\) for some \(a\in S(x_0,\varepsilon )\cap \Omega \) and some \(\tilde{\alpha }>0\) small enough, and such that \(d \tilde{h}_s^{-1}\) is bounded uniformly in s. Let us extend this family of mappings trivially (i.e., constantly with respect to the last variable), keeping the same notation, to a family of mappings

$$\begin{aligned} \tilde{h}_s:\big (S(x_0,\varepsilon ) \cap \Omega \big ) \times \Big [\frac{1}{2},1\Big ]\rightarrow \big (S(x_0,\varepsilon ) \cap \Omega \big ) \times \Big [\frac{1}{2},1\Big ], \quad s\in \Big [\frac{1}{2},1\Big ] . \end{aligned}$$

We shall now define the desired mapping h by applying successively g and \(\tilde{h}\). Observe for this purpose that r induces a bi-Lipschitz homeomorphism from \(\big (S(x_0,\varepsilon )\cap \Omega \big ) \times [\frac{1}{2},1]\) onto \( \Omega \cap \overline{B}(x_0,\varepsilon ){\setminus } B(x_0,\frac{\varepsilon }{2})\). Denote by \(\Psi \) its inverse and let \(h_s:\Omega \cap \overline{B}(x_0,\varepsilon )\rightarrow \Omega \cap \overline{B}(x_0,\varepsilon ) \) be defined by

$$\begin{aligned} h_s(x)={\left\{ \begin{array}{ll}g_s(x), &{} s\le \frac{1}{2} \\ r(\tilde{h}_s(\Psi (g_{1/2}(x)))), &{}s\ge \frac{1}{2} \end{array}\right. }. \end{aligned}$$

The inverse of its derivative is bounded by construction. \(\square \)

The above lemma makes it possible for us to prove the following lemma which will be useful to establish Theorem 2.3. We use the notation \(\text{ Jac }(\Gamma )\) to denote the absolute value of the determinant of the Jacobian matrix of a mapping \(\Gamma \).

Lemma 2.2

Let \(\Omega \subset \mathbb {R}^n\) be an open bounded connected subanalytic subset. There exists a subanalytic family of continuous arcs \(\gamma _{x,y}:[0,1]\rightarrow \Omega \), \(x,y\in \Omega \), such that \(\gamma _{x,y}(0)=x\) and \(\gamma _{x,y}(1)=y\) for each such x and y, and \(||\gamma '_{x,y}(s)||\le C\) for all \(s\in (0,1)\) and some constant C independent of x and y. Moreover, there is \(\eta >0\) such that for almost every \(x,y\in \Omega \):

$$\begin{aligned} {\left\{ \begin{array}{ll}\text{ Jac }(\Gamma _{s,y})(x)\ge \eta ,&{} s\ge \frac{1}{2}\\ \text{ Jac }(\tilde{\Gamma }_{s,x})(y)\ge \eta ,&{} s< \frac{1}{2}\end{array}\right. } \end{aligned}$$

where \(\Gamma _{s,y}:\Omega \ni x\mapsto \gamma _{x,y}(s)\in \Omega \) and \(\tilde{\Gamma }_{s,x}:\Omega \ni y\mapsto \gamma _{x,y}(s)\in \Omega \).

Proof

The desired family of arcs will require the following mappings. Let:

  1. (1)

    \(h:\Omega \times [0,1]\rightarrow \Omega \) be a family of mappings as provided by Lemma 2.1.

  2. (2)

    \(g:B(z,\alpha )\times B(z,\alpha )\times [0,1]\rightarrow B(z,\alpha )\) be defined by \((x,y,s)\mapsto (1-s)x+sy\) (where z and \(\alpha \) are provided by Lemma 2.1).

Next, we define \(H:\Omega \times \Omega \times [0,3]\rightarrow \Omega \) by

$$\begin{aligned} H(x,y,s)={\left\{ \begin{array}{ll}h(x,s)&{} 0\le s\le 1\\ g(h(x,1),h(y,1),s-1)&{} 1<s\le 2\\ h(y,3-s)&{} 2<s\le 3. \end{array}\right. } \end{aligned}$$
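Since g is affine in its first two arguments and h does not depend on the other variable on the corresponding pieces, the partial differentials of H can be read off directly from this definition; we record them here (a routine computation, stated at every point where the differentials of h exist, hence almost everywhere), as they are used implicitly in the Jacobian bounds below:

$$\begin{aligned} d_xH(x,y,s)={\left\{ \begin{array}{ll}d_xh_s&{} 0\le s\le 1\\ (2-s)\,d_xh_1&{} 1<s\le 2\\ 0&{} 2<s\le 3, \end{array}\right. } \qquad d_yH(x,y,s)={\left\{ \begin{array}{ll}0&{} 0\le s\le 1\\ (s-1)\,d_yh_1&{} 1<s\le 2\\ d_yh_{3-s}&{} 2<s\le 3. \end{array}\right. } \end{aligned}$$

Taking determinants yields the estimates displayed below.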

Note that, by (2) of Lemma 2.1, we know that there is \(\lambda \) such that \(|\det d_x h_s^{-1}|\le \lambda \) for almost every \((x,s)\in \Omega \times [0,1]\). As a matter of fact:

  • for \(s\le \frac{3}{2}\) and any fixed y, the Jacobian of the map \(H_{s,y}:x\mapsto H(x,y,s)\) can be bounded from below as follows:

    $$\begin{aligned} \text{ Jac }(H_{s,y})(x)={\left\{ \begin{array}{ll}\text{ Jac }(h_s)(x)=\frac{1}{|\det d_x h_s^{-1}| }\ge \frac{1}{\lambda }&{} s\in [0,1]\\ (2-s)^n\text{ Jac }(h_{1})(x)\ge \frac{1}{2^n\lambda }&{} s\in (1,\frac{3}{2}];\end{array}\right. } \end{aligned}$$
  • for \(s> \frac{3}{2}\) and any fixed x we have:

    $$\begin{aligned} \text{ Jac }(H_{s,x})(y)={\left\{ \begin{array}{ll}(s-1)^n \text{ Jac }(h_{1})(y)>\frac{1}{2^n\lambda }&{} s\in (\frac{3}{2},2]\\ \text{ Jac }(h_{3-s})(y)=\frac{1}{|\det d_y h_{3-s}^{-1}| }\ge \frac{1}{\lambda }&{}s\in (2,3].\end{array}\right. } \end{aligned}$$

Hence, the family of arcs \(\gamma _{x,y}:[0,1]\rightarrow \Omega \) defined by \(\gamma _{x,y}(s)=H(x,y,3s)\) fulfills the required properties. \(\square \)

We are now ready to prove our main result:

Theorem 2.3

Let \(\Omega \subset \mathbb {R}^n\) be a bounded connected open subanalytic subset. For each \(p\ge 1\), there exists \(C>0\) such that for any \(u\in W^{1,p}(\Omega )\) the following inequality holds

$$\begin{aligned} ||u-u_{\Omega }||_p \le C||\nabla u||_p, \end{aligned}$$

where \(u_\Omega \) is as in (0.1).

Proof

We may assume that u is smooth, by density of smooth functions in \(W^{1,p}(\Omega )\). For x and y in \(\Omega \), let \(\gamma _{x,y}\) be the arc provided by Lemma 2.2.
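Let us make explicit the step underlying the third equality of the computation below (we state it for convenience): since u is smooth and \(\gamma _{x,y}\) is a continuous arc with values in \(\Omega \), with bounded derivative, joining x to y, the fundamental theorem of calculus gives

$$\begin{aligned} u(y)-u(x)=\int _0^1\langle \nabla u (\gamma _{x,y}(s)),\gamma '_{x,y}(s)\rangle \,\mathrm{{d}}s \end{aligned}$$

(the sign plays no role inside the absolute values). We thus have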

$$\begin{aligned} ||u-u_{\Omega }||_p= & {} \left( \int _{\Omega }\left| u(x)-\frac{1}{|\Omega |}\int _{\Omega } u(y)\mathrm{{d}}y\right| ^p\mathrm{{d}}x\right) ^{1/p} \nonumber \\= & {} \left( \int _{\Omega }\frac{1}{|\Omega |^p}\left| \int _{\Omega } \big (u(x)-u(y)\big )\,\mathrm{{d}}y\right| ^p\mathrm{{d}}x\right) ^{1/p} \nonumber \\= & {} \frac{1}{|\Omega |}\left( \int _{\Omega }\left| \int _{\Omega }\int _0^1\langle \nabla u (\gamma _{x,y}(s)),\gamma '_{x,y}(s)\rangle \mathrm{{d}}s\mathrm{{d}}y\right| ^p\mathrm{{d}}x\right) ^{1/p}\nonumber \\\le & {} \frac{1}{|\Omega |}\left( \int _{\Omega }\left| \int _0^1\int _{\Omega }||\nabla u (\gamma _{x,y}(s))||\cdot ||\gamma '_{x,y}(s)||\mathrm{{d}}y\mathrm{{d}}s\right| ^p\mathrm{{d}}x \right) ^{1/p}\nonumber \\\le & {} \frac{C}{|\Omega |}\left( \int _{\Omega }\left| \int _0^1\int _{\Omega }||\nabla u (\gamma _{x,y}(s))||\mathrm{{d}}y\mathrm{{d}}s\right| ^p\mathrm{{d}}x \right) ^{1/p}. \end{aligned}$$
(2.4)

Thanks to Minkowski’s inequality, we can write

$$\begin{aligned}&\left( \int _{\Omega }\left| \int _{1/2}^1\int _{\Omega }||\nabla u (\gamma _{x,y}(s))||\mathrm{{d}}y\mathrm{{d}}s\right| ^p\mathrm{{d}}x \right) ^{1/p}\\&\quad \le \int _{1/2}^1\int _{\Omega }\left( \int _{\Omega }||\nabla u (\gamma _{x,y}(s))||^p\mathrm{{d}}x\right) ^{1/p}\mathrm{{d}}y\mathrm{{d}}s. \end{aligned}$$
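For the reader's convenience, let us recall the form of Minkowski's integral inequality used here (a standard statement; the notation X, W, F is ours):

$$\begin{aligned} \left( \int _X\left| \int _W F(x,w)\,\mathrm{{d}}w\right| ^p\mathrm{{d}}x\right) ^{1/p}\le \int _W\left( \int _X|F(x,w)|^p\,\mathrm{{d}}x\right) ^{1/p}\mathrm{{d}}w, \end{aligned}$$

applied with \(X=\Omega \), \(W=\Omega \times [\frac{1}{2},1]\), and \(F(x,(y,s))=||\nabla u (\gamma _{x,y}(s))||\).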

Note that, by Lemma 2.2, we have for all \(s\in [1/2,1]\):

$$\begin{aligned} \int _{\Omega }||\nabla u (\gamma _{x,y}(s))||^p \mathrm{{d}}x \le \frac{1}{\eta }\int _{\Omega }||\nabla u (\gamma _{x,y}(s))||^p \text{ Jac }(\Gamma _{s,y})(x)\,\mathrm{{d}}x \le C\frac{||\nabla u||_p^p}{\eta }, \end{aligned}$$

for some constant C, so that, plugging this into the preceding estimate, we get:

$$\begin{aligned} \left( \int _{\Omega }\left| \int _{1/2}^1\int _{\Omega }||\nabla u (\gamma _{x,y}(s))||\mathrm{{d}}y\mathrm{{d}}s\right| ^p\mathrm{{d}}x \right) ^{1/p}\le C\frac{|\Omega |}{\eta ^{1/p}}||\nabla u||_p. \end{aligned}$$
(2.5)
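Let us also indicate one way to account for the constant C in the bound \(\int _{\Omega }||\nabla u (\gamma _{x,y}(s))||^p \text{ Jac }(\Gamma _{s,y})(x)\,\mathrm{{d}}x\le C||\nabla u||_p^p\) used above; this is our reading, written under the assumption (justified by the finiteness properties of subanalytic families recalled in the previous section) that the number of preimages below is uniformly bounded. On the open dense subanalytic set where \(\Gamma _{s,y}\) is a local diffeomorphism, the change of variables \(z=\Gamma _{s,y}(x)\) gives

$$\begin{aligned} \int _{\Omega }||\nabla u (\Gamma _{s,y}(x))||^p\, \text{ Jac }(\Gamma _{s,y})(x)\,\mathrm{{d}}x= \int _{\Omega }||\nabla u (z)||^p\, N_{s,y}(z)\,\mathrm{{d}}z\le N\, ||\nabla u||_p^p, \end{aligned}$$

where \(N_{s,y}(z)\) denotes the number of points of \(\Gamma _{s,y}^{-1}(z)\) at which \(\Gamma _{s,y}\) is a local diffeomorphism (the points where \(\text{ Jac }(\Gamma _{s,y})\) vanishes or is undefined do not contribute), and N is a uniform bound for \(N_{s,y}(z)\).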

Moreover, denoting by \(p'\) the Hölder conjugate of p, thanks to Hölder’s inequality, we can write:

$$\begin{aligned} \left| \int _{\Omega }\int _0^{1/2}||\nabla u (\gamma _{x,y}(s))||\mathrm{{d}}s\mathrm{{d}}y\right| ^p\le \frac{|\Omega |^{p/p'}}{2^{p/p'}} \int _{\Omega }\int _0^{1/2}||\nabla u (\gamma _{x,y}(s))||^p \mathrm{{d}}s \mathrm{{d}}y \\ \le \frac{|\Omega |^{p/p'}}{\eta \, 2^{p/p'}} \int _0^{1/2} \int _{\Omega }||\nabla u (\gamma _{x,y}(s))||^p \text{ Jac }(\tilde{\Gamma }_{s,x})(y)\,\mathrm{{d}}y\,\mathrm{{d}}s \le C \frac{|\Omega |^{p/p'}}{\eta \,2^{p/p'}} ||\nabla u||_p^p\,, \end{aligned}$$

for some constant C. Integrating this last bound, which holds for every \(x\in \Omega \), over \(\Omega \) and taking p-th roots, then splitting the integral over \(s\in [0,1]\) in (2.4) at \(s=\frac{1}{2}\) and invoking (2.5) for the contribution of \(s\in [\frac{1}{2},1]\), we obtain the desired result. \(\square \)