1 Introduction

The Loomis–Whitney inequality in \(\mathbb {R}^{d}\) bounds the volume of a set \(K \subset \mathbb {R}^{d}\) by the areas of its coordinate projections:

$$\begin{aligned} |K| \le \prod _{j = 1}^{d} |{\tilde{\pi }}_{j}(K)|^{\frac{1}{d - 1}}, \end{aligned}$$
(1.1)

where \({\tilde{\pi }}_{j}(x_{1},\ldots ,x_{d}) = (x_{1},\ldots ,x_{j - 1},x_{j + 1},\ldots ,x_{d})\). Here |A| refers to k-dimensional Lebesgue outer measure in \(\mathbb {R}^{k}\) whenever \(A \subset \mathbb {R}^{k}\). The inequality (1.1) is due to Loomis and Whitney [37] from 1949. It is trivial for \(d=2\) and follows by induction, using Hölder’s inequalities, for \(d>2\). The Loomis–Whitney inequality is one of the fundamental inequalities in geometry and has been studied intensively; we refer to [6, 8, 12, 25, 33] and references therein for a historical account and some applications of the Loomis–Whitney inequality.
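
For instance, (1.1) holds with equality for coordinate boxes: if \(K = [0,a_{1}] \times \cdots \times [0,a_{d}]\) with \(a_{1},\ldots ,a_{d}>0\), then

$$\begin{aligned} \prod _{j=1}^{d} |{\tilde{\pi }}_{j}(K)|^{\frac{1}{d-1}} = \prod _{j=1}^{d} \Big ( \prod _{i \ne j} a_{i} \Big )^{\frac{1}{d-1}} = \prod _{i=1}^{d} a_{i} = |K|. \end{aligned}$$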

The present note discusses analogues of (1.1) in Heisenberg groups \(\mathbb {H}^n\). It arose as a complement to manuscript [23] with Tuomas Orponen, in which we reduced the proof of the Loomis–Whitney inequality for \(\mathbb {H}^1\) to an incidence geometric problem in the plane that we resolved using the method of polynomial partitioning. Later we learned that the Loomis–Whitney inequality in the first Heisenberg group—and inequalities of similar type—had already been obtained earlier [18, 19, 32, 38] by a Fourier-analytic approach or the so-called method of refinements, albeit not phrased in terms of Heisenberg projections. In addition to acknowledging previous work, the aim of the present note is to show how the Loomis–Whitney inequality in \(\mathbb {H}^n\) for \(n>1\) can be proven by induction, similarly as the original inequality [37], but now using the version in \(\mathbb {H}^1\) as a base case. Alternatively, one could apply the method of refinements also for \(n>1\), see the related comment in [42, Sect. 4]. The inductive approach in the present note has the advantage of easily yielding certain strong-type endpoint inequalities, see Theorem 1.8, which are not covered by [42] or other literature we are aware of. For applications to geometric Sobolev and isoperimetric inequalities in \(\mathbb {H}^n\), the weak-type inequalities would however be sufficient.

1.1 Heisenberg groups

The nth Heisenberg group \(\mathbb {H}^n\) is the group \((\mathbb {R}^{2n+1},\cdot )\) with

$$\begin{aligned} (x,t) \cdot (x',t') := \left( x + x', t + t' + \tfrac{1}{2}\sum _{j=1}^n x_j x_{n+j}'-x_{n+j}x_j'\right) , \end{aligned}$$
(1.2)

which makes it a nilpotent Lie group of step 2. Here, \((x,t)\) denotes a point in \(\mathbb {R}^{2n+1}\) with \(x=(x_1,\ldots ,x_{2n})\in \mathbb {R}^{2n}\) and \(t\in \mathbb {R}\). For \(x\in \mathbb {R}^{2n}\) and \(k\in \{1,\ldots ,2n\}\), we will use the symbol \({\hat{x}}_k\) to denote either the point in \(\mathbb {R}^{2n}\) that is obtained by replacing the k-th coordinate of x with 0, or the point in \(\mathbb {R}^{2n-1}\) that is obtained by simply deleting the k-th coordinate of x. The meaning should always be clear from the context.
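
For example, if \(n=2\) and \(x=(x_1,x_2,x_3,x_4)\), then \({\hat{x}}_3\) stands either for \((x_1,x_2,0,x_4)\in \mathbb {R}^{4}\) or for \((x_1,x_2,x_4)\in \mathbb {R}^{3}\), depending on the context.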

In geometric measure theory of the sub-Riemannian Heisenberg group [41], an important role is played by Heisenberg projections that are adapted to the group and dilation structure of \(\mathbb {H}^n\) and that map onto homogeneous subgroups of \(\mathbb {H}^n\). We only consider projections associated to the "coordinate" hyperplanes containing the t-axis, so we limit our discussion to those. Let \(\mathbb {W}_{j}\subset \mathbb {H}^n\), \(j=1,\ldots , 2n\), be the (1-codimensional) vertical subgroups of \(\mathbb {H}^n\) given by the hyperplanes \(\{(x,t)\in \mathbb {R}^{2n+1}:\,x_j=0\}\), respectively. Write

$$\begin{aligned} \mathbb {L}_{j} := \{(0,\ldots ,0,x_j,0,\ldots ,0) : x_j \in \mathbb {R}\} \end{aligned}$$

for the span of the j-th standard basis vector. So \(\mathbb {L}_j\) is a complementary (1-dimensional) horizontal subgroup of \(\mathbb {W}_{j}\). This means, for example, that every point \(p \in \mathbb {H}^n\) has a unique decomposition \(p = w_{j} \cdot l_{j}\), where \(w_{j} \in \mathbb {W}_{j}\) and \(l_{j} \in \mathbb {L}_{j}\). These decompositions give rise to the vertical coordinate projections

$$\begin{aligned} p \mapsto w_{j} =: \pi _{j}(p) \in \mathbb {W}_{j},\quad j=1,\ldots ,2n. \end{aligned}$$

Using the group product in (1.2), it is easy to write down explicit expressions for \(\pi _{j}\):

$$\begin{aligned} \pi _{j}(x,t) = ({\hat{x}}_j,t + \tfrac{x_j x_{n+j}}{2}) \quad \text {and} \quad \pi _{n+j}(x,t) = ({\hat{x}}_{n+j},t - \tfrac{x_j x_{n+j}}{2}),\quad j=1,\ldots ,n.\nonumber \\ \end{aligned}$$
(1.3)
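
For instance, for \(n=1\) the formula for \(\pi _1\) can be verified directly: writing \(p=(x_1,x_2,t)\) and \(l_1=(x_1,0,0)\in \mathbb {L}_1\), the product rule (1.2) gives

$$\begin{aligned} w_1 = p \cdot l_1^{-1} = (x_1,x_2,t)\cdot (-x_1,0,0) = \left( 0,x_2,t+\tfrac{x_1x_2}{2}\right) , \end{aligned}$$

in agreement with the expression for \(\pi _1\) in (1.3).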

Readers who are not comfortable with the Heisenberg group can simply identify \(\mathbb {W}_{j}\) with \(\mathbb {R}^{2n}\), and consider the maps

$$\begin{aligned} (x,t)\mapsto (x_1,\ldots ,x_{j-1},x_{j+1},\ldots ,x_{2n},t+ \tfrac{x_j x_{n+j}}{2}),\quad \text {for }j=1,\ldots ,n, \end{aligned}$$

and their analogs for \(j=n+1,\ldots ,2n\), without paying attention to their origin. It is clear that the projections \(\pi _{1},\ldots ,\pi _{2n}\) are smooth, and hence locally Lipschitz with respect to the Euclidean metric in \(\mathbb {R}^{2n+1}\), and they satisfy

$$\begin{aligned} \det \left( D \pi _j(p) D \pi _j(p)^t\right) \ge 1,\quad j=1,\ldots ,2n,\;p\in \mathbb {R}^{2n+1}. \end{aligned}$$
(1.4)

Vertical projections are, in fact, not Lipschitz with respect to the Korányi distance \(d(p,q) = \Vert q^{-1} \cdot p\Vert \) on \(\mathbb {H}^n\). Nonetheless they play a significant role in the geometric measure theory of Heisenberg groups—as do orthogonal projections in \(\mathbb {R}^{d}\)—so they have been actively investigated in recent years, see [2, 3, 15, 22, 34, 35]. The vertical projections are non-linear maps, but their fibres \(\pi _{j}^{-1}\{w\}\) are nevertheless lines. In fact, the fibres of \(\pi _{j}\) are precisely the left translates of the line \(\mathbb {L}_j\), that is, \(\pi _{j}^{-1}\{w\} = w \cdot \mathbb {L}_j\) for \(w \in \mathbb {W}_j\).
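
For instance, for \(n=1\) and \(w=(0,x_2,t)\in \mathbb {W}_1\), the product rule (1.2) gives

$$\begin{aligned} w\cdot (s,0,0) = \left( s,x_2,t-\tfrac{sx_2}{2}\right) \quad \text {and}\quad \pi _1\left( s,x_2,t-\tfrac{sx_2}{2}\right) = (0,x_2,t)=w,\quad s\in \mathbb {R}, \end{aligned}$$

illustrating the identity \(\pi _{1}^{-1}\{w\}=w\cdot \mathbb {L}_1\).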

For subsets of \(\mathbb {H}^n \cong \mathbb {R}^{2n+1}\), the notation \(|\cdot |\) will refer to Lebesgue (outer) measure on \(\mathbb {R}^{2n+1}\), and for subsets of a vertical plane \(\mathbb {R}^{2n} \cong \mathbb {W}_j \subset \mathbb {H}^n\), the notation \(|\cdot |\) will refer to Lebesgue (outer) measure in \(\mathbb {R}^{2n}\). Up to multiplicative constants, they could also be defined as the \((2n+2)\)- and \((2n+1)\)-dimensional Hausdorff measures, respectively, relative to the Korányi metric on \(\mathbb {H}^n\). So, our measures coincide with canonical "intrinsic" objects in \(\mathbb {H}^n\). All integrations on \(\mathbb {H}^n\) or \(\mathbb {W}_j\) will be performed with respect to Lebesgue measures.

1.2 Loomis–Whitney inequalities in \(\mathbb {H}^n\) and their generalizations

We can now state a variant of the Loomis–Whitney inequality (1.1) for subsets of \(\mathbb {H}^n\) in terms of the vertical coordinate projections \(\pi _{j}\). In \(\mathbb {R}^{d}\), the inequality makes a reference to the d orthogonal coordinate projections \({\widetilde{\pi }}_1,\ldots ,{\widetilde{\pi }}_d\). These are, now, best viewed as the projections whose fibres are translates of lines parallel to the coordinate axes. In \(\mathbb {H}^n\), we consider instead the vertical projections \(\pi _{j}\) whose fibres are left translates of \(\mathbb {L}_{j}\), \(j=1,\ldots ,2n\); the precise formulae were stated in (1.3). With this notation, the following variant of the Loomis–Whitney inequality holds:

Theorem 1.5

(Loomis–Whitney inequality in \(\mathbb {H}^n\)) Fix \(n\in \mathbb {N}\). Let \(K \subset \mathbb {R}^{2n+1}\) (or \(K \subset \mathbb {H}^n\)) be an arbitrary set. Then

$$\begin{aligned} |K| \lesssim \prod _{j=1}^{2n} |\pi _{j}(K)|^{\frac{n+1}{n(2n+1)}}. \end{aligned}$$
(1.6)

Here and in the following, the symbol \(\lesssim \) indicates that the inequality holds up to a positive and finite multiplicative constant on the right-hand side. We only have to prove the inequality for Lebesgue measurable sets \(K\subset \mathbb {R}^{2n+1}\). In the general case, we simply pick \(G_{\delta }\)-sets \(K_j\subset \mathbb {R}^{2n}\) with \(K_j\supseteq \pi _j(K)\) and \(|K_j|= |\pi _j(K)|\) for \(j=1,\ldots , 2n\), assuming that the right-hand side of (1.6) is finite. Then \(K':=\bigcap _{j=1}^{2n} \pi _j^{-1}(K_j)\) is a Lebesgue measurable subset of \(\mathbb {R}^{2n+1}\) that contains K and it suffices to apply the Loomis–Whitney inequality to \(K'\).

So we consider only Lebesgue measurable sets K in the following. By the inner regularity of the Lebesgue measure, Theorem 1.5 is then equivalent to the validity of (1.6) for all compact sets \(K\subset \mathbb {R}^{2n+1}\). Since every such set satisfies \( \chi _K(p)\le \prod _{j=1}^{2n}\chi _{\pi _j(K)}(\pi _j(p))\), for all \(p\in \mathbb {R}^{2n+1}\), and on the other hand, \(\bigcap _{j=1}^{2n} \pi _j^{-1}(K_j)\) is compact in \(\mathbb {R}^{2n+1}\) whenever \(K_1,\ldots ,K_{2n}\) are compact subsets of \(\mathbb {R}^{2n}\), Theorem 1.5 is equivalent to the statement that

$$\begin{aligned} \int _{\mathbb {R}^{2n+1}} \prod _{j=1}^{2n}\chi _{K_j}(\pi _j(p))\;dp \lesssim \prod _{j=1}^{2n}|K_j|^{\frac{n+1}{n(2n+1)}} \end{aligned}$$
(1.7)

holds for all compact sets \(K_1,\ldots ,K_{2n}\subset \mathbb {R}^{2n}\). Here we have identified, for \(j=1,\ldots ,2n\), the \(\{x_j=0\}\)-plane in \(\mathbb {R}^{2n+1}\) with \(\mathbb {R}^{2n}\), so that \(\pi _1,\ldots ,\pi _{2n}\) are now mappings from \(\mathbb {R}^{2n+1}\) to \(\mathbb {R}^{2n}\). Using this expression, it is evident that Theorem 1.5 follows from the next result:

Theorem 1.8

Fix \(n\in \mathbb {N}\). Then

$$\begin{aligned} \int _{\mathbb {R}^{2n+1}} \prod _{j=1}^{2n} f_j(\pi _j(p))\,dp \lesssim \prod _{j=1}^{2n} \Vert f_j\Vert _{\frac{n(2n+1)}{n+1}}, \end{aligned}$$
(1.9)

for all nonnegative Lebesgue measurable functions \(f_1,\ldots ,f_{2n}\) on \(\mathbb {R}^{2n}\).

The coarea formula coupled with (1.4) shows that the preimages of Lebesgue null sets in \(\mathbb {R}^{2n}\) under \(\pi _j\) are Lebesgue null sets in \(\mathbb {R}^{2n+1}\), and so \(f_j \circ \pi _j:\mathbb {R}^{2n+1}\rightarrow [0,+\infty ]\) is Lebesgue measurable under the assumptions of the theorem, and the integral on the left-hand side of (1.9) makes sense.

The bilinear case (\(n=1\)) of Theorem 1.8 follows directly from the \(L^{3/2}-L^3\) boundedness of the standard Radon transform in \(\mathbb {R}^2\), and as such was known—by a Fourier-analytic proof—at least since the work of Oberlin and Stein [38]; see Sect. 2. Theorem 1.8 for \(n=1\) is also an instance of [18, Theorem 1.1] (with \(b=(2,2)\) in [18, (1.6)] and \((p_1,p_2)=(3/2,3/2)\) in [18, (1.8)]). The corresponding weak-type bound (Theorem 1.5 for \(n=1\)) was also obtained by Gressman as a special case of the endpoint restricted weak-type estimates in [32, Theorem 2]. Due to the nilpotent group structure of the Heisenberg group and the invariance of the problem under Heisenberg dilations, it is a particularly simple instance of Gressman’s more general theorem. The proofs in [18, 32] used an adaptation of the method of refinements, which was initiated by Christ [16] in order to prove \(L^p-L^q\) bounds for certain convolution-type operators.

To the best of our knowledge, Theorem 1.8 for \(n>1\) has not appeared in the literature before. Stovall proved in [42] similar inequalities for multilinear Radon-like transforms, but (1.9) for \(n>1\) constitutes a strong-type endpoint case that is not covered by her work. In her notation, our setting corresponds to \(b(p)=((n+1)/n,\ldots ,(n+1)/n)\), which is a point on the boundary of the polytope P mentioned in [42, Theorem 3].

Our approach to Theorem 1.8 can be applied to prove something a bit more general, see Theorem 5.16 for the precise statement. The idea is to apply the same inductive procedure and reduce the claim to an \(L^{3/2}\)-\(L^3\) boundedness statement for a certain operator in the plane. In the case of Theorem 1.8, this operator happens to be the standard Radon transform, but other choices are possible as well, for instance convolution by a fixed parabola in \(\mathbb {R}^2\), cf. the use of (5.7) in connection with Example 5.4.

It is easy to see that the exponents in the Heisenberg Loomis–Whitney inequality (1.6) are sharp by considering boxes of the form \([-r,r]^{2n} \times [-r^{2},r^{2}]\). Besides the difference in the definition of the projections \({\tilde{\pi }}_j\) and \(\pi _j\), there is another obvious difference between (the case \(d = 2n+1\) of) the standard Loomis–Whitney inequality (1.1), and (1.6): the former bounds the volume of K in terms of \(2n+1\) projections, and the latter in terms of only 2n projections. One might therefore ask: is there a version of (1.1) for 2n orthogonal projections \(\mathbb {R}^{2n+1} \rightarrow \mathbb {R}^{2n}\)—and does it look like (1.6)? The answer is negative. This is a very special case of [5, Theorem 1.13] (cf. also [20, 42, 43]), but perhaps it is illustrative to see an explicit computation for \(n=1\):

Example 1.10

Consider the two standard orthogonal coordinate projections \(\tilde{\pi }_{1},\tilde{\pi }_{2}\) in \(\mathbb {R}^{3}\) to the \(x_2t\)- and \(x_1t\)-planes. If \(K = [0,1]^{2} \times [0,\delta ]\), then \(|K| = \delta \), and also \(|\tilde{\pi }_{1}(K)| = \delta = |\tilde{\pi }_{2}(K)|\). So, for \(\delta > 0\) small, an inequality of the form

$$\begin{aligned} |K| \lesssim |\tilde{\pi }_{1}(K)|^{\lambda } \cdot |\tilde{\pi }_{2}(K)|^{\lambda } \end{aligned}$$
(1.11)

can only hold for \(\lambda \le \tfrac{1}{2}\). On the other hand, if \(K_{R} = [0,R]^{3}\), with \(R \gg 1\), then \(|K_{R}| = R^{3}\) and \(|\tilde{\pi }_{1}(K_{R})| = R^{2} = |\tilde{\pi }_{2}(K_{R})|\), so (1.11) can only hold for \(\lambda \ge \tfrac{3}{4}\). The latter example naturally does not contradict (1.6): note that \(|\pi _{j}(K_{R})| \sim R^{3}\) for \(R \gg 1\).
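
To complement Example 1.10, here is the computation behind the sharpness claim for the boxes \(K_{r} = [-r,r]^{2n} \times [-r^{2},r^{2}]\): we have \(|K_{r}| \sim r^{2n+2}\), and by (1.3),

$$\begin{aligned} [-r,r]^{2n-1}\times [-r^{2},r^{2}] \subseteq \pi _{j}(K_{r}) \subseteq [-r,r]^{2n-1}\times [-\tfrac{3}{2}r^{2},\tfrac{3}{2}r^{2}], \end{aligned}$$

so \(|\pi _{j}(K_{r})| \sim r^{2n+1}\) for \(j=1,\ldots ,2n\). Hence an inequality of the form \(|K| \lesssim \prod _{j=1}^{2n}|\pi _{j}(K)|^{\alpha _{j}}\), applied to \(K_{r}\) as \(r \rightarrow 0\) and \(r \rightarrow \infty \), forces \((2n+1)\sum _{j=1}^{2n}\alpha _{j} = 2n+2\); the exponents in (1.6) satisfy this with equality, and in particular they cannot be replaced by any smaller common exponent.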

1.3 Gagliardo–Nirenberg–Sobolev inequalities in \(\mathbb {H}^n\)

In \(\mathbb {R}^{d}\), it is well-known that the Loomis–Whitney inequality implies the Gagliardo–Nirenberg–Sobolev inequality

$$\begin{aligned} \Vert f\Vert _{d/(d - 1)} \le \prod _{j = 1}^{d} \Vert \partial _{j}f\Vert _{1}^{1/d}, \qquad f \in C^{1}_{c}(\mathbb {R}^{d}). \end{aligned}$$
(1.12)

Similarly, an \(\mathbb {H}^n\)-analogue of (1.12) can be obtained as a corollary of Theorem 1.5:

Theorem 1.13

Let \(f \in BV(\mathbb {H}^n)\). Then,

$$\begin{aligned} \Vert f\Vert _{\frac{2n+2}{2n+1}} \lesssim \prod _{j=1}^{2n}\Vert X_jf\Vert ^{\frac{1}{2n}}. \end{aligned}$$
(1.14)

Here

$$\begin{aligned} X_j = \partial _{x_j} - \tfrac{x_{n+j}}{2}\partial _{t} \quad \text {and} \quad X_{n+j} = \partial _{x_{n+j}} + \tfrac{x_j}{2}\partial _{t},\quad (j=1,\ldots ,n), \end{aligned}$$
(1.15)

are the standard left-invariant "horizontal" vector fields in \(\mathbb {H}^n\), and \(BV(\mathbb {H}^n)\) refers to functions \(f \in L^{1}(\mathbb {H}^n)\) whose distributional \(X_j\) derivatives are signed Radon measures with finite total variation, denoted \(\Vert \cdot \Vert \).
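
For later reference (cf. the discussion preceding Theorem 3.1), we record that a direct computation based on (1.15) yields the commutator relation

$$\begin{aligned} [X_j,X_{n+j}] = \left[ \partial _{x_j} - \tfrac{x_{n+j}}{2}\partial _{t},\, \partial _{x_{n+j}} + \tfrac{x_j}{2}\partial _{t}\right] = \tfrac{1}{2}\partial _{t}+\tfrac{1}{2}\partial _{t} = \partial _{t},\quad j=1,\ldots ,n, \end{aligned}$$

while all other commutators among \(X_1,\ldots ,X_{2n}\) vanish.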

Theorem 1.13 presents a sharper version of the well-known "geometric" Sobolev inequality

$$\begin{aligned} \Vert f\Vert _{\frac{2n+2}{2n+1}} \lesssim \Vert \nabla _{\mathbb {H}}f\Vert , \qquad f \in BV(\mathbb {H}^n), \end{aligned}$$
(1.16)

proven by Pansu [40] for \(n=1\) as a corollary of the isoperimetric inequality in \(\mathbb {H}^1\). Here \(\nabla _{\mathbb {H}}f = (X_1f,\ldots ,X_{2n}f)\). Versions of geometric Sobolev inequalities and isoperimetric inequalities were obtained in \(\mathbb {H}^n\) and even more general frameworks by several authors, for instance in [14, 30]. A proof of (1.16) for \(n=1\), using the fundamental solution of the sub-Laplace operator \(\bigtriangleup _{\mathbb {H}}\), is discussed in [13, Sect. 5.3], following the approach of [14]. On the other hand, Theorem 1.13 can be derived from Theorem 1.5. This deduction follows a standard argument, but we present it here to highlight the fact that the geometric Sobolev and isoperimetric inequalities in all Heisenberg groups are ultimately based on planar geometry and they can be deduced from boundedness properties of the Radon transform in \(\mathbb {R}^2\).

Theorem 1.8 is related to Brascamp–Lieb inequalities; we direct the reader to, e.g., [4, 5, 10] and the references therein. Euclidean Loomis–Whitney and Brascamp–Lieb inequalities can be proven by the technique of heat flow monotonicity, see [5]. The same approach has been attempted in Carnot groups by Bramati [9], but there seems to be a gap in the argument; this has been confirmed in communication with the author. More precisely, the exponents appearing in the proof of [9, Theorem 3.2.3] have not been chosen consistently. It remains an open problem whether the Loomis–Whitney inequalities in Carnot groups can be obtained by the heat flow approach.

Structure of the paper. In Sect. 2, we explain how Theorems 1.5 and 1.8 for \(n=1\) follow from known \(L^p\) improving properties of the Radon transform in \(\mathbb {R}^2\). In Sect. 3, we deduce Theorems 1.5 and 1.8 for arbitrary \(n>1\) by induction from the corresponding inequalities in \(\mathbb {H}^1\). In Sect. 4, we show how to derive the Gagliardo–Nirenberg–Sobolev inequality, Theorem 1.13, as an application of the Loomis–Whitney inequality in \(\mathbb {H}^n\). Finally, in Sect. 5 we explain how to adapt the approach from Sect. 3 to prove the generalized Loomis–Whitney-type inequality stated in Theorem 5.16.

2 Inequalities in the first Heisenberg group

In this section, we review the proof for the Loomis–Whitney inequality in the first Heisenberg group. For this purpose it is more convenient to use slightly different notation. In particular, points in \(\mathbb {R}^3\) will be denoted by \((x,y,t)\) (instead of \((x,t)=(x_1,x_2,t)\)). The group product of \(\mathbb {H}^1\) then reads in coordinates as follows:

$$\begin{aligned} (x,y,t) \cdot (x',y',t') := (x + x', y + y', t + t' + \tfrac{1}{2}(xy' - yx')). \end{aligned}$$
(2.1)

The vertical Heisenberg projections to the yt- and the xt-plane, respectively, are explicitly given by

$$\begin{aligned} \pi _{1}(x,y,t) = (0,y,t + \tfrac{xy}{2}) \quad \text {and} \quad \pi _{2}(x,y,t) = (x,0,t - \tfrac{xy}{2}). \end{aligned}$$

We recall the statement of Theorems 1.5 and 1.8 for \(n=1\):

Theorem 2.2

(Loomis–Whitney inequality in \(\mathbb {H}^1\)) Let \(K \subset \mathbb {H}^1\) be arbitrary. Then,

$$\begin{aligned} |K| \lesssim |\pi _{1}(K)|^{2/3} \cdot |\pi _{2}(K)|^{2/3}. \end{aligned}$$
(2.3)

Theorem 2.4

For all nonnegative Lebesgue measurable functions \(f_1\) and \(f_2\) on \(\mathbb {R}^2\) it holds that

$$\begin{aligned} \int _{\mathbb {R}^3} f_1(\pi _1(p)) f_2(\pi _2(p))\,dp \lesssim \Vert f_1\Vert _{\frac{3}{2}} \Vert f_2\Vert _{\frac{3}{2}}. \end{aligned}$$
(2.5)

On the left-hand side of (2.3), the notation "\(|\cdot |\)" refers to Lebesgue outer measure on \(\mathbb {R}^{3}\). Similarly, on the right-hand side of (2.3), the notation "\(|\cdot |\)" refers to Lebesgue outer measure on \(\mathbb {R}^{2}\). Clearly, Theorem 2.4 implies Theorem 2.2. We now explain how Theorem 2.4 itself follows directly from known \(L^p\)-improving properties of the standard Radon transform in the plane \(\mathbb {R}^2\).

Let \(S^1\) be the unit sphere in \(\mathbb {R}^2\). For a smooth, compactly supported function f on \(\mathbb {R}^2\), the Radon transform (or X-ray transform) Rf is defined by

$$\begin{aligned} Rf(\sigma ,s):= \int _{\langle z,\sigma \rangle = s} f(z)\,dz,\quad (\sigma ,s)\in S^1\times \mathbb {R}. \end{aligned}$$
(2.6)

Here dz is the 1-dimensional Lebesgue measure on the line \(\{z\in \mathbb {R}^2:\, \langle z,\sigma \rangle = s\}\). Using Fourier analysis (notably Plancherel’s theorem) and complex interpolation, Oberlin and Stein [38] proved that R extends to a bounded operator from \(L^{3/2}(\mathbb {R}^2)\) to \(L^3(S^1 \times \mathbb {R})\). Their result is more general, but this is the only information one needs to deduce Theorem 2.4.

The connection between inequality (2.5) and the Radon transform is illustrated by the formula

$$\begin{aligned} \int _{\mathbb {R}^3} f_1(\pi _1(p))f_2(\pi _2(p))\;dp = \int _{\mathbb {R}^2}R\left( f_1\right) (\sigma (x),s_{x,t})f_2(x,t)\,\frac{d(x,t)}{\sqrt{1+x^2}} \end{aligned}$$
(2.7)

with \(s_{x,t}= t/\sqrt{1+x^2}\) and \(\sigma (x):= \frac{1}{\sqrt{1+x^2}}(-x,1)\) for smooth compactly supported functions \(f_1\) and \(f_2\) on \(\mathbb {R}^2\). The proof of inequality (2.5) using the result in [38] is an instance of a more general phenomenon that relates \(L^p\)-improving properties of averaging operators along curves to inequalities of the form (2.5) with two factors in the integral. The general framework is explained in detail in [20, 9.5. Double fibration formulation] and [43, Sect. 1]. For our purpose it is convenient to work with a linear operator T that yields functions on \(\mathbb {R}^2\), rather than \(S^1\times \mathbb {R}\) as in the case of the Radon transform, so instead of applying directly (2.7), we will pass via an identity of the form

$$\begin{aligned} \int _{\mathbb {R}^3} f_1(\pi _1(p))f_2(\pi _2(p))\;dp = \int _{\mathbb {R}^2}Tf_1(x,t)f_2(x,t)\,d(x,t); \end{aligned}$$

see the proof of Theorem 2.4. For smooth, compactly supported functions f on \(\mathbb {R}^2\), we define

$$\begin{aligned} Tf(x,t):= \int _{\mathbb {R}} f(y,t+xy)\,dy,\quad (x,t)\in \mathbb {R}^2. \end{aligned}$$
(2.8)

The next statement follows immediately from [38] by relating the operator T to the Radon transform R, and we do not claim any novelty for it, see also [17, Sect. 2].

Theorem 2.9

There exists a constant C such that the operator T defined in (2.8) satisfies

$$\begin{aligned} \Vert Tf\Vert _{3}\le C \Vert f\Vert _{\frac{3}{2}} \end{aligned}$$

for all smooth, compactly supported functions f.

Proof

We reduce Theorem 2.9 to a statement about the Radon transform that was proven in [38]. We fix a smooth compactly supported function f and start by writing

$$\begin{aligned} \Vert Tf\Vert _3&=\left[ \int _{\mathbb {R}^2} \left| \int _{\mathbb {R}} f(y,t+xy)\,dy\right| ^{3}d(x,t)\right] ^{\frac{1}{3}} \end{aligned}$$
(2.10)
$$\begin{aligned}&= \left[ \int _{\mathbb {R}^2} \left| \int _{\mathbb {R}} f(y,t+xy)\sqrt{1+x^2}\,dy\right| ^{3}\frac{d(x,t)}{(1+x^2)^{3/2}}\right] ^{\frac{1}{3}}\nonumber \\&=\left[ \int _{\mathbb {R}^2}\left| \int _{\ell _{x,t}} f\,d\lambda _{\ell _{x,t}}\right| ^3\,\frac{d(x,t)}{(1+x^2)^{3/2}}\right] ^{\frac{1}{3}}. \end{aligned}$$
(2.11)

Here \(d\lambda _{\ell _{x,t}}\) denotes the 1-dimensional Lebesgue measure on the line

$$\begin{aligned} \ell _{x,t}&:= \left\{ z\in \mathbb {R}^2:\, \langle z,\sigma (x)\rangle = \frac{t}{\sqrt{1+x^2}} \right\} \\&= \{(y,t+ xy):\; y\in \mathbb {R}\}\text { with }\sigma (x):= \frac{1}{\sqrt{1+x^2}}\begin{pmatrix}-x\\ 1\end{pmatrix}. \end{aligned}$$

Thus, recalling the definition of the Radon transform in (2.6), we obtain from (2.11) that

$$\begin{aligned} \Vert Tf\Vert _3&= \left[ \int _{\mathbb {R}^2}|Rf(\sigma (x),s_{x,t})|^3\,\frac{d(x,t)}{(1+x^2)^{3/2}}\right] ^{\frac{1}{3}}\\&= \left[ \int _{\mathbb {R}}\left( \int _{\mathbb {R}}|Rf(\sigma (x),s_{x,t})|^3\,\frac{dt}{\sqrt{1+x^2}}\right) \frac{dx}{1+x^2}\right] ^{\frac{1}{3}} \end{aligned}$$

with \(s_{x,t}= t/\sqrt{1+x^2}\). Changing variables in the inner integral, and observing that \(x\mapsto \sigma (x)\) parameterizes an arc in \(S^1\), we then deduce that

$$\begin{aligned} \Vert Tf\Vert _3&= \left[ \int _{\mathbb {R}}\left( \int _{\mathbb {R}}|Rf(\sigma (x),s)|^3\,ds\right) |\sigma '(x)|\,dx\right] ^{\frac{1}{3}}\\&\le \left[ \int _{S^1}\left( \int _{\mathbb {R}}|Rf(\sigma ,s)|^3\,ds\right) \,d\sigma \right] ^{\frac{1}{3}}=\Vert Rf\Vert _{3}, \end{aligned}$$

where \(\sigma \) denotes the usual Lebesgue (arc-length) measure on \(S^1\). Now the theorem follows from the inequality \(\Vert Rf\Vert _3 \le C \Vert f\Vert _{\frac{3}{2}}\) for the Radon transform, which was established as a special case of [38, Theorem 1]. \(\square \)

Theorem 2.4 is an immediate corollary of Theorem 2.9.

Proof of Theorem 2.4

It suffices to prove the theorem for nonnegative smooth, compactly supported functions on \(\mathbb {R}^2\). Indeed, if \(f_1\) is an arbitrary nonnegative Lebesgue measurable function on \(\mathbb {R}^2\) (we may assume that \(\Vert f_1\Vert _{3/2}\) and \(\Vert f_2\Vert _{3/2}\) are finite, since otherwise there is nothing to prove), we take a sequence \((f_{1,k})_{k\in \mathbb {N}}\) of nonnegative \(\mathcal {C}^{\infty }_{c}\) functions which converges to \(f_1\) with respect to \(\Vert \cdot \Vert _{3/2}\) and pointwise almost everywhere. In the same way, we approximate a given nonnegative Lebesgue measurable function \(f_2\) by a sequence \((f_{2,k})_{k\in \mathbb {N}}\) of nonnegative \(\mathcal {C}^{\infty }_{c}\) functions. Then, assuming that the theorem holds for nonnegative \(\mathcal {C}^{\infty }_{c}\) functions, we apply it to the pair \(f_{1,k},f_{2,k}\) for every \(k\in \mathbb {N}\). The desired inequality (2.5) for the functions \(f_1,f_2\) follows by Fatou’s lemma, observing that for \(j\in \{1,2\}\), the sequence \((f_{j,k}\circ \pi _j)_{k\in \mathbb {N}}\) converges pointwise almost everywhere to \(f_j\circ \pi _j\) since the preimage of a Lebesgue null set in \(\mathbb {R}^2\) is a Lebesgue null set in \(\mathbb {R}^3\), according to the remark below Theorem 1.8.

We now prove the theorem for nonnegative \(\mathcal {C}^{\infty }_c\) functions on \(\mathbb {R}^2\). Let \(f_1\) and \(f_2\) be such functions and let us prove that they satisfy the inequality (2.5). To this end, we rewrite the left-hand side using the volume-preserving diffeomorphism

$$\begin{aligned} \Phi :\mathbb {R}^3 \rightarrow \mathbb {R}^3,\quad \Phi (x,y,t)=(x,0,t)\cdot (0,y,0)=\left( x,y,t+\tfrac{1}{2}x y\right) . \end{aligned}$$
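
Indeed, \(\Phi \) is a bijection with inverse \((x,y,t)\mapsto (x,y,t-\tfrac{1}{2}xy)\), and its Jacobian matrix

$$\begin{aligned} D\Phi (x,y,t)=\begin{pmatrix} 1 &{} 0 &{} 0\\ 0 &{} 1 &{} 0\\ \tfrac{y}{2} &{} \tfrac{x}{2} &{} 1 \end{pmatrix} \end{aligned}$$

has determinant 1 at every point, so \(\Phi \) preserves Lebesgue measure.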

With this definition,

$$\begin{aligned} \pi _1(\Phi (x,y,t))=(y,t+xy)\quad \text {and}\quad \pi _2(\Phi (x,y,t))=(x,t) \end{aligned}$$

for all \((x,y,t)\in \mathbb {R}^3\). Hence the left-hand side of (2.7) can be expressed as follows:

$$\begin{aligned} \int _{\mathbb {R}^3} f_1(\pi _1(p))f_2(\pi _2(p))\;dp&= \int _{\mathbb {R}^2}\int _{\mathbb {R}}f_1(\pi _1(\Phi (x,y,t)))f_2(\pi _2(\Phi (x,y,t)))\, dy \,d(x,t)\\&= \int _{\mathbb {R}^2}\left( \int _{\mathbb {R}}f_1(y,t+xy)\,dy\right) f_2(x,t)\,d(x,t)\\&= \int _{\mathbb {R}^2}Tf_1(x,t) f_2(x,t)\,d(x,t), \end{aligned}$$

using the linear operator T defined in (2.8). Thus, it follows from Hölder’s inequality with exponents \(p=3\) and \(p'=3/2\), and the mapping property of T stated in Theorem 2.9, that

$$\begin{aligned} \int _{\mathbb {R}^3} f_1(\pi _1(p))f_2(\pi _2(p))\;dp \le \Vert Tf_1\Vert _3 \Vert f_2\Vert _{\frac{3}{2}}\le C\Vert f_1\Vert _{\frac{3}{2}} \Vert f_2\Vert _{\frac{3}{2}}, \end{aligned}$$

as desired. \(\square \)

3 Inequalities in higher-dimensional Heisenberg groups

In this section we prove Theorem 1.8 for arbitrary \(n>1\) by induction, using Theorem 2.4 as a base case. To be precise, instead of directly aiming at inequality (1.9) in Theorem 1.8, we will prove Theorem 3.1 first. Its statement reflects the algebraic structure of the Heisenberg group. In brief, for a fixed \(k\in \{1,\ldots ,n\}\), the different Lebesgue exponents on the right-hand side of (3.2) appear by applying once the commutator relation \([X_k,X_{n+k}]=\partial _t\), where \(X_k\) and \(X_{n+k}\) are defined as in (1.15). This is done by employing the strong-type bound for \(\mathbb {H}^1\) given by Theorem 2.4. After this initial step, the remaining steps of the induction use only standard properties of integrals and elementary estimates by Hölder’s and Minkowski’s integral inequalities.

Theorem 3.1

Fix \(n\in \mathbb {N}\). Then, for all nonnegative Lebesgue measurable functions \(f_1,\ldots ,f_{2n}\) on \(\mathbb {R}^{2n}\), we have

$$\begin{aligned}&\int _{\mathbb {R}^{2n+1}} \prod _{j=1}^{2n} f_j(\pi _j(p))\;dp \lesssim \Vert f_k\Vert _{\frac{2n+1}{2}}\Vert f_{n+k}\Vert _{\frac{2n+1}{2}} \prod _{\begin{array}{c} j=1\\ j\ne k \end{array}}^n\left( \Vert f_j\Vert _{2n+1}\,\Vert f_{n+j}\Vert _{2n+1}\right) ,\nonumber \\&\quad k\in \{1,\ldots ,n\}, \end{aligned}$$
(3.2)

with an implicit constant that may depend on n. For \(n=1\), the right-hand side of (3.2) equals \( \Vert f_1\Vert _{\frac{3}{2}}\Vert f_{2}\Vert _{\frac{3}{2}}\).

The Lebesgue exponents in Theorem 3.1 correspond to vertex points on the boundary of the Newton polytope in [42, Sect. 3] and as such are not covered by [42, Theorem 3]. For instance, the exponents in (3.2) for \(k=1<n\) correspond to \(b(p)=(2,1,\ldots ,1,2,1,\ldots ,1)\) in the notation of [42, (2.5)].

For \(n=1\), the statements of Theorem 3.1 and Theorem 1.8 are equivalent. For \(n>1\), the statement (3.2) of Theorem 3.1 consists of n separate inequalities. Knowing that they all hold for all nonnegative measurable functions, one can deduce the inequality

$$\begin{aligned} \int _{\mathbb {R}^{2n+1}} \prod _{j=1}^{2n} f_j(\pi _j(p))\;dp \lesssim \prod _{j=1}^{2n} \Vert f_j\Vert _{\frac{n(2n+1)}{n+1}} \end{aligned}$$
(3.3)

postulated in Theorem 1.8 by multilinear interpolation, as we will explain below the next remark.

Remark 3.4

If one is only interested in the Loomis–Whitney inequality in \(\mathbb {H}^n\) (Theorem 1.5), and not in the strong-type bound stated in Theorem 1.8, then one can finish the proof without using multilinear interpolation. In particular, all the geometric consequences that we list in Sect. 4 can be obtained by this simpler argument. Indeed, let \(K\subset \mathbb {R}^{2n+1}\) be a compact set. Applying Theorem 3.1 to the characteristic functions \(f_j=\chi _{\pi _j(K)}\) and using the pointwise bound \(\chi _K(p)\le \prod _{j=1}^{2n}\chi _{\pi _j(K)}(\pi _j(p))\), we obtain

$$\begin{aligned} |K| \lesssim |\pi _k(K)|^{\frac{2}{2n+1}} |\pi _{n+k}(K)|^{\frac{2}{2n+1}} \prod _{\begin{array}{c} j=1\\ j\ne k \end{array}}^n\left( |\pi _j(K)|^{\frac{1}{2n+1}}\,|\pi _{n+j}(K)|^{\frac{1}{2n+1}}\right) \end{aligned}$$

for all \(k\in \{1,\ldots ,n\}\). Multiplying these n inequalities together, we obtain

$$\begin{aligned} |K|^n \lesssim \prod _{j=1}^{2n}|\pi _j(K)|^{\frac{n+1}{2n+1}}, \end{aligned}$$

from where the Loomis–Whitney inequality in \(\mathbb {H}^n\) follows by taking the n-th root.
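
The exponent bookkeeping in this last multiplication is as follows: for each fixed \(j\in \{1,\ldots ,2n\}\), the factor \(|\pi _j(K)|\) appears with exponent \(\tfrac{2}{2n+1}\) in exactly one of the n inequalities (namely the one with \(k=j\) if \(j\le n\), and with \(k=j-n\) if \(j>n\)) and with exponent \(\tfrac{1}{2n+1}\) in the remaining \(n-1\), so that in total

$$\begin{aligned} \frac{2}{2n+1}+(n-1)\cdot \frac{1}{2n+1} = \frac{n+1}{2n+1}. \end{aligned}$$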

To prove Theorem 1.8, we will rephrase Theorem 3.1 by duality as bounds of the type

$$\begin{aligned} \Vert T(f_1,\ldots ,f_{2n-1})\Vert _{q_k} \lesssim \prod _{j=1}^{2n-1} \Vert f_j\Vert _{p_{j,k}},\quad \text { for }k=1,\ldots ,n, \end{aligned}$$
(3.5)

for a certain multilinear operator T. Then multilinear interpolation will allow us to deduce the bound

$$\begin{aligned} \Vert T(f_1,\ldots ,f_{2n-1})\Vert _{q}\lesssim \prod _{j=1}^{2n-1} \Vert f_j\Vert _{p_j} \end{aligned}$$
(3.6)

with

$$\begin{aligned} \frac{1}{q} = \frac{1}{n} \sum _{k=1}^n \frac{1}{q_k}, \quad \text {and}\quad \frac{1}{p_j}= \frac{1}{n}\sum _{k=1}^n \frac{1}{p_{j,k}},\quad j=1,\ldots , 2n-1. \end{aligned}$$
(3.7)

Finally, (3.6) will yield (3.3). Before turning to the details, we state the multilinear interpolation theorem which will be applied repeatedly to infer (3.6) from (3.5). It can be proven by the method of complex interpolation [7, 11], and we simply state here a version that is useful for our purposes. The theorem is formulated for finitely simple functions on a measure space. These are functions of the form \(\sum _{i=1}^N c_i \chi _{E_i}\) with the requirement that \(E_i\) is a measurable set of finite measure. In our application, the relevant measure spaces will all be equal to \(\mathbb {R}^{2n}\) with the Lebesgue measure.

Theorem 3.8

(Corollary 7.2.11 in [31]) Assume that T is an m-linear operator on the m-fold product of spaces of finitely simple functions of \(\sigma \)-finite measure spaces \((Y_j,\mu _j)\), and suppose that T takes values in the set of measurable functions of a \(\sigma \)-finite measure space \((Z,\nu )\). Let \(1\le p_{1,j},p_{2,j},q_1,q_2\le \infty \) for all \(1\le j\le m\), \(0<\theta <1\). Suppose that for all finitely simple \(f_j\) on \(Y_j\) one has

$$\begin{aligned} \Vert T(f_1,\ldots ,f_m)\Vert _{q_1}\le M_1 \prod _{j=1}^m\Vert f_j\Vert _{p_{1,j}}\quad \text {and}\quad \Vert T(f_1,\ldots ,f_m)\Vert _{q_2}\le M_2\prod _{j=1}^m \Vert f_j\Vert _{p_{2,j}}. \end{aligned}$$

Then for all finitely simple functions \(f_j\) on \(Y_j\) it holds that

$$\begin{aligned} \Vert T(f_1,\ldots ,f_m)\Vert _q \le M_1^{1-\theta } M_2^{\theta } \prod _{j=1}^m\Vert f_j\Vert _{p_j}, \end{aligned}$$

where

$$\begin{aligned} \frac{1}{q}=\frac{1-\theta }{q_1}+\frac{\theta }{q_2}\quad \text {and}\quad \frac{1}{p_j}=\frac{1-\theta }{p_{1,j}}+\frac{\theta }{p_{2,j}}\quad \text {for }j=1,\ldots ,m. \end{aligned}$$

Recalling Theorem 2.4, it suffices to prove Theorem 1.8 for \(n>1\).

Proof of Theorem 1.8

for \(n>1\) using Theorem 3.1

Assume that the statement of Theorem 3.1 holds for a fixed natural number \(n>1\). Our aim is to verify (3.3) for all nonnegative measurable functions \(f_1,\ldots ,f_{2n}\) on \(\mathbb {R}^{2n}\). The desired inequality can be spelled out as follows:

$$\begin{aligned}&\int _{\mathbb {R}^{2n+1}} \prod _{j=1}^{n} \left( f_j({\hat{x}}_j,t+\tfrac{1}{2}x_j x_{n+j})\,f_{n+j}({\hat{x}}_{n+j},t-\tfrac{1}{2}x_j x_{n+j})\right) \;d(x,t) \nonumber \\&\quad \lesssim \prod _{j=1}^{2n} \Vert f_j\Vert _{\frac{n(2n+1)}{n+1}}. \end{aligned}$$
(3.9)

Here we have used the same notational convention as at the beginning of Sect. 1.1. The coordinate expressions appearing in (3.9) help us to define a multilinear operator T for which a bound of the type (3.6) will yield (3.9). The idea is, essentially, to express the left-hand side of (3.9) as the pairing of \(T(f_1,\ldots ,f_{2n-1})\) with \(f_{2n}\), similarly as we did in the proof of Theorem 2.4. To bring the integral into this form, we first apply the Fubini-Tonelli theorem and then the change of variables \(\tau = t-\frac{1}{2}x_nx_{2n}\) in the t-coordinate so that the left-hand side of (3.3) equals

$$\begin{aligned}&\int _{\mathbb {R}^{2n+1}} \prod _{j=1}^{2n} f_j(\pi _j(p))\;dp \\ {}&\quad = \int _{\mathbb {R}^{2n}} \left[ \int _{\mathbb {R}}f_n({\hat{x}}_n,\tau +x_nx_{2n}) \prod _{\begin{array}{c} j=1\\ j\ne n \end{array}}^{2n-1} f_j(\pi _j(x,\tau +\tfrac{1}{2}x_nx_{2n}))\,dx_{2n}\right] f_{2n}({\hat{x}}_{2n},\tau )\; d({\hat{x}}_{2n},\tau ).\nonumber \end{aligned}$$
(3.10)

This identity motivates the following definition of the operator T. For all finitely simple functions \(g_1,\ldots ,g_{2n-1}\) on \(\mathbb {R}^{2n}\), we define

$$\begin{aligned} T (g_1,\ldots ,g_{2n-1})({\hat{x}}_{2n},\tau ) :=\int _{\mathbb {R}}g_n({\hat{x}}_n,\tau +x_nx_{2n}) \prod _{\begin{array}{c} j=1\\ j\ne n \end{array}}^{2n-1} g_j(\pi _j(x,\tau +\tfrac{1}{2}x_nx_{2n}))\,dx_{2n}. \end{aligned}$$

Using (3.10), and applying Hölder’s inequality with exponents \(n(2n+1)/(n+1)\) and its dual exponent

$$\begin{aligned} q:=\frac{n(2n+1)}{2n^2-1}, \end{aligned}$$
(3.11)

we find for all nonnegative finitely simple functions \(f_1,\ldots ,f_{2n-1}\) and all nonnegative measurable functions \(f_{2n}\) that

$$\begin{aligned} \int _{\mathbb {R}^{2n+1}} \prod _{j=1}^{2n} f_j(\pi _j(p))\;dp&= \int _{\mathbb {R}^{2n}} T(f_1,\ldots ,f_{2n-1})(w) f_{2n}(w)\; dw \\&\le \Vert T(f_1,\ldots ,f_{2n-1})\Vert _q\, \Vert f_{2n}\Vert _{\frac{n(2n+1)}{n+1}}. \end{aligned}$$

Hence, to prove (3.3) for such functions \(f_1,\ldots ,f_{2n-1}\), we aim to show

$$\begin{aligned} \Vert T(f_1,\ldots ,f_{2n-1})\Vert _{q}\lesssim \prod _{j=1}^{2n-1} \Vert f_j\Vert _{p_j},\quad \text {for }p_1=\ldots =p_{2n-1}=\frac{n(2n+1)}{n+1} \end{aligned}$$
(3.12)

and q as in (3.11). Having established (3.3) for nonnegative finitely simple functions, it is straightforward to obtain the inequality also for all nonnegative measurable functions \(f_1,\ldots ,f_{2n}\). Indeed, given nonnegative measurable functions \(f_1,\ldots ,f_{2n}\), we may assume that the right-hand side of (3.3) is finite, and then each \(f_j\) is the pointwise almost everywhere limit of an increasing sequence of nonnegative finitely simple functions that converge to \(f_j\) also in \(\Vert \cdot \Vert _{p_j}\)-norm, and Theorem 1.8 follows, by an analogous argument as described at the beginning of the proof of Theorem 2.4.

Thus it remains to prove the claim (3.12) for nonnegative finitely simple functions. It may be illustrative to compare this with the bound for the linear operator T in Theorem 2.9, which is essentially the case \(n=1\) of what we aim to prove, albeit stated for smooth and compactly supported functions.

For \(n>1\), we will deduce (3.12) from Theorem 3.1. Recall that the left-hand sides of the inequalities in (3.2) can be expressed as pairings of \(T(f_1,\ldots ,f_{2n-1})\) with \(f_{2n}\), according to formula (3.10) and the definition of T, provided that \(f_1,\ldots ,f_{2n-1}\) are nonnegative finitely simple functions and \(f_{2n}\) is an arbitrary nonnegative measurable function. The inequalities stated in (3.2) for \(k=1,\ldots ,n\) then imply by duality that

$$\begin{aligned} \Vert T(f_1,\ldots ,f_{2n-1})\Vert _{q_k} \lesssim \prod _{j=1}^{2n-1} \Vert f_j\Vert _{p_{j,k}},\quad \text { for }k=1,\ldots ,n, \end{aligned}$$
(3.13)

for all nonnegative finitely simple functions \(f_1,\ldots ,f_{2n-1}\) on \(\mathbb {R}^{2n}\), and exponents

$$\begin{aligned} q_k=\left\{ \begin{array}{ll}(2n+1)/(2n),&{}k=1,\ldots ,n-1, \\ (2n+1)/(2n-1),&{}k=n,\end{array}\right. \end{aligned}$$

and

$$\begin{aligned}p_{j,k}= \left\{ \begin{array}{ll}2n+1,&{}k\notin \{j,j+n,j-n\},\\ (2n+1)/2,&{}k\in \{j,j+n,j-n\} \end{array}\right. ,\quad j=1,\ldots ,2n-1,\quad k=1,\ldots ,n. \end{aligned}$$

Here, for every \(k=1,\ldots ,n\), we take the Lebesgue exponent associated to the \(f_{2n}\)-term on the right-hand side of the corresponding inequality in (3.2), and we let \(q_k\) be the dual of that exponent. This explains why the formula for \(q_n\) is different from \(q_1=\ldots =q_{n-1}\). The exponent \(p_{j,k}\) is simply the Lebesgue exponent of the \(f_j\)-term that appears in the k-th inequality of (3.2).

The key property of the exponents in (3.12) and (3.13) is that they are related by convex combinations as indicated in (3.7). Indeed, we compute that

$$\begin{aligned} \frac{1}{q} = \frac{2n^2-1}{n(2n+1)} =\frac{1}{n}\frac{2n-1}{2n+1}+ \sum _{k=1}^{n-1} \frac{1}{n} \frac{2n}{2n+1}= \sum _{k=1}^n \frac{1}{n} \frac{1}{q_k}, \end{aligned}$$

and similarly,

$$\begin{aligned} \frac{1}{p_j}= \frac{n+1}{n(2n+1)} = \frac{1}{n} \frac{2}{2n+1}+\sum _{\begin{array}{c} k=1\\ k\notin \{j,j-n\} \end{array}}^n \frac{1}{n} \frac{1}{2n+1} = \sum _{k=1}^n \frac{1}{n}\frac{1}{p_{j,k}},\quad j=1,\ldots , 2n-1. \end{aligned}$$

To conclude the proof, we apply multilinear interpolation. Theorem 3.8 allows us to interpolate between two operator bounds. In order to deduce (3.12) from the family of n operator bounds stated in (3.13), we apply Theorem 3.8 for \(m=2n-1\) iteratively \((n-1)\) times, noting that (3.13) also holds for finitely simple functions, as required by Theorem 3.8. The specific form of the exponents is not used in this argument; we only need to know that we are dealing with convex combinations as in (3.7), and observe the identity

$$\begin{aligned} \tfrac{1}{k}[a_1+\cdots + a_{k}] = \left( 1-\tfrac{1}{k}\right) \left( \tfrac{1}{k-1}[a_1+\cdots +a_{k-1}]\right) +\tfrac{1}{k} a_k, \end{aligned}$$

for \(k>1\), which allows us to obtain (3.12) by successive interpolation. More precisely, we first apply Theorem 3.8 with \(\theta =\frac{1}{2}\) to the two operator bounds given by (3.13) for \(k=1\) and \(k=2\). Then we apply Theorem 3.8 with \(\theta =\frac{1}{3}\) to interpolate between this newly obtained bound and the operator bound stated in (3.13) for \(k=3\). We continue until, in the last step, we apply the theorem with \(\theta =\frac{1}{n}\) to interpolate between the previously obtained bound and the bound for \(k=n\). This yields (3.12) for all nonnegative finitely simple functions \(f_1,\ldots ,f_{2n-1}\), and thus concludes the proof of the theorem. \(\square \)

Proof of Theorem 3.1

First, by the same reasoning as at the beginning of the proof of Theorem 2.4, it suffices to verify the claim for nonnegative, smooth, and compactly supported functions.

We fix \(n\in \mathbb {N}\), \(n>1\), and assume that the statement of Theorem 3.1 has already been proven for all natural numbers from 1 to \(n-1\). Recall that the base case of this induction is the content of Theorem 2.4. Given nonnegative \(\mathcal {C}^{\infty }_{c}\) functions \(f_1,\ldots ,f_{2n}\), we now aim to show the n inequalities stated in (3.2). We will explain the details only for \(k=1\), as the other inequalities can be proven in exactly the same manner.

Throughout the following computation, points in \(\mathbb {R}^{2n+1}\) will be denoted in coordinates by (xt) with \(x\in \mathbb {R}^{2n}\) and \(t\in \mathbb {R}\). For \(1\le i<2n\), we also write \(\hat{x}_{j_1,\ldots ,j_i}\) to denote the point in \(\mathbb {R}^{2n-i}\) that is obtained by deleting the \(j_1,\ldots ,j_i\)-th coordinates of x.

First, we apply the Fubini-Tonelli theorem and then the transformation \(t\mapsto t-\tfrac{1}{2}x_n x_{2n} = \tau \) in the inner integral:

$$\begin{aligned} I:=&\int _{\mathbb {R}^{2n}} \int _{\mathbb {R}} \prod _{j=1}^{2n} f_j(\pi _j(x,t))\,dt\,dx= \int _{\mathbb {R}^{2n}} \int _{\mathbb {R}} \prod _{j=1}^{2n}f_j(\pi _j(x,\tau +\tfrac{1}{2}x_n x_{2n}))\,d\tau \,dx\\ =&\int _{\mathbb {R}^{2n}} \int _{\mathbb {R}} f_n({\hat{x}}_n,\tau + x_n x_{2n}) f_{2n}({\hat{x}}_{2n}, \tau ) \prod _{\begin{array}{c} j=1\\ j\ne n,2n \end{array}}^{2n}f_j(\pi _j(x,\tau +\tfrac{1}{2}x_n x_{2n}))\,d\tau \,dx\\ =&\int _{\mathbb {R}^{2n}} f_{2n}({\hat{x}}_{2n}, \tau )\left[ \int _{\mathbb {R}} f_n({\hat{x}}_n,\tau + x_n x_{2n}) \prod _{\begin{array}{c} j=1\\ j\ne n,2n \end{array}}^{2n}f_j(\pi _j(x,\tau +\tfrac{1}{2}x_n x_{2n}))\,dx_{2n}\right] \,\\&\qquad d({\hat{x}}_{2n},\tau ). \end{aligned}$$

Here, \(d{\hat{x}}_{2n}= dx_1\ldots dx_{2n-1}\), and similar notation will be used also below. The change of variables was motivated by the observation that

$$\begin{aligned} \pi _{2n}\left( x,\tau +\tfrac{1}{2}x_n x_{2n}\right) =({\hat{x}}_{2n},\tau ), \end{aligned}$$

so that the \(f_{2n}\)-term becomes independent of the 2n-th coordinate of x. Applying Hölder’s inequality with exponents \(p=2n+1\) and \(p'=(2n+1)/2n\), we can split this factor off to obtain \(I\le \Vert f_{2n}\Vert _{2n+1}\, J\) with

$$\begin{aligned} J:=\left[ \int _{\mathbb {R}^{2n}} \left( \int _{\mathbb {R}}f_n({\hat{x}}_n,\tau + x_n x_{2n})\prod _{\begin{array}{c} j=1\\ j\ne n,2n \end{array}}^{2n}f_j(\pi _j(x,\tau +\tfrac{1}{2}x_n x_{2n}))\,d x_{2n}\right) ^{\frac{2n+1}{2n}} \,d({\hat{x}}_{2n},\tau )\right] ^{\frac{2n}{2n+1}}. \end{aligned}$$

The remaining task is to show that

$$\begin{aligned} J \lesssim \Vert f_1\Vert _{\frac{2n+1}{2}}\Vert f_{n+1}\Vert _{\frac{2n+1}{2}}\Vert f_n\Vert _{2n+1}\, \prod _{j=2}^{n-1} \left( \Vert f_j\Vert _{2n+1}\Vert f_{n+j}\Vert _{2n+1}\right) . \end{aligned}$$
(3.14)

We will next extract the \(f_n\)-term from the expression J. First, by Minkowski’s integral inequality, Fubini’s theorem, and the transformation \(\tau \mapsto t= \tau + x_n x_{2n}\), we obtain the bound

$$\begin{aligned} J&\le \int _{\mathbb {R}} \left[ \int _{\mathbb {R}^{2n-1}}\int _{\mathbb {R}} f_n({\hat{x}}_n,t)^{\frac{2n+1}{2n}}\prod _{\begin{array}{c} j=1\\ j\ne n,2n \end{array}}^{2n}f_j(\pi _j(x,t-\tfrac{1}{2}x_n x_{2n}))^{\frac{2n+1}{2n}}\,dt\,d{\hat{x}}_{2n}\right] ^{\frac{2n}{2n+1}}\,dx_{2n}. \end{aligned}$$

After this transformation, the \(f_n\)-term is independent of the n-th coordinate of x. We can separate it from the other factors by applying Hölder’s inequality with exponents \(p=2n\) and \(p'= 2n/(2n-1)\) to the expression inside the square brackets. This yields

$$\begin{aligned} J&\le \int _{\mathbb {R}} F_n\, F_{\Pi }\,dx_{2n} \end{aligned}$$
(3.15)

where

$$\begin{aligned} F_n:= \left[ \int _{\mathbb {R}^{2n-1}} f_n({\hat{x}}_n,t)^{2n+1} \,d({\hat{x}}_{n,2n},t)\right] ^{\frac{1}{2n+1}} \end{aligned}$$

and

$$\begin{aligned} F_{\Pi }:= \left[ \int _{\mathbb {R}^{2n-1}} \left( \int _{\mathbb {R}} \prod _{\begin{array}{c} j=1\\ j\ne n,2n \end{array}}^{2n}f_j(\pi _j(x,t-\tfrac{1}{2}x_n x_{2n}))^{\frac{2n+1}{2n}} \,dx_n\right) ^{\frac{2n}{2n-1}} \,d({\hat{x}}_{n,2n},t) \right] ^{\frac{2n-1}{2n+1}}. \end{aligned}$$

Applying once more Hölder’s inequality, but now to the \(x_{2n}\)-integral in (3.15), and with exponents \(p=2n+1\) and \(p'=(2n+1)/2n\), yields

$$\begin{aligned} J\le \left( \int _{\mathbb {R}} F_n^{2n+1}\,dx_{2n}\right) ^{\frac{1}{2n+1}}\, \left( \int _{\mathbb {R}} F_{\Pi }^{\frac{2n+1}{2n}}\,dx_{2n}\right) ^{\frac{2n}{2n+1}}= J_n \cdot J_{\Pi }. \end{aligned}$$

Here

$$\begin{aligned} J_n := \left( \int _{\mathbb {R}} F_n^{2n+1}\,dx_{2n}\right) ^{\frac{1}{2n+1}} = \left( \int _{\mathbb {R}^{2n}} f_n({\hat{x}}_n,t)^{2n+1} \,d({\hat{x}}_{n},t)\right) ^{\frac{1}{2n+1}} =\Vert f_n\Vert _{2n+1} \end{aligned}$$

is one of the factors in the desired upper bound for J, recall (3.14). Hence, in order to prove (3.14), it suffices to show that

$$\begin{aligned} J_{\Pi }:= \left( \int _{\mathbb {R}} F_{\Pi }^{\frac{2n+1}{2n}}\,dx_{2n}\right) ^{\frac{2n}{2n+1}} \lesssim \Vert f_1\Vert _{\frac{2n+1}{2}}\Vert f_{n+1}\Vert _{\frac{2n+1}{2}} \prod _{j=2}^{n-1}\left( \Vert f_j\Vert _{2n+1}\, \Vert f_{n+j}\Vert _{2n+1}\right) .\qquad \qquad \end{aligned}$$
(3.16)

To do so, we will finally use our induction hypothesis. We start by expanding

$$\begin{aligned} J_{\Pi }= \left( \int _{\mathbb {R}} \left[ \int _{\mathbb {R}^{2n-1}} \left( \int _{\mathbb {R}} \prod _{\begin{array}{c} j=1\\ j\ne n,2n \end{array}}^{2n}f_j(\pi _j(x,t-\tfrac{1}{2}x_n x_{2n}))^{\frac{2n+1}{2n}} \,dx_n\right) ^{\frac{2n}{2n-1}} \,d({\hat{x}}_{n,2n},t) \right] ^{\frac{2n-1}{2n}}\,dx_{2n}\right) ^{\frac{2n}{2n+1}}. \end{aligned}$$

Applying Minkowski’s integral inequality inside the square brackets, then Fubini’s theorem and the transformation \(t\mapsto \tau =t - \frac{1}{2}x_n x_{2n}\) yields

$$\begin{aligned} J_{\Pi }&\le \left( \int _{\mathbb {R}^2} \left[ \int _{\mathbb {R}^{2n-1}}\prod _{\begin{array}{c} j=1\\ j\ne n,2n \end{array}}^{2n}f_j(\pi _j(x,\tau ))^{\frac{2n+1}{2n-1}}\,d({\hat{x}}_{n,2n},\tau )\right] ^{\frac{2n-1}{2n}}\, d(x_n,x_{2n})\right) ^{\frac{2n}{2n+1}}. \end{aligned}$$
(3.17)

We recall that

$$\begin{aligned} f_j(\pi _j(x,\tau )) = \left\{ \begin{array}{ll}f_j({\hat{x}}_j,\tau +\frac{1}{2}x_j x_{n+j}),&{}\text {if }j=1,\ldots ,n-1,\\ f_j({\hat{x}}_j,\tau -\frac{1}{2}x_{j-n} x_{j}),&{}\text {if }j=n+1,\ldots ,2n-1. \end{array} \right. \end{aligned}$$
(3.18)

We will continue the upper bound for \(J_{\Pi }\) by applying the induction hypothesis to the expression inside the square brackets. To do so, we temporarily denote points in \(\mathbb {H}^{n-1}\) in coordinates by \((u,t)=(u_1,\ldots ,u_{2n-2},\tau )\). Here, u is a point in \(\mathbb {R}^{2n-2}\), and similarly as before, \({\hat{u}}_k\) denotes the point in \(\mathbb {R}^{2n-3}\) that is obtained from u by deleting the k-th coordinate.

To write the inner integral on the right-hand side of (3.17) in a form where the induction hypothesis is applicable, we fix \(x_n,x_{2n}\in \mathbb {R}\) and define the functions \(g_{x_n,x_{2n},j}\), \(j\in \{1,\ldots ,2n-2\}\) on \(\mathbb {R}^{2n-2}\):

$$\begin{aligned} g_{x_n,x_{2n},j}({\hat{u}}_j,t) := \left\{ \begin{array}{ll} f_1(u_2,\ldots ,u_{n-1},x_n,u_{n},\ldots ,u_{2n-2},x_{2n},t)^{\frac{2n+1}{2n-1}}, &{} j=1,\\ f_j(u_1,\ldots ,u_{j-1},u_{j+1},\ldots ,u_{n-1},x_n,u_{n},\ldots ,u_{2n-2},x_{2n},t)^{\frac{2n+1}{2n-1}}, &{} 2\le j\le n-2,\\ f_{n-1}(u_1,\ldots ,u_{n-2},x_n,u_{n},\ldots ,u_{2n-2},x_{2n},t)^{\frac{2n+1}{2n-1}}, &{} j=n-1, \end{array}\right. \end{aligned}$$
(3.19)

and

$$\begin{aligned} g_{x_n,x_{2n},j}({\hat{u}}_j,t) := \left\{ \begin{array}{ll} f_{n+1}(u_1,\ldots ,u_{n-1},x_n,u_{n+1},\ldots ,u_{2n-2},x_{2n},t)^{\frac{2n+1}{2n-1}}, &{} j=n,\\ f_{j+1}(u_1,\ldots ,u_{n-1},x_n,u_{n},\ldots ,u_{j-1},u_{j+1},\ldots ,u_{2n-2},x_{2n},t)^{\frac{2n+1}{2n-1}}, &{} n+1\le j\le 2n-3,\\ f_{2n-1}(u_1,\ldots ,u_{n-1},x_n,u_{n},\ldots ,u_{2n-3},x_{2n},t)^{\frac{2n+1}{2n-1}}, &{} j=2n-2. \end{array}\right. \end{aligned}$$
(3.20)

With this notation in place, and recalling (3.18), we can restate (3.17) equivalently as follows

$$\begin{aligned} J_{\Pi }&\le \left( \int _{\mathbb {R}^2} \left[ \int _{\mathbb {R}^{2n-1}}\prod _{j=1}^{2n-2}g_{x_n,x_{2n},j}(\pi _j(u,t))\,d(u,t)\right] ^{\frac{2n-1}{2n}}\, d(x_n,x_{2n})\right) ^{\frac{2n}{2n+1}}, \end{aligned}$$

where \(\pi _j\) now denotes the Heisenberg projection from \(\mathbb {H}^{n-1}\) to the vertical plane \(\{u_j=0\}\) (identified with \(\mathbb {R}^{2n-2}\)). The induction hypothesis applied to the inner integral yields

$$\begin{aligned} J_{\Pi }&\lesssim \left( \int _{\mathbb {R}^2} \left[ \Vert g_{x_n,x_{2n},1}\Vert _{{\frac{2n-1}{2}}}\Vert g_{x_n,x_{2n},n}\Vert _{{\frac{2n-1}{2}}}\prod _{\begin{array}{c} j=1\\ j\notin \{1,n\} \end{array}}^{2n-2} \Vert g_{x_n,x_{2n},j}\Vert _{{2n-1}}\right] ^{\frac{2n-1}{2n}}\, d(x_n,x_{2n})\right) ^{\frac{2n}{2n+1}}. \end{aligned}$$
(3.21)

Next we apply the multilinear Hölder inequality with exponents

$$\begin{aligned} p_1=p_n=n\quad \text {and}\quad p_2=\ldots =p_{n-1}=p_{n+1}=\ldots =p_{2n-2}=2n. \end{aligned}$$

Note that

$$\begin{aligned} \sum _{j=1}^{2n-2} \frac{1}{p_j} = \frac{2}{n}+ \frac{2n-4}{2n} = 1, \end{aligned}$$

as desired. Hence we deduce from (3.21) that

$$\begin{aligned} J_{\Pi } \lesssim&\left( \int _{\mathbb {R}^2} \Vert g_{x_n,x_{2n},1}\Vert _{\frac{2n-1}{2}}^{\frac{2n-1}{2}}d(x_n,x_{2n})\right) ^{\frac{2}{2n+1}} \left( \int _{\mathbb {R}^2} \Vert g_{x_n,x_{2n},n}\Vert _{\frac{2n-1}{2}}^{\frac{2n-1}{2}}d(x_n,x_{2n})\right) ^{\frac{2}{2n+1}} \\&\quad \cdot \prod _{\begin{array}{c} j=1\\ j\notin \{1,n\} \end{array}}^{2n-2} \left( \int _{\mathbb {R}^2} \Vert g_{x_n,x_{2n},j}\Vert _{2n-1}^{2n-1}d(x_n,x_{2n})\right) ^{\frac{1}{2n+1}} . \end{aligned}$$

Recalling the definition of \(g_{x_n,x_{2n},j}\) for \(j=1,\ldots ,2n-2\) as stated in (3.19) and (3.20), we obtain immediately

$$\begin{aligned} J_{\Pi } \lesssim \Vert f_1\Vert _{\frac{2n+1}{2}}\Vert f_{n+1}\Vert _{\frac{2n+1}{2}} \prod _{j=2}^{n-1}\left( \Vert f_j\Vert _{2n+1}\, \Vert f_{n+j}\Vert _{2n+1}\right) , \end{aligned}$$

as desired; recall (3.16). This proves (3.14) and thus establishes the statement about \(k=1\) in the induction claim (3.2) for n. The other values of k are treated analogously, and hence we have established (3.2). \(\square \)

4 Applications of the Loomis–Whitney inequalities in Heisenberg groups

In this section, we derive the Gagliardo–Nirenberg–Sobolev inequality in \(\mathbb {H}^n\), and its variant Theorem 1.13, from the Loomis–Whitney inequality, Theorem 1.5. As a corollary of Theorem 1.13, we obtain the isoperimetric inequality in \(\mathbb {H}^n\) (with a non-optimal constant). At the end of the section, we also show how the Loomis–Whitney inequality can be used, directly, to infer a variant of the isoperimetric inequality, without passing through the Sobolev inequality.

The arguments presented here are very standard ([1, 29, 37]), and we claim no originality. A version of this section, in the context of the first Heisenberg group, was already contained in our joint work [23] with Tuomas Orponen. In his thesis [9], Bramati also gave an argument to deduce the Gagliardo–Nirenberg–Sobolev and isoperimetric inequalities in \(\mathbb {H}^1\) from the strong version of the Loomis–Whitney inequality stated in Theorem 2.4.

We start by recalling the statement of Theorem 1.13:

Theorem 4.1

Let \(f \in BV(\mathbb {H}^n)\). Then,

$$\begin{aligned} \Vert f\Vert _{\frac{2n+2}{2n+1}} \lesssim \prod _{j=1}^{2n} \Vert X_jf\Vert ^{\frac{1}{2n}}. \end{aligned}$$
(4.2)

Recall that \(f \in BV(\mathbb {H}^n)\) if \(f \in L^{1}(\mathbb {H}^n)\), and the distributional derivatives \(X_jf\), \(j=1,\ldots ,2n\), are finite signed Radon measures. Smooth compactly supported functions are dense in \(BV(\mathbb {H}^n)\) in the sense that if \(f \in BV(\mathbb {H}^n)\), then there exists a sequence \(\{\varphi _{k}\}_{k \in \mathbb {N}} \subset C^{\infty }_{c}(\mathbb {R}^{2n+1})\) such that \(\varphi _{k} \rightarrow f\) almost everywhere (and in \(L^{1}(\mathbb {H}^n)\) if desired), and \(\Vert Z\varphi _{k}\Vert \rightarrow \Vert Z f\Vert \) for \(Z \in \{X_1,\ldots ,X_{2n}\}\). For a reference, see [27, Theorem 2.2.2]. With this approximation in hand, it suffices to prove Theorem 4.1 for, say, \(f \in C^{1}_{c}(\mathbb {R}^{2n+1})\). The following lemma contains most of the proof:

Lemma 4.3

Let \(f \in C^{1}_{c}(\mathbb {R}^{2n+1})\), and write

$$\begin{aligned} F_{k} := \{p \in \mathbb {R}^{2n+1} : 2^{k - 1} \le |f(p)| \le 2^{k}\}, \qquad k \in \mathbb {Z}. \end{aligned}$$
(4.4)

Then,

$$\begin{aligned} |\pi _{j}(F_{k})| \le 2^{-k + 2} \int _{F_{k - 1}} |X_jf|,\quad j=1,\ldots ,2n. \end{aligned}$$
(4.5)

Proof

By symmetry, it suffices to prove the inequality in (4.5) for \(j=1,\ldots ,n\). Let \(w = ({\hat{x}}_{j},t) \in \pi _{j}(F_{k})\), denote by \(e_j\) the j-th unit vector, and fix \(p = w \cdot x_j e_j \in F_{k}\) such that \(\pi _{j}(p) = w\). In particular, \(|f(p)| \ge 2^{k - 1}\). Recall the notation \(\mathbb {L}_{j} = \mathrm {span}(e_j)=\{x_j e_j : x_{j} \in \mathbb {R}\}\) and the definition of \({\hat{x}}_j\) given below (1.2). Since f is compactly supported, we may pick another point \(p' \in w \cdot \mathbb {L}_{j}\) such that \(f(p') = 0\). Since |f| is continuous, we infer that there is a non-degenerate line segment I on the line \(w \cdot \mathbb {L}_{j}\) such that \(2^{k - 2} \le |f(q)| \le 2^{k - 1}\) for all \(q \in I\) (hence \(I \subset F_{k - 1}\)), and |f| takes the values \(2^{k - 2}\) and \(2^{k - 1}\), respectively, at the endpoints \(q_{i} = w \cdot x_{j,i}e_j\) of I, \(i \in \{1,2\}\). Define \(\gamma (x_j) := w \cdot x_j e_j = (x,t - \tfrac{1}{2}x_j x_{n+j})\). With this notation,

$$\begin{aligned} 2^{k - 2}&\le |f(q_{1}) - f(q_{2})| \le \int _{x_{j,1}}^{x_{j,2}} |(f \circ \gamma )'(x_j)| \, dx_j \\&\le \int _{\{x_j : (x,t - \frac{1}{2}x_jx_{n+j}) \in F_{k - 1}\}} |X_jf(x,t - \tfrac{1}{2}x_jx_{n+j})| \, dx_j. \end{aligned}$$
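
Here the last inequality uses the inclusion \(I \subset F_{k-1}\) together with the chain rule, which in view of (1.15) gives, for \(j\in \{1,\ldots ,n\}\),

$$\begin{aligned} (f\circ \gamma )'(x_j) = \partial _{x_j}f\left( x,t-\tfrac{1}{2}x_jx_{n+j}\right) - \tfrac{x_{n+j}}{2}\,\partial _{t}f\left( x,t-\tfrac{1}{2}x_jx_{n+j}\right) = X_jf(\gamma (x_j)). \end{aligned}$$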

Writing \(\Phi (x,t) := ({\hat{x}}_{j},t) \cdot x_j e_j = (x,t - \tfrac{1}{2}x_j x_{n+j})\), and integrating over

$$\begin{aligned} (x_1,\ldots ,x_{j-1},x_{j+1},\ldots ,x_{2n},t) = ({\hat{x}}_{j},t) \in \pi _{j}(F_{k}) \subset \mathbb {W}_{j}, \end{aligned}$$

it follows that

$$\begin{aligned} 2^{k - 2}|\pi _{j}(F_{k})|\le \int _{\pi _{j}(F_{k})} \left[ \int _{\{x_{j} : \Phi (x,t) \in F_{k - 1}\}} |X_{j}f(\Phi (x,t))| \, dx_{j} \right] \, d\hat{x}_{j} \, dt. \end{aligned}$$
(4.6)

Finally, we note that \(J_{\Phi } = \mathrm {det\,} D\Phi \equiv 1\). Therefore, using Fubini’s theorem, and performing a change of variables to the right-hand side of (4.6), we see that

$$\begin{aligned} 2^{k - 2}|\pi _{j}(F_{k})|&\le \int _{\{(x,t) \in \mathbb {R}^{2n+1} : \Phi (x,t) \in F_{k - 1}\}} |X_{j}f(\Phi (x,t))| \, d(x,t)\\&= \int _{F_{k - 1}} |X_{j}f(x,t)| \, d(x,t). \end{aligned}$$

This completes the proof. \(\square \)

We are then prepared to prove Theorem 4.1:

Proof of Theorem 4.1

Fix \(f \in C^{1}_{c}(\mathbb {R}^{2n+1})\), and define the sets \(F_{k}\), \(k \in \mathbb {Z}\), as in (4.4). Using first Theorem 1.5, then Lemma 4.3, then the generalized Hölder’s inequality with \(p_1=\ldots =p_{2n}=2n\), and finally the embedding \(\ell ^{1} \hookrightarrow \ell ^{(2n+2)/(2n+1)}\), we estimate as follows:

$$\begin{aligned} \int |f|^{\frac{2n+2}{2n+1}}&\sim \sum _{k \in \mathbb {Z}} 2^{\frac{(2n+2)k}{2n+1}}|F_{k}|\\&\lesssim \sum _{k \in \mathbb {Z}} 2^{\frac{(2n+2)k}{2n+1}}\prod _{j=1}^{2n} |\pi _{j}(F_{k})|^{\frac{n+1}{n(2n+1)}}\\&\lesssim \sum _{k \in \mathbb {Z}} \prod _{j=1}^{2n} \Big (\int _{F_{k - 1}} |X_jf| \Big )^{\frac{n+1}{n(2n+1)}}\\&\lesssim \prod _{j=1}^{2n} \Big [ \sum _{k \in \mathbb {Z}} \Big ( \int _{F_{k - 1}} |X_jf| \Big )^{\frac{2n+2}{2n+1}} \Big ]^{\frac{1}{2n}}\\&\lesssim \prod _{j=1}^{2n} \Big [\sum _{k \in \mathbb {Z}} \int _{F_{k - 1}} |X_jf| \Big ]^{\frac{2n+2}{2n(2n+1)}} \sim \prod _{j=1}^{2n} \Vert X_jf\Vert _{1}^{\frac{2n+2}{2n(2n+1)}}. \end{aligned}$$

Raising both sides to the power \((2n+1)/(2n+2)\) completes the proof. \(\square \)

We conclude the section by discussing isoperimetric inequalities. A measurable set \(E \subset \mathbb {H}^n\) has finite horizontal perimeter if \(\chi _{E} \in BV(\mathbb {H}^n)\). Here \(\chi _{E}\) is the characteristic function of E. Note that our definition of \(BV(\mathbb {H}^n)\) implies, in particular, that \(|E| < \infty \). We follow common practice, and write \(P_{\mathbb {H}}(E) := \Vert \nabla _{\mathbb {H}} \chi _{E}\Vert \). For more information on sets of finite horizontal perimeter, see [26]. Now, applying Theorem 4.1 to \(f = \chi _{E}\), we recover the following isoperimetric inequality (with a non-optimal constant):

Theorem 4.7

There exists a constant \(C>0\) such that

$$\begin{aligned} |E|^{\frac{2n+1}{2n+2}}\le C P_{\mathbb {H}}(E) \end{aligned}$$
(4.8)

for any measurable set \(E \subset \mathbb {H}^n\) of finite horizontal perimeter.

For \(n=1\), this is Pansu’s isoperimetric inequality [40], which was later generalized to \(\mathbb {H}^n \) and beyond [14, 30]. We remark that the a priori assumption \(|E| < \infty \) is critical here; for example, the theorem evidently fails for \(E = \mathbb {H}^n\), for which \(|E| = \infty \) but \(\Vert \nabla _{\mathbb {H}} \chi _{E}\Vert = 0\). We conclude the paper by deducing a weaker version of (4.8) (even) more directly from the Loomis–Whitney inequality. Namely, we claim that

$$\begin{aligned} |E|^{\frac{2n+1}{2n+2}}\le C {\mathcal {H}}^{2n+1}_{d}(\partial E) \end{aligned}$$
(4.9)

for any bounded measurable set \(E \subset \mathbb {H}^n\), where \({\mathcal {H}}^{2n+1}_{d}\) denotes the \((2n+1)\)-dimensional Hausdorff measure on \(\mathbb {H}^n\) with respect to the Korányi distance (or the standard left-invariant sub-Riemannian metric). This inequality is, in general, weaker than (4.8): at least for open sets \(E \subset \mathbb {H}^n\), the property \({\mathcal {H}}^{2n+1}_{d}(\partial E) < \infty \) implies that \(P_{\mathbb {H}}(E) < \infty \), and then \(P_{\mathbb {H}}(E) \lesssim {\mathcal {H}}^{2n+1}_{d}(\partial E)\), see [28, Theorem 4.18]. However, if E is a bounded open set with \(C^{1}\) boundary, then \(\mathcal {H}^{2n+1}_{d}(\partial E) \sim P_{\mathbb {H}}(E)\), see [26, Corollary 7.7].

To prove (4.9), we need the following auxiliary result, see [15, Lemma 3.4] and [24, Remark 4.7]:

Lemma 4.10

Let \(n\in \mathbb {N}\). There exists a constant \(C_n > 0\) such that the following holds. Let \(\mathbb {W}\subset \mathbb {H}^n\) be a vertical subgroup of codimension 1. Then,

$$\begin{aligned} |\pi _{\mathbb {W}}(A)|\le C_n \mathcal {H}^{2n+1}_d(A), \qquad A \subset \mathbb {H}^n. \end{aligned}$$
(4.11)

Proof of (4.9)

Let \(E\subset \mathbb {H}^n\) be bounded and measurable. We first claim that

$$\begin{aligned}&\pi _{j}(E)\subseteq \pi _{j}(\partial E), \quad j=1,\ldots , 2n. \end{aligned}$$
(4.12)

Let \(w\in \pi _{j}(E)\) and consider \(\pi _{j}^{-1}\{w\} = w \cdot \mathbb {L}_{j}\), where \(\mathbb {L}_{j}=\mathrm {span}(e_j)\). By definition, there exists \(x_{j,1}\in \mathbb {R}\) such that \(w \cdot x_{j,1} e_j\in E\), and since E is bounded, there also exists \(x_{j,2}\in \mathbb {R}\) such that \(w \cdot x_{j,2} e_j\in \mathbb {H}^n \, \setminus \, \overline{E}\). Since \(w \cdot \mathbb {L}_{j}\) is connected, there finally exists \(x_{j,3}\in \mathbb {R}\) such that \(w \cdot x_{j,3} e_j \in \partial E\), which immediately implies (4.12). Using Theorem 1.5 and (4.12), we get

$$\begin{aligned} |E| \lesssim \prod _{j=1}^{2n}|\pi _j(\partial E)|^{\frac{n+1}{n(2n+1)}}. \end{aligned}$$

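Each factor on the right-hand side is controlled by Lemma 4.10, applied with \(A = \partial E\) and \(\mathbb {W} = \mathbb {W}_j\). Since

$$\begin{aligned} \prod _{j=1}^{2n}\mathcal {H}^{2n+1}_{d}(\partial E)^{\frac{n+1}{n(2n+1)}} = \mathcal {H}^{2n+1}_{d}(\partial E)^{\frac{2n+2}{2n+1}}, \end{aligned}$$

we infer that \(|E| \lesssim \mathcal {H}^{2n+1}_{d}(\partial E)^{\frac{2n+2}{2n+1}}\).
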
Raising both sides to the power \((2n+1)/(2n+2)\), we obtain the isoperimetric inequality (4.9). \(\square \)

5 Generalized Loomis–Whitney inequalities by induction

The approach described in Sect. 3 can be used to prove something a bit more general: namely, the vertical Heisenberg projections \(\pi _1,\ldots ,\pi _{2n}\) can be replaced by projection-type mappings of the form

$$\begin{aligned} \rho _j:\mathbb {R}^{2n+1}\rightarrow \mathbb {R}^{2n},\quad \rho _j(x,t)=( {\hat{x}}_j, t + h_j(x)),\quad j=1,\ldots ,2n, \end{aligned}$$
(5.1)

for suitable \(\mathcal {C}^1\) maps \(h_j:\mathbb {R}^{2n}\rightarrow \mathbb {R}\). The precise condition is stated in Definition 5.2 and it is tailored so that a Loomis–Whitney-type inequality for \(\rho _1,\ldots ,\rho _{2n}\) can be established based on the \(L^{3/2}\)-\(L^3\) boundedness of a linear operator in the plane, analogously to what we did for \(\pi _1,\ldots ,\pi _{2n}\) and the Radon transform in Sects. 2–3. By a simple change-of-variables, one can generalize the setting even slightly further, see Remark 5.18.

For arbitrary \(\mathcal {C}^1\) functions \(h_j\), the mappings \(\rho _j\) defined in (5.1) satisfy a condition analogous to (1.4) for \(\pi _j\), which ensures by the coarea formula that the preimage of a Lebesgue null set in \(\mathbb {R}^{2n}\) under \(\rho _j\) is a Lebesgue null set in \(\mathbb {R}^{2n+1}\). More precisely, we have

$$\begin{aligned} \det (D\rho _j \, D\rho _j^{t})= \det \begin{pmatrix} 1 & & & \\ & \ddots & & \nabla _{{\hat{x}}_j}h_j \\ & & 1 & \\ & (\nabla _{{\hat{x}}_j}h_j)^{t} & & 1+|\nabla h_j|^2 \end{pmatrix} = 1+ (\partial _{x_j} h_j)^2. \end{aligned}$$

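In more detail, writing \(v:=\nabla _{{\hat{x}}_j}h_j\in \mathbb {R}^{2n-1}\) and taking the Schur complement of the upper-left identity block, the determinant evaluates to

$$\begin{aligned} \det \begin{pmatrix} I_{2n-1} & v\\ v^{t} & 1+|\nabla h_j|^2 \end{pmatrix} = 1+|\nabla h_j|^{2} - |v|^{2} = 1+(\partial _{x_j}h_j)^{2}, \end{aligned}$$

since \(|\nabla h_j|^{2}=|\nabla _{{\hat{x}}_j}h_j|^{2}+(\partial _{x_j}h_j)^{2}\).
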
By the reasoning below Theorem 1.8 it follows that \(f\circ \rho _j\) is Lebesgue measurable on \(\mathbb {R}^{2n+1}\) if f is Lebesgue measurable on \(\mathbb {R}^{2n}\).

Definition 5.2

(\(L^{3/2}\)-\(L^3\) property) We say that a family \(\{h_1,\ldots ,h_{2n}\}\) of \(\mathcal {C}^1\) functions \(h_j\) on \(\mathbb {R}^{2n}\) has the \(L^{3/2}\)-\(L^3\) property if there exists a constant \(C<\infty \) such that the following holds for all \(k=1,\ldots ,n\):

  • If \(n>1\), then for every \({\hat{x}}_{k,n+k}\in \mathbb {R}^{2n-2}\), the operator \(T_{k,{\hat{x}}_{k,n+k}}\), defined by

    $$\begin{aligned} T_{k,{\hat{x}}_{k,n+k}}f(x_k,t):= \int _{\mathbb {R}} f(x_{n+k},t+h_k(x)-h_{n+k}(x))\, dx_{n+k},\quad f\in \mathcal {C}^{\infty }_c(\mathbb {R}^2) \end{aligned}$$

    satisfies

    $$\begin{aligned} \Vert T_{k,{\hat{x}}_{k,n+k}}f\Vert _3\le C \Vert f\Vert _{\frac{3}{2}},\quad f\in \mathcal {C}^{\infty }_c(\mathbb {R}^2). \end{aligned}$$

    Here, the coordinates of \({\hat{x}}_{k,n+k}\in \mathbb {R}^{2n-2}\) are \(x_i\), \(i\in \{1,\ldots ,2n\}\setminus \{k,n+k\}\), and \(x=(x_1,\ldots ,x_k,\ldots ,x_{n+k},\ldots ,x_{2n})\).

  • If \(n=1\), then the operator \(T_1=T\), defined by

    $$\begin{aligned} Tf(x_1,t):=\int _{\mathbb {R}} f(x_2,t+h_1(x_1,x_2)-h_2(x_1,x_2))dx_2,\quad f\in \mathcal {C}^{\infty }_c(\mathbb {R}^2) \end{aligned}$$

    satisfies

    $$\begin{aligned} \Vert Tf\Vert _3\le C \Vert f\Vert _{\frac{3}{2}},\quad f\in \mathcal {C}^{\infty }_c(\mathbb {R}^2). \end{aligned}$$

We next give examples of functions \(\{h_1,\ldots ,h_{2n}\}\) with the properties stated in Definition 5.2. Essentially, for \(k=1,\ldots ,n\), we take \(h_k\) and \(h_{n+k}\) to be polynomials of second degree as functions of \(x_k\) and \(x_{n+k}\) so that Theorem 5.5 is applicable. This class of examples includes the functions

$$\begin{aligned} h_j(x)=\left\{ \begin{array}{ll}\tfrac{1}{2}x_j x_{n+j},&{}j=1,\ldots ,n, \\ -\tfrac{1}{2} x_{j-n}x_j,&{}j=n+1,\ldots ,2n.\end{array}\right. \end{aligned}$$
(5.3)

associated to the standard Heisenberg vertical coordinate projections \(\rho _j=\pi _j\), \(j=1,\ldots ,2n\).

Example 5.4

Fix \(n>1\), \(b_j\in \mathbb {R}\) and \(c_{j,a}\in \mathcal {C}^1(\mathbb {R}^{2n-2})\) for \(j=1,\ldots ,2n\) and multi-indices \(a\in \mathcal {A}:=\{(0,0),(1,0),(0,1),(2,0),(0,2)\}\). For \(k=1,\ldots ,n\), we define

$$\begin{aligned} h_k(x):= b_k\, x_k x_{n+k} + \sum _{a=(a_1,a_2)\in \mathcal {A}}c_{k,a}({\hat{x}}_{k,n+k}) x_k^{a_1} x_{n+k}^{a_2} \end{aligned}$$

and

$$\begin{aligned} h_{n+k}(x):= b_{n+k}\, x_k x_{n+k} + \sum _{a=(a_1,a_2)\in \mathcal {A}}c_{n+k,a}({\hat{x}}_{k,n+k})x_k^{a_1} x_{n+k}^{a_2}. \end{aligned}$$

Then the operators appearing in Definition 5.2 are given by

$$\begin{aligned} T_{k,{\hat{x}}_{k,n+k}}f(x_k,t):= \int _{\mathbb {R}} f\left( x_{n+k},t+ H_{k,n+k}(x)\right) \, dx_{n+k},\quad f\in \mathcal {C}^{\infty }_c(\mathbb {R}^2), \end{aligned}$$

where

$$\begin{aligned} H_{k,n+k}(x):= (b_k-b_{n+k})x_k x_{n+k}+\sum _{a=(a_1,a_2)\in \mathcal {A}}\left[ c_{k,a}({\hat{x}}_{k,n+k})-c_{n+k,a}({\hat{x}}_{k,n+k})\right] x_k^{a_1} x_{n+k}^{a_2}. \end{aligned}$$

If \( b_k-b_{n+k}\ne 0\) for \(k=1,\ldots ,n\), then \(\{h_1,\ldots ,h_{2n}\}\) has the \(L^{3/2}\)-\(L^3\) property by Theorem 5.5 with constant \(C \lesssim (\min _{k=1,\ldots ,n}|b_k-b_{n+k}|)^{-1/3}\). This is the case in particular for \(\{h_1,\ldots ,h_{2n}\}\) as in (5.3). Hence, Theorems 3.1 and 1.8 are special cases of Theorems 5.8 and 5.16 below.
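For illustration, in the case of the functions (5.3) we have \(b_k=\tfrac{1}{2}\), \(b_{n+k}=-\tfrac{1}{2}\), and \(c_{j,a}\equiv 0\), so that \(H_{k,n+k}(x)=x_kx_{n+k}\) and

$$\begin{aligned} T_{k,{\hat{x}}_{k,n+k}}f(x_k,t)= \int _{\mathbb {R}} f(x_{n+k},t+x_kx_{n+k})\,dx_{n+k},\quad f\in \mathcal {C}^{\infty }_c(\mathbb {R}^2), \end{aligned}$$

which is the operator S of Theorem 5.5 with \(\beta =1\) and \(\alpha =\gamma =\delta =\epsilon =\kappa =0\); in particular, in this case the \(L^{3/2}\)-\(L^3\) property holds with an absolute constant.
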

We claim no originality for Theorem 5.5, which was applied in the previous example. It is an instance of much more general results available in the literature. We merely explain here how the statement follows from the \(L^{3/2}\)-\(L^3\) improving property of (i) the Radon transform and (ii) convolution with a measure on a parabola. Even though (i) involves integration over lines with different slopes, and (ii) concerns convolution with a fixed parabola, both operators fit in the same framework [43, p. 606].

Theorem 5.5

Let \(\alpha ,\beta ,\gamma ,\delta ,\epsilon ,\kappa \in \mathbb {R}\). If \(\beta \ne 0\), then the operator S, defined by

$$\begin{aligned} Sf(x,t)=\int _{\mathbb {R}} f(y,t+ \alpha y^2 + \beta xy + \gamma x^2 + \delta x + \epsilon y + \kappa )\,dy,\quad f\in \mathcal {C}^{\infty }_c(\mathbb {R}^2), \end{aligned}$$

satisfies

$$\begin{aligned} \Vert Sf\Vert _{3}\lesssim |\beta |^{-1/3} \Vert f\Vert _{\frac{3}{2}},\quad f\in \mathcal {C}^{\infty }_c(\mathbb {R}^2). \end{aligned}$$
(5.6)

Proof

We divide the proof into two cases: \(\alpha =0\) and \(\alpha \ne 0\). In the first case, we apply the \(L^{3/2}\)-\(L^3\) improving property of the Radon transform [38] (in the form of Theorem 2.9). In the second case, we reduce matters to the \(L^{3/2}\)-\(L^3\) improving property of the convolution operator with a measure on a parabola [21, 36, 39].

First, if \(\alpha =0\), then, for \(f\in \mathcal {C}^{\infty }_c(\mathbb {R}^2)\), we relate Sf to the operator T from Theorem 2.9 as follows:

$$\begin{aligned} Sf(x,t)&= \int _{\mathbb {R}} f(y,t+ [\beta x+\epsilon ]y + [\gamma x^2 + \delta x + \kappa ])\,dy\\&= Tf(\beta x+ \epsilon ,t+\gamma x^2 +\delta x +\kappa ). \end{aligned}$$

Thus

$$\begin{aligned} \Vert Sf\Vert _{3}&= \left( \int _{\mathbb {R}^2} |Tf(\beta x+ \epsilon ,t+\gamma x^2 +\delta x +\kappa )|^3\,d(x,t)\right) ^{\frac{1}{3}}\\&= |\beta |^{-1/3}\left( \int _{\mathbb {R}^2} |Tf(\xi ,\tau )|^3\,d(\xi ,\tau )\right) ^{\frac{1}{3}}\\ {}&= |\beta |^{-1/3} \Vert Tf\Vert _3, \end{aligned}$$

where the second equality follows from the change of variables \((\xi ,\tau )=(\beta x+\epsilon ,t+\gamma x^2 +\delta x +\kappa )\), whose Jacobian determinant equals \(\beta \), and hence Theorem 2.9 implies (5.6) in that case.

If \(\alpha \ne 0\), we instead reduce matters to [36], or the more general [21, Theorem 1]. A special case of that theorem says that

$$\begin{aligned} \Vert \mu _{\alpha } *f\Vert _3 \lesssim \Vert f\Vert _{\frac{3}{2}},\quad f\in L^{\frac{3}{2}}(\mathbb {R}^2), \end{aligned}$$
(5.7)

with implicit constant independent of \(\alpha \), where

$$\begin{aligned} \mu _{\alpha } *f (x,t):= \int _{\mathbb {R}} f((x,t)-(y,\alpha y^2))|\alpha |^{1/3}\,dy, \end{aligned}$$

see also [39, Theorem 1]. To employ this result, we aim to relate Sf for \(f\in \mathcal {C}^{\infty }_c(\mathbb {R}^2)\) to \(\mu _{-\alpha } *f\). To this end, we complete the square in one of the expressions that appear in the definition of Sf, namely

$$\begin{aligned}&t+ \alpha y^2 + \beta xy + \gamma x^2 + \delta x + \epsilon y + \kappa \\&\quad = \alpha \left[ y + \tfrac{1}{2}\left( \tfrac{\beta }{\alpha }x+\tfrac{\epsilon }{\alpha }\right) \right] ^2+ \left[ -\tfrac{\alpha }{4}\left( \tfrac{\beta }{\alpha }x+ \tfrac{\epsilon }{\alpha }\right) ^2 +\gamma x^2+ \delta x + \kappa + t\right] . \end{aligned}$$

Hence, by the change-of-variables \(y\mapsto \eta = - [y + \tfrac{1}{2}\left( \tfrac{\beta }{\alpha }x+\tfrac{\epsilon }{\alpha }\right) ]\), we obtain

$$\begin{aligned}&Sf(x,t)\\&\quad = \int _{\mathbb {R}} f\left( y, \alpha \left[ y + \tfrac{1}{2}\left( \tfrac{\beta }{\alpha }x+\tfrac{\epsilon }{\alpha }\right) \right] ^2+\left[ -\tfrac{\alpha }{4}\left( \tfrac{\beta }{\alpha }x+ \tfrac{\epsilon }{\alpha }\right) ^2 + \gamma x^2+\delta x + \kappa + t\right] \right) \, dy\\&\quad =\int _{\mathbb {R}} f\left( - \tfrac{1}{2}\left( \tfrac{\beta }{\alpha }x+\tfrac{\epsilon }{\alpha }\right) -\eta , \left[ -\tfrac{\alpha }{4}\left( \tfrac{\beta }{\alpha }x+ \tfrac{\epsilon }{\alpha }\right) ^2 + \gamma x^2+\delta x + \kappa + t\right] -(-\alpha )\eta ^2\right) \, d\eta \\&\quad =|\alpha |^{-1/3} \mu _{-\alpha }*f\left( \Phi (x,t)\right) ,\end{aligned}$$

with

$$\begin{aligned} \Phi (x,t):= \left( - \tfrac{1}{2}\left( \tfrac{\beta }{\alpha }x+\tfrac{\epsilon }{\alpha }\right) , -\tfrac{\alpha }{4}\left( \tfrac{\beta }{\alpha }x+ \tfrac{\epsilon }{\alpha }\right) ^2 + \gamma x^2 +\delta x + \kappa + t\right) . \end{aligned}$$

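Note that \(\Phi =(\Phi _1,\Phi _2)\) is a triangular change of variables: the first component does not depend on t, and \(\partial _t\Phi _2 = 1\), so that

$$\begin{aligned} D\Phi (x,t)=\begin{pmatrix} -\tfrac{\beta }{2\alpha } & 0\\ \partial _x\Phi _2(x,t) & 1\end{pmatrix}, \end{aligned}$$

where the precise form of \(\partial _x\Phi _2\) plays no role for the determinant.
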
Since

$$\begin{aligned} |\det D\Phi (x,t)|=\left| \beta \right| \,\left| 2\alpha \right| ^{-1}, \end{aligned}$$

we find that

$$\begin{aligned} \Vert Sf\Vert _3 =|\alpha |^{-1/3} \Vert \left( \mu _{-\alpha }*f \right) \circ \Phi \Vert _3 =|\alpha |^{-1/3} \left| \beta \right| ^{-1/3} \,\left| 2\alpha \right| ^{1/3} \Vert \mu _{-\alpha }*f \Vert _3. \end{aligned}$$

Thus (5.6) in the case \(\alpha \ne 0\) follows from (5.7), since \(|\alpha |^{-1/3}\,|2\alpha |^{1/3}=2^{1/3}\). \(\square \)

We next prove a generalization of Theorem 3.1 that applies in particular to mappings \(\rho _1,\ldots ,\rho _{2n}\) as in (5.1) for \(h_1,\ldots ,h_{2n}\) as in Example 5.4.

Theorem 5.8

Let \(n\in \mathbb {N}\). Assume that \(\{h_1,\ldots ,h_{2n}\}\) is a family of \(\mathcal {C}^1\) functions on \(\mathbb {R}^{2n}\) with the \(L^{3/2}\)-\(L^3\) property and define

$$\begin{aligned} \rho _j:\mathbb {R}^{2n+1}\rightarrow \mathbb {R}^{2n},\quad \rho _j(x,t)=\left( {\hat{x}}_j, t + h_j(x)\right) ,\quad j=1,\ldots ,2n. \end{aligned}$$

Then, for all nonnegative Lebesgue measurable functions \(f_1,\ldots ,f_{2n}\) on \(\mathbb {R}^{2n}\), we have

$$\begin{aligned}&\int _{\mathbb {R}^{2n+1}} \prod _{j=1}^{2n} f_j(\rho _j(p))\;dp \lesssim \Vert f_k\Vert _{\frac{2n+1}{2}}\Vert f_{n+k}\Vert _{\frac{2n+1}{2}} \prod _{\begin{array}{c} j=1\\ j\ne k \end{array}}^n\left( \Vert f_j\Vert _{2n+1}\,\Vert f_{n+j}\Vert _{2n+1}\right) ,\nonumber \\&\quad k\in \{1,\ldots ,n\}, \end{aligned}$$
(5.9)

with an implicit constant that may depend on n and the boundedness constant C associated to the family \(\{h_1,\ldots ,h_{2n}\}\). If \(n=1\), then (5.9) reads

$$\begin{aligned} \int _{\mathbb {R}^{3}} f_1(\rho _1(p))f_2(\rho _2(p))\;dp \lesssim \Vert f_1\Vert _{\frac{3}{2}}\Vert f_{2}\Vert _{\frac{3}{2}}. \end{aligned}$$

The statement can be deduced by following the proof of Theorem 3.1 almost verbatim. We decided to give the argument for Theorem 3.1 first in Sect. 3 since it is a bit easier to read and helps motivate the more general discussion in the present section. Below we merely explain how to adapt the proof of Theorem 3.1 to establish Theorem 5.8.

Proof

It suffices to verify the claim for nonnegative, smooth, and compactly supported functions \(f_1,\ldots ,f_{2n}\). The case \(n=1\) follows directly from the \(L^{3/2}\)-\(L^3\) property of \(\{h_1,h_2\}\) in Definition 5.2, and a simple change-of-variables argument, observing that

$$\begin{aligned}&\int _{\mathbb {R}^3} f_1(\rho _1(p))f_2(\rho _2(p)) dp \\&\quad = \int _{\mathbb {R}^3} f_1(x_2,t + h_1(x_1,x_2)) f_2(x_1,t+h_2(x_1,x_2))\,d(x_1,x_2,t)\\&\quad = \int _{\mathbb {R}^2} f_2(x_1,\tau )\left( \int _{\mathbb {R}}f_1(x_2,\tau +h_1(x_1,x_2)-h_2(x_1,x_2))\,dx_2\right) \,d(x_1,\tau )\\&\quad = \int _{\mathbb {R}^2} f_2(x_1,\tau )T_1f_1(x_1,\tau )\,d(x_1,\tau )\\&\quad \le \Vert T_1f_1\Vert _3 \Vert f_2\Vert _{\frac{3}{2}}\le C \Vert f_1\Vert _{\frac{3}{2}} \Vert f_2\Vert _{\frac{3}{2}}, \end{aligned}$$

for nonnegative \(f_1,f_2\in \mathcal {C}^{\infty }_c(\mathbb {R}^2)\).

Suppose next that the statement of Theorem 5.8 has already been established for all natural numbers up to \(n-1\). We will argue that it holds also for the integer n. To this end, we fix an arbitrary family \(\{h_1,\ldots ,h_{2n}\}\) of \(\mathcal {C}^1\) functions \(\mathbb {R}^{2n}\rightarrow \mathbb {R}\) with the \(L^{3/2}\)-\(L^3\) property. Given nonnegative \(\mathcal {C}^{\infty }_{c}\) functions \(f_1,\ldots ,f_{2n}\), we aim to show the n inequalities stated in (5.9), and by symmetry it suffices to discuss this for \(k=1\). By the same argument as in the proof of Theorem 3.1, but now using the transformation \(t\mapsto t+ h_{2n}(x) = \tau \), we find that

$$\begin{aligned} I:=&\int _{\mathbb {R}^{2n}} \int _{\mathbb {R}} \prod _{j=1}^{2n} f_j(\rho _j(x,t))\,dt\,dx\\ =&\int _{\mathbb {R}^{2n}} f_{2n}({\hat{x}}_{2n}, \tau )\nonumber \\&\quad \times \left[ \int _{\mathbb {R}} f_n({\hat{x}}_n,\tau +h_n(x)-h_{2n}(x)) \prod _{\begin{array}{c} j=1\\ j\ne n,2n \end{array}}^{2n}f_j(\rho _j(x,\tau -h_{2n}(x)))\,dx_{2n}\right] \,d({\hat{x}}_{2n},\tau ).\nonumber \end{aligned}$$
(5.10)

Applying Hölder’s inequality in the variables \(({\hat{x}}_{2n},\tau )\) with exponents \(2n+1\) and \(\tfrac{2n+1}{2n}\), we can split off the factor with \(f_{2n}\) (which no longer depends on \(x_{2n}\)), and we obtain \(I\le \Vert f_{2n}\Vert _{2n+1}\, J\) with

$$\begin{aligned} J:=\left[ \int _{\mathbb {R}^{2n}} \left( \int _{\mathbb {R}}f_n({\hat{x}}_n,\tau +h_n(x)-h_{2n}(x))\prod _{\begin{array}{c} j=1\\ j\ne n,2n \end{array}}^{2n}f_j(\rho _j(x,\tau -h_{2n}(x)))\,dx_{2n}\right) ^{\frac{2n+1}{2n}} \,d({\hat{x}}_{2n},\tau )\right] ^{\frac{2n}{2n+1}}. \end{aligned}$$

The remaining task is to show that

$$\begin{aligned} J \lesssim _{n,C} \Vert f_1\Vert _{\frac{2n+1}{2}}\Vert f_{n+1}\Vert _{\frac{2n+1}{2}}\Vert f_n\Vert _{2n+1}\, \prod _{j=2}^{n-1} \left( \Vert f_j\Vert _{2n+1}\Vert f_{n+j}\Vert _{2n+1}\right) , \end{aligned}$$
(5.11)

and this is done as in the proof of Theorem 3.1, but using the transformation \(\tau \mapsto t= \tau + h_n(x)-h_{2n}(x)\). Proceeding as in that proof, we find that in order to prove (5.11), it suffices to show that

$$\begin{aligned} J_{\Pi }\lesssim _{n,C} \Vert f_1\Vert _{\frac{2n+1}{2}}\Vert f_{n+1}\Vert _{\frac{2n+1}{2}} \prod _{j=2}^{n-1}\left( \Vert f_j\Vert _{2n+1}\, \Vert f_{n+j}\Vert _{2n+1}\right) , \end{aligned}$$
(5.12)

where

$$\begin{aligned} J_{\Pi }:= \left( \int _{\mathbb {R}} \left[ \int _{\mathbb {R}^{2n-1}} \left( \int _{\mathbb {R}} \prod _{\begin{array}{c} j=1\\ j\ne n,2n \end{array}}^{2n}f_j(\rho _j(x,t-h_n(x)))^{\frac{2n+1}{2n}} \,dx_n\right) ^{\frac{2n}{2n-1}} \,d({\hat{x}}_{n,2n},t) \right] ^{\frac{2n-1}{2n}}\,dx_{2n}\right) ^{\frac{2n}{2n+1}}. \end{aligned}$$

Applying Minkowski’s integral inequality inside the square brackets, then Fubini’s theorem and the transformation \(t\mapsto \tau =t - h_n(x)\) yields

$$\begin{aligned} J_{\Pi }&\le \left( \int _{\mathbb {R}^2} \left[ \int _{\mathbb {R}^{2n-1}}\prod _{\begin{array}{c} j=1\\ j\ne n,2n \end{array}}^{2n}f_j(\rho _j(x,\tau ))^{\frac{2n+1}{2n-1}}\,d({\hat{x}}_{n,2n},\tau )\right] ^{\frac{2n-1}{2n}}\, d(x_n,x_{2n})\right) ^{\frac{2n}{2n+1}}. \end{aligned}$$
(5.13)

We recall that

$$\begin{aligned} f_j(\rho _j(x,\tau )) = f_j({\hat{x}}_j, \tau + h_j(x)). \end{aligned}$$
(5.14)

We will continue estimating \(J_{\Pi }\) from above by applying the induction hypothesis to the expression inside the square brackets. To do so, we temporarily denote points in \(\mathbb {H}^{n-1}\) in coordinates by \((u,t)\), where \(u=(u_1,\ldots ,u_{2n-2})\in \mathbb {R}^{2n-2}\) and t plays the role of the variable \(\tau \) in (5.13). Similarly as before, \({\hat{u}}_k\) denotes the point in \(\mathbb {R}^{2n-3}\) that is obtained from u by deleting the k-th coordinate. With this notation in place, and recalling (5.14), we can restate (5.13) equivalently as follows:

$$\begin{aligned} J_{\Pi }&\le \left( \int _{\mathbb {R}^2} \left[ \int _{\mathbb {R}^{2n-1}}\prod _{j=1}^{2n-2}g_{x_n,x_{2n},j}({\widetilde{\rho }}_{j,x_n,x_{2n}}(u,t))\,d(u,t)\right] ^{\frac{2n-1}{2n}}\, d(x_n,x_{2n})\right) ^{\frac{2n}{2n+1}}, \end{aligned}$$

where \(g_{x_n,x_{2n},j}({\hat{u}}_j,t)\) are defined exactly as in (3.19)–(3.20) and

$$\begin{aligned} {\widetilde{\rho }}_{j,x_n,x_{2n}}(u,t)=\left\{ \begin{array}{ll}\left( {\hat{u}}_j, t + h_j(u_1,\ldots ,u_{n-1},x_n,u_{n},\ldots ,u_{2n-2},x_{2n})\right) ,&{}1\le j\le n-1,\\ \left( {\hat{u}}_j, t + h_{j+1}(u_1,\ldots ,u_{n-1},x_n,u_{n},\ldots ,u_{2n-2},x_{2n})\right) ,&{}n\le j\le 2n-2.\end{array}\right. \end{aligned}$$

Thus, the functions \({\widetilde{\rho }}_{j,x_n,x_{2n}}\) are as in the statement of Theorem 5.8 for \(n-1\), with

$$\begin{aligned} {\widetilde{h}}_{j,x_n,x_{2n}}(u):= \left\{ \begin{array}{ll} h_j(u_1,\ldots ,u_{n-1},x_n,u_{n},\ldots ,u_{2n-2},x_{2n}),&{} 1\le j\le n-1,\\ h_{j+1}(u_1,\ldots ,u_{n-1},x_n,u_{n},\ldots ,u_{2n-2},x_{2n}),&{}n\le j\le 2n-2.\end{array}\right. \end{aligned}$$

In particular, if \(\{h_1,\ldots ,h_{2n}\}\) has the \(L^{3/2}\)-\(L^3\) property with constant C as assumed, then so does \(\{{\widetilde{h}}_{1,x_n,x_{2n}},\ldots ,{\widetilde{h}}_{2n-2,x_n,x_{2n}}\}\) for every \((x_n,x_{2n})\in \mathbb {R}^2\). The induction hypothesis applied to the inner integral therefore yields

$$\begin{aligned} J_{\Pi }&\lesssim _C \left( \int _{\mathbb {R}^2} \left[ \Vert g_{x_n,x_{2n},1}\Vert _{{\frac{2n-1}{2}}}\Vert g_{x_n,x_{2n},n}\Vert _{{\frac{2n-1}{2}}}\prod _{\begin{array}{c} j=1\\ j\notin \{1,n\} \end{array}}^{2n-2} \Vert g_{x_n,x_{2n},j}\Vert _{{2n-1}}\right] ^{\frac{2n-1}{2n}}\, d(x_n,x_{2n})\right) ^{\frac{2n}{2n+1}}. \end{aligned}$$
(5.15)

At this point, the proof can be concluded as in the case of Theorem 3.1, recalling that the functions \(g_{x_n,x_{2n},j}\) are defined exactly as in (3.19)–(3.20). \(\square \)

As in the case of the Heisenberg vertical coordinate projections, we can use multilinear interpolation to deduce a Loomis–Whitney type inequality for generalized projections \(\{\rho _1,\ldots ,\rho _{2n}\}\).

Theorem 5.16

Fix \(n\in \mathbb {N}\), \(n>1\). Given a family \(\{h_1,\ldots ,h_{2n}\}\) of \(\mathcal {C}^1\) functions on \(\mathbb {R}^{2n}\) that has the \(L^{3/2}\)-\(L^3\) property with constant C, we define

$$\begin{aligned} \rho _j:\mathbb {R}^{2n+1}\rightarrow \mathbb {R}^{2n},\quad \rho _j(x,t)=\left( {\hat{x}}_j, t + h_j(x)\right) ,\quad j=1,\ldots ,2n. \end{aligned}$$

Then

$$\begin{aligned} \int _{\mathbb {R}^{2n+1}} \prod _{j=1}^{2n} f_j(\rho _j(p))\,dp \lesssim \prod _{j=1}^{2n} \Vert f_j\Vert _{\frac{n(2n+1)}{n+1}}, \end{aligned}$$
(5.17)

for all nonnegative Lebesgue measurable functions \(f_1,\ldots ,f_{2n}\) on \(\mathbb {R}^{2n}\), where the implicit constant may depend on n and C.

Remark 5.18

A straightforward generalization of Theorem 5.16 can be obtained for the family \(\{\Phi _j \circ \rho _j :\; j=1,\ldots ,2n\}\), where \(\Phi _j:\mathbb {R}^{2n}\rightarrow \mathbb {R}^{2n}\) are \(\mathcal {C}^1\) diffeomorphisms with \(\Lambda := \min _{j=1,\ldots ,2n}|\det D\Phi _j|>0\) and \(\rho _j\) are as in Theorem 5.16. Indeed, simply apply Theorem 5.16 to the functions \(g_j:= f_j \circ \Phi _j\), \(j=1,\ldots ,2n\), and then perform changes-of-variables in the integrals in \(\Vert g_j\Vert _{\frac{n(2n+1)}{n+1}}\) to deduce that

$$\begin{aligned} \int _{\mathbb {R}^{2n+1}} \prod _{j=1}^{2n} f_j(\Phi _j \circ \rho _j(p))\,dp \lesssim _{n,C,\Lambda } \prod _{j=1}^{2n} \Vert f_j\Vert _{\frac{n(2n+1)}{n+1}} \end{aligned}$$

for all nonnegative Lebesgue measurable functions \(f_1,\ldots ,f_{2n}\) on \(\mathbb {R}^{2n}\).
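In more detail, writing \(q:=\tfrac{n(2n+1)}{n+1}\), the change of variables \(z=\Phi _j(y)\) gives

$$\begin{aligned} \Vert g_j\Vert _{q}^{q}=\int _{\mathbb {R}^{2n}}|f_j(\Phi _j(y))|^{q}\,dy = \int _{\mathbb {R}^{2n}}|f_j(z)|^{q}\,|\det D\Phi _j(\Phi _j^{-1}(z))|^{-1}\,dz \le \Lambda ^{-1}\Vert f_j\Vert _{q}^{q}, \end{aligned}$$

so that \(\Vert g_j\Vert _{q}\le \Lambda ^{-1/q}\Vert f_j\Vert _{q}\), which accounts for the dependence of the implicit constant on \(\Lambda \).
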

Proof of Theorem 5.16 using Theorem 5.8

By the comment made at the beginning of the proof of Theorem 5.8, we already know the \(n=1\) analogue of (5.17). Suppose that the statement of Theorem 5.8 holds for a given integer \(n>1\). Fix mappings \(h_j\) and \(\rho _j\), \(j=1,\ldots ,2n\), as in the statement of Theorems 5.8 and 5.16. Our aim is to verify (5.17) for all nonnegative measurable functions \(f_1,\ldots ,f_{2n}\) on \(\mathbb {R}^{2n}\). The desired inequality can be spelled out as follows:

$$\begin{aligned} \int _{\mathbb {R}^{2n+1}} \prod _{j=1}^{2n} f_j({\hat{x}}_j,t+h_j(x))\;d(x,t) \lesssim \prod _{j=1}^{2n} \Vert f_j\Vert _{\frac{n(2n+1)}{n+1}}. \end{aligned}$$
(5.19)

Similarly as in the proof of Theorem 1.8, we introduce a suitable multilinear operator T. Namely, for all finitely simple functions \(g_1,\ldots ,g_{2n-1}\) on \(\mathbb {R}^{2n}\), we define

$$\begin{aligned}&T (g_1,\ldots ,g_{2n-1})({\hat{x}}_{2n},\tau )\\&\quad :=\int _{\mathbb {R}}g_n({\hat{x}}_n,\tau +h_n(x)-h_{2n}(x)) \prod _{\begin{array}{c} j=1\\ j\ne n \end{array}}^{2n-1} g_j(\rho _j(x, \tau -h_{2n}(x)))\,dx_{2n}. \end{aligned}$$

Hence, by the same computation that led to (5.10), we find, for all finitely simple functions \(f_1,\ldots ,f_{2n-1}\) and every nonnegative measurable function \(f_{2n}\), that

$$\begin{aligned} \int _{\mathbb {R}^{2n+1}} \prod _{j=1}^{2n} f_j(\rho _j(p))\;dp = \int _{\mathbb {R}^{2n}} T(f_1,\ldots ,f_{2n-1})(w) f_{2n}(w)\; dw. \end{aligned}$$

From this point on, the argument is entirely abstract and no longer uses the specific form of the operator T. As in the proof of Theorem 1.8, the inequalities obtained in Theorem 5.8 yield bounds of the form (3.13) for the operator T. These bounds can be combined using multilinear interpolation, as in Theorem 3.8, to yield a bound of the form (3.12) for T, which eventually gives (5.19). \(\square \)