Abstract
We develop the product space theory of singular integrals with mild kernel regularity. We study these kernel regularity questions specifically in situations closely tied to T1 type arguments and the corresponding structural theory. In addition, our results are multilinear.
1 Introduction
The usual definition of a singular integral operator (SIO)
involves a Hölder-continuous kernel K with a power-type continuity-modulus \(t \mapsto t^{\gamma }\). However, many results continue to hold with significantly more general assumptions. Such kernel regularity considerations become non-trivial especially in connection with results that go beyond the classical Calderón–Zygmund theory—an example is the \(A_2\) theorem of Hytönen [21] with Dini-continuous kernels by Lacey [24]. Estimates for SIOs with mild kernel regularity are, for instance, linked to the theory of rough singular integrals, see e.g. [22].
The fundamental question concerning the \(L^2\) (or \(L^p\)) boundedness of an SIO T is usually best answered by so-called T1 theorems, where the action of the operator T on the constant function 1 is key. We study kernel regularity questions specifically in situations closely tied to T1 type arguments and the corresponding structural theory—a big part of the modern product space theory of SIOs relies on such analysis. The proofs of T1 theorems display a fundamental structural decomposition of SIOs into their cancellative parts and so-called paraproducts. It is this structure that is extremely important for obtaining further estimates beyond the initial scalar-valued \(L^p\) boundedness. Refined versions of T1 theorems provide exact identities in terms of model operators and are called representation theorems, see [20, 21, 32].
A concrete definition of kernel regularity is as follows. It concerns the required regularity of the continuity-moduli \(\omega \) appearing in the various kernel estimates, such as,
Recently, Grau de la Herrán and Hytönen [17] proved that the modified Dini condition
with \(\alpha = \frac{1}{2}\) is sufficient to prove a T1 theorem even with an underlying measure \(\mu \) that can be non-doubling. This matches the best known sufficient condition for the classical homogeneous T1 theorem [10]—such results are implicit in Figiel [16] and explicit in Deng et al. [11]. The exponent \(\alpha = \frac{1}{2}\) appears fundamental, and perhaps even sharp, in all of the existing arguments.
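For later reference, and to fix notation (the precise normalization below is ours, following [17]), the modified Dini condition of order \(\alpha \) referred to above can be written as:

```latex
% Modified Dini condition of order \alpha, in the spirit of [17]:
% \omega \in \mathrm{Dini}_\alpha means that
\|\omega\|_{\mathrm{Dini}_\alpha}
  := \int_0^1 \omega(t)\Bigl(1 + \log\frac{1}{t}\Bigr)^{\alpha}\,\frac{\mathrm{d}t}{t}
  < \infty.
% \alpha = 0 is the classical Dini condition; increasing \alpha strengthens it.
```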
In [17] a new type of representation theorem appears, where the key difference to the original representation theorems [20, 21] is that the decomposition of the cancellative part is in terms of different operators that package multiple dyadic shifts into one and offer more efficient bounds when it comes to kernel regularity. Some of the ideas of the decomposition in [17] are rooted in the work of Figiel [15, 16]. We simultaneously extend [17] both to the multilinear [12,13,14, 27, 33] and multi-parameter [23, 30, 32, 35] settings. The proofs of the representation theorems now appear to be converging to their final and most elegant form, and the arguments are simultaneously efficient and sharp.
Linear bi-parameter SIOs, for example, have kernels with singularities on \(x_1=y_1\) or \(x_2 = y_2\), where \(x,y\in {\mathbb {R}}^d\) are written as \(x= (x_1, x_2), y = (y_1, y_2) \in {\mathbb {R}}^{d_1}\times {\mathbb {R}}^{d_2}\) for a fixed partition \(d=d_1+d_2\). For \(x,y \in {\mathbb {C}}= {\mathbb {R}}\times {\mathbb {R}}\), compare e.g. the one-parameter Beurling kernel \(1/(x-y)^2\) with the bi-parameter kernel \(1/[(x_1-y_1)(x_2-y_2)]\)—the product of Hilbert kernels in both coordinate directions. In general, product space analysis is quite different from one-parameter analysis and seems to resist many one-parameter techniques. In part due to the failure of bi-parameter sparse domination methods (see [3]; see also [4], however), representation theorems are even more important in the bi-parameter setting than in the one-parameter setting. For example, the dyadic representation methods have proved very fruitful in connection with bi-parameter commutators and weighted analysis, see Holmes–Petermichl–Wick [19], Ou–Petermichl–Strouse [36] and [28]. See also [1, 2].
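To make the distinction concrete, here is a minimal numerical sketch (the helper names `beurling` and `biparam` are ours, not from the paper) contrasting the two kernels above: the one-parameter kernel is singular only on the full diagonal \(x = y\), while the bi-parameter kernel blows up whenever \(x_1 = y_1\) or \(x_2 = y_2\), even if the other coordinates are far apart.

```python
# Sketch: one-parameter Beurling kernel vs. bi-parameter product kernel.

def beurling(x, y):
    # One-parameter kernel 1/(x - y)^2 on C = R x R, viewed via complex numbers.
    z = complex(x[0] - y[0], x[1] - y[1])
    return 1.0 / z**2

def biparam(x, y):
    # Bi-parameter kernel 1/[(x1 - y1)(x2 - y2)]: Hilbert kernel in each direction.
    return 1.0 / ((x[0] - y[0]) * (x[1] - y[1]))

x, y = (0.0, 0.0), (0.0, 5.0)    # x1 = y1 but x2 != y2: off the full diagonal
print(abs(beurling(x, y)))       # finite, since x != y
try:
    biparam(x, y)                # singular already on x1 = y1
except ZeroDivisionError:
    print("bi-parameter kernel singular on x1 = y1")
```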
We discuss various applications throughout. For example, we prove the following two-weight estimate for commutators. The result (1) extends [29] and the result (2) extends [19] and [28].
Theorem 1.1
Suppose that \({\mathbb {R}}^d = {\mathbb {R}}^{d_1} \times {\mathbb {R}}^{d_2}\) is the underlying bi-parameter space, \(p \in (1, \infty )\), \(\mu , \lambda \in A_p({\mathbb {R}}^d)\) are bi-parameter weights and \(\nu = \mu ^{1/p} \lambda ^{-1/p} \in A_2({\mathbb {R}}^d)\) is the Bloom weight.
(1) If \(T_i\), \(i = 1,2\), is a one-parameter \(\omega _i\)-CZO on \({\mathbb {R}}^{d_i}\), where \(\omega _i \in {\text {Dini}}_{3/2}\), then
$$\begin{aligned} \Vert [T_1, [T_2, b]] \Vert _{L^p(\mu ) \rightarrow L^p(\lambda )} \lesssim \Vert b\Vert _{{\text {BMO}}_{\text {prod}}(\nu )}. \end{aligned}$$
(2) Suppose that T is a bi-parameter \((\omega _1, \omega _2)\)-CZO. Then we have
$$\begin{aligned} \Vert [b_m,\cdots [b_2, [b_1, T]]\cdots ]\Vert _{L^p(\mu ) \rightarrow L^p(\lambda )} \lesssim \prod _{j=1}^m\Vert b_j\Vert _{{\text {bmo}}(\nu ^{1/m})} \end{aligned}$$if one of the following conditions holds:
(a) T is paraproduct free and \(\omega _i \in {\text {Dini}}_{m/2+1}\);
(b) \(m=1\) and \(\omega _i \in {\text {Dini}}_{3/2}\);
(c) \(\omega _i \in {\text {Dini}}_{m+1}\).
See the main text for all of the definitions and for additional results. These Bloom-style two-weight estimates have recently been one of the main lines of development concerning commutators, see e.g. [1, 2, 18, 19, 25, 26, 28, 29] for a non-exhaustive list.
2 Basic Notation and Fundamental Estimates
Throughout this paper \(A\lesssim B\) means that \(A\le CB\) with some constant C that we deem unimportant to track at that point. We write \(A\sim B\) if \(A\lesssim B\lesssim A\).
Dyadic Notation. Given a dyadic grid \({\mathcal {D}}\), \(I \in {\mathcal {D}}\) and \(k \in {\mathbb {Z}}\), \(k \ge 0\), we use the following notation:
(1) \(\ell (I)\) is the side length of I.
(2) \(I^{(k)} \in {\mathcal {D}}\) is the kth parent of I, i.e., \(I \subset I^{(k)}\) and \(\ell (I^{(k)}) = 2^k \ell (I)\).
(3) \({\text {ch}}(I)\) is the collection of the children of I, i.e., \({\text {ch}}(I) = \{J \in {\mathcal {D}}:J^{(1)} = I\}\).
(4) \(E_I f=\langle f \rangle _I 1_I\) is the averaging operator, where \(\langle f \rangle _I = \fint _{I} f = \frac{1}{|I|} \int _I f\).
(5) \(E_{I, k}f\) is defined via
$$\begin{aligned} E_{I,k}f = \sum _{\begin{array}{c} J \in {\mathcal {D}}\\ J^{(k)}=I \end{array}}E_J f. \end{aligned}$$
(6) \(\Delta _If\) is the martingale difference \(\Delta _I f= \sum _{J \in {\text {ch}}(I)} E_{J} f - E_{I} f\).
(7) \(\Delta _{I,k} f\) is the martingale difference block
$$\begin{aligned} \Delta _{I,k} f=\sum _{\begin{array}{c} J \in {\mathcal {D}}\\ J^{(k)}=I \end{array}} \Delta _{J} f. \end{aligned}$$
(8) \(P_{I,k}f\) is the following sum of martingale difference blocks
$$\begin{aligned} P_{I,k}f = \sum _{j=0}^{k} \Delta _{I,j} f =\sum _{\begin{array}{c} J \in {\mathcal {D}}\\ J \subset I \\ \ell (J) \ge 2^{-k}\ell (I) \end{array}} \Delta _{J} f. \end{aligned}$$
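As a concrete illustration (our own discrete sketch, with hypothetical helper names `E` and `Delta`), one can model f by its samples on \(2^n\) cells of [0, 1) and verify the finite reconstruction formula \(f = E_{[0,1)}f + \sum _{I} \Delta _I f\), summed over all dyadic \(I \subset [0,1)\) with \(\ell (I) > 2^{-n}\):

```python
# Discrete model of dyadic averaging and martingale differences on [0, 1).
# f is sampled on 2**n equal cells; a dyadic interval of generation g
# corresponds to a block of 2**(n-g) consecutive cells.

def E(f, a, length):
    # Averaging operator E_I f on the block of `length` cells starting at a.
    avg = sum(f[a:a + length]) / length
    return [avg] * length

def Delta(f, a, length):
    # Martingale difference: sum of children averages minus parent average.
    half = length // 2
    parent = E(f, a, length)
    kids = E(f, a, half) + E(f, a + half, half)
    return [k - p for k, p in zip(kids, parent)]

n = 3
f = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
rec = E(f, 0, 2**n)                   # start from the top average <f>_{[0,1)}
for g in range(n):                    # generations 0, ..., n-1
    size = 2**(n - g)
    for a in range(0, 2**n, size):    # dyadic intervals of this generation
        d = Delta(f, a, size)
        for i, v in enumerate(d):
            rec[a + i] += v
print(rec)  # the telescoping sum recovers f
```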
A fundamental fact is that we have the square function estimate
See e.g. [7, 8] for the weighted square function estimates \(\Vert S_{{\mathcal {D}}} f\Vert _{L^p(w)} \sim \Vert f\Vert _{L^p(w)}\), \(w \in A_p\), and their history. A weight w (i.e. a locally integrable a.e. positive function) belongs to the weight class \(A_p({\mathbb {R}}^d)\), \(1< p < \infty \), if
$$\begin{aligned} [w]_{A_p} := \sup _{Q} \langle w \rangle _Q \langle w^{1-p'} \rangle _Q^{p-1} < \infty , \end{aligned}$$
where the supremum is taken over all cubes \(Q \subset {\mathbb {R}}^d\).
Lemma 2.2
Let \(p \in (1, \infty )\). There holds that
Proof
If \(f_i \in L^p\) then
This follows by extrapolating the corresponding weighted \(L^2\) version of (2.3), which, in turn, simply follows from \(\Vert S_{{\mathcal {D}}} f\Vert _{L^2(w)} \sim \Vert f\Vert _{L^2(w)}\), \(w \in A_2\). Recall that the classical extrapolation theorem of Rubio de Francia says that if \(\Vert h\Vert _{L^{p_0}(w)} \lesssim \Vert g\Vert _{L^{p_0}(w)}\) for some \(p_0 \in (1,\infty )\) and all \(w \in A_{p_0}\), then \(\Vert h\Vert _{L^{p}(w)} \lesssim \Vert g\Vert _{L^{p}(w)}\) for all \(p \in (1,\infty )\) and all \(w \in A_{p}\).
Let \(K \in {\mathcal {D}}\). We have that
Thus, (2.3) gives that
\(\square \)
We will also have use for the Fefferman–Stein inequality
where M is the Hardy–Littlewood maximal function. Often, the lighter Stein’s inequality
is sufficient.
For an interval \(J \subset {\mathbb {R}}\) we denote by \(J_{l}\) and \(J_{r}\) the left and right halves of J, respectively. We define \(h_{J}^0 = |J|^{-1/2}1_{J}\) and \(h_{J}^1 = |J|^{-1/2}(1_{J_{l}} - 1_{J_{r}})\). Let now \(I = I_1 \times \cdots \times I_d \subset {\mathbb {R}}^d\) be a cube, and define the Haar function \(h_I^{\eta }\), \(\eta = (\eta _1, \ldots , \eta _d) \in \{0,1\}^d\), by setting
$$\begin{aligned} h_I^{\eta } = h_{I_1}^{\eta _1} \otimes \cdots \otimes h_{I_d}^{\eta _d}. \end{aligned}$$
If \(\eta \ne 0\) the Haar function is cancellative: \(\int h_I^{\eta } = 0\). We abuse notation by suppressing the presence of \(\eta \), and write \(h_I\) for some \(h_I^{\eta }\), \(\eta \ne 0\). Notice that for \(I \in {\mathcal {D}}\) we have \(\Delta _I f = \langle f, h_I \rangle h_I\) (where the finite \(\eta \) summation is suppressed), \(\langle f, h_I\rangle := \int fh_I\).
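In a discrete model the defining properties of the one-dimensional Haar functions—the cancellation \(\int h_J^1 = 0\) and the \(L^2\) normalization—can be checked directly (a minimal sketch of ours; the helper `haar` is not from the paper):

```python
# Discrete Haar functions on a dyadic interval J covered by m cells of
# width w (m even). eta = 0 is non-cancellative, eta = 1 is cancellative.

def haar(m, w, eta):
    # |J| = m * w; the values are constant on cells.
    norm = (m * w) ** -0.5
    if eta == 0:
        return [norm] * m                       # h^0 = |J|^{-1/2} 1_J
    half = m // 2
    return [norm] * half + [-norm] * half       # h^1: + on J_l, - on J_r

m, w = 8, 1 / 8                     # J = [0, 1) split into 8 cells
h1 = haar(m, w, 1)
integral = sum(v * w for v in h1)   # cancellation: equals 0
l2norm = sum(v * v * w for v in h1) # normalization: equals 1
print(integral, l2norm)
```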
Bi-parameter Variants A weight \(w(x_1, x_2)\) (i.e. a locally integrable a.e. positive function) belongs to the bi-parameter weight class \(A_p({\mathbb {R}}^{d_1} \times {\mathbb {R}}^{d_2})\), \(1< p < \infty \), if
$$\begin{aligned} [w]_{A_p} := \sup _{R} \langle w \rangle _R \langle w^{1-p'} \rangle _R^{p-1} < \infty , \end{aligned}$$
where the supremum is taken over \(R = I^1 \times I^2\) and each \(I^i \subset {\mathbb {R}}^{d_i}\) is a cube. Thus, this is the one-parameter definition but cubes are replaced by rectangles.
We have
and that
while the constant \([w]_{A_p}\) is dominated by the maximum to some power. For basic bi-parameter weighted theory see e.g. [19]. We say \(w\in A_\infty ({\mathbb {R}}^{d_1}\times {\mathbb {R}}^{d_2})\) if
It is well-known that
We do not have any important use for the \(A_{\infty }\) constant. The \(w \in A_{\infty }\) assumption can always be replaced with the explicit assumption \(w \in A_s\) for some \(s \in (1,\infty )\), after which everything is estimated with a dependence on \([w]_{A_s}\).
We denote a general dyadic grid in \({\mathbb {R}}^{d_i}\) by \({\mathcal {D}}^i\). We denote cubes in \({\mathcal {D}}^i\) by \(I^i, J^i, K^i\), etc. Thus, our dyadic rectangles take the forms \(I^1 \times I^2\), \(J^1 \times J^2\), \(K^1 \times K^2\) etc.
If A is an operator acting on \({\mathbb {R}}^{d_1}\), we can always let it act on the product space \({\mathbb {R}}^d = {\mathbb {R}}^{d_1} \times {\mathbb {R}}^{d_2}\) by setting \(A^1f(x) = A(f(\cdot , x_2))(x_1)\). Similarly, we use the notation \(A^2 f(x) = A(f(x_1, \cdot ))(x_2)\) if A is originally an operator acting on \({\mathbb {R}}^{d_2}\). Our basic bi-parameter dyadic operators—martingale differences and averaging operators—are obtained by simply chaining together relevant one-parameter operators. For instance, a bi-parameter martingale difference is \(\Delta _R f = \Delta _{I^1}^1 \Delta _{I^2}^2 f\), \(R = I^1 \times I^2\). Bi-parameter estimates, such as the square function bound
where \(p \in (1,\infty )\) and w is a bi-parameter \(A_p\) weight, are easily obtained using vector-valued versions of the corresponding one-parameter estimates. The required vector-valued estimates, on the other hand, follow simply by extrapolating the obvious weighted \(L^2(w)\) estimates.
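The chaining \(\Delta _R f = \Delta _{I^1}^1 \Delta _{I^2}^2 f\) can be sketched discretely as follows; since the two one-parameter operators act on different variables, the order of application does not matter (function names are ours, and only the top-level martingale difference is modeled):

```python
# Bi-parameter martingale difference as a composition of one-parameter ones.
# f is a square array of samples; delta1 acts in the first parameter,
# delta2 in the second.

def delta_1d(v):
    # Top-level martingale difference on a vector: children averages
    # minus the global average.
    m = len(v) // 2
    avg = sum(v) / len(v)
    left = sum(v[:m]) / m
    right = sum(v[m:]) / m
    return [left - avg] * m + [right - avg] * m

def delta1(f):
    # Apply delta_1d along the first parameter (each column separately).
    cols = [delta_1d(list(c)) for c in zip(*f)]
    return [list(r) for r in zip(*cols)]

def delta2(f):
    # Apply delta_1d along the second parameter (each row separately).
    return [delta_1d(row) for row in f]

f = [[1.0, 2.0, 3.0, 4.0],
     [5.0, 6.0, 7.0, 8.0],
     [9.0, 1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0, 7.0]]
a = delta1(delta2(f))
b = delta2(delta1(f))
same = all(abs(x - y) < 1e-9 for x, y in zip(sum(a, []), sum(b, [])))
print(same)  # True: the one-parameter operators commute across parameters
```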
We systematically collect maximal function and square function bounds now. First, some notation. When we integrate with respect to only one of the parameters we may e.g. write
If \({\mathcal {D}}= {\mathcal {D}}^1 \times {\mathcal {D}}^2\) we define the dyadic bi-parameter maximal function
Now define the square functions
and define \(S_{{\mathcal {D}}^2}^2 f\) analogously. Define also
Let \(k=(k_1,k_2)\), where \(k_i \in \{0,1,2, \dots ,\}\), and \(K=K^1 \times K^2 \in {\mathcal {D}}\). We set
and define similarly \(P^2_{K^2,k_2}\). Then, we define \(P_{K,k}:= P^1_{K^1,k_1}P^2_{K^2,k_2}\).
Lemma 2.4
For \(p \in (1,\infty )\) and a bi-parameter weight \(w \in A_p\) we have
For \(k=(k_1,k_2)\), \(k_i \in \{0,1, \dots , \}\), we have the estimates
and the analogous estimate with \(P^2_{K^2,k_2}\).
Moreover, for \(p, s \in (1,\infty )\) we have the Fefferman–Stein inequality
Here M can e.g. be \(M_{{\mathcal {D}}^1}^1\) or \(M_{{\mathcal {D}}}\). Finally, we have
3 Bi-parameter Singular Integrals
Bi-parameter SIOs We say that \(\omega \) is a modulus of continuity if it is an increasing and subadditive function with \(\omega (0) = 0\). A relevant quantity is the modified Dini condition
$$\begin{aligned} \Vert \omega \Vert _{{\text {Dini}}_{\alpha }} := \int _0^1 \omega (t) \Big (1+\log \frac{1}{t}\Big )^{\alpha } \frac{\mathrm {d}t}{t} < \infty , \qquad \alpha \ge 0. \end{aligned}$$
In practice, the quantity (3.1) arises as follows:
For many standard arguments \(\alpha = 0\) is enough. For the T1 type arguments we will always need \(\alpha = 1/2\). Some further applications can require a higher \(\alpha \).
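For example (a standard computation, not taken from the text), every power-type modulus \(\omega (t) = t^{\gamma }\), \(\gamma > 0\), belongs to \({\text {Dini}}_{\alpha }\) for every \(\alpha \ge 0\), so Hölder kernels always satisfy these assumptions:

```latex
% For \omega(t) = t^\gamma with \gamma > 0 and any \alpha \ge 0:
\|\omega\|_{\mathrm{Dini}_\alpha}
  = \int_0^1 t^{\gamma}\Bigl(1+\log\frac{1}{t}\Bigr)^{\alpha}\,\frac{\mathrm{d}t}{t}
  \overset{t = e^{-u}}{=} \int_0^{\infty} e^{-\gamma u}(1+u)^{\alpha}\,\mathrm{d}u
  < \infty.
% The exponential factor dominates the polynomial one, so the integral
% converges for every \alpha; the constant blows up as \gamma \to 0.
```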
Let \({\mathbb {R}}^d = {\mathbb {R}}^{d_1} \times {\mathbb {R}}^{d_2}\) and consider an n-linear operator T on \({\mathbb {R}}^d\). We define what it means for T to be an n-linear bi-parameter SIO. Let \(\omega _i\) be a modulus of continuity on \({\mathbb {R}}^{d_i}\). Let \(f_j = f_j^1 \otimes f_j^2\), \(j = 1, \ldots , n+1\).
First, we set up notation for the adjoints of T. We let \(T^{j*}\), \(j \in \{0, \ldots , n\}\), denote the full adjoints, i.e., \(T^{0*} = T\) and otherwise
A subscript 1 or 2 denotes a partial adjoint in the given parameter—for example, we define
Finally, we can also take partial adjoints with respect to different parameters in different slots—in that case we denote the adjoint by \(T^{j_1*, j_2*}_{1,2}\). It simply interchanges the functions \(f_{j_1}^1\) and \(f_{n+1}^1\) and the functions \(f_{j_2}^2\) and \(f_{n+1}^2\). Of course, we have e.g. \(T^{j*, j*}_{1,2} = T^{j*}\) and \(T^{0*, j*}_{1,2} = T^{j*}_{2}\), so everything can be obtained, if desired, with the most general notation \(T^{j_1*, j_2*}_{1,2}\). In any case, there are \((n+1)^2\) adjoints (including T itself). Similarly, the dyadic model operators that we later define always have \((n+1)^2\) different forms.
Full Kernel Representation Here we assume that given \(m \in \{1,2\}\) there exist \(j_1, j_2 \in \{1, \ldots , n+1\}\) so that \({\text {spt}} \,f_{j_1}^m \cap {\text {spt}}\, f_{j_2}^m = \emptyset \). In this case we demand that
where
is a kernel satisfying a set of estimates which we specify next.
The kernel K is assumed to satisfy the size estimate
We also require the following continuity estimates—which we continue to refer to as Hölder estimates despite the general continuity moduli. For example, we require that we have
whenever \(|x_n^1-c^1| \le 2^{-1} \max _{1 \le i \le n} |x_{n+1}^1-x_i^1|\) and \(|x_{n+1}^2-c^2| \le 2^{-1} \max _{1 \le i \le n} |x_{n+1}^2-x_i^2|\). Of course, we also require all the other natural symmetric estimates, where \(c^1\) can be in any of the given \(n+1\) slots and similarly for \(c^2\). There are, of course, \((n+1)^2\) different estimates.
Finally, we require the following mixed Hölder and size estimates. For example, we ask that
whenever \(|x_n^1-c^1| \le 2^{-1} \max _{1 \le i \le n} |x_{n+1}^1-x_i^1|\). Again, we also require all the other natural symmetric estimates.
Partial Kernel Representations Suppose now only that there exist \(j_1, j_2 \in \{1, \ldots , n+1\}\) so that \({\text {spt}}\,f_{j_1}^1 \cap {\text {spt}}\, f_{j_2}^1 = \emptyset \). Then we assume that
where \(K_{(f_j^2)}\) is a one-parameter \(\omega _1\)-Calderón–Zygmund kernel as e.g. in [17] but with a constant depending on the fixed functions \(f_1^2, \ldots , f_{n+1}^2\). For example, this means that the size estimate takes the form
The continuity estimates are analogous.
We assume the following T1 type control on the constant \(C(f_1^2, \ldots , f_{n+1}^2)\). We have
and
for all cubes \(I^2 \subset {\mathbb {R}}^{d_2}\) and all functions \(a_{I^2}\) satisfying \(a_{I^2} = 1_{I^2}a_{I^2}\), \(|a_{I^2}| \le 1\) and \(\int a_{I^2} = 0\).
An analogous partial kernel representation in the second parameter is assumed when \({\text {spt}}\, f_{j_1}^2 \cap {\text {spt}}\, f_{j_2}^2 = \emptyset \) for some \(j_1, j_2\).
Definition 3.4
If T is an n-linear operator with full and partial kernel representations as defined above, we call T an n-linear bi-parameter \((\omega _1, \omega _2)\)-SIO.
Bi-parameter CZOs We say that T satisfies the weak boundedness property if
for all rectangles \(R = I^1 \times I^2 \subset {\mathbb {R}}^{d} = {\mathbb {R}}^{d_1} \times {\mathbb {R}}^{d_2}\).
An SIO T satisfies the diagonal BMO assumption if the following holds. For all rectangles \(R = I^1 \times I^2 \subset {\mathbb {R}}^{d} = {\mathbb {R}}^{d_1} \times {\mathbb {R}}^{d_2}\) and functions \(a_{I^i}\) with \(a_{I^i} = 1_{I^i}a_{I^i}\), \(|a_{I^i}| \le 1\) and \(\int a_{I^i} = 0\) we have
and
The product \({\text {BMO}}\) space goes back to Chang and Fefferman [5, 6], and it is the right bi-parameter \({\text {BMO}}\) space for many considerations. An SIO T satisfies the product BMO assumption if
for all the \((n+1)^2\) adjoints \(S = T^{j_1*, j_2*}_{1,2}\). Here \(S1:= S(1, \dots , 1)\). This can be interpreted in the sense that
where \(h_R = h_{I^1} \otimes h_{I^2}\), the supremum is over all dyadic grids \({\mathcal {D}}^i\) on \({\mathbb {R}}^{d_i}\) and open sets \(\Omega \subset {\mathbb {R}}^d = {\mathbb {R}}^{d_1} \times {\mathbb {R}}^{d_2}\) with \(0< |\Omega | < \infty \), and the pairings \(\langle S1, h_R\rangle \) can be defined, in a natural way, using the kernel representations.
Definition 3.7
An n-linear bi-parameter \((\omega _1, \omega _2)\)-SIO T satisfying the weak boundedness property, the diagonal BMO assumption and the product BMO assumption is called an n-linear bi-parameter \((\omega _1, \omega _2)\)-Calderón–Zygmund operator (\((\omega _1, \omega _2)\)-CZO).
Bi-parameter Model Operators For hybrid operators we will use suggestive notation, such as, \((S\pi )_i\) to denote a bi-parameter operator that behaves like an ordinary n-linear shift \(S_i\) on the first parameter and like an n-linear paraproduct \(\pi \) on the second—but this is just notation and our operators are not of tensor product form.
Shifts Let \(i=(i_1, \dots , i_{n+1})\), where \(i_j = (i_j^1, i_j^2) \in \{0,1,\ldots \}^2\). An n-linear bi-parameter shift \(S_i\) takes the form
Here \(K, R_1, \ldots , R_{n+1} \in {\mathcal {D}}= {\mathcal {D}}^1 \times {\mathcal {D}}^2\), \(R_j = I_j^1 \times I_j^2\), \(R_j^{(i_j)} := (I_j^1)^{(i_j^1)} \times (I_j^2)^{(i_j^2)}\) and \({\widetilde{h}}_{R_j} = {\widetilde{h}}_{I_j^1} \otimes {\widetilde{h}}_{I_j^2}\). Here we assume that for \(m \in \{1,2\}\) there exist two indices \(j_0,j_1 \in \{1, \ldots , n+1\}\), \(j_0 \not =j_1\), so that \({\widetilde{h}}_{I_{j_0}^m}=h_{I_{j_0}^m}\), \({\widetilde{h}}_{I_{j_1}^m}=h_{I_{j_1}^m}\) and for the remaining indices \(j \not \in \{j_0, j_1\}\) we have \({\widetilde{h}}_{I_j^m} \in \{h_{I_j^m}^0, h_{I_j^m}\}\). Moreover, \(a_{K,(R_j)} = a_{K, R_1, \ldots ,R_{n+1}}\) is a scalar satisfying the normalization
We continue to define modified shifts—they are important for the weak kernel regularity. Let
where \({\widetilde{h}}_{R_j} = {\widetilde{h}}_{I_j^1} \otimes {\widetilde{h}}_{I_j^2}\), \({\widetilde{h}}_{I_{j_1}^1} = h_{I_{j_1}^1}\), \({\widetilde{h}}_{I_{j}^1} = h_{I_{j}^1}^0\), \(j \ne j_1\), \({\widetilde{h}}_{I_{j_2}^2} = h_{I_{j_2}^2}\), \({\widetilde{h}}_{I_{j}^2} = h_{I_{j}^2}^0\), \(j \ne j_2\). A modified n-linear bi-parameter shift \(Q_k\), \(k = (k_1, k_2)\), takes the form
for some \(j_1, j_2\). Moreover, \(a_{K,(R_j)} = a_{K, R_1, \ldots ,R_{n+1}}\) is a scalar satisfying the usual normalization (3.8).
We now define the hybrid operators that behave like a modified shift in one of the parameters and like a standard shift in the other. A modified/standard n-linear bi-parameter shift \((QS)_{k, i}\), \(i = (i_1, \ldots , i_{n+1})\), \(k, i_j \in \{0, 1, \ldots \}\), takes the form
for some \(j_0\). Here we assume that \({\widetilde{h}}_{I_{j_0}^1} = h_{I_{j_0}^1}\), \({\widetilde{h}}_{I_{j}^1} = h_{I_{j}^1}^0\) for \(j \ne j_0\), and that there exist two indices \(j_1,j_2 \in \{1, \ldots , n+1\}\), \(j_1 \not =j_2\), so that \({\widetilde{h}}_{I_{j_1}^2}=h_{I_{j_1}^2}\), \({\widetilde{h}}_{I_{j_2}^2}=h_{I_{j_2}^2}\) and for the remaining indices \(j \not \in \{j_1, j_2\}\) we have \({\widetilde{h}}_{I_j^2} \in \{h_{I_j^2}^0, h_{I_j^2}\}\). Moreover, \(a_{K,(R_j)} = a_{K, R_1, \ldots ,R_{n+1}}\) is a scalar satisfying the usual normalization (3.8). Of course, \((SQ)_{i,k}\) is defined symmetrically.
Partial Paraproducts Partial paraproducts are hybrids of \(\pi \) and S or \(\pi \) and Q.
Let \(i=(i_1, \dots , i_{n+1})\), where \(i_j \in \{0,1,\ldots \}\). An n-linear bi-parameter partial paraproduct \((S\pi )_i\) with the paraproduct component on \({\mathbb {R}}^{d_2}\) takes the form
where the functions \({\widetilde{h}}_{I_j^1}\) and \(u_{j, K^2}\) satisfy the following. There are \(j_0,j_1 \in \{1, \ldots , n+1\}\), \(j_0 \not =j_1\), so that \({\widetilde{h}}_{I_{j_0}^1}=h_{I_{j_0}^1}\), \({\widetilde{h}}_{I_{j_1}^1}=h_{I_{j_1}^1}\) and for the remaining indices \(j \not \in \{j_0, j_1\}\) we have \({\widetilde{h}}_{I_j^1} \in \{h_{I_j^1}^0, h_{I_j^1}\}\). There is \(j_2 \in \{1, \ldots , n+1\}\) so that \(u_{j_2, K^2} = h_{K^2}\) and for the remaining indices \(j \ne j_2\) we have \(u_{j, K^2} = \frac{1_{K^2}}{|K^2|}\). Moreover, the coefficients are assumed to satisfy
Of course, \((\pi S)_i\) is defined symmetrically.
A modified n-linear partial paraproduct \((Q\pi )_{k}\) with the paraproduct component on \({\mathbb {R}}^{d_2}\) takes the form
for some \(j_0\)—here \({\widetilde{h}}_{I_{j_0}^1} = h_{I_{j_0}^1}\), \({\widetilde{h}}_{I_{j}^1} = h_{I_{j}^1}^0\) for \(j \ne j_0\) and \(u_{j, K^2}\) are like in (3.10). The constants satisfy the same normalization.
Full Paraproducts An n-linear bi-parameter full paraproduct \(\Pi \) takes the form
where the functions \(u_{j, K^1}\) and \(u_{j, K^2}\) are like in (3.10). The coefficients are assumed to satisfy
where the supremum is over open sets \(\Omega \subset {\mathbb {R}}^d = {\mathbb {R}}^{d_1} \times {\mathbb {R}}^{d_2}\) with \(0< |\Omega | < \infty \).
Comparison to the Usual Model Operators The modified model operators can be written as suitable sums of the standard operators. This is practical when one is willing to lose \(\frac{1}{2}\) of kernel regularity or if some estimates are too difficult to carry out for the more complicated modified operators. However, some regularity is always lost if this decomposition is used, so it is preferable to make do without it. To communicate the gist we only give the following formulation.
Lemma 3.11
Let \(Q_k\), \(k = (k_1, k_2)\), be a modified n-linear bi-parameter shift. Then
where each \(S = S^{u,i_1,i_2}\) is a standard n-linear bi-parameter shift of complexity \(i^m_{S, j}\), \(j \in \{1, \ldots , n+1\}\), \(m \in \{1,2\}\), satisfying
Similarly, a modified/standard shift can be represented using standard shifts and a modified partial paraproduct can be represented using standard partial paraproducts.
Proof
For notational convenience we consider a shift \(Q_k\) of the particular form
There is no essential difference in the general case.
We define
and
We can write the shift with these similarly as in (3.12) just by replacing a with b and A with B.
For the moment we define the following shorthand. For a cube I and integers \(l,j_0 \in \{1,2, \dots \}\) we define
where \({\text {id}}\) denotes the identity operator.
Let \(R_1, \dots , R_{n+1}\) be as in the summation of \(Q_k\). We use the above notation in both parameters, and we denote this, as usual, with superscripts \(D^1_{I,l}(j,j_0)\) and \(D^2_{I,l}(j,j_0)\). With some work (we omit the details) it can be shown that
which gives that
Also, we have that
and
which gives that
and
Finally, we write that
Using the above decompositions we have the identity
The terms \(\Sigma ^1_{m_1,m_2}\) with \(m_1,m_2 \in \{1, \dots , n\}\) and the terms inside the parentheses will be written as sums of standard shifts.
First, we take one \(\Sigma ^1_{m_1,m_2}\) with \(m_1,m_2 \in \{1, \dots , n\}\). For convenience of notation we choose the case \(m_1=m_2=:m\). Recall that
Expanding
there holds that
Since
we see that
where \(S_{(0, \dots , 0,(i_1,i_2), k, \dots , k)}\) is a standard n-linear bi-parameter shift. The case of general \(m_1, m_2\) is analogous.
We turn to the terms \(\Sigma ^1_{n+1,m_2}-\Sigma ^2_{m_2}\). The terms \(\Sigma ^1_{m_1,n+1}-\Sigma ^3_{m_1}\) are symmetrical. Let \(m_2 \in \{1, \dots , n\}\). After expanding \(P^2_{K^2,k_2-1}\) in the slot \(m_2\) we have that \(\Sigma ^1_{n+1,m_2}-\Sigma ^2_{m_2}\) can be written as
This splits the difference \(\Sigma ^1_{n+1,m_2}-\Sigma ^2_{m_2}\) as
We fix one \(i_2\) at this point.
Let \(g_j^{m_2}:=g_j= \langle f_j \rangle ^2_{K^2}\) for \(j \in \{1, \dots , m_2-1\}\), \(g_{m_2}^{m_2}:=g_{m_2}= \langle f_{m_2}, h_{L^2} \rangle _2 \) and \(g_j^{m_2}:=g_j= \langle f_j \rangle ^2_{I^2_j}\) for \(j \in \{m_2+1, \dots , n\}\). Using this notation we have that the term inside the brackets is \( \prod _{j=1}^n \langle g_j \rangle _{K^1}-\prod _{j=1}^n \langle g_j \rangle _{I^1_{n+1}}. \) We write that
Then, we write \(\prod _{j=1}^n \langle g_j \rangle _{(I^1_{n+1})^{(i_1)}}-\prod _{j=1}^n \langle g_j \rangle _{(I^1_{n+1})^{(i_1+1)}}\) as the sum
Expanding
we get that \(\prod _{j=1}^n \langle g_j \rangle _{K^1}-\prod _{j=1}^n \langle g_j \rangle _{I^1_{n+1}}\) equals
This identity splits \(\Sigma ^{1,2}_{m_2,i_2}\) further as \(\Sigma ^{1,2}_{m_2,i_2} =: -\sum _{i_1=0}^{k_1-1} \sum _{m_1=1}^n \Sigma ^{1,2}_{m_1,m_2,i_1,i_2}\).
We fix some \(m_1\) and \(i_1\) and consider the corresponding term. For convenience of notation we look at the case \(m_1=m_2=:m\). There holds that
This is seen as a standard shift once we reorganize the summation and verify the normalization. We take \((I^1_{n+1})^{(i_1+1)}\) as the new “top cube” in the first parameter (\((I^1_{n+1})^{(i_1+1)}\) corresponds to \((L^1)^{(1)}\) in the summation below). There holds that \( \Sigma ^{1,2}_{m,m,i_1,i_2} \) equals
where
We have the estimate
Notice that the term in the first line in the right hand side is \(2^{d_1(n-m)/2}\) times the right normalization of the shift, since in \(\Sigma ^{1,2}_{m,m,i_1,i_2}\) we have the cubes \(L^1\) related to \(f_j\) with \(j \in \{m+1, \dots , n\}\). Also, the term in the second line is almost cancelled out when one changes the averages in \(\Sigma ^{1,2}_{m,m,i_1,i_2}\) into pairings against non-cancellative Haar functions.
We conclude that for some \(C \ge 1\) we have
where S is a standard n-linear bi-parameter shift of the given complexity. The case of general \(m_1, m_2\) is analogous.
Finally, we look at the term \(\Sigma ^1_{n+1,n+1}-\Sigma ^2_{n+1}-\Sigma ^3_{n+1}+\Sigma ^4\) which by definition is
Consider the rectangles \(K, R_1, \dots , R_{n+1}\) as fixed for the moment. There holds that \(\prod _{j=1}^n \langle f_j \rangle _K-\prod _{j=1}^n \langle f_j \rangle _{I^1_{n+1} \times K^2}\) equals
Similarly, we have that \(-\prod _{j=1}^n \langle f_j \rangle _{K^1 \times I^2_{n+1}} + \prod _{j=1}^n \langle f_j \rangle _{R_{n+1}}\) equals
Let \(g^{m_1,i_1}_j= \langle f_j \rangle ^1_{(I^1_{n+1})^{(i_1+1)}}\) for \(j \in \{1, \dots , m_1-1\}\), \(g^{m_1,i_1}_{m_1}= \langle f_{m_1}, h_{(I^1_{n+1})^{(i_1+1)}} \rangle _1\) and \(g^{m_1,i_1}_j= \langle f_j \rangle ^1_{(I^1_{n+1})^{(i_1)}}\) for \(j \in \{m_1+1, \dots ,n\}\). The sum of (3.15) and (3.16) can similarly be split as
When one recalls the definition of the functions \(g_j^{m_1,i_1}\) and writes this in terms of the functions \(f_j\), one has that in the first parameter \(f_j\) is paired with \(1_{(I_{n+1}^1)^{(i_1+1)}}/|(I_{n+1}^1)^{(i_1+1)}|\) for \(j=1, \dots , m_1-1\), \(f_{m_1}\) with \(h_{(I^1_{n+1})^{(i_1+1)}}\) and \(f_j\) with \(1_{(I_{n+1}^1)^{(i_1)}}/|(I_{n+1}^1)^{(i_1)}|\) for \(j=m_1+1, \dots , n\). Each \(f_j\) is paired similarly in the second parameter. In the case \(m_1=m_2=:m\) the summand in (3.17) can be written as
The splitting in (3.17) gives us the identity
We fix some \(i_1\) and \(i_2\) and consider the case \(m_1=m_2=:m\). From (3.18) we see that
where
The coefficient satisfies the estimate
Thus, we see that \(C^{-1}\Sigma ^{1,2,3,4}_{m,m,i_1,i_2}\) is a standard n-linear bi-parameter shift. The complexity of the shift is \(((0,0), \dots , (0,0),(1,1),\dots ,(1,1), (i_1+1,i_2+1))\) with m zeros. The case of general \(m_1\) and \(m_2\) is analogous. \(\square \)
Bi-parameter Representation Theorem We set
and denote the expectation over the product probability space by
We also set \({\mathcal {D}}_0 = {\mathcal {D}}^1_0 \times {\mathcal {D}}^2_0\), where \({\mathcal {D}}_0^i\) is the standard dyadic grid of \({\mathbb {R}}^{d_i}\). We use the notation
Given \(\sigma = (\sigma _1, \sigma _2)\) and \(R = I_1 \times I_2 \in {\mathcal {D}}_0\) we set
Theorem 3.19
Suppose that T is an n-linear bi-parameter \((\omega _1, \omega _2)\)-CZO, where \(\omega _i \in {\text {Dini}}_{1/2}\). Then we have
where
defined in \({\mathcal {D}}_{\sigma }\), and if the operator does not depend on \(k_1\) or \(k_2\) then that particular \(k_i = 0\).
Proof
We decompose
where \(R_1, \ldots , R_{n+1} \in {\mathcal {D}}_\sigma = {\mathcal {D}}_{\sigma _1} \times {\mathcal {D}}_{\sigma _2}\) for some \(\sigma = (\sigma _1, \sigma _2)\) and \(R_j = I_j^1 \times I_j^2\).
The Main Terms For \(j_1, j_2\) we let
These are symmetric and we choose to deal with \(\Sigma _{\sigma } := \Sigma _{n, n+1, \sigma }\). After collapsing the relevant sums we have
where \(\ell (R_j) := ( \ell (I_j^1), \ell (I_j^2))\) for \(R_j = I_j^1 \times I_j^2\).
For \(R = I^1 \times I^2\) we define
Using this notation we write
where \(A_{R_1, \dots , R_{n+1}}^{n,n+1}(f_1, \dots , f_{n+1})=A_{R_1, \dots , R_{n+1}}^{n,n+1}\) is defined in (3.9).
We have
and
Then, we further have that the difference of the first two terms in the right hand side of (3.20) equals
This gives us the decomposition
where inside the brackets we have the corresponding term as in (3.21) and (3.22).
The identity (3.23) splits \(\Sigma _\sigma \) into four terms \(\Sigma _\sigma =\Sigma _\sigma ^1+\Sigma _\sigma ^2+\Sigma _\sigma ^3+\Sigma _\sigma ^4\).
The Shift Case \(\Sigma _\sigma ^1\) We begin by looking at \(\Sigma _\sigma ^1\), that is, the term coming from \([\, \cdot \,] \) in (3.23). Let us further define the abbreviation
so that
If \(R=I^1\times I^2\) is a rectangle and \(m=(m^1,m^2) \in {\mathbb {Z}}^{d_1} \times {\mathbb {Z}}^{d_2}\), then we define \(I^i \dot{+} m^i:=I^i+m^i\ell (I^i)\) and \(R \dot{+} m:= (I^1\dot{+}m^1)\times (I^2\dot{+}m^2)\). Notice that if \(I^1_i=I^1_j\) for all i, j or \(I^2_i=I^2_j\) for all i, j then \(\varphi _{R_1, \dots , R_{n+1}}=0\). Thus, there holds that
As in [17] we say that \(I \in {\mathcal {D}}_{\sigma _i}\) is k-good for \(k \ge 2\)—and denote this by \(I \in {\mathcal {D}}_{\sigma _i, {\text {good}}}(k)\)—if \(I \in {\mathcal {D}}_{\sigma _i}\) satisfies
Notice that for all \(I \in {\mathcal {D}}_0^i\) we have
Next, we consider \({\mathbb {E}}_\sigma \Sigma ^1_\sigma \) and add goodness to the rectangles R. Recall that \({\mathbb {E}}_\sigma ={\mathbb {E}}_{\sigma _1}{\mathbb {E}}_{\sigma _2}\). We write \({\mathcal {D}}_{\sigma , {\text {good}}}(k_1,k_2):= {\mathcal {D}}_{\sigma _1, {\text {good}}}(k_1) \times {\mathcal {D}}_{\sigma _2, {\text {good}}}(k_2)\). There holds that
Therefore, we have shown that
where
and C is a large enough constant.
Let \(m_1, \dots , m_{n+1}\) and \(R=I^1\times I^2\) be as in the definition of \( Q_{k_1, k_2}\). The goodness of the rectangle R easily implies (we omit the details, see [17]) that \((R \dot{+} m_j)^{(k_1, k_2)} = R^{(k_1, k_2)} =: K\) for all \(j \in \{1, \ldots , n+1\}\). Recall the definition of \(\varphi _{R \dot{+} m_1, \dots ,R \dot{+} m_{n+1} }\) from (3.24). Therefore, to conclude that \( Q_{k_1,k_2}\) is a modified bi-parameter n-linear shift it remains to prove the normalization
Let us first assume that \(k_1 \sim 1 \sim k_2\). Since \(m^1_i \not =0\) and \(m^2_j \not =0\) for some i and j we may use the full kernel representation of T to conclude that the left hand side of (3.27) is less than
Applying the size of the kernel K this is further dominated by
Notice that this is the right estimate, since \(\omega _i(2^{-k_i}) \sim 1\) and \(|K|= |R^{(k_1,k_2)}| \sim |R|= |I^1| |I^2|\).
Suppose then that \(k_1\) and \(k_2\) are large enough so that we can use the continuity assumption of the full kernel K. Using the zero integrals of \(h_{I^1}\) and \(h_{I^2}\) there holds that the left hand side of (3.27) equals
where \(c_{I^i}\) denotes the center of the corresponding cube. Here one can use the continuity assumption of K, which leads to a product of two one-parameter integrals that are easily estimated.
What remains is the case where, for example, \(k_1 \sim 1\) and \(k_2\) is large. This is handled similarly to the above two cases using the mixed size and continuity assumption of K. This concludes the proof of (3.27), and we are done dealing with \({\mathbb {E}}_\sigma \Sigma _\sigma ^1\).
The Partial Paraproduct Cases \(\Sigma _\sigma ^2\) and \(\Sigma _\sigma ^3\) Next, we look at the symmetric terms \({\mathbb {E}}_\sigma \Sigma _\sigma ^2\) and \({\mathbb {E}}_\sigma \Sigma _\sigma ^3\). We explicitly consider \({\mathbb {E}}_\sigma \Sigma _\sigma ^2\) here. Recall that \(\Sigma _\sigma ^2\) equals
Since the difference \(A_{I^1_1 \times I^2_{n+1}, \dots , I^1_{n+1} \times I^2_{n+1}}^{n,n+1} -A_{I^1_n \times I^2_{n+1}, \dots , I^1_n \times I^2_{n+1}}^{n,n+1}\) depends only on the cube \(I_{n+1}^2\) in the second parameter we can further rewrite this as
Let us write the summand in (3.29) as \(\varphi _{I_1^1, \dots , I_{n+1}^1,I^2}\). By proceeding in the same way as above with \({\mathbb {E}}_\sigma \Sigma _\sigma ^1\) we have that
where
The k-goodness of \(I^1\) implies that here \((I^1\dot{+}m_j)^{(k)}=(I^1)^{(k)}=:K^1\) for all j. Therefore, to conclude that \((Q\pi )_k\) is a modified partial paraproduct with the paraproduct component in \({\mathbb {R}}^{d_2}\) it remains to show that if we fix \(m_1, \dots , m_{n+1}\) and \(I^1\) as in the above sum then
We verify the above \({\text {BMO}}\) condition by taking a cube \(I^2\) and a function \(a_{I^2}\) such that \(a_{I^2} = a_{I^2}1_{I^2}\), \(|a_{I^2}| \le 1\) and \(\int a_{I^2}=0\), and showing that
For a suitably large constant C (so that we can use the continuity assumption of the kernel below) we split the pairing as
Let us show that the first term in (3.33) is dominated by \(\omega _1(2^{-k}) |I^1|^{(n+1)/2}|I^2|/|K^1|^n\). We have two cases. The case that \(k \sim 1\) is handled with the mixed size and continuity assumption of K. The case that k is large is handled with the continuity assumption of K. We show the details for the case \(k \sim 1\). The other case is done similarly (see also the paragraph containing (3.28)).
We assume that \(k \sim 1\). Since \(a_{I^2}\) has zero integral the pairing that we are estimating equals (by definition)
The mixed size and continuity property of K implies that the absolute value of the last integral is dominated by
The integral related to \({\mathbb {R}}^{d_1}\) is dominated by \(|I^1|^{-(n-1)/2}\).
Consider the integral related to \({\mathbb {R}}^{d_2}\). By first estimating that
with some work we see that the integral over \({\mathbb {R}}^{(n+1)d_2}\) is dominated by
In conclusion, we showed that the first term in (3.33) is dominated by \(|I^1|^{-(n-1)/2}|I^2|\), which is the right estimate in the case \(k \sim 1\).
We turn to consider the second term in (3.33). We again split it into two by writing \(1=1_{(CI^2)^c}+1_{CI^2}\) in the second slot. The part with \(1_{(CI^2)^c}\) is estimated in the same way as above and then one continues with the part related to \(1_{CI^2}\). This is repeated until we are only left with the term
The estimate for this uses the partial kernel representations of T. Again, we have the two cases that either \(k \sim 1\) or k is large. These are handled in the same way using either the size or the continuity of the partial kernels. We consider explicitly the case that k is large. Using the zero integral of \(h_{I^1}\) we have that the above pairing equals
Taking absolute values and using the continuity of the partial kernel leads to
By assumption there holds that \(C(1_{CI^2},\dots ,1_{CI^2},a_{I^2}) \lesssim |I^2|\) and the integral is dominated by \(\omega _1(2^{-k}) |I^1|^{(n+1)/2}{|K^1|^n}\). This concludes the proof of (3.32) and also finishes our treatment of \({\mathbb {E}}_\sigma \Sigma _\sigma ^2\).
The Full Paraproduct \(\Sigma _\sigma ^4\) Recall that
which equals
This is directly a full paraproduct as
and so we are done with this term. This completes the treatment of the main terms; no further full paraproducts will appear.
The Remainder \({\text {Rem}}_{\sigma }\) To finish the proof of the bi-parameter representation theorem it remains to discuss the remainder term \({\text {Rem}}_{\sigma }\). Some of the weak boundedness type assumptions are used here—but there is nothing surprising about how they are used and we do not focus on that. We only explain the structural idea.
An \((n+1)\)-tuple \((I^i_1, \dots , I_{n+1}^i)\) of cubes \(I^i_j \in {\mathcal {D}}_{\sigma _i}\) belongs to \({\mathcal {I}}_{\sigma _i}\) if the following holds: if j is an index such that \(\ell (I^i_j) \le \ell (I^i_k)\) for all k, then there exists at least one index \(k_0 \not = j\) so that \(\ell (I^i_j) = \ell (I^i_{k_0})\). The remainder term can be written as
where as usual \(R_i=I^1_i \times I^2_i\). Let us write this as
First, we look at the terms \({\text {Rem}}_{\sigma ,j_1}^1\) and \({\text {Rem}}_{\sigma ,j_2}^2\) which are analogous. Consider for example \({\text {Rem}}_{\sigma ,n+1}^1\). We further divide \({\mathcal {I}}_{\sigma _2}\) into subcollections by specifying the slots where the smallest cubes are. For example, we consider here the part of the sum with the tuples \((I^2_1, \dots , I^2_{n+1})\) such that \(\ell (I^2_i)>\ell (I^2_n)=\ell (I^2_{n+1})\) for all \(i=1, \dots ,n-1\). By collapsing the relevant sums of martingale differences the term we are dealing with can be written as
In the first parameter there is only one martingale difference and in the second parameter there are two (in the general case at least two). Thus, the strategy is that we will write this in terms of model operators that have a modified shift or a paraproduct structure in the first parameter and a standard shift structure in the second parameter. We omit the details.
Finally, we consider \({\text {Rem}}_{\sigma }^3\). This is also divided into several cases by specifying the places of the smallest cubes in both parameters. For example, for notational convenience we take the part where \(\ell (I^1_1)=\ell (I^1_{n+1}) < \ell (I^1_i)\) and \(\ell (I^2_1)=\ell (I^2_{n+1}) < \ell (I^2_i)\) for all \(i=2, \dots , n\). Notice that in general the places and the number of the smallest cubes do not need to be the same in both parameters. After collapsing the relevant sums of martingale differences the term we are looking at is
Here we have two (in the general case at least two) martingale differences in each parameter so this will be written in terms of standard bi-parameter n-linear shifts. We omit the details. This completes the proof. \(\square \)
Corollaries We indicate some corollaries—we start with the most basic unweighted boundedness on the Banach range of exponents.
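Throughout these corollaries we use the standard Haar function conventions of bi-parameter representation theorems. As a reminder, in our hedged formulation (the paper fixes its precise normalizations earlier in the text):

```latex
% Standard (assumed) Haar conventions: for a dyadic cube I,
% h_I denotes a cancellative Haar function and h_I^0 a non-cancellative one:
h_I^0 := \frac{1_I}{|I|^{1/2}}, \qquad \int h_I = 0, \qquad |h_I| = \frac{1_I}{|I|^{1/2}};
% for a dyadic rectangle R = I^1 \times I^2 the bi-parameter Haar function
% is the tensor product
h_R := h_{I^1} \otimes h_{I^2}.
```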
Proposition 3.37
Let \(p_j \in (1, \infty )\), \(j=1, \dots ,n+1\), be such that \(\sum _{j=1}^{n+1} 1/p_j=1\). Suppose that \(Q_k\) is a modified n-linear bi-parameter shift. Then the estimate
holds.
Suppose that \((QS)_{k,i}\) is a modified/standard shift (here \(k \in \{1,2, \dots \}\) and \(i=(i_1, \dots , i_{n+1})\)). Then the estimate
holds.
Proof
We only prove the statement for the operator \(Q_k\). This essentially contains the proof for \((QS)_{k,i}\).
We assume \(Q_k\) has the explicit form
Using the notation (3.13) there holds that
We do the same decomposition with the other three terms inside the bracket \([ \, \cdot \,]\). This splits \([\, \cdot \, ]\) into a sum over \(m_1,m_2 \in \{1, \dots , n+1\}\). Then, we notice that all the terms in the sum with \(m_1=n+1\) or \(m_2=n+1\) cancel out. Thus, we get a splitting of \(\langle Q_k(f_1, \ldots , f_n), f_{n+1} \rangle \) into a sum over \(m_1,m_2 \in \{1, \dots , n\}\). All the terms with different \(m_1\) and \(m_2\) are estimated separately.
In what follows—for notational convenience—we will focus on the case \(m_1 = m_2 =: m \in \{1, \ldots , n\}\), and we define \(D^1_{K^1,k_1}(j,m)D^2_{K^2,k_2}(j,m) =: D_{K, k}(j,m)\). The term in the splitting of \(\langle Q_k(f_1, \ldots , f_n), f_{n+1} \rangle \) corresponding to \(m=m_1=m_2\) can be written as the sum
where
and \(U_2\), \(U_3\) and \(U_4\) are defined similarly just by replacing \(h^0_{R_j}\), \(j \in \{1, \dots , n\}\), by \(h_{I_{n+1}^1 \times I_j^2}^0\), \(h_{I_{j}^1 \times I_{n+1}^2}^0\) and \(h_{R_{n+1}}^0\), respectively.
With some direct calculations it can be shown that for all \(i \in \{1, \ldots , 4\}\) we have
From here the estimate can be finished by Hölder’s inequality, the Fefferman–Stein inequality and square function estimates, see Lemma 2.4. \(\square \)
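The Fefferman–Stein inequality invoked here is, in its weighted one-parameter form, the following standard statement (recalled for the reader's convenience; the proof above uses its bi-parameter analogue with the strong maximal function in place of \(M\)):

```latex
% Weighted Fefferman--Stein vector-valued maximal inequality,
% M = Hardy--Littlewood maximal operator, 1 < p < \infty, w \in A_p:
\Big\| \Big( \sum_{k} (M f_k)^2 \Big)^{1/2} \Big\|_{L^p(w)}
\lesssim
\Big\| \Big( \sum_{k} |f_k|^2 \Big)^{1/2} \Big\|_{L^p(w)}.
```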
Next, we look at the modified partial paraproducts. We will use the well-known one-parameter \(H^1\)-\({\text {BMO}}\) duality estimate
where the cubes I are in some dyadic grid.
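For orientation, a standard formulation of such a one-parameter \(H^1\)-\({\text {BMO}}\) duality estimate reads as follows (this is our hedged recollection of the usual statement, with implicit constants immaterial):

```latex
% H^1-BMO duality in dyadic form: for coefficients (a_I) and b \in BMO,
\sum_{I \in \mathcal{D}} |a_I|\, |\langle b, h_I \rangle|
\lesssim \|b\|_{\mathrm{BMO}}
\Big\| \Big( \sum_{I \in \mathcal{D}} |a_I|^2 \, \frac{1_I}{|I|} \Big)^{1/2} \Big\|_{L^1}.
```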
Proposition 3.40
Let \(p_j \in (1, \infty )\), \(j=1, \dots ,n+1\), be such that \(\sum _{j=1}^{n+1} 1/p_j=1\). Suppose \((Q\pi )_k\) is a modified n-linear partial paraproduct. Then the estimate
holds.
Proof
We assume that \(\langle (Q\pi )_k(f_1, \ldots , f_n), f_{n+1}\rangle \) has the form
We decompose
and similarly with the other term inside the bracket \([ \, \cdot \, ]\). Notice that the terms with \(m=n+1\) cancel out. Thus, we get a decomposition of \(\langle (Q\pi )_k(f_1, \ldots , f_n), f_{n+1}\rangle \) into a sum over \(m \in \{1, \dots , n\}\). The terms with different m are estimated separately.
Fix one m. The term from the decomposition of \(\langle (Q\pi )_k(f_1, \ldots , f_n), f_{n+1}\rangle \) related to m is
where \(\langle U_1(f_1, \ldots , f_n), f_{n+1}\rangle \) equals
and \(\langle U_2(f_1, \ldots , f_n), f_{n+1}\rangle \) is defined similarly just by replacing \(h^0_{I^1_j}\), \(j=1, \dots , n\), with \(h^0_{I^1_{n+1}}\).
We consider \(U_1\) first. From the one-parameter \(H^1\)-\({\text {BMO}}\) duality estimate (3.39) we have that, with fixed \(K^1\) and \(I^1_1, \dots , I^1_{n+1}\), the sum over \(K^2\) of the absolute value of the summand in (3.41) is dominated by
The sum of this over \(K^1\) and \(I^1_1, \dots , I^1_{n+1}\) such that \((I^1_j)^{(k)}=K^1\) is less than
Notice that the square function related to \(f_{n+1}\) is just the bi-parameter square function \(S_{\mathcal {D}}\). To finish the estimate it remains to use the Fefferman–Stein inequality and square function estimates, see Lemma 2.4.
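Here \(S_{\mathcal {D}}\) denotes the bi-parameter dyadic square function; in the standard notation (which we assume) it is

```latex
% Bi-parameter dyadic square function over D = D^1 x D^2:
S_{\mathcal{D}} f := \Big( \sum_{R \in \mathcal{D}} |\langle f, h_R \rangle|^2 \,
\frac{1_R}{|R|} \Big)^{1/2}.
```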
The second term \(|\langle U_2(f_1, \ldots , f_n), f_{n+1}\rangle |\) satisfies the same upper bound (3.42), and can therefore be estimated in the same way. The proof is concluded. \(\square \)
The above, together with known estimates for standard operators, directly leads to Banach range boundedness of n-linear bi-parameter \((\omega _1, \omega _2)\)-CZOs with \(\omega _i \in {\text {Dini}}_{1/2}\). We do not push this further in this paper. For state-of-the-art estimates with genuinely multilinear weights (in the full multilinear range) see [31]; there we recorded some of the estimates with \({\text {Dini}}_{1}\) using the above representation theorem and the decomposition of modified operators in terms of standard operators.
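For the reader's convenience we recall the modified Dini classes appearing in these statements. This is our own hedged formulation of the condition in the spirit of [17], for an increasing modulus of continuity \(\omega\) with \(\omega(0)=0\):

```latex
% Modified Dini classes (hedged formulation):
\omega \in \mathrm{Dini}_\alpha
\iff
\|\omega\|_{\mathrm{Dini}_\alpha}
:= \int_0^1 \omega(t) \Big( 1 + \log \frac{1}{t} \Big)^{\alpha} \, \frac{dt}{t} < \infty
\iff
\sum_{k \ge 0} (1+k)^{\alpha}\, \omega(2^{-k}) < \infty.
```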
We are unable to perform the estimates of [31] with the regularity \({\text {Dini}}_{\frac{1}{2}}\). However, the linear case is special: the weighted estimates of linear modified model operators with a bound depending on the square root of the complexity are easy. Notice that in principle we have already done all the necessary work. For example, if we want to estimate \(\Vert Q_k f \Vert _{L^p(w)}\), we study the unweighted pairings \(\langle Q_k f,g \rangle \). Then, we proceed as in the linear case of Proposition 3.37. Depending on the form of the shift this leads us to terms corresponding to (3.38) such as
By Hölder’s inequality this is less than
Proposition 3.43
For every \(p \in (1, \infty )\) and bi-parameter \(A_p\) weight w we have
For completeness, we record the corresponding result for CZOs. Again, for multilinear weighted estimates with the optimal weight classes see [31].
Corollary 3.44
Let \(p_j \in (1, \infty )\), \(j=1, \dots ,n+1\), be such that \(\sum _{j=1}^{n+1} 1/p_j=1\). Suppose that T is an n-linear bi-parameter \((\omega _1, \omega _2)\)-CZO, where \(\omega _i \in {\text {Dini}}_{1/2}\). Then we have the Banach range estimate
In the linear case \(n=1\) we have the weighted estimate
whenever \(p \in (1,\infty )\) and \(w \in A_p\) is a bi-parameter weight.
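Here bi-parameter \(A_p\) weights are defined via rectangles rather than cubes; in the standard formulation (assumed here), with \(1/p + 1/p' = 1\):

```latex
% Bi-parameter A_p condition: supremum over axis-parallel rectangles
% R = I^1 x I^2 in R^{d_1} x R^{d_2}:
w \in A_p(\mathbb{R}^{d_1} \times \mathbb{R}^{d_2})
\iff
[w]_{A_p} := \sup_{R = I^1 \times I^2}
\langle w \rangle_R \, \big\langle w^{1-p'} \big\rangle_R^{\,p-1} < \infty.
```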
4 Commutator Estimates
The basic form of a commutator is \([b,T]:f \mapsto bTf - T(bf)\). We are interested in various iterated versions in the multi-parameter setting and with mild kernel regularity.
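For concreteness, expanding a second order iterated commutator is pure operator algebra, independent of any kernel assumptions:

```latex
% Expansion of the iterated commutator [T_1, [b, T_2]]:
[T_1, [b, T_2]] f
= T_1 \big( b\, T_2 f \big) - T_1 T_2 (b f) - b\, T_2 T_1 f + T_2 \big( b\, T_1 f \big),
```

so boundedness of such commutators encodes joint cancellation between \(b\) and the operators \(T_i\).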
For a bi-parameter weight \(w \in A_2({\mathbb {R}}^{d_1} \times {\mathbb {R}}^{d_2})\) and a locally integrable function b we define the weighted product \({\text {BMO}}\) norm
where the supremum is over all dyadic grids \({\mathcal {D}}^i\) on \({\mathbb {R}}^{d_i}\) and \({\mathcal {D}}= {\mathcal {D}}^1 \times {\mathcal {D}}^2\), and over all open sets \(\Omega \subset {\mathbb {R}}^d := {\mathbb {R}}^{d_1} \times {\mathbb {R}}^{d_2}\) for which \(0< w(\Omega ) < \infty \). The following theorem, which is the two-weight Bloom version of [9], was proved in [29] with \(\omega _i(t) = t^{\gamma _i}\).
Theorem 4.2
Suppose that \(T_i\) is a one-parameter \(\omega _i\)-CZO, where \(\omega _i \in {\text {Dini}}_{3/2}\). Let \(b :{\mathbb {R}}^d \rightarrow {\mathbb {C}}\), \(p \in (1, \infty )\), \(\mu , \lambda \in A_p({\mathbb {R}}^d)\) be bi-parameter weights and \(\nu = \mu ^{1/p} \lambda ^{-1/p} \in A_2({\mathbb {R}}^d)\) be the associated bi-parameter Bloom weight. Then we have
Proof
Let \(\Vert b\Vert _{{\text {BMO}}_{\text {prod}}(\nu )} = 1\). We need to e.g. bound \(\Vert [Q_{k_1}, [Q_{k_2}, b]]f\Vert _{L^p(\lambda )}\) for one-parameter modified shifts (which have a similar definition as in the bi-parameter case). It seems non-trivial to fully exploit the operators \(Q_{k}\) here, so we content ourselves with splitting the operators into standard shifts and bounding
and other similar terms, where \(S_{k_i, j_i}\) is a linear one-parameter shift on \({\mathbb {R}}^{d_i}\) of complexity \((k_i, j_i)\). Reaching \({\text {Dini}}_{1}\) would require replacing this step with a sharper estimate.
On page 11 of [29] it is recorded that
Interestingly, this part of the argument can be improved: there actually holds that
We will get back to this after completing the proof. Therefore, we have
Handling the other terms of the shift expansion of \([Q_{k_1}, [Q_{k_2}, b]]\) similarly, we get
Controlling commutators like \([Q_{k_1}, [\pi , b]]\) similarly we get the claim.
We return to (4.3) now. Decompositions are very involved in the bi-commutator case, and we prefer to give the idea of the improvement (4.3) by studying the simpler one-parameter situation \([b, S_{i,j}]\), where
is a one-parameter shift on \({\mathbb {R}}^d\) and \(b \in {\text {BMO}}(\nu )\);
Here we only have use for the expression on the right-hand side, which is the analogue of the bi-parameter definition (4.1). However, it is customary to define things as on the left-hand side in this one-parameter situation. The equivalence follows from the weighted John–Nirenberg inequality [34]
Of course, one-parameter commutators [b, T] can be handled even with \({\text {Dini}}_{0}\), but e.g. sparse domination proofs [25, 26] are restricted to the one-parameter setting, unlike these decompositions. To get started, we define the one-parameter paraproducts (with some implicit dyadic grid)
By writing \(b = \sum _{I} \Delta _{I} b\) and \(f = \sum _{J} \Delta _{J} f\), and collapsing sums such as \(1_I \sum _{J :I \subsetneq J} \Delta _{J} f = E_{I} f\), we formally have
We now decompose the commutator as follows
We have the well-known fact that \(\Vert A_k(b, f)\Vert _{L^p(\lambda )} \lesssim \Vert b\Vert _{{\text {BMO}}(\nu )} \Vert f\Vert _{L^p(\mu )}\) for \(k=1,2\)—this can be seen by using the weighted \(H^1\)-\({\text {BMO}}\) duality [37] (with \(a_I = \langle b, h_I\rangle \))
where
Combining this with the well-known estimate \(\Vert S_{i,j} f\Vert _{L^p(w)} \lesssim \Vert f\Vert _{L^p(w)}\) for all \(w \in A_p\) it follows that
The complexity dependence is coming from the remaining term
There are many ways to bound this, but the following way based on the \(H^1\)-\({\text {BMO}}\) duality—and executed in the particular way that we do below—gives the best dependence that we are aware of:
We write
where we further write
and similarly for \(\langle b \rangle _I - \langle b \rangle _K\). We dualize and e.g. look at
where we used the weighted \(H^1\)-\({\text {BMO}}\) duality. Here
and we can bound
We are done with the one-parameter case—the desired bi-parameter case can now be done completely similarly by tweaking the proof in [29] using the above idea. \(\square \)
Remark 4.5
The previous way to use the \(H^1\)-\({\text {BMO}}\) duality was to look at
where \(l = 0, \ldots , j-1\) is fixed, and to apply the \(H^1\)-\({\text {BMO}}\) duality to the whole K, L summation. With l fixed this yields a uniform estimate, and there is also a curious ‘extra’ cancellation present—we can even bound
that is, forget the \(\Delta _{K,j}\) from g. Then it remains to sum over l, which yields the dependence j instead of \(j^{1/2}\). The approach in our proof above is more efficient, and it utilizes all of the available cancellation.
Remark 4.6
An interesting question is whether we can have \(\alpha = 1\) instead of \(\alpha = 3/2\) by somehow more carefully exploiting the operators \(Q_k\)—this would appear to be the optimal result theoretically obtainable by the current methods.
We also note that it is certainly possible to handle higher order commutators, such as, \([T_1, [T_2, [b, T_3]]]\).
We will continue with more multi-parameter commutator estimates – the difference to the above is that now even the singular integrals are allowed to be multi-parameter.
For a weight w on \({\mathbb {R}}^d := {\mathbb {R}}^{d_1} \times {\mathbb {R}}^{d_2}\) we say that a locally integrable function \(b :{\mathbb {R}}^d \rightarrow {\mathbb {C}}\) belongs to the weighted little BMO space \({\text {bmo}}_{}(w)\) if
where the supremum is over rectangles \(R=I^1 \times I^2 \subset {\mathbb {R}}^d\). If \(w=1\) we denote the unweighted little \({\text {BMO}}\) space by \({\text {bmo}}\). There holds that
see [19]. Here \({\text {BMO}}(w(x_1, \cdot ))\) and \({\text {BMO}}(w(\cdot , x_2))\) are the one-parameter weighted \({\text {BMO}}\) spaces. For example,
where the supremum is over cubes \(I^2 \subset {\mathbb {R}}^{d_2}\).
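The one-parameter weighted \({\text {BMO}}\) norm appearing here is the usual Muckenhoupt–Wheeden one; in the standard formulation (which we assume),

```latex
% One-parameter weighted BMO norm, supremum over cubes I,
% with w(I) = \int_I w and <b>_I the average of b over I:
\|b\|_{\mathrm{BMO}(w)} := \sup_{I} \frac{1}{w(I)} \int_I |b - \langle b \rangle_I|,
\qquad
\langle b \rangle_I := \frac{1}{|I|} \int_I b.
```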
The following theorem was proved in [28] with \(\omega _i(t) = t^{\gamma _i}\). The first order case [b, T] appeared before in [19]. See also [29] for the optimality of the space \({\text {bmo}}(\nu ^{1/m})\) in the case \(b_1 = \cdots = b_m = b\).
Theorem 4.8
Let \(p \in (1,\infty )\), \(\mu , \lambda \in A_p\) be bi-parameter weights and \(\nu := \mu ^{1/p}\lambda ^{-1/p}\). Suppose that T is a bi-parameter \((\omega _1, \omega _2)\)-CZO and \(m \in {\mathbb {N}}\). Then we have
if one of the following conditions holds:
-
(1)
T is paraproduct free and \(\omega _i \in {\text {Dini}}_{m/2+1}\);
-
(2)
\(m=1\) and \(\omega _i \in {\text {Dini}}_{3/2}\);
-
(3)
\(\omega _i \in {\text {Dini}}_{m+1}\).
Proof
The proof is similar in spirit to that of Theorem 4.2. We use Lemma 3.11 and estimates for the commutators of the usual bi-parameter model operators. If we use the bounds from [28] directly, we e.g. immediately get
Similarly, we can read an estimate for all the other model operators from [28]. This gives us the result under the higher regularity assumption (3). Indeed, when using the estimate (4.9) in connection with the representation theorem one ends up with the series
We split this into two according to whether \(k_1 \le k_2\) or \(k_1>k_2\) and, for example, there holds that
The first order case \(m=1\) with the desired regularity (assumption (2)) follows since the papers [1, 2, 19], which deal with commutators of the form \([T_1, [T_2, \ldots [b, T_k]]]\) where each \(T_k\) can be multi-parameter, include a proof of the first order case via the \(H^1\)-\({\text {BMO}}\) duality strategy. This strategy can be improved to give the additional square root save as in Theorem 4.2.
For \(m \ge 2\) the new square root save becomes tricky. The paper [28] is not at all based on the \(H^1\)-\({\text {BMO}}\) duality strategy on which this save is based (see the proof of Theorem 4.2). We can improve the strategy of [28] for shifts. Thus, we are able to make the square root save for paraproduct free T (assumption (1)). By this we mean that (both partial and full) paraproducts in the dyadic representation of T vanish, which could also be stated in terms of (both partial and full) “\(T1=0\)” type conditions. The reader can think of convolution form SIOs.
We begin by considering \([b_2, [b_1, S_i]]\), where \(i=(i_1,i_2)\), \(i_j=(i_j^1,i_j^2)\) and \(S_i\) is a standard bi-parameter shift of complexity i. The reductions in pages 23 and 24 of [28] (Sect. 5.1) give that we only need to bound the key term
where as usual \(K=K^1 \times K^2\) and \(R_j=I^1_j \times I^2_j\).
We write
This splits \(U^{b_1, b_2}\) into 16 different terms \(U^{b_1, b_2}_{m_1, m_2}\), where \(m_i \in \{1, \ldots , 4\}\) tells which one of the above terms we have for \(b_i\). These can be handled quite similarly, but there are some variations in the arguments. We will handle two representative ones.
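The four-term splitting of each \(b_i\) used here is, schematically, the standard bi-parameter expansion of a function against a rectangle \(R = I^1 \times I^2\). The exact cubes used in the proof differ; the following identity (easily checked by cancellation) only illustrates the structure:

```latex
% Schematic bi-parameter splitting of b against R = I^1 x I^2, where
% <b>^1_{I^1}(x_2) = |I^1|^{-1} \int_{I^1} b(x_1, x_2) dx_1 is the partial
% average in the first parameter (and similarly for <b>^2_{I^2}):
b = \big( b - \langle b \rangle^{1}_{I^1} - \langle b \rangle^{2}_{I^2}
        + \langle b \rangle_{R} \big)
  + \big( \langle b \rangle^{1}_{I^1} - \langle b \rangle_{R} \big)
  + \big( \langle b \rangle^{2}_{I^2} - \langle b \rangle_{R} \big)
  + \langle b \rangle_{R}.
```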
We begin by looking at the term
Write
and
Writing \(\big \langle b_1, \frac{1_{K^1}}{|K^1|} \otimes h_{L^2} \big \rangle = \int _{{\mathbb {R}}^{d_1}} \langle b_1, h_{L^2} \rangle _2 \frac{1_{K^1}}{|K^1|}\) and similarly for \(\big \langle b_2, h_{L^1} \otimes \frac{1_{I_1^2}}{|I_1^2|} \big \rangle \) we arrive at
The last line can be dominated by
We have now reached the term
Recall that with fixed \(x_2\) we have \(b(\cdot , x_2) \in {\text {BMO}}(\nu ^{1/2}(\cdot ,x_2))\), see (4.7). By weighted \(H^1\)-\({\text {BMO}}\) duality we now have that
The term \((i_1^1)^{1/2} \Vert b_2\Vert _{{\text {bmo}}(\nu ^{1/2})}\) is fine and we do not drag it along in the following estimates. We are left with the task of bounding
We now put the \(\int _{{\mathbb {R}}^{d_2}}\) inside and get the term
Then, we are left with
By weighted \(H^1\)-\({\text {BMO}}\) duality we have analogously as above that
Forgetting the factor \((i_1^2)^{1/2} \Vert b_1\Vert _{{\text {bmo}}(\nu ^{1/2})}\), which is as desired, we are then left with
Writing \(\nu ^{\frac{1}{2}} = \mu ^{\frac{1}{2p}} \lambda ^{\frac{1}{2p}} \cdot \lambda ^{-\frac{1}{p}}\) we bound this with
multiplied by
It remains to use square function bounds together with the Fefferman–Stein inequality. For the more complicated term with the function f the key thing to notice is that first \(\mu ^{1/2}\lambda ^{1/2} \in A_p\) and then that \(\nu ^{p/2} \mu ^{1/2}\lambda ^{1/2} = \mu \). We have controlled \(\langle U^{b_1, b_2}_{3,4} f, g \rangle \).
The bound for \(\langle U^{b_1, b_2}f, g \rangle \) follows by handling the other similar terms \(U^{b_1, b_2}_{m_1, m_2}\). There is a slight variation in the argument needed, for example, in the following term
We expand the differences of averages as
The key difference to the above term \(U^{b_1, b_2}_{3,4}\) is that we need to further split this into two by comparing whether we have \(V^1 \subset U^1\) or \(U^1 \subsetneq V^1\). The related two terms are handled symmetrically. The absolute value of the one coming from “\(V^1 \subset U^1\)” can be written as
The last line can be dominated by
Using the weighted \(H^1\)-\({\text {BMO}}\) duality as above we have
Forgetting the factor \( (i_2^1)^{1/2} \Vert b_2 \Vert _{{\text {bmo}}(\nu ^{1/2})}\) we have reached the term
which—after using the \(H^1\)-\({\text {BMO}}\) duality—produces \((i_2^1)^{1/2} \Vert b_1 \Vert _{{\text {bmo}}(\nu ^{1/2})}\) multiplied by
Similarly as with \(U^{b_1,b_2}_{3,4}\), this term is under control. The term with \(U^1 \subsetneq V^1\) is symmetric, and so we are also done with \(U^{b_1,b_2}_{1,1}\).
This ends our treatment of \(U^{b_1, b_2}\), since the above arguments showcased the only major difference between the various terms \(U^{b_1, b_2}_{m_1, m_2}\). Thus, we are done with \([b_2, [b_1, S_{i}]]\). By Lemma 3.11 we conclude that
By handling the higher order commutators similarly, we get the claim related to assumption (1). We omit these details. \(\square \)
Remark 4.11
The new square root save from the \(H^1\)-\({\text {BMO}}\) arguments reduces the required regularity from \(m+1\) to \(m/2+1\). In these higher order commutators this is more significant than the save that could theoretically be obtained by not using Lemma 3.11; the latter would change the \(+1\) to \(+1/2\).
Theorem 4.2 involves only one-parameter CZOs in its estimate
while the basic estimate
of Theorem 4.8 involves a bi-parameter CZO T. A joint generalization—considered in the unweighted case in [36]—is an estimate for
where each \(T_i\) can be a completely general m-parameter CZO. Then the appearing \({\text {BMO}}\) norm is some suitable combination of little \({\text {BMO}}\) and product \({\text {BMO}}\). See [1, 2] for a fully satisfactory Bloom type upper estimate in this generality – however, only for CZOs with the standard kernel regularity. The general case of [1, 2] is hard to digest, but let us formulate a model theorem of this type with mild kernel regularity.
Theorem 4.12
Let \({\mathbb {R}}^d = \prod _{i=1}^4 {\mathbb {R}}^{d_i}\) be a product space of four parameters and let \({\mathcal {I}}= \{{\mathcal {I}}_1, {\mathcal {I}}_2\}\), where \({\mathcal {I}}_1 = \{1,2\}\) and \({\mathcal {I}}_2 = \{3,4\}\), be a partition of the parameter space \(\{1, 2, 3, 4\}\). Suppose that \(T_i\) is a bi-parameter \((\omega _{1,i}, \omega _{2,i})\)-CZO on \(\prod _{j \in {\mathcal {I}}_i} {\mathbb {R}}^{d_j}\), where \(\omega _{j, i} \in {\text {Dini}}_{3/2}\). Let \(b :{\mathbb {R}}^d \rightarrow {\mathbb {C}}\), \(p \in (1, \infty )\), \(\mu , \lambda \in A_p({\mathbb {R}}^d)\) be 4-parameter weights and \(\nu = \mu ^{1/p} \lambda ^{-1/p}\) be the associated Bloom weight. Then we have
Here \({\text {bmo}}^{{\mathcal {I}}}(\nu )\) is the following weighted little product \({\text {BMO}}\) space:
where \({\bar{u}} = (u_i)_{i=1}^2\) is such that \(u_i \in {\mathcal {I}}_i\) and \({\text {BMO}}_{{\text {prod}}}^{{\bar{u}}}(\nu )\) is the natural weighted bi-parameter product \({\text {BMO}}\) space on the parameters \({\bar{u}}\). For example,
where the last weighted product \({\text {BMO}}\) norm is defined in (4.1).
The proof is again a combination of Lemma 3.11 with the known estimates for the commutators of standard model operators [1, 2]. However, there is again the additional square root save. Unlike in the situation of Theorem 4.8 above, this causes no significant new challenges, since these references are completely based on the \(H^1\)-\({\text {BMO}}\) strategy. In this regard the situation is closer to that of Theorem 4.2.
References
Airta, E.: Two-weight commutator estimates: general multi-parameter framework. Publ. Mat. 64(2), 681–729 (2020)
Airta, E., Li, K., Martikainen, H., Vuorinen, E.: Some new weighted estimates on product space. Indiana Univ. Math. J. (2019), to appear. https://www.iumj.indiana.edu/IUMJ/Preprints/8807.pdf
Barron, A., Conde-Alonso, J.M., Ou, Y., Rey, G.: Sparse domination and the strong maximal function. Adv. Math. 345, 1–26 (2019)
Barron, A., Pipher, J.: Sparse domination for bi-parameter operators using square functions (2017), preprint. arXiv:1709.05009
Chang, S.-Y.A., Fefferman, R.: A continuous version of duality of H1 with BMO on the bidisc. Ann. Math. (2) 112(1), 179–201 (1980)
Chang, S.-Y.A., Fefferman, R.: Some recent developments in Fourier analysis and Hp-theory on product domains. Bull. Am. Math. Soc. (N.S.) 12(1), 1–43 (1985)
Chang, S.-Y.A., Wilson, J.M., Wolff, T.H.: Some weighted norm inequalities concerning the Schrödinger operators. Comment. Math. Helv. 60(2), 217–246 (1985)
Cruz-Uribe, D., Martell, J.M., Pérez, C.: Sharp weighted estimates for classical operators. Adv. Math. 229(1), 408–441 (2012)
Dalenc, L., Ou, Y.: Upper bound for multi-parameter iterated commutators. Publ. Mat. 60(1), 191–220 (2016)
David, G., Journé, J.-L.: A boundedness criterion for generalized Calderón-Zygmund operators. Ann. Math. (2) 120(2), 371–397 (1984)
Deng, D., Yan, L., Yang, Q.: Blocking analysis and T(1) theorem. Sci. China Ser. A 41(8), 801–808 (1998)
Di Plinio, F., Li, K., Martikainen, H., Vuorinen, E.: Banach-valued multilinear singular integrals with modulation invariance. Int. Math. Res. Not. (2020), to appear. https://doi.org/10.1093/imrn/rnaa234
Di Plinio, F., Li, K., Martikainen, H., Vuorinen, E.: Multilinear operator-valued Calderón-Zygmund theory. J. Funct. Anal. 279(8), 108666 (2020)
Di Plinio, F., Li, K., Martikainen, H., Vuorinen, E.: Multilinear singular integrals on non-commutative Lp spaces. Math. Ann. 378(3–4), 1371–1414 (2020)
Figiel, T.: On equivalence of some bases to the Haar system in spaces of vector-valued functions. Bull. Polish Acad. Sci. Math. 36 (1988), no. 3-4, 119–131 (1989)
Figiel, T.: Singular integral operators: a martingale approach. In: Geometry of Banach Spaces (Strobl, 1989), (1990), pp. 95–110
Grau de la Herrán, A., Hytönen, T.: Dyadic representation and boundedness of nonhomogeneous Calderón- Zygmund operators with mild kernel regularity. Mich. Math. J. 67(4), 757–786 (2018)
Holmes, I., Lacey, M.T., Wick, B.D.: Commutators in the two-weight setting. Math. Ann. 367(1–2), 51–80 (2017)
Holmes, I., Petermichl, S., Wick, B.D.: Weighted little bmo and two-weight inequalities for Journé commutators. Anal. PDE 11(7), 1693–1740 (2018)
Hytönen, T.: Representation of singular integrals by dyadic operators, and the A2 theorem. Expo. Math. 35(2), 166–205 (2017)
Hytönen, T.P.: The sharp weighted bound for general Calderón-Zygmund operators. Ann. Math. (2) 175(3), 1473–1506 (2012)
Hytönen, T.P., Roncal, L., Tapiola, O.: Quantitative weighted estimates for rough homogeneous singular integrals. Isr. J. Math. 218(1), 133–164 (2017)
Journé, J.-L.: Calderón-Zygmund operators on product spaces. Rev. Mat. Iberoam. 1(3), 55–91 (1985)
Lacey, M.T.: An elementary proof of the A2 bound. Isr. J. Math. 217(1), 181–195 (2017)
Lerner, A.K., Ombrosi, S., Rivera-Ríos, I.P.: On pointwise and weighted estimates for commutators of Calderón-Zygmund operators. Adv. Math. 319, 153–181 (2017)
Lerner, A.K., Ombrosi, S., Rivera-Ríos, I.P.: Commutators of singular integrals revisited. Bull. Lond. Math. Soc. 51(1), 107–119 (2019)
Li, K., Martikainen, H., Ou, Y., Vuorinen, E.: Bilinear representation theorem. Trans. Am. Math. Soc. 371(6), 4193–4214 (2019)
Li, K., Martikainen, H., Vuorinen, E.: Bloom-type inequality for bi-parameter singular integrals: efficient proof and iterated commutators. Int. Math. Res. Not. IMRN (2019), rnz072. https://doi.org/10.1093/imrn/rnz072
Li, K., Martikainen, H., Vuorinen, E.: Bloom type upper bounds in the product BMO setting. J. Geom. Anal. 30, 3181–3203 (2019)
Li, K., Martikainen, H., Vuorinen, E.: Bilinear Calderón-Zygmund theory on product spaces. J. Math. Pures Appl. 138, 356–412 (2020)
Li, K., Martikainen, H., Vuorinen, E.: Genuinely multilinear weighted estimates for singular integrals in product spaces. Adv. Math. 393 (2021). https://doi.org/10.1016/j.aim.2021.108099
Martikainen, H.: Representation of bi-parameter singular integrals by dyadic operators. Adv. Math. 229(3), 1734–1761 (2012)
Martikainen, H., Vuorinen, E.: Dyadic-probabilistic methods in bilinear analysis. Mem. Am. Math. Soc. 274(1344) (2021). https://doi.org/10.1090/memo/1344
Muckenhoupt, B., Wheeden, R.L.: Weighted bounded mean oscillation and the Hilbert transform. Studia Math. 54(3), 221–237 (1975/76)
Ou, Y.: Multi-parameter singular integral operators and representation theorem. Rev. Mat. Iberoam. 33(1), 325–350 (2017)
Ou, Y., Petermichl, S., Strouse, E.: Higher order Journé commutators and characterizations of multiparameter BMO. Adv. Math. 291, 24–58 (2016)
Wu, S.: A wavelet characterization for weighted Hardy spaces. Rev. Mat. Iberoam. 8(3), 329–349 (1992)
Funding
Open Access funding provided by University of Helsinki including Helsinki University Central Hospital.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
H.M. was supported by the Academy of Finland through the Grants 294840 and 327271, and by the three-year research Grant 75160010 of the University of Helsinki. E.A. and E.V. were supported by the Academy of Finland through the Grant 327271. All are members of the Finnish Centre of Excellence in Analysis and Dynamics Research supported by the Academy of Finland (Project No. 307333).
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Airta, E., Martikainen, H. & Vuorinen, E. Product Space Singular Integrals with Mild Kernel Regularity. J Geom Anal 32, 24 (2022). https://doi.org/10.1007/s12220-021-00757-3
Keywords
- Singular integrals
- Commutators
- Weighted estimates
- Kernel regularity
- Multilinear analysis
- Multi-parameter analysis