Abstract
We prove Nikol’skii type inequalities that, for polynomials on the n-dimensional torus \(\mathbb {T}^n\), relate the \(L^p\)-norm with the \(L^q\)-norm (with respect to the normalized Lebesgue measure and \(0 <p <q < \infty \)). Among other things, we show that \(C=\sqrt{q/p}\) is the best constant such that \(\Vert P\Vert _{L^q}\le C^{\text {deg}(P)} \Vert P\Vert _{L^p}\) for all homogeneous polynomials P on \(\mathbb {T}^n\). We also prove an exact inequality between the \(L^p\)-norm of a polynomial P on \(\mathbb {T}^n\) and its Mahler measure M(P), which is the geometric mean of |P| with respect to the normalized Lebesgue measure on \(\mathbb {T}^n\). Using extrapolation, we transfer this estimate into a Khintchine–Kahane type inequality, which, for polynomials on \(\mathbb {T}^n\), relates a certain exponential Orlicz norm and Mahler’s measure. Applications are given, including some interpolation estimates.
1 Introduction
The Mahler measure of a polynomial P on \(\mathbb {C}^n\) is given by
$$\begin{aligned} M(P) := \exp \Big ( \int _{\mathbb {T}^n} \log |P(z)| \,{\text {d}}z \Big ), \end{aligned}$$(1)
where \({\text {d}}z = {\text {d}}z_1 \ldots {\text {d}}z_n\) stands for the normalized Lebesgue measure on the n-torus \(\mathbb {T}^n\). Thus M(P) is the geometric mean of P over the n–torus \(\mathbb {T}^n\) (we define \(M(0) = 0\)). We point out that Mahler [14] used the functional M as a powerful tool in a simple proof of the “Gelfond–Mahler inequality,” which found important applications in transcendence theory. It seems that the Mahler measure for polynomials in one complex variable appeared for the first time in Lehmer [13], where it is proved that if \(P(z)=\sum _{k=0}^m a_k z^k, \, z \in \mathbb {C}\) with \(a_m\ne 0\) and zeros \(\alpha _1, \ldots , \alpha _m \in \mathbb {C}\), then
$$\begin{aligned} M(P) = |a_m| \prod _{k=1}^{m} \max \{1, |\alpha _k|\}. \end{aligned}$$(2)
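As a quick numerical illustration (a sanity check added here, not part of the original argument), the geometric-mean definition of M and Lehmer's product formula can be compared for a concrete one-variable polynomial; the polynomial \(P(z)=(z-2)(z-1/2)\) below is our own hypothetical example, chosen so that it has no zeros on \(\mathbb {T}\).

```python
import cmath
import math

def mahler_numeric(poly, N=4096):
    # Geometric mean of |poly| over the unit circle, approximated by the
    # (spectrally accurate) trapezoidal rule on N equispaced points.
    total = 0.0
    for k in range(N):
        z = cmath.exp(2j * math.pi * k / N)
        total += math.log(abs(poly(z)))
    return math.exp(total / N)

# P(z) = (z - 2)(z - 1/2): Lehmer's formula gives
# M(P) = |a_2| * max(1, 2) * max(1, 1/2) = 2.
P = lambda z: (z - 2) * (z - 0.5)
M_numeric = mahler_numeric(P)
M_lehmer = 1 * max(1, 2) * max(1, 0.5)
print(M_numeric, M_lehmer)  # both equal 2 (up to quadrature error)
```

The agreement of the two values is exactly the content of Jensen's formula behind Lehmer's identity.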
Let us also recall the following crucial multiplicativity property of the Mahler measure M: For two polynomials P and Q on \(\mathbb {C}^n\),
$$\begin{aligned} M(P\,Q) = M(P)\, M(Q). \end{aligned}$$(3)
The following inequality due to Arestov [2] is a key result for this article (first results in this direction are due to Mahler [14] and Duncan [11]): For every polynomial P in one complex variable of degree \(\text {deg}(P)\le m\), and every \(0 < p < \infty \), we have
$$\begin{aligned} \Vert P\Vert _{L^p(\mathbb {T})} \le \Lambda (p, m)\, M(P), \end{aligned}$$(4)
where
$$\begin{aligned} \Lambda (p, m) := \Big ( \int _{\mathbb {T}} |1+z|^{pm}\, {\text {d}}z \Big )^{1/p} = \bigg ( \frac{\Gamma (pm+1)}{\Gamma (\frac{pm}{2}+1)^{2}} \bigg )^{1/p}. \end{aligned}$$(5)
By definition, we have that \(\Vert P\Vert _{L^p(\mathbb {T})} = \Lambda (p, m)\) for the polynomial \(P(z)= (1+z)^m, \, z \in \mathbb {C}\), and moreover, by (2), that \(M(P)=1\), which implies that (4) is sharp. Arestov [2] proved, using asymptotic formulas for the \(\Gamma \)-function, that for fixed \(p>0\),
$$\begin{aligned} \lim _{m \rightarrow \infty } \frac{\Lambda (p, m)}{2^{m}\, m^{-\frac{1}{2p}}} = \Big ( \frac{2}{\pi p} \Big )^{\frac{1}{2p}}. \end{aligned}$$
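The closed Gamma-function form of \(\Lambda (p,m)\) is easy to test numerically; the short check below is our own illustration (not part of the original text), using \(p=2, m=3\), where \(\Gamma (7)/\Gamma (4)^2 = 720/36 = 20 = \left( {\begin{array}{c}6\\ 3\end{array}}\right) \).

```python
import cmath
import math

# Compare Gamma(pm+1)/Gamma(pm/2+1)^2 with the mean of |1+z|^{pm} over
# the unit circle (the p-th power of Lambda(p, m)).
p, m = 2, 3
gamma_value = math.gamma(p * m + 1) / math.gamma(p * m / 2 + 1) ** 2

N = 4096  # trapezoidal rule is exact here: |1+z|^6 is a trig polynomial
mean = sum(abs(1 + cmath.exp(2j * math.pi * k / N)) ** (p * m)
           for k in range(N)) / N
print(gamma_value, mean)  # both equal 20
```

Since \(|1+e^{i\theta }|^{pm}\) is a trigonometric polynomial of degree pm, the equispaced average is exact once \(N > pm\).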
Throughout the paper, we will use standard notation; some nonstandard notation will be given locally. Let \(\mathcal {P}(\mathbb {K}^n)\) be the space of all polynomials P on \(\mathbb {K}^n\), where \(\mathbb {K}=\mathbb {R}\) or \(\mathbb {K}=\mathbb {C}\); i.e.,
$$\begin{aligned} P(z) = \sum _{\alpha } c_{\alpha }\, z^{\alpha }, \quad z \in \mathbb {K}^n. \end{aligned}$$
Here the sum is taken over finitely many multi-indices \(\alpha =(\alpha _1, \ldots , \alpha _n) \in \mathbb {N}_0^n\), and \(z^{\alpha } := z_1^{\alpha _1}\,\ldots \,z_n^{\alpha _n}\) denotes the \(\alpha \)th monomial. As usual, \( |\alpha |= \sum _{j=1}^n \alpha _j, \) and we call
$$\begin{aligned} \mathrm{deg}(P) := \max \{ |\alpha | :\, c_{\alpha }(P) \ne 0 \} \end{aligned}$$(6)
the total degree of P. If \(m=\text {deg}(P)\) and all monomial coefficients \(c_\alpha =c_\alpha (P)\) with \(|\alpha | < m\) vanish, then P is said to be m-homogeneous. For every polynomial \(P\in \mathcal {P}(\mathbb {K}^{n})\) and \((u_1,\ldots ,u_n) \in \mathbb {K}^{n}\), the degree of the one-variable polynomial
$$\begin{aligned} P_j(z) := P(u_1, \ldots , u_{j-1}, z, u_{j+1}, \ldots , u_n), \quad z \in \mathbb {K}, \end{aligned}$$
equals
$$\begin{aligned} \mathrm{deg}(P_j) = \max \{ \alpha _j :\, c_{\alpha }(P) \ne 0 \} \end{aligned}$$(7)
for a generic choice of \((u_1,\ldots ,u_n)\) (i.e., for all \((u_1,\ldots ,u_n)\) outside a set of Lebesgue measure zero); in what follows, \(\mathrm{deg}(P_j)\) always refers to this generic degree.
Finally, in contrast to (6), we write
$$\begin{aligned} \mathrm{deg}_{\infty }(P) := \max _{1\le j\le n} \mathrm{deg}(P_j). \end{aligned}$$
The following extension of (4), a sharp Khintchine–Kahane type inequality that allows one to estimate the \(L^p(\mathbb {T}^n)\)-norm \(\Vert P\Vert _{L^p(\mathbb {T}^n)}\) of \(P\in \mathcal {P}(\mathbb {C}^n)\) by its Mahler measure M(P), was the starting point of this article, and will be proved in Sect. 3: For every \(P\in \mathcal {P}(\mathbb {C}^n)\) and every \(0 < p < \infty \),
$$\begin{aligned} \Vert P\Vert _{L^p(\mathbb {T}^n)} \le \prod _{j=1}^{n} \Lambda (p, d_j)\; M(P), \quad \text {where } d_j = \mathrm{deg}(P_j). \end{aligned}$$
In view of (1), this inequality can be seen as a limit case of Khintchine–Kahane type inequalities relating, for \(0 <p <q < \infty \), the \(L^q{(\mathbb {T}^n)}\)-norm of \(P\in \mathcal {P}(\mathbb {C}^n)\) to its \(L^p{(\mathbb {T}^n)}\)-norm. The study of \(L^p\)-norms of polynomials has a rich history, and in Sect. 2, we give new information in this direction. Recently, there has been considerable interest in the behavior of the constants of Khintchine–Kahane type equivalences \(\Vert \cdot \Vert _{L^p} \approx \Vert \cdot \Vert _{L^q}\) in the case when \(L^p\)-spaces are considered on arbitrary unit-volume convex bodies K in \(\mathbb {R}^n\); see, e.g., the work of Gromov–Milman [12] and Bourgain [8]. We also mention [6], where it is proved that if \(\mu \) is an arbitrary log-concave probability measure on \(\mathbb {R}^n\), then, for all \(1 \le p < \infty \) and all \(P\in \mathcal {P}(\mathbb {R}^n)\), we have
A Borel probability measure \(\mu \) on \(\mathbb {R}^n\) is said to be log-concave whenever \(\mu \) is supported by some affine subspace E, where it has a log-concave density \(u:E\rightarrow [0, \infty )\) (i.e., \(\ln u\) is concave) with respect to the Lebesgue measure on E. The inequality in (8) implies that on \(\mathcal {P}(\mathbb {R}^n)\) all \(L^p(\mu )\)-norms are equivalent with constants which are independent of n and depend only on \(\text {deg}_{\infty }(P)\) and p. Replacing \(\mathbb {R}^n\) by \(\mathbb {C}^n\), a well-known inequality due to Nikol’skii (see [18, 19]) states that for every \(P\in \mathcal {P}(\mathbb {C}^n)\) and \(0<q<p<\infty \),
Such inequalities are called Nikol’skii inequalities or different metrics inequalities. There is an extensive literature on such inequalities; we refer to the monographs [10, 17], and [21].
Recall the following important result from Weissler’s article [22, Corollary 2.1] on the hypercontractivity of convolution with the Poisson kernel in Hardy spaces: Given \(0<p<q < \infty \), the constant \(r=\sqrt{\frac{p}{q}}\) is the best constant \(0<r\le 1\) such that for every \(P\in \mathcal {P}(\mathbb {C})\),
$$\begin{aligned} \Big ( \int _{\mathbb {T}} |P(r z)|^{q}\, {\text {d}}z \Big )^{1/q} \le \Big ( \int _{\mathbb {T}} |P(z)|^{p}\, {\text {d}}z \Big )^{1/p}. \end{aligned}$$(10)
In [4, Theorem 9] Bayart used a standard iteration argument through Fubini’s theorem in order to extend (10) to polynomials in several variables: The constant \(r=\sqrt{\frac{p}{q}}\) is the best constant \(0 < r\le 1\) such that for every \(P\in \mathcal {P}(\mathbb {C}^n)\), we have
$$\begin{aligned} \Big ( \int _{\mathbb {T}^n} |P(r z)|^{q}\, {\text {d}}z \Big )^{1/q} \le \Big ( \int _{\mathbb {T}^n} |P(z)|^{p}\, {\text {d}}z \Big )^{1/p}, \quad rz := (r z_1, \ldots , r z_n); \end{aligned}$$(11)
in particular, for every homogeneous polynomial \(P \in \mathcal {P}(\mathbb {C}^n)\),
$$\begin{aligned} \Vert P\Vert _{L^q(\mathbb {T}^n)} \le \sqrt{\frac{q}{p}}^{\;\mathrm{deg}(P)}\, \Vert P\Vert _{L^p(\mathbb {T}^n)}. \end{aligned}$$(12)
This Khintchine–Kahane type inequality extends earlier work of Beauzamy et al. from [5] (\(p=2<q\) and \(p< q=2\)) and Bonami [7] (\(q=2,p=1\)). The striking fact is that the constants involved in (12) are independent of the dimension n, while they grow exponentially with the degree \(\deg (P)\). On the other hand, the constants in (9) approach infinity as n tends to infinity.
In Sect. 2 we supply more information on (12). We show that the constant \(C=\sqrt{\frac{q}{p}}\) in this inequality is best possible, and moreover that it even holds for all polynomials \(P \in \mathcal {P}(\mathbb {C}^n)\) (not necessarily homogeneous). This will follow from Proposition 2.1, a result communicated to us privately by S. Kwapień and published here with his permission. Several interesting applications follow, some motivated by the original work of Mahler and his followers.
2 \(L^p\)-Norms Versus \(L^q\)-Norms of Polynomials on \(\mathbb {T}^n\)
The following Khintchine–Kahane type inequality is the main result of this section.
Theorem 2.1
Let \(0 < p < q < \infty \).
-
(i)
For each \(n \in \mathbb {N}\) and every \(P \in \mathcal {P}(\mathbb {C}^n)\), we have with \(C= \sqrt{\frac{q}{p}}\),
$$\begin{aligned} \Vert P\Vert _{L^q(\mathbb {T}^n)} \le C^{\mathrm{deg}(P)}\,\, \Vert P\Vert _{L^p(\mathbb {T}^n)}\,; \end{aligned}$$here the constant \(C=\sqrt{\frac{q}{p}}\) is best possible, which is an immediate consequence of statement (ii).
-
(ii)
The best possible constant \(C=C(p,q)\) such that for each \(n, m\in \mathbb {N}\) and every m-homogeneous polynomial \(P \in \mathcal {P}(\mathbb {C}^n)\) we have
$$\begin{aligned} \Vert P\Vert _{L^q\left( \mathbb {T}^n\right) } \le C^{m}\, \Vert P\Vert _{L^p\left( \mathbb {T}^n\right) } \end{aligned}$$(13)equals \(\sqrt{\frac{q}{p}}\).
As mentioned in the discussion preceding inequality (12), the upper estimate \(\mathrm{{(ii)}}\) is due to Bayart [4]; see also [5, Lemma 1.C1] and [7, Theorem III.7] (\(q=2,p=1\) with \(C= \sqrt{2}\)). Before turning to the proof of Theorem 2.1, let us note that the strategy for the proof will be as follows: we first prove statement \(\mathrm{(i)}\), and in the final part of this section, we turn to the proof of statement \(\mathrm{(ii)}\), based on Kwapień's Proposition 2.1.
Proof of statement (i) in Theorem 2.1
For a given polynomial \(P(w) = \sum _{\alpha \in \mathbb {N}_0^n} c_\alpha w^\alpha \in \mathcal {P}(\mathbb {C}^n)\), we define the polynomial \(Q \in \mathcal {P}(\mathbb {C}^{n+1}) \) given by
$$\begin{aligned} Q(w, w_{n+1}) := \sum _{\alpha \in \mathbb {N}_0^n} c_{\alpha }\, w^{\alpha }\, w_{n+1}^{\,\mathrm{deg}(P) - |\alpha |}, \quad (w, w_{n+1}) \in \mathbb {C}^{n+1}. \end{aligned}$$
Clearly, Q is m-homogeneous with \(m= \text {deg}(P)\) and for every \(0 < r < \infty \), we have (by rotation invariance of the Lebesgue measure on \(\mathbb {T}^n\))
$$\begin{aligned} \Vert Q\Vert _{L^r(\mathbb {T}^{n+1})} = \Vert P\Vert _{L^r(\mathbb {T}^{n})}. \end{aligned}$$
Then statement (i) follows from (12). \(\square \)
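The homogenization trick in this proof can be observed numerically; the sketch below (our own illustration, with the hypothetical polynomial \(P(z)=1+2z+3z^2\), so \(Q(w_1,w_2)=w_2^2+2w_1w_2+3w_1^2\)) checks that the \(L^1\)-norm of Q over \(\mathbb {T}^2\) agrees with that of P over \(\mathbb {T}\).

```python
import cmath
import math

r, N = 1.0, 256

def P(z):
    return 1 + 2 * z + 3 * z ** 2

def Q(w1, w2):  # homogenization of P: |Q(w1, w2)| = |P(w1 / w2)| on T^2
    return w2 ** 2 + 2 * w1 * w2 + 3 * w1 ** 2

roots = [cmath.exp(2j * math.pi * k / N) for k in range(N)]
norm_P = (sum(abs(P(z)) ** r for z in roots) / N) ** (1 / r)
norm_Q = (sum(abs(Q(w1, w2)) ** r for w1 in roots for w2 in roots)
          / N ** 2) ** (1 / r)
print(norm_P, norm_Q)  # equal: w1/w2 runs over the roots of unity
```

On the discrete grid the equality is even exact, since \(w_1/w_2\) hits every N-th root of unity equally often, which mirrors the rotation-invariance argument above.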
The proof of statement (ii) in Theorem 2.1 will be a consequence of the following proposition due to Kwapień. We need some more notation: for fixed \(0 < p < q < \infty \) and \(m \in \mathbb {N}\), let C(p, q; m) be the best constant such that
$$\begin{aligned} \Vert P\Vert _{L^q(\mathbb {T}^n)} \le C(p,q;m)\, \Vert P\Vert _{L^p(\mathbb {T}^n)} \end{aligned}$$
holds for all m-homogeneous polynomials \(P \in \mathcal {P}(\mathbb {C}^n)\) which are affine in each variable (i.e., all m-homogeneous polynomials with \(\text {deg}_\infty (P) =1\)).
Proposition 2.1
For every \(0 < p < q< \infty \) and each \(m \in \mathbb {N}\) the constant C(p, q; m) defined above satisfies,
$$\begin{aligned} m^{\frac{1}{2q} - \frac{1}{2p}}\; \sqrt{\frac{q}{p}}^{\,m} \;\asymp \; \frac{\Gamma \big (\frac{qm}{2}+1\big )^{1/q}}{\Gamma \big (\frac{pm}{2}+1\big )^{1/p}} \;\le \; C(p,q;m) \;\le \; \sqrt{\frac{q}{p}}^{\,m}, \end{aligned}$$(14)
where the symbol \(\asymp \) means that the terms are equal up to constants only depending on p and q.
Using the estimate (12) for homogeneous polynomials, we immediately obtain the required upper estimate. The proof of the lower bound in (14) needs a bit more preparation. For each \(k,n \in \mathbb {N}\) with \(k\le n\), define the following two k-homogeneous polynomials on \(\mathbb {C}^n\):
$$\begin{aligned} e_{k,n}(z) := \sum _{1\le j_1< \cdots < j_k \le n} z_{j_1} \cdots \, z_{j_k} \quad \text {and} \quad p_{k,n}(z) := \sum _{j=1}^{n} z_j^{k}. \end{aligned}$$
In the literature, \(e_{k,n}\) and \(p_{k,n}\) are called the k-th elementary symmetric and the k-th power symmetric polynomial, respectively. It is known that
$$\begin{aligned} e_{k,n} = \frac{1}{k!}\, p_{1,n}^{\,k} \,+\, \sum a_{j_1,\ldots ,j_k}\; p_{1,n}^{\,j_1}\, p_{2,n}^{\,j_2} \cdots \, p_{k,n}^{\,j_k}, \end{aligned}$$(15)
where
$$\begin{aligned} a_{j_1,\ldots ,j_k} = \frac{(-1)^{\,k-(j_1+\cdots +j_k)}}{j_1!\, j_2! \cdots j_k!\; 2^{\,j_2}\, 3^{\,j_3} \cdots k^{\,j_k}}, \end{aligned}$$
and the sum extends over all \(j_1,\ldots ,j_k \in \mathbb {N}_0\) with \(j_1+2j_2+\cdots +kj_k = k\), and at least one of the indices \(j_2,\ldots , j_k \) is not zero. Note that there is a recursive formula for the coefficients \(a_{j_1,\ldots ,j_k}\) (seemingly due to Newton).
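For instance, for \(k=3\) this expansion reads \(e_3 = \frac{1}{6}p_1^3 - \frac{1}{2}p_1 p_2 + \frac{1}{3}p_3\), a classical Newton identity; the following check on random complex data is our own illustration (not part of the original text).

```python
import random

random.seed(1)
z = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(6)]
n = len(z)

# Third elementary symmetric polynomial, directly from its definition.
e3 = sum(z[i] * z[j] * z[k]
         for i in range(n) for j in range(i + 1, n) for k in range(j + 1, n))

# Power sums p_1, p_2, p_3 and the Newton-type expansion for k = 3.
p1 = sum(z)
p2 = sum(w ** 2 for w in z)
p3 = sum(w ** 3 for w in z)
rhs = p1 ** 3 / 6 - p1 * p2 / 2 + p3 / 3
print(abs(e3 - rhs))  # ~0: both sides agree
```

The identity is exact; the printed value is only floating-point rounding.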
Lemma 2.1
For \(m, n\in \mathbb {N}\) with \(m\le n\), define the m-homogeneous polynomial
$$\begin{aligned} P_{m,n} := \frac{e_{m,n}}{n^{m/2}}. \end{aligned}$$
Then for each \(0 < p < \infty \), we have that
$$\begin{aligned} \lim _{n\rightarrow \infty } \Vert P_{m,n}\Vert _{L^p(\mathbb {T}^n)} = \frac{1}{m!}\, \Gamma \Big ( \frac{pm}{2}+1 \Big )^{1/p}. \end{aligned}$$
Proof
For each \(1\le k \le m\), let
$$\begin{aligned} Q_{k,n} := \frac{p_{k,n}}{n^{k/2}} = \frac{1}{n^{k/2}} \sum _{j=1}^{n} z_j^{k}. \end{aligned}$$
Hereafter, we treat the \(z_i, 1 \le i \le n\) as random variables on the probability space \(\mathbb {T}^n\); we consider \(z_i:\mathbb {T}^n \rightarrow \mathbb {C}, (z_j )\mapsto z_i\) (called Steinhaus random variables, i.e., independent identically distributed random variables on a probability measure space, uniformly distributed on \(\mathbb {T}\)).
By the central limit theorem (see, e.g., [1]), the sequence \(Q_{1,n}\) converges in distribution, as \(n\rightarrow \infty \), to a canonical complex Gaussian random variable G, and for each \(t>0\), there is \(C_t >0\) such that for all n, we have
$$\begin{aligned} \int _{\mathbb {T}^n} |Q_{1,n}|^{t}\, {\text {d}}z \le C_t. \end{aligned}$$(16)
Hence, by uniform integrability,
$$\begin{aligned} \lim _{n\rightarrow \infty } \Vert Q_{1,n}^{\,m}\Vert _{L^p(\mathbb {T}^n)} = \big ( \mathbb {E}\, |G|^{pm} \big )^{1/p} = \Gamma \Big ( \frac{pm}{2}+1 \Big )^{1/p}. \end{aligned}$$
Since the random variable \(Q_{k,n}\) is distributed the same as \(Q_{1,n}n^{-\frac{k-1}{2}}\) for each \(k, n\in \mathbb {N}\) (because \(z^k_i\) is distributed the same as \(z_i\)), it follows that for each \(k\in \mathbb {N}\) and every \(t>0\), we have
$$\begin{aligned} \Vert Q_{k,n}\Vert _{L^t(\mathbb {T}^n)} = n^{-\frac{k-1}{2}}\, \Vert Q_{1,n}\Vert _{L^t(\mathbb {T}^n)} \le C_t^{1/t}\; n^{-\frac{k-1}{2}}. \end{aligned}$$
Hence by the Minkowski/Hölder inequality and (16), we obtain for each \(k>1\) and every \(t>0\),
$$\begin{aligned} \lim _{n\rightarrow \infty } \big \Vert Q_{1,n}^{\,j_1}\, Q_{2,n}^{\,j_2} \cdots \, Q_{k,n}^{\,j_k} \big \Vert _{L^t(\mathbb {T}^n)} = 0 \end{aligned}$$
whenever at least one of the indices \(j_2, \ldots , j_k\) is not zero.
Finally, (15) gives
$$\begin{aligned} \lim _{n\rightarrow \infty } \Vert P_{m,n}\Vert _{L^p(\mathbb {T}^n)} = \frac{1}{m!} \lim _{n\rightarrow \infty } \Vert Q_{1,n}^{\,m}\Vert _{L^p(\mathbb {T}^n)} = \frac{1}{m!}\, \Gamma \Big ( \frac{pm}{2}+1 \Big )^{1/p}, \end{aligned}$$
the desired result. \(\square \)
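For \(p=2\) the limit in this lemma can be verified by hand, since the monomials \(z_{j_1}\cdots z_{j_m}\) are orthonormal in \(L^2(\mathbb {T}^n)\): one gets \(\Vert P_{m,n}\Vert _2 = \sqrt{\left( {\begin{array}{c}n\\ m\end{array}}\right) }/n^{m/2} \rightarrow 1/\sqrt{m!}\), which matches \(\frac{1}{m!}\Gamma (m+1)^{1/2}\). The numeric sketch below is our own illustration of this convergence.

```python
import math

m = 3
limit = 1 / math.sqrt(math.factorial(m))  # = Gamma(m+1)^{1/2} / m!
for n in (10, 100, 1000, 10000):
    # Exact L^2-norm of e_{m,n}/n^{m/2} via orthonormality of monomials.
    norm = math.sqrt(math.comb(n, m)) / n ** (m / 2)
    print(n, norm)
print(limit)  # 1/sqrt(6) ~ 0.40825 for m = 3
```

The printed norms approach the limit at rate \(O(1/n)\), consistent with the central limit theorem argument in the proof.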
Proof of Proposition 2.1
Recall that only the lower estimate in (14) remains to be shown. An immediate consequence of the preceding lemma is that
$$\begin{aligned} C(p,q;m) \ge \lim _{n\rightarrow \infty } \frac{\Vert P_{m,n}\Vert _{L^q(\mathbb {T}^n)}}{\Vert P_{m,n}\Vert _{L^p(\mathbb {T}^n)}} = \frac{\Gamma \big ( \frac{qm}{2}+1 \big )^{1/q}}{\Gamma \big ( \frac{pm}{2}+1 \big )^{1/p}}, \end{aligned}$$
and hence we get, by the quantitative version of Stirling’s formula (\(\sqrt{2\pi } x^{x+ 1/2}e^{-x} <\Gamma (x+1) < \sqrt{2\pi } x^{x + 1/2} e^{-x + 1/12x}\) for all \(x>0\)),
$$\begin{aligned} C(p,q;m) \ge d(p,q)\; m^{\frac{1}{2q} - \frac{1}{2p}}\, \sqrt{\frac{q}{p}}^{\,m} \end{aligned}$$
for some constant \(d(p,q)>0\) depending only on p and q,
which is the desired estimate. \(\square \)
We finish by completing the proof of Theorem 2.1.
Proof of statement (ii) in Theorem 2.1
Assume that (13) holds with the constant C for all homogeneous polynomials on \({\mathbb {C}}^n\). Using (14), we see that there is some constant \(D=D(p,q)\) such that for all m,
$$\begin{aligned} C^{m} \ge D\; m^{\frac{1}{2q} - \frac{1}{2p}}\, \sqrt{\frac{q}{p}}^{\,m}, \end{aligned}$$
and hence, taking mth roots and letting m tend to infinity, we get the desired result \(C \ge \sqrt{q/p}\). \(\square \)
3 \(L^p\)-Norms Versus Mahler’s Measure for Polynomials on \(\mathbb {T}^n\)
Based on Arestov’s estimate (4), we prove an exact inequality between the \(L^p\)-norm of a polynomial P on \(\mathbb {T}^n\) and its Mahler measure M(P).
Theorem 3.1
For every \(P\in \mathcal {P}(\mathbb {C}^n)\) and every \(0 < p < \infty \),
$$\begin{aligned} \Vert P\Vert _{L^p(\mathbb {T}^n)} \le \prod _{j=1}^{n} \Lambda (p, d_j)\; M(P), \end{aligned}$$
where \(d_j=\mathrm{{deg}}(P_j)\) is as in (7). Moreover, this inequality is sharp since for the polynomial
$$\begin{aligned} P(z) = \prod _{j=1}^{n} P_j(z_j), \quad z \in \mathbb {C}^n, \end{aligned}$$
with \(P_j(z)=(1 + z)^{d_j},z\in \mathbb {C}\,\) and \(d_j \in \mathbb {N},\,1\le j\le n\), we have that \(\Vert P\Vert _{L^p(\mathbb {T}^n)} = \prod _{j=1}^{n} \Lambda (p, d_j)\) as well as \(M(P)=1\).
Proof
We use induction with respect to n. The inequality is true for \(n=1\) by Arestov’s result from (4). Fix a positive integer \(n\ge 2\), and assume that the inequality is true for \(n-1\). We fix a sequence \((q_k)\) of positive real numbers such that \(0<q_k<\text {min}\{1, p\}\) for each \(k\in \mathbb {N}\) and \(q_{k}\downarrow 0\), and write for \(1\le k\le n\),
Combining Fubini’s theorem with Jensen’s inequality and the definition (1) gives, by the inductive hypothesis,
It now follows by Fatou’s lemma and (the continuous) Minkowski’s integral inequality (we have \(0 <pq_k <p\) for each \(k\in \mathbb {N}\)),
Then by Arestov’s estimate from (4), we obtain that
which by another application of the inductive hypothesis and Fubini’s theorem finally leads to the desired estimate,
It remains to check the comment on the sharpness of the inequality. By Fubini’s theorem and the definition of \(\Lambda (p,d_j)\) from (5), we immediately see that \(\Vert P\Vert _{L^p(\mathbb {T}^n)} = \prod _{j=1}^{n} \Lambda (p, d_j)\). On the other hand, the multiplicativity property (3) of the Mahler measure gives that \(M(P)= \prod _{j=1}^{n} M(P_j) =1\), where the latter equality follows from (2). This completes the proof. \(\square \)
We conclude this section with a simple corollary.
Corollary 3.1
For every \(P\in \mathcal {P}(\mathbb {C}^n)\) with \(m= \mathrm{{deg}}_\infty (P)\), and every \(0 < p < \infty \), we have
$$\begin{aligned} \Vert P\Vert _{L^p(\mathbb {T}^n)} \le \Lambda (p, m)^{n}\, M(P). \end{aligned}$$
Proof
Fix \(p>0\). Substituting the polynomial \(P\equiv 1\) into (4) yields \(1 \le \Lambda (p, m)\) for each \(m\in \mathbb {N}\). Since \(\Lambda (p, m)\) is the least constant in inequality (4) and the class of polynomials becomes larger as m grows, it follows that \(\Lambda (p, k_1) \le \Lambda (p, k_2)\) provided \(k_1 < k_2\). Now if \(P\in \mathcal {P}(\mathbb {C}^n)\) with \(\text {deg}_\infty (P)=m\), then \(d_j := \text {deg}(P_j)\le m\) for each \(1\le j\le n\), and so \(\prod _{j=1}^{n} \Lambda (p,d_j) \le \Lambda (p, m)^n\). Thus the required estimate follows from Theorem 3.1. \(\square \)
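The sharpness of these bounds is easy to see numerically; the sketch below (our own illustration, not part of the original text) takes \(p=2\), \(n=2\) and the extremal-type polynomial \(P(z_1,z_2)=(1+z_1)^2(1+z_2)\), for which \(M(P)=1\) (Jensen's formula gives \(\int _{\mathbb {T}} \log |1+z|\,{\text {d}}z = 0\)) and \(\Vert P\Vert _{L^2(\mathbb {T}^2)} = \Lambda (2,2)\,\Lambda (2,1) = \sqrt{\left( {\begin{array}{c}4\\ 2\end{array}}\right) \left( {\begin{array}{c}2\\ 1\end{array}}\right) } = \sqrt{12}\).

```python
import cmath
import math

# L^2-norm of P(z1, z2) = (1+z1)^2 (1+z2) over T^2 by an equispaced grid;
# |P|^2 is a trig polynomial, so the grid average is exact for N > 4.
N = 64
roots = [cmath.exp(2j * math.pi * k / N) for k in range(N)]
sq = sum(abs((1 + z1) ** 2 * (1 + z2)) ** 2
         for z1 in roots for z2 in roots) / N ** 2
norm = math.sqrt(sq)
print(norm, math.sqrt(12))  # both ~3.4641
```

(We do not compute M(P) on the grid here, since \(\log |P|\) blows up at the grid point \(z=-1\); its vanishing integral is exactly Jensen's formula.)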
4 Applications
In the final section, we discuss some applications of our previous results. The first one is motivated by a result due to Bourgain [8] (answering affirmatively a question by Milman), which states that there are universal constants \(t_0>0\) and \(c\in (0, 1)\) such that, for every convex set \(K\subset \mathbb {R}^n\) of volume one and every \(P\in \mathcal {P}(\mathbb {R}^n)\), the following distribution inequality holds:
where \(\mu _K\) is the Lebesgue measure on K and \(\Vert P\Vert _1\) is the \(L^1\)-norm of P with respect to \(\mu _K\). It is known that such inequalities may be rewritten in terms of the Orlicz space \(L_{\psi _{\alpha }}(K)\) generated by the convex function \(\psi _{\alpha }(t)= \text {exp}(t^{\alpha }) - 1\), \(t\ge 0\), as follows:
$$\begin{aligned} \Vert P\Vert _{L_{\psi _{\alpha }}(K)} \le C\, \Vert P\Vert _{1}, \end{aligned}$$(17)
where \(\alpha = c/n\) and \(C>0\) is some absolute constant.
We recall that for a given convex function \(\varphi :[0, \infty ) \rightarrow [0, \infty )\) with \(\varphi ^{-1}(\{0\})=\{0\}\) and any measure space \((\Omega , \Sigma , \mu )\), the Orlicz space \(L_{\varphi }(\Omega )\) is defined to be the space of all complex functions \(f\in L^0(\mu )\) such that \(\int _{\Omega }\varphi (\lambda |f|)\,{\text {d}}\mu <\infty \) for some \(\lambda >0\), and it is equipped with the norm
$$\begin{aligned} \Vert f\Vert _{L_{\varphi }(\Omega )} := \inf \Big \{ \lambda > 0 :\, \int _{\Omega } \varphi \Big ( \frac{|f|}{\lambda } \Big )\, {\text {d}}\mu \le 1 \Big \}. \end{aligned}$$
Now, using an extrapolation trick, we will prove a variant of (17), but for polynomials on the n-torus \(\mathbb {T}^n\) instead of \(\mathbb {R}^n\). We will use Corollary 3.1 to deduce the following Khintchine–Kahane type inequality relating, for polynomials on the n-torus, a corresponding exponential Orlicz norm and Mahler’s measure.
Theorem 4.1
For every \(P\in \mathcal {P}(\mathbb {C}^n)\) with \(m = \mathrm{{deg}}_\infty (P)\), we have
$$\begin{aligned} \Vert P\Vert _{L_{\psi _{1/m}}(\mathbb {T}^n)} \le \Big ( \frac{2^{n}}{\ln 2} \Big )^{m}\, M(P). \end{aligned}$$
Proof
Observe that the integral form of \(\Lambda (p,m)\) given in (5) implies that for each \(k\in \mathbb {N}\),
$$\begin{aligned} \Lambda \Big ( \frac{k}{m}, m \Big )^{k/m} = \int _{\mathbb {T}} |1+z|^{k}\, {\text {d}}z \le 2^{k}. \end{aligned}$$
Now fix \(P\in \mathcal {P}(\mathbb {C}^n)\). Combining the above inequality with Corollary 3.1 (for \(p=k/m\)) yields
$$\begin{aligned} \int _{\mathbb {T}^n} |P|^{k/m}\, {\text {d}}z \le 2^{nk}\, M(P)^{k/m}. \end{aligned}$$(18)
Recall that \(\psi _{1/m}(t) = \text {exp}(t^{1/m})-1\) for all \(t\ge 0\). Then, using Taylor’s expansion and (18), we obtain that for every \(\lambda > 0\),
$$\begin{aligned} \int _{\mathbb {T}^n} \psi _{1/m}\Big ( \frac{|P|}{\lambda } \Big )\, {\text {d}}z = \sum _{k=1}^{\infty } \frac{1}{k!\; \lambda ^{k/m}} \int _{\mathbb {T}^n} |P|^{k/m}\, {\text {d}}z \le \exp \Big ( 2^{n} \Big ( \frac{M(P)}{\lambda } \Big )^{1/m} \Big ) - 1. \end{aligned}$$
Hence, choosing \(\lambda = (2^n/\ln 2)^m\, M(P)\), the right-hand side equals 1, and by the definition of the norm in the Orlicz space \(L_{\psi _{1/m}}(\mathbb {T}^n)\), we get
$$\begin{aligned} \Vert P\Vert _{L_{\psi _{1/m}}(\mathbb {T}^n)} \le \Big ( \frac{2^{n}}{\ln 2} \Big )^{m}\, M(P), \end{aligned}$$
and this completes the proof. \(\square \)
The final applications are motivated by some interesting results from the remarkable article [16]. To explain these results, following [16], for a given polynomial \(P \in \mathcal {P}(\mathbb {C}^n)\), we define
$$\begin{aligned} L(P) := \sum _{\alpha } |c_{\alpha }(P)| \quad \text {and} \quad H(P) := \max _{\alpha } |c_{\alpha }(P)|. \end{aligned}$$
In [16], Mahler established a number of inequalities connecting L(P), H(P), and M(P), and showed applications to estimates of \(\Vert P\Vert _{L^2(\mathbb {T}^n)}\) in terms of L(P), H(P), or M(P). A series of papers studies this and related problems, in particular, the problem of finding norm estimates
$$\begin{aligned} \Vert P\Vert \, \Vert Q\Vert \le C\, \Vert P\,Q\Vert , \end{aligned}$$
where \(\Vert \cdot \Vert \) is some norm on \(\mathcal {P}(\mathbb {C}^n)\), P, \(Q\in \mathcal {P}(\mathbb {C}^n)\), and C a constant depending only on the degrees of P, Q; see, e.g., [5, 11, 16].
It was proved by Duncan [11, Theorem 3] that if \(P,Q\in \mathcal {P}(\mathbb {C})\) with \(m= \mathrm{{deg}}(P)\) and \(k= \mathrm{{deg}}(Q)\), then
$$\begin{aligned} \Vert P\Vert _{L^2(\mathbb {T})}\, \Vert Q\Vert _{L^2(\mathbb {T})} \le {2m \atopwithdelims ()m}^{1/2} {2k \atopwithdelims ()k}^{1/2}\, \Vert P\,Q\Vert _{L^2(\mathbb {T})}. \end{aligned}$$
Below we present a more general multidimensional variant that estimates the \(L^p\)-norms of products of polynomials over the n-torus \(\mathbb {T}^n\) also in the quasi-Banach case, i.e., \(0<p<1\).
Proposition 4.1
For every \(P,Q\in \mathcal {P}(\mathbb {C}^n)\) with \(m= \mathrm{{deg}}_\infty (P)\) and \(k= \mathrm{{deg}}_\infty (Q)\), and every \(0<p<\infty \), we have
$$\begin{aligned} \Vert P\Vert _{L^p(\mathbb {T}^n)}\, \Vert Q\Vert _{L^p(\mathbb {T}^n)} \le \Lambda (p, m)^{n}\, \Lambda (p, k)^{n}\, \Vert P\,Q\Vert _{L^p(\mathbb {T}^n)}. \end{aligned}$$
In particular, if \(P(z)= \sum _\alpha c_{\alpha }(P) z^{\alpha }\) and \(Q(z)= \sum _\alpha c_{\alpha }(Q) z^{\alpha }\), then
$$\begin{aligned} \Big ( \sum _{\alpha } |c_{\alpha }(P)|^2 \Big )^{1/2} \Big ( \sum _{\alpha } |c_{\alpha }(Q)|^2 \Big )^{1/2} \le {2m \atopwithdelims ()m}^{n/2} {2k \atopwithdelims ()k}^{n/2} \Big ( \sum _{\alpha } |c_{\alpha }(P\,Q)|^2 \Big )^{1/2}. \end{aligned}$$
Proof
Combining Corollary 3.1 with the multiplicativity property (3) of the Mahler measure M, we obtain
$$\begin{aligned} \Vert P\Vert _{L^p(\mathbb {T}^n)}\, \Vert Q\Vert _{L^p(\mathbb {T}^n)} \le \Lambda (p, m)^{n}\, \Lambda (p, k)^{n}\, M(P)\, M(Q) = \Lambda (p, m)^{n}\, \Lambda (p, k)^{n}\, M(P\,Q). \end{aligned}$$
To conclude, note that, by Jensen’s inequality,
$$\begin{aligned} M(P\,Q) \le \Vert P\,Q\Vert _{L^p(\mathbb {T}^n)}, \end{aligned}$$
and \(\Lambda (2, m) = {2m \atopwithdelims ()m}^{1/2}\) for each \(m\in \mathbb {N}\). \(\square \)
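For \(n=1\) and \(p=2\), the resulting Duncan-type inequality \(\Vert P\Vert _2 \Vert Q\Vert _2 \le \left( {\begin{array}{c}2m\\ m\end{array}}\right) ^{1/2} \left( {\begin{array}{c}2k\\ k\end{array}}\right) ^{1/2} \Vert PQ\Vert _2\) can be tested directly via Parseval; the coefficients below are our own hypothetical example.

```python
import math

def l2_norm(coeffs):
    # Parseval: the L^2(T)-norm equals the l^2-norm of the coefficients.
    return math.sqrt(sum(abs(c) ** 2 for c in coeffs))

def convolve(a, b):
    # Coefficients of the product polynomial.
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

P = [1, -3, 2]   # P(z) = 1 - 3z + 2z^2, deg m = 2
Q = [2, 0, -1]   # Q(z) = 2 - z^2,      deg k = 2
lhs = l2_norm(P) * l2_norm(Q)
bound = math.sqrt(math.comb(4, 2) * math.comb(4, 2)) * l2_norm(convolve(P, Q))
print(lhs, bound)  # lhs <= bound
```

Note that some loss in such product estimates is unavoidable: P and Q may have large coefficients while cancellation makes PQ small.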
In [15], Mahler proved the following univariate “triangle inequality”:
$$\begin{aligned} M(P + Q) \le \kappa (m)\, \big ( M(P) + M(Q) \big ) \end{aligned}$$
with the constant \(\kappa (m)= 2^{m}\), where \(\text {deg}(P) = \text {deg}(Q)=m\), and he observed that it has applications in the theory of diophantine approximation. Duncan [11] obtained the above inequality with the smaller constant
$$\begin{aligned} \kappa (m) = {2m \atopwithdelims ()m}^{1/2}. \end{aligned}$$
We refer the reader to the paper [3] of Arestov, which contains a refinement of these results for algebraic polynomials on the unit circle; more precisely, the following two-sided estimates for the best possible constant \(\kappa (m)\):
$$\begin{aligned} r^{m} \le \kappa (m) \le R^{m} \end{aligned}$$
are obtained, where \(m\ge 6\), \(R=\root 6 \of {40} \approx 1.8493\) and \(r = \text {exp}(2G/\pi )\approx 1.7916\) with Catalan’s constant \(G= \sum _{\nu =0}^{\infty } (-1)^{\nu }/(2\nu + 1)^{2}\).
We have the following multidimensional variant of Duncan’s result:
Proposition 4.2
For P, \(Q \in \mathcal {P}(\mathbb {C}^n)\) with \(m=\mathrm{{deg}}_{\infty }(P) = \mathrm{{deg}}_{\infty }(Q)\), we have
$$\begin{aligned} M(P + Q) \le {2m \atopwithdelims ()m}^{n/2}\, \big ( M(P) + M(Q) \big ). \end{aligned}$$
Proof
The proof is a direct consequence of Theorem 3.1: Since \(\Lambda (2, m) = {2m \atopwithdelims ()m}^{1/2}\) for each \(m\in \mathbb {N}\), Jensen’s inequality in combination with Theorem 3.1 yields
$$\begin{aligned} M(P+Q) \le \Vert P + Q\Vert _{L^2(\mathbb {T}^n)} \le \Vert P\Vert _{L^2(\mathbb {T}^n)} + \Vert Q\Vert _{L^2(\mathbb {T}^n)} \le {2m \atopwithdelims ()m}^{n/2} \big ( M(P) + M(Q) \big ). \end{aligned}$$
\(\square \)
We conclude the paper with an interpolation inequality for \(L^p\)-norms of polynomials on \(\mathbb {T}^n\) that is interesting in its own right; here for a given measure space \((\Omega , \Sigma , \mu )\), \(E\in \Sigma \) with \(\mu (E)>0\), and \(0<p<\infty \), we write \(\Vert f\Vert _{L^p(E)}:= \left( \int _{E}|f|^p\,{\text {d}}\mu \right) ^{1/p},\,\,f\in L^p(\mu )\).
Theorem 4.2
Let E be a Lebesgue measurable subset of \(\mathbb {T}^n\) with \(\lambda _{n}(E) \ge \theta >0\). Then the following interpolation inequality holds for any polynomial \(P\in \mathcal {P}(\mathbb {C}^n)\) with \(\mathrm{{deg}}_{\infty }(P)=m\) and any \(0<p < \infty \):
$$\begin{aligned} \Vert P\Vert _{L^p(\mathbb {T}^n)} \le C(\theta )^{\frac{1}{p\theta }}\, \Lambda (p, m)^{\frac{n}{\theta }}\, \Vert P\Vert _{L^p(E)}, \end{aligned}$$
where \(C(\theta ) = \frac{1}{\theta ^{\theta }(1-\theta )^{1-\theta }}\le 2\) for every \(0<\theta <1\), and \(C(1)=1\).
Proof
The inequality is an immediate consequence of Corollary 3.1 in combination with the following lemma, which is surely known to specialists; for the sake of completeness we include a proof of it. \(\square \)
Lemma 4.1
Let \((\Omega , \Sigma , \mu )\) be a probability measure space. If \(\log |f|\in L^1(\mu )\) and \(A\in \Sigma \) with \(\mu (A)=\alpha \in (0,1)\), then
$$\begin{aligned} \exp \Big ( \int _{\Omega } \log |f| \,{\text {d}}\mu \Big ) \le \Big ( \frac{1}{\alpha } \int _{A} |f| \,{\text {d}}\mu \Big )^{\alpha }\, \Big ( \frac{1}{1-\alpha } \int _{\Omega \setminus A} |f| \,{\text {d}}\mu \Big )^{1-\alpha }. \end{aligned}$$
In particular, if \(\mu (A)\ge \theta >0\), then
$$\begin{aligned} \exp \Big ( \int _{\Omega } \log |f| \,{\text {d}}\mu \Big ) \le 2\, \Big ( \int _{A} |f| \,{\text {d}}\mu \Big )^{\theta }\, \Big ( \int _{\Omega } |f| \,{\text {d}}\mu \Big )^{1-\theta }. \end{aligned}$$
Proof
Recall that if \((\Omega , \Sigma , \nu )\) is a probability measure space and \(\log |g| \in L^1(\nu )\), then it follows by Jensen’s inequality that
$$\begin{aligned} \exp \Big ( \int _{\Omega } \log |g| \,{\text {d}}\nu \Big ) \le \int _{\Omega } |g| \,{\text {d}}\nu . \end{aligned}$$
With \(A^{\prime }= \Omega \setminus A\), this inequality (applied to the normalized restrictions of \(\mu \) to A and \(A'\)) yields
$$\begin{aligned} \exp \Big ( \int _{\Omega } \log |f| \,{\text {d}}\mu \Big ) \le \Big ( \frac{1}{\alpha } \int _{A} |f| \,{\text {d}}\mu \Big )^{\alpha }\, \Big ( \frac{1}{1-\alpha } \int _{A'} |f| \,{\text {d}}\mu \Big )^{1-\alpha }. \end{aligned}$$
To complete the proof, it is sufficient to observe that \(x^{x}(1- x)^{1- x} \ge 1/2\) for every \(0< x <1\). \(\square \)
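The elementary estimate used in this last step is readily checked on a grid; the snippet below is our own illustration (not part of the original text): \(x\mapsto x\log x + (1-x)\log (1-x)\) is minimized at \(x=1/2\), where \(x^x(1-x)^{1-x} = 1/2\).

```python
import math

def f(x):
    # x^x (1-x)^{1-x}, the quantity estimated in the proof of Lemma 4.1
    return x ** x * (1 - x) ** (1 - x)

values = [f(k / 1000) for k in range(1, 1000)]
print(min(values), f(0.5))  # minimum ~0.5, attained at x = 1/2
```

Since f is convex-free but has a single interior minimum, a grid check together with \(f(1/2)=1/2\) makes the bound plausible; the rigorous proof is one line of calculus.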
References
Araujo, A., Giné, E.: The Central Limit Theorem for Real and Banach Valued Random Variables. Wiley, New York (1980)
Arestov, V.V.: Inequality of various metrics for trigonometric polynomials. Mat. Zametki 27(4), 539–547 (1980). (Russian)
Arestov, V.V.: Integral inequalities for algebraic polynomials on the unit circle . Mat. Zametki 48(4), 7–18 (1990). (Russian); translation in Math. Notes 48(3–4), 977–984 (1990)
Bayart, F.: Hardy spaces of Dirichlet series and their composition operators. Monatsh. Math. 136(3), 203–236 (2002)
Beauzamy, B., Bombieri, E., Enflo, P., Montgomery, H.L.: Products of polynomials in many variables. J. Number Theory 36(2), 219–245 (1990)
Bobkov, S.G.: Remarks on the growth of \(L^p\)-norms of polynomials. Geom. Asp. Funct. Anal. Lect. Notes Math. 1745, 27–35 (2000)
Bonami, A.: Étude des coefficients de Fourier de fonctions de \(L^p(G)\). Ann. Inst. Fourier (Grenoble) 20(fasc.2), 335–402 (1970)
Bourgain, J.: On the distribution of polynomials on high-dimensional convex sets. Geom. Asp. Funct. Anal. Lect. Notes Math. 1469, 127–137 (1991)
Cole, B.J., Gamelin, T.W.: Representing measures and Hardy spaces for the infinite polydisk algebra. Proc. Lond. Math. Soc. (3) 53(1), 112–142 (1986)
DeVore, R.A., Lorentz, G.G.: Constructive Approximation. Springer, Berlin (1993)
Duncan, R.L.: Some inequalities for polynomials. Am. Math. Mon. 73, 58–59 (1966)
Gromov, M., Milman, V.D.: Brunn Theorem and a Concentration of Volume for Convex Bodies, Israel Seminar on Geometrical Aspects of Functional Analysis (1983/1984). Tel Aviv University, Tel Aviv (1984)
Lehmer, D.H.: Factorization of certain cyclotomic functions. Ann. Math. 34(2), 461–479 (1933)
Mahler, K.: An application of Jensen’s formula to polynomials. Mathematika 7, 98–100 (1960)
Mahler, K.: On the zeros of the derivative of a polynomial. Proc. R. Soc. Ser. A 264, 145–154 (1961)
Mahler, K.: On some inequalities for polynomials in several variables. J. Lond. Math. Soc. 37(2), 341–344 (1962)
Milovanović, G.V., Mitrinović, D.S., Rassias, Th.M.: Topics in Polynomials: Extremal Problems, Inequalities, Zeros. World Scientific, Singapore (1994)
Nikol’skiĭ, S.M.: Inequalities for entire functions of finite degree and their application in the theory of differentiable functions of several variables. (Russian) Trudy Mat. Inst. Steklov., vol. 38, pp. 244–278. Izdat. Akad. Nauk SSSR, Moscow (1951)
Nikol’skiĭ, S.M.: Approximation of Functions of Several Variables and Imbedding Theorems. Springer, New York (1975)
Privalov, I.I.: Graničnye svoĭstva analitičeskih funkciiĭ. (Russian) [Boundary properties of analytic functions] 2d (ed.) Gosudarstv. Izdat. Tehn.-Teor. Lit., Moscow-Leningrad (1950)
Rahman, Q.I., Schmeisser, G.: Analytic Theory of Polynomials. Oxford University Press, Oxford (2002)
Weissler, F.B.: Logarithmic Sobolev inequalities and hypercontractive estimates on the circle. J. Funct. Anal. 37(2), 218–234 (1980)
Acknowledgments
We thank the referees for helpful comments which improved the presentation, and also S. Kwapień for his permission to include his result from Proposition 2.1. Finally, we are grateful to D. Galicer and A. Pérez for some clarifying discussions.
Communicated by Doron S. Lubinsky.
The work was supported by The Foundation for Polish Science (FNP).
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Defant, A., Mastyło, M. \(L^{p}\)-Norms and Mahler’s Measure of Polynomials on the n-Dimensional Torus. Constr Approx 44, 87–101 (2016). https://doi.org/10.1007/s00365-015-9319-x