Aspects of the Kahane–Salem–Zygmund inequalities in Banach spaces

The main aim of this work is to discuss several different approaches to the celebrated Kahane–Salem–Zygmund inequalities. In particular, we prove estimates for exponential Orlicz norms of the averages
$$\sup_{1\le j\le N}\Big|\sum_{i=1}^K a_i(j)\,\gamma_i\Big|,$$
where $(a_i(j))_{j=1}^N \in \ell_\infty^N$ for $1 \le i \le K$, and the $(\gamma_i)$ form a sequence of real or complex subgaussian random variables. Lifting these inequalities to finite-dimensional Banach spaces, we get some new Kahane–Salem–Zygmund type inequalities, in particular for spaces of subgaussian random polynomials and multilinear forms on finite-dimensional Banach spaces, and also for subgaussian random Dirichlet polynomials.


Introduction
The study of random inequalities for trigonometric polynomials in one variable goes back to the seminal work of Salem and Zygmund in [35]; later it was Kahane who in [19] extended these ideas to more abstract settings, including trigonometric polynomials in several variables. In recent decades such inequalities have been of central importance in numerous topics of modern analysis, such as Fourier analysis, analytic number theory, and holomorphy in high dimensions.
In this work we attempt to discuss three different approaches to Kahane–Salem–Zygmund type inequalities, in particular for subgaussian random variables. Each of these approaches is based on different tools, and hence they have different advantages (and disadvantages). The applications we give improve several probabilistic estimates which were recently of importance in various multilinear settings.
Let us give a brief description of some keystones. Given a subgaussian sequence $(\gamma_i)_{i\in\mathbb{N}}$ of random variables over a probability measure space $(\Omega, \mathcal{A}, \mathbb{P})$, and a finite sequence $(a_i)$ of vectors in $\ell_\infty^N$, we are interested in estimates for the expectation of $\big\|\sum_i a_i \gamma_i\big\|_{\ell_\infty^N}$, which we call abstract Kahane–Salem–Zygmund inequalities (KSZ-inequalities for short).
More generally (and more precisely), given a Banach function space $X$ over a probability measure space $(\Omega, \mathcal{A}, \mathbb{P})$ and a sequence of random variables $(\gamma_i)_{i\in\mathbb{N}} \subset X$, we are looking for a function $\psi: \mathbb{N} \to (0,\infty)$ and a sequence $(S_n)$, $S_n := (\mathbb{K}^n, \|\cdot\|_n)$, $n \in \mathbb{N}$, of semi-normed spaces such that, for every choice of finitely many vectors $a_i = (a_i(j))_{j=1}^N \in \ell_\infty^N$, $1 \le i \le K$,
$$\Big\| \sup_{1\le j\le N}\Big|\sum_{i=1}^K a_i(j)\,\gamma_i\Big| \Big\|_X \le \psi(N)\, \sup_{1\le j\le N}\big\|(a_i(j))_{i=1}^K\big\|_{S_K}. \tag{1}$$
In what follows, a sequence $(\gamma_i)_{i\in\mathbb{N}}$ of random variables in $X$ is said to satisfy the KSZ-inequality of type $(X, (S_n)_n, \psi)$ provided that the above inequality holds. Of special importance for the applications we have in mind are Rademacher random variables and the much larger class of subgaussian random variables, including real and complex normal Gaussian as well as complex Steinhaus variables.
Let us explain why we call such estimates abstract KSZ-inequalities. Note first that if we take a sequence $(\varepsilon_i)_{i\in\mathbb{N}}$ of independent Rademacher random variables (that is, independent random variables taking the values $+1$ and $-1$ with equal probability $\frac12$), then for $X = L_r(\mathbb{P})$ with $1 \le r < \infty$, $S_n := \ell_2^n$ for each $n \in \mathbb{N}$, and $N = 1$, the estimate from (1) reduces to (the right-hand side of) Khintchine's inequality: there is a constant $B_r > 0$ such that, for each $K \in \mathbb{N}$ and all scalars $t_1, \ldots, t_K$,
$$\Big\| \sum_{i=1}^K t_i \varepsilon_i \Big\|_{L_r(\mathbb{P})} \le B_r \Big( \sum_{i=1}^K |t_i|^2 \Big)^{\frac12}. \tag{2}$$
To describe results for Orlicz spaces, recall that if $\varphi: [0,\infty) \to [0,\infty)$ is an Orlicz function (that is, a convex, increasing and continuous function with $\varphi(0) = 0$), then the Orlicz space $L_\varphi(\nu)$ ($L_\varphi$ for short) on a measure space $(\Omega, \mathcal{A}, \nu)$ is defined to be the space of all (real or complex) $\mathcal{A}$-measurable functions $f$ such that $\int_\Omega \varphi(\lambda |f|)\, d\nu < \infty$ for some $\lambda > 0$, and it is equipped with the norm
$$\|f\|_{L_\varphi} := \inf\Big\{ \lambda > 0\,;\ \int_\Omega \varphi\big(|f|/\lambda\big)\, d\nu \le 1 \Big\}.$$
If $\Omega$ is a finite or countable set, $\mathcal{A} = 2^\Omega$, and $\nu$ the counting measure, then we write $\ell_\varphi(\nu)$ instead of $L_\varphi(\nu)$. For $r \in [1, \infty)$ we consider the (so-called) exponential Orlicz space $L_{\varphi_r}$ generated by the Orlicz function $\varphi_r$ given by $\varphi_r(t) := e^{t^r} - 1$ for all $t \in [0, \infty)$. Observe that for all $1 \le r < \infty$ we have $L_{\varphi_r} \hookrightarrow L_r$, and $\|f\|_{L_r} \le \|f\|_{L_{\varphi_r}}$ for all $f \in L_{\varphi_r}$.
See, e.g., the monograph [24] for all needed information on Orlicz functions and spaces. For later reference we mention the following equivalent formulation of $L_{\varphi_r}$ in terms of $L_p$-spaces: $f \in L_{\varphi_r}$ if and only if $f \in L_p$ for all $1 \le p < \infty$ and $\sup_{1\le p<\infty} p^{-1/r} \|f\|_p < \infty$, and in this case
$$\|f\|_{L_{\varphi_r}} \asymp \sup_{1\le p<\infty} p^{-1/r} \|f\|_p, \tag{3}$$
up to equivalence with constants which only depend on $r$. In Theorem 3.6 (here only formulated for Rademacher instead of subgaussian random variables) we prove that for every $2 \le r < \infty$ there is a constant $C_r > 0$ such that, for each $K, N \in \mathbb{N}$ and for every choice of finitely many $a_1, \ldots, a_K \in \ell_\infty^N$ with $a_i = (a_i(j))_{j=1}^N$, $1 \le i \le K$, we have for $r = 2$
$$\Big\| \sup_{1\le j\le N}\Big|\sum_{i=1}^K a_i(j)\,\varepsilon_i\Big| \Big\|_{L_{\varphi_2}} \le C_2 (1 + \log N)^{\frac12}\, \sup_{1\le j\le N}\Big(\sum_{i=1}^K |a_i(j)|^2\Big)^{\frac12}, \tag{4}$$
and for $r \in (2, \infty)$
$$\Big\| \sup_{1\le j\le N}\Big|\sum_{i=1}^K a_i(j)\,\varepsilon_i\Big| \Big\|_{L_{\varphi_r}} \le C_r (1 + \log N)^{\frac1r}\, \sup_{1\le j\le N}\big\|(a_i(j))_{i=1}^K\big\|_{\ell_{r',\infty}}, \tag{5}$$
where $r' = r/(r-1)$ and, as usual, $\ell_{r',\infty}$ indicates the classical Marcinkiewicz sequence space. Moreover, we will see that the asymptotic behaviour of the constant $C_r(1 + \log N)^{\frac1r}$ cannot be improved. Several remarks are in order. Note first that for $N = 1$ and $r = 2$ this estimate is due to Zygmund [42]. For $N = 1$ and $r \in (2, \infty)$, Pisier proved in [30] that the Marcinkiewicz sequence space $\ell_{r',\infty}$, instead of $\ell_2$, comes into play. We also remark that this fact was mentioned by Rodin and Semyonov in [34, Section 6]. Observe that in view of (3), these estimates (still for $N = 1$) obviously extend (the right-hand side of) Khintchine's inequality.
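A small Monte Carlo sanity check (not from the paper) of the $(1+\log N)^{1/2}$ growth in (4); the block-diagonal coefficient matrix used here is a hypothetical choice for which the right-hand side of (4) equals $1$, and the prefactor $3$ below is an arbitrary illustrative constant:

```python
import math
import random

def sup_average(N, m, trials, rng):
    """Monte Carlo estimate of E sup_{j<=N} |sum_i a_i(j) eps_i| for a
    block-diagonal coefficient choice: a_i(j) = 1/sqrt(m) on the j-th
    block of m signs, so that sup_j ||(a_i(j))_i||_2 = 1."""
    total = 0.0
    for _ in range(trials):
        best = 0.0
        for _j in range(N):
            s = sum(rng.choice((-1.0, 1.0)) for _ in range(m)) / math.sqrt(m)
            best = max(best, abs(s))
        total += best
    return total / trials

rng = random.Random(1)
for N in (4, 16, 64):
    est = sup_average(N, 50, 200, rng)
    bound = 3 * math.sqrt(1 + math.log(N))
    print(N, round(est, 2), round(bound, 2))
```

The estimates grow roughly like $\sqrt{2\log N}$ and stay well below the displayed bound.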
Moreover, (4) and (5) hold not only for Rademacher random variables, but even for the much larger class of subgaussian random variables, including real and complex normal Gaussian as well as complex Steinhaus variables.
In fact, for subgaussian random variables we intend to give three different approaches to (4) and (5). The first one, from Sect. 3.1, is elementary, and based on Khintchine's inequality and extrapolation of the $\ell_\infty^N$-norm. In Sect. 4 we extend and recover this result using the so-called lattice M-constants. And finally, in Sect. 5 we once again widen our perspective on all this, applying various techniques from interpolation theory.
Obviously both estimates (4) and (5) have the form discussed in (1), so let us come back to the above question why we decided to call them 'abstract' KSZ-inequalities. Our main initial intention was to derive new multidimensional KSZ-inequalities. The first estimates of this type were studied by Kahane, who proves in [19, pp. 68–69] that, given a trigonometric Rademacher random polynomial $P$ in $n$ variables of degree $\deg(P) \le m$, that is,
$$P(\omega, z) = \sum_{|\alpha| \le m} \varepsilon_\alpha(\omega)\, c_\alpha z^\alpha, \qquad z \in \mathbb{T}^n, \tag{6}$$
where the $\varepsilon_\alpha$ for $\alpha \in \mathbb{Z}^n$ with $|\alpha| = \sum_k |\alpha_k| \le m$ are independent Rademacher variables on the probability space $(\Omega, \mathcal{A}, \mathbb{P})$, the expectation of the sup norm of the random polynomial on the $n$-dimensional torus $\mathbb{T}^n$ has the following upper estimate:
$$\mathbb{E} \sup_{z \in \mathbb{T}^n} |P(\cdot, z)| \le C \big(n(1 + \log m)\big)^{\frac12} \Big(\sum_{|\alpha| \le m} |c_\alpha|^2\Big)^{\frac12}, \tag{7}$$
where $C > 0$ is a universal constant.
Let us indicate how (4) implies (7). Denote by $\mathcal{T}_m(\mathbb{T}^n)$ the space of all trigonometric polynomials $P(z) = \sum_{|\alpha|\le m} c_\alpha z^\alpha$, $z \in \mathbb{T}^n$, with $\deg P \le m$, which together with the sup norm on $\mathbb{T}^n$ forms a Banach space. A well-known consequence of Bernstein's inequality (see, e.g., [33, Corollary 5.2.3]) is that, for all positive integers $n, m$, there is a subset $F \subset \mathbb{T}^n$ of cardinality $\operatorname{card} F \le (1 + 20m)^n$ such that, for every $P \in \mathcal{T}_m(\mathbb{T}^n)$, we have
$$\sup_{z \in \mathbb{T}^n} |P(z)| \le 2 \max_{z \in F} |P(z)|.$$
In other terms, for $N = (1+20m)^n$ the linear mapping $I: \mathcal{T}_m(\mathbb{T}^n) \to \ell_\infty^N$, $I(P) := (P(z))_{z \in F}$, is an isomorphic embedding satisfying $\|I\|\,\|I^{-1}\| \le 2$. We now observe as an immediate consequence of (4) and (5) that, for each $2 \le r < \infty$, there exists a constant $C_r > 0$ such that a corresponding exponential Orlicz norm estimate holds for any choice of polynomials $P_1, \ldots, P_K \in \mathcal{T}_m(\mathbb{T}^n)$. Applying this result to the Rademacher random polynomial $P$ from above, we obviously get a strong extension of (7), which can be seen as a sort of 'exponential variant' of the KSZ-inequality. Working out these ideas, we show in Sect. 6 that in this way various recent KSZ-inequalities for polynomials and multilinear forms on finite-dimensional Banach spaces can be simplified, unified, and extended, in particular recent results of Bayart [4] and Pellegrino et al. [28]. Finally, in Sect. 6.4 we prove KSZ-inequalities for randomized Dirichlet polynomials. These results are heavily based on 'Bohr's point of view', which shows an intimate interaction between the theory of Dirichlet polynomials and the theory of trigonometric polynomials in several variables.
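The Bernstein-type grid argument can be illustrated numerically in the one-variable case $n = 1$: the sup norm of a degree-$m$ trigonometric polynomial is controlled, up to the factor $2$, by its maximum over $1 + 20m$ grid points. This sketch is not from the paper; the equally spaced grid is one concrete (assumed) choice of the set $F$:

```python
import cmath
import random

def trig_poly(coeffs, z):
    """Evaluate P(z) = sum_{k=-m}^{m} c_k z^k at a point z on the unit circle."""
    m = (len(coeffs) - 1) // 2
    return sum(c * z ** (k - m) for k, c in enumerate(coeffs))

rng = random.Random(7)
m = 5
coeffs = [complex(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(2 * m + 1)]

N = 1 + 20 * m  # size of the coarse grid suggested by the Bernstein argument
coarse = [cmath.exp(2j * cmath.pi * k / N) for k in range(N)]
fine = [cmath.exp(2j * cmath.pi * k / 4096) for k in range(4096)]  # proxy for T

sup_fine = max(abs(trig_poly(coeffs, z)) for z in fine)
max_coarse = max(abs(trig_poly(coeffs, z)) for z in coarse)
print(sup_fine / max_coarse)  # at most 2 (in fact very close to 1 here)
```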

Preliminaries
We use standard notation from Banach space theory. Let $X, Y$ be Banach spaces. We denote by $\mathcal{L}(X, Y)$ the space of all bounded linear operators $T: X \to Y$ with the usual operator norm. If we write $X \hookrightarrow Y$, then we assume that $X \subset Y$ and the inclusion map $\operatorname{id}: X \to Y$ is bounded. If $X = Y$ with equality of norms, then we write $X \cong Y$. We denote by $B_X$ the closed unit ball of $X$, and by $X^*$ its dual Banach space. We also need some definitions from local Banach space theory. Let $X$ and $Y$ be Banach spaces. An operator $T: X \to Y$ is said to be an isomorphic embedding of $X$ into $Y$ if there exists $C > 0$ such that $\|Tx\|_Y \ge C\|x\|_X$ for every $x \in X$. In this case $T^{-1}$ is a well-defined operator from $(TX, \|\cdot\|_Y)$ onto $X$. Given a real number $1 \le \lambda < \infty$, we say that $X$ $\lambda$-embeds into $Y$ whenever there exists an isomorphic embedding $T$ of $X$ into $Y$ such that $\|T\|\,\|T^{-1}\| \le \lambda$. In this case, we call $T$ a $\lambda$-embedding of $X$ into $Y$. Observe that this is equivalent to the existence of a set $\{x_1^*, \ldots, x_N^*\}$ of functionals in $B_{X^*}$ such that, for some $L, M > 0$ with $LM \le \lambda$, we have
$$\frac{1}{L}\,\|x\|_X \le \max_{1 \le j \le N} |x_j^*(x)| \le M\,\|x\|_X, \qquad x \in X.$$
Then the operator $T: X \to \ell_\infty^N$ given by $Tx := (x_j^*(x))_{j=1}^N$, $x \in X$, induces the $\lambda$-embedding of $X$ into $\ell_\infty^N$. Throughout the paper, $(\Omega, \mathcal{A}, \mathbb{P})$ stands for a probability measure space. Given two sequences $(a_n)$ and $(b_n)$ of nonnegative real numbers, we write $a_n \prec b_n$ or $a_n = O(b_n)$ if there is a constant $c > 0$ such that $a_n \le c\, b_n$ for all $n \in \mathbb{N}$, while $a_n \asymp b_n$ means that both $a_n \prec b_n$ and $b_n \prec a_n$ hold. Analogously we use the symbols $f \prec g$ and $f \asymp g$ for nonnegative real functions.
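As a concrete illustration of the notion of a $\lambda$-embedding (not taken from the paper): pairing the Euclidean plane with $N$ equally spaced norm-one functionals gives a $\lambda$-embedding of $\ell_2^2$ into $\ell_\infty^N$ with $\lambda = 1/\cos(\pi/(2N))$:

```python
import math

def embed(x, N):
    """T x := (<x, u_j>)_{j=0}^{N-1} with u_j = (cos(pi j/N), sin(pi j/N));
    since the u_j are norm-one functionals, ||Tx||_inf <= ||x||_2, and the
    angular spacing pi/N gives ||Tx||_inf >= cos(pi/(2N)) ||x||_2."""
    return [x[0] * math.cos(math.pi * j / N) + x[1] * math.sin(math.pi * j / N)
            for j in range(N)]

N = 8
lam = 1 / math.cos(math.pi / (2 * N))
worst = min(
    max(abs(v) for v in embed((math.cos(2 * math.pi * k / 360),
                               math.sin(2 * math.pi * k / 360)), N))
    for k in range(360)
)
print(lam, worst)  # the worst case over sampled unit vectors stays >= 1/lam
```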
Banach function and sequence spaces. Let $(\Omega, \mu) := (\Omega, \Sigma, \mu)$ be a complete $\sigma$-finite measure space and let $X$ be a Banach space. $L_0(\mu, X)$ denotes the space of all equivalence classes of strongly measurable $X$-valued functions on $\Omega$, equipped with the topology of convergence in measure (on sets of finite $\mu$-measure). In the case $X = \mathbb{K}$, we write $L_0(\mu)$ for short instead of $L_0(\mu, \mathbb{K})$ (where as usual $\mathbb{K} := \mathbb{C}$ or $\mathbb{K} := \mathbb{R}$). Let $E$ be a Banach function lattice over $(\Omega, \mu)$ and let $X$ be a Banach space. The Köthe–Bochner space $E(X)$ is defined to consist of all $f \in L_0(\mu, X)$ with $\|f(\cdot)\|_X \in E$, and is equipped with the norm $\|f\|_{E(X)} := \big\| \|f(\cdot)\|_X \big\|_E$. Recall that $E \subset L_0(\mu)$ is said to be a Banach function lattice if there exists $h \in E$ with $h > 0$ a.e. and $E$ is a Banach ideal in $L_0(\mu)$, that is, if $|f| \le |g|$ a.e. with $g \in E$ and $f \in L_0(\mu)$, then $f \in E$ and $\|f\|_E \le \|g\|_E$. If $J = \mathbb{N}$ or $J = \mathbb{Z}$ and $(\Omega, \mathcal{A}, \mu) := (J, 2^J, \mu)$, where $\mu$ is the counting measure, then a Banach lattice $E$ in $\omega(J) := L_0(J, 2^J, \mu)$ is said to be a Banach sequence space on $J$. For any such Banach lattice $E$ and any positive sequence $w = (w_n)$ on $J$, we define the weighted lattice $E((w_n))$ on $J$ to be the Banach space of all scalar sequences $\xi = (\xi_n)_{n\in J}$ such that $\xi w := (\xi_n w_n)_{n \in J} \in E$. The norm on $E((w_n))$ is defined in the usual way, that is, $\|\xi\|_{E((w_n))} := \|\xi w\|_E$. In what follows, $(x_k^*)$ denotes the decreasing rearrangement of the sequence $(|x_k|)$. Given a Banach sequence space $E$ and a positive integer $N$, we write $E_N$ for $\mathbb{K}^N$ equipped with the norm induced by $E$. In what follows we identify $(x_k)_{k=1}^N$ with $\sum_{k=1}^N x_k e_k$ and, for simplicity of notation, we write $\|(x_k)_{k=1}^N\|_E$ instead of $\|(x_k)_{k=1}^N\|_{E_N}$. We will consider the Marcinkiewicz symmetric sequence spaces $m_\psi$. Recall that if $\psi$ is a positive non-decreasing sequence, then $m_\psi$ is defined to be the space of all sequences $x = (x_k)$ such that
$$\|x\|_{m_\psi} := \sup_{n} \frac{1}{\psi(n)} \sum_{k=1}^n x_k^* < \infty.$$
In particular, if $r \in (1, \infty)$ and $\psi(n) = n^{1-1/r}$, then the space $m_\psi$ coincides with the classical Marcinkiewicz space $\ell_{r,\infty}$ and, in the above estimate, the norms are equivalent with constant $C_r = r/(r-1)$.
Subgaussian random variables. Closely following Pisier [31], we list some basic facts about real and complex subgaussian random variables. Let $(\Omega, \mathcal{A}, \mathbb{P})$ be a probability space, and $f$ a random variable. If $f$ is real-valued, then $f$ is said to be subgaussian whenever there is some $s \ge 0$ such that for every $t \in \mathbb{R}$
$$\mathbb{E} \exp(t f) \le \exp(s^2 t^2/2),$$
and if $f$ is complex-valued, whenever there is some $s \ge 0$ such that for every $z \in \mathbb{C}$
$$\mathbb{E} \exp\big(\operatorname{Re}(z f)\big) \le \exp(s^2 |z|^2/2).$$
In this case, the best such $s$ is denoted by $\operatorname{sg}(f)$. Note that subgaussian random variables always have mean zero. By Markov's inequality it is well known that, given a real subgaussian $f$, for all $t > 0$ we have
$$\mathbb{P}(|f| > t) \le 2 \exp\big(-t^2/(2\operatorname{sg}(f)^2)\big), \tag{9}$$
whereas in the complex case an analogous tail estimate holds with slightly worse constants. The Orlicz spaces $L_{\varphi_r}$, $1 \le r < \infty$, provide a natural framework for the study of subgaussian random variables. Among other things, it is known that a real mean-zero random variable $f$ is subgaussian if and only if $f \in L_{\varphi_2}$, in which case $\operatorname{sg}(f)$ and $\|f\|_{L_{\varphi_2}}$ are equivalent up to universal constants (see, e.g., [31, Lemma 3.2]). A real-valued sequence $(f_n)$ is called subgaussian if there is $s \ge 0$ such that for any $x = (x_n) \in \ell_2$ of norm one, the random variable $f = \sum_{n=1}^\infty x_n f_n$ is subgaussian. Equivalently, for every finitely supported sequence $x = (x_n)$,
$$\operatorname{sg}\Big(\sum_n x_n f_n\Big) \le s \Big(\sum_n |x_n|^2\Big)^{\frac12}. \tag{11}$$
Additionally, a complex-valued sequence $(f_n)$ is said to be subgaussian whenever the real-valued sequence formed by both the real parts $\operatorname{Re} f_n$ and the imaginary parts $\operatorname{Im} f_n$ is subgaussian. This implies that there is some $s \ge 0$ such that (11) holds for any finitely supported complex $x = (x_n)$. Moreover, $\operatorname{sg}((f_n))$ denotes the best possible number $s$ such that (11) holds. Let us recall a few examples ([31, Lemma 1.2 and p. 5]). Clearly, every sequence of independent real (resp., complex) normal Gaussian variables is subgaussian with constant 1. Moreover, every sequence of independent Rademacher variables $\varepsilon_i$ is subgaussian with $\operatorname{sg}((\varepsilon_i)) = 1$, and also independent Steinhaus variables $z_i$ (random variables with values in the unit circle $\mathbb{T}$ and with distribution equal to the normalized Haar measure) form subgaussian sequences with $\operatorname{sg}((z_i)) = 1$.
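The stated subgaussian constants can be confirmed directly from the moment generating functions; a quick numerical check (not from the paper) for the Rademacher bound $\mathbb{E}e^{t\varepsilon} = \cosh t \le e^{t^2/2}$ and for the Steinhaus average $\frac{1}{2\pi}\int_0^{2\pi} e^{x\cos\theta}\,d\theta = I_0(x) \le e^{x^2/2}$:

```python
import math

# Rademacher: E exp(t * eps) = cosh(t) <= exp(t^2 / 2), i.e. sg constant 1.
for k in range(1, 200):
    t = k / 10.0
    assert math.cosh(t) <= math.exp(t * t / 2)

# Steinhaus: averaging exp(x * cos(theta)) over the circle gives the Bessel
# value I_0(x); compare a Riemann-sum average with exp(x^2 / 2).
n = 2000
for k in range(1, 40):
    x = k / 4.0
    avg = sum(math.exp(x * math.cos(2 * math.pi * j / n)) for j in range(n)) / n
    assert avg <= math.exp(x * x / 2)
print("both subgaussian bounds hold on the tested grid")
```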

Polynomials.
Given Banach spaces $X_1, \ldots, X_m$, the product $X_1 \times \cdots \times X_m$ is equipped with the standard norm $\|(x_1, \ldots, x_m)\| := \max_{1\le j\le m} \|x_j\|_{X_j}$ for all $(x_1, \ldots, x_m) \in X_1 \times \cdots \times X_m$. The Banach space $\mathcal{L}^m(X_1, \ldots, X_m)$ of all scalar-valued bounded $m$-linear mappings $L$ on $X_1 \times \cdots \times X_m$ is equipped with the norm
$$\|L\| := \sup\big\{|L(x_1, \ldots, x_m)|\,;\ x_j \in B_{X_j},\ 1 \le j \le m\big\}.$$
A scalar-valued function $P$ on a Banach space $X$ is said to be an $m$-homogeneous polynomial if it is the restriction of an $m$-linear form $L$ on $X^m$ to its diagonal, i.e., $P(x) = L(x, \ldots, x)$ for all $x \in X$. We say that $P$ is a polynomial of degree at most $m$ whenever $P = \sum_{k=0}^m P_k$, where each $P_k$ is $k$-homogeneous ($P_0$ a constant). For a given positive integer $m$, we denote by $\mathcal{P}_m(X)$ the Banach space of all polynomials on $X$ of degree at most $m$, equipped with the norm $\|P\| := \sup\{|P(z)|\,;\ z \in B_X\}$. The symbol $\mathcal{P}(X)$ denotes the union of all $\mathcal{P}_m(X)$, $m \in \mathbb{N}$. More generally, we write $\|P\|_E := \sup\{|P(z)|\,;\ z \in E\}$ whenever $E$ is a non-empty subset of $X$.
For $n \in \mathbb{N}$ and $m \in \mathbb{N}_0$ we denote by $\mathcal{T}_m(\mathbb{T}^n)$ the space of all trigonometric polynomials $P(z) = \sum_{\alpha \in \mathbb{Z}^n,\, |\alpha| \le m} c_\alpha z^\alpha$ on the $n$-dimensional torus $\mathbb{T}^n$ which have degree $\deg(P) \le m$.

Clearly, $\mathcal{T}_m(\mathbb{T}^n)$ together with the sup norm $\|\cdot\|_{\mathbb{T}^n}$ (also denoted by $\|\cdot\|_\infty$) forms a Banach space.
Interpolation. We recall some fundamental notions from interpolation theory (see, e.g., [5,8,27]). The pair $\bar{X} = (X_0, X_1)$ of Banach spaces is called a Banach couple if there exists a Hausdorff topological vector space $\mathcal{X}$ such that $X_j \hookrightarrow \mathcal{X}$, $j = 0, 1$. A mapping $\mathcal{F}$, acting on the class of all Banach couples, is called an interpolation functor if for every couple $\bar{X}$ the space $\mathcal{F}(\bar{X})$ is a Banach space intermediate between $X_0 \cap X_1$ and $X_0 + X_1$, and $T: \mathcal{F}(\bar{X}) \to \mathcal{F}(\bar{Y})$ is bounded for every operator $T: \bar{X} \to \bar{Y}$ (meaning that $T: X_0 + X_1 \to Y_0 + Y_1$ is linear and its restrictions $T: X_j \to Y_j$, $j = 0, 1$, are defined and bounded). If additionally there is a constant $C > 0$ such that for each $T: \bar{X} \to \bar{Y}$
$$\|T: \mathcal{F}(\bar{X}) \to \mathcal{F}(\bar{Y})\| \le C \max_{j=0,1} \|T: X_j \to Y_j\|,$$
then $\mathcal{F}$ is called bounded. Clearly, $C \ge 1$, and if we may take $C = 1$, then $\mathcal{F}$ is called exact.
Following [27], the function $\varphi_{\mathcal{F}}$ which corresponds to an exact interpolation functor $\mathcal{F}$ by the equality
$$\mathcal{F}(\alpha \mathbb{R}, \beta \mathbb{R}) = \varphi_{\mathcal{F}}(\alpha, \beta)\,\mathbb{R}, \qquad \alpha, \beta > 0,$$
is called the characteristic (or fundamental) function of the functor $\mathcal{F}$. Here $\alpha\mathbb{R}$ denotes $\mathbb{R}$ equipped with the norm $\|\cdot\|_{\alpha\mathbb{R}} := \alpha|\cdot|$ for $\alpha > 0$.

Gateway
We prove KSZ-inequalities in the sense of (1) for subgaussian random variables. Our approach seems entirely elementary. In Sect. 3.1 we use Khintchine's inequality and extrapolation, and in Sect. 3.2 we elaborate on these ideas using a convexity argument.

KSZ versus Khintchine's inequality
The following estimate for Rademacher averages in $\ell_\infty^N$ is weaker than what we are going to prove in Theorem 3.6, where we replace the Rademacher variables $\varepsilon_i$ by subgaussian variables $\gamma_i$ and the $L_r$-spaces by exponential Orlicz spaces $L_{\varphi_r}$. But its proof is considerably simpler than what is going to follow later, though it still reflects some of the main ideas of this article.

Theorem 3.1 Let $(\varepsilon_i)_{i\in\mathbb{N}}$ be a sequence of independent Rademacher random variables. Then, for every $r \in [2, \infty)$, every $N \in \mathbb{N}$, and every choice of finitely many $a_1, \ldots, a_K \in \ell_\infty^N$ with $a_i = (a_i(j))_{j=1}^N$,
$$\Big(\mathbb{E} \sup_{1\le j\le N}\Big|\sum_{i=1}^K a_i(j)\,\varepsilon_i\Big|^r\Big)^{\frac1r} \le e^{2/r}\, (r\, h_N)^{\frac12}\, \sup_{1\le j\le N}\Big(\sum_{i=1}^K |a_i(j)|^2\Big)^{\frac12},$$
where $h_N$ denotes the $N$-th harmonic number. Moreover, if we denote by $C(N, r)$ the best constant in this inequality, then $C(N, r) \asymp (r(1 + \log N))^{\frac12}$ up to universal constants. Note again that for $N = 1$ these estimates are (up to constants) covered by (the right-hand side of) Khintchine's inequality from (2).
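The Khintchine constant $B_q \le \sqrt{q}$ used below can be checked by exhaustive enumeration of sign patterns; a small sanity check (not from the paper), restricted to a few even moments:

```python
import itertools
import math

# Exhaustively verify (E |sum_i t_i eps_i|^q)^(1/q) <= sqrt(q) * ||t||_2
# for one fixed coefficient vector and the even moments q = 2, 4, 6.
t = [0.9, -0.7, 0.5, 0.4, -0.3, 0.25, 0.2, 0.1, -0.05, 0.03]
l2 = math.sqrt(sum(v * v for v in t))
for q in (2, 4, 6):
    moment = 0.0
    for signs in itertools.product((-1, 1), repeat=len(t)):
        s = sum(e * v for e, v in zip(signs, t))
        moment += abs(s) ** q
    moment /= 2 ** len(t)
    assert moment ** (1 / q) <= math.sqrt(q) * l2
print("Khintchine bound B_q <= sqrt(q) verified for q = 2, 4, 6")
```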
For the proof we need slightly more preparation. Define for each $N \in \mathbb{N}$ the $N$-th harmonic number $h_N := \sum_{k=1}^N \frac{1}{k}$. In what follows, we will use the obvious estimates $\max\{1, \log N\} \le h_N \le 1 + \log N$ without any further reference. We add an elementary observation which will turn out to be crucial.

Lemma 3.2 For every $r \in [1, \infty)$, $N \in \mathbb{N}$, and $x \in \mathbb{K}^N$,
$$e^{-1/r}\, \|x\|_{\ell_{r h_N}^N} \le \|x\|_{\ell_\infty^N} \le \|x\|_{\ell_{r h_N}^N}.$$

Proof From the obvious inequality $\frac{\log t}{t} \le \frac{1}{e}$, $t \ge 1$, we get that $N^{1/(r h_N)} = e^{\log N/(r h_N)} \le e^{1/r}$, since $h_N \ge \log N$. This combined with the homogeneity of the norm yields the left-hand estimate; the right-hand estimate is obvious.
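The extrapolation trick at work here, namely that the $\ell_q^N$-norm with $q = r h_N$ reproduces the $\ell_\infty^N$-norm up to the factor $N^{1/(r h_N)} \le e^{1/r}$, is easy to confirm numerically (a sketch, not from the paper):

```python
import math

def p_norm(x, p):
    return sum(abs(v) ** p for v in x) ** (1 / p)

N = 100
h_N = sum(1 / k for k in range(1, N + 1))
# the harmonic-number estimates used throughout
assert max(1.0, math.log(N)) <= h_N <= 1 + math.log(N)

x = [math.sin(k) for k in range(1, N + 1)]  # an arbitrary test vector
sup = max(abs(v) for v in x)
for r in (1, 2, 4, 8):
    q = r * h_N
    val = p_norm(x, q)
    assert sup <= val + 1e-9
    assert val <= math.e ** (1 / r) * sup
print("extrapolation bounds confirmed for r = 1, 2, 4, 8")
```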
We are ready for the proof of Theorem 3.1.

Proof of Theorem 3.1 By Lemma 3.2,
$$\Big\|\sup_{1\le j\le N}\Big|\sum_{i=1}^K a_i(j)\,\varepsilon_i\Big|\Big\|_{L_r} \le e^{1/r}\, \Big\| \big\|\Big(\sum_{i=1}^K a_i(j)\,\varepsilon_i\Big)_{j=1}^N\big\|_{\ell_{r h_N}^N} \Big\|_{L_r} \le e^{1/r}\, \Big\| \big\|\Big(\sum_{i=1}^K a_i(j)\,\varepsilon_i\Big)_{j=1}^N\big\|_{\ell_{r h_N}^N} \Big\|_{L_{r h_N}},$$
where the last estimate follows from Hölder's inequality. Now the continuous Minkowski inequality implies
$$\Big\| \big\|\Big(\sum_{i=1}^K a_i(j)\,\varepsilon_i\Big)_{j=1}^N\big\|_{\ell_{r h_N}^N} \Big\|_{L_{r h_N}} \le \Big( \sum_{j=1}^N \Big\| \sum_{i=1}^K a_i(j)\,\varepsilon_i \Big\|_{L_{r h_N}}^{r h_N} \Big)^{\frac{1}{r h_N}}.$$
Finally, we use Khintchine's inequality (2) together with the well-known estimate $B_{r h_N} \le \sqrt{r h_N}$ to get that
$$\Big( \sum_{j=1}^N \Big\| \sum_{i=1}^K a_i(j)\,\varepsilon_i \Big\|_{L_{r h_N}}^{r h_N} \Big)^{\frac{1}{r h_N}} \le \sqrt{r h_N}\; N^{\frac{1}{r h_N}} \sup_{1\le j\le N}\Big(\sum_{i=1}^K |a_i(j)|^2\Big)^{\frac12} \le e^{1/r} \sqrt{r h_N}\, \sup_{1\le j\le N}\Big(\sum_{i=1}^K |a_i(j)|^2\Big)^{\frac12}.$$
Using the fact that $h_N \le 1 + \log N$ completes the proof.

From the norm equivalence (3) (take there $r = 2$) we immediately deduce the following consequence.

Corollary 3.3 Let $(\varepsilon_i)_{i\in\mathbb{N}}$ be a sequence of independent Rademacher random variables. Then, for any choice of finitely many vectors $a_1, \ldots, a_K \in \ell_\infty^N$,
$$\Big\|\sup_{1\le j\le N}\Big|\sum_{i=1}^K a_i(j)\,\varepsilon_i\Big|\Big\|_{L_{\varphi_2}} \le e^2 (1 + \log N)^{\frac12}\, \sup_{1\le j\le N}\Big(\sum_{i=1}^K |a_i(j)|^2\Big)^{\frac12}.$$

By a result of Peskir [29] it is known that for $N = 1$ the best possible constant here equals $\sqrt{8/3}$. Note that Corollary 3.3 states that, in the language of (1), $X = L_{\varphi_2}$ and $S_n = \ell_2^n$ for each $n \in \mathbb{N}$ satisfy an abstract KSZ-inequality with constant $\psi(N) = e^2(1 + \log N)^{\frac12}$.
In the following two sections (Lemma 3.4 and Theorem 3.6) this result will be extended to $X = L_{\varphi_r}$, $2 < r < \infty$, $S_n = \ell_{r',\infty}^n$ with $r' = r/(r-1)$, and to subgaussian random variables.

KSZ-inequalities for subgaussian random variables
As discussed in the introduction, the following result is one of our crucial tools. For the case of Rademacher random variables see Zygmund [42] ($r = 2$), Pisier [30], and Rodin–Semyonov [34] (where the lemma is mentioned without proof). Replacing Rademacher random variables by subgaussians, it is an improvement of a result mentioned by Pisier in [31, Remark 10.5], and it is surely well known to specialists.
For the sake of completeness we include an entirely elementary proof which is done in a similar fashion as in the case of Rademacher random variables in [20,Sect. 4.1].

Lemma 3.4
Let $(\gamma_i)_{i\in\mathbb{N}}$ be a sequence of (real or complex) subgaussian random variables over $(\Omega, \mathcal{A}, \mathbb{P})$ with $s = \operatorname{sg}((\gamma_i))$.
(1) There is a constant $C_2 = C(s) > 0$ such that, for any choice of (real or complex) scalars $a_1, \ldots, a_n$,
$$\Big\| \sum_{i=1}^n a_i \gamma_i \Big\|_{L_{\varphi_2}} \le C_2 \Big(\sum_{i=1}^n |a_i|^2\Big)^{\frac12}.$$
(2) Assume, additionally, that $M = \sup_i \|\gamma_i\|_\infty < \infty$. Then for any $r \in (2, \infty)$ there is a constant $C_r = C(r, s, M) > 0$ such that, for any choice of (real or complex) scalars $a_1, \ldots, a_n$,
$$\Big\| \sum_{i=1}^n a_i \gamma_i \Big\|_{L_{\varphi_r}} \le C_r \big\|(a_i)_{i=1}^n\big\|_{\ell_{r',\infty}}, \qquad \tfrac{1}{r} + \tfrac{1}{r'} = 1.$$
Note again that by [29], in the case of Rademacher random variables $\varepsilon_i$, the best constant $C_2$ is precisely $\sqrt{8/3}$.

Proof
We only discuss the real case; the proof of the complex case is similar. (1) By (9) and (11), we deduce that for all $t > 0$
$$\mathbb{P}\Big(\Big|\sum_{i=1}^n a_i\gamma_i\Big| > t\Big) \le 2 \exp\Big(-\frac{t^2}{2 s^2 \sum_i |a_i|^2}\Big).$$
Then, for every $c > 0$, integrating the tail against $\varphi_2'$ we have
$$\mathbb{E}\, \varphi_2\bigg( \frac{|\sum_i a_i\gamma_i|}{c\,s\,(\sum_i |a_i|^2)^{1/2}} \bigg) \le \int_0^\infty 2\, e^{-c^2 t^2/2}\; 2t\, e^{t^2}\, dt.$$
Choosing $c = c(s)$ large enough gives the conclusion.
(2) Take $r \in (2, \infty)$, and let $\alpha_1, \ldots, \alpha_n \in \mathbb{R}$ be decreasing with $|\alpha_i| \le i^{-1/r'}$ for each $1 \le i \le n$ (without loss of generality). We prove that for some constant $C > 0$
$$\mathbb{P}\Big(\Big|\sum_{i=1}^n \alpha_i\gamma_i\Big| > t\Big) \le C \exp\big(-t^r/C\big), \qquad t > 0,$$
since then the conclusion follows as before. We distinguish two cases, $t < 2Mr$ and $t \ge 2Mr$. In the first case the bound is obvious for a suitably large $C$ (since the right-hand side is then bounded below). In the second case, we put $m(t) := \lfloor (t/(2Mr))^r \rfloor$ and split
$$\Big|\sum_{i=1}^n \alpha_i\gamma_i\Big| \le \sum_{i \le m(t)} |\alpha_i|\,|\gamma_i| + \Big|\sum_{i > m(t)} \alpha_i\gamma_i\Big|$$
(if $m(t) \ge n$, then the second sum is supposed to be $0$). Since $\sum_{i\le m(t)} i^{-1/r'} \le r\, m(t)^{1/r}$, the first summand is at most $M r\, m(t)^{1/r} \le t/2$. Then, for all $t \ge 2Mr$, we get that
$$\mathbb{P}\Big(\Big|\sum_{i=1}^n \alpha_i\gamma_i\Big| > t\Big) \le \mathbb{P}\Big(\Big|\sum_{i > m(t)} \alpha_i\gamma_i\Big| > t/2\Big) \le 2 \exp\Big(-\frac{t^2}{8 s^2 \sum_{i > m(t)} |\alpha_i|^2}\Big).$$
Finally, using the fact that $|\alpha_i|^2 \le i^{-2/r'}$ for all $i$, so that $\sum_{i > m(t)} |\alpha_i|^2 \prec m(t)^{(2-r)/r}$, we see that there is some $C_r' = C'(r, s, M) > 0$ such that
$$\mathbb{P}\Big(\Big|\sum_{i=1}^n \alpha_i\gamma_i\Big| > t\Big) \le 2 \exp\big(-t^r/C_r'\big)$$
for all $t \ge 2Mr$, and this completes the proof.
As promised above, we now extend the abstract KSZ-inequalities from Theorem 3.1. We prepare the proof with an elementary lemma, which is easily verified using standard calculus.

Lemma 3.5 For any c
We are ready to state the following theorem, which for $N = 1$ recovers Lemma 3.4 (the crucial tool for its proof).
(1) There is a constant $C_2 = C(s) > 0$ such that, for each $K, N \in \mathbb{N}$ and every choice of finitely many $a_1, \ldots, a_K \in \ell_\infty^N$,
$$\Big\|\sup_{1\le j\le N}\Big|\sum_{i=1}^K a_i(j)\,\gamma_i\Big|\Big\|_{L_{\varphi_2}} \le C_2 (1 + \log N)^{\frac12}\, \sup_{1\le j\le N}\Big(\sum_{i=1}^K |a_i(j)|^2\Big)^{\frac12}.$$
(2) Assume, additionally, that $M = \sup_i \|\gamma_i\|_\infty < \infty$. Then for any $r \in (2, \infty)$ there is a constant $C_r = C(r, s, M) > 0$ such that, for each $K, N \in \mathbb{N}$ and every choice of finitely many $a_1, \ldots, a_K \in \ell_\infty^N$,
$$\Big\|\sup_{1\le j\le N}\Big|\sum_{i=1}^K a_i(j)\,\gamma_i\Big|\Big\|_{L_{\varphi_r}} \le C_r (1 + \log N)^{\frac1r}\, \sup_{1\le j\le N}\big\|(a_i(j))_{i=1}^K\big\|_{\ell_{r',\infty}},$$
where $r' = r/(r-1)$. Moreover, for a fixed sequence $(\gamma_i)$ we denote the best constant in (1) (case $r = 2$) and (2) (case $r \in (2, \infty)$) by $C(N, r)$. Then for normal Gaussian, Rademacher or Steinhaus variables we have in (1) that $C(N, 2) \asymp (1 + \log N)^{\frac12}$, up to universal constants, whereas in (2) we have for Rademacher or Steinhaus random variables that $C(N, r) \asymp (1 + \log N)^{\frac1r}$, up to constants only depending on $r$.
Proof Fix $r \in [2, \infty)$. We claim that the following estimate holds: for all $f_1, \ldots, f_N \in L_{\varphi_r}$,
$$\Big\| \max_{1\le j\le N} |f_j| \Big\|_{L_{\varphi_r}} \prec (1 + \log N)^{\frac1r} \max_{1\le j\le N} \|f_j\|_{L_{\varphi_r}}.$$
Use of the exponential Khintchine inequality from Lemma 3.4 then finishes the proof of the theorem. In order to establish the claim, we may assume without loss of generality that $\max_{1\le j\le N} \|f_j\|_{L_{\varphi_r}} \le 1$. Note first that by Lemma 3.2 we have, pointwise on $\Omega$,
$$\max_{1\le j\le N} |f_j| \le \big\|(f_j)_{j=1}^N\big\|_{\ell_{r h_N}^N},$$
and so we need to estimate the $L_{\varphi_r}$-norm of the second term. We define $\alpha(N) := r h_N$ and handle two different cases separately, the first case $\alpha(N) < 1$ and the second $\alpha(N) \ge 1$. In the first case, since $\mathbb{P}$ is a probability measure and $\varphi_r(1) = e - 1$, the required bound follows. It remains to discuss the second case $\alpha(N) = r h_N \ge 1$, which is simpler, and this as above completes the proof of the claim.
Let us check the final statement of the theorem, and prove that $C(N, r) \succ (1 + \log N)^{\frac1r}$ whenever we consider Rademacher variables $\varepsilon_i$. Indeed, the estimate may be tested against $T_r(\ell_\infty^N)$, the Rademacher type $r$ constant of $\ell_\infty^N$ (which up to constants in $r$ equals the Gaussian as well as the Steinhaus type $r$ constant of $\ell_\infty^N$). But it is well known (see, e.g., [36, p. 16]) how this constant grows with $N$, and, up to universal constants, we have the conclusion. For all other cases the same proof works.

KSZ-inequalities by lattice M-constants
Here we suggest an entirely different approach to the abstract KSZ-inequalities from Theorem 3.6. In fact, the outcome of this theorem turns out to be an immediate consequence of Lemma 3.4 together with the forthcoming Lemma 4.1 and Corollary 4.3.
We prove that, given a Banach function lattice $X$, every sequence $(\gamma_i)_i$ of random variables in $X$ satisfies a KSZ-inequality of type $(X, (S_n), \psi)$ (as defined in (1)), whenever the semi-normed spaces $S_n = (\mathbb{K}^n, \|\cdot\|_n)$ are defined in a proper way. To show this, we use a characteristic from the theory of abstract Banach lattices called the M-constant (see [1,37]). If $X$ is a Banach lattice, then the constant $\mu_n(X)$ is given by
$$\mu_n(X) := \sup\Big\{ \Big\| \bigvee_{i=1}^n |x_i| \Big\|_X \,;\ x_1, \ldots, x_n \in B_X \Big\}.$$
Clearly, $(\mu_n(X))_n$ is a non-decreasing sequence and $\mu_n(X) \in [1, n]$ for each $n \in \mathbb{N}$. It was shown in [1, Theorem 1] that $\mu_n(X)/n$ decreases. Moreover, it is easy to see that $(\mu_n(X))_n$ is submultiplicative, that is, $\mu_{nm}(X) \le \mu_n(X)\,\mu_m(X)$ for all $n, m \in \mathbb{N}$, and hence the limit $\lim_{n\to\infty} \mu_n(X)/n$ exists (see in [1] the proof of Theorem 2). This implies $\mu_n(X) = n$ for each $n$ whenever $\lim_{n\to\infty} \mu_n(X)/n = 1$ (see [1, Lemma 3]). The following lemma is obvious, although crucial for our purposes.

Lemma 4.1 Let $X$ be a Banach lattice over $(\Omega, \mathcal{A}, \mathbb{P})$, and let $(\gamma_i)_i$ be a sequence of random variables in $X$. Then $(\gamma_i)_i$ satisfies the KSZ-inequality of type $(X, (S_n), (\mu_N(X))_N)$, where the semi-norms $\|\cdot\|_n$ (resp., norms $\|\cdot\|_n$, whenever the $\gamma_i$ are linearly independent) are given by
$$\|(t_i)_{i=1}^n\|_n := \Big\| \sum_{i=1}^n t_i \gamma_i \Big\|_X, \qquad (t_i)_{i=1}^n \in \mathbb{K}^n.$$

Proof It follows from the definition of $\mu_N(X)$ that for any $g_1, \ldots, g_N \in X$
$$\Big\| \max_{1\le j\le N} |g_j| \Big\|_X \le \mu_N(X) \max_{1\le j\le N} \|g_j\|_X.$$
This implies that for every choice of finitely many vectors $a_1, \ldots, a_K \in \ell_\infty^N$
$$\Big\| \sup_{1\le j\le N}\Big|\sum_{i=1}^K a_i(j)\,\gamma_i\Big| \Big\|_X \le \mu_N(X) \sup_{1\le j\le N} \Big\| \sum_{i=1}^K a_i(j)\,\gamma_i \Big\|_X = \mu_N(X) \sup_{1\le j\le N} \big\|(a_i(j))_{i=1}^K\big\|_K.$$
Clearly, $\|\cdot\|_n$ is a norm whenever $(\gamma_i)$ is a linearly independent system, and this completes the proof.
It may be expected that an abstract KSZ -inequality (like in (1)) for a specific sequence (γ i ) i of random variables from a concrete Banach function lattice X has better constants ψ(N ) than the M-constants μ N (X ) indicated by the preceding lemma.
To see that this is indeed true, observe first that for any $L_r$-space with $1 \le r < \infty$ over an atomless measure space $(\Omega, \mathcal{A}, \nu)$, one has $\mu_N(L_r) = N^{1/r}$. But then, secondly, we see by Theorem 3.6 that Rademacher variables in an $L_r$-space with $r \in [2, \infty)$ satisfy a KSZ-inequality with considerably better constants $\psi(N)$ than the one from Lemma 4.1 (namely, $C_r(1 + \log N)^{\frac1r}$ instead of $N^{1/r}$). We now find estimates of M-constants for some classes of Orlicz spaces over probability measure spaces.
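The identity $\mu_N(L_r) = N^{1/r}$ can be made concrete on a finite atomic measure space; the following sketch (not from the paper) realizes the extremal configuration with disjointly supported functions:

```python
# Extremal configuration for mu_n(L_r) >= n^{1/r} on a uniform n-atom
# probability space: f_i = n^{1/r} * 1_{atom i} lies in the unit ball of L_r,
# and the pointwise supremum of the f_i is the constant function n^{1/r}.
n, r = 16, 3.0
w = 1.0 / n  # mass of each atom

def lr_norm(g):
    return sum(w * abs(v) ** r for v in g) ** (1 / r)

f = [[n ** (1 / r) if i == j else 0.0 for j in range(n)] for i in range(n)]
assert all(abs(lr_norm(fi) - 1.0) < 1e-12 for fi in f)

g = [max(fi[j] for fi in f) for j in range(n)]  # pointwise sup of the f_i
print(lr_norm(g), n ** (1 / r))  # both equal n^{1/r}
```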
Combining this with our hypothesis on $\varphi$ yields the required pointwise bound for all $\omega \in \Omega$ and each $n \ge 1$. The above two inequalities in combination with the convexity of $\varphi$ give $\|g\|_{L_\varphi} \le C$. Since $f_1, \ldots, f_n$ in the unit ball of $L_\varphi$ were arbitrary, the required estimate follows.
The following corollary is an immediate consequence of Proposition 4.2.

Corollary 4.3
For $r \in [1, \infty)$ let $L_{\varphi_r}$ be an Orlicz space over a probability measure space $(\Omega, \mathcal{A}, \nu)$ with $\varphi_r(t) = e^{t^r} - 1$ for all $t \ge 0$. Then for each $n \in \mathbb{N}$ one has
$$\mu_n(L_{\varphi_r}) \asymp (1 + \log n)^{\frac1r}.$$
We conclude with two remarks. The first one shows that we reached the goal formulated at the beginning of this section.
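A closed-form version of the lower bound here (a sketch under the stated conventions, not code from the paper): on $n$ uniform atoms, disjoint indicators scaled to the unit ball of $L_{\varphi_r}$ have Luxemburg norm $1$, while their pointwise maximum, the constant $\log(1+n)^{1/r}$, has norm $(\log(1+n)/\log 2)^{1/r} \asymp (1 + \log n)^{1/r}$:

```python
import math

# f_i = c_n * 1_{atom i} with c_n = log(1+n)^{1/r}: the Luxemburg condition
# (1/n) * (exp((c_n/lam)^r) - 1) <= 1 gives norm lam = 1, while for the
# constant function c_n the condition exp((c_n/lam)^r) - 1 <= 1 gives
# norm c_n / log(2)^{1/r}.
r = 2.0
for n in (1, 2, 8, 64, 1024, 2 ** 20):
    c_n = math.log(1 + n) ** (1 / r)
    norm_f = c_n / math.log(1 + n) ** (1 / r)  # = 1
    norm_max = c_n / math.log(2) ** (1 / r)
    ratio = norm_max / (1 + math.log(n)) ** (1 / r)
    assert abs(norm_f - 1.0) < 1e-12
    assert 0.5 <= ratio <= 2.0  # growth of order (1 + log n)^{1/r}
```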

Approach by interpolation
In this section we use interpolation theory to prove more abstract KSZ-inequalities in the sense of (1), which in fact extend and strengthen the previous ones. As an additional benefit we find two more proofs of Theorem 3.6 in the case $r > 2$ (using the case $r = 2$ as a starting point). Whereas our proof from Sect. 3.1 is entirely elementary, the proof from Sect. 4 is based on Banach lattice theory. The new arguments used here heavily rely on the technically more involved machinery of interpolation in Banach spaces, in particular the $K$-method and the orbit method.

Exact interpolation functors
Let $\mathcal{F}$ be an exact interpolation functor. In what follows we use an inequality that is an obvious consequence of the definition of the fundamental function $\varphi_{\mathcal{F}}$ (given in the preliminaries): for any operator $T: \bar{X} \to \bar{Y}$ between Banach couples $\bar{X} = (X_0, X_1)$ and $\bar{Y} = (Y_0, Y_1)$,
$$\|T: \mathcal{F}(\bar{X}) \to \mathcal{F}(\bar{Y})\| \le \varphi_{\mathcal{F}}\big(\|T: X_0 \to Y_0\|,\ \|T: X_1 \to Y_1\|\big).$$
In this section we mainly consider a special class of exact interpolation functors $\mathcal{F}$. Clearly, by the interpolation property, it follows that for any Banach couple $\bar{X} = (X_0, X_1)$ and each $N \in \mathbb{N}$, we have
$$\max_{1\le j\le N} \|x_j\|_{\mathcal{F}(\bar{X})} \le \big\|(x_j)_{j=1}^N\big\|_{\mathcal{F}(\ell_\infty^N(X_0),\, \ell_\infty^N(X_1))}.$$
This motivates us to introduce the following definition: an interpolation functor $\mathcal{F}$ is said to have the $\infty$-property on $\bar{X}$ with constant $\delta > 0$ whenever, for each $N \in \mathbb{N}$,
$$\big\|(x_j)_{j=1}^N\big\|_{\mathcal{F}(\ell_\infty^N(X_0),\, \ell_\infty^N(X_1))} \le \delta \max_{1\le j\le N} \|x_j\|_{\mathcal{F}(\bar{X})},$$
and $\mathcal{F}$ has the uniform $\infty$-property with constant $\delta$ whenever it has the $\infty$-property on any Banach couple $\bar{X}$ with constant $\delta$. Moreover, we need the following useful interpolation formula from [9], which is a consequence of the Hahn–Banach–Kantorovich theorem.
Lemma 5.1 Let $E_0$ and $E_1$ be Banach function lattices on a measure space $(\Omega, \mathcal{A}, \mu)$ and let $X$ be a Banach space. Then, for any exact interpolation functor $\mathcal{F}$, we have
$$\mathcal{F}\big(E_0(X), E_1(X)\big) = \mathcal{F}(E_0, E_1)(X).$$
Now we are prepared to prove the following key interpolation theorem based on the case $r = 2$ from Theorem 3.6. The space of all scalar $N \times K$-matrices is denoted by $\mathcal{M}_{N,K}$.

Theorem 5.2 Let $(\gamma_i)_{i\in\mathbb{N}}$ be a subgaussian sequence of (real or complex) random variables with $s = \operatorname{sg}((\gamma_i))$ and $M = \sup_i \|\gamma_i\|_\infty < \infty$. Suppose that $\mathcal{F}$ is an exact interpolation functor with the $\infty$-property with constant $\delta$.
Then there exists a constant $C = C(s, M) > 0$ such that for every matrix $(a_{i,j}) \in \mathcal{M}_{N,K}$ the corresponding KSZ-estimate holds, where $\varphi_{\mathcal{F}}$ is the fundamental function of $\mathcal{F}$. In particular, $C = \sqrt{8/3}$ whenever $(\gamma_i) = (\varepsilon_i)$ is a sequence of independent Rademacher random variables.
Proof Define the linear mapping $T$ taking scalar coefficients to the corresponding random sums. We claim that $T$ has norm less than or equal to $C_2\sqrt{1 + \log N}$ between the second spaces of the couples involved. By the interpolation property, and our hypothesis that $\mathcal{F}$ has the $\infty$-property with constant $\delta$, the above interpolation estimate combined with Lemma 5.1 yields the required estimate. If $(\gamma_i) = (\varepsilon_i)_{i\in\mathbb{N}}$, then we have $\|T\| = 1$ and $C_2 = \sqrt{8/3}$.
As an application of Theorem 5.2, we get the interpolation variant of Theorem 3.6.

Remark 5.3
Let $(\gamma_i)_{i\in\mathbb{N}}$ be a subgaussian sequence of (real or complex) random variables with $s = \operatorname{sg}((\gamma_i))$ and $M = \sup_i \|\gamma_i\|_\infty < \infty$. Let $\mathcal{F}$ be an exact interpolation functor with the $\infty$-property with constant $\delta$. Then there exists a constant $C = C(s, M) > 0$ such that, for every Banach space $E$, every $\lambda$-embedding $I: E \to \ell_\infty^N$, and every choice of $x_1, \ldots, x_K \in E$, the corresponding KSZ-estimate holds, where $\varphi_{\mathcal{F}}$ is the fundamental function of $\mathcal{F}$.
In order to apply all this within the setting of Orlicz spaces, the following lemma from [22, Proof of Lemma 4] is going to be crucial.

The K-method
We now restrict Theorem 5.2 to two interpolation methods which play a fundamental role in interpolation theory in Banach spaces, the $K$-method and the orbit method. If Theorem 5.2 is to be applied in a specific situation, then a naturally given exact interpolation functor $\mathcal{F}$ is usually the best choice. Then, in order to prove abstract KSZ-inequalities (like, e.g., Theorem 3.6 for $r > 2$), the main difficulty lies (A) in proving that $\mathcal{F}$ has the $\infty$-property, and (B) in providing a best possible estimate of the fundamental function $\varphi_{\mathcal{F}}$.
It is worth pointing out that for the key Theorem 5.2 we only need to know that $\mathcal{F}$ has the $\infty$-property on the special Banach couple $(L_\infty, L_{\varphi_2})$.
We start with the $K$-method of interpolation. Let $F$ be a Banach sequence lattice of (two-sided) sequences such that $(\min\{1, 2^k\})_{k\in\mathbb{Z}} \in F$. If $(X_0, X_1)$ is a Banach couple, then the $K$-method of interpolation produces $(X_0, X_1)_F$, the Banach space of all $x \in X_0 + X_1$ equipped with the norm
$$\|x\|_{(X_0, X_1)_F} := \big\| \big(K(1, 2^n, x; \bar{X})\big)_{n\in\mathbb{Z}} \big\|_F,$$
where $K$ is the Peetre functional given for all $x \in X_0 + X_1$ and all $s, t > 0$ by
$$K(s, t, x; \bar{X}) := \inf\big\{ s\|x_0\|_{X_0} + t\|x_1\|_{X_1}\,;\ x = x_0 + x_1,\ x_j \in X_j \big\}.$$
If $\psi \in \mathcal{Q}$ (see the preliminaries) and $F := \ell_\infty\big(1/\psi(1, 2^n)\big)$, then the space $(X_0, X_1)_F$ is denoted by $(X_0, X_1)_{\psi,\infty}$. In the particular case that $\theta \in (0, 1)$ and $\psi(s, t) = s^{1-\theta}t^\theta$ for all $s, t > 0$, we recover the classical Lions–Peetre space $(X_0, X_1)_{\theta,\infty}$.
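For the concrete couple $(\ell_1, \ell_\infty)$ the Peetre functional has the classical closed form $K(t, x; \ell_1, \ell_\infty) = \sum_{i=1}^{\lfloor t\rfloor} x_i^* + (t - \lfloor t\rfloor)\,x^*_{\lfloor t\rfloor + 1}$, which a brute-force search over truncation levels confirms (an illustrative sketch, not from the paper):

```python
def k_formula(t, xs):
    """Peetre's closed form for K(t, x; l_1, l_inf) via the decreasing
    rearrangement of |x|."""
    s = sorted((abs(v) for v in xs), reverse=True)
    k = min(int(t), len(s))
    return sum(s[:k]) + ((t - k) * s[k] if k < len(s) else 0.0)

def k_search(t, xs):
    """inf over decompositions x = x0 + x1 of ||x0||_1 + t * ||x1||_inf;
    the optimum truncates x at some level lam, and since the cost is convex
    and piecewise linear in lam, searching the breakpoints {0} U {|x_i|}
    suffices."""
    return min(
        sum(max(abs(v) - lam, 0.0) for v in xs) + t * lam
        for lam in [0.0] + [abs(v) for v in xs]
    )

xs = [3.0, -1.5, 0.5, 2.5, -0.25, 1.0]
for t in (0.5, 1.0, 1.7, 3.0, 4.2, 10.0):
    assert abs(k_formula(t, xs) - k_search(t, xs)) < 1e-12
print("closed form and brute force agree")
```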
In what follows, for any $\psi \in \mathcal{Q}$, we define the function $\widehat{\psi} \in \mathcal{Q}$ by the formula (14). Moreover, we need another lemma.
This immediately implies that $F$ has the $\infty$-property with constant 2, since the corresponding estimate holds for any operator $T: \bar{X} \to \bar{Y}$, $x \in X_0 + X_1$, and $n \in \mathbb{Z}$; moreover, the estimate $\varphi_F(s, t) \le 2\psi(s, t)$ for all $s, t > 0$ is obvious.
Then for the special case of Lions-Peetre interpolation the following consequence is immediate from Theorem 5.2 (in the form given in Remark 5.3).

Theorem 5.6
Let $\psi \in \mathcal{Q}$, and let $(\gamma_i)_{i\in\mathbb{N}}$ be a subgaussian sequence of (real or complex) random variables with $s = \operatorname{sg}((\gamma_i))$ and $M = \sup_i \|\gamma_i\|_\infty < \infty$.
Then there exists a constant $C = C(s, M) > 0$ such that, for every Banach space $E$, every $\lambda$-embedding $I: E \to \ell_\infty^N$, and every choice of $x_1, \ldots, x_K \in E$, the corresponding KSZ-estimate holds. As announced, we now show that Theorem 5.6 combined with Lemma 5.4 in the case $2 < r < \infty$ recovers Theorem 3.6.
We refer to [21], where it is shown that for some class of functions $\psi$ the interpolation spaces $(\ell_1, \ell_2)_{\psi,\infty}$ equal, up to equivalence of norms, the symmetric Marcinkiewicz sequence spaces $m_w$, where the weight $w = (w_n)$ only depends on $\psi$. Moreover, these results show that in the scalar case the estimate in Theorem 5.6 is best possible in general, that is, the two sides of the inequality appearing there are equivalent.

The orbit method
Now we consider the method of orbits (see [8,27]). Given a Banach couple A = (A_0, A_1), we fix an arbitrary element a ≠ 0 in A_0 + A_1. The orbit of the element a in a Banach couple X is the Banach space Orb_A(a, X) := {T a : T : A → X} equipped with the norm ‖x‖ := inf{‖T : A → X‖ : T a = x}. It is easy to see that F := Orb_A(a, ·) is an exact interpolation functor, and its fundamental function φ_F is given by the formula φ_F(s, t) = K(s, t, a; A) for all s, t > 0 (see [27, pp. 389–390]).

As we did for the K-method in the previous section, we now intend to analyze under which additional assumptions Theorem 5.2 and Remark 5.3 apply to the orbit method. This in particular means to understand when the orbit method leads to functors satisfying the ∞-property, and when the generated fundamental functions can be estimated in a proper way. We start with the following lemma on the ∞-property.

Lemma 5.7 Let A = (A_0, A_1) be a Banach couple and a ≠ 0 in A_0 + A_1. Then, for any Banach couple X = (X_0, X_1) and each positive integer N, we have, with F := Orb_A(a, ·),

that is, the functor Orb_A(a, ·) has the ∞-property with constant 1.

Proof It is enough to show that, for each N ∈ N, the corresponding unit ball inclusion holds. Take x = (x_j) in the unit ball; this implies that, for each 1 ≤ j ≤ N, there exists T_j : A → X such that x_j = T_j(a) and ‖T_j : A → X‖ ≤ 1. Define an operator ⊕T_j : A → ℓ_∞^N(X) by applying all T_j simultaneously. Observe that ‖⊕T_j : A → ℓ_∞^N(X)‖ ≤ 1 with (⊕T_j)(a) = x, so ‖x‖ ≤ 1. This completes the proof.
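For the reader's convenience, the diagonal operator used in this proof can be written out explicitly; the following is our sketch, assuming that ℓ_∞^N(X̄) denotes the couple (ℓ_∞^N(X_0), ℓ_∞^N(X_1)) equipped with the sup norms:

```latex
% Diagonal operator in the proof of Lemma 5.7 (our rendering).
\[
  \oplus T_j \colon \vec{A} \to \ell_\infty^N(\vec{X}), \qquad
  (\oplus T_j)(b) := (T_1 b, \dots, T_N b), \quad b \in A_0 + A_1 .
\]
% Since each T_j has norm at most 1, so does the diagonal operator:
\[
  \big\| \oplus T_j \colon \vec{A} \to \ell_\infty^N(\vec{X}) \big\|
  = \max_{1 \le j \le N} \| T_j \colon \vec{A} \to \vec{X} \| \le 1,
  \qquad (\oplus T_j)(a) = (x_1, \dots, x_N) = x,
\]
% which shows that x lies in the unit ball of Orb_A(a, l_inf^N(X)).
```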
For a given ϕ ∈ Q, we let a_ϕ := (ϕ(1, 2^n))_{n∈Z} and ℓ̄_∞ := (ℓ_∞, ℓ_∞(2^{−n})). We consider the orbit functor Orb_{ℓ̄_∞}(a_ϕ, ·), and remark that it appeared in [26] in a slightly different form. As explained above, we need to estimate the fundamental function of this functor. To do this, we provide a close to optimal estimate, which seems of independent interest for various other types of interpolation problems (for ϕ ∈ Q see again (14) for the definition of ϕ̃).

Lemma 5.8
If ϕ ∈ Q, then for every operator T : X → Y the norm estimate below holds; that is, the fundamental function of Orb_{ℓ̄_∞}(a_ϕ, ·) satisfies φ ≤ 4ϕ̃.

Proof For the proof we need an isometric description of Orb_A(a, ℓ̄_∞). In what follows we write ψ(s, t) := K(s, t, a; A) for all s, t > 0, and we abbreviate K(1, s, a; A) by K(s, a; A). We first prove a major step: if ξ = (ξ_n) satisfies |ξ_n| ≤ K(2^n, a; A) for all n ∈ Z, then ξ ∈ Orb_A(a, ℓ̄_∞) with ‖ξ‖_{Orb} ≤ 1.
By the Hahn–Banach theorem, for each n ∈ Z we can find a functional f_n ∈ (A_0 + A_1)* such that f_n(a) = K(2^n, a; A) and |f_n(x)| ≤ K(2^n, x; A) for all x ∈ A_0 + A_1. This inequality implies that sup_{n∈Z} ‖f_n‖_{A_0*} ≤ 1 and sup_{n∈Z} 2^{−n}‖f_n‖_{A_1*} ≤ 1. It is easy to see that |ξ_n| ≤ K(2^n, ξ; ℓ̄_∞) for all n ∈ Z.
From the above relations, we conclude that the mapping S given on A_0 + A_1 by Sx := (ξ_n f_n(x)/K(2^n, a; A))_{n∈Z} defines a bounded operator from A into ℓ̄_∞ with ‖S : A → ℓ̄_∞‖ ≤ 1 and Sa = ξ. In consequence ξ ∈ Orb_A(a, ℓ̄_∞) with ‖ξ‖_{Orb} ≤ 1. This proves the major step.
Since, for any operator T : A → ℓ̄_∞, the reverse continuous inclusion follows, we obtain the claimed isometric description. We now use this description with A := ℓ̄_∞. To this end we recall the following easily verified formula: for all ξ = (ξ_k) ∈ ℓ_∞ + ℓ_∞(2^{−k}) we have K(2^n, ξ; ℓ̄_∞) = sup_{k∈Z} min{1, 2^{n−k}} |ξ_k|. In particular, for ψ(1, 2^n) := K(2^n, a_ϕ; ℓ̄_∞) with a_ϕ = (ϕ(1, 2^k))_{k∈Z} the corresponding estimates hold.

Now we are ready to prove the required statement. Let T : X → Y be a nontrivial operator. Fix x ∈ Orb_{ℓ̄_∞}(a_ϕ, X), and take any S : ℓ̄_∞ → X such that x = S a_ϕ. For each ν ∈ Z, we consider the shift operator τ_ν defined by τ_ν(ξ_n) := (ξ_{n+ν}). Clearly, τ_ν : ℓ̄_∞ → ℓ̄_∞ with norm max{1, 2^ν}. By the interpolation property, T x ∈ Orb_{ℓ̄_∞}(a_ϕ, Y), and for each k ∈ Z we get the corresponding estimate. Choosing k such that 2^k is comparable to the ratio of the norms of T, and applying the estimate proved above, we obtain the desired bound. Since S : ℓ̄_∞ → X with x = S a_ϕ was arbitrary, the above estimates yield the claim. This completes the proof.
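The "easily verified formula" invoked in this proof is the standard coordinatewise computation of the K-functional for couples of weighted sup-norm spaces; in our normalization it reads:

```latex
% K-functional of the weighted couple (l_inf, l_inf(2^{-k})),
% computed coordinatewise (standard, and exact for couples of
% weighted sup-norm sequence spaces):
\[
  K\big(1, 2^{n}, \xi ;\, \ell_\infty, \ell_\infty(2^{-k})\big)
  = \sup_{k \in \mathbb{Z}} \min\{1, 2^{\,n-k}\}\, |\xi_k| ,
  \qquad \xi = (\xi_k) \in \ell_\infty + \ell_\infty(2^{-k}) .
\]
% For each coordinate the optimal splitting puts xi_k into l_inf
% when k <= n (cost |xi_k|), and into l_inf(2^{-k}) otherwise
% (cost 2^{n-k} |xi_k|).
```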

The Calderón-Lozanovskii method
In order to illustrate that Lemma 5.7 and Lemma 5.8, combined with Theorem 5.2 and Remark 5.3, indeed lead to concrete abstract KSZ-inequalities, we finally turn to the so-called Calderón–Lozanovskii method. This interpolation method for Banach function spaces, based on the orbit method, is particularly adapted to the setting of Orlicz spaces.
It is well known that if ϕ : R_+ × R_+ → R_+ is a non-vanishing, concave function which is continuous in each variable and positively homogeneous of degree one, then for any couple (X_0, X_1) of Banach function lattices on a measure space (Ω, A, μ) with the Fatou property the formula Orb_{ℓ̄_∞}(a_ϕ, (X_0, X_1)) = ϕ(X_0, X_1) holds, up to equivalence of norms with universal constants (see [26]). Here ϕ(X_0, X_1) denotes the Calderón–Lozanovskii space, which consists of all f ∈ L_0(μ) such that |f| ≤ λ ϕ(|f_0|, |f_1|) μ-a.e. for some λ > 0 and f_j ∈ B_{X_j}, j ∈ {0, 1}. It is a Banach lattice endowed with the norm ‖f‖ := inf{λ > 0 : |f| ≤ λ ϕ(|f_0|, |f_1|) μ-a.e. for some f_j ∈ B_{X_j}}. Combining Lemmas 5.7 and 5.8 with Remark 5.3 yields the following result.

Theorem 5.9 Let ϕ ∈ Q be a concave function, and (γ_i)_{i∈N} a subgaussian sequence of (real or complex) random variables with s = sg((γ_i)) and M = sup_i ‖γ_i‖_∞ < ∞.
Then there is a universal constant c > 0 and a constant C = C(s, M) > 0 such that, for every Banach space E, every λ-embedding I : E → ℓ_∞^N, and every choice of x_1, . . . , x_K ∈ E, we have

We note that if the Orlicz function Φ is defined through ϕ(1, ·), then with universal constants for the equivalence of the norms. Moreover, by a well-known result from [26] we have, where the Orlicz function φ is given by φ^{−1}(t) = ϕ(t, √t) for all t ≥ 0.

Applications
In this section we apply the abstract KSZ-inequality from Theorem 3.6 (see also again (1)) to trigonometric polynomials, as well as to polynomials and multilinear forms on Banach spaces. The formulations, which distinguish the apparently two different cases in this theorem, are somewhat cumbersome. For simplicity of notation and presentation, we therefore make in the following remark several agreements which are used throughout the rest of the paper.

Remark 6.1 All subgaussian sequences (γ_i)_{i∈J} of random variables, with a given countable set J of indices, are defined over a common probability measure space (Ω, A, P). In each of our applications the varying index set J will be clear from the context.
The symbol S_r denotes the Hilbert space ℓ_2 whenever r = 2, and the Marcinkiewicz space ℓ_{r,∞} whenever r ∈ (2, ∞). The space S_r is here understood as a Banach sequence space on a corresponding countable set I of indices.
If the subgaussian sequence (γ_i)_{i∈J} comes along with the Banach sequence space S_r, r ∈ [2, ∞), then we write s = sg((γ_i)) for short, and assume additionally that M = sup_i ‖γ_i‖_∞ < ∞. Moreover, if in this case the constant C_r, r ∈ [2, ∞), appears, then C_2 = C(s) will only depend on s, and C_r = C(r, s, M) only on r, s, M. For appropriate samples of all that we once again refer to Lemma 3.4 and Theorem 3.6.
The following remark is going to help to apply Theorem 3.6 in concrete cases.

Remark 6.2 Adopting the notation used in Remark 6.1, for every r ∈ [2, ∞) there is a constant C_r > 0 such that, for every Banach space E, for every λ-embedding I : E → ℓ_∞^N, and for every choice of x_1, . . . , x_K ∈ E, we have the corresponding estimate. Indeed, this follows directly from Theorem 3.6.

In view of this result, for a given finite dimensional Banach space E, we are interested in finding λ-embeddings of E into ℓ_∞^N with the best possible dimension N = N(dim E, λ). In this section we mainly concentrate on the Banach spaces E = T_m(T^n), E = P_m(X), and E = L_m(X_1, . . . , X_m) (see again the preliminaries for the definitions). All coming estimates are based on the following well-known result, which is a consequence of a volume argument (see [41, Proposition 10, p. 74] for details).

Proposition 6.3 Let E be an n-dimensional Banach space and ε ∈ (0, 1). Then there exists an ε-net {x_j}_{j=1}^N in B_E with N ≤ (1 + 1/ε)^n for real E, and N ≤ (1 + 1/ε)^{2n} for complex E.
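For illustration only: in the concrete space E = ℓ_∞^n over R, a grid of midpoints realizes an ε-net whose cardinality stays below the bound of Proposition 6.3 (the construction and names below are ours, not the volume argument of [41]):

```python
import itertools
import math
import random

def linf_net(n, eps):
    """An eps-net of the unit ball of l_inf^n over R.

    Per coordinate we take ceil(1/eps) grid midpoints of spacing 2*eps
    (clamped into [-1, 1]), so the net has ceil(1/eps)**n points, which
    stays below the bound (1 + 1/eps)**n of Proposition 6.3.
    """
    m = math.ceil(1 / eps)  # number of centers per coordinate
    centers = [min(-1 + eps + 2 * eps * i, 1 - eps) for i in range(m)]
    return list(itertools.product(centers, repeat=n))

n, eps = 3, 0.3
net = linf_net(n, eps)
assert len(net) <= (1 + 1 / eps) ** n  # net-size bound of Proposition 6.3

# every point of the ball is eps-close (sup norm) to some net point
random.seed(0)
for _ in range(1000):
    x = [random.uniform(-1, 1) for _ in range(n)]
    dist = min(max(abs(xi - ci) for xi, ci in zip(x, c)) for c in net)
    assert dist <= eps + 1e-9
```

The grid is of course far from optimal for general norms; the point of the volume argument is that the same exponential bound holds for every n-dimensional Banach space.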
The following corollary (see [41, Proposition 13, p. 76]) is an immediate consequence.

Corollary 6.4 For every n-dimensional Banach space E and for every ε ∈ (0, 1) there exists an isomorphic embedding I : E → ℓ_∞^N.

For later use we collect another immediate consequence of Proposition 6.3.

Corollary 6.5 Let E be an n-dimensional Banach space.

To see a first example of what we aim for, we mention the following abstract KSZ-inequality for n-dimensional Banach spaces E, which is now an immediate consequence of Theorem 3.6 (in the form of Remark 6.2) and Corollary 6.4.

Theorem 6.6 Adopting the notation used in Remark 6.1, for every r ∈ [2, ∞) there is a constant C_r > 0 such that, for every n-dimensional Banach space E and for every choice of x_1, . . . , x_K ∈ E, we have

We point out that it is easy to show that the exponent in the term n^{1/r}, 2 ≤ r < ∞, cannot be improved.

Trigonometric polynomials
Originally, one of the initial motivations of this paper was to prove new general variants of Kahane–Salem–Zygmund random polynomial inequalities which recover the classically known results. We point out that in their seminal Acta paper, Salem and Zygmund (see [35]; [19, p. 69]) proved a theorem for one-variable random trigonometric polynomials which states: Assume that P_1, . . . , P_K are trigonometric polynomials on T of degree at most m, and γ_1, . . . , γ_K are independent subgaussian random variables. Then there exists a universal constant C > 0 such that

There is a large number of remarkable applications of this result, and in order to illustrate this, we comment on two of them. The first one, due to Odlyzko [25], is related to a minimization problem in which the infimum is taken over all choices of b_k ∈ N_0 with Σ_{k=1}^∞ b_k = n.
The Salem–Zygmund result was used in [25] to prove that, given any trigonometric cosine polynomial P(θ) = b_0 + Σ_{k=1}^N b_k cos kθ, θ ∈ [0, 2π], it is possible to change its coefficients slightly so as to make them integers without affecting the values of the polynomial too severely. More precisely, let R(ω, θ) be the random cosine polynomial given by ξ_k = 0 for each 1 ≤ k ≤ N whenever b_k is an integer, and otherwise by randomly rounding b_k to one of its two nearest integers. Then the Salem–Zygmund inequality yields a uniform bound on R(ω, θ), whereas the polynomial P(θ) + R(ω, θ) always has integer coefficients (except perhaps the constant coefficient).
Applying this random modification to the classical Fejér kernel, Odlyzko proved an estimate which leads, in particular, to improved upper bounds for a problem of Erdős and Szekeres [16] asking for the largest possible value of certain polynomials on the unit circle T with α ∈ N_0^n. The second application we wish to mention here is related to the Hardy–Littlewood majorant problem for trigonometric polynomials in L_p(T) with 2 < p ∉ 2N. In their remarkable paper [23], Mockenhaupt and Schlag proved a version of the Salem–Zygmund inequality for asymmetric i.i.d. Bernoulli variables, and used it (in combination with Bourgain's results from [7] on Λ(p)-Sidon sets) to show, for each N ∈ N and 0 < ρ < 1, the existence of random sets A ⊂ {1, . . . , N} of size N^ρ that satisfy, for all α > 0, the majorant inequality for sums Σ_{n∈A} a_n z^n with |a_n| ≤ 1, with large probability.
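The random rounding behind Odlyzko's modification can be sketched in a few lines (a hypothetical toy version, with helper names of our choosing: each non-integer coefficient is rounded up with probability equal to its fractional part, so each perturbation ξ_k has mean zero and |ξ_k| < 1):

```python
import math
import random

def round_randomly(coeffs, rng):
    """Round each coefficient to an integer so that E[rounded] = original."""
    rounded = []
    for b in coeffs:
        frac = b - math.floor(b)
        up = rng.random() < frac  # round up with probability frac
        rounded.append(math.floor(b) + (1 if up else 0))
    return rounded

rng = random.Random(1)
b = [0.0, 2.4, -1.3, 3.0, 0.7]
c = round_randomly(b, rng)
assert all(isinstance(v, int) for v in c)            # integer coefficients
assert all(abs(v - bk) < 1 for v, bk in zip(c, b))   # perturbations |xi_k| < 1
assert c[3] == 3                                     # integers stay unchanged
```

The Salem–Zygmund inequality then controls the sup norm of the resulting random cosine polynomial built from the perturbations ξ_k = c_k − b_k.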
Let us come back to multidimensional Salem–Zygmund inequalities, first studied by Kahane (recall that we write KSZ-inequalities for short). These inequalities have numerous applications in many areas of modern analysis, as shown, e.g., in [19], and also [13,33]. Various variants were proved over recent years, and what may be the most important one gives an upper bound for the expectation of the norm of random trigonometric polynomials. As already indicated in the introduction, the following result is an extension of the KSZ-inequality for random trigonometric Rademacher polynomials of degree at most m (see again (6) and (7)).

Theorem 6.7 Adopting the notation used in Remark 6.1, for every r ∈ [2, ∞) there is a constant C_r > 0 such that, for any choice of trigonometric polynomials P_1, . . . , P_K ∈ T_m(T^n), we have

Proof This follows from the embedding in (8) and Theorem 3.6 (via a similar argument as in Remark 6.2).
The following corollary for subgaussian random polynomials is then obvious.

Corollary 6.8 Adopting the notation used in Remark 6.1, for every r ∈ [2, ∞) there is a constant C_r > 0 such that, for every random trigonometric polynomial Σ_{|α|≤m} ε_α c_α z^α ∈ T_m(T^n), we have

Polynomials in Banach spaces
In recent years many different types of extensions of the KSZ-inequality (7) were obtained, where the supremum is taken over various Reinhardt domains R ⊂ C^n (e.g., the unit ball B_{ℓ_p^n} of the Banach space ℓ_p^n, 1 ≤ p < ∞, instead of the n-dimensional torus T^n). Extending results from [6,11,12], Bayart in [4] estimates the expectation of the norm of an m-homogeneous random Rademacher polynomial P(ω, z) = Σ_{|α|=m} ε_α(ω) c_α z^α on an arbitrary n-dimensional complex Banach space X_n = (C^n, ‖·‖). It is shown that, given r ∈ [2, ∞), a corresponding estimate holds, where C_r > 0 is a constant only depending on r.
To prove results of this type, Bayart uses two different methods. The first method is based on Khintchine-type inequalities for Rademacher processes, and the second relies on controlling increments of a Rademacher process in an Orlicz space, and in this case an entropy argument is used.
We mention that Bayart applied his results in the study of multidimensional Bohr radii, as well as of unconditionality in Banach spaces of homogeneous polynomials; all this is also collected in the recent monograph [13]. Finally, we recall that [14,22] contain several extensions of Bayart's results, two articles depending heavily on abstract interpolation theory.
In the following theorem, based on the abstract KSZ-inequality from Theorem 3.6, we extend several of these results, in particular those obtained by Bayart's first method.

Theorem 6.9 Adopting the notation used in Remark 6.1, for every r ∈ [2, ∞) there is a constant C_r > 0 such that, for every m ∈ N_0, n ∈ N, every complex n-dimensional Banach space X, and every choice of polynomials P_1, . . . , P_K ∈ P_m(X), we have

We start the proof with another definition. Given a real or complex Banach space X and a compact set K ⊂ B_X, we say that K satisfies a Markov–Fréchet inequality whenever there is an exponent ν ≥ 0 and a constant M > 0 such that, for every P ∈ P(X), we have sup_{z∈K} ‖∇P(z)‖_{X*} ≤ M (deg P)^ν sup_{z∈K} |P(z)|, where ∇P(z) ∈ X* denotes the Fréchet derivative of P at z ∈ K. If this inequality only holds for a subclass P of P(X), then we say that K satisfies a Markov–Fréchet inequality for P with exponent ν and constant M.

Lemma 6.10 Let X be an n-dimensional Banach space (real or complex), and K ⊂ B_X a convex and compact set which satisfies a Markov–Fréchet inequality with exponent ν and constant M. Then for each m ∈ N there exists a subset F ⊂ K such that, for every P ∈ P_m(X), we have

Proof We assume that X is complex, and take P ∈ P_m(X) (the real case follows the same way). Then for z_1, z_2 ∈ K we obtain, using the fact that K is convex and satisfies a Markov–Fréchet inequality, a Lipschitz estimate for P. Applying Corollary 6.5 with ε := 1/(2Mm^ν), we conclude that there is a finite set F ⊂ K with card F ≤ (1 + 2Mm^ν)^{2n} such that K ⊂ ∪_{u∈F} B_X(u, ε),
and the proof is complete.
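Schematically, the proof of Lemma 6.10 rests on the following chain of estimates (our rendering, assuming the Markov–Fréchet inequality in the form sup_{z∈K} ‖∇P(z)‖_{X*} ≤ M m^ν sup_{z∈K} |P(z)|):

```latex
% Lipschitz estimate via the mean value theorem on the convex set K:
\[
  |P(z_1) - P(z_2)|
  \;\le\; \sup_{z \in K} \|\nabla P(z)\|_{X^*} \, \|z_1 - z_2\|
  \;\le\; M m^{\nu} \sup_{z \in K} |P(z)| \, \|z_1 - z_2\| .
\]
% With eps := 1/(2 M m^nu) and an eps-net F of K as in Corollary 6.5:
\[
  \sup_{z \in K} |P(z)|
  \;\le\; \max_{u \in F} |P(u)| + M m^{\nu} \varepsilon \sup_{z \in K} |P(z)|
  \;=\; \max_{u \in F} |P(u)| + \tfrac12 \sup_{z \in K} |P(z)| ,
\]
% hence sup_K |P| <= 2 max_F |P|.
```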
When does the unit ball B X of a complex Banach space X itself satisfy a Markov-Fréchet inequality? For later use, we collect a few results in this direction, and start with the following result due to Harris [18,Corollary 3]. For our purposes it will be enough to know that this result holds with exponent ν = 2, and for this weaker fact we include a self-contained proof.
Proof of Lemma 6.11 with exponent ν = 2 Take P ∈ P(X) with m = deg P, and consider its Taylor expansion P = Σ_{k=0}^m P_k with P_k ∈ P_k(X) (see, e.g., [13, 15.4]). For each 1 ≤ k ≤ m denote by P̌_k the unique symmetric k-linear form on X associated to P_k. Then by polarization (see, e.g., [13, (15.18)]), for each 2 ≤ k ≤ m and for all z, h ∈ B_X, we have a bound which, combined with the Cauchy inequality (see, e.g., [13, Proposition 15.33]), yields the required estimate.
Finally, we are ready to give the

Proof of Theorem 6.9 Consider the 2-embedding of the space E = P_m(X) into ℓ_∞^N provided by Lemma 6.10. Then Theorem 6.9 is an immediate consequence of Theorem 3.6 (in the form of Remark 6.2), observing that every z ∈ B_X defines a norm one functional x* ∈ E* by x*(P) = P(z).

Lemma 6.11 is a result on complex Banach spaces X. For real X the proof of Lemma 6.11 does not work, since then no Cauchy inequality with constant 1 is available, that is, the projection which assigns to each polynomial its k-th Taylor polynomial is not contractive on P_m(X).
However, applying the idea of the preceding proof to homogeneous polynomials only, and using the 'real polarization estimate' of Harris from [18, Corollary 7], we get, in the homogeneous case, the following real variant of Lemma 6.11.

Lemma 6.12 Let X be a real Banach space. Then B_X satisfies a Markov–Fréchet inequality for all homogeneous polynomials with constant M = √e and exponent ν = 1/2.

Remark 6.13 Lemma 6.12 combined with Lemma 6.10 shows that Theorem 6.9 holds for real Banach spaces X and a real sequence (γ_i) of subgaussian random variables if we replace the space P_m(X) by its subspace of all m-homogeneous polynomials.
Next we present another result which may be of interest. To state it, we recall that for every convex and compact set C ⊂ R^n the minimal width of C is defined via w(u), the width of C in the direction of the normal vector u ∈ R^n, ‖u‖_2 = 1.

Theorem 6.14 Adopting the notation used in Remark 6.1, together with the additional assumption that all subgaussians γ_i are real, for every r ∈ [2, ∞) there is a constant C_r > 0 such that, for every convex and compact subset C of R^n with non-empty interior, and every choice of polynomials P_1, . . . , P_K of degree ≤ m on R^n, we have

Proof This result is a consequence of Theorem 3.6 in combination with Lemma 6.10, since a remarkable result due to Wilhelmsen [40, Theorem 3.1] states that a convex and compact subset C of the real Hilbert space ℓ_2^n with non-empty interior satisfies a Markov–Fréchet inequality with constant M = 4/w(C) and exponent ν = 2.
Fixing a basis in X, that is, looking at X = (C^n, ‖·‖), we finally (as in Corollary 6.8) list another two corollaries of Theorem 6.9 for subgaussian random polynomials.

Corollary 6.15 Adopting the notation used in Remark 6.1, for every r ∈ [2, ∞) there is a constant C_r > 0 such that, for every Banach space X = (C^n, ‖·‖) and every random polynomial Σ_{|α|≤m} γ_α c_α z^α ∈ P_m(X), we have

We remark that for X_n = ℓ_p^n with 1 ≤ p ≤ ∞ the relevant constants are known.

Corollary 6.16 Adopting the notation used in Remark 6.1, for every r ∈ [2, ∞) there is a constant C_r > 0 such that, for every Banach space X = (C^n, ‖·‖) and every random polynomial Σ_{|α|≤m} γ_α c_α z^α ∈ P_m(X), we have

Proof In view of Corollary 6.15, all we have to show is an estimate for Σ_{|α|≤m} |c_α z^α|^r with 2 ≤ r < ∞ and z ∈ C^n. To understand this we need a bit more notation; following [4] or [13], for positive integers m and n, we use the quantities defined there. To see the optimality in the case of Rademacher random variables ε_α, note first the pointwise lower bound valid for every ω, where χ(m, ℓ_p^n) stands for the unconditional basis constant of the basis sequence formed by all monomials z^α (for α ∈ N_0^n with |α| = m) in the Banach space of m-homogeneous polynomials on ℓ_p^n. But the asymptotic order of this constant is well known, up to constants depending only on m and p (see, e.g., [13, Corollary 19.8]). Taking norms in L_{ϕ_{r(p)}} leads to the desired lower bound for Rademacher random variables. For Steinhaus and Gaussian variables, note that in these cases the L_{ϕ_{r(p)}}-averages dominate the corresponding Rademacher averages.

Multilinear forms in Banach spaces
Here we apply our techniques to spaces of multilinear forms on finite dimensional Banach spaces; our main contribution is as follows.
Theorem 6.18 Adopting the notation used in Remark 6.1, for every r ∈ [2, ∞) there is a constant C_r > 0 such that, for every choice of finite dimensional (real or complex) Banach spaces X_j with dim X_j = n_j, 1 ≤ j ≤ m, and m-linear mappings L_1, . . . , L_K ∈ L_m(X_1, . . . , X_m), we have

Our strategy for the proof is exactly as before: we start with a multilinear analog of Lemma 6.10. The proof is again based on Corollary 6.5; hence, if all Banach spaces X_i are real, we may replace the exponents 2n_j by n_j.
Proof For L ∈ L_m(X_1, . . . , X_m) and z_j, v_j ∈ B_{X_j}, 1 ≤ j ≤ m, we have a telescoping estimate, and hence a Lipschitz-type bound. By Corollary 6.5, for each j there is a suitable finite net in B_{X_j}, and then for every (z_1, . . . , z_m) the required approximation holds. Since card F ≤ Π_{j=1}^m (1 + 2m)^{2n_j}, the conclusion follows.

Corollary 6.20 Using for the index set J = M the notation of Remark 6.1, for every r ∈ [2, ∞) there is a constant C_r > 0 such that, for every m-linear random mapping on the product X_1 × · · · × X_m of Banach spaces X_j := (K^{n_j}, ‖·‖_j), 1 ≤ j ≤ m, we have an estimate in which the constant D_m > 0 only depends on m. Taking norms in L_{ϕ_{r(p)}} finishes the argument in the Rademacher case. But vector-valued L_{ϕ_{r(p)}}-averages taken with respect to Steinhaus or Gaussian random variables dominate the corresponding L_{ϕ_{r(p)}}-averages for Rademacher random variables, which completes the argument.

Randomized Dirichlet polynomials
This section is inspired by Queffélec's paper [32]. Based on Bohr's vision of ordinary Dirichlet series and on results from the preceding sections, our goal is to provide some new KSZ-inequalities for randomized Dirichlet polynomials. Inequalities of this type have recently played a crucial role in the study of Dirichlet series; see in particular the probabilistic proofs of the Bohr–Bohnenblust–Hille theorem on Bohr's absolute convergence problem from [13, Remark 7.3] and [33, Theorem 5.4.2]. For more applications in this direction, see, e.g., [13,17,19,39].
Given a finite subset A ⊂ N, we denote by D_A the |A|-dimensional linear space of all Dirichlet polynomials D defined by D(s) = Σ_{n∈A} a_n n^{−s}, s ∈ C, with complex coefficients a_n, n ∈ A. Since each such Dirichlet polynomial obviously defines a bounded and holomorphic function on the right half-plane in C, the space D_A forms a Banach space whenever it is equipped with the norm ‖D‖_∞ = sup_{Re s>0} |Σ_{n∈A} a_n n^{−s}| = sup_{t∈R} |Σ_{n∈A} a_n n^{−it}|.
We note that the particular cases a_n = 1 and a_n = (−1)^n play a crucial role in the study of the Riemann zeta function ζ : C \ {1} → C. In fact, in recent times, techniques related to random inequalities for Dirichlet polynomials have gained more and more importance. This may be illustrated by a deep classical result of Turán [38], which states that the truth of the famous Lindelöf conjecture, with an arbitrarily small ε > 0, is equivalent to the validity of a certain inequality for Dirichlet polynomials, for an arbitrarily small ε > 0 and with C depending on ε.
In order to formulate our main result we need two characteristics of the finite set A ⊂ N defining D_A. For x ≥ 2 we denote (as usual) by π(x) the number of all primes in the interval [2, x], and by Ω(n) the number of prime divisors of n ∈ N counted according to their multiplicities. We define the corresponding quantities Ω(A) and π(A) for the set A. Before we give the proof of this result, we state an immediate consequence of independent interest.

Corollary 6.23 Adopting the notation used in Remark 6.1, for every r ∈ [2, ∞) there is a constant C_r > 0 such that, for every Dirichlet random polynomial Σ_{n∈A} γ_n a_n n^{−it} in D_A, we have ‖sup_{t∈R} |Σ_{n∈A} γ_n a_n n^{−it}|‖_{L_{ϕ_r}} ≤ C_r (1 + Ω(A)) (1 + 20 log π(A))^{1/r} ‖(a_n)_{n∈A}‖_{S_r}.
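The two arithmetic functions just introduced are easy to compute, and the Costa Pereira bounds for π(x) used later in this section can be checked numerically in a small range (a sanity check only; the helper names below are ours):

```python
import math
from bisect import bisect_right

def primes_upto(limit):
    """Sieve of Eratosthenes: all primes <= limit."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(limit**0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return [i for i, flag in enumerate(sieve) if flag]

LIMIT = 5000
PRIMES = primes_upto(LIMIT)

def pi(x):
    """pi(x) = number of primes in the interval [2, x]."""
    return bisect_right(PRIMES, x)

def big_omega(n):
    """Omega(n) = number of prime divisors counted with multiplicity."""
    count = 0
    for p in PRIMES:
        while n % p == 0:
            n //= p
            count += 1
        if n == 1:
            break
    return count

assert big_omega(12) == 3  # 12 = 2 * 2 * 3
for x in range(5, LIMIT + 1):  # lower bound, valid for x >= 5
    assert x * math.log(2) / math.log(x) < pi(x)
for x in range(2, LIMIT + 1):  # upper bound, valid for x > 1
    assert pi(x) < 5 * x / (3 * math.log(x))
```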
As mentioned, our proof of Theorem 6.22 is based on 'Bohr's point of view' (carefully explained in [13,32,33]). More precisely, in our situation we need to embed D_A into a certain space of trigonometric polynomials, controlling the degree as well as the number of variables of the polynomials in this space. To achieve this, we consider the following so-called Bohr lift: Σ_{n∈A} a_n n^{−s} ↦ Σ_{α : p^α ∈ A} a_{p^α} z^α.
By (a particular case of) Kronecker's theorem on Diophantine approximation we know that the continuous homomorphism under consideration has dense range (see, e.g., [13, Proposition 3.4] or [33, Section 2.2]). This implies that L_A is an isometry onto its range.
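At the level of coefficients the Bohr lift is purely combinatorial: n = p_1^{α_1} · · · p_k^{α_k} is sent to the monomial z^α. A minimal sketch (the helper names are ours):

```python
def prime_exponents(n, primes):
    """Exponent vector alpha with n = prod(p**a for p, a in zip(primes, alpha))."""
    alpha = []
    for p in primes:
        a = 0
        while n % p == 0:
            n //= p
            a += 1
        alpha.append(a)
    assert n == 1, "n has a prime factor outside `primes`"
    return tuple(alpha)

def bohr_lift(dirichlet_coeffs, primes):
    """Map {n: a_n} to monomial coefficients {alpha: a_{p^alpha}}."""
    return {prime_exponents(n, primes): a for n, a in dirichlet_coeffs.items()}

primes = [2, 3, 5]
D = {1: 1.0, 2: 0.5, 6: -2.0, 12: 3.0}  # coefficients of sum a_n n^{-s}
P = bohr_lift(D, primes)
assert P[(2, 1, 0)] == 3.0  # 12 = 2^2 * 3 becomes z_1^2 z_2
assert P[(0, 0, 0)] == 1.0  # n = 1 becomes the constant term
```

By Kronecker's theorem the sup of the Dirichlet polynomial over t ∈ R equals the sup of the lifted trigonometric polynomial over the torus, which is exactly why this bookkeeping preserves norms.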
Moreover, we repeat the embedding from (8). Then D_{A(N,x)} is the space of all Dirichlet polynomials of length N which only 'depend on π(x) primes'. Using the remarkably sharp estimates for π(x) due to Costa Pereira [10], namely x log 2 / log x < π(x) for x ≥ 5, and π(x) < 5x/(3 log x) for x > 1, we obtain the required bound. Moreover, since for any 1 ≤ n = p^α ≤ N with α ∈ N_0^{π(x)} we have 2^{|α|} ≤ n ≤ N, we get that |α| ≤ log_2 N. In passing, we note that by Bohr's inequality the linear bijection under consideration is bounded. We close the paper with the following interpolation estimate for randomized Dirichlet polynomials, which is a consequence of Proposition 6.24 and Remark 5.3.

Theorem 6.25
Let (γ i ) i∈N be a subgaussian sequence of (real or complex) random variables with s = sg((γ i )) and M = sup i γ i ∞ < ∞. Suppose that an exact interpolation functor F has the ∞-property with constant δ.
Then there is a constant C = C(s, M) > 0 such that, for any finite set A ⊂ N and any choice of finitely many Dirichlet polynomials D 1 , . . . , D K ∈ D A , we have where φ F is the fundamental function of F .