Estimates for the Derivatives of the Poisson Kernel on Nilpotent Meta-Abelian Groups

Let S be a semidirect product S = N ⋊ A, where N is a connected and simply connected, non-abelian, nilpotent, meta-abelian Lie group and A is isomorphic to ℝ^k, k > 1. We consider a class of second-order left-invariant differential operators on S of the form ℒ_α = L^a + Δ_α, where α ∈ ℝ^k and, for each a ∈ ℝ^k, L^a is a left-invariant second-order differential operator on N and Δ_α = Δ − ⟨α, ∇⟩, where Δ is the usual Laplacian on ℝ^k. Using probabilistic techniques (skew-product formulas for diffusions on S and N respectively, the concept of the derivative of a measure, etc.) we obtain an upper bound for the derivatives of the Poisson kernel for ℒ_α. In the course of the proof we also obtain an upper estimate for the derivatives of the transition probabilities of the evolution on N generated by L^{σ(t)}, where σ is a continuous function from [0, ∞) to ℝ^k.


Introduction
Let S be a semidirect product S = N ⋊ A, where N is a connected, simply connected, non-abelian, nilpotent, meta-abelian Lie group and A is isomorphic to ℝ^k. Specifically, we assume that M and V are abelian Lie groups with corresponding Lie algebras m and v. Then there are bases {X_1, …, X_m} and {Y_1, …, Y_n} of m and v, respectively, such that {X_1, …, X_m, Y_1, …, Y_n} forms a Jordan–Hölder basis for the Lie algebra n of N which diagonalizes the ad_a action on n (here a is the Lie algebra of A). We assume that these bases are ordered so that the matrix of ad_Z is strictly lower triangular for every Z ∈ n. We use these bases to identify m and v with ℝ^m and ℝ^n, respectively, and we use the exponential map to identify M, V, and A with the corresponding Lie algebras.
In what follows the Euclidean space ℝ^k is endowed with the usual scalar product ⟨·,·⟩ and the corresponding ℓ² norm ‖·‖. By ‖·‖_∞ we denote the ℓ^∞ norm, ‖x‖_∞ = max_{1≤i≤k} |x_i|. For g ∈ S we let z(g) = z ∈ N and a(g) = a ∈ A denote the components of g in N ⋊ A, so that g = (z, a). Similarly, for z ∈ N we let x(z) = x ∈ M and y(z) = y ∈ V denote the components of z in M V. The dimension k of A is called the rank of S. Hence, for all H ∈ a, (1.1) holds. Let q = m + n; for 1 ≤ i ≤ q we set the corresponding roots λ_i. The principal object of study in this work is the left-invariant differential operator (1.2) on S, where α = (α_1, …, α_k) ∈ ℝ^k and the Y_i and X_j are considered as left-invariant differential operators on N. We assume that condition (1.3) holds for all i; in particular, none of the λ_i is identically 0. Hence the {λ_i}_{1≤i≤q} span a*, since their joint nullspace consists of the vectors annihilated by ad_a. Inequality (1.3) means that α ∈ A^+. We study the Poisson kernel for the operators (1.2). To describe this concept, let χ be the modular function for left-invariant Haar measure ds on S. Then

χ(g) = det(Ad(g)) = e^{ρ(a)},   (1.4)

where

ρ = Σ_{j=1}^{q} λ_j.   (1.5)

Assumption (1.3) together with [5] implies that there exists a Poisson kernel ν for ℒ_α. That is, there is a C^∞ function ν on N such that every bounded ℒ_α-harmonic function F on S may be written as a Poisson integral against a bounded function f on the quotient space A \ S = N. Our goal in this work is to obtain growth estimates for the derivatives X^I Y^J ν(z) for general multi-indices I and J in the case rank S > 1. In the rank-one case, the growth estimates for both ν(z) and its derivatives are well understood, even for general nilpotent N, thanks to a number of works such as [4, 6–8, 13]. However, virtually nothing seems to be known about the growth estimates for the derivatives of the Poisson kernel in higher rank.
The techniques used in the above-mentioned works do not seem to generalize to higher rank groups, even in the case I = J = 0. In [11, 12] the authors introduced some new techniques for studying the growth of the Poisson kernel in the higher rank case. At that time we had hoped that these techniques could finally yield insight into the growth of the derivatives of the Poisson kernel in higher rank. This hope is, in a sense, validated by the current work. However, the analysis of the growth of the derivatives, even given the work in [12], has forced the introduction of a host of new, and we feel exciting, techniques. (See Section 2 for an outline of some of these techniques.) To describe our main result, we identify (ℝ^k)* with ℝ^k; this allows us to write each root λ_i in coordinates λ_{i,j} (Eq. 1.7). We say that positivity holds if all of the λ_{i,j} are non-negative.
To state our main result we require some notation. If F is any set of roots and a ∈ ℝ^k, let the corresponding quantities be defined as below. Let T = {τ_1, …, τ_ℓ} be an orthogonal family of vectors in ℝ^k such that α · τ_i > 0 for all i. We assume the positivity condition, i.e., that the λ_{i,j} in Eq. 1.7 are non-negative, and define the associated quantities. We also assume that the stated condition holds for 1 ≤ ℓ ≤ k, where |I| + |J| = 0.
For t ∈ ℝ_+ and a ∈ A^+, let δ^a_t be the corresponding dilation. Then t ↦ δ^a_t is a one-parameter group of automorphisms of N for which the corresponding eigenvalues on n are all positive. It is known [9] that N then has a δ^a_t-homogeneous norm: a non-negative continuous function |·|_a on N such that |z|_a = 0 if and only if z = e, and |δ^a_t z|_a = t |z|_a.

Example
Consider N = H^n, the (2n + 1)-dimensional Heisenberg group, which we realize as ℝ^n × ℝ^n × ℝ with the Lie group multiplication given below, where · denotes the scalar product in ℝ^n. The corresponding Lie algebra h_n is then spanned by the indicated left-invariant vector fields. Let a = ℝ^n, let {A_1, …, A_n} be the standard basis of ℝ^n, and let {e_1, …, e_n} be the corresponding dual basis of (ℝ^n)*. We define an a-action on h_n, the Lie algebra of H^n, as below. Exponentiation yields a group action of A = ℝ^n on H^n and a solvable Lie group S = H^n ⋊ ℝ^n. It is clear that the positivity condition below Eq. 1.7 holds. Let α = (α_1, …, α_n) where 0 < α_1 ≤ α_2 ≤ ⋯ ≤ α_n. It is easily checked that the stated formulas hold. (Note that for any positive increasing sequence β_i, √β_1 ≤ 1/β_i.) Now let a = tρ. Then a_max = 4t. Hence −a · λ(I, J) + a_max(1 + ‖ρ‖/2)(|I| + |J|) = 0.
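The group multiplication quoted above was lost in extraction. As a small sanity check, the sketch below implements one standard realization of H^n on ℝ^n × ℝ^n × ℝ (the polarized convention z + z' + x · y'; the paper's exact formula may differ by a symmetrization factor) and verifies the group axioms numerically.

```python
# A minimal sketch of the Heisenberg group H^n realized on R^n x R^n x R.
# The multiplication below uses one standard (polarized) convention,
#   (x, y, z)(x', y', z') = (x + x', y + y', z + z' + x . y'),
# which is an assumption here and may differ from the paper's formula.

def heis_mul(g, h, n):
    """Multiply two elements of H^n, each given as (x, y, z) with x, y in R^n."""
    x, y, z = g
    xp, yp, zp = h
    dot = sum(x[i] * yp[i] for i in range(n))   # x . y'
    return (tuple(x[i] + xp[i] for i in range(n)),
            tuple(y[i] + yp[i] for i in range(n)),
            z + zp + dot)

def heis_inv(g, n):
    """Inverse: (x, y, z)^{-1} = (-x, -y, -z + x . y) in this convention."""
    x, y, z = g
    dot = sum(x[i] * y[i] for i in range(n))
    return (tuple(-v for v in x), tuple(-v for v in y), -z + dot)

e = ((0.0, 0.0), (0.0, 0.0), 0.0)               # identity element in H^2
g = ((1.0, 2.0), (0.5, -1.0), 3.0)
h = ((-0.5, 1.0), (2.0, 0.0), -1.0)
k = ((0.25, 0.0), (1.0, 1.0), 0.5)
```

All the sample coordinates are dyadic rationals, so the associativity and inverse checks below hold exactly in floating point.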

Thus Theorem 1.1 gives
for all α satisfying Eq. 1.8. Corollary 1.2 states that, under the same assumptions, the corresponding bound holds for all h ∈ H^n with |h|_ρ ≥ 1. We do not expect this estimate to be optimal, since the rate of decay should depend on α, I, and J.

Outline of the Proofs of the Main Results
In this section we introduce some notation and describe for the reader's convenience the main idea of the proofs of the results stated above.
Our proofs make use of a well-known probabilistic formula for ν_a on a general N ⋊ A group. Specifically, the diffusion σ(t) on ℝ^k generated by Δ_α is the k-dimensional Brownian motion with drift −2α. Let L^{σ,t} be the resulting family, thought of as a time-dependent family of left-invariant operators on N. This family gives rise to a diffusion which is described by a family of convolution kernels P^σ_{t,s}(z), s ≤ t, z ∈ N, which satisfy the Chapman–Kolmogorov equations with respect to convolution on N (see [12, Section 2.3]). We let P^σ_t = P^σ_{t,0}. We also typically drop the interval (0, t) from our notation, so that, for example, the symbols A^σ_{V,j}(0, t) and η(0, t) introduced below will usually be denoted by A^σ_{V,j} and η, respectively.
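The display defining σ(t) did not survive extraction. The following hedged Monte Carlo sketch simulates the natural candidate σ(t) = b(t) − 2αt and checks its drift and variance; the normalization Var b_j(t) = 2t is taken from Section 7 of the text, while the exact form of the missing display is an assumption.

```python
# A hedged sketch of the process sigma(t) = b(t) - 2*alpha*t, a
# k-dimensional Brownian motion with drift -2*alpha.  The normalization
# Var b_j(t) = 2t matches the one stated later in the text; the precise
# drift constant in the paper's missing display is an assumption here.
import random, math

def simulate_sigma(alpha, t_final, n_steps, rng):
    """One Euler path of sigma on [0, t_final]; returns the terminal value."""
    k = len(alpha)
    dt = t_final / n_steps
    sigma = [0.0] * k
    for _ in range(n_steps):
        for j in range(k):
            # Gaussian increment with variance 2*dt, plus drift -2*alpha_j*dt.
            sigma[j] += rng.gauss(0.0, math.sqrt(2.0 * dt)) - 2.0 * alpha[j] * dt
    return sigma

rng = random.Random(0)
alpha = (1.0, 0.5)
ends = [simulate_sigma(alpha, 1.0, 50, rng) for _ in range(4000)]
mean0 = sum(e[0] for e in ends) / len(ends)                 # near -2*alpha_0 = -2
var0 = sum((e[0] - mean0) ** 2 for e in ends) / len(ends)   # near 2*t = 2
```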
It follows from formula (5.3) of [12] that the corresponding representation holds, where the expectation is with respect to the Wiener measure W_a on the set of continuous paths in ℝ^k starting at a. From Eqs. 1.1 and 1.6, the stated identity holds for z ∈ N and a ∈ A. To bound the expectation in Corollary 2.1, we make use of a formula that expresses P^σ_t as a kind of skew product of kernels on M and V. Specifically, the family of left-invariant, time-dependent operators on V gives rise to a diffusion on V in the same manner as L^{σ,t} defines a diffusion on N. This diffusion may be described by a process X_t with state space ℝ^n and, for each starting point a ∈ ℝ^n, a probability measure W^{V,σ}_a on C([0, ∞), ℝ^n), which may be explicitly computed since V is abelian (see Proposition 3.1). More generally, for each T > 0 we obtain a probability measure which gives rise to a diffusion having transition probabilities described by convolution kernels P^{M,η,σ}_{t,s}(x) on M, which again may be explicitly computed (see formula (4.2)). Corollary 3.6 of [11] implies that the corresponding identity holds for all ψ ∈ C_c(V), t ≤ T, and a.e. σ. More generally, let X̃_i denote X_i considered as a right-invariant differential operator on N; then a similar identity holds for all multi-indices I and t ≤ T. We provide upper bounds on X^I P^{M,η,σ}_t(x) in Section 4. For the operators Y^I the situation is more complicated. Here we make use of the concept of the derivative of a measure [2, 10]. Let V be a vector space with a σ-algebra F of subsets of V which is invariant with respect to shifts along a given vector h ∈ V, i.e., if A ∈ F then A + th ∈ F for every t ∈ ℝ. In this case we define the derivative of the measure along h, provided the limit exists in the weak topology. It follows almost by definition that if f is a C^∞ function on V then the corresponding integration-by-parts identity holds. In Eq. 2.6 we are integrating against this derived measure. In Section 5.2 we prove the required bounds. From this point on we assume that T = t. To consider Y^J, let Q_n(y, z) be the rational function on ℝ² defined below.

It follows from Proposition 2.2 and induction that, for any multi-index J, the corresponding formula holds, where D^J_t and Q^J(z, y) are defined below. We bound the integrands in Section 5.2 and then obtain an upper bound (Theorem 6.1) by estimating the expectations (with respect to the distribution of η). Finally, in Section 7, we use Eq. 2.1 to get the estimate for the derivatives of the Poisson kernel ν; that is, we take the limit as t → ∞ of the expectation (with respect to σ) of the upper bound of the quantity in Eq. 2.9.

The Diffusion on V
The coefficient matrix of the operator on V is a symmetric, positive definite matrix with entries belonging to C([0, ∞), ℝ). Proposition 2.9 of [12] states that for such an operator the transition functions are given by convolution against an explicit Gaussian, and for an n × n invertible matrix A we set the corresponding quadratic form. Specifically, if we choose a basis of ℝ^n so that Y_i corresponds to ∂_{x_i}, then for the operator (2.3) the functions corresponding to a and A may be computed explicitly. Hence, by Eq. 3.2, the corresponding transition probabilities P^{V,σ}_{t,s}(x, dy) are expressed in terms of p_t(x, y), a classical Gaussian kernel; p_t(x, y) is the transition function for the one-dimensional Brownian process. Thus the process η(t) generated by L^{V,σ,t} has coordinates η_j(t) which are independent Brownian motions with time changed according to the clock governed by σ.
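The display giving p_t(x, y) was lost; with the normalization Var b(t) = 2t used later in the text, the natural candidate is p_t(x, y) = (4πt)^{−1/2} exp(−(x − y)²/4t). The sketch below treats that formula as an assumption and verifies its Chapman–Kolmogorov (semigroup) property numerically.

```python
# Numerical check of the Chapman-Kolmogorov property for the classical
# 1-D Gaussian kernel.  The normalization below (variance 2t, matching
# Var b_j(t) = 2t used later in the text) is an assumption about the
# missing display formula.
import math

def p(t, x, y):
    """Heat kernel for d^2/dx^2: p_t(x, y) = (4 pi t)^{-1/2} exp(-(x-y)^2/(4t))."""
    return math.exp(-(x - y) ** 2 / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

def convolve(s, t, x, y, lo=-30.0, hi=30.0, n=6000):
    """Trapezoid approximation of int p_s(x, u) * p_{t-s}(u, y) du."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        u = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * p(s, x, u) * p(t - s, u, y)
    return total * h

lhs = p(2.0, 0.3, -1.1)                  # direct kernel at time t = 2
rhs = convolve(1.25, 2.0, 0.3, -1.1)     # semigroup composition at s = 1.25
```

Because the integrand is smooth and decays rapidly, the trapezoid rule on a truncated line is extremely accurate here, so the two values agree to many digits.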
We may use this observation to realize our process. Let T > 0 and σ be fixed. Let W^T_0 be the product Wiener measure on the corresponding product of path spaces, i.e., the product of the Wiener measures on C([0, T_j], ℝ).

Consider the corresponding linear map between these path spaces. The following proposition (Proposition 3.1) is clear: the diffusion defined by L^{V,σ,t}, 0 ≤ t ≤ T, starting at 0 is realizable as the process b_t with the probability measure W^T_0. We may of course apply the same ideas with the intervals [0, T] and [0, T_j] replaced by [0, ∞) and [0, T_j), respectively. In this case we omit the superscript T.
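The realization just described can be illustrated in code. The sketch below time-changes a Brownian motion b (normalized so Var b(t) = 2t) by a deterministic increasing clock γ; the concrete clock is a hypothetical stand-in, since the clock governed by σ is not recoverable from the text, but the property checked, Var b(γ(t)) = 2γ(t), is the one underlying the realization.

```python
# Sketch of a time-changed Brownian motion eta(t) = b(gamma(t)) for a
# deterministic increasing clock gamma.  The concrete clock below is an
# arbitrary stand-in for the clock governed by sigma (not recoverable
# from the text); the point verified is Var eta(t) = 2 * gamma(t).
import random, math

def gamma(t):
    return t + 0.5 * t * t          # hypothetical increasing clock, gamma(0) = 0

def eta_end(t, n_steps, rng):
    """Sample eta(t) = b(gamma(t)) by running b up to time gamma(t); Var b(s) = 2s."""
    s_final = gamma(t)
    ds = s_final / n_steps
    b = 0.0
    for _ in range(n_steps):
        b += rng.gauss(0.0, math.sqrt(2.0 * ds))
    return b

rng = random.Random(1)
samples = [eta_end(1.0, 40, rng) for _ in range(5000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)  # target 2*gamma(1) = 3
```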

The Diffusion on M
From Section 3.1 of [12], the matrix [a_{ij}] from Eq. 3.1 for the operator on M may be computed explicitly.
where ‖·‖_∞ is the ℓ^∞-norm on ℝ^n. We typically denote η(0, t) by η. Let ‖·‖ be any norm on the set of m × m matrices. We need an estimate describing how ‖(A^{σ,η}_M)^{-1}‖ depends on η.

Proposition 4.1
There is a C > 0 such that the stated bound holds.

Proof We let C denote a generic constant, depending only on m, that may change from line to line. Here η − pρ denotes the action of −pρ on η. Choose p > 0 so that the inequality holds with C as in the last line of Eq. 4.4. The claim then follows from a series expansion; combining this with Eq. 4.5 yields the result.
Lemma 3.3 of [12] implies the following result: We note also the following result that is an immediate consequence of Eq. 4.3:

The Derivatives of P σ
In this section we estimate the derivatives of the evolution kernel P^σ described in formula (2.5). Let A^σ be the q × q matrix defined below. For 0 ≠ y ∈ ℝ^n given and ε > 0, our estimates all follow from Theorem 5.5, which is a corollary of Proposition 5.1 below.

Proof For k ∈ ℕ and ε > 0, we let φ_k be as below (see Lemma 4.3 of [12]). Lemma 4.4 of [12], together with the reasoning above formula (4.7) of [12], implies:

Lemma 5.2 Let n_o be the smallest integer such that n_o ≥ ‖y‖_∞. There are constants C, D > 0 such that the stated bound holds.

The first term in Eq. 5.4 is dominated by an expression which in turn is dominated by the term in Eq. 5.3 involving F_1. The F_2 term comes from the following lemma upon setting c = x^{1/(k_o+1)} and E = (2‖A^σ_V‖)^{-1}.

Lemma 5.3
Let D, E, and a ≥ 0 be given. Then there is a C > 0, independent of D, E, and a, such that the stated inequality holds for all c ≥ 0.

Proof We first note the following lemma, which is a simple calculus exercise.

Lemma 5.4
For all x ≥ b ≥ 0 and a > 0, the stated inequality holds. We apply this lemma with a := 2a/E and raise the resulting inequality to the E/2-th power, concluding that the term on the left in Eq. 5.5 is dominated by the sum in Eq. 5.6. We split the sum in Eq. 5.6 into two parts. The first part is dominated by the first term on the right in Eq. 5.5; the second summation is dominated by the remaining term. Our lemma follows.
Now let φ_k be as in Eq. 5.2. The main result of this section is Theorem 5.5.

Proof We note that the term in Eq. 5.3 coming from F_2 is bounded by the right side of Eq. 5.7. In the region ‖y‖ ≥ x^{1/(k_o+1)}, the result follows from the observation above, and the proof is complete.

Derivatives of P M,σ,η (x) with Respect to x
For an m × m symmetric matrix B, let the associated Gaussian be as below. Then the stated formulas hold for 1 ≤ i, j ≤ m and, in general, for X^I, where Q_I is a polynomial in the variables λ_i(x), 1 ≤ i ≤ m, and β_{j,k}, 1 ≤ j, k ≤ m; furthermore, its coefficients are easily estimated. Hence, from Proposition 4.1, we obtain:

Corollary 5.6
There is a C > 0 such that the stated bound holds, where the notation is as in Eq. 5.1.
The desired estimate on the function X I P σ t (x, y) follows immediately from Theorem 5.5.
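The structural claim of this subsection (differentiating a Gaussian produces a polynomial in the linear forms and the matrix entries, times the same Gaussian) can be checked numerically. The sketch below uses an illustrative Gaussian exp(−⟨Bx, x⟩); the names λ_i and β_{jk} are assumptions standing in for the paper's lost displays.

```python
# Illustrates: d^2/dx_i dx_j of exp(-<Bx, x>) equals
# (-2*B_ij + 4*(Bx)_i*(Bx)_j) * exp(-<Bx, x>), i.e. a polynomial in the
# linear forms (Bx)_i and the entries of B times the same Gaussian.
# The specific Gaussian here is an illustrative assumption.
import math

B = [[2.0, 0.5], [0.5, 1.0]]                      # symmetric 2x2 matrix

def f(x):
    q = sum(B[i][j] * x[i] * x[j] for i in range(2) for j in range(2))
    return math.exp(-q)

def Bx(x, i):
    return sum(B[i][j] * x[j] for j in range(2))  # the linear form (Bx)_i

def d2f(x, i, j):
    """Analytic second derivative: (-2*B_ij + 4*(Bx)_i*(Bx)_j) * f(x)."""
    return (-2.0 * B[i][j] + 4.0 * Bx(x, i) * Bx(x, j)) * f(x)

def d2f_fd(x, i, j, h=1e-4):
    """Central finite-difference approximation of d^2 f / dx_i dx_j."""
    def shift(x, k, d):
        y = list(x); y[k] += d; return y
    return (f(shift(shift(x, i, h), j, h)) - f(shift(shift(x, i, h), j, -h))
            - f(shift(shift(x, i, -h), j, h)) + f(shift(shift(x, i, -h), j, -h))) / (4 * h * h)

x0 = [0.3, -0.7]
```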

Derivatives of X I P M,σ,η (x) with Respect to η
As mentioned in Section 2, we also require estimates on the derivatives in η, along the curves γ_i in Eq. 2.7, of the expression in equality (4.2). Our first result is:

Proposition 5.7 Let D^J_t be as in Eq. 2.8 and ρ ∈ A^+. Then there is a C > 0 such that the stated bound holds, where the notation is as in Eq. 5.1.
Proof From Eq. 4.1, since V is abelian, we obtain the stated identity. In particular, since γ_i(t) is increasing, the corresponding bound holds.

More generally, it follows by induction that
By a decomposition of the multi-index J we mean a finite set P of multi-indices summing to J. Let P_J be the set of all decompositions of J. It follows by induction that the corresponding expansion holds, where the C_J are constants indexed by P_J. Taking the inverse Fourier transform shows that D^J_t P^{M,σ,η}(x) is a linear combination of products indexed by J ∈ P_J. Let J = {J_1, …, J_ℓ}. Expanding this product and multiplying by X^I shows that D^J_t X^I P^{M,σ,η} is a linear combination of such terms. From Corollary 5.6 each such expression is bounded by a multiple of the stated quantity, and Proposition 5.7 follows.
We conclude with the proof of Proposition 2.2.

Proof of Proposition 2.2 From Proposition 3.1, the relevant quantity is a differentiable vector, and Proposition 2.2 follows.

The Derivatives of P σ t (x, y)
The goal of this section is the proof of the following result.

Theorem 6.1
There is a C > 0 such that the stated bound holds.

Proof It is an immediate consequence of Proposition 5.7 and Theorem 5.5 that the corresponding estimate holds. Of course, for f ∈ C^∞(N), the factorization below holds, where the operator on the right acts only on x. Combining this with Eq. 4.3, and arguing more generally, Theorem 6.1 follows.

The Derivatives of the Poisson Kernel
From Corollary 2.1 we need to bound lim_{t→∞} E_a X^I Y^J P^σ_t(g_o), where |g_o| = 1. Theorem 6.1 bounds X^I Y^J P^σ_t by a function of the exponential functionals A^{σ,0}_M(0, t) and A^σ_V(0, t). There is an exact formula for such an expectation at t = ∞ in the case of independent Brownian motions. Specifically, let b(t) = (b_1(t), …, b_n(t)) be an n-dimensional Brownian motion normalized so that Var b_j(t) = 2t, and let α ∈ ℝ^n. Let τ ∈ ℝ^n satisfy τ · α > 0. We define A^τ and ℓ^τ as below, thought of as random variables on C([0, ∞), ℝ^k).
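The display defining A^τ was lost. A natural candidate, treated purely as an assumption here, is the exponential functional A^τ = ∫_0^∞ exp(τ · σ(t)) dt with σ(t) = b(t) − 2αt, which is finite a.s. precisely because τ · α > 0. The hedged Monte Carlo sketch below checks its mean against the closed form implied by these assumptions.

```python
# Hedged Monte Carlo sketch of the exponential functional
#   A^tau = int_0^inf exp(tau . sigma(t)) dt,   sigma(t) = b(t) - 2*alpha*t,
# finite a.s. when tau . alpha > 0.  The integrand is an assumption about
# the paper's missing display; with Var b_j(t) = 2t it gives
#   E[A^tau] = 1 / (2*(tau . alpha) - |tau|^2)  when the denominator is > 0.
import random, math

def sample_A(tau, alpha, horizon, n_steps, rng):
    """Riemann-sum approximation of A^tau along one simulated path of sigma."""
    k = len(alpha)
    dt = horizon / n_steps
    sigma = [0.0] * k
    total = 0.0
    for _ in range(n_steps):
        total += math.exp(sum(tau[j] * sigma[j] for j in range(k))) * dt
        for j in range(k):
            sigma[j] += rng.gauss(0.0, math.sqrt(2.0 * dt)) - 2.0 * alpha[j] * dt
    return total

rng = random.Random(7)
tau, alpha = (0.5, 0.0), (1.0, 0.5)      # tau . alpha = 0.5 > 0
vals = [sample_A(tau, alpha, 15.0, 500, rng) for _ in range(2000)]
mean_A = sum(vals) / len(vals)            # theory (under these assumptions): 4/3
```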
Remark Note that our current use of "ℓ^τ" and "A^τ" is a change of notation from that of Section 4.
Let {e_1, …, e_k} be the standard basis for ℝ^k. Then {e_1 · b(t), …, e_k · b(t)} is a family of independent, one-dimensional Brownian motions with Var(e_i · b(t)) = 2t. For b ∈ ℝ^k and u ∈ ℝ^k_+ we define the corresponding quantities below. The following result follows from Theorem 2.2 of [12].

Proposition 7.1 Let f be a continuous function on
Then the stated formula holds. Remarkably, the function τ ↦ log(A^τ) behaves in some respects as if it depended linearly on τ. To explain this, let T = {τ_1, …, τ_ℓ} be an orthogonal family of vectors in ℝ^k such that α · τ_i > 0 for all i, and let u = α/‖α‖. We assume the positivity condition, i.e., that the λ_{i,j} in Eq. 1.7 are non-negative. The following is a direct consequence of Corollary 7.11, which is proved in Section 7.1 below.
To simplify notation we write W a instead of W ∞,R a to denote the Wiener measure on C([0, +∞), R).

Proposition 7.2
For all a ∈ ℝ^n, n ∈ ℤ, and 1 ≤ i ≤ q, the stated bound holds.

Proof According to Eq. 2.2, the corresponding representation holds. Then, almost surely, the stated estimate holds for σ ∈ Z_{ℓ,n}. In particular, from Proposition 7.2, we obtain:

Corollary 7.4
For ℓ ∈ ℕ and n ∈ ℤ, the stated bound holds.

Proof This follows from the corresponding observation for ℓ ∈ ℕ and n ∈ ℤ.
Hence, for a ∈ −A^+, the stated bound holds, and analogous statements hold for Q_V. We make the indicated changes of variables. The convergence of the resulting integral is clear for x_i small; for large x the growth of the integrand is determined by R(x). The integral will be finite provided the stated condition holds for 1 ≤ ℓ ≤ k. It is easily seen that this implies:

Corollary 7.6 Let τ ∈ ℝ^k. Given d and D there is a C_D such that the stated bound holds for all n ∈ ℕ.

Consequently, we see that:

Corollary 7.7 Let τ ∈ ℝ^k. Given D there is a C_D such that the stated bound holds for all n ∈ ℕ.

Let τ_1, τ_2 ∈ ℝ^n be orthogonal, i.e., τ_1 · τ_2 = 0. Assume that τ_i · α = α_i > 0. Then σ_i(t) = τ_i · σ_t is a pair of independent Brownian motions with drifts −2α_i. Let ℓ_i = ℓ^{τ_i} and ℓ = ℓ^{τ_1+τ_2}, and let u = α/‖α‖.

Proposition 7.8 For d as in Eq. 7.2, the bound (7.3) holds.

In the proof we will need the following very well-known result (see e.g. [3], p. 197).

Lemma 7.9
Let w(t) be the one-dimensional Brownian motion with negative drift; then its all-time maximum has an exponential distribution.

Proof of Proposition 7.8 In order to prove Eq. 7.3 it is enough to show that

W_0({n ≤ |ℓ − (ℓ_1 + ℓ_2)| ≤ n + 1}) ≤ C e^{−‖α‖ d n}   (7.4)

with a constant C > 0 not depending on n.
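The classical fact invoked here — for a Brownian motion with negative drift, the all-time maximum is exponentially distributed — can be checked by simulation. With the normalization Var b(t) = 2t used in this section, P(sup_{t≥0}(b(t) − μt) ≥ x) = exp(−μx) (with Var t it would be exp(−2μx)); the horizon and step size below are simulation compromises, so the discrete estimate sits slightly low.

```python
# Monte Carlo check: the all-time maximum of a drifted Brownian motion
# is exponentially distributed.  Normalization Var b(t) = 2t, so the
# target probability is exp(-mu * x).  Discrete monitoring misses some
# excursions, biasing the estimate slightly downward.
import random, math

def sup_drifted_path(mu, horizon, n_steps, rng):
    """Running maximum of w(t) = b(t) - mu*t over a discretized path."""
    w = 0.0
    best = 0.0
    dt = horizon / n_steps
    for _ in range(n_steps):
        w += rng.gauss(0.0, math.sqrt(2.0 * dt)) - mu * dt
        best = max(best, w)
    return best

rng = random.Random(3)
mu, x = 1.0, 1.0
hits = sum(sup_drifted_path(mu, 8.0, 800, rng) >= x for _ in range(4000))
p_hat = hits / 4000.0                     # theory: exp(-1) ~ 0.368
```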

Also
from which the corollary follows.

Proof We may assume a = 0, since the probability of the stated set is clearly independent of the starting point. If τ belongs to the set described on the left side of the preceding inequality, then the stated estimates hold, and our corollary follows from Corollaries 7.7 and 7.10.
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.