Abstract
We give a Gaussian-type upper bound for the transition kernels of time-inhomogeneous diffusion processes on a nilpotent meta-abelian Lie group N generated by a family of time-dependent second-order left-invariant differential operators. These evolution kernels are related to the heat kernels of left-invariant second-order differential operators on higher rank NA groups.
1 Introduction
Time-dependent parabolic equations and, in particular, the problem of finding upper and lower bounds for their fundamental solutions have attracted considerable attention in recent years (see e.g. [5, 12–15, 31] and the monographs by Stroock and Varadhan [27], and van Casteren [4]). The aim of this paper is to obtain a Gaussian-type upper bound for the transition kernel of a particular kind of diffusion process (evolution) on a nilpotent meta-abelian group N. The type of evolution equation considered here comes from the study of the heat equation on a class of solvable Lie groups, the so-called higher rank NA groups, which are, by definition, semi-direct products of a nilpotent group and an abelian group of dimension greater than 1 (more on that in Sect. 1.4).
1.1 Our setting
In what follows we assume that the group N is meta-abelian
where M and V are abelian Lie groups with the corresponding Lie algebras \(\mathfrak m\) and \(\mathfrak v.\) We consider a family of automorphisms \(\{\Phi (a)\}_{a\in \mathbb {R}^k}\) of \(\mathfrak n\) that leave \(\mathfrak m\) and \(\mathfrak v\) invariant, where \(a\mapsto \Phi (a)\) is a homomorphism of \(\mathbb {R}^k\) into \(\text {Aut}(\mathfrak n)\). Let \(\mathfrak m\) and \(\mathfrak v\) be spanned, respectively, by \(\{Y_1,\ldots ,Y_{d_1}\}\) and \(\{X_1,\ldots ,X_{d_2}\}.\) We use these bases to identify \(\mathfrak m\) and \(\mathfrak v\) with \(\mathbb {R}^{d_1}\) and \(\mathbb {R}^{d_2}\) respectively. We also use the exponential mapping to identify M and V with \(\mathfrak m\) and \(\mathfrak v\) and thus with \(\mathbb {R}^{d_1}\) and \(\mathbb {R}^{d_2}\) respectively. For \(x\in N\) we write \(x=m(x)v(x)=mv=(m,v)\) where \(m(x)=m\in M\) and \(v(x)=v\in V\) denote the components of x in \(M\rtimes V.\)
Now we consider the action of the abelian Lie group \(A=\mathbb {R}^k\) on N. We have a semi-direct product \(S=N\rtimes A=N\rtimes \mathbb R^k\) with the multiplication in S given by
where, for \(x=\exp X,\) \(X\in \mathfrak n,\) the action of \(a\in A=\exp \mathfrak a=\mathbb R^k\) on N is defined as
The group S is a solvable Lie group. The rank of S is, by definition, equal to \(\dim A.\) Similarly, for \(g\in S\) we write \(g=x(g)a(g)=xa=(x,a),\) where \(x(g)=x\in N\) and \(a(g)=a\in A\) denote the components of g in \(N\rtimes A.\) In what follows we identify the group A, its Lie algebra \(\mathfrak {a},\) and \(\mathfrak {a}^*,\) the space of linear forms on \(\mathfrak {a},\) with the Euclidean space \(\mathbb {R}^k\) endowed with the usual scalar product \(\langle \cdot ,\cdot \rangle \) and the corresponding norm \(\Vert a\Vert =\langle a,a\rangle ^{1\slash 2}.\) By \(\Vert \cdot \Vert _\infty \) we denote the maximum norm \(\Vert a\Vert _\infty =\max _{1\le j\le k}|a_j|.\)
Let \(\sigma \) be a continuous function from \([0,+\infty )\) to \(A=\mathbb {R}^k,\) and denote
We assume also that
-
(A1)
in the \(\{Y_i\}_{1\le i\le d_1}\) basis on \(\mathfrak m,\) \({{\mathrm{ad}}}_X\) is lower triangular for all \(X\in \mathfrak v\) and
-
(A2)
the restriction \(S^\sigma \) of \(\Phi ^\sigma \) to M considered as a linear operator on \(\mathfrak m\) is given in the \(\{Y_i\}_{1\le i\le d_1}\) basis by a \(d_1\times d_1\) lower triangular matrix:
$$\begin{aligned} S^\sigma (t)=\Phi ^\sigma (t)|_M=[s^\sigma _{ij}]_{1\le i,j\le d_1}. \end{aligned}$$Specifically, for \(i\ge j,\)
$$\begin{aligned} s_{ij}^\sigma (u)=h_{ij}^M(\sigma (u))e^{\xi _j(\sigma (u))}, \end{aligned}$$where \(h_{ij}^M\in \mathbb {R}[a_1,\ldots ,a_k]\) are polynomials in \(a\in A=\mathbb {R}^k\) with \(h_{jj}^M=1,\) for \(1\le j\le d_1,\) and \(\xi _1,\ldots ,\xi _{d_1}\in A^*=(\mathbb {R}^k)^*.\)
-
(A3)
The matrix
$$\begin{aligned} T^\sigma (t)=\Phi ^\sigma (t)|_V=[t^\sigma _{ij}]_{1\le i,j\le d_2} \end{aligned}$$is a \(d_2\times d_2\) lower triangular matrix and, for \(i\ge j,\)
$$\begin{aligned} t_{ij}^\sigma (u)=h_{ij}^V(\sigma (u))e^{\vartheta _j(\sigma (u))}, \end{aligned}$$where \(h_{ij}^V\in \mathbb {R}[a_1,\ldots ,a_k]\) are polynomials in \(a\in A=\mathbb {R}^k\) with \(h_{jj}^V=1,\) for \(1\le j\le d_2,\) and \(\vartheta _1,\ldots ,\vartheta _{d_2}\in A^*=(\mathbb {R}^k)^*.\)
1.2 Evolution kernel
Let, for \(Z\in \mathfrak n,\)
Let,
Now we consider the evolution process generated by \(\mathcal L_N^\sigma (t).\) By C(N) we denote the set of continuous functions on N. Let
Let \(d=\dim \mathfrak n.\) For \(X\in \mathfrak {n},\) we let \(\tilde{X}\) denote the corresponding right-invariant vector field. For a multi-index \(I=(i_1,\ldots ,i_{d}),\) \(i_j\in \mathbb {Z}^+,\) and a basis \(X_1,\ldots ,X_d\) of the Lie algebra \(\mathfrak n\) we write \(X^I=X_1^{i_1}\cdots X_d^{i_d}.\) For \(\kappa ,\ell =0,1,2,\ldots ,\infty \) we define
and
In particular \(C^{(0,2)}(N)\) with the norm \(\Vert f\Vert _{(0,2)}\) is a Banach space. It is known (see [4, 19, 28]) that there exists a unique family of bounded operators \(U^\sigma _{s,t}\) on \(C_\infty (N)\) which satisfies
-
(i)
\(U^\sigma _{s,s}=\mathrm {Id},\) for all \(s\ge 0,\)
-
(ii)
\(\lim _{h\rightarrow 0}U^\sigma _{s,s+h}f=f\) in \(C_\infty (N),\)
-
(iii)
\(U^\sigma _{s,r}U^\sigma _{r,t}=U^\sigma _{s,t},\) \(0\le s\le r\le t,\)
-
(iv)
\(\partial _sU^\sigma _{s,t}f=-\mathcal L^\sigma _N(s) U^\sigma _{s,t}f\) for every \(f\in C^{(0,2)}(N),\)
-
(v)
\(\partial _tU^\sigma _{s,t}f=U^\sigma _{s,t}\mathcal L^\sigma _N(t)f\) for every \(f\in C^{(0,2)}(N),\)
-
(vi)
\(U^\sigma _{s,t}:C^{(0,2)}(N)\rightarrow C^{(0,2)}(N)\) for all \(s\le t.\)
The family \(U^\sigma _{s,t}\) is called the evolution generated by \(\mathcal L_N^\sigma (t).\) By \(P^\sigma _{t,s}\) we denote the corresponding kernel
Since \(\mathcal L^\sigma _N(t)\) commutes with left translation, the same is true for \(U^\sigma _{s,t}.\) Hence,
With a small abuse of notation we write
Hence, the operator \(U^\sigma _{s,t}\) is a convolution operator with a probability measure (with a smooth density) \(P^\sigma _{t,s},\)
We call \(P^\sigma _{t,s}(x)\) or \(P^\sigma _{t,s}(x;y)\) the evolution kernel. Sometimes \(P^\sigma _{t,s}(x;y)\) is called the transition kernel since, in probabilistic terms, \(P^\sigma _{t,s}(x;y)\) is the transition kernel of the time-dependent Markov process (or evolution) \(\omega (t)\) on N defined by the operator \(\mathcal L^{\sigma }_N(t).\) The probability that, starting from x at time s, the process \(\omega (t)\) is in a given set \(B\subset N\) is
By (iii), for \(s\le r\le t,\)
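As a toy one-dimensional illustration (our own sketch, not part of the paper's setting), the composition rule in (iii) — the Chapman–Kolmogorov identity — can be checked numerically for the classical heat kernel, where \(P_{t,s}(x;y)\) is assumed to be the Gaussian density in y with mean x and variance \(t-s\):

```python
import numpy as np

def p(x, y, v):
    """Heat kernel on R: Gaussian density in y with mean x and variance v."""
    return np.exp(-(y - x) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)

# Chapman-Kolmogorov: integrating out the position at the intermediate time r
# must reproduce the kernel from s to t, i.e. P_{s,r} * P_{r,t} = P_{s,t}.
s, r, t = 0.0, 0.5, 1.0
x, y = 0.3, -0.7
z = np.linspace(-10.0, 10.0, 4001)   # quadrature grid for the midpoint
dz = z[1] - z[0]
lhs = np.sum(p(x, z, r - s) * p(z, y, t - r)) * dz
rhs = p(x, y, t - s)
print(lhs, rhs)                      # agree up to quadrature error
```

The agreement of the two values reflects the fact that the convolution of two centered Gaussians adds their variances.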
1.3 Main result
Our aim is to estimate the evolution kernel \(P^\sigma _{t,s}.\) In order to do this, first we disintegrate the process \(\omega (t)\) into the corresponding processes on M and V respectively. Specifically, let
thought of as operators on M and V respectively.
For \(v\in V,\) let
Then the operator \(\mathcal L^\sigma _N(t)\) is the skew-product of the above defined operators, i.e.,
The time-dependent family of operators \(\mathcal L_V^{\sigma }(t)\) gives rise to an evolution on \(V=\mathbb {R}^{d_2}\) that is described by a kernel \(P^{V,\sigma }_{t,s}\) which may be explicitly computed, since V is abelian. For \(\eta \in C^\infty ([0,+\infty ), V)\) let
This family of operators gives rise to an evolution on \(M=\mathbb {R}^{d_1}\) that is described by a kernel \(P^{M,\sigma ,\eta }_{t,s}\) which may also be explicitly computed (see Sect. 4).
One of our main tools is the following skew-product formula for \(P^\sigma _{t,s}\) (which can be proved along the lines of [23, Theorem 1.2], where diagonal action of A on N was considered).
Theorem 1.1
For \(m\in M\) and \(v\in V,\)
where \(\mathbf W^{V,\sigma }_{s,v}\) is the probability measure on the space \(C([s,+\infty ),V)\) generated by the diffusion process \(\eta (t)\) starting from \(v\in V\) at time s, with the generator \(\mathcal L_V^\sigma (t).\)
A difficulty in applying the above formula is that the process \(\eta (t)\) does not have independent coordinates. This difficulty is overcome with the help of Proposition 3.1, which gives an estimate for the joint probability of \(\sup _{u\in [s,t]}\Vert \eta (u)\Vert _\infty \) and the position of the process \(\eta \) at time t, i.e., \(\eta (t).\) This makes all the computations quite involved.
In order to state our main theorem we need to introduce some notation. Let, for \(1\le j\le d_2,\)
where
Set
The main result is the following estimate.
Theorem 1.2
For every \(T>0\) there are positive constants \(c_1,c_2,c_3\) and a natural number \(k_o\) such that for all \(T\ge t\ge \tau \ge 0\) and all \((m,v)\in N,\)
where
and
Remark
In Sect. 7 we give explicit estimates for the quantities \(\tilde{\Theta }(\tau ,t,v)\) and \(\mathcal V(\tau ,t).\)
Remark
Gaussian estimates in \(\mathbb {R}^n\) for the fundamental solution of the time-dependent parabolic equations are usually obtained under the assumption that the operator is (uniformly) elliptic (see e.g. classical papers by Aronson [2] and Fabes and Stroock [11]). We do not require this condition and our estimate explicitly depends on the coefficients of the operator.
Remark
If the action of A on N is diagonal, i.e., the polynomials in entries of matrices \(S^\sigma (t)\) and \(T^\sigma (t)\) [see the assumptions (A2) and (A3)] satisfy \(h_{ij}^M=h_{ij}^V=0\) for \(i\not =j\) then all the quantities appearing in Theorem 1.2 can be easily computed. We get
and
Finally,
In this setting Theorem 1.2 simplifies and we obtain [23, Theorem 4.1].
1.4 Applications
Since the estimate given by Theorem 1.2 may, at first glance, seem quite technical and complicated, it is worth explaining why this formula is important and where it can be used. First of all, the estimate for \(P^\sigma _{t,s}\) given by Theorem 1.2 can be applied in the analysis of left-invariant, second-order differential operators on the higher rank NA groups, i.e., the semi-direct products \(N\rtimes \mathbb {R}^k\) as described above (at this moment we do not assume that \(N=M\rtimes V\)). Consider, for \(\alpha =(\alpha _1,\ldots ,\alpha _k)\in \mathbb {R}^k,\) the left-invariant differential operator of the form
where
In this setting the properties of bounded harmonic functions on S are certainly of interest. Under some assumptions on the drift vector \(\alpha \) there exists a Poisson kernel \(\nu \) for \(\mathcal L_\alpha \) [6, 7]. That is, there is a \(C^\infty \) function \(\nu \) on N such that every bounded \(\mathcal L_\alpha \)-harmonic function F on S may be written as a Poisson integral against a bounded function f on \(S\slash A=N,\)
where
where \(\chi \) is the modular function for left invariant Haar measure on S, i.e.,
Conversely the Poisson integral of any \(f\in L^\infty (N)\) is a bounded \(\mathcal L_\alpha \)-harmonic function.
It is known that the Poisson kernel \(\nu \) is equal to \(\lim _{t\rightarrow \infty }\pi _N(\mu _t),\) where \(\pi _N(g)=x(g)\) is the projection from S onto N. To get some information on \(\mu _t\) we use a well-known formula which expresses \(T_t\) as a skew-product of the diffusions on N and A. For \(f\in C_c(N\times \mathbb {R}^k)\) and \(t\ge 0,\)
where the expectation \(\mathbf{E}\) is taken with respect to the distribution of the process \(\sigma _t\) (Brownian motion with drift) in \(\mathbb {R}^k\) generated by \(\Delta _\alpha .\) The operator \(U^\sigma _{0,t}\) acts on the first variable of the function f (as a convolution operator). The idea of such a decomposition goes back to [16, 17, 29]. In the context of NA groups with \(\dim A=1\) this decomposition was used in [7–10], and was later generalized by the authors and applied for \(\dim A>1,\) see e.g. [20, 22]. Note that Theorem 1.1 is a generalization of (1.10) to evolution operators.
Estimates for the Poisson kernel for the operator (1.9) were obtained by the authors in a series of papers [20–24]. However, in all these papers the action of A on N is diagonal. Thus Theorem 1.2 opens the door to consider non-diagonal actions. This is going to be the subject of our future research.
1.5 Structure of the paper
The outline of the rest of the paper is as follows. In Sect. 2 we state the formula for the evolution kernel in \(\mathbb {R}^n\) and recall the Borell–TIS inequality, which is used in Sect. 3 in the proof of an appropriate estimate for \(\mathbf{P}\left( \sup _{s\in [\tau ,t]}\Vert \eta (s)\Vert _\infty \ge u\text { and }\eta (t)\in B\right) \) for \(u\in \mathbb {R}\) and \(B\subset \mathbb {R}^n.\) In Sects. 4 and 5 we study evolutions on M and V, respectively. Finally, in Sect. 6 we give the proof of Theorem 1.2 and in Sect. 7 we give some estimates for the quantities given in (1.7) and (1.8).
2 Preliminaries
2.1 Gaussian variables and fields
We follow the presentation in [1]. For \(\mathbb {R}^n\)-valued random variables X and Y their covariance matrix is defined as \(\mathrm {Cov}(X,Y)=\mathbf{E}(X-\mathbf{E}X)(Y-\mathbf{E}Y)^t.\) An \(\mathbb {R}^n\)-valued random variable X is said to be multivariate Gaussian if, for every non-zero \(\alpha =(\alpha _1,\ldots ,\alpha _n)\in \mathbb {R}^n,\) the real-valued random variable \(\langle \alpha ,X\rangle =\sum _{i=1}^n\alpha _iX_i\) is Gaussian. If the covariance matrix is non-degenerate, the density of X is given by the multivariate normal density
$$\begin{aligned} \varphi (x)=\frac{1}{(2\pi )^{n\slash 2}(\det C)^{1\slash 2}}\,e^{-\langle C^{-1}(x-m),\,x-m\rangle \slash 2}, \end{aligned}$$
where \(m=\mathbf{E}X\) and \(C=\mathrm {Cov}(X,X)\) is the (positive definite) \(n\times n\) covariance matrix. In this case we write \(X\sim \mathcal N_n(m,C)\) or simply \(X\sim \mathcal N(m,C).\)
Lemma 2.1
Let \(X\sim \mathcal N_n(m,C).\) Assume that \(d<n\) and make the partition
$$\begin{aligned} X=\begin{pmatrix}X^1\\ X^2\end{pmatrix},\qquad m=\begin{pmatrix}m^1\\ m^2\end{pmatrix},\qquad X^1,m^1\in \mathbb {R}^d, \end{aligned}$$
and
$$\begin{aligned} C=\begin{pmatrix}C_{11} & C_{12}\\ C_{21} & C_{22}\end{pmatrix}, \end{aligned}$$
where \(C_{11}\) is a \(d\times d\)-matrix. Then each \(X^i\sim \mathcal N(m^i,C_{ii})\) and the conditional distribution of \(X^i\) given \(X^j\) is also Gaussian, with mean vector
$$\begin{aligned} m^i+C_{ij}C_{jj}^{-1}\left( X^j-m^j\right) \end{aligned}$$
and covariance matrix
$$\begin{aligned} C_{ii}-C_{ij}C_{jj}^{-1}C_{ji}. \end{aligned}$$
Proof
See e.g. [1, p. 8]. \(\square \)
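The Gaussian conditioning of Lemma 2.1 is the standard Schur-complement formula: the conditional law of \(X^i\) given \(X^j\) has mean \(m^i+C_{ij}C_{jj}^{-1}(X^j-m^j)\) and covariance \(C_{ii}-C_{ij}C_{jj}^{-1}C_{ji}.\) The following sketch (our own illustration, not from the paper) checks it numerically against the textbook bivariate formulas:

```python
import numpy as np

def conditional_gaussian(m, C, d, a):
    """Mean and covariance of X[:d] given X[d:] = a, for X ~ N(m, C),
    via the Schur-complement formulas of Lemma 2.1."""
    m1, m2 = m[:d], m[d:]
    C11, C12 = C[:d, :d], C[:d, d:]
    C21, C22 = C[d:, :d], C[d:, d:]
    K = C12 @ np.linalg.inv(C22)
    return m1 + K @ (a - m2), C11 - K @ C21

# Bivariate check: E[X1|X2=a] = m1 + rho*s1/s2*(a - m2),
# Var[X1|X2=a] = s1^2*(1 - rho^2).
m = np.array([1.0, -2.0])
s1, s2, rho = 2.0, 0.5, 0.3
C = np.array([[s1**2, rho*s1*s2], [rho*s1*s2, s2**2]])
mu, Sig = conditional_gaussian(m, C, 1, np.array([0.0]))
print(mu, Sig)   # mean 3.4, variance 3.64
```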
A random field is a stochastic process, taking values in some space, usually a Euclidean space, and defined over a parameter space T. A real-valued Gaussian process is a random field f on a parameter set T for which the (finite-dimensional) distributions of \((f_{t_1},\ldots ,f_{t_n})\) are multivariate Gaussian for each \(1\le n<+\infty \) and each \((t_1,\ldots ,t_n)\in T^n.\)
2.2 Gaussian inequalities
The following powerful inequality was discovered independently, and was proved in very different ways, by Borell [3] and Tsirelson et al. [30]. Following [1] we call it the Borell–TIS inequality.
Theorem 2.2
(Borell–TIS inequality) Let \(f_t\) be a centered Gaussian process, almost surely bounded on T. Write \(|f|_T=\sup _{t\in T}f_t.\) Then \(\mathbf{E}|f|_T<+\infty \) and, for all \(u>0,\)
$$\begin{aligned} \mathbf{P}\left( |f|_T-\mathbf{E}|f|_T>u\right) \le e^{-u^2\slash 2\sigma _T^2}, \end{aligned}$$
where
$$\begin{aligned} \sigma _T^2=\sup _{t\in T}\mathbf{E}f_t^2. \end{aligned}$$
Proof
For the proof see the original papers [3, 30] or [1]. \(\square \)
Immediately, we get the following
Corollary 2.3
Let \(f_t\) be a centered Gaussian process, almost surely bounded on T. Then for all \(u>\mathbf{E}|f|_T,\)
$$\begin{aligned} \mathbf{P}\left( |f|_T\ge u\right) \le e^{-\left( u-\mathbf{E}|f|_T\right) ^2\slash 2\sigma _T^2}. \end{aligned}$$
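As a quick Monte Carlo illustration (ours, not from the paper) of the tail bound \(\mathbf{P}(|f|_T\ge u)\le e^{-(u-\mathbf{E}|f|_T)^2\slash 2\sigma _T^2},\) take Brownian motion on [0, 1], for which \(\sigma _T^2=\sup _t\mathbf{E}B_t^2=1\) and \(\mathbf{E}\sup _tB_t=\sqrt{2\slash \pi }:\)

```python
import numpy as np

# Discretized Brownian motion on [0,1]; its running maximum approximates
# |f|_T = sup_t B_t for the centered Gaussian process f_t = B_t.
rng = np.random.default_rng(0)
n_paths, n_steps = 4000, 1000
dt = 1.0 / n_steps
incs = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.cumsum(incs, axis=1)
sup = np.maximum(paths.max(axis=1), 0.0)   # sup over [0,1] includes B_0 = 0

u = 2.0
e_sup = np.sqrt(2.0 / np.pi)               # E sup_t B_t ~ 0.798
mc_tail = float((sup >= u).mean())         # Monte Carlo estimate of P(sup >= u)
bound = float(np.exp(-(u - e_sup) ** 2 / 2))
print(mc_tail, bound)                      # the estimate stays below the bound
```

Here the exact tail is \(2(1-\Phi (u))\approx 0.046\) at \(u=2,\) well below the Borell–TIS bound \(\approx 0.49;\) the inequality is far from sharp but is dimension-free.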
2.3 Evolution equation in \(\mathbb {R}^n\)
Let
where \(\partial _i=\partial _{x_i}\) and \(a(t)=[a_{ij}(t)]\) is a symmetric, positive definite matrix and the \(a_{ij}\) and \(\delta _j\) belong to \( C([0,\infty ),\mathbb {R})\). For \(t>s\), let \(P_{t,s}\) be the evolution kernel generated by L(t). Let, for \(1\le i,j\le n,\)
Proposition 2.4
The evolution kernel \(P_{t,s}\) corresponding to the operator L(t) defined in (2.1) is given by
Proof
See e.g. [23, Proposition 2.9] \(\square \)
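Proposition 2.4 says that the evolution kernel of L(t) is Gaussian, with mean shifted by the integrated drift and covariance given by the integrated diffusion matrix. A one-dimensional Monte Carlo sketch (our own, assuming the normalization \(L(t)=\tfrac 12 a(t)\partial _x^2+\delta (t)\partial _x,\) so that the kernel started from x at time s is Gaussian with mean \(x+\int _s^t\delta (u)\,du\) and variance \(\int _s^t a(u)\,du\)):

```python
import numpy as np

# Euler-Maruyama simulation of the diffusion dX = delta(t) dt + sqrt(a(t)) dW,
# generated (in our normalization) by L(t) = (1/2) a(t) d^2/dx^2 + delta(t) d/dx.
a = lambda t: 1.0 + t      # hypothetical time-dependent diffusion coefficient
delta = lambda t: t        # hypothetical time-dependent drift

rng = np.random.default_rng(1)
n_paths, n_steps = 10000, 500
dt = 1.0 / n_steps
x = np.zeros(n_paths)      # all paths start from x = 0 at time s = 0
for k in range(n_steps):
    t = k * dt
    x += delta(t) * dt + np.sqrt(a(t) * dt) * rng.normal(size=n_paths)

# Predicted Gaussian law at time 1:
# mean = int_0^1 delta = 1/2,  variance = int_0^1 a = 3/2.
print(x.mean(), x.var())
```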
3 Main probabilistic estimate
Consider the operator L(t), defined in (2.1), without the drift vector \(\delta (t)=(\delta _1(t),\ldots ,\delta _n(t)),\) i.e.,
Let \(b_s=(b_s^1,\ldots ,b_s^n)\) be the stochastic process generated by the operator L(t). Define, for \(v\in \mathbb {R}^n,\)
and let \(\Vert \cdot \Vert _\infty \) denote the \(\ell ^\infty \)-norm on \(\mathbb {R}^n,\) i.e., for a vector \(y\in \mathbb {R}^n,\) \(\Vert y\Vert _\infty =\max _{1\le j\le n}|y_j|.\)
The distribution of the process \(b_t\) starting at time \(\tau \) from v, i.e., \(b_\tau =v,\) is denoted by \(\mathbf{P}_{\tau ,v}(\cdot ).\) This is a probability measure on the space of trajectories \(b_t\in C([0,\infty ),\mathbb {R}^n).\)
Proposition 3.1
Let \(b_t\) be the process generated by L(t) defined in (3.1). For every \(T>0\) there exists a constant \(C>0\) such that, for every \(\varepsilon >0,\) \(u\ge 0,\) \(v\in \mathbb {R}^n,\) and all \(T\ge t\ge \tau \ge 0,\) the following estimate holds,
for all \(u>\sup _{a\in B_\varepsilon (v)}\Theta (\tau ,t,a)+C\sum _{j=1}^{n}V_j(\tau ,t)^{1\slash 2},\) where
and
Proof
Let \(\tau \) be fixed. To simplify notation we write, for \(s\ge \tau ,\)
where \([A_{ij}]\) is the matrix defined in (2.2). We write
Now we estimate the conditional probability under the integral sign. For fixed \(s\le t,\) consider the 2n-dimensional random vector
By Proposition 2.4, \(\mathrm {Cov}(b_s,b_s)=[A_{ij}(0,s)]=A_s.\) Since the process \(b_s\) has independent increments we get
By \(\mathrm {Cov}(b_t,b_s)=\mathrm {Cov}(b_s,b_t)^t\) we get,
By Lemma 2.1 the conditional distribution of \(b_s\) given \(b_t\) is Gaussian with mean vector
and covariance matrix
Let \(b_s(a)=(b_s^1(a),\ldots ,b_s^n(a))\) denote the process whose distribution is the conditional distribution of \(b_s\) given \(b_t=a.\) In this notation
Let
Clearly,
We continue (3.3) as follows.
Thus, by symmetry of \(\tilde{b}_s(a)\) (see (3.4)),
Denote
By Corollary 2.3, with \(T=\{1,\ldots ,n\}\times [\tau ,t],\)
for all \(u>\Theta (\tau ,t,a)+\Xi (\tau ,t,a).\) Taking together the above estimate, (3.3), (3.5), and putting the resulting upper bound into (3.2) we get
for all \(u>\sup _{a\in B_\varepsilon (v)}\Theta (\tau ,t,a)+\Xi (\tau ,t,a).\) We used Proposition 2.4 in the last inequality.
Let us estimate the quantities introduced in (3.6). The coordinate process \(\tilde{b}_s^j(a)\) is a Gaussian process and, by (3.4),
In this notation
It follows from [26, (1.1)] that for every \(\eta >0\) and \(T>0\) there is \(c=c_{\eta ,T}\) such that for every \(u>0\) and all \(T\ge t\ge \tau \ge 0,\)
Hence, taking \(\eta =1\slash 2,\)
Hence,
By (3.8),
Hence, for \(u>\sup _{a\in B_\varepsilon (v)}\Theta (\tau ,t,a)+C\sum _{j=1}^nV_j(\tau ,t)^{1\slash 2},\) we can rewrite (3.7) as follows,
Hence, the result follows. \(\square \)
With the notation as in Proposition 3.1 we have immediately the following
Corollary 3.2
For every \(T>0\) there exists a constant \(C>0\) such that for all \(\varepsilon >0\) and all \(T\ge t\ge \tau \ge 0\) the following estimate holds,
for all \(u>\sup _{a\in B_\varepsilon (v)}\Theta (\tau ,t,a)+C\sum _{j=1}^nV_j(\tau ,t)^{1\slash 2}.\)
4 Evolution on M
We choose coordinates \(y_i\) for M for which \(Y_i\) corresponds to \(\partial _i=\partial _{y_i},\) \(1\le i\le d_1.\) Let \(\eta \in C([0,\infty ),V)\) and consider the evolution on M generated by the operator
Then
and consequently,
where \(\psi (t)=[\psi _{i,j}(t)]\) is the matrix of \({{\mathrm{Ad}}}(\eta (t))\Phi ^\sigma (t)|_M.\) Thus the matrix \([a_{ij}]\) from (2.1) for the operator \(\mathcal L_M^\sigma (t)^\eta \) is
where the adjoint is in the \(y_j\) coordinates. Let
For a \(d\times d\) invertible matrix A we set
It follows from Proposition 2.4 that the evolution kernel \(P^{M,\sigma ,\eta }_{t,s}\) for the operator \(\mathcal L_M^\sigma (t)^\eta \) is Gaussian, and in our notation, is given by
and the corresponding transition kernel is,
For a matrix A, the operator norm, that is, the norm of A considered as a linear operator from \(\ell ^2(\mathbb {R}^n)\) to \(\ell ^2(\mathbb {R}^n),\) is denoted by \(\Vert A\Vert .\) We will need the following two simple lemmas.
Lemma 4.1
Let A be a positive semi-definite matrix. Then
Proof
See e.g. [22, Lemma 4.1] \(\square \)
Lemma 4.2
Let K and D be square matrices and let
$$\begin{aligned} A=\begin{pmatrix}K & B\\ C & D\end{pmatrix}. \end{aligned}$$
If \(\det K\not =0\) then \(\det A=\det K\det (D-CK^{-1}B).\)
Proof
See e.g. [32]. \(\square \)
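Lemma 4.2 is the classical Schur-complement determinant formula, \(\det A=\det K\det (D-CK^{-1}B).\) A quick numerical check (our own illustration) on a random block-partitioned matrix:

```python
import numpy as np

# Random 5x5 matrix partitioned into blocks with K of size 2x2.
rng = np.random.default_rng(2)
n, d = 5, 2
A = rng.normal(size=(n, n))
K, B = A[:d, :d], A[:d, d:]
C, D = A[d:, :d], A[d:, d:]

# Schur-complement determinant formula (valid since det K != 0 a.s.).
lhs = np.linalg.det(A)
rhs = np.linalg.det(K) * np.linalg.det(D - C @ np.linalg.inv(K) @ B)
print(lhs, rhs)            # the two determinants agree
```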
Now we prove an upper bound on \(\mathcal {D}(A^{\sigma ,\eta }(s,t))\) that is independent of \(\eta ,\) generalizing [22, Lemma 4.2].
Lemma 4.3
There is a constant \(C>0\) such that
where \(s^\sigma _{ij}(t)\) are the entries of the matrix \(S^\sigma (t)\) defined in (1.1).
Proof
By the assumptions (A1) and (A2) on p. 2 the operator \({{\mathrm{ad}}}_X:\mathfrak m\rightarrow \mathfrak m\) is lower triangular for all \(X\in \mathfrak v\) and \(\Phi ^\sigma (t)|_M=S^\sigma (t),\) where \(S^\sigma (t)\) is a linear operator on \(\mathfrak m\) that is lower triangular in the \(Y_i\) basis. We omit the t and \(\sigma \) dependence for the sake of simplicity. In the coordinates defined by the \(Y_i\) basis,
where \(X_o\) is a \((d_1-1)\times (d_1-1)\)-matrix and v is a \((d_1-1)\times 1\)-column vector.
Then
Then
where
Hence,
where
From Lemma 4.2,
The determinant on the right is non-negative since it is the \(s_{d_1d_1}=0\) case of formula (4.1). Hence,
Our result follows by induction. \(\square \)
Now we estimate the operator norm of the matrix
Recall that we assume that \(S^\sigma \) is lower triangular [assumption (A2)]. Specifically, for \(i\ge j,\)
where \(h_{ij}^M\) are polynomials in \(a\in A=\mathbb {R}^k,\) and \(h_{jj}^M=1.\)
Lemma 4.4
Let \(\eta =\eta (u)=(\eta _1(u),\ldots ,\eta _{d_2}(u))\in \mathbb {R}^{d_2}=V\) be a continuous function. There exist constants \(C>0\) and \(k_o\in \mathbb {N},\) such that
where
Proof
We note first that for \(X\in \mathfrak {n},\)
Hence,
Since all norms on a finite-dimensional vector space are equivalent we get
Our result follows by bringing the norm inside the integral in (4.2). \(\square \)
5 Evolution on V
Now we consider the evolution process \(\eta (t)\) on V generated by
(see (1.1) on p. 4). The matrix \(a_t=[a_{ij}(t)]\) defined in (2.1) is equal to
Let
One of the differences between this setting and the case of meta-abelian groups considered in [23] is that the coordinates \(\eta _j(t),\) \(j=1,\ldots ,d_2,\) of the process \(\eta (t)\) are no longer independent.
Notice that exactly in the same way as in the proof of Lemma 4.3 we can show that
We have the following inequality, analogous to (4.3),
which implies (as in Lemma 4.4)
Hence in the notation introduced in (1.3) and (1.5), using Lemma 4.1, Corollary 3.2 reads,
Proposition 5.1
For every \(T>0\) there exist constants \(C,c>0\) such that, for all \(T\ge t\ge \tau \ge 0\), all \(u>0\), and all \(\varepsilon >0\), the following estimate holds,
for all \(u>\sup _{a\in B_\varepsilon (v)}\Theta (\tau ,t,a)+C\sum _{j=1}^{d_2}V_j(\tau ,t)^{1\slash 2},\) where
6 Proof of Theorem 1.2
In this section we estimate the transition kernel for the evolution on \(N=M\rtimes V,\)
Proof of Theorem 1.2
We allow the constants C and D to change from line to line. By Lemmas 4.1 and 4.3, for \(t\ge \tau \) and \(m,m^\prime \in M,\)
By Theorem 1.1, for \(m,m^\prime \in M\) and \(v,v^\prime \in V\),
Let
where
Then, by Lemma 4.4,
For \(v\in \mathbb {R}^{d_2}\) given and \(\varepsilon >0\), let
where
We will estimate (6.2) with \(\psi _\varepsilon \) in place of \(\psi \) as \(\varepsilon \) tends to zero.
Let \(\mathbf{E}_{\tau ,v}^\eta \) denote expectation with respect to the distribution \(d\mathbf {W}_{\tau ,v}^{V,\sigma }(\eta )\) of \(\eta \) in the space of trajectories (\(\eta (\tau )=v\in V\)). For \(\ell =1,2,\cdots ,\) define the sets of paths in V,
The integral on the right in (6.2) can be written as an infinite sum and estimated as follows
for some \(k_o\ge 1.\)
To simplify notation we introduce
Let \(v\not =0\) and choose \(\varepsilon \slash 2<\Vert v\Vert _\infty \). If \(\eta \in \mathcal {A}_\ell \) and \(\eta (t)\in B_\varepsilon (v)\) then \(\Vert \eta (t)\Vert _\infty \ge \Vert v\Vert _\infty -\varepsilon \slash 2\). Hence,
Let
and set
Obviously, \(\lceil \tilde{\Theta }(\tau ,t,v)\rceil >\Vert v\Vert _\infty -\varepsilon \slash 2\) for sufficiently small \(\varepsilon .\) Since \(\ell ^*_\varepsilon \ge \lceil \tilde{\Theta }(\tau ,t,v)\rceil \) we get that \(\ell ^*_\varepsilon >\Vert v\Vert _\infty -\varepsilon \slash 2\) for sufficiently small \(\varepsilon .\) Hence,
We estimate \(I_1.\) Since
where
Now we consider \(I_2.\) In this case to estimate \(\mathcal E_\ell (\varepsilon )\) we use Proposition 5.1,
Now we estimate the series above. We split the sum above into two parts: \(\ell ^*_\varepsilon \le \ell \le \ell ^*_\varepsilon +\Vert m\Vert ^{1\slash 2k_o}\) and \(\ell >\ell ^*_\varepsilon +\Vert m\Vert ^{1\slash 2k_o},\) and estimate the corresponding parts by the following two terms [we note that if \(\varepsilon \rightarrow 0\) then \(\ell ^*_\varepsilon \rightarrow \tilde{\Theta }(\tau ,t,v)\)]:
and
The theorem follows. \(\square \)
7 Estimates for quantities (1.8) and (1.7)
In order to apply the estimate given in Theorem 1.2 to some particular problem one needs to control the quantities (1.8) and (1.7) appearing there. The aim of this section is to give some estimates for them which make the bound in Theorem 1.2 explicit.
We will need the following classical bound for the norm of the inverse matrix, which is due to Richter [25] (see also [18] for a different proof). Recall that for a matrix A, \(\Vert A\Vert \) stands for the operator norm \(\ell ^2\rightarrow \ell ^2.\)
Theorem 7.1
Let A be a nonsingular \(n\times n\)-matrix. Then
$$\begin{aligned} \Vert A^{-1}\Vert \le \frac{\Vert A\Vert ^{n-1}}{|\det A|}. \end{aligned}$$
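In the operator-norm form \(\Vert A^{-1}\Vert \le \Vert A\Vert ^{n-1}\slash |\det A|\) (our reading of the Richter–Mirsky bound, which follows from \(\Vert A^{-1}\Vert =\sigma _1\cdots \sigma _{n-1}\slash |\det A|\) in terms of ordered singular values), the inequality can be checked numerically:

```python
import numpy as np

# Verify ||A^{-1}|| <= ||A||^{n-1} / |det A| (spectral norm) on random matrices.
rng = np.random.default_rng(3)
ratios = []
for _ in range(100):
    n = int(rng.integers(2, 7))            # random size between 2 and 6
    A = rng.normal(size=(n, n))            # Gaussian matrix, a.s. nonsingular
    lhs = np.linalg.norm(np.linalg.inv(A), 2)
    rhs = np.linalg.norm(A, 2) ** (n - 1) / abs(np.linalg.det(A))
    ratios.append(lhs / rhs)
print(max(ratios))                         # never exceeds 1
```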
For the matrix \(A_V^\sigma (\tau ,t),\) defined in (1.4) on p. 5, we have by (5.2),
where \(\mathcal T(\tau ,t)\) is defined in (1.5). Also note that \(\mathcal T(\tau ,t)\) is an increasing function of t. Hence, the following is a corollary of Theorem 7.1. The notation below is as in (1.5).
Lemma 7.2
There is a constant \(c>0\) such that for every \(t\ge \tau \ge 0\) and for every \(v\in \mathbb {R}^{d_2},\)
Proof
By Theorem 7.1,
Now (5.1) finishes the proof. \(\square \)
Lemma 7.3
There is a positive constant c such that for every \(t\ge \tau \ge 0\) and for every \(v\in \mathbb {R}^{d_2},\)
Proof
Since all norms on a finite-dimensional vector space are equivalent we get, by (1.3), for every j,
From the proof of Lemma 7.2
Thus
Hence the sum \(\sum _jV_j(\tau ,t)^{1\slash 2}\) satisfies the same estimate (with a different constant). This, together with the estimate obtained in Lemma 7.2, finishes the proof. \(\square \)
From the proof of Lemma 7.3 we have the following corollary.
Lemma 7.4
There is \(c>0\) such that for all \(t\ge \tau >0,\)
References
Adler, R.J., Taylor, J.E.: Random fields and geometry. In: Springer Monographs in Mathematics. Springer, New York (2007)
Aronson, D.G.: Bounds for the fundamental solution of a parabolic equation. Bull. Am. Math. Soc. 73, 890–896 (1967)
Borell, C.: The Brunn–Minkowski inequality in Gauss space. Invent. Math. 30(2), 207–216 (1975)
van Casteren, J.A.: Feller semigroups and evolution equations. In: Series on Concrete and Applicable Mathematics, vol. 12. World Scientific Publishing Co., Pte. Ltd., Hackensack (2011)
Cho, S., Kim, P., Park, H.: Two-sided estimates on Dirichlet heat kernels for time-dependent parabolic operators with singular drifts in \(C^{1,\alpha }\)-domains. J. Differ. Equ. 252(2), 1101–1145 (2012)
Damek, E.: Left-invariant degenerate elliptic operators on semidirect extensions of homogeneous groups. Stud. Math. 89(2), 169–196 (1988)
Damek, E., Hulanicki, A.: Boundaries for left-invariant subelliptic operators on semidirect products of nilpotent and abelian groups. J. Reine Angew. Math. 411, 1–38 (1990)
Damek, E., Hulanicki, A.: Maximal functions related to subelliptic operators invariant under an action of a solvable Lie group. Stud. Math. 101(1), 33–68 (1991)
Damek, E., Hulanicki, A., Urban, R.: Martin boundary for homogeneous Riemannian manifolds of negative curvature at the bottom of the spectrum. Rev. Mat. Iberoam. 17(2), 257–293 (2001)
Damek, E., Hulanicki, A., Zienkiewicz, J.: Estimates for the Poisson kernels and their derivatives on rank one \(NA\) groups. Stud. Math. 126(2), 115–148 (1997)
Fabes, E.B., Stroock, D.W.: A new proof of Moser’s parabolic Harnack inequality using the old ideas of Nash. Arch. Ration. Mech. Anal. 96(4), 327–338 (1986)
Hofmann, S., Kim, S.: Gaussian estimates for fundamental solutions to certain parabolic systems. Publ. Mat. 48(2), 481–496 (2004)
Kim, S.: Gaussian estimates for fundamental solutions of second order parabolic systems with time-independent coefficients. Trans. Am. Math. Soc. 360(11), 6031–6043 (2008)
Lierl, J., Saloff-Coste, L.: Parabolic Harnack inequality for time-dependent non-symmetric Dirichlet forms (2012). arXiv:1205.6493
Liskevich, V., Semenov, Y.: Estimates for fundamental solutions of second-order parabolic equations. J. Lond. Math. Soc. (2) 62(2), 521–543 (2000)
Malliavin, P.: Géométrie Différentielle Stochastique. Séminaire de Mathématiques Supérieures Été 1977, No. 64. Les Presses de l'Université de Montréal (1978)
Malliavin, M.P., Malliavin, P.: Factorisation et lois limites de la diffusion horizontale au-dessus d’un espace riemannien symétrique. In: Vidal, E. (ed.) Lecture Notes in Mathematics, vol. 404, pp. 164–217. Springer, New York (1974)
Mirsky, L.: The norms of adjugate and inverse matrices. Arch. Math. (Basel) 7, 276–277 (1956)
Pazy, A.: Semigroups of linear operators and applications to partial differential equations. In: Marsden, J.E., Sirovich, L., John, F. (eds.) Applied Mathematical Sciences, vol. 44. Springer, New York (1983)
Penney, R., Urban, R.: Estimates for the Poisson kernel on higher rank \(NA\) groups. Colloq. Math. 118(1), 259–281 (2010)
Penney, R., Urban, R.: An upper bound for the Poisson kernel on higher rank \(NA\) groups. Potential Anal. 35(4), 373–386 (2011)
Penney, R., Urban, R.: Estimates for the Poisson kernel and the evolution kernel on the Heisenberg group. J. Evol. Equ. 12(2), 327–351 (2012)
Penney, R., Urban, R.: Estimates for the Poisson kernel and the evolution kernel on nilpotent meta-abelian groups. Stud. Math. 219(1), 69–96 (2013)
Penney, R., Urban, R.: Poisson kernels on nilpotent, 3-meta-abelian groups. Ann. Mat. Pura Appl. (2014). doi:10.1007/s10231-014-0463-x
Richter, H.: Bemerkung zur Norm der Inversen einer Matrix. Arch. Math. (Basel) 5, 447–448 (1954)
Samorodnitsky, G.: Probability Tails of Gaussian Extrema. Technical Report No. 864. Cornell University, Ithaca (1989). https://ecommons.library.cornell.edu/bitstream/1813/8748/1/TR000864. Accessed 25 July 2015
Stroock, D.W., Varadhan, S.R.S.: Multidimensional Diffusion Processes. Classics in Mathematics. Reprint of the 1997 edition. Springer, Berlin (2006)
Tanabe, H.: Equations of evolutions. Translated from the Japanese by N. Mugibayashi and H. Haneda. In: Monographs and Studies in Mathematics, vol. 6. Pitman, London (1979)
Taylor, J.C.: Skew products, regular conditional probabilities and stochastic differential equations: a technical remark. In: Séminaire de Probabilités XXVI. Lecture Notes in Math., vol. 1526, pp. 299–314. Springer, New York (1992)
Tsirelson, B.S., Ibragimov, I.A., Sudakov, V.N.: Norms of Gaussian sample functions. In: Proceedings of the Third Japan-USSR Symposium on Probability Theory (Tashkent, 1975). Lecture Notes in Math., vol. 550, pp. 20–41. Springer, Berlin (1976)
Yosida, K.: On the fundamental solution of the parabolic equation in a Riemannian space. Osaka Math. J. 5(1), 65–74 (1953)
Zhang, F.: Matrix theory. Basic results and techniques. In: Axler, S., Capasso, V., Casacuberta, C., MacIntyre, A., Ribet, K., Sabbah, C., Süli, E., Woyczyński, W.A. (eds.) Universitext. Springer, New York (1999)
Acknowledgments
The authors would like to thank Prof. Krzysztof Dȩbicki for pointing out Ref. [1] and for discussions on the Borell–TIS inequality. The authors also wish to thank the anonymous referee for a very careful reading of the manuscript and several remarks that improved the overall presentation of the results.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Penney, R., Urban, R. Gaussian-type upper bound for the evolution kernels on nilpotent meta-abelian groups. Positivity 20, 257–281 (2016). https://doi.org/10.1007/s11117-015-0353-5
Keywords
- Left invariant differential operators
- Time-dependent parabolic operators
- Brownian motion
- Evolution kernel
- Diffusion process
- Meta-abelian nilpotent Lie groups
- Borell–TIS inequality