Journal of Algebraic Combinatorics, Volume 38, Issue 4, pp 901–913

Logarithmic derivatives and generalized Dynkin operators
Abstract

Motivated by a recent surge of interest in Dynkin operators in mathematical physics and by problems in the combinatorial theory of dynamical systems, we propose here a systematic study of logarithmic derivatives in various contexts. In particular, we introduce and investigate generalizations of the Dynkin operator for which we obtain Magnus-type formulas.

Keywords

Dynkin operator · Free Lie algebras · Logarithmic derivative · Rota–Baxter algebra

1 Introduction

Dynkin operators are usually defined as iterated bracketings. They are particularly popular in the framework of linear differential equations and the so-called continuous Baker–Campbell–Hausdorff problem (to compute the logarithm of an evolution operator). We refer to [17] for details and a historical survey of the field. Dynkin operators can also be expressed as a particular type of logarithmic derivative (see Corollary 3 below). They have attracted increasing interest in recent years, for various reasons.
  1. In the theory of free Lie algebras and noncommutative symmetric functions, it was shown that Dynkin operators generate the descent algebra (the direct sum of Solomon’s algebras of type A, see [9, 17]) and play a crucial role in the theory of Lie idempotents.

  2. Generalized Dynkin operators can be defined in the context of classical Hopf algebras [16]. The properties of these operators generalize the classical ones. Among others (see also [6, 7]), they can be used to derive fine properties of the renormalization process in perturbative quantum field theory (pQFT): Dynkin operators can be shown to give rise to the infinitesimal generator of the differential equation satisfied by renormalized Feynman rules [5]. This phenomenon has attracted attention to logarithmic derivatives in pQFT, where generalized Dynkin operators are expected to lead to a renewed understanding of Dyson–Schwinger-type equations, see e.g. [11].

The present article originated from different problems, namely problems in the combinatorial theory of dynamical systems (often referred to as Ecalle’s mould calculus, see e.g. [18], also for further references on the subject) and in particular in the theory of normal forms of vector fields. It soon appeared to us that the same machinery that had been successfully used in the above-mentioned fields and problems was also relevant for the study of dynamical systems. However, the particular form of logarithmic derivatives showing up in this field requires the generalization of the known results on logarithmic derivatives and Dynkin operators to a broader framework: we refer to Sect. 3.1 of the present article for more details on the subject.

The purpose of the present article is therefore to develop further the algebraic and combinatorial theory of logarithmic derivatives, following various directions and perspectives (free Lie algebras, Hopf algebras, Rota–Baxter algebras, …), all of them known to be relevant for the study of dynamical systems but also to various other fields, ranging from the numerical analysis of differential equations (see e.g. [12, 13]) to pQFT.

We limit the scope of the present article to the general theory and plan to use the results in forthcoming articles.

2 Twisted Dynkin operators on free Lie algebras

Let T(X) be the tensor algebra over an alphabet X={x1,…,xn,…}. It is graded by the length of words: the degree n component \(T_n(X)\) of T(X) is the linear span (say over the rationals) of the words \(y_1\ldots y_n\), \(y_i\in X\). The length, n, of \(y_1\ldots y_n\) is written \(l(y_1\ldots y_n)\). It is equipped with the structure of a graded connected cocommutative Hopf algebra by the concatenation product:
$$\mu(y_1\ldots y_n\otimes z_1\ldots z_m)=y_1\ldots y_n\cdot z_1 \ldots z_m:=y_1\ldots y_nz_1 \ldots z_m $$
and the unshuffling coproduct:
$$\varDelta(y_1\ldots y_n):=\sum _{p=0}^n\sum_{I\amalg J=[n]}y_{i_1} \ldots y_{i_p}\otimes y_{j_1}\ldots y_{j_{n-p}}, $$
where \(1\leq i_1<i_2<\cdots<i_p\leq n\), \(1\leq j_1<j_2<\cdots<j_{n-p}\leq n\) and \(I=\{i_1,\ldots,i_p\}\), \(J=\{j_1,\ldots,j_{n-p}\}\). The antipode is given by \(S(y_1\ldots y_n)=(-1)^n y_n\ldots y_1\), where \(y_i\in X\). We refer e.g. to [17] for further details on the subject and general properties of the tensor algebra viewed as a Hopf algebra and recall simply that the antipode is the convolution inverse of the identity map of T(X): \(S\ast\mathrm{Id}=\mathrm{Id}\ast S=\varepsilon\), where ε is the projection on the scalars \(T_0(X)\) orthogonally to the higher degree components \(T_n(X)\), n>0, and the convolution product \(f\ast g\) of two linear endomorphisms of T(X) (and, more generally, of a Hopf algebra with coproduct Δ and product μ) is given by \(f\ast g:=\mu\circ(f\otimes g)\circ\varDelta\). In particular, \(S\ast\mathrm{Id}\) is the null map on \(T_n(X)\) for n>0.

We write δ for an arbitrary derivation of T(X) (in particular, δ acts as the null map on the scalars, \(T_0(X)\)). The simplest and most common derivations are induced by maps f from X to its linear span: the associated derivation, written \(\tilde{f}\), is then defined by \({\tilde{f}}(y_{1}\ldots y_{n})=\sum_{i=1}^{n}y_{1}\ldots y_{i-1}f(y_{i})y_{i+1}\ldots y_{n}\). Since for \(a,b\in T(X)\), \(\tilde{f}([a,b])=[\tilde{f}(a),b]+[a,\tilde{f}(b)]\), where [a,b]:=ab−ba, these particular derivations are also derivations of the free Lie algebra over X. These derivations (the ones that map the free Lie algebra over X to itself) are the ones we will be most interested in in practice; we call them Lie derivations.

In the particular case f=Id, we also write Y for \(\tilde {\mathrm{Id}}\): Y is the graduation operator, \(Y(y_1\ldots y_n)=n\,y_1\ldots y_n\). When \(f=\delta_{x_{i}}\) (\(f(x_i)=x_i\), \(f(x_j)=0\) for \(j\neq i\)), \(\tilde{f}\) counts the multiplicity of the letter \(x_i\) in words and is the noncommutative analog of the derivative with respect to \(x_i\) of a monomial in the letters in X.

Proposition 1

For arbitrary letters \(y_1,\ldots,y_n\) of X, we have, for \(D_\delta:=S\ast\delta\):
$$D_\delta(y_1\ldots y_n)=\bigl[\ldots\bigl[ \bigl[\delta(y_1),y_2\bigr],y_3\bigr] \ldots,y_n\bigr]. $$
Let us assume, by induction, that the identity holds in degrees <n. Then, with the same notation as the one used to define the coproduct Δ:
$$S\ast\delta(y_1\ldots y_n) = \sum _{p=0}^n(-1)^p y_{i_p}\ldots y_{i_1}\delta(y_{j_1}\ldots y_{j_{n-p}}) $$
where \(I\amalg J=[n]\). Notice that, if \(i_{p}\not=n\), then \(j_{n-p}=n\) (and conversely). Therefore, since δ is a derivation:
$$S\ast\delta(y_1\ldots y_n) = \sum(-1)^p y_{i_p}\ldots y_{i_1}\delta (y_{j_1}\ldots y_{j_{n-1-p}})\cdot y_n+\sum(-1)^p y_{i_p}\ldots y_{i_1}y_{j_1}\ldots y_{j_{n-1-p}}\cdot\delta(y_n)-y_n\cdot\sum(-1)^p y_{i_p}\ldots y_{i_1}\delta(y_{j_1}\ldots y_{j_{n-1-p}}), $$
where the sums run over the partitions \(I\amalg J=[n-1]\).

The first and third terms of the summation sum up to \([S\ast\delta(y_1\ldots y_{n-1}),y_n]\), which is, by induction, equal to \([\ldots[[\delta(y_1),y_2],y_3]\ldots,y_n]\). The second equals \(S\ast\mathrm{Id}(y_1\ldots y_{n-1})\cdot\delta(y_n)\), which is 0 for n>1. The Proposition follows.

Corollary 2

For a Lie derivation δ, the map \(D_\delta\) maps T(X) to \(\mathrm{Lie}(X)\), the free Lie algebra over X. Moreover, we have, for \(l\in\mathrm{Lie}(X)\),
$$D_\delta(l)=\delta(l). $$
The first part of the corollary follows from the previous proposition. To prove the second part, recall that \(l\in\mathrm{Lie}(X)\) if and only if Δ(l)=l⊗1+1⊗l. Notice furthermore that, since \(D_\delta=S\ast\delta\), we have \(\delta=\mathrm{Id}\ast D_\delta\). Therefore:
$$\delta(l)=(\mathrm{Id}\ast D_\delta) (l)=D_\delta(1)\cdot l+D_\delta(l). $$
The proof follows since Dδ(1)=0.

We recover in particular the theorem of Dynkin [3], Specht [19], Wever [20] (case f=Id) and obtain an extension thereof to the case \(f=\delta_{x_{i}}\). We let the reader derive similar results for other families of Lie derivations.

Corollary 3

We have, for the classical Dynkin operator \(D=S\ast Y\) and an arbitrary element l in \(T_n(X)\cap\mathrm{Lie}(X)\):
$$D(l)=n\cdot l. $$
In particular, the operator \(\frac{D}{n}\) is a projection from \(T_n(X)\) onto \(T_n(X)\cap\mathrm{Lie}(X)\).
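Proposition 1 and Corollary 3 can be checked directly on small words. The following is a minimal sketch (our own illustration, not from the article): it models T(X) by linear combinations of words, computes the convolution \(S\ast Y\), and verifies that \(D(y_1y_2y_3)=[[y_1,y_2],y_3]\) (Proposition 1 with f=Id) and that D acts as multiplication by 3 on this degree-3 Lie element (Corollary 3).

```python
# Minimal model of T(X): linear combinations of words as dicts word -> coefficient.
from itertools import combinations

def unshuffle(w):
    """Unshuffling coproduct: all splittings of w into complementary subwords."""
    n = len(w)
    out = []
    for p in range(n + 1):
        for I in combinations(range(n), p):
            J = [k for k in range(n) if k not in I]
            out.append((tuple(w[k] for k in I), tuple(w[k] for k in J)))
    return out

def S(w):                                 # antipode: S(y1...yn) = (-1)^n yn...y1
    return ((-1) ** len(w), w[::-1])

def D(lincomb):                           # Dynkin operator S * Y on a linear combination
    out = {}
    for w, coeff in lincomb.items():
        for left, right in unshuffle(w):
            sign, rev = S(left)
            word = rev + right
            out[word] = out.get(word, 0) + coeff * sign * len(right)  # Y(right) = |right| right
    return {w: v for w, v in out.items() if v}

def bracket(u, v):                        # [u, v] = uv - vu on linear combinations
    out = {}
    for w1, c1 in u.items():
        for w2, c2 in v.items():
            out[w1 + w2] = out.get(w1 + w2, 0) + c1 * c2
            out[w2 + w1] = out.get(w2 + w1, 0) - c1 * c2
    return {w: v for w, v in out.items() if v}

a, b, c = ({(x,): 1} for x in 'abc')
lie = bracket(bracket(a, b), c)           # [[a,b],c] = abc - bac - cab + cba

print(D({('a', 'b', 'c'): 1}) == lie)                    # True (Proposition 1, f = Id)
print(D(lie) == {w: 3 * k for w, k in lie.items()})      # True (Corollary 3, n = 3)
```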

The definition \(D=S\ast Y\) of the Dynkin operator seems to be due to von Waldenfels, see [17].

Let us write \(T_{n}^{i}(X)\) for the linear span of words over X such that the letter xi appears exactly n times. The derivation \(\tilde{\delta}_{x_{i}}\) acts as the multiplication by n on \(T_{n}^{i}(X)\).

Corollary 4

We have, for \(D_{x_i}:=S\ast\tilde{\delta}_{x_{i}}\) and an arbitrary element l in \(T_{n}^{i}(X)\cap\mathrm{Lie}(X)\):
$$D_{x_i}(l)=n\cdot l. $$
In particular, the operator \(\frac{D_{x_{i}}}{n}\) is a projection from \(T_{n}^{i}(X)\) onto \(T_{n}^{i}(X)\cap\mathrm{Lie}(X)\).

3 Abstract logarithmic derivatives

Quite often, the logarithmic derivatives one is interested in arise from dynamical systems and geometry. We explain briefly why this is so in the context of a fundamental example, the classification of singular vector fields (Sect. 3.1). Although we set our later computations in the general framework of Lie and enveloping algebras, the reader may want to keep this motivation in mind.

Section 3.2 then shows briefly how to extend the results on generalized Dynkin operators obtained previously in the tensor algebra to the general setting of enveloping algebras.

We show finally (Sect. 3.3) how these results connect to the theory of Rota–Baxter algebras, which is known to be the right framework to investigate formal properties of derivations. Indeed, as we will recall below, Rota–Baxter algebra structures show up naturally when derivations have to be inverted. See also [1, 6, 7] for further details on the subject of Rota–Baxter algebras and their applications.

3.1 An example from the theory of dynamical systems

Derivations on graded complete Lie algebras appear naturally in the framework of dynamical systems, especially when dealing with the formal classification (up to formal change of coordinates) of singular vector fields.

The reader is referred to [10] for an overview and further details on the objects we consider (such as identity-tangent diffeomorphisms or substitution automorphisms); let us also mention that the reader who is interested only in formal aspects of logarithmic derivatives may skip the current section.

A formal singular vector field in dimension ν is an operator:
$$X=\sum_{i=1}^{\nu} f_i(x_1, \dots,x_{\nu})\frac{\partial}{\partial x_i} $$
such that \(f_i(0)=0\) for all i (that is, \(f_i\in\mathbb{C}_{\geq1}[[x_1,\dots,x_{\nu}]]\)). Such operators act on the algebra of formal series in ν variables. In practice, a vector field is given by a series of operators such as \(x_{1}^{n_{1}} \dots x_{\nu}^{n_{\nu}} \frac{\partial}{\partial x_{i}}\) with \(n_1+\cdots+n_\nu\geq1\) that act on monomials \(x_{1}^{m_{1}} \dots x_{\nu}^{m_{\nu}}\):
$$\biggl( x_1^{n_1} \dots x_{\nu}^{n_{\nu}} \frac{\partial}{\partial x_i} \biggr) x_1^{m_1} \dots x_{\nu}^{m_{\nu}} = m_i\cdot x_1^{m_1 + n_1} \dots x_i^{m_i + n_i - 1} \dots x_{\nu}^{m_{\nu} + n_{\nu}} $$
so that the total degree goes from \(m_1+\cdots+m_\nu\) to \(m_1+\cdots+m_\nu+n_1+\cdots+n_\nu-1\) and the graduation of such an operator is then \(n_1+\cdots+n_\nu-1\).
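To make the degree count concrete, here is a small sympy check (our own example, not from the article): the operator \(x_1^2x_2\,\partial/\partial x_2\), i.e. n=(2,1) and i=2, applied to the monomial \(x_1^3x_2^4\).

```python
# Small check (our example) of the action of a homogeneous operator
# x1^{n1} x2^{n2} d/dx_i on a monomial, and of the induced degree shift.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

op = lambda f: x1**2 * x2 * sp.diff(f, x2)   # n = (2, 1), i = 2: graduation degree 2+1-1 = 2

m = x1**3 * x2**4                            # total degree 7
print(sp.expand(op(m)))                      # 4*x1**5*x2**4, total degree 9 = 7 + 2
```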
The degree 0 component of a vector field X is called the linear part, since it can be written \(X_{0}=\sum A_{ij}x_{i}\frac{\partial}{\partial x_{j}}\), and a fundamental question in dynamical systems is to decide if X is conjugate, up to a change of coordinates, to its linear part X0. Notice that:
  • The vector space L of vector fields without linear part (or without component of graduation degree 0) is a graded complete Lie algebra. The vector space of linear vector fields is written L0.

  • The exponential of a vector field in L gives a one-to-one correspondence between vector fields and substitution automorphisms on formal power series, that is, operators F such that
    $$F\bigl(A(x_1,\dots,x_\nu)\bigr)=A\bigl(F(x_1), \dots,F(x_{\nu})\bigr) $$
    where (F(x1),…,F(xν)) is a formal identity-tangent diffeomorphism.
  • The previous equation also determines an isomorphism between the Lie group of L and the group of formal identity-tangent diffeomorphisms G1.

These are essentially the framework (the one of graded complete Lie algebras) and the objects (elements of the corresponding formal Lie groups) that we will consider and investigate in our forthcoming developments.
Consider now a vector field \(X=X_0+Y\in L_0\oplus L\) and suppose that it can be linearized by a change of coordinates in G1, or rather by a substitution automorphism F in the Lie group of L (see [8, 10]). The corresponding conjugacy equation reads
$$X_0 F=F(X_0 +Y) \Longleftrightarrow[X_0,F]=FY \Longleftrightarrow \mathrm{ad}_{X_0}(F)=FY. $$
This equation, called the homological equation, delivers a derivation \(\delta=\mathrm{ad}_{X_{0}}\) on L that is compatible with the graduation. The linearization problem is then obviously related to the inversion of the logarithmic derivative \(D_\delta(F):=F^{-1}\delta(F)\).

In the framework of dynamical systems, the forthcoming Theorem 15 ensures that if the derivation \(\mathrm{ad}_{X_{0}}\) is invertible on L, any vector field X0+Y can be linearized. This is the kind of problem that can be addressed using the general theory of logarithmic derivatives to be developed in the next sections.

3.2 Hopf and enveloping algebras

We use freely in this section the results in [16] to which we refer for further details and proofs. The purpose of this section is to extend the results in [16] on the Dynkin operator to more general logarithmic derivatives.

Let \(L=\bigoplus_{n\in\mathbf{N}^{\ast}}L_{n}\) be a graded Lie algebra, \(\hat{L}=\prod_{n\in\mathbf{N}^{\ast}}L_{n}\) its completion, \(U(L)=\bigoplus_{n\in\mathbf{N}^{\ast}}U(L)_{n}\) the (graded) enveloping algebra of L and \(\hat{U}(L)= \prod_{n\in\mathbf{N}^{\ast}}U(L)_{n}\) the completion of U(L) with respect to the graduation (\(\mathbf{N}^{\ast}\) stands for the set of strictly positive integers).

The ground field is chosen to be Q (but the results in the article would hold for an arbitrary ground field of characteristic zero and, due to the Cartier–Milnor–Moore theorem [14, 15], for arbitrary graded connected cocommutative Hopf algebras).

The enveloping algebra U(L) is naturally provided with the structure of a Hopf algebra. We denote by \(\epsilon:\mathbf{Q}=U(L)_0\to U(L)\) the unit of U(L), by \(\eta:U(L)\to\mathbf{Q}\) the counit, by \(\varDelta:U(L)\to U(L)\otimes U(L)\) the coproduct and by \(\mu:U(L)\otimes U(L)\to U(L)\) the product. An element l of U(L) is primitive if Δ(l)=l⊗1+1⊗l; the set of primitive elements identifies canonically with L. Recall that the convolution product ∗ of linear endomorphisms of U(L) is defined by \(f\ast g=\mu\circ(f\otimes g)\circ\varDelta\); \(\nu:=\epsilon\circ\eta\) is the neutral element of ∗. The antipode is written S, as usual.

Definition 5

An element f of End(U(L)) admits \(F\in\mathrm{End}(U(L))\otimes\mathrm{End}(U(L))\) as a pseudo-coproduct if \(F\circ\varDelta=\varDelta\circ f\). If f admits the pseudo-coproduct \(f\otimes\nu+\nu\otimes f\), we say that f is pseudo-primitive.

In general, an element of End(U(L)) may admit several pseudo-coproducts. However, this concept is very flexible, as the following result shows [16, Thm. 2].

Proposition 6

If f, g admit the pseudo-coproducts F, G and \(\alpha\in\mathbf{Q}\), then f+g, αf, \(f\circ g\), \(f\ast g\) admit, respectively, the pseudo-coproducts F+G, αF, \(F\circ G\), \(F\ast G\), where the products ∘ and ∗ are naturally extended to \(\mathrm{End}(U(L))\otimes\mathrm{End}(U(L))\).

An element \(f\in\mathrm{End}(U(L))\) takes values in \(\mathrm{Prim}(U(L))\) if and only if it is pseudo-primitive.

Let δ be an arbitrary derivation of L (\(\forall l,l'\in L\), \(\delta([l,l'])=[\delta(l),l']+[l,\delta(l')]\)). We also write δ for its unique extension to a derivation of U(L) and write \(D_\delta:=S\ast\delta\). For an element \(l\in L\), exp(l) is group-like (\(\varDelta(\exp(l))=\exp(l)\otimes\exp(l)\)), from which it follows that
$$D_\delta\bigl(\exp(l)\bigr)=S\bigl(\exp(l)\bigr)\delta\bigl(\exp(l) \bigr)=\exp(-l)\delta\bigl(\exp(l)\bigr), $$
the (noncommutative) logarithmic derivative of exp(l) with respect to δ. We call therefore Dδ the logarithmic derivative corresponding to δ.

Proposition 7

The logarithmic derivative \(D_\delta\) is pseudo-primitive: it maps U(L) to L.

Indeed, \(S\otimes S\) is a pseudo-coproduct for S (see [16]). On the other hand, U(L) is spanned by products \(l_1\dots l_n\) of elements of L. Since δ is a derivation, we get
$$\varDelta\circ\delta(l_1\dots l_n)=\varDelta\Biggl(\sum _{i=1}^nl_1\dots \delta (l_i)\dots l_n\Biggr)=(\delta\otimes\mathrm{Id}+ \mathrm{Id}\otimes\delta )\circ\varDelta(l_1\dots l_n), $$
where the last identity follows directly from the fact that the \(l_i\) are primitive, which implies that \(\varDelta(l_1\dots l_n)\) can be computed by the same formula as the one for the coproduct in the tensor algebra. In particular, \(\delta\otimes\mathrm{Id}+\mathrm{Id}\otimes\delta\) is a pseudo-coproduct for δ. We get finally
$$(S\otimes S)\ast(\delta\otimes \mathrm{Id}+\mathrm{Id}\otimes \delta)=(S\ast\delta)\otimes(S\ast \mathrm{Id})+(S\ast \mathrm{Id})\otimes(S\ast\delta)=D_\delta\otimes\nu+\nu \otimes D_\delta, $$
from which the proof follows.

Proposition 8

For \(l\in L\), we have \(\delta(l)=D_\delta(l)\). In particular, when δ is invertible on L, \(\delta^{-1}\circ D_\delta\) is a projection from U(L) onto L.

The proof is similar to the one in the free Lie algebra. We have \(D_\delta(l)=(S\ast\delta)(l)=\mu\circ(S\otimes\delta)\circ\varDelta(l)=\mu\circ(S\otimes\delta)(l\otimes1+1\otimes l)=\mu(S(l)\otimes\delta(1)+S(1)\otimes\delta(l))=\delta(l)\), since δ(1)=0 and S(1)=1.

3.3 Integro-differential calculus

The notations are the same as in the previous section, but we assume now that the derivation δ is invertible on L and extends to an invertible derivation on \(U(L)^+:=\bigoplus_{n\geq1}U(L)_n\). The simplest example is provided by the graduation operator Y(l)=nl for \(l\in L_n\) (resp. Y(x)=nx for \(x\in U(L)_n\)). This includes the particular case, generic for various applications to the study of dynamical systems, where L is the graded Lie algebra of polynomial vector fields spanned linearly by the \(x^n\partial_x\) and \(\delta:=x\partial_x\) acting on \(P(x)\partial_x\) as \(\delta(P(x)\partial_x):=xP'(x)\partial_x\).

We are interested in the linear differential equation
$$ \delta\phi=\phi\cdot x,\quad x\in L,\ \phi\in1+U(L)^+. $$
(1)
The inverse of δ is written R and satisfies, on U(L)+, the identity:
$$R(x)R(y)=R\bigl(R(x)y\bigr)+R\bigl(xR(y)\bigr), $$
that follows immediately from the fact that δ is an invertible derivation. In other terms, U(L)+ is provided by R with the structure of a weight 0 Rota–Baxter algebra and solving (1) amounts to solving the so-called Atkinson recursion:
$$\phi=1+R(\phi\cdot x). $$
We refer to [6, 7] for a detailed study of the solutions to the Atkinson recursion and further references on the subject. Perturbatively, the solution is given by the Picard series:
$$\phi= 1+\sum_{n\geq1}R^{[n]}(x), $$
where \(R^{[1]}(x)=R(x)\) and \(R^{[n]}(x):=R(R^{[n-1]}(x)\cdot x)\).
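In the commutative setting, this machinery is easy to test (our illustration, not from the article): take R = integration from 0, a weight 0 Rota–Baxter operator, and a scalar function x(t); the Picard series then reproduces the exponential solution of \(\delta\phi=\phi\cdot x\).

```python
# Illustration: with R = integration from 0 (a weight-0 Rota-Baxter operator)
# and x(t) = t, the Picard series 1 + sum R^[n](x) reproduces the solution
# exp(t^2/2) of phi' = phi * x, up to the order of truncation.
import sympy as sp

t = sp.symbols('t')
R = lambda f: sp.integrate(f, (t, 0, t))

x = t
terms = [R(x)]                       # R^[1](x) = t^2/2
for _ in range(4):
    terms.append(R(terms[-1] * x))   # R^[n+1](x) = R(R^[n](x) * x)

phi = 1 + sum(terms)
exact = sp.exp(t**2 / 2)
# the truncated Picard series and the exact solution agree to high order:
print(sp.series(phi - exact, t, 0, 11))
```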
Since the restriction to weight 0 is not necessary for our forthcoming computations, we restate the problem in a more general setting and assume from now on that R is a weight θ Rota–Baxter (RB) map on \(U(L)^+\), the augmentation ideal of the enveloping algebra of a graded Lie algebra. That is, we assume that
$$R(x)R(y)=R\bigl(R(x)y\bigr)+R\bigl(xR(y)\bigr)-\theta R(xy). $$
This assumption vastly extends the scope of our forthcoming results, since the setting of Rota–Baxter algebras of arbitrary weight includes, among others, a natural link to renormalization in perturbative quantum field theory and to difference calculus, the latter setting being relevant to the study of diffeomorphisms in the field of dynamical systems. We refer in particular to the various works of K. Ebrahimi-Fard on the subject (see e.g. [1, 7] for various examples of RB structures and further references).
We assume furthermore from now on in this section that R respects the graduation and that L is invariant under the bilinear map
$$x\bullet_Ry:=R(x)y-yR(x)+\theta yx. $$
When R is the inverse of δ as above, \(x\bullet_R y=[R(x),y]\) and this condition is automatically satisfied. In the general case, \(\bullet_R\) is a left preLie product appearing naturally in the computation of the action of the Dynkin operator in the framework of Rota–Baxter algebras, see [6, 7] for details.
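For a concrete example of weight θ≠0 (our own, not from the article): on n×n matrices, the projection onto the upper-triangular part (diagonal included), taken along the strictly lower-triangular matrices, is a Rota–Baxter map of weight θ=1, as a quick numerical check confirms.

```python
# Numerical check (our illustration): on n x n matrices, R(x) = upper-triangular
# part of x (diagonal included) satisfies the weight theta = 1 identity
# R(x)R(y) = R(R(x)y) + R(xR(y)) - theta*R(xy).
import numpy as np

rng = np.random.default_rng(0)
R = lambda m: np.triu(m)             # projection along strictly lower-triangular matrices
theta = 1.0

x, y = rng.standard_normal((2, 4, 4))
lhs = R(x) @ R(y)
rhs = R(R(x) @ y) + R(x @ R(y)) - theta * R(x @ y)
print(np.allclose(lhs, rhs))         # True
```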

Lemma 9

The solution to the Atkinson recursion is a group-like element in \(1+U(L)^+\). In particular, \(S(\phi)=\phi^{-1}\).

Recall from [16] and [5] that the generalized Dynkin operator \(D:=S\ast Y\) (the convolution of the antipode with the graduation map in U(L)) maps U(L) to L and, more specifically, defines a bijection between the set of group-like elements in \(\hat{U}(L)\) and \(\hat{L}\). The inverse is given explicitly by [5, Thm. 4.1]:
$$D^{-1}(l)=1+\sum_{n\in\mathbf{N}^\ast}\sum _{k_1+\cdots +k_l=n}\frac{l_{k_1}\cdot \cdots\cdot l_{k_l}}{k_1(k_1+k_2)\cdots(k_1+\cdots+k_l)}, $$
where \(l_n\) is the component of \(l\in L\) in \(L_n\). According to [6, Thm. 4.2], \(l:=D(1+\sum_{n\geq1}R^{[n]}(x))\) belongs to L. Moreover, by [6, Thm. 4.3], we also have
$$1+\sum_{n\geq1}R^{[n]}(x)=1+\sum _{n\in\mathbf{N}^\ast}\sum_{k_1+\cdots+k_l=n}\frac{l_{k_1}\cdot\cdots\cdot l_{k_l}}{k_1(k_1+k_2)\cdots(k_1+\cdots+k_l)}, $$
that is, \(1+\sum_{n\geq1}R^{[n]}(x)\) is a group-like element in U(L). The last part of the Lemma is a general property of the antipode acting on a group-like element; the Lemma follows.
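The explicit inverse formula can be tested in the simplest, commutative situation (our own sketch, not from the article): when the graded components commute, \(D^{-1}(l)=\exp(Y^{-1}(l))\), where \(Y^{-1}\) divides the degree-n component by n, and the sum over compositions above can be compared with this closed form.

```python
# Commutative sanity check (our illustration) of the explicit inverse of the
# Dynkin operator: for commuting graded components, D^{-1}(l) = exp(Y^{-1}(l)).
# Degree is modelled by powers of t; l has components l_1 = a, l_2 = b.
import sympy as sp
from math import prod

t, a, b = sp.symbols('t a b')
l = {1: a, 2: b}

def compositions(n):                      # ordered tuples (k1,...,kl) with k1+...+kl = n
    if n == 0:
        return [()]
    return [(k,) + rest for k in range(1, n + 1) for rest in compositions(n - k)]

N = 4
series = sp.Integer(1)
for n in range(1, N + 1):
    for ks in compositions(n):
        if all(k in l for k in ks):
            num = sp.Mul(*[l[k] for k in ks])
            den = prod(sum(ks[:i + 1]) for i in range(len(ks)))
            series += num * t**n / den

expected = sp.series(sp.exp(a * t + b * t**2 / 2), t, 0, N + 1).removeO()
print(sp.expand(series - expected))       # 0
```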

We are interested now in the situation where another Lie derivation d acts on U(L) and commutes with R (or equivalently with δ when R is the inverse of a derivation). A typical situation is given by Schwarz commuting rules between two different differential operators associated to two independent variables.

Theorem 10

Letdbe a graded derivation onU(L) commuting with the weightθRota–Baxter operatorR. Then, forϕa solution of the Atkinson recursion as above, we have
$$D_d(\phi)=\phi^{-1}\cdot d(\phi)=\sum _{n\geq1}R_d^{[n]}(x) $$
with \(I_{d}^{[1]}(x)=d(x)\), \(R_{d}^{[1]}(x)=R(d(x))\), \(I_{d}^{[n+1]}(x)=[R_{d}^{[n]}(x),x]+\theta x\cdot I_{d}^{[n]}(x)\) and \(R_{d}^{[n+1]}(x)=R(I_{d}^{[n+1]}(x))\).
Notice, although we will not make use of this property, that \(I_{d}^{[n+1]}(x)= I_{d}^{[n]}(x)\bullet_{R} x\). In particular, for y the solution to the preLie recursion:
$$y=d(x)+y\bullet_R x, $$
we have \(D_d(\phi)=R(y)\).

The first identity \(D_d(\phi)=\phi^{-1}\cdot d(\phi)\) follows from the definition of the logarithmic derivative \(D_d:=S\ast d\) and from the previous Lemma.

The second is equivalent to \(d(\phi)=\phi\cdot\sum_{n\geq1}R_{d}^{[n]}(x)\), that is,
$$d\bigl(R^{[n]}(x)\bigr)=R_d^{[n]}(x)+\sum _{i=1}^{n-1}R^{[i]}(x)R_d^{[n-i]}(x). $$
For n=1, the equation reads d(R(x))=R(d(x)) and expresses the commutation of d and R.

The general case follows by induction. Let us assume that the identity holds for the components in degree n<p of Dd(ϕ). We summarize in a technical Lemma the main ingredient of the proof. Notice that the Lemma follows directly from the Rota–Baxter relation and the definition of \(R_{d}^{[n+1]}(x)\).

Lemma 11

We have, for n,m≥1:
We have, for the degree p component of \(D_d(\phi)\), using the Lemma to rewrite \(R(R^{[m]}(x)\cdot R_{d}^{[n]}(x)\cdot x)\): the fourth and sixth terms cancel partially and add up to \(R(R^{[p-2]}(x)\cdot x\cdot R_{d}^{[1]}(x))-R(x\cdot R_{d}^{[p-1]}(x))\). The fifth and last terms cancel partially and add up to \(\theta R(x\cdot I_{d}^{[p-1]}(x))-\theta R(R^{[p-2]}(x)\cdot x\cdot I_{d}^{[1]}(x))\). In the end, using the RB identity for the expressions inside brackets, we get finally:
$$d\bigl(R^{[p]}(x)\bigr)=\sum_{k=1}^{p-2}R^{[p-1-k]}(x) \cdot R_d^{[k+1]}(x)+R^{[p-1]}(x)R_d^{[1]}(x)+R_d^{[p]}(x), $$
from which the theorem follows.

4 Magnus-type formulas

The classical Magnus formula relates “logarithms and logarithmic derivatives” in the framework of linear differential equations. That is, it relates explicitly the logarithm log(X(t))=:Ω(t) of the solution to an arbitrary matrix (or operator) differential equation X′(t)=A(t)X(t), X(0)=1 to the infinitesimal generator A(t) of the differential equation:
$$\varOmega'(t)=\frac{\mathrm{ad}_{\varOmega(t)}}{\exp^{\mathrm {ad}_{\varOmega(t)}}-1}A(t)=A(t)+\sum _{n>0}\frac{B_n}{n!}\mathrm{ad}_{\varOmega(t)}^n \bigl(A(t)\bigr), $$
where ad stands for the adjoint representation and the Bn are the Bernoulli numbers.
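A worked example (ours, not from the article): for \(A(t)=E_{12}+tE_{23}\) in the algebra of strictly upper-triangular 3×3 matrices, \([A(s),A(u)]\) is a multiple of the central element \(E_{13}\), so the Magnus series terminates after its second term and \(\exp(\varOmega(t))\) reproduces the exact solution.

```python
# Worked check (our example) of the Magnus expansion for X'(t) = A(t) X(t)
# with A(t) = E12 + t*E23 in the 3x3 nilpotent (Heisenberg) algebra:
# [A(s), A(u)] is central, so the expansion stops at the second term.
import numpy as np

E12 = np.array([[0., 1, 0], [0, 0, 0], [0, 0, 0]])
E23 = np.array([[0., 0, 0], [0, 0, 1], [0, 0, 0]])
E13 = np.array([[0., 0, 1], [0, 0, 0], [0, 0, 0]])

t = 0.7
# Omega(t) = int_0^t A + (1/2) int_0^t int_0^s [A(s), A(u)] du ds
Omega = t * E12 + t**2 / 2 * E23 - t**3 / 12 * E13

# exp(Omega) is exact for a nilpotent matrix (Omega^3 = 0)
X_magnus = np.eye(3) + Omega + Omega @ Omega / 2

# exact solution of X' = A X, X(0) = 1, obtained by solving the triangular system
X_exact = np.eye(3) + t * E12 + t**2 / 2 * E23 + t**3 / 6 * E13

print(np.allclose(X_magnus, X_exact))   # True
```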

The Magnus formula is a useful tool for numerical applications (computing the logarithm of the solution improves the convergence at a given order of approximation). It has recently been investigated and generalized from various points of view, see e.g. [4], where the formula has been extended to general dendriform algebras (i.e. noncommutative shuffle algebras such as the algebras of iterated integrals of operators), [2] where the algebraic structure of the equation was investigated from a purely preLie algebras point of view, or [1] where a generalization of the formula has been introduced to model the commutation of time-ordered products with time-derivations.

The link with preLie algebras follows from the observation that (under the hypothesis that the integrals and derivatives are well-defined), for arbitrary time-dependent operators, the preLie product
$$M(t)\curvearrowleft N(t):=\int_0^t \bigl[N(u),M'(u)\bigr]\,du $$
satisfies \((M(t)\curvearrowleft N(t))'=\mathrm{ad}_{N(t)}M'(t)\). The Magnus formula can therefore be rewritten:
$$\varOmega'(t)= \biggl(\int_0^tA(x) \,dx\curvearrowleft \biggl( \frac {\varOmega }{\exp(\varOmega)-1} \biggr) \biggr)' $$
where \(\frac{\varOmega}{\exp(\varOmega)-1}\) is computed in the enveloping algebra of the preLie algebra of time-dependent operators.

Here, we would like to go one step further and extend the formula to general logarithmic derivatives in the suitable framework in view of applications to dynamical systems and geometry, that is, the framework of enveloping algebras of graded Lie algebras and Lie derivation actions. Notations are as before, that is, L is a graded Lie algebra and d a graded Lie derivation (notice that we do not assume its invertibility on L or U(L)).

Lemma 12

For \(l\in L\), k≥1 and \(x\in U(L)\), we have
$$x\cdot l^k=\sum_{i=0}^k{{k} \choose{ i}}l^{k-i}\cdot (-\mathrm{ad}_{l})^i(x). $$
The proof is by induction on k. The formula holds for k=1: \(x\cdot l=-[l,x]+l\cdot x\). Let us assume that it holds for an arbitrary k<p. The identity follows then from Pascal’s triangular computation of the binomial coefficients.
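Since the identity holds in any associative algebra, it can be spot-checked numerically on matrices (our illustration, with k=3):

```python
# Numerical check (our illustration) of the identity
# x * l^k = sum_i C(k, i) l^{k-i} (-ad_l)^i (x) on matrices, with k = 3.
import numpy as np
from math import comb

rng = np.random.default_rng(1)
x, l = rng.standard_normal((2, 4, 4))

ad = lambda a, b: a @ b - b @ a           # ad_l(x) = [l, x]

k = 3
lhs = x @ np.linalg.matrix_power(l, k)
rhs = np.zeros_like(x)
term = x                                  # (-ad_l)^i (x), starting at i = 0
for i in range(k + 1):
    rhs += comb(k, i) * np.linalg.matrix_power(l, k - i) @ term
    term = -ad(l, term)
print(np.allclose(lhs, rhs))              # True
```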

Theorem 13

For \(l\in L\), we have
$$D_d \bigl(\exp(l)\bigr)=\frac{\exp(-\mathrm{ad}_l)-1}{-\mathrm{ad}_l}d(l). $$
Indeed, from the previous formula we get
$$d\bigl(\exp(l)\bigr)=\sum_{k\geq1}\frac{1}{k!}\sum_{j=0}^{k-1}l^{j}\,d(l)\,l^{k-1-j}=\exp(l)\cdot\sum_{i\geq 0}\frac{(-\mathrm{ad}_l)^i}{(i+1)!}d(l). $$
Since exp(l), being the exponential of a Lie element, is group-like, \(D_d(\exp(l))=\exp(-l)\,d(\exp(l))\) and the theorem follows.
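Theorem 13 can be tested symbolically (our illustration, not from the article) with d=d/dt on a nilpotent matrix Lie algebra, where exponentials and the ad-series are finite:

```python
# Symbolic check (our illustration) of exp(-l) d(exp(l)) = sum_i (-ad_l)^i/(i+1)! d(l)
# for d = d/dt on strictly upper-triangular 3x3 matrices (all series terminate).
import sympy as sp

t = sp.symbols('t')
E12 = sp.Matrix([[0, 1, 0], [0, 0, 0], [0, 0, 0]])
E23 = sp.Matrix([[0, 0, 0], [0, 0, 1], [0, 0, 0]])
I3 = sp.eye(3)

l = t * E12 + t**2 * E23                 # l(t), strictly upper triangular: l**3 = 0
expl = I3 + l + l * l / 2                # exp(l), exact since l is nilpotent
expml = I3 - l + l * l / 2               # exp(-l)

lhs = sp.simplify(expml * expl.diff(t))  # logarithmic derivative exp(-l) d(exp(l))

ad = lambda a, b: a * b - b * a
dl = l.diff(t)
rhs = dl - ad(l, dl) / 2 + ad(l, ad(l, dl)) / 6   # series stops: ad_l^2(d(l)) = 0 here

print(sp.simplify(lhs - rhs))            # zero matrix
```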

Let us show as a direct application of Theorem 13 how to recover the classical Magnus theorem (other applications to mould calculus and to the formal and analytic classification of vector fields are postponed to later works).

Example 14

Let us consider once again an operator-valued linear differential equation \(X'(t)=X(t)\lambda A(t)\), X(0)=1. Notice that the generator A(t) is written to the right, for consistency with our definition of logarithmic derivatives \(D_d=S\ast d\). All our results can of course be easily adapted to the case \(d\ast S\) (in the case of linear differential equations this amounts to considering instead \(X'(t)=A(t)X(t)\)); this easy task is left to the reader. Notice also that we introduce an extra parameter λ, so that the perturbative expansion of X(t) is a formal power series in λ.

Consider then the Lie algebra O of operators M(t) (equipped with the bracket of operators) and the graded Lie algebra \(L=\bigoplus_{n\in \mathbf{N}^{\ast}}\lambda^{n} O=O\otimes\lambda\mathbf{C}[\lambda]\). Applying Theorem 13, we recover the classical Magnus formula.

Although a direct consequence of the previous theorem (recall that any group-like element in the enveloping algebra U(L) can be written as an exponential), the following statement is important and we record it as a theorem.

Theorem 15

When d is invertible on L, the logarithmic derivative \(D_d\) is a bijection between the set of group-like elements in \(\hat{U}(L)\) and \(\hat{L}\).

Indeed, for \(h\in L\) the Magnus-type equation in L
$$l=d^{-1}\biggl(\frac{-\mathrm{ad}_l}{\exp(-\mathrm{ad}_l)-1}(h)\biggr) $$
has a unique recursive solution \(l\in L\) such that \(\exp(l)=D_{d}^{-1}(h)\).

References

  1. Bauer, M., Chetrite, R., Ebrahimi-Fard, K., Patras, F.: Time ordering and a generalized Magnus expansion. Lett. Math. Phys. 103, 331–350 (2013)
  2. Chapoton, F., Patras, F.: Enveloping algebras of preLie algebras, Solomon idempotents and the Magnus formula. Int. J. Algebra Comput. (to appear)
  3. Dynkin, E.B.: Calculation of the coefficients in the Campbell–Hausdorff formula. Dokl. Akad. Nauk SSSR 57, 323–326 (1947)
  4. Ebrahimi-Fard, K., Manchon, D.: A Magnus- and Fer-type formula in dendriform algebras. Found. Comput. Math. 9, 295–316 (2009)
  5. Ebrahimi-Fard, K., Gracia-Bondía, J.M., Patras, F.: A Lie theoretic approach to renormalisation. Commun. Math. Phys. 276, 519–549 (2007)
  6. Ebrahimi-Fard, K., Gracia-Bondía, J., Patras, F.: Rota–Baxter algebras and new combinatorial identities. Lett. Math. Phys. 81(1), 61–75 (2007)
  7. Ebrahimi-Fard, K., Manchon, D., Patras, F.: A noncommutative Bohnenblust–Spitzer identity for Rota–Baxter algebras solves Bogoliubov’s recursion. J. Noncommut. Geom. 3, 181–222 (2009)
  8. Écalle, J.: Singularités non abordables par la géométrie. Ann. Inst. Fourier (Grenoble) 42(1–2), 73–164 (1992)
  9. Gelfand, I.M., Krob, D., Lascoux, A., Leclerc, B., Retakh, V., Thibon, J.-Y.: Noncommutative symmetric functions. Adv. Math. 112, 218–348 (1995)
  10. Ilyashenko, Y., Yakovenko, S.: Lectures on Analytic Differential Equations. Graduate Studies in Mathematics, vol. 86. American Mathematical Society, Providence (2008)
  11. Kreimer, D., Yeats, K.: An étude in non-linear Dyson–Schwinger equations. Nucl. Phys. Proc. Suppl. 160, 116–121 (2006)
  12. Lundervold, A., Munthe-Kaas, H.: Hopf algebras of formal diffeomorphisms and numerical integration on manifolds. Contemp. Math. 539, 295–324 (2011)
  13. Lundervold, A., Munthe-Kaas, H.: Backward error analysis and the substitution law for Lie group integrators. Preprint arXiv:1106.1071
  14. Milnor, J.W., Moore, J.C.: On the structure of Hopf algebras. Ann. Math. 81, 211–264 (1965)
  15. Patras, F.: L’algèbre des descentes d’une bigèbre graduée. J. Algebra 170, 547–566 (1994)
  16. Patras, F., Reutenauer, C.: On Dynkin and Klyachko idempotents in graded bialgebras. Adv. Appl. Math. 28, 560–579 (2002)
  17. Reutenauer, C.: Free Lie Algebras. Oxford University Press, Oxford (1993)
  18. Sauzin, D.: Mould expansions for the saddle-node and resurgence monomials. In: Connes, A., Fauvet, F., Ramis, J.-P. (eds.) Proceedings of the International Conference on Renormalization and Galois Theories. IRMA Lectures in Mathematics and Theoretical Physics, pp. 83–164 (2008)
  19. Specht, W.: Die linearen Beziehungen zwischen höheren Kommutatoren. Math. Z. 51, 367–376 (1948)
  20. Wever, F.: Über Invarianten in Lieschen Ringen. Math. Ann. 120, 563–580 (1949)

Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  1. Mathématiques, Bât. 425, UMR 8628, CNRS, Université Paris-Sud, Orsay Cedex, France
  2. Laboratoire J.A. Dieudonné, UMR 7531, CNRS, Université de Nice, Nice Cedex 2, France
