Abstract
We give a recursive formula for an expansion of a solution of a general nonautonomous polynomial differential equation. The formula is given on the algebraic level with the use of a shuffle product. This approach minimizes the number of integrations on each order of expansion. Using combinatorics of trees, we estimate the radius of convergence of the expansion.
1 Introduction
Consider a nonautonomous polynomial differential equation, also known as a generalized Abel differential equation:
with a solution x : [0, T] → ℝ on a small segment of the reals. This class of differential equations includes the following: for n = 1, the linear equation with a well-known formula for the general solution; for n = 2, the Riccati equation, well known for both theoretical [3, 7, 17, 18] and practical (see [4] and references therein at the beginning of Section 4) reasons; for n = 3, the Abel differential equation of the first kind, studied theoretically [15] and for practical reasons ([12, 14, 20] and references in [6, 15]); and for n > 3, the generalized Abel differential equations [1, 2]. Assuming \(X_{i} = x^{i} \frac \partial {\partial x}\), for i = 0,…,n, are differential vector fields on ℝ, one can, following Fliess [9] (see also [10, 13], and [11] with references therein), expand the solution of the equation in terms of iterated integrals
and iterated differential operators \(X_{i_{1}}\cdots X_{i_{k}}\) acting on the identity function h(x) = x and evaluated at the zero point. In this approach, one does not use the specific form of the vector fields, i.e., the fact that they are of polynomial type. In this article, we show another approach to expanding a solution of the above equation in terms of iterated integrals, making use of an important feature of Chen’s iterated integral mapping, namely, that it is a shuffle algebra homomorphism (see the comment after formula (2.4)). In fact, with the use of Chen’s mapping (2.4), we will be able to consider a purely algebraic problem instead of an analytic one. More precisely, assuming that the solution of Eq. (1.1) can be expanded in terms of iterated integrals, we show that an algebraic equation must be satisfied in the space of noncommutative series on n+1 letters. The existence of the solution of the algebraic equation follows easily from a recursive formula for its homogeneous parts. Chen’s mapping then gives us the expansion of the solution of the initial problem, as we state in Theorem 1. This is done in Section 2. Then, in Section 3, by counting elements of a class of trees in two different ways, we show convergence of the defined expansion of x(t) for small times; this is stated in Theorem 2 (in Section 2).
As an application of this general approach, we consider, in Section 4, the cases of the linear equation (i.e., n = 1), the Riccati equation (n = 2), and the one where there are only two nonvanishing summands. In the first case, we reestablish a well-known formula for the general solution, and in the second case, we deduce convergence of the series defining coordinates of the second kind connected with 𝔞1-type involutive distribution [16].
Finally, in Section 5, we compare the Chen-Fliess approach with the one given in this article. It turns out that in the latter case, the number of integrals to compute grows significantly more slowly with the order of approximation than in the former.
2 Existence and Convergence of an Expansion
Let n ∈ ℕ (see footnote 1), T > 0, and let u 0,…, u n : [0, T] → ℝ be measurable functions, bounded by a constant M ∈ ℝ. Consider a nonautonomous polynomial differential equation
Two comments are in order. Firstly, the Newton symbols (binomial coefficients) occurring in the above formula are there for convenience: without these constants, it would be harder to estimate the radius of convergence of the series defined below. Secondly, we assume that the initial value equals zero. This is without loss of generality in the sense that the linear change of variables x → x − x 0 transforms the equation with initial value x 0 into another equation of the same form with different u i ’s.
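The change of variables can be checked directly: expanding (y + x0)^i by the binomial theorem gives the coefficients of the shifted equation. A minimal Python sketch (the helper names and sample coefficients are ours, and we assume constant u_i for simplicity):

```python
import math

n, x0 = 3, 0.3
u = [0.5, -1.2, 0.7, 2.0]   # sample constant coefficients u_0, ..., u_n

def rhs(coeffs, x):
    """Right-hand side of (2.1): sum_i C(n,i) u_i x^i."""
    return sum(math.comb(n, i) * coeffs[i] * x ** i for i in range(n + 1))

# Coefficients of the shifted equation for y = x - x0, obtained by
# expanding (y + x0)^i with the binomial theorem:
# u~_j = (1/C(n,j)) sum_{i >= j} C(n,i) C(i,j) x0^{i-j} u_i.
u_new = [sum(math.comb(n, i) * math.comb(i, j) * x0 ** (i - j) * u[i]
             for i in range(j, n + 1)) / math.comb(n, j)
         for j in range(n + 1)]

# the two right-hand sides agree at every point
for y in (-0.4, 0.0, 0.9):
    assert abs(rhs(u_new, y) - rhs(u, y + x0)) < 1e-12
```

Note that the leading coefficient is unchanged (ũ_n = u_n), so the degree of the equation is preserved by the shift.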
Integrating both sides of the equation, we get an integral equation:
By Carathéodory’s theorem, for a small 𝜖 > 0, there exists an absolutely continuous solution x̂ : [0, 𝜖] → ℝ of the initial Eq. (2.1), in the sense that x̂ satisfies the integral Eq. (2.2) for t ∈ [0, 𝜖]. We want to express the solution x̂ by means of iterated integrals of products of the u i ’s. In order to do this, we introduce some algebraic tools.
To each function u i , we assign a formal variable a i , which we call a letter. The set of all letters A = {a 0,…,a n } is called an alphabet. By juxtaposing letters, we obtain words of an arbitrary length k ∈ ℕ; the set of such words is denoted by \(\mathrm{A}^{*}_{k} = \{b_{1}\cdots b_{k}\,|\,b_{1},\ldots , b_{k} \in \mathrm{A}\}\). For k = 0, the set \(\mathrm{A}^{*}_{0} = \{1\}\) contains only one word, the empty one. The set of all words is denoted by \(\mathrm{A}^{*} = \bigcup_{k=0}^{\infty } \mathrm{A}^{*}_{k}\). Juxtaposition gives rise to an associative, noncommutative product on the set of words, A∗ × A∗ ∋ (v, w) → v ⋅ w = v w ∈ A∗, called the concatenation product; the set A∗ with the concatenation product and the neutral element 1 ∈ A∗ is then the free monoid generated by A. Taking ℝ-linear combinations of words and extending the concatenation product bilinearly, we get the ℝ-algebras ℝ〈A〉 of noncommutative polynomials on A and ℝ〈〈A〉〉 of noncommutative series on A. In both algebras, we can consider the bilinear product ш : ℝ〈〈A〉〉 ⊗ ℝ〈〈A〉〉 → ℝ〈〈A〉〉, the shuffle product, defined recursively for words by 1 ш w = w ш 1 = w for any w ∈ A∗, and
for all b, c ∈ A, and v, w ∈ A∗. It is easy to see that the shuffle product is commutative; thus, with 1 as the neutral element, it gives rise to an additional, commutative algebra structure on \({\mathbb {R}}{\langle \mathrm{A}\rangle }\) and \({\mathbb {R}}{\langle \langle \mathrm{A}\rangle \rangle }\). We will use both the concatenation and shuffle products in our considerations. Throughout this article, the shuffle product takes priority over the concatenation product, so that v ш w ⋅ a always means (v ш w) ⋅ a.
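The recursive definition is easy to implement. The following Python sketch (the helper names are ours) represents words as tuples and formal sums as integer-valued Counters, and checks commutativity as well as the identity \(a^{\,ш\,k} = k!\, a^{k}\) for a one-letter word, which is used later in Section 4:

```python
import math
from collections import Counter

def shuffle(v, w):
    """Shuffle product of two words (tuples): a Counter mapping each
    resulting word to its coefficient, via the recursion (2.3)
    vb ш wc = (v ш wc) b + (vb ш w) c."""
    if not v:
        return Counter({w: 1})
    if not w:
        return Counter({v: 1})
    out = Counter()
    for word, c in shuffle(v[:-1], w).items():
        out[word + (v[-1],)] += c
    for word, c in shuffle(v, w[:-1]).items():
        out[word + (w[-1],)] += c
    return out

a, b = ('a',), ('b',)
assert shuffle(a, b) == Counter({('a', 'b'): 1, ('b', 'a'): 1})
assert shuffle(a + b, b) == shuffle(b, a + b)            # commutativity
# the number of terms (with multiplicity) in a (k,l)-shuffle is C(k+l, k)
assert sum(shuffle(a * 2, b * 3).values()) == math.comb(5, 2)
# the identity a^{ш k} = k! a^k for a single letter
s = Counter({(): 1})
for k in range(1, 6):
    new = Counter()
    for word, c in s.items():
        for w2, c2 in shuffle(word, a).items():
            new[w2] += c * c2
    s = new
    assert s == Counter({a * k: math.factorial(k)})
```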
On ℝ〈〈A〉〉, we introduce a natural scalar product (⋅|⋅) : ℝ〈〈A〉〉 × ℝ〈〈A〉〉 → ℝ, which for words v, w ∈ A∗ is given by
For S ∈ ℝ〈〈A〉〉, let S k ∈ ℝ〈A〉 be the k-degree homogeneous part of S, i.e.,
Clearly, \(S = \sum_{k=0}^{\infty } S_{k}\).
Define the linear homomorphism ϒt : ℝ〈A〉 → ℝ by ϒt(1) = 1, and
Equivalently, the homomorphism can be defined recursively by
for any v ∈ A∗ and a i ∈ A. Since the u i ’s are bounded, the definition makes sense for all t ≥ 0. One can check that ϒt is in fact a shuffle algebra homomorphism, i.e., ϒt (v ш w) = ϒt (v) ϒt (w) (see [8, 19]), which is a crucial feature in what follows. For a general series S ∈ ℝ〈〈A〉〉, the homomorphism ϒt need not be well defined, since ϒt (S) can be divergent. We therefore restrict the definition of ϒt to series S ∈ ℝ〈〈A〉〉 and times t ≥ 0 for which the series
is convergent.
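The homomorphism property can be observed numerically. The sketch below (our own illustration, not from the paper) computes ϒt(word) by the recursion ϒt(v a i) = ∫0t ϒs(v) u i(s) ds with a cumulative trapezoid rule on a uniform grid, and checks ϒt(a 0 ш a 1) = ϒt(a 0) ϒt(a 1) for two sample coefficient functions:

```python
import math

def upsilon(word, u, t, steps=2000):
    """Iterated integral Upsilon^t(word) via the recursion
    Upsilon^t(v a_i) = int_0^t Upsilon^s(v) u_i(s) ds,
    computed with the trapezoid rule on a uniform grid."""
    h = t / steps
    grid = [j * h for j in range(steps + 1)]
    f = [1.0] * (steps + 1)            # Upsilon^s(empty word) = 1
    for letter in word:                # letters applied left to right
        g = [f[j] * u[letter](grid[j]) for j in range(steps + 1)]
        acc = [0.0] * (steps + 1)
        for j in range(1, steps + 1):
            acc[j] = acc[j - 1] + 0.5 * (g[j - 1] + g[j]) * h
        f = acc
    return f[-1]

u = {0: lambda s: math.cos(s), 1: lambda s: 1.0 + s}   # sample u_0, u_1
t = 0.7
# a0 ш a1 = a0 a1 + a1 a0, so the homomorphism property reads:
lhs = upsilon((0, 1), u, t) + upsilon((1, 0), u, t)
rhs = upsilon((0,), u, t) * upsilon((1,), u, t)
assert abs(lhs - rhs) < 1e-5
```

The identity holds exactly in the continuum (it is the product rule for the two primitives); the tolerance only absorbs the quadrature error.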
Coming back to the initial problem, assume that there exists a series Ŝ ∈ ℝ〈〈A〉〉 such that the solution x̂ of Eq. (2.1) satisfies x̂(t) = ϒt (Ŝ) for t ∈ [0, 𝜖] (in particular ϒt (Ŝ) is convergent). Using the recursive definition of ϒt (2.4) and the fact that ϒt is a shuffle algebra homomorphism, we get from Eq. (2.2) that
where we abbreviate \(n_{i} = \left (\begin {array}{l}n\\ i\end {array}\right )\), and Ŝ шn is defined recursively in a natural way, i.e., Ŝ ш0 = 1 and Ŝ шn = Ŝ ш Ŝ ш(n−1). Now, the point is that we can forget, for a moment, about the homomorphism and consider only the algebraic equation
Proposition 2.1
There exists a unique solution Ŝ ∈ ℝ〈〈A〉〉 of the algebraic Eq. (2.5).
Proof
The equation under consideration must be satisfied for each homogeneous part, so we can split it into the following series of equations:
and for arbitrary k ∈ ℕ
where the second sum is taken over multi-indices l = (l 1,…,l i ) in
We see that the homogeneous parts of the series Ŝ are defined recursively; therefore, the series is defined uniquely.
Observe that from the recursive definition of ϒt and a property ϒt (v ш w) = ϒt (v) ϒt (w) we get
where we use the abbreviation x̂ k (t) = ϒt(Ŝ k ). By this definition, \(\hat {x}(t) = \sum_{k=0}^{\infty } \hat {x}_{k} (t)\). Moreover, any permutation of (l 1,…,l i ) gives the same expression under the integral. For l ∈ M(i), denote by R(l) the number of such permutations, i.e.,
Then
where the second sum is taken over
Let us state it in the following theorem.
Theorem 1
Let \(\hat {x}_{1}(t) = \int_{0}^{t} u_{0}(s) \, ds\), and for k ≥ 1 let \(\hat {x}_{k+1}(t)\) be given recursively by Eq. (2.7) or (2.8). Then \(\hat {x}(t) = \sum_{k=1}^{\infty } \hat {x}_{k} (t)\) is a formal solution of the differential Eq. (2.1).
Remark 2.2
It is worth noticing that for a fixed k ≥ 1, the number of integrals in formula (2.8) equals \(\sum_{i=1}^{n} \# M_{\leq }(i)\). This is at most \(\sum_{i=1}^{\infty } \# M_{\leq }(i)\), which is the number of partitions of k. The first ten partition numbers are 1, 2, 3, 5, 7, 11, 15, 22, 30, and 42. It means that the number of integrals that we have to perform to compute \(\hat {x}_{k+1}\) grows quite slowly. In Section 5, we show that this growth is much slower than the growth of the number of nonzero components in the Chen-Fliess expansion.
There remains the problem of determining under what assumptions the solution of the algebraic equation is in the domain of the homomorphism ϒt : ℝ〈〈A〉〉 ⊃ D (ϒt) → ℝ, i.e., when \(\sum_{k}\Upsilon^{t} (\hat {S}_{k})\) is convergent. In order to solve it, we need to compute the number of words (with multiplicities) in each homogeneous part of Ŝ. So for S ∈ ℝ〈A〉, let us introduce the following definition:
Proposition 2.3
If Ŝ k is the k-degree homogeneous part of the solution Ŝ of the algebraic Eq. (2.5), then for k≥1, #Ŝ k = ((n − 1) (k − 1) + 1) ⋅ ((n − 1) (k − 2) + 1) ⋯ n (in particular #Ŝ 1 = 1) and #Ŝ 0 = 0.
In particular, for n = 0, #Ŝ 1 = 1 and #Ŝ k = 0 otherwise; for n = 1, #Ŝ k = 1; for n = 2, #Ŝ k = k!; for n = 3, #Ŝ k = (2k − 1)!!; for n = 4, #Ŝ k = (3k − 2)!!!; and so on.
The proposition will be proved in Section 3.
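Before the proof, the counts can be checked empirically by running the recursion (2.6) symbolically. The following Python sketch (names ours) builds the homogeneous parts Ŝ k as Counters of words over the letters 0,…,n and compares #Ŝ k, the total of the coefficients, with the product formula of Proposition 2.3:

```python
import math
from collections import Counter
from itertools import product

def shuffle(v, w):
    """Shuffle product of two words (tuples), recursion (2.3)."""
    if not v:
        return Counter({w: 1})
    if not w:
        return Counter({v: 1})
    out = Counter()
    for word, c in shuffle(v[:-1], w).items():
        out[word + (v[-1],)] += c
    for word, c in shuffle(v, w[:-1]).items():
        out[word + (w[-1],)] += c
    return out

def shuffle_series(A, B):
    """Bilinear extension of the shuffle product to formal sums."""
    out = Counter()
    for wa, ca in A.items():
        for wb, cb in B.items():
            for w, c in shuffle(wa, wb).items():
                out[w] += ca * cb * c
    return out

def S_parts(n, kmax):
    """Homogeneous parts S_1, ..., S_kmax of the solution of (2.5),
    built by the recursion (2.6); letters are the integers 0..n."""
    S = {1: Counter({(0,): 1})}
    for k in range(1, kmax):
        new = Counter()
        for i in range(1, n + 1):
            for ls in product(range(1, k + 1), repeat=i):
                if sum(ls) != k:
                    continue
                prod_series = Counter({(): 1})
                for l in ls:
                    prod_series = shuffle_series(prod_series, S[l])
                for w, c in prod_series.items():
                    new[w + (i,)] += math.comb(n, i) * c
        S[k + 1] = new
    return S

def count_formula(n, k):
    """Proposition 2.3: #S_k = ((n-1)(k-1)+1)((n-1)(k-2)+1)...n."""
    return math.prod((n - 1) * j + 1 for j in range(1, k))

for n in (1, 2, 3):
    S = S_parts(n, 5)
    assert all(sum(S[k].values()) == count_formula(n, k) for k in S)
```

For n = 2 this reproduces k! and for n = 3 the double factorials 1, 3, 15, 105, 945, as listed above.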
Now, we state the theorem about convergence of the expansion.
Theorem 2
Let n ∈ ℕ and assume u 0,…,u n : [0, T] → ℝ are measurable functions such that |u i | ≤ M for some M > 0. Let Ŝ ∈ ℝ〈〈A〉〉 be the unique solution of the algebraic Eq. (2.5). Then the series ∑ k x̂ k (t) = ∑ k ϒt (Ŝ k ) is absolutely convergent for 0 ≤ t < min{T, 1/(M(n − 1))} if n ≥ 2, and for 0 ≤ t ≤ T if n = 0, 1, and x̂(t) = ϒt(Ŝ) is the solution of the differential Eq. (2.1) on the same segment.
Proof
For \(v\in \mathrm{A}^{*}_{k}\), the iterated integral ϒt (v) is in fact taken over a k-dimensional simplex of k-dimensional measure t k/k!. Since u i ’s are bounded by M, we have |ϒt (v)|≤(M t)k/k!. Therefore,
Since \(\#\hat {S}_{k+1}/\#\hat {S}_{k} = \frac {(n-1)k+1}{k+1} \xrightarrow [k\to \infty ]{} n-1\), the series ∑ k ϒt(Ŝ k ) is convergent for t < 1/(M(n−1)) if n ≥ 2, and t < T if n = 1. For n = 0, the statement is obvious.
3 Counting Trees
In this section, we prove Proposition 2.3. In order to do this, we consider certain classes of trees. It turns out that the number of trees in these particular classes equals #Ŝ k on the one hand and ((n−1)(k−1)+1)⋅((n−1)(k−2)+1)⋯n on the other.
For k, n ∈ ℕ, let \(\mathcal {T}^{n}_{k}\) denote the set of planar, rooted, full n-ary, increasingly labeled trees on k vertices. Recall that a tree is rooted if there exists a distinguished vertex called the root; is full n-ary if each vertex has exactly zero or n children; is on k vertices if the number of vertices with n children (parent vertices) equals k; and is increasingly labeled if the parent vertices are labeled by natural numbers from 1 to k, and the labels increase along each branch starting at the root (in particular, the root is labeled by “1”). A leaf of a tree is a nonparent vertex, i.e., a vertex without children. It is important to note that the number of leaves of each tree in \(\mathcal {T}^{n}_{k}\) is constant and equals (n−1)k+1. Indeed, using induction on k, we see that for k = 0, the only tree in \( \mathcal {T}^{n}_{0}\) has no parent vertices, so the root is the only leaf; each tree in \(\mathcal {T}^{n}_{k}\) can be obtained from a tree \(\mathsf {t}\in \mathcal {T}^{n}_{k-1}\) by attaching n children to a certain leaf of t, so the number of leaves increases by n−1.
Now, we count the cardinality of \(\mathcal {T}^{n}_{k}\) in two different ways.
Lemma 3.1
The cardinality of \(\mathcal {T}^{n}_{k}\) equals \(\#{\mathcal {T}^{n}_{k}} = ((n-1)(k-1) +1)\cdot ((n-1)(k-2) +1)\cdots n\) for k ≥ 1, and \(\#\mathcal {T}^{n}_{0}=1\).
Proof
The case n = 0 is trivial. Fix n ∈ ℕ s.t. n ≥ 1. We proceed by induction on k ∈ ℕ. For k = 0, there is only one tree, so the statement is correct. Assume \(\#{\mathcal {T}^{n}_{k} = ((n-1)(k-1) +1)\cdot ((n-1)(k-2) +1)\cdots n}\). Each tree in \(\mathcal {T}^{n}_{k+1}\) comes from a unique tree t in \(\mathcal {T}^{n}_{k}\) by choosing a leaf of t, attaching n children to it, and giving it the label “k + 1.” Since the number of leaves of t equals (n−1)k+1, we obtain the result.
Lemma 3.2
For k ∈ ℕ the cardinality of \(\mathcal {T}^{n}_{k+1}\) equals
where the sum is taken over multi-indices l = (l 1,…, l n ) in
Proof
First of all, observe that for k ∈ ℕ, each tree in \(\mathcal {T}^{n}_{k+1}\) is uniquely given by n trees \(\mathsf {t}_{1}\in \mathcal {T}^{n}_{l_{1}},\ldots ,\mathsf {t}_{n}\in \mathcal {T}^{n}_{l_{n}}\) such that l 1 + ⋯ + l n = k, together with a partition of the set {2,…,k+1} into n disjoint sets I 1,…,I n of cardinalities # I i = l i for i = 1,…,n (we do not assume I i ≠ ∅), i.e.,
where I(l) is the set of all partitions of the set {2,…,k+1} into n disjoint sets I 1,…,I n s.t. # I i = l i for i = 1,…,n. Indeed, the root of a given tree \(\mathsf {t} \in \mathcal {T}^{n}_{k+1}\) has n child vertices v 1,…,v n . Each v i is the root of a certain maximal subtree \(\tilde {\mathsf {t}}_{i}\) of t. We assume that \(\tilde {\mathsf {t}}_{i}\) has l i parent vertices, which are labeled by some numbers \(2 \leq a_{i}^{1} < \cdots < a_{i}^{l_{i}} \leq k+1\). Obviously, l 1 + ⋯ + l n = k. Changing the label “\(a^{j}_{i}\)” into a label “j,” we obtain a tree \(\mathsf {t}_{i}\in \mathcal {T}^{n}_{l_{i}}\). Defining \(I_{i} = \left \{a_{i}^{1}, \ldots , a_{i}^{l_{i}}\right \}\) for i = 1,…,n, we have a partition of {2,…,k+1} into n disjoint sets, i.e., {2,…,k+1}=I 1∪⋯∪I n . It is clear how to invert this procedure in order to get its uniqueness.
Using the above correspondence, it is easy to establish the formula in the lemma since there are \(\frac {{k}!}{{l}_{1} !\cdots {l}_{n}!}\) possible partitions of the set {2,…,k+1} into n disjoint parts I 1,…,I n such that \(\# I_{i} = l_{i} \in \mathbb {N}\), i.e., \(\# I(\mathbf{l}) = \frac {{k}!}{{l}_{1} !\cdots {l}_{n} !}\).
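The two lemmas can be cross-checked by a small computation; the sketch below (names ours) evaluates the recurrence of Lemma 3.2 and compares it with the closed product of Lemma 3.1 for several n:

```python
import math
from itertools import product

def trees_closed(n, k):
    """Lemma 3.1: #T^n_k = ((n-1)(k-1)+1)((n-1)(k-2)+1)...n,
    with the empty product (= 1) for k = 0, 1."""
    return math.prod((n - 1) * j + 1 for j in range(1, k))

def trees_rec(n, kmax):
    """Lemma 3.2: #T^n_{k+1} = sum over l1+...+ln = k of
    k!/(l1! ... ln!) * prod_i #T^n_{li}."""
    T = [1]
    for k in range(kmax):
        total = 0
        for ls in product(range(k + 1), repeat=n):
            if sum(ls) == k:
                coef = math.factorial(k)
                for l in ls:
                    coef //= math.factorial(l)
                total += coef * math.prod(T[l] for l in ls)
        T.append(total)
    return T

for n in (2, 3, 4):
    assert trees_rec(n, 6) == [trees_closed(n, k) for k in range(7)]
```

For n = 2 both sides give 1, 1, 2, 6, 24, 120, 720, i.e., the factorials, in accordance with Proposition 2.3.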
We are now ready to prove Proposition 2.3.
Proof of Proposition 2.3
First of all, observe that for i ∈ ℕ, l 1,…,l i ∈ ℕ, and words \(v_{1}\in \mathrm{A}^{*}_{l_{1}},\ldots ,v_{i}\in \mathrm{A}^{*}_{l_{i}}\), the number of words in the shuffle product v 1 ш ⋯ ш v i equals
Using this fact, homogeneity of polynomials Ŝ l , and the recursive formula (2.6), we get
where M(i) contains multi-indices (l 1,…, l i ) ∈ ℕi such that l 1 + ⋯ + l i = k and, importantly, l 1,…,l i ≥ 1. In order to get rid of the first sum, we introduce the following notation:
and allow l 1,…,l i to be equal 0. Then, we rewrite Eq. (3.1) as
where M 0(n) = { (l 1,…, l n ) ∈ ℕn | l 1 + ⋯ + l n = k}. Indeed, if l 1,…, l n ∈ ℕ and only i of them, say l̂ 1,…, l̂ i , are nonzero, then
Clearly, there are \(\left (\begin {array}{c}n \\ i\end {array} \right )\) different multi-indices (l 1,…,l n ) satisfying this condition, which is why the Newton symbol disappears in formula (3.2).
Finally, we see that by Lemma 3.2, the recursive formula (3.2) for the numbers N k coincides with the one for the cardinalities \(\#\mathcal {T}^{n}_{k}\). Since the sequences coincide for k = 0, i.e., \(N_{0} = \#\mathcal {T}^{n}_{0} = 1\), we conclude using Lemma 3.1 that
for k ≥ 1. The fact that #Ŝ 0 = 0 is trivial.
Remark 3.3
The above proof can be simplified for n = 0,1 when #Ŝ k ≤ 1, but also for n = 2. In this case, the recursive formula (2.6) gives
Assuming the inductive hypothesis #Ŝ l = l! for l≤k, we obtain
4 Examples
In this section, we discuss the three simplest cases, n = 0, 1, 2, and the case where only u 0 and u n are nonvanishing. We will need one additional, intuitive piece of notation: for S ∈ ℝ〈〈A〉〉 such that (S|1) = 0, we define the shuffle exponent
where we recall that S ш 0=1 and S ш k=S ш S ш (k−1).
If n = 0, then the Eq. (2.1) is ẋ(t) = u 0(t), x(0) = 0, and obviously, a solution is x̂(t) = ϒt (Ŝ), where Ŝ =Ŝ 1 = a 0 is homogeneous of degree 1.
Let us pass to the case n = 1, when Eq. (2.1) is a linear equation, ẋ(t) = u 0(t) + u 1(t)x(t), x (0) = 0, which can be solved by variation of parameters. Let us see how this can be done using the series Ŝ. Using the recursive formula (2.6),
we get \(\hat {S} = a_{0}\cdot \left (1 + a_{1} + a_{1}^{2} + a_{1}^{3} + \cdots \right )\). If we use the formula \(a_{1}^{k} = a_{1}^{\,ш\, k}/k!\), we get
This expression looks nice, but there is a problem: the factor a 0 stands on the left, and therefore the expression will not simplify if we apply ϒt to it. In order to obtain the solution in the usual form, we prove the following lemma.
Lemma 4.1
For a 0, a 1 ∈ A it follows that
Proof
Observe first that for k, l ∈ ℕ
Indeed, for k = 0, the formula is correct. Using the inductive hypothesis for each m≤k and the defining formula for shuffle product (2.3), we get
Since \(a_{1}^{k+1}\,ш\, a_{1}^{l} = \left (\begin {array}{c} {l+k+1} \\ {k+1} \end {array} \right )a_{1}^{l+k+1}\), we obtain formula (4.2).
Using the above proved formula, we see that
where in the last line, we change the order of summation by putting k′ = k − m and l′ = l + m. Since the expression in the square brackets equals \(0^{l^{\prime }}\), the sum over l′ reduces to the single summand with l′ = 0, and therefore
This ends the proof.
From the lemma, it follows that
Since ϒt : ℝ〈〈A〉〉 → ℝ is a shuffle-algebra homomorphism, it follows that ϒt(expш (S))= exp(ϒt (S)) for all series S such that (S|1) = 0, and therefore
which is the standard formula.
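A quick numerical sanity check (our own, with constant coefficients u 0 = u 1 = 1): the equation becomes ẋ = 1 + x, x(0) = 0, with solution e^t − 1, and since ϒt of any word of length k then equals t^k/k!, the expansion x(t) = Σ k ϒt(a 0 a 1^k) becomes an elementary series:

```python
import math

# With u0 = u1 = 1, Eq. (2.1) for n = 1 reads x' = 1 + x, x(0) = 0,
# whose solution is e^t - 1.  For these constant coefficients,
# Upsilon^t of any word of length k is t^k/k!, so
# x(t) = sum_k Upsilon^t(a0 a1^k) = sum_k t^{k+1}/(k+1)!.
t = 0.8
partial = sum(t ** (k + 1) / math.factorial(k + 1) for k in range(30))
assert abs(partial - math.expm1(t)) < 1e-12
```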
For n = 2, the equation under consideration is
that is a general Riccati equation. In this case, the series Ŝ is the unique solution of
and therefore Ŝ = ∑ k Ŝ k , where Ŝ k are given by the recurrence
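As an illustration (ours, not from the paper): take constant coefficients u 0 = u 2 = 1 and u 1 = 0, so that the Riccati equation reads ẋ = 1 + x², x(0) = 0, with solution tan t. For constant coefficients each homogeneous part is a monomial, x k(t) = c k t^k, and the recurrence for the parts Ŝ k collapses to a numeric recurrence on the coefficients c k (the i = 1 term drops out because u 1 = 0):

```python
import math

K = 25
c = [0.0] * (K + 1)
c[1] = 1.0                       # x_1(t) = int_0^t u_0(s) ds = t
for k in range(1, K):
    # specialization of the recurrence: only the quadratic term survives
    c[k + 1] = sum(c[l] * c[k - l] for l in range(1, k)) / (k + 1)

t = 0.5      # inside the radius min{T, 1/(M(n-1))} = 1 given by Theorem 2
x = sum(c[k] * t ** k for k in range(1, K + 1))
assert abs(x - math.tan(t)) < 1e-9
```

The numbers c k are exactly the Taylor coefficients of tan, as they must be, since tan′ = 1 + tan² yields the same recurrence.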
Let us mention that the Riccati equation is a Lie-Scheffers system of the type 𝔞1 (see [3, 7] and [17, 18]). More precisely, if we take vector fields
on ℝ, then they satisfy the following commutation relations:
This means that the vector fields span a simple Lie algebra of the type 𝔞1 (isomorphic to 𝔰𝔩(2, ℝ)), and thus (4.3), which is equivalent to ẋ(t) = ∑ u i (t) X i , is a Lie-Scheffers system of this type. The solution of such a system in terms of iterated integrals of the u i ’s was given in [16]. Let us recall the main theorems of that article.
Theorem 3 (Theorem 1 in [16])
Let \(X_{a}, X_{b}, X_{c} \in \Gamma ({\mathfrak {M}})\) be smooth tangent vector fields on a manifold \(\mathfrak {M}\) satisfying [X a , X b ] = 2X a , [X a , X c ] = −X b , [X b , X c ] = 2 X c . Let \(u_{a}, u_{b}, u_{c} : [0,T]\to \mathbb {R}\) be fixed measurable functions. Then (locally), the solution \(x:[0,T]\to \mathfrak {M}\) of the differential equation
is of the form
Here, Ξ a , Ξ b , Ξ c : [0, T] → ℝ are given by Ξ d (t) :=ϒt (S d) (for d = a, b, c), where
and \(S_{\mathfrak {a}_{1}}\in {\mathbb {R}}{\langle \langle \mathrm{A}\rangle \rangle }\) is the unique solution of the algebraic equation
In particular, we have
Theorem 4 (Theorem 2 in [16])
For fixed measurable functions u a , u b , u c : [0, T] → ℝ, the function Ξ a : [0,T] → ℝ, defined in Theorem 3 by \(\Xi_{a}(t) = \Upsilon^{t}(a\cdot \exp_{ш}({2S_{\mathfrak {a}_{1}}}))\), is (locally) the solution of the Riccati equation:
Observe that taking X a = X 0, X b = X 1, X c = −X 2 (and therefore c = −a 2), and u a = u 0, u b = u 1, u c = u 2, and Ξ a (t) = x(t), the system (4.3) can be put into the context of the above theorems in the following way. From Theorem 4, we conclude that the solution of (4.3) is x(t) = ϒt(Ŝ), where
and \(S_{\mathfrak {a}_{1}}\in {\mathbb {R}}{\langle \langle \mathrm{A}\rangle \rangle }\) is, by Theorem 3, the unique solution of the algebraic equation
Additionally, from the last line in Theorem 3, we conclude that
A recursive formula for \(S_{\mathfrak {a}_{1}}\) was also given in [16], but observe that the algebraic Eq. (4.4) for \(\hat {S}\) is in fact simpler than Eq. (4.8) for \(S_{\mathfrak {a}_{1}}\). Consequently, it is reasonable to invert the statement and say that the series \(S_{\mathfrak {a}_{1}}\) is given by (4.9), where \(\hat {S}\) is the solution of Eq. (4.4). Now, using Theorem 2, we get the following corollary about the \(\mathfrak {a}_{1}\)-type Lie-Scheffers system considered in Theorem 3.
Proposition 4.2
In the context of Theorem 3, if |u d | ≤ M for some M > 0 (d = a, b, c), then the solution (4.6) exists for 0 ≤ t < min{T, 1/M}.
Proof
By Theorem 2, the function ϒt(Ŝ) is well defined for 0 ≤ t < min{T, 1/M}. The above observations (in particular formula (4.9)) imply that \(\Upsilon^{t} (S_{\mathfrak {a}_{1}})\) is also well defined on this interval. Finally, by formula (4.7), each function Ξ d (t) := ϒt (S d) (d = a, b, c) is defined for 0 ≤ t < min{T, 1/M}, too.
Let us observe that in each of the discussed cases, the solution is of the form Ŝ = a 0 ⋅ expш (L), where L ∈ ℝ〈〈A〉〉 satisfies (L|1) = 0, and the series L in case n reduces to the series L in case n−1 upon taking u n ≡ 0. Indeed, L = 0 for n = 0, L = a 1 for n = 1, and \(L = S_{\mathfrak {a}_{1}}\) for n = 2 reduces, by Eq. (4.8), to a 1 for u 2 ≡ 0. This observation suggests the question of whether the same holds for all n ∈ ℕ. Since the Riccati equation is essentially the only differential equation on the real line which is connected with the action of a group (namely, the special linear group SL(2)) [4, 5], one could anticipate that such a generalization is impossible. Nevertheless, the problem is open.
Another example we are going to consider is the one where there are only two nonvanishing summands, i.e., \(\dot x(t) = \left (\begin {array}{c} n \\ m \end {array} \right ) u(t)x^{m}(t) + u_{n}(t) x^{n}(t)\), x (0) = 0, and 0 ≤ m < n are fixed. The case m ≠ 0 has the trivial solution x(t) ≡ 0, so in fact we consider
with n ≥ 1 fixed.
Proposition 4.3
Let u 0, u n : [0, T] → ℝ be measurable, bounded functions. Then the solution of (4.10) is \(x(t) = \sum_{k=0}^{\infty } x_{k}(t)\), where x k (t) are recursively given by \(x_{0}(t) = \int_{0}^{t} u_{0}(s)\, ds\), and
where \(N(k) = \left \{\, {(l_{1},\ldots ,l_{k}) \in \mathbb {N}^{k}} \ | \ {n = l_{1}+\cdots +l_{k},\, k -1 = l_{2} + 2l_{3} + \cdots + (k-1)l_{k}} \, \right \}\).
Let us write the first few components of the expansion given in the above proposition.
Proof
The algebraic Eq. (2.5) associated with the differential Eq. (4.10) is
Let us first show that the only nonvanishing homogeneous parts of Ŝ are Ŝ kn+1, where k ∈ ℕ. We prove this by induction on k. The k-th hypothesis is that Ŝ kn+l = 0 for all l = 2,…,n. For k = 0, the hypothesis is clearly correct. Assume that it is correct for k < K, and let us prove that Ŝ Kn+2 = ⋯ = Ŝ Kn+n = 0. Using (4.11) and the induction hypothesis, we see that
where the sum is taken over all (p, m) = (p 1,…,pK, m 2,…,m l−1) ∈ Ñ ⊂ ℕK+l−2,
and \(C(\mathbf{p},\mathbf{m})= \frac {n!}{\mathbf{p} !\, \mathbf{m} !}\), p!=p 1!⋯p K !, m!=m 2!⋯m l−1!. If m 2 + ⋯ + m l−1≥1, then from the second equality defining \(\tilde {N}\), we have
which is not less than n (by the first equality defining Ñ), a contradiction. If m 2 + ⋯ + m l−1 = 0, then p 1 + ⋯ + p K = n and therefore
But n does not divide l−1 ∈ {1,…,n−1}. This implies Ñ = ∅ and Ŝ Kn+l = 0 for l = 2,…,n. We conclude that the solution of (4.11) is of the form \(\hat {S} = \sum_{k=0}^{\infty } \hat {S}_{kn +1} .\)
Now, similarly as above, we see from (4.11) that
where the sum is taken over
Using the first equation defining N(k), we simplify the second equation defining N(k) as follows:
Therefore,
Denoting x k (t) = ϒt(Ŝ kn+1) and using the homomorphic property of ϒt, we obtain the claim of the proposition.
5 Comparison with the Chen-Fliess Approach
In this section, we compare the number of nonzero iterated integrals in two approaches: the one given in this article and the Chen-Fliess one. Recall that in the latter approach [9], we assume that we have a differential equation:
where \(X_{i}(x) = x^{i} \frac \partial {\partial x}\), for i = 0,…,n, are differential vector fields on ℝ. The solution is given by
where for \(v = a_{i_{1}}\cdots a_{i_{k}} \in \mathrm{A}^{*} \), we define \(X_{v}(x)(0) := X_{i_{1}}\cdots X_{i_{k}}(x)(0)\) as a composition of vector fields acting on the function h(x) = x and evaluated at the initial value x 0 = 0. Since X i (x)(0) ≠ 0 only for i = 0, the sum can be significantly reduced. Our aim is to eliminate all unnecessary summands. Since the second derivative \(\frac {\partial^{2}} {\partial x^{2}} x = 0\), we need only compute X v (x)(0) modulo second and higher derivatives.
Lemma 5.1
For k ≥ 2 and \(v = a_{i_{1}}\cdots a_{i_{k}} \in \mathrm{A}^{*}_{k}\), we have
Proof
We use induction on k. The case k = 2 is clear. Assume \(w = a_{i_{2}}\cdots a_{i_{k+1}} \in \mathrm{A}^{*}_{k} \) and \(v = a_{i_{1}}w \in \mathrm{A}^{*}_{k+1}\). By the induction hypothesis, \(X_{w} = I x^{\alpha } \frac {\partial } {\partial x} \mod \frac {\partial^{2}} {\partial x^{2}}\), where \(I = i_{k+1}(i_{k+1}+i_{k}-1)\cdots (i_{k+1}+\cdots +i_{3}-k+2)\) and \(\alpha = i_{2}+\cdots +i_{k+1}-k+1\), and thus \(X_{v} = x^{i_{1} + \alpha -1} I \alpha \frac {\partial } {\partial x} \mod \frac {\partial^{2}} {\partial x^{2}}\). This ends the proof.
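The necessary condition of the lemma can be verified by brute force; the sketch below (names ours) represents a polynomial as a dictionary {exponent: coefficient}, applies the fields X i = x^i ∂/∂x with the composition convention of the proof above (the rightmost letter's field acts first on h), and checks that a nonzero value forces i 1 + ⋯ + i k = k − 1:

```python
from itertools import product

def apply_field(i, poly):
    """X_i = x^i d/dx acting on a polynomial {exponent: coefficient}."""
    out = {}
    for e, c in poly.items():
        if e:                                  # derivative kills constants
            out[e - 1 + i] = out.get(e - 1 + i, 0) + c * e
    return out

def X_v_at_zero(word):
    """(X_{i_1} ... X_{i_k})(h)(0) for h(x) = x; the rightmost
    letter's field acts first, as in the proof of Lemma 5.1."""
    poly = {1: 1}                              # h(x) = x
    for i in reversed(word):
        poly = apply_field(i, poly)
    return poly.get(0, 0)                      # evaluation at x = 0

assert X_v_at_zero((0, 1)) == 1 and X_v_at_zero((1, 0)) == 0
# a nonzero value forces i_1 + ... + i_k = k - 1 (Lemma 5.1)
for k in (2, 3, 4):
    for word in product(range(4), repeat=k):
        if X_v_at_zero(word) != 0:
            assert sum(word) == k - 1
```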
Using this lemma, we conclude that for \(v = a_{i_{1}}\cdots a_{i_{k}} \in \mathrm{A}^{*}_{k}\), X v (x)(0)≠0 only if i 1 + ⋯ + i k = k−1, and therefore (5.1) simplifies to
where \(a_{\mathbf {i}} := a_{i_{1}}\cdots a_{i_{k}}\), and the second sum is taken over all multi-indices i = (i 1,…,i k ) in the set M 0(k) ⊂ ({0,…,n})k given by one equality and k−1 inequalities:
At the k-th step of approximation, there are # M 0(k) nontrivial integrals to compute. If we allow n = ∞, one can compute that these are the Catalan numbers, i.e., \(\# M_{0}(k) = \frac {1}{k+1} \left (\begin {array}{c} 2k \\ k\\ \end {array}\right )\). The first ten of these numbers are 1, 2, 5, 14, 42, 132, 429, 1,430, 4,862, and 16,796. We see that this growth is much faster than the growth 1, 2, 3, 5, 7, 11, 15, 22, 30, and 42 of the number of integrals needed to compute the k-th step of the expansion in our approach, as we mentioned in Remark 2.2.
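The two count sequences compared above are easy to tabulate; a short stdlib sketch (ours) computes the Catalan numbers from the closed formula and the partition numbers by the standard coin-counting dynamic program:

```python
import math

# Catalan numbers C_k = C(2k, k)/(k+1): nontrivial integrals in the
# Chen-Fliess expansion (for n = infinity).
catalan = [math.comb(2 * k, k) // (k + 1) for k in range(1, 11)]
assert catalan == [1, 2, 5, 14, 42, 132, 429, 1430, 4862, 16796]

# Partition numbers p(k): integrals in the expansion of this article
# (Remark 2.2), via the standard coin-counting dynamic program.
kmax = 10
p = [1] + [0] * kmax
for part in range(1, kmax + 1):
    for k in range(part, kmax + 1):
        p[k] += p[k - part]
assert p[1:] == [1, 2, 3, 5, 7, 11, 15, 22, 30, 42]

# Catalan numbers grow like 4^k, partition numbers only subexponentially,
# so the ratio of the two counts diverges quickly.
print([c // q for c, q in zip(catalan, p[1:])])
```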
6 Concluding Remarks
In this article, we formulated a scheme for expanding a solution of a general nonautonomous polynomial differential equation. The time-dependent homogeneous parts of the expansion were expressed in terms of iterated integrals. The formula for each of these parts was given recursively by Eq. (2.6). The advantage of our approach is that it works on the algebraic level. We use the shuffle product, which is an algebraic analog of the multiplication of iterated integrals. Therefore, the algebraic formula can easily be transformed into an analytic one giving the expansion of the solution of the initial problem, as stated in Theorem 1.
Finally, some work remains to be done. One direction is to write an explicit formula for the algebraic series Ŝ, preferably with the use of the shuffle product. It would also be important to find a deeper algebraic structure behind this solution. Another direction is to rewrite the scheme for systems of nonautonomous polynomial differential equations and to estimate the radius of convergence in this case. This is important, for example, for integrating higher-order Lie-Scheffers systems.
Notes
Throughout the article, we assume ℕ = {0, 1, 2, 3, …}.
References
Alwash M. Periodic solutions of Abel differential equations. J Math Anal Appl. 2007;329(2):1161–9.
Alwash MA. Periodic solutions of polynomial non-autonomous differential equations. Electron J Differ Equ. 2005;2005(84):1–8.
Cariñena JF, de Lucas J. Integrability of Lie systems through Riccati equations. J Nonlinear Math Phys. 2011;18(1):29–54.
Cariñena J, de Lucas J. Lie systems: theory, generalisations, and applications, Vol. 479. Warsaw: Institute of Mathematics, Polish Academy of Sciences; 2011.
Cariñena JF, Ramos A. Integrability of the Riccati equation from a group-theoretic viewpoint. Int J Mod Phys A. 1999;14(12):1935–51.
Cariñena JF, de Lucas J, Rañada MF. A geometric approach to integrability of Abel differential equations. Int J Theor Phys. 2011;50(7):2114–24.
Cariñena JF, de Lucas J, Ramos A. A geometric approach to integrability conditions for Riccati equations. Electron J Differ Equ. 2007;2007(122):14 pp (electronic).
Chen K-T. Algebraic paths. J Algebra. 1968;10:8–36.
Fliess M. Fonctionnelles causales non linéaires et indéterminées non commutatives. Bull Soc Math France. 1981;109(1):3–40.
Gray WS, Wang Y. Fliess operators on L p spaces: convergence and continuity. Syst Control Lett. 2002;46(2):67–74.
Gray WS, Wang Y. Non-causal Fliess operators and their shuffle algebra. Int J Control. 2008;81(3):344–57.
Harko T, Mak M. Relativistic dissipative cosmological models and Abel differential equation. Comput Math Appl. 2003;46(5):849–53.
Kawski M, Sussmann HJ. Noncommutative power series and formal Lie-algebraic techniques in nonlinear control theory. In: Operators, systems, and linear algebra (Kaiserslautern, 1997). European Consort. Math. Indust. Teubner, Stuttgart; 1997. pp. 111–128.
Mak M, Chan H, Harko T. Solutions generating technique for Abel-type nonlinear ordinary differential equations. Comput Math Appl. 2001;41(10):1395–401.
Mak M, Harko T. New method for generating general solution of Abel differential equation. Comput Math Appl. 2002;43(1):91–4.
Pietrzkowski G. Explicit solutions of the \(\mathfrak {a}_{1}\)-type Lie-Scheffers system and a general Riccati equation. J Dyn Control Syst. 2012;18(4):551–71.
Redheffer RM. On solutions of Riccati’s equation as functions of the initial values. J Ration Mech Anal. 1956;5:835–48.
Redheffer RM. The Riccati equation: initial values and inequalities. Math Ann. 1957;133:235–50.
Reutenauer C. Free Lie algebras, vol. 7 of London Mathematical Society Monographs. New series. New York: Oxford University Press; 1993.
Xu Y, He Z. The short memory principle for solving Abel differential equation of fractional order. Comput Math Appl. 2011;62(12):4796–805.
Acknowledgments
The author was partially supported by the Polish Ministry of Research and Higher Education grant NN201 607540, 2011-2014.
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
Pietrzkowski, G. On Expansion of a Solution of General Nonautonomous Polynomial Differential Equation. J Dyn Control Syst 20, 403–417 (2014). https://doi.org/10.1007/s10883-014-9227-6
Keywords
- Polynomial differential equation
- Generalized Abel equation
- Riccati equation
- Shuffle product
- Combinatorics of trees