1 Introduction

The study of functional differential equations with involutions (DEI) can be traced back to the solution of the equation $x'(t)=x(1/t)$ by Silberstein (see [1]) in 1940. Briefly speaking, an involution is just a function $f$ that satisfies $f(f(x))=x$ for every $x$ in its domain of definition. For most applications in analysis the involution is defined on an interval of $\mathbb{R}$ and, in the majority of cases, it is continuous, which implies that it is decreasing and has a unique fixed point. Ever since that foundational paper of Silberstein, the study of problems with DEI has focused mainly on cases with initial conditions, with extensive research on the reflection $f(x)=-x$.

Wiener and Watkins study in [2] the solution of the equation $x'(t)-a\,x(-t)=0$ with initial conditions. The equation $x'(t)+a\,x(t)+b\,x(-t)=g(t)$ has been treated by Piao in [3, 4]. In [2, 5–8] some results are introduced that transform this kind of problem with involutions and initial conditions into second order ordinary differential equations with initial conditions or first order two-dimensional systems, granting that a solution of the latter is a solution of the former. Furthermore, asymptotic properties and boundedness of the solutions of first order initial problems are studied in [9] and [10], respectively. Second order boundary value problems have been considered in [8, 11–13] for Dirichlet and Sturm-Liouville boundary conditions, and higher order equations have been studied in [14]. Other techniques applied to problems with reflection of the argument can be found in [15–17].

More recently, the papers of Cabada et al. [18, 19] have further studied the case of the second order equation with two-point boundary conditions, adding a new element to the previous studies: the existence of a Green’s function. Once the study of the sign of the aforementioned function is done, maximum and anti-maximum principles follow. Other works in which Green’s functions are obtained for functional differential equations (but with a fairly different setting, like delay or normal equations) are, for instance, [2025].

In this paper we try to answer the following questions: how can one find a solution of an initial value problem for a differential equation with reflection? What is more, in which cases can a Green’s function be constructed, and how can it be found?

Section 2 will have two parts. In the first one we construct the solutions of the n th order DEI with reflection, constant coefficients and initial conditions. In the second one we find the Green’s function for the order one case. In Section 3 we apply these findings in order to describe exhaustively the range of values for which suitable comparison results are fulfilled and we illustrate them with some examples.

2 Solutions of the problem

In order to prove an existence result for the $n$th order DEI with reflection, we consider the even and odd parts of a function $f$, that is, $f_e(x):=[f(x)+f(-x)]/2$ and $f_o(x):=[f(x)-f(-x)]/2$, as done in [18].

2.1 The n th order problem

Consider the following n th order DEI with involution:

\[Lu(t):=\sum_{k=0}^{n}\Big[a_k\,\frac{d^k}{dt^k}\,u(-t)+b_k\,u^{(k)}(t)\Big]=h(t),\quad t\in\mathbb{R};\qquad u(t_0)=c,\]
(2.1)

where $h\in L^1_{loc}(\mathbb{R})$; $t_0,c,a_k,b_k\in\mathbb{R}$ for $k=0,\dots,n-1$; $a_n=0$; $b_n=1$. A solution of this problem will be a function $u\in W^{n,1}_{loc}(\mathbb{R})$, that is, $u$ is $n$ times differentiable in the sense of distributions and each of the derivatives satisfies $u^{(k)}|_K\in L^1(K)$ for every compact set $K\subset\mathbb{R}$.

Theorem 2.1 Assume that there exist functions $\tilde u$ and $\tilde v$ satisfying

\[\sum_{i=0}^{n-j}\binom{i+j}{j}\Big[(-1)^{n+i-1}a_{i+j}\,\tilde u^{(i)}(-t)+b_{i+j}\,\tilde u^{(i)}(t)\Big]=0,\quad t\in\mathbb{R};\ j=0,\dots,n-1,\]
(2.2)
\[\sum_{i=0}^{n-j}\binom{i+j}{j}\Big[(-1)^{n+i}a_{i+j}\,\tilde v^{(i)}(-t)+b_{i+j}\,\tilde v^{(i)}(t)\Big]=0,\quad t\in\mathbb{R};\ j=0,\dots,n-1,\]
(2.3)
\[(\tilde u_e\tilde v_e-\tilde u_o\tilde v_o)(t)\neq 0,\quad t\in\mathbb{R},\]
(2.4)

and also one of the following:

\begin{align*}
(h1)\quad & L\tilde u=0\ \text{ and }\ \tilde u(t_0)\neq 0,\\
(h2)\quad & L\tilde v=0\ \text{ and }\ \tilde v(t_0)\neq 0,\\
(h3)\quad & a_0+b_0\neq 0\ \text{ and }\ \frac{a_0+b_0}{(n-1)!}\int_0^{t_0}(t_0-s)^{n-1}\,\frac{\tilde v(t_0)\tilde u_e(s)-\tilde u(t_0)\tilde v_o(s)}{(\tilde u_e\tilde v_e-\tilde u_o\tilde v_o)(s)}\,ds\neq 1.
\end{align*}

Then problem (2.1) has a solution.

Proof Define

\[\varphi:=\frac{h_o\tilde v_e-h_e\tilde v_o}{\tilde u_e\tilde v_e-\tilde u_o\tilde v_o}\qquad\text{and}\qquad\psi:=\frac{h_e\tilde u_e-h_o\tilde u_o}{\tilde u_e\tilde v_e-\tilde u_o\tilde v_o}.\]

Observe that $\varphi$ is odd, $\psi$ is even and $h=\varphi\tilde u+\psi\tilde v$. So, in order to ensure the existence of a solution of problem (2.1), it is enough to find $y$ and $z$ such that $Ly=\varphi\tilde u$ and $Lz=\psi\tilde v$ for, in that case, defining $u=y+z$, we can conclude that $Lu=h$. We will deal with the initial condition later on.

Take $y=\tilde\varphi\tilde u$, where

\[\tilde\varphi(t):=\int_0^t\int_0^{s_n}\cdots\int_0^{s_2}\varphi(s_1)\,ds_1\cdots ds_n=\frac{1}{(n-1)!}\int_0^t(t-s)^{n-1}\varphi(s)\,ds.\]

Observe that $\tilde\varphi$ is even if $n$ is odd and vice versa. In particular, we have

\[\tilde\varphi^{(j)}(-t)=(-1)^{j+n-1}\tilde\varphi^{(j)}(t),\quad j=0,\dots,n.\]

Thus,

\begin{align*}
Ly(t)&=\sum_{k=0}^{n}\Big[a_k\,\frac{d^k}{dt^k}\big[(\tilde\varphi\tilde u)(-t)\big]+b_k\,(\tilde\varphi\tilde u)^{(k)}(t)\Big]\\
&=\sum_{k=0}^{n}\sum_{j=0}^{k}\binom{k}{j}\Big[(-1)^{k}a_k\,\tilde\varphi^{(j)}(-t)\,\tilde u^{(k-j)}(-t)+b_k\,\tilde\varphi^{(j)}(t)\,\tilde u^{(k-j)}(t)\Big]\\
&=\sum_{k=0}^{n}\sum_{j=0}^{k}\binom{k}{j}\,\tilde\varphi^{(j)}(t)\Big[(-1)^{k+j+n-1}a_k\,\tilde u^{(k-j)}(-t)+b_k\,\tilde u^{(k-j)}(t)\Big]\\
&=\sum_{j=0}^{n}\tilde\varphi^{(j)}(t)\sum_{k=j}^{n}\binom{k}{j}\Big[(-1)^{k+j+n-1}a_k\,\tilde u^{(k-j)}(-t)+b_k\,\tilde u^{(k-j)}(t)\Big]\\
&=\sum_{j=0}^{n}\tilde\varphi^{(j)}(t)\sum_{i=0}^{n-j}\binom{i+j}{j}\Big[(-1)^{i+n-1}a_{i+j}\,\tilde u^{(i)}(-t)+b_{i+j}\,\tilde u^{(i)}(t)\Big]\\
&=\tilde\varphi^{(n)}(t)\,\tilde u(t)=\varphi(t)\,\tilde u(t).
\end{align*}

Hence, $Ly=\varphi\tilde u$.

All the same, by taking $z=\tilde\psi\tilde v$ with $\tilde\psi(t):=\frac{1}{(n-1)!}\int_0^t(t-s)^{n-1}\psi(s)\,ds$, we have $Lz=\psi\tilde v$.

Hence, defining $\bar u:=y+z=\tilde\varphi\tilde u+\tilde\psi\tilde v$, we find that $\bar u$ satisfies $L\bar u=h$ and $\bar u(0)=0$.

If we assume (h1), $w=\bar u+\frac{c-\bar u(t_0)}{\tilde u(t_0)}\tilde u$ is clearly a solution of problem (2.1).

When (h2) is fulfilled, a solution of problem (2.1) is given by $w=\bar u+\frac{c-\bar u(t_0)}{\tilde v(t_0)}\tilde v$.

If (h3) holds, using the aforementioned construction we can find $w_1$ such that $Lw_1=1$ and $w_1(0)=0$. Now, $w_2:=w_1-1/(a_0+b_0)$ satisfies $Lw_2=0$. Observe that the second part of condition (h3) is precisely $w_2(t_0)\neq 0$, and hence, defining $w=\bar u+\frac{c-\bar u(t_0)}{w_2(t_0)}w_2$, we see that $w$ is a solution of problem (2.1). □

Remark 2.1 Having in mind condition (h1) in Theorem 2.1, it is immediate to verify that L u ˜ =0 provided that

\[a_i=0\quad\text{for all }i\in\{0,\dots,n-1\}\text{ such that }n+i\text{ is even}.\]

In an analogous way for (h2), one can show that L v ˜ =0 when

\[a_i=0\quad\text{for all }i\in\{0,\dots,n-1\}\text{ such that }n+i\text{ is odd}.\]

2.2 The first order problem

After proving the general result for the $n$th order case, we concentrate our work on the first order problem

\[u'(t)+a\,u(-t)+b\,u(t)=h(t)\quad\text{for a.e. }t\in\mathbb{R};\qquad u(t_0)=c,\]
(2.5)

with $h\in L^1_{loc}(\mathbb{R})$ and $t_0,a,b,c\in\mathbb{R}$. A solution of this problem will be a function $u\in W^{1,1}_{loc}(\mathbb{R})$.

In order to do so, we first study the homogeneous equation

\[u'(t)+a\,u(-t)+b\,u(t)=0,\quad t\in\mathbb{R}.\]
(2.6)

By differentiating and making the proper substitutions we arrive at the equation

\[u''(t)+\big(a^2-b^2\big)u(t)=0,\quad t\in\mathbb{R}.\]
(2.7)

Let $\omega:=\sqrt{|a^2-b^2|}$. Equation (2.7) presents three different cases:

(C1) $a^2>b^2$. In this case $u(t)=\alpha\cos\omega t+\beta\sin\omega t$ is a solution of (2.7) for every $\alpha,\beta\in\mathbb{R}$. If we impose (2.6) on this expression we arrive at the general solution

\[u(t)=\alpha\Big(\cos\omega t-\frac{a+b}{\omega}\sin\omega t\Big)\]

of (2.6) with $\alpha\in\mathbb{R}$.

(C2) $a^2<b^2$. Now $u(t)=\alpha\cosh\omega t+\beta\sinh\omega t$ is a solution of (2.7) for every $\alpha,\beta\in\mathbb{R}$. Imposing (2.6) we arrive at the general solution

\[u(t)=\alpha\Big(\cosh\omega t-\frac{a+b}{\omega}\sinh\omega t\Big)\]

of (2.6) with $\alpha\in\mathbb{R}$.

(C3) $a^2=b^2$. In this case $u(t)=\alpha t+\beta$ is a solution of (2.7) for every $\alpha,\beta\in\mathbb{R}$. So, (2.6) holds provided that one of the two following cases is fulfilled:

(C3.1) $a=b$, where

\[u(t)=\alpha(1-2at)\]

is the general solution of (2.6) with $\alpha\in\mathbb{R}$, and

(C3.2) $a=-b$, where

\[u(t)=\alpha\]

is the general solution of (2.6) with $\alpha\in\mathbb{R}$.
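The closed forms in (C1)-(C3) can be verified directly against (2.6). The following Python sketch (ours, not part of the original text; the parameter values are arbitrary choices) approximates the derivative by a central difference and checks that the residual of (2.6) vanishes:

```python
import math

# Numeric sanity check (illustrative): the closed forms of cases (C1)-(C3)
# should satisfy u'(t) + a*u(-t) + b*u(t) = 0.  The derivative is
# approximated by a central difference.

def residual(u, a, b, t, eps=1e-6):
    du = (u(t + eps) - u(t - eps)) / (2 * eps)
    return du + a * u(-t) + b * u(t)

def case_C1(a, b):  # a^2 > b^2
    w = math.sqrt(a * a - b * b)
    return lambda t: math.cos(w * t) - (a + b) / w * math.sin(w * t)

def case_C2(a, b):  # a^2 < b^2
    w = math.sqrt(b * b - a * a)
    return lambda t: math.cosh(w * t) - (a + b) / w * math.sinh(w * t)

def case_C31(a):    # a = b
    return lambda t: 1 - 2 * a * t

checks = []
for t in [-1.3, -0.4, 0.7, 2.1]:
    checks.append(abs(residual(case_C1(-5, 4), -5, 4, t)))
    checks.append(abs(residual(case_C2(1, 2), 1, 2, t)))
    checks.append(abs(residual(case_C31(3), 3, 3, t)))

print(max(checks))  # vanishes up to finite-difference error
```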

Now, according to Theorem 2.1, we denote by $\tilde u$, $\tilde v$ the functions satisfying

\[\tilde u'(t)+a\,\tilde u(-t)+b\,\tilde u(t)=0,\qquad \tilde u(0)=1,\]
(2.8)
\[\tilde v'(t)-a\,\tilde v(-t)+b\,\tilde v(t)=0,\qquad \tilde v(0)=1.\]
(2.9)

Observe that $\tilde u$ and $\tilde v$ can be obtained from the explicit expressions of the cases (C1)-(C3) by taking $\alpha=1$.

Remark 2.2 Note that if $\tilde u$ is in the case (C3.1), then $\tilde v$ is in the case (C3.2) and vice versa.

We now have the following properties of the functions $\tilde u$ and $\tilde v$.

Lemma 2.2 For every $t,s\in\mathbb{R}$, the following properties hold.

  1. (I) $\tilde u_e\equiv\tilde v_e$, and $\tilde u_o$, $\tilde v_o$ are proportional, that is, $\tilde u_o\equiv k\tilde v_o$ or $\tilde v_o\equiv k\tilde u_o$ for some real constant $k$,

  2. (II) $\tilde u_e(s)\tilde v_e(t)=\tilde u_e(t)\tilde v_e(s)$, $\ \tilde u_o(s)\tilde v_o(t)=\tilde u_o(t)\tilde v_o(s)$,

  3. (III) $\tilde u_e\tilde v_e-\tilde u_o\tilde v_o\equiv 1$,

  4. (IV) $\tilde u(s)\tilde v(-s)+\tilde u(-s)\tilde v(s)=2\big[\tilde u_e(s)\tilde v_e(s)-\tilde u_o(s)\tilde v_o(s)\big]=2$.

Proof (I) and (III) can be checked by inspection of the different cases. (II) is a direct consequence of (I). (IV) is obtained from the definition of even and odd parts and (III). □
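The identities in Lemma 2.2 can be spot-checked numerically. The following Python sketch (ours; the coefficient pairs are arbitrary representatives of cases (C1)-(C3)) verifies (III):

```python
import math

# Numeric spot-check (illustrative) of Lemma 2.2(III):
# u_e*v_e - u_o*v_o = 1, where u~ solves (2.8) and v~ solves (2.9)
# (v~ is u~ with a replaced by -a).

def parts(f):
    return (lambda t: (f(t) + f(-t)) / 2), (lambda t: (f(t) - f(-t)) / 2)

def sol(a, b):
    """Closed form from cases (C1)-(C3) with alpha = 1."""
    if a * a > b * b:
        w = math.sqrt(a * a - b * b)
        return lambda t: math.cos(w * t) - (a + b) / w * math.sin(w * t)
    if a * a < b * b:
        w = math.sqrt(b * b - a * a)
        return lambda t: math.cosh(w * t) - (a + b) / w * math.sinh(w * t)
    return (lambda t: 1 - 2 * a * t) if a == b else (lambda t: 1.0)

worst = 0.0
for (a, b) in [(-5, 4), (1, 2), (3, 3), (2, -2)]:
    ue, uo = parts(sol(a, b))
    ve, vo = parts(sol(-a, b))
    for t in [-2.0, -0.5, 0.3, 1.7]:
        worst = max(worst, abs(ue(t) * ve(t) - uo(t) * vo(t) - 1))

print(worst)  # ~ 0 up to floating-point error
```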

Now, Theorem 2.1 has the following corollary.

Corollary 2.3 Problem (2.5) has a unique solution if and only if $\tilde u(t_0)\neq 0$.

Proof Considering Lemma 2.2(III), $\tilde u$ and $\tilde v$, defined as in (2.8) and (2.9), respectively, satisfy the hypotheses of Theorem 2.1 with (h1); therefore a solution exists.

Now, assume $w_1$ and $w_2$ are two solutions of (2.5). Then $w_2-w_1$ is a solution of (2.6). Hence, $w_2-w_1$ is of one of the forms covered in the cases (C1)-(C3) and, in any case, a multiple of $\tilde u$, that is, $w_2-w_1=\lambda\tilde u$ for some $\lambda\in\mathbb{R}$. Also, it is clear that $(w_2-w_1)(t_0)=0$, but we have $\tilde u(t_0)\neq 0$ as a hypothesis, therefore $\lambda=0$ and $w_1=w_2$. That is, problem (2.5) has a unique solution.

Assume now that $w$ is a solution of (2.5) and $\tilde u(t_0)=0$. Then $w+\lambda\tilde u$ is also a solution of (2.5) for every $\lambda\in\mathbb{R}$, which proves the result. □

This last result raises an obvious question: under which circumstances is $\tilde u(t_0)\neq 0$? In order to answer it, it is enough to study the cases (C1)-(C3). We summarize this study in the following lemma, which can be checked easily.

Lemma 2.4 $\tilde u(t_0)=0$ only in the following cases:

  • if $a^2>b^2$ and $t_0=\frac{1}{\omega}\big(\arctan\frac{\omega}{a+b}+k\pi\big)$ for some $k\in\mathbb{Z}$,

  • if $a^2<b^2$, $ab>0$ and $t_0=\frac{1}{\omega}\operatorname{arctanh}\frac{\omega}{a+b}$,

  • if $a=b$ and $t_0=\frac{1}{2a}$.
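These zeros can be confirmed numerically; a Python sketch (ours, with arbitrary sample coefficients):

```python
import math

# Spot-check (illustrative) of Lemma 2.4: u~ vanishes at the listed t0.

# (C1): a = -5, b = 4, omega = 3, t0 = (arctan(omega/(a+b)) + k*pi)/omega
a, b = -5.0, 4.0
w = math.sqrt(a * a - b * b)
u1 = lambda t: math.cos(w * t) - (a + b) / w * math.sin(w * t)
t0_C1 = (math.atan(w / (a + b)) + math.pi) / w          # k = 1

# (C2): a = 2, b = 3 (so a*b > 0), t0 = arctanh(omega/(a+b))/omega
a2, b2 = 2.0, 3.0
w2 = math.sqrt(b2 * b2 - a2 * a2)
u2 = lambda t: math.cosh(w2 * t) - (a2 + b2) / w2 * math.sinh(w2 * t)
t0_C2 = math.atanh(w2 / (a2 + b2)) / w2

# (C3.1): a = b = 3, u~(t) = 1 - 6t vanishes at t0 = 1/(2a) = 1/6
u3 = lambda t: 1 - 6 * t

print(abs(u1(t0_C1)), abs(u2(t0_C2)), abs(u3(1 / 6)))  # all ~ 0
```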

Definition 2.1 Let $t_1,t_2\in\mathbb{R}$. We define the oriented characteristic function of the pair $(t_1,t_2)$ as

\[\chi_{t_1}^{t_2}(t):=\begin{cases}1,&t_1\le t\le t_2,\\-1,&t_2\le t<t_1,\\0,&\text{otherwise}.\end{cases}\]

Remark 2.3 The previous definition implies that, for any given integrable function $f:\mathbb{R}\to\mathbb{R}$,

\[\int_{t_1}^{t_2}f(s)\,ds=\int_{\mathbb{R}}\chi_{t_1}^{t_2}(s)f(s)\,ds.\]

Also, $\chi_{t_1}^{t_2}=-\chi_{t_2}^{t_1}$.

The following corollary gives the expression of the Green’s function for problem (2.5).

Corollary 2.5 Suppose $\tilde u(t_0)\neq 0$. Then the unique solution of problem (2.5) is given by

\[u(t):=\int_{\mathbb{R}}G(t,s)h(s)\,ds+\frac{c-\bar u(t_0)}{\tilde u(t_0)}\,\tilde u(t),\quad t\in\mathbb{R},\]

where

\[G(t,s):=\frac{1}{2}\Big(\big[\tilde u(-s)\tilde v(t)+\tilde v(-s)\tilde u(t)\big]\chi_0^t(s)+\big[\tilde u(-s)\tilde v(t)-\tilde v(-s)\tilde u(t)\big]\chi_{-t}^0(s)\Big),\quad t,s\in\mathbb{R}.\]
(2.10)

Proof First observe that $G(t,\cdot)$ is bounded and of compact support for every fixed $t\in\mathbb{R}$, so the integral $\int_{\mathbb{R}}G(t,s)h(s)\,ds$ is well defined. It is not difficult to verify, for any $t\in\mathbb{R}$, the following equalities:

\begin{align*}
u'(t)-\frac{c-\bar u(t_0)}{\tilde u(t_0)}\,\tilde u'(t)&=\frac{1}{2}\bigg(\frac{d}{dt}\int_0^t\big[\tilde u(-s)\tilde v(t)+\tilde v(-s)\tilde u(t)\big]h(s)\,ds+\frac{d}{dt}\int_{-t}^0\big[\tilde u(-s)\tilde v(t)-\tilde v(-s)\tilde u(t)\big]h(s)\,ds\bigg)\\
&=\frac{1}{2}\bigg(\frac{d}{dt}\int_0^t\big[\tilde u(-s)\tilde v(t)+\tilde v(-s)\tilde u(t)\big]h(s)\,ds+\frac{d}{dt}\int_0^t\big[\tilde u(s)\tilde v(t)-\tilde v(s)\tilde u(t)\big]h(-s)\,ds\bigg)\\
&=h(t)+\frac{1}{2}\bigg(\int_0^t\big[\tilde u(-s)\tilde v'(t)+\tilde v(-s)\tilde u'(t)\big]h(s)\,ds+\int_0^t\big[\tilde u(s)\tilde v'(t)-\tilde v(s)\tilde u'(t)\big]h(-s)\,ds\bigg),
\end{align*}

where the term $h(t)$ appears thanks to Lemma 2.2(IV).
(2.11)

On the other hand,

\begin{align*}
&a\bigg[u(-t)-\frac{c-\bar u(t_0)}{\tilde u(t_0)}\,\tilde u(-t)\bigg]+b\bigg[u(t)-\frac{c-\bar u(t_0)}{\tilde u(t_0)}\,\tilde u(t)\bigg]\\
&\quad=-\frac{a}{2}\int_0^t\big[\tilde u(s)\tilde v(-t)+\tilde v(s)\tilde u(-t)\big]h(-s)\,ds-\frac{a}{2}\int_0^t\big[\tilde u(-s)\tilde v(-t)-\tilde v(-s)\tilde u(-t)\big]h(s)\,ds\\
&\qquad+\frac{b}{2}\int_0^t\big[\tilde u(-s)\tilde v(t)+\tilde v(-s)\tilde u(t)\big]h(s)\,ds+\frac{b}{2}\int_0^t\big[\tilde u(s)\tilde v(t)-\tilde v(s)\tilde u(t)\big]h(-s)\,ds\\
&\quad=\frac{1}{2}\int_0^t\Big(\tilde u(-s)\big[-a\tilde v(-t)+b\tilde v(t)\big]+\tilde v(-s)\big[a\tilde u(-t)+b\tilde u(t)\big]\Big)h(s)\,ds\\
&\qquad+\frac{1}{2}\int_0^t\Big(\tilde u(s)\big[-a\tilde v(-t)+b\tilde v(t)\big]-\tilde v(s)\big[a\tilde u(-t)+b\tilde u(t)\big]\Big)h(-s)\,ds\\
&\quad=-\frac{1}{2}\bigg(\int_0^t\big[\tilde u(-s)\tilde v'(t)+\tilde v(-s)\tilde u'(t)\big]h(s)\,ds+\int_0^t\big[\tilde u(s)\tilde v'(t)-\tilde v(s)\tilde u'(t)\big]h(-s)\,ds\bigg),
\end{align*}

where the last equality uses (2.8) and (2.9), that is, $\tilde u'(t)=-a\tilde u(-t)-b\tilde u(t)$ and $\tilde v'(t)=a\tilde v(-t)-b\tilde v(t)$.
(2.12)

Thus, adding (2.11) and (2.12), it is clear that $u'(t)+a\,u(-t)+b\,u(t)=h(t)$.

We now check the initial condition:

\[u(t_0)=c-\bar u(t_0)+\frac{1}{2}\int_0^{t_0}\Big(\big[\tilde u(-s)\tilde v(t_0)+\tilde v(-s)\tilde u(t_0)\big]h(s)+\big[\tilde u(s)\tilde v(t_0)-\tilde v(s)\tilde u(t_0)\big]h(-s)\Big)\,ds.\]

Using the construction of the solution provided in Theorem 2.1, it is an easy exercise to check that

\[\bar u(t)=\frac{1}{2}\int_0^t\Big(\big[\tilde u(-s)\tilde v(t)+\tilde v(-s)\tilde u(t)\big]h(s)+\big[\tilde u(s)\tilde v(t)-\tilde v(s)\tilde u(t)\big]h(-s)\Big)\,ds,\quad t\in\mathbb{R},\]

which proves the result. □

Denote now by $G_{a,b}$ the Green’s function of problem (2.5) with coefficients $a$ and $b$. The following lemma is analogous to [18, Lemma 4.1].

Lemma 2.6 $G_{a,b}(t,s)=-G_{-a,-b}(-t,-s)$ for all $t,s\in\mathbb{R}$.

Proof Let $u(t):=\int_{\mathbb{R}}G_{a,b}(t,s)h(s)\,ds$ be a solution of $u'(t)+a\,u(-t)+b\,u(t)=h(t)$. Let $v(t):=u(-t)$. Then $v'(t)-a\,v(-t)-b\,v(t)=-h(-t)$, and therefore $v(t)=-\int_{\mathbb{R}}G_{-a,-b}(t,s)h(-s)\,ds$. On the other hand, by definition of $v$,

\[v(t)=\int_{\mathbb{R}}G_{a,b}(-t,s)h(s)\,ds=\int_{\mathbb{R}}G_{a,b}(-t,-s)h(-s)\,ds,\]

and, since $h$ is arbitrary, we can conclude that $G_{a,b}(-t,-s)=-G_{-a,-b}(t,s)$ for all $t,s\in\mathbb{R}$. □
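The identity of Lemma 2.6 can also be tested numerically from the explicit expression (2.10); a Python sketch (ours; the case (C1) coefficients are arbitrary):

```python
import math

# Numeric check (illustrative) of Lemma 2.6,
# G_{a,b}(t,s) = -G_{-a,-b}(-t,-s), using expression (2.10) with the
# case (C1) closed forms of u~ and v~.

def chi(t1, t2, s):
    """Oriented characteristic function of Definition 2.1."""
    if t1 <= s <= t2:
        return 1.0
    if t2 <= s < t1:
        return -1.0
    return 0.0

def green(a, b):
    w = math.sqrt(a * a - b * b)                         # case (C1)
    u = lambda t: math.cos(w * t) - (a + b) / w * math.sin(w * t)
    v = lambda t: math.cos(w * t) + (a - b) / w * math.sin(w * t)
    return lambda t, s: 0.5 * (
        (u(-s) * v(t) + v(-s) * u(t)) * chi(0, t, s)
        + (u(-s) * v(t) - v(-s) * u(t)) * chi(-t, 0, s))

g1, g2 = green(-5, 4), green(5, -4)
pts = [(-1.2, 0.3), (0.8, 0.5), (1.5, -0.9), (-0.4, -0.2)]
err = max(abs(g1(t, s) + g2(-t, -s)) for (t, s) in pts)
print(err)  # ~ 0
```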

As a consequence of the previous result, we arrive at the following immediate conclusion.

Corollary 2.7 $G_{a,b}$ is positive if and only if $G_{-a,-b}$ is negative.

3 Sign of the Green’s function

In this section we use the expressions obtained above to derive the explicit Green’s function, depending on the values of the constants $a$ and $b$. Moreover, we study the sign of this function and deduce suitable comparison results.

We separate the study in three cases, taking into consideration the expression of the general solution of (2.6).

3.1 The case (C1)

Now, assume the case (C1), i.e., a 2 > b 2 . Using (2.10), we get the following expression of G for this situation:

\[G(t,s)=\Big[\cos\big(\omega(s-t)\big)+\frac{b}{\omega}\sin\big(\omega(s-t)\big)\Big]\chi_0^t(s)+\frac{a}{\omega}\sin\big(\omega(s+t)\big)\chi_{-t}^0(s),\]

which we can rewrite as

\[G(t,s)=\begin{cases}\cos(\omega(s-t))+\frac{b}{\omega}\sin(\omega(s-t)),&0\le s\le t,&\text{(3.1a)}\\[2pt]-\cos(\omega(s-t))-\frac{b}{\omega}\sin(\omega(s-t)),&t\le s<0,&\text{(3.1b)}\\[2pt]\frac{a}{\omega}\sin(\omega(s+t)),&-t\le s\le 0,&\text{(3.1c)}\\[2pt]-\frac{a}{\omega}\sin(\omega(s+t)),&0\le s<-t,&\text{(3.1d)}\\[2pt]0,&\text{otherwise}.\end{cases}\]

Studying the expression of $G$ we can obtain maximum and anti-maximum principles. In order to do this, we will be interested in those maximal strips (in the sense of inclusion) of the kind $[\alpha,\beta]\times\mathbb{R}$ on which $G$ does not change sign, depending on the parameters.

So, we are in a position to study the sign of the Green’s function in the different triangles of definition. The result is the following.

Lemma 3.1 Assume $a^2>b^2$ and define

\[\eta(a,b):=\begin{cases}\frac{1}{\sqrt{a^2-b^2}}\arctan\frac{\sqrt{a^2-b^2}}{b},&b>0,\\[2pt]\frac{\pi}{2|a|},&b=0,\\[2pt]\frac{1}{\sqrt{a^2-b^2}}\Big(\arctan\frac{\sqrt{a^2-b^2}}{b}+\pi\Big),&b<0.\end{cases}\]

Then the Green’s function of problem (2.5) is

  • positive on $\{(t,s):0<s<t\}$ if and only if $t\in(0,\eta(a,b))$,

  • negative on $\{(t,s):t<s<0\}$ if and only if $t\in(-\eta(a,b),0)$.

If $a>0$, the Green’s function of problem (2.5) is

  • positive on $\{(t,s):-t<s<0\}$ if and only if $t\in(0,\pi/\sqrt{a^2-b^2})$,

  • positive on $\{(t,s):0<s<-t\}$ if and only if $t\in(-\pi/\sqrt{a^2-b^2},0)$,

and, if $a<0$, the Green’s function of problem (2.5) is

  • negative on $\{(t,s):-t<s<0\}$ if and only if $t\in(0,\pi/\sqrt{a^2-b^2})$,

  • negative on $\{(t,s):0<s<-t\}$ if and only if $t\in(-\pi/\sqrt{a^2-b^2},0)$.

Proof For $0<b<a$, the argument of the sine in (3.1c) lies in $(0,\omega t)$, so (3.1c) is positive for $0<t<\pi/\omega$. On the other hand, it is easy to check that (3.1a) is positive as long as $t<\eta(a,b)$.

The rest of the proof continues similarly. □

As a corollary of the previous result we obtain the following one.

Lemma 3.2 Assume $a^2>b^2$. Then we have the following:

  • if $a>0$, the Green’s function of problem (2.5) is non-negative on $[0,\eta(a,b)]\times\mathbb{R}$,

  • if $a<0$, the Green’s function of problem (2.5) is non-positive on $[-\eta(a,b),0]\times\mathbb{R}$,

  • the Green’s function of problem (2.5) changes sign on any other strip not a subset of the aforementioned.

Proof The proof follows from the previous result together with the fact that

\[\eta(a,b)<\frac{\pi}{\omega}=\frac{\pi}{\sqrt{a^2-b^2}}.\]

 □
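The bound used in the proof can be swept numerically; a Python sketch (ours; the sample coefficients are arbitrary):

```python
import math

# Numeric sweep (illustrative) of the bound eta(a,b) < pi/omega used in
# the proof of Lemma 3.2, over sample coefficients with a^2 > b^2.

def eta(a, b):
    w = math.sqrt(a * a - b * b)
    if b > 0:
        return math.atan(w / b) / w
    if b == 0:
        return math.pi / (2 * abs(a))
    return (math.atan(w / b) + math.pi) / w

worst = max(eta(a, b) - math.pi / math.sqrt(a * a - b * b)
            for a in [1.5, 2.0, -3.0, 5.0]
            for b in [-1.0, 0.0, 1.0])
print(worst)  # strictly negative
```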

Remark 3.1 Realize that the strips defined in the previous lemma are optimal in the sense that $G$ changes sign on any bigger strip. The same observation applies to the similar results we will prove for the other cases. This fact implies that we cannot have maximum or anti-maximum principles on bigger intervals for the solution, something that is widely known and which the following results, together with Example 3.4, illustrate.

Since $G(t,s)$ for $s$ near $0$ changes sign at $t=\eta(a,b)$, it is immediate to verify that, by defining the function $h_\epsilon(s)=1$ for all $s\in(-\epsilon,\epsilon)$ and $h_\epsilon(s)=0$ otherwise, we obtain a solution of problem (2.5) that crosses the line $0$ as close to the right of $\eta(a,b)$ as necessary. So the estimates are optimal for this case.

However, one can study problems with a particular non-homogeneous part $h$ for which the solution has constant sign on a bigger interval. This is shown in the following example.

Example 3.1 Consider the problem $x'(t)-5x(-t)+4x(t)=\cos^2 3t$, $x(0)=0$.

Clearly, we are in the case (C1). For this problem,

\begin{align*}
\bar u(t)&:=\int_0^t\Big[\cos\big(3(s-t)\big)+\frac{4}{3}\sin\big(3(s-t)\big)\Big]\cos^2 3s\,ds-\frac{5}{3}\int_{-t}^0\sin\big(3(s+t)\big)\cos^2 3s\,ds\\
&=\frac{1}{18}\big(6\cos 3t+3\cos 6t+2\sin 3t+2\sin 6t-9\big).
\end{align*}

$\bar u(0)=0$, so $\bar u$ is the solution of our problem.

Studying $\bar u$, we arrive at the conclusion that $\bar u$ is non-negative on the interval $[0,\gamma]$, being zero at both ends of the interval, where

\[\gamma=\frac{1}{3}\arccos\bigg(\frac{1}{39}\Big[\sqrt[3]{47215-5265\sqrt{41}}+\sqrt[3]{5\big(9443+1053\sqrt{41}\big)}-35\Big]\bigg)=0.201824\ldots.\]

Also, $\bar u(t)<0$ for $t=\gamma+\epsilon$ with $\epsilon>0$ sufficiently small. Furthermore, the solution is periodic with period $2\pi/3$ (see Figure 1).

Figure 1

Graph of the function u ¯ on the interval [0,2π/3]. Observe that u ¯ is positive on (0,γ) and negative on (γ,2π/3).

If we use Lemma 3.2, we find that, a priori, $\bar u$ is non-positive on $[-\eta(-5,4),0]=[-\frac{1}{3}\arctan\frac{3}{4},0]\approx[-0.2145,0]$, which we know is true by the study we have done of $\bar u$, but this estimate is, as expected, far from the interval $[\gamma-2\pi/3,0]$ on which $\bar u$ is non-positive. This does not contradict the optimality of the a priori estimate: as we have shown before, other examples can be found for which the interval where the solution has constant sign is arbitrarily close to the one given by the a priori estimate.
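The computations of Example 3.1 can be cross-checked numerically; the following Python sketch (ours) compares the closed form of $\bar u$ with direct quadrature of the Green's-function integrals and evaluates $\bar u$ at $\gamma$:

```python
import math

# Cross-check (illustrative) of Example 3.1: the closed form of u_bar
# against direct quadrature of the Green's-function integrals, and the
# zero at gamma ~ 0.201824.

def ubar_closed(t):
    return (6 * math.cos(3 * t) + 3 * math.cos(6 * t)
            + 2 * math.sin(3 * t) + 2 * math.sin(6 * t) - 9) / 18

def simpson(f, a, b, n=2000):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h)
                          for k in range(1, n))
    return s * h / 3

def ubar_quad(t):
    h = lambda s: math.cos(3 * s) ** 2
    first = simpson(lambda s: (math.cos(3 * (s - t))
                               + 4 / 3 * math.sin(3 * (s - t))) * h(s), 0, t)
    second = simpson(lambda s: math.sin(3 * (s + t)) * h(s), -t, 0)
    return first - 5 / 3 * second

root41 = math.sqrt(41)
gamma = math.acos(((47215 - 5265 * root41) ** (1 / 3)
                   + (5 * (9443 + 1053 * root41)) ** (1 / 3) - 35) / 39) / 3

err = max(abs(ubar_closed(t) - ubar_quad(t)) for t in [0.15, 0.6, 1.4])
print(err, abs(ubar_closed(gamma)))  # both ~ 0
```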

3.2 The case (C2)

We study here the case (C2). In this case, it is clear that

\[G(t,s)=\Big[\cosh\big(\omega(s-t)\big)+\frac{b}{\omega}\sinh\big(\omega(s-t)\big)\Big]\chi_0^t(s)+\frac{a}{\omega}\sinh\big(\omega(s+t)\big)\chi_{-t}^0(s),\]

which we can rewrite as

\[G(t,s)=\begin{cases}\cosh(\omega(s-t))+\frac{b}{\omega}\sinh(\omega(s-t)),&0\le s\le t,&\text{(3.2a)}\\[2pt]-\cosh(\omega(s-t))-\frac{b}{\omega}\sinh(\omega(s-t)),&t\le s<0,&\text{(3.2b)}\\[2pt]\frac{a}{\omega}\sinh(\omega(s+t)),&-t\le s\le 0,&\text{(3.2c)}\\[2pt]-\frac{a}{\omega}\sinh(\omega(s+t)),&0\le s<-t,&\text{(3.2d)}\\[2pt]0,&\text{otherwise}.\end{cases}\]

Studying the expression of G we can obtain maximum and anti-maximum principles. With this information, we can state the following lemma.

Lemma 3.3 Assume $a^2<b^2$ and define

\[\sigma(a,b):=\frac{1}{\sqrt{b^2-a^2}}\operatorname{arctanh}\frac{\sqrt{b^2-a^2}}{b}.\]

Then we have the following:

  • if $a>0$, the Green’s function of problem (2.5) is positive on $\{(t,s):-t<s<0\}$ and $\{(t,s):0<s<-t\}$,

  • if $a<0$, the Green’s function of problem (2.5) is negative on $\{(t,s):-t<s<0\}$ and $\{(t,s):0<s<-t\}$,

  • if $b>0$, the Green’s function of problem (2.5) is negative on $\{(t,s):t<s<0\}$,

  • if $b>0$, the Green’s function of problem (2.5) is positive on $\{(t,s):0<s<t\}$ if and only if $t\in(0,\sigma(a,b))$,

  • if $b<0$, the Green’s function of problem (2.5) is positive on $\{(t,s):0<s<t\}$,

  • if $b<0$, the Green’s function of problem (2.5) is negative on $\{(t,s):t<s<0\}$ if and only if $t\in(\sigma(a,b),0)$.

Proof For $0<a<b$, the argument of the sinh in (3.2d) is negative, so (3.2d) is positive. The argument of the sinh in (3.2c) is positive, so (3.2c) is positive. It is easy to check that (3.2a) is positive as long as $t<\sigma(a,b)$.

On the other hand, (3.2b) is always negative.

The rest of the proof continues similarly. □

As a corollary of the previous result we obtain the following one.

Lemma 3.4 Assume $a^2<b^2$. Then we have the following:

  • if $0<a<b$, the Green’s function of problem (2.5) is non-negative on $[0,\sigma(a,b)]\times\mathbb{R}$,

  • if $b<-a<0$, the Green’s function of problem (2.5) is non-negative on $[0,+\infty)\times\mathbb{R}$,

  • if $b<a<0$, the Green’s function of problem (2.5) is non-positive on $[\sigma(a,b),0]\times\mathbb{R}$,

  • if $b>-a>0$, the Green’s function of problem (2.5) is non-positive on $(-\infty,0]\times\mathbb{R}$,

  • the Green’s function of problem (2.5) changes sign in any other strip not a subset of the aforementioned.

Example 3.2 Consider the problem

\[x'(t)+\lambda x(-t)+2\lambda x(t)=e^t,\qquad x(1)=c,\]
(3.3)

with λ>0.

Clearly, we are in the case (C2), and

\[\sigma(\lambda,2\lambda)=\frac{1}{\sqrt{3}\,\lambda}\operatorname{arctanh}\frac{\sqrt{3}}{2}=\frac{1}{2\sqrt{3}\,\lambda}\ln\big[7+4\sqrt{3}\big]=\frac{1}{\lambda}\,0.76035\ldots.\]

If $\lambda\neq 1/\sqrt{3}$, then

\begin{align*}
\bar u(t)&:=\int_0^t\Big[\cosh\big(\sqrt{3}\lambda(s-t)\big)+\frac{2}{\sqrt{3}}\sinh\big(\sqrt{3}\lambda(s-t)\big)\Big]e^s\,ds+\frac{1}{\sqrt{3}}\int_{-t}^0\sinh\big(\sqrt{3}\lambda(s+t)\big)e^s\,ds\\
&=\frac{1}{3\lambda^2-1}\Big[(\lambda-1)\big(\sqrt{3}\sinh(\sqrt{3}\lambda t)-\cosh(\sqrt{3}\lambda t)\big)+(2\lambda-1)e^t-\lambda e^{-t}\Big],\\
\tilde u(t)&=\cosh(\sqrt{3}\lambda t)-\sqrt{3}\sinh(\sqrt{3}\lambda t).
\end{align*}

With these equalities, it is straightforward to construct the unique solution w of problem (3.3). For instance, in the case λ=c=1,

\[\bar u(t)=\sinh t,\]

and

\[w(t)=\sinh t+\frac{1-\sinh 1}{\cosh\sqrt{3}-\sqrt{3}\sinh\sqrt{3}}\Big(\cosh(\sqrt{3}t)-\sqrt{3}\sinh(\sqrt{3}t)\Big).\]

Observe that for $\lambda=1$, $c=\sinh 1$, we get $w(t)=\sinh t$. Lemma 3.4 guarantees the non-negativity of $w$ on $[0,0.76035\ldots]$, but it is clear that the solution $w$ is positive on the whole positive real line.
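For $\lambda=1$ the claim $w(t)=\sinh t$ can be verified directly, since $\cosh t-\sinh t+2\sinh t=e^t$; a Python sketch (ours):

```python
import math

# Direct check (illustrative) that w(t) = sinh(t) solves problem (3.3)
# for lambda = 1, c = sinh(1):  w'(t) + w(-t) + 2 w(t) = e^t.

def residual(t, eps=1e-6):
    dw = (math.sinh(t + eps) - math.sinh(t - eps)) / (2 * eps)
    return dw + math.sinh(-t) + 2 * math.sinh(t) - math.exp(t)

worst = max(abs(residual(t)) for t in [-2.0, -0.3, 0.5, 1.0, 2.5])
print(worst)  # cosh t - sinh t + 2 sinh t = e^t holds identically
```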

3.3 The case (C3)

We study here the case (C3) for a=b. In this case, it is clear that

\[G(t,s)=\big[1+a(s-t)\big]\chi_0^t(s)+a(s+t)\chi_{-t}^0(s),\]

which we can rewrite as

\[G(t,s)=\begin{cases}1+a(s-t),&0\le s\le t,\\-1-a(s-t),&t\le s<0,\\a(s+t),&-t\le s\le 0,\\-a(s+t),&0\le s<-t,\\0,&\text{otherwise}.\end{cases}\]

Studying the expression of G we can obtain maximum and anti-maximum principles. With this information, we can prove the following lemma as we did with the analogous ones for cases (C1) and (C2).

Lemma 3.5 Assume $a=b$. Then, if $a>0$, the Green’s function of problem (2.5) is

  • positive on $\{(t,s):-t<s<0\}$ and $\{(t,s):0<s<-t\}$,

  • negative on $\{(t,s):t<s<0\}$,

  • positive on $\{(t,s):0<s<t\}$ if and only if $t\in(0,1/a)$,

and, if $a<0$, the Green’s function of problem (2.5) is

  • negative on $\{(t,s):-t<s<0\}$ and $\{(t,s):0<s<-t\}$,

  • positive on $\{(t,s):0<s<t\}$,

  • negative on $\{(t,s):t<s<0\}$ if and only if $t\in(1/a,0)$.

As a corollary of the previous result we obtain the following one.

Lemma 3.6 Assume a=b. Then we have the following:

  • if $a>0$, the Green’s function of problem (2.5) is non-negative on $[0,1/a]\times\mathbb{R}$,

  • if $a<0$, the Green’s function of problem (2.5) is non-positive on $[1/a,0]\times\mathbb{R}$,

  • the Green’s function of problem (2.5) changes sign on any other strip not a subset of the aforementioned.

For this particular case we have another way of computing the solution to the problem.

Proposition 3.7 Let $a=b$ and assume $2at_0\neq 1$. Let $H(t):=\int_{t_0}^t h(s)\,ds$ be the antiderivative of $h$ vanishing at $t_0$ and $\mathbf{H}(t):=\int_{t_0}^t H(s)\,ds$ the one of $H$. Then problem (2.5) has a unique solution given by

\[u(t)=H(t)-2a\mathbf{H}_o(t)+\frac{2at-1}{2at_0-1}\,c.\]

Proof The equation is satisfied, since

\[u'(t)+a\big(u(t)+u(-t)\big)=u'(t)+2a\,u_e(t)=h(t)-2aH_e(t)+\frac{2ac}{2at_0-1}+2aH_e(t)-\frac{2ac}{2at_0-1}=h(t),\]

where we use that $\mathbf{H}_o'=H_e$.

The initial condition is also satisfied for, clearly, u( t 0 )=c. □

Example 3.3 Consider the problem $x'(t)+\lambda\big(x(t)+x(-t)\big)=|t|^p$, $x(0)=1$, for $\lambda,p\in\mathbb{R}$, $p>-1$. For $p\in(-1,0)$ the right-hand side has a singularity at $0$. We can apply the theory in order to get the solution

\[u(t)=\frac{1}{p+1}\,t|t|^p+1-2\lambda t,\]

where $\bar u(t)=\frac{1}{p+1}t|t|^p$ and $\tilde u(t)=1-2\lambda t$. $\bar u$ is positive on $(0,+\infty)$ and negative on $(-\infty,0)$ independently of $\lambda$, so the solution has better properties than the ones guaranteed by Lemma 3.6.
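The solution of Example 3.3 can be checked against the equation; a Python sketch (ours; the sample values $\lambda=2$, $p=1/2$ are arbitrary):

```python
# Residual check (illustrative) of the Example 3.3 solution for the
# sample values lam = 2, p = 0.5; the equation is
# x'(t) + lam*(x(t) + x(-t)) = |t|^p.

lam, p = 2.0, 0.5

def u(t):
    return t * abs(t) ** p / (p + 1) + 1 - 2 * lam * t

def residual(t, d=1e-6):
    du = (u(t + d) - u(t - d)) / (2 * d)
    return du + lam * (u(t) + u(-t)) - abs(t) ** p

worst = max(abs(residual(t)) for t in [-1.5, -0.4, 0.8, 2.0])
print(worst, u(0.0))  # residual ~ 0 and u(0) = 1
```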

The next example shows that the estimate is sharp.

Example 3.4 Consider the problem

\[u_\epsilon'(t)+u_\epsilon(-t)+u_\epsilon(t)=h_\epsilon(t),\quad t\in\mathbb{R};\qquad u_\epsilon(0)=0,\]
(3.4)

where $\epsilon>0$, $h_\epsilon(t)=12t(\epsilon-t)\chi_{[0,\epsilon]}(t)$ and $\chi_{[0,\epsilon]}$ is the characteristic function of the interval $[0,\epsilon]$. Observe that $h_\epsilon$ is continuous. By means of the expression of the Green’s function for problem (3.4), we see that its unique solution is given by

\[u_\epsilon(t)=\begin{cases}-2\epsilon^3t-\epsilon^4,&t<-\epsilon,\\-t^4-2\epsilon t^3,&-\epsilon\le t<0,\\t^4-(4+2\epsilon)t^3+6\epsilon t^2,&0\le t<\epsilon,\\-2\epsilon^3t+2\epsilon^3+\epsilon^4,&t\ge\epsilon.\end{cases}\]

The a priori estimate on the solution tells us that $u_\epsilon$ is non-negative at least on $[0,1]$. Studying the function $u_\epsilon$ (see Figure 2), it is easy to check that $u_\epsilon$ is zero at $0$ and $1+\epsilon/2$, positive on $(-\infty,1+\epsilon/2)\setminus\{0\}$ and negative on $(1+\epsilon/2,+\infty)$.

Figure 2

Graph of the functions $u_1$ and $h_1$ (dashed). Observe that $u_1$ becomes zero at $t=1+\epsilon/2=3/2$.
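The piecewise expression for $u_\epsilon$ can be verified numerically; a Python sketch (ours; $\epsilon=1$):

```python
# Sanity check (illustrative) of the piecewise solution of (3.4) for
# eps = 1: continuity at the breakpoints and the residual of
# u'(t) + u(-t) + u(t) = h(t) at interior sample points.

EPS = 1.0

def h(t):
    return 12 * t * (EPS - t) if 0 <= t <= EPS else 0.0

def u(t):
    if t < -EPS:
        return -2 * EPS ** 3 * t - EPS ** 4
    if t < 0:
        return -t ** 4 - 2 * EPS * t ** 3
    if t < EPS:
        return t ** 4 - (4 + 2 * EPS) * t ** 3 + 6 * EPS * t ** 2
    return -2 * EPS ** 3 * t + 2 * EPS ** 3 + EPS ** 4

def residual(t, d=1e-6):
    du = (u(t + d) - u(t - d)) / (2 * d)
    return du + u(-t) + u(t) - h(t)

jump = max(abs(u(b - 1e-9) - u(b + 1e-9)) for b in (-EPS, 0.0, EPS))
worst = max(abs(residual(t)) for t in [-1.7, -0.6, 0.4, 0.9, 1.8])
print(jump, worst)  # both ~ 0
```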

The case (C3.2) is very similar,

\[G(t,s)=\begin{cases}1+a(t-s),&0\le s\le t,\\-1-a(t-s),&t\le s<0,\\a(s+t),&-t\le s\le 0,\\-a(s+t),&0\le s<-t,\\0,&\text{otherwise}.\end{cases}\]

Lemma 3.8 Assume $a=-b$. Then, if $a>0$, the Green’s function of problem (2.5) is

  • positive on $\{(t,s):-t<s<0\}$, $\{(t,s):0<s<t\}$ and $\{(t,s):0<s<-t\}$,

  • negative on $\{(t,s):t<s<0\}$ if and only if $t\in(-1/a,0)$,

and, if $a<0$, the Green’s function of problem (2.5) is

  • negative on $\{(t,s):-t<s<0\}$, $\{(t,s):t<s<0\}$ and $\{(t,s):0<s<-t\}$,

  • positive on $\{(t,s):0<s<t\}$ if and only if $t\in(0,-1/a)$.

As a corollary of the previous result we obtain the following one.

Lemma 3.9 Assume $a=-b$. Then we have the following:

  • if $a>0$, the Green’s function of problem (2.5) is non-negative on $[0,+\infty)\times\mathbb{R}$,

  • if $a<0$, the Green’s function of problem (2.5) is non-positive on $(-\infty,0]\times\mathbb{R}$,

  • the Green’s function of problem (2.5) changes sign on any other strip not a subset of the aforementioned.

Again, for this particular case we have another way of computing the solution to the problem.

Proposition 3.10 Let $a=-b$, $H(t):=\int_0^t h(s)\,ds$ and $\mathbf{H}(t):=\int_0^t H(s)\,ds$. Then problem (2.5) has a unique solution given by

\[u(t)=H(t)-H(t_0)+2a\big(\mathbf{H}_e(t)-\mathbf{H}_e(t_0)\big)+c.\]

Proof The equation is satisfied, since

\[u'(t)+a\big(u(-t)-u(t)\big)=u'(t)-2a\,u_o(t)=h(t)+2aH_o(t)-2aH_o(t)=h(t),\]

since $\mathbf{H}_e'=H_o$ and $u_o=H_o$.

The initial condition is also satisfied for, clearly, u( t 0 )=c. □

Example 3.5 Consider the problem

\[x'(t)+\lambda\big(x(-t)-x(t)\big)=\frac{\lambda t^2-2t+\lambda}{(1+t^2)^2},\qquad x(0)=\lambda,\]

for λR. We can apply the theory in order to get the solution

\[u(t)=\frac{1}{1+t^2}+\lambda(1+2\lambda t)\arctan t-\lambda^2\ln\big(1+t^2\big)+\lambda-1,\]

where $\bar u(t)=\frac{1}{1+t^2}+\lambda(1+2\lambda t)\arctan t-\lambda^2\ln(1+t^2)-1$.

Observe that the real function

\[h(t):=\frac{\lambda t^2-2t+\lambda}{(1+t^2)^2}\]

is positive on $\mathbb{R}$ if $\lambda>1$ and negative on $\mathbb{R}$ if $\lambda<-1$. Therefore, Lemma 3.9 guarantees that $\bar u$ is positive on $(0,\infty)$ for $\lambda>1$ and on $(-\infty,0)$ for $\lambda<-1$.
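The solution of Example 3.5 can be checked against the equation; a Python sketch (ours; the sample value $\lambda=2$ is arbitrary):

```python
import math

# Residual check (illustrative) of Example 3.5 for lam = 2:
# x'(t) + lam*(x(-t) - x(t)) = (lam*t^2 - 2t + lam)/(1 + t^2)^2,
# with x(0) = lam.

lam = 2.0

def x(t):
    return (1 / (1 + t * t) + lam * (1 + 2 * lam * t) * math.atan(t)
            - lam ** 2 * math.log(1 + t * t) + lam - 1)

def h(t):
    return (lam * t * t - 2 * t + lam) / (1 + t * t) ** 2

def residual(t, d=1e-6):
    dx = (x(t + d) - x(t - d)) / (2 * d)
    return dx + lam * (x(-t) - x(t)) - h(t)

worst = max(abs(residual(t)) for t in [-3.0, -0.7, 0.4, 1.9])
print(worst, x(0.0))  # residual ~ 0 and x(0) = lam
```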