Introduction

The purpose of this paper is to establish the existence of solutions for a class of nonlinear third order ordinary differential equations with integral boundary conditions. More specifically, we consider the following problem:

\[
u'''(t) + f\bigl(t, u(t), u'(t), u''(t)\bigr) = 0, \qquad t \in [0,1],
\tag{1}
\]
\[
u(0) = 0,
\tag{2}
\]
\[
u'(0) - a\,u''(0) = \int_0^1 h_1\bigl(u(s), u'(s)\bigr)\,ds,
\tag{3}
\]
\[
u'(1) = \int_0^1 h_2\bigl(u(s), u'(s)\bigr)\,ds,
\tag{4}
\]

where $f : [0,1] \times \mathbb{R}^3 \to \mathbb{R}$ and $h_1, h_2 : \mathbb{R}^2 \to \mathbb{R}$ are continuous functions, and $a$ is a nonnegative real number. Several papers have been devoted to the study of third order differential equations with two-point and three-point boundary conditions; see [1]–[3] and [4] for references. Problems with integral boundary conditions arise in the description of many phenomena in the applied sciences; we refer the interested reader to [5] and the references therein. Very few papers have dealt with nonlocal conditions for third order differential equations; we can mention [6], [7] and [8]. For higher order differential equations with functional boundary conditions the interested reader can consult [9]. In this work we use the method of lower and upper solutions to generate a sequence of modified nonlinear problems, each having a unique solution; in this way we obtain a sequence of functions that, together with their first and second order derivatives, is uniformly bounded. We then extract a subsequence converging uniformly to a solution of our original problem (1)-(4). Contrary to many works in the literature, we develop an iterative technique that is not necessarily monotone. We should point out that our approach is totally different from that of [9].

Preliminaries

Let $I$ denote the real interval $[0,1]$. $C(I)$ is the Banach space of real-valued continuous functions on $I$, equipped with the norm $\|u\|_0 := \max\{|u(t)| ;\ t \in I\}$ for $u \in C(I)$. Let $D$ denote the set of all real-valued functions that are three times continuously differentiable on $I$. We define the norm of $u \in D$ by

\[
\|u\|_D = \|u\|_0 + \|u'\|_0 + \|u''\|_0 + \|u'''\|_0 .
\]

Let $D_0 = \{u \in D ;\ u(0) = 0\}$. Then $(D_0, \|\cdot\|_D)$ is a Banach space.
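For readers who wish to experiment numerically, the following small sketch (not part of the paper) approximates the norm $\|u\|_D$ on a uniform grid for the sample function $u(t) = \sin(\pi t)$, which belongs to $D_0$ since $u(0) = 0$. The derivatives are supplied in closed form, and the exact value of the norm is $1 + \pi + \pi^2 + \pi^3$.

```python
import numpy as np

# Grid approximation of ||u||_D = ||u||_0 + ||u'||_0 + ||u''||_0 + ||u'''||_0
# for the sample function u(t) = sin(pi t); derivatives are given in closed form.
t = np.linspace(0.0, 1.0, 1001)
u  = np.sin(np.pi * t)
u1 = np.pi * np.cos(np.pi * t)        # u'
u2 = -np.pi ** 2 * np.sin(np.pi * t)  # u''
u3 = -np.pi ** 3 * np.cos(np.pi * t)  # u'''

sup = lambda g: np.abs(g).max()       # ||.||_0, the sup norm, evaluated on the grid
norm_D = sup(u) + sup(u1) + sup(u2) + sup(u3)
print(norm_D, 1 + np.pi + np.pi ** 2 + np.pi ** 3)   # both approximately 45.02
```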

Definition 1

A solution of problem (1)-(4) is a function $u \in D_0$ that satisfies (1) for every $t \in I$ and the conditions (3) and (4).

Definition 2

Let $\alpha, \beta \in D_0$ satisfy $\alpha'(t) \le \beta'(t)$ for every $t \in I$. We denote by $[\alpha', \beta']$ the set of all $v \in C(I)$ such that $\alpha'(t) \le v(t) \le \beta'(t)$ for every $t \in I$, and by $[\alpha, \beta]$ the set of all $w \in C(I)$ such that $\alpha(t) \le w(t) \le \beta(t)$ for every $t \in I$.

It is clear that if $u' \in [\alpha', \beta']$ and $u \in D_0$, then $u \in [\alpha, \beta]$; indeed, since $u(0) = \alpha(0) = \beta(0) = 0$, integrating $\alpha' \le u' \le \beta'$ from $0$ to $t$ gives $\alpha \le u \le \beta$.

Definition 3

Let $\alpha, \beta \in D_0$ satisfy $\alpha'(t) \le \beta'(t)$ for every $t \in I$. Let $S(\alpha, \beta)$ denote the set of all functions $u \in D_0$ such that $u \in [\alpha, \beta]$ and $u' \in [\alpha', \beta']$.

Remark 1

It is clear that $u \in D_0$ and $u' \in [\alpha', \beta']$ imply that $u \in S(\alpha, \beta)$.

Definition 4

Let $\alpha, \beta \in D_0$ satisfy $\alpha'(t) \le \beta'(t)$ for every $t \in I$. Define the operator $p : D_0 \to [\alpha, \beta]$ by

\[
(pu)(t) = \max\bigl\{\alpha(t), \min\bigl(u(t), \beta(t)\bigr)\bigr\}, \qquad t \in I,
\]

and the operator $q : D_0 \to [\alpha', \beta']$ by

\[
(qv)(t) = \max\bigl\{\alpha'(t), \min\bigl(v(t), \beta'(t)\bigr)\bigr\}, \qquad t \in I.
\]
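For illustration only, here is a small numerical sketch of the truncation operators $p$ and $q$ of Definition 4, acting on grid samples of a function. The bounding functions $\alpha(t) = -t$ and $\beta(t) = t$ (so $\alpha' \equiv -1$, $\beta' \equiv 1$) are hypothetical choices made just for this sketch.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 101)

alpha  = lambda s: -s                    # hypothetical lower function, alpha(0) = 0
beta   = lambda s: s                     # hypothetical upper function, beta(0) = 0
alpha1 = lambda s: -np.ones_like(s)      # alpha'
beta1  = lambda s: np.ones_like(s)       # beta'

def p(u_vals):
    """(p u)(t) = max{alpha(t), min(u(t), beta(t))}, evaluated on the grid."""
    return np.clip(u_vals, alpha(t), beta(t))

def q(v_vals):
    """(q v)(t) = max{alpha'(t), min(v(t), beta'(t))}, evaluated on the grid."""
    return np.clip(v_vals, alpha1(t), beta1(t))

u = 2.0 * np.sin(3.0 * t)                # a sample function that leaves [alpha, beta]
print(np.all((alpha(t) <= p(u)) & (p(u) <= beta(t))))   # True: p(u) lies in [alpha, beta]
```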

Remark 2

The operators p and q are continuous and bounded.

Main results

In this section we state and prove our main results. The first result is of independent interest and plays a key role in the proof of our second result.

Theorem 1

Let $\phi : I \times \mathbb{R} \to \mathbb{R}$ be continuous, bounded and satisfy the following condition:

$(\mathrm{H}_\phi)$: $\bigl(\phi(t, v_2) - \phi(t, v_1)\bigr)(v_2 - v_1) < 0$ for all $v_1, v_2 \in \mathbb{R}$ such that $v_1 \neq v_2$ and all $t \in I$.

Then for any $\delta, \rho \in \mathbb{R}$ the boundary value problem

\[
\begin{cases}
u'''(t) = -\phi\bigl(t, u'(t)\bigr), & t \in I,\\
u'(0) - a\,u''(0) = \delta,\\
u'(1) = \rho
\end{cases}
\tag{5}
\]

has a unique solution $u \in D_0$.

Proof

Uniqueness. Suppose that problem (5) has two solutions $x$ and $u$ in $D_0$. Put $z = x' - u'$. Then $z(1) = x'(1) - u'(1) = 0$. We first show that $z(0) = 0$. Suppose this is not true; then either $z(0) > 0$ or $z(0) < 0$. We consider the case $z(0) > 0$. From the condition at $t = 0$ it follows that $0 < z(0) = a\,z'(0)$. Since $a$ is nonnegative, this forces $a > 0$ and $z'(0) = z(0)/a > 0$, so $z$ is increasing to the right of $t = 0$. Since $z(1) = 0$, there must exist $\xi \in (0,1)$ such that $z(\xi) = \max_{t \in I} z(t)$. Then

\[
0 < z(\xi), \qquad 0 = z'(\xi) \qquad \text{and} \qquad z''(\xi) \le 0 .
\]

The differential equation in (5) and $(\mathrm{H}_\phi)$ imply

\[
0 \ge z''(\xi)\, z(\xi) = -\bigl(\phi\bigl(\xi, x'(\xi)\bigr) - \phi\bigl(\xi, u'(\xi)\bigr)\bigr)\bigl(x'(\xi) - u'(\xi)\bigr) > 0 .
\]

This is a contradiction. The case $z(0) < 0$ leads to a contradiction in the same way. Hence $z(0) = 0$. Now, $z$ is continuous on $I$ with $z(0) = z(1) = 0$. If $\max_{t \in I} z(t) > 0$, it is attained at an interior point $\tau$ with $z'(\tau) = 0$ and $z''(\tau) \le 0$, and proceeding as before we reach a contradiction; the same argument applied to a point where $z$ attains a negative minimum shows that $z$ cannot be negative either. So $z(t) = 0$ for all $t \in I$. This shows that $x'(t) = u'(t)$ for all $t \in I$. Since $x(0) = u(0) = 0$, it follows that $x(t) = u(t)$ for all $t \in I$, which shows the uniqueness of the solution.

Existence. For $\lambda \in [0,1]$ consider the family of problems

\[
\begin{cases}
u'''(t) = -\lambda\,\phi\bigl(t, u'(t)\bigr), & t \in I,\\
u'(0) - a\,u''(0) = \lambda\delta,\\
u'(1) = \lambda\rho .
\end{cases}
\tag{6}
\]

For $\lambda = 0$ problem (6) has only the trivial solution. Thus, we consider the case $\lambda \in (0,1]$.

(i) $u$ is a solution of (6) if and only if it satisfies, for all $t \in I$,

\[
u(t) = \frac{\lambda t}{a+1}\Bigl(\delta + a\rho + a\int_0^1 (1-s)\,\phi\bigl(s, u'(s)\bigr)\,ds\Bigr)
+ \frac{\lambda t^2}{2(a+1)}\Bigl(\rho - \delta + \int_0^1 (1-s)\,\phi\bigl(s, u'(s)\bigr)\,ds\Bigr)
- \lambda \int_0^t \frac{(t-s)^2}{2}\,\phi\bigl(s, u'(s)\bigr)\,ds .
\tag{7}
\]

Indeed, it is clear that the differential equation in (6), together with $u(0) = 0$, implies

\[
u(t) = u'(0)\,t + u''(0)\,\frac{t^2}{2} - \lambda \int_0^t \frac{(t-s)^2}{2}\,\phi\bigl(s, u'(s)\bigr)\,ds .
\tag{8}
\]

Then

\[
u'(t) = u'(0) + u''(0)\,t - \lambda \int_0^t (t-s)\,\phi\bigl(s, u'(s)\bigr)\,ds .
\tag{9}
\]

It follows that

\[
\lambda\rho = u'(1) = u'(0) + u''(0) - \lambda \int_0^1 (1-s)\,\phi\bigl(s, u'(s)\bigr)\,ds .
\]

But $u'(0) = a\,u''(0) + \lambda\delta$, so that

\[
u''(0) = \frac{\lambda}{a+1}\Bigl[\rho - \delta + \int_0^1 (1-s)\,\phi\bigl(s, u'(s)\bigr)\,ds\Bigr],
\]

and consequently

\[
u'(0) = \frac{\lambda}{a+1}\Bigl[\delta + a\rho + a\int_0^1 (1-s)\,\phi\bigl(s, u'(s)\bigr)\,ds\Bigr].
\]

Now, substitute the expressions of $u'(0)$ and $u''(0)$ into (8) to get (7).

(ii) We show that there exists a positive constant $L_0$, independent of $\lambda$, such that any possible solution $u$ of (6) satisfies

\[
\|u\|_D \le L_0 .
\tag{10}
\]

The boundedness of $\phi$ implies that there exists $M_\phi > 0$ such that $|\phi(t, u'(t))| \le M_\phi$ for all $t \in I$, so that $\|u'''\|_0 \le M_\phi$. Then

\[
|u'(0)| \le \frac{1}{a+1}\Bigl[|\delta| + a|\rho| + \frac{a M_\phi}{2}\Bigr]
\tag{11}
\]

and

\[
|u''(0)| \le \frac{1}{a+1}\Bigl[|\delta| + |\rho| + \frac{M_\phi}{2}\Bigr].
\tag{12}
\]

Combining relations (9), (11), and (12) we see that

\[
\|u'\|_0 \le M_2 := \frac{1}{a+1}\bigl[2|\delta| + (a+1)|\rho| + (a+1) M_\phi\bigr].
\tag{13}
\]

Since $u(t) = \int_0^t u'(s)\,ds$ it follows that

\[
\|u\|_0 \le M_2 .
\]

Also, $\|u'''\|_0 \le M_\phi$ and (12) imply

\[
\|u''\|_0 \le M_1 := \frac{1}{a+1}\Bigl[|\delta| + |\rho| + \frac{(2a+3) M_\phi}{2}\Bigr].
\]

Let $L_0 = M_1 + 2 M_2 + M_\phi$. Then any possible solution $u$ of (6) satisfies (10). (A numerical illustration of the representation (7) and of this a priori bound is sketched after the proof.)

(iii) Define an operator $\Psi : D_0 \to D_0$ by letting $(\Psi u)(t)$ be the right-hand side of (7) with $\lambda = 1$. Let $\Omega := \{u \in D_0 ;\ \|u\|_D \le L_0\}$. Then it is easily seen that $\Psi(\Omega)$ is uniformly bounded and equicontinuous, so that, by the Ascoli–Arzelà theorem, $\Psi$ is a compact operator. Moreover, the set of all solutions $u$ of the equation $u = \lambda \Psi u$ is bounded (see (10)). It follows from Schaefer's fixed point theorem (see [10]) that $u = \Psi u$ has at least one solution. Thus, (6) has at least one solution for $\lambda = 1$, which is in fact unique by the first part of the proof; that is, $u$ is a solution of (5). This completes the proof of the theorem. □
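The following numerical sketch, which is not part of the paper, illustrates steps (i) and (ii) of the proof: it evaluates the operator $\Psi$ given by the right-hand side of (7) with $\lambda = 1$ on a grid and compares a few fixed-point iterates with the a priori bound $L_0$. The data are hypothetical: $\phi(t,v) = -\arctan v$ (continuous, bounded by $\pi/2$, and satisfying $(\mathrm{H}_\phi)$), $a = 1$, $\delta = 0.2$, $\rho = -0.1$. The plain fixed-point iteration is only a convenient heuristic here; Theorem 1 itself relies on Schaefer's theorem, not on a contraction argument.

```python
import numpy as np

a, delta, rho = 1.0, 0.2, -0.1                 # hypothetical data
phi = lambda t, v: -np.arctan(v)               # bounded, strictly decreasing in v
M_phi = np.pi / 2.0                            # sup |phi|

t = np.linspace(0.0, 1.0, 201)

def trapz(y, x):                               # composite trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def Psi(u):
    """Right-hand side of (7) with lambda = 1, for u given by its grid values."""
    phis = phi(t, np.gradient(u, t))           # phi(s, u'(s)) on the grid
    J = trapz((1.0 - t) * phis, t)             # int_0^1 (1 - s) phi(s, u'(s)) ds
    volterra = np.array([trapz(((t[i] - t[:i + 1]) ** 2 / 2.0) * phis[:i + 1], t[:i + 1])
                         for i in range(len(t))])
    return (t / (a + 1.0)) * (delta + a * rho + a * J) \
         + (t ** 2 / (2.0 * (a + 1.0))) * (rho - delta + J) - volterra

u = np.zeros_like(t)
for _ in range(20):                            # a few Picard-type iterations u <- Psi(u)
    u = Psi(u)

# A priori bound L0 = M1 + 2*M2 + M_phi from step (ii)
M2 = (2 * abs(delta) + (a + 1) * abs(rho) + (a + 1) * M_phi) / (a + 1)
M1 = (abs(delta) + abs(rho) + (2 * a + 3) * M_phi / 2.0) / (a + 1)
L0 = M1 + 2 * M2 + M_phi
print("max |u| on the grid:", np.abs(u).max(), " a priori bound L0:", L0)
print("u'(1) vs rho:", np.gradient(u, t)[-1], rho)
```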

Remark 3

We should emphasize that, unlike Theorem 6 in [9], our Theorem 1 gives the uniqueness of the solution, and this uniqueness is used in an essential way in the proof of our Theorem 2 below.

For our second main result we introduce the notion of lower and upper solutions of problem (1), (2), (3), (4).

Definition 5

(a) We say that $\alpha \in D_0$ is a lower solution of problem (1), (3), (4) if

\[
\begin{cases}
\alpha'''(t) \ge -f\bigl(t, \alpha(t), \alpha'(t), \alpha''(t)\bigr) & \text{for all } t \in I,\\[2pt]
\alpha'(0) - a\,\alpha''(0) \le \int_0^1 h_1\bigl(\alpha(s), \alpha'(s)\bigr)\,ds,\\[2pt]
\alpha'(1) \le \int_0^1 h_2\bigl(\alpha(s), \alpha'(s)\bigr)\,ds .
\end{cases}
\]

(b) We say that $\beta \in D_0$ is an upper solution of problem (1), (3), (4) if

\[
\begin{cases}
\beta'''(t) \le -f\bigl(t, \beta(t), \beta'(t), \beta''(t)\bigr) & \text{for all } t \in I,\\[2pt]
\beta'(0) - a\,\beta''(0) \ge \int_0^1 h_1\bigl(\beta(s), \beta'(s)\bigr)\,ds,\\[2pt]
\beta'(1) \ge \int_0^1 h_2\bigl(\beta(s), \beta'(s)\bigr)\,ds .
\end{cases}
\]
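As an illustration of Definition 5, the sketch below checks the lower and upper solution inequalities numerically on a grid for hypothetical data (none of it taken from the paper): $f(t,u,v,w) = -\arctan v$, $h_1(x,y) = h_2(x,y) = (x+y)/4$, $a = 1$, with the candidates $\alpha(t) = -t$ and $\beta(t) = t$. For these choices both sets of inequalities turn out to hold.

```python
import numpy as np

a  = 1.0
f  = lambda t, u, v, w: -np.arctan(v)          # hypothetical right-hand side
h1 = lambda x, y: (x + y) / 4.0                # hypothetical boundary functionals,
h2 = h1                                        # continuous and nondecreasing

t = np.linspace(0.0, 1.0, 201)

def trapz(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# alpha(t) = -t:  alpha' = -1, alpha'' = alpha''' = 0
al, al1, al2, al3 = -t, -np.ones_like(t), np.zeros_like(t), np.zeros_like(t)
# beta(t)  =  t:  beta'  =  1, beta''  = beta'''  = 0
be, be1, be2, be3 = t, np.ones_like(t), np.zeros_like(t), np.zeros_like(t)

lower_ok = (np.all(al3 >= -f(t, al, al1, al2))                # alpha''' >= -f(t, alpha, alpha', alpha'')
            and al1[0] - a * al2[0] <= trapz(h1(al, al1), t)  # boundary inequality at t = 0
            and al1[-1] <= trapz(h2(al, al1), t))             # boundary inequality at t = 1
upper_ok = (np.all(be3 <= -f(t, be, be1, be2))
            and be1[0] - a * be2[0] >= trapz(h1(be, be1), t)
            and be1[-1] >= trapz(h2(be, be1), t))
print(lower_ok, upper_ok)   # True True for these hypothetical choices
```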

To state and prove our second main result we introduce the following assumptions.

$(\mathrm{A}_f)$: $f : I \times \mathbb{R}^3 \to \mathbb{R}$ is continuous and satisfies

(1) there exists $C_0 > 0$ such that any solution $u$ of (1), with $u \in S(\alpha, \beta)$, satisfies $|u''(t)| \le C_0$ for all $t \in I$;

(2) $\bigl(f(t, u(t), v_2, w) - f(t, u(t), v_1, w)\bigr)(v_2 - v_1) < 0$ for all $v_1, v_2 \in \mathbb{R}$ such that $v_1 \neq v_2$, $u \in D \cap [\alpha, \beta]$, $w \in \mathbb{R}$ and $t \in I$;

(3) $f\bigl(t, \alpha(t), v(t), \alpha''(t)\bigr) \le f\bigl(t, u(t), v(t), w\bigr) \le f\bigl(t, \beta(t), v(t), \beta''(t)\bigr)$ for all $u \in D \cap [\alpha, \beta]$, $v \in C^2(I) \cap [\alpha', \beta']$, $w \in \mathbb{R}$, and $t \in I$.

$(\mathrm{A}_h)$: $h_1, h_2 : \mathbb{R}^2 \to \mathbb{R}$ are continuous and nondecreasing with respect to both arguments.

Remark 4

There are several sufficient conditions that imply $(\mathrm{A}_f)(1)$. See for instance [6, Lemma 1] and [3, Lemma 1].

Theorem 2

Let $\alpha, \beta \in D_0$ be, respectively, a lower and an upper solution of problem (1), (3), (4) such that $\alpha'(t) \le \beta'(t)$ on $I$, and assume that the conditions $(\mathrm{A}_f)$ and $(\mathrm{A}_h)$ are satisfied for this pair $(\alpha, \beta)$. Then problem (1), (3), (4) has at least one solution $u \in S(\alpha, \beta)$.

Proof

We modify problem (1), (2), (3), (4) as follows. We define two functions $f_{C_0}, F : I \times \mathbb{R}^3 \to \mathbb{R}$ by

\[
f_{C_0}(t, u, v, w) =
\begin{cases}
f\bigl(t, p(u), v, -C_0\bigr), & w \le -C_0,\\
f\bigl(t, p(u), v, w\bigr), & |w| \le C_0,\\
f\bigl(t, p(u), v, C_0\bigr), & w \ge C_0,
\end{cases}
\tag{14}
\]

and

\[
F(t, u, v, w) = f_{C_0}\bigl(t, u, q(v), w\bigr) =
\begin{cases}
f_{C_0}\bigl(t, u, \alpha'(t), w\bigr), & v < \alpha'(t),\\
f_{C_0}\bigl(t, u, v, w\bigr), & \alpha'(t) \le v \le \beta'(t),\\
f_{C_0}\bigl(t, u, \beta'(t), w\bigr), & v > \beta'(t),
\end{cases}
\tag{15}
\]

where $C_0$ is the constant from condition $(\mathrm{A}_f)(1)$. Consider the modified problem

\[
\begin{cases}
u'''(t) = -F\bigl(t, u(t), u'(t), u''(t)\bigr) & \text{for } t \in I,\\
u(0) = 0,\\
u'(0) - a\,u''(0) = \int_0^1 h_1\bigl(u(s), u'(s)\bigr)\,ds,\\
u'(1) = \int_0^1 h_2\bigl(u(s), u'(s)\bigr)\,ds .
\end{cases}
\tag{16}
\]

Define a sequence $(u_j)_{j \in \mathbb{N}}$ of functions in $D_0$ as follows. Let

\[
u_0(t) = \gamma t + \beta(t), \qquad t \in I,
\]

where $\gamma = \max_{t \in I} (\alpha' - \beta')(t)$, and for $j \ge 1$,

\[
\begin{cases}
u_j'''(t) = -F\bigl(t, u_{j-1}(t), u_j'(t), u_{j-1}''(t)\bigr) & \text{for } t \in I,\\
u_j(0) = 0,\\
u_j'(0) - a\,u_j''(0) = \int_0^1 h_1\bigl(u_{j-1}(s), u_{j-1}'(s)\bigr)\,ds,\\
u_j'(1) = \int_0^1 h_2\bigl(u_{j-1}(s), u_{j-1}'(s)\bigr)\,ds .
\end{cases}
\tag{17}
\]

We shall show that each of the modified problems (17) has a unique solution $u_j$, and that these solutions, together with their first and second order derivatives, are uniformly bounded on $I$. Since the third derivatives are bounded as well, the resulting sequences are also equicontinuous, so the Ascoli–Arzelà theorem allows us to extract a uniformly convergent subsequence whose limit is a solution of our original problem.
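To make the iterative scheme concrete, here is a numerical sketch (again not part of the paper) that solves a few of the problems (17) with scipy's collocation solver. It reuses the hypothetical data from the sketch after Definition 5 ($f(t,u,v,w) = -\arctan v$, $h_1 = h_2 = (x+y)/4$, $a = 1$, $\alpha(t) = -t$, $\beta(t) = t$) and an arbitrary truncation constant $C_0 = 10$; the operator $F$ is built from $f$, $p$, $q$ and the $C_0$-truncation as in (14)–(15). No claim is made that these data satisfy all of $(\mathrm{A}_f)$; the sketch only shows the mechanics of producing $u_1, u_2, \ldots$ from $u_0(t) = \gamma t + \beta(t)$.

```python
import numpy as np
from scipy.integrate import solve_bvp

a, C0 = 1.0, 10.0                                    # hypothetical data
f  = lambda t, u, v, w: -np.arctan(v)
h1 = lambda x, y: (x + y) / 4.0
h2 = h1
alpha,  beta  = (lambda s: -s), (lambda s: s)
alpha1, beta1 = (lambda s: -np.ones_like(s)), (lambda s: np.ones_like(s))

def F(t, u, v, w):
    """F(t,u,v,w) = f_{C0}(t, u, q(v), w), built from the truncations (14)-(15)."""
    pu = np.clip(u, alpha(t), beta(t))               # p: truncation between alpha and beta
    qv = np.clip(v, alpha1(t), beta1(t))             # q: truncation between alpha' and beta'
    return f(t, pu, qv, np.clip(w, -C0, C0))

def trapz(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

t = np.linspace(0.0, 1.0, 101)
gamma = float(np.max(alpha1(t) - beta1(t)))          # gamma = max (alpha' - beta') = -2 here
prev = lambda s: np.vstack([gamma * s + beta(s),     # u_0
                            gamma + beta1(s),        # u_0'
                            np.zeros_like(s)])       # u_0''

for j in range(1, 6):                                # a few steps of the scheme (17)
    u_prev = prev(t)
    delta = trapz(h1(u_prev[0], u_prev[1]), t)       # int_0^1 h1(u_{j-1}, u_{j-1}') ds
    rho   = trapz(h2(u_prev[0], u_prev[1]), t)       # int_0^1 h2(u_{j-1}, u_{j-1}') ds

    def rhs(x, y):                                   # y = (u_j, u_j', u_j'')
        up = prev(x)
        return np.vstack([y[1], y[2], -F(x, up[0], y[1], up[2])])

    def bc(ya, yb):                                  # u_j(0)=0, u_j'(0)-a u_j''(0)=delta, u_j'(1)=rho
        return np.array([ya[0], ya[1] - a * ya[2] - delta, yb[1] - rho])

    sol = solve_bvp(rhs, bc, t, prev(t))
    prev = lambda s, sol=sol: sol.sol(s)             # freeze u_j as the data for step j+1

print("after a few steps:  u_j'(0) =", prev(t)[1, 0], "  u_j'(1) =", prev(t)[1, -1])
```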

1. The sequence $(u_j)_{j \in \mathbb{N}}$ is well defined. Indeed, for any $t \in I$ and any $z \in \mathbb{R}$ we have $q(z) \in [\alpha', \beta']$ and $p(u_{j-1}(t)) \in [\alpha(t), \beta(t)]$. It follows that the function $\phi : I \times \mathbb{R} \to \mathbb{R}$, defined by

\[
\phi(t, z) = F\bigl(t, u_{j-1}(t), z, u_{j-1}''(t)\bigr) = f\bigl(t, p(u_{j-1})(t), q(z), u_{j-1}''(t)\bigr),
\]

is continuous and bounded on $I \times \mathbb{R}$. Moreover, condition $(\mathrm{A}_f)(2)$ shows that $\phi$ satisfies condition $(\mathrm{H}_\phi)$ in Theorem 1. It follows from this theorem that (17) has a unique solution $u_j$, for each $j = 1, 2, \ldots$.

2. For each $j = 0, 1, \ldots$, the functions $u_j$ satisfy $u_j \in S(\alpha, \beta)$, and the sequence $(u_j'')_{j \in \mathbb{N}}$ is uniformly bounded.

It is clear that $u_j \in D_0$ for $j = 0, 1, \ldots$. Since $\alpha' \le \beta'$ it follows that $\gamma \le 0$, so that $u_0' = \gamma + \beta' \le \beta'$. On the other hand, $\gamma = -\min_{t \in I}(\beta' - \alpha')(t) \ge -\beta'(t) + \alpha'(t)$ for every $t \in I$, so that $u_0' \ge \alpha'$. It follows that $u_0' \in [\alpha', \beta']$, and consequently $u_0 \in S(\alpha, \beta)$. Also, since $u_0'' = \beta''$ we have $\|u_0''\|_0 \le \|\beta''\|_0$. Suppose, by induction, that $u_{j-1} \in S(\alpha, \beta)$ and that there exists $K_{j-1} \ge C_0$ such that $\|u_{j-1}''\|_0 \le K_{j-1}$. Let

\[
M_f := \max\bigl\{|f(t, u, v, w)| ;\ t \in I,\ \alpha(t) \le u \le \beta(t),\ \alpha'(t) \le v \le \beta'(t),\ |w| \le C_0\bigr\},
\]
\[
\bar{h} := \max\Bigl\{\int_0^1 \bigl|h_2\bigl(u(s), u'(s)\bigr) - h_1\bigl(u(s), u'(s)\bigr)\bigr|\,ds ;\ u \in S(\alpha, \beta)\Bigr\}.
\]

Claim 1. There exists $K$, depending only on $K_{j-1}$, $M_f$, $\bar{h}$, $\|\alpha''\|_0$, and $\|\beta''\|_0$, such that $\|u_j''\|_0 \le K$ and $u_j \in S(\alpha, \beta)$.

To prove the claim we start with

\[
u_j''(t) = u_j''(0) - \int_0^t F\bigl(s, u_{j-1}(s), u_j'(s), u_{j-1}''(s)\bigr)\,ds
= u_j''(0) - \int_0^t f\bigl(s, p(u_{j-1}(s)), q(u_j'(s)), u_{j-1}''(s)\bigr)\,ds,
\]

which leads to

\[
|u_j''(t)| \le |u_j''(0)| + M_f .
\tag{18}
\]

The boundary conditions imply

\[
u_j'(1) - u_j'(0) = -a\,u_j''(0) + \int_0^1 \bigl(h_2\bigl(u_{j-1}(s), u_{j-1}'(s)\bigr) - h_1\bigl(u_{j-1}(s), u_{j-1}'(s)\bigr)\bigr)\,ds .
\]

On the other hand,

\[
u_j'(1) - u_j'(0) = \int_0^1 u_j''(s)\,ds = u_j''(0) - \int_0^1 (1-s)\,f\bigl(s, p(u_{j-1}(s)), q(u_j'(s)), u_{j-1}''(s)\bigr)\,ds .
\]

It is readily seen that

\[
(a+1)\,|u_j''(0)| \le \frac{M_f}{2} + \bar{h} .
\]

Since $\|u_{j-1}''\|_0 \le K_{j-1}$ (by the induction hypothesis) it follows that

\[
|u_j''(0)| \le \frac{1}{a+1}\Bigl(\frac{M_f}{2} + \bar{h}\Bigr) = C_1 .
\tag{19}
\]

It follows from (18) and (19) that

\[
|u_j''(t)| \le K := \max\bigl(C_1 + M_f,\ \|\alpha''\|_0,\ \|\beta''\|_0\bigr) \qquad \text{for } t \in I .
\tag{20}
\]

Claim 2. $u_j \in S(\alpha, \beta)$. Since $u_j(0) = 0$ it suffices to show that $u_j' \in [\alpha', \beta']$, i.e. $\alpha' \le u_j' \le \beta'$. We first prove that $\alpha' \le u_j'$. For this purpose, set $W(t) = u_j'(t) - \alpha'(t)$ for $t \in I$. We show that $W(t) \ge 0$ for all $t \in I$. Suppose, by contradiction, that there exists $\xi_1 \in I$ such that $W(\xi_1) < 0$. Since $W$ is continuous, there exists $\eta \in I$ such that $W(\eta) = \min\{W(t) ;\ t \in I\} < 0$. If $\eta$ is an interior point of $I$, we have $W(\eta) < 0$, $W'(\eta) = 0$ and $W''(\eta) > 0$. Thus,

\[
W''(\eta) = u_j'''(\eta) - \alpha'''(\eta) = -F\bigl(\eta, u_{j-1}(\eta), u_j'(\eta), u_{j-1}''(\eta)\bigr) - \alpha'''(\eta)
= -f\bigl(\eta, p(u_{j-1})(\eta), q(u_j')(\eta), u_{j-1}''(\eta)\bigr) - \alpha'''(\eta) .
\]

But $W(\eta) = u_j'(\eta) - \alpha'(\eta) < 0$ and $W'(\eta) = u_j''(\eta) - \alpha''(\eta) = 0$. Therefore, $q(u_j')(\eta) = \alpha'(\eta)$ and $u_j''(\eta) = \alpha''(\eta)$. Hence, since $u_{j-1} \in S(\alpha, \beta)$ gives $p(u_{j-1}) = u_{j-1}$,

\[
W''(\eta) = -f\bigl(\eta, p(u_{j-1})(\eta), q(u_j')(\eta), u_{j-1}''(\eta)\bigr) - \alpha'''(\eta)
= -f\bigl(\eta, u_{j-1}(\eta), \alpha'(\eta), u_{j-1}''(\eta)\bigr) - \alpha'''(\eta) > 0 .
\]

It follows that

\[
f\bigl(\eta, u_{j-1}(\eta), \alpha'(\eta), u_{j-1}''(\eta)\bigr) + \alpha'''(\eta) < 0 .
\]

Since

\[
\alpha'''(\eta) \ge -f\bigl(\eta, \alpha(\eta), \alpha'(\eta), \alpha''(\eta)\bigr),
\]

we infer that

\[
f\bigl(\eta, u_{j-1}(\eta), \alpha'(\eta), u_{j-1}''(\eta)\bigr) - f\bigl(\eta, \alpha(\eta), \alpha'(\eta), \alpha''(\eta)\bigr) < 0 .
\]

The above inequality is not possible by $(\mathrm{A}_f)(3)$. Now, if $\eta = 0$, then $W(0) < 0$, $W'(0) \ge 0$, and $W''(0) \ge 0$. It follows that

\[
W(0) = u_j'(0) - \alpha'(0) = a\,u_j''(0) + \int_0^1 h_1\bigl(u_{j-1}(s), u_{j-1}'(s)\bigr)\,ds - \alpha'(0) < 0 .
\]

Since

\[
W'(0) = u_j''(0) - \alpha''(0) \ge 0 ,
\]

we get

\[
a\,\alpha''(0) - \alpha'(0) + \int_0^1 h_1\bigl(u_{j-1}(s), u_{j-1}'(s)\bigr)\,ds < 0 .
\]

The monotonicity of $h_1$, together with $u_{j-1} \ge \alpha$ and $u_{j-1}' \ge \alpha'$, leads to

\[
a\,\alpha''(0) - \alpha'(0) + \int_0^1 h_1\bigl(\alpha(s), \alpha'(s)\bigr)\,ds < 0 .
\]

This is not possible by the properties of the lower solution $\alpha$. Finally, $W(1) \ge 0$. Indeed,

\[
W(1) = u_j'(1) - \alpha'(1) = \int_0^1 h_2\bigl(u_{j-1}(s), u_{j-1}'(s)\bigr)\,ds - \alpha'(1)
\ge \int_0^1 h_2\bigl(\alpha(s), \alpha'(s)\bigr)\,ds - \alpha'(1) \ge 0 ,
\]

by the definition of the lower solution $\alpha$. Similarly we show that $u_j' \le \beta'$. Thus, we have shown that $u_j' \in [\alpha', \beta']$, which implies that $u_j \in S(\alpha, \beta)$.

Therefore, the sequences $(u_j'')_{j \in \mathbb{N}}$, $(u_j')_{j \in \mathbb{N}}$, and $(u_j)_{j \in \mathbb{N}}$ are uniformly bounded on $I$; since $\|u_j'''\|_0 \le M_f$ and $\|u_j''\|_0 \le K$, they are also equicontinuous. The Ascoli–Arzelà theorem implies that we can extract subsequences $(u_{j_m}'')$, $(u_{j_m}')$ and $(u_{j_m})$ that are uniformly convergent on $I$. Using the diagonalization process, if necessary, we shall assume that $\lim_{m} u_{j_m}'' = \lim_{m} u_{j_m-1}'' = w$, $\lim_{m} u_{j_m}' = \lim_{m} u_{j_m-1}' = v$ and $\lim_{m} u_{j_m} = \lim_{m} u_{j_m-1} = u$.

To complete the proof of our second main result we prove that $u' = v$, $u'' = w$ and $u$ is the desired solution of our original problem. Since $u_{j_m}(t) = \int_0^t u_{j_m}'(s)\,ds$, it follows from the uniform convergence of the two subsequences that $u(t) = \int_0^t v(s)\,ds$, and this equality implies that $u(0) = 0$ and $u' = v$. Also, we have $u_{j_m}(t) = u_{j_m}(0) + u_{j_m}'(0)\,t + \int_0^t (t-s)\,u_{j_m}''(s)\,ds = u_{j_m}'(0)\,t + \int_0^t (t-s)\,u_{j_m}''(s)\,ds$, which implies that $u(t) = u'(0)\,t + \int_0^t (t-s)\,w(s)\,ds$, from which we readily get $u'' = w$. It is clear that

\[
u \in S(\alpha, \beta) \qquad \text{and} \qquad \|u''\|_0 \le K .
\tag{21}
\]

The differential equation $u_{j_m}'''(t) = -F\bigl(t, u_{j_m-1}(t), u_{j_m}'(t), u_{j_m-1}''(t)\bigr)$, $t \in I$, implies that

\[
u_{j_m}''(t) = u_{j_m}''(0) - \int_0^t F\bigl(s, u_{j_m-1}(s), u_{j_m}'(s), u_{j_m-1}''(s)\bigr)\,ds .
\]

The continuity of $F$ and the uniform convergence of the respective subsequences imply that $u''(t) = u''(0) - \int_0^t F\bigl(s, u(s), u'(s), u''(s)\bigr)\,ds$, so that

\[
u'''(t) = -F\bigl(t, u(t), u'(t), u''(t)\bigr) \qquad \text{for } t \in I .
\]

The definition of $F$ and (21) show that

\[
u'''(t) = -f\bigl(t, u(t), u'(t), u''(t)\bigr) \qquad \text{for } t \in I .
\tag{22}
\]

Similarly we can show that

\[
u'(0) - a\,u''(0) = \int_0^1 h_1\bigl(u(s), u'(s)\bigr)\,ds
\]

and

\[
u'(1) = \int_0^1 h_2\bigl(u(s), u'(s)\bigr)\,ds .
\]

We see that $u$ is a solution of (1), (2), (3), (4). Moreover, $u \in S(\alpha, \beta)$. This completes the proof of our main result. □