1 Introduction

Differential equations with nonlocal conditions play a crucial role in numerous fields of science and engineering. The theory of such equations with respect to different types of derivatives has been investigated by many authors. For the well-known classical derivative, the second order Cauchy problem with nonlocal condition was studied by Hernández [11]. In recent years, fractional differential equations have been increasingly used to formulate many problems in biology, chemistry, and other areas of application [13,14,15, 17]. For Caputo’s fractional derivative, a fractional Cauchy problem of order \(\beta \in (1,2)\) with nonlocal condition was treated by Shur et al. [18], who studied mainly the existence and uniqueness of the corresponding mild solution. For physical interpretations of the nonlocal condition, we refer to [8, 9, 16].

The conformable derivative was introduced by Khalil et al. [12]. It is thoroughly discussed in a nice paper by Al-Refai et al. [5], in which they study Sturm–Liouville eigenvalue problems with respect to the conformable derivative. Moreover, many interesting problems associated with the conformable derivative have been investigated. For more details, we refer to the works [1,2,3,4, 7, 10].

The notion of sequential fractional derivative was considered in the famous book [14, p. 209], in which a complete study of a special class of sequential differential equations with respect to Caputo’s derivative is given. Attracted by this type of problem, many authors have been interested in sequential differential equations with respect to various types of fractional derivatives [6, 22, 23].

Motivated by these works, we are concerned here with a sequential second order Cauchy problem with nonlocal condition in the framework of the conformable derivative. Precisely, we are interested in the following sequential evolution conformable differential equation of second order with nonlocal condition:

$$\begin{aligned} \textstyle\begin{cases} \frac{d^{\alpha }}{dt^{\alpha }}[\frac{d^{\alpha }x(t)}{dt^{\alpha }}]=Ax(t)+f(t,x(t)), \quad 0< t\leq \tau , \quad 0< \alpha < 1, \\ x(0)=x_{0}+g(x), \\ \frac{d^{\alpha }x(0)}{dt^{\alpha }}=x_{1}+h(x). \end{cases}\displaystyle \end{aligned}$$
(1.1)

The functional framework of problem (1.1) is as follows. The parameter t ranges over the interval \([0,\tau ]\), where τ is a fixed positive real number. The operator A is the infinitesimal generator of a cosine family \(\{C(t),S(t)\}_{t\in \mathbb{R}}\) acting on a Banach space \((X,\Vert \cdot \Vert )\). The elements \(x_{0}\) and \(x_{1}\) are two fixed vectors in X. The function f is defined on \([0,\tau ]\times X\) with values in X. We denote by \(\mathcal{C}=\mathcal{C}([0,\tau ],X)\) the Banach space of continuous functions from \([0,\tau ]\) into X equipped with the norm \(\vert x \vert =\sup \{\Vert x(t)\Vert , t\in [0,\tau ]\}\). Finally, g and h are two functions defined on \(\mathcal{C}\) with values in X.

Since the sequential problem (1.1) is well adapted to the fractional Laplace transform [1], we are interested in the mild solutions of the above nonlocal Cauchy problem. Our method shares similarities with the standard techniques used in the classical cases [11, 21]. Precisely, we use the classical cosine family to derive a formula of Duhamel type. This formula leads us to treat our problem by means of fixed point theory. Concretely, under the compactness of the cosine family associated with the operator A and a boundedness condition on the function \(f(t,x)\), we prove that problem (1.1) admits at least one solution. Furthermore, by adding some contraction conditions, we prove the uniqueness of the mild solution and its continuous dependence with respect to the initial data. Moreover, under some regularity conditions on the function \(f(t,x)\) combined with a suitable condition on the domain \(D(A)\), we obtain the differentiability of the mild solution with respect to the conformable derivative.

This paper is organized as follows. In Sect. 2, we review some tools related to the conformable derivative as well as some needed results. Section 3 is devoted to the statements and proofs of the main results. In Sect. 4, as an application, we study a concrete sequential conformable second order partial differential equation with nonlocal condition. In Sect. 5, we discuss the problem of defining an α-cosine family.

2 Preliminaries

We start this section by recalling some concepts of conformable calculus [12].

Definition 2.1

The conformable derivative of x of order α at \(t>0\) is defined as

$$ \frac{d^{\alpha }x(t)}{dt^{\alpha }}= {\lim_{\varepsilon \longrightarrow 0}\frac{x(t+\varepsilon t^{1- \alpha })-x(t)}{\varepsilon }}. $$

When the limit exists, we say that x is \((\alpha )\)-differentiable at t.

If x is \((\alpha )\)-differentiable and \({{\lim\limits_{t\longrightarrow 0^{+}}}\frac{d^{\alpha }x(t)}{dt^{\alpha }}}\) exists, then we define

$$ \frac{d^{\alpha }x(0)}{d t^{\alpha }}= {\lim_{t\longrightarrow 0^{+}}\frac{d^{\alpha }x(t)}{dt^{\alpha }}}. $$

The \((\alpha )\)-fractional integral of a function x is given by

$$ I^{\alpha }(x) (t)= \int _{0}^{t}s^{\alpha -1}x(s)\,ds. $$
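As a quick numerical illustration (outside the formal development), the defining limit of the conformable derivative can be approximated by a finite difference and compared with the known closed form \(\frac{d^{\alpha }(t^{p})}{dt^{\alpha }}=pt^{p-\alpha }\); the helper name `conf_deriv` and the chosen parameters are ours:

```python
def conf_deriv(x, t, alpha, eps=1e-7):
    # finite-difference approximation of the defining limit
    # [x(t + eps * t^(1-alpha)) - x(t)] / eps
    return (x(t + eps * t ** (1 - alpha)) - x(t)) / eps

# for x(t) = t^p the conformable derivative is p * t^(p - alpha)
alpha, p, t = 0.5, 2.0, 1.7
approx = conf_deriv(lambda s: s ** p, t, alpha)
exact = p * t ** (p - alpha)
assert abs(approx - exact) < 1e-4
```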

Theorem 2.1

If x is a continuous function in the domain of \(I^{\alpha }\), then we have

$$ \frac{d^{\alpha }(I^{\alpha }(x)(t))}{dt^{\alpha }}=x(t). $$
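Theorem 2.1 can be checked numerically in a simple case. The sketch below (all names and parameters are ours) uses the substitution \(u=s^{\alpha }/\alpha \), which removes the endpoint singularity of the integrand, together with the closed form \(I^{\alpha }(s^{p})(t)=t^{\alpha +p}/(\alpha +p)\):

```python
def I_alpha(x, t, alpha, n=20000):
    # I^alpha(x)(t) = ∫_0^t s^(alpha-1) x(s) ds; with u = s^alpha/alpha this
    # becomes ∫_0^{t^alpha/alpha} x((alpha*u)^(1/alpha)) du (no singularity)
    U = t ** alpha / alpha
    h = U / n
    total = 0.5 * (x(0.0) + x(t))          # endpoints of the trapezoid rule
    for k in range(1, n):
        total += x((alpha * k * h) ** (1 / alpha))
    return total * h

def conf_deriv(x, t, alpha, eps=1e-7):
    return (x(t + eps * t ** (1 - alpha)) - x(t)) / eps

alpha, p, t = 0.5, 1.5, 1.2
# closed form: I^alpha(s^p)(t) = t^(alpha+p)/(alpha+p)
assert abs(I_alpha(lambda s: s ** p, t, alpha) - t ** (alpha + p) / (alpha + p)) < 1e-6
# Theorem 2.1: the conformable derivative of I^alpha(x) recovers x
F = lambda s: s ** (alpha + p) / (alpha + p)
assert abs(conf_deriv(F, t, alpha) - t ** p) < 1e-4
```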

The following definition gives us the adapted Laplace transform to the conformable derivative [1].

Definition 2.2

The fractional Laplace transform of order α starting from 0 of x is defined by

$$ \mathcal{L}_{\alpha } \bigl(x(t) \bigr) (\lambda ):= \int _{0}^{+\infty }t^{\alpha -1}e ^{-\lambda \frac{t^{\alpha }}{\alpha }}x(t) \,dt. $$
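Under the same substitution \(u=t^{\alpha }/\alpha \), the fractional Laplace transform reduces to an ordinary Laplace transform, which gives, for instance, \(\mathcal{L}_{\alpha }(1)(\lambda )=1/\lambda \) and \(\mathcal{L}_{\alpha }(t^{\alpha }/\alpha )(\lambda )=1/\lambda ^{2}\). A numerical sketch (our naming, truncated quadrature):

```python
import math

def frac_laplace(x, lam, alpha, U=40.0, n=100000):
    # with u = t^alpha/alpha the transform is the ordinary Laplace transform
    # ∫_0^∞ e^(-lam*u) x((alpha*u)^(1/alpha)) du, truncated at U
    h = U / n
    total = 0.5 * (x(0.0) + math.exp(-lam * U) * x((alpha * U) ** (1 / alpha)))
    for k in range(1, n):
        u = k * h
        total += math.exp(-lam * u) * x((alpha * u) ** (1 / alpha))
    return total * h

alpha, lam = 0.5, 2.0
assert abs(frac_laplace(lambda t: 1.0, lam, alpha) - 1 / lam) < 1e-4
assert abs(frac_laplace(lambda t: t ** alpha / alpha, lam, alpha) - 1 / lam ** 2) < 1e-4
```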

The action of the fractional Laplace transform on the conformable derivative is given by the following proposition.

Proposition 2.1

If \(x(t)\) is differentiable, then we have

$$\begin{aligned}& I^{\alpha } \biggl(\frac{d^{\alpha } x}{dt^{\alpha }} \biggr) (t)=x(t)-x(0), \\& \mathcal{L}_{\alpha } \biggl(\frac{d^{\alpha }x(t)}{dt^{\alpha }} \biggr) (\lambda )= \lambda \mathcal{L}_{\alpha } \bigl(x(t) \bigr) (\lambda )-x(0). \end{aligned}$$

Now, we recall some results concerning the cosine family theory [21].

Definition 2.3

A one-parameter family \((C(t))_{t\in \mathbb{R}}\) of bounded linear operators on X is called a strongly continuous cosine family if and only if:

  1. \(C(0)=I\);

  2. \(C(s+t)+C(s-t)=2C(s)C(t)\) for all \(t,s\in \mathbb{R}\);

  3. \(t\longmapsto C(t)x\) is continuous for each fixed \(x\in X\).

We define also the sine family by

$$ S(t)x:= \int _{0}^{t}C(s)x\,ds. $$

The infinitesimal generator A of a strongly continuous cosine family \(((C(t))_{t\in \mathbb{R}},(S(t))_{t\in \mathbb{R}})\) on X is defined by

$$\begin{aligned}& D(A)= \bigl\{ x\in X, t\longmapsto C(t)x \text{ is a twice continuously differentiable function} \bigr\} , \\& Ax=\frac{d^{2}C(0)x}{dt^{2}}. \end{aligned}$$
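In the scalar case \(X=\mathbb{R}\) with \(A=-a^{2}\), one has \(C(t)=\cos (at)\) and \(S(t)=\sin (at)/a\). A quick numerical check of the defining properties (the naming and the value of a are ours):

```python
import math, random

a = 1.7                                   # scalar case X = R with A = -a^2
C = lambda t: math.cos(a * t)             # cosine family
S = lambda t: math.sin(a * t) / a         # sine family S(t) = ∫_0^t C(s) ds

random.seed(0)
for _ in range(100):
    t, s = random.uniform(-5, 5), random.uniform(-5, 5)
    # d'Alembert identity C(s+t) + C(s-t) = 2 C(s) C(t)
    assert abs(C(s + t) + C(s - t) - 2 * C(s) * C(t)) < 1e-12
assert C(0.0) == 1.0

# generator: A x = (d^2/dt^2) C(t) x at t = 0, here -a^2
h = 1e-4
assert abs((C(h) - 2 * C(0.0) + C(-h)) / h ** 2 - (-a ** 2)) < 1e-4
```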

We end this section with the following results.

Proposition 2.2

The following assertions are true.

  1. There exist constants \(K\geq 1\) and \(\omega \geq 0\) such that

     $$ \bigl\vert S(t)-S(s) \bigr\vert \leq K \biggl\vert \int _{s}^{t}\exp \bigl(\omega \vert r \vert \bigr)\,dr \biggr\vert \quad \textit{for all } t,s\in \mathbb{R}. $$
  2. If \(x\in X\) and \(t,s\in \mathbb{R}\), then \(\int _{s}^{t}S(r)x\,dr\in D(A)\) and

     $$ A \int _{s}^{t}S(r)x\,dr=C(t)x-C(s)x. $$
  3. If \(t\longmapsto C(t)x\) is differentiable, then \(S(t)x\in D(A)\) and \(\frac{d C(t)}{dt}x=AS(t)x\).

  4. For λ such that \(\operatorname{Re}(\lambda )>\omega \), we have

     $$\begin{aligned}& \lambda ^{2}\in \rho (A), \quad \bigl(\rho (A) \textit{ is the resolvent set of } A \bigr), \\& \lambda \bigl(\lambda ^{2}I-A \bigr)^{-1}x= \int _{0}^{+\infty }e^{-\lambda t}C(t)x\,dt, \quad x\in X, \\& \bigl(\lambda ^{2}I-A \bigr)^{-1}x= \int _{0}^{+\infty }e^{-\lambda t}S(t)x\,dt, \quad x\in X. \end{aligned}$$

3 Main results

Before presenting our main results, we introduce the following assumptions:

\((H_{1})\) :

The function \(f(t,\cdot ): X\longrightarrow X\) is continuous, and for all \(r>0\), there exists a function \(\mu _{r}\in L ^{\infty }([0,\tau ],\mathbb{R}^{+})\) such that \({\displaystyle{\sup_{\Vert x\Vert \leq r}}}\Vert f(t,x)\Vert \leq \mu _{r}(t)\) for all \(t\in [0,\tau ]\);

\((H_{2})\) :

The function \(f(\cdot ,x):[0,\tau ] \longrightarrow X\) is continuous for all \(x\in X\);

\((H_{3})\) :

There exists a constant \(l_{1}>0\) such that \(\Vert g(y)-g(x)\Vert \leq l_{1}\vert y-x\vert \) for all \(x,y\in \mathcal{C}\);

\((H_{4})\) :

There exists a constant \(l_{2}>0\) such that \(\Vert h(y)-h(x)\Vert \leq l_{2}\vert y-x\vert \) for all \(x,y\in \mathcal{C}\).

3.1 Existence and uniqueness of the mild solution

Applying the fractional Laplace transform to equation (1.1), we get

$$\begin{aligned} \mathcal{L}_{\alpha } \bigl(x(t) \bigr) (\lambda )&=\lambda \bigl(\lambda ^{2}-A \bigr)^{-1} \bigl[x _{0}+g(x) \bigr]+ \bigl( \lambda ^{2}-A \bigr)^{-1} \bigl[x_{1}+h(x) \bigr] \\ &\quad{} + \bigl(\lambda ^{2}-A \bigr)^{-1} \mathcal{L}_{\alpha } \bigl(f \bigl(t,x(t) \bigr) \bigr) (\lambda ). \end{aligned}$$

According to the inverse fractional Laplace transform, we find Duhamel’s formula

$$ x(t)=C \biggl(\frac{t^{\alpha }}{\alpha } \biggr) \bigl[x_{0}+g(x) \bigr]+S \biggl( \frac{t^{\alpha }}{ \alpha } \biggr) \bigl[x_{1}+h(x) \bigr]+ \int _{0}^{t}s^{\alpha -1}S \biggl( \frac{t^{\alpha }-s ^{\alpha }}{\alpha } \biggr)f \bigl(s,x(s) \bigr)\,ds. $$

Taking \(\alpha =1\), we recover the standard formula [11, 21]. Thus, we can introduce the following definition.

Definition 3.1

We say that \(x\in \mathcal{C}\) is a mild solution of equation (1.1) if the following assertion is true:

$$ \begin{aligned} &x(t)=C \biggl(\frac{t^{\alpha }}{\alpha } \biggr) \bigl[x_{0}+g(x) \bigr]+S \biggl( \frac{t^{\alpha }}{ \alpha } \biggr) \bigl[x_{1}+h(x) \bigr]+ \int _{0}^{t}s^{\alpha -1}S \biggl( \frac{t^{\alpha }-s ^{\alpha }}{\alpha } \biggr)f \bigl(s,x(s) \bigr)\,ds, \\ &\quad t\in [0,\tau ]. \end{aligned} $$
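As a sanity check of this formula in the scalar case \(X=\mathbb{R}\) with \(A=-a^{2}\), \(f\equiv 0\), and \(g=h\equiv 0\), the mild solution reduces to \(x(t)=\cos (a\frac{t^{\alpha }}{\alpha })x_{0}+\frac{1}{a}\sin (a\frac{t^{\alpha }}{\alpha })x_{1}\), and one can verify numerically that it satisfies the sequential equation (the helper names and parameter values are ours):

```python
import math

a, alpha, x0, x1 = 2.0, 0.5, 1.0, -0.5
w = lambda t: t ** alpha / alpha                   # "conformable time" t^alpha/alpha
x  = lambda t: math.cos(a * w(t)) * x0 + math.sin(a * w(t)) / a * x1
dx = lambda t: -a * math.sin(a * w(t)) * x0 + math.cos(a * w(t)) * x1  # = d^alpha x

def conf_deriv(f, t, eps=1e-7):
    return (f(t + eps * t ** (1 - alpha)) - f(t)) / eps

t = 0.9
# first conformable derivative matches dx, second gives A x = -a^2 x
assert abs(conf_deriv(x, t) - dx(t)) < 1e-5
assert abs(conf_deriv(dx, t) - (-a ** 2) * x(t)) < 1e-5
# initial condition x(0) = x0 (here g = 0)
assert abs(x(1e-12) - x0) < 1e-5
```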

Theorem 3.1

If \((S(t))_{t>0}\) is compact and \((H_{1})\)\((H_{4})\) are satisfied, then the Cauchy problem (1.1) has at least one mild solution provided that

$$ l_{1}\sup_{t\in [0,\tau ]} \biggl\vert C \biggl( \frac{t^{\alpha }}{\alpha } \biggr) \biggr\vert +l_{2} \sup _{t\in [0,\tau ]} \biggl\vert S \biggl(\frac{t^{\alpha }}{\alpha } \biggr) \biggr\vert < 1. $$

Proof

Choose r such that

$$\begin{aligned} r&\geq \biggl({\sup_{t\in [0,\tau ]}} \biggl\vert C\biggl(\frac{t^{\alpha }}{\alpha } \biggr) \biggr\vert \bigl[ \Vert x _{0} \Vert + \bigl\Vert g(0) \bigr\Vert \bigr] \\ &\quad{} + {\sup_{t\in [0,\tau ]}} \biggl\vert S\biggl( \frac{t^{\alpha }}{\alpha }\biggr) \biggr\vert \biggl[\frac{ \tau ^{\alpha }}{\alpha } \vert \mu _{r} \vert _{L^{\infty }([0,\tau ],\mathbb{R^{+}})}+ \Vert x_{1} \Vert + \bigl\Vert h(0) \bigr\Vert \biggr]\biggr) \\ &\quad{}\Big/ \biggl(1-l_{1} {\sup _{t\in [0,\tau ]}} \biggl\vert C\biggl(\frac{t^{\alpha }}{\alpha }\biggr) \biggr\vert -l_{2} {\sup_{t\in [0,\tau ]}} \biggl\vert S\biggl( \frac{t^{\alpha }}{\alpha }\biggr) \biggr\vert \biggr), \end{aligned}$$

and set \(B_{r}=\{x\in \mathcal{C}, \vert x \vert \leq r\}\). Next, for \(x\in B_{r}\), define the operators \(\varGamma _{1}\) and \(\varGamma _{2}\) by

$$\begin{aligned}& \varGamma _{1}(x) (t)=C \biggl(\frac{t^{\alpha }}{\alpha } \biggr) \bigl[x_{0}+g(x) \bigr]+S \biggl(\frac{t ^{\alpha }}{\alpha } \biggr) \bigl[x_{1}+h(x) \bigr], \quad t\in [0,\tau ], \\& \varGamma _{2}(x) (t)= \int _{0}^{t}s^{\alpha -1}S \biggl( \frac{t^{\alpha }-s^{ \alpha }}{\alpha } \biggr)f \bigl(s,x(s) \bigr)\,ds, \quad t\in [0,\tau ]. \end{aligned}$$

By using assumptions \((H_{1})\)\((H_{4})\), we show that \(\varGamma _{1}(x)+ \varGamma _{2}(y)\in B_{r}\) whenever \(x, y\in B_{r}\). Moreover, the operator \(\varGamma _{1}\) is a contraction on \(B_{r}\).

Now, we will show that \(\varGamma _{2}\) is continuous and compact.

Continuity of \(\varGamma _{2}\). Let \((x_{n})\subset B_{r}\) be such that \(x_{n}\longrightarrow x\) in \(B_{r}\). Then, by assumption \((H_{1})\), we obtain \(\Vert s^{\alpha -1}[f(s,x_{n}(s))-f(s,x(s))]\Vert \leq 2\mu _{r}(s)s^{\alpha -1}\) and \(f(s,x_{n}(s))\longrightarrow f(s,x(s))\) as \(n\longrightarrow +\infty \).

Also, we have

$$ \varGamma _{2}(x_{n}) (t)-\varGamma _{2}(x) (t)= \int _{0}^{t}s^{\alpha -1}S \biggl( \frac{t ^{\alpha }-s^{\alpha }}{\alpha } \biggr) \bigl[f \bigl(s,x_{n}(s) \bigr)-f \bigl(s,x(s) \bigr) \bigr]\,ds, \quad t\in [0,\tau ]. $$

Accordingly, we obtain

$$ \bigl\vert \varGamma _{2}(x_{n})-\varGamma _{2}(x) \bigr\vert \leq \sup_{t\in [0,\tau ]} \biggl\vert S \biggl(\frac{t ^{\alpha }}{\alpha } \biggr) \biggr\vert \int _{0}^{\tau }s^{\alpha -1} \bigl\Vert f \bigl(s,x _{n}(s) \bigr)-f \bigl(s,x(s) \bigr) \bigr\Vert \,ds. $$

By using the Lebesgue dominated convergence theorem, we get

$$ \lim_{n\longrightarrow +\infty } \bigl\vert \varGamma _{2}(x_{n})- \varGamma _{2}(x) \bigr\vert =0. $$

Compactness of \(\varGamma _{2}\). Claim 1: We prove that \(\{\varGamma _{2}(x)(t), x\in B_{r}\}\) is relatively compact in X.

For some fixed \(t\in{}]0,\tau [\) let \(\varepsilon \in{}]0,t[\), \(x\in B_{r}\) and define the operator \(\varGamma _{2}^{\varepsilon }\) by

$$ \varGamma _{2}^{\varepsilon }(x) (t)= \int _{0}^{(t^{\alpha }- \varepsilon ^{\alpha })^{\frac{1}{\alpha }}}s^{\alpha -1}S \biggl( \frac{t^{ \alpha }-s^{\alpha }}{\alpha } \biggr)f \bigl(s,x(s) \bigr)\,ds. $$

The relative compactness of \(\{\varGamma _{2}^{\varepsilon }(x)(t), x\in B_{r}\}\) in X is guaranteed by the compactness of \((S(t))_{t>0}\). Using assumption \((H_{1})\), we have

$$ \bigl\Vert \varGamma _{2}^{\varepsilon }(x) (t)-\varGamma _{2}(x) (t) \bigr\Vert \leq \vert \mu _{r} \vert _{L^{\infty }([0,\tau ],\mathbb{R}^{+})} \sup_{t\in [0,\tau ]} \biggl\vert S \biggl( \frac{t^{\alpha }}{\alpha } \biggr) \biggr\vert \frac{ \varepsilon ^{\alpha }}{\alpha }. $$

Therefore, we conclude that \(\{\varGamma _{2}(x)(t), x\in B_{r}\}\) is relatively compact in X. It is clear that the set \(\{\varGamma _{2}(x)(0), x\in B_{r}\}\) is compact. Finally, \(\{\varGamma _{2}(x)(t), x\in B_{r}\}\) is relatively compact in X for all \(t\in [0,\tau ]\).

Claim 2: We show that \(\varGamma _{2}(B_{r})\) is equicontinuous.

Let \(t_{1},t_{2} \in{}]0,\tau ]\) be such that \(t_{1}< t_{2}\). We have

$$\begin{aligned} \varGamma _{2}(x) (t_{2})-\varGamma _{2}(x) (t_{1}) &= \int _{0}^{t_{1}}s^{\alpha -1} \biggl[S \biggl( \frac{t_{2}^{\alpha }-s^{\alpha }}{\alpha } \biggr)-S \biggl(\frac{t_{1}^{ \alpha }-s^{\alpha }}{\alpha } \biggr) \biggr] f \bigl(s,x(s) \bigr)\,ds \\ &\quad{} + \int _{t_{1}}^{t_{2}}s^{\alpha -1}S \biggl( \frac{t_{2}^{\alpha }-s^{\alpha }}{\alpha } \biggr)f \bigl(s,x(s) \bigr)\,ds. \end{aligned}$$

Therefore, we obtain

$$\begin{aligned} \bigl\Vert \varGamma _{2}(x) (t_{2})-\varGamma _{2}(x) (t_{1}) \bigr\Vert &\leq \vert \mu _{r} \vert _{L^{\infty }([0,\tau ],\mathbb{R^{+}})} \biggl[\frac{K}{ \omega ^{2}} \biggl(\exp \biggl(\frac{\omega t_{2}^{\alpha }}{\alpha } \biggr)-\exp \biggl(\frac{ \omega t_{1}^{\alpha }}{\alpha } \biggr) \biggr) \\ &\quad{} + { \sup_{t\in [0,\tau ]}} \biggl\vert S \biggl(\frac{t^{\alpha }}{\alpha } \biggr) \biggr\vert \biggl(\frac{t_{2} ^{\alpha }-t_{1}^{\alpha }}{\alpha } \biggr) \biggr]. \end{aligned}$$

When \(\omega =0\), we obtain

$$ \bigl\Vert \varGamma _{2}(x) (t_{2})-\varGamma _{2}(x) (t_{1}) \bigr\Vert \leq \vert \mu _{r} \vert _{L^{\infty }([0,\tau ],\mathbb{R^{+}})} \biggl(\frac{Kt_{1} ^{\alpha }}{\alpha }+ {\sup _{t\in [0,\tau ]}} \biggl\vert S \biggl(\frac{t^{\alpha }}{\alpha } \biggr) \biggr\vert \biggr) \biggl[\frac{t_{2} ^{\alpha }-t_{1}^{\alpha }}{\alpha } \biggr]. $$

We conclude that the functions \(\varGamma _{2}(x)\) (\(x\in B_{r}\)) are equicontinuous at each \(t\in [0,\tau ]\). By the Arzelà–Ascoli theorem, \(\varGamma _{2}\) is compact. Finally, the Krasnoselskii fixed point theorem completes the proof. □

To obtain the uniqueness of the mild solution, we will need the following assumption:

\((H_{5})\) :

There exists a constant \(l_{3}>0\) such that \(\Vert f(t,y)-f(t,x)\Vert \leq l_{3}\Vert y-x\Vert \) for all \(x,y\in X\) and \(t\in [0,\tau ]\).

Theorem 3.2

Assume that \((H_{2})\)\((H_{5})\) hold. Then the Cauchy problem (1.1) has a unique mild solution provided that

$$ l_{1}\sup_{t\in [0,\tau ]} \biggl\vert C \biggl( \frac{t^{\alpha }}{\alpha } \biggr) \biggr\vert + \biggl(l_{2}+l _{3} \frac{\tau ^{\alpha }}{\alpha } \biggr)\sup_{t\in [0,\tau ]} \biggl\vert S \biggl( \frac{t ^{\alpha }}{\alpha } \biggr) \biggr\vert < 1. $$

Proof

Let \(t\in [0,\tau ]\) and define the operator \(\varGamma :\mathcal{C} \longrightarrow \mathcal{C}\) by

$$ \varGamma (x) (t)=C \biggl(\frac{t^{\alpha }}{\alpha } \biggr) \bigl[x_{0}+g(x) \bigr]+S \biggl(\frac{t ^{\alpha }}{\alpha } \biggr) \bigl[x_{1}+h(x) \bigr]+ \int _{0}^{t}s^{\alpha -1}S \biggl( \frac{t ^{\alpha }-s^{\alpha }}{\alpha } \biggr)f \bigl(s,x(s) \bigr)\,ds. $$

Next, let \(x,y\in \mathcal{C}\); then we have

$$\begin{aligned} \varGamma (y) (t)-\varGamma (x) (t) &=C \biggl(\frac{t^{\alpha }}{\alpha } \biggr) \bigl[g(y)-g(x) \bigr]+S \biggl(\frac{t ^{\alpha }}{\alpha } \biggr) \bigl[h(y)-h(x) \bigr] \\ &\quad{} + \int _{0}^{t}s^{\alpha -1}S \biggl( \frac{t^{\alpha }-s^{\alpha }}{\alpha } \biggr) \bigl[f \bigl(s,y(s) \bigr)-f \bigl(s,x(s) \bigr) \bigr]\,ds. \end{aligned}$$

Accordingly, we obtain

$$ \bigl\Vert \varGamma (y) (t)-\varGamma (x) (t) \bigr\Vert \leq \biggl[l_{1} \sup_{t\in [0,\tau ]} \biggl\vert C \biggl( \frac{t^{\alpha }}{\alpha } \biggr) \biggr\vert + \biggl(l_{2}+l_{3} \frac{ \tau ^{\alpha }}{\alpha } \biggr)\sup_{t\in [0,\tau ]} \biggl\vert S \biggl( \frac{t^{\alpha }}{ \alpha } \biggr) \biggr\vert \biggr] \vert y-x \vert . $$

Then we get

$$ \bigl\vert \varGamma (y)-\varGamma (x) \bigr\vert \leq \biggl[l_{1} \sup_{t\in [0,\tau ]} \biggl\vert C \biggl(\frac{t ^{\alpha }}{\alpha } \biggr) \biggr\vert + \biggl(l_{2}+l_{3}\frac{\tau ^{\alpha }}{\alpha } \biggr) \sup_{t\in [0,\tau ]} \biggl\vert S \biggl(\frac{t^{\alpha }}{\alpha } \biggr) \biggr\vert \biggr] \vert y-x \vert . $$

Therefore, Γ has a unique fixed point in \(\mathcal{C}\). □
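The contraction argument also suggests a numerical scheme. In a scalar toy case (our choices of \(A=-1\), f, and discretization, not from the paper), the Picard iteration of the mild-solution map converges geometrically, since \(l_{3}\frac{\tau ^{\alpha }}{\alpha }\sup \vert S(\frac{t^{\alpha }}{\alpha })\vert =0.25\cdot 2\cdot 1=0.5<1\):

```python
import math

# scalar sketch: X = R, A = -1, so C(t) = cos t and S(t) = sin t; g = h = 0
alpha, tau, x0, x1 = 0.5, 1.0, 1.0, 0.0
f = lambda t, x: 0.25 * math.sin(x)            # Lipschitz constant l3 = 0.25

N = 200
W = tau ** alpha / alpha                        # horizon in conformable time u = t^alpha/alpha
us = [W * k / N for k in range(N + 1)]
x = [x0] * (N + 1)                              # initial guess

diff = 1.0
for _ in range(60):                             # Picard iteration of the mild-solution map
    new = []
    for i, u in enumerate(us):
        integ = 0.0                             # ∫_0^u sin(u - v) f(t(v), x(v)) dv (midpoint rule)
        for j in range(i):
            vm = 0.5 * (us[j] + us[j + 1])
            tm = (alpha * vm) ** (1 / alpha)    # map conformable time back to t
            integ += math.sin(u - vm) * f(tm, 0.5 * (x[j] + x[j + 1])) * (us[j + 1] - us[j])
        new.append(math.cos(u) * x0 + math.sin(u) * x1 + integ)
    diff = max(abs(p - q) for p, q in zip(new, x))
    x = new
    if diff < 1e-13:
        break

assert diff < 1e-13                             # the iteration converged (contraction)
assert abs(x[0] - x0) < 1e-12                   # x(0) = x0 + g(x) with g = 0
```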

3.2 Continuous dependence of the mild solution

Now, we will give some results concerning the continuous dependence of the mild solution.

Theorem 3.3

Assume that the conditions of Theorem 3.2 are satisfied. Let \(x_{0}, y_{0}, x_{1}, y_{1} \in X\) and denote by x, y the solutions associated with \((x_{0},x_{1})\) and \((y_{0},y_{1})\), respectively. Then we have

$$ \vert y-x \vert \leq \frac{\alpha }{\alpha -l_{3}\tau ^{\alpha }-\alpha l _{1}-\alpha l_{2}} \biggl[\sup_{t\in [0,\tau ]} \biggl\vert C \biggl(\frac{t^{\alpha }}{\alpha } \biggr) \biggr\vert \Vert y_{0}-x_{0} \Vert +\sup_{t\in [0,\tau ]} \biggl\vert S \biggl(\frac{t ^{\alpha }}{\alpha } \biggr) \biggr\vert \Vert y_{1}-x_{1} \Vert \biggr]. $$

Proof

We have

$$\begin{aligned} y(t)-x(t) &=C \biggl(\frac{t^{\alpha }}{\alpha } \biggr) \bigl[y_{0}-x_{0}+g(y)-g(x) \bigr]+S \biggl(\frac{t ^{\alpha }}{\alpha } \biggr) \bigl[y_{1}-x_{1}+h(y)-h(x) \bigr] \\ &\quad{} + \int _{0}^{t}s^{\alpha -1}S \biggl( \frac{t^{\alpha }-s^{\alpha }}{\alpha } \biggr) \bigl[f \bigl(s,y(s) \bigr)-f \bigl(s,x(s) \bigr) \bigr]\,ds. \end{aligned}$$

Hence we obtain

$$\begin{aligned} \bigl\Vert y(t)-x(t) \bigr\Vert &\leq \sup_{t\in [0,\tau ]} \biggl\vert C \biggl(\frac{t ^{\alpha }}{\alpha } \biggr) \biggr\vert \bigl[ \Vert y_{0}-x_{0} \Vert +l_{1} \vert y-x \vert \bigr] \\ &\quad{} +\sup_{t\in [0,\tau ]} \biggl\vert S \biggl(\frac{t^{\alpha }}{\alpha } \biggr) \biggr\vert \biggl[ \Vert y _{1}-x_{1} \Vert + \biggl(l_{2}+\frac{l_{3}\tau ^{\alpha }}{\alpha } \biggr) \vert y-x \vert \biggr]. \end{aligned}$$

Taking the supremum over \(t\in [0,\tau ]\), we obtain

$$\begin{aligned} \vert y-x \vert &\leq \sup_{t\in [0,\tau ]} \biggl\vert C \biggl( \frac{t^{\alpha }}{\alpha } \biggr) \biggr\vert \bigl[ \Vert y_{0}-x_{0} \Vert +l_{1} \vert y-x \vert \bigr] \\ &\quad{} +\sup_{t\in [0,\tau ]} \biggl\vert S \biggl(\frac{t^{\alpha }}{\alpha } \biggr) \biggr\vert \biggl[ \Vert y _{1}-x_{1} \Vert + \biggl(l_{2}+\frac{l_{3}\tau ^{\alpha }}{\alpha } \biggr) \vert y-x \vert \biggr]. \end{aligned}$$

Finally, we get the following estimate:

$$ \vert y-x \vert \leq \frac{\alpha }{\alpha -l_{3}\tau ^{\alpha }-\alpha l _{1}-\alpha l_{2}} \biggl[\sup_{t\in [0,\tau ]} \biggl\vert C \biggl(\frac{t^{\alpha }}{\alpha } \biggr) \biggr\vert \Vert y_{0}-x_{0} \Vert +\sup_{t\in [0,\tau ]} \biggl\vert S \biggl(\frac{t ^{\alpha }}{\alpha } \biggr) \biggr\vert \Vert y_{1}-x_{1} \Vert \biggr]. $$

 □

Theorem 3.4

Assume that the conditions of Theorem 3.2 are satisfied. Let \(x_{0}, y_{0}, x_{1}, y_{1} \in X\) and denote by x, y the solutions associated with \((x_{0},x_{1})\) and \((y_{0},y_{1})\), respectively. Then we have

$$\begin{aligned} \vert y-x \vert &\leq \biggl[\frac{ {\displaystyle{\sup_{t\in [0,\tau ]}}} \vert C(\frac{t^{\alpha }}{\alpha }) \vert \Vert y _{0}-x_{0} \Vert + {\displaystyle{\sup_{t\in [0,\tau ]}}} \vert S(\frac{t^{\alpha }}{\alpha }) \vert \Vert y _{1}-x_{1} \Vert }{1-[l_{1} {\displaystyle{\sup_{t\in [0,\tau ]}}} \vert C(\frac{t^{\alpha }}{\alpha }) \vert +l_{2} {\displaystyle{\sup_{t\in [0,\tau ]}}} \vert S(\frac{t^{\alpha }}{\alpha }) \vert ]\exp (\frac{l _{3}\tau ^{\alpha }}{\alpha } {\displaystyle{\sup_{t\in [0,\tau ]}}} \vert S(\frac{t^{\alpha }}{\alpha }) \vert )} \biggr] \\ &\quad{} \times \exp \biggl(\frac{l_{3}\tau ^{\alpha }}{\alpha } {\displaystyle{\sup_{t\in [0,\tau ]}}} \biggl\vert S \biggl(\frac{t^{\alpha }}{\alpha } \biggr) \biggr\vert \biggr) \end{aligned}$$

provided that

$$ \biggl[l_{1} {\sup_{t\in [0,\tau ]}} \biggl\vert C \biggl( \frac{t^{\alpha }}{\alpha } \biggr) \biggr\vert +l_{2} {\sup _{t\in [0,\tau ]}} \biggl\vert S \biggl(\frac{t^{\alpha }}{\alpha } \biggr) \biggr\vert \biggr] \exp \biggl(\frac{l _{3}\tau ^{\alpha }}{\alpha } {\sup_{t\in [0,\tau ]}} \biggl\vert S \biggl(\frac{t^{\alpha }}{\alpha } \biggr) \biggr\vert \biggr)< 1. $$

Proof

For \(t\in [0,\tau ]\), we have

$$\begin{aligned} y(t)-x(t) &=C \biggl(\frac{t^{\alpha }}{\alpha } \biggr) \bigl[y_{0}-x_{0}+g(y)-g(x) \bigr]+S \biggl(\frac{t ^{\alpha }}{\alpha } \biggr) \bigl[y_{1}-x_{1}+h(y)-h(x) \bigr] \\ &\quad{} + \int _{0}^{t}s^{\alpha -1}S \biggl( \frac{t^{\alpha }-s^{\alpha }}{\alpha } \biggr) \bigl[f \bigl(s,y(s) \bigr)-f \bigl(s,x(s) \bigr) \bigr]\,ds. \end{aligned}$$

Then we get

$$\begin{aligned} \bigl\Vert y(t)-x(t) \bigr\Vert &\leq \sup_{t\in [0,\tau ]} \biggl\vert C \biggl(\frac{t ^{\alpha }}{\alpha } \biggr) \biggr\vert \bigl[ \Vert y_{0}-x_{0} \Vert +l_{1} \vert y-x \vert \bigr] \\ &\quad{} +\sup_{t\in [0,\tau ]} \biggl\vert S \biggl(\frac{t^{\alpha }}{\alpha } \biggr) \biggr\vert \bigl[ \Vert y _{1}-x_{1} \Vert + l_{2} \vert y-x \vert \bigr] \\ &\quad{} +l_{3}\sup_{t\in [0,\tau ]} \biggl\vert S \biggl( \frac{t^{\alpha }}{\alpha } \biggr) \biggr\vert \int _{0} ^{t}s^{\alpha -1} \bigl\Vert y(s)-x(s) \bigr\Vert \,ds. \end{aligned}$$

Therefore, by Gronwall's inequality, we obtain

$$\begin{aligned} \vert y-x \vert &\leq \biggl[\sup_{t\in [0,\tau ]} \biggl\vert C \biggl(\frac{t^{\alpha }}{ \alpha } \biggr) \biggr\vert \bigl[ \Vert y_{0}-x_{0} \Vert +l_{1} \vert y-x \vert \bigr] + \sup_{t\in [0,\tau ]} \biggl\vert S \biggl(\frac{t^{\alpha }}{\alpha } \biggr) \biggr\vert \bigl[ \Vert y _{1}-x_{1} \Vert + l_{2} \vert y-x \vert \bigr] \biggr] \\ &\quad{} \times \exp \biggl(\frac{l_{3}\tau ^{\alpha }}{\alpha }\sup_{t\in [0,\tau ]} \biggl\vert S \biggl(\frac{t^{\alpha }}{\alpha } \biggr) \biggr\vert \biggr). \end{aligned}$$

Finally, we conclude that

$$\begin{aligned} \vert y-x \vert &\leq \biggl[\frac{ {\displaystyle{\sup_{t\in [0,\tau ]}}} \vert C(\frac{t^{\alpha }}{\alpha }) \vert \Vert y _{0}-x_{0} \Vert + {\displaystyle{\sup_{t\in [0,\tau ]}}} \vert S(\frac{t^{\alpha }}{\alpha }) \vert \Vert y _{1}-x_{1} \Vert }{1-[l_{1} {\displaystyle{\sup_{t\in [0,\tau ]}}} \vert C(\frac{t^{\alpha }}{\alpha }) \vert +l_{2} {\displaystyle{\sup_{t\in [0,\tau ]}}} \vert S(\frac{t^{\alpha }}{\alpha }) \vert ]\exp (\frac{l _{3}\tau ^{\alpha }}{\alpha } {\displaystyle{\sup_{t\in [0,\tau ]}}} \vert S(\frac{t^{\alpha }}{\alpha }) \vert )} \biggr] \\ &\quad{} \times \exp \biggl(\frac{l_{3}\tau ^{\alpha }}{\alpha } {\displaystyle{\sup_{t\in [0,\tau ]}}} \biggl\vert S \biggl(\frac{t^{\alpha }}{\alpha } \biggr) \biggr\vert \biggr). \end{aligned}$$

 □

Remark 3.1

If we take

$$\begin{aligned}& C_{1}=\frac{\exp (\frac{l_{3}\tau ^{\alpha }}{\alpha } {\displaystyle{\sup_{t\in [0,\tau ]}}} \vert S(\frac{t^{\alpha }}{\alpha }) \vert )}{1-[l_{1} {\displaystyle{\sup_{t\in [0,\tau ]}}} \vert C(\frac{t^{\alpha }}{\alpha }) \vert +l _{2} {\displaystyle{\sup_{t\in [0,\tau ]}}} \vert S(\frac{t^{\alpha }}{\alpha }) \vert ]\exp (\frac{l _{3}\tau ^{\alpha }}{\alpha } {\displaystyle{\sup_{t\in [0,\tau ]}}} \vert S(\frac{t^{\alpha }}{\alpha }) \vert )}, \\& C_{2}=\frac{\alpha }{\alpha -l_{3}\tau ^{\alpha } {\displaystyle{\sup_{t\in [0,\tau ]}}} \vert S(\frac{t^{\alpha }}{\alpha }) \vert -\alpha l_{1} {\displaystyle{\sup_{t\in [0,\tau ]}}} \vert C(\frac{t^{\alpha }}{\alpha }) \vert -\alpha l_{2} {\displaystyle{\sup_{t\in [0,\tau ]}}} \vert S(\frac{t^{\alpha }}{\alpha }) \vert }. \end{aligned}$$

We have \(C_{1}< C_{2}\). Hence the estimate of Theorem 3.4 is sharper than that of Theorem 3.3.

3.3 Special case of nonlocal conditions

Here we study a special case of nonlocal conditions, namely the case where the functions g and h are given by

$$\begin{aligned} g(x)= {\sum_{i=1}^{n}c_{i}x(t_{i})} \quad \text{and} \quad h(x)= {\sum_{i=1}^{n}d_{i}x(t_{i})}, \end{aligned}$$

where \(c_{i}\), \(d_{i}\), \(i=1,2,\ldots,n\), are given constants and \(0< t_{1}< t_{2}<\cdots<t_{n}<\tau \).

Proposition 3.1

Assume that \((H_{2})\) and \((H_{5})\) hold. Then the Cauchy problem (1.1) has a unique mild solution provided that there exists \(\varepsilon _{0}\in{}]0,1[\) such that

$$ \sup_{t\in [0,\tau ]} \biggl\vert C \biggl(\frac{t^{\alpha }}{\alpha } \biggr) \biggr\vert {\sum_{i=1}^{n} \vert c_{i} \vert }+\sup_{t\in [0,\tau ]} \biggl\vert S \biggl( \frac{t^{ \alpha }}{\alpha } \biggr) \biggr\vert {\sum_{i=1}^{n} \vert d_{i} \vert }< \varepsilon _{0}. $$

Proof

Define the operator \(\varGamma :\mathcal{C}\longrightarrow \mathcal{C}\) by

$$\begin{aligned} & \varGamma (x) (t)=C \biggl(\frac{t^{\alpha }}{\alpha } \biggr) \bigl[x_{0}+g(x) \bigr]+S \biggl(\frac{t ^{\alpha }}{\alpha } \biggr) \bigl[x_{1}+h(x) \bigr] + \int _{0}^{t}s^{\alpha -1}S \biggl( \frac{t ^{\alpha }-s^{\alpha }}{\alpha } \biggr)f \bigl(s,x(s) \bigr)\,ds, \\&\quad t\in [0,\tau ]. \end{aligned}$$

Now, we define a new norm \(\vert \cdot \vert _{\alpha }\) in \(\mathcal{C}\) by

$$ \vert x \vert _{\alpha }= \biggl\vert \exp \biggl(\frac{-\varepsilon (\cdot )^{\alpha }}{ \alpha } \biggr)x \biggr\vert , $$

where

$$ \varepsilon =\frac{l_{3} {\displaystyle{\sup_{t\in [0,\tau ]}}} \vert S(\frac{t^{\alpha }}{\alpha }) \vert }{\varepsilon _{0}- {\displaystyle{\sup_{t\in [0,\tau ]}}} \vert C(\frac{t^{\alpha }}{\alpha }) \vert \sum_{i=1} ^{n} \vert c_{i} \vert - {\displaystyle{\sup_{t\in [0,\tau ]}}} \vert S(\frac{t^{\alpha }}{\alpha }) \vert \sum_{i=1} ^{n} \vert d_{i} \vert }. $$

For \(x,y\in \mathcal{C}\) and \(t\in [0,\tau ]\), we have

$$\begin{aligned} \varGamma (y) (t)-\varGamma (x) (t) &=C \biggl(\frac{t^{\alpha }}{\alpha } \biggr) \bigl[g(y)-g(x) \bigr]+S \biggl(\frac{t ^{\alpha }}{\alpha } \biggr) \bigl[h(y)-h(x) \bigr] \\ &\quad{} + \int _{0}^{t}s^{\alpha -1}S \biggl( \frac{t^{\alpha }-s^{\alpha }}{\alpha } \biggr) \bigl[f \bigl(s,y(s) \bigr)-f \bigl(s,x(s) \bigr) \bigr]\,ds. \end{aligned}$$

Therefore, we obtain

$$\begin{aligned} \bigl\Vert \varGamma (y) (t)-\varGamma (x) (t) \bigr\Vert &\leq \Biggl[\exp \biggl(\frac{ \varepsilon t^{\alpha }}{\alpha } \biggr)\sup_{t\in [0,\tau ]} \biggl\vert C \biggl(\frac{t ^{\alpha }}{\alpha } \biggr) \biggr\vert {\sum _{i=1}^{n}} \vert c_{i} \vert \\ &\quad{} +\exp \biggl(\frac{\varepsilon t^{\alpha }}{\alpha } \biggr)\sup_{t\in [0,\tau ]} \biggl\vert S \biggl(\frac{t^{\alpha }}{\alpha } \biggr) \biggr\vert {\sum _{i=1}^{n}} \vert d_{i} \vert \\ &\quad{} +l_{3}\sup_{t\in [0,\tau ]} \biggl\vert S \biggl( \frac{t^{\alpha }}{\alpha } \biggr) \biggr\vert \int _{0} ^{t}s^{\alpha -1}\exp \biggl(\varepsilon \frac{s^{\alpha }}{\alpha } \biggr)\,ds \Biggr] \vert y-x \vert _{\alpha }. \end{aligned}$$

Accordingly, we show that

$$\begin{aligned} \bigl\vert \varGamma (y)-\varGamma (x) \bigr\vert _{\alpha } &\leq \Biggl[\sup_{t\in [0,\tau ]} \biggl\vert C \biggl(\frac{t^{\alpha }}{\alpha } \biggr) \biggr\vert {\sum_{i=1}^{n}} \vert c_{i} \vert +\sup_{t\in [0,\tau ]} \biggl\vert S \biggl(\frac{t^{ \alpha }}{\alpha } \biggr) \biggr\vert \Biggl( {\sum _{i=1}^{n}} \vert d_{i} \vert + \frac{l_{3}}{\varepsilon } \Biggr) \Biggr] \vert y-x \vert _{\alpha }. \end{aligned}$$

Hence, we conclude that

$$\begin{aligned} \bigl\vert \varGamma (y)-\varGamma (x) \bigr\vert _{\alpha } &\leq \varepsilon _{0} \vert y-x \vert _{\alpha }. \end{aligned}$$

Finally, thanks to the contraction principle, we get the result. □

3.4 Regularity of the mild solution

Here, we need the assumptions:

\((H_{6})\) :

The function f is \((\alpha )\)-differentiable with respect to its first variable and differentiable with respect to its second variable.

\((H_{7})\) :

\((x_{0}+g(x))\in D(A)\) and \(t\longmapsto C(t)[x_{0}+g(x)]\) is \((\alpha )\)-differentiable for all \(x\in \mathcal{C}\).

Theorem 3.5

Assume that \((H_{3})\)\((H_{7})\) hold. Then the mild solution of the Cauchy problem (1.1) is \((\alpha )\)-differentiable at \(t\in (0,\tau )\) provided that

$$ l_{1}\sup_{t\in [0,\tau ]} \biggl\vert C \biggl( \frac{t^{\alpha }}{\alpha } \biggr) \biggr\vert + \biggl(l_{2}+l _{3} \frac{\tau ^{\alpha }}{\alpha } \biggr)\sup_{t\in [0,\tau ]} \biggl\vert S \biggl( \frac{t ^{\alpha }}{\alpha } \biggr) \biggr\vert < 1. $$

Proof

The conditions of Theorem 3.2 hold. Then we denote by x the unique mild solution of the Cauchy problem (1.1). Next, let y be the continuous solution of the following integral equation:

$$\begin{aligned} y(t) &=S \biggl(\frac{t^{\alpha }}{\alpha } \biggr) \bigl[A \bigl(x_{0}+g(x) \bigr) \bigr]+C \biggl(\frac{t^{\alpha }}{\alpha } \biggr) \bigl[x_{0}+g(x)+f \bigl(0,x(0) \bigr) \bigr] \\ &\quad{} + \int _{0}^{t}s^{\alpha -1}S \biggl( \frac{t^{\alpha }-s^{\alpha }}{\alpha } \biggr)\frac{ \partial ^{\alpha } f}{\partial s^{\alpha } } \bigl(s,x(s) \bigr)\,ds \\ &\quad{}+ \int _{0}^{t}s ^{\alpha -1}S \biggl( \frac{t^{\alpha }-s^{\alpha }}{\alpha } \biggr)\frac{\partial f}{\partial x} \bigl(s,x(s) \bigr)y(s)\,ds, \quad t \in [0,\tau ]. \end{aligned}$$

We have \(\frac{x(t+\varepsilon t^{1-\alpha })-x(t)}{\varepsilon } \longrightarrow y(t)\) as \(\varepsilon \longrightarrow 0\) for \(t\in (0,\tau )\). Accordingly, we conclude that x is \((\alpha )\)-differentiable. □

4 Application

Consider the nonlocal fractional partial differential equation of the form

$$\begin{aligned} \begin{aligned}[b] \frac{\partial ^{\frac{1}{2}}}{\partial t^{\frac{1}{2}}}\frac{ \partial ^{\frac{1}{2}} u(t,x)}{\partial t^{\frac{1}{2}}}&=\frac{ \partial ^{2} u(t,x)}{\partial x^{2}}+ \frac{ \vert u(t,x) \vert }{1+ \vert u(t,x) \vert } \\ &\quad{} + \int _{0}^{t}\frac{ \vert u(s,x) \vert }{1+ \vert u(s,x) \vert }\,ds, \quad (t,x)\in {]0,1]}\times {]0,\pi [}, \end{aligned} \end{aligned}$$
(4.1)

with the following nonlocal conditions:

$$\begin{aligned} u(t,0)=u(t,\pi )=0 \quad \text{and} \quad u(0,x)= \frac{\partial ^{\frac{1}{2}}u(0,x)}{\partial t ^{\frac{1}{2}}}= {\sum_{i=1}^{n}}c_{i}u(t_{i},x), \quad x\in [0,\pi ], \end{aligned}$$
(4.2)

where \(0< t_{1}<\cdots<t_{n}<1\) and \(c_{1},\ldots,c_{n}\) are given real constants such that

$$ \sum_{i=1}^{n} \vert c_{i} \vert < \frac{4}{10}. $$

Let \(X=L^{2}([0,\pi ])\) and define the operator \(A: X\longrightarrow X\) by

$$ A=\frac{\partial ^{2}(\cdot )}{\partial x^{2}} \quad \text{and} \quad D(A)= \bigl\{ \omega \in H^{2}(0,\pi ), \omega (0)=\omega (\pi )=0 \bigr\} . $$

The operator A generates a cosine family \(((C(t))_{t\in \mathbb{R}},(S(t))_{t \in \mathbb{R}})\) satisfying \(\vert C(t)\vert \leq 1\) and \(\vert S(t)\vert \leq 1\) for all \(t\in [0,1]\). Next, we consider the following transformations:

$$\begin{aligned}& z(t) (x)=u(t,x), \quad\quad f \bigl(t,z(t) \bigr)=\frac{ \vert z(t) \vert }{1+ \vert z(t) \vert }+ \int _{0}^{t}\frac{ \vert z(s) \vert }{1+ \vert z(s) \vert }\,ds, \\& g(z)=h(z)= {\sum_{i=1}^{n}}c_{i}z(t_{i}). \end{aligned}$$

Then (4.1) and (4.2) become as follows:

$$ \textstyle\begin{cases} \frac{d^{\frac{1}{2}}}{d t^{\frac{1}{2}}}\frac{d^{\frac{1}{2}} z(t)}{d t^{\frac{1}{2}}}=Az(t)+f(t,z(t)),\quad t\in {]0,1]}, \\ z(0)=g(z), \\ \frac{d^{\frac{1}{2}}z(0)}{dt^{\frac{1}{2}}}=h(z). \end{cases} $$
(4.3)

Finally, all the hypotheses of Proposition 3.1 can be verified; hence the above Cauchy problem has a unique mild solution.
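For concreteness, with hypothetical coefficients \(c_{i}\) (our example values, and \(d_{i}=c_{i}\) as above), the smallness condition of Proposition 3.1 can be checked directly using the bounds \(\vert C(t)\vert \leq 1\) and \(\vert S(t)\vert \leq 1\) on \([0,1]\):

```python
# hypothetical coefficients c_i with sum |c_i| < 4/10 (here d_i = c_i)
c = [0.1, 0.15, 0.1]
S_c = sum(abs(ci) for ci in c)
assert S_c < 0.4

# with |C(t)| <= 1 and |S(t)| <= 1 on [0,1], the left-hand side of the
# condition in Proposition 3.1 is at most 2 * sum|c_i| < 0.8 =: eps_0 < 1
lhs_bound = 1.0 * S_c + 1.0 * S_c
eps0 = 0.8
assert lhs_bound < eps0 < 1.0
```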

5 Comment

By noticing the relation \(C(t)=C((t^{\frac{1}{\alpha }})^{\alpha })\) for a cosine family \((C(t))_{t\in \mathbb{R}}\), it is natural to consider the family of functions \(t\longmapsto C_{\alpha }(t):=C(t^{\alpha })\) and to propose, as in the semigroup case [4], the following definition of an α-cosine family.

For a Banach space X, a family \(\varphi _{\alpha }: \mathbb{R}\longrightarrow X\), \(t\longmapsto \varphi _{\alpha }(t)\), is said to be an α-cosine family if it satisfies the following functional relation:

$$ \varphi _{\alpha } \bigl((t+s)^{\frac{1}{\alpha }} \bigr)+\varphi _{\alpha } \bigl((t-s)^{\frac{1}{ \alpha }} \bigr)= 2\varphi _{\alpha } \bigl(t^{\frac{1}{\alpha }} \bigr)\varphi _{\alpha } \bigl(s^{\frac{1}{\alpha }} \bigr). $$
(5.1)

However, this definition does not behave as in the case of an α-semigroup [4]; it poses a serious problem. Indeed, the quantity \(\varphi _{\alpha }(t^{\frac{1}{\alpha }})\) must be defined for all t in \(\mathbb{R}\). To give a good sense to \(t^{\frac{1}{\alpha }}\) when t is negative, we must use the complex logarithm function (\(\operatorname{Ln}(z)\), \(z\in \mathbb{C}\)) [19] and define \(t^{\frac{1}{\alpha }}\) by \(e^{\frac{\operatorname{Ln}(t)}{\alpha }}\) for \(t<0\). This forces us to take X as a complex Banach space and to suppose that our α-cosine family \((\varphi _{\alpha }(t))_{t\in \mathbb{R}}\) can be extended to the complex plane \(\mathbb{C}\). This is not surprising if we admit that the conformable derivative is well adapted to physical problems. For example, the symmetry principle in quantum mechanics requires that the states of a quantum system be vectors of a complex Hilbert space (a particular Banach space) [20]. Nevertheless, if one could solve non-sequential evolution conformable differential equations of second order with nonlocal condition and define their associated α-cosine family, this would be a valuable addition to the literature.
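For nonnegative arguments, relation (5.1) does hold for the family induced by a classical scalar cosine function, since \(\varphi _{\alpha }(u^{\frac{1}{\alpha }})=C(u)\) for \(u\geq 0\); a quick numerical check in the scalar case (our naming), which illustrates that the difficulty arises only for negative t:

```python
import math, random

alpha = 0.5
phi = lambda t: math.cos(t ** alpha)       # phi_alpha(t) = C(t^alpha) with C = cos

random.seed(1)
for _ in range(100):
    s = random.uniform(0.0, 3.0)
    t = s + random.uniform(0.0, 3.0)       # keep t - s >= 0 so real powers suffice
    lhs = phi((t + s) ** (1 / alpha)) + phi((t - s) ** (1 / alpha))
    rhs = 2 * phi(t ** (1 / alpha)) * phi(s ** (1 / alpha))
    assert abs(lhs - rhs) < 1e-12          # reduces to cos(t+s)+cos(t-s)=2cos(t)cos(s)
```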

6 Conclusion

We have obtained Duhamel’s formula for sequential evolution conformable differential equations of second order with nonlocal condition. Under some suitable conditions, we have also obtained an existence result for the mild solution. In the case where the contraction condition type is satisfied, we have proved the uniqueness of the mild solution as well as its continuous dependence with respect to the initial data.

In the light of the above comment, and as the anonymous referee has proposed, it would be interesting to consider non-sequential conformable second order differential equations with nonlocal condition in a forthcoming paper.