1 Introduction

We investigate the presence of hereditary dynamics for a model driven by the parametric differential equation

$$\begin{aligned}&\displaystyle {\frac{\partial u}{\partial t}(t,x)= -b(t,x)u(t,x) + g\left( t,u(t,x),\int _{-\tau }^0u(t+\theta ,x)\mathrm{{d}}\theta \right) },\nonumber \\&\quad t\in [0,T]\, ,\, x\in [0,1] \end{aligned}$$
(1)

with memory effects. This equation is usually adopted to describe the dynamics of a population: u(t, x) represents the density of the population at time t and place x; \(-b(t,x)\) is the removal coefficient, including the death rate and the displacement of the population; the nonlinearity g is the population development law, involving a memory term expressed by its integral argument. Delayed and hereditary models are more appropriate than classical ones in several situations, for instance in the study of some pest populations where the individual’s maturation time is not negligible or where the current state is influenced by its own past.

We will treat the above equation as a special case of the following semilinear differential equation with functional delay:

$$\begin{aligned} y'(t)= A(t)y(t)+f\left( t,y(t),y_t\right) ,\quad t\in [0,T], \end{aligned}$$
(2)

where \(\{A(t)\}_{0\le t\le T}\) is a family of linear operators generating an evolution system and f is a nonlinear function. For every \(t\in [0,T]\), the functional argument \(y_t\) of f is defined as usual by

$$\begin{aligned} y_t(\theta )=y(t+\theta ),\,\theta \in [-\tau ,0], \end{aligned}$$
(3)

and belongs to the space \({\mathcal {C}_0}\) of all piecewise continuous functions defined on a given interval \([-\tau ,0]\), \( \tau >0\). The map \(y_t\) represents the history of the process from the present time t back to \(t-\tau \); the past from time \(t=0\) is provided by a given function \(\varphi \in {\mathcal {C}_0}\). In many fields of applied science the past of a system influences its evolution; consequently, it is necessary to build models in which this dependence on the past is formalized. In this direction, several articles on the subject have appeared in recent years (see, e.g. [14, 18], or the interesting book [5]).

In our model, external actions aimed at regulating the evolution of the phenomenon and taking place at pre-established times are allowed. In the general setting, these actions are represented by functions \(I_i:{\mathcal {C}_0}\rightarrow E\), \(i=1,\dots ,p\), causing abrupt changes in the state of the system driven by (2). We consider impulse functions \(I_i\) depending on the whole dynamics of the problem up to the time when the impulse acts, so that the delay structure of the system is preserved in this aspect of the problem as well; formally, this is described by the equations

$$\begin{aligned} y(t_i^+)=y(t_i) + I_i(y_{t_i}), \quad i=1,\dots , p. \end{aligned}$$

Models involving impulse functions are an excellent tool whenever a real-world phenomenon exhibits instantaneous changes in its state variables, as for example in real-time software verification, chemical process plants, mobile robotics, automotive control, and nerve impulse transmission. In biology they are called “regulation functions” and are used, for instance, in the study of population dynamics to keep the population within a prescribed range.

To obtain the existence of solutions to our impulsive Cauchy problem, we break it down into an ordered sequence of non-impulsive Cauchy problems. This method works not only when the impulses are given at fixed times (see, e.g. [6, 8]), but also in the case of impulses at variable times (see, e.g. [2, 12]). It is worth emphasizing that the advantage of this approach in the study of the existence of solutions to impulsive problems of the first type, compared to others in the literature, is that it allows us to avoid imposing hypotheses of any kind on the impulse functions. In the setting of delay problems, this technique leads to the construction of the solutions by means of an “extension-with-memory” process (see, e.g. [10] for the case of distributed delay, or e.g. [3] for the functional delay). We wish to underline that here we obtain the existence of delayed impulsive mild solutions for problems whose initial data belong to the same space \({\mathcal {C}_0}\), unlike in [3], where the initial data are fixed in turn in the different spaces \(C([-\tau ,t_{i-1}],E)\), \(i=1,\dots ,p\).

Furthermore, with the same technique we also study a model driven by the parabolic partial differential equation

$$\begin{aligned}&\displaystyle {\frac{\partial u}{\partial t}(t,x)= \frac{\partial ^2 u}{\partial x^2}(t,x) + g\left( t,u(t,x),\int _{-\tau }^0u(t+\theta ,x)\mathrm{{d}}\theta \right) }\, , \nonumber \\&\quad t\in [0,T]\, ,\, x\in [0,1], \end{aligned}$$
(4)

with memory effects. Reaction–diffusion equations with time delays like (4) appear in problems with delayed connections in which the processing of certain information is required, in relation not only to the dynamics of populations but also to chemical reactions and other physical phenomena.

The paper is divided into two parts: Sect. 2 is devoted to the study of the existence of solutions to the impulsive Cauchy problem with functional delay driven by Eq. (2); using the theory developed there, in Sect. 3 we provide the existence of evolutionary dynamics for the models driven, respectively, by (1) and (4).

At the end of the paper, we include an Appendix containing some background material, in order to make the article self-contained.

2 The Impulsive Cauchy Problem with Functional Delay

Let us consider the impulsive Cauchy problem with functional delay

$$\begin{aligned} \left\{ \begin{array}{l} y'(t)= A(t)y(t)+f\left( t,y(t),y_t\right) \ ,\ t\in [0,T]\, ,\ t\ne t_i\, , \ i=1,\dots , p\\ \\ y(t_i^+)=y(t_i) + I_i(y_{t_i}) \, ,\, i=1,\dots , p\\ \\ y_{t_0}=\varphi \end{array}\right. \end{aligned}$$
(5)

where \(\{A(t)\}_{0\le t\le T}\) is a family of linear operators; \(f:[0,T]\times E\times {\mathcal {C}_0}\rightarrow E\) is a function; \(0=t_0<t_1<\cdots<t_p<t_{p+1}=T;\) \(I_i:{\mathcal {C}_0}\rightarrow E\) is an impulse function; \(\varphi \in {\mathcal {C}_0}\). The set

$$\begin{aligned} {\mathcal {C}_0}=\left\{ c: [-\tau ,0]\rightarrow E\ : c \text{ has } \text{ at } \text{ most } \text{ a } \text{ finite } \text{ number } \text{ of } \text{ jump } \text{ discontinuities } \right\} , \end{aligned}$$

endowed with the norm

$$\begin{aligned} \Vert c\Vert _{{\mathcal {C}_0}}:=\frac{1}{\tau }\int _{-\tau }^0 \Vert c(\theta )\Vert \, \mathrm{{d}}\theta , \end{aligned}$$

is a normed (not Banach) space.
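
As a quick illustration of this averaged \(L^1\)-type norm, the following minimal sketch (in Python, with an illustrative delay \(\tau \) and history c chosen here, not taken from the paper) approximates \(\Vert c\Vert _{{\mathcal {C}_0}}\) for a scalar history with one jump discontinuity.

```python
# Minimal sketch (hypothetical data): the averaged L^1-type norm on C_0 for E = R.
import numpy as np

tau = 2.0
def c(theta):
    # piecewise continuous on [-tau, 0], with a jump at theta = -1
    return np.where(theta < -1.0, 1.0 + theta, np.cos(theta))

thetas = np.linspace(-tau, 0.0, 100001)          # uniform grid on [-tau, 0]
norm_C0 = np.mean(np.abs(c(thetas)))             # ≈ (1/tau) ∫_{-tau}^0 |c(θ)| dθ
print(norm_C0)
```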

Remark 2.1

Our problem includes the possibility that an impulse \(I_0:{\mathcal {C}_0}\rightarrow E\) is given at \(t_0=0\). In this case, we rewrite problem (5) taking the function

$$\begin{aligned} \varphi ^0(\theta )={\left\{ \begin{array}{ll} \varphi (\theta ), &{}\theta \in [-\tau ,0[\\ \varphi (0)+I_0(\varphi ),&{}\theta =0 \end{array}\right. } \end{aligned}$$

instead of \(\varphi \). Notice that \(\varphi ^0\) belongs to \({\mathcal {C}_0}\) as well.

On the linear part we assume that (see Sect. 3.2):

(A):

\(\{A(t)\}_{0\le t\le T}\) is a family of linear operators \(A(t):D(A)\subset E\rightarrow E,\) \(t\in [0,T],\) where D(A) is a dense subset of the Banach space E not depending on t, generating an evolution system \(\{T(t,s)\}_{0\le s\le t\le T}\).

We recall (see, e.g. [9]) that there exists a positive number D such that

$$\begin{aligned} \Vert T(t,s)\Vert _{{\mathcal {L}}(E)}\le D\, ,\ \text{ for } \text{ every } (t,s)\in \Delta , \end{aligned}$$
(6)

where \(\Delta =\{(t,s)\in [0,T]\times [0,T]\ :\ 0\le s\le t\le T\}\) and \({{\mathcal {L}}}(E)\) is the space of all bounded linear operators from E to E, furnished with the strong operator topology.

For \(i=1,\dots ,p+1\), we denote by \(S([-\tau ,t_i],E)\) the set of all piecewise continuous functions from \([-\tau ,t_i]\) to E, i.e. the functions having at most a finite number of jump discontinuities (cf. the definition of \({\mathcal {C}_0}\)).

Definition 2.1

A function \(y:[-\tau ,T]\rightarrow E\) is said to be a delayed impulsive mild solution of problem (5) if

  1. (i)

    \(\displaystyle {y(t) = T(t,0)\varphi (0) + \sum _{0<t_i<t}T(t,t_i)I_i(y_{t_i}) + \int _0^t T(t,s)f(s,y(s),y_s)\, \mathrm{{d}}s } \ ,\ t\in [0,T]\);

  2. (ii)

    \(y(t_i^+)=y(t_i) + I_i(y_{t_i}) \, ,\ i=1,\cdots , p\);

  3. (iii)

    \(y_{t_0}=\varphi \).

We mean that \(\sum _{0<t_i<t} T(t,t_i) I_i(y_{t_i})=0 \ \text{ if } t\in [0,t_1] \ .\)

Notice that if y is a delayed impulsive mild solution, then \(y\in S([-\tau ,T],E)\).

We assume that the nonlinearity \(f:[0,T]\times E\times {\mathcal {C}_0}\rightarrow E\) satisfies the properties:

  1. (f1)

    f is such that \(f(t,\cdot ,\cdot )\) is continuous, for every \(t\in [0,T]\), and \(f\left( \cdot ,y(\cdot ),y_{(\cdot )}\right) \) is measurable, for every \(y\in S([-\tau ,T],E)\);

  2. (f2)

    there exists \(\alpha \in L^1_+([0,T])\) such that

    $$\begin{aligned} \Vert f(t,y,c)\Vert \le \alpha (t)(1+\Vert y\Vert +\Vert c\Vert _{{\mathcal {C}_0}})\ ,\ \text{ for } \text{ a.e. } t\in [0,T] \text{ and } \text{ all } y\in E,c\in {\mathcal {C}_0}\ ; \end{aligned}$$
  3. (f3)

    there exists \(h\in L^1_+([0,T])\) such that

    $$\begin{aligned} \chi (f(t,\Omega _1,\Omega _2))\le h(t)\left[ \chi (\Omega _1)+\sup _{-\tau \le \theta \le 0}\chi (\Omega _2(\theta ))\right] \ ,\ \text{ for } \text{ a.e. } t\in [0,T] \end{aligned}$$

    for all bounded sets \(\Omega _1\subset E\), \(\Omega _2\subset {\mathcal {C}_0}\), where \(\chi \) is the Hausdorff measure of noncompactness in E and \(\Omega _2(\theta )=\{c(\theta ):c\in \Omega _2\}\).

Remark 2.2

If the Banach space E is separable, a sufficient condition for hypothesis (f1) to be satisfied is that f is a Carathéodory function (see [11, Corollary 2.5.24]).

On the impulse functions \(I_i:{\mathcal {C}_0}\rightarrow E\), \(i=1,\dots ,p\), no hypotheses are required, so that a wide class of problems with impulsive effects can be considered.

We denote by \(0_2\) the zero-element of \({\mathbb {R}}^2\) and by the symbol \({\mathbb {R}}^2_{0,+}\) the standard positive cone \({\mathbb {R}}^+_0\times {\mathbb {R}}^+_0\) endowed with the partial ordering \(\preccurlyeq \), where \(x\preccurlyeq y\) if and only if \(y-x\in {\mathbb {R}}_{0,+}^2\), for \(x,y\in {\mathbb {R}}^2\); clearly, \(x\prec y\) means that \(x\preccurlyeq y\) and \(x\ne y\).

Let us fix \(i\in \{1,\dots ,p+1\}\) and \(L>0\). We consider the well-defined vectorial measure of noncompactness \(\nu ^L_i\) (cf. [13, Example 2.1.4]), acting on the bounded subsets of \(C([t_{i-1},t_{i}],E)\) and taking values in \({\mathbb {R}}^2_{0,+}\), defined by

$$\begin{aligned} \nu ^L_i(\Omega )= & {} \max _{\{w_n\}_n\subset \Omega }\left( \gamma _i(\{w_n\}_n),\eta _i(\{w_n\}_n)\right) , \text{ for } \text{ all } \text{ bounded } \Omega \subset C([t_{i-1},t_{i}],E), \end{aligned}$$
(7)

where

$$\begin{aligned} \gamma _i(\{w_n\}_n)= & {} \sup _{t\in [t_{i-1},t_{i}]}e^{-Lt} \chi \left( \{w_n(t)\}_n\right) ; \\ \eta _i(\{w_n\}_n)= & {} mod_{C\, i}(\{w_n\}_n), \end{aligned}$$

where \(mod_{C\, i}\) denotes the modulus of continuity in \(C([t_{i-1},t_{i}],E)\) and the maximum in (7) is taken over all countable subsets \(\{w_n\}_n\) of \(\Omega \). It is known that \(\nu ^L_i\) is monotone and invariant with respect to the union with compact sets (hence also nonsingular).

Finally, for every \(z\in C([t_{i-1},t_i],E)\) and \(\xi \in S([-\tau ,t_{i-1}],E)\), \(i=1,\dots ,p+1\), we consider the map \(z[\xi ]\in S([-\tau ,t_i],E)\) defined by

$$\begin{aligned} z[\xi ](t)= {\left\{ \begin{array}{ll} z(t),&{} t\in ]t_{i-1},t_i]\\ \xi (t),&{} t\in [-\tau ,t_{i-1}]. \end{array}\right. } \end{aligned}$$
(8)
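
To fix ideas, the next minimal sketch (in Python, for the scalar case \(E={\mathbb {R}}\) and with hypothetical data \(\tau \), \(t_{i-1}\), \(t_i\), \(\xi \), z) implements the gluing map \(z[\xi ]\) of (8) and the associated history segments \(\theta \mapsto z[\xi ]_t(\theta )=z[\xi ](t+\theta )\) of (3); this is the “extension-with-memory” mechanism used throughout this section.

```python
# Minimal sketch (illustrative data): the gluing z[xi] of (8) and the history map (3).
import numpy as np

tau, t_prev, t_i = 1.0, 1.5, 3.0                 # delay and nodes t_{i-1}, t_i (hypothetical)
xi = lambda t: np.cos(t)                         # plays the role of xi in S([-tau, t_{i-1}], R)
z  = lambda t: 1.0 + 0.1 * (t - t_prev) ** 2     # plays the role of z in C([t_{i-1}, t_i], R)

def z_glue(t):                                   # z[xi](t): z on ]t_{i-1}, t_i], xi before
    t = np.asarray(t, dtype=float)
    return np.where(t > t_prev, z(t), xi(t))

def history(t, n_pts=1000):                      # theta -> z[xi]_t(theta), theta in [-tau, 0]
    thetas = np.linspace(-tau, 0.0, n_pts)
    return thetas, z_glue(t + thetas)

thetas, segment = history(2.0)                   # history of the glued trajectory at time t = 2
```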

2.1 Existence of Solutions

In this section we show the existence of delayed impulsive mild solutions to (5). Preliminarily, we state the version of Proposition 1.4 (b) in [16] for the Hausdorff measure of noncompactness \(\chi \).

Proposition 2.1

If E is a real Banach space and \({{\mathcal {M}}}\subset L^1([a,b],E)\) is a countable and integrably bounded set, then the function \(\chi ({{\mathcal {M}}}(\cdot ))\) belongs to \(L^1_+([a,b])\) and satisfies the inequality

$$\begin{aligned} \chi \left( \int _a^b{{\mathcal {M}}}(s)\,\mathrm{{d}}s\right) \le 4\int _a^b\chi ({{\mathcal {M}}}(s))\, \mathrm{{d}}s. \end{aligned}$$

Further, we provide the following technical result.

Lemma 2.1

For every \(k>0\) and \(w\in L^1_+([0,T])\), there exists \(H=H(k,w)>0\) such that

$$\begin{aligned} b_H:= & {} \max _{t\in [0,T]}\int _{0}^t k w(s) e^{-H(t-s)}\, \mathrm{{d}}s\, <\, 1. \end{aligned}$$

Proof

We first show that

$$\begin{aligned} \inf _{n\in {\mathbb {N}}}b_n=0, \end{aligned}$$
(9)

where \( b_n:=\max _{t\in [0,T]}\int _{0}^t k w(s)e^{-n(t-s)}\, \mathrm{{d}}s\ ,\) \(n\in {\mathbb {N}}\).

It is easy to see that, for every \(n\in {\mathbb {N}}\), there exists \(\tau _n\in [0,T]\) such that

$$\begin{aligned} b_n-\frac{1}{n} < \int _{0}^{\tau _n}k w(s) e^{-n(\tau _n-s)}\, \mathrm{{d}}s= \int _{0}^T k w(s)\kappa _{[0,\tau _n]}(s) e^{-n(\tau _n-s)} \, \mathrm{{d}}s \end{aligned}$$
(10)

where \(\kappa _{[0,\tau _n]}\) denotes the characteristic function of the set \([0,\tau _n]\).

Now, we note that the sequence \((\psi _n)_n\), where

$$\begin{aligned} \psi _n(s)=k w(s)\kappa _{[0,\tau _n]}(s) e^{-n(\tau _n-s)}\, ,\ \text{ for } \text{ all } s\in [0,T],\, n\in {\mathbb {N}}\, , \end{aligned}$$

after possibly passing to a subsequence such that \(\tau _n\rightarrow \tau ^*\) for some \(\tau ^*\in [0,T]\), is such that

$$\begin{aligned} \psi _n(s)\rightarrow 0\ ,\ \text{ for } \text{ a.e. } s\in \, ]0,T[ \end{aligned}$$

and

$$\begin{aligned} \Vert \psi _n(s)\Vert \le k w(s) \ ,\ \text{ for } \text{ every } s\in [0,T] \, ,\, n\in {\mathbb {N}}. \end{aligned}$$

So the Lebesgue convergence theorem implies that \( \lim _{n\rightarrow \infty }\int _{0}^T\psi _n(s)\, \mathrm{{d}}s=0. \) Since \(b_n\ge 0\), by (10) we obtain that \(b_n\rightarrow 0\) along this subsequence. Hence, (9) is satisfied.

Therefore, there exists \(H\in {\mathbb {N}}\) such that \(b_H < 1\). \(\square \)
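
The statement can also be checked numerically; the following minimal sketch (in Python, with illustrative choices of k, T and w, not taken from the paper) approximates \(b_n\) by Riemann sums and shows it decreasing below 1 as n grows.

```python
# Numerical sketch of Lemma 2.1 for illustrative data (k, T, w chosen here, not in the paper).
import numpy as np

k, T = 2.0, 1.0
w = lambda s: 1.0 + s                           # an illustrative weight in L^1_+([0, T])
s = np.linspace(0.0, T, 5001)
ds = s[1] - s[0]

def b(n):
    # left Riemann sum of ∫_0^t k w(s) e^{-n(t-s)} ds, maximised over grid values of t
    return max(np.sum(k * w(s[:j]) * np.exp(-n * (s[j] - s[:j]))) * ds
               for j in range(1, len(s), 50))

for n in (1, 5, 50, 500):
    print(n, b(n))                              # decreases towards 0; any n with b(n) < 1 works as H
```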

We are now in a position to provide our main result.

Theorem 2.1

Let E be a real Banach space. Assume that \(\{A(t)\}_{t\in [0,T]}\) and \(f : [0,T] \times E \times {\mathcal {C}_0}\rightarrow E\), respectively, satisfy (A) and (f1)–(f3).

Then problem (5) has at least one delayed impulsive mild solution.

Proof

We proceed by steps.

Step 1. Let us consider the real interval \([0,t_1]\) and the related Cauchy problem with functional delay arising from (5)

$$\begin{aligned} \left\{ \begin{array}{l} y'(t)= A(t)y(t) + f(t,y(t),y_t) \ ,\quad t\in [0,t_1],\\ \\ y_{t_0} = \varphi \in {\mathcal {C}_0}. \end{array}\right. \end{aligned}$$
(11)

A mild solution to (11) is a function \(y:[-\tau ,t_1]\rightarrow E\) such that

$$\begin{aligned}&y(t)=T(t,0)\varphi (0) + \int _{0}^t T(t,s)f(s,y(s),y_s)\, \mathrm{{d}}s \, ,\ t\in [0,t_1], \nonumber \\&y_{t_0}=\varphi . \end{aligned}$$
(12)

Step 1.1. Let us define the integral operator \(\Gamma _1:C([0,t_1],E)\rightarrow C([0,t_1],E)\) as

$$\begin{aligned} \Gamma _1(y)(t)= & {} T(t,0)\varphi (0) + \int _{0}^t T(t,s)f(s,y(s),y[\varphi ]_s)\, \mathrm{{d}}s ,\nonumber \\&t\in [0,t_1]\ ,\ y\in C([0,t_1],E), \end{aligned}$$
(13)

where the map \(y[\varphi ]_s\in {\mathcal {C}_0}\), \(s\in [0,t_1]\), is expressed by

$$\begin{aligned} \text{ if } s\le \tau ,&y[\varphi ]_s(\theta )= {\left\{ \begin{array}{ll} y(s+\theta ), &{} -s< \theta \le 0\\ \varphi (s+\theta ), &{} -\tau \le \theta \le -s; \end{array}\right. } \end{aligned}$$
(14)
$$\begin{aligned} \text{ if } s> \tau ,&y[\varphi ]_s(\theta )=y(s+\theta ),\,- \tau \le \theta \le 0 \end{aligned}$$
(15)

recalling (3) and that \(y[\varphi ]\in S([-\tau ,t_1],E)\) is defined by (8). Clearly, if \(t_1\le \tau \) (in particular, if \(T<\tau \)), the case (15) does not appear.

As a consequence of assumption (f1) (which applies also to functions defined on \([-\tau ,t_1]\), simply by constant prolongation to \([-\tau ,T]\)), the map \(s\mapsto T(t,s)f(s,y[\varphi ](s),y[\varphi ]_s)\) is measurable; hence, by (f2), the operator \(\Gamma _1\) is well defined.

Clearly, if \({\bar{y}}\) is a fixed point for \(\Gamma _1\), then the function \({\bar{y}}[\varphi ]\) is a mild solution to problem (11).

Step 1.2. We show that there exists a set in \(C([0,t_1],E)\) which is invariant under the action of \(\Gamma _1\).

Given D (see (6)) and \(\alpha \) (cf. (f2)), by applying Lemma 2.1 with \(k=2D\) and \(w=\alpha \), we find \(N>0\) such that

$$\begin{aligned} q_N= \max _{t\in [0,T]} \int _{0}^t 2D \alpha (s) e^{-N(t-s)}\, \mathrm{{d}}s\, <\, 1. \end{aligned}$$
(16)

We fix

$$\begin{aligned} {R_1}\ge \frac{D(\Vert \varphi (0)\Vert +\Vert \alpha \Vert _{L^1}+\Vert \varphi \Vert _{{\mathcal {C}_0}}\Vert \alpha \Vert _{L^1})}{1-q_N}. \end{aligned}$$
(17)

Let \(I\!\!B_{R_1}\) be the closed ball centered at 0 with radius \({R_1}\), given by (17), in the space \((C([0,t_1],E),\) \(\Vert \cdot \Vert _{N_{1}})\), where \(\Vert \cdot \Vert _{N_{1}}:C([0,t_1],E)\rightarrow {\mathbb {R}}^+_0\) is the Bielecki norm

$$\begin{aligned} \Vert y\Vert _{N_{1}}=\max _{t\in [0,t_1]}e^{-Nt}\Vert y(t)\Vert \ ,\quad \text{ for } \text{ all } y\in C([0,t_1],E) \end{aligned}$$
(18)

which is equivalent to the usual norm in \(C([0,t_1],E)\), which we denote by \(\Vert \cdot \Vert _{C_{1}}\).
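
For the reader’s convenience, this equivalence follows from the elementary two-sided bound

$$\begin{aligned} e^{-Nt_1}\Vert y\Vert _{C_{1}}\le \Vert y\Vert _{N_{1}}\le \Vert y\Vert _{C_{1}}\ ,\quad \text{ for } \text{ all } y\in C([0,t_1],E), \end{aligned}$$

which holds since \(e^{-Nt_1}\le e^{-Nt}\le 1\) for every \(t\in [0,t_1]\).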

We show that

$$\begin{aligned} \Gamma _1(I\!\!B_{R_1})\subset I\!\!B_{R_1}. \end{aligned}$$
(19)

Fix \(y\in I\!\!B_{R_1}\); for every \(t\in [0,t_1]\), by (13), (f2), and (6), we get

$$\begin{aligned} e^{-Nt}\Vert \Gamma _1(y)(t)\Vert\le & {} e^{-Nt}\Vert T(t,0)\varphi (0)\Vert +e^{-Nt}\int _{0}^t \Vert T(t,s)f(s,y(s),y[\varphi ]_s)\Vert \, \mathrm{{d}}s\nonumber \\\le & {} D\Vert \varphi (0)\Vert +e^{-Nt}D\int _{0}^t \alpha (s)(1+\Vert y(s)\Vert +\Vert y[\varphi ]_s\Vert _{{\mathcal {C}_0}})\, \mathrm{{d}}s \nonumber \\\le & {} D(\Vert \varphi (0)\Vert +\Vert \alpha \Vert _{L^1})+e^{-Nt}D\int _{0}^t \alpha (s) \Vert y(s)\Vert \, \mathrm{{d}}s \nonumber \\&+ e^{-Nt}D\int _{0}^t \alpha (s) \left( \frac{1}{\tau }\int _{-\tau }^0\Vert y[\varphi ]_s(\theta )\Vert \mathrm{{d}}\theta \right) \, \mathrm{{d}}s. \end{aligned}$$
(20)

Now, we observe that, by (14) and (15), the following estimates hold:

if \(t \le \tau \) then

$$\begin{aligned}&\int _{0}^t \alpha (s) \left( \frac{1}{\tau }\int _{-\tau }^0\Vert y[\varphi ]_s(\theta )\Vert \mathrm{{d}}\theta \right) \, \mathrm{{d}}s\nonumber \\&\quad = \int _{0}^t \alpha (s) \left( \frac{1}{\tau }\int _{-\tau }^{-s} \Vert \varphi (s+\theta ) \Vert \mathrm{{d}}\theta +\frac{1}{\tau }\int _{-s}^0\Vert y(s+\theta )\Vert \mathrm{{d}}\theta \right) \, \mathrm{{d}}s\nonumber \\&\quad = \int _{0}^t \alpha (s) \left( \frac{1}{\tau }\int _{s-\tau }^0 \Vert \varphi (w)\Vert \mathrm{{d}}w +\frac{1}{\tau }\int _0^s\Vert y(w)\Vert \mathrm{{d}}w\right) \, \mathrm{{d}}s \nonumber \\&\quad \le \int _{0}^t \alpha (s) \Vert \varphi \Vert _{\mathcal {C}_0}\mathrm{{d}}s +\int _0^t \alpha (s) \left( \frac{1}{\tau }\int _0^s e^{Nw}\Vert y\Vert _{N_{1}}\mathrm{{d}}w\right) \, \mathrm{{d}}s\nonumber \\&\quad \le \Vert \varphi \Vert _{\mathcal {C}_0}\Vert \alpha \Vert _{L^1}+\int _0^t \alpha (s) \left( \frac{1}{\tau }\Vert y\Vert _{N_{1}}\int _0^s e^{Ns}\mathrm{{d}}w\right) \, \mathrm{{d}}s\nonumber \\&\quad \le \Vert \varphi \Vert _{\mathcal {C}_0}\Vert \alpha \Vert _{L^1}+\frac{1}{\tau }\Vert y\Vert _{N_{1}}\int _0^t \alpha (s) e^{Ns} \tau \, \mathrm{{d}}s\nonumber \\&\quad = \Vert \varphi \Vert _{\mathcal {C}_0}\Vert \alpha \Vert _{L^1}+\Vert y\Vert _{N_{1}}\int _0^t \alpha (s) e^{Ns} \, \mathrm{{d}}s; \end{aligned}$$
(21)

if \(t>\tau \) (and this holds only if \(\tau <t_1\)), we first note that

$$\begin{aligned} \int _{\tau }^t \alpha (s) \left( \frac{1}{\tau }\int _{-\tau }^0\Vert y[\varphi ]_s(\theta )\Vert \mathrm{{d}}\theta \right) \, \mathrm{{d}}s= & {} \int _{\tau }^t \alpha (s) \left( \frac{1}{\tau }\int _{-\tau }^0\Vert y(s+\theta ) \Vert \mathrm{{d}}\theta \right) \, \mathrm{{d}}s\nonumber \\= & {} \int _{\tau }^t \alpha (s) \left( \frac{1}{\tau }\int _{s-\tau }^s \Vert y(w)\Vert \mathrm{{d}}w\right) \, \mathrm{{d}}s\nonumber \\\le & {} \int _{\tau }^t \alpha (s) \left( \frac{1}{\tau }\int _{s-\tau }^s e^{Nw} \Vert y\Vert _{N_{1}}\mathrm{{d}}w\right) \, \mathrm{{d}}s\nonumber \\\le & {} \int _{\tau }^t \alpha (s) \left( \frac{1}{\tau }\Vert y\Vert _{N_{1}}\int _{s-\tau }^s e^{Ns}\mathrm{{d}}w\right) \, \mathrm{{d}}s\nonumber \\\le & {} \frac{1}{\tau }\Vert y\Vert _{N_{1}} \int _{\tau }^t \alpha (s) e^{Ns}\tau \, \mathrm{{d}}s\nonumber \\= & {} \Vert y\Vert _{N_{1}} \int _{\tau }^t \alpha (s) e^{Ns}\, \mathrm{{d}}s; \end{aligned}$$
(22)

hence, by (21) and (22), we obtain

$$\begin{aligned}&\int _{0}^t \alpha (s) \left( \frac{1}{\tau }\int _{-\tau }^0\Vert y[\varphi ]_s(\theta ) \Vert \mathrm{{d}}\theta \right) \, \mathrm{{d}}s\\&\quad = \int _{0}^\tau \alpha (s) \left( \frac{1}{\tau }\int _{-\tau }^0\Vert y[\varphi ]_s(\theta ) \Vert \mathrm{{d}}\theta \right) \, \mathrm{{d}}s+\int _{\tau }^t \alpha (s) \left( \frac{1}{\tau }\int _{-\tau }^0 \Vert y[\varphi ]_s(\theta )\Vert \mathrm{{d}}\theta \right) \, \mathrm{{d}}s\\&\quad \le \Vert \varphi \Vert _{\mathcal {C}_0}\Vert \alpha \Vert _{L^1}+\Vert y\Vert _{N_{1}}\int _0^\tau \alpha (s) e^{Ns} \, \mathrm{{d}}s + \Vert y\Vert _{N_{1}} \int _{\tau }^t \alpha (s) e^{Ns}\, \mathrm{{d}}s\\&\quad = \Vert \varphi \Vert _{\mathcal {C}_0}\Vert \alpha \Vert _{L^1}+\Vert y\Vert _{N_{1}} \int _0^t \alpha (s) e^{Ns}\, \mathrm{{d}}s. \end{aligned}$$

Therefore, for every \(t\in [0,t_1]\), we obtain the following estimate for the last term of (20):

$$\begin{aligned}&e^{-Nt}D\int _{0}^t \alpha (s) \left( \frac{1}{\tau }\int _{-\tau }^0\Vert y[\varphi ]_s(\theta )\Vert \mathrm{{d}}\theta \right) \, \mathrm{{d}}s\nonumber \\&\quad \le D \Vert \varphi \Vert _{\mathcal {C}_0}\Vert \alpha \Vert _{L^1} + D\Vert y\Vert _{N_{1}}\int _{0}^t \alpha (s) e^{-N(t-s)} \mathrm{{d}}s. \end{aligned}$$
(23)

Hence, by (20) and (23), we deduce

$$\begin{aligned} e^{-Nt}\Vert \Gamma _1(y)(t)\Vert\le & {} D(\Vert \varphi (0)\Vert +\Vert \alpha \Vert _{L^1})+ e^{-Nt}D\int _{0}^t \alpha (s)\Vert y(s)\Vert \, \mathrm{{d}}s\\&+ D \Vert \varphi \Vert _{\mathcal {C}_0}\Vert \alpha \Vert _{L^1} + D\Vert y\Vert _{N_{1}}\int _{0}^t \alpha (s) e^{-N(t-s)} \mathrm{{d}}s\\\le & {} D(\Vert \varphi (0)\Vert +\Vert \alpha \Vert _{L^1})+D\Vert y\Vert _{N_{1}} \int _{0}^t \alpha (s)e^{-N(t-s)}\, \mathrm{{d}}s \\&+ D \Vert \varphi \Vert _{\mathcal {C}_0}\Vert \alpha \Vert _{L^1} + D\Vert y\Vert _{N_{1}}\int _{0}^t \alpha (s) e^{-N(t-s)} \mathrm{{d}}s\\= & {} D(\Vert \varphi (0)\Vert +\Vert \alpha \Vert _{L^1}+ \Vert \varphi \Vert _{\mathcal {C}_0}\Vert \alpha \Vert _{L^1} )\\&+2D\Vert y\Vert _{N_{1}} \int _{0}^t \alpha (s) e^{-N(t-s)}\, \mathrm{{d}}s \\\le & {} D(\Vert \varphi (0)\Vert +\Vert \alpha \Vert _{L^1}+ \Vert \varphi \Vert _{\mathcal {C}_0}\Vert \alpha \Vert _{L^1} )\\&+ 2D{R_1}\int _{0}^t \alpha (s)e^{-N(t-s)}\, \mathrm{{d}}s. \end{aligned}$$

Recalling (16) and (17), we, therefore, obtain

$$\begin{aligned} e^{-Nt}\Vert \Gamma _1(y)(t)\Vert \le D(\Vert \varphi (0)\Vert +\Vert \alpha \Vert _{L^1}+ \Vert \varphi \Vert _{\mathcal {C}_0}\Vert \alpha \Vert _{L^1} )+{R_1}q_N\le {R_1}. \end{aligned}$$

So, by (18) we have \(\Vert \Gamma _1(y)\Vert _{N_{1}}\le {R_1}\) and \(\Gamma _1(y)\in I\!\!B_{R_1}\) as desired.

Step 1.3. Consider the restriction of \(\Gamma _1\) to \(I\!\!B_{R_1}\), i.e. the map

$$\begin{aligned} \Gamma _{1,{R_1}}:I\!\!B_{R_1}\rightarrow I\!\!B_{R_1}. \end{aligned}$$

The ball \(I\!\!B_{R_1}\) is closed and convex in the space \(\left( C([0,t_1],E),\Vert \cdot \Vert _{N_1}\right) \), so it is a closed and convex subset of \(\left( C([0,t_1],E),\Vert \cdot \Vert _{C_1}\right) \).

Step 1.4. The integral operator \(\Gamma _{1,{R_1}}\) has closed graph.

Let \((y_n)_n\) be a sequence in \(I\!\!B_{R_1}\) such that \(y_n\rightarrow {\bar{y}}\) in \(\left( C([0,t_1],E),\Vert \cdot \Vert _{C_1}\right) \); further, suppose that \(\left( \Gamma _{1,{R_1}}(y_n)\right) _n\) converges to \({\bar{z}}\) in \(\left( C([0,t_1],E),\Vert \cdot \Vert _{C_1}\right) \).

Let us fix \(t\in [0,t_1]\) and consider the estimate (cf. (6))

$$\begin{aligned} \Vert \Gamma _{1,{R_1}}(y_n)(t)-\Gamma _{1,{R_1}}({\bar{y}})(t)\Vert\le & {} D \int _{0}^t \Vert f(s,y_n(s),y_n[\varphi ]_s)\nonumber \\&-f(s,{\bar{y}}(s),{\bar{y}}[\varphi ]_s)\Vert \, \mathrm{{d}}s, \end{aligned}$$
(24)

for every \(n\in {\mathbb {N}}\).

We first observe that the sequence \(\left( f(\cdot ,y_n(\cdot ),y_n[\varphi ]_{(\cdot )})\right) _n\) is integrably bounded on [0, t]. Indeed, take \(Q_1>0\) such that

$$\begin{aligned} I\!\!B_{R_1}\subset B(0,Q_1), \end{aligned}$$
(25)

where \(B(0,Q_1)\) is the ball centered at 0 with radius \(Q_1\) in the space \((C([0,t_1], E),\Vert \cdot \Vert _{C_1})\); then, by (f2), (8) and (25), we have

$$\begin{aligned} \Vert f(s,y_n(s),y_n[\varphi ]_s)\Vert\le & {} \alpha (s)\left( 1+\Vert y_n(s)\Vert +\Vert y_n[\varphi ]_s \Vert _{\mathcal {C}_0}\right) \nonumber \\\le & {} \alpha (s)\left( 1+\Vert y_n\Vert _{C_1}+\frac{1}{\tau }\int _{-\tau }^0 \Vert \varphi (w)\Vert \mathrm{{d}}w+\frac{1}{\tau }\int _0^s \Vert y_n(w)\Vert \mathrm{{d}}w\right) \nonumber \\\le & {} \alpha (s)\left( 1+\Vert \varphi \Vert _{\mathcal {C}_0}+\Vert y_n\Vert _{C_1}+ \frac{t_1}{\tau } \Vert y_n\Vert _{C_1}\right) \nonumber \\\le & {} \alpha (s)\left( 1+\Vert \varphi \Vert _{\mathcal {C}_0}+Q_1+\frac{t_1}{\tau }Q_1\right) \ ,\nonumber \\&\hbox { for a.e.} s\in [0,t] \hbox {and every} n\in {\mathbb {N}}. \end{aligned}$$
(26)

Let \(\alpha _1:[0,t_1]\rightarrow {\mathbb {R}}\) be the map defined by

$$\begin{aligned} \alpha _1 (t)=\alpha (t)\left( 1+\Vert \varphi \Vert _{\mathcal {C}_0}+Q_1+\frac{t_1}{\tau }Q_1\right) \ ,\ t\in [0,t_1], \end{aligned}$$
(27)

obviously \(\alpha _1\in L^1_+([0,t_1])\), and so (26) implies the integrable boundedness.

Further, the sequence \(\left( f(\cdot ,y_n(\cdot ),y_n[\varphi ]_{(\cdot )})\right) _n\) converges pointwise on [0, t]. To see this, fix \(s\in [0,t]\); we show that

$$\begin{aligned} f(s,y_n(s),y_n[\varphi ]_s)\rightarrow f(s,{\bar{y}}(s),{\bar{y}}[\varphi ]_s) \text{ in } E. \end{aligned}$$
(28)

From the convergence \(y_n\rightarrow {\bar{y}}\) in \(\left( C([0,t_1],E),\Vert \cdot \Vert _{C_1}\right) \), we clearly have that \(y_n(s)\rightarrow {\bar{y}}(s)\) in E. On the other hand, we also deduce that \(y_n[\varphi ]_s\rightarrow {\bar{y}}[\varphi ]_s\) in \({\mathcal {C}_0}\); indeed, using (14) and (15):

if \(s\le \tau \) then

$$\begin{aligned} \Vert y_n[\varphi ]_s - {\bar{y}}[\varphi ]_s\Vert _{{\mathcal {C}_0}}= & {} \frac{1}{\tau }\int _{-\tau }^{-s} \Vert y_n[\varphi ]_s(\theta ) - {\bar{y}}[\varphi ]_s(\theta )\Vert \mathrm{{d}}\theta \\&+ \frac{1}{\tau }\int _{-s}^0 \Vert y_n[\varphi ]_s(\theta ) - {\bar{y}}[\varphi ]_s(\theta ) \Vert \mathrm{{d}}\theta \\= & {} \frac{1}{\tau }\int _{-s}^0 \Vert y_n(s+\theta ) - {\bar{y}}(s+\theta )\Vert \mathrm{{d}}\theta \\= & {} \frac{1}{\tau }\int _0^s \Vert y_n(w) - {\bar{y}}(w)\Vert \mathrm{{d}}w\\\le & {} \frac{1}{\tau }\int _0^{t_1} \Vert y_n - {\bar{y}}\Vert _{C_1}\mathrm{{d}}w=\frac{t_1}{\tau }\Vert y_n - {\bar{y}}\Vert _{C_1}\ ; \end{aligned}$$

if \(s>\tau \) (and this holds only if \(\tau <t_1\)) then

$$\begin{aligned} \Vert y_n[\varphi ]_s - {\bar{y}}[\varphi ]_s\Vert _{{\mathcal {C}_0}}= & {} \frac{1}{\tau }\int _{-\tau }^0 \Vert y_n(s+\theta ) - {\bar{y}}(s+\theta )\Vert \mathrm{{d}}\theta \\= & {} \frac{1}{\tau }\int _{s-\tau }^s \Vert y_n(w) - {\bar{y}}(w)\Vert \mathrm{{d}}w\\\le & {} \frac{1}{\tau }\int _0^{t_1} \Vert y_n - {\bar{y}}\Vert _{C_1}\mathrm{{d}}w=\frac{t_1}{\tau }\Vert y_n - {\bar{y}}\Vert _{C_1}. \end{aligned}$$

Recalling now that \(f(s,\cdot ,\cdot )\) is continuous, in particular, in \(({\bar{y}}(s),{\bar{y}}[\varphi ]_s)\), the convergence (28) is achieved.

We can now apply the Lebesgue convergence theorem on [0, t] to (24), so that

$$\begin{aligned} \Gamma _{1,{R_1}}(y_n)(t)\rightarrow \Gamma _{1,{R_1}}({\bar{y}})(t). \end{aligned}$$

By the uniqueness of the limit, we can write

$$\begin{aligned} {\bar{z}}(t)=\Gamma _{1,{R_1}}({\bar{y}})(t). \end{aligned}$$

Therefore \(\Gamma _{1,{R_1}}(y_n)\rightarrow \Gamma _{1,{R_1}}({\bar{y}})\) in \(\left( C([0,t_1],E),\Vert \cdot \Vert _{C_1}\right) \). Hence the graph of \(\Gamma _{1,{R_1}}\) is closed.

Step 1.5. Given the function \(h\in L^1_+([0,T])\) from (f3) and the constant D from (6), we apply Lemma 2.1 with \(k=8D\) and \(w=h\), and choose L large enough so that

$$\begin{aligned} p_L=\max _{t\in [0,T]}\int _0^t 8D e^{-L(t-s)}h(s)\, \mathrm{{d}}s\, <\, 1. \end{aligned}$$
(29)

Then we take the corresponding vectorial MNC \(\nu _1^L\) on \(C([0,t_1],E)\) defined in (7).

We prove that the integral operator \(\Gamma _{1,{R_1}}\) is \(\nu _1^L\)-condensing, i.e. that

  1. (I)

    \(\Gamma _{1,{R_1}}(I\!\!B_{R_1})\) is bounded

and

  2. (II)

    \(\nu _1^L(\Omega )\preccurlyeq \nu _1^L(\Gamma _{1,{R_1}}(\Omega ))\) implies \(\nu _1^L(\Omega )= 0_2\), for every \(\Omega \subset I\!\!B_{R_1}\).

First, by (19) and (25) we have that

$$\begin{aligned} \Gamma _{1,{R_1}}(I\!\!B_{R_1}) \subset B(0,Q_1). \end{aligned}$$

Therefore, condition (I) of \(\nu _1^L\)-condensivity is trivially satisfied.

Now, let \(\Omega \subset I\!\!B_{R_1}\) be such that

$$\begin{aligned} \nu _1^L(\Omega )\preccurlyeq \nu _1^L(\Gamma _{1,{R_1}}(\Omega )) . \end{aligned}$$
(30)

Recalling that \(\nu _1^L(\Gamma _{1,{R_1}}(\Omega ))\) is a maximum (see (7)), we consider a countable set \(\{z_n\}_n\subset \Gamma _{1,{R_1}}(\Omega )\) which achieves that maximum. Let now \(\{y_n\}_n\subset \Omega \) be such that, for every \(n\in {\mathbb {N}}\), \(z_n=\Gamma _{1,{R_1}}(y_n)\), i.e.

$$\begin{aligned} z_n(t) = T(t,0)\varphi (0)+\int _{0}^t T(t,s)f(s,y_n(s),y_n[\varphi ]_s)\, \mathrm{{d}}s\, , \ t\in [0,t_1] . \end{aligned}$$
(31)

Of course, (30) and (7) provide

$$\begin{aligned}&\left( \gamma _1\left( \{y_n\}_n\right) , \eta _1\left( \{y_n\}_n\right) \right) \nonumber \\&\quad \preccurlyeq \nu _1^L(\Omega )\preccurlyeq \nu _1^L(\Gamma _{1,{R_1}}(\Omega ))= \left( \gamma _1\left( \{z_n\}_n\right) ,\eta _1\left( \{z_n\}_n\right) \right) . \end{aligned}$$
(32)

First of all, from the above relation, we have the inequality

$$\begin{aligned} \gamma _1\left( \{y_n\}_n\right) \le \gamma _1\left( \{z_n\}_n\right) . \end{aligned}$$
(33)

Let us estimate (cf. (7))

$$\begin{aligned} \gamma _1\left( \{z_n\}_n\right) = \sup _{t\in [0,t_1]}e^{-Lt}\chi \left( \{z_n(t)\}_n\right) . \end{aligned}$$
(34)

Now, fix \(t\in [0,t_1]\); by (f2) and (6), and by arguments analogous to the ones above, the set \({{\mathcal {M}}}^1_t= \left\{ T(t,\cdot )f\left( \cdot ,y_n(\cdot ),y_n[\varphi ]_{(\cdot )}\right) \right\} _n \subset L^1([0,t],E)\) is integrably bounded. Then, by Proposition 2.1, the function \(\chi ({{\mathcal {M}}}^1_t(\cdot ))\) belongs to \(L^1_+([0,t])\) and

$$\begin{aligned} \chi \left( \int _{0}^t {{\mathcal {M}}}^1_t(s)\, \mathrm{{d}}s\right) \le 4\int _{0}^t \chi \left( {{\mathcal {M}}}^1_t(s)\right) \, \mathrm{{d}}s. \end{aligned}$$

So, by (31), (6) and the properties of \(\chi \), we have

$$\begin{aligned} \chi \left( \{z_n(t)\}_n\right)\le & {} \chi \left( \{T(t,0)\varphi (0)\}\right) + 4\int _{0}^t\chi \left( \{T(t,s)f(s,y_n(s),y_n[\varphi ]_s)\}_n\right) \, \mathrm{{d}}s\nonumber \\\le & {} 4D\int _{0}^t\chi \left( \{f(s,y_n(s),y_n[\varphi ]_s)\}_n\right) \, \mathrm{{d}}s . \end{aligned}$$
(35)

Since \(\{y_n\}_n\) is a subset of \(\Omega \subset I\!\!B_{R_1}\), for every \(s\in [0,t]\) the sets \(\{y_n(s)\}_n\) and \(\{y_n[\varphi ]_s\}_n\) are bounded; indeed, for \(s\in [0,t_1]\) we have \(\Vert y_n(s)\Vert \le e^{Nt_1}{R_1}\) and \(\Vert y_n[\varphi ]_s\Vert _{\mathcal {C}_0}\le \Vert \varphi \Vert _{{\mathcal {C}_0}}+\frac{t_1}{\tau }e^{Nt_1}{R_1}\). Hence, we can apply (f3) and, using also the definition of \(\gamma _1\) (cf. (7)), for a.e. \(s\in [0,t]\), we get

$$\begin{aligned} \chi \left( \{f(s,y_n(s),y_n[\varphi ]_s)\}_n\right)\le & {} h(s)\left[ \chi (\{y_n(s)\}_n) + \sup _{-\tau \le \theta \le 0} \chi (\{y_n[\varphi ]_s(\theta )\}_n)\right] \nonumber \\\le & {} h(s)\left[ e^{Ls}\gamma _1(\{y_n\}_n) + \sup _{-\tau \le \theta \le 0}\chi (\{y_n[\varphi ]_s(\theta )\}_n)\right] .\nonumber \\ \end{aligned}$$
(36)

Moreover, by (14), (15) and the definition of \(\gamma _1\), we have

if \(s\le \tau \) then

$$\begin{aligned} \sup _{-\tau \le \theta \le 0}\chi (\{y_n[\varphi ]_s(\theta )\}_n)= & {} \max \left\{ \sup _{-\tau \le \theta \le -s}\chi (\{\varphi (s+\theta )\}), \sup _{-s\le \theta \le 0}\chi (\{y_n(s+\theta )\}_n)\right\} \\= & {} \sup _{0\le w\le s}\chi (\{y_n(w)\}_n); \end{aligned}$$

if \(s>\tau \) (and this holds only if \(\tau <t_1\)) then

$$\begin{aligned} \sup _{-\tau \le \theta \le 0}\chi (\{y_n[\varphi ]_s(\theta )\}_n)= \sup _{-\tau \le \theta \le 0}\chi (\{y_n(s+\theta )\}_n) =\sup _{s-\tau \le w\le s}\chi (\{y_n(w)\}_n); \end{aligned}$$

so, in any case, the following estimate holds

$$\begin{aligned} \sup _{-\tau \le \theta \le 0}\chi (\{y_n[\varphi ]_s(\theta )\}_n) \le e^{Ls}\gamma _1(\{y_n\}_n). \end{aligned}$$

Therefore, by (36), we have

$$\begin{aligned} \chi \left( \{f(s,y_n(s),y_n[\varphi ]_s)\}_n\right)\le & {} 2 h(s)e^{Ls}\gamma _1(\{y_n\}_n) \end{aligned}$$
(37)

and hence (35) provides

$$\begin{aligned} \chi \left( \{z_n(t)\}_n\right)\le & {} 8D\gamma _1(\{y_n\}_n) \int _{0}^t e^{Ls}h(s)\, \mathrm{{d}}s . \end{aligned}$$
(38)

We can conclude that, by (34), (38), (29) and (33), the following chain of inequalities holds:

$$\begin{aligned} \gamma _1\left( \{z_n\}_n\right)\le & {} \sup _{t\in [0,t_1]}e^{-Lt} 8D\gamma _1 \left( \{y_n\}_n\right) \int _{0}^t e^{Ls}h(s)\, \mathrm{{d}}s\nonumber \\\le & {} p_L\gamma _1\left( \{y_n\}_n\right) \le p_L\gamma _1\left( \{z_n\}_n\right) . \end{aligned}$$
(39)

Since \(p_L<1\) (cf. (29)) we deduce that the above relation is true only if

$$\begin{aligned} \gamma _1(\{z_n\}_n)=0. \end{aligned}$$
(40)

Now, we show that (cf. (7))

$$\begin{aligned} \eta _1(\{z_n\}_n) = 0. \end{aligned}$$
(41)

As a consequence of (39) and (40) we also get

$$\begin{aligned} \gamma _1(\{y_n\}_n)=0. \end{aligned}$$
(42)

By (37) and (42), we get

$$\begin{aligned} \chi \left( \{f(s,y_n(s),y_n[\varphi ]_s)\}_n\right) =0. \end{aligned}$$

Moreover, by (26), (27) and the above equality, the set \(\{f(\cdot ,y_n(\cdot ),y_n[\varphi ]_{(\cdot )})\}_n\) is semicompact. Let \(G_1:L^1([0,t_1],E)\rightarrow C([0,t_1],E)\) be the operator given by

$$\begin{aligned} G_1g(t)=\int _0^t T(t,s) g(s)\mathrm{{d}}s, \ t\in [0,t_1], g \in L^1([0,t_1],E), \end{aligned}$$

thanks to [9, Theorem 2] we can apply [13, Theorem 5.1.1], so that the set \(\{G_1f(\cdot ,y_n(\cdot ),y_n[\varphi ]_{(\cdot )})\}_n\) is relatively compact in \(C([0,t_1],E)\). Therefore, it is equicontinuous.

Hence, by \(mod_{C_1}\left( \{G_1f(\cdot ,y_n(\cdot ),y_n[\varphi ]_{(\cdot )})\}_n\right) =0\) and (31), we get \(mod_{C_1}(\{z_n\}_n)=0\), i.e. (41) holds.

Finally, from (40), (41), and (32) we have \(\nu _1^L(\Gamma _{1,{R_1}}(\Omega ))=0_2\). By (30) we can, therefore, conclude that \(\nu _1^L(\Omega )=0_2\).

Step 1.6. We can apply [4, Theorem 2.2] and deduce that \(\Gamma _{1,{R_1}}\) has a fixed point in \(I\!\!B_{R_1}\), which is of course a fixed point for \(\Gamma _1\) too, i.e. there exists \(y^1\in I\!\!B_{R_1}\) such that

$$\begin{aligned} y^1(t)=T(t,0)\varphi (0)+\int _{0}^t T(t,s)f(s,y^1(s),y^1[\varphi ]_s)\, \mathrm{{d}}s\ ,\quad t\in [0,t_1]. \end{aligned}$$
(43)

The function \(y^1[\varphi ]\) satisfies (12) and hence is a mild solution for (11).
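
To illustrate the construction of Step 1 concretely, the following heuristic sketch (in Python, for the scalar toy case \(E={\mathbb {R}}\), \(A(t)=-b(t)\), with illustrative data b, g, \(\varphi \), \(\tau \), \(t_1\) chosen here) discretizes the operator \(\Gamma _1\) of (13) and iterates it. The plain Picard iteration is only a numerical device and is not part of the proof, which relies on a fixed-point theorem for condensing maps.

```python
# Heuristic numerical sketch of Step 1 for a scalar toy case: E = R, A(t) = -b(t), so
# T(t, s) = exp(-∫_s^t b(r) dr).  All data below are illustrative assumptions.
import numpy as np

tau, t1 = 1.0, 2.0
b = lambda t: 1.0 + 0.5 * t                      # removal coefficient
g = lambda t, u, m: 0.3 * np.sin(u) + 0.2 * m    # m plays the role of the memory integral
phi = lambda th: np.cos(np.pi * th)              # initial past on [-tau, 0]

M = 100
ts = np.linspace(0.0, t1, M + 1)
dt = ts[1] - ts[0]

def T_ev(t, s):                                  # T(t, s) = exp(-∫_s^t b)
    r = np.linspace(s, t, 50)
    return np.exp(-np.mean(b(r)) * (t - s))

def memory(y, j):                                # ≈ ∫_{-tau}^0 y[phi](t_j + θ) dθ, gluing as in (8), (14)
    th = np.linspace(-tau, 0.0, 400)
    vals = np.where(ts[j] + th <= 0.0, phi(ts[j] + th), np.interp(ts[j] + th, ts, y))
    return np.mean(vals) * tau

y = np.full(M + 1, phi(0.0))                     # initial guess on [0, t1]
for _ in range(25):                              # y <- Gamma_1(y), cf. (13)
    mem = np.array([memory(y, j) for j in range(M + 1)])
    fvals = np.array([g(ts[j], y[j], mem[j]) for j in range(M + 1)])
    y_new = np.array([T_ev(t, 0.0) * phi(0.0)
                      + dt * sum(T_ev(t, ts[j]) * fvals[j] for j in range(k))
                      for k, t in enumerate(ts)])
    if np.max(np.abs(y_new - y)) < 1e-8:
        break
    y = y_new                                    # y now approximates the mild solution (43) on [0, t1]
```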

Step 2. Consider now the interval \([t_1,t_2]\).

We associate with \(y^1[\varphi ]\) the map \(\varphi ^1\in {\mathcal {C}_0}\) defined by

$$\begin{aligned} \varphi ^1(\theta )={\left\{ \begin{array}{ll}y^1[\varphi ]_{t_1}(\theta ) ,&{} \theta \in [-\tau ,0[ \\ y^1[\varphi ](t_1)+I_1(y^1[\varphi ]_{t_1}), &{}\theta =0\end{array}\right. } \end{aligned}$$
(44)

and the corresponding Cauchy problem with functional delay

$$\begin{aligned} \left\{ \begin{array}{l} y'(t)= A(t)y(t) + f(t,y(t),y_t) \ ,\quad t\in [t_1,t_2]\\ \\ y_{t_1} = \varphi ^1. \end{array}\right. \end{aligned}$$
(45)

A mild solution to (45) is a function \(y:[t_1-\tau ,t_2]\rightarrow E\) such that

$$\begin{aligned}&y(t)=T(t,t_1)\varphi ^1(0) + \int _{t_1}^t T(t,s)f(s,y(s),y_s)\, \mathrm{{d}}s \, ,\ t\in [t_1,t_2], \end{aligned}$$
(46)
$$\begin{aligned}&y_{t_1}=\varphi ^1 . \end{aligned}$$
(47)

Step 2.1. Let us define the integral operator \(\Gamma _2:C([t_1,t_2],E)\rightarrow C([t_1,t_2],E)\) as

$$\begin{aligned} \Gamma _2(y)(t)= & {} T(t,t_1)\varphi ^1(0) + \int _{t_1}^t T(t,s)f(s,y(s),y[y^1[\varphi ]]_s)\, \mathrm{{d}}s \ ,\nonumber \\&\ t\in [t_1,t_2]\ ,\ y\in C([t_1,t_2],E), \end{aligned}$$
(48)

where the function \(\varphi ^1\) is defined in (44) and the map \(y^1[\varphi ]\) is the mild solution of (11) obtained in Step 1.6.

Step 2.2. Fix \(R_2>0\) such that

$$\begin{aligned} R_2\ge \frac{D(\Vert \varphi ^1(0)\Vert +\Vert \alpha \Vert _{L^1}+\Vert \varphi \Vert _{{\mathcal {C}_0}} \Vert \alpha \Vert _{L^1})+R_1}{1-q_N} \end{aligned}$$
(49)

and, considering the Bielecki norm in \(C([t_1,t_2],E)\), i.e. \(\Vert \cdot \Vert _{N_{2}}:C([t_1,t_2],E)\rightarrow {\mathbb {R}}^+_0\),

$$\begin{aligned} \Vert y\Vert _{N_{2}}=\max _{t\in [t_1,t_2]}e^{-Nt}\Vert y(t)\Vert \ ,\quad \text{ for } \text{ all } y\in C([t_1,t_2],E), \end{aligned}$$

let \(I\!\!B_{R_2}\) be the closed ball centered at 0 with radius \({R_2}\) in the space \((C([t_1,t_2],E),\Vert \cdot \Vert _{N_{2}})\). We show that

$$\begin{aligned} \Gamma _2(I\!\!B_{R_2})\subset I\!\!B_{R_2}. \end{aligned}$$
(50)

Indeed, fix \(y\in I\!\!B_{R_2}\); for every \(t\in [t_1,t_2]\), by (48), (f2), and (6), we get

$$\begin{aligned} e^{-Nt}\Vert \Gamma _2(y)(t)\Vert\le & {} D\left( \left\| \varphi ^1(0)\right\| +\Vert \alpha \Vert _{L^1}\right) \nonumber \\&+e^{-Nt}D\int _{t_1}^t \alpha (s)(\Vert y(s)\Vert + \Vert y[y^1[\varphi ]]_s\Vert _{\mathcal {C}_0})\, \mathrm{{d}}s \nonumber \\\le & {} D\left( \left\| \varphi ^1(0)\right\| +\Vert \alpha \Vert _{L^1}\right) +D\Vert y\Vert _{N_2} \int _{t_1}^t\alpha (s)e^{-N(t-s)}\mathrm{{d}}s\nonumber \\&+e^{-Nt}D\int _{t_1}^t \alpha (s) \Vert y[y^1[\varphi ]]_s\Vert _{\mathcal {C}_0}\, \mathrm{{d}}s. \end{aligned}$$
(51)

Let us consider the last term. In the case \(t>\tau >t_1\), we have the estimate

$$\begin{aligned} \int _{t_1}^t \alpha (s) \Vert y[y^1[\varphi ]]_s\Vert _{\mathcal {C}_0}\, \mathrm{{d}}s= & {} \int _{t_1}^t \alpha (s) \left( \frac{1}{\tau }\int _{s-\tau }^s\Vert y[y^1[\varphi ]](w) \Vert \mathrm{{d}}w\right) \, \mathrm{{d}}s\\\le & {} \Vert \varphi \Vert _{\mathcal {C}_0}\Vert \alpha \Vert _{L^1}+ \left( \Vert y^1\Vert _{N_1}+\Vert y\Vert _{N_2}\right) \int _{t_1}^t \alpha (s) e^{Ns}\mathrm{{d}}s. \end{aligned}$$

In the cases \(\tau \le t_1\) or \(t\le \tau \), it is again possible to obtain the same estimate.

Hence, recalling (16), (49), (51), we get

$$\begin{aligned} e^{-Nt}\Vert \Gamma _2(y)(t)\Vert\le & {} D\left( \left\| \varphi ^1(0)\right\| +\Vert \alpha \Vert _{L^1}\right) +D\Vert y\Vert _{N_2}\int _{t_1}^t \alpha (s)e^{-N(t-s)}\mathrm{{d}}s\\&+e^{-Nt}D \left( \Vert \varphi \Vert _{\mathcal {C}_0}\Vert \alpha \Vert _{L^1}+ \Vert y^1\Vert _{N_1}\int _{t_1}^t \alpha (s) e^{Ns}\mathrm{{d}}s+\Vert y\Vert _{N_2}\int _{t_1}^t \alpha (s) e^{Ns}\mathrm{{d}}s\right) \\\le & {} D\left( \left\| \varphi ^1(0)\right\| +\Vert \alpha \Vert _{L^1}+\Vert \varphi \Vert _{\mathcal {C}_0}\Vert \alpha \Vert _{L^1}\right) \\&+2D\Vert y^1\Vert _{N_1}\int _0^t\alpha (s)e^{-N(t-s)}\mathrm{{d}}s+2D\Vert y\Vert _{N_2}\int _0^t\alpha (s) e^{-N(t-s)}\mathrm{{d}}s\\\le & {} D\left( \left\| \varphi ^1(0)\right\| +\Vert \alpha \Vert _{L^1}+\Vert \varphi \Vert _{\mathcal {C}_0}\Vert \alpha \Vert _{L^1}\right) +R_1q_N+R_2q_N\\< & {} D\left( \left\| \varphi ^1(0)\right\| +\Vert \alpha \Vert _{L^1}+\Vert \varphi \Vert _{\mathcal {C}_0}\Vert \alpha \Vert _{L^1}\right) +R_1+R_2q_N\le R_2, \end{aligned}$$

so \(\Vert \Gamma _2(y)\Vert _{N_{2}}\le R_2\). Therefore (50) holds.

Step 2.3. Consider now the map \(\Gamma _{2,{R_2}}: {I\!\!B}_{R_2}\rightarrow {I\!\!B}_{R_2}\).

Obviously \(I\!\!B_{R_2}\) is closed and convex in the space \(\left( C([t_1,t_2],E),\Vert \cdot \Vert _{N_2}\right) \), so that it is a closed and convex subset of \(\left( C([t_1,t_2],E),\Vert \cdot \Vert _{C_2}\right) \), where \(\Vert \cdot \Vert _{C_2}\) is the usual norm in \(C([t_1,t_2],E)\) and is equivalent to \(\Vert \cdot \Vert _{N_2}\).

Step 2.4. We show that \(\Gamma _{2,{R_2}}\) has closed graph.

Let \((y_n)_n\) be a sequence in \(I\!\!B_{R_2}\) such that \(y_n\rightarrow {\bar{y}}\) in \(\left( C([t_1,t_2],E),\Vert \cdot \Vert _{C_2}\right) \) . Moreover, let \(\left( \Gamma _{2,{R_2}}(y_n)\right) _n\) converge to \({\bar{z}}\) in \(\left( C([t_1,t_2],E),\Vert \cdot \Vert _{C_2}\right) \).

Fix \(t\in [t_1,t_2]\) and consider the estimate (cf. (6))

$$\begin{aligned} \Vert \Gamma _{2,{R_2}}(y_n)(t)-\Gamma _{2,{R_2}}({\bar{y}})(t)\Vert\le & {} D \int _{t_1}^t \Vert f(s,y_n(s),y_n[y^1[\varphi ]]_s)\nonumber \\&-f(s,{\bar{y}}(s),{\bar{y}}[y^1[\varphi ]]_s)\Vert \, \mathrm{{d}}s. \end{aligned}$$
(52)

We show that the sequence \(\left( f(\cdot ,y_n(\cdot ),y_n[y^1[\varphi ]]_{(\cdot )})\right) _n\) is integrably bounded on \([t_1,t]\): take \(Q_2\ge Q_1\) (see (25)) such that

$$\begin{aligned} I\!\!B_{R_2}\subset B(0,Q_2), \end{aligned}$$
(53)

where \(B(0,Q_2)\) is the ball in \(\left( C([t_1,t_2],E),\Vert \cdot \Vert _{C_2}\right) \); then, by (f2), (8) and (53), we get

$$\begin{aligned} \Vert f(s,y_n(s),y_n[y^1[\varphi ]]_s)\Vert\le & {} \alpha (s)\left( 1+\Vert y_n\Vert _{C_2}+ \frac{t_1}{\tau }\Vert y^1\Vert _{C_1} + \frac{t_2-t_1}{\tau }\Vert y_n\Vert _{C_2}+\Vert \varphi \Vert _{\mathcal {C}_0}\right) \\\le & {} \alpha (s)\left( 1+Q_2+\frac{t_1}{\tau }Q_1+ \frac{t_2-t_1}{\tau }Q_2 +\Vert \varphi \Vert _{\mathcal {C}_0}\right) \\\le & {} \alpha (s)\left( 1+Q_2+\frac{t_2}{\tau }Q_2+\Vert \varphi \Vert _{\mathcal {C}_0}\right) =:\alpha _2(s) \ ,\ \text{ for } \text{ a.e. } s\in [t_1,t] , \end{aligned}$$

where \(\alpha _2\in L^1_+([t_1,t_2])\).

Then the sequence \(\left( f(\cdot ,y_n(\cdot ),y_n[y^1[\varphi ]]_{(\cdot )})\right) _n\) converges pointwise on \([t_1,t]\). Indeed, fix \(s\in [t_1,t]\); the convergence \(y_n\rightarrow {\bar{y}}\) in \(\left( C([t_1,t_2],E),\Vert \cdot \Vert _{C_2}\right) \) implies \(y_n(s)\rightarrow {\bar{y}}(s)\) in E. Further, we have

$$\begin{aligned} \Vert y_n[y^1[\varphi ]]_s - {\bar{y}}[y^1[\varphi ]]_s\Vert _{{\mathcal {C}_0}}\le & {} \frac{t_2-t_1}{\tau }\Vert y_n - {\bar{y}}\Vert _{C_2} , \end{aligned}$$

so \(y_n[y^1[\varphi ]]_s\rightarrow {\bar{y}}[y^1[\varphi ]]_s\) in \({\mathcal {C}_0}\). Therefore, by the continuity of \(f(s,\cdot ,\cdot )\) at \(({\bar{y}}(s),{\bar{y}}[y^1[\varphi ]]_s)\), we have

$$\begin{aligned} f(s,y_n(s),y_n[y^1[\varphi ]]_s)\rightarrow f(s,{\bar{y}}(s),{\bar{y}}[y^1[\varphi ]]_s) \text{ in } E. \end{aligned}$$

Hence, by the Lebesgue convergence theorem on \([t_1,t]\) applied to (52), we obtain

$$\begin{aligned} \Gamma _{2,{R_2}}(y_n)(t)\rightarrow \Gamma _{2,{R_2}}({\bar{y}})(t). \end{aligned}$$

By the uniqueness of the limit, we get \({\bar{z}}(t)=\Gamma _{2,{R_2}}({\bar{y}})(t).\)

Therefore, \(\Gamma _{2,{R_2}}(y_n)\rightarrow \Gamma _{2,{R_2}}({\bar{y}})\) in \(\left( C([t_1,t_2],E),\Vert \cdot \Vert _{C_2}\right) \). Hence, the graph of \(\Gamma _{2,{R_2}}\) is closed.

Step 2.5. We show that \(\Gamma _{2,{R_2}}\) is \(\nu _2^L\)-condensing, where \(\nu _2^L\) is the MNC on \(C([t_1,t_2],E)\) defined by (7) and associated with the constant L for which (29) holds. Indeed, by (50) and (53), we have \(\Gamma _{2,{R_2}}(I\!\!B_{R_2}) \subset B(0,Q_2)\).

Let \(\Omega \subset I\!\!B_{R_2}\) be such that

$$\begin{aligned} \nu _2^L(\Omega )\preccurlyeq \nu _2^L(\Gamma _{2,{R_2}}(\Omega )) . \end{aligned}$$

As in Step 1.5, to obtain \(\nu _2^L(\Omega )=0_2\) it will be enough to get \(\nu _2^L(\Gamma _{2,{R_2}}(\Omega ))=0_2\).

By (7), let us consider the countable set \(\{z_n\}_n\subset \Gamma _{2,{R_2}}(\Omega )\) such that

$$\begin{aligned} \nu ^L_2(\Gamma _{2,{R_2}}(\Omega ))=\left( \gamma _2(\{z_n\}_n),\eta _2(\{z_n\}_n)\right) \end{aligned}$$

and the set \(\{y_n\}_n\subset \Omega \) such that, for every \(n\in {\mathbb {N}}\), \(z_n=\Gamma _{2,{R_2}}(y_n)\), i.e.

$$\begin{aligned} z_n(t) = T(t,t_1)\varphi ^1(0)+\int _{t_1}^t T(t,s)f(s,y_n(s),y_n[y^1[\varphi ]]_s)\, \mathrm{{d}}s\, , \ t\in [t_1,t_2] . \end{aligned}$$

Clearly,

$$\begin{aligned} \left( \gamma _2\left( \{y_n\}_n\right) , \eta _2\left( \{y_n\}_n\right) \right) \preccurlyeq \nu _2^L(\Omega )\preccurlyeq \left( \gamma _2\left( \{z_n\}_n\right) ,\eta _2\left( \{z_n\}_n\right) \right) . \end{aligned}$$

Hence

$$\begin{aligned} \gamma _2\left( \{y_n\}_n\right) \le \gamma _2\left( \{z_n\}_n\right) . \end{aligned}$$
(54)

We estimate

$$\begin{aligned} \gamma _2\left( \{z_n\}_n\right) = \sup _{t\in [t_1,t_2]}e^{-Lt}\chi \left( \{z_n(t)\}_n\right) . \end{aligned}$$
(55)

Fix \(t\in [t_1,t_2]\) and put \({{\mathcal {M}}}^2_t= \left\{ T(t,\cdot )f\left( \cdot ,y_n(\cdot ),y_n[y^1[\varphi ]]_{(\cdot )}\right) \right\} _n\), which, as above, is integrably bounded; by Proposition 2.1 we have \( \chi \left( \int _{t_1}^t {{\mathcal {M}}}^2_t(s)\, \mathrm{{d}}s\right) \le 4\int _{t_1}^t \chi \left( {{\mathcal {M}}}^2_t(s)\right) \, \mathrm{{d}}s, \) so

$$\begin{aligned} \chi \left( \{z_n(t)\}_n\right)\le & {} \chi \left( \{T(t,t_1)\varphi ^1(0)\}\right) + 4\int _{t_1}^t\chi \left( \{T(t,s)f(s,y_n(s),y_n[y^1[\varphi ]]_s)\}_n\right) \, \mathrm{{d}}s\nonumber \\\le & {} 4D\int _{t_1}^t\chi \left( \{f(s,y_n(s),y_n[y^1[\varphi ]]_s)\}_n\right) \, \mathrm{{d}}s . \end{aligned}$$
(56)

Now, \(\{y_n\}_n\) is a subset of \(\Omega \subset I\!\!B_{R_2}\), so for every \(s\in [t_1,t]\) the sets \(\{y_n(s)\}_n\) and \(\{y_n[y^1[\varphi ]]_s\}_n\) are bounded; in fact, for \(s\in [t_1,t_2]\) we have \(\Vert y_n(s)\Vert \le e^{Nt_2}{R_2}\) and \(\Vert y_n[y^1[\varphi ]]_s\Vert _{\mathcal {C}_0}\le \Vert \varphi \Vert _{\mathcal {C}_0}+\frac{t_1}{\tau }e^{Nt_1}{R_1}+\frac{t_2-t_1}{\tau }e^{Nt_2}{R_2}\). Hence, by (f3) and the definition of \(\gamma _2\) (cf. (7)), for a.e. \(s\in [t_1,t]\), we get

$$\begin{aligned} \chi \left( \{f(s,y_n(s),y_n[y^1[\varphi ]]_s)\}_n\right)\le & {} h(s)\left[ e^{Ls} \gamma _2(\{y_n\}_n) + \sup _{-\tau \le \theta \le 0} \chi (\{y_n[y^1[\varphi ]]_s(\theta )\}_n)\right] \\\le & {} h(s)\left[ e^{Ls}\gamma _2(\{y_n\}_n) + \sup _{t_1\le w\le s} \chi (\{y_n(w)\}_n)\right] \\\le & {} 2h(s)e^{Ls}\gamma _2(\{y_n\}_n) . \end{aligned}$$

Therefore, (56) provides

$$\begin{aligned} \chi \left( \{z_n(t)\}_n\right)\le & {} 8D\gamma _2(\{y_n\}_n) \int _{t_1}^t h(s)e^{Ls}\, \mathrm{{d}}s . \end{aligned}$$
(57)

Hence, by (55), (57), (29) and (54), we obtain

$$\begin{aligned} \gamma _2\left( \{z_n\}_n\right)\le & {} \sup _{t\in [t_1,t_2]}e^{-Lt} 8D\gamma _2 \left( \{y_n\}_n\right) \int _{t_1}^t e^{Ls}h(s)\, \mathrm{{d}}s\\\le & {} p_L\gamma _2\left( \{y_n\}_n\right) \le p_L\gamma _2\left( \{z_n\}_n\right) . \end{aligned}$$

Therefore (recall \(p_L<1\)),

$$\begin{aligned} \gamma _2(\{z_n\}_n)=0. \end{aligned}$$
(58)

Further, by means of the same arguments used in Step 1.5 to prove that \(\eta _1(\{z_n\}_n) = 0\), we can say that

$$\begin{aligned} \eta _2(\{z_n\}_n) = 0. \end{aligned}$$
(59)

Finally, from (58) and (59), we conclude that \(\nu _2^L(\Gamma _{2,{R_2}}(\Omega ))=0_2\).

Step 2.6. By [4, Theorem 2.2], the operator \(\Gamma _{2,{R_2}}\) has a fixed point \(y^2\in I\!\!B_{R_2}\), i.e.

$$\begin{aligned} y^2(t)=T(t,t_1)\varphi ^1(0)+\int _{t_1}^t T(t,s)f(s,y^2(s),y^2[y^1[\varphi ]]_s)\, \mathrm{{d}}s\ ,\quad t\in [t_1,t_2].\nonumber \\ \end{aligned}$$
(60)

Consider now the function \(y^2[y^1[\varphi ]]:[-\tau ,t_2]\rightarrow E\). First of all, \(y^2[y^1[\varphi ]]_{|\, ]t_1,t_2]}=y^2_{|\, ]t_1,t_2]}\), so \(y^2[y^1[\varphi ]]\) satisfies (46); moreover, by (44) we get \(y^2[y^1[\varphi ]]_{t_1}=\varphi ^1\), so (47) is satisfied as well; hence, \(y^2[y^1[\varphi ]]_{|\, [t_1-\tau ,t_2]}\) is a mild solution to (45).

Step 3. Now, we can provide the mild solution to our problem (5).

Step 3.a. First, let us suppose that \(t_2=T\) (i.e. the case \(p=1\) in (5)).

In this setting the function \(y^2[y^1[\varphi ]]\in S([-\tau ,t_2],E)\) is a mild solution to (5). In fact:

  1. (i)

    if \(t\in [0,t_1]\): since by (8) we have \(y^2[y^1[\varphi ]](t) =y^1(t)\), using (43) we obtain

    $$\begin{aligned} y^2[y^1[\varphi ]](t)= & {} T(t,0)\varphi (0)+\int _{0}^t T(t,s)f(s,y^2[y^1[\varphi ]](s),y^2[y^1[\varphi ]]_s)\, \mathrm{{d}}s; \end{aligned}$$

    if \(t\in ]t_1,t_2]\): by (8) we have \(y^2[y^1[\varphi ]](t)=y^2(t)\), so by (60), (44), and (43) we get

    $$\begin{aligned} y^2[y^1[\varphi ]](t)= & {} T(t,t_1)\varphi ^1(0)+\int _{t_1}^t T(t,s)f(s,y^2 [y^1[\varphi ]](s),y^2[y^1[\varphi ]]_s)\, \mathrm{{d}}s\\= & {} T(t,t_1)[y^1[\varphi ](t_1)+I_1(y^1[\varphi ]_{t_1})]\\&+\int _{t_1}^t T(t,s)f(s,y^2[y^1[\varphi ]](s),y^2[y^1[\varphi ]]_s)\, \mathrm{{d}}s; \end{aligned}$$

    now, by applying (44), (8), and (43), we deduce

    $$\begin{aligned} y^2[y^1[\varphi ]](t)= & {} T(t,t_1)\left[ T(t_1,0)\varphi (0)+\int _0^{t_1}T(t_1,s)f(s,y^1 [\varphi ](s),y^1[\varphi ]_s)\, \mathrm{{d}}s\right] \\&+T(t,t_1)I_1(y^1[\varphi ]_{t_1})+\int _{t_1}^t T(t,s)f(s,y^2[y^1[\varphi ]](s), y^2[y^1[\varphi ]]_s)\, \mathrm{{d}}s\\= & {} T(t,0)\varphi (0)+\int _0^{t_1}T(t,s)f(s,y^2[y^1[\varphi ]](s),y^2[y^1[\varphi ]]_s)\, \mathrm{{d}}s\\&+T(t,t_1)I_1(y^2[y^1[\varphi ]]_{t_1})+\int _{t_1}^t T(t,s)f(s,y^2[y^1[\varphi ]](s),y^2[y^1[\varphi ]]_s)\, \mathrm{{d}}s; \end{aligned}$$

    therefore,

    $$\begin{aligned} y^2[y^1[\varphi ]](t)= & {} T(t,0)\varphi (0)+T(t,t_1)I_1(y^2[y^1[\varphi ]]_{t_1})\\&+\int _0^t T(t,s)f(s,y^2[y^1[\varphi ]](s),y^2[y^1[\varphi ]]_s)\, \mathrm{{d}}s; \end{aligned}$$
  2. (ii)

    by (8) we can write \(y^2[y^1[\varphi ]](t_1^+)=y^2(t_1)\), hence recalling (60), (44), and (8) we have

    $$\begin{aligned} y^2(t_1)= & {} \varphi ^1(0)= y^1[\varphi ](t_1)+I_1(y^1[\varphi ]_{t_1})\\= & {} y^2[y^1[\varphi ]](t_1) +I_1(y^2[y^1[\varphi ]]_{t_1}) \end{aligned}$$

    so that

    $$\begin{aligned} y^2[y^1[\varphi ]](t_1^+)=y^2[y^1[\varphi ]](t_1)+I_1(y^2[y^1[\varphi ]]_{t_1}); \end{aligned}$$
  3. (iii)

    it is immediate to see that (cf. (8))

    $$\begin{aligned} y^2[y^1[\varphi ]]_{t_0}=\varphi . \end{aligned}$$

Hence, if \(p=1\) the theorem is proved.

Step 3.b. In the case \(p>1\), iterating the above construction, a mild solution to (5) is given by the function \( y^{p+1}[...y^1[\varphi ]...]\in S([-\tau ,T],E). \) \(\square \)

3 Hereditary Evolutionary Processes Under External Instantaneous Actions

Using the theory developed in Sect. 2, we can provide the existence of hereditary dynamics for two models driven, respectively, by the parametric differential equation (1) and by the parabolic partial differential equation (4) with memory effects, regulated by external instantaneous actions.

3.1 The Population Dynamics Model

We consider the parametric differential equation

$$\begin{aligned} \frac{\partial }{\partial t}u(t,x)= & {} -b(t,x)u(t,x)+g\left( t,u(t,x),\int _{-\tau }^0u(t+\theta ,x)\mathrm{{d}}\theta \right) \, ,\\&\quad t\in [0,T]\, ,\ x\in [0,1], \end{aligned}$$

which represents the dynamics of a population where: u(t, x) is the population density at time t and place x; \(-b(t,x)\) is the removal coefficient given by the death rate and the displacement of the population; g is the population development law involving a memory term expressed by the integral \(\int _{-\tau }^0u(t+\theta ,x)\mathrm{{d}}\theta \).

In a modeling process it is necessary to introduce memory terms whenever the past state of the system influences its future evolution. An example can be found in the study of some pests in which the maturation time of the individual is not negligible. The initial past is known and given by a function \(\psi :[-\tau ,0]\times [0,1]\rightarrow {\mathbb {R}}\) such that

$$\begin{aligned} u(\theta ,x)=\psi (\theta ,x)\, ,\, \theta \in [-\tau ,0]\, ,\, x\in [0,1]. \end{aligned}$$

Here we normalize the spatial environment to the interval [0, 1]. In the model under examination, possible external actions aimed at regulating the population (such as a pesticide or a drug administration) and taking place at pre-established times are formalized. In detail, we set

$$\begin{aligned} u(t_i^+,x)=u(t_i,x) + \mathcal {I}_i\left( \int _{-\tau }^0u(t_i+\theta ,x)\mathrm{{d}}\theta \right) \, ,\, x\in [0,1] \, ,\, i=1,\dots , p, \end{aligned}$$

where: \(0=t_0<t_1<\cdots<t_p<t_{p+1}=T\); for every \(i=1,\dots , p\), \({{\mathcal {I}}}_i:{\mathbb {R}}\rightarrow {\mathbb {R}}\); \(u(t_i^+,\cdot )=\lim _{s\rightarrow t_i^+}u(s,\cdot )\).
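
The following minimal sketch (in Python, with hypothetical data) shows how such a regulation step acts on a discretized density at a single impulse time \(t_i\).

```python
# Minimal sketch (illustrative data) of the regulation step at an impulse time t_i:
# u(t_i^+, x) = u(t_i, x) + I_i( ∫_{-tau}^0 u(t_i + θ, x) dθ ), applied pointwise in x.
import numpy as np

tau, t_i = 1.0, 3.0
x = np.linspace(0.0, 1.0, 101)                       # spatial grid on [0, 1]
thetas = np.linspace(-tau, 0.0, 200)
u_hist = lambda t, x: (1.0 + x) * np.exp(0.1 * t)    # hypothetical density on [t_i - tau, t_i] x [0, 1]
I_1 = lambda m: -0.2 * m                             # a culling-type regulation function I_1 : R -> R

mem = np.array([np.mean(u_hist(t_i + thetas, xi)) * tau for xi in x])   # ≈ ∫_{-tau}^0 u(t_i+θ, x) dθ
u_after = u_hist(t_i, x) + I_1(mem)                  # the regulated state u(t_i^+, ·)
```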

We treat the model as a particular case of the abstract problem (5), taking \(E=L^2([0,1])\) and \(u:[-\tau ,T]\times [0,1]\rightarrow {\mathbb {R}}\) such that \(u(t,\cdot )\in L^2([0,1])\) for every \(t\in [0,T]\). In the following, \({\mathcal {C}_0}^{L^2}\) denotes the space \({\mathcal {C}_0}\) corresponding to the choice \(E=L^2([0,1])\).

We assume that the map \(b:[0,T]\times [0,1]\rightarrow {\mathbb {R}}\) satisfies the conditions

  1. (b.1)

    b is measurable;

  2. (b.2)

    there exists \(s\in L^1_+([0,T])\) such that

    $$\begin{aligned} 0<b(t,x)\le s(t)\, ,\ \text{ for } \text{ every } t\in [0,T], \text{ a.e. } x\in [0,1]; \end{aligned}$$
  3. (b.3)

    for every \(x\in [0,1]\), the function \(b(\cdot ,x):[0,T]\rightarrow {\mathbb {R}}\) is continuous.

On the map \(g:[0,T]\times {\mathbb {R}}\times {\mathbb {R}}\rightarrow {\mathbb {R}}\) we suppose that the following properties hold:

  1. (g1)

    \(g\left( t, v(\cdot ),\int _{-\tau }^0w(\theta )(\cdot )\mathrm{{d}}\theta \right) \in L^2([0,1])\), for every \(t\in [0,T]\), \(v\in L^2([0,1])\), \(w\in {\mathcal {C}_0}^{L^2}\);

  2. (g2)

    there exists \(l\in L^1_+([0,T])\) such that

    $$\begin{aligned}&\left| g\left( t, p_1,\int _{-\tau }^0w_1(\theta )(\cdot )\mathrm{{d}}\theta \right) -g\left( t, p_2,\int _{-\tau }^0w_2(\theta )(\cdot )\mathrm{{d}}\theta \right) \right| \\&\quad \le l(t)\left[ |p_1-p_2|+\Vert w_1-w_2\Vert _{{\mathcal {C}_0}^{L^2}}\right] , \end{aligned}$$

    for a.e. \(t\in [0,T]\) and every \(p_1,p_2\in {\mathbb {R}}\), \(w_1,w_2\in {\mathcal {C}_0}^{L^2}\);

  3. (g3)

    for every \(y\in S([-\tau , T],L^2([0,1]))\), the map \(t\mapsto g\big (t, y(t)(\cdot ),\int _{-\tau }^0y(t+\theta )(\cdot )\mathrm{{d}}\theta \big )\) is measurable;

  4. (g4)

    \(g(\cdot ,0,0)\in L^1([0,T])\);

  5. (g5)

    there exists \(h\in L^1_+([0,T])\) such that

    $$\begin{aligned} \chi _{L^2}\left( g\left( t, \Omega _1(\cdot ),\int _{-\tau }^0\Omega _2(\theta )(\cdot )\mathrm{{d}}\theta \right) \right) \le h(t) \left[ \chi _{L^2}(\Omega _1)+\frac{1}{\tau }\int _{-\tau }^0\chi _{L^2}\left( \Omega _2(\theta )\right) \mathrm{{d}}\theta \right] , \end{aligned}$$

    for a.e. \(t\in [0,T]\) and all bounded sets \(\Omega _1\subset L^2([0,1])\), \(\Omega _2\subset {\mathcal {C}_0}^{L^2}\);

    \(\chi _{L^2}\) is the Hausdorff measure of noncompactness in \(L^2([0,1])\).

On the initial function \(\psi :[-\tau ,0]\times [0,1]\rightarrow {\mathbb {R}}\) we assume that

(\(\psi 1\)):

\(\psi (\theta ,\cdot )\in L^2([0,1])\), for every \(\theta \in [-\tau ,0]\);

(\(\psi 2\)):

for every \(x\in [0,1]\), the map \(\psi (\cdot ,x)\) is piecewise continuous with a finite number of discontinuity points not depending on x.

Now, we define the following functions:

  • \(v:[0,T]\rightarrow L^2([0,1])\),

    $$\begin{aligned} v(t)(x)=u(t,x), \ t\in [0,T],\, x\in [0,1]; \end{aligned}$$
    (61)
  • \(v_t:[-\tau ,0]\rightarrow L^2([0,1])\),

    $$\begin{aligned} v_t(\theta )(x)=u(t+\theta ,x), \ \theta \in [-\tau ,0],\, x\in [0,1],\, t\in [0,T]; \end{aligned}$$
    (62)
  • \(A(t):L^2([0,1])\rightarrow L^2([0,1])\), \(t\in [0,T]\)

    $$\begin{aligned} A(t)z(x)=-b(t,x)z(x) ,\, z\in L^2([0,1]),\, x\in [0,1]; \end{aligned}$$
    (63)
  • \(f:[0,T]\times L^2([0,1])\times {\mathcal {C}_0}^{L^2}\rightarrow L^2([0,1])\)

    $$\begin{aligned} f(t,z,w)(x)= & {} g\left( t,z(x),\int _{-\tau }^0w(\theta )(x)\mathrm{{d}}\theta \right) ,\nonumber \\&t\in [0,T],\, z\in L^2([0,1]),\, w\in {\mathcal {C}_0}^{L^2},\, x\in [0,1];\end{aligned}$$
    (64)
  • \(I_i:{\mathcal {C}_0}^{L^2}\rightarrow L^2([0,1])\), \(i=1,\dots , p\),

    $$\begin{aligned} I_i(w)(x)={{\mathcal {I}}}_i\left( \int _{-\tau }^0 w(\theta )(x) \, \mathrm{{d}}\theta \right) ,\, w\in {\mathcal {C}_0}^{L^2},\, x\in [0,1]; \end{aligned}$$
    (65)
  • \(\varphi :[-\tau ,0]\rightarrow L^2([0,1])\)

    $$\begin{aligned} \varphi (\theta )(x)=\psi (\theta ,x), \ \theta \in [-\tau ,0],\, x\in [0,1]. \end{aligned}$$
    (66)

To state and prove the existence theorem for the model, we establish the next proposition, which combines the results described in [15, Section 3.1] and in [10, Proposition 3.2 and Remark 3.1].

Proposition 3.1

Under assumptions (b.1)–(b.3), the next properties hold:

  1. [A1]

    for every \(t\in [0,T]\), the map \(A(t):L^2([0,1])\rightarrow L^2([0,1])\) defined by (63) is a well-defined bounded linear operator;

  2. [A2]

    the family \(\{A(t)\}_{t\in [0,T]}\) generates the noncompact evolution system \(\{T(t,s)\}_{0\le s\le t\le T}\) of bounded linear operators \(T(t,s):L^2([0,1])\rightarrow L^2([0,1])\), \(0\le s\le t\le T\), defined by

    $$\begin{aligned} {[}T(t,s)v](x)= e^{\int _s^t-b(\sigma ,x)\mathrm{{d}}\sigma }v(x)\, ,\ x\in [0,1], \, v\in L^2([0,1]). \end{aligned}$$
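
As a purely illustrative numerical aside, the action of the evolution operators in [A2] can be approximated by a simple quadrature of the exponent; the coefficient b, the datum v and the grids in the sketch below are arbitrary demo choices of ours, not data from the paper.

import numpy as np

# Illustrative evaluation of [T(t,s)v](x) = exp(-int_s^t b(sigma, x) dsigma) v(x).
def b(sigma, x):
    return 1.0 + 0.5 * x * np.cos(sigma)          # a sample removal coefficient (assumption)

def T_apply(t, s, v_vals, x_grid, n_quad=200):
    """Apply the evolution operator T(t,s) to v_vals sampled on x_grid."""
    sigma = np.linspace(s, t, n_quad)
    vals = b(sigma[:, None], x_grid[None, :])     # b(sigma_i, x_j)
    dsig = (t - s) / (n_quad - 1)
    B = dsig * (vals[0] / 2 + vals[1:-1].sum(axis=0) + vals[-1] / 2)  # int_s^t b(., x)
    return np.exp(-B) * v_vals

x_grid = np.linspace(0.0, 1.0, 101)
v_vals = np.sin(np.pi * x_grid)                   # a sample element of L^2([0,1])
Tv = T_apply(t=1.0, s=0.25, v_vals=v_vals, x_grid=x_grid)

Up to quadrature error, one can check on such a grid the evolution-system identities \(T(t,t)=I\) and \(T(t,s)=T(t,r)T(r,s)\) for \(0\le s\le r\le t\le T\).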

Further, we need the following result on function f.

Proposition 3.2

Under assumptions (g1)–(g5), the function f is well-defined and satisfies properties (f1)–(f3).

Proof

It is easy to see that f is well-defined (see (64) and (g1)).

  1. [f1]

    Fix \(t\in [0,T]\), \(({\bar{z}}, {\bar{w}})\in L^2([0,1])\times {\mathcal {C}_0}^{L^2}\) and consider \((z_n, w_n)\longrightarrow ({\bar{z}}, {\bar{w}})\) in \(L^2([0,1])\times {\mathcal {C}_0}^{L^2}\).

    By (g2), we have

    $$\begin{aligned}&\Vert f(t,z_n, w_n)-f(t,{\bar{z}}, {\bar{w}})\Vert _{L^2}^2\\&\quad =\int _0^1 \left| g\left( t,z_n(x), \int _{-\tau }^0 w_n(\theta )(x)\mathrm{{d}}\theta \right) -g\left( t,{\bar{z}}(x), \int _{-\tau }^0 {\bar{w}}(\theta )(x)\mathrm{{d}}\theta \right) \right| ^2 \mathrm{{d}}x\\&\quad \le \int _0^1 \left[ l(t)\left( |z_n(x)-{\bar{z}}(x)|+\Vert w_n-{\bar{w}}\Vert _{{\mathcal {C}_0}^{L^2}} \right) \right] ^2 \mathrm{{d}}x\\&\quad \le l^2(t)\left( \Vert z_n-{\bar{z}}\Vert _{L^2}^2 + \Vert w_n-{\bar{w}}\Vert _{{\mathcal {C}_0}^{L^2}}^2 + 2\Vert w_n-{\bar{w}}\Vert _{{\mathcal {C}_0}^{L^2}}\int _0^1 |z_n(x)-{\bar{z}}(x) |\mathrm{{d}}x\right) \\&\quad \le l^2(t)\left( \Vert z_n-{\bar{z}}\Vert _{L^2}^2 + \Vert w_n-{\bar{w}}\Vert _{{\mathcal {C}_0}^{L^2}}^2 + 2\Vert w_n-{\bar{w}}\Vert _{{\mathcal {C}_0}^{L^2}} \Vert z_n-{\bar{z}}\Vert _{L^2}\right) \\&\quad = l^2(t) \left( \Vert z_n-{\bar{z}}\Vert _{L^2} + \Vert w_n-{\bar{w}}\Vert _{{\mathcal {C}_0}^{L^2}}\right) ^2\longrightarrow _{n\rightarrow +\infty }0 \end{aligned}$$

    so that \(f(t,\cdot ,\cdot )\) is continuous in \(({\bar{z}},{\bar{w}})\). By the arbitrariness of \(({\bar{z}},{\bar{w}})\) we get the continuity of \(f(t,\cdot ,\cdot )\) in \(L^2([0,1])\times {\mathcal {C}_0}^{L^2}\).

    Further, by (g3), the map \(t\mapsto f(t, y(t), y_{t})\) is measurable for every \(y\in S([-\tau , T],L^2([0,1]))\).

    Hence (f1) holds.

  2. [f2]

    Let us fix \((t,z,w)\in [0,T]\times L^2([0,1])\times {\mathcal {C}_0}^{L^2}\). By (g2) and (g4), we get

    $$\begin{aligned} \alpha (\cdot ):=|g(\cdot ,0,0)|+l(\cdot ) \in L^1_+([0,T]) \end{aligned}$$

    and

    $$\begin{aligned} \Vert f(t,z,w)\Vert _{L^2}\le & {} \left[ \int _0^1 \left( |g(t,0,0)|+l(t)|z(x)|+l(t)\Vert w\Vert _{{\mathcal {C}_0}^{L^2}}\right) ^2\mathrm{{d}}x\right] ^{1/2}\\\le & {} \alpha (t)\left[ \int _0^1 \left( 1+|z(x)|+\Vert w\Vert _{{\mathcal {C}_0}^{L^2}}\right) ^2\mathrm{{d}}x\right] ^{1/2}\\\le & {} \alpha (t) \left( 1+\Vert z\Vert _{L^2}+\Vert w\Vert _{{\mathcal {C}_0}^{L^2}}\right) ; \end{aligned}$$

    therefore, (f2) holds too.

  3. [f3]

    Let \(t\in [0,T]\) be such that (g5) holds (which is the case for a.e. \(t\in [0,T]\)). For all bounded sets \(\Omega _1\subset L^2([0,1])\) and \(\Omega _2\subset {\mathcal {C}_0}^{L^2}\), we have (see (g5))

    $$\begin{aligned} \chi _{L^2}(f(t,\Omega _1,\Omega _2))= & {} \chi _{L^2}\left( g\left( t, \Omega _1(\cdot ),\int _{-\tau }^0\Omega _2(\theta )(\cdot )\mathrm{{d}}\theta \right) \right) \\\le & {} h(t)\left[ \chi _{L^2}(\Omega _1)+\frac{1}{\tau }\int _{-\tau }^0 \chi _{L^2}(\Omega _2(\theta ))\mathrm{{d}}\theta \right] , \end{aligned}$$

    so property (f3) is satisfied.

\(\square \)

We now prove the existence theorem for the following population dynamics model:

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle {\frac{\partial }{\partial t}u(t,x)= -b(t,x)u(t,x)+g\left( t,u(t,x), \int _{-\tau }^0u(t+\theta ,x)\mathrm{{d}}\theta \right) }\, ,\ t\in [0,T]\, ,\ x\in [0,1],\\ t\ne t_i\, ,\ i=1,\dots , p,\\ \\ u(t_i^+,x)=u(t_i,x) + \mathcal {I}_i\left( \int _{-\tau }^0u(t_i+\theta ,x)\mathrm{{d}}\theta \right) \, ,\, x\in [0,1] \, ,\, i=1,\dots , p,\\ \\ u(\theta ,x)=\psi (\theta ,x)\, ,\, \theta \in [-\tau ,0]\, ,\, x\in [0,1]. \end{array}\right. \nonumber \\ \end{aligned}$$
(67)

Theorem 3.1

Under assumptions (b1)–(b3), (g1)–(g5), (\(\psi \)1)–(\(\psi \)2), the model (67) has at least one hereditary impulsive mild solution, i.e. a function \(u:[-\tau , T]\times [0,1]\rightarrow {\mathbb {R}}\) such that \(u(t,\cdot )\in L^2([0,1])\) for every \(t\in [0,T]\) and

$$\begin{aligned} \begin{array}{lll} u(t,x)&{}=&{} e^{\int _0^t-b(\sigma ,x)\mathrm{{d}}\sigma }\psi (0,x) +\sum _{0<t_i<t} e^{\int _{t_i}^t-b(\sigma ,x)\mathrm{{d}}\sigma } {{\mathcal {I}}}_i\left( \int _{-\tau }^0u(t_i+\theta ,x)\mathrm{{d}}\theta \right) \\ &{}&{}+ \int _0^t e^{\int _s^t-b(\sigma ,x)\mathrm{{d}}\sigma }g\left( s,u(s,x),\int _{-\tau }^0u(s +\theta ,x)\mathrm{{d}}\theta \right) \, \mathrm{{d}}s,\ t\in [0,T],\ x\in [0,1];\\ \\ u(t_i^+,x)&{}=&{}u(t_i,x) + \mathcal {I}_i\left( \int _{-\tau }^0u(t_i+\theta ,x)\mathrm{{d}} \theta \right) \, ,\, x\in [0,1] \, ,\, i=1,\dots , p;\\ \\ u(\theta ,x)&{}=&{}\psi (\theta ,x)\, ,\, \theta \in [-\tau ,0]\, ,\, x\in [0,1]. \end{array} \end{aligned}$$

Proof

Bearing in mind that (cf. (61) and (62))

$$\begin{aligned} g\left( t,u(t,x),\int _{-\tau }^0u(t+\theta ,x)\mathrm{{d}}\theta \right) = g\left( t,v(t)(x),\int _{-\tau }^0v_t(\theta )(x)\mathrm{{d}}\theta \right) , \end{aligned}$$

recalling the setting (61)–(66) and that \(u(\theta ,x)=v(\theta )(x)=v(0+\theta )(x)=v_{t_0}(\theta )(x)\) for \(\theta \in [-\tau ,0]\) (here \(t_0=0\)), the process (67) can be rewritten as follows:

$$\begin{aligned} \left\{ \begin{array}{l} v'(t)(x)= A(t)v(t)(x)+f\left( t,v(t),v_t\right) (x)\, ,\ t\in [0,T],\, t \ne t_i\, ,\ i=1,\dots , p,\\ \\ v(t_i^+)(x)=v(t_i)(x) + I_i\left( v_{t_i}\right) (x)\, ,\, i=1,\dots , p,\\ \\ v_{t_0}(\cdot )(x)=\varphi (\cdot )(x), \end{array}\right. \end{aligned}$$

for every \(x\in [0,1]\). Clearly, this leads to a problem of type (5) in the space \(E= L^2([0,1])\).

Notice that, by (\(\psi \)2), \(\psi (\cdot ,x) \in {\mathcal {C}_0}^{{\mathbb {R}}}\) for every \(x\in [0,1]\); together with (\(\psi \)1), this yields \(\varphi \in {\mathcal {C}_0}^{L^2}\). Moreover, by Proposition 3.1 it immediately follows that the family \(\{A(t)\}_{t\in [0,T]}\) satisfies property (A); further, by Proposition 3.2, the function f satisfies (f1)–(f3), so that all the assumptions of Theorem 2.1 are satisfied.

We can, therefore, conclude that there exists a delayed impulsive mild solution of the abstract problem, i.e. a function \({\bar{v}}\in S([-\tau ,T],L^2([0,1]))\) such that:

$$\begin{aligned} {\bar{v}}(t)= & {} e^{\int _0^t-b(\sigma ,\cdot )\mathrm{{d}}\sigma }\varphi (0) +\sum _{0<t_i<t} e^{\int _{t_i}^t-b(\sigma ,\cdot )\mathrm{{d}}\sigma } I_i({\bar{v}}_{t_i})\\&+ \int _0^t e^{\int _s^t-b(\sigma ,\cdot )\mathrm{{d}}\sigma }f(s,{\bar{v}}(s),{\bar{v}}_s)\, \mathrm{{d}}s \ ,\ t\in [0,T]; \end{aligned}$$

\({\bar{v}}(t_i^+)={\bar{v}}(t_i) + I_i({\bar{v}}_{t_i}) \, ,\ i=1,\dots , p\);

\({\bar{v}}_{t_0}=\varphi \).

Hence, the function

$$\begin{aligned} {\bar{u}}(t,x):={\bar{v}}(t)(x),\ t\in [0,T],\ x\in [0,1] \end{aligned}$$

is a hereditary impulsive mild solution for (67). \(\square \)
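
To complement the existence result with a computational illustration, the following sketch simulates problem (67) by an explicit Euler scheme combined with the "extension-with-memory" idea: the solution is marched forward in time, the memory integral is evaluated on the stored history, and the impulse jumps are applied at the prescribed instants. All concrete data in the snippet (b, g, \(\psi \), the maps \({{\mathcal {I}}}_i\), the impulse times and the grid parameters) are illustrative assumptions of ours and are not taken from the paper.

import numpy as np

# Purely illustrative explicit-Euler, "extension with memory" simulation of (67).
tau, T, dt, nx = 1.0, 2.0, 0.001, 101
m, N = int(round(tau / dt)), int(round(T / dt))
x = np.linspace(0.0, 1.0, nx)

def b(t, xx):        return 1.0 + 0.5 * xx * np.cos(t)        # demo removal coefficient
def g(t, u, mem):    return 0.2 * np.sin(u) + 0.1 * mem / tau # demo development law
def psi(theta, xx):  return np.exp(theta) * np.sin(np.pi * xx)# demo initial history
impulses = {int(round(0.8 / dt)): lambda q: -0.3 * q,         # demo I_1 acting at t_1 = 0.8
            int(round(1.5 / dt)): lambda q: -0.3 * q}         # demo I_2 acting at t_2 = 1.5

U = np.zeros((m + N + 1, nx))                                 # U[m + k] ~ u(k*dt, .)
for k in range(-m, 1):                                        # initial history on [-tau, 0]
    U[m + k] = psi(k * dt, x)

for k in range(N):                                            # march from t_k to t_{k+1}
    hist = U[k : m + k + 1]                                   # u(t_k + theta, .), theta in [-tau, 0]
    mem = dt * (hist[0] / 2 + hist[1:-1].sum(axis=0) + hist[-1] / 2)
    state = U[m + k]                                          # left value u(t_k, .)
    if k in impulses:                                         # jump: u(t_k^+) = u(t_k) + I_i(mem)
        state = state + impulses[k](mem)
    U[m + k + 1] = state + dt * (-b(k * dt, x) * state + g(k * dt, state, mem))

At an impulse instant the scheme keeps the left value \(u(t_i,\cdot )\) in the grid and uses the jumped state \(u(t_i^+,\cdot )\) only to advance to the next step, mirroring the piecewise-continuous framework of the mild solution above.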

3.2 The Nonlinear Reaction–Diffusion Model

Here we consider the nonlinear impulsive reaction–diffusion model with memory

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle {\frac{\partial }{\partial t}u(t,x)= \frac{\partial ^2}{\partial x^2}u(t,x)+g\left( t,u(t,x),\int _{-\tau }^0u(t+\theta ,x)\mathrm{{d}}\theta \right) }\, , \ t\in [0,T]\, ,\ x\in [0,1],\\ t\ne t_i\, ,\ i=1,\dots , p,\\ \\ u(t,0)=u(t,1)=0\, ,\, t\in [0,T],\\ \\ u(t_i^+,x)=u(t_i,x) + \mathcal {I}_i\left( \int _{-\tau }^0u(t_i+\theta ,x)\mathrm{{d}}\theta \right) \, ,\, x\in [0,1] \, ,\, i=1,\dots , p,\\ \\ u(\theta ,x)=\psi (\theta ,x)\, ,\, \theta \in [-\tau ,0]\, ,\, x\in [0,1]. \end{array}\right. \end{aligned}$$

Arguing as in Sect. 3.1, the parabolic partial differential equation with memory driving the system can be rewritten as the semilinear differential equation with functional delay

$$\begin{aligned} v'(t)= A(t)v(t)+f\left( t,v(t),v_t\right) \, ,\ t\in [0,T]. \end{aligned}$$

Indeed, we use (61), (64), and \(A(t)=A\) for every \(t\in [0,T]\), where \(A:D(A)\rightarrow L^2([0,1])\) is the linear operator

$$\begin{aligned} Az=z'',\, z\in D(A), \end{aligned}$$

and D(A) is the dense subset of \(L^2([0,1])\) given by

$$\begin{aligned} D(A)= & {} \{z\in L^2([0,1]) : z, z' \text{ absolutely continuous}, z''\in L^2([0,1]),\\&z(0)=z(1)=0\}. \end{aligned}$$

Note that A generates a compact analytic semigroup \(\{U(t)\}_{t\in [0,T]}\) on \(L^2([0,1])\) (cf. e.g. [5, 17]). Then, taking the family of linear operators \(\{A(t)=A\}_{t\in [0,T]}\), the corresponding evolution system \(\{T(t,s)=U(t-s)\}_{0\le s\le t\le T}\) is well defined (see, e.g. [7, Remark 1]), so that property (A) is satisfied.
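
For the reader's convenience we recall the classical explicit form of this semigroup for the Dirichlet Laplacian on \([0,1]\): writing \(e_n(x)=\sqrt{2}\sin (n\pi x)\), \(n\ge 1\), for the \(L^2\)-normalized eigenfunctions of A, with corresponding eigenvalues \(-n^2\pi ^2\), one has

$$\begin{aligned} U(t)z=\sum _{n=1}^{\infty } e^{-n^2\pi ^2 t}\langle z,e_n\rangle _{L^2}\, e_n\, ,\quad z\in L^2([0,1]), \end{aligned}$$

so that \(T(t,s)=U(t-s)\) damps the n-th sine mode of the datum by the factor \(e^{-n^2\pi ^2 (t-s)}\); the rapid decay of these factors is what lies behind the compactness of \(U(t)\) for \(t>0\).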

Now, assuming that \(g:[0,T]\times {\mathbb {R}}\times {\mathbb {R}}\rightarrow {\mathbb {R}}\) satisfies properties (g1)–(g5) and that \(\psi :[-\tau ,0]\times [0,1]\rightarrow {\mathbb {R}}\) satisfies (\(\psi \)1)–(\(\psi \)2) (cf. Sect. 3.1), arguing as in the proof of Theorem 3.1 we conclude that the model admits at least one hereditary impulsive mild solution.