1 Introduction

The investigation of distributed parameter systems (DPS) drives many useful concepts in science and engineering, including the well-known notions of stability, controllability and observability [1,2,3,4,5,6]. These concepts allow one to better understand the investigated system, enhancing the ability to control it. All these notions are important and have their particularities, but here we focus only on the concept of observability, which was first introduced by Kalman, for finite-dimensional systems, in 1960 [7]. The goal is to recover an unknown initial state using the output parameters or the measurements of the considered system. After the pioneering work of Kalman, the concept of observability was also developed to cover infinite-dimensional systems [8, 9].

In the 1990s, El Jai and others introduced a more general notion called regional observability [10,11,12]. Its main objective is to recover the unknown initial state of the studied distributed parameter system, but only in a part of the spatial domain. The key advantage of regional observability becomes clear when the considered system is not (globally) observable in the whole spatial domain. In such cases, the system may still be regionally observable in some well-chosen sub-region. Thus, we can at least partially recover the initial state, which can be useful in many areas of science [13].

Fig. 1 Profile of an active plate

After the regional observability concept was introduced, Zerrik, Badraoui and El Jai proposed the notion of regional boundary observability, which has the same goal as regional observability but where the desired sub-region is a part of the boundary [14, 15]. Although important, all such notions and results were not enough to obtain all possible characterizations of DPS. For this reason, in the twenty-first century the notion of regional gradient observability was introduced and investigated, with the goal of finding and reconstructing the initial gradient vector in a suitable region [16, 17]. We adopt here this notion of gradient observability, which has been the subject of a recent increase of interest [18,19,20]. This is due to the fact that the concept of gradient observability finds applications in real-life situations. For instance, consider the problem of determining the laminar flux on the boundary of a heated vertical plate in steady state: see Fig. 1 for the profile of an active plate. In this case, the notion of gradient observability is associated with the problem of determining the thermal transfer generated by the heated plate. For more on the subject, and for applications of the different observability concepts to various kinds of systems, we refer the reader to [21,22,23,24].

Fractional calculus is one of the most rapidly spreading domains in mathematics nowadays, especially through the use of fractional-order systems to model real-world phenomena [25,26,27,28,29,30]. It is well known that fractional operators, that is, non-integer order differentiation and integration operators, have many outstanding properties that make them fruitful and suitable for describing the characteristics of certain real-world problems. Non-local fractional operators do not only use local information to compute the (fractional) derivative of a function: they also take into account past states, as is the case with left-sided fractional operators, or future states, as happens with right-sided operators. We also mention that fractional operators have hereditary properties [31, 32]. Moreover, the diversity of fractional operators can also be seen as an advantage of fractional calculus, because having many different types of fractional integrals and derivatives leads to more choices in the modeling of real-world phenomena. This explains why fractional calculus has been used with success and benefit in various different domains. For more details, we refer the interested reader to the books [33, 34].

In the subject of regional observability, several works already deal with fractional systems [35,36,37,38,39,40,41]. However, investigations of regional gradient observability for time-fractional diffusion processes are scarce. We are only aware of [18], where Ge, Chen and Kou propose a reconstruction procedure for Riemann–Liouville fractional time diffusion processes. The main goal of the present paper is the investigation of regional gradient observability for time-fractional diffusion systems described by the Caputo derivative, where the purpose is to find and reconstruct the gradient of the initial state of the considered system in a desired subregion of the evolution domain. This is in contrast with [18], where non-integer order systems are written with the Riemann–Liouville derivative and where it is mentioned that their approach fails to cover systems described by the Caputo derivative (cf. Lemma 7 of [18]). Here, we prove an alternative lemma that fixes the drawbacks mentioned in [18]. Our contribution consists of giving several characterizations of regional exact and approximate gradient observability for the considered linear system. We present a method that allows the regional reconstruction of the initial gradient vector in the desired subregion. Moreover, we provide some simple numerical simulations that back up our theoretical results.

The paper is organized as follows. In Sect. 2, the necessary background on regional gradient observability, as well as some of its useful properties and characterizations, is given. Section 3 illustrates, through a counterexample, that a system that is not gradient observable can still be regionally gradient observable in some suitable region of the evolution space. A full characterization of the notion at hand, via gradient strategic sensors, is then given in Sect. 4, while in Sect. 5 we develop the steps to be followed in order to achieve the regional flux reconstruction. In Sects. 6 and 7, we present, respectively, two applications of the obtained results and two successful numerical simulations. Finally, we end with Sect. 8, containing conclusions and some future directions of research.

2 Problem statement and regional gradient observability

We now present a general formulation of the considered problem of initial gradient reconstruction. We also lay out all the needed preliminary results and ingredients, so that the reader can follow the manuscript smoothly.

Let \(\Omega \) be a connected, open, and bounded set in \({\mathbb {R}}^n\), \(n\ge 1\), possessing a Lipschitz-continuous boundary \(\partial \Omega \). For any final time \(T\in {\mathbb {R}}^*_+\), we designate \(Q_T := \Omega \times [0,T]\) and \(\Sigma _T := \partial \Omega \times [0,T]\). Let us take the dynamics of the considered system to be the following operator, defined in the state space \(E=L^{^2}(\Omega )\) as

$$\begin{aligned} {\mathcal {D}}(A)&= H^2(\Omega )\cap H^1_0(\Omega ) \quad \text{ and } \nonumber \\ Ay(x)&= \displaystyle \sum _{k,l=1}^{n} \partial _{x_k}\left( a_{k,l}(x)\partial _{x_l}y(x)\right) , \quad \forall x\in \Omega , \ \forall y\in E, \end{aligned}$$
(1)

where \(\partial _{x}\) stands for \(\dfrac{\partial }{\partial x}\) and the coefficients \(a_{k,l} \in C^1({\overline{\Omega }})\) satisfy the following hypotheses:

\(({\mathcal {H}}_1)\):

\(a_{k,l}(\cdot ) = a_{l,k}(\cdot )\);

\(({\mathcal {H}}_2)\):

\(\exists \mu >0\) such that:

$$\begin{aligned} \displaystyle \sum _{k,l=1}^{n} a_{k,l}(x)\varsigma _k\varsigma _l \ge \mu \Vert \varsigma \Vert ^2, \ x\in \Omega , \end{aligned}$$

for \(\varsigma =(\varsigma _1,\ldots ,\varsigma _n)\in {\mathbb {R}}^n\) and where \(\Vert \varsigma \Vert =\sqrt{\varsigma _1^2 + \cdots + \varsigma _n^2}\).

Hypotheses \(({\mathcal {H}}_1)\) and \(({\mathcal {H}}_2)\) mean, respectively, that A is symmetric and \(-A\) is uniformly elliptic. In this case, it is well known that \(-A\) has a set of eigenvalues \(\left( \lambda _i\right) _{i\ge 1}\), such that (see [42]):

$$\begin{aligned} 0<\lambda _1<\lambda _2< \cdots< \lambda _i <\cdots \rightarrow +\infty . \end{aligned}$$

Each eigenvalue \(\lambda _i\) corresponds to \(r_i\) eigenfunctions \(\left\{ \varphi _{i,j}\right\} _{1\le j \le r_i}\), where \(r_i\in {\mathbb {N}}^*\) is the multiplicity of \(\lambda _i\), such that \(-A\varphi _{i,j} = \lambda _i\varphi _{i,j}\) and \(\varphi _{i,j} \in H^2(\Omega )\cap H^1_0(\Omega )\), \(\forall i\in {\mathbb {N}}^*\) and \(1\le j\le r_i\). Furthermore, the set \(\left\{ \varphi _{i,j}\right\} _{\begin{array}{c} i\ge 1 \\ 1\le j\le r_i \end{array}}\) constitutes an orthonormal basis of E.

The operator A is the infinitesimal generator of a \(C_0\)-semigroup \(\left\{ {\mathcal {S}}(t)\right\} _{t\ge 0}\) on E, given by:

$$\begin{aligned} {\mathcal {S}}(t)y(x) = \displaystyle \sum _{i=1}^{+\infty }\exp (-\lambda _it) \sum _{j=1}^{r_i}\langle y,\varphi _{i,j}\rangle \varphi _{i,j}(x), \quad \forall y\in E. \end{aligned}$$
(2)
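For illustration only (this example is ours, not part of the paper), the spectral series (2) can be truncated and evaluated numerically. The sketch below uses the 1D Dirichlet Laplacian on \(\Omega =(0,1)\), for which \(\lambda _i=(i\pi )^2\), \(\varphi _i(x)=\sqrt{2}\sin (i\pi x)\), and every multiplicity \(r_i=1\); the grid size and mode count are arbitrary choices:

```python
import numpy as np

def semigroup_heat(y0, x, t, n_modes=50):
    """Evaluate S(t)y0 via the truncated eigenexpansion (2) for the
    1D Dirichlet Laplacian on (0,1): lambda_i = (i*pi)^2,
    phi_i(x) = sqrt(2)*sin(i*pi*x), all multiplicities r_i = 1."""
    h = x[1] - x[0]
    out = np.zeros_like(x)
    for i in range(1, n_modes + 1):
        phi = np.sqrt(2) * np.sin(i * np.pi * x)
        # <y0, phi_i>: trapezoidal rule (endpoint values of phi_i vanish)
        coef = h * np.sum(y0 * phi)
        out += np.exp(-(i * np.pi) ** 2 * t) * coef * phi
    return out

x = np.linspace(0.0, 1.0, 401)
u0 = np.sin(np.pi * x)              # a single eigenmode as initial state
u = semigroup_heat(u0, x, t=0.1)
# exact evolution of this mode: exp(-pi^2 t) * sin(pi x)
err = np.max(np.abs(u - np.exp(-np.pi ** 2 * 0.1) * u0))
```

Since the initial state is a single eigenmode, the truncated series reproduces the exact decay factor \(\exp (-\pi ^2 t)\) up to quadrature error.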

Here, we study fractional systems possessing the form:

$$\begin{aligned} \left\{ \begin{array}{llll} ^{^C}{\mathcal {D}}_{0^+}^{^\alpha }u(x,t) = Au(x,t), &{} (x,t)\in Q_T, \\ u(\xi ,t) = 0, &{} (\xi ,t)\in \Sigma _T, \\ u(x,0) = u_0(x)\in H^1_0(\Omega ), &{} x\in \Omega , \end{array}\right. \end{aligned}$$
(3)

where \(^{^C}{\mathcal {D}}_{0^+}^{^\alpha }u(\cdot ,t) := \displaystyle \int _{0}^{t}\dfrac{(t-e)^{-\alpha }}{\Gamma (1-\alpha )} \partial _e u(\cdot ,e)de\) is the fractional derivative of u, in Caputo's sense, and \(\Gamma (\alpha ) = \displaystyle \int _{0}^{+\infty }t^{\alpha -1}e^{-t}dt\) is the Euler gamma function. The initial state \(u_0\in H^1_0(\Omega )\) and its gradient are both assumed to be unknown. System (3) has one and only one mild solution in \(C(0,T;E)\cap L^2(0,T;{\mathcal {D}}(A))\), given by:

$$\begin{aligned} u(\cdot ,t) = {\mathcal {S}}_\alpha (t)u_0(\cdot ) {:}{=} \displaystyle \sum _{i=1}^{\infty } E_\alpha (-\lambda _it^\alpha ) \sum _{j=1}^{r_i}\langle u_0, \varphi _{i,j}\rangle \varphi _{i,j}(\cdot ), \quad 0\le t\le T, \end{aligned}$$
(4)

where \(E_\alpha (z) = \displaystyle \sum _{k=0}^{\infty }\dfrac{z^k}{\Gamma (\alpha k+1)}\) stands for the one parameter Mittag–Leffler function [43].
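Numerically, \(E_\alpha \) can be evaluated by truncating this power series, which is adequate for moderate arguments (large arguments require dedicated algorithms). The following sketch, which is ours and not part of the paper, checks the truncation against two closed forms, \(E_1(z)=e^z\) and \(E_{1/2}(-z)=e^{z^2}{\text {erfc}}(z)\) for \(z\ge 0\):

```python
from math import gamma, exp, erfc

def mittag_leffler(z, alpha, n_terms=100):
    """One-parameter Mittag-Leffler function by truncated power series:
    E_alpha(z) = sum_{k>=0} z^k / Gamma(alpha*k + 1).
    Adequate for moderate |z|; large |z| needs dedicated algorithms."""
    return sum(z ** k / gamma(alpha * k + 1) for k in range(n_terms))

# sanity checks against closed forms:
e1 = mittag_leffler(1.0, 1.0)        # E_1(1) = e
e_half = mittag_leffler(-1.0, 0.5)   # E_{1/2}(-1) = e * erfc(1)
```

The coefficient \(E_\alpha (-\lambda _i t^\alpha )\) in (4) can then be computed mode by mode with such a routine.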

Without any loss of generality, we write \(u(t) := u(\cdot ,t)\). The output function, which provides measurements and information on the considered system, is:

$$\begin{aligned} z(t) = Cu(t), \quad 0\le t\le T. \end{aligned}$$
(5)

The operator \(C: E \rightarrow {\mathcal {O}}\) satisfies the following admissibility condition for \({\mathcal {S}}_\alpha \) [44]:

$$\begin{aligned} \exists M>0, \quad \displaystyle \int _{0}^{T}\Vert C {\mathcal {S}}_\alpha (t)v\Vert _{{\mathcal {O}}}^{^2}dt \le M\Vert v\Vert _{E}^{^2}, \quad \forall v\in {\mathcal {D}}(A). \end{aligned}$$
(6)

The Hilbert space \({\mathcal {O}}\) is called the observation space.

The operator \({\mathcal {S}}_\alpha (t)\) defined in (4) is a linear bounded operator, see [44], which describes the evolution of the considered time-fractional system as a function of its initial state. Moreover, if the operator C is bounded, then the admissibility condition (6) is automatically satisfied; in other words, any bounded observation operator is admissible.

Let \(\omega \subset \Omega \) be the desired sub-region. We introduce the following restriction operators:

$$\begin{aligned} \begin{array}{lllll} \chi _{_\omega } &:& E&\longrightarrow & L^2(\omega )\\ & & u & \longmapsto & u_{|_\omega } \end{array} \ \text{ and } \ \begin{array}{lllll} \chi _{_\omega }^{n} &:& E^{^n}&\longrightarrow & (L^2(\omega ))^{^n}\\ & & u& \longmapsto & u_{|_\omega } = \left( \chi _{_\omega }u_1, \chi _{_\omega }u_2,\ldots ,\chi _{_\omega }u_n\right) , \end{array} \end{aligned}$$

and their adjoint,

$$\begin{aligned} \begin{array}{lllll} \chi _{_\omega }^* &:& L^2(\omega )&\longrightarrow & E \\ && v & \longmapsto & \left\{ \begin{array}{ll} v & \text {in} \ \omega ,\\ 0& \text {in} \ \Omega \setminus \omega , \end{array}\right. \end{array} \ \text { and } \ \begin{array}{lllll} (\chi _{_\omega }^{n})^* &:& (L^2(\omega ))^{^n}&\longrightarrow & E^{^n}\\ & & v& \longmapsto & \left( \chi _{_\omega }^*v_1, \chi _{_\omega }^*v_2,\ldots ,\chi _{_\omega }^*v_n\right) . \end{array} \end{aligned}$$
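On a uniform grid, \(\chi _{_\omega }\) amounts to masking and \(\chi _{_\omega }^*\) to zero-padding. The following self-contained sketch (ours, with an arbitrary grid and subregion) verifies the discrete adjoint identity \(\langle \chi _{_\omega }u,v\rangle _{L^2(\omega )} = \langle u,\chi _{_\omega }^*v\rangle _{E}\):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 201)
h = x[1] - x[0]
omega = (x > 0.25) & (x < 0.75)       # an arbitrary subregion of Omega = (0,1)

def restrict(u):                      # chi_omega: keep the values on omega
    return u[omega]

def extend(v):                        # chi_omega^*: extend by zero to Omega
    out = np.zeros_like(x)
    out[omega] = v
    return out

rng = np.random.default_rng(0)
u = rng.standard_normal(x.size)
v = rng.standard_normal(int(omega.sum()))
lhs = h * np.dot(restrict(u), v)      # <chi_omega u, v>_{L^2(omega)}
rhs = h * np.dot(u, extend(v))        # <u, chi_omega^* v>_E
```

The two discrete inner products agree exactly, mirroring the adjoint pair above.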

Substituting (4) into (5) gives

$$\begin{aligned} z(t) = C{\mathcal {S}}_\alpha (t)u_0 :=({\mathcal {K}}_\alpha u_0)(t), \quad t\in [0,T], \end{aligned}$$

where \({\mathcal {K}}_\alpha : {\mathcal {D}}\left( {\mathcal {K}}_\alpha \right) \subset E \longrightarrow L^2(0,T;{\mathcal {O}})\) is the observability operator, which plays a central role in defining and characterizing both regional and regional gradient observability. Under the admissibility hypothesis on C, the adjoint of \({\mathcal {K}}_\alpha \) can be expressed as [34]:

$$\begin{aligned} \begin{array}{lllll} {\mathcal {K}}_\alpha ^*&:& {\mathcal {D}}\left( {\mathcal {K}}_\alpha ^*\right) \subset L^2(0,T;{\mathcal {O}}) &\longrightarrow &E,\\ & &z& \longmapsto & \displaystyle \int _{0}^{T}{\mathcal {S}}_\alpha ^*(e)C^*z(e)de. \end{array} \end{aligned}$$
(7)

Let \(\nabla : {\mathcal {D}}(\nabla )\subset E \rightarrow E^n\) be the gradient operator, \(\nabla v = \left( \partial _{x_1} v, \partial _{x_2} v,\ldots ,\partial _{x_n} v\right) \), for all v in \( {\mathcal {D}}(\nabla ) = H_0^1(\Omega )\). As shown in [45], the adjoint of \(\nabla \) is minus the divergence, that is, \(\forall V\in {\mathcal {D}}(\nabla ^*)\),

$$\begin{aligned} \nabla ^*V = \left\{ \begin{array}{ll} -{\text {div}}(V) := -\displaystyle \sum _{i=1}^{n}\partial _{x_i} V_i &{} \text {in} \quad \Omega ,\\ \qquad 0&{} \text {on} \quad \partial \Omega . \end{array}\right. \end{aligned}$$

The initial state can be decomposed as follows:

$$\begin{aligned} u_0=\left\{ \begin{array}{ll} u_0^1&{} \text {in} \ \omega ,\\ {\tilde{u}}_0 &{} \text {in} \ \Omega \setminus \omega . \end{array}\right. \end{aligned}$$

The purpose of regional gradient observability is to reconstruct the gradient vector \(\nabla u_0^1\) in \(\omega \).

We recall that system (3) augmented with (5) is called exactly (respectively, approximately) \(\omega \)-observable if, and only if, \(Im\left( \chi _{_\omega }{\mathcal {K}}_\alpha ^*\right) = L^2(\omega )\) (respectively, \(Ker\left( {\mathcal {K}}_\alpha \chi _{_\omega }^*\right) = \left\{ 0\right\} \)). Based on the discussion in [16], we denote \(H_\alpha := \chi _{_\omega }^n\nabla {\mathcal {K}}_\alpha ^*\) and state the following definitions.

Definition 1

System (3), augmented with (5), is exactly G-observable in \(\omega \) (G stands for Gradient) if

$$\begin{aligned} Im\left( H_\alpha \right) = \left( L^2(\omega )\right) ^n. \end{aligned}$$
(8)

Definition 2

System (3), augmented with (5), is approximately G-observable in \(\omega \) if

$$\begin{aligned} \overline{Im\left( H_\alpha \right) } = \left( L^2(\omega )\right) ^n. \end{aligned}$$
(9)

Remark 3

Definitions 1 and 2 for fractional systems coincide with the standard notions for the classical systems in the particular case \(\alpha =1\) [16].

We now present some useful results and properties. Our first result gives a characterization of the approximate regional gradient observability.

Proposition 4

The following assertions are equivalent:

  1. System (3) is approximately G-observable in \(\omega \).

  2. \(Ker\left( H_\alpha ^*\right) = \left\{ 0\right\} \).

  3. \(H_\alpha H_\alpha ^*\) is positive definite.

  4. \(\left( \langle \left( \chi _{_\omega }^n\right) ^*y, \nabla {\mathcal {K}}_\alpha ^*z \rangle _{_{\left( E\right) ^n}} = 0, \ \forall z\in L^2(0,T;{\mathcal {O}})\right) \) \(\implies \) \(y=0_{_{\left( L^2(\omega )\right) ^n}}\).

Proof

The result follows by proving that \(1) \iff 2)\), \(2) \implies 3)\), \(3) \implies 4)\) and \(4)\implies 2)\).

\(1) \iff 2)\):

This is a direct consequence of the fact that \(Ker(H_\alpha ^*) = (Im(H_\alpha ))^{^\perp }\).

\(2) \implies 3)\):

Let y be in \(\left( L^2(\omega )\right) ^n\). Then,

$$\begin{aligned} \langle H_\alpha H_\alpha ^*y,y\rangle _{_{\left( L^2(\omega )\right) ^n}} = \langle H_\alpha ^*y,H_\alpha ^*y\rangle _{_{L^2(0,T;{\mathcal {O}})}} = \Vert H_\alpha ^*y\Vert _{_{L^2(0,T;{\mathcal {O}})}}^2 \ge 0. \end{aligned}$$

Moreover, we get that,

$$\begin{aligned} \langle H_\alpha H_\alpha ^*y,y\rangle _{_{\left( L^2(\omega )\right) ^n}}=0 \ \implies \ H^*_\alpha y=0, \end{aligned}$$

and, using 2), we have:

$$\begin{aligned} \langle H_\alpha H_\alpha ^*y,y\rangle _{_{\left( L^2(\omega )\right) ^n}} =0 \ \implies \ y=0. \end{aligned}$$

Thus, \(H_\alpha H_\alpha ^*\) is positive definite.

\(3) \implies 4)\):

Let us consider \(y\in \left( L^2(\omega )\right) ^n\) such that \(\langle \left( \chi _{_\omega }^n\right) ^*y,\nabla {\mathcal {K}}_\alpha ^*z \rangle _{_{\left( E\right) ^n}} = 0\), for all \(z\in L^2(0,T;{\mathcal {O}})\). Thus, by choosing \(z=H_\alpha ^*y\), we obtain that

$$\begin{aligned} \langle \left( \chi _{_\omega }^n\right) ^*y, \nabla {\mathcal {K}}_\alpha ^*H_\alpha ^*y \rangle _{_{\left( E\right) ^n}} = \langle H_\alpha ^*y, H_\alpha ^*y \rangle _{_{L^2(0,T;{\mathcal {O}})}} = \langle H_\alpha H_\alpha ^*y, y \rangle _{_{\left( L^2(\omega )\right) ^n}}=0. \end{aligned}$$

Hence, 3) implies that \(y=0_{_{\left( L^2(\omega )\right) ^n}}\).

\(4)\implies 2)\):

Let \(y\in Ker\left( H_\alpha ^*\right) \). We have \(H_\alpha ^*y=0\), which means that \(\langle H_\alpha ^* y,z\rangle _{_{L^2(0,T;{\mathcal {O}})}} =0\) for all \(z\in L^2(0,T;{\mathcal {O}})\). Hence, \(\langle \left( \chi _{_\omega }^n\right) ^*y, \nabla {\mathcal {K}}_\alpha ^*z\rangle _{_{\left( E\right) ^n}} =0\) for all \(z\in L^2(0,T;{\mathcal {O}})\). Thus, 4) implies that \(y=0_{_{\left( L^2(\omega )\right) ^n}}\) and we conclude that \(Ker\left( H_\alpha ^*\right) = \left\{ 0\right\} \).

The proof is complete. \(\square \)
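Condition 3 of Proposition 4 can be explored numerically on a modal truncation. The sketch below is only an illustration under simplifying assumptions of ours (1D Dirichlet Laplacian, \(\alpha =1/2\), a single pointwise sensor, three modes): the finite-dimensional analogue of \(H_\alpha H_\alpha ^*\) is positive definite when the sensor location b avoids the zeros of every \(\partial _x\varphi _i\), and degenerates when it does not (here \(b=1/2\) annihilates the odd modes).

```python
import numpy as np
from math import erfc, exp, pi, sqrt

def ml_half_neg(x):
    """E_{1/2}(-x) for x >= 0 via exp(x^2)*erfc(x), switching to the
    asymptotic expansion when exp(x^2) would overflow."""
    if x < 20.0:
        return exp(x * x) * erfc(x)
    return (1.0 / x - 0.5 / x ** 3) / sqrt(pi)

def trap(vals, h):
    return h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

T, n_modes = 1.0, 3
lam = np.array([(i * pi) ** 2 for i in range(1, n_modes + 1)])
t = np.linspace(1e-6, T, 2001)
h = t[1] - t[0]
# time factors E_{1/2}(-lambda_i * sqrt(t)) of the mild solution (alpha = 1/2)
E = np.array([[ml_half_neg(l * sqrt(s)) for s in t] for l in lam])
C = np.array([[trap(E[i] * E[k], h) for k in range(n_modes)]
              for i in range(n_modes)])

def gradient_gramian(b):
    """Finite-dimensional analogue of H_alpha H_alpha^*: a pointwise sensor
    at b sees psi_i(b) = (d/dx) phi_i(b) = sqrt(2) * i * pi * cos(i pi b)."""
    psi = np.array([sqrt(2) * i * pi * np.cos(i * pi * b)
                    for i in range(1, n_modes + 1)])
    return np.outer(psi, psi) * C

eig_good = np.linalg.eigvalsh(gradient_gramian(0.37))  # all psi_i(b) != 0
eig_bad = np.linalg.eigvalsh(gradient_gramian(0.5))    # psi_i(0.5) = 0, i odd
```

This already anticipates the sensor-placement discussion of Sect. 4: positive definiteness of the Gramian is lost exactly where the sensor sits on zeros of the eigenfunction derivatives.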

Before proving our second result, we recall the following lemma.

Lemma 5

(See [1]) Let F, G and E be three reflexive Banach spaces, and consider \(v\in {\mathcal {L}}(F,E)\) and \(y\in {\mathcal {L}}(G,E)\). The following assertions are equivalent:

  1. \(Im(v) \subset Im(y)\);

  2. \(\exists \) \(c>0\) such that:

     $$\begin{aligned} \Vert v^*x^*\Vert _{F^{^*}} \le c \Vert y^*x^*\Vert _{G^{^*}}, \quad \forall x^*\in E^{^*}. \end{aligned}$$

The next proposition characterizes the exact regional gradient observability.

Proposition 6

The following statements are equivalent:

  1. System (3) is exactly G-observable in \(\omega \);

  2. \(Ker\left( H_\alpha ^*\right) = \left\{ 0\right\} \) and \(Im\left( H_\alpha \right) \) is closed;

  3. there exists \(c>0\) such that:

     $$\begin{aligned} \Vert u\Vert _{_{\left( L^2(\omega )\right) ^n}} \le c\Vert H_\alpha ^* u\Vert _{_{L^2(0,T;{\mathcal {O}})}}, \quad \forall u\in \left( L^2(\omega )\right) ^n. \end{aligned}$$

Proof

We show that \(1)\implies 2)\), \(2) \implies 1)\), and \(1) \iff 3)\).

\(1)\Rightarrow 2)\):

Since system (3) is exactly G-observable in \(\omega \), it is also approximately G-observable in \(\omega \). Hence, \(Ker\left( H_\alpha ^*\right) = \left\{ 0\right\} \) and \(Im(H_\alpha ) = \left( L^2(\omega )\right) ^n = \overline{Im(H_\alpha )}\). Thus, \(Ker\left( H_\alpha ^*\right) = \left\{ 0\right\} \) and \(Im(H_\alpha ) \) is closed.

\(2) \Rightarrow 1)\):

The equality \(Ker\left( H_\alpha ^*\right) = \left\{ 0\right\} \) gives that (3) is approximately G-observable in \(\omega \). This, together with the fact that \(Im(H_\alpha )\) is closed, implies that \(Im(H_\alpha ) = \overline{Im(H_\alpha )} = \left( L^2(\omega )\right) ^n\). Thus, system (3) is exactly G-observable in \(\omega \).

\(1) \Leftrightarrow 3)\):

System (3) is exactly G-observable in \(\omega \Leftrightarrow Im\left( H_\alpha \right) = \left( L^2(\omega )\right) ^n\). We already know that \(Im\left( H_\alpha \right) \subset \left( L^2(\omega )\right) ^n\), hence all that remains is to show that \(\left( L^2(\omega )\right) ^n \subset Im\left( H_\alpha \right) \). This last inclusion is a direct application of Lemma 5 with \(E=F=\left( L^2(\omega )\right) ^n\), \(G=L^2(0,T;{\mathcal {O}})\), \(v=Id_{_{\left( L^2(\omega )\right) ^n}}\), and \(y=H_\alpha .\)

The proof is complete. \(\square \)

Remark 7

Our Propositions 4 and 6 generalize the main results of [16], which are only valid for the classical integer-order case \(\alpha =1\).

3 A counterexample

To show the importance of regional gradient observability, we now give an example of a system that is not approximately gradient observable in the whole domain, but is approximately G-observable in a suitable subregion \(\omega \).

Let us set \(\Omega = ]0,1[\times ]0,1[\), and let us work with the time-fractional system given by

$$\begin{aligned} \left\{ \begin{array}{llll} ^{^C}{\mathcal {D}}_{0^+}^{^{0.5}}u(y_1,y_2,t) = \partial _{y_1}^2u(y_1,y_2,t) + \partial _{y_2}^2u(y_1,y_2,t), &{} (y_1,y_2,t)\in Q_2, \\ u(\nu _1,\nu _2,t) = 0, &{} (\nu _1,\nu _2,t)\in \Sigma _2, \\ u(y_1,y_2,0) = u_0(y_1,y_2), &{} (y_1,y_2)\in \Omega , \end{array}\right. \nonumber \\ \end{aligned}$$
(10)

together with the output

$$\begin{aligned} z(t) = Cu(t) = \displaystyle \iint _{_D}u(y_1,y_2,t)f(y_1,y_2)dy_1dy_2, \end{aligned}$$
(11)

where \(f(y_1,y_2) = \sin (2\pi y_2)\) and \(D = \left\{ \frac{1}{2}\right\} \times ]0,1[\).

We know that the eigenvalues and eigenfunctions of \(-A = -\partial _{y_1}^2 - \partial _{y_2}^2\) are written as follows:

$$\begin{aligned} \lambda _{i,j} = (i^2 + j^2)\pi ^2, \end{aligned}$$

and

$$\begin{aligned} \varphi _{i,j}(y_1,y_2) = 2\sin (i\pi y_1)\sin (j\pi y_2). \end{aligned}$$

Moreover, from (4) and (11), one can write that:

$$\begin{aligned} {\mathcal {K}}_{_\alpha }(t)u_0 = C {\mathcal {S}}_{_\alpha }(t)u_0 = \displaystyle \sum _{i,j=1}^{+\infty }E_{0.5}(-\lambda _{i,j}t^{0.5}) \langle u_0,\varphi _{i,j}\rangle \iint _{_D}\varphi _{i,j}(y_1,y_2)f(y_1,y_2)dy_1dy_2. \end{aligned}$$
(12)

Let \(h(y_1,y_2) = \dfrac{1}{4\pi }\left( \cos (y_1\pi )\sin (4y_2\pi ), \ \frac{1}{4}\sin (y_1\pi )\cos (4y_2\pi )\right) \) be an element of \(E^2\).

Proposition 8

The gradient h is not approximately G-observable in \(\Omega \), but it is approximately G-observable in \(\omega = ]0,1[\times ]\frac{1}{8},\frac{5}{8}[\).

Proof

Firstly, let us show that h is not approximately G-observable in \(\Omega \), i.e., \(h\in Ker\left( {\mathcal {K}}_{_\alpha }(t)\nabla ^{^*}\right) \) for all \(t\in [0,T]\). We have:

$$\begin{aligned} {\mathcal {K}}_{_\alpha }(t)\nabla ^{^*}h&= \displaystyle \sum _{i,j=1}^{+\infty } E_{0.5}(-\lambda _{i,j}t^{0.5})\langle \nabla ^{^*}h,\varphi _{i,j}\rangle \iint _{_D}\varphi _{i,j}(y_1,y_2)f(y_1,y_2)dy_1dy_2\\&= \displaystyle \sum _{i,j=1}^{+\infty } E_{0.5}(-\lambda _{i,j}t^{0.5})\int _{0}^{1}\sin (y_1\pi ) \sin (iy_1\pi )dy_1 \int _{0}^{1}\sin (4y_2\pi )\sin (jy_2\pi )dy_2\\&\quad \times \sin \left( \frac{i\pi }{2}\right) \displaystyle \int _{0}^{1}\sin (2y_2\pi )\sin (jy_2\pi )dy_2 = 0. \end{aligned}$$

Hence, \(h\in Ker\left( {\mathcal {K}}_{_\alpha }(t)\nabla ^{^*}\right) \).

We now show that h is approximately G-observable in \(\omega \), i.e., \(h \notin Ker\left( {\mathcal {K}}_{_\alpha }(t)\nabla ^{^*}( \chi _{_\omega }^n)^{^*}\chi _{_\omega }^n\right) \) for all \(t\in [0,T]\). We have:

$$\begin{aligned}&{\mathcal {K}}_{_\alpha }(t)\nabla ^{^*}(\chi _{_\omega }^n)^{^*}\chi _{_\omega }^nh = \displaystyle \sum _{i,j=1}^{+\infty }E_{0.5}(-\lambda _{i,j}t^{0.5})\langle \nabla ^{^*}(\chi _{_\omega }^n)^{^*}\chi _{_\omega }^nh,\varphi _{i,j}\rangle \iint _{_D}\varphi _{i,j}(y_1,y_2)f(y_1,y_2)dy_1dy_2\\&\quad =\displaystyle \sum _{i,j=1}^{+\infty }E_{0.5}(-\lambda _{i,j}t^{0.5}) \int _{0}^{1}\sin (y_1\pi )\sin (iy_1\pi )dy_1 \int _{\frac{1}{8}}^{\frac{5}{8}} \sin (4y_2\pi )\sin (jy_2\pi )dy_2 \, \sin \left( \frac{i\pi }{2}\right) \int _{0}^{1}\sin (2y_2\pi )\sin (jy_2\pi )dy_2\\&\quad = E_{0.5}(-\lambda _{1,2}t^{0.5})\displaystyle \int _{0}^{1}\sin (y_1\pi )^2dy_1 \int _{\frac{1}{8}}^{\frac{5}{8}}\sin (4y_2\pi )\sin (2y_2\pi )dy_2 \int _{0}^{1}\sin (2y_2\pi )^2dy_2\\&\quad = -\dfrac{\sqrt{2}}{24\pi }E_{0.5}(-5\pi ^2t^{0.5})\ne 0. \end{aligned}$$

We conclude that h is approximately G-observable in \(\omega \). \(\square \)
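The two key integrals in the proof can be confirmed by quadrature: on \((0,1)\) the frequencies \(4\pi \) and \(2\pi \) are orthogonal, so every term of the series vanishes, while on \((\frac{1}{8},\frac{5}{8})\) orthogonality is lost and the inner product equals \(-\frac{\sqrt{2}}{6\pi }\). A quick check (ours, not from the paper):

```python
import numpy as np

def trap(vals, y):
    """Trapezoidal rule on a uniform grid y."""
    h = y[1] - y[0]
    return h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

y = np.linspace(0.0, 1.0, 20001)
f4 = np.sin(4 * np.pi * y)
f2 = np.sin(2 * np.pi * y)

# on the whole of (0,1) the two frequencies are orthogonal
whole = trap(f4 * f2, y)

# on (1/8, 5/8) orthogonality is lost and the (i,j) = (1,2) term survives
mask = (y >= 1 / 8) & (y <= 5 / 8)
sub = trap((f4 * f2)[mask], y[mask])
target = -np.sqrt(2) / (6 * np.pi)   # closed form of the surviving integral
```

Multiplying this surviving integral by the two factors \(\int _0^1\sin ^2(\pi y_1)dy_1 = \int _0^1\sin ^2(2\pi y_2)dy_2 = \frac{1}{2}\) recovers the constant \(-\frac{\sqrt{2}}{24\pi }\) in the proof.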

4 Gradient strategic sensors

In this section, we characterize the gradient strategic sensors, that is, the sensors for which the considered system is approximately G-observable in the desired subregion.

Definition 9

We call a sensor any pair (D, f), where \(D\subset \Omega \) is the geometrical placement of the sensor and \(f: D\rightarrow {\mathbb {R}}\) is its spatial distribution.

We introduce here two types of sensors:

  • Zonal sensor, when D has positive Lebesgue measure, \(f\in L^2(D)\), the space \({\mathcal {O}}\) is \({\mathbb {R}}\), and the measurements are given by \(z(t)=\langle f,u(t)\rangle _{_{L^2(D)}} = \displaystyle \int _{D}f(x)u(x,t)dx\);

  • Pointwise sensor, when \(D=\left\{ b\right\} \) with \(b\in \Omega \), \(f \equiv \delta _b\) with \(\delta _b\) the Dirac mass centered at b, the space \({\mathcal {O}}\) is \({\mathbb {R}}\), and the output equation is given by \(z(t) = u(b,t)\).

Remark 10

When we consider a zonal sensor, the observation operator is bounded; if we take a pointwise sensor, then the observation operator is unbounded, but it is an admissible observation operator.

For more information about sensors and their characterizations, see [34, 46, 47].

Let us reconsider system (3). We take the measurements to be given by means of p sensors \(\left( D_i,f_i\right) _{1\le i\le p}\). The observation space is \({\mathcal {O}}={\mathbb {R}}^p\) and the output equation is written as:

$$\begin{aligned} z(t) = \left( z_1(t), \ldots , z_p(t)\right) ^t, \end{aligned}$$
(13)

where \(z_i(t) = \langle u(t), f_i\rangle _{_{L^2(D_i)}}, \ \forall i\in \llbracket 1, p \rrbracket \). The adjoint of the observation operator C is expressed for all \(u=(u_1,\ldots ,u_p)\in {\mathbb {R}}^p\) by:

$$\begin{aligned} C^*u = \displaystyle \sum _{i=1}^{p}\chi _{_{D_i}}f_iu_i, \end{aligned}$$
(14)

for the case of zonal sensors, and by

$$\begin{aligned} C^*u = \displaystyle \sum _{i=1}^{p}u_i\delta _{b_i}, \end{aligned}$$
(15)

for the case of pointwise sensors.
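To fix ideas, the output (13) for zonal sensors is simply a vector of weighted spatial averages of the state. A minimal discretized sketch (the sensor supports, their distributions \(f_i \equiv 1\), and the state snapshot are arbitrary choices of ours):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 201)
h = x[1] - x[0]

# two hypothetical zonal sensors: supports D_i with distributions f_i = 1
supports = [(x > 0.1) & (x < 0.3), (x > 0.6) & (x < 0.9)]

def outputs(u):
    """z_i(t) = <u(t), f_i>_{L^2(D_i)}, approximated by a Riemann sum."""
    return np.array([h * np.sum(u[D]) for D in supports])

u = np.sin(np.pi * x)        # a sample state snapshot
z = outputs(u)
```

Stacking these values over time yields the observation z(t) in \({\mathcal {O}}={\mathbb {R}}^p\) used throughout this section.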

Definition 11

A sequence of sensors (or a sensor) is said to be gradient \(\omega \)-strategic if (3), augmented with (13), is approximately G-observable in \(\omega \).

In [18], a lemma is given (Lemma 7 of [18]) that fails to be valid when the considered system is written in terms of the Caputo derivative, as is the case here. We now present an alternative lemma that allows us to deal with this problem.

Remark 12

The problem of this article is formulated with Caputo-type fractional derivatives only. However, Riemann–Liouville fractional derivatives appear naturally due to fractional integration by parts and Green’s formulas (cf. Lemmas 14 and 15).

Lemma 13

Let r be a function that satisfies

$$\begin{aligned} \left\{ \begin{array}{llll} ^{^{RL}}{\mathcal {D}}_{T^-}^{^\alpha }r(y,s) = A^*r(y,s) + C^*z(s), &{} (y,s)\in Q_T, \ \alpha \in ]0,1], \\ r(\xi ,s) = 0, &{} (\xi ,s)\in \Sigma _T, \\ \lim \limits _{s\rightarrow T^-} {\mathcal {I}}_{_{T^-}}^{^{1-\alpha }}r(y,s) = 0, &{} y\in \Omega , \end{array}\right. \end{aligned}$$
(16)

where:

$$\begin{aligned} ^{^{RL}}{\mathcal {D}}_{T^-}^{^\alpha }r(y,s) = -\partial _s\displaystyle \int _{s}^{T} \dfrac{(e-s)^{-\alpha }}{\Gamma (1-\alpha )}r(y,e)de, \end{aligned}$$

is the right-sided fractional derivative in the sense of Riemann–Liouville, and,

$$\begin{aligned} {\mathcal {I}}_{_{T^-}}^{^{\alpha }}r(y,s) = \dfrac{1}{\Gamma (\alpha )}\displaystyle \int _{s}^{T}(e-s)^{\alpha -1}r(y,e)de, \end{aligned}$$

is the right-sided Riemann–Liouville fractional integral. Then the following equality holds:

$$\begin{aligned} {\mathcal {K}}_\alpha ^*z = {\mathcal {I}}_{_{T^-}}^{^{1-\alpha }}r(x,0). \end{aligned}$$

Proof

The solution of (16) can be written as:

$$\begin{aligned} r(s) = \displaystyle \int _{s}^{T}(e-s)^{\alpha -1}{\mathcal {N}}^*_\alpha (e-s)C^*z(e)de, \end{aligned}$$

where \({\mathcal {N}}_\alpha \) is a linear and bounded operator defined in terms of a probability density function [44]. From Proposition 3.3 of [44], we have that:

$$\begin{aligned} {\mathcal {I}}_{_{T^-}}^{^{1-\alpha }}r(x,0) = \displaystyle \int _{0}^{T}{\mathcal {S}}_\alpha ^*(\tau )C^*z(\tau )d\tau . \end{aligned}$$

Hence, from (7), we have that:

$$\begin{aligned} {\mathcal {K}}_\alpha ^*z = \displaystyle \int _{0}^{T}{\mathcal {S}}_\alpha ^*(s)C^*z(s)ds = {\mathcal {I}}_{_{T^-}}^{^{1-\alpha }}r(x,0), \end{aligned}$$

and the result is proved. \(\square \)

The following fractional integration by parts formula will be useful in the sequel.

Lemma 14

[See [48]] Let v be a function in \(L^{^p}(0,T;E)\), let u be in \(AC(0,T;E)\), and let \(\alpha \in \, ]0,1]\). The formula

$$\begin{aligned} \displaystyle \int _{0}^{T} \left( {^{^C}{\mathcal {D}}_{0^+}^{^\alpha }}u(t)\right) v(t)dt&= \displaystyle \int _{0}^{T} u(t)\left( {^{^{RL}}{\mathcal {D}}_{T^-}^{^\alpha }} v(t)\right) dt\\&\quad + u(T)\lim \limits _{t\rightarrow T^-} {\mathcal {I}}_{_{T^-}}^{^{1-\alpha }}v(t) - u(0){\mathcal {I}}_{_{T^-}}^{^{1-\alpha }}v(0) \end{aligned}$$
(17)

of integration by parts holds.
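Formula (17) can be sanity-checked on monomials, for which both fractional derivatives have well-known closed forms: \(^{^C}{\mathcal {D}}_{0^+}^{^\alpha }t = t^{1-\alpha }/\Gamma (2-\alpha )\) and \(^{^{RL}}{\mathcal {D}}_{T^-}^{^\alpha }(T-t)^\beta = \frac{\Gamma (\beta +1)}{\Gamma (\beta +1-\alpha )}(T-t)^{\beta -\alpha }\). With \(u(t)=t\) and \(v(t)=(T-t)^\beta \), both boundary terms in (17) vanish, so the two integrals must coincide. A quick numerical check (ours, for one choice of parameters):

```python
import numpy as np
from math import gamma

def trap(vals, h):
    return h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

alpha, beta, T = 0.5, 1.0, 1.0
t = np.linspace(0.0, T, 20001)
h = t[1] - t[0]

u = t                                   # ^C D^alpha u = t^(1-alpha)/Gamma(2-alpha)
Du = t ** (1 - alpha) / gamma(2 - alpha)
v = (T - t) ** beta                     # ^RL D^alpha_{T-} v, closed form below
Dv = gamma(beta + 1) / gamma(beta + 1 - alpha) * (T - t) ** (beta - alpha)

# boundary terms of (17) vanish here: u(0) = 0 and
# I^{1-alpha}_{T-} v(t) -> 0 as t -> T^- (since beta + 1 - alpha > 0)
lhs = trap(Du * v, h)
rhs = trap(u * Dv, h)
```

Both sides evaluate to \(\Gamma (\beta +1)T^{\beta +2-\alpha }/\Gamma (\beta +3-\alpha )\), here \(1/\Gamma (3.5)\), up to quadrature error.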

Our next result (Theorem 16) provides a useful characterization of gradient \(\omega \)-strategic sensors. To prove it, we make use of (17) and also the following fractional Green's formula.

Lemma 15

[Fractional Green’s formula [44]] For any \(f \in H^{^2}\left( 0,T;E\right) \) one has

$$\begin{aligned} \begin{aligned} \displaystyle \int _{0}^{T}&\int _{\Omega }\left( {^{^C}{\mathcal {D}}_{0^+}^{^\alpha }}r(y,s) + Ar(y,s)\right) f(y,s)dyds\\&=\displaystyle \int _{\Omega }r(y,T)\lim \limits _{s\rightarrow T^-} {\mathcal {I}}_{_{T^-}}^{^{1-\alpha }}f(y,s)dy\\&\quad - \displaystyle \int _{\Omega }r(y,0){\mathcal {I}}_{_{T^-}}^{^{1-\alpha }}f(y,0)dy\\&\quad + \displaystyle \int _{0}^{T}\int _{\Omega }\left( {^{^{RL}}{\mathcal {D}}_{T^-}^{^\alpha }} f(y,s) + A^*f(y,s)\right) r(y,s)dyds \\&\quad + \displaystyle \int _{0}^{T}\int _{\partial \Omega }\left( r(\varsigma ,s) \dfrac{\partial f(\varsigma ,s)}{\partial \nu _{_{A^*}}} - \dfrac{\partial r(\varsigma ,s)}{\partial \nu _{_{A}}} f(\varsigma ,s)\right) d\varsigma ds. \end{aligned} \end{aligned}$$
(18)

Theorem 16

The sequence \((D_i,f_i)_{_{1\le i\le p}}\) is gradient \(\omega \)-strategic if, and only if,

$$\begin{aligned} \displaystyle \sum _{s=1}^{n}M_{j}^sy_j^s = 0_{_{{\mathbb {R}}^{^p}}}, \ \forall j\in {\mathbb {N}}^*, \implies y = 0_{_{(L^2(\omega ))^{^n}}}, \end{aligned}$$

where

$$\begin{aligned} M_j^s&= \left( \varphi _{j,k}^{i,s}\right) _{\begin{array}{c} 1 \le i \le p \\ 1 \le k \le r_j \end{array}}, \\ \varphi _{j,k}^{i,s}&= \left\{ \begin{array}{ll} \langle \partial _{x_s} \varphi _{j,k}, f_i\rangle _{_{L^2(D_i)}} & \text { for zonal sensors},\\ \partial _{x_s} \varphi _{j,k}(b_i) & \text { for pointwise sensors}, \end{array}\right. \\ y_j^s&= \left( y_{j_{_1}}^s,\ldots ,y_{j_{_{r_j}}}^s\right) ^T \in {\mathbb {R}}^{^{r_j}}, \\ y_{_{j_{_k}}}^s&= \langle \chi _{_\omega }^*y_s, \varphi _{j,k}\rangle _{_{E}}, \quad 1\le k \le r_j, \end{aligned}$$

and

$$\begin{aligned} y = \left( y_1,\ldots ,y_n\right) \in (L^{^2}(\omega ))^{^n}. \end{aligned}$$

Proof

From Proposition 4, we have that \((D_i,f_i)_{_{1\le i\le p}}\) is gradient \(\omega \)-strategic if, and only if,

$$\begin{aligned}{} & {} \langle \left( \chi _{_\omega }^n\right) ^{^*}y, \nabla {\mathcal {K}}_\alpha ^*z \rangle _{_{E^{^n}}} = 0, \ \forall z\in L^2(0,T;{\mathcal {O}}) \\ {}{} & {} \implies y=0_{_{\left( L^2(\omega )\right) ^n}}, \end{aligned}$$

which means, by using Lemma 13, that:

$$\begin{aligned} \displaystyle \sum _{s=1}^{n} \langle \chi _{_\omega }^*y_s,\partial _{x_s} {\mathcal {I}}^{1-\alpha }_{_{T^-}}r(0) \rangle _{_{E}} =0 \quad \implies \quad y=0_{_{\left( L^2(\omega )\right) ^n}},\nonumber \\ \end{aligned}$$
(19)

where r is the solution of (16). Let us now find the exact expression of \(\langle \chi _{_\omega }^*y_s, \partial _{x_s}{\mathcal {I}}^{1-\alpha }_{_{T^-}}r(0) \rangle _{_{E}}\). Let s be an element in \(\llbracket 1, n \rrbracket \). We introduce the system:

$$\begin{aligned} \left\{ \begin{array}{llll} ^{^C}{\mathcal {D}}_{0^+}^{^\alpha }\phi (x,\tau ) = A\phi (x,\tau ), &{} (x,\tau )\in Q_T, \ \alpha \in ]0,1], \\ \phi (\varsigma ,\tau ) = 0, &{} (\varsigma ,\tau )\in \Sigma _T, \\ \phi (x,0) = \chi _{_\omega }^*y_s(x), &{} x\in \Omega . \end{array}\right. \end{aligned}$$
(20)

Its unique mild solution is written as:

$$\begin{aligned} \phi (\cdot ,\tau )= & {} {\mathcal {S}}_\alpha (\tau )\chi _{_\omega }^*y_s(\cdot )\\= & {} \displaystyle \sum _{j=1}^{\infty } E_\alpha (-\lambda _j\tau ^\alpha )\sum _{k=1}^{r_j}\langle \chi _{_\omega }^*y_s,\varphi _{j,k}\rangle _{_{E}}\varphi _{j,k}(\cdot ). \end{aligned}$$

Multiplying both sides of (16) by \(\partial _{x_s}\phi \) and integrating over \({\mathcal {Q}}_T =\Omega \times [0,T]\), we get that:

$$\begin{aligned}{} & {} \int _{{\mathcal {Q}}_T}{^{^{RL}} {\mathcal {D}}_{T^-}^{^\alpha }}r(x,\tau )\partial _{x_s} \phi (x,\tau )dxd\tau \nonumber \\{} & {} = \int _{{\mathcal {Q}}_T} A^*r(x,\tau ) \partial _{x_s} \phi (x,\tau )dxd\tau \nonumber \\{} & {} \quad + \int _{{\mathcal {Q}}_T}C^*z(\tau )\partial _{x_s} \phi (x,\tau )dxd\tau . \end{aligned}$$
(21)

On the other hand, equation (17) gives:

$$\begin{aligned}{} & {} \int _{{\mathcal {Q}}_T}{^{^{RL}}{\mathcal {D}}_{T^-}^{^\alpha }} r(x,\tau )\partial _{x_s} \phi (x,\tau )dxd\tau \nonumber \\{} & {} = \int _{{\mathcal {Q}}_T}r(x,\tau ) A\partial _{x_s} \phi (x,\tau )dxd\tau \nonumber \\{} & {} \quad + \int _{\Omega }\phi (x,0)\partial _{x_s} {\mathcal {I}}_{_{T^-}}^{^{1-\alpha }}r(x,0)dx. \end{aligned}$$
(22)

From equations (18), (21), and (22), and using the boundary conditions, we obtain that:

$$\begin{aligned} \begin{aligned} \langle \chi _{_\omega }^*y_s,\partial _{x_s} {\mathcal {I}}^{1-\alpha }_{_{T^-}}r(0) \rangle _{_{E}}&= \int _{\Omega }\phi (x,0)\partial _{x_s} {\mathcal {I}}_{_{T^-}}^{^{1-\alpha }}r(x,0)dx\\&= \int _{{\mathcal {Q}}_T} C^*z(\tau )\partial _{x_s} \phi (x,\tau )dxd\tau \\&= \int _{0}^{T}\langle z(\tau ),C\partial _{x_s} \phi (\cdot ,\tau )\rangle _{_{{\mathbb {R}}^{^p}}}d\tau . \end{aligned} \end{aligned}$$
(23)

Without loss of generality, we continue the proof for the case of zonal sensors (the same can be easily done for pointwise sensors). We have that:

$$\begin{aligned} C\partial _{x_s} \phi (\cdot ,t) = \displaystyle \sum _{j=1}^{+\infty } \sum _{k=1}^{r_j}E_\alpha (\lambda _jt^\alpha )\langle \chi _{_\omega }^*y_s, \varphi _{j,k}\rangle _{_{E}}\left( \begin{array}{c} \langle \partial _{x_s} \varphi _{j,k},f_1\rangle _{_{L^2(D_1)}}\\ \langle \partial _{x_s} \varphi _{j,k},f_2\rangle _{_{L^2(D_2)}}\\ \vdots \\ \langle \partial _{x_s} \varphi _{j,k},f_p\rangle _{_{L^2(D_p)}} \end{array}\right) .\nonumber \\ \end{aligned}$$
(24)

Using (19), (23) and (24), we deduce that \((D_i,f_i)_{_{1\le i\le p}}\) is gradient \(\omega \)-strategic if, and only if,

$$\begin{aligned}{} & {} \displaystyle \int _{0}^{T}\left\langle z(t) , \displaystyle \sum _{s=1}^{n}\sum _{j=1}^{+\infty }\sum _{k=1}^{r_j} E_\alpha (\lambda _jt^\alpha ) \langle \chi _{_\omega }^*y_s, \varphi _{j,k}\rangle _{_{E}}\left( \begin{array}{c} \langle \partial _{x_s} \varphi _{j,k},f_1\rangle _{_{L^2(D_1)}}\\ \langle \partial _{x_s} \varphi _{j,k},f_2\rangle _{_{L^2(D_2)}}\\ \vdots \\ \langle \partial _{x_s} \varphi _{j,k},f_p\rangle _{_{L^2(D_p)}} \end{array}\right) \right\rangle _{{{\mathbb {R}}^p}}dt \\{} & {} = 0,\forall z\in L^2(0,T;{\mathcal {O}}) \implies y=0_{_{\left( L^2(\omega )\right) ^n}}. \end{aligned}$$

From Lemma 5 in [18], we get that the last expression is equivalent to,

$$\begin{aligned}{} & {} \displaystyle \sum _{s=1}^{n}\sum _{j=1}^{+\infty } \sum _{k=1}^{r_j}E_\alpha (\lambda _jt^\alpha ) \langle \chi _{_\omega }^*y_s,\varphi _{j,k}\rangle _{_{E}}\left( \begin{array}{c} \langle \partial _{x_s} \varphi _{j,k},f_1\rangle _{_{L^2(D_1)}}\\ \langle \partial _{x_s} \varphi _{j,k},f_2\rangle _{_{L^2(D_2)}}\\ \vdots \\ \langle \partial _{x_s} \varphi _{j,k},f_p\rangle _{_{L^2(D_p)}} \end{array}\right) =0, \quad \\{} & {} \forall t\in [0,T] \implies y=0_{_{\left( L^2(\omega )\right) ^n}}, \end{aligned}$$

which is also equivalent to,

$$\begin{aligned} \displaystyle \sum _{j=1}^{+\infty } E_\alpha (\lambda _jt^\alpha ) \sum _{s=1}^{n}M_j^sy_j^s = 0, \quad \forall t\in [0,T] \implies y=0_{_{\left( L^2(\omega )\right) ^n}}. \end{aligned}$$

Because \(E_\alpha (\lambda _jt^\alpha ) > 0\) for all \(t \in [0,T]\) and all \(j \in {\mathbb {N}}^*\), we have:

$$\begin{aligned} \displaystyle \sum _{s=1}^{n}M_j^sy_j^s = 0, \ \forall j\in {\mathbb {N}}^*, \implies y=0_{_{\left( L^2(\omega )\right) ^n}}, \end{aligned}$$

and the result is proved. \(\square \)

The following corollary is an immediate consequence of Theorem 16 in the one-dimensional case, i.e., when \(n=1\).

Corollary 17

If \(n=1\), then \((D_i,f_i)_{_{1\le i\le p}}\) is gradient \(\omega \)-strategic if, and only if,

  • \(p \ge \sup \{r_j\}\);

  • \(rank\ M_j^1 = r_j\) for all \(j\in {\mathbb {N}}^*\).
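In one spatial dimension, the eigenvalues of the Dirichlet Laplacian on \(]0,1[\) are simple (\(r_j = 1\)), so the rank condition of Corollary 17 reduces to \(M_j^1 \ne 0\) for every \(j\). The following sketch (our own illustration, not taken from the paper; the sensor support \(D = [0.2, 0.5]\) and spatial distribution \(f = \chi _{D}\) are hypothetical choices) shows how a sensor can fail to be gradient strategic: with \(\varphi _j(y) = \sqrt{2}\sin (j\pi y)\), the scalar \(M_j^1 = \langle \partial _y\varphi _j, f\rangle _{L^2(D)}\) vanishes for every \(j\) that is a multiple of 10.

```python
import math

def M(j, a=0.2, b=0.5):
    """M_j^1 = <d/dy phi_j, chi_[a,b]> for phi_j(y) = sqrt(2) sin(j*pi*y).
    The integral of sqrt(2)*j*pi*cos(j*pi*y) over [a, b] in closed form."""
    return math.sqrt(2.0) * (math.sin(j * math.pi * b) - math.sin(j * math.pi * a))

# Modes j = 10, 20, ... are invisible to this sensor: M_j^1 = 0, so the
# rank condition of Corollary 17 fails and (D, chi_D) is not gradient
# omega-strategic, no matter how small the sub-region omega is.
for j in (1, 2, 10, 20):
    print(j, M(j))
```

A sensor placement for which every \(M_j^1\) is nonzero would, in contrast, satisfy both conditions of the corollary.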

5 The regional gradient reconstruction method

Now we present the steps of an approach that permits the recovery of the initial gradient for (3) in \(\omega \). Our approach is an extension of the Hilbert uniqueness method (HUM) for fractional systems.

Let

$$\begin{aligned} K = \left\{ y\in \left( E\right) ^n \ | \ y = 0 \ \text { in } \ \Omega \setminus \omega \right\} \cap \left\{ \nabla h \ | \ h\in H^1_0(\Omega )\right\} . \end{aligned}$$

Remark 18

Note that \((\chi _{_\omega }^{n})^*\chi _{_\omega }^{n}f = f , \quad \forall f\in K\).

For every \({\tilde{\varphi }}_0\) in K, we introduce the system:

$$\begin{aligned} \left\{ \begin{array}{llll} ^{^C}{\mathcal {D}}_{0^+}^{^\alpha }\varphi (y,s) = A\varphi (y,s), &{} (y,s)\in Q_T, \ \alpha \in ]0,1], \\ \varphi (\varsigma ,s) = 0, &{} (\varsigma ,s)\in \Sigma _T, \\ \varphi (y,0) = \nabla ^*{\tilde{\varphi }}_0(y), &{} y\in \Omega , \end{array}\right. \end{aligned}$$
(25)

which possesses one and only one mild solution in \(L^2(0,T;{\mathcal {D}}(A))\cap C(0,T;E)\), written as follows:

$$\begin{aligned} \varphi (t) = {\mathcal {S}}_\alpha (t)\nabla ^*{\tilde{\varphi }}_0, \ t\in [0,T]. \end{aligned}$$
(26)

We associate with \(K\times K\) the form:

$$\begin{aligned} \begin{array}{lllll} \langle \cdot , \cdot \rangle _{_K} &{} : &{} K\times K &{} \longrightarrow &{} {\mathbb {C}}\\ &{} &{} (f,h)&{} \longmapsto &{} \displaystyle \int _{0}^{T}\langle C{\mathcal {S}}_\alpha (t)\nabla ^*f, C{\mathcal {S}}_\alpha (t)\nabla ^*h\rangle _{_{\mathcal {O}}}dt, \end{array} \end{aligned}$$
(27)

where \(\langle \cdot , \cdot \rangle _{_{\mathcal {O}}}\) is the scalar product in \({\mathcal {O}}\).

Remark 19

The bilinear form \(\langle \cdot ,\cdot \rangle _{_K}\) satisfies the conjugate symmetry and positivity properties, i.e., \(\langle g,f \rangle _{_K}=\overline{\langle f,g \rangle _{_K}}\) and \(\langle f,f \rangle _{_K} \ge 0\).

Lemma 20

If the system (25) is approximately G-observable in \(\omega \), then the bilinear form (27) is a scalar product on K.

Proof

By Remark 19, we only need to show that \(\langle \cdot ,\cdot \rangle _{_K}\) is definite, that is, \(\langle f,f\rangle _{_K} = 0 \implies f=0\).

Let f be an element of K. Hence,

$$\begin{aligned} \langle f,f\rangle _{_K} = \displaystyle \int _{0}^{T}\langle C{\mathcal {S}}_\alpha (t)\nabla ^*f,C{\mathcal {S}}_\alpha (t) \nabla ^*f\rangle _{_{{\mathcal {O}}}}dt = 0, \end{aligned}$$

which implies that:

$$\begin{aligned} \langle C{\mathcal {S}}_\alpha (t)\nabla ^*f, C{\mathcal {S}}_\alpha (t)\nabla ^*f\rangle _{_{{\mathcal {O}}}} = 0. \end{aligned}$$

Using Remark 18, this means that:

$$\begin{aligned} C{\mathcal {S}}_\alpha (t)\nabla ^*f = C{\mathcal {S}}_\alpha (t)\nabla ^*(\chi _{_\omega }^{n})^* \chi _{_\omega }^{n}f= 0, \end{aligned}$$

and, since (25) is approximately G-observable in \(\omega \), we have \(\chi _{_\omega }^{n}f = 0\), that is, \(f = 0 \) in \( \omega \). Because \(f \in K\) vanishes outside \(\omega \), it follows that \(f = 0\). \(\square \)

Let \(||\cdot ||_{_K}\) be the norm on K associated with \(\langle \cdot ,\cdot \rangle _{_K}\), and let us denote again by K its completion by the norm \(||\cdot ||_{_K}\). The space K endowed with \(||\cdot ||_{_K}\) is now a Hilbert space.

We introduce the following auxiliary system:

$$\begin{aligned} \left\{ \begin{array}{llll} ^{^{RL}}{\mathcal {D}}_{T^-}^{^\alpha }\Theta (y,s) = A^*\Theta (y,s) - C^*C\varphi (s), &{} (y,s)\in Q_T, \ \alpha \in ]0,1], \\ \Theta (\varsigma ,s) = 0, &{} (\varsigma ,s)\in \Sigma _T, \\ \lim \limits _{s\rightarrow T^-}{\mathcal {I}}_{_{T^-}}^{^{1-\alpha }}\Theta (y,s) = 0, &{} y\in \Omega , \end{array}\right. \end{aligned}$$
(28)

controlled by the solution of (25).

Remark 21

The condition \(z(t) = C\varphi (t)\) implies that system (28) is the adjoint system of (25).

We now define an operator that associates, with every possible candidate for the initial gradient in \(\omega \), the projection on K \(\left( \text{ via } \text{ the } \text{ operator } (\chi _{_\omega }^{n})^*\chi _{_\omega }^{n}\right) \) of the term \(\nabla {\mathcal {I}}_{_{T^-}}^{1-\alpha }\Theta (0)\):

$$\begin{aligned} \begin{array}{lllll} \Lambda &{} : &{} K &{} \longrightarrow &{} K, \\ &{} &{} {\tilde{\varphi }}_0 &{} \longmapsto &{} (\chi _{_\omega }^{n})^*\chi _{_\omega }^{n} \nabla {\mathcal {I}}_{_{T^-}}^{1-\alpha }\Theta (0). \end{array} \end{aligned}$$
(29)

This way, the problem of regional gradient reconstruction is reduced to a solvability problem of the equation:

$$\begin{aligned} \Lambda {\tilde{\varphi }}_0 = (\chi _{_\omega }^{n})^*\chi _{_\omega }^{n} \nabla {\mathcal {I}}_{_{T^-}}^{1-\alpha }\Theta (0), \end{aligned}$$
(30)

which leads to the next result.

Theorem 22

If system (3) is approximately G-observable in \(\omega \), then equation (30) has a unique solution \({\tilde{\varphi }}_0\in K\), which corresponds to the initial gradient \(\nabla u_0\) in \(\omega \).

Proof

We shall prove that \(\Lambda \) is coercive, that is, that there exists \(\sigma >0\) such that \(\langle \Lambda v,v\rangle _{_K} \ge \sigma \Vert v\Vert _{_K}^2\) for all \(v\in K\). Let \({\tilde{\varphi }}_0\) be in K. Then,

$$\begin{aligned} \begin{array}{lll} \langle \Lambda {\tilde{\varphi }}_0,{\tilde{\varphi }}_0\rangle _{_K} &{}=&{} \langle (\chi _{_\omega }^{n})^*\chi _{_\omega }^{n} \nabla {\mathcal {I}}_{_{T^-}}^{1-\alpha } \Theta (0),{\tilde{\varphi }}_0\rangle _{_K}\\ &{}=&{} \langle {\mathcal {I}}_{_{T^-}}^{1-\alpha }\Theta (0), \nabla ^*{\tilde{\varphi }}_0\rangle _{_{E}}. \end{array} \end{aligned}$$

From Proposition 3.3 in [44], we have that

$$\begin{aligned} {\mathcal {I}}_{_{T^-}}^{^{1-\alpha }}\Theta (0) = \displaystyle \int _{0}^{T}{\mathcal {S}}_\alpha ^*(\tau ) C^*C{\mathcal {S}}_\alpha (\tau )\nabla ^*{\tilde{\varphi }}_0 d\tau . \end{aligned}$$

Therefore,

$$\begin{aligned} \begin{array}{lll} \langle \Lambda {\tilde{\varphi }}_0,{\tilde{\varphi }}_0\rangle _{_K} &{}=&{} \left\langle \displaystyle \int _{0}^{T}{\mathcal {S}}_\alpha ^*(\tau ) C^*C{\mathcal {S}}_\alpha (\tau )\nabla ^*{\tilde{\varphi }}_0 d\tau , \nabla ^*{\tilde{\varphi }}_0\right\rangle _{_{E}}\\ &{}=&{} \displaystyle \int _{0}^{T}\langle C\varphi (\tau ), C\varphi (\tau )\rangle _{_{\mathcal {O}}} d\tau \\ &{}=&{} \Vert C\varphi (\cdot )\Vert _{_{L^2(0,T;{\mathcal {O}})}}^2, \end{array} \end{aligned}$$
(31)

By the definition (27) of the inner product on K, the right-hand side of (31) equals \(\Vert {\tilde{\varphi }}_0\Vert _{_K}^2\). Hence \(\Lambda \) is coercive and equation (30) possesses one and only one solution. \(\square \)

6 Applications

In this section, we take \(\Omega = ]0,1[\times ]0,1[\). Let \(\omega \subset \Omega \) be the desired sub-region. We consider the following time-fractional system:

$$\begin{aligned} \left\{ \begin{array}{llll} ^{^C}{\mathcal {D}}_{0^+}^{^\alpha }y(x_1,x_2,t) = \left( \partial _{x_1}^2 + \partial _{x_2}^2 \right) y(x_1,x_2,t), &{} (x_1,x_2,t)\in Q_T, \ \alpha \in ]0,1],\\ y(\xi _1,\xi _2,t) = 0, &{} (\xi _1,\xi _2,t)\in \Sigma _T, \\ y(x_1,x_2,0) = y_0(x_1,x_2), &{} (x_1,x_2)\in \Omega . \end{array}\right. \end{aligned}$$
(32)

Our goal is to illustrate the steps used to recover the initial gradient vector \(\nabla y_0 = \left( \partial _{x_1}y_0, \partial _{x_2}y_0 \right) \) in the sub-region \(\omega \). We present the method for the two types of sensors introduced in Sect. 4. Recall that the eigenvalues and eigenfunctions of \(\left( \partial _{x_1}^2 + \partial _{x_2}^2 \right) \), with Dirichlet boundary conditions, are:

$$\begin{aligned} \lambda _{i,j} = -(i^2 + j^2)\pi ^2, \end{aligned}$$

and,

$$\begin{aligned} \varphi _{i,j}(x_1,x_2) = 2\sin (i\pi x_1)\sin (j\pi x_2). \end{aligned}$$
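These eigenpairs can be checked numerically. The sketch below (our own verification, not part of the paper's method) uses a tensor trapezoidal rule on a uniform grid to confirm the \(L^2(\Omega )\)-orthonormality of the \(\varphi _{i,j}\):

```python
import numpy as np

def phi(i, j, X1, X2):
    """Eigenfunction phi_{i,j}(x1, x2) = 2 sin(i*pi*x1) sin(j*pi*x2)."""
    return 2.0 * np.sin(i * np.pi * X1) * np.sin(j * np.pi * X2)

def inner(u, v, h):
    """Trapezoidal approximation of the L^2(Omega) inner product on [0,1]^2."""
    w = np.ones(u.shape[0]); w[0] = w[-1] = 0.5   # trapezoidal weights
    W = np.outer(w, w)
    return float(np.sum(W * u * v)) * h * h

N = 400
x = np.linspace(0.0, 1.0, N + 1)
X1, X2 = np.meshgrid(x, x, indexing="ij")
h = 1.0 / N

print(inner(phi(1, 1, X1, X2), phi(1, 1, X1, X2), h))  # ~ 1 (normalization)
print(inner(phi(1, 1, X1, X2), phi(2, 1, X1, X2), h))  # ~ 0 (orthogonality)
```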

6.1 Zonal Sensors

Let us take a zonal sensor \((D,f)\) with \(D = [c_1,c_2]\times [c_3,c_4] \subset \Omega \) and,

$$\begin{aligned} f(x_1,x_2) = \cos (\sqrt{3}\pi x_1)\sin (\sqrt{2}\pi x_2). \end{aligned}$$

Proposition 23

The sensor \((D,f)\) is gradient \(\omega \)-strategic if, and only if,

$$\begin{aligned}{} & {} \gamma _1 \langle \chi _{_\omega }^*u_1,\varphi _{i,j}\rangle _{_{E}} + \gamma _2 \langle \chi _{_\omega }^*u_2,\varphi _{i,j}\rangle _{_{E}}= 0, \\{} & {} \forall i,j\in {\mathbb {N}}^*\times {\mathbb {N}}^* \implies (u_1,u_2) = (0_{_{L^2(\omega )}},0_{_{L^2(\omega )}}), \end{aligned}$$

where,

$$\begin{aligned} \gamma _1 = i\displaystyle \iint _{D}\cos (i\pi x_1) \sin (j\pi x_2)f(x_1,x_2){\textrm{d}}x_1{\textrm{d}}x_2, \end{aligned}$$

and,

$$\begin{aligned} \gamma _2 = j\displaystyle \iint _{D}\sin (i\pi x_1) \cos (j\pi x_2)f(x_1,x_2){\textrm{d}}x_1{\textrm{d}}x_2. \end{aligned}$$

Proof

In this case, \(p=1\), \(r_j=1\), and \(n=2\). Hence,

$$\begin{aligned} M_{ij}^{s}= & {} \left( \langle \partial _{x_s}\varphi _{i,j}, f\rangle _{_{L^2(D)}} \right) _{_{1\times 1}} \ \\ \text{ and } \ u_{ij}^s= & {} \langle \chi _{_\omega }^*u_s, \varphi _{i,j} \rangle _{_{E}}, \ \forall i,j\in {\mathbb {N}}^*\times {\mathbb {N}}^*, \ s\in \left\{ 1,2\right\} . \end{aligned}$$

One can see that:

$$\begin{aligned} \langle \partial _{x_1} \varphi _{i,j},f\rangle _{_{L^2(D)}} = 2i\pi \displaystyle \iint _{_D} \cos (i\pi x_1)\sin (j\pi x_2)f(x_1,x_2){\text {d}}x_1 {\text {d}}x_2, \end{aligned}$$

and,

$$\begin{aligned} \langle \partial _{x_2} \varphi _{i,j},f\rangle _{_{L^2(D)}} = 2j\pi \displaystyle \iint _{D}\sin (i\pi x_1) \cos (j\pi x_2)f(x_1,x_2){\text {d}}x_1{\text {d}}x_2. \end{aligned}$$

Hence, using Theorem 16, \((D,f)\) is gradient \(\omega \)-strategic if, and only if,

$$\begin{aligned} M_{ij}^1u_{ij}^1 + M_{ij}^2u_{ij}^2 =0, \ i,j\in {\mathbb {N}}^*\times {\mathbb {N}}^* \implies (u_1,u_2)= (0_{_{L^2(\omega )}},0_{_{L^2(\omega )}}), \end{aligned}$$

that is, if, and only if,

$$\begin{aligned}{} & {} \gamma _1 \langle \chi _{_\omega }^*u_1,\varphi _{i,j} \rangle _{_{E}} + \gamma _2 \langle \chi _{_\omega }^*u_2,\varphi _{i,j}\rangle _{_{E}} = 0,\\{} & {} \quad \ \forall i,j\in {\mathbb {N}}^*\times {\mathbb {N}}^* \implies (u_1,u_2) = (0_{_{L^2(\omega )}},0_{_{L^2(\omega )}}). \end{aligned}$$

The proof is complete. \(\square \)
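The quantities \(\gamma _1\) and \(\gamma _2\) are straightforward to evaluate numerically. In the sketch below (ours; the corner values \(c_1 = 0.2\), \(c_2 = 0.8\), \(c_3 = 0.2\), \(c_4 = 0.8\) are hypothetical, since D is left generic in the text) the integrand is separable, so each double integral over D factors into two one-dimensional trapezoidal quadratures:

```python
import numpy as np

def trap(vals, h):
    """1-D trapezoidal rule with uniform step h."""
    return h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

def gamma(i, j, which, c=(0.2, 0.8, 0.2, 0.8), N=2000):
    """gamma_1 (which=1) or gamma_2 (which=2) for the sensor
    f(x1, x2) = cos(sqrt(3) pi x1) sin(sqrt(2) pi x2) on D = [c1,c2]x[c3,c4].
    The integrand is separable, so each factor is integrated on its own."""
    x1 = np.linspace(c[0], c[1], N + 1)
    x2 = np.linspace(c[2], c[3], N + 1)
    h1 = (c[1] - c[0]) / N
    h2 = (c[3] - c[2]) / N
    if which == 1:
        f1 = np.cos(i * np.pi * x1) * np.cos(np.sqrt(3) * np.pi * x1)
        f2 = np.sin(j * np.pi * x2) * np.sin(np.sqrt(2) * np.pi * x2)
        return i * trap(f1, h1) * trap(f2, h2)
    f1 = np.sin(i * np.pi * x1) * np.cos(np.sqrt(3) * np.pi * x1)
    f2 = np.cos(j * np.pi * x2) * np.sin(np.sqrt(2) * np.pi * x2)
    return j * trap(f1, h1) * trap(f2, h2)

for (i, j) in [(1, 1), (1, 2), (2, 1)]:
    print((i, j), gamma(i, j, 1), gamma(i, j, 2))
```

Checking that no pair \((\gamma _1, \gamma _2)\) degenerates across the modes of interest is then a finite computation for any fixed truncation.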

Let us now introduce the set:

$$\begin{aligned} {\tilde{K}} = \left\{ f \in (E)^2 \ | \ f= 0 \ \text{ in } \ \Omega \setminus \omega \right\} \bigcap \left\{ \nabla h \ | \ h \in H^1_0(\Omega )\right\} . \end{aligned}$$

Using Lemma 20, for any \(h = (h_1,h_2)\) and \(g = (g_1,g_2)\) in \({\tilde{K}}\), the expression:

$$\begin{aligned} \langle h,g \rangle _{_{{\tilde{K}}}} = \displaystyle \int _{0}^{T}\langle {\mathcal {S}}_\alpha (t) \sum _{s=1}^{2}\partial _{x_s} h_s,f \rangle _{_{L^2(D)}}\langle {\mathcal {S}}_\alpha (t)\sum _{s=1}^{2} \partial _{x_s}g_s,f \rangle _{_{L^2(D)}}{\text {d}}t, \end{aligned}$$

defines a scalar product whenever the system (32) is approximately G-observable in \(\omega \) and,

$$\begin{aligned} \Vert h\Vert _{_{{\tilde{K}}}} = \left( \displaystyle \int _{0}^{T}\langle {\mathcal {S}}_\alpha (t) \sum _{s=1}^{2}\partial _{x_s} h_s, f \rangle _{_{L^2(D)}}^2{\text {d}}t\right) ^{\frac{1}{2}}, \end{aligned}$$

is the associated norm. Keeping in mind formula (14), we can write the adjoint system as follows:

$$\begin{aligned} \left\{ \begin{array}{rlll} ^{^{RL}}{\mathcal {D}}_{T^-}^{^\alpha }\psi _1(x_1,x_2,t) &{}=&{} \left( \partial _{x_1}^2 + \partial _{x_2}^2 \right) \psi _1(x_1,x_2,t) &{} (x_1,x_2,t)\in Q_T,\\ &{} &{} - \chi _{_D}(x_1,x_2)f(x_1,x_2)\left\langle {\mathcal {S}}_\alpha (t) \displaystyle \sum \limits _{s=1}^{2}\partial _{x_s}(\chi _{_\omega }^*h_s), f\right\rangle _{_{L^2(D)}}, &{} \alpha \in ]0,1],\\ \psi _1(\xi _1,\xi _2,t)&{} =&{} 0, &{} (\xi _1,\xi _2,t)\in \Sigma _T, \\ \lim \limits _{t\rightarrow T^-}{\mathcal {I}}_{_{T^-}}^{^{1-\alpha }}\psi _1(x_1,x_2,t) &{}=&{} 0, &{} (x_1,x_2)\in \Omega . \end{array}\right. \end{aligned}$$

It follows from Theorem 22 that the equation \(\Lambda h = (\chi _{_\omega }^{n})^*\chi _{_\omega }^{n} \nabla {\mathcal {I}}_{_{T^-}}^{1-\alpha }\psi _1(0)\) possesses one and only one solution in \({\tilde{K}}\).

6.2 Pointwise Sensors

Now we reconsider system (32) but augmented with the output:

$$\begin{aligned} z(t) = y(b_1,b_2,t), \end{aligned}$$
(33)

where \((b_1,b_2)\) is the sensor location. Hence, from (4) and (33), we have,

$$\begin{aligned} C{\mathcal {S}}_\alpha (t)y_0 = \displaystyle \sum _{i,j=1}^{+\infty } E_{\alpha }(-\lambda _{i,j}t^{\alpha })\langle y_0, \varphi _{i,j}\rangle \varphi _{i,j}(b_1,b_2). \end{aligned}$$

Note that the pointwise sensor has an unbounded observation operator. Nevertheless, \(|\varphi _{i,j}|\le 2\) for all \(i,j\in {\mathbb {N}}^*\times {\mathbb {N}}^*\), \(E_\alpha (\cdot )\) is continuous, and there exists \(C>0\) such that \(|E_\alpha (-\lambda _{i,j}t^\alpha )|\le \dfrac{C}{1 + |\lambda _{i,j}|t^\alpha }\) (see [34]). Therefore, the admissibility condition (6) is satisfied for the pointwise sensor.
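The Mittag-Leffler function \(E_\alpha \) appearing in these estimates has no closed form in general, but for moderate arguments it can be evaluated directly from its defining series \(E_\alpha (z) = \sum _{k\ge 0} z^k/\Gamma (\alpha k + 1)\). A minimal sketch (ours; for large negative arguments this series suffers from cancellation and a dedicated algorithm should be preferred):

```python
import math

def mittag_leffler(alpha, z, K=150):
    """Truncated power series E_alpha(z) = sum_{k=0}^{K} z^k / Gamma(alpha*k + 1).
    Adequate for moderate |z| and 0 < alpha <= 1; K is capped so that
    math.gamma(alpha*K + 1) stays within double-precision range."""
    s = 0.0
    for k in range(K + 1):
        s += z**k / math.gamma(alpha * k + 1.0)
    return s

# E_1 is the exponential, and E_alpha(-x) is positive and decreasing in
# x > 0 for 0 < alpha <= 1 (complete monotonicity):
print(mittag_leffler(1.0, 1.0))                               # ~ e
print(mittag_leffler(0.84, -1.0), mittag_leffler(0.84, -2.0))
```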

Proposition 24

The pointwise sensor \(\left( (b_1,b_2),\delta _{(b_1,b_2)}\right) \) is gradient \(\omega \)-strategic if, and only if,

$$\begin{aligned}{} & {} i\cos (i\pi b_1)\sin (j\pi b_2) \langle \chi _{_\omega }^*u_1, \varphi _{i,j}\rangle _{_{E}} \\{} & {} \quad + j\sin (i\pi b_1) \cos (j\pi b_2) \langle \chi _{_\omega }^*u_2,\varphi _{i,j}\rangle _{_{E}} = 0,\\{} & {} \quad \forall i,j\in {\mathbb {N}}^*\times {\mathbb {N}}^* \implies (u_1,u_2) = (0_{_{L^2(\omega )}},0_{_{L^2(\omega )}}). \end{aligned}$$

Proof

Similar to the proof of Proposition 23. \(\square \)

For any \(h=(h_1,h_2)\) in \({\tilde{K}}\), if the system (32) is approximately G-observable in \(\omega \), then:

$$\begin{aligned} \Vert h\Vert _{_{{\tilde{K}}}} = \left( \displaystyle \int _{0}^{T} \left( {\mathcal {S}}_\alpha (t) \sum _{s=1}^{2} \partial _{x_s}(\chi _{_\omega }^*h_s)\right) ^2(b_1,b_2) {\text {d}}t\right) ^{\frac{1}{2}}, \end{aligned}$$

defines a norm in \({\tilde{K}}\). Let us write the adjoint system:

$$\begin{aligned} \left\{ \begin{array}{rlll} ^{^{RL}}{\mathcal {D}}_{T^-}^{^\alpha }\psi _2(x_1,x_2,t) &{}=&{} \left( \partial _{x_1}^2 + \partial _{x_2}^2 \right) \psi _2(x_1,x_2,t) &{} (x_1,x_2,t)\in Q_T,\\ &{} &{} - \delta _{(b_1,b_2)}(x_1,x_2)\left( {\mathcal {S}}_\alpha (t) \displaystyle \sum \limits _{s=1}^{2}\partial _{x_s}(\chi _{_\omega }^*h_s)\right) (b_1,b_2), &{} \alpha \in ]0,1],\\ \psi _2(\xi _1,\xi _2,t)&{} =&{} 0, &{} (\xi _1,\xi _2,t)\in \Sigma _T, \\ \lim \limits _{t\rightarrow T^-}{\mathcal {I}}_{_{T^-}}^{^{1-\alpha }}\psi _2(x_1,x_2,t) &{}=&{} 0, &{} (x_1,x_2)\in \Omega . \end{array}\right. \end{aligned}$$

It follows from Theorem 22 that the equation \(\Lambda h = (\chi _{_\omega }^{n})^*\chi _{_\omega }^{n} \nabla {\mathcal {I}}_{_{T^-}}^{1-\alpha }\psi _2(0)\) has one and only one solution.

7 Numerical simulations

In this section, we illustrate the adopted method for solving the gradient reconstruction problem by presenting two examples that show its efficiency. In order to solve equation (30), we compute the components of the operator \(\Lambda \) with respect to some orthonormal basis \(\left\{ {\overline{\varphi }}_i\right\} _{_{i\in {\mathbb {N}}^*}}\) of \(E^n\):

$$\begin{aligned} \Lambda _{ij}:= \langle \Lambda {\overline{\varphi }}_i, {\overline{\varphi }}_j\rangle _{_{(E)^n}}. \end{aligned}$$

We know that \(\left\{ \varphi _{i}\right\} _{_{i\in {\mathbb {N}}^*}}\) is an orthonormal basis of E. Then, by setting \({\overline{\varphi }}_{i,k} = \left( 0,\ldots ,\varphi _{i},0,\ldots ,0\right) \in E^{^n}\), where \(\varphi _{i}\) is at the k-th place, we have that \(\left\{ {\overline{\varphi }}_{i,k}\right\} _{\begin{array}{c} i\ge 1 \\ 1\le k \le n \end{array}}\) is an orthonormal basis of \(E^n\). From now on, by rearranging the terms, we denote \(\left\{ {\overline{\varphi }}_{i,k}\right\} _{\begin{array}{c} i\ge 1 \\ 1\le k \le n \end{array}}\) by \(\left\{ {\overline{\varphi }}_{i}\right\} _{i\in {\mathbb {N}}^*}\). This is possible since the mapping:

$$\begin{aligned} \begin{array}{lllll} g &{} : &{} {\mathbb {N}}^*\times \llbracket 1, n \rrbracket &{} \longrightarrow &{} {\mathbb {N}}^*,\\ &{} &{} (q,d)&{} \longmapsto &{} n(q-1)+d, \end{array} \end{aligned}$$

is one-to-one. Equation (30) can now be approximated by

$$\begin{aligned} \displaystyle \sum _{l=1}^{M}\Lambda _{il}{\tilde{\varphi }}_{0,l} = {\tilde{\Theta }}_{i}, \quad i = 1,\ldots ,M, \end{aligned}$$
(34)

with \(M\in {\mathbb {N}}^*\), \({\tilde{\varphi }}_{0,l}=\langle {\tilde{\varphi }}_0, {\overline{\varphi }}_l \rangle _{_{E^n}}\), and \({\tilde{\Theta }}_i = \langle (\chi _{_\omega }^{n})^*\chi _{_\omega }^{n} \nabla {\mathcal {I}}_{_{T^-}}^{1-\alpha } \Theta (0),{\overline{\varphi }}_i \rangle _{_{(E)^n}}\). We know that:

$$\begin{aligned} C{\mathcal {S}}_\alpha (t)\nabla ^*{\overline{\varphi }}_i = \displaystyle \sum _{k,l = 1}^{\infty } E_\alpha (-\lambda _{k,l} t^\alpha ) \langle \nabla ^*{\overline{\varphi }}_i, \varphi _{k,l} \rangle _{_{E}}C\varphi _{k,l}, \end{aligned}$$
(35)

and, from (31) and (35), we obtain that:

$$\begin{aligned} \begin{array}{llll} \langle \Lambda {\overline{\varphi }}_i,{\overline{\varphi }}_j \rangle _{_{(E)^n}}&{} = &{} \displaystyle \sum _{k,l,r,s=1}^{\infty } \int _{0}^{T} E_\alpha (-\lambda _{k,l}t^\alpha ) E_\alpha (-\lambda _{r,s}t^\alpha ){\text {d}}t \\ &{}\times &{}\langle \nabla ^*{\overline{\varphi }}_i, \varphi _{k,l} \rangle _{_{E}}\langle \nabla ^*{\overline{\varphi }}_j, \varphi _{r,s} \rangle _{_{E}}C\varphi _{k,l}C\varphi _{r,s}. \end{array} \end{aligned}$$

To sum up, the reconstruction method is given by Algorithm 1.

figure a
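To give a concrete flavour of Algorithm 1, the sketch below (our own illustration, not the authors' MATLAB implementation) assembles a truncated matrix \(\Lambda \) for the one-dimensional Dirichlet Laplacian with a pointwise sensor at \(b = 0.2\) and solves the linear system (34) for a synthetic right-hand side. For simplicity we take \(\alpha = 1\), for which \(E_1(z) = e^z\) and the time integrals have a closed form; for fractional \(\alpha \) the exponentials would be replaced by Mittag-Leffler evaluations.

```python
import numpy as np

# Truncation sizes: M gradient-basis terms, K spectral terms.
M, K, T, b = 8, 40, 1.0, 0.2

y = np.linspace(0.0, 1.0, 2001)
h = y[1] - y[0]
w = np.ones_like(y); w[0] = w[-1] = 0.5            # trapezoidal weights

k = np.arange(1, K + 1)
mu = (k * np.pi) ** 2                               # eigenvalues of -d^2/dy^2
phi = np.sqrt(2.0) * np.sin(np.outer(k, np.pi * y))
dphi = np.sqrt(2.0) * (k[:, None] * np.pi) * np.cos(np.outer(k, np.pi * y))

# a[i, m] = <phi_i, phi_m'>_{L^2(0,1)}, computed by the trapezoidal rule.
a = (phi[:M] * w) @ dphi.T * h

C = np.sqrt(2.0) * np.sin(k * np.pi * b)            # C phi_m = phi_m(b)

# W[m, r] = int_0^T e^{-mu_m t} e^{-mu_r t} dt, in closed form for alpha = 1.
S = mu[:, None] + mu[None, :]
W = (1.0 - np.exp(-S * T)) / S

B = a * C                                           # B[i, m] = a[i, m] * C_m
Lam = B @ W @ B.T                                   # truncated HUM operator

# Solve the truncated system (34) for a synthetic right-hand side.
c = np.ones(M)
rhs = Lam @ c
sol, *_ = np.linalg.lstsq(Lam, rhs, rcond=None)
print(np.linalg.norm(Lam @ sol - rhs))              # small: system is solvable
```

The key structural point is that \(\Lambda = BWB^{\top }\) inherits symmetry and positive semi-definiteness from the Gram matrix \(W\), which is what makes the truncated system well posed.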

7.1 Example with a Pointwise Sensor

In our first example, we take \(\Omega = ]0,1[\) and we consider the following time-fractional system:

$$\begin{aligned} \left\{ \begin{array}{llll} ^{^C}{\mathcal {D}}_{0^+}^{^{0.84}}x(y,s) = \partial _y^{^2}x(y,s), &{} (y,s)\in Q_1, \\ x(0,s)=x(1,s) = 0, &{} s\in [0,1], \\ x(y,0) = x_0(y), &{} y\in \Omega , \end{array}\right. \end{aligned}$$
(36)

where the output function corresponds to the sensor \((b,\delta _b)\) with \(b= 0.2\). We set \(\omega = [0.00\ ,\ 0.25 ]\) to be the desired subregion and \(g(y) = 2\pi \left( \cos (y\pi )^2 - \sin (y\pi )^2\right) \cos (y\pi )\sin (y\pi )\) to be the initial gradient that will be reconstructed in \(\omega \), whereas the initial state, supposedly unknown, is \(x_0(y) = \left( \cos (y\pi )\sin (y\pi )\right) ^2\). After running the proposed algorithm (Algorithm 1), we obtain the reconstructed initial gradient \({\tilde{\varphi }}_0\). As can be seen in Fig. 2, the graphs of the initial gradient and of the recovered one are close to one another in the desired region \(\omega \), with a reconstruction error:

$$\begin{aligned} \Vert g - {\tilde{\varphi }}_0 \Vert ^{^2}_{_{L^{^2}(\omega )}} = 9.47\times 10^{-4}. \end{aligned}$$
Fig. 2
figure 2

The initial gradient vector and the reconstructed one in \(\Omega \) for the example of Sect. 7.1

This shows that the numerical approach is successful. Note that the proposed algorithm does not take into account the values of the initial gradient outside of \(\omega \).
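As a sanity check on the data of this example, one can verify that the function \(g\) being reconstructed is indeed the spatial derivative of the (supposedly unknown) initial state \(x_0\); both reduce to \((\pi /2)\sin (4\pi y)\). A small sketch of this check (ours):

```python
import numpy as np

y = np.linspace(0.0, 1.0, 20001)
h = y[1] - y[0]

x0 = (np.cos(np.pi * y) * np.sin(np.pi * y)) ** 2           # initial state
g = 2 * np.pi * (np.cos(np.pi * y) ** 2 - np.sin(np.pi * y) ** 2) \
    * np.cos(np.pi * y) * np.sin(np.pi * y)                 # stated gradient

# Central finite differences of x0 should match g in the interior.
dx0 = np.gradient(x0, h)
err = float(np.max(np.abs(dx0[1:-1] - g[1:-1])))
print(err)  # small: g is the derivative of x0
```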

Figure 3 shows how the reconstruction error evolves with the placement of the sensor. As can be seen, there are many positions where the error is large or even blows up; in such cases, we say that the sensor is non-strategic in \(\omega \). Moreover, it is clear that the optimal location of the sensor, in the sense of minimizing the reconstruction error, is \(b=0.2\).

Fig. 3
figure 3

Reconstruction error versus sensor location for the example of Sect. 7.1

7.2 Example with a Zonal Sensor

Let us now consider the following fractional system:

$$\begin{aligned} \left\{ \begin{array}{llll} ^{^C}{\mathcal {D}}_{0^+}^{^{0.5}}x(y,s) = \partial _y^2x(y,s), &{} (y,s)\in Q_2, \\ x(0,s)=x(1,s) = 0, &{} s\in [0,2], \\ x(y,0) = x_0(y), &{} y\in \Omega , \end{array}\right. \end{aligned}$$
(37)

and take the measurements with a zonal sensor \((D,f)\) with \(D = [0.9\ ,\ 1.0]\), \(f=\chi _{_{D}}\), and the subregion \(\omega = [0.35\ ,\ 0.65]\). We choose the initial state \(x_0(y) = (y(1-y))^2\), whose gradient \(g(y) = 2y(1-y)(1-2y)\) is to be recovered; both are assumed to be unknown. We see in Fig. 4 that the plot of the initial gradient is nearly identical to the plot of the reconstructed one. In fact, the reconstruction error takes the value:

$$\begin{aligned} \Vert g - {\tilde{\varphi }}_0 \Vert ^{^2}_{_{L^{^2}(\omega )}} = 1.26\times 10^{-6}. \end{aligned}$$
Fig. 4
figure 4

The initial gradient vector and the reconstructed one in \(\Omega \) for the example of Sect. 7.2

As can be seen in the two examples, the zonal sensor yields smaller reconstruction errors than the pointwise one. This might be due to the fact that a zonal sensor has a bounded observation operator and a geometrical support with non-vanishing Lebesgue measure: the measurements are taken over a much larger set than for a pointwise sensor, which provides measurements at a single point b only. Therefore, in the case of a bounded observation operator, one has more information on the system than with an unbounded one. These remarks are based on observations made while implementing the proposed algorithm; more theoretical studies on strategic sensors are needed to confirm, theoretically, the observations of our numerical simulations.

8 Conclusion

We dealt with the problem of regional gradient observability for linear time-fractional systems given in terms of the Caputo derivative. We developed a method that allows one to recover the initial gradient vector in the desired region \(\omega \), and we gave a complete characterization of regional gradient observability by means of gradient strategic sensors. Even though we studied two particular kinds of sensors, namely pointwise and zonal ones, similar results can be obtained for other kinds of sensors, for instance filament ones. The numerical simulations presented in this paper are very satisfying regarding the error and the computation time. We implemented the considered examples in MATLAB R2014b on a 2.5 GHz Core i5 computer with 8 GB of RAM.

The strength of the HUM approach lies in the fact that it can be simulated numerically, providing the regional initial gradient with a satisfactory control of the error. Moreover, it can be adapted to real-world applications. One weakness appears when the dynamics A possesses an eigenvalue of infinite multiplicity: in that case, an infinite number of sensors would be needed to observe the system, which can never be achieved in practice. For future work, we plan to extend the results of this paper to semilinear fractional systems. Regarding the numerical simulations, we have considered academic examples in order to illustrate the obtained theoretical results; we believe that our results can be applied and useful in real-world situations, a question that will be addressed elsewhere.