1 Introduction

Fractional calculus, which deals with differentiation and integration of arbitrary (noninteger) order, dates back more than 300 years. Despite this long history, it drew little attention from researchers for a long time because of its complexity and the difficulty of applying it. In recent decades, however, the theory of fractional calculus has developed mainly as a purely theoretical field of mathematics, yet it has also been applied in various fields such as rheology, viscoelasticity, electrochemistry, and diffusion processes; see, for instance, [1-7] and the references therein.

It is well known that, compared with integer-order models, fractional-order calculus provides a more accurate instrument for describing the memory and hereditary properties of various processes. Taking these facts into account, incorporating fractional-order calculus into a neural network model can better describe the dynamical behavior of the neurons, and many efforts have been made in this direction. In [8], Arena et al. first proposed a fractional-order cellular neural network model, and chaotic behavior in noninteger-order cellular neural networks was discussed in [9]. In [10], the author presented a fractional-order three-cell network that exhibits limit cycles and stable orbits for different parameter values. Besides, it is important to point out that fractional-order neural networks are expected to play an important role in parameter estimation [11-13]. Therefore, as noted in [14], it is very significant and interesting to study fractional-order neural networks both in theoretical research and in practical applications.

Recently, the dynamic analysis of fractional-order neural networks has received considerable attention, and some excellent results have been presented in [15-24]. Zhang et al. [15] discussed chaotic behaviors in fractional-order three-dimensional Hopfield neural networks. Moreover, a fractional-order four-cell cellular neural network was presented, and its complex dynamical behavior was investigated using numerical simulations in [16]. Kaslik and Sivasundaram [17] considered nonlinear dynamics and chaos in fractional-order neural networks. There have also been advances in the stability analysis of fractional-order neural networks. The Mittag-Leffler stability and generalized Mittag-Leffler stability of fractional-order neural networks were investigated in [18-21]. The α-stability and α-synchronization of fractional-order neural networks were demonstrated in [22]. Yang et al. [23] discussed the finite-time stability of fractional-order neural networks with delay. Kaslik and Sivasundaram [24] investigated the dynamics of fractional-order delay-free Hopfield neural networks, including stability, multistability, bifurcations, and chaos. A stability analysis of fractional-order Hopfield neural networks with discontinuous activation functions was carried out in [25]. In [26] and [27], the global Mittag-Leffler stability and asymptotic stability were considered for fractional-order neural networks with delays and impulsive effects. The uniform stability issue was investigated in [28]. In addition, Wu et al. [29] discussed the global stability of the fractional-order interval projection neural network.

Since Pecora and Carroll [30] first put forward chaos synchronization in 1990, more and more researchers have paid attention to synchronization. The increasing interest stems from its potential applications in bioengineering [31], secure communication [32], and cryptography [33]. Synchronization exists in various forms, such as complete synchronization [34], anti-synchronization [35], lag synchronization [36], generalized synchronization [37], phase synchronization [38], projective synchronization [39-41], and so on. Among them, projective synchronization, characterized by a scaling factor by which the two systems synchronize proportionally, is one of the most interesting problems. Moreover, it can be used to extend binary digital communication to M-nary digital communication for achieving faster communication [42]. Very recently, some results on synchronization of fractional-order neural networks have been proposed in [26, 43-49]. In [26], the complete synchronization of fractional-order chaotic neural networks was considered via a nonimpulsive linear controller. Several results on chaotic synchronization of fractional-order neural networks have been proposed in [43-45]. In addition, Wang et al. [46] investigated projective cluster synchronization for fractional-order coupled-delay complex networks via adaptive pinning control. In [47], the global projective synchronization of fractional-order neural networks was discussed, and several control strategies were given to ensure complete synchronization, anti-synchronization, and stabilization of the addressed neural networks. Razminia et al. [48] considered the synchronization of fractional-order Rössler systems via active control.
By using the approach in [47], Bao and Cao [49] considered the projective synchronization of fractional-order memristor-based neural networks, and some sufficient criteria were derived to ensure the synchronization goal. However, most reports on projective synchronization of neural network systems have relied on the direct Lyapunov method, which can be rather complicated; in this paper, we instead apply Mittag-Leffler stability theory to achieve synchronization of fractional-order systems. In addition, it should be pointed out that LMI analysis techniques were not applied in the above works to develop the synchronization criteria, and hence those results have a certain degree of conservatism.

Motivated by the previous work, in this paper, our aim is to investigate the global Mittag-Leffler projective synchronization of fractional-order neural networks by using the LMI analysis approach. The main novelty of our contribution lies in three aspects: (1) a new differential inequality for the Caputo fractional derivative of a quadratic form, with \(0<\alpha<1\), is established and applied to derive the synchronization conditions; (2) a hybrid control scheme is designed by combining open-loop control and adaptive control, and the unknown control parameters are determined by adaptive fractional update laws; (3) by applying the Mittag-Leffler stability theorem in [50, 51], global Mittag-Leffler synchronization conditions are presented in terms of LMIs to ensure the synchronization of fractional-order neural networks.

The rest of this paper is organized as follows. In Section 2, some definitions and a lemma are introduced, and a new differential inequality of the Caputo fractional derivatives of the quadratic form, with \(0<\alpha<1\), is presented. A model description is given in Section 3. Some sufficient conditions for Mittag-Leffler projective synchronization are derived in Section 4. Section 5 presents some numerical simulations. Some general conclusions are drawn in Section 6.

2 Preliminaries

In this section, some basic definitions and lemmas about fractional calculations are presented.

Definition 2.1

([52])

The fractional integral of order α for a function f is defined as

$$ I^{\alpha}f(t)=\frac{1}{\Gamma(\alpha)} \int_{t_{0}}^{t}\frac{f(\tau )}{(t-\tau)^{1-\alpha}}\,d\tau, $$

where \(t\geqslant t_{0}\) and \(\alpha>0 \).
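As an illustration (ours, not part of the original paper), Definition 2.1 can be evaluated numerically. The substitution \(u=(t-\tau)^{\alpha}\) removes the weak singularity of the kernel, after which a plain midpoint rule suffices; the helper name `frac_integral` is our own. For \(f(t)=1\) the integral has the closed form \(I^{\alpha}1 = t^{\alpha}/\Gamma(\alpha+1)\), against which the code can be checked.

```python
import math

def frac_integral(f, t, alpha, t0=0.0, n=20000):
    # Riemann-Liouville integral I^alpha f(t) from Definition 2.1.
    # Substituting u = (t - tau)^alpha turns the weakly singular
    # kernel (t - tau)^(alpha - 1) dtau into du / alpha.
    umax = (t - t0) ** alpha
    h = umax / n
    total = 0.0
    for k in range(n):
        u = (k + 0.5) * h                 # midpoint rule in the variable u
        tau = t - u ** (1.0 / alpha)
        total += f(tau) * h
    return total / (alpha * math.gamma(alpha))
```

For example, `frac_integral(lambda s: 1.0, 1.0, 0.5)` agrees with \(1/\Gamma(3/2)\) to high accuracy.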

Definition 2.2

([52])

Caputo’s fractional derivative of order α of a function \(f\in C^{n}([t_{0}, +\infty),R) \) is defined by

$$ D^{\alpha}f(t)=\frac{1}{\Gamma(n-\alpha)} \int_{t_{0}}^{t}\frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha-n+1}}\,d\tau, $$

where \(t\geqslant t_{0}\), and n is a positive integer such that \(n-1<\alpha<n \). In particular, when \(0<\alpha<1\),

$$ D^{\alpha}f(t)=\frac{1}{\Gamma(1-\alpha)} \int_{t_{0}}^{t}\frac {f'(\tau)}{(t-\tau)^{\alpha}}\,d\tau. $$
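As a numerical sketch (ours, not from the paper), this form of the Caputo derivative can be discretized with the widely used L1 scheme on a uniform grid. For \(f(t)=t^{2}\) with \(t_{0}=0\), the exact value is \(D^{\alpha}t^{2}=2t^{2-\alpha}/\Gamma(3-\alpha)\), against which the scheme can be checked.

```python
import math

def caputo_l1(f, t, alpha, n=2000):
    # L1 scheme for the Caputo derivative of order 0 < alpha < 1
    # on the uniform grid t_k = k*h, k = 0..n, with t_0 = 0.
    h = t / n
    fv = [f(k * h) for k in range(n + 1)]
    acc = 0.0
    for k in range(n):
        # L1 weight b_k = (k+1)^(1-alpha) - k^(1-alpha)
        b = (k + 1) ** (1 - alpha) - k ** (1 - alpha)
        acc += b * (fv[n - k] - fv[n - k - 1])
    return acc / (h ** alpha * math.gamma(2 - alpha))
```

The scheme has error of order \(h^{2-\alpha}\), so a few thousand grid points already give three or four correct digits here.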

Lemma 2.1

([52])

Let \(\Omega=[a,b]\) be an interval on the real axis R, and let \(n=[\alpha]+1\) for \(\alpha\notin N \) or \(n=\alpha \) for \(\alpha\in N \). If \(y\in C^{n}[a,b] \), then

$$ I^{\alpha}D^{\alpha}y(t)=y(t)-\sum_{k=0}^{n-1} \frac{y^{(k)}(a)}{k!}(t-a)^{k}. $$

In particular, if \(0< \alpha<1\) and \(y(t)\in C^{1}[a,b] \), then

$$ I^{\alpha}D^{\alpha}y(t)=y(t)-y(a). $$

Lemma 2.2

([47])

Assume that \(x \in C^{1}[a, b]\) satisfies

$$ D^{\alpha}x(t) = f\bigl(t,x(t)\bigr)\geq0 $$

for all \(t\in[a,b] \). Then \(x(t)\) is nondecreasing for \(0< \alpha<1\). If

$$ D^{\alpha}x(t) = f\bigl(t,x(t)\bigr)\leq0, $$

then \(x(t)\) is nonincreasing for \(0< \alpha<1\).

Aguila-Camacho et al. [53] established the fractional-order differential inequality \(\frac{1}{2}{}^{\mathrm{C}}_{t_{0}}D^{\alpha}_{t}x^{2}(t)\leq x(t){}^{\mathrm{C}}_{t_{0}}D^{\alpha}_{t}x(t)\) for the Caputo fractional derivative with \(0< \alpha< 1\). In Lemma 2.3, following the proof line of [53], we generalize this inequality: we prove that \(\frac{1}{2}D^{\alpha}x^{T}(t) P x(t)\leq x^{T}(t)PD^{\alpha}x(t)\) for all \(\alpha\in(0,1)\), where P is a positive definite matrix. The differential inequality in Lemma 2.3 is thus more general, reducing to that of [53] in the scalar case \(P=1\).

Lemma 2.3

Suppose \(x(t)=(x_{1}(t),x_{2}(t),\ldots,x_{n}(t))^{T}\in R^{n} \) is a vector, where \(x_{i}(t) \) are continuous and differentiable functions for all \(i=1,2,\ldots,n\), and \(P\in R^{n\times n}\) is a positive definite matrix. Then, for a general quadratic form function \(x^{T}(t)Px(t) \), we have

$$ \frac{1}{2}D^{\alpha}x^{T}(t) P x(t)\leq x^{T}(t)PD^{\alpha}x(t)\quad \forall\alpha\in(0,1). $$
(1)

Proof

To keep the proof self-contained, we recall some steps of the proof from Aguila-Camacho et al. [53], which should make the argument easier to follow.

It is easy to see that inequality (1) is equivalent to

$$ x^{T}(t)PD^{\alpha}x(t)-\frac{1}{2}D^{\alpha}x^{T}(t) P x(t)\geq0\quad \forall\alpha\in(0,1). $$
(2)

According to Definition 2.2, we have

$$\begin{aligned}& D^{\alpha}x(t)=\frac{1}{\Gamma(1-\alpha)} \int_{t_{0}}^{t}\frac {x'(\tau)}{(t-\tau)^{\alpha}}\,d\tau, \end{aligned}$$
(3)
$$\begin{aligned}& \frac{1}{2}D^{\alpha}\bigl(x^{T}(t)Px(t)\bigr)= \frac{1}{2\Gamma(1-\alpha)} \int _{t_{0}}^{t}\frac{[x^{T}(\tau)Px(\tau)]'}{(t-\tau)^{\alpha}}\,d\tau \\& \hphantom{\frac{1}{2}D^{\alpha}\bigl(x^{T}(t)Px(t)\bigr)}=\frac{1}{\Gamma(1-\alpha)} \int_{t_{0}}^{t}\frac{x^{T}(\tau)P\dot{x}(\tau)}{(t-\tau)^{\alpha}}\,d\tau. \end{aligned}$$
(4)

Substituting (3) and (4) into (2), we have

$$ \frac{1}{\Gamma(1-\alpha)} \int_{t_{0}}^{t}\frac{(x^{T}(t)-x^{T}(\tau))P\dot{x}(\tau)}{(t-\tau)^{\alpha}}\,d\tau\geq0. $$
(5)

For convenience, we introduce the auxiliary variable \(y(\tau)=x(t)-x(\tau)\). Since \(\dot{y}(\tau)=-\dot{x}(\tau)\), inequality (5) becomes

$$ -\frac{1}{\Gamma(1-\alpha)} \int_{t_{0}}^{t}\frac{y^{T}(\tau)P\dot{y}(\tau)}{(t-\tau)^{\alpha}}\,d\tau\geq0, $$

namely,

$$ \frac{1}{\Gamma(1-\alpha)} \int_{t_{0}}^{t}\frac{y^{T}(\tau)P\dot{y}(\tau)}{(t-\tau)^{\alpha}}\,d\tau\leq0. $$
(6)

Integrating (6) by parts, we see that it is equivalent to

$$ -\biggl[\frac{y^{T}(\tau)Py(\tau)}{2\Gamma(1-\alpha)(t-\tau)^{\alpha} }\biggr]\bigg|_{\tau=t}+ \frac{y^{T}(t_{0})Py(t_{0})}{2\Gamma(1-\alpha )(t-t_{0})^{\alpha}} +\frac{\alpha}{2\Gamma(1-\alpha)} \int_{t_{0}}^{t}\frac{y^{T}(\tau )Py(\tau)}{(t-\tau)^{\alpha+1}}\,d\tau\geq0. $$
(7)

Thus, proving Lemma 2.3 reduces to proving (7). Consider the first term of (7), which is singular at \(\tau=t \); we evaluate the corresponding limit:

$$\begin{aligned}& \lim_{\tau\rightarrow t}\frac{y^{T}(\tau)Py(\tau)}{2\Gamma(1-\alpha)(t-\tau)^{\alpha}} \\& \quad = \lim_{\tau\rightarrow t}\frac{(x(t)-x(\tau))^{T}P(x(t)-x(\tau))}{2\Gamma(1-\alpha)(t-\tau )^{\alpha} } \\& \quad = \lim_{\tau\rightarrow t}\frac{x^{T}(t)Px(t)-2x^{T}(t)Px(\tau)+x^{T}(\tau)Px(\tau)}{2\Gamma (1-\alpha)(t-\tau)^{\alpha}}. \end{aligned}$$
(8)

The limit (8) has the indeterminate form \(0/0\), so L’Hôpital’s rule applies and yields

$$\begin{aligned}& \lim_{\tau\rightarrow t}\frac{-2x^{T}(t)P\dot{x}(\tau)+2x^{T}(\tau)P\dot{x}(\tau)}{-2\alpha\Gamma(1-\alpha)(t-\tau)^{\alpha-1} } \\& \quad =\lim_{\tau\rightarrow t}\frac{[x^{T}(\tau)-x^{T}(t)]P\dot{x}(\tau)(t-\tau)^{1-\alpha}}{{\alpha\Gamma(1-\alpha)} }=0. \end{aligned}$$

Thus, (7) is reduced to

$$ \frac{y^{T}(t_{0})Py(t_{0})}{2\Gamma(1-\alpha)(t-t_{0})^{\alpha}} +\frac{\alpha}{2\Gamma(1-\alpha)} \int_{t_{0}}^{t}\frac{y^{T}(\tau )Py(\tau)}{(t-\tau)^{\alpha+1}}\,d\tau\geq0. $$
(9)

Inequality (9) evidently holds, since both terms on its left-hand side are nonnegative. This completes the proof. □

Remark 2.1

If the matrix P in Lemma 2.3 is taken to be the identity matrix E, then

$$ \frac{1}{2}D^{\alpha}x^{T}(t) x(t)\leq x^{T}(t)D^{\alpha}x(t)\quad \forall \alpha\in(0,1). $$

In particular, when \(x(t)\in R \) is a continuous and differentiable function, we obtain

$$ \frac{1}{2}D^{\alpha}x^{2}(t)\leq x(t)D^{\alpha}x(t) \quad \forall\alpha\in(0,1) $$

which is the scalar case \(n=1\), \(P=1\) of Lemma 2.3.
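The inequality of Lemma 2.3 can be spot-checked numerically (our own sketch, not part of the paper): apply an L1 discretization of the Caputo derivative componentwise to a smooth trajectory \(x(t)\) and compare the two sides of (1). The trajectory, the matrix P, and the grid below are arbitrary choices of ours.

```python
import math
import numpy as np

def caputo_l1(samples, h, alpha):
    # L1 approximation of the Caputo derivative at the last grid point,
    # given samples f(0), f(h), ..., f(n*h) along axis 0.
    n = len(samples) - 1
    acc = np.zeros_like(samples[0], dtype=float)
    for k in range(n):
        b = (k + 1) ** (1 - alpha) - k ** (1 - alpha)
        acc = acc + b * (samples[n - k] - samples[n - k - 1])
    return acc / (h ** alpha * math.gamma(2 - alpha))

alpha = 0.6
P = np.array([[2.0, 0.5], [0.5, 1.0]])            # positive definite (our choice)
h, n = 1e-3, 2000                                  # uniform grid on [0, 2]
ts = np.arange(n + 1) * h
xs = np.stack([np.sin(ts), np.cos(ts)], axis=1)    # test trajectory x(t) = (sin t, cos t)

# left side: (1/2) D^alpha [x^T P x];  right side: x^T P D^alpha x at t = 2
lhs = 0.5 * caputo_l1(np.einsum('ti,ij,tj->t', xs, P, xs), h, alpha)
rhs = xs[-1] @ P @ caputo_l1(xs, h, alpha)
gap = rhs - lhs    # nonnegative by Lemma 2.3
```

Up to discretization error, `gap` is strictly positive here, as the proof's integral representation (7) predicts for a nonconstant trajectory.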

3 Model description

In this section, we introduce a class of vector fractional-order neural networks as the drive system described by

$$ D^{\alpha}x(t)=-Cx(t)+Af\bigl(x(t)\bigr)+I, $$
(10)

where \(x(t)=[x_{1}(t),x_{2}(t),\ldots,x_{n}(t)]^{T}\in R^{n} \) is the state vector of the system, \(C=\operatorname{diag}(c_{1},c_{2}, \ldots,c_{n}) \) is the self-connection weight matrix with \(c_{i}\in R \) for \(i\in l=\{1,2,\ldots,n\}\), \(A=(a_{ij})_{n\times n}\) is the interconnection weight matrix, and \(f(x(t))=[f_{1}(x(t)),f_{2}(x(t)),\ldots,f_{n}(x(t))]^{T}\in R^{n}\) and \(I=[I_{1},I_{2},\ldots,I_{n}]^{T}\) denote the activation function vector and the external input vector, respectively.

The response system is described by

$$ D^{\alpha}y(t)=-Cy(t)+Af\bigl(y(t)\bigr)+I+u(t), $$
(11)

where \(y(t)=[y_{1}(t),y_{2}(t),\ldots,y_{n}(t)]^{T}\in R^{n} \) is the state vector of the response system, and \(u(t)=(u_{1}(t),u_{2}(t),\ldots,u_{n}(t))^{T}\in R^{n}\) is a control input vector.

Assumption 1

The activation functions \(f_{j} \) are Lipschitz-continuous on R, that is, there exist constants \(l_{j}>0\) (\(j\in l\)) such that

$$ \bigl\vert f_{j}(u)-f_{j}(v)\bigr\vert \leq l_{j}\vert u-v\vert $$

for all \(u, v \in R\). For convenience, we define \(L=\operatorname{diag}(l_{1},l_{2},\ldots,l_{n})\).

Definition 3.1

We say that systems (10) and (11) are projectively synchronized if there exists a nonzero constant β such that, for any two solutions \(x(t)\) and \(y(t) \) of systems (10) and (11) with different initial values \(x_{0}\) and \(y_{0} \),

$$ \lim_{t\rightarrow\infty}\bigl\Vert y(t)-\beta x(t)\bigr\Vert =0, $$

where \(\|\cdot\|\) denotes the Euclidean norm of a vector.

The synchronization error is defined by \(e(t)=y(t)-\beta x(t)\), where \(e(t)=(e_{1}(t),e_{2}(t), \ldots,e_{n}(t))^{T}\in R ^{n}\). According to Definition 3.1, the error system can be described by

$$ D^{\alpha}e(t)=-Ce(t)+A\bigl[f\bigl(y(t)\bigr)-\beta f \bigl(x(t)\bigr)\bigr]+(1-\beta)I+u(t). $$
(12)

In what follows, we will design appropriate control schemes to derive the projective synchronization conditions between systems (10) and (11).

4 Main results

In this section, we resolve the projective synchronization problem by converting it into a stability problem. More specifically, the projective synchronization of systems (10) and (11) is equivalent to the stability of the error system (12). We will prove the stability of the error system (12) under two different control schemes.

In the first control scheme, we choose the following control input \(u(t) \) in the response system:

$$ \left \{ \textstyle\begin{array}{l} u(t)=v(t)+w(t), \\ v(t)=A[\beta f(x(t))-f(\beta x(t))]+(\beta-1)I, \\ w(t)=-K(y(t)-\beta x(t)), \end{array}\displaystyle \right . $$
(13)

with \(K=\operatorname{diag}(k_{1},k_{2},\ldots,k_{n})\), where \(k_{i}>0 \) are the feedback control gains.

Remark 4.1

Note that the control scheme (13) is a hybrid control: \(v(t)\) is an open-loop control, and \(w(t)\) is a linear feedback control.

Then, applying the control scheme (13) to the error system (12), we obtain that

$$ D^{\alpha}e(t)=-Ce(t)+A\bigl[f\bigl(y(t)\bigr)-f\bigl(\beta x(t)\bigr)\bigr]-Ke(t). $$
(14)

Obviously, \(e(t)=0 \) is a trivial solution of the error system (14). Next, we prove the stability of the zero solution of the error system (14).

Theorem 4.1

Let Assumption  1 be satisfied. Suppose that there exists a positive definite matrix P such that \(B=\frac{1}{2}(PC+C^{T}P^{T}-PAL-L^{T}A^{T}P^{T}+PK+K^{T}P^{T})>0\). Then systems (10) and (11) are globally Mittag-Leffler projectively synchronized under the control scheme (13).

Proof

Construct the Lyapunov function

$$ V(t)=\frac{1}{2}e^{T}(t) P e(t). $$

Taking the time fractional-order derivative of \(V(t)\), by Lemma 2.3 we have

$$ D^{\alpha}V(t)=D^{\alpha}\frac{1}{2}e^{T}(t) P e(t)\leq e^{T}(t) PD^{\alpha}e(t). $$
(15)

Substituting \(D^{\alpha}e(t)\) from (14) into (15) yields

$$ D^{\alpha}V(t)\leq e^{T}(t)P\bigl(-Ce(t)+A\bigl[f\bigl(y(t) \bigr)-f\bigl(\beta x(t)\bigr)\bigr]-Ke(t)\bigr). $$

Based on Assumption 1, we obtain

$$\begin{aligned} D^{\alpha}V(t)&\leq e^{T}(t)P\bigl(-Ce(t)+ALe(t)-Ke(t)\bigr) \\ &=-e^{T}(t)P(C-AL+K)e(t) \\ &=-\frac {1}{2}e^{T}(t) \bigl(PC+C^{T}P^{T}-PAL-L^{T}A^{T}P^{T}+PK+K^{T}P^{T} \bigr)e(t) \\ &=-\frac{1}{2}e^{T}(t)Be(t), \end{aligned}$$

Since B is positive definite, we have \(\lambda_{\mathrm{min}}(B)\|e\|^{2}\leq e^{T}(t)Be(t)\leq \lambda_{\mathrm{max}}(B)\|e\|^{2}\), where \(\lambda_{\mathrm{min}}(B)\) and \(\lambda_{\mathrm{max}}(B)\) are the minimum and maximum eigenvalues of the matrix B, respectively.

Hence,

$$ D^{\alpha}V(t)\leq-\frac{1}{2}\lambda_{\mathrm{min}}(B)\|e \|^{2}. $$

So, according to the Mittag-Leffler stability theorem [50, 53], we get that system (14) is Mittag-Leffler stable. Namely, systems (10) and (11) are Mittag-Leffler projectively synchronized. This completes the proof. □

In the second control scheme, we choose the following control input \(u_{1}(t)\) in the response system:

$$ \left \{ \textstyle\begin{array}{l} u_{1}(t)=v_{1}(t)+w_{1}(t), \\ v_{1}(t)=A[\beta f(x(t))-f(\beta x(t))]+(\beta-1)I, \\ w_{1}(t)=-K(t)(y(t)-\beta x(t)), \\ D^{\alpha}k_{i}(t)=\sum_{j=1}^{n}e_{j}(t)P_{ji}\gamma_{i}(y_{i}-\beta x_{i}), \end{array}\displaystyle \right . $$
(16)

where \(K(t)=\operatorname{diag}(k_{1}(t),k_{2}(t),\ldots,k_{n}(t))\), and \(\gamma_{i}>0 \) are constants.

Remark 4.2

In fact, the control scheme (16) is also a hybrid control: \(v_{1}(t)\) is an open-loop control, and \(w_{1}(t)\) is an adaptive feedback control. Applying the control scheme (16) to (12), we obtain the error system

$$ D^{\alpha}e(t)=-Ce(t)+A\bigl[f\bigl(y(t)\bigr)-f\bigl(\beta x(t)\bigr)\bigr]-K(t)e(t). $$
(17)

Then we will prove that system (17) is asymptotically stable.
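Before the proof, the closed loop (17) with the adaptive law from (16) can be illustrated by a simulation sketch (entirely ours, not from the paper). We use an explicit Grünwald-Letnikov scheme applied to the shifted state, a two-neuron toy network, and for simplicity take \(P=E\), which reduces the update law to \(D^{\alpha}k_{i}=\gamma_{i}e_{i}^{2}\); every numerical value below is our own assumption.

```python
import numpy as np

alpha, h, T = 0.98, 0.01, 20.0
n = int(T / h)
A = np.array([[2.0, -0.1], [-5.0, 3.0]])   # toy interconnection weights (our choice)
beta, gamma = 2.0, 5.0                     # projective coefficient and adaptive gain rate
f = np.tanh

# Grunwald-Letnikov binomial coefficients c_j = (1 - (1+alpha)/j) c_{j-1}
c = np.empty(n + 1)
c[0] = 1.0
for j in range(1, n + 1):
    c[j] = (1.0 - (1.0 + alpha) / j) * c[j - 1]

z0 = np.array([0.4, -0.6, 0.5, -0.5, 0.05, 0.05])   # (x0, e0, k(0))
W = np.zeros((n + 1, 6))                             # W stores z(t) - z0

def F(z):
    x, e, k = z[:2], z[2:4], z[4:]
    dx = -x + A @ f(x)                                       # drive system (10), C = E, I = 0
    de = -e + A @ (f(e + beta * x) - f(beta * x)) - k * e    # error system (17)
    dk = gamma * e * e                                       # adaptive law from (16), P = E
    return np.concatenate([dx, de, dk])

# Caputo GL step on w = z - z0: sum_{j=0}^m c_j w_{m-j} = h^alpha F(z_{m-1})
for m in range(1, n + 1):
    W[m] = h ** alpha * F(z0 + W[m - 1]) - c[1:m + 1] @ W[m - 1::-1]

Z = z0 + W
e_final = Z[-1, 2:4]
```

The gains \(k_{i}(t)\) grow while the error is large and level off as \(e(t)\) shrinks, matching the qualitative behavior Theorem 4.2 below predicts.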

Theorem 4.2

Let Assumption  1 be satisfied. Suppose that there exist a positive definite matrix P and a constant matrix \(K=\operatorname{diag}(k_{1},k_{2},\ldots,k_{n})\) such that \(\Omega=\frac{1}{2}(PC+C^{T}P^{T}-PAL-L^{T}A^{T}P^{T}+PK+K^{T}P^{T})>0\). Then systems (10) and (11) are projectively synchronized by the control scheme (16).

Proof

Construct the auxiliary function

$$ V_{1}(t)=U_{1}(t)+\sum_{i=1}^{n} \frac{1}{2\gamma_{i}}\bigl(k_{i}(t)-k_{i}\bigr)^{2}, $$

where \(U_{1}(t)=\frac{1}{2}e^{T}(t) P e(t)\), and each \(k_{i}\) is an adaptive constant to be determined in the later analysis.

It follows from Lemma 2.3 and Remark 2.1 that the fractional-order derivative of \(V_{1}(t)\) can be described by

$$\begin{aligned} D^{\alpha}V_{1}(t)&=D^{\alpha}U_{1}(t)+ \sum_{i=1}^{n}\frac{1}{2\gamma _{i}}D^{\alpha} \bigl(k_{i}(t)-k_{i}\bigr)^{2} \\ & \leq e^{T}(t) PD^{\alpha}e(t)+\sum _{i=1}^{n}\frac{1}{\gamma _{i}}\bigl(k_{i}(t)-k_{i} \bigr)D^{\alpha}k_{i}(t). \end{aligned}$$
(18)

Inserting (17) into (18) and applying Assumption 1 yield

$$\begin{aligned} D^{\alpha}V_{1}(t) \leq& e^{T}(t)P\bigl(-Ce(t)+A \bigl[f\bigl(y(t)\bigr)-f\bigl(\beta x(t)\bigr)\bigr]-K(t)e(t)\bigr) \\ &{}+\sum _{i=1}^{n}\frac{1}{\gamma _{i}}\bigl(k_{i}(t)-k_{i} \bigr)D^{\alpha}k_{i}(t) \\ =&e^{T}(t)P\bigl(-C+AL-K(t)\bigr)e(t)+\sum _{i=1}^{n}\frac{1}{\gamma _{i}}\bigl(k_{i}(t)-k_{i} \bigr) \Biggl(\sum_{j=1}^{n}e_{j}(t)P_{ji} \gamma_{i}(y_{i}-\beta x_{i})\Biggr) \\ =&e^{T}(t)P\bigl(-C+AL-K(t)\bigr)e(t)+\sum _{i=1}^{n}\sum_{j=1}^{n}e_{j}(t)P_{ji} \bigl(k_{i}(t)-k_{i}\bigr) (y_{i}-\beta x_{i}) \\ =&e^{T}(t)P\bigl(-C+AL-K(t)\bigr)e(t)+e^{T}(t)P \bigl(K(t)-K\bigr)e(t) \\ =&-e^{T}(t)P(C-AL+K)e(t) \\ =&-\frac {1}{2}e^{T}(t) \bigl(PC+C^{T}P^{T}-PAL-L^{T}A^{T}P^{T}+PK+K^{T}P^{T} \bigr)e(t) \\ =&-e^{T}(t)\Omega e(t) \end{aligned}$$

with appropriate constant matrix \(\Omega=\frac {1}{2}(PC+C^{T}P^{T}-PAL-L^{T}A^{T}P^{T}+PK+K^{T}P^{T})>0\). It is clear that \(D^{\alpha}V_{1}(t)\leq-e^{T}(t)\Omega e(t)\). Note that

$$ \lambda_{\mathrm{min}}(P)e^{T}(t)e(t)\leq e^{T}(t)Pe(t) \leq \lambda_{\mathrm{max}}(P)e^{T}(t)e(t). $$

Hence,

$$\begin{aligned}& -e^{T}(t)\Omega e(t)\leq -\lambda_{\mathrm{min}}(\Omega)e^{T}(t)e(t) \leq -\frac{\lambda_{\mathrm{min}}(\Omega)}{\lambda_{\mathrm{max}}(P)}e^{T}(t)Pe(t), \\& D^{\alpha}V_{1}(t)\leq -\frac{2\lambda_{\mathrm{min}}(\Omega)}{\lambda_{\mathrm{max}}(P)}U_{1}(t), \quad t\geq0. \end{aligned}$$

Define \(\frac{2\lambda_{\mathrm{min}}(\Omega)}{\lambda_{\mathrm{max}}(P)}=\lambda_{0}\). Then

$$ D^{\alpha}V_{1}(t)\leq-\lambda_{0}U_{1}(t). $$
(19)

According to Lemma 2.2, \(V_{1}(t)\) is a nonincreasing function, so \(V_{1}(t)\leq V_{1}(0)\) for \(t\geq0\). This implies that \(U_{1}(t)\) and \(k_{i}(t)\) are bounded on \(t\geq0 \). It is then easy to see that \(D^{\alpha}V_{1}(t)\) is also bounded on \(t\geq0 \). Meanwhile, we know that

$$ \sum_{i=1}^{n}\frac{1}{\gamma_{i}} \bigl(k_{i}(t)-k_{i}\bigr)D^{\alpha}k_{i}(t)= e^{T}(t)P\bigl(K(t)-K\bigr)e(t) $$

is bounded. So there exists a constant \(M>0\) such that

$$ \bigl\vert D^{\alpha}U_{1}(t)\bigr\vert \leq M,\quad t\geq0. $$
(20)

We further prove that \(\lim_{t\rightarrow\infty}U_{1}(t)=0\). Otherwise, there would exist a constant \(\varepsilon>0\) and an increasing time sequence \(\{t_{i}\}\) with \(\lim_{i\rightarrow\infty}t_{i}=\infty\) such that

$$ U_{1}(t_{i})\geq\varepsilon ,\quad i=1,2, \ldots. $$
(21)

According to (20), we have

$$ D^{\alpha}U_{1}(t)\leq M,\quad t\geq0. $$
(22)

Denote \(T=(\frac{\Gamma(\alpha+1)\varepsilon}{2M})^{\frac{1}{\alpha}}>0\). For \(t_{i} -T< t< t_{i}\), \(i=1,2,\ldots\) , applying the fractional integral of order α over \([t, t_{i}]\) to both sides of (22), we get

$$\begin{aligned} U_{1}(t_{i})-U_{1}(t)&\leq\frac{M}{\Gamma(\alpha)} \int _{t}^{t_{i}}(t_{i}-\tau)^{\alpha-1} \,d\tau \\ &=\frac{M}{\Gamma(\alpha+1)}(t_{i}-t)^{\alpha} \\ &\leq\frac{\varepsilon}{2}, \end{aligned}$$

which, together with (21), gives \(U_{1}(t)\geq\frac{\varepsilon}{2}\) for \(t_{i} -T< t< t_{i}\), \(i =1,2,\ldots\) . In the same way, for \(t_{i}< t< t_{i}+T\), \(i =1,2,\ldots\) , combining (20) with (21) yields

$$\begin{aligned} U_{1}(t)-U_{1}(t_{i})&\geq-\frac{M}{\Gamma(\alpha)} \int _{t_{i}}^{t}(t-\tau)^{\alpha-1}\,d\tau \\ &=-\frac{M}{\Gamma(\alpha+1)}(t-t_{i})^{\alpha} \\ &\geq-\frac{\varepsilon}{2}, \end{aligned}$$

which shows that \(U_{1}(t)\geq\frac{\varepsilon}{2}\), \(t_{i} < t< t_{i}+T\), \(i =1,2,\ldots\) .

Based on the above description, we obtain

$$ U_{1}(t)\geq \frac{\varepsilon}{2} $$
(23)

for \(t_{i} -T \leq t \leq t_{i}+T\), \(i =1,2,\ldots\) . Without loss of generality, we assume that these intervals are disjoint and \(t_{1}-T>0\). Namely,

$$ t_{i-1} +T < t_{i}-T < t_{i}+T< t_{i+1}-T, $$
(24)

where \(i =1,2,\ldots\) . It follows from (19) and (23) that, for \(t_{i}-T < t< t_{i}+T\), we have

$$ D^{\alpha}V_{1}(t)\leq-\frac{\varepsilon}{2} \lambda_{0}. $$
(25)

Taking the integrals of both sides of (25), we obtain

$$\begin{aligned}& V_{1}(t_{i}+T)-V_{1}(t_{i}-T) \\ & \quad \leq -\frac{\varepsilon}{2\Gamma(\alpha)}\lambda_{0} \int _{t_{i}-T}^{t_{i}+T}(t_{i}+T- \tau)^{\alpha-1}\,d\tau \\ & \quad = -\frac{2^{\alpha-1}\varepsilon\lambda_{0}T^{\alpha}}{\Gamma(\alpha+1)}. \end{aligned}$$

In addition, since \(V_{1}(t)\) is nonincreasing, (24) gives

$$ V_{1}(t_{i-1}+T)\geq V_{1}(t_{i}-T), \quad i=1,2,\ldots, $$
(26)

and \(V_{1}(t_{0}+T)\leq V_{1}(0)\).

It follows from (26) and the above estimate that

$$\begin{aligned}& V_{1}(t_{i}+T)-V_{1}(0) \\& \quad = V_{1}(t_{i}+T)-V_{1}(t_{i}-T)+V_{1}(t_{i}-T)-V_{1}(t_{i-1}+T)+ \cdots +V_{1}(t_{0}+T)-V_{1}(0) \\& \quad \leq -\frac{2^{\alpha-1}\varepsilon\lambda_{0}T^{\alpha}}{\Gamma(\alpha+1)}i, \end{aligned}$$

which reveals that \(V_{1}(t_{i}+T)\rightarrow-\infty\) as \(i\rightarrow+\infty\). However, this contradicts \(V_{1}(t)\geq0\). As a result, \(\lim_{t\rightarrow\infty}U_{1}(t)=0\), and we conclude that \(\lim_{t\rightarrow\infty}e(t)=0\). Thus, the drive system (10) and the response system (11) are globally asymptotically projectively synchronized. □

5 Illustrative examples

In this section, we give two examples to illustrate the validity and effectiveness of the proposed theoretical results.

Example 1

In system (10), choose \(x=(x_{1},x_{2},x_{3})^{T}\), \(\alpha=0.98\), \(f_{j}(x_{j})=\tanh(x_{j})\) for \(j=1,2,3\), \(c_{1}=c_{2}=c_{3}=1\), \(I_{1}=I_{2}=I_{3}=0 \), and

$$A=(a_{ij})_{3\times3}= \begin{pmatrix} 2&-1.2&0\\1.8&1.71&1.15\\-4.75&0&1.1 \end{pmatrix}. $$

Under these parameters, system (10) has a chaotic attractor, which is shown in Figure 1.

Figure 1. Chaotic behavior of system (10) with initial value \((0.1,-0.08,0.3)\).

In the control scheme (13), choose \(k_{1}=5.4837\), \(k_{2}=5.1937\), \(k_{3}=7.5837\). Then the response system (11) also exhibits a chaotic attractor. Using an LMI solver to obtain a feasible numerical solution, we find that the positive definite matrix P can be taken as

$$P= \begin{pmatrix} 0.1433&0.0018&-0.0590\\0.0018&0.1429&0.0029\\ -0.0590&0.0029&0.1193 \end{pmatrix}. $$
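The hypothesis of Theorem 4.1 can be verified directly for these numbers (a check we added; since tanh is 1-Lipschitz, \(L=E\), and here \(C=E\)): form the matrix B and inspect its eigenvalues.

```python
import numpy as np

P = np.array([[0.1433, 0.0018, -0.0590],
              [0.0018, 0.1429, 0.0029],
              [-0.0590, 0.0029, 0.1193]])
A = np.array([[2.0, -1.2, 0.0],
              [1.8, 1.71, 1.15],
              [-4.75, 0.0, 1.1]])
C = np.eye(3)                     # c_1 = c_2 = c_3 = 1
L = np.eye(3)                     # tanh is 1-Lipschitz, so l_j = 1
K = np.diag([5.4837, 5.1937, 7.5837])

# B = (1/2)(PC + C^T P^T - PAL - L^T A^T P^T + PK + K^T P^T)
M = P @ (C - A @ L + K)           # B is the symmetric part of P(C - AL + K)
B = 0.5 * (M + M.T)
eig_B = np.linalg.eigvalsh(B)     # all eigenvalues should be positive
```

All eigenvalues of B come out positive, so the condition \(B>0\) of Theorem 4.1 indeed holds for this choice of P and K.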

By Theorem 4.1 we see that systems (10) and (11) are Mittag-Leffler projectively synchronized, which is verified in Figures 2-4.

Figure 2. Evolution of the drive-response system with \(\beta=3\).

Figure 3. Synchronization errors with \(\beta=3\).

Figure 4. Trajectories of the drive-response system with \(\beta=3\).

As Figures 2-4 show, the projective synchronization errors converge to zero, which shows that the drive and response systems are globally Mittag-Leffler projectively synchronized.
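For readers without the original figures, the synchronization in this example can be reproduced with a short script (our own sketch, using an explicit Grünwald-Letnikov discretization of the Caputo derivative; the step size and the error initial value are our choices). It simulates the drive system (10) together with the error system (14) for \(\beta=3\).

```python
import numpy as np

alpha, h, T = 0.98, 0.01, 20.0
n = int(T / h)
A = np.array([[2.0, -1.2, 0.0],
              [1.8, 1.71, 1.15],
              [-4.75, 0.0, 1.1]])
K = np.array([5.4837, 5.1937, 7.5837])
beta = 3.0
f = np.tanh

# Grunwald-Letnikov binomial coefficients c_j = (1 - (1+alpha)/j) c_{j-1}
c = np.empty(n + 1)
c[0] = 1.0
for j in range(1, n + 1):
    c[j] = (1.0 - (1.0 + alpha) / j) * c[j - 1]

z0 = np.array([0.1, -0.08, 0.3,    # drive initial value from Figure 1
               0.5, -0.4, 0.6])    # error initial value (our choice)
W = np.zeros((n + 1, 6))           # W stores z(t) - z0, state z = (x, e)

def F(z):
    x, e = z[:3], z[3:]
    dx = -x + A @ f(x)                                       # drive system (10), C = E, I = 0
    de = -e + A @ (f(e + beta * x) - f(beta * x)) - K * e    # error system (14)
    return np.concatenate([dx, de])

# Caputo GL step on w = z - z0: sum_{j=0}^m c_j w_{m-j} = h^alpha F(z_{m-1})
for m in range(1, n + 1):
    W[m] = h ** alpha * F(z0 + W[m - 1]) - c[1:m + 1] @ W[m - 1::-1]

Z = z0 + W
err = np.linalg.norm(Z[:, 3:], axis=1)   # ||e(t)|| along the trajectory
```

The error norm decays rapidly toward zero while the drive trajectory stays on its bounded attractor, consistent with Figures 2-4.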

Similarly, projective synchronization with projective coefficients \(\beta=1\) and \(\beta=-1\) is simulated in Figures 5-10.

Figure 5. Evolution of the drive-response system with \(\beta=1\).

Figure 6. Synchronization errors with \(\beta=1\).

Figure 7. Trajectories of the drive-response system with \(\beta=1\).

Figure 8. Evolution of the drive-response system with \(\beta=-1\).

Figure 9. Synchronization errors with \(\beta=-1\).

Figure 10. Trajectories of the drive-response system with \(\beta=-1\).

Example 2

In system (10), the parameters α, \(f(x)\), C, I, and A are the same as in Example 1, so that system (10) has a chaotic attractor. In the following, we consider the response system (11) under the control scheme (16), choosing \(k_{1}(0)=0.05\), \(k_{2}(0)=0.06\), \(k_{3}(0)=0.08\), \(\gamma_{1}=\gamma_{2}=\gamma_{3}=1\), and \(k_{1}=k_{2}=k_{3}=2\). Using the Matlab LMI toolbox, we find that the linear matrix inequality is feasible, and a feasible solution is

$$P= \begin{pmatrix} 0.5747&0.2026&-0.6215\\0.2026&0.8628&0.0238\\ -0.6215&0.0238&2.1256 \end{pmatrix}. $$

Therefore, according to Theorem 4.2, we conclude that systems (10) and (11) are synchronized, which is verified in Figures 11-14.

Figure 11. Chaotic behavior of system (10) with initial value \((0.2,-0.5,0.8)\).

Figure 12. Evolution of the drive-response system with \(\beta=3\).

Figure 13. Synchronization errors with \(\beta=3\).

Figure 14. Trajectories of the drive-response system with \(\beta=3\).

6 Conclusions

In this paper, the global Mittag-Leffler projective synchronization problem for fractional-order neural networks has been investigated. A lemma from the literature on the Caputo fractional derivative of a quadratic form has been generalized. Based on hybrid control schemes, Mittag-Leffler projective synchronization conditions have been presented in terms of LMIs, and hence the results obtained in this paper are easy to check and to apply in practical engineering.

It would be interesting to extend the results proposed in this paper to fractional-order neural networks with delays. This issue will be the topic of our future research.