1 Introduction

Neural networks have attracted great attention due to their wide applications, including signal processing, parallel computation, optimization, and artificial intelligence. The dynamical behaviors of neural networks have been widely studied; synchronization in particular is one of the most important topics and has therefore received much attention [1–6]. However, the majority of existing results concern integer-order neural network models.

It is well known that fractional calculus is the generalization of integer-order calculus to arbitrary order. Compared to classical integer-order models, fractional-order calculus offers an excellent instrument for the description of memory and hereditary properties of dynamical processes. The infinite memory of fractional operators helps fractional-order models describe a system’s dynamical behaviors more accurately, as illustrated in [7–23]. Taking these factors into consideration, fractional calculus was introduced into neural networks, forming fractional-order neural networks, and some interesting results on synchronization were obtained [24–29]. Among the various kinds of synchronization, projective synchronization, in which the master and slave systems are synchronized up to a scaling factor, is an important concept from both theoretical and practical perspectives. Recently, some results on projective synchronization of fractional-order neural networks were reported [30–32]. In [30], projective synchronization for fractional neural networks was studied. Through the employment of a fractional-order differential inequality, the projective synchronization of fractional-order memristor-based neural networks was shown in [31]. By using an LMI-based approach, the global Mittag–Leffler projective synchronization for fractional-order neural networks was investigated in [32].

However, time delay, which is unavoidable in biological and engineering systems, including neural networks, was not taken into account in most of the previous works. To the best of our knowledge, projective synchronization of fractional-order neural networks with time delay has previously been investigated only via the Laplace transform [33], and no Lyapunov-based analysis was given. In this paper, new methods are introduced to investigate the projective synchronization of fractional-order delayed neural networks: a Lyapunov function is constructed, a fractional inequality and the comparison principle for linear fractional equations with delay are applied, and new sufficient conditions are obtained.

The rest of this article is organized as follows. In Sect. 2, some definitions and lemmas are introduced, and the model description is given. In Sect. 3, the projective synchronization schemes are presented, and sufficient conditions for projective synchronization are obtained. Numerical simulations are presented in Sect. 4. Conclusions are drawn in Sect. 5.

2 Preliminaries and model description

Among the various definitions of fractional-order integrals and derivatives, the Riemann–Liouville and Caputo fractional derivatives are the most commonly used. Due to its advantages, the Caputo fractional derivative is adopted in this work.

Definition 1

([7])

The fractional integral of order \(\alpha> 0\) of a function \(x(t)\) is defined by

$$I^{\alpha}x(t)=\frac{1}{\Gamma(\alpha)} \int_{t_{0}}^{t}(t-\tau)^{\alpha -1}x(\tau)\,d\tau, $$

where \(t\geq t_{0}\), \(\Gamma(\cdot)\) is the gamma function, \(\Gamma(s)=\int_{0}^{\infty} t^{s-1}e^{-t}\,dt\).
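As a quick sanity check (our illustration, not part of the original paper), the fractional integral can be approximated with a midpoint rule; the kernel singularity at \(\tau=t\) is integrable, and for \(x(\tau)=\tau\) the closed form is \(I^{\alpha}x(t)=t^{\alpha+1}/\Gamma(\alpha+2)\):

```python
import math

def frac_integral(x, t, alpha, n=100_000):
    # Midpoint rule for (1/Gamma(alpha)) * int_0^t (t - s)^(alpha-1) x(s) ds.
    # Midpoints never hit the integrable singularity at s = t.
    h = t / n
    total = sum((t - (k + 0.5) * h) ** (alpha - 1) * x((k + 0.5) * h)
                for k in range(n))
    return h * total / math.gamma(alpha)

# Compare with the closed form I^alpha[s](t) = t^(alpha+1) / Gamma(alpha+2).
alpha, t = 0.5, 1.0
approx = frac_integral(lambda s: s, t, alpha)
exact = t ** (alpha + 1) / math.gamma(alpha + 2)
```

The midpoint sum converges slowly near the singular endpoint, but agrees with the closed form to about three decimal places at this resolution.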

Definition 2

([7])

The Caputo derivative of fractional order α of a function \(x(t)\) is defined by

$$D^{\alpha}x(t)=\frac{1}{\Gamma(n-\alpha)} \int_{t_{0}}^{t}(t-\tau )^{n-\alpha-1}x^{(n)}( \tau)\,d\tau, $$

where \(t\geq t_{0}\), \(n-1<\alpha<n\in Z^{+}\).
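For \(0<\alpha<1\) we have \(n=1\), so only \(x'\) enters the definition. The same midpoint idea gives a numerical sketch (our addition), checked against the known value \(D^{\alpha}t^{2}=2t^{2-\alpha}/\Gamma(3-\alpha)\):

```python
import math

def caputo(dx, t, alpha, n=100_000):
    # Caputo derivative for 0 < alpha < 1:
    # (1/Gamma(1-alpha)) * int_0^t (t - s)^(-alpha) x'(s) ds, midpoint rule.
    h = t / n
    total = sum((t - (k + 0.5) * h) ** (-alpha) * dx((k + 0.5) * h)
                for k in range(n))
    return h * total / math.gamma(1 - alpha)

# x(t) = t^2, x'(t) = 2t; closed form: D^alpha x(t) = 2 t^(2-alpha)/Gamma(3-alpha).
alpha, t = 0.5, 1.0
approx = caputo(lambda s: 2.0 * s, t, alpha)
exact = 2.0 * t ** (2 - alpha) / math.gamma(3 - alpha)
```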

In this paper, we consider a class of fractional-order neural networks with time delay as a master system, which is described by

$$\begin{aligned} &D^{\alpha}x_{i}(t)=-c_{i}x_{i}(t)+\sum _{j=1}^{n}a_{ij}f_{j} \bigl(x_{j}(t)\bigr)+\sum_{j=1}^{n}b_{ij}g_{j} \bigl(x_{j}(t-\tau)\bigr)+I_{i}, \\ &\quad i\in N=\{1, 2, \ldots, n\}, \end{aligned}$$
(1)

or equivalently, by

$$ D^{\alpha}x(t)=-Cx(t)+Af\bigl(x(t)\bigr)+ Bg\bigl(x(t-\tau) \bigr)+I, $$
(2)

where \(0<\alpha<1\), n is the number of units in the neural network, \(x(t)=(x_{1}(t),\ldots, x_{n}(t))^{T}\in R^{n}\) denotes the state vector of the neural network, and \(C=\operatorname{diag}(c_{1}, c_{2}, \ldots, c_{n})\), with \(c_{i}\in R^{+}\), contains the self-regulating parameters of the neurons. \(I=(I_{1}, I_{2}, \ldots, I_{n})^{T}\) represents the external input, and \(A=(a_{ij})_{n\times n}\) and \(B=(b_{ij})_{n\times n}\) are the connection weight matrices without and with delay, respectively. The functions \(f(x(t))=(f_{1}(x_{1}(t)), \ldots, f_{n}(x_{n}(t)))^{T} \), \(g(x(t))=(g_{1}(x_{1}(t)), \ldots, g_{n}(x_{n}(t)))^{T}\) are the neuron activation functions.

The slave system is given by

$$\begin{aligned} &D^{\alpha}y_{i}(t)=-c_{i}y_{i}(t)+\sum _{j=1}^{n}a_{ij}f_{j} \bigl(y_{j}(t)\bigr)+\sum_{j=1}^{n}b_{ij}g_{j} \bigl(y_{j}(t-\tau)\bigr)+u_{i}(t)+I_{i}, \\ &\quad i\in N=\{1, 2, \ldots, n\}, \end{aligned}$$
(3)

or equivalently, by

$$ D^{\alpha}y(t)=-Cy(t)+Af\bigl(y(t)\bigr)+ Bg\bigl(y(t-\tau) \bigr)+U(t)+I, $$
(4)

where \(y(t)=(y_{1}(t),\ldots, y_{n}(t))^{T}\in R^{n}\) is the state vector of the response system and \(U(t)=(u_{1}(t), \ldots, u_{n}(t))^{T}\) is a suitable controller.

For generality, the following definition, assumption, and lemmas are presented.

Definition 3

If there exists a nonzero constant β such that, for any two solutions \(x(t)\) and \(y(t)\) of systems (1) and (3) with different initial values, we have \(\lim_{t\rightarrow\infty} \Vert y(t)-\beta x(t) \Vert =0\), then the master system (1) and the slave system (3) are said to achieve global asymptotic projective synchronization, where \(\Vert \cdot \Vert \) denotes the Euclidean norm of a vector.

Assumption 1

The neuron activation functions \(f_{j}(x)\), \(g_{j}(x)\) satisfy the following Lipschitz condition with Lipschitz constants \(l_{j}>0\), \(h_{j}>0\):

$$\bigl\vert f_{j}(u)-f_{j}(v) \bigr\vert \leq l_{j} \vert u-v \vert ,\qquad \bigl\vert g_{j}(u)-g_{j}(v) \bigr\vert \leq h_{j} \vert u-v \vert $$

for all \(u, v\in R\). Denote \(L=\operatorname{diag}(l_{1}, l_{2}, \ldots, l_{n})\), \(H=\operatorname{diag}(h_{1}, h_{2}, \ldots, h_{n})\), \(l_{\max}=\max\{l_{1}, l_{2}, \ldots, l_{n}\}\), \(h_{\max}=\max\{h_{1}, h_{2}, \ldots, h_{n}\}\).

Lemma 1

([32])

Suppose that \(x(t)=(x_{1}(t), \ldots, x_{n}(t))^{T}\in R^{n}\) is a differentiable vector-valued function and \(P\in R^{n\times n}\) is a symmetric positive definite matrix. Then, for any time instant \(t\geq0\), we have

$$ D^{\alpha}\bigl[x^{T}(t)Px(t)\bigr]\leq \bigl(x^{T}(t)P\bigr)D^{\alpha}x(t)+\bigl(D^{\alpha }x^{T}(t) \bigr)Px(t), $$
(5)

where \(0<\alpha<1\).

When \(P=E\) is the identity matrix, the inequality reduces to \(\frac{1}{2}D^{\alpha }[x^{T}(t)x(t)]\leq x^{T}(t)D^{\alpha}x(t)\).

Lemma 2

([34])

Suppose that \(V(t)\in R^{1}\) is a continuously differentiable nonnegative function satisfying

$$ \textstyle\begin{cases} D^{\alpha}V(t)\leq-a V(t)+b V(t-\tau),& 0< \alpha< 1\\ V(t)=\varphi(t)\geq0,& t\in[-\tau, 0], \end{cases} $$
(6)

where \(t\in[0, +\infty)\). If \(a>b>0\), then for any \(\varphi(t)\geq0\) and \(\tau>0\), \(\lim_{t\rightarrow+\infty}V(t)=0\).
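Lemma 2 can be illustrated numerically (our sketch, with arbitrarily chosen \(a=3\), \(b=1\)), approximating the fractional derivative by the explicit Grünwald–Letnikov scheme; the Grünwald–Letnikov and Caputo derivatives differ only by an initial-condition term that decays in time, so the long-run behavior is unaffected:

```python
import numpy as np

def gl_weights(alpha, n):
    # c_j = (-1)^j * binom(alpha, j), via c_0 = 1, c_j = (1 - (1+alpha)/j) c_{j-1}
    c = np.empty(n + 1)
    c[0] = 1.0
    for j in range(1, n + 1):
        c[j] = (1.0 - (1.0 + alpha) / j) * c[j - 1]
    return c

def simulate_scalar(alpha=0.97, a=3.0, b=1.0, tau=1.0, h=0.01, T=40.0, v0=5.0):
    # Explicit scheme for D^alpha V(t) = -a V(t) + b V(t - tau),
    # with constant history V(t) = v0 on [-tau, 0].
    n, d = int(round(T / h)), int(round(tau / h))
    c = gl_weights(alpha, n)
    v = np.empty(n + 1)
    v[0] = v0
    for k in range(1, n + 1):
        vd = v[k - 1 - d] if k - 1 - d >= 0 else v[0]
        v[k] = h ** alpha * (-a * v[k - 1] + b * vd) - c[1:k + 1] @ v[k - 1::-1]
    return v

v = simulate_scalar()
```

Since \(a=3>b=1>0\), the trajectory decays toward zero, as the lemma asserts.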

Lemma 3

([34])

Suppose that \(x(t)=(x_{1}(t), x_{2}(t), \ldots, x_{n}(t))^{T}\in R^{n}\) and \(y(t)=(y_{1}(t), y_{2}(t), \ldots, y_{n}(t))^{T}\in R^{n}\) are vectors. Then, for any matrix \(Q=(q_{ij})_{n\times n}\), the following inequality holds:

$$ y^{T}Qx \leq k_{\max}y^{T}y+ \bar{k}_{\max}x^{T}x, $$
(7)

where \(k_{\max}=\frac{1}{2} \Vert Q \Vert _{\infty}=\frac{1}{2}\max_{1\leq i\leq n}\sum_{j=1}^{n}|q_{ij}|\), \(\bar{k}_{\max}=\frac{1}{2}\|Q\|_{1}=\frac{1}{2}\max_{1\leq j\leq n}\sum_{i=1}^{n}|q_{ij}|\).
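The inequality is Young's inequality applied entrywise, \(|y_{i}q_{ij}x_{j}|\leq|q_{ij}|(y_{i}^{2}+x_{j}^{2})/2\), summed over i and j. A randomized sanity check (our addition):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    Q = rng.normal(size=(4, 4))
    x = rng.normal(size=4)
    y = rng.normal(size=4)
    k_max = 0.5 * np.abs(Q).sum(axis=1).max()     # (1/2)||Q||_inf, max row sum
    kbar_max = 0.5 * np.abs(Q).sum(axis=0).max()  # (1/2)||Q||_1,  max column sum
    assert y @ Q @ x <= k_max * (y @ y) + kbar_max * (x @ x) + 1e-12
```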

3 Projective synchronization

In this section, master–slave projective synchronization of delayed fractional-order neural networks is discussed. The aim is to design a suitable controller to achieve the projective synchronization between the slave system and the master system.

Let \(e_{i}(t)=y_{i}(t)-\beta x_{i}(t)\) (\(i=1, 2, \ldots, n\)) be the synchronization errors.

Select the control input function \(u_{i}(t)\) (\(i=1, 2, \ldots, n\)) as the following form:

$$\begin{aligned} &u_{i}(t)=v_{i}(t)+w_{i}(t), \end{aligned}$$
(8)
$$\begin{aligned} &\begin{aligned}[b] v_{i}(t)={}&\sum_{j=1}^{n}a_{ij} \bigl[\beta f_{j}\bigl(x_{j}(t)\bigr)-f_{j}\bigl( \beta x_{j}(t)\bigr)\bigr] \\ &{}+\sum_{j=1}^{n}b_{ij}\bigl[ \beta g_{j}\bigl(x_{j}(t-\tau)\bigr)-g_{j}\bigl( \beta x_{j}(t-\tau)\bigr)\bigr] \\ &{}+(\beta-1)I_{i}, \end{aligned} \end{aligned}$$
(9)
$$\begin{aligned} &w_{i}(t)=-d_{i} \bigl[y_{i}(t)-\beta x_{i}(t)\bigr], \end{aligned}$$
(10)

where \(d_{i}\) are positive constants and β is the projective coefficient.

Remark 1

The control function \(u_{i}(t)\) is a hybrid control: \(v_{i}(t)\) is an open-loop control, and \(w_{i}(t)\) is a linear feedback control.

Then the error system is obtained:

$$\begin{aligned} D^{\alpha}e_{i}(t) =&-c_{i}e_{i}(t)+ \sum_{j=1}^{n} a_{ij} \bigl[f_{j}\bigl(y_{j}(t)\bigr)-f_{j}\bigl(\beta x_{j}(t)\bigr)\bigr] \\ &{}+\sum_{j=1}^{n} b_{ij}\bigl[ g_{j}\bigl(y_{j}(t-\tau)\bigr)-g_{j}\bigl(\beta x_{j}(t-\tau)\bigr)\bigr] \\ &{}-d_{i}\bigl[y_{i}(t)-\beta x_{i}(t)\bigr], \end{aligned}$$
(11)

or equivalently,

$$\begin{aligned} D^{\alpha}e(t) =&-Ce(t)+A\bigl[f\bigl(y(t)\bigr)-f\bigl(\beta x(t)\bigr)\bigr]+B\bigl[g\bigl(y(t-\tau)\bigr)-g\bigl(\beta x(t-\tau)\bigr)\bigr] \\ &{}-De(t), \end{aligned}$$
(12)

where \(e(t)=(e_{1}(t), \ldots, e_{n}(t))^{T}\), \(D=\operatorname{diag}(d_{1}, d_{2}, \ldots, d_{n})\).

Theorem 1

Under Assumption 1, if there exists a symmetric positive definite matrix \(P\in R^{n\times n}\) such that

$$-\bigl(\hat{\lambda}_{\max}+{k_{1}}_{\max}+ \bar{k_{1}}_{\max}l^{2}_{\max }+{k_{2}}_{\max} \bigr)\lambda^{-1}_{\max}>\bar{k_{2}}_{\max}h^{2}_{\max} \lambda ^{-1}_{\min}, $$

then the fractional-order delayed neural network systems (1) and (3) achieve global asymptotic projective synchronization under the control scheme (8), (9), (10), where \(\hat{\lambda}_{\max}\) denotes the greatest eigenvalue of \(-PC-PD\), \({k_{1}}_{\max}=\frac{1}{2} \Vert PA \Vert _{\infty}\), \(\bar {k_{1}}_{\max}=\frac{1}{2} \Vert PA \Vert _{1}\), \({k_{2}}_{\max}=\frac{1}{2} \Vert PB \Vert _{\infty}\), \(\bar{k_{2}}_{\max}=\frac{1}{2} \Vert PB \Vert _{1}\), and \(\lambda_{\min}\) and \(\lambda_{\max}\) denote the minimum and maximum eigenvalues of P, respectively.

Proof

Construct a Lyapunov function:

$$ V(t)=\frac{1}{2}e^{T}(t)Pe(t). $$
(13)

Taking the fractional-order derivative of \(V(t)\) in (13) and applying Lemma 1, we obtain

$$\begin{aligned} D^{\alpha}V(t) =&D^{\alpha}\biggl[\frac{1}{2}e^{T}(t)Pe(t) \biggr] \\ \leq& e^{T}(t)PD^{\alpha}e(t) \\ =& e^{T}(t)P\bigl\{ -Ce(t)+A\bigl[f\bigl(y(t)\bigr)-f\bigl(\beta x(t) \bigr)\bigr] \\ &{}+B\bigl[g\bigl(y(t-\tau)\bigr)-g\bigl(\beta x(t-\tau)\bigr)\bigr]-De(t)\bigr\} \\ =& e^{T}(t) (-PC-PD)e(t)+e^{T}(t)PA\bigl[f\bigl(y(t) \bigr)-f\bigl(\beta x(t)\bigr)\bigr] \\ &{}+e^{T}(t)PB\bigl[g\bigl(y(t-\tau)\bigr)-g\bigl(\beta x(t-\tau) \bigr)\bigr]. \end{aligned}$$
(14)

From Lemma 3, we have

$$\begin{aligned} &\begin{aligned}[b] e^{T}(t)PA\bigl[f\bigl(y(t)\bigr)-f \bigl(\beta x(t)\bigr)\bigr]\leq{}& {k_{1}}_{\max}e^{T}(t)e(t)+ \bar {k_{1}}_{\max}\bigl[f\bigl(y(t)\bigr)-f\bigl(\beta x(t) \bigr)\bigr]^{T} \\ &{}\cdot\bigl[f\bigl(y(t)\bigr)-f\bigl(\beta x(t)\bigr)\bigr] \\ \leq{}&{k_{1}}_{\max}e^{T}(t)e(t)+ \bar{k_{1}}_{\max}e^{T}(t)L^{2}e(t) \\ \leq{}&\bigl({k_{1}}_{\max}+\bar{k_{1}}_{\max}l^{2}_{\max} \bigr)e^{T}(t)e(t). \end{aligned} \end{aligned}$$
(15)
$$\begin{aligned} &\begin{aligned}[b] e^{T}(t)PB\bigl[g\bigl(y(t-\tau)\bigr)-g\bigl(\beta x(t-\tau)\bigr)\bigr]\leq{}& {k_{2}}_{\max}e^{T}(t)e(t)+\bar{k_{2}}_{\max}\bigl[g\bigl(y(t-\tau)\bigr) \\ &{}-g\bigl(\beta x(t-\tau)\bigr)\bigr]^{T}\cdot\bigl[g\bigl(y(t-\tau)\bigr) \\ &{}-g\bigl(\beta x(t-\tau)\bigr)\bigr] \\ \leq{}&{k_{2}}_{\max}e^{T}(t)e(t)+\bar{k_{2}}_{\max}e^{T}(t-\tau)H^{2}e(t-\tau) \\ \leq{}&{k_{2}}_{\max}e^{T}(t)e(t)+\bar{k_{2}}_{\max}h^{2}_{\max}e^{T}(t-\tau)e(t-\tau). \end{aligned} \end{aligned}$$
(16)

Substituting (15) and (16) into (14) yields

$$\begin{aligned} D^{\alpha}V(t) \leq& e^{T}(t) (-PC-PD)e(t)+ \bigl({k_{1}}_{\max}+\bar{k_{1}}_{\max }l^{2}_{\max}+{k_{2}}_{\max} \bigr)e^{T}(t)e(t) \\ &{}+\bar{k_{2}}_{\max}h^{2}_{\max}e^{T}(t- \tau)e(t-\tau). \end{aligned}$$
(17)

Then

$$\begin{aligned} D^{\alpha}V(t) \leq&\hat{\lambda}_{\max}e^{T}(t)e(t)+\bigl({k_{1}}_{\max}+\bar{k_{1}}_{\max}l^{2}_{\max}+{k_{2}}_{\max}\bigr)e^{T}(t)e(t) \\ &{}+\bar{k_{2}}_{\max}h^{2}_{\max}e^{T}(t-\tau)e(t-\tau) \\ =&\bigl(\hat{\lambda}_{\max}+{k_{1}}_{\max}+\bar{k_{1}}_{\max}l^{2}_{\max}+{k_{2}}_{\max}\bigr)e^{T}(t)e(t) \\ &{}+\bar{k_{2}}_{\max}h^{2}_{\max}e^{T}(t-\tau)e(t-\tau) \\ \leq&\bigl(\hat{\lambda}_{\max}+{k_{1}}_{\max}+\bar{k_{1}}_{\max}l^{2}_{\max}+{k_{2}}_{\max}\bigr)\lambda^{-1}_{\max}V(t) \\ &{}+\bar{k_{2}}_{\max}h^{2}_{\max}\lambda^{-1}_{\min}V(t-\tau). \end{aligned}$$
(18)

From Lemma 2, we have that, if

$$-\bigl(\hat{\lambda}_{\max}+{k_{1}}_{\max}+ \bar{k_{1}}_{\max}l^{2}_{\max }+{k_{2}}_{\max} \bigr)\lambda^{-1}_{\max}>\bar{k_{2}}_{\max}h^{2}_{\max} \lambda ^{-1}_{\min}, $$

then \(\lim_{t\rightarrow+\infty}V(t)=0\), and hence the slave system (3) projectively synchronizes with the master system (1). □

Remark 2

If the projective coefficient \(\beta=1\), the projective synchronization is simplified to complete synchronization, and the control input function (8) becomes

$$ u_{i}(t)=-d_{i}\bigl[y_{i}(t)-x_{i}(t) \bigr]. $$
(19)

Remark 3

If the projective coefficient \(\beta=-1\), the projective synchronization is simplified to anti-synchronization, and the control input function (8) becomes

$$\begin{aligned} u_{i}(t) =&-\sum_{j=1}^{n}a_{ij} \bigl[ f_{j}\bigl(x_{j}(t)\bigr)+f_{j}\bigl(- x_{j}(t)\bigr)\bigr]-\sum_{j=1}^{n}b_{ij} \bigl[ g_{j}\bigl(x_{j}(t-\tau)\bigr) \\ &{}+g_{j}\bigl(- x_{j}(t-\tau )\bigr)\bigr]-2I_{i}-d_{i} \bigl[y_{i}(t)+ x_{i}(t)\bigr]. \end{aligned}$$
(20)

In the following, we choose the control input function \(u_{i}(t)\) in system (3):

$$\begin{aligned} &u_{i}(t)=v_{i}(t)+w_{i}(t), \end{aligned}$$
(21)
$$\begin{aligned} &\begin{aligned} v_{i}(t)={}&\sum_{j=1}^{n}a_{ij} \bigl[\beta f_{j}\bigl(x_{j}(t)\bigr)-f_{j}\bigl( \beta x_{j}(t)\bigr)\bigr] \\ &{}+\sum_{j=1}^{n}b_{ij}\bigl[ \beta g_{j}\bigl(x_{j}(t-\tau)\bigr)-g_{j}\bigl( \beta x_{j}(t-\tau)\bigr)\bigr] \\ &{}+(\beta-1)I_{i}, \end{aligned} \end{aligned}$$
(22)
$$\begin{aligned} & w_{i}(t)=- \bigl(d_{i}(t)+d_{i}^{*}\bigr) \bigl[y_{i}(t)-\beta x_{i}(t)\bigr], \end{aligned}$$
(23)
$$\begin{aligned} & D^{\alpha}d_{i}(t)= \gamma_{i} \bigl\Vert y_{i}(t)-\beta x_{i}(t) \bigr\Vert ^{2}, \end{aligned}$$
(24)

where \(d_{i}(t)+d_{i}^{*}\) are the feedback gains, \(d_{i}(t)\geq0\), \(d_{i}^{*}>0\) are constants, \(\gamma_{i}\) are arbitrary positive constants, and β is the projective coefficient.

Remark 4

The control function \(u_{i}(t)\) is a hybrid control: \(v_{i}(t)\) is an open-loop control, and \(w_{i}(t)\) is an adaptive feedback control.

Remark 5

Let \(d_{i}(0)\geq0\); then \(d_{i}(t)=d_{i}(0)+I^{\alpha}(\gamma_{i} \Vert y_{i}(t)-\beta x_{i}(t) \Vert ^{2})\geq d_{i}(0)\), since the fractional integral of a nonnegative function is nonnegative. Hence \(d_{i}(t)\geq0\) for all \(t\geq0\).

Then the error system is given as follows:

$$\begin{aligned} D^{\alpha}e_{i}(t) =&-c_{i}e_{i}(t)+ \sum_{j=1}^{n} a_{ij} \bigl[f_{j}\bigl(y_{j}(t)\bigr)-f_{j}\bigl(\beta x_{j}(t)\bigr)\bigr] \\ &{}+\sum_{j=1}^{n} b_{ij} \bigl[g_{j}\bigl(y_{j}(t-\tau)\bigr)-g_{j}\bigl( \beta x_{j}(t-\tau)\bigr)\bigr] \\ &{}-\bigl(d_{i}(t)+d_{i}^{*}\bigr) \bigl[y_{i}(t)-\beta x_{i}(t)\bigr], \end{aligned}$$
(25)

or equivalently,

$$\begin{aligned} D^{\alpha}e(t) =&-Ce(t)+A\bigl[f\bigl(y(t)\bigr)-f\bigl(\beta x(t)\bigr)\bigr]+B\bigl[g\bigl(y(t-\tau )\bigr) \\ &{}-g\bigl(\beta x(t-\tau)\bigr)\bigr]-\bigl(D(t)+D^{*}\bigr)e(t), \end{aligned}$$
(26)

where \(D(t)=\operatorname{diag}(d_{1}(t), \ldots, d_{n}(t))\), \(D^{*}=\operatorname{diag}(d_{1}^{*}, \ldots, d_{n}^{*})\).

Theorem 2

Under Assumption 1, if there exists a symmetric positive definite matrix \(P\in R^{n\times n}\) such that

$$-\bigl(\check{\lambda}_{\max}+{k_{1}}_{\max}+ \bar{k_{1}}_{\max}l^{2}_{\max }+{k_{2}}_{\max} \bigr)\lambda^{-1}_{\max}>\bar{k_{2}}_{\max}h^{2}_{\max} \lambda ^{-1}_{\min}, $$

then the fractional-order delayed neural network systems (1) and (3) achieve global asymptotic projective synchronization under the control scheme (21), (22), (23), (24), where \(\check{\lambda}_{\max}\) denotes the greatest eigenvalue of \(-PC-PD^{*}\), and \(\lambda_{\min}\) and \(\lambda_{\max}\) denote the minimum and maximum eigenvalues of P, respectively.

Proof

Construct the auxiliary function

$$ V(t)=\frac{1}{2}e^{T}(t)Pe(t)+\sum _{i=1}^{n}\frac{\lambda_{\min}}{2\gamma _{i}}d_{i}^{2}(t). $$
(27)

Taking the fractional-order derivative of \(V(t)\) in (27) and applying Lemma 1, we obtain

$$\begin{aligned} D^{\alpha}V(t) \leq& e^{T}(t)PD^{\alpha}e(t)+ \sum_{i=1}^{n}\frac{\lambda _{\min}}{\gamma_{i}}d_{i}(t)D^{\alpha}d_{i}(t) \\ =& e^{T}(t)P\bigl\{ -Ce(t)+A\bigl[f\bigl(y(t)\bigr)-f\bigl(\beta x(t) \bigr)\bigr]+B\bigl[g\bigl(y(t-\tau)\bigr) \\ &{}-g\bigl(\beta x(t-\tau)\bigr)\bigr]-\bigl(D(t)+D^{*}\bigr)e(t) \bigr\} +\sum_{i=1}^{n}\frac{\lambda _{\min}}{\gamma_{i}}d_{i}(t)D^{\alpha}d_{i}(t) \\ =& e^{T}(t) \bigl(-PC-PD^{*}\bigr)e(t)+e^{T}(t)PA \bigl[f\bigl(y(t)\bigr)-f\bigl(\beta x(t)\bigr)\bigr] \\ &{}+e^{T}(t)PB\bigl[g\bigl(y(t-\tau)\bigr)-g\bigl(\beta x(t-\tau) \bigr)\bigr] \\ &{}-e^{T}(t)PD(t)e(t)+\sum_{i=1}^{n} \frac{\lambda_{\min}}{\gamma _{i}}d_{i}(t)D^{\alpha}d_{i}(t) \\ \leq& e^{T}(t) \bigl(-PC-PD^{*}\bigr)e(t)+e^{T}(t)PA \bigl[f\bigl(y(t)\bigr)-f\bigl(\beta x(t)\bigr)\bigr] \\ &{}+e^{T}(t)PB\bigl[g\bigl(y(t-\tau)\bigr)-g\bigl(\beta x(t-\tau) \bigr)\bigr]. \end{aligned}$$
(28)

Here the last inequality uses \(D^{\alpha}d_{i}(t)=\gamma_{i}e_{i}^{2}(t)\) together with \(e^{T}(t)PD(t)e(t)\geq\lambda_{\min}\sum_{i=1}^{n}d_{i}(t)e_{i}^{2}(t)\) (valid, in particular, for diagonal P, as chosen in Sect. 4). The rest of the proof is the same as that of Theorem 1 and is omitted here. □

Remark 6

If the projective coefficient \(\beta=1\), the control input function (21) becomes

$$ u_{i}(t)=-\bigl(d_{i}(t)+d_{i}^{*} \bigr)\bigl[y_{i}(t)- x_{i}(t)\bigr], $$
(29)

where

$$ D^{\alpha}d_{i}(t)=\gamma_{i} \bigl\Vert y_{i}(t)-x_{i}(t) \bigr\Vert ^{2}. $$
(30)

Remark 7

If the projective coefficient \(\beta=-1\), the control input function (21) becomes

$$\begin{aligned} u_{i}(t) =&-\sum_{j=1}^{n}a_{ij} \bigl[ f_{j}\bigl(x_{j}(t)\bigr)+f_{j}\bigl(- x_{j}(t)\bigr)\bigr]-\sum_{j=1}^{n}b_{ij} \bigl[ g_{j}\bigl(x_{j}(t-\tau)\bigr) \\ &{}+g_{j}\bigl(- x_{j}(t-\tau )\bigr) \bigr]-2I_{i}-\bigl(d_{i}(t)+d_{i}^{*} \bigr)\bigl[y_{i}(t)+ x_{i}(t)\bigr], \end{aligned}$$
(31)

where

$$ D^{\alpha}d_{i}(t)=\gamma_{i} \bigl\Vert y_{i}(t)+x_{i}(t) \bigr\Vert ^{2}. $$
(32)

Remark 8

By using an LMI-based approach, Wu et al. investigated global Mittag–Leffler projective synchronization for fractional-order neural networks [32], but without considering delay.

Remark 9

In [33], by using the Laplace transform, the hybrid projective synchronization of fractional-order memristor-based neural networks with time delays was discussed, but the resulting sufficient conditions are complex. By comparison, in this paper the projective synchronization of fractional-order delayed neural networks is studied by constructing a Lyapunov function and employing a fractional inequality together with the comparison principle for linear fractional equations with delay; the resulting conditions are simpler and easier to verify.

4 Numerical simulations

In this section, we consider the following two-dimensional fractional-order delayed neural network:

$$ D^{\alpha}x(t)=-Cx(t)+Af\bigl(x(t)\bigr)+Bg\bigl(x(t-\tau) \bigr)+I, $$
(33)

where \(x(t)=(x_{1}(t), x_{2}(t))^{T}\), \(\alpha=0.97\), \(I=(0, 0)^{T}\), and \(\tau=1\). The activation functions are \(f(x(t))=g(x(t))=\tanh(x(t))\); obviously, \(f\) and \(g\) satisfy Assumption 1 with \(L=H=\operatorname{diag}(1, 1)\). The system matrices are \(C=\bigl( {\scriptsize\begin{matrix}{} 1& 0 \cr 0&1 \end{matrix}} \bigr) \), \(A=\bigl( {\scriptsize\begin{matrix}{} 2.0& -0.1 \cr -5.0&2.0 \end{matrix}} \bigr) \), \(B=\bigl( {\scriptsize\begin{matrix}{} -1.5& -0.1 \cr -0.2& -1.5 \end{matrix}} \bigr) \).

Under these parameters, system (33) has a chaotic attractor, which is shown in Fig. 1.

Figure 1
figure 1

Chaotic behavior of system (33) with initial value \((2, 4)\)
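The attractor can be reproduced with the explicit Grünwald–Letnikov scheme sketched below (our code, not from the paper; the step size, horizon, and constant history on \([-\tau, 0]\) are our choices):

```python
import numpy as np

def gl_weights(alpha, n):
    # c_j = (-1)^j * binom(alpha, j), via c_0 = 1, c_j = (1 - (1+alpha)/j) c_{j-1}
    c = np.empty(n + 1)
    c[0] = 1.0
    for j in range(1, n + 1):
        c[j] = (1.0 - (1.0 + alpha) / j) * c[j - 1]
    return c

def simulate_master(alpha=0.97, tau=1.0, h=0.01, T=20.0, x0=(2.0, 4.0)):
    # Parameters of system (33)
    C = np.diag([1.0, 1.0])
    A = np.array([[2.0, -0.1], [-5.0, 2.0]])
    B = np.array([[-1.5, -0.1], [-0.2, -1.5]])
    n, d = int(round(T / h)), int(round(tau / h))
    c = gl_weights(alpha, n)
    x = np.zeros((n + 1, 2))
    x[0] = x0                                  # constant history on [-tau, 0]
    for k in range(1, n + 1):
        xd = x[k - 1 - d] if k - 1 - d >= 0 else x[0]
        rhs = -C @ x[k - 1] + A @ np.tanh(x[k - 1]) + B @ np.tanh(xd)
        x[k] = h ** alpha * rhs - c[1:k + 1] @ x[k - 1::-1]
    return x

traj = simulate_master()
```

Plotting `traj[:, 0]` against `traj[:, 1]` over a longer horizon should resemble the attractor in Fig. 1.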

In the control scheme (8), (9), (10), we select the symmetric positive definite matrix \(P=\bigl( {\scriptsize\begin{matrix}{} 1& 0 \cr 0&2 \end{matrix}} \bigr) \). By simple computation, we can take \(d_{1}=18\), \(d_{2}=10\). Selecting the projective coefficient \(\beta=2\) and initial values \(x_{1}(0)=4\), \(x_{2}(0)=2\), \(y_{1}(0)=3\), \(y_{2}(0)=1\), the projective synchronization errors are shown in Fig. 2. The synchronization trajectories are shown in Fig. 3 and Fig. 4.
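That \(d_{1}=18\), \(d_{2}=10\) suffice can be checked directly against the condition of Theorem 1 (our verification script, following the notation of the theorem and Lemma 3):

```python
import numpy as np

P = np.diag([1.0, 2.0])
C = np.diag([1.0, 1.0])
D = np.diag([18.0, 10.0])
A = np.array([[2.0, -0.1], [-5.0, 2.0]])
B = np.array([[-1.5, -0.1], [-0.2, -1.5]])
l_max = h_max = 1.0                                  # tanh is 1-Lipschitz

lam_hat = np.linalg.eigvalsh(-P @ C - P @ D).max()   # greatest eigenvalue of -PC-PD
k1 = 0.5 * np.abs(P @ A).sum(axis=1).max()           # (1/2)||PA||_inf
k1b = 0.5 * np.abs(P @ A).sum(axis=0).max()          # (1/2)||PA||_1
k2 = 0.5 * np.abs(P @ B).sum(axis=1).max()           # (1/2)||PB||_inf
k2b = 0.5 * np.abs(P @ B).sum(axis=0).max()          # (1/2)||PB||_1
lam = np.linalg.eigvalsh(P)

lhs = -(lam_hat + k1 + k1b * l_max**2 + k2) / lam.max()
rhs = k2b * h_max**2 / lam.min()
```

Here lhs = 2.15 > rhs = 1.55, so the sufficient condition of Theorem 1 holds.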

Figure 2
figure 2

The synchronization errors \(e_{i}\) (\(i=1,2\)) state with \(\beta=2\)

Figure 3
figure 3

The synchronization trajectories of \(x_{1}\), \(y_{1}\) with \(\beta=2\)

Figure 4
figure 4

The synchronization trajectories of \(x_{2}\), \(y_{2}\) with \(\beta=2\)

Similarly, projective synchronization with projective coefficient \(\beta =-3\) is given in Fig. 5–Fig. 7.

Figure 5
figure 5

The synchronization errors \(e_{i}\) (\(i=1,2\)) state with \(\beta=-3\)

Figure 6
figure 6

The synchronization trajectories of \(x_{1}\), \(y_{1}\) with \(\beta=-3\)

Figure 7
figure 7

The synchronization trajectories of \(x_{2}\), \(y_{2}\) with \(\beta=-3\)

In the control scheme (21), (22), (23), (24), we select the symmetric positive definite matrix \(P=\bigl( {\scriptsize\begin{matrix}{} 1& 0 \cr 0&\frac{1}{2} \end{matrix}} \bigr) \). By simple computation, we can take \(d_{1}(0)=10\), \(d_{2}(0)=24\). Selecting the projective coefficient \(\beta=3\), \(d_{1}^{*}=1\), \(d_{2}^{*}=2\), and initial values \(x_{1}(0)=4\), \(x_{2}(0)=1\), \(y_{1}(0)=3\), \(y_{2}(0)=2\), the projective synchronization errors are shown in Fig. 8. The synchronization trajectories are shown in Fig. 9 and Fig. 10. In addition, the adaptive gains \(d_{i}(t)\) (\(i=1, 2\)) converge to some positive constants, see Fig. 11.
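Since Remark 5 gives \(d_{i}(t)\geq d_{i}(0)\), the adaptive feedback gain never drops below \(D(0)+D^{*}\), so Theorem 2's condition can be checked with that effective minimum gain (our interpretation of the simulation setup, not a computation from the paper):

```python
import numpy as np

P = np.diag([1.0, 0.5])
C = np.diag([1.0, 1.0])
D0 = np.diag([10.0, 24.0])   # initial adaptive gains d_i(0)
Ds = np.diag([1.0, 2.0])     # constant gains d_i^*
A = np.array([[2.0, -0.1], [-5.0, 2.0]])
B = np.array([[-1.5, -0.1], [-0.2, -1.5]])
l_max = h_max = 1.0          # tanh is 1-Lipschitz

# Effective minimum gain: d_i(t) >= d_i(0), so D(t) + D* >= D0 + Ds.
Deff = D0 + Ds
lam_check = np.linalg.eigvalsh(-P @ C - P @ Deff).max()
k1 = 0.5 * np.abs(P @ A).sum(axis=1).max()    # (1/2)||PA||_inf
k1b = 0.5 * np.abs(P @ A).sum(axis=0).max()   # (1/2)||PA||_1
k2 = 0.5 * np.abs(P @ B).sum(axis=1).max()    # (1/2)||PB||_inf
k2b = 0.5 * np.abs(P @ B).sum(axis=0).max()   # (1/2)||PB||_1
lam = np.linalg.eigvalsh(P)

lhs = -(lam_check + k1 + k1b * l_max**2 + k2) / lam.max()
rhs = k2b * h_max**2 / lam.min()
```

With these values lhs = 7.2 > rhs = 1.6, so the condition is comfortably satisfied.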

Figure 8
figure 8

The synchronization errors \(e_{i}(t)\) (\(i=1,2\)) state with \(\beta=3\)

Figure 9
figure 9

The synchronization trajectories of \(x_{1}\), \(y_{1}\) with \(\beta=3\)

Figure 10
figure 10

The synchronization trajectories of \(x_{2}\), \(y_{2}\) with \(\beta=3\)

Figure 11
figure 11

Time response of \(d_{i}(t)\) (\(i=1, 2\)) with \(\beta=3\)

Similarly, projective synchronization with projective coefficient \(\beta =-2\) is shown in Fig. 12–Fig. 15.

Figure 12
figure 12

The synchronization errors \(e_{i}(t)\) (\(i=1,2\)) state with \(\beta=-2\)

Figure 13
figure 13

The synchronization trajectories of \(x_{1}\), \(y_{1}\) with \(\beta=-2\)

Figure 14
figure 14

The synchronization trajectories of \(x_{2}\), \(y_{2}\) with \(\beta=-2\)

Figure 15
figure 15

Time response of \(d_{i}(t)\) (\(i=1, 2\)) with \(\beta=-2\)

Remark 10

In the simulations, the projective coefficient β can be chosen as an arbitrary nonzero constant.

5 Conclusions

In this paper, the projective synchronization of delayed fractional-order neural networks is investigated. To obtain general results, an effective hybrid controller is designed, a fractional inequality and the comparison principle for linear fractional equations with delay are employed, and sufficient conditions are given that ensure the master and slave systems achieve projective synchronization. Numerical simulations demonstrate the effectiveness of the proposed method.