1 Introduction

Repetition learning is a learning strategy characterized by the accumulation of experience. Built on this repetitive learning strategy, iterative learning control (ILC) combines artificial intelligence with automatic control. The focus of ILC is to achieve perfect tracking over a finite time interval through repeated trials, and it typically refines the control signal by comparing the system output with the desired trajectory. ILC has been successfully applied to various systems [1,2,3,4,5]. In recent years, building on the distributed cooperative control of multi-agent systems [6,7,8], the ILC algorithm has been extended to multi-agent systems to solve distributed tracking problems, for example, leader-follower consensus [9, 10], formation control [11,12,13], and finite-time consensus [14, 15], to name but a few.

It is worth noting that the network characterizes the information transmission between sensors, controllers, and actuators, as well as between different control objects, and thus plays a key role in multi-agent systems. In practical communication, information is usually transmitted as digital signals. For example, in many applications of multi-agent systems, including tracking control of robotic teams, unmanned aerial vehicle formations, and autonomous underwater vehicles, information is transmitted in the form of data. Due to the limited communication bandwidth and storage space in practical applications, the original precise data need to be quantized before being sent [16]. Thus, achieving control objectives despite the information loss caused by quantization is a challenge for multi-agent systems. As a consequence, many distributed algorithms with quantized communication have been designed for multi-agent systems [17,18,19,20,21]. For example, in [17], uniform and logarithmic quantizers were utilized to deal with the consensus problem of multi-agent systems, and it was demonstrated that average consensus can be achieved if the logarithmic gain is small enough. In [18], the authors further proved that average consensus can be achieved for any logarithmic gain using quantized relative state measurements. Moreover, considering energy consumption [22], the authors in [19, 20] addressed the consensus problem by periodic event-triggered control with quantization, and solved the leader-following consensus for linear and Lipschitz nonlinear multi-agent systems by event-triggered control with quantization. Quantized controllers have also found many other applications; for example, the logarithmic quantizer learning method has been used to solve the distributed resource allocation problem [23, 24].

Recently, quantized control has been applied to the ILC algorithm [25, 26]. As an early work, [25] designed an ILC scheme based on the logarithmic quantizer and proved that the tracking error converges to an error bound. To obtain better tracking performance, the ILC scheme in [26] was designed with quantized error information, and the authors proved that the tracking error converges to zero asymptotically. For multi-agent systems, a quantized iterative learning controller based on a finite-level uniform quantizer was proposed in [27] to solve the consensus tracking problem. Both quantization and event-triggered control were considered in [28, 29] for the consensus tracking problem of discrete-time and continuous-time multi-agent systems, respectively. In [30], the authors further extended [29] to continuous-time multi-agent systems with finite-level \(\varSigma _{\Delta }\) quantization and random packet losses. It is worth noting that most controllers are built on model information [31]. Owing to the complexity of practical systems, it is often difficult to establish a mathematical model that exactly describes the actual system. To deal with this problem, Hou proposed the model-free adaptive control (MFAC) method [32], which needs only the input and output data of the system to design the controller. The MFAC method has been applied in many fields, such as traffic networks, chemical batch processes, and cyber-physical systems [33]. Based on the MFAC method, two quantized ILC algorithms depending only on input and output data were proposed in [34] to solve the tracking problem of unknown nonlinear systems with quantization and saturation. For multi-agent systems, distributed MFAILC algorithms have been established to solve the consensus tracking problem [35] and the formation control problem [36].
Notably, the above literature mainly focuses on cooperative interactions between agents. However, hostile or competitive relations between agents are ubiquitous. For example, in [37] antagonistic interactions were considered and the bipartite synchronization problem was solved. Generally, a network with antagonistic interactions can be characterized by a signed graph, where a negative (positive) edge represents an antagonistic (cooperative) interaction between agents. In [38], based on the MFAILC method and the logarithmic quantizer, a quantized distributed MFAILC approach was proposed to analyze the bipartite consensus tracking of multi-agent systems with quantized data. In [39], the authors solved the bipartite containment tracking problem by the MFAILC method, that is, the output states of the agents converge to the convex hull formed by the output states of the leaders and their symmetric output states.

In this paper, we propose a quantized MFAILC algorithm to solve the bipartite containment tracking problem of unknown nonlinear multi-agent systems with quantized information. The algorithm is established based on the MFAILC method and the logarithmic quantizer [18]. The developed algorithm can be applied in many scenarios. For instance, autonomous robots can secure and remove hazardous materials, and autonomous robots with detection sensors can help robots without such sensors reach a deployment position. For the tracking control problem, various control algorithms based on reinforcement learning have also been proposed [40,41,42,43,44]. Different from these algorithms, our proposed algorithm depends only on the input and output data of the multi-agent systems and needs neither the mathematical model nor the structural information of the agents. Moreover, the communication information between agents can be incomplete. First, we establish a linear data model of the unknown nonlinear multi-agent systems by the dynamic linearization method. Second, we design a quantized MFAILC algorithm, which depends only on the quantized values of the relative output measurements and the input data of the multi-agent systems. Compared with the bipartite containment studied in [39], which requires accurate information, our algorithm can deal with the information loss caused by quantization, thereby solving the bipartite containment tracking problem. Moreover, the algorithm needs only incomplete output information, which reduces the consumption of communication and storage resources. Finally, we prove that under the proposed quantized MFAILC algorithm all leaders achieve bipartite consensus and the containment error of the followers converges to zero, that is, the multi-agent system achieves bipartite containment.

Notation: Let R denote the set of real numbers and \(R^{n\times n}\) the set of \(n \times n\) real matrices. For a matrix \(A=[a_{ij}]_{n\times n}\), \({{\,\textrm{diag}\,}}(A)={{\,\textrm{diag}\,}}\{a_{11},a_{22},\ldots ,a_{nn}\}\) represents the diagonal matrix with \(a_{11},a_{22},\ldots ,a_{nn}\) as its diagonal elements, and \(\Vert A\Vert \) represents the 1-norm of A. I denotes the identity matrix with appropriate dimensions, and |a| represents the absolute value of a.

2 Preliminaries

2.1 Signed Graph

We consider n agents in a signed network. The signed graph G(V, E, A) represents the signed network, where \(V=\{1,2,\ldots ,n\}\) represents the node set, \(E=\{(i,j)|i,j \in V\}\subseteq V\times V \) denotes the edge set, and \(A=[a_{ij}]_{n\times n}\in R^{n\times n}\) denotes the adjacency matrix such that \(a_{ij}\ne 0\) if and only if \((j,i)\in E\), where \((j,i)\) denotes a directed edge from j to i. We assume that there are no self-loops, i.e., \(a_{ii}=0\) for all \(i\in V\). \({a_{ij}}>0\) denotes a cooperative interaction between i and j, and \({a_{ij}}<0\) denotes an antagonistic interaction between i and j. The neighbor set of node i is defined by \({N_i} = \{j|(j,i) \in E \}\). A directed path from i to j in the graph G(V, E, A) is a finite sequence of edges of the form \((i_{0},i_{1}),(i_{1},i_{2}), \ldots , (i_{k-1},i_{k}) \in E\), where \(i_{0}= i\) and \(i_{k}= j\). If there is a directed path between any two different nodes, then G(V, E, A) is called strongly connected. The Laplacian matrix \(L=[l_{ij}]_{n\times n}\) is defined as \(L=D-A\), where \(D={{\,\textrm{diag}\,}}\{{{d_1},{d_2}, \cdots , {d_{n}}}\}\) and \({d_i}= \sum _{j = 1}^{n} |{a_{ij}}|\). It follows that L can be written as

$$\begin{aligned}l_{ij}={\left\{ \begin{array}{ll} \sum \limits _{k = 1,k\ne i}^{n} |{a_{ik}}|,\ \ \text {for }\, j=i,\\ -a_{ij},\ \ \ \ \ \ \ \ \ \ \ \text {for }j\ne i. \end{array}\right. }\end{aligned}$$
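As a numerical illustration (not part of the original analysis), the signed Laplacian \(L=D-A\) with \(d_i=\sum _{j}|a_{ij}|\) can be formed as follows; the 3-node adjacency matrix is hypothetical.

```python
import numpy as np

def signed_laplacian(A):
    """Laplacian L = D - A of a signed graph, where the degree
    d_i sums the absolute values |a_ij| of the incident weights."""
    D = np.diag(np.abs(A).sum(axis=1))
    return D - A

# Hypothetical signed graph: the edges between nodes 1 and 3 are antagonistic.
A = np.array([[ 0.0, 1.0, -2.0],
              [ 1.0, 0.0,  1.0],
              [-2.0, 1.0,  0.0]])
L = signed_laplacian(A)
print(L)
```

Note that, unlike the unsigned case, the rows of a signed Laplacian need not sum to zero: row i sums to \(2\sum _{j:a_{ij}<0}|a_{ij}|\).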

Definition 1

[45] A signed graph G(V, E, A) is structurally balanced if it admits a bipartition of the nodes into \({V_1}\) and \({V_2}\), with \({V_1}\cup {V_2} = V\) and \({V_1}\cap {V_2} = \emptyset \), such that \({a_{ij}} \ge 0\), \(\forall i, j \in {V_k}\ (k \in \{ 1,2\} )\), and \({a_{ij}} \le 0\), \(\forall i \in {V_p},j \in {V_q}, p \ne q\ (p,q \in \{ 1,2\} )\). Otherwise, it is said to be structurally unbalanced.
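Structural balance can be checked by a two-coloring of the nodes: a positive edge keeps the color and a negative edge flips it. A minimal sketch, assuming the underlying graph is connected; the adjacency matrices below are hypothetical.

```python
from collections import deque

def is_structurally_balanced(A):
    """Two-colour the nodes: a positive edge propagates the same colour,
    a negative edge the opposite colour.  The signed graph is structurally
    balanced iff the colouring is consistent (connected graph assumed)."""
    n = len(A)
    colour = [None] * n
    colour[0] = 1
    queue = deque([0])
    while queue:
        i = queue.popleft()
        for j in range(n):
            if A[i][j] != 0:
                want = colour[i] if A[i][j] > 0 else -colour[i]
                if colour[j] is None:
                    colour[j] = want
                    queue.append(j)
                elif colour[j] != want:
                    return False, None     # conflicting colours
    return True, colour

# Balanced: {0, 1} cooperate, node 2 is antagonistic to both.
ok, colour = is_structurally_balanced([[0, 1, -1], [1, 0, -1], [-1, -1, 0]])
print(ok, colour)
```

An all-negative triangle, by contrast, admits no such bipartition and the check reports it as unbalanced.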

Lemma 1

[45] The multi-agent system admits a bipartite consensus if and only if the strongly connected signed graph G(V, E, A) is structurally balanced.

Lemma 1 means that when the signed graph G(V, E, A) is structurally balanced, the agents of the multi-agent system can be divided into two groups \(V'\) and \(V''\). Under the designed control algorithm, all agents in \(V'\) converge to one value and all agents in \(V''\) converge to another value, where the two values are equal in magnitude but opposite in sign.

2.2 Problem Statement

In this paper, we study the bipartite containment problem of nonlinear discrete-time multi-agent systems by the quantized MFAILC method. We consider m leaders and n followers, where the group of leaders is denoted by \(V_{1}=\{1,2,\ldots ,m\}\) and the group of followers is denoted by \(V_{2}=\{m+1,m+2,\ldots ,m+n\}\). Assume that the communication graph among the m leaders is strongly connected and structurally balanced, that is, it admits a bipartition of the nodes into \(V'\) and \(V''\) with \(V_{1}=V'\cup V''\) and \({V'}\cap {V''} = \emptyset \). The interactions among the followers are cooperative.

A nonlinear discrete-time system is written as

$$\begin{aligned} \begin{array}{rl} {y_i}(t+1,k)\!=\!&{}{f_i}({y_i}(t,k),{y_i}(t-1,k), \ldots ,{y_i}(t-{n_y},k), \\ &{}{u_i}(t,k),{u_i}(t-1,k),\ldots , {u_i}(t-{n_u},k)), \end{array} \end{aligned}$$
(1)

where \(i\in \{1,2,\ldots ,m+n\}\) denotes the i-th agent, \(t \in \{ 0,1, \ldots, T\}\) is the time instant, and k denotes the iteration number. \({y_i}(t,k) \in R\) and \({u_i}(t,k) \in R\) represent the output and input of the i-th agent, respectively. \(n_y\) and \(n_u\) are the unknown orders of the output \({y_i}(t,k)\) and the input \({u_i}(t,k)\), respectively. \(f_{i}( \cdot )\) denotes a nonlinear function that depends only on the input and output data of the i-th agent.

Our goal is to design a quantized model-free iterative learning controller that achieves the bipartite containment objective. That is, the output \(y_i(t,k)\) of each follower converges to the convex hull formed by the outputs of the leaders and the symmetric (sign-flipped) outputs of the leaders.

Assumption 1

The partial derivative of \(f_{i}(\cdot )\) with respect to \({u_i}(t,k)\) is continuous, and the nonlinear function \(f_{i}(\cdot )\) satisfies the Lipschitz condition \(\left| {\Delta {y_i}(t + 1,k)} \right| \le b\left| {\Delta {u_i}(t,k)} \right| \), where \(\Delta {y_i}(t + 1,k) \buildrel \Delta \over = {y_i}(t + 1,k) - {y_i}(t + 1,k - 1)\), \(\Delta {u_i}(t,k) \buildrel \Delta \over = {u_i}(t,k) - {u_i}(t,k - 1)\), and b is a positive constant.

Remark 1

Assumption 1 is a general condition for controller design and is reasonable for practical nonlinear systems. It bounds the rate of change of the output caused by a change of the input. From the perspective of energy, it implies that the rate of energy change cannot go to infinity if the change of the control input is within a finite range, which holds for many practical industrial systems, such as temperature control systems and pressure control systems.

Lemma 2

[46] For system (1), if it satisfies Assumption 1 and \(\Delta {u_i}(t,k) \ne 0\), then system (1) can be written as

$$\begin{aligned} \Delta {y_i}(t + 1,k)={\phi _i}(t,k)\Delta {u_i}(t,k), \end{aligned}$$
(2)

where \({\phi _i}(t,k)\) is a time-varying parameter, which is called pseudo partial derivative. \(|\phi _{i}(t,k)|\le b'\) and \({b'}\) is a positive constant.

By equation (2), the nonlinear system is transformed into a linear data model with a time-varying parameter, and \(\phi _{i}(t,k)\) can be estimated using only the input and output data of the multi-agent systems. Lemma 2 requires that \(\Delta {u_i}(t,k) \ne 0\) holds. If \(\Delta {u_i}(t,k)=0\), a slightly different dynamic linearization can be applied after shifting by \(\delta _k \in Z^+\) time instants until \(u_i(t,k)-{u_i}(t,k - \delta _k)\ne 0\), so that \(\phi _{i}(t,k - \delta _k)\) exists. For more details, please refer to Theorem 3.2 in [33].
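A small numerical sketch of the dynamic linearization (2): for a hypothetical scalar plant \(f\) (not one from the paper), the pseudo partial derivative \(\phi =\Delta y/\Delta u\) reproduces \(\Delta y=\phi \Delta u\) by construction and stays bounded when \(f\) is Lipschitz in \(u\).

```python
# Hypothetical nonlinear plant, Lipschitz in u: |df/du| <= 1.2 for all u.
def f(y, u):
    return 0.5 * y + u / (1.0 + u * u) + 0.2 * u

y_prev, u_prev = 0.3, 1.0    # data from iteration k-1
y_curr, u_curr = 0.3, 1.4    # iteration k, same initial state, new input

dy = f(y_curr, u_curr) - f(y_prev, u_prev)   # Delta y_i(t+1, k)
du = u_curr - u_prev                         # Delta u_i(t, k) != 0
phi = dy / du                                # pseudo partial derivative

print(phi)
```

Here the bound \(b'\) of Lemma 2 corresponds to the Lipschitz constant of \(f\) with respect to \(u\) (1.2 for this illustrative plant).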

Assumption 2

For any t and k, \({\phi _i}(t,k)\) satisfies \(\phi _{i}(t,k)>\sigma >0\) or \(\phi _{i}(t,k)<-\sigma <0\), where \(\sigma \) is a small positive constant. Without loss of generality, we assume \(\phi _{i}(t,k)> \sigma > 0\).

Remark 2

Assumption 2 ensures that the increment of the output and the increment of the input have the same sign, which means that the output does not decrease when the control input increases. It implies that the control direction is known and remains unchanged. This is a common assumption for many practical systems, such as pressure control systems and temperature control systems [34].

Assumption 3

The graph G(V, E, A) is strongly connected, and \(\sum _{j = 1}^{m + n} |{a_{ij}}| > \sum _{j = m+1}^{m + n} |{{a_{ji}}}|\) for any \(i\in V\).

Lemma 3

[47] Let \(\Upsilon \subset R^{n\times n}\) denote the set of all irreducible substochastic matrices with positive diagonal entries. Then we have

$$\begin{aligned} \Vert \Upsilon (\omega )\Upsilon (\omega -1)\ldots \Upsilon (1)\Vert \le \tau , \end{aligned}$$

where \(0<\tau < 1\), and \(\Upsilon (i)\), \(i=1,2,\ldots ,\omega \), are \(\omega \) matrices arbitrarily selected from \(\Upsilon \).
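Lemma 3 can be illustrated numerically: a single irreducible substochastic matrix may have maximum row sum equal to one, but a product of enough such factors is strictly contractive. The matrices below are hypothetical, and the maximum-row-sum norm is used for the check.

```python
import numpy as np

# Two hypothetical irreducible substochastic matrices with positive
# diagonals: every row sums to at most 1, and at least one row of each
# sums to strictly less than 1.
T1 = np.array([[0.5,  0.5,  0.0],
               [0.25, 0.5,  0.25],
               [0.0,  0.5,  0.375]])   # last row sums to 0.875
T2 = np.array([[0.5,  0.25, 0.0],
               [0.25, 0.5,  0.25],
               [0.0,  0.25, 0.5]])     # first and last rows sum to 0.75

def max_row_sum(M):
    return np.abs(M).sum(axis=1).max()

# A single factor has norm 1, but the product already drops below 1.
print(max_row_sum(T1), max_row_sum(T2 @ T1))
```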

In this paper, we will adopt the logarithmic quantizer. The logarithmic set of quantization levels is described as

$$\begin{aligned} W=\{\pm \varpi _{p}:\varpi _{p}=\theta ^{p}\varpi _{0}, p=0, \pm 1,\pm 2,\ldots \}\cup \{0\}, \end{aligned}$$

where \(0<\theta <1\) represents the quantization density, and \(\varpi _{0}>0\) is the initial quantization value.

Definition 2

The quantizer \(q:R\rightarrow R\) is defined as

$$\begin{aligned} q(x)\!=\!{\left\{ \begin{array}{ll} \varpi _{p},&{}if \quad \frac{1}{1+\alpha }\varpi _{p}<x\le \frac{1}{1-\alpha }\varpi _{p},\\ 0,&{}if\quad x=0,\\ -q(-x),&{}if\quad x<0,\\ \end{array}\right. } \end{aligned}$$
(3)

where \(0<\alpha =\dfrac{1-\theta }{1+\theta }<1\).

For the vector \(x=[x_1,x_2,\ldots ,x_n]^{T}\), we define \(q(x)=[q(x_1),q(x_2),\ldots ,q(x_n)]^{T}\in R^n\). By Definition 2, we know that if \(\frac{1}{1+\alpha }\varpi _{p}<x_i\le \frac{1}{1-\alpha }\varpi _{p}\), then \(q(x_i)=\varpi _{p}\). Thus for \((1-\alpha )x_i\le \varpi _{p}<(1+\alpha )x_i\), we have \(-\alpha x_i\le q(x_i)-x_i<\alpha x_i\). If \(x_i<0\), we know that \(q(x_i)=-q(-x_i)\) holds. Thus for \(\frac{1}{1+\alpha }\varpi _{p}<-x_i\le \frac{1}{1-\alpha }\varpi _{p}\), we have \(q(-x_i)=\varpi _{p}\) and \(-(1-\alpha )x_i\le \varpi _{p}<-(1+\alpha )x_i\). Then for \(x_i<0\), we obtain \(\alpha x_i<q(x_i)-x_i\le -\alpha x_i\). For \(x_i=0\), we have \(q(x_i)-x_i=0\). Therefore, there holds

$$\begin{aligned} q(x)-x=\varXi x, \end{aligned}$$
(4)

where \(\varXi ={{\,\textrm{diag}\,}}\{\varXi _1,\varXi _2,\ldots ,\varXi _n\}\) is a constant matrix, and \(\varXi _i \in [-\alpha ,\alpha )\) for \(i=1,2,\ldots ,n\).
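A sketch of the logarithmic quantizer (3) together with a numerical check of the sector bound (4); the parameters \(\theta =0.5\) and \(\varpi _0=1\) are chosen for illustration.

```python
import math

def log_quantize(x, theta=0.5, w0=1.0):
    """Logarithmic quantizer (3): maps x to the level w_p = theta**p * w0
    whose cell (w_p/(1+alpha), w_p/(1-alpha)] contains x, so the relative
    error never exceeds alpha = (1 - theta) / (1 + theta)."""
    if x == 0.0:
        return 0.0
    if x < 0.0:
        return -log_quantize(-x, theta, w0)
    alpha = (1.0 - theta) / (1.0 + theta)
    p = round(math.log(x / w0) / math.log(theta))   # initial guess
    while x > theta**p * w0 / (1.0 - alpha):
        p -= 1          # level too small: move one level up
    while x <= theta**p * w0 / (1.0 + alpha):
        p += 1          # level too large: move one level down
    return theta**p * w0

theta = 0.5
alpha = (1 - theta) / (1 + theta)
for x in (0.07, 0.9, 3.2, -1.7):
    qx = log_quantize(x, theta)
    assert abs(qx - x) <= alpha * abs(x)   # sector bound (4)
print("sector bound |q(x) - x| <= alpha|x| verified")
```

Since the cells \((\varpi _p/(1+\alpha ),\varpi _p/(1-\alpha )]\) tile \((0,\infty )\) exactly when \(\alpha =(1-\theta )/(1+\theta )\), the two correction loops terminate after at most a few steps.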

3 Main Results

In this section, we present the quantized MFAILC algorithm and give the condition under which the multi-agent system achieves bipartite containment. Throughout this section, A denotes the adjacency matrix of G(V, E, A), L denotes the Laplacian matrix associated with A, \(G_{L}\) represents the communication graph among the leaders, and \(G_{F}\) represents the communication graph among the followers.

The adjacency matrix of G(V, E, A) can be written as

$$\begin{aligned} A=\begin{bmatrix} A_{L} &{} 0_{m\times n} \\ A_{FL} &{} A_{F} \end{bmatrix}, \end{aligned}$$

where \(A_{L}\) denotes the adjacency matrix of \(G_{L}\), \(A_{F}\) denotes the adjacency matrix of \(G_{F}\), and \(A_{FL}\) describes the interactions between the leaders and the followers. The Laplacian matrix L of G(V, E, A) can be written as

$$\begin{aligned} L=\begin{bmatrix} L_{L} &{} 0_{m\times n} \\ -A_{FL} &{} L_{F}+D_{FL} \end{bmatrix}, \end{aligned}$$

where \(L_{L}\) and \(L_{F}\) are Laplacian matrices associated with \(A_{L}\) and \(A_{F}\), respectively. \(L_{F}=D_{FF}-A_{F}\), \(D_{FF}={{\,\textrm{diag}\,}}\{{{d_1},{d_2}, \cdots , {d_{n}}}\}\) with \(d_{i}=\sum _{j = m+1}^{m+n} |a_{(m+i)j}|\), \(1\le i\le n\), and \(D_{FL}={{\,\textrm{diag}\,}}\{d'_1,d'_2, \cdots , d'_{n}\}\) with \(d'_{i}=\sum _{j = 1}^{m} |a_{(m+i)j}|\), \(1\le i\le n\).

Definition 3

The error of the i-th agent at the k-th iteration is denoted by \({e_i}(t,k)\). We have

$$\begin{aligned} {e_i}(t,k)=b_{i}y_{0}-y_{i}(t,k), \end{aligned}$$
(5)

where \(y_{0}\) is the desired trajectory of leaders, \(b_{i}=1\) for \(i\in V'\) and \(b_{i}=-1\) for \(i\in V''\).

Definition 4

The containment error of the i-th agent at the k-th iteration is denoted by \({\xi _i}(t,k)\). For \(i\in V_{1}\), we have

$$\begin{aligned} {\xi _i}(t,k)\!\!=\!\!\sum \limits _{j \in {N_i}} {({a_{ij}}{y_j}(t,k)}\!-\!|{{a_{ij}}}|{y_i}(t,k))\!+\!c_{i}(b_{i}y_{0}\!-\!y_{i}(t,k)), \end{aligned}$$
(6)

where \(y_{0}\) is a fixed constant, \(c_i\in \{0,1\}\), and \(c_{i}=1\) denotes that agent i can receive the information \(y_{0}\), otherwise \(c_i=0\). We consider that there is at least one \(c_i=1\) for \(i\in V_{1}\). \(b_i=1\) for \(i\in V'\) and \(b_i=-1\) for \(i\in V''\). For \(i\in V_{2}\), we have

$$\begin{aligned} {\xi _i}(t,k)=\sum \limits _{j \in {N_i}} {a_{ij}}({y_j}(t,k)-{y_i}(t,k)). \end{aligned}$$
(7)

By equation (7), we can obtain

$$\begin{aligned} \xi _{F}(t,k)=A_{FL}Y_{L}(t,k)-(L_{F}+D_{FL})Y_{F}(t,k), \end{aligned}$$

where \(\xi _{F}(t,k)=[\xi _{m+1}(t,k),\xi _{m+2}(t,k),\ldots ,\xi _{m+n}(t,k)]^{T}\), \(Y_{L}(t,k)=[y_{1}(t,k), y_{2}(t,k), \ldots , y_{m}(t,k)]^{T}\), \(Y_{F}(t,k)=[y_{m+1}(t,k),y_{m+2}(t,k),\ldots , y_{m+n}(t,k)]^{T}\), and \(\xi _{L}(t,k)=[\xi _{1}(t,k),\xi _{2}(t,k),\ldots ,\xi _{m}(t,k)]^{T}\). If \(\lim _{k\rightarrow \infty }\Vert \xi _{F}(t,k)\Vert =0\), we have \(\lim _{k\rightarrow \infty }Y_F(t,k)=(L_F+D_{FL})^{-1}A_{FL}\lim _{k\rightarrow \infty }Y_L(t,k)\). By the analysis in [39], each entry of \(({L}_F+{D}_{FL})^{-1}{A}_{FL}\) is nonnegative and each row sum of \(({L}_F+{D}_{FL})^{-1}{A}_{FL}\) is one. Thus, similar to the analysis in [39], if \(\lim _{k\rightarrow \infty }y_{i}(t,k)= y_0\) for \(i\in V'\), \(\lim _{k\rightarrow \infty }y_{i}(t,k)=-y_0\) for \(i\in V''\), and \(\lim _{k\rightarrow \infty }\Vert \xi _{F}(t,k)\Vert =0\), then the bipartite containment is achieved.
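The containment property used above can be checked numerically on a small hypothetical network: each entry of \((L_F+D_{FL})^{-1}A_{FL}\) is nonnegative and each row sums to one, so the followers' limit outputs are convex combinations of the leaders' (signed) outputs. The blocks \(A_F\) and \(A_{FL}\) below are made up for illustration.

```python
import numpy as np

# Hypothetical network: m = 2 leaders, n = 2 followers, cooperative
# follower-follower and follower-leader weights.
A_F  = np.array([[0.0, 1.0],
                 [1.0, 0.0]])          # follower <-> follower
A_FL = np.array([[1.0, 0.0],
                 [0.0, 2.0]])          # follower <- leader

D_FF = np.diag(np.abs(A_F).sum(axis=1))
D_FL = np.diag(np.abs(A_FL).sum(axis=1))
L_F  = D_FF - A_F

W = np.linalg.inv(L_F + D_FL) @ A_FL   # followers' mixing matrix
print(W, W.sum(axis=1))
```

For this example \(W=\begin{bmatrix}0.6 & 0.4\\ 0.2 & 0.8\end{bmatrix}\), i.e., each follower's limit output is a convex combination of the two leaders' outputs.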

In order to achieve the bipartite containment tracking task, the quantized model-free iterative learning controller for each agent is designed as follows

$$\begin{aligned} u_{i}(t,k)={u_i}(t,k-1) + \frac{{\rho {{\hat{\phi }}_i}(t,k)}}{{\lambda + {{\left| {{{\hat{\phi }}_i}(t,k)} \right| }^2}}}\xi _{qi}(t+1,k-1), \end{aligned}$$
(8)

where \(\xi _{qi}(t+1,k-1)=\sum _{j \in {N_i}}q({a_{ij}}{y_j}(t,k)-|{{a_{ij}}}|{y_i}(t,k))+c_{i}q(b_{i}y_{0}-y_{i}(t,k))\) for \(i\in V_1\), \(\xi _{qi}(t+1,k-1)=\sum _{j \in {N_i}}q(a_{ij}(y_j(t,k)-y_i(t,k)))\) for \(i\in V_2\), \(\lambda > 0\) represents the weighting factor, and \(\rho \in (0,1)\) represents the controller parameter.

The \({{{\hat{\phi }}_i}(t,k)}\) is the estimated value of \({\phi _i}(t,k)\), and the value of \({{{\hat{\phi }}_i}(t,k)}\) is updated by

$$\begin{aligned} \begin{array}{rl} \hat{\phi }_{i}(t,k)=&{}\hat{\phi }_{i}(t,k\!-\!1)+\frac{\eta \Delta u_{i}(t,k-1)}{\mu \!+\!|\Delta u_{i}(t,k-1)|^{2}}(q(\Delta y_{i}(t\!+\!1,k\!-\!1))\\ &{}-\hat{\phi }_{i}(t,k-1)\Delta u_{i}(t,k-1)), \end{array} \end{aligned}$$
(9)

where \(0<\eta < 1\), \(\mu > 0\), and \(q(\Delta y_{i}(t+1,k-1))\) represents the quantized value of \(\Delta y_{i}(t+1,k-1)\). The parameter \(\eta \) is a step size, which makes the algorithm more flexible and general and is also helpful for analyzing the boundedness of \(\hat{\phi }_{i}(t,k)\). If any of the following three conditions is satisfied

$$\begin{aligned}{} & {} |\hat{\phi }_{i}(t,k)|\le \sigma , \\{} & {} |\Delta u_{i}(t,k-1)|\le \sigma , \\{} & {} \text {or}\ \ sign(\hat{\phi }_{i}(t,k))\ne sign(\hat{\phi }_{i}(t,1)), \end{aligned}$$

the following equation holds

$$\begin{aligned} \hat{\phi }_{i}(t,k) =\hat{\phi }_{i}(t,1). \end{aligned}$$
(10)

Equation (10) is a reset condition, which ensures the robustness of the controller. The compact form of (8) can be written as:

$$\begin{aligned} u(t,k)=u(t,k-1) + \rho \Omega (t,k)\xi _{q}(t+1,k-1), \end{aligned}$$
(11)

where \(u(t,k)=[u_{1}(t,k), u_{2}(t,k), \ldots ,u_{m+n}(t,k)]^{T}\), \(\Omega (t,k)={{\,\textrm{diag}\,}}\{\frac{\hat{\phi }_{1}(t,k)}{\lambda +|\hat{\phi }_{1}(t,k)|^{2}}, \frac{\hat{\phi }_{2}(t,k)}{\lambda +|\hat{\phi }_{2}(t,k)|^{2}},\ldots , \frac{\hat{\phi }_{m+n}(t,k)}{\lambda +|\hat{\phi }_{m+n}(t,k)|^{2}}\}\), and \(\xi _{q}(t+1,k-1)=[\xi _{q1}(t+1,k-1),\xi _{q2}(t+1,k-1),\ldots ,\xi _{q(m+n)}(t+1,k-1)]^T\).
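A minimal end-to-end sketch of the scheme (8)-(10) on a hypothetical one-leader, one-follower network with a hypothetical first-order plant. The plant, gains, and graph weights are illustrative, the reset (10) is simplified, and the update ordering is slightly streamlined; this is not the paper's simulation study.

```python
import math

theta, w0 = 0.5, 1.0
alpha = (1 - theta) / (1 + theta)

def q(x):
    """Logarithmic quantizer (3) with density theta and initial level w0."""
    if x == 0.0:
        return 0.0
    if x < 0.0:
        return -q(-x)
    p = round(math.log(x / w0) / math.log(theta))
    while x > theta ** p * w0 / (1 - alpha):
        p -= 1
    while x <= theta ** p * w0 / (1 + alpha):
        p += 1
    return theta ** p * w0

def run_plant(u, T):
    """Hypothetical unknown first-order plant y(t+1) = 0.6 y(t) + u(t)."""
    y = [0.0] * (T + 1)
    for t in range(T):
        y[t + 1] = 0.6 * y[t] + u[t]
    return y

y0_des = 2.0                          # leaders' desired output y_0
T, K = 5, 300                         # time horizon, learning iterations
rho, lam, eta, mu, sigma = 0.4, 1.0, 0.5, 1.0, 1e-4

n = 2          # agent 0: leader with c_0 = 1; agent 1: follower with a_10 = 1
u = [[0.0] * T for _ in range(n)]
phi = [[1.0] * T for _ in range(n)]   # PPD estimates, initialized at 1
y = [run_plant(u[i], T) for i in range(n)]

for k in range(K):
    u_old = [row[:] for row in u]
    y_old = [row[:] for row in y]
    # learning update (8) driven by quantized containment errors
    for t in range(T):
        xi = [q(y0_des - y_old[0][t + 1]),            # leader, c_0 = 1
              q(y_old[0][t + 1] - y_old[1][t + 1])]   # follower, a_10 = 1
        for i in range(n):
            u[i][t] = u_old[i][t] + rho * phi[i][t] / (lam + phi[i][t] ** 2) * xi[i]
    y = [run_plant(u[i], T) for i in range(n)]
    # PPD estimator (9) with a simplified reset (10)
    for i in range(n):
        for t in range(T):
            du = u[i][t] - u_old[i][t]
            dy = y[i][t + 1] - y_old[i][t + 1]
            phi[i][t] += eta * du / (mu + du * du) * (q(dy) - phi[i][t] * du)
            if abs(phi[i][t]) <= sigma or abs(du) <= sigma:
                phi[i][t] = 1.0

print(abs(y0_des - y[0][T]), abs(y[0][T] - y[1][T]))
```

Even though every transmitted signal passes through the logarithmic quantizer, both the leader's tracking error and the follower's containment error shrink over the iterations, consistent with the asymptotic claims that follow.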

Theorem 1

Suppose that system (1) with the controller (8) satisfies Assumptions \(1{-}3\). Then, \(\lim _{k\rightarrow \infty }y_i(t,k)=b_iy_0\) for \(i\in V_1\), and \(\lim _{k\rightarrow \infty } \Vert \xi _{F}(t,k)\Vert =0\) if

$$\begin{aligned} \lambda > \frac{(b')^2}{4}\ \ \text {and} \ \ 0<\rho \le \frac{1}{2{\max \limits _{i = 1, \ldots m + n} (\sum \limits _{j = 1}^{m + n} {\left| {{a_{ij}}} \right| } }+c_{i})}. \end{aligned}$$

Proof: First, we prove the boundedness of \(\hat{\phi }_{i}(t,k)\). Let \(\tilde{\phi }_{i}(t,k)=\hat{\phi }_{i}(t,k)-\phi _{i}(t,k)\). By (9), we have

$$\begin{aligned} \begin{array}{rl} \tilde{\phi }_{i}(t,k)=&{}\hat{\phi }_{i}(t,k)-\phi _{i}(t,k)\\ =&{}\hat{\phi }_{i}(t,k-1)+\frac{\eta \Delta u_{i}(t,k-1)}{\mu +|\Delta u_{i}(t,k-1)|^{2}}(q(\Delta y_{i}(t+1,k-1))\\ &{}-\hat{\phi }_{i}(t,k-1)\Delta u_{i}(t,k-1))-\phi _{i}(t,k)+\phi _{i}(t,k-1)\\ &{}-\phi _{i}(t,k-1)+\frac{\eta (\Delta u_{i}(t,k-1))^{2}}{\mu +|\Delta u_{i}(t,k-1)|^{2}}\phi _{i}(t,k-1)\\ &{}-\frac{\eta (\Delta u_{i}(t,k-1))^{2}}{\mu +|\Delta u_{i}(t,k-1)|^{2}}\phi _{i}(t,k-1)\\ =&{}\hat{\phi }_{i}(t,k-1)-\phi _{i}(t,k-1)-\phi _{i}(t,k)+\phi _{i}(t,k-1)\\ &{}-\frac{\eta \Delta u_{i}(t,k-1)}{\mu +|\Delta u_{i}(t,k-1)|^{2}}\hat{\phi }_{i}(t,k-1)\Delta u_{i}(t,k-1)\\ &{}+\frac{\eta (\Delta u_{i}(t,k-1))^{2}}{\mu +|\Delta u_{i}(t,k-1)|^{2}}\phi _{i}(t,k-1)\\ &{}+\frac{\eta \Delta u_{i}(t,k-1)}{\mu +|\Delta u_{i}(t,k-1)|^{2}}q(\Delta y_{i}(t+1,k-1))\\ &{}-\frac{\eta (\Delta u_{i}(t,k-1))^{2}}{\mu +|\Delta u_{i}(t,k-1)|^{2}}\phi _{i}(t,k-1). \end{array} \end{aligned}$$
(12)

By (4), we have \(q(\Delta y_{i}(t+1,k-1))=(1+\Gamma _{\Delta {i}})\Delta y_{i}(t+1,k-1)\), where \(|\Gamma _{\Delta {i}}|\le \alpha <1\). Then by (2), we have

$$\begin{aligned} \begin{array}{rl} \tilde{\phi _{i}}(t,k) =&{}(1-\frac{\eta (\Delta u_{i}(t,k-1))^{2}}{\mu +|\Delta u_{i}(t,k-1)|^{2}})\tilde{\phi _{i}}(t,k-1) -\Delta \phi _{i}(t,k)\\ &{}+\frac{\eta \Delta u_{i}(t,k-1)}{\mu +|\Delta u_{i}(t,k-1)|^{2}}(\Gamma _{\Delta _{i}}\Delta y_{i}(t+1,k-1)+\Delta y_{i}(t+1,k-1))\\ &{}-\frac{\eta (\Delta u_{i}(t,k-1))^{2}}{\mu +|\Delta u_{i}(t,k-1)|^{2}}\phi _{i}(t,k-1). \\ =&{}(1-\frac{\eta (\Delta u_{i}(t,k-1))^{2}}{\mu +|\Delta u_{i}(t,k-1)|^{2}})\tilde{\phi _{i}}(t,k-1) -\Delta \phi _{i}(t,k)\\ &{}+\frac{\eta \Delta u_{i}(t,k-1)^{2}}{\mu +|\Delta u_{i}(t,k-1)|^{2}}(\Gamma _{\Delta _{i}}\phi _{i}(t,k-1)+\phi _{i}(t,k-1))\\ &{}-\frac{\eta (\Delta u_{i}(t,k-1))^{2}}{\mu +|\Delta u_{i}(t,k-1)|^{2}}\phi _{i}(t,k-1)\\ =&{}(1-\frac{\eta (\Delta u_{i}(t,k-1))^{2}}{\mu +|\Delta u_{i}(t,k-1)|^{2}})\tilde{\phi _{i}}(t,k-1) -\Delta \phi _{i}(t,k)\\ &{}+\frac{\eta (\Delta u_{i}(t,k-1))^{2}}{\mu +|\Delta u_{i}(t,k-1)|^{2}}\Gamma _{\Delta _{i}}\phi _{i}(t,k-1). \end{array} \end{aligned}$$
(13)

Thus

$$\begin{aligned} \begin{array}{rl} |\tilde{\phi _{i}}(t,k)| \le &{}|\Delta \phi _{i}(t,k)|\!\! +\!\!|1\!\!-\!\!\frac{\eta (\Delta u_{i}(t,k-1))^{2}}{\mu +|\Delta u_{i}(t,k-1)|^{2}}||\tilde{\phi _{i}}(t,k-1)|\\ &{}+|\frac{\eta (\Delta u_{i}(t,k-1))^{2}}{\mu +|\Delta u_{i}(t,k-1)|^{2}}||\Gamma _{\Delta _{i}}||\phi _{i}(t,k-1)|.\\ \end{array} \end{aligned}$$
(14)

Since \(0<\eta <1\) and \(\mu >0\), we have

$$\begin{aligned} \eta (\Delta u_{i}(t,k-1))^{2} \, \le \, |(\Delta u_{i}(t,k-1))|^{2} < \mu + |(\Delta u_{i}(t,k-1))|^{2}. \end{aligned}$$

Thus

$$\begin{aligned} 0<\frac{\eta (\Delta u_{i}(t,k-1))^{2}}{\mu +|\Delta u_{i}(t,k-1)|^{2}}<1. \end{aligned}$$

Thus there exists a constant \(\hbar \) such that the following inequality holds,

$$\begin{aligned} 0<|1-\frac{\eta (\Delta u_{i}(t,k-1))^{2}}{\mu +|\Delta u_{i}(t,k-1)|^{2}}|\le \hbar <1. \end{aligned}$$
(15)

By \(|\phi _{i}(t,k)|\le b'\), we can get \(|\Delta \phi _{i}(t,k)|\le 2b'\). By \(|\Gamma _{\Delta _{i}}|\le \alpha <1\) and (15), we have

$$\begin{aligned} \begin{array}{rl} |\tilde{\phi }_{i}(t,k)|\le &{}\hbar |\tilde{\phi }_{i}(t,k-1)|+b'+2b'\\ \le &{}\hbar ^{2}|\tilde{\phi }_{i}(t,k-2)|+(\hbar +1)3b'\\ \le &{}\hbar ^{3}|\tilde{\phi }_{i}(t,k-3)|+(\hbar ^{2}+\hbar +1)3b'\\ \vdots \\ \le &{}\hbar ^{k-1}|\tilde{\phi }_{i}(t,1)|+(\hbar ^{k-2}+\hbar ^{k-3}+\cdots +\hbar +1)3b'\\ =&{}\hbar ^{k-1}|\tilde{\phi }_{i}(t,1)|+\frac{1-\hbar ^{k-1}}{1-\hbar }3b'.\\ \end{array} \end{aligned}$$
(16)

By \(0<\hbar <1\), we have

$$\begin{aligned} \limsup _{k\rightarrow \infty } |\tilde{\phi }_{i}(t,k)|\le \frac{3b'}{1-\hbar }. \end{aligned}$$

Thus, \(\tilde{\phi _{i}}(t,k)\) is bounded. Since \(\phi _{i}(t,k)\) is also bounded, then \(\hat{\phi }_{i}(t,k)\) is bounded.

Next, we prove that leaders in group \(V_{1}\) can achieve the bipartite consensus. For \(i\in V_{1}\), we have

$$\begin{aligned} \begin{array}{rl} \xi _{qi}(t+1,k-1)\!=&\!\!\sum \limits _{j\in N_{i}}q(a_{ij}y_{j}(t,k)-|a_{ij}|y_{i}(t,k))+c_{i}q(b_{i}y_{0}-y_{i}(t,k)). \end{array} \end{aligned}$$
(17)

By (4), we have

$$\begin{aligned}{} & {} \begin{array}{rl} q(a_{ij}y_{j}(t,k)\!-\!|a_{ij}|y_{i}(t,k)) =(1\!\!+\!\!\Gamma _{ij})(a_{ij}y_{j}(t,k)\!\!-\!\!|a_{ij}|y_{i}(t,k)), \end{array}\\{} & {} q(b_{i}y_{0}-y_{i}(t,k))=(1+\Gamma _i)(b_{i}y_{0}-y_{i}(t,k)), \end{aligned}$$

where \(|\Gamma _{ij}|\le \alpha \) and \(|\Gamma _{i}|\le \alpha \). Then, we obtain

$$\begin{aligned} \begin{array}{rl} \xi _{qi}(t+1,k-1) =&{}\sum \limits _{j\in N_{i}}(1+\Gamma _{ij})(a_{ij}y_{j}(t,k)\!\!-\!\!|a_{ij}|y_{i}(t,k))\\ &{}+c_{i}(1+\Gamma _i)(b_{i}y_{0}-y_{i}(t,k))\\ =&{}\sum \limits _{j\in N_{i}}(1+\Gamma _{ij})(a_{ij}y_{j}(t,k)-|a_{ij}|y_{i}(t,k)+|a_{ij}|b_{i}y_{0}\\ &{}-|a_{ij}|b_{i}y_{0})+c_{i}(1+\Gamma _i)e_{i}(t,k)\\ =&{}\sum \limits _{j\in N_{i}}(1+\Gamma _{ij})(|a_{ij}|e_{i}(t,k)+a_{ij}y_{j}(t,k)-a_{ij}b_{j}y_{0})\\ &{}+c_{i}(1+\Gamma _i)e_{i}(t,k)\\ =&{}\sum \limits _{j\in N_{i}}(1\!\!+\!\!\Gamma _{ij})(|a_{ij}|e_{i}(t,k)\!\!-\!\!a_{ij}e_{j}(t,k))\!\!+\!\!c_{i}(1\!\!+\!\!\Gamma _i)e_{i}(t,k), \end{array} \end{aligned}$$
(18)

where \(e_{i}(t,k)=b_{i}y_{0}-y_{i}(t,k)\).

Then, we have

$$\begin{aligned} \begin{array}{rl} \xi _{qL}(t,k)=(\tilde{L}_{L}+\tilde{C})e_{L}(t,k), \end{array} \end{aligned}$$
(19)

where \(\xi _{qL}(t,k)=[\xi _{q1}(t,k),\xi _{q2}(t,k), \ldots ,\xi _{qm}(t,k)]^{T}\) and \(e_{L}(t,k)=[e_{1}(t,k),e_{2}(t,k), \ldots ,e_{m}(t,k)]^{T}\), \(\tilde{L}_{L}\) is the Laplacian matrix associated with \(\tilde{A}_{L}=[(1+\Gamma _{ij})a_{ij}]_{m\times m}\), and \(\tilde{C}={{\,\textrm{diag}\,}}\{c_{1}(1+\Gamma _1),c_{2}(1+\Gamma _2),\ldots ,c_{m}(1+\Gamma _m)\}\).

By (11) and (19), we have

$$\begin{aligned} \begin{array}{rl} u_{L}(t,k)=u_{L}(t,k-1) +\rho \Omega _L(t,k)(\tilde{L}_{L}\!\!+\!\!\tilde{C})e_{L}(t+1,k-1), \end{array} \end{aligned}$$
(20)

where \(u_{L}(t,k)=[u_{1}(t,k),u_{2}(t,k),\ldots ,u_{m}(t,k)]^T\) and \(\Omega _L(t,k)={{\,\textrm{diag}\,}}\{\frac{\hat{\phi }_{1}(t,k)}{\lambda +|\hat{\phi }_{1}(t,k)|^{2}}, \frac{\hat{\phi }_{2}(t,k)}{\lambda +|\hat{\phi }_{2}(t,k)|^{2}},\ldots , \frac{\hat{\phi }_{m}(t,k)}{\lambda +|\hat{\phi }_{m}(t,k)|^{2}}\}\). By (2), (5), and (20), we have

$$\begin{aligned} \begin{array}{rl} e_{L}(t+1,k) &{}=e_{L}(t+1,k-1)-Y_{L}(t+1,k)+Y_{L}(t+1,k-1)\\ &{}=e_{L}(t+1,k-1)-\Delta Y_{L}(t+1,k)\\ &{}=e_{L}(t+1,k-1) -\rho H_{L}(t,k)(\tilde{L}_{L}+\tilde{C})e_{L}(t+1,k-1), \end{array} \end{aligned}$$
(21)

where \(H_{L}(t,k)={{\,\textrm{diag}\,}}\{\frac{\phi _{1}(t,k)\hat{\phi }_{1}(t,k)}{\lambda +|\hat{\phi }_{1}(t,k)|^{2}},\frac{\phi _{2}(t,k)\hat{\phi }_{2}(t,k)}{\lambda +|\hat{\phi }_{2}(t,k)|^{2}},\ldots , \frac{\phi _{m}(t,k)\hat{\phi }_{m}(t,k)}{\lambda +|\hat{\phi }_{m}(t,k)|^{2}}\}\). Let \(h_{i}(t,k)=\frac{\phi _{i}(t,k)\hat{\phi }_{i}(t,k)}{\lambda +|\hat{\phi }_{i}(t,k)|^{2}}\), by \(\lambda >\frac{(b')^2}{4}\), we have

$$\begin{aligned} 0<h_{i}(t,k)\le \frac{b'\hat{\phi }_{i}(t,k)}{2\sqrt{\lambda }|\hat{\phi }_{i}(t,k)|}\le \frac{b'}{2\sqrt{\lambda }}<1. \end{aligned}$$
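This bound follows from the AM-GM inequality \(\lambda +|\hat{\phi }_{i}(t,k)|^{2}\ge 2\sqrt{\lambda }\,|\hat{\phi }_{i}(t,k)|\); a quick numerical check with hypothetical values of \(b'\) and \(\lambda \):

```python
import math
import random

bprime = 2.0
lam = 1.1 * bprime ** 2 / 4.0            # lam > (b')^2 / 4, as in Theorem 1

random.seed(0)
for _ in range(1000):
    # phi and phihat stay in (0, b'] under Assumption 2 and the reset (10)
    phi = random.uniform(1e-3, bprime)
    phihat = random.uniform(1e-3, bprime)
    h = phi * phihat / (lam + phihat ** 2)
    assert 0 < h <= bprime / (2 * math.sqrt(lam)) < 1

print("0 < h_i <= b'/(2*sqrt(lam)) < 1 verified")
```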

Since \(|\Gamma _{ij}|\le \alpha <1\) and \(|\Gamma _{i}|\le \alpha <1\), we have \(0<1+\Gamma _{ij} <2\) and \(0<1+\Gamma _{i} <2\) for \(i,j=1,2,\ldots ,m\). Thus the sign pattern of \(\tilde{A}_{L}\) coincides with that of \(A_{L}\), that is, the structure of \(G_L\) remains unchanged. Since \(G_L\) is structurally balanced, by Lemma 1 in [45], there exists M such that \(M\tilde{A}_{L}M\ge 0\), where \(M={{\,\textrm{diag}\,}}\{\tau _{1},\tau _{2},\ldots ,\tau _{m}\}\) and \(\tau _{i}\in \{1,-1\}\).

Let \(e_L(t+1,k)=MZ(t+1,k)\), by (21), we have

$$\begin{aligned} MZ(t+1,k)=(I-\rho H_{L}(t,k)(\tilde{L}_{L}+\tilde{C}))MZ(t+1,k-1). \end{aligned}$$
(22)

Since \(M^{-1}=M\), we have

$$\begin{aligned} \begin{array}{rl} Z(t+1,k) =(I-\rho H_{L}(t,k)\mathcal {L}_{M})Z(t+1,k-1), \end{array} \end{aligned}$$
(23)

where \(\mathcal {L}_{M}=M(\tilde{L}_{L}+\tilde{C})M\). Since \(h_{i}(t,k)<1\), \(\rho \le \frac{1}{2{\max \limits _{i = 1, \ldots m + n} (\sum _{j = 1}^{m + n} {\left| {{a_{ij}}} \right| } }+c_{i})}\), and \(G_L\) is strongly connected, then matrix \(I-\rho H_{L}(t,k)\mathcal {L}_{M}\) is nonnegative and irreducible. Since there is at least one \(c_i=1\), then at least one row sum of \(I-\rho H_{L}(t,k)\mathcal {L}_{M}\) is strictly less than one. Thus, \(I-\rho H_{L}(t,k)\mathcal {L}_{M}\) is an irreducible substochastic matrix with positive diagonal elements.

By (23), we obtain

$$\begin{aligned} \begin{array}{rl} \Vert Z(t+1,k)\Vert \le &{}\Vert I-\rho H_{L}(t,k)\mathcal {L}_{M}\Vert \Vert Z(t+1,k-1)\Vert \\ \le &{}\Vert I-\rho H_{L}(t,k)\mathcal {L}_{M}\Vert \Vert I-\rho H_{L}(t,k-1)\mathcal {L}_{M}\Vert \\ &{}\Vert Z(t+1,k-2)\Vert \\ \vdots \\ \le &{}\Vert I-\rho H_{L}(t,k)\mathcal {L}_{M}\Vert \Vert I-\rho H_{L}(t,k-1)\mathcal {L}_{M}\Vert \\ &{}\ldots \Vert I-\rho H_{L}(t,2)\mathcal {L}_{M}\Vert \Vert Z(t+1,1)\Vert .\\ \end{array} \end{aligned}$$
(24)

By Lemma 3 and (24), we have

$$\begin{aligned} \lim _{k\rightarrow \infty }\Vert Z(t+1,k)\Vert \le \lim _{k\rightarrow \infty }\delta ^{\lfloor \frac{k-1}{\omega }\rfloor }\Vert Z(t+1,1)\Vert , \end{aligned}$$
(25)

where \(\lfloor \frac{k-1}{\omega }\rfloor \) denotes the largest integer not exceeding \(\frac{k-1}{\omega }\). Since \(0<\delta <1\), we obtain \(\lim _{k\rightarrow \infty }\Vert Z(t+1,k)\Vert =0\). Since \(e_L(t+1,k)=MZ(t+1,k)\), it follows that \(\lim _{k\rightarrow \infty }\Vert e_L(t+1,k)\Vert =0\), that is, the agents in \(V_{1}\) achieve bipartite consensus.

Next, we prove that the trajectories of the followers gradually converge into the convex hull formed by the leaders' trajectories and their sign-reversed counterparts. By (2), we have

$$\begin{aligned} Y_{F}(t+1,k)=Y_{F}(t+1,k-1)+\Phi _{F}(t,k)\Delta u_{F}(t,k), \end{aligned}$$
(26)

where \(\Phi _{F}(t,k)={{\,\textrm{diag}\,}}\{\phi _{m+1}(t,k),\phi _{m+2}(t,k),\ldots , \phi _{m+n}(t,k)\}\) and \(\Delta u_{F}(t,k)\!=\![\Delta u_{m\!+\!1}(t,k), \Delta u_{m\!+\!2}(t,k),\ldots , \Delta u_{m+n}(t,k)]^{T}\). Let \(D_{F}=D_{FF}+D_{FL}\), by (7), we have

$$\begin{aligned} \xi _{F}(t,k)=A_{FL}Y_{L}(t,k)+A_{F}Y_{F}(t,k)-D_{F}Y_{F}(t,k). \end{aligned}$$
(27)

Multiplying both sides of (26) by \(-D_{F}\) and adding \(A_{FL}Y_{L}(t+1,k)+A_{F}Y_{F}(t+1,k)+A_{FL}Y_{L}(t+1,k-1)+A_{F}Y_{F}(t+1,k-1)\) to both sides, we get

$$\begin{aligned} \begin{array}{lllll} &{}A_{FL}Y_{L}(t\!+\!1,k)\!+\!A_{F}Y_{F}(t\!+\!1,k)\!\!+\!\!A_{FL}Y_{L}(t\!+\!1,k\!-\!1)\\ &{}\quad +A_{F}Y_{F}(t+1,k-1)-D_{F}Y_{F}(t+1,k)\\ &{}=A_{FL}Y_{L}(t+1,k)+A_{F}Y_{F}(t+1,k)\\ &{}\quad +A_{FL}Y_{L}(t+1,k-1)+A_{F}Y_{F}(t+1,k-1)\\ &{}\quad -D_{F}Y_{F}(t+1,k-1)-D_{F}\Phi _{F}(t,k)\Delta u_{F}(t,k).\\ \end{array} \end{aligned}$$
(28)

By (27) and (28), we have

$$\begin{aligned} \begin{array}{llll} &{}\xi _{F}(t\!+\!1,k)\!+\!A_{FL}Y_{L}(t\!+\!1,k\!-\!1)\!\!+\!\!A_{F}Y_{F}(t\!+\!1,k\!-\!1)\\ &{}=\xi _{F}(t+1,k-1)\!+\!A_{FL}Y_{L}(t+1,k)\!+\!A_{F}Y_{F}(t+1,k)\\ &{}\quad -D_{F}\Phi _{F}(t,k)\Delta u_{F}(t,k).\\ \end{array} \end{aligned}$$
(29)

By (29), we have

$$\begin{aligned} \begin{array}{rl} \xi _{F}(t+1,k) =&{}\xi _{F}(t+1,k-1)+A_{FL}\Delta Y_{L}(t+1,k)\\ &{}+A_{F}\Delta Y_{F}(t+1,k) -D_{F}\Phi _{F}(t,k)\Delta u_{F}(t,k),\\ \end{array} \end{aligned}$$
(30)

where \(\Delta Y_{L}(t+1,k)=[\Delta Y_{1}(t+1,k),\Delta Y_{2}(t+1,k),\ldots ,\Delta Y_{m}(t+1,k)]^{T}\) and \(\Delta Y_{F}(t+1,k)=[\Delta Y_{m+1}(t+1,k), \Delta Y_{m+2}(t+1,k),\ldots ,\Delta Y_{m+n}(t+1,k)]^{T}\), and \(\Delta u_{F}(t,k)\) is defined below (26).

By \(\Delta Y_{F}(t+1,k)=\Phi _{F}(t,k)\Delta u_{F}(t,k)\) and (30), we obtain

$$\begin{aligned} \begin{array}{rl} \xi _{F}(t+1,k) =&{}\xi _{F}(t+1,k-1)+A_{FL}\Delta Y_{L}(t+1,k)\\ &{}+(A_{F}-D_{F})\Phi _{F}(t,k)\Delta u_{F}(t,k)\\ =&{}\xi _{F}(t+1,k-1)-(L_{F}+D_{FL})\Phi _{F}(t,k)\Delta u_{F}(t,k)\\ &{}+A_{FL}\Delta Y_{L}(t+1,k). \end{array} \end{aligned}$$
(31)

By (8), we have

$$\begin{aligned} \begin{array}{rl} \Phi _{F}(t,k)\Delta u_{F}(t,k)=\rho H_{F}\xi _{qF}(t+1,k-1), \end{array} \end{aligned}$$
(32)

where \(H_{F}\!\!=\!\!{{\,\textrm{diag}\,}}\{h_{m+1},h_{m+2},\dots , h_{m+n}\}\!\!=\!\!{{\,\textrm{diag}\,}}\{\frac{\phi _{m+1}(t,k)\hat{\phi }_{m+1}(t,k)}{\lambda +|\hat{\phi }_{m+1}(t,k)|^{2}}, \frac{\phi _{m+2}(t,k)\hat{\phi }_{m+2}(t,k)}{\lambda +|\hat{\phi }_{m+2}(t,k)|^{2}}, \ldots ,\frac{\phi _{m\!+\!n}(t,k)\hat{\phi }_{m+n}(t,k)}{\lambda +|\hat{\phi }_{m+n}(t,k)|^{2}}\}\), \(\xi _{qF}(t+1,k-1)\!\!=\!\![\xi _{q(m+1)}(t\!+\!1,k\!-\!1),\xi _{q(m\!+\!2)}(t\!+\!1,k\!-\!1),\ldots ,\xi _{q(m+n)}(t+1,k-1)]^{T}\). By (4), we have \(\xi _{qF}(t\!+\!1,k\!-\!1)\!=\!(I\!+\!\Lambda )\xi _{F}(t\!+\!1,k\!-\!1)\), where \(\Lambda ={{\,\textrm{diag}\,}}\{\Lambda _{m+1},\Lambda _{m+2},\dots ,\Lambda _{m+n}\}\) and \(|\Lambda _{i}|<1\) for \(i=m+1,\dots , m+n\). Thus

$$\begin{aligned} \begin{array}{rl} \xi _{F}(t+1,k) =&{}(I-\rho (L_{F}+D_{FL})H_{F}(I+\Lambda ))\xi _{F}(t+1,k-1)\\ &{}+A_{FL}\Delta Y_{L}(t+1,k). \end{array} \end{aligned}$$
(33)
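The sector-bound representation \(\xi _{qF}=(I+\Lambda )\xi _{F}\) used above comes from the logarithmic quantizer. The following sketch implements a minimal logarithmic quantizer (parameter values assumed, in the spirit of \(\theta \) and \(\varpi _{0}\) in Sect. 4) and checks the sector bound numerically:

```python
import math

# Logarithmic quantizer sketch (parameters assumed): q(x) picks the level
# w0 * theta**n whose cell contains x, so q(x) = (1 + Delta)*x with
# |Delta| <= delta = (1 - theta) / (1 + theta), the sector bound behind (4).
theta, w0 = 0.8, 2.0
delta = (1 - theta) / (1 + theta)

def quantize(x):
    if x == 0.0:
        return 0.0
    s, a = math.copysign(1.0, x), abs(x)
    # n is the unique integer with w0 * theta**n in [a*(1-delta), a*(1+delta))
    n = math.floor(math.log(a * (1 + delta) / w0) / math.log(theta)) + 1
    return s * w0 * theta**n

for x in [0.013, 0.4, 1.7, -3.2, 25.0]:
    assert abs(quantize(x) - x) <= delta * abs(x) + 1e-12
```

Because the relative error never exceeds \(\delta \), each quantized measurement can be written as \((1+\Lambda _{i})\) times the true value with \(|\Lambda _{i}|\le \delta <1\), which is exactly the form used in (33).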

The norm of the matrix \(I-\rho (L_{F}+D_{FL})H_{F}(I+\Lambda )\) can be written as \(\max \limits _{i = m+1, \ldots , m + n}\{1-\rho h_{i}(1+\Lambda _{i})\sum _{j = 1}^{m+n}|a_{ij}|+\rho h_{i}(1+\Lambda _{i})\sum _{j = m+1}^{m+n}|a_{ji}|\}\). By Assumption 3, we have \(\sum \nolimits _{j = 1}^{m+n}|a_{ij}|>\sum \nolimits _{j = m+1}^{m+n}|a_{ji}|>0\). By \(\rho \le \frac{1}{2{\max \nolimits _{i = 1, \ldots , m + n} (\sum \nolimits _{j = 1}^{m + n} {\left| {{a_{ij}}} \right| } }+c_{i})}\) and \(|\Lambda _{i}|<1\), we have

$$\begin{aligned} 0<(1+\Lambda _i)\Bigg (\sum \limits _{j = 1}^{m+n}|a_{ij}|-\sum \limits _{j = m+1}^{m+n}|a_{ji}|\Bigg )<\frac{1}{\rho }. \end{aligned}$$

Thus

$$\begin{aligned} 0<1-\rho h_{i}(1+\Lambda _i)\Bigg (\sum \limits _{j = 1}^{m+n}|a_{ij}|-\sum \limits _{j = m+1}^{m+n}|a_{ji}|\Bigg )<1. \end{aligned}$$

From the above analysis, it follows that there exists \(\nu \) satisfying the following inequality,

$$\begin{aligned} \begin{array}{rl} 0<\Vert I-\rho (L_{F}+D_{FL})H_{F}(I+\Lambda )\Vert \le \nu <1. \end{array} \end{aligned}$$
(34)

Since group \(V_{1}\) achieves bipartite consensus, we have \(\lim _{k\rightarrow \infty }\Delta Y_{L}(t + 1,k)=0\). Similar to the analysis in [39], we have

$$\begin{aligned} \begin{array}{rl} \xi _{F}(t+1,k) =&{}(I-\rho (L_{F}+D_{FL})H_{F}(I+\Lambda ))^{k-k_0}\xi _{F}(t+1,k_0)\\ &{}+\sum \limits _{i=k_0\!+\!1}^{k}(I\!\!-\!\!\rho (L_{F}+D_{FL})H_{F}(I\!\!+\!\!\Lambda ))^{k\!-\!i}A_{FL}\Delta Y_{L}(t\!+\!1,i).\\ \end{array} \end{aligned}$$
(35)

By (34), we have

$$\begin{aligned} \begin{array}{rl} &{}\Vert \xi _{F}(t+1,k)\Vert \\ \le &{}\nu ^{k-k_0}\Vert \xi _{F}(t+1,k_0)\Vert \!+\!\dfrac{\Vert A_{FL}\Vert }{1-\nu } \sup \limits _{k_0+1\le i\le k}\Vert \Delta Y_{L}(t+1,i)\Vert . \end{array} \end{aligned}$$
(36)

By \(\lim _{k\rightarrow \infty }\Delta Y_{L}(t + 1,k)=0\) and the proof of Lemma 4.7 in [48], we have \(\lim _{k\rightarrow \infty }\Vert \xi _{F}(t+1,k)\Vert =0\). \(\square \)
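The perturbed-contraction argument behind (35) and (36) can be illustrated with a scalar sketch (all values below are illustrative, not from the paper): a contraction with factor \(\nu <1\) driven by a vanishing input decays to zero.

```python
# Scalar sketch of the perturbed contraction in (35)-(36).
nu = 0.9                   # stands in for ||I - rho*(L_F + D_FL)*H_F*(I + Lambda)||
xi = 5.0                   # initial containment error magnitude (assumed)
for k in range(1, 2001):
    d_k = 0.8 ** k         # stands in for ||A_FL|| * ||Delta Y_L(t+1, k)|| -> 0
    xi = nu * xi + d_k
assert abs(xi) < 1e-10     # the error has converged to zero, as in the theorem
```

This mirrors (36): the first term decays geometrically, while the supremum term vanishes because the leaders' increments go to zero.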

4 Numerical Simulations

We consider a multi-agent system consisting of 3 leaders and 3 followers under the signed digraph in Fig. 1, where nodes 1–3 represent leaders and nodes 4–6 represent followers; a positive edge weight represents a cooperative interaction between agents, and a negative edge weight represents an antagonistic interaction. The subgraph consisting of nodes 1–3 is structurally balanced: agents 1 and 3 belong to subgroup \(V'\) and agent 2 belongs to subgroup \(V''\). We set \(c_{1}=1\), \(c_{2}=0\), \(c_{3}=0\), and \(b_1=1\). The controller parameters are selected as \(\alpha =\theta =0.8\), \(\varpi _{0}=2\), \(\rho =0.05\), \(\lambda =10\), \(\eta =0.5\), and \(\mu =1\). The initial condition \(y_i(0,k)\) varies randomly in the interval \([-2,2]\), and \( u_i(t,0)=0\).

Case I: Taking \(y_{0}=0.85\), the dynamics of the i-th agent are given by

$$\begin{aligned} y_{i}(t+1,k+1)=\frac{y_{i}(t,k+1)+u_{i}(t,k+1)}{1+y^{2}_{i}(t,k+1)}+i\cdot u_{i}(t,k+1), \end{aligned}$$

where \(i\in \{1,2,\ldots ,6\}\).
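The Case I plant can be coded directly as a one-step update; the helper below is an illustrative sketch of the agent dynamics only (the learning controller itself is (8)).

```python
# Case I plant dynamics: y_i(t+1, k+1) from y_i(t, k+1) and u_i(t, k+1).
def plant_step(i, y, u):
    """One time-step update of agent i's output for the Case I model."""
    return (y + u) / (1.0 + y * y) + i * u

# e.g. agent 1 starting from rest with a small input:
print(plant_step(1, 0.0, 0.1))  # prints 0.2
```

Note that the input gain grows with the agent index \(i\), so the pseudo-partial derivatives differ across agents, which is why the controller estimates them online rather than assuming model knowledge.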

Fig. 1 Signed graph

Fig. 2 The outputs and containment errors of multi-agent systems under controller (8)

From Fig. 2a, one can see that the output trajectories of agents 1 and 3 converge to \(y_{0}\) and the output trajectory of agent 2 converges to \(-y_{0}\), which shows that bipartite consensus is achieved for agents 1–3. Moreover, Fig. 2a also shows that agents 4–6 converge into the convex hull formed by the output trajectories of agents 1–3. Figure 2b further illustrates that the containment error of each follower gradually converges to 0, which shows that \(\lim _{k\rightarrow \infty }Y_F(t,k)=(L_F+D_{FL})^{-1}A_{FL}\lim _{k\rightarrow \infty }Y_L(t,k)\) holds. From Fig. 2b, we can also see that the tracking performance of all agents improves through the repeated learning process.
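The limiting relation \(Y_F=(L_F+D_{FL})^{-1}A_{FL}Y_L\) can be checked with a small sketch. The follower topology below is assumed for illustration (it is not necessarily Fig. 1's exact weights); the point is that the computed limits lie inside the interval \([-y_0,y_0]\) spanned by the leaders' limits and their negatives.

```python
import numpy as np

# Sketch: the limit Y_F = (L_F + D_FL)^{-1} A_FL Y_L stays in the convex hull
# of the leaders' limits and their negatives, here the interval [-y0, y0].
y0 = 0.85
Y_L = np.array([y0, -y0, y0])               # leaders' bipartite-consensus limits
A_FL = np.array([[1., 0., 0.],              # assumed: follower 4 hears leader 1, etc.
                 [0., -1., 0.],             # negative weight: antagonistic link
                 [0., 0., 1.]])
A_F = np.array([[0., 1., 0.],               # assumed follower cycle 4 -> 5 -> 6 -> 4
                [0., 0., 1.],
                [1., 0., 0.]])
L_F = np.diag(np.abs(A_F).sum(axis=1)) - A_F
D_FL = np.diag(np.abs(A_FL).sum(axis=1))
Y_F = np.linalg.solve(L_F + D_FL, A_FL @ Y_L)
assert (np.abs(Y_F) <= y0 + 1e-12).all()    # followers inside [-y0, y0]
print(np.round(Y_F, 3))
```

In this assumed example the antagonistic link flips the sign of leader 2's contribution, so all followers settle inside the hull even though one leader converges to \(-y_0\).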

Case II: Taking \(y_{0}=0.5\), the dynamics of the i-th agent are given by

$$\begin{aligned} y_{i}(t+1,k+1)=y_{i}^2(t,k+1)\cdot u_{i}(t,k+1)+6\cdot u_{i}(t,k+1), \end{aligned}$$

where \(i\in \{1,2,\ldots ,6\}\). The initial condition \(y_i(0,k)\) varies randomly in the interval \([-2,2]\), and the other control parameters are chosen as in Case I. From Fig. 3, we see that agents 4–6 converge into the convex hull formed by \(-0.5\) and 0.5, that is, the multi-agent system achieves bipartite containment. This is consistent with Theorem 1.

Fig. 3 The outputs and containment errors of multi-agent systems under controller (8)

5 Conclusion

In this paper, the bipartite containment tracking problem has been studied for unknown nonlinear multi-agent systems with quantization. A quantized MFAILC method has been proposed based on the quantized values of the agents' relative output measurements, and it does not depend on the structural information of the agents. It has been shown that the proposed algorithm ensures the containment tracking performance in the presence of incomplete information, that is, the bipartite containment control problem of unknown nonlinear multi-agent systems can be solved.

The quantized MFAILC algorithm is designed for multi-agent systems with a fixed network and without system disturbances. However, many practical systems involve unknown disturbances and uncertainties in the communication network. Due to the complexity of such nonlinear multi-agent systems, the proposed method cannot be directly extended to them; one of the main difficulties is how to establish a new algorithm and analyze its sensitivity in the presence of disturbances and network uncertainties. Thus, the robustness of the proposed algorithm under unknown system disturbances and communication-network uncertainties will be discussed in our future work. At present, we mainly focus on theoretical analysis and numerical simulations; in the future, we may explore the application of the proposed method in real-world systems and conduct experiments to verify its effectiveness. Moreover, inspired by the application of quantizers to resource allocation, we will attempt to solve the resource allocation problem with the algorithm proposed in this paper. For the resource allocation problem, how to apply the quantized MFAILC method to optimization problems without tracking error information remains a challenging question.