1 Introduction

Multi-agent systems have attracted enormous attention from a wide variety of fields, including social science, animal behavior, smart grids, physics, biology, intelligent transportation, and several areas of engineering, owing to their broad practical applications. A multi-agent system consists of agents that communicate with each other locally through some kind of link, and it aims to accomplish various control objectives via the local interaction of designated agents under suitably designed controllers. In control theory, multi-agent systems have received considerable attention in several directions, such as flocking [1–6], sensor networks, spacecraft formation flying [7–10], consensus [11–17], rendezvous [18], axial alignment, and cooperative surveillance.

Among these directions, flocking is a form of collective behavior in which numerous interacting agents pursue a common group objective. This behavior is ubiquitous in nature, where self-organized groups coordinate their motion, such as flocks of birds, schools of fish, and swarms of bees, and it has a tremendous number of potential applications. For instance, by employing flocking of cooperative unmanned aerial vehicles (UAVs), military missions such as reconnaissance can be carried out. Therefore, the study of flocking is of great significance. As early as 1986, Reynolds put forward the following three flocking rules based on animal behavior [19].

  1. Collision Avoidance: avoid collisions with nearby flockmates;

  2. Velocity Matching: attempt to match velocity with nearby flockmates;

  3. Flock Centering: attempt to stay close to nearby flockmates.

These rules are also called separation, alignment, and cohesion, respectively, in [1]. Under the assumption that all agents are informed of a constant virtual leader’s velocity, the author of [1] proposed a theoretical framework for the design and analysis of distributed flocking algorithms by introducing α-, β-, and γ-agents, where simple second-order dynamics were studied and both the free-space and the obstacle-space cases were taken into account. In [20], it was shown that the flocking algorithm of [1] still drives all agents to the same velocity even if only a fraction of the agents receive the information of the leader’s velocity. A flocking algorithm with multi-target tracking for multi-agent systems was provided in [21], in which the authors proposed a method to accomplish multi-target missions by introducing two types of potential functions. By using a virtual force and a pseudo-leader mechanism, the authors of [22] proposed an approach to determine the pseudo-leaders among all agents. A unified optimal control framework for flocking of second-order multi-agent systems was proposed in [23], where the authors utilized a new inverse optimal control strategy to obtain the optimal control law associated with a challenging optimal control problem. Second-order systems with nonlinear intrinsic dynamics were discussed for the flocking problem with a general switching graph in [24] by using a leader-following strategy and pinning control. More general second-order systems with nonlinear intrinsic dynamics were considered for the flocking problem with a virtual leader in [2], where all agents and the virtual leader share the same intrinsic nonlinear dynamics governed by a locally Lipschitz nonlinearity. Apart from the second-order point models mentioned above, nonlinear Euler-Lagrange systems were investigated for the flocking problem in [25], where flocking is reached by designing an adaptive controller. For more details on the flocking problem, we refer readers to the surveys [26, 27].

Motivated by the above discussion, this paper investigates the robust flocking problem for second-order systems with a leader on undirected switching networks. Two distributed flocking control protocols are presented to achieve flocking asymptotically, for the cases in which the leader’s velocity is constant and in which it is time-varying. Compared with the existing flocking results in the literature, the contributions of this paper are threefold. First, the intrinsic dynamics are non-identical, that is, the intrinsic dynamics of the followers are all different from each other. Second, external disturbances are considered in the dynamic equations of the followers. Third, the intrinsic dynamics are nonlinear and depend not only on the velocity but also on the position. Additionally, it should be noted that the proposed distributed flocking control laws are model-independent, which makes the controllers capable of handling the different intrinsic dynamics of the followers and the leader, under some boundedness assumptions on several states, by virtue of Lyapunov theory.

The remainder of this paper is organized as follows. Section 2 gives some notation and background on graph theory and nonsmooth analysis. The problem statement is presented in Section 3. The main results are provided in Section 4 and are divided into two cases: one with a constant leader velocity, and the other with a time-varying leader velocity. An illustrative example is given in Section 5 to demonstrate the effectiveness of the theoretical results, and Section 6 draws the conclusion.

2 Preliminaries

2.1 Notation

Let \(\mathbb {R}^{n}\) and \(\mathbb {R}^{n\times m}\) be the sets of n-dimensional vectors and n×m matrices, respectively. \(\mathbb {R}^{+}\) denotes the set of nonnegative real numbers, and diag{a1,a2,…,an} is the diagonal n×n matrix with diagonal entries a1,a2,…,an. 0 is a vector or matrix of compatible dimension with all entries 0, and 1 is defined analogously. For a vector \(x\in \mathbb {R}^{n}\), sgn(x) denotes the element-wise signum function of x, and ||x||1:=|x1|+|x2|+⋯+|xn| is the 1-norm of x. Given a matrix M, λmin(M) and λmax(M) are, respectively, the smallest and largest eigenvalues of M. ||x|| and ||A|| stand for the standard Euclidean norm of a vector \(x\in \mathbb {R}^{n}\) and the induced norm of a matrix \(A\in \mathbb {R}^{n\times n}\), respectively. The superscript T represents the transpose of a vector or matrix.

2.2 Graph theory

A graph with N nodes is defined as \(\mathcal {G}_{N}=(\mathcal {V}_{N},\mathcal {E}_{N})\), consisting of a set of vertices \(\mathcal {V}_{N}=\{v_{1},v_{2},\ldots,v_{N}\}\) and a set of edges \(\mathcal {E}_{N}\subseteq \mathcal {V}_{N}\times \mathcal {V}_{N}\). An edge \((v_{i},v_{j})\in \mathcal {E}_{N}\) means that node j can receive information from node i, in which case node i is called a neighbor of node j. A graph is called undirected if \((v_{i},v_{j})\in \mathcal {E}_{N}\) implies \((v_{j},v_{i})\in \mathcal {E}_{N}\), and directed otherwise. A directed path in a directed graph is a sequence of edges of the form (i1,i2),(i2,i3),…, and an undirected path in an undirected graph is defined analogously. An undirected graph is called connected if there is an undirected path between every pair of distinct nodes. The neighbor set of node i is denoted by \(\mathcal {N}_{i}=\{v_{j}\in \mathcal {V}_{N}:~(v_{j},v_{i})\in \mathcal {E}_{N}\}\). Define the adjacency matrix \(A_{N}=(a_{{ij}})\in \mathbb {R}^{N\times N}\) by aij>0 if \(v_{j}\in \mathcal {N}_{i}\), aij=0 (i≠j) otherwise, and aii=0, i=1,2,…,N. The Laplacian matrix associated with AN is defined as \(L_{N}=(l_{{ij}})\in \mathbb {R}^{N\times N}\) with \(l_{{ii}}=\sum _{j=1}^{N} a_{{ij}}\) and lij=−aij for i≠j. Together with the leader, the follower graph is expanded to a new graph \(\mathcal {G}_{N+1}=(\mathcal {V}_{N+1},\mathcal {E}_{N+1})\) with adjacency matrix \(A_{N+1}=(a_{{ij}})\in \mathbb {R}^{(N+1)\times (N+1)}\) defined by ai0>0 if \((0,i)\in \mathcal {E}_{N+1}\) and ai0=0 otherwise, i=1,2,…,N, a0k=0 for all k=0,1,…,N, and aij, i,j=1,2,…,N, defined the same as in AN. Define M:=LN+diag{a10,a20,…,aN0}. For the graph of the followers and the leader, let \(\widehat {\mathcal {N}}_{i}\subseteq \{0,1,\ldots,N\}\) denote the neighbor set of follower i. In addition, for the case of switching graph topology, we assume that the agents have the same communication/sensing radius R>0, i.e., \(j\in \widehat {\mathcal {N}}_{i}(t)\) if ||xj(t)−xi(t)||<R and \(j\notin \widehat {\mathcal {N}}_{i}(t)\) if ||xj(t)−xi(t)||≥R, i=1,2,…,N, j=0,1,…,N.
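To make these constructions concrete, the following minimal sketch (Python with numpy is assumed here, and the 3-follower example graph is hypothetical) builds the Laplacian LN and the matrix M from the follower adjacency matrix and the leader links:

```python
import numpy as np

def laplacian_and_M(A_N, a_leader):
    """Build the Laplacian L_N of the follower graph and M = L_N + diag(a_10, ..., a_N0).

    A_N      : (N, N) adjacency matrix of the follower graph (a_ii = 0).
    a_leader : length-N vector whose i-th entry a_i0 is positive iff follower i senses the leader.
    """
    D = np.diag(A_N.sum(axis=1))   # degree matrix, l_ii = sum_j a_ij
    L_N = D - A_N                  # off-diagonal entries l_ij = -a_ij
    M = L_N + np.diag(a_leader)
    return L_N, M

# Hypothetical example: an undirected path graph 1-2-3 in which only follower 1 senses the leader.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
L, M = laplacian_and_M(A, np.array([1., 0., 0.]))
print(np.linalg.eigvalsh(M))  # all eigenvalues positive: M is symmetric positive definite (cf. Lemma 1 below)
```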

In this paper, the results are based on the following assumption on the communication graph.

Assumption 1

The graph \(\mathcal {G}_{N}\) is undirected and the leader in \(\mathcal {G}_{N+1}(t)\) has directed paths to all followers at the initial time t=0.

It is worth noting that Assumption 1 is a mild condition: it is only imposed at the initial time, rather than over the whole time duration as is widely required in existing works. Compared with requiring connectivity for all time, this assumption can easily be satisfied at the initial time by deliberate deployment, and it has also been employed in, e.g., [2, 28].

Regarding the flocking problem, the following lemma will be used later.

Lemma 1

([29]) If the graph \(\mathcal {G}_{N}\) is undirected and the leader in \(\mathcal {G}_{N+1}\) has directed paths to all followers, then M is symmetric positive-definite.

2.3 Nonsmooth analysis

Consider the differential equation ([30])

$$\begin{array}{*{20}l} \dot{x}=f(x,t), \end{array} $$
(1)

where \(f:\mathbb {R}^{n}\times \mathbb {R}\rightarrow \mathbb {R}^{n}\) is measurable and essentially locally bounded. A vector function x(·) is called a Filippov solution of (1) if x(·) is absolutely continuous and \(\dot {x}\in K[f](x,t)\) for almost all t, where K[f](x,t) denotes the smallest convex closed set containing all the limit values of the vector-valued function f(xi,t) for xi→x with t held constant. For a locally Lipschitz function \(V:\mathbb {R}^{n}\times \mathbb {R}\rightarrow \mathbb {R}\), the generalized gradient of V at (x,t) is defined by \(\partial V(x,t):=\overline {co}\{\lim \nabla V(x,t)|~(x_{i},t_{i})\rightarrow (x,t),(x_{i},t_{i})\notin \Omega _{V}\}\), where ΩV is a set of measure zero on which the gradient of V with respect to x or t is not defined. The generalized time derivative of V(x,t) with respect to t is defined as

$$\dot{\tilde{V}}:=\bigcap_{\xi\in\partial V(x,t)}\xi^{T}\left(\begin{array}{c} K[f](x,t) \\ 1 \\ \end{array} \right).$$

The following nonsmooth version of LaSalle’s theorem is useful later.

Lemma 2

([30]) Let Ω be a compact set such that every Filippov solution to the autonomous system \(\dot {x}=f(x),x(0)=x(t_{0})\) starting in Ω is unique and remains in Ω for all t≥t0. Let \(V:\Omega \rightarrow \mathbb {R}\) be a time-independent regular function such that v≤0 for all \(v\in \dot {\tilde {V}}\) (if \(\dot {\tilde {V}}\) is the empty set, then this is trivially satisfied). Define \(S=\{x\in \Omega |~0\in \dot {\tilde {V}}\}\). Then every trajectory in Ω converges to the largest invariant set, S0, in the closure of S.

3 Problem statement

Consider a multi-agent system consisting of N followers labeled 1 to N, where the dynamics of each follower i satisfy the following non-identical second-order nonlinear differential equations ([17, 29, 31])

$$\begin{array}{*{20}l} \dot{x}_{i}&=v_{i}, \\ \dot{v}_{i}&=f_{i}(x_{i},v_{i})+u_{i}+w_{i},~~~i=1,2,\ldots,N, \end{array} $$
(2)

where \(x_{i}\in \mathbb {R}^{n}, v_{i}\in \mathbb {R}^{n}\), and \(u_{i}\in \mathbb {R}^{n}\) are the position, velocity, and control input of follower i, respectively. Furthermore, \(w_{i}\in \mathbb {R}^{n}\) is the continuous external disturbance of follower i, and \(f_{i}:\mathbb {R}^{n}\times \mathbb {R}^{n}\rightarrow \mathbb {R}^{n}\) is a continuously differentiable nonlinear function.
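As an illustration only (not part of the model itself), the follower dynamics (2) can be simulated with a simple explicit Euler step; the sketch below assumes Python, and f_i, u_i, and w_i are supplied by the user:

```python
def follower_step(x_i, v_i, u_i, w_i, f_i, dt):
    """One explicit Euler step of the follower model (2):
        x_i' = v_i,   v_i' = f_i(x_i, v_i) + u_i + w_i,
    where f_i is the (non-identical) intrinsic dynamics, u_i the control input,
    and w_i the bounded external disturbance.
    """
    x_next = x_i + dt * v_i
    v_next = v_i + dt * (f_i(x_i, v_i) + u_i + w_i)
    return x_next, v_next
```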

Meanwhile, besides the N followers, there exists a leader labeled as agent 0, whose state is represented by its position x0 and velocity v0, with the derivative of v0 bounded, i.e., \(||\dot {v}_{0}||\leq H_{1}\), where H1>0 is a constant. Note that v0 could be either constant or time-varying. For flocking of multi-agent systems, the objective is to achieve the following properties, which correspond to Reynolds’ three flocking rules [19], by designing a distributed cooperative control law:

  1. ||vi(t)−v0(t)||→0 as t→∞, ∀i=1,2,…,N.

  2. The agents attempt to stay close to nearby flockmates.

  3. Collisions are avoided among all agents.

To achieve flocking for the multi-agent system, the following assumption is needed.

Assumption 2

\(x_{0},v_{0},\dot {v}_{0},w_{i},x_{i},\) and vi are bounded for i=1,2,…,N.

Note that wi needs to be bounded for the sake of robust flocking, since its exact value cannot be obtained by followers that have only local interactions. In addition, the trajectory of the leader need not be known exactly as long as its state is bounded, which is a fairly generic assumption. Therefore, this scenario is a generalization of a variety of results in the literature; for instance, the leader has the same intrinsic dynamics as the followers in [22]. In reality, it is natural to require the states of all agents to be bounded due to considerations of safety and restricted operating regions, etc. Moreover, the bound can be as large as needed in practice.

For robust flocking, the connectivity-preserving mechanism (see [2, 18]) for undirected switching graphs, roughly speaking, guarantees that a follower or the leader will still be a neighbor of another follower at any time t>0 as long as it is the case at the initial time t=0. The mechanism is described as follows. i) Initial edges are generated by

$$\begin{array}{*{20}l} E(0)=\{(i,j)|~||x_{j}(0)-x_{i}(0)||< r,i,j\in\mathcal{V}\}. \end{array} $$
(3)

ii) Let ϕi(j)(t)∈{0,1} represent whether or not follower j is a neighbor of follower i, which is defined as follows.

$$ \begin{aligned} \phi_{i}(j)(t)=\left\{ \begin{array}{ll} 0, & \left\{ \begin{array}{ll} \text{if}~[(\phi_{i}(j)(t^{-})=0)\cap(||x_{j}(t)-x_{i}(t)||\geq R-\epsilon)] \\ \cup[(\phi_{i}(j)(t^{-})=1)\cap(||x_{j}(t)-x_{i}(t)||\geq R)], \end{array} \right. \\ 1, & \left\{ \begin{array}{ll} \text{if}~[(\phi_{i}(j)(t^{-})=0)\cap(||x_{j}(t)-x_{i}(t)||< R-\epsilon)] \\ \cup[(\phi_{i}(j)(t^{-})=1)\cap(||x_{j}(t)-x_{i}(t)||< R)]. \end{array} \right. \end{array} \right. \end{aligned} $$
(4)

iii) Here 0<r<R and 0<ε≤R.

Note that ε in this paper is chosen to equal the sensing radius of the followers, i.e., ε=R, which prevents any new edges from being added to the initial graph after the initial time t=0.
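The hysteresis rule (4) can be summarized by the following sketch (Python is assumed; phi_prev stands for ϕi(j)(t−) and dist for ||xj(t)−xi(t)||):

```python
def update_edge(phi_prev, dist, R, eps):
    """Hysteresis edge update of (4); phi = 1 means j is currently a neighbor of i.

    A new edge is only created when the pair comes closer than R - eps, while an
    existing edge is only dropped when the distance reaches the sensing radius R.
    With eps = R, as chosen in this paper, no new edge can ever appear, so the
    initial edge set E(0) in (3) is exactly the set of edges to be preserved.
    """
    if phi_prev == 0:
        return 1 if dist < R - eps else 0
    return 1 if dist < R else 0
```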

The following definition states the notion of potential function.

Definition 1

The potential function Vij is a continuously differentiable, nonnegative function of ||xi−xj|| satisfying the following conditions.

  1. Vij achieves its unique minimum when ||xi−xj|| is equal to its desired value dij.

  2. Vij→∞ as ||xi−xj||→0.

  3. Vij→∞ as ||xi−xj||→R.

  4. Vii=c, i=1,2,…,N, where c is a positive constant.

For example, Vij could be given as

$$\begin{array}{*{20}l} V_{{ij}}=\left\{ \begin{array}{ll} +\infty, & ||x_{i}-x_{j}||=0, \\ \frac{2R}{||x_{i}-x_{j}||^{2}(R-||x_{i}-x_{j}||)}, & ||x_{i}-x_{j}||\in(0,R), \\ +\infty, & ||x_{i}-x_{j}||=R. \end{array} \right. \end{array} $$
(5)
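For illustration, the following sketch (assuming Python/numpy) evaluates the potential (5) and its gradient with respect to xi, which is the quantity that appears in the control laws below; note that the unique minimum of (5) is attained at ||xi−xj||=2R/3, so the desired distance dij equals 2R/3 for this particular choice:

```python
import numpy as np

def potential_and_gradient(x_i, x_j, R):
    """Potential (5), V = 2R / (s^2 (R - s)) with s = ||x_i - x_j||, and its gradient w.r.t. x_i.

    Only valid for 0 < s < R; the potential blows up at s = 0 (collision) and at
    s = R (an existing edge about to break), which is what enforces collision
    avoidance and connectivity preservation.
    """
    d = x_i - x_j
    s = np.linalg.norm(d)
    assert 0.0 < s < R, "potential (5) is finite only for 0 < ||x_i - x_j|| < R"
    V = 2.0 * R / (s**2 * (R - s))
    dV_ds = -2.0 * R * (2.0 * R - 3.0 * s) / (s**3 * (R - s)**2)  # dV/ds
    grad_xi = dV_ds * d / s                                        # chain rule
    return V, grad_xi
```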

Similar to Lemma 3.1 in [32], we have the following lemma.

Lemma 3

Let Vij be defined in Definition 1. The following equality holds:

$$\begin{array}{*{20}l} \frac{1}{2}\sum_{i=1}^{N}\sum_{j\in\widehat{\mathcal{N}}_{i}}\left(\frac{\partial V_{{ij}}}{\partial x_{i}}\dot{x}_{i}+\frac{\partial V_{{ij}}}{\partial x_{j}}\dot{x}_{j}\right)=\sum_{i=1}^{N}\sum_{j\in\widehat{\mathcal{N}}_{i}}\frac{\partial V_{{ij}}}{\partial x_{i}}\dot{x}_{i}. \end{array} $$
(6)

For simplicity, let n=1 in this paper. Note that it is straightforward to extend the results to the case of general dimension by means of the Kronecker product ⊗.

4 Main results

This section focuses mainly on the robust flocking problem for multi-agent system (2) with the velocity of the leader being time-varying. In this case, the distributed flocking algorithm is proposed as

$$ \begin{aligned} u_{i}&=-\sum_{j\in\widehat{\mathcal{N}}_{i}(t)}\frac{\partial V_{{ij}}}{\partial x_{i}}-k_{2}sgn(v_{i}-\hat{v}_{i0})-k_{3}\cdot sgn\left(\sum_{j\in\widehat{\mathcal{N}}_{i}(t)}a_{{ij}}(\hat{v}_{i0}-\hat{v}_{j0})\right) \\ &\hspace{0.4cm}-k_{1}\sum_{j\in\widehat{\mathcal{N}}_{i}(t)}a_{{ij}}(v_{i}-v_{j}),~~~i=1,2,\ldots,N, \end{aligned} $$
(7)

where k1,k2,k3 are positive control gains to be determined later, and \(\hat {v}_{i0}\) is the estimate of the leader’s velocity by follower i with \(\hat {v}_{00}=v_{0}\), satisfying the following adaptation law:

$$\begin{array}{*{20}l} {}\dot{\hat{v}}_{i0}=-k_{3}\cdot sgn\left(\sum_{j\in\widehat{\mathcal{N}}_{i}(t)}a_{{ij}}(\hat{v}_{i0}-\hat{v}_{j0})\right),~~~i=1,2,\ldots,N. \end{array} $$
(8)

Physically speaking, the first term on the right-hand side of (7) is used to preserve the initial connectivity among agents at all times, the second term is designed to deal with the nonlinearity and disturbance in the agent’s dynamics, the third term is introduced to counteract the impact of time-varying v0, and the last term is employed to align all agents’ velocities.
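The following sketch (assuming Python/numpy; the data structures and the helper grad_V are hypothetical, with index 0 reserved for the leader) shows how (7) and one explicit Euler step of (8) could be evaluated for follower i:

```python
import numpy as np

def control_input(i, x, v, v_hat, A, neighbors, grad_V, k1, k2, k3):
    """Control law (7) for follower i.

    x, v     : positions and velocities indexed by agent id (0 is the leader).
    v_hat    : estimates of the leader velocity, with v_hat[0] = v[0] = v_0.
    A        : adjacency weights a_ij of the current graph, shape (N+1, N+1).
    neighbors: current neighbor set of follower i (may contain the leader 0).
    grad_V   : grad_V(x_i, x_j) returns the gradient of V_ij with respect to x_i.
    """
    u = np.zeros_like(v[i])
    s = np.zeros_like(v[i])                       # consensus term on the estimates
    for j in neighbors:
        u -= grad_V(x[i], x[j])                   # connectivity / collision term
        u -= k1 * A[i, j] * (v[i] - v[j])         # velocity alignment term
        s += A[i, j] * (v_hat[i] - v_hat[j])
    u -= k2 * np.sign(v[i] - v_hat[i])            # dominates f_i and w_i
    u -= k3 * np.sign(s)                          # counteracts the time-varying v_0
    return u

def estimator_step(i, v_hat, A, neighbors, k3, dt):
    """One explicit Euler step of the adaptation law (8) for the estimate v_hat[i]."""
    s = np.zeros_like(v_hat[i])
    for j in neighbors:
        s += A[i, j] * (v_hat[i] - v_hat[j])
    return v_hat[i] - dt * k3 * np.sign(s)
```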

Under input (7), system (2) can be rewritten as

$$\begin{array}{*{20}l} {}\dot{\tilde{x}}&=\tilde{v}, \\ {}\dot{\tilde{v}}&\in^{a.e.} K[L_{1}+N_{1}-k_{1}M\tilde{v}-k_{2}sgn(\hat{v}^{0})+\dot{\hat{v}}_{0}-\dot{v}_{0}\mathbf{1}-V_{1}], \end{array} $$
(9)

where \(\tilde {x}=(\tilde {x}_{1},\tilde {x}_{2},\ldots,\tilde {x}_{N})^{T}, \tilde {x}_{i}=x_{i}-x_{0}, \tilde {v}= (\tilde {v}_{1},\tilde {v}_{2},\ldots,\tilde {v}_{N})^{T}, \tilde {v}_{i}=v_{i}-v_{0}, L_{1}:=\left (L_{1}^{1},L_{2}^{1},\ldots,L_{N}^{1}\right)^{T}, N_{1}:=\left (N_{1}^{1},N_{2}^{1},\ldots,N_{N}^{1}\right)^{T}, L_{i}^{1}:=w_{i}+f_{i}(x_{0},v_{0}), N_{i}^{1}:= f_{i}(x_{i},v_{i})-f_{i}(x_{0},v_{0}),i=1,2,\ldots,N, V_{1}:=\left (\sum _{j\in \widehat {\mathcal {N}}_{1}(t)}\frac {\partial {V_{1j}}}{\partial x_{1}},\ldots,\sum _{j\in \widehat {\mathcal {N}}_{N}(t)}\frac {\partial {V_{{Nj}}}}{\partial x_{N}}\right)^{T}, \hat {v}_{0}:=(\hat {v}_{10},\hat {v}_{20},\ldots,\hat {v}_{N0})^{T}, \hat {v}^{0}:=\left (\hat {v}_{1}^{0},\hat {v}_{2}^{0},\ldots,\hat {v}_{N}^{0}\right)^{T}, \hat {v}_{i}^{0}:=v_{i}-\hat {v}_{i0}, K[\cdot ]\) means the differential inclusion defined in the last section, and a.e. stands for “almost everywhere”. From Assumption 2, it is clear that ||L1|| is bounded, and in view of the mean value theorem, there exists a positive nondecreasing function α0 such that ||N1||≤α0(||y||)||y|| (see [29]), where \(y=(\tilde {x}^{T},\tilde {v}^{T})^{T}\). After denoting α(z):=α0(z)z, z≥0, which is a positive nondecreasing function as well, one has ||N1||≤α(||y||). As a result, it follows from Assumption 2 that there exists a positive constant H2 such that ||y||≤H2 and hence ||N1||≤α(H2). We are now ready to give the flocking result.

Theorem 1

Under Assumptions 1 and 2, the distributed flocking algorithm (7) for multi-agent system (2) can ensure that the velocity differences between all followers and the leader ultimately converge to zero, i.e., \({\lim }_{t\rightarrow \infty }||v_{i}(t)-v_{0}(t)||=0, i=1,2,\ldots,N\), and that collisions among all agents are avoided, if the following inequalities hold:

$$\begin{array}{@{}rcl@{}} k_{2}\geq ||L_{1}||+\alpha(H_{2}),~~k_{3}>\sqrt{N}H_{1}. \end{array} $$
(10)

Proof

Let t0=0,t1,t2,… be the switching time instants of the interaction network for system (2). Construct the Lyapunov function candidate as

$$\begin{array}{*{20}l} V=\frac{1}{2}\sum_{i=1}^{N}\sum_{j\in\widehat{\mathcal{N}}_{i}(t)} V_{{ij}}+\sum_{i=1}^{N} V_{i0}+\frac{1}{2}\tilde{v}^{T}\tilde{v}. \end{array} $$
(11)

Then the generalized derivative of V is calculated as

$$\begin{array}{*{20}l} \dot{\tilde{V}}&=\frac{1}{2}\sum_{i=1}^{N}\sum_{j\in\widehat{\mathcal{N}}_{i}(t)} \left(\frac{\partial V_{{ij}}}{\partial \tilde{x}_{i}}\dot{\tilde{x}}_{i}+\frac{\partial V_{{ij}}}{\partial \tilde{x}_{j}}\dot{\tilde{x}}_{j}\right)\\&+\sum_{i=1}^{N} \left(\frac{\partial V_{i0}}{\partial \tilde{x}_{i}}\dot{\tilde{x}}_{i}+\frac{\partial V_{i0}}{\partial \tilde{x}_{0}}\dot{\tilde{x}}_{0}\right)+\tilde{v}^{T}\dot{\tilde{v}} \\ &=\sum_{i=1}^{N}\sum_{j\in\widehat{\mathcal{N}}_{i}(t)} \frac{\partial V_{{ij}}}{\partial \tilde{x}_{i}}\dot{\tilde{x}}_{i}+\sum_{i=1}^{N} \frac{\partial V_{i0}}{\partial \tilde{x}_{i}}\dot{\tilde{x}}_{i}-\tilde{v}^{T}V_{1} \\ &-k_{1}\tilde{v}^{T}M\tilde{v}-k_{2}\tilde{v}^{T}K[sgn(\hat{v}^{0})]+\tilde{v}^{T} \dot{\hat{v}}_{0}\\&-K[\dot{v}_{0}]\tilde{v}^{T}\mathbf{1}+\tilde{v}^{T}(L_{1}+N_{1}) \\ &=-k_{1}\tilde{v}^{T}M\tilde{v}-k_{2}\tilde{v}^{T}K[sgn(\hat{v}^{0})]+\tilde{v}^{T} \dot{\hat{v}}_{0}\\&-K[\dot{v}_{0}]\tilde{v}^{T}\mathbf{1}+\tilde{v}^{T}(L_{1}+N_{1}) \\ &\leq-k_{1}\lambda_{{min}}(M)||\tilde{v}||^{2}+(k_{2}+k_{3}+H_{1})||\tilde{v}||_{1}\\&+(||L_{1}||+\alpha(||y||))||\tilde{v}|| \\ &\leq[(k_{2}+k_{3}+H_{1})\sqrt{N}+||L_{1}||+\alpha(||y||)]||\tilde{v}|| \\ &\leq[(k_{2}+k_{3}+H_{1})\sqrt{N}+||L_{1}||+\alpha(H_{2})]\sqrt{2V}, \end{array} $$
(12)

where we have used Lemma 3 and the fact that \(\dot {\tilde {x}}_{0}=0\). (12) directly leads to \(V(t)\leq (\sqrt {V(0)}+t_{1}[(k_{2}+k_{3}+H_{1})\sqrt {2N}+\sqrt {2}(||L_{1}||+\alpha (H_{2}))]/2)^{2}\) for any t∈[t0,t1). As a consequence, V is bounded on [t0,t1). Moreover, in view of the definition of the potential function, one follower is always a neighbor of another follower or the leader once it is the case at the initial time t=0, since the potential function, and thereby V, would become infinite as ||xj−xi||→R. Hence, no existing edge is lost at any time t∈[t0,t1]. Additionally, by the definition of the connectivity-preserving mechanism, no new edges are added to the interaction structure of the initial graph at time t=0. Consequently, inequality (12) remains valid for any time t∈[t0,t1]. Similarly, it can be concluded that the network structure on any time interval [tk,tk+1), k≥1, is always the same as that of the initial graph at time t=0. That is, Assumption 1 holds for all time t≥0.

To proceed, consider the estimator (8). Here, with a slight abuse of notation, let \(\hat {v}^{0}:=(\hat {v}_{10}-v_{0},\hat {v}_{20}-v_{0},\ldots,\hat {v}_{N0}-v_{0})^{T}\) denote the stacked estimation error; since \(\hat {v}_{00}=v_{0}\), the term \(\sum _{j\in \widehat {\mathcal {N}}_{i}(t)}a_{{ij}}(\hat {v}_{i0}-\hat {v}_{j0})\) equals the i-th component of \(M\hat {v}^{0}\). Using this notation and nonsmooth analysis, (8) can be rewritten as

$$\begin{array}{*{20}l} \dot{\hat{v}}^{0}\in^{a.e.}-K[k_{3}sgn(M\hat{v}^{0})+\dot{v}_{0}\mathbf{1}]. \end{array} $$
(13)

Construct the Lyapunov function candidate as

$$\begin{array}{*{20}l} V=\frac{1}{2}(\hat{v}^{0})^{T}M\hat{v}^{0}, \end{array} $$
(14)

whose generalized derivative is derived as

$$\begin{array}{*{20}l} \dot{\tilde{V}}&=-k_{3}(\hat{v}^{0})^{T}M K[sgn(M\hat{v}^{0})]-K[\dot{v}_{0}](\hat{v}^{0})^{T}M\mathbf{1} \\ &\leq-k_{3}||M\hat{v}^{0}||_{1}+H_{1}||M\hat{v}^{0}||_{1} \\ &\leq-k_{3}||M\hat{v}^{0}||+\sqrt{N}H_{1}||M\hat{v}^{0}|| \\ &\leq-(k_{3}-\sqrt{N}H_{1})||M\hat{v}^{0}||, \end{array} $$
(15)

which is negative for \(M\hat {v}^{0}\neq 0\) if \(k_{3}>\sqrt {N}H_{1}\). As previously shown, Assumption 1 holds for all time t≥0, and thus (15) remains valid with a fixed M for all time t≥0; moreover, M is positive definite by Lemma 1. With reference to Theorem 3.1 in [30], one has \({\lim }_{t\rightarrow \infty }\hat {v}^{0}=0\). Moreover, from inequality (15), one obtains

$$\begin{array}{*{20}l} \dot{\tilde{V}}&\leq -(k_{3}-\sqrt{N}H_{1})||M\hat{v}^{0}|| \\ &=-(k_{3}-\sqrt{N}H_{1})\sqrt{(\hat{v}^{0})^{T}M^{T}M\hat{v}^{0}} \\ &\leq-(k_{3}-\sqrt{N}H_{1})\sqrt{\lambda_{{min}}(M^{2})}||\hat{v}^{0}|| \\ &\leq-(k_{3}-\sqrt{N}H_{1})\frac{\lambda_{{min}}(M)}{\sqrt{\lambda_{{max}}(M)}}\sqrt{2V}, \end{array} $$
(16)

from which it follows that

$$\sqrt{V(t)}\leq\sqrt{V(0)}-\frac{\sqrt{2}\lambda_{{min}}(M)(k_{3}-\sqrt{N}H_{1})t}{2\sqrt{\lambda_{{max}}(M)}}. $$

Therefore, setting the right-hand side to zero shows that \(\hat {v}_{i0}\rightarrow v_{0}\) in finite time T0, that is, \(\hat {v}_{i0}=v_{0},\forall t\geq T_{0},i=1,2,\ldots,N\), where T0 is defined by

$$\begin{array}{*{20}l} T_{0}:=\frac{\sqrt{\lambda_{{max}}(M)(\hat{v}^{0}(0))^{T}M\hat{v}^{0}(0)}}{\lambda_{{min}}(M)(k_{3}-\sqrt{N}H_{1})}. \end{array} $$
(17)

As for T0 in (17), there must exist an integer \(l\in \mathbb {N}\) such that T0∈[tl,tl+1). At this stage, partition the time axis into two parts: t∈[0,T0) and t∈[T0,∞). When t∈[0,T0), the same argument as in (12) shows that \(V(t)\leq (\sqrt {V(0)}+T_{0}[(k_{2}+k_{3}+H_{1})\sqrt {2N}+\sqrt {2}(||L_{1}||+\alpha (H_{2}))]/2)^{2}\), and thus V(t) is bounded. When \(t\in [T_{0},\infty), \hat {v}_{i0}=v_{0},i=1,2,\ldots,N\). The generalized derivative of V in (11) can then be evaluated as

$$\begin{array}{*{20}l} \dot{\tilde{V}}\leq -k_{1}\lambda_{{min}}(M)||\tilde{v}||^{2}-(k_{2}-||L_{1}||-\alpha(H_{2}))||\tilde{v}||, \end{array} $$
(18)

which is nonpositive if k2≥||L1||+α(H2). Notice that {y(T)| ||y(T)||≤α−1(k2−||L1||)} is a compact set and that \(\dot {V}=0\) implies \(\tilde {v}=0\), i.e., \(v_{i}=v_{0}\). Invoking Lemma 2, one concludes that \({\lim }_{t\rightarrow \infty }||v_{i}-v_{0}||=0\). Furthermore, collisions are avoided, since V would become infinite as ||xi−xj||→0, which would contradict the boundedness of V. This completes the proof. □
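As a small numerical illustration of the gain condition (10) and the settling-time bound (17), the following sketch (assuming Python/numpy; the matrix M, the initial estimation errors, and the values of k3 and H1 are hypothetical) evaluates T0:

```python
import numpy as np

def settling_time_T0(M, e0, k3, H1):
    """Bound (17): T0 = sqrt(lambda_max(M) e0^T M e0) / (lambda_min(M) (k3 - sqrt(N) H1)),
    where e0 collects the initial estimation errors of the leader velocity."""
    N = M.shape[0]
    lam = np.linalg.eigvalsh(M)                       # M is symmetric positive definite
    assert k3 > np.sqrt(N) * H1, "gain condition (10): k3 > sqrt(N) H1"
    return np.sqrt(lam[-1] * e0 @ M @ e0) / (lam[0] * (k3 - np.sqrt(N) * H1))

# Hypothetical data: three followers on a path graph with only follower 1 sensing the leader.
M = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
print(settling_time_T0(M, e0=np.array([1.0, -0.5, 2.0]), k3=15.0, H1=1.0))
```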

Remark 1

In comparison with existing results, such as [2, 20–24], this paper is the first to address the flocking problem for multi-agent systems with non-identical intrinsic dynamics and external disturbances. The main difficulties lie in the controller design and the corresponding convergence analysis for the studied problem.

As a special case, when the velocity of the leader is constant, i.e., \(\dot {v}_{0}=0\), a simpler distributed flocking algorithm for (2) can be designed as

$$ \begin{aligned} u_{i}&=-\sum_{j\in\widehat{\mathcal{N}}_{i}(t)}\frac{\partial V_{{ij}}}{\partial x_{i}}-k_{2}sgn(v_{i}-\hat{v}_{i0})-k_{1}\sum_{j\in\widehat{\mathcal{N}}_{i}(t)}a_{{ij}}(v_{i}-v_{j}),~~~i=1,\ldots,N \end{aligned} $$
(19)

where the notation is the same as that used in (7). Besides, the adaptation law (8) for \(\hat {v}_{i0}\) is simplified to

$$\begin{array}{*{20}l} \dot{\hat{v}}_{i0}=-sgn\left(\sum_{j\in\widehat{\mathcal{N}}_{i}(t)}a_{{ij}}(\hat{v}_{i0}-\hat{v}_{j0})\right),~i=1,2,\ldots,N. \end{array} $$
(20)

The controller (19) has the same physical meaning as (7); the main difference is that the third term on the right-hand side of (7), which was introduced to offset the time dependence of v0, is unnecessary here since v0 is constant.

Similarly, as argued in the proof of Theorem 1, one can show that all the estimates \(\hat {v}_{i0}\) converge to v0, i=1,2,…,N, in finite time \(T_{1}:=\sqrt {\lambda _{{max}}(M)(\hat {v}^{0}(0))^{T}M\hat {v}^{0}(0)}/\lambda _{{min}}(M)\), and a result similar to Theorem 1 is provided as follows.

Theorem 2

Under Assumptions 1 and 2, the distributed flocking algorithm (19) for system (2) can ensure that the velocity differences between all followers and the leader ultimately converge to zero, i.e., \({\lim }_{t\rightarrow \infty }||v_{i}(t)-v_{0}||=0, i=1,2,\ldots,N\), and that collisions among all agents are avoided, if the following inequality holds:

$$\begin{array}{*{20}l} k_{2}\geq ||L_{1}||+\alpha(H_{2}). \end{array} $$
(21)

5 An illustrative example

In this section, we present an example to illustrate the effectiveness of the theoretical results. The example consists of 8 followers labeled i, i=1,2,…,8, and a leader labeled 0, where each follower has the following intrinsic dynamics adapted from the example in [2]:

$$\begin{array}{@{}rcl@{}} {}f_{i}(x_{i},v_{i})=\left(\begin{array}{c} 10(v_{i2}-v_{i1})-x_{i1}+\frac{i}{10} \\ 2v_{i1}-v_{i1}v_{i3}-v_{i2}-x_{i2}+2\sin(\frac{\pi i}{9}) \\ v_{i1}v_{i2}-2.5v_{i3}-x_{i3}+\cos(\frac{i}{8}) \\ \end{array} \right), \end{array} $$
(22)

where xi=(xi1,xi2,xi3)T,vi=(vi1,vi2,vi3)T with the initial positions (0.99,4.23,2.83)T,(1.89,0.11,3.09)T,(4.69,1.43,0.73)T,(0.89,6.51,3.59)T,(3.73,1.23,2.03)T,(1.17,2.01,5.27)T,(0.38,3.48,2.31)T,(2.98,0.98,3.08)T for followers 1,2,…,8, respectively, and the initial velocities (3.39,2.92,1.21)T,(3.11,2.21,5.82)T,(4.79,0.66,2.64)T,(3.09,4.23,1.28)T,(4.43,1.70,5.12)T,(2.17,5.11,3.15)T,(1.88,2.62,0.59)T,(3.99,1.15,5.18)T for followers 1,2,…,8, respectively. For the leader, it has the following intrinsic dynamics:

$$\begin{array}{@{}rcl@{}} f_{0}(x_{0},v_{0})=\left(\begin{array}{c} 10(v_{02}-v_{01})-x_{01} \\ 2v_{01}-v_{01}v_{03}-v_{02}-x_{02} \\ v_{01}v_{02}-2.5v_{03}-x_{03} \\ \end{array} \right), \end{array} $$
(23)

where x0=(x01,x02,x03)T,v0=(v01,v02,v03)T with the initial position x0(0)=(4,5,3.5)T and the initial velocity v0(0)=(2,2.5,3)T.

Furthermore, the external disturbance wi is given by

$$\begin{array}{@{}rcl@{}} {}w_{i}(t)=\left(2\sin(\frac{\pi i}{12}t),\!~0,\!~\cos(\frac{\pi i}{2}t)\right)^{T}\!,~~~i=1,2,\ldots,8. \end{array} $$
(24)

This example uses the following potential function:

$$\begin{array}{@{}rcl@{}} V_{{ij}}=\left\{ \begin{array}{ll} +\infty, & ||x_{i}-x_{j}||=0, \\ \frac{R}{||x_{i}-x_{j}||(R-||x_{i}-x_{j}||)}, & ||x_{i}-x_{j}||\in(0,R), \\ +\infty, & ||x_{i}-x_{j}||=R. \end{array} \right. \end{array} $$
(25)

Meanwhile, the initial interaction topology is given by \(E(0)=\{(i,j)|~||x_{j}(0)-x_{i}(0)||< r,i,j\in \mathcal {V}\}\) with r=7. For the estimate \(\hat {v}_{i0}\) of the leader velocity v0, the initial value is given by \(\hat {v}_{i0}(0)=(3,1.5,0.6)^{T},i=1,2,\ldots,8\). As for the parameters, let k1=2, k2=12, k3=15, and the undirected interaction graph is determined by the sensing radius R=10.
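A minimal sketch of the example data (assuming Python/numpy; the integration loop, the controller evaluation, and the plotting are omitted) is given below:

```python
import numpy as np

R, r = 10.0, 7.0                 # sensing radius and initial-edge radius
k1, k2, k3 = 2.0, 12.0, 15.0     # control gains used in this example

def f_follower(i, x, v):
    """Non-identical intrinsic dynamics (22) of follower i = 1, ..., 8 (x, v are 3-vectors)."""
    return np.array([
        10.0 * (v[1] - v[0]) - x[0] + i / 10.0,
        2.0 * v[0] - v[0] * v[2] - v[1] - x[1] + 2.0 * np.sin(np.pi * i / 9.0),
        v[0] * v[1] - 2.5 * v[2] - x[2] + np.cos(i / 8.0),
    ])

def f_leader(x, v):
    """Intrinsic dynamics (23) of the leader."""
    return np.array([
        10.0 * (v[1] - v[0]) - x[0],
        2.0 * v[0] - v[0] * v[2] - v[1] - x[1],
        v[0] * v[1] - 2.5 * v[2] - x[2],
    ])

def disturbance(i, t):
    """External disturbance (24) acting on follower i at time t."""
    return np.array([2.0 * np.sin(np.pi * i / 12.0 * t), 0.0, np.cos(np.pi * i / 2.0 * t)])
```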

Figure 1 shows that the estimates of the leader velocity converge to the actual leader velocity in finite time. Figure 2 shows the initial positions and velocities, from which it is clear that all followers are initially moving in different directions. Figure 3 presents the final states: all followers match the velocity of the leader, so the velocity matching rule is achieved, while the collision avoidance and flock centering rules are satisfied at the same time. Furthermore, Fig. 4 depicts the velocity matching rule more precisely, showing that the velocities of all followers converge to the velocity of the leader.

Fig. 1

Velocity convergence of the estimates; here v01, v02, v03 are the x, y, z components of the leader velocity v0, respectively, and v10, v20, v30 are the x, y, z components of the estimate (observer) of the leader velocity v0, respectively

Fig. 2

Initial states

Fig. 3

Final states

Fig. 4

Velocity convergence of the followers; here v01, v02, v03 are the x, y, z components of the leader velocity v0, respectively

6 Conclusion

In this paper, the robust flocking problem for second-order systems with external disturbances has been investigated, in which the intrinsic dynamics of all agents are distinct from each other. Moreover, unlike most second-order systems discussed in the literature, the intrinsic dynamics are nonlinear and depend not only on the velocity but also on the position, which is more practical. Two distributed flocking control laws have been proposed to make the velocity differences between all followers and the leader approach zero asymptotically, depending on whether the leader’s velocity is constant or time-varying. The proposed distributed flocking control laws are both model-independent, which makes the controllers capable of coping with the different intrinsic dynamics of the followers and the leader under some boundedness assumptions on several states. Finally, an example is presented to illustrate the efficiency of the theoretical results. Future directions include the investigation of finite-time robust flocking by using continuous-time controllers, and the scenario with stochastic and possibly unbounded disturbances.