1 Introduction

Navigation and following are two basic functionalities required for a system of mobile robots. Naturally, both are well studied for single-agent as well as multi-agent systems [19]. However, many methods in the existing literature focus either on single- and double-integrator robot models, or on the design of full-state feedback controllers that track reference signals or maintain a predefined formation pattern. Despite being a major concern in applications such as SLAM [10, 11], the constraints imposed by actuator and sensor limitations tend to be underestimated in controller design. It is crucially important to take such issues into consideration when designing the control algorithms.

For example, consider a team of mobile robots that navigate in an unknown environment and are equipped with low-resolution range sensors. Although each robot by itself has only a limited or even incorrect view of the surroundings, the multi-agent system is still able to cooperatively obtain a good overview of the environment by locally exchanging information with neighbors. To achieve such goals, new challenges arise in designing the motion controllers, since the robots are required to stay within communication range of each other.

In this paper, the formation control problem is studied for a mobile multi-agent system with limited information from range sensors. Several control protocols are designed by coordinating a few basic laws, which are easy to implement and are distributed in the sense that only local information is used. Besides our previous work on unicycle models, the works most relevant to this paper are [1, 2]. In [1] a multi-agent system with extended unicycle dynamics is considered, and formation control algorithms are designed that only require relative position feedback from neighbors. In [2] serial and parallel formation control are studied and almost global convergence to the desired formation is obtained, using distributed control based on relative displacement and relative orientation, in which an interaction function and an offset vector are needed. In our study, by contrast, only distance and relative bearing are used in the control algorithm. As a price to pay, the desired formation cannot stand still in the case of parallel formation. Compared with existing distributed controllers such as [4, 6, 8, 9, 12], the main novelty of our approach lies in the incorporation of sensing information and actuator constraints, and in the fact that the considered serial and parallel formations are not rigid.

In our work, the issue of “directional sensor information” is addressed. It is assumed that agents are equipped with sensors of both limited detection range and limited field of view. To avoid losing track of the formation, each agent is required to stay within a certain range of its neighbors while navigating in the environment. From a practical perspective, this is a vital feature for any formation control algorithm. The control algorithm proposed for example in [13] is convergent, but in the transient phase the error does not decay monotonically, so there is a risk that the target might be lost. To address such issues, the relative distances between agents are required to be small, and their relative rotation is expected to stay approximately fixed. Finally, as measurements from the localization sensors can occasionally be corrupted by noise or be unreliable, local cooperation and information sharing may be necessary within the team. That is to say, the agents in the formation must be able to communicate with neighbors, and thus are required to stay within the working range of each other’s communication systems. These requirements must therefore be considered when addressing the formation control problem.

2 Serial and parallel tracking

In this section, two basic controllers with sensor limitations are designed for a robot to follow another mobile agent, which is referred to as the “target” or “leader” in the sequel. In Sect. 2.1 the serial tracking controller is designed for the robot to follow a target while keeping a predefined constant relative distance. Next, the parallel tracking controller is proposed in Sect. 2.2. In that scenario, the followers move side by side with the leader at a specified distance while also following its orientation. The first case can be studied by applying linearization techniques, while a nonlinear controller has to be designed in the second case.

Furthermore, by combining the two proposed basic controllers in various ways, we are able to obtain a wide range of controllers to achieve more complicated tasks, as will be demonstrated in Sect. 3. In addition, an obstacle avoidance behavior is also designed such that the formation can autonomously adapt to complex environments.

In this paper, we study a multi-agent system with n mobile robots. Under some fixed coordinates, we consider the unicycle dynamics of each robot as:

$$\begin{aligned}& \dot{x}_{i} = v_{i}\cos \phi _{i}, \\& \dot{y}_{i} = v_{i}\sin \phi _{i}, \\& \dot{\phi}_{i} = \omega _{i}, \quad i=1,\ldots ,n, \end{aligned}$$
(1)

where \(v_{i}\), \(\phi _{i}\) and \(\omega _{i}\) denote the speed, heading and angular velocity of robot i.
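Although no implementation is given here, the kinematics (1) can be integrated numerically with a simple forward-Euler step, as in the illustrative sketch below (the step size dt and the state layout are our own choices, not part of the model):

```python
import numpy as np

def unicycle_step(state, v, omega, dt=0.01):
    """One forward-Euler step of the unicycle model (1).

    state = (x, y, phi): position and heading of robot i
    v, omega:            speed and angular velocity inputs
    """
    x, y, phi = state
    return np.array([
        x + dt * v * np.cos(phi),
        y + dt * v * np.sin(phi),
        phi + dt * omega,
    ])
```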

2.1 Serial tracking - path following

We will derive the control for a group of robots \(R_{i}\) with \(i=1,\ldots,n\). Robot \(R_{i}\) is required to track \(R_{i-1}\) at a desired angle \(\beta _{0,i}\) and a desired distance \(d_{0,i}\). We clarify that \(R_{i}\) can be either a (possibly cheaper) follower in a sub-formation tracking a sub-leader, or a sub-leader in the main formation tracking the main leader \(L_{1}\).

As shown in Fig. 1, we denote by \(\alpha _{i}\) the relative angle measured from the center of \(R_{i}\) to the orientation axis of \(R_{i-1}\), and by \(\beta _{i}\) the relative angle measured from the orientation axis of \(R_{i}\) to the center of \(R_{i-1}\). Let \(d_{i}\) be the actual distance between robots \(R_{i}\) and \(R_{i-1}\). The desired relative formation is then defined by \(\alpha _{i}=\pi -\beta _{0,i}\), \(\beta _{i}=\beta _{0,i}\) and \(d_{i}=d_{0,i}\) for \(i=2,\ldots,n\). To simplify notation, we also define \(\gamma _{i}=\phi _{i}-\phi _{i-1}\).

Figure 1: Definitions of relative information

It is worth noting that, in order to avoid singularities, the “look-ahead” distance \(d_{0,i}\) must be specified. Otherwise, the angle \(\beta _{i}\) from the robot’s axis of orientation to the target eventually becomes undefined as the relative distance between the two robots shrinks to zero. However, there is no need to specify from which direction or orientation this look-ahead distance should be kept.

Based on the above definitions, the dynamics of the system can be reformulated using relative information as

$$\begin{aligned}& \dot{d}_{i} = -v_{i}\cos \beta _{i}-v_{i-1}\cos \alpha _{i}, \\& \dot{\gamma}_{i} = \omega _{i}-\omega _{i-1}, \\& \dot{\beta}_{i} = -\omega _{i}+ \frac{v_{i}}{d_{i}}\sin \beta _{i}- \frac{v_{i-1}}{d_{i}}\sin \alpha _{i}, \\& i = 2,\ldots ,n, \alpha _{i}=\pi -\gamma _{i}-\beta _{i}. \end{aligned}$$
(2)

Since the dynamics in (2) involves \(v_{i-1}\) and \(\omega _{i-1}\), it is in fact a cascaded system. The formation control task is defined as follows: given \(v_{1}(t)\) and \(\omega _{1}(t)\) of the leader, design controls \(v_{i}(t)\) and \(\omega _{i}(t)\) for \(i=2,\ldots ,n\) such that

$$ \textstyle\begin{cases} d_{i} \rightarrow d_{0,i}, \\ \gamma _{i} \rightarrow 0, \\ \beta _{i} \rightarrow \beta _{0,i} \end{cases}\displaystyle \text{as } t\rightarrow \infty , \beta _{0,i} \in \biggl[0, \frac{\pi}{2} \biggr). $$

Depending on the values of \(\beta _{0,i}\), various formation patterns can be achieved. In the scenario where \(\beta _{0,i}=0\) for \(i=2,\ldots ,n\), the agents are expected to follow straight behind each other, which is defined as a serial formation. On the other hand, the agents would move along parallel lines if \(\beta _{0,i}=\pi /2\) for \(i=2,\ldots ,n\). In this section we study the tasks with \(\beta _{0,i}< \pi /2\), where the achievable formations vary from serial to “sloping parallel”. Such patterns find numerous applications, for example in convoys of tanks and in aircraft formations. Furthermore, by introducing an overlap in coverage as shown in Fig. 2, the sloping parallel formation is also efficient in tasks such as detailed map construction and mine sweeping.

Figure 2: Parallel tracking and formation keeping

Using linearization techniques, one can easily derive a globally exponentially stable controller for this purpose [13]. However, the monotonic decrease of the angular error \(\vert \beta _{i}(t)-\beta _{0,i}\vert \) cannot be guaranteed. Therefore, it is likely that the target is lost because of the sensors’ limited field of view.

It turns out that monotonic convergence can be achieved by slightly modifying the controller in [13], at the cost of losing global stability.

Proposition 1

Let

$$\begin{aligned}& v_{i} = \frac{k}{\cos \beta _{i}} \biggl(d_{i} -d_{0,i}\cos (\beta _{i}- \beta _{0,i})+\frac{v_{i-1}}{k}\cos (\gamma _{i}+\beta _{i})\biggr), \\& \omega _{i} = \frac{k}{d_{i}\cos \beta _{i} }\biggl(d_{i}\sin \beta _{i} -d_{0,i} \sin \beta _{0,i} - \frac{v_{i-1}}{k} \sin \gamma _{i} \biggr), \end{aligned}$$
(3)

where \(d_{0,i}>0\), \(\vert \beta _{0,i}\vert <\frac{\pi}{2}\) and \(k>0\). Then for any initial states \(\beta _{i}(0)\in (-\frac{\pi}{2},\frac{\pi}{2})\) and \(d_{i}(0)>0\), \(d_{i}(t)-d_{0,i}\) converges to zero and \(\vert \beta _{i}(t)-\beta _{0,i}\vert \) converges monotonically to zero.

Proof

Define \(\Delta d_{i}=d_{i}\cos \beta _{i}-d_{0,i}\cos \beta _{0,i}\) and \(\Delta \beta _{i}=\beta _{i}-\beta _{0,i}\), whose dynamics can be derived as

$$\begin{aligned}& \Delta \dot{d}_{i} = -k\Delta d_{i}, \\& \Delta \dot{\beta}_{i} = - \frac{kd_{0,i}\cos \beta _{i}}{d_{0,i}\cos \beta _{0,i}+\Delta d_{i}} \sin \Delta \beta _{i}. \end{aligned}$$

Since the first equation is linear, \(\vert \Delta d_{i}(t)\vert \) decreases monotonically, and by continuity of the solutions \(d_{i}(t)>0\), \(\forall t\ge 0\). By rewriting the second equation as \(\Delta \dot{\beta}_{i}=-k\frac{d_{0,i}}{d_{i}(t)}\sin \Delta \beta _{i}\), it follows that \(\vert \beta _{i}(t)-\beta _{0,i}\vert \) decreases monotonically as long as \(\vert \Delta \beta _{i}(0)\vert <\frac{\pi}{2}\).

But what happens if the measurements of d and β are contaminated by noise? Consider first the case where the measured angle with respect to the leader is \(\tilde{\beta}_{i}=\beta _{i}+\delta \beta _{i}\), where \(\beta _{i}\) is the true relative angle to the leader and \(\delta \beta _{i}\) is the measurement error. If the noise-contaminated \(\tilde{\beta}_{i}\) is used instead of \(\beta _{i}\) in (3), the resulting dynamics of the position tracking errors \(x_{e}\), \(y_{e}\) will be

$$\begin{aligned}& \begin{aligned} &\dot{x}_{e} = -kx_{e}+2kd_{i} \sin \frac{\delta \beta _{i}}{2}\sin \biggl( \phi _{i}+\beta _{i}+ \frac{\delta \beta _{i}}{2}\biggr), \\ &\dot{y}_{e} = -ky_{e}-2kd_{i}\sin \frac{\delta \beta _{i}}{2}\cos \biggl( \phi _{i}+\beta _{i}+ \frac{\delta \beta _{i}}{2}\biggr). \end{aligned} \end{aligned}$$
(4)

Setting \((\dot{x}_{e},\dot{y}_{e})=(0,0)\) gives the new steady-state solution for \((x_{e},y_{e})\). Apparently the steady-state errors are bounded and depend directly on \(\delta \beta _{i}\):

$$ \begin{aligned} & \vert x_{e} \vert \leq 2d_{i} \biggl\vert \sin \frac{\delta \beta _{i}}{2} \biggr\vert , \\ & \vert y_{e} \vert \leq 2d_{i} \biggl\vert \sin \frac{\delta \beta _{i}}{2} \biggr\vert . \end{aligned} $$
(5)
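As a purely illustrative calculation of the bound (5): with \(d_{i}=1\) m and a bearing error of \(\delta \beta _{i}=5^{\circ}\),

$$ \vert x_{e} \vert ,\ \vert y_{e} \vert \leq 2 \biggl\vert \sin \frac{5^{\circ}}{2} \biggr\vert \approx 0.087 \text{ m}, $$

so the steady-state positioning error stays below roughly 9 cm in this case.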

If the corresponding calculation is made for measurement errors in d, a similar result is obtained. If the measured distance to the leader is assumed to be \(\tilde{d}_{i}=d_{i}+\delta d_{i}\), where \(\delta d_{i}\) is the error, then the bounds for the steady-state errors are found to be

$$ \begin{aligned} & \vert x_{e} \vert \leq \vert \delta d_{i} \vert , \\ & \vert y_{e} \vert \leq \vert \delta d_{i} \vert . \end{aligned} $$
(6)

As shown above, the steady-state positioning errors obtained with the control law are bounded as long as the measurement errors are bounded. However, it may be difficult to guarantee the quality of the sensor data, since it depends on the sensing range of the onboard sensors. In addition, the closed-loop stability also depends on physical constraints on maximal speed, acceleration and rotation. The setup with sub-leaders and sub-formations is therefore a vital means of increasing robustness for large formations.

By tuning the parameters \(d_{0,i}\) and \(\beta _{0,i}\), various formations are achievable using the designed control protocols. Furthermore, for scenarios with \(\vert \beta _{0,i}\vert <\pi /2\) and with perfect sensor information, the guaranteed stability makes it possible to perform online switching between different parameter values for each robot independently, as a response to encountered environmental features. □
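For illustration, a minimal sketch of how the serial tracking law (3) might be implemented for robot i is given below (the function name and signature are our own; the relative measurements and the predecessor's speed are assumed to be available):

```python
import numpy as np

def serial_tracking_control(d, beta, gamma, v_prev, d0, beta0, k=1.0):
    """Serial tracking law (3) for robot i.

    d, beta, gamma : measured distance, bearing and relative heading to R_{i-1}
    v_prev         : speed of the tracked robot R_{i-1}
    d0, beta0      : desired distance and bearing, with |beta0| < pi/2
    k              : positive gain
    Returns the commands (v_i, omega_i).
    """
    v = (k / np.cos(beta)) * (
        d - d0 * np.cos(beta - beta0) + (v_prev / k) * np.cos(gamma + beta)
    )
    omega = (k / (d * np.cos(beta))) * (
        d * np.sin(beta) - d0 * np.sin(beta0) - (v_prev / k) * np.sin(gamma)
    )
    return v, omega
```

Note that the sketch inherits the singularity of (3): the measured bearing must stay away from \(\pm \frac{\pi}{2}\) and the distance must remain positive.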

2.2 Parallel tracking

In this part, the problem of parallel tracking control is studied as shown in Fig. 2. Each robot is aligned with a mobile leader to navigate in parallel with the same orientation. In the ideal case, the angle from the leader’s axis of orientation to the following agent is expected to be \(\frac{\pi}{2}\).

By introducing another robot to follow the original follower, such parallel tracking can be extended to a multi-agent system with a “chain” of robots, which all move in parallel with the same orientation. This formation pattern is useful in applications such as the exploration and coverage of large areas by a team of mobile units. For example, a concrete application is the search for land-mines with a robot team.

Recall that our goal is to align the center points of two mobile agents. In such a situation the linearization technique applied in Sect. 2.1 no longer works, as it would require \(\vert \beta _{0}\vert =\frac{\pi}{2}\). One may argue that feedback linearization can still be applied by considering two off-axis points at a small distance from the axes, as in [2, 4]. However, it is worth noting that the control input in that case could become excessively large or even ill-conditioned.

To address the above issue, we instead propose a nonlinear control law in this part. Under some mild assumptions on the leader’s maneuvers, we can guarantee that the control input of followers will stay well bounded.

In the sequel, it is assumed that the speed and angular velocity of the leader are known to the followers; they can be either estimated from onboard sensors or obtained from information transferred by local communication. In the scenario where the robot is required to follow the leader at a 90-degree angle from its axis of orientation, the speeds of the robot and the leader will not necessarily be equal.

Let \(d_{0}\) and \(d_{i}\) be the desired tracking distance and the measured distance between robots \(R_{i}\) and \(R_{i-1}\), respectively. Then the following control law is designed for robot i:

$$\begin{aligned} \begin{bmatrix} v_{i} \\ \omega _{i} \end{bmatrix} = \begin{bmatrix} v_{i-1} \\ \omega _{i-1} \end{bmatrix} +C_{0}, & \end{aligned}$$
(7)

where \(k_{1}>0\), \(k_{2}>0\) are arbitrary constants and

$$\begin{aligned} C_{0}= \begin{bmatrix} \omega _{i-1}d_{0}-d_{0}\bigl(\gamma _{i}+k_{1}(d_{i}-d_{0})+k_{2}\bigl(\beta _{i}- \frac{\pi }{2}\bigr)\bigr) \\ -\gamma _{i} \end{bmatrix} . & \end{aligned}$$
(8)

Recall that \(\beta _{i}\) denotes the measured relative angle from robot \(R_{i}\) to the leader \(R_{i-1}\), which is supposed to be approximately \(\frac{\pi}{2}\). An implementation sketch of this control law is given below, and the closed-loop stability is then analyzed.
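The following minimal sketch shows how (7)–(8) could be computed in practice (names and defaults are our own; the leader inputs are assumed known through communication or estimation):

```python
import numpy as np

def parallel_tracking_control(d, beta, gamma, v_prev, omega_prev,
                              d0, k1=1.0, k2=1.0):
    """Parallel tracking law (7)-(8) for robot i.

    d, beta, gamma     : measured distance, bearing and relative heading
    v_prev, omega_prev : speed and angular velocity of the leader R_{i-1}
    d0                 : desired lateral distance
    k1, k2             : positive gains
    Returns the commands (v_i, omega_i).
    """
    c0_v = omega_prev * d0 - d0 * (gamma + k1 * (d - d0)
                                   + k2 * (beta - np.pi / 2))
    c0_w = -gamma
    return v_prev + c0_v, omega_prev + c0_w
```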

Theorem 1

Suppose that the control law (7) is applied to the follower and \(\omega _{i-1}\) is set as a positive constant. Then the equilibrium (\(d=d_{0}\), \(\gamma =0\), \(\beta =\frac{\pi}{2}\)) of the closed-loop system is locally exponentially stable.

Remark

It is well known that it is impossible to stabilize the above nonholonomic systems using a \(C^{1}\) state feedback control. Therefore, when both \(v_{i-1}\) and \(\omega _{i-1}\) are set to zero (a stationary leader), stabilization of the system is not possible.

Proof

Denote \(\Delta \beta =\beta _{i}-\frac{\pi}{2}\), \(\gamma =\gamma _{i}\) and \(\Delta d=d_{i}-d_{0}\), then

$$\begin{aligned}& \Delta \dot{ d} = v\sin \Delta \beta -v_{i-1}\cos \alpha _{i}, \\& \dot{\gamma} = \omega -\omega _{i-1}, \\& \Delta \dot{ \beta} = -\omega +\frac{v}{d_{0}+\Delta d}\cos \Delta \beta - \frac{v_{i-1}}{d_{0}+\Delta d}\sin \alpha _{i}. \end{aligned}$$

Substituting the control law (7)–(8) into the above (and noting that \(\alpha _{i}=\frac{\pi }{2}\) at the desired formation), it yields that

$$\begin{aligned}& \Delta \dot{ d} = \bigl(v_{i-1}+\omega _{i-1}d_{0}-d_{0}(\gamma +k_{1}\Delta d+k_{2}\Delta \beta )\bigr)\sin \Delta \beta -v_{i-1}\cos \alpha _{i}, \\& \dot{\gamma} = -\gamma, \\& \Delta \dot{ \beta} = \gamma -\omega _{i-1}+\frac {\cos \Delta \beta}{d_{0}+\Delta d}\bigl(v_{i-1}+\omega _{i-1}d_{0}-d_{0}(\gamma +k_{1}\Delta d+k_{2}\Delta \beta )\bigr)-\frac{v_{i-1}}{d_{0}+\Delta d}\sin \alpha _{i}. \end{aligned}$$
(9)

Using the fact \(\alpha _{i}=\frac {\pi }{2}-(\Delta \beta +\gamma )\), by linearization it is easy to see that the error system defined in (9) is locally exponentially stable.
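For completeness, the linearization can be sketched as follows (treating \(v_{i-1}\) and \(\omega _{i-1}\) as constants; this computation is our own and should be verified by the reader). Evaluating the Jacobian of (9) at \((\Delta d,\gamma ,\Delta \beta )=(0,0,0)\) gives

$$ A= \begin{bmatrix} 0 & -v_{i-1} & \omega _{i-1}d_{0} \\ 0 & -1 & 0 \\ -\bigl(k_{1}+\frac{\omega _{i-1}}{d_{0}}\bigr) & 0 & -k_{2} \end{bmatrix}, $$

whose characteristic polynomial factors as \((\lambda +1)\bigl(\lambda ^{2}+k_{2}\lambda +\omega _{i-1}(k_{1}d_{0}+\omega _{i-1})\bigr)\). This polynomial is Hurwitz for any \(k_{1},k_{2}>0\) provided that \(\omega _{i-1}>0\), which is consistent with the statement of Theorem 1.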

Now consider the multi-agent system (2) and assume the first robot is the leader, namely \(R_{1}=R_{T}\). □

Theorem 2

Suppose that each robot i (\(i=2,\ldots ,n\)) uses control law (7). Then local exponential convergence of the parallel tracking error is guaranteed if the leader’s motion satisfies \(\omega _{T}=\textit{constant}>0\).

Based on the cascaded structure of the overall system, the above result can be proved using Theorem 1. The proof is thus omitted.

For the situation where \(\omega _{T}=0\) and \(v_{T}=\text{constant}>0\), convergence is lost and only a bounded error can be guaranteed.

Theorem 3

Suppose that each robot i (\(i=2,\ldots ,n\)) uses control law (7). Then the parallel tracking error will be bounded if the motion of the leader is defined by \(\omega _{T}=0\) and \(v_{T}=\textit{constant}>0\).

The proof can be carried out using center manifold arguments and is omitted. In this case the control law given in [13] can be used to recover convergence:

Proposition 2

Assume that the reference velocities of the leading robot \(R_{1}\) satisfy the following conditions:

$$ v_{T}(t)\ge v_{0}>0, \quad \int _{0}^{\infty} \bigl\Vert \omega _{T}(t) \bigr\Vert ^{2}dt< \infty . $$

Considering the equilibrium (\(d_{i}=d_{0,i}\), \(\gamma _{i}=0\), \(\beta _{i}=\frac{\pi}{2}\), \(i=2,\ldots ,n\)), the cascaded system (2) can be locally exponentially stabilized by the control law

$$\begin{aligned}& v_{i} = v_{i-1}+c_{1}\Delta d_{i}\cos (\beta _{i})-c_{2}\Delta \beta _{i}, \\& \omega _{i} = c_{3}\Delta \beta _{i}-c_{4} \bigl(d_{0,i}-d_{i}\sin ( \beta _{i}) \bigr)-c_{5}\gamma _{i}, \end{aligned}$$
(10)

where \(c_{2}\geq 0\) and \(c_{1}\), \(c_{3}\), \(c_{4}\), \(c_{5}\) are positive constants; here \(\Delta d_{i}=d_{i}-d_{0,i}\) and \(\Delta \beta _{i}=\beta _{i}-\frac{\pi}{2}\), as in the proof of Theorem 1.
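A brief sketch of (10) in the same style is given below (names are our own, and \(\Delta d_{i}\), \(\Delta \beta _{i}\) are interpreted as above):

```python
import numpy as np

def parallel_tracking_control_alt(d, beta, gamma, v_prev, d0,
                                  c1=1.0, c2=0.0, c3=1.0, c4=1.0, c5=1.0):
    """Alternative parallel tracking law (10) for a straight-moving leader."""
    dd = d - d0            # distance error
    db = beta - np.pi / 2  # bearing error
    v = v_prev + c1 * dd * np.cos(beta) - c2 * db
    omega = c3 * db - c4 * (d0 - d * np.sin(beta)) - c5 * gamma
    return v, omega
```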

3 Formation adaptation and obstacle avoidance

When a formation of robots navigates in an environment with clustered obstacles, the ability to adapt to the environment becomes a necessity in order to accomplish a cooperative task safely and efficiently. In such scenarios, the size and shape of the whole formation play a crucial role in deciding the mobility of the robot team, such as whether it can move between obstacles or pass through a narrow corridor, and how fast it can move and how sharply it can turn. Therefore, it is desirable for the formation (especially large groups) to be able to autonomously adapt to the changing environment or to switching tasks, where physical constraints on robot maneuverability result in additional restrictions to be considered.

In general, changing the formation shape can be considered as switching between different control protocols. In the past decades, various methodologies have been developed in control theory for the stability analysis of switching systems, such as those based on Lyapunov theory. Moreover, the effects of noise and disturbances cannot be ignored in the stability analysis. In this section, the switching behaviors between the proposed serial and parallel tracking control laws will be studied, together with obstacle avoidance when necessary.

3.1 Stability of the switching system

For a switched system formed by the two proposed basic controllers, we can immediately conclude that it is impossible to find a common Lyapunov function to analyze the stability of arbitrary switching behaviors, as the two control laws are defined on different domains.

Firstly, we notice that in the absence of measurement errors, a switch to a serial formation with \(0\leq \beta _{0}<\frac{\pi}{2}\) is by nature stable, as the global stability of the serial control has been proved. What remains to be studied is therefore the switch from a serial tracking pattern to a parallel formation.

If there is a stable neighborhood of the parallel control near the equilibrium \(\beta =\beta _{0}\) and the serial control is globally stable for any \(\beta _{0}\in (0,\frac{\pi}{2})\), then there must exist a region around \(\beta =\beta _{0}\) where both control laws are stable. Therefore, for a robot in the serial formation, by first driving it into the stable region of the parallel control, it is then theoretically possible to perform a stable switch to the parallel tracking law.

Despite the above theoretical stability analysis of switching behaviors, some practical issues may still arise due to noise in the observed sensor data. The factor \(\frac{1}{\cos \beta}\) in (3) tends to amplify the effect of noise when \(\beta _{0}\) is close to \(\frac{\pi}{2}\), which will eventually result in instability of the closed-loop system as \(\Delta t\rightarrow 0\). The size of the stable region for the parallel tracking controller therefore needs to be considered. If the stable region is large enough that switching to the parallel control with \(\beta _{0}=\frac{\pi}{2}\) can be performed directly from a stable formation where \(\beta _{0}\) is near 0, then there is no need to drive the serial control with an angle it was not designed for. In Sect. 5, simulation examples will be provided to illustrate the stability of switching behaviors.

3.2 Obstacle avoidance

In the control of mobile robots, obstacle avoidance is always an important part of any integrated solution. In the literature, there exist two popular types of techniques for designing obstacle-avoidance control laws. Briefly speaking, one is based on reactive behaviors (online), while the other is derived by re-planning (mostly offline) [14–16]. However, it is worth noting that limitations on communication and computation bandwidth must be considered when involving reactive behaviors in the control design.

A reactive controller is designed in this section by directly using sensor feedback, which can be further integrated with the proposed formation controllers.

As for obstacle detection, we consider the scenarios where the available sensing information is obtained from an array of range sensors, from which a piece-wise linear approximation can be constructed for the obstacle contour.

In our work, the following obstacle avoidance control is adopted by each robot:

$$\begin{aligned}& \begin{aligned} &v = k_{1}\cos 2\alpha, \\ &\omega = -k_{2}\cos \alpha , \end{aligned} \end{aligned}$$
(11)

where \(k_{1}>0\), \(k_{2}>0\), and α in Fig. 3 denotes the angle between the direction of the robot velocity and the tangent to the obstacle contour at the point where the velocity vector intersects it. With this definition, the range of α is chosen to lie in \([0,\pi ]\). The stability of the obstacle-avoidance behavior can be analyzed as follows.

Figure 3: Obstacle avoidance

Proposition 3

Assume that the contour of the obstacle is sufficiently long and linear, with \(\theta _{0}\) being its orientation. The reactive control

$$ \dot{\phi}=\omega =-k_{2}\cos \alpha $$

leads to two equilibria when \(\phi \in [0,2\pi ]\), namely, \(\phi _{1}=\theta _{0}+\frac{\pi}{2}\) and \(\phi _{2}=\theta _{0}+\frac{3\pi}{2}\). In addition, \(\phi _{1}\) is unstable while \(\phi _{2}\) is asymptotically stable.

Remark

The proof of the above result is straightforward and is omitted here due to page limitations. However, it is worth noting that in practical applications, because of the above instability and the presence of noise, the robot will never get stuck in a local minimum if only convex polyhedral obstacles exist in the environment.
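For the interested reader, the omitted argument can be sketched under the assumption (ours, based on the geometry of Fig. 3) that \(\alpha =\phi -\theta _{0}\) along a straight contour. The heading dynamics then reduce to

$$ \dot{\phi}=-k_{2}\cos (\phi -\theta _{0}), \qquad \frac{\partial \dot{\phi}}{\partial \phi}=k_{2}\sin (\phi -\theta _{0}), $$

so the linearization at \(\phi _{1}=\theta _{0}+\frac{\pi}{2}\) has slope \(k_{2}>0\) (unstable), while at \(\phi _{2}=\theta _{0}+\frac{3\pi}{2}\) it has slope \(-k_{2}<0\) (asymptotically stable).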

4 Navigation coordination

In this section, coordination of the proposed controllers is studied to realize complex behaviors. Compared with serial tracking, it is more challenging for a team of robots to maintain a parallel formation while navigating in an obstacle-clustered environment. Therefore, in the sequel, we consider parallel tracking as the nominal formation. As shown in Fig. 4, when the parallel formation cannot be maintained in the presence of obstacles, flexibility of the formation pattern is desirable, and a switch to the serial formation is performed to maintain communication with neighbors. Furthermore, the parallel formation should be resumed after passing the obstacle clusters.

Figure 4: Navigation coordination

To accomplish the above task, the coordination of parallel tracking, serial tracking, and obstacle avoidance remains to be studied. Firstly, by adding the latter two behaviors to the parallel tracking formation, it may be possible to coordinate the overall system by a hybrid automaton. Depending on whether the two behaviors are active simultaneously or not, two different hybrid automata are possible; see, for example, [17].

On the other hand, by concurrently activating the two sub-controllers in the overall system, a smoother overall performance can be achieved [18], as different behaviors affect the system simultaneously instead of by switching. However, such a system becomes significantly more difficult to analyze as new behaviors are added.

Another method for the coordination task can be considered by applying hard switches between the different behaviors. Although such a method is more scalable in practice, there is a risk that chattering may be introduced into the overall performance and the transient behavior may be unsatisfactory.

Based on the above analysis, we propose to use a regularized automaton [19], where the transient performance can be improved by adding additional intermediate nodes to the automaton.

The obstacle-avoidance behavior is activated when an obstacle is detected within some threshold distance of the robot (such as the detection range of the infrared sensors). The serial tracking behavior then becomes active simultaneously. By applying hard switches between the two sub-controllers, chattering may occur in some scenarios. To address such issues, we instead consider the resulting system with these two behaviors as a discontinuous differential system. Under some assumptions, the existence of a sliding surface can be proved. Moreover, the obstacle-avoidance behavior results in a repulsive potential field that is orthogonal to the surface on which the behavior is activated. Hence, the vector field on the sliding surface can be derived as

$$ f_{S}=\alpha _{s} f_{OA}+(1-\alpha _{s})f_{ST}, $$

where \(\alpha _{s}\in [0,1]\) is some weighting factor to be tuned. “OA” and “ST” represent “obstacle avoidance” and “serial tracking” respectively. The advantage of adding the above behavior as an intermediate node lies in a better transient response as well as the ability to avoid the so-called “Zeno phenomena” [20].

The next step is to add another intermediate node such that the parallel formation behavior can also be incorporated, where the corresponding vector field is chosen as

$$ f_{A}=\beta _{a} f_{S}+(1-\beta _{a})f_{PT}. $$

Finally, once an obstacle is detected, a coordination law can be applied by taking all the above vector fields into account, where the system is switched into the following intermediate node:

$$ f_{A}=\beta _{a}\bigl(\alpha _{s} f_{OA}+(1-\alpha _{s})f_{ST} \bigr)+(1-\beta _{a})f_{PT}, $$
(12)

for some \(\alpha _{s}\in [0,1]\) and \(\beta _{a}\in [0,1]\). Naturally, different emerging behaviors can be observed by tuning the weights \(\alpha _{s}\) and \(\beta _{a}\), which will be illustrated by numerical examples.
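To make the coordination concrete, the sketch below blends the control commands of the three behaviors according to (12); the obstacle-avoidance commands follow (11), while the serial and parallel tracking commands can reuse the controllers sketched in Sect. 2. Treating the vector fields as \((v,\omega )\) command pairs is our own implementation assumption, and all names are illustrative:

```python
import numpy as np

def obstacle_avoidance_control(alpha, k1=0.5, k2=1.0):
    """Reactive obstacle-avoidance law (11); alpha lies in [0, pi]."""
    return np.array([k1 * np.cos(2 * alpha), -k2 * np.cos(alpha)])

def coordinated_control(f_oa, f_st, f_pt, alpha_s, beta_a):
    """Blended vector field (12) used as an intermediate node.

    f_oa, f_st, f_pt : (v, omega) commands from obstacle avoidance,
                       serial tracking and parallel tracking
    alpha_s, beta_a  : weights in [0, 1]
    """
    f_s = alpha_s * f_oa + (1 - alpha_s) * f_st   # sliding-surface field
    return beta_a * f_s + (1 - beta_a) * f_pt     # full intermediate node

# Illustrative usage with made-up measurements:
# f_oa = obstacle_avoidance_control(alpha=np.pi / 3)
# f_st = np.array(serial_tracking_control(d, beta, gamma, v_prev, d0, beta0))
# f_pt = np.array(parallel_tracking_control(d, beta, gamma, v_prev, omega_prev, d0))
# v, omega = coordinated_control(f_oa, f_st, f_pt, alpha_s=0.6, beta_a=0.8)
```

Tuning \(\alpha _{s}\) and \(\beta _{a}\) in this sketch corresponds directly to the weight tuning discussed above.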

5 Illustrative examples

In this section, numerical simulations are provided to illustrate the effectiveness of our proposed control algorithms. Firstly, the stability of the proposed controller is demonstrated, where we take the serial tracking controller (3) as an example. The leader follows a given trajectory with constant velocity \(v_{T}=0.1\) and constant angular velocity \(\omega _{T}=-0.03\). The closed-loop trajectories and the formation errors are shown in Fig. 5. The relative distance and bearing angles converge to the desired values as expected.

Figure 5: Simulation results of the serial formation

Next, the parallel tracking controller is tested in the presence of obstacles. At the initial state, a group of four agents is distributed on the y-axis, where the leader has a constant velocity and zero angular velocity. The goal of the multi-agent system is to achieve a collision-free movement along the wall while maintaining the parallel formation. The navigation controller (12) is tested, and the emerging behaviors for different parameters are shown in Fig. 6. With more emphasis on maintaining the parallel formation (thus using a smaller \(\beta _{a}\) and a larger \(\alpha _{s}\) as in Fig. 6(a)), larger \(k_{1}\) and \(k_{2}\) are required for obstacle avoidance. Compared with switching to the repulsive vector field \(f_{OA}\), Fig. 6(b) and Fig. 6(c) show that also incorporating a serial term helps to realize a smoother obstacle-avoidance behavior. This tradeoff can be adjusted through \(\alpha _{s}\) for different application scenarios.

Figure 6: Navigation coordination in the presence of obstacles

By taking all the control schemes into consideration, a more complicated scenario is studied. The agents navigate in an unknown environment with obstacles, and the switching behaviors are demonstrated. The leader follows a given trajectory with constant velocity \(v_{T}=0.1\) and piece-wise constant angular velocity. Distributed controllers for the other three followers are implemented to realize a switching formation with inter-agent distance \(d_{0}=0.8\) while avoiding the unknown obstacle. The switching behaviors are demonstrated in Fig. 7. In the beginning, the multi-agent system maintains a parallel formation during the time interval \([0, 25]\). After that, the group gradually switches to a serial formation to go through the passage in the presence of an obstacle. The obstacle avoidance behavior is activated at \(t=70\) and the corresponding obstacle avoidance commands are added during \(70 \leq t \leq 110\). In order to perform a stable switch back to the parallel formation, a transient serial control mechanism is applied on \([110, 120]\) to drive the robots into the stable region of the parallel control, and the final switch occurs at \(t=120\).

Figure 7: Trajectories of switching formations

The results show that the followers achieve serial and parallel formations as desired and switch between the patterns smoothly. Note that some large maneuvers can be observed on the trajectory of agent 4 at \(t=45\) and \(t=90\). This is because of the change of \(\beta _{0}\) when switching between serial and parallel formations, as well as the activation and deactivation of the obstacle avoidance control. In practice, time-varying parameters \(k_{1}\) and \(k_{2}\) in (11) can be considered, and the transition behaviors can be further improved to guarantee smooth maneuvers.

6 Conclusions

In this paper, two control algorithms are proposed for a multi-agent system to achieve serial and parallel formations respectively, where only local measurements of relative distance and bearing information from neighbors are used. Stability and robustness of the proposed algorithms are analyzed. Practical issues such as formation switching and obstacle avoidance are also discussed, which are illustrated by numerical simulations.