1 Introduction

A popular research topic in the area of systems and control is the formation control of multi-robot systems to allow multiple robots to achieve a prespecified configuration in a distributed manner. The popularity of this topic is attributed to its modern applications, including the formation flight of unmanned aerial vehicles and the exploration of hazardous environments through mobile robots.

Herein, we focus on four-legged robots (see, e.g., Fig. 1) as the robots to be controlled. Four-legged robots can perform straight, lateral, and rotational movements using their four legs. They can also cross obstacles if the tips of their legs can reach the top surfaces of the obstacles. Compared with wheeled robots, four-legged robots are suitable for rough-terrain missions because the tip positions of their legs can be adjusted to the terrain. Moreover, four legs offer a good balance between walking stability and the production and operation costs: fewer legs reduce the walking stability of a robot, whereas more legs increase the hardware cost and the energy consumption.

In our previous study [1], we addressed a formation control problem for four-legged robots subject to discrete-valued input constraints. The motivation was that specific movement commands are used to drive four-legged robots [2, 3] and that switching between these commands (i.e., discrete-valued signals) enables the position control of the robots. We then proposed formation controllers as a solution to this problem. The proposed controllers were obtained by combining conventional formation controllers for omnidirectional robots with dynamic quantization, i.e., the transformation of continuous-valued signals into discrete-valued ones through a feedback mechanism. However, [1] focused on achieving fixed formations and did not consider moving formations. Moving formations are necessary for many applications, including cooperative exploration and transportation by mobile robots.

Motivated by this, we aim to extend the theoretical framework developed in [1] to the case of leader–follower formations [4]. In the leader–follower formation, we regard one robot as the leader and the other robots as the followers, and the followers track the leader while preserving a prespecified formation. In this scenario, moving formations can be achieved simply by steering the leader. Although the leader–follower approach leads to an over-reliance on a single robot for achieving the group objective and is not robust against disturbances [5], its simplicity and scalability are major advantages [6]. In addition, there are some cases where the leader and the followers are preassigned in the target system and applying the leader–follower approach is natural, e.g., following and hunting a target with mobile robots [7], the formation flying of two spacecraft [8, 9], and adaptive cruise control systems [10].

Fig. 1  Four-legged robot [2]

The main contributions of this paper are summarized below.

  1.

    We present controllers that achieve leader–follower formations using discrete-valued inputs. Our controllers combine PD-like formation controllers with dynamic quantization. They are obtained as a simple extension of the controllers in [1]: we focus on the structure of those controllers and modify only the relevant parts. Numerical examples demonstrate the performance of the presented controllers.

  2.

    We theoretically analyze the feedback system with the presented controllers. Specifically, we evaluate a performance index that quantifies the difference between the behavior of the feedback system with quantized inputs and that of its unquantized counterpart. We derive an upper bound on the performance index as a function of the system parameters. This result helps to evaluate the impact of the quantization on the system behavior and to provide a theoretical guarantee of the stability of the feedback system.

Finally, we discuss related work. A number of results on leader–follower formation control have been reported. Consolini et al. [6] considered formation control for unicycle-type robots with constraints on their input magnitudes. Mariottini et al. [5] and Han et al. [11] discussed the combination of localization and control to achieve leader–follower formations. Lin et al. [12] proposed an approach based on complex-valued graph Laplacians to study a leader–follower formation control problem. Dai et al. [13] developed adaptive formation controllers that achieve both prescribed transient and steady-state performance. Tang et al. [14] studied formation control in three-dimensional space based on the persistence of excitation of the desired formation. Results that involve quantized signals can also be found. Qiu et al. [15] addressed a leader-following consensus problem for high-order multi-agent systems with quantized outputs. Xiong et al. [16] studied the leader–follower formation control of linear heterogeneous multi-agent systems using a quantizer with a zoom variable. Huang and Dong [17] focused on reliable formation control under quantized communication and cyber attacks. Hu et al. [18] and Wang et al. [19] developed adaptive formation controllers for unmanned aerial vehicles with uncertainties and quantized inputs; the same authors [20] also considered the case where both inputs and outputs are quantized. However, the aforementioned studies primarily focused on omnidirectional robots, unicycle-type robots, and robots with general linear dynamics; four-legged robots were not considered. In addition, the quantization of signals in the existing studies stems from the limited capacity of the communication network between robots, whereas the quantization in our study stems from the properties of four-legged robots. Consequently, our quantization method is distinguished from the existing ones, and the existing results cannot be directly applied to this study.

Notation: We denote the real number field and the set of positive real numbers by \(\mathbb {R}\) and \(\mathbb {R}_+\), respectively. For the complex number z, \(\text {Re}(z)\), \(\text {Im}(z)\), and \(\vert z\vert \) represent its real part, imaginary part, and absolute value, respectively. For the vectors \(x_1,x_2,\ldots ,x_n\in \mathbb {R}^2\) and the set \(\mathbb {I}:=\{ i_{1},i_{2},\ldots ,i_{m}\}\subseteq \{ 1,2,\ldots ,n\}\), let \([x_i ]_{i\in \mathbb {I}}:=[x_{i_1}^\top ~x_{i_2}^\top ~\cdots ~x_{i_m}^\top ]^\top \in \mathbb {R}^{2m}\). The \(\infty \)-norms of vectors and matrices and the Euclidean norms of vectors are described using \(\Vert \cdot \Vert \) and \(\Vert \cdot \Vert _2\), respectively. Let \(0_{n\times m}\) be the \(n\times m\) zero matrix, and let \(I_n\) be the n-dimensional identity matrix. We denote the diagonal matrix with the diagonal elements \(x_1,x_2,\ldots ,x_n\in \mathbb {R}\) by \(\text {diag}(x_1,x_2,\ldots ,x_n)\). The Kronecker product of the matrices \(M_1\) and \(M_2\) is defined by \(M_1\otimes M_2\). The cardinality of the set \(\mathbb {S}\) is denoted by \(\vert \mathbb {S}\vert \). We use \(\mathbb {B}(c,r)\) to represent a closed disk in \(\mathbb {R}^2\) with the center c and the radius r, i.e., \(\mathbb {B}(c,r):=\{ x\in \mathbb {R}^2\, \vert \, \Vert x-c\Vert _2\le r\}\). For the positive number c and the vector v, let \(\text {sat}_c(v)\) denote the saturation function such that \(\vert v_i\vert \le c\) is guaranteed for each element \(v_i\) of v.
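
For concreteness, the two norms and the saturation function above can be evaluated as in the following minimal Python sketch; the function name sat is our own and not part of the paper's notation.

```python
import numpy as np

v = np.array([0.3, -0.8])

inf_norm = np.linalg.norm(v, np.inf)   # ||v||: infinity norm, here 0.8
euc_norm = np.linalg.norm(v)           # ||v||_2: Euclidean norm

def sat(v, c):
    # saturation sat_c(v): clips every element of v into [-c, c]
    return np.clip(v, -c, c)

print(sat(v, 0.5))                     # [ 0.3 -0.5]
```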

2 Problem formulation

Consider the multi-robot system \(\Sigma \) shown in Fig. 2, which comprises n four-legged robots in two-dimensional space and controllers embedded in them.

Robot i (\(i\in \{1,2,\ldots ,n\}\)) is given as the discrete-time system

$$\begin{aligned} \begin{bmatrix} x_{i1}(t+1) \\ x_{i2}(t+1) \\ \theta _i (t+1) \end{bmatrix} = \begin{bmatrix} x_{i1}(t) \\ x_{i2}(t) \\ \theta _i (t) \end{bmatrix} + \begin{bmatrix} (\cos (\theta _i (t)+u_{i2}(t))(1-u_{i3}(t))+\sin (\theta _i (t)+u_{i2}(t))u_{i3}(t))u_{i1}(t) \\ (\sin (\theta _i (t)+u_{i2}(t))(1-u_{i3}(t))-\cos (\theta _i (t)+u_{i2}(t))u_{i3}(t))u_{i1}(t) \\ u_{i2}(t) \end{bmatrix}, \end{aligned}$$
(1)

where \(t\in \{0,1,\ldots \}\) denotes the discrete time and \([x_{i1}(t)\ x_{i2}(t)]^\top \in \mathbb {R}^2\) (defined as \(x_i(t)\)) and \(\theta _i(t)\in (-\pi ,\pi ]\) denote the position and orientation of robot i, respectively. The variables \(u_{i1}(t),u_{i2}(t)\) \(\in \mathbb {R}\) and \(u_{i3}(t)\in \{0,1\}\) stand for the control inputs determining the translational and rotational velocities and the movement type, respectively. The relation between the value of \(u_{i3}(t)\) and the movement type is shown in Fig. 3. If \(u_{i3}(t)=0\), robot i performs rotational and straight movements, whereas if \(u_{i3}(t)=1\), it performs rotational and lateral movements. The system (1) is derived by incorporating \(u_{i3}(t)\) into a discrete-time model of a unicycle-type robot in order to enable lateral movements.
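
To illustrate how (1) is evaluated, the following Python sketch implements one step of the dynamics; the function name, the array layout, and the wrapping of the orientation back into \((-\pi ,\pi ]\) are our own choices.

```python
import numpy as np

def robot_step(x, theta, u1, u2, u3):
    """One step of the dynamics (1) for a single robot.

    x     : np.ndarray of shape (2,), position [x_i1, x_i2]
    theta : orientation theta_i(t) in (-pi, pi]
    u1    : translational velocity (a multiple of s under the input constraint)
    u2    : rotational velocity (a multiple of pi/4 under the input constraint)
    u3    : movement type, 0 = straight, 1 = lateral (see Fig. 3)
    """
    phi = theta + u2
    # u3 = 0: move along the rotated heading; u3 = 1: move perpendicular to it
    direction = np.array([np.cos(phi) * (1 - u3) + np.sin(phi) * u3,
                          np.sin(phi) * (1 - u3) - np.cos(phi) * u3])
    x_next = x + u1 * direction
    theta_next = np.arctan2(np.sin(phi), np.cos(phi))  # theta(t) + u2(t), wrapped
    return x_next, theta_next
```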

Fig. 2  Multi-robot system \(\Sigma \)

The controller embedded in robot i is of the form

$$\begin{aligned} K_i: {\left\{ \begin{array}{ll} \xi _i(t+1) =f_{i1}(\xi _i(t),[x_j(t)-x_i(t)]_{j\in \mathbb {N}_i},\theta _i(t),[y_j(t)]_{j\in \mathbb {N}_i}), \\ u_i(t) =f_{i2}(\xi _i(t),[x_j(t)-x_i(t)]_{j\in \mathbb {N}_i},\theta _i(t),[y_j(t)]_{j\in \mathbb {N}_i}), \\ y_i(t) =f_{i3}(\xi _i(t),[x_j(t)-x_i(t)]_{j\in \mathbb {N}_i},\theta _i(t),[y_j(t)]_{j\in \mathbb {N}_i}), \\ \end{array}\right. } \end{aligned}$$
(2)

where \(\xi _i(t)\in \mathbb {R}^m\) is the state, \([x_j(t)-x_i(t)]_{j\in \mathbb {N}_i}\in \mathbb {R}^{2\vert \mathbb {N}_i\vert }\), \(\theta _i(t)\), and \([y_j(t)]_{j\in \mathbb {N}_i}\in \mathbb {R}^{\mu \vert \mathbb {N}_i\vert }\) are the inputs, \(u_i(t)=[u_{i1}(t)\ u_{i2}(t)\ u_{i3}(t)]^\top \) and \(y_{i}(t)\in \mathbb {R}^\mu \) are the outputs, and \(f_{i1}:\mathbb {R}^m\times \mathbb {R}^{2\vert \mathbb {N}_i\vert }\times (-\pi ,\pi ]\times \mathbb {R}^{\mu \vert \mathbb {N}_i\vert }\rightarrow \mathbb {R}^m\), \(f_{i2}:\mathbb {R}^m\times \mathbb {R}^{2\vert \mathbb {N}_i\vert }\times (-\pi ,\pi ]\times \mathbb {R}^{\mu \vert \mathbb {N}_i\vert }\rightarrow \mathbb {R}^3\), and \(f_{i3}:\mathbb {R}^m\times \mathbb {R}^{2\vert \mathbb {N}_i\vert }\times (-\pi ,\pi ]\times \mathbb {R}^{\mu \vert \mathbb {N}_i\vert }\rightarrow \mathbb {R}^\mu \) are functions characterizing the controller. The set \(\mathbb {N}_i\subset \{1,2,\ldots ,n\}\) consists of the indices of the neighboring robots from which robot i can obtain the information on the relative positions. To simplify the discussion, we assume the initial state of the controller to be zero, i.e., \(\xi _i(0)=0\). We further assume that, for the output \(u_i(t)\), the elements \(u_{i1}(t)\) and \(u_{i2}(t)\) must take discrete values, that is, \(u_{i1}(t)\in \{0,\pm s,\pm 2s,\ldots \}\) and \(u_{i2}(t)\in \{0,\pm \pi /4,\pm \pi /2,\pm (3\pi )/4,\pi \}\), where \(s\in \mathbb {R}_+\) is the step size. This restricts the movement distance and direction of robot i at each time t to integer multiples of s and \(\pi /4\), respectively.
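
As a rough illustration of this constraint (not the controller of [1] itself), admissible values of \(u_{i1}(t)\) and \(u_{i2}(t)\) can be obtained by rounding, as in the sketch below; mapping \(-\pi \) to \(\pi \) is our own convention for staying in \((-\pi ,\pi ]\).

```python
import numpy as np

def nearest_translation(u1, s):
    # nearest element of {0, ±s, ±2s, ...} (mid-tread rounding)
    return s * np.round(u1 / s)

def nearest_rotation(u2):
    # nearest element of {0, ±pi/4, ±pi/2, ±3pi/4, pi}
    q = (np.pi / 4) * np.round(u2 / (np.pi / 4))
    return np.pi if np.isclose(q, -np.pi) else q
```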

Fig. 3  Relation between the value of \(u_{i3}(t)\) and the movement type of robot i

To represent the network structure of the system \(\Sigma \), we introduce the time-invariant directed graph \(G=(\mathbb {V},\mathbb {E})\), where \(\mathbb {V}:=\{1,2,\ldots ,n\}\) and \(\mathbb {E}\subset \mathbb {V}\times \mathbb {V}\) denote the vertex and edge sets that correspond to the indices of the robots and the connections among them, respectively. Then, we define \(\mathbb {N}_i:=\{ j\in \mathbb {V}\, \vert \, (j,i)\in \mathbb {E}\}\).

To consider the leader–follower formation control for the system \(\Sigma \), we suppose that robot 1 is the leader and robots 2 to n are the followers without loss of generality. Let \(d_1(t)\in \mathbb {R}^2\) and \(r_{ij}\in \mathbb {R}^2\) denote the desired velocity of the leader and the desired position of robot i relative to robot j, respectively. Under this setting, we address the following problem.

Problem 1

Consider the multi-robot system \(\Sigma \). Suppose that the step size s and the desired leader’s velocity \(d_1(t)\) and relative positions \(r_{ij}\) \((i,j=1,2,\ldots ,n)\) are given. Find controllers \(K_1,K_2,\ldots ,K_n\) (i.e., functions \(f_{11},f_{12},f_{13},f_{21},\ldots ,f_{n3}\)) that satisfy

$$\begin{aligned}&\lim _{t\rightarrow \infty }(x_1(t+1)-x_1(t)-d_1(t))=0_{2\times 1}, \end{aligned}$$
(3)
$$\begin{aligned}&\lim _{t\rightarrow \infty }(x_i(t)-x_j(t))=r_{ij}\quad \forall (i,j)\in \mathbb {V}\times \mathbb {V} \end{aligned}$$
(4)

for every initial state \((x_i(0),\theta _i(0))\in \mathbb {R}^2\times (-\pi ,\pi ]\) \((i=1,2,\ldots ,n)\).

For Problem 1, we note the following three points. First, we cannot exactly achieve (3) and (4) due to the constraint of discrete values for the control inputs \(u_{i1}(t)\) and \(u_{i2}(t)\). Hence, our goal is to achieve (3) and (4) approximately. Second, the leader is unaware of its own position in the world coordinate frame, and thus its desired velocity \(d_1(t)\) is given instead of the desired position. Further, we have to design the controller for the leader because achieving (3) is not trivial due to the discrete-valued input constraint. Finally, the followers do not possess any information regarding \(d_1(t)\), which implies that we cannot solve Problem 1 by driving the followers at \(d_1(t)\) while preserving the fixed formation.

3 Leader–follower formation control with discrete-valued inputs

In this section, a solution to Problem 1 is presented.

3.1 Existing controllers achieving fixed formations

Our approach toward Problem 1 involves extending the controllers proposed in [1] that achieve fixed formations to the case of the leader–follower formations. Therefore, we first introduce the existing controllers.

Fig. 4  Controller \(K_i\) proposed in [1]

The existing controller \(K_i\) for robot i is shown in Fig. 4. This is composed of the four subcontrollers \(K_{i0}\)–\(K_{i3}\). The subcontroller \(K_{i0}\) is described by

$$\begin{aligned} K_{i0}: \tilde{u}_i(t) =-k\sum _{j\in \mathbb {N}_i}(x_i(t)-x_j(t)-r_{ij}), \end{aligned}$$
(5)

where \(x_i(t)-x_j(t)\) for \(j\in \mathbb {N}_i\) (corresponding to \([x_j(t)-x_i(t)]_{j\in \mathbb {N}_i}\) in (2)) is the input, \(\tilde{u}_i(t)\in \mathbb {R}^2\) is the output, and \(k\in \mathbb {R}_+\) is the controller gain. The subcontroller \(K_{i1}\) is written as

$$\begin{aligned} K_{i1}: {\left\{ \begin{array}{ll} \xi _{i1}(t+1) =g(\theta _i(t),u_i(t))-v_i(t), \\ v_i(t) =\text {sat}_{{\bar{v}}}(-\xi _{i1}(t)+\tilde{u}_i(t)), \end{array}\right. } \end{aligned}$$
(6)

where \(\xi _{i1}(t)\in \mathbb {R}^2\) is the state, \(\theta _i(t)\), \(u_i(t)\), and \(\tilde{u}_i(t)\) are the inputs, \(v_i(t)\in \mathbb {R}^2\) is the output, and \(\text {sat}_{{\bar{v}}}\) denotes the saturation function introduced in Sect. 1. The function \(g:(-\pi ,\pi ]\times \mathbb {R}^3\rightarrow \mathbb {R}^2\) provides the velocity vector in the \((x_{i1},x_{i2})\) plane when robot i is driven by \(u_i(t)\), that is, \(x_i(t+1)-x_i(t)\). The subcontroller \(K_{i2}\) is of the form

$$\begin{aligned} K_{i2}: w_i(t)= \begin{bmatrix} \Vert v_i(t)\Vert _2 \\ \text {arctan}2(v_{i2}(t),v_{i1}(t))-\theta _i(t) \end{bmatrix} , \end{aligned}$$
(7)

where \(v_i(t)=[v_{i1}(t)\ \, v_{i2}(t)]^\top \) and \(\theta _i(t)\) are the inputs, \(w_i(t)\in \mathbb {R}^2\) is the output, and \(\text {arctan}2\) denotes the four-quadrant version of the inverse tangent function. Finally, \(K_{i3}\) is given by

(8)

where \(w_i(t)=[w_{i1}(t)\ \, w_{i2}(t)]^\top \) and \(u_i(t)\) are the input and the output, respectively, and \(q:\mathbb {R}\rightarrow \{0,\pm s,\pm 2s,\ldots \}\) denotes the mid-tread uniform quantizer with the step size s. Notably, (8) assumes \(w_{i2}(t)\in (-\pi ,\pi ]\).

The aforementioned controller \(K_i\) works as follows. The subcontroller \(K_{i0}\) is a conventional formation controller and outputs the desired velocities in the \(x_{i1}\) and \(x_{i2}\) directions as \(\tilde{u}_i(t)\). The subcontroller \(K_{i1}\) modifies \(\tilde{u}_i(t)\) into \(v_i(t)\) using \(\xi _{i1}(t)\). It follows from (6) that \(\xi _{i1}(t)=g(\theta _i(t-1),u_i(t-1))-v_i(t-1)\) holds. Moreover, \(g(\theta _i(t),u_i(t))-v_i(t)\) represents the quantization error, i.e., the difference between the resulting velocity of robot i, which takes discrete values, and the original \(v_i(t)\), which takes continuous values. Hence, the modification of \(\tilde{u}_i(t)\) based on \(\xi _{i1}(t)\) feeds the quantization error back into the desired velocities, which suppresses the adverse impact of the discrete-valued input constraint on the resulting formation. Subsequently, to obtain the desired translational and rotational velocities, \(v_i(t)\) is transformed into \(w_i(t)\) by \(K_{i2}\). Finally, based on \(w_i(t)\), \(K_{i3}\) determines a \(u_i(t)\) that satisfies the discrete-valued input constraint and achieves velocities close to those specified by \(w_i(t)\).
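
The update order of the subcontrollers can be summarized in code. The following Python sketch strings together (6) and (7) for a single robot; the mapping from \(w_i(t)\) to \(u_i(t)\), i.e., (8), is left as a user-supplied stand-in (choose_u) because its explicit form is given in [1], and the wrap of the relative angle is our own convention.

```python
import numpy as np

def sat(v, vbar):
    # elementwise saturation sat_{vbar}
    return np.clip(v, -vbar, vbar)

def g(theta, u):
    # planar velocity produced by the input u = (u1, u2, u3), read off from (1)
    u1, u2, u3 = u
    phi = theta + u2
    return u1 * np.array([np.cos(phi) * (1 - u3) + np.sin(phi) * u3,
                          np.sin(phi) * (1 - u3) - np.cos(phi) * u3])

def controller_step(xi1, u_tilde, theta, vbar, choose_u):
    """One pass through K_{i1} -> K_{i2} -> K_{i3}.

    xi1      : state of K_{i1}, the quantization error stored at the previous step
    u_tilde  : desired planar velocity from K_{i0} (or K_{i0}' in Sect. 3.2)
    theta    : current orientation theta_i(t)
    choose_u : stand-in for (8); maps w = (speed, relative angle) to a discrete-valued u
    Returns the input u to apply and the next state of K_{i1}.
    """
    v = sat(-xi1 + u_tilde, vbar)                         # output equation of K_{i1} in (6)
    raw = np.arctan2(v[1], v[0]) - theta
    w = np.array([np.linalg.norm(v),
                  np.arctan2(np.sin(raw), np.cos(raw))])  # K_{i2} in (7), angle wrapped
    u = choose_u(w)                                       # K_{i3}: discrete-valued input, see (8)
    xi1_next = g(theta, u) - v                            # state update of K_{i1} in (6)
    return u, xi1_next
```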

3.2 Proposed controllers

The explanation in the previous section implies that the subcontroller \(K_{i0}\) determines the direction in which robot i should move, and \(K_{i1}\)–\(K_{i3}\) modify the output \(\tilde{u}_i(t)\) of \(K_{i0}\) by considering the dynamics (1) and the discrete-valued input constraint. Therefore, we modify \(K_{i0}\) to achieve the leader–follower formation.

Based on this concept, we update the subcontroller \(K_{i0}\) as

$$\begin{aligned} K_{i0}': {\left\{ \begin{array}{ll} \xi _{i0}(t+1)=g(\theta _i(t),u_i(t)), \\ \tilde{u}_i(t) = {\left\{ \begin{array}{ll} d_1(t) \quad \text {if}~i=1, \\ \displaystyle \dfrac{1}{\vert \mathbb {N}_i\vert }\sum _{j\in \mathbb {N}_i}(y_j(t)-\kappa (x_i(t)-x_j(t)-r_{ij})) \quad \text {if}~i=2,3,\ldots ,n, \end{array}\right. } \\ y_i(t)=\xi _{i0}(t), \end{array}\right. } \end{aligned}$$
(9)

where \(y_i(t)\in \mathbb {R}^2\) (i.e., \(\mu :=2\)), \(\xi _{i0}(t)\in \mathbb {R}^2\) is the state, and \(\kappa \in \mathbb {R}_+\) is the controller gain. For the leader (\(i=1\)), the desired velocity \(d_1(t)\) is directly set as \(\tilde{u}_i(t)\); as a result, the leader moves according to \(d_1(t)\). For the followers (\(i=2,3,\ldots ,n\)), \(\tilde{u}_i(t)\) is given by a PD-like controller because \(y_j(t)=g(\theta _j(t-1),u_j(t-1))\) holds from (9) and \(g(\theta _j(t-1),u_j(t-1))\) is equal to the velocity \(x_j(t)-x_j(t-1)\). Unlike (5), this law uses the information on the velocities of the neighboring robots, which improves the leader-tracking performance. The subcontroller \(K_{i0}'\) for each follower i is inspired by the controllers developed in [21]. We propose the controllers given by (6)–(9) as a solution to Problem 1.
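
A minimal sketch of \(K_{i0}'\) is given below; the dictionary-based data layout is an assumption for illustration, and the state update \(\xi _{i0}(t+1)=g(\theta _i(t),u_i(t))\) is omitted because it requires \(u_i(t)\), which is produced downstream by \(K_{i1}\)–\(K_{i3}\).

```python
import numpy as np

def k_i0_prime_output(i, xi0_i, x, y, neighbors, r, d1, kappa):
    """Output equations of K_{i0}' in (9) for robot i.

    xi0_i     : state xi_{i0}(t), equal to g(theta_i(t-1), u_i(t-1))
    x         : dict robot index -> current position (np.ndarray of shape (2,))
    y         : dict robot index -> latest output y_j(t) of K_{j0}'
    neighbors : indices in N_i
    r         : dict (i, j) -> desired relative position r_{ij}
    d1        : desired leader velocity d_1(t)
    kappa     : controller gain
    """
    if i == 1:   # leader: use the prescribed velocity directly
        u_tilde = d1
    else:        # follower: PD-like law using the neighbors' velocities and positions
        terms = [y[j] - kappa * (x[i] - x[j] - r[(i, j)]) for j in neighbors]
        u_tilde = sum(terms) / len(neighbors)
    y_i = xi0_i  # y_i(t) = xi_{i0}(t)
    return u_tilde, y_i
```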

The performance of the proposed controllers is demonstrated through numerical examples. Consider the multi-robot system \(\Sigma \) with \(n:=5\) and \(s:=0.05\). The desired velocity of the leader is chosen as \(d_1(t):=[0.02\ \ 0.03\sin (0.05t)]^\top \). The desired formation of the robots and the network structure G are shown in Fig. 5, where the robots, their indices, and the edges of the graph G are represented by the circles, the numbers \(1,2,\ldots ,5\), and the arrows, respectively. We employ the controllers \(K_i\) \((i=1,2,\ldots ,5)\) given by (6)–(9) with \({\bar{v}}:=0.15\) and \(\kappa :=0.05\).

Fig. 5  Desired formation and the network structure G

Fig. 6  Initial formation

Fig. 7  Snapshots of the formation obtained by the proposed controllers for \(d_1(t):=[0.02\ \ 0.03\sin (0.05t)]^\top \)

Fig. 8  Time evolution of \(u_2(t)\)

For the initial formation in Fig. 6, the snapshots of the resulting formation are shown in Fig. 7, where the thick red line indicates the desired trajectory of the leader specified by \(d_1(t)\). Figure 8 depicts the time evolution of the control input \(u_{2}(t)\) of robot 2 as an example. We observe that the followers track the leader while preserving the desired formation, although the control inputs are restricted to discrete values. Similarly, the results for \(d_1(t):=[0.01\ \ 0.02]^\top \) and \(d_1(t):=[0.025\ \ -0.02\sin (0.04t)]^\top \) are shown in Figs. 9 and 10, respectively. The leader–follower formations are also achieved for these different leader velocities. In addition, Fig. 11 depicts the snapshots of the formation when the existing controllers given by (5)–(8) are used for the followers, where \(d_1(t):=[0.02\ \ 0.03\sin (0.05t)]^\top \), \(k:=0.05\), and the other conditions remain unchanged. The comparison with Fig. 7 indicates that the proposed controllers achieve a more accurate formation.

3.3 Introducing collision avoidance algorithm

Figure 12 shows the trajectory of each robot for the result in Fig. 7. Together with Fig. 7(a), it indicates that a collision between robots 4 and 5 occurs at around \(t=10\). The reason is that each robot observes only the robots specified by the network structure G. Such collisions pose a challenge when the proposed controllers are applied to real robots.

Thus, we introduce a collision avoidance algorithm based on the potential field approach [22] into the proposed controllers. In the potential field approach, each robot is steered toward a location with a lower value of a potential function using the gradient of that function. By designing the potential function, we can shape the behavior of each robot.

Let \(x\in \mathbb {R}^{2n}\) denote the positions of all the robots, i.e., \(x:=[x_1^\top ~x_2^\top ~\cdots ~x_n^\top ]^\top \). Then, based on [22], we consider the potential function

$$\begin{aligned} \phi ([x_j-x_i]_{j\in \mathbb {N}^r_{i}(x)}):=\kappa _\phi \sum _{j\in \mathbb {N}^r_i(x)}\dfrac{1}{\Vert x_i-x_j\Vert _2} \end{aligned}$$
(10)

for each robot i, where \(\kappa _\phi \in \mathbb {R}_+\) is a constant and \(\mathbb {N}^r_{i}(x):=\{ j\in \mathbb {V}\setminus \{i\}\, \vert \, x_j\in \mathbb {B}(x_i,r)\}\) for \(r\in \mathbb {R}_+\). The potential function \(\phi \) is the sum of the inverses of the distances between robot i and the other robots within the radius r. Hence, decreasing \(\phi \) keeps the robots apart, so that collisions between the robots do not occur. Using \(\phi \), we modify a part of (9) as

$$\begin{aligned} \tilde{u}_i(t)&=\dfrac{1}{\vert \mathbb {N}_i\vert }\sum _{j\in \mathbb {N}_i}(y_j(t)-\kappa (x_i(t)-x_j(t)-r_{ij})) \nonumber \\&\quad -\, \dfrac{\partial }{\partial x_i} \phi ([x_j(t)-x_i(t)]_{j\in \mathbb {N}^r_{i}(x(t))}) \nonumber \\&=\dfrac{1}{\vert \mathbb {N}_i\vert }\sum _{j\in \mathbb {N}_i}(y_j(t)-\kappa (x_i(t)-x_j(t)-r_{ij})) \nonumber \\&\quad +\, \kappa _\phi \sum _{j\in \mathbb {N}^r_{i}(x(t))}\dfrac{x_i(t)-x_j(t)}{{\Vert x_i(t)-x_j(t)\Vert _2}^3}, \end{aligned}$$
(11)
Fig. 9  Snapshots of the formation obtained by the proposed controllers for \(d_1(t):=[0.01\ \ 0.02]^\top \)

Fig. 10  Snapshots of the formation obtained by the proposed controllers for \(d_1(t):=[0.025\ \ -0.02\sin (0.04t)]^\top \)

Fig. 11  Snapshots of the formation for \(d_1(t):=[0.02\ \ 0.03\sin (0.05t)]^\top \) when the existing controllers introduced in Sect. 3.1 are used for the followers

Fig. 12  Trajectory of each robot for the result in Fig. 7

Fig. 13  Snapshots of the formation obtained by the proposed controllers with the collision avoidance algorithm

where \(i=2,3,\ldots ,n\). By introducing the gradient term of \(\phi \), each follower can track the leader while avoiding collisions with the other robots.
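
A sketch of the added gradient term is shown below; the function and argument names are ours, and a small guard against a zero distance is included purely for numerical safety. The returned term is simply added to the \(\tilde{u}_i(t)\) computed by \(K_{i0}'\) in (9), which yields (11).

```python
import numpy as np

def repulsion_term(i, x, r_detect, kappa_phi):
    """Collision-avoidance term added to u_tilde_i in (11).

    i         : index of a follower
    x         : dict robot index -> position (np.ndarray of shape (2,))
    r_detect  : radius r defining N_i^r(x)
    kappa_phi : weight kappa_phi of the potential function (10)
    """
    term = np.zeros(2)
    for j, xj in x.items():
        if j == i:
            continue
        diff = x[i] - xj
        dist = np.linalg.norm(diff)
        if 0.0 < dist <= r_detect:                 # j belongs to N_i^r(x)
            term += kappa_phi * diff / dist**3     # equals -d phi / d x_i, cf. (11)
    return term
```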

Figures 13 and 14 show the results corresponding to Figs. 7 and 12 when (11) is used with \(\kappa _\phi :=0.01\) and \(r:=0.3\), respectively. We see that, unlike in Figs. 7 and 12, the leader–follower formation is achieved without any collision.

4 Theoretical analysis

This section presents a theoretical analysis of the feedback system with the proposed controllers. The analysis method is similar to that in our previous study [1]. Specifically, we first discuss whether the leader–follower formation is achieved in the case without the quantization of the control inputs, and we then examine the impact of the quantization on the resulting formation. In the following, similar to [1], we assume that the saturation by \(\text {sat}_{{\bar{v}}}\) in (6) does not occur; i.e., the magnitude of each element of the input vector to \(\text {sat}_{{\bar{v}}}\) does not exceed \({\bar{v}}\) for any \(t\in \{0,1,\ldots \}\). This assumption allows us to focus on the impact of the quantization on the resulting formation. To simplify the discussion, we further suppose that the feedback system to be analyzed does not contain the collision avoidance algorithm described in Sect. 3.3.

Fig. 14  Trajectory of each robot for the result in Fig. 13

4.1 Analysis of feedback system without quantization

Let \(A_f\!\in \!\mathbb {R}^{(n-1)\times (n-1)}\) be the adjacency matrix of the graph describing the network structure of the followers, and let \(D_f:=\text {diag}(1/\vert \mathbb {N}_2\vert ,1/\vert \mathbb {N}_3\vert ,\ldots ,1/\vert \mathbb {N}_n\vert )\). Using these notations, we define

$$\begin{aligned} M\!:=\! \begin{bmatrix} (1-\kappa )I_{n-1}\!+\!(1+\kappa )D_fA_f &{} -D_fA_f\\ I_{n-1} &{} 0_{(n-1)\times (n-1)}\\ \end{bmatrix}. \end{aligned}$$
(12)

Then, the following result is obtained.

Lemma 1

For the feedback system constructed by (1) and (6)–(9), assume that \(d_1(t)\) and \(r_{ij}\) \((i,j=1,2,\ldots ,n)\) are given and that there is no quantization of \(u_{i1}(t)\) and \(u_{i2}(t)\) in (8). Assume further that there exists a constant \({\bar{d}}_1\in \mathbb {R}_+\) such that \(\Vert d_1(t)\Vert \le {\bar{d}}_1\) for every \(t\in \{0,1,\ldots \}\). If the following two conditions (C1) and (C2) hold, then (3) and

$$\begin{aligned}&\Vert x_i(t)-x_1(t)-r_{i1}\Vert \le 2{\bar{d}}_1\Vert (I_{2(n-1)}-M)^{-1}\Vert \nonumber \\& \text {as}\ t\rightarrow \infty \ \ \forall i\in \mathbb {V}\setminus \{1\} \end{aligned}$$
(13)

hold for every \((x_i(0),\theta _i(0))\in \mathbb {R}^2\times (-\pi ,\pi ]\) \((i=1,2,\ldots ,n)\).

  (C1)

    On the graph G, there exists a directed path from the vertex corresponding to the leader to the other vertices.

  (C2)

    The gain \(\kappa \) satisfies

    $$\begin{aligned} \kappa < \min \{1,\epsilon _1,\epsilon _2,\ldots ,\epsilon _{n-1}\}, \end{aligned}$$
    (14)

    where \(\epsilon _i\) \((i\in \mathbb {V}\setminus \{n\})\) is defined as

    $$\begin{aligned} \epsilon _i:=\dfrac{2\vert 1-\lambda _i\vert ^2(2(1-\textrm{Re}(\lambda _i))-\vert 1-\lambda _i\vert ^2)}{\vert 1-\lambda _i\vert ^4+4(\textrm{Im}(\lambda _i))^2} \end{aligned}$$
    (15)

    and \(\lambda _i\) represents each eigenvalue of \(D_fA_f\).

Proof

Based on the assumption that there is no quantization of \(u_{i1}(t)\) and \(u_{i2}(t)\) in (8), we obtain \(\xi _{i1}(t)\equiv 0_{2\times 1}\) for every \(i\in \mathbb {V}\) because the initial states of the controllers are supposed to be zero and no quantization error occurs. From this fact, (6)–(9), and the assumption of no quantization, we can show that the velocities of robot i in the \(x_{i1}\) and \(x_{i2}\) directions are determined by \(\tilde{u}_i(t)\) in (9). Therefore, the dynamics of the leader is written as

$$\begin{aligned}&x_1(t+1)=x_1(t)+d_1(t), \end{aligned}$$
(16)

and that of each follower i is written as

$$\begin{aligned} x_i(t+1)&=x_i(t)+\dfrac{1}{\vert \mathbb {N}_i\vert }\sum _{j\in \mathbb {N}_i}(x_j(t)-x_j(t-1)\nonumber \\&\quad -\, \kappa (x_i(t)-x_j(t)-r_{ij})). \end{aligned}$$
(17)

Because (16) is independent of \(\theta _i(0)\) \((i=1,2,\ldots ,n)\), (3) holds for every \((x_i(0),\theta _i(0))\in \mathbb {R}^2\times (-\pi ,\pi ]\) \((i=1,2,\ldots ,n)\). Meanwhile, applying \(z_i(t):=x_i(t)+r_{1i}\) to (17) and using \(r_{ij}=r_{1j}-r_{1i}\) yield

$$\begin{aligned} z_i(t+1)&=z_i(t)+\dfrac{1}{\vert \mathbb {N}_i\vert }\sum _{j\in \mathbb {N}_i}(z_j(t)-z_j(t-1)\nonumber \\&\quad -\, \kappa (z_i(t)-z_j(t))). \end{aligned}$$
(18)

We can consider (18) as the consensus algorithm for tracking a time-varying reference state proposed in [21] by regarding \(z_1(t)\) as the reference state. According to [21], the magnitude of each element of the tracking error \(z_i(t)-z_1(t)=x_i(t)-x_1(t)-r_{i1}\) \((i\in \mathbb {V}\setminus \{1\})\) is bounded by the right-hand side of (13) as \(t\rightarrow \infty \) for every \(z_i(0)\in \mathbb {R}^2\) \((i=1,2,\ldots ,n)\) under conditions (C1) and (C2), where \(r_{11}=0_{2\times 1}\) and \(r_{1i}=-r_{i1}\) are used. Combining this result and the definition of the \(\infty \)-norm and using the fact that (18) is independent of \(\theta _i(0)\) \((i=1,2,\ldots ,n)\), we can prove that (13) holds for every \((x_i(0),\theta _i(0))\in \mathbb {R}^2\times (-\pi ,\pi ]\) \((i=1,2,\ldots ,n)\) under (C1) and (C2). This completes the proof. \(\square \)

Lemma 1 shows that, under the boundedness of the desired velocity \(d_1(t)\) and conditions (C1) and (C2), the leader's velocity converges to \(d_1(t)\) and the tracking error of each follower is ultimately bounded if there is no quantization of the control inputs. In this sense, the proposed controllers without the quantization achieve the leader–follower formation. Here, (C1) means that the information on the leader can reach every follower, and (C2) is satisfied by choosing the gain \(\kappa \) sufficiently small.
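
For a given followers' graph, condition (C2) and the ultimate bound in (13) can be checked numerically as sketched below; the function is ours, it assumes (C1) holds (so that \(I_{2(n-1)}-M\) is invertible), and the in-neighbor counts \(\vert \mathbb {N}_i\vert \) must be taken from the full graph G, which may include the leader.

```python
import numpy as np

def check_lemma1(A_f, N_sizes, kappa, d1_bar):
    """Check (C2) via (14)-(15) and evaluate the right-hand side of (13).

    A_f     : (n-1) x (n-1) adjacency matrix of the followers' subgraph
    N_sizes : [|N_2|, ..., |N_n|], in-neighbor counts of the followers in G
    kappa   : controller gain
    d1_bar  : bound on ||d_1(t)||
    """
    n1 = A_f.shape[0]
    D_f = np.diag(1.0 / np.asarray(N_sizes, dtype=float))
    DA = D_f @ A_f

    # epsilon_i from (15), one value per eigenvalue lambda_i of D_f A_f
    eps = []
    for lam in np.linalg.eigvals(DA):
        d = abs(1 - lam)
        eps.append(2 * d**2 * (2 * (1 - lam.real) - d**2) / (d**4 + 4 * lam.imag**2))
    c2_holds = kappa < min([1.0] + eps)

    # M from (12) and the ultimate bound in (13) (infinity norms, as in the Notation)
    M = np.block([[(1 - kappa) * np.eye(n1) + (1 + kappa) * DA, -DA],
                  [np.eye(n1), np.zeros((n1, n1))]])
    bound = 2 * d1_bar * np.linalg.norm(np.linalg.inv(np.eye(2 * n1) - M), np.inf)
    return c2_holds, bound
```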

4.2 Analysis of impact of quantization

Next, we analyze the impact of the quantization of the control inputs on the behavior of the feedback system.

4.2.1 Problem formulation

We use \(x^*(t)\in \mathbb {R}^{2n}\) to denote the group position x(t) for the proposed controllers where \(u_{i1}(t)\) and \(u_{i2}(t)\) in (8) are unquantized. Then, we consider the following problem described with reference to [23].

Problem 2

For the feedback system constructed by (1) and (6)–(9), suppose that the step size s, the desired leader’s velocity \(d_1(t)\) and relative positions \(r_{ij}\) \((i,j=1,2,\ldots ,n)\), and the parameter \({\bar{v}}\) of \(\text {sat}_{{\bar{v}}}\) in (6) are given. Evaluate the performance index

$$\begin{aligned} E:=\sup _{x(0)\in \mathbb {R}^{2n}}\sup _{\tau \in \{0,1,\ldots \}} \Vert x(\tau )-x^*(\tau )\Vert . \end{aligned}$$
(19)

In Problem 2, E represents the difference between the behavior of the original (i.e., quantized) feedback system and that of the unquantized version. The magnitude of E corresponds to that of the quantization effects on the behavior of the feedback system.

4.2.2 Main result

We begin with the following result on the quantization error \(e_i(t):=g(\theta _i(t),u_i(t))-v_i(t)\) \((i\in \mathbb {V})\).

Lemma 2

For the feedback system constructed by (1) and (6)–(9), suppose that s and \({\bar{v}}\) are given. Then,

$$\begin{aligned} \Vert e_i(t)\Vert \le \sqrt{\left( \frac{s}{2}\right) ^2+\left( 1-\cos \left( \dfrac{\pi }{8}\right) \right) \left( 4{\bar{v}}^2+\sqrt{2}s{\bar{v}} \right) } \end{aligned}$$
(20)

holds for every \(i\in \mathbb {V}\) and \(t\in \{0,1,\ldots \}\).

Proof

This lemma can be proven in a similar manner to that in [1] because \(\Vert e_i(t)\Vert \) depends only on (1) and (6)–(8) and is unrelated to (9) introduced in this study. \(\square \)

Lemma 2 presents an upper bound of \(\Vert e_i(t)\Vert \) as a function of the system parameters s and \({\bar{v}}\).
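
The bound (20) is easy to evaluate numerically; a minimal sketch using the parameters of the example in Sect. 3.2 is given below.

```python
import numpy as np

def quantization_error_bound(s, vbar):
    # right-hand side of (20): an upper bound on ||e_i(t)||
    return np.sqrt((s / 2)**2
                   + (1 - np.cos(np.pi / 8)) * (4 * vbar**2 + np.sqrt(2) * s * vbar))

print(quantization_error_bound(s=0.05, vbar=0.15))   # roughly 0.091
```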

From Lemma 2, we obtain the following result.

Theorem 1

For the feedback system constructed by (1) and (6)–(9), suppose that s, \(d_1(t)\), \(r_{ij}\) \((i,j=1,2,\ldots ,n)\), and \({\bar{v}}\) are given. Then,

$$\begin{aligned} E&\le \left( 1+\sum _{\ell =0}^\infty \left\| {F^*}^{\ell +1}-{F^*}^\ell \right\| \right) \nonumber \\&\quad \times \sqrt{\left( \frac{s}{2}\right) ^2+\left( 1-\cos \left( \dfrac{\pi }{8}\right) \right) \left( 4{\bar{v}}^2+\sqrt{2}s{\bar{v}} \right) } \end{aligned}$$
(21)

holds, where

$$\begin{aligned} F^*:= \begin{bmatrix} (I_{n}+DA-\kappa DL)\otimes I_2 &{}\ \ -(DA)\otimes I_2 \\ I_{2n} &{}\ \ 0_{2n\times 2n} \end{bmatrix} \end{aligned}$$
(22)

for \(D:=\text {diag}(0,1/\vert \mathbb {N}_2\vert ,1/\vert \mathbb {N}_3\vert ,\ldots ,1/\vert \mathbb {N}_n\vert )\) and the adjacency matrix A and graph Laplacian L of the graph G.

Proof

By a discussion similar to that in the proof of Lemma 1, the dynamics of the feedback system without the quantization of the control inputs can be written using (16) and (17), that is,

$$\begin{aligned} \begin{bmatrix} x^*(t+1) \\ \zeta ^*(t+1) \end{bmatrix} = F^*\begin{bmatrix} x^*(t) \\ \zeta ^*(t) \end{bmatrix} + \eta ^*(t), \end{aligned}$$
(23)

where \(\zeta ^*(t):=x^*(t-1)\) and

$$\begin{aligned} \eta ^*(t) := \begin{bmatrix} \kappa (D\otimes I_2)b+d(t) \\ 0_{2n\times 1} \end{bmatrix} \end{aligned}$$
(24)

for \(b:=[0_{1\times 2}\ \, \sum _{j\in \mathbb {N}_2}r_{2j}^\top \ \, \sum _{j\in \mathbb {N}_3}r_{3j}^\top \ \cdots \ \sum _{j\in \mathbb {N}_n}r_{nj}^\top ]^\top \) and \(d(t):=[d_1^\top (t)\ \ 0_{1\times 2(n-1)}]^\top \). Similarly, from (1) and (6)–(9), the dynamics of the original feedback system is described as

$$\begin{aligned} \begin{bmatrix} x(t+1) \\ \zeta (t+1) \\ \xi (t+1) \end{bmatrix} = F \begin{bmatrix} x(t) \\ \zeta (t) \\ \xi (t) \end{bmatrix} + \eta (t) , \end{aligned}$$
(25)

where \(\zeta (t)\!:=x(t-1)\), \(\xi (t)\!:=[\xi _{11}^\top (t)~\xi _{21}^\top (t)~\cdots ~\xi _{n1}^\top (t)]^\top \), and

(26)
(27)

for \(e(t):=[e_1^\top (t)~e_2^\top (t)~\cdots ~e_n^\top (t)]^\top \). Equations (23) and (25) yield

$$\begin{aligned} x^*(\tau )&= \begin{bmatrix} I_{2n}&0_{2n\times 2n} \end{bmatrix} \nonumber \\& \quad \times \left( {F^*}^\tau \!\begin{bmatrix} x^*(0) \\ \zeta ^*(0) \end{bmatrix} + \sum _{\ell =0}^{\tau -1} \left( {F^*}^{\tau -\ell -1}\eta ^*(\ell )\right) \right) , \end{aligned}$$
(28)
$$\begin{aligned} x(\tau )&= \begin{bmatrix} I_{2n}&0_{2n\times 2n}&0_{2n\times 2n} \end{bmatrix} \nonumber \\&\quad \times \left( {F}^\tau \begin{bmatrix} x(0) \\ \zeta (0) \\ \xi (0) \end{bmatrix} + \sum _{\ell =0}^{\tau -1} \Bigl ({F}^{\tau -\ell -1}\eta (\ell )\Bigr ) \right) , \end{aligned}$$
(29)

respectively. Using (24), (26), and (27), we can rewrite (29) as

$$\begin{aligned} x(\tau )&= \begin{bmatrix} I_{2n}&0_{2n\times 2n}&0_{2n\times 2n} \end{bmatrix} \nonumber \\&\quad \times \left( {F}^\tau \begin{bmatrix} x(0) \\ \zeta (0) \\ \xi (0) \end{bmatrix} + \sum _{\ell =0}^{\tau -2}\left( {F}^{\tau -\ell -1}\eta (\ell )\right) +\eta (\tau -1) \right) \nonumber \\&= \begin{bmatrix} I_{2n}&0_{2n\times 2n} \end{bmatrix} \Biggl ( {F^*}^\tau \begin{bmatrix} x(0) \\ \zeta (0) \end{bmatrix} \nonumber \\&\quad + \sum _{\ell =0}^{\tau -2} \left( \begin{bmatrix} {F^*}^{\tau -\ell -1}&{}\ \ -{F^*}^{\tau -\ell -2} \begin{bmatrix} I_{2n} \\ 0_{2n\times 2n} \end{bmatrix} \end{bmatrix} \eta (\ell ) \right) \nonumber \\&\quad +\eta ^*(\tau -1) + \begin{bmatrix} e(\tau -1) \\ 0 _{2n\times 1} \end{bmatrix} \Biggr ) \nonumber \\&= \begin{bmatrix} I_{2n}&0_{2n\times 2n} \end{bmatrix} \Biggl ( {F^*}^\tau \begin{bmatrix} x(0) \\ \zeta (0) \end{bmatrix} \nonumber \\&\quad + \sum _{\ell =0}^{\tau -2} \biggl ( {F^*}^{\tau -\ell -1}\eta ^*(\ell ) + ({F^*}^{\tau -\ell -1}-{F^*}^{\tau -\ell -2}) \nonumber \\&\quad \times \begin{bmatrix} e(\ell ) \\ 0 _{2n\times 1} \end{bmatrix} \biggr ) +\eta ^*(\tau -1) + \begin{bmatrix} e(\tau -1) \\ 0 _{2n\times 1} \end{bmatrix} \Biggr ) \nonumber \\&= \begin{bmatrix} I_{2n}&0_{2n\times 2n} \end{bmatrix} \Biggl ( {F^*}^\tau \begin{bmatrix} x(0) \\ \zeta (0) \end{bmatrix} +\sum _{\ell =0}^{\tau -1} \left( {F^*}^{\tau -\ell -1}\eta ^*(\ell ) \right) \nonumber \\&\quad + \sum _{\ell =0}^{\tau -2} \left( ({F^*}^{\tau -\ell -1}-{F^*}^{\tau -\ell -2}) \begin{bmatrix} e(\ell ) \\ 0_{2n\times 1} \end{bmatrix} \right) \nonumber \\&\quad + \begin{bmatrix} e(\tau -1) \\ 0_{2n\times 1} \end{bmatrix} \Biggr ), \end{aligned}$$
(30)

where \(\xi (0)=0_{2n\times 1}\) is used to derive the second equality. From (28), (30), and Lemma 2, we obtain

$$\begin{aligned}&\Vert x(\tau )-x^*(\tau )\Vert \nonumber \\&= \Biggl \Vert \begin{bmatrix} I_{2n}&0_{2n\times 2n} \end{bmatrix} \Biggl ( \sum _{\ell =0}^{\tau -2} \Biggl ( ({F^*}^{\tau -\ell -1}-{F^*}^{\tau -\ell -2}) \nonumber \\&\quad \times \begin{bmatrix} e(\ell ) \\ 0_{2n\times 1} \end{bmatrix} \Biggr ) + \begin{bmatrix} e(\tau -1) \\ 0_{2n\times 1} \end{bmatrix} \Biggr ) \Biggr \Vert \nonumber \\&\le \left\| \begin{bmatrix} I_{2n}&0_{2n\times 2n} \end{bmatrix} \right\| \Biggl \Vert \sum _{\ell =0}^{\tau -2} \Biggl ( ({F^*}^{\tau -\ell -1}-{F^*}^{\tau -\ell -2}) \nonumber \\&\quad \times \begin{bmatrix} e(\ell ) \\ 0_{2n\times 1} \end{bmatrix} \Biggr ) + \begin{bmatrix} e(\tau -1) \\ 0_{2n\times 1} \end{bmatrix} \Biggr \Vert \nonumber \\&\le \Biggl \Vert \sum _{\ell =0}^{\tau -2} \Biggl ( ({F^*}^{\tau -\ell -1}-{F^*}^{\tau -\ell -2}) \begin{bmatrix} e(\ell ) \\ 0_{2n\times 1} \end{bmatrix} \Biggr ) \Biggr \Vert \nonumber \\&\quad + \Biggl \Vert \begin{bmatrix} e(\tau -1) \\ 0_{2n\times 1} \end{bmatrix} \Biggr \Vert \nonumber \\&\le \sum _{\ell =0}^{\tau -2} \Biggl \Vert ({F^*}^{\tau -\ell -1}\!-\!{F^*}^{\tau -\ell -2}) \begin{bmatrix} e(\ell ) \\ 0_{2n\times 1} \end{bmatrix} \Biggr \Vert \nonumber \\&\quad + \Biggl \Vert \begin{bmatrix} e(\tau -1) \\ 0_{2n\times 1} \end{bmatrix} \Biggr \Vert \nonumber \\&\le \sum _{\ell =0}^{\tau -2} \left\| {F^*}^{\tau -\ell -1}-{F^*}^{\tau -\ell -2} \right\| \Biggl \Vert \begin{bmatrix} e(\ell ) \\ 0_{2n\times 1} \end{bmatrix} \Biggr \Vert \nonumber \\&\quad + \Biggl \Vert \begin{bmatrix} e(\tau -1) \\ 0_{2n\times 1} \end{bmatrix} \Biggr \Vert \nonumber \\&= \sum _{\ell =0}^{\tau -2} \left\| {F^*}^{\tau -\ell -1}-{F^*}^{\tau -\ell -2} \right\| \left\| e(\ell ) \right\| + \left\| e(\tau -1) \right\| \nonumber \\&\le \left( 1+\sum _{\ell =0}^{\tau -2} \left\| {F^*}^{\tau -\ell -1}-{F^*}^{\tau -\ell -2}\right\| \right) \nonumber \\&\quad \times \sqrt{\left( \frac{s}{2}\right) ^2+\left( 1-\cos \left( \dfrac{\pi }{8}\right) \right) \left( 4{\bar{v}}^2+\sqrt{2}s{\bar{v}} \right) }. \end{aligned}$$
(31)

The right-hand side of (31) is monotonically non-decreasing with respect to \(\tau \in \{0,1,\ldots \}\) and is independent of x(0). This, together with (19), proves the statement. \(\square \)

Theorem 1 presents an upper bound of the performance index E as a solution to Problem 2. If \({F^*}^\ell \) converges as \(\ell \rightarrow \infty \), the upper bound is finite because \(\Vert {F^*}^{\ell +1}-{F^*}^\ell \Vert \) in (21) goes to zero as \(\ell \rightarrow \infty \). Therefore, under the condition that \({F^*}^\ell \) converges, the impact of the quantization of the control inputs can be estimated. In addition, this result and Lemma 1 imply that under the above condition and those in Lemma 1, the behavior difference between the original feedback system and the unquantized version, where the leader–follower formation is achieved, is smaller than or equal to a certain level. In this sense, we can guarantee the stability of the feedback system.
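
Numerically, the right-hand side of (21) can be estimated by truncating the series once its terms become negligible, as in the following sketch; the truncation horizon and tolerance are practical choices and not part of the theorem, and the computation is meaningful only when \({F^*}^\ell \) converges.

```python
import numpy as np

def theorem1_bound(F_star, s, vbar, horizon=5000, tol=1e-12):
    """Estimate the upper bound (21) by truncating the infinite series.

    F_star  : the matrix F* from (22)
    s, vbar : step size and saturation level appearing in (20)
    """
    e_bound = np.sqrt((s / 2)**2
                      + (1 - np.cos(np.pi / 8)) * (4 * vbar**2 + np.sqrt(2) * s * vbar))
    total = 1.0
    P = np.eye(F_star.shape[0])      # F*^0
    for _ in range(horizon):
        P_next = F_star @ P          # F*^(l+1)
        term = np.linalg.norm(P_next - P, np.inf)
        total += term
        P = P_next
        if term < tol:
            break
    return total * e_bound
```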

Remark 1

We compare Theorem 1 with the corresponding result in [1] that considered fixed formations. Replacing \({F^*}\) in (21) with \((I_n-kL)\otimes I_2\) yields the result in [1]. This implies that the key matrix in the analysis result becomes more complicated by updating \(K_{i0}\) to \(K_{i0}'\). The difference between the matrices \({F^*}\) and \((I_n-kL)\otimes I_2\) causes the difference in the magnitudes of the quantization effects in the sense of their upper bounds.

4.2.3 Examples

We revisit the example that produced the result in Fig. 7. In this example, \({\bar{d}}_1\) in Lemma 1 exists, and conditions (C1) and (C2) are satisfied. Moreover, for \({F^*}\) in (22), we can numerically confirm that \({F^*}^\ell \) converges as \(\ell \rightarrow \infty \). Thus, as mentioned previously, Lemma 1 and Theorem 1 guarantee the stability of the feedback system. Next, from (21) and the behavior of the robots shown in Fig. 7, we obtain \(E\le 1.266\) and \(\sup _{\tau \in \{0,1,\ldots ,120\}}\Vert x(\tau )-x^*(\tau )\Vert =0.04011\), respectively. This demonstrates the validity of Theorem 1.

Similar results are obtained in the cases of Figs. 9 and 10. The desired leader velocities \(d_1(t):=[0.01\ \ 0.02]^\top \) and \(d_1(t):=[0.025\ \ -0.02\sin (0.04t)]^\top \) satisfy the condition in Lemma 1, and (C1), (C2), and \({F^*}\) do not depend on \(d_1(t)\). Hence, by a discussion similar to the above, the stability of the feedback systems is guaranteed. In addition, the behavior of the robots shown in Figs. 9 and 10 yields \(\sup _{\tau \in \{0,1,\ldots ,70\}}\Vert x(\tau )-x^*(\tau )\Vert =0.04671\) and \(\sup _{\tau \in \{0,1,\ldots ,100\}}\Vert x(\tau )-x^*(\tau )\Vert =0.05229\), respectively. These results support Theorem 1 because the bound (21) remains \(E\le 1.266\), owing to its independence from \(d_1(t)\).

5 Conclusion

In this study, we discussed the leader–follower formation control of four-legged robots via discrete-valued inputs. By focusing on the structures of existing controllers and modifying the specific parts appropriately, we obtained leader–follower formation controllers using discrete-valued inputs. In addition, we analyzed the resulting feedback system based on a performance index that quantifies the impact of the discrete-valued input constraint on the behavior of the system. The results in this study contribute to achieving moving formations of four-legged robots.

A future direction of this research is to develop better controllers in terms of the performance index considered in this paper. Another direction is to extend our results to a more general setting, e.g., the case where the quantization interval in the rotational direction of the robots is generalized. In addition to these theoretical works, the experimental verification of the proposed controllers using real robots should be addressed in the future.