Distributed multi-robot formation control in dynamic environments
Abstract
This paper presents a distributed method for formation control of a homogeneous team of aerial or ground mobile robots navigating in environments with static and dynamic obstacles. Each robot in the team has a finite communication and visibility radius and shares information with its neighbors to coordinate. Our approach leverages both constrained optimization and multi-robot consensus to compute the parameters of the multi-robot formation. This ensures that the robots make progress and avoid collisions with static and moving obstacles. In particular, via distributed consensus, the robots compute (a) the convex hull of the robot positions, (b) the desired direction of movement and (c) a large convex region embedded in the four-dimensional position-time free space. The robots then compute, via sequential convex programming, the locally optimal parameters for the formation to remain within the convex neighborhood of the robots. The method allows for reconfiguration. Each robot then navigates towards its assigned position in the target collision-free formation via an individual controller that accounts for its dynamics. This approach is efficient and scalable with the number of robots. We present an extensive evaluation of the communication requirements and verify the method in simulations with up to sixteen quadrotors. Lastly, we present experiments with four real quadrotors flying in formation in an environment with one moving human.
Keywords
Multi-robot systems · Distributed robotics · Formation control · Dynamic environments · Collision avoidance · Unmanned aerial vehicles · Drones · Micro air vehicles

1 Introduction
Multi-robot systems will be ubiquitous to perform many tasks, such as surveillance (Schwager et al. 2011), inspection (Suzuki et al. 2000), factory automation (Alonso-Mora et al. 2015a), logistics (Wurman et al. 2008) or cinematography (Nägeli et al. 2017b). While some of these problems require team navigation in a rigid pattern, other scenarios, such as cooperative manipulation of deformable objects or transportation of cable-suspended loads, allow for more flexibility, while still requiring a certain level of coordination. This is also the case, for example, for a team of robots that fly through narrow canyons while preserving inter-robot communication or visibility. In this paper we present a method for formation control that is ideally suited for these kinds of flexible multi-robot formations, since our approach is capable of adjusting several parameters of the formation dynamically to avoid collisions with the environment.
Multi-robot navigation in formation has received extensive attention in the past, with many works considering obstacle-free scenarios. In this work we leverage efficient constrained optimization, multi-robot consensus and geometric reasoning to achieve distributed formation control in environments with static and moving obstacles. In contrast to our previous work (Alonso-Mora et al. 2017), we consider the case where the robots are no longer centrally controlled, but instead have a limited field of view and communicate with their immediate neighbors to coordinate.
Given a set of target formation shapes, our method optimizes the parameters (such as position, orientation and size) of the multi-robot formation in a neighborhood of the robots. The method guarantees that the team of robots remains collision-free by rearranging its formation, see Fig. 1 for an example with four quadrotors in a square formation. A simplified global planner can use this method to navigate the group of robots from an initial location to a final location. This global planner may consist of a series of waypoints for the formation center. A human may also provide the global path for the formation, or a desired velocity, and the robots will adapt their configuration automatically.
1.1 Related works
A large part of the literature in multi-robot navigation with obstacles considers solutions designed for ground robots operating on the plane. These techniques include using a set of reactive behaviors (Balch and Arkin 1998), potential fields (Balch and Hybinette 2000; Sabattini et al. 2011), abstractions (Michael et al. 2008; Ayanian and Kumar 2010a), decentralized feedback laws with graph theory (Desai et al. 2001), proximity constraints (Ayanian and Kumar 2010b) and stochastic planning (Urcola et al. 2017), to name a few. In contrast, our method automatically optimizes for the formation parameters natively in three-dimensional dynamic environments.
The use of distributed consensus algorithms (Ren and Beard 2008), where each robot only needs to interact with nearby teammates, has also led to a wide variety of formation control strategies, as shown in the survey by Oh et al. (2015). Regarding the robot dynamics, Lin et al. (2005) considered unicycle robots, Dong et al. (2015) considered aerial vehicles and Hatanaka et al. (2012) considered motion in SO(3). Although our method does not directly model the robot dynamics in the computation of the formation parameters, it relies on low-level controllers to drive each robot towards its individual position in the formation while respecting its dynamic constraints. We show experiments with a team of quadrotors. In terms of the sensing model, Franchi et al. (2012) considered relative bearing measurements, Oh and Ahn (2011) considered inter-robot distances and Mostagh et al. (2009) and Montijano et al. (2016) employed explicit vision measurements to estimate the relative positions of neighboring robots and reach the formation. A common assumption in these approaches is the lack of obstacles in the environment, focusing on the design of low-level controllers for each robot to reach the desired formation pattern. Our method is different, in the sense that we exploit the consensus algorithm to agree upon high-level navigation concepts, such as a large convex region that is reachable by the robots and does not intersect any of the observed obstacles.
Our approach relies on convex and non-convex optimization methods to obtain the locally optimal state of the formation. Several approaches have formulated the navigation of teams of robots as an optimization problem. In particular, convex optimization frameworks for navigating in formation include semidefinite programming (Derenick et al. 2010), which considers only 2D circular obstacles; distributed quadratic optimization (Alonso-Mora et al. 2015a), without global coordination; distributed optimization with discrete-time communications by Kia et al. (2016), which considers a global function defined by the sum of individual costs; and second order cone programming (Derenick and Spletzer 2007), which triangulates the free 2D space to compute the optimal motion in formation. Our method applies to polygonal obstacles and does not require a triangulation of the environment.
Centralized non-convex optimization approaches include a mixed integer approach by Kushleyev et al. (2013) and a discretized linear temporal logic approach by Saha et al. (2014). Both require high computational effort and can only be applied offline to precompute trajectories. Our goal is to have real-time capability for online computation. Online sequential convex programming has been employed by Augugliaro et al. (2012) and Chen et al. (2015) to compute collision-free trajectories for multiple Micro Air Vehicles (MAVs), but without considering formations. The assignment of robots to the target positions in the formation is another optimization problem that was solved with a centralized algorithm by Turpin et al. (2014) or with a distributed algorithm, albeit in environments without obstacles, by Montijano and Mosteo (2014) and Morgan et al. (2016). Building upon the centralized, yet online, method by Alonso-Mora et al. (2017), we propose an optimization and consensus based approach to reconfigure the formation in dynamic environments, which is distributed and online.
Several works have proposed distributed constrained optimization approaches to maintain a formation, based on Model Predictive Control (MPC). In particular, Keviczky et al. (2008) computed a set of inputs for a team of aerial vehicles navigating in, or towards, a given formation, and Kuriki and Namerikawa (2015) relied on a leader to compute the formation configuration. Our approach is leaderless and can adjust the size of the formation to avoid obstacles.
1.2 Contribution
The main contribution of this paper is a distributed method for formation control. The method enables a team of ground or aerial robots to navigate in a dynamic environment while reconfiguring their formation to avoid collisions with static and moving obstacles. A descriptive idea of the method is shown in Fig. 2.
In particular, via distributed consensus, the robots compute:
The convex hull of the robot’s positions.

The preferred direction of motion.

A convex region in free position-time space, given by the intersection of individual regions.
An earlier version of this paper was published by Alonso-Mora et al. (2016). In this version we extend the method with an additional consensus round to compute the preferred direction of motion, and we validate the method in experiments with four real drones navigating in an environment with a human.
Our proposed method is intended for local motion planning and therefore, deadlocks may arise. To avoid deadlocks, our method can be employed in combination with a global planner, in a manner similar to the work on centralized formation control by Alonso-Mora et al. (2017). In this work we have introduced an additional step in the method, where robots compute the preferred direction of motion. This additional step is intended to avoid disagreements in the case that each robot computes an independent global path, and to coordinate the intentions of all robots. This is done in a max-min consensus step where the best direction of movement is chosen with respect to all the robots.
2 Preliminaries
In this section we provide the needed definitions, the problem formulation for distributed formation control in dynamic environments and an overview of the proposed method.
2.1 Definitions
2.1.1 Robots
Consider a team of robots navigating in formation. For each robot \(i \in \mathcal {I}= \{1,\dots ,n\} \subset \mathbb {N}\), its position at time t is denoted by \(\mathbf p _i(t) \in \mathbb {R}^3\). In the following, we consider all robots to have the same dynamic model and a non-rotating cylindrical shape of radius r and height 2h in the vertical dimension. Denote the volume occupied by a robot at position \(\mathbf p \) by \(\mathcal {A}(\mathbf p ) \subset \mathbb {R}^3\).
2.1.2 Communication
Let \(\mathcal {G}=(\mathcal {I},{\mathcal {E}})\) be the communication graph associated to the team of robots. Each edge in the graph, \((i,j)\in {\mathcal {E}},\) denotes the possibility of robots i and j to directly communicate with each other. The set of neighbors of robot i is denoted by \({\mathcal {N}}_i\), i.e., \({\mathcal {N}}_i = \{ j\in {\mathcal {I}} \mid (i,j)\in {\mathcal {E}}\}\). We assume ideal communications, i.e., noise-free and without packet losses, and that \({\mathcal {G}}\) is connected, i.e., for every pair of robots i, j there exists a path of one or more edges in \({\mathcal {E}}\) that links robot i to robot j. We denote by d the diameter of \({\mathcal {G}}\), which is the longest among all the shortest paths between any pair of robots.
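Since d bounds the number of communication rounds used by the consensus algorithms below, a minimal sketch (ours, not from the paper; the adjacency-list representation is an assumption) of computing it offline via breadth-first search:

```python
from collections import deque

def graph_diameter(adj):
    """Diameter of a connected undirected graph, adj = {node: set(neighbors)}."""
    def eccentricity(src):
        # BFS from src; the eccentricity is the longest shortest path from it.
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return max(dist.values())
    # The diameter is the largest eccentricity over all nodes.
    return max(eccentricity(v) for v in adj)

# A line graph 1-2-3-4: the longest shortest path (1 to 4) has 3 edges.
line = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
```

In a deployed system d would not be computed centrally like this; Remark 1 below discusses running the consensus until convergence when d is unknown.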
2.1.3 Field of view
We consider that each robot i has a limited field of view, typically a sphere of given radius centered at the robot’s position. We denote it by \(\mathcal {B}_i \subset \mathbb {R}^3\).
2.1.4 Static obstacles
2.1.5 Moving obstacles
2.1.6 Position-time obstacles
2.1.7 Position-time free space
2.1.8 Motion planning
This work presents an approach for local navigation. We consider that a desired goal position for the team of robots is given, and known by all robots. This goal position could be provided by a human operator or by a standard sampling-based approach for global planning, and its computation is outside the scope of this work. Denote by \(\mathbf{g }(t) \in \mathbb {R}^3\) the goal position for the centroid of the formation at time t. Our distributed local planner then computes the configuration state of the target formation and the required motion of the robots for a given time horizon \(\tau > 0\), which must be longer than the required time to stop. We denote \(t_1 = t_0 + \tau \).
2.2 Definition of the formation
We consider a predefined set of \(m\in \mathbb {N}\) template formations, such as a square or a line. See Fig. 3 for an example. Each template formation \( f \in {\mathcal {I}}_f = \{1,\ldots , m\}\) is given by a set of robot positions \(\{{\mathbf{r }^{f}}_{0,1},\ldots , {\mathbf{r }^{f}}_{0,n}\}\) and a set of outer vertices \(\{{\mathbf{w }^{f}}_{1},\ldots ,{\mathbf{w }^{f}}_{n_{f}}\}\) relative to the center of rotation (typically the centroid) of the formation, where \(n_f\) denotes the number of outer vertices defining formation f. The set of vertices represents the convex hull of the robots' positions in the formation, thus reducing the complexity for formations with a large number of robots.
Further denote by \(d_f\) the minimum distance between any given pair of robots in the template formation f. Template formations can be defined by a human designer or automatically computed for optimal representation of a target shape as shown by Schoch et al. (2014).
A formation is then defined by an isomorphic transformation, which includes the size \(s\in \mathbb {R}_+\), a translation \(\mathbf t \in \mathbb {R}^3\) and a rotation \(R(\mathbf q )\) described by a unit quaternion \(\mathbf q \in SO(3)\), its conjugate denoted by \({\bar{\mathbf{q }}}\). With this formation definition, the configuration state for the team of robots is fully defined by \(\mathbf z = [\mathbf t , s, \mathbf q ] \in \mathbb {R}^3 \times \mathbb {R}_+ \times SO(3)\).
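As an illustration, the individual robot positions can be recovered from the configuration state \(\mathbf z = [\mathbf t , s, \mathbf q ]\) by scaling, rotating and translating the template offsets. A sketch in Python (numpy; the helper names are ours, not the paper's):

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by the unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    # Expansion of q v q-conjugate for a unit quaternion.
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

def formation_positions(template, t, s, q):
    """Apply the configuration z = [t, s, q] to the template offsets r_{0,i}."""
    return [t + s * quat_rotate(q, r0) for r0 in template]

# Square template in the xy-plane, doubled in size, rotated 90 degrees
# about the z-axis, and translated to (5, 0, 2).
square = [np.array(p, dtype=float)
          for p in [(1, 1, 0), (-1, 1, 0), (-1, -1, 0), (1, -1, 0)]]
q90 = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
targets = formation_positions(square, t=np.array([5.0, 0.0, 2.0]), s=2.0, q=q90)
```

For instance, the offset (1, 1, 0) maps to (3, 2, 2): rotated to (-1, 1, 0), scaled to (-2, 2, 0), then translated by (5, 0, 2).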
2.3 Problem formulation
Consider also a set of static and moving obstacles seen by the robots and a prediction of their future positions for a time horizon \(\tau \). From this information, each robot individually computes the free position-time space \({\bar{\mathcal {F}}}_i(t_0)\).
Our method solves the following two problems jointly.
Problem 1
(Optimal target configuration) At the current time \(t_0\), obtain a goal configuration \(\mathbf z ^*\) and formation index \(f^* \in {\mathcal {I}}_f\) for time \(t_1 = t_0 + \tau \) such that the deviation between the robot team's centroid and a desired position \(\mathbf g \) is minimized, and the robot positions in the target formation are collision-free with respect to all observed obstacles, that is \(\mathcal {V}(\mathbf z ^*,f^*) \times t_1 \subset \bigcap _{i\in \mathcal {I}} {\bar{\mathcal {F}}}_i(t_0)\).
Problem 2
(Collision-free motion) Given the current position at time \(t_0\) of all robots, ensure that the transition from their current positions, \(\mathbf p _1(t_0) \ldots \mathbf p _n(t_0)\), to their assigned positions, \(\mathbf r _1(t_1)\ldots \mathbf r _n(t_1)\), in the target formation is collision-free at all time instances until \(t_1\), i.e., for every robot \(i \in \mathcal {I}\) and time \(t\in [t_0,t_1]\) its position satisfies \(\mathbf p _i(t) \times t \subset \bigcap _{j\in \mathcal {I}} {\bar{\mathcal {F}}}_j(t_0)\).
In the following, and for clarity, we will drop the time index whenever it is self-evident and denote \(\mathbf p _i(t_0)\) by \(\mathbf p _i\).
2.4 Method overview
In the following we give an intuitive idea of the method followed by a detailed method overview.
2.4.1 Idea
Consider a team of robots, each of them with a limited field of view, and a communication topology. A naive approach to solving Problem 1 would be for each robot to compute a target formation and then for all robots to perform consensus on the formation parameters. Unfortunately, this can lead to a formation in collision with an obstacle, as shown in Fig. 2a. If all the robots first agree on a convex obstacle-free region, and then compute a target formation therein, this problem no longer appears. An approach to compute this common obstacle-free region could be for each robot to compute an obstacle-free region with respect to its limited field of view, after which the robots collaboratively compute the intersection of all regions. Nonetheless, this could lead to an empty intersection as shown in Fig. 2b. This second problem can be solved by imposing (a) that the robots first agree on a common direction of motion and (b) that the convex obstacle-free region computed by each robot accounts for the robots' positions. The latter is equivalent to imposing that the convex obstacle-free region includes the convex hull of the robots' positions. See Fig. 2c for an example.
Following this line of thought, the proposed method consists of the following steps, which are detailed in Fig. 4.
2.4.2 Formation control
 (a)
Robots perform distributed consensus to compute the convex hull of the robots’ positions.
 (b)
Robots perform a distributed min/max consensus to agree on the preferred direction of movement for the team.
 (c)
Each robot computes a large convex region in obstacle-free space, grown from the convex hull of the robots' positions and directed in the preferred direction of motion.
 (d)
Robots perform distributed consensus to compute the intersection of the individual convex regions.
 (e)
Each robot computes the optimal target formation within the resulting convex volume. At this stage all the robots execute the same optimization with identical initial conditions, variables, cost function and constraints (these are computed from the intersection of convex regions, the convex hull of the robot positions and the preferred direction of motion). Therefore, we assume that they reach the same solution. If not, an additional consensus round would be required.
 (f)
Robots are assigned, with a distributed optimization, to target positions within the desired formation.
2.4.3 Low level control
Each robot navigates towards its assigned position within the target formation with a high-frequency control loop. The robots locally avoid collisions with their neighbors and remain within the convex region in free position-time space to avoid collisions with perceived static and moving obstacles. The target positions are updated as soon as a new configuration state of the team's formation is obtained.
3 Method
In this section we explain all the steps of the distributed navigation algorithm, discussing which information the robots need to communicate to their neighbors and which steps are executed locally. The proposed algorithm accounts for the limited visibility and communication capabilities of all the robots, exploiting the good properties of a distributed consensus scheme. To avoid confusion in the notation, throughout the section we denote discrete-time communication rounds using the index k and remove the continuous time dependency of the previous section.
3.1 Convex hull of the robots’ positions
In the first step the robots need to compute the convex hull, \(\mathcal {C}\), of their positions. The convex hull of a set of points can be computed trivially via a function that we denote by convhull. To compute \(\mathcal {C}\) in a distributed manner we let each robot handle a local estimation of the convex hull, \(\mathcal {C}_i\), which is initialized containing exclusively the robot’s position, i.e., \(\mathcal {C}_i(0) = \{ \mathbf p _i \}\).
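One round of this consensus can be sketched as follows (Python, in 2D for brevity; `convhull2d` stands in for the convhull routine, and the synchronous-round model is a simplification of Algorithm 1, not the authors' implementation):

```python
def convhull2d(points):
    """Vertices of the 2D convex hull (Andrew's monotone chain)."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return [list(p) for p in pts]
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for seq, out in ((pts, lower), (reversed(pts), upper)):
        for p in seq:
            while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
    return [list(p) for p in lower[:-1] + upper[:-1]]

def consensus_round(hulls, adj):
    """Robot i replaces C_i by convhull of C_i united with C_j, j in N_i."""
    return {i: convhull2d(hulls[i] + sum((hulls[j] for j in adj[i]), []))
            for i in adj}

# Four robots on the corners of a unit square, line communication graph
# (diameter d = 3); C_i(0) = {p_i}.  After d rounds every robot holds C.
positions = {1: [[0.0, 0.0]], 2: [[1.0, 0.0]], 3: [[1.0, 1.0]], 4: [[0.0, 1.0]]}
adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
hulls = positions
for _ in range(3):
    hulls = consensus_round(hulls, adj)
```

Interior points are discarded at each round, which is the source of the communication savings discussed below.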
Proposition 1
Proof
We analyze now the communication cost of Algorithm 1. Note that, in the worst case, where the convex hull contains the positions of all the robots, our algorithm incurs a communication cost equal to that of flooding all the positions to all the robots. Nevertheless, even in such a case, there are practical advantages of using this procedure instead of pure flooding. Besides the likely savings in communication from positions that are not relayed because they do not belong to the convex hull, with our procedure there is no need to identify which position corresponds to which robot, making it better suited for pure broadcast implementations.
Remark 1
(Unknown d) If the diameter, d, is unknown, the consensus runs until convergence for all robots. Since only new points are transmitted at each iteration, the convergence of the algorithm can be detected using a timeout when no new messages are received.
Remark 2
(Algorithm complexity) In terms of computational demands, our algorithm requires the computation of d convex hulls for each robot, instead of a single computation. On the other hand, each convex hull computation will potentially contain fewer points, and the information from previous rounds could also be exploited for efficiency. Nevertheless, existing algorithms to compute the convex hull of a set of points are fast enough that the additional computations are not a practical concern.
3.2 Preferred direction of motion
The next step of the algorithm consists of computing the direction in which the team of robots needs to move. Denote by \(\mathbf g \in \mathbb {R}^3\) the goal position for the robot formation and consider it known by all robots. A priori the preferred direction of motion, \(\theta ^* \in \mathbb {R}^3,\) for the team of robots is given by the vector from \(\mathbf c \in \mathbb {R}^3\), the centroid of \(\mathcal {C}\), to the goal position, \(\mathbf g \in \mathbb {R}^3\). Note that the centroid of the convex hull can be computed by all the robots locally without the need of additional information, as opposed to the centroid of the robots' positions, which would require further information and possibly asymptotic consensus methods. However, an obstacle may lie along that direction and be seen only by a subset of the robots. Thus, we introduce an optional step in the algorithm in which the robots agree upon the best direction to compute the goal formation.
Our algorithm considers a discrete set, \(\varTheta = \{ \theta _1, \ldots , \theta _\kappa \},\) containing \(\kappa \) different possible directions of motion. We assume that a common orientation frame is available to all the robots, with reference direction given by the vector \(\mathbf g - \mathbf c .\) For each \(\theta \in \varTheta \), each robot computes a utility value, \(u_i: \varTheta \rightarrow \mathbb {R}^+,\) that describes how good that direction is. The utility function can be defined, for example, as the distance to an obstacle in that direction, based on the local perception of that robot.
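Under these assumptions, the agreement step can be sketched as a component-wise min consensus followed by a common argmax, i.e., a max-min choice (a simplification of Algorithm 2; the function names and the synchronous-round model are ours):

```python
def direction_consensus(utilities, adj, d):
    """After d rounds every robot holds the component-wise minimum of all
    utility vectors; the agreed direction maximizes that minimum."""
    u = {i: list(utilities[i]) for i in adj}
    for _ in range(d):
        # Each robot replaces its vector by the element-wise min over itself
        # and its neighbors (one synchronous communication round).
        u = {i: [min(u[i][k], *(u[j][k] for j in adj[i]))
                 for k in range(len(u[i]))] for i in adj}
    # All vectors are now identical, so every robot's local argmax agrees.
    return {i: max(range(len(vec)), key=vec.__getitem__) for i, vec in u.items()}

# Three candidate directions; only robot 2 sees an obstacle in direction 0,
# so the max-min choice avoids it although the other robots rate it highest.
utilities = {1: [5.0, 3.0, 1.0], 2: [0.5, 3.0, 1.0], 3: [5.0, 2.5, 1.0]}
adj = {1: [2], 2: [1, 3], 3: [2]}
choice = direction_consensus(utilities, adj, d=2)
```

The min over robots makes the choice conservative: a direction is only as good as its worst local assessment, which matches the max-min step described in the introduction.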
Remark 3
(Bandwidth reduction) Using the above rule, the total bandwidth used in the network will be equal to \(\kappa n d\) units of information, obtained from d communication rounds, each one of them requiring n robots to transmit a vector of dimension \(\kappa \). In order to reduce this quantity, at each communication round each robot only sends the components of the vector that have changed after executing (12). The messages then contain segments of the vector, determined by the initial component, the length of the segment and the data. While in the worst case the bandwidth usage of this methodology rises to \(\frac{3}{2} \kappa n d\), we will show empirically that in practice this approach brings substantial savings.
3.3 Obstaclefree convex region
Recall that, from Sect. 3.1, all robots have knowledge of the convex hull \(\mathcal {C}\) of the robots' positions and from Sect. 3.2 they share a preferred direction of motion. With this common information, but different obstacle maps due to the limited field of view, each robot computes an obstacle-free convex region embedded in position-time space, denoted \(\mathcal {P}_i \subset \mathbb {R}^3 \times [0,\tau ]\). If the step in Sect. 3.2 is omitted, the robots will use by default the direction \(\theta ^*\) defined by the vector \(\mathbf g - \mathbf c \).
To compute the convex regions \(\mathcal {P}_i\) we follow our previous work for centralized formation control (Alonso-Mora et al. 2017), which relies on the iterative optimization by Deits and Tedrake (2014). Given a small initial ellipsoid in free space we compute (1) the separating hyperplanes between the ellipsoid and the obstacles and (2) the largest ellipsoid contained in the resulting convex polytope. These two steps are formulated as convex programs and are repeated iteratively until convergence to a large convex region in free space, as long as a required set of points remains contained in the convex region. The initial ellipsoid can be generated by two points biasing the growth of the convex polytope. We note though that the distributed formation control method described in this paper is agnostic to the underlying algorithm to compute convex polytopes in free space.
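To illustrate the first of the two steps, the following sketch computes a separating hyperplane for point obstacles (a simplification: the cited works handle polytopal obstacles, and this is not the authors' implementation; names are ours):

```python
import numpy as np

def separating_hyperplane(C, d, obstacle_pts):
    """One hyperplane step for point obstacles.  The ellipsoid is
    {C y + d : ||y|| <= 1}.  Map the obstacles to the ball frame, pick the
    closest one, and return a, b such that the half-space a.x <= b keeps the
    ellipsoid side while excluding that obstacle point."""
    Cinv = np.linalg.inv(C)
    ys = np.array([Cinv @ (p - d) for p in obstacle_pts])  # ball-frame coords
    closest = obstacle_pts[int(np.argmin(np.linalg.norm(ys, axis=1)))]
    ystar = Cinv @ (closest - d)
    a = Cinv.T @ (ystar / np.linalg.norm(ystar))           # outward normal
    b = float(a @ closest)                                 # plane through the point
    return a, b

# Unit ball at the origin with a single obstacle point at (2, 0):
# the resulting cut is simply x <= 2.
a, b = separating_hyperplane(np.eye(2), np.zeros(2), [np.array([2.0, 0.0])])
```

The second step, inflating the largest inscribed ellipsoid, is a semidefinite program and is omitted here.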
In particular, each robot computes two convex polytopes:
\(\mathcal {P}_i^{\mathcal {C}}\), a convex polytope that contains the convex hull \(\mathcal {C}\) of the robot positions, is computed towards a point \(\chi = \mathbf c + \theta ^* \tau \) in the preferred direction of motion, and is embedded in the free positiontime space as seen by the robot. This polytope \(\mathcal {P}_i^{\mathcal {C}} = \mathcal {P}_{\mathcal {C}\times 0}^{\chi \times \tau }({\bar{\mathcal {F}}}_i(t_0))\) verifies that \(\mathcal {C}\times 0 \subset \mathcal {P}_i^{\mathcal {C}} \subset {\bar{\mathcal {F}}}_i(t_0)\).

\(\mathcal {P}_i^\mathbf{c }\), a convex polytope that, in contrast to \(\mathcal {P}_i^{\mathcal {C}}\), only contains the centroid \(\mathbf c \) of the convex hull. This polytope \(\mathcal {P}_i^\mathbf{c } = \mathcal {P}_\mathbf{c \times 0}^{\chi \times \tau }({\bar{\mathcal {F}}}_i(t_0))\) verifies that \(\mathbf c \times 0 \subset \mathcal {P}_i^\mathbf{c } \subset {\bar{\mathcal {F}}}_i(t_0)\).
We then define \(\mathcal {P}_i\) as the intersection of both polytopes, i.e., \(\mathcal {P}_i = \mathcal {P}_i^{\mathcal {C}} \bigcap \mathcal {P}_i^\mathbf{c }\). The former polytope (thanks to its convexity) guarantees that the robots can move towards the target configuration following collisionfree trajectories. The latter polytope guides the team towards the goal.
However, due to the local visibility of the robots, some of these regions may intersect some obstacles that a particular robot has not seen. Additionally, these regions might not be equal for all robots, which, if used without further agreement, would lead to different target formations. Thus, the robots need to agree upon a common region that is globally free of obstacles. For that purpose, we next propose a distributed algorithm that computes the intersection of all the regions, \(\mathcal {P}= \bigcap _{i \in \mathcal {I}} \mathcal {P}_i\).
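Representing each region in half-space form, \(\{x : Ax \le b\}\), intersecting polytopes amounts to stacking their linear constraints, so one consensus round can be sketched as follows (illustrative only; names and the synchronous-round model are ours, and redundant constraints are not pruned):

```python
import numpy as np

def intersect(polys):
    """Intersection of H-polytopes {x : A x <= b}: stack the constraints."""
    A = np.vstack([A_ for A_, _ in polys])
    b = np.concatenate([b_ for _, b_ in polys])
    return A, b

def polytope_consensus_round(local, adj):
    """One round: robot i intersects its polytope with its neighbors'."""
    return {i: intersect([local[i]] + [local[j] for j in adj[i]]) for i in adj}

def contains(poly, x, tol=1e-9):
    A, b = poly
    return bool(np.all(A @ x <= b + tol))

# Two robots, each constraining a different pair of faces of a box;
# after one round (diameter 1) both hold the overlap 0<=x<=2, 0<=y<=1.
P1 = (np.array([[1.0, 0.0], [-1.0, 0.0]]), np.array([2.0, 0.0]))
P2 = (np.array([[0.0, 1.0], [0.0, -1.0]]), np.array([1.0, 0.0]))
local = {1: P1, 2: P2}
adj = {1: [2], 2: [1]}
agreed = polytope_consensus_round(local, adj)
```

As noted next, in practice only the constraints that are new with respect to the previous round need to be transmitted.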
Proposition 2
Proof
Let us note that any face that belongs to both \(\mathcal {P}_i(k)\) and \(\mathcal {P}_i(k+1)\) will yield the same linear constraints in both polytopes. This implies that, similarly to Algorithm 1, robots do not need to send all the constraints at each communication round, but only those that are new, and consequently more restrictive than in the previous round. In particular, robots send at each communication round the new linear constraints that have appeared after computing the intersection in Line 5 of Algorithm 3, instead of all the linear constraints at each round, as we originally considered in Alonso-Mora et al. (2016). This modification leads to substantial communication savings in the algorithm, especially when compared to a pure flooding approach. In addition, it allows us to define a sound stopping criterion for the algorithm when the value of d is unknown, as in the case of Remark 1.
Proposition 3
The resulting convex region \(\mathcal {P}\) is a convex polytope.
Proof
The intersection of convex regions is also convex. \(\square \)
Proposition 4
The resulting convex region \(\mathcal {P}\) does not intersect with any obstacle seen by the robots in the team for the time period \([t_0,t_1]\), i.e., it is fully contained in the free position-time space.
Proof
For each robot i, its individual convex region \(\mathcal {P}_i(0)\) is fully contained in its observed free position-time space by construction, i.e., \(\mathcal {P}_i(0) \subset {\bar{\mathcal {F}}}_i(t_0)\).
In each consensus round, the new polytope is given by the intersection of the previous polytope with the received ones, therefore \(\mathcal {P}_i(k+1) \subset \mathcal {P}_i(k)\). This implies that, after convergence, \(\mathcal {P}= \mathcal {P}_i(d) \subset \mathcal {P}_i(0)\).
If \(\mathcal {P}= \emptyset \), an alternative convex region \(\mathcal {P}_i\) can be selected by each robot as described by Alonso-Mora et al. (2017), Sect. III-C, and consensus on the intersection is repeated.
3.4 Optimal formation
3.5 Robot assignment to positions in the formation
The result of the computation of Sect. 3.4 is a target formation \(f^*\) and configuration state \(\mathbf z ^*\), from which its associated set of target robot positions \(\{\mathbf{r _{1}}^{*},\ldots ,\mathbf{r _{n}}^{*}\}\) can be computed from Eq. (4).
Proposition 5
The robots can transition to their assigned positions in the target formation with collisionfree paths.
Proof
Under the assumption of a holonomic motion model, the proposition holds if, for every robot, the straight line from the current position to the assigned position is collision-free within the position-time space.
Recall Sect. 3.3 and let us denote by \(\mathcal {P}^{\mathcal {C}}\) the intersection \(\mathcal {P}^{\mathcal {C}} = \bigcap _{i\in \mathcal {I}}\mathcal {P}_i^{\mathcal {C}}\), which contains the convex hull of robot positions, i.e., \(\mathcal {C}\subset \mathcal {P}^{\mathcal {C}}\), and the consensus polytope \(\mathcal {P}\), i.e., \(\mathcal {P}\subset \mathcal {P}^{\mathcal {C}}\), by construction. It also does not intersect any of the seen obstacles, i.e., \(\mathcal {P}^{\mathcal {C}} \subset \bigcap _{i\in \mathcal {I}}{\bar{\mathcal {F}}}_i(t_0)\).
From Sect. 3.3, we have that the current robot position \(\mathbf p _i\) is inside the convex region \(\mathcal {P}^{\mathcal {C}}\), since \(\mathbf p _i \times 0 \in \mathcal {C}\times 0 \subset \mathcal {P}^{\mathcal {C}}\). Furthermore, the optimization problem of Eq. (17) guarantees that the target position \(\mathbf r _{\sigma (i)}^*\) is within the same convex region, since \(\mathbf r _{\sigma (i)}^* \times \tau \in \mathcal {P}\subset \mathcal {P}^{\mathcal {C}}\). Therefore, the path from the current position to the target position is within a convex polytope \(\mathcal {P}^{\mathcal {C}}\) which does not intersect any of the seen obstacles. \(\square \)
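The convexity argument can be checked numerically: since both endpoints lie in the polytope, so does every point of the segment between them. A small sketch (ours, with hypothetical data):

```python
import numpy as np

def segment_in_polytope(A, b, p0, p1, samples=50, tol=1e-9):
    """Check that the straight segment from p0 to p1 stays in {x : A x <= b}.
    By convexity the two endpoints suffice; sampling illustrates the argument."""
    for s in np.linspace(0.0, 1.0, samples):
        x = (1.0 - s) * p0 + s * p1
        if not np.all(A @ x <= b + tol):
            return False
    return True

# Box 0 <= x <= 2, 0 <= y <= 2: the diagonal stays inside,
# while a segment leaving through the wall x = 2 does not.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([2.0, 0.0, 2.0, 0.0])
inside = segment_in_polytope(A, b, np.zeros(2), np.array([2.0, 2.0]))
outside = segment_in_polytope(A, b, np.zeros(2), np.array([3.0, 1.0]))
```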
3.6 Realtime control
Consider \(\mathbf r ^*_i\) to be the target position assigned to robot i, which is updated as soon as a new target formation is computed. In a high-frequency control loop each robot individually navigates towards its target position, locally avoiding collisions with static obstacles, moving obstacles and other robots. To this end we compute a collision-free local motion via a state-of-the-art receding horizon controller that accounts for the dynamical model of the robot. Two suitable methods are the distributed reciprocal velocity obstacles with motion constraints for aerial vehicles by Alonso-Mora et al. (2015b) and the receding horizon controller by Nägeli et al. (2017a).
4 Simulation results
4.1 Performance of the consensus strategies
In this section we present simulation results using Monte Carlo experiments to analyze the distributed Algorithms 1, 2 and 3. In particular, we are interested in comparing the communication demands of our algorithms with a solution consisting of flooding the information of all the robots to the whole network, i.e., a centralized solution under the assumption of limited communication. Since the final solution and the number of communication rounds are equivalent to those of the centralized solution, we do not analyze these parameters in the simulations.
4.1.1 Convex hull
For Algorithm 1 we considered different group sizes, from \(n=5\) to \(n=1024\) robots. For each value of n we have considered 100 different initial conditions, where the robots have been randomly placed in a three-dimensional space, with minimum inter-robot distance equal to 0.5 m, enforcing connectedness of the communication graph for a communication radius of one meter. Then, for each configuration we have considered four different communication radii, \(CR = \{1, 2, 5, 10\}\) m, and we have run the algorithm.
The amount of information exchanged over the network, relative to the amount required when using flooding, is shown in Fig. 6a. The plot shows the mean and standard deviation over the 100 trials for each scenario. First of all, note that the total bandwidth requirements over the network do increase with the number of robots, because the number of communication rounds, and possibly the size of the convex hull, grows accordingly. Thus, the objective of the plots is not to analyze the total bandwidth but to compare how much better (or worse) one solution is relative to the other, bearing in mind that there might be other limitations for the algorithms depending on the size of the network and its configuration. With this in mind, the first observation is that in all cases our algorithm requires less communication than pure flooding of all the positions, since the relative cost is always below one. The algorithm also scales well with the number of robots: as n increases, the number of positions that do not belong to the convex hull also increases, resulting in fewer information exchanges for any communication radius. Similarly, increasing the communication radius also decreases the relative communication cost, because at each communication round the robots can discard more points from their local convex hull estimates, since information from more neighbors is available. Overall, taking into account that the number of communication rounds of our algorithm is the same as that of flooding, we conclude that our distributed solution is always a better choice.
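The intuition behind these savings can be illustrated with a toy version of the consensus: each robot repeatedly merges the hull vertices received from its neighbors and discards interior points, so only candidate hull vertices are ever transmitted. A minimal 2D sketch under that assumption (the paper's Algorithm 1 operates in 3D; all names and data here are illustrative):

```python
def convex_hull(points):
    """Andrew's monotone chain: 2D convex hull, counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def hull_consensus_round(estimates, neighbors):
    """One synchronous round: merge neighbors' hull estimates and keep
    only the vertices of the merged hull (interior points are discarded)."""
    new = {}
    for i, nbrs in neighbors.items():
        merged = list(estimates[i])
        for j in nbrs:
            merged += estimates[j]
        new[i] = convex_hull(merged)
    return new

# Five robots on a line graph; after 4 rounds (the graph eccentricity)
# every robot's estimate equals the global hull {(0,0), (2,0), (2,2)}.
positions = [(0.0, 0.0), (2.0, 0.0), (1.0, 1.0), (1.0, 0.2), (2.0, 2.0)]
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
est = {i: [positions[i]] for i in range(5)}
for _ in range(4):
    est = hull_consensus_round(est, neighbors)
```

Because interior points are pruned every round, the message size stays close to the number of hull vertices rather than growing with n, which mirrors the trend observed in Fig. 6a.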
4.1.2 Direction of movement
We have also analyzed Algorithm 2 (Distributed Direction of Motion) using the same simulation parameters, i.e., 100 trials for each number of robots and communication radius. We measured the cost relative to a pure flooding algorithm for vectors with \(\kappa =100\) utilities, using the implementation described in Remark 3. The results of this experiment are depicted in Fig. 6b, where we can observe that for this particular algorithm our solution is by far more efficient than flooding. As with the convex hull, the algorithm also performs better for densely connected networks and large values of n. We have also analyzed the influence of \(\kappa \), observing that the relative cost, compared to flooding, was essentially the same for the different sizes. Therefore, since this parameter can be chosen arbitrarily, the design choice should be made according to the capacity of the network, seeking a good balance between the accuracy of the direction of motion and the absolute bandwidth requirements.
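Algorithm 2 agrees on a direction of motion encoded as a vector of \(\kappa\) utilities. To illustrate how such an agreement can converge in a number of rounds equal to the network diameter, here is an element-wise max-consensus over the utility vectors; this is a simplified stand-in for the idea, not the paper's exact update rule:

```python
def max_consensus_round(values, neighbors):
    """One synchronous round of element-wise max-consensus over
    kappa-dimensional utility vectors (a simplified illustration)."""
    new = {}
    for i, nbrs in neighbors.items():
        v = list(values[i])
        for j in nbrs:
            v = [max(a, b) for a, b in zip(v, values[j])]
        new[i] = v
    return new

# Three robots on a line graph, kappa = 4 candidate directions.
neighbors = {0: [1], 1: [0, 2], 2: [1]}
values = {0: [0.1, 0.9, 0.0, 0.2],
          1: [0.3, 0.1, 0.4, 0.0],
          2: [0.0, 0.2, 0.1, 0.7]}
for _ in range(2):                 # diameter of the line graph
    values = max_consensus_round(values, neighbors)
# All robots now hold [0.3, 0.9, 0.4, 0.7] and agree on the
# direction with the highest utility (index 1).
```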
4.1.3 Intersection of convex regions
Finally, in order to analyze Algorithm 3 we considered again the same numbers of robots and communication radii, as well as 100 random initial configurations for each pair of values. The initial regions \(\mathcal {P}_i\) were created as follows: first we generated a random polytope composed of 20 three-dimensional vertices; then, for each robot, we randomly changed 5% of the vertices and perturbed another 15% of them. These parameters were chosen based on the properties of the polytopes obtained in the full simulations with real obstacles described in Sect. 4.2. The results of these experiments are depicted in Fig. 6c.
The plot shows a similar behavior to Fig. 6a, with bandwidth requirements, relative to a flooding procedure, that decrease as n and the communication radius increase. Only for small teams of robots do some executions of the algorithm require the exchange of more information than flooding. This happens because for flooding we send the 3-dimensional vertices of the associated obstacle-free polytope instead of the 4-dimensional constraints, in order to reduce the bandwidth. When n is small, the savings from sending only the new constraints at each iteration are not enough to compensate for the increased dimension of the information exchanged, given that \(d\simeq n\). Nevertheless, in all other cases our algorithm outperforms flooding, to the point where only a small fraction of the information needs to be sent. Considering the extra routing control mechanisms that flooding would require, our solution is a much better choice. Besides, the solution that sends only the new constraints improves over our original approach in Alonso-Mora et al. (2016), where, for small teams, the average cost of our algorithm was much higher.
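The key property exploited here is that intersecting polytopes in half-space form amounts to taking the union of their constraint sets, so a robot only needs to forward constraints its neighbors have not yet seen. A minimal sketch of that bandwidth accounting (the data and function names are illustrative, not the paper's implementation):

```python
def intersection_round(constraints, neighbors):
    """One synchronous round: each robot collects its neighbors' half-space
    constraints (a, b), each encoding a . x <= b. The union of constraint
    sets represents the intersection of the individual polytopes.
    Returns the new estimates and the number of constraints transmitted."""
    new, transmitted = {}, 0
    for i, nbrs in neighbors.items():
        merged = set(constraints[i])
        for j in nbrs:
            fresh = constraints[j] - constraints[i]  # only unseen constraints cost bandwidth
            transmitted += len(fresh)
            merged |= fresh
        new[i] = merged
    return new, transmitted

# Two robots whose regions share a constraint; the shared one is never re-sent.
c = {0: {((1.0, 0.0, 0.0), 2.0), ((0.0, 1.0, 0.0), 1.0)},
     1: {((1.0, 0.0, 0.0), 2.0), ((0.0, 0.0, 1.0), 3.0)}}
nbrs = {0: [1], 1: [0]}
c, sent = intersection_round(c, nbrs)
# Both robots now hold all three constraints; only 2 were transmitted.
```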
In summary, our algorithms require less bandwidth in (almost) all cases than equivalent flooding approaches using the same number of communication rounds. It should be noted, however, that the number of communication rounds of these algorithms grows with the diameter of the network, which in turn grows with the number of robots. Nevertheless, even if the number of rounds increases with n, the size of the messages remains roughly constant for arbitrarily large numbers of robots. The reason is that, while the number of robots can grow, the number of points that define the convex hull (and similarly the direction of motion and the points in the obstacle-free region) remains approximately constant.
4.2 Multidrone formation control
We present simulations with teams of quadrotor MAVs, where we employ the same nonlinear dynamical model and LQR controller as Alonso-Mora et al. (2015b), which was verified with real quadrotors. We use SNOPT by Gill et al. (2002) to solve the nonlinear program, a goal-directed version of IRIS by Deits and Tedrake (2014) to compute large convex regions, and the Drake toolbox from MIT^{1} to handle quaternions and constraints and to interface with SNOPT.
In our simulations a time horizon of \(\tau = 4\) s is considered for the experiments with 4 robots and \(\tau = 10\) s for the experiments with 16 robots, due to the larger size of the formation and the scenario. In all cases a new formation is computed every 2 s. The individual collision avoidance planners run at 5 Hz and the quadrotors have a preferred speed of 1.5 m/s. Both the visibility distance and the communication radius are set to 3 m, and sensing and actuation noise are neglected.
We test the distributed algorithm described in this paper in two scenarios previously introduced in Alonso-Mora et al. (2017) for the centralized case. This provides a direct comparison and evaluation.
4.2.1 Four robots
Figure 7 shows snapshots and trajectories of four quadrotors tracking a circular trajectory while locally avoiding three static obstacles and a dynamic obstacle. Three default formations are considered: square (1st preference), diamond (2nd preference) and line. The optimal parameters are computed with the distributed consensus algorithm and nonlinear optimization, allowing rotation in 3D (flat horizontal orientation preferred) and reconfiguration.
The four quadrotors start from the horizontal square and slightly tilt it (11 s) to avoid the incoming dynamic obstacle. To fully clear it while avoiding the obstacle in the lower corner, they briefly switch to a vertical line, and then back to the preferred square formation (20 s). To pass through the next narrow opening they switch back to the line formation (30 s). Once the obstacles are cleared they return to the preferred horizontal square formation (45 s).
4.2.2 Sixteen robots
5 Experimental results
In this section we describe experimental results with a team of four quadrotors.
5.1 Implementation details
Our experiments are conducted with two standard laptops (quad-core Intel i7 CPU @ 2.8 GHz). The person and the drones are tracked with an external motion capture system that provides precise position information at a high update rate, and move in an environment of approximately 4 m (W) \(\times \) 5 m (L) \(\times \) 3 m (H). In order to guarantee connectedness of the communication graph, the communication radius of the drones is simulated at 3 m. The visibility distance has also been set to 3 m, as a compromise between safety and allowing different perceptions of the environment. The physical diameter of each drone is approximately 0.3 m. When computing the configuration of the formation, we impose a minimum distance of 1 m between drones, to avoid collisions and aerodynamic disturbances.
On the second laptop, we receive the current state of the drones and obstacles at a high frequency and send input commands to the drones. We also receive the target positions for the drones, computed on the first laptop. This laptop controls the position of the drones at a high, approximately constant frequency of 20 Hz. We implement a slightly modified version of the Model Predictive Controller introduced by Nägeli et al. (2017a) and extended to multiple drones by Nägeli et al. (2017b), where we remove all the cost terms for videography. This controller minimizes the deviation from the assigned position of the drone in the formation, subject to collision avoidance, state and input constraints. We run a controller for each drone and exchange the planned trajectories sequentially. We employ a horizon of \(M = 20\) steps, of 0.05 s each, and we solve the MPC problem with FORCES Pro (Domahidi et al. 2012; Domahidi and Jerez 2016), which generates fast solver code exploiting the special structure of the NLP problem. MPC methods have also been employed with onboard sensing of obstacles, for example by Odelga et al. (2016).
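The full NMPC formulation with collision-avoidance constraints and a commercial solver is beyond a short example, but the receding-horizon principle itself (solve a finite-horizon problem, apply only the first input, re-solve at the next step) can be shown on a 1D double integrator with an unconstrained quadratic tracking cost. This is a drastically simplified sketch, not the controller used in the experiments; only the 0.05 s step length and the horizon \(M = 20\) are taken from the text, and the weight `r` is an arbitrary illustrative choice:

```python
import numpy as np

dt = 0.05                               # step length used in the paper's MPC
A = np.array([[1.0, dt], [0.0, 1.0]])   # 1D double integrator (position, velocity)
B = np.array([[0.5 * dt**2], [dt]])
C = np.array([[1.0, 0.0]])              # we track position only

def mpc_step(x0, p_ref, M=20, r=1e-3):
    """Solve the unconstrained finite-horizon tracking problem in batch
    least-squares form and return only the first input."""
    # Predictions: x_k = A^k x0 + sum_j A^(k-1-j) B u_j, k = 1..M.
    Phi = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(1, M + 1)])
    Gam = np.zeros((M, M))
    for k in range(1, M + 1):
        for j in range(k):
            Gam[k - 1, j] = (C @ np.linalg.matrix_power(A, k - 1 - j) @ B)[0, 0]
    # Minimize ||Gam U - (ref - Phi x0)||^2 + r ||U||^2 as stacked least squares.
    S = np.vstack([Gam, np.sqrt(r) * np.eye(M)])
    y = np.hstack([np.full(M, p_ref) - (Phi @ x0), np.zeros(M)])
    U = np.linalg.lstsq(S, y, rcond=None)[0]
    return U[0]

# Closed loop: drive the position from 0 to the setpoint 1.
x = np.array([0.0, 0.0])
for _ in range(200):
    u = mpc_step(x, 1.0)
    x = A @ x + B.flatten() * u
```

The real controller additionally imposes state, input, and collision-avoidance constraints, which is why a structure-exploiting NLP solver is needed instead of a least-squares solve.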
5.2 Quadrotor hardware
We use unmodified Parrot Bebop 2 quadrotors in all our experiments. The communication between the drones and the host PC is handled via ROS (Quigley et al. 2009), and we directly send the control inputs from the first time step of the MPC, without an additional feedback controller for trajectory tracking on the drone.
The state of the quadrotor is given by its position \(\mathbf {p} \in \mathbb {R}^3\), its velocity \(\dot{\mathbf {p}} = [\dot{\mathbf {p}}_{x,y}, {\dot{p}}_{q_z}] \in \mathbb {R}^3\) and its orientation, i.e., roll \(\varPhi _q\), pitch \(\varTheta _q\) and yaw \(\psi _q\).
The control inputs to the system are: the velocity of the quadrotor along the body-z axis \(v_z = {\dot{p}}_{q_z}\), the desired roll angle \(\phi _q\), the desired pitch angle \(\theta _q\), and the angular speed around the body-z axis \(\omega _{q_z}\). The horizontal velocities are not directly controlled.
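This input structure (attitude setpoints plus direct vertical and yaw rates) is common for commercial quadrotors. A minimal simulation step under typical simplifying assumptions is sketched below; the first-order attitude response, its time constant, and the small-angle acceleration model are illustrative assumptions, not the identified model used in the paper:

```python
import math

G = 9.81        # gravity [m/s^2]
TAU_RP = 0.2    # assumed first-order time constant of the attitude loop [s]

def step(state, inputs, dt=0.05):
    """One Euler step of a simplified Bebop-like model (illustrative only).
    state  = (x, y, z, vx, vy, phi, theta, psi)
    inputs = (phi_des, theta_des, v_z, omega_z)  -- as listed in the text."""
    x, y, z, vx, vy, phi, theta, psi = state
    phi_des, theta_des, v_z, omega_z = inputs
    # Horizontal acceleration from the tilt angles, rotated by the yaw.
    ax = G * (math.tan(theta) * math.cos(psi) + math.tan(phi) * math.sin(psi))
    ay = G * (math.tan(theta) * math.sin(psi) - math.tan(phi) * math.cos(psi))
    return (x + vx * dt, y + vy * dt, z + v_z * dt,  # v_z commanded directly
            vx + ax * dt, vy + ay * dt,
            phi + (phi_des - phi) / TAU_RP * dt,     # first-order attitude response
            theta + (theta_des - theta) / TAU_RP * dt,
            psi + omega_z * dt)
```

Note how the horizontal velocities only change through the tilt angles, matching the statement above that they are not directly controlled.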
The NMPC by Nägeli et al. (2017b) accounts for the internal constraints of the Parrot Bebop 2 (e.g., maximal vertical and horizontal velocities, maximal roll and pitch angles). The limits are described in the documentation of the Parrot SDK.^{2}
5.3 Results
We have performed a total of twenty experiments, all of them with one moving obstacle (a person walking or running) and three to four drones. In each experiment the robots are tasked with either maintaining the centroid of the formation as close as possible to the center of the room or with tracking a circular, constant-velocity motion. For the experiments with three drones we consider both a line and a triangle formation. For the experiments with four drones we consider both a line and a square formation. In all cases, the preferred size of the formation is set to 1.5 m between consecutive robots (side of the triangle and square), and its minimum size is set to 1 m between robots.
In Fig. 10b we accumulate the distances between all pairs of drones during the experiments. Again, we observe the two distinct peaks at the planned 1.5 and 2.1 m. Yet, we also observe a much larger variability in the inter-robot distance. In most instances, the drones maintain a separation greater than 1 m, as planned by the formation control module. In a few instances, two drones were between the planned separation of 1 m and the collision distance of 0.5 m. In very few instances, the separation between two drones was below 0.5 m and a collision could have occurred, although none did in our experiments. These instances occurred when the human was running and the drones were pushed towards the walls of the room, having to fly over the human in a narrow space and a short period of time in order to avoid a collision. Recalling our system architecture, see Sect. 5.1, we note that setpoint tracking is the responsibility of the low-level collision avoidance, which in our implementation was a sequential nonlinear model predictive controller (NMPC). The reduction of inter-drone distance below the predefined setpoint was due to several factors: (a) the drone dynamics were not perfectly identified in the model employed, (b) delays in the control and communication framework were not modeled, (c) a higher weight was given to drone–human collision avoidance than to drone–drone collision avoidance, and (d) slack variables were used in the NMPC optimization framework. We recall that the contribution of this paper is the formation control method, which generated collision-free setpoints for the drones, and not the low-level single-drone controller.
Finally, in Fig. 10c we accumulate the distances between each drone and the moving human. In all instances a minimum separation of 1 m was maintained and therefore collisions with the human were avoided. The approach successfully adapted the configuration of the formation to avoid the moving human, whose motion ranged from walking to running.
We now present experimental results for four distinct scenarios. In each scenario we describe a distinct capability of the method. A video illustrating the results accompanies this paper and can be found at https://youtu.be/khzM54Qk1QQ.
5.3.1 Single planar formation and static setpoint
5.3.2 Single planar formation and moving setpoint
In this second scenario four drones navigate in the square planar formation of the previous experiment. As before, the formation is allowed to change its position and rotate around the vertical axis to avoid the moving person and the walls of the room. In this scenario, the centroid of the team of robots tracks a circular trajectory of radius 2 m and speed 0.3 m/s while avoiding the moving person. In four representative frames, see Fig. 12, we show a full avoidance maneuver where the team of robots lifts to avoid the person and then continues tracking the specified trajectory.
5.3.3 Single formation (with free 3D rotation) and static setpoint
In this third scenario four drones navigate again in the square formation, which can now rotate freely in 3D to avoid the moving person and the walls of the room. The team of robots minimizes the deviation between its centroid and the center of the room and tilts the formation to avoid the human. In Fig. 13 we show the robots at their preferred position and orientation, and three examples of the team of robots tilting the formation to avoid the moving person. In this set of experiments we observed that, while the method successfully computes tilted configurations for the team of robots, which are safe, their execution was not always robust due to the turbulence created by the drones and to their height sensors (sonar), which suffered interference when the drones were very close to each other.
5.3.4 Multiple formations and moving setpoint
In the fourth scenario three drones navigate in a triangle formation and can reconfigure to a line formation if advantageous. The team of robots optimizes (a) the formation type (triangle preferred, or line), (b) the centroid of the formation and (c) the orientation around the vertical axis. This is done to avoid the moving person and the walls of the room and to let the centroid of the team of robots track a circular trajectory of radius 2 m and speed 0.5 m/s. In Fig. 14 we show two sequences of three images each. The three drones are first shown in their preferred formation type (triangle) tracking the circular motion (Fig. 14a). When the person traps them against a side, they have to switch to a line formation (Fig. 14b), which then goes up to fly over the person (Fig. 14c), before returning to the preferred triangle formation once the area is clear (Fig. 14d). If the person runs towards the drones, and they have enough room, they may quickly go up to pass over the person (Fig. 14e) and continue tracking the circular trajectory (Fig. 14f).
5.3.5 Discussion on limitations
In multiple experiments we have observed the following.
The approach is safe under predictable movement of the human. If the human walks in the environment or runs at constant speed, the formation control method updates the parameters of the formation to successfully avoid the moving person. If the human makes abrupt changes in speed while running, then the optimization can become infeasible, due to the constant velocity assumption and the computation delay. If this happens, the individual low-level controllers, based on NMPC, avoid collisions with the moving person, and the formation is recovered as soon as the optimization becomes feasible again, typically in under a second.
Although the median computation time of the approach was 0.35 s, several instances took over one second to compute, see Fig. 9 and Sect. 5.1 for a discussion. This delay was noticeable at high obstacle speeds. A faster and bounded update rate is desirable for more fluid performance and shorter reaction times.
Higher robustness is achieved when, in the cost function of the optimization, Eq. (18), we set a lower weight for vertical deviations from the setpoint than for deviations in the horizontal plane. In this way we give preference to avoiding the human by lifting the formation rather than by a sideways motion. This helps avoid situations where the robots are trapped against a wall and have to quickly fly up to avoid a collision with the moving human.
The formation control method takes polytopes as obstacles. The volume occupied by the human was enclosed by a convex polytope. We chose a hexagonal prism with slightly tilted sides, i.e., the upper face was slightly smaller than the lower face. This serves two purposes: (a) it provides larger clearance around the body and the legs of the person, and (b) it helps bias the free-space convex polytope towards more clearance as height increases.
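Such a prism is straightforward to construct from two hexagonal rings of vertices, the top one with a smaller radius. A minimal sketch; the radii and height below are illustrative values, not the ones used in the experiments:

```python
import math

def human_prism(center, r_bottom=0.6, r_top=0.45, height=2.0):
    """Vertices of a hexagonal prism enclosing a person, with the upper
    face smaller than the lower one, so the sides tilt inwards.
    Radii and height are illustrative assumptions."""
    cx, cy, cz = center
    verts = []
    for radius, z in ((r_bottom, cz), (r_top, cz + height)):
        for k in range(6):
            a = math.pi / 3.0 * k
            verts.append((cx + radius * math.cos(a),
                          cy + radius * math.sin(a), z))
    return verts

verts = human_prism((0.0, 0.0, 0.0))
# 12 vertices: 6 on the bottom face, 6 on the (smaller) top face.
```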
Overall, the approach performed very well and was able to safely adapt the configuration of the formation in real time.
6 Conclusion
In this paper we considered a team of networked robots in which each robot only communicates with its neighbors. We showed that navigation of distributed teams of robots in formation among static and dynamic obstacles can be achieved via a constrained nonlinear optimization combined with consensus. The robots first compute an obstacle-free convex region and then optimize the formation parameters. In particular, non-convex environments can be handled. Thanks to the consensus on convex obstacle-free regions, the robots do not need to exchange the positions of all the obstacles; instead they compute, and exchange, the joint free space. This approach may present a lower computational cost, especially in scenarios with many obstacles, and requires substantially fewer communication messages than flooding for consensus.
In simulations with up to sixteen drones, and in experiments with up to four drones, we showed successful navigation in formation. The robots were able to reconfigure the formation when required, in order to avoid collisions with static and moving obstacles and to make progress. Last, but not least, the approach is general and could be adapted to other formation definitions and applications, such as collaborative transportation with mobile manipulators, as long as the formation can be defined by a set of equations that determine the outer vertices and the positions of the robots.
Since the approach is local, deadlocks may still occur. Yet, the consensus round on the best direction of motion could be extended to account for global planning performed by the individual robots. Another avenue of future work is splitting and merging into smaller and larger subteams, which also navigate in formation. Future work should also look at the integration of planning and sensing in real environments and at joint low-level and high-level planning.
Supplementary material
Supplementary material 1 (mp4 39148 KB)
References
 Alonso-Mora, J., Knepper, R. A., Siegwart, R., & Rus, D. (2015a). Local motion planning for collaborative multi-robot manipulation of deformable objects. In IEEE international conference on robotics and automation.
 Alonso-Mora, J., Montijano, E., Schwager, M., & Rus, D. (2016). Distributed multi-robot navigation in formation among obstacles: A geometric and optimization approach with consensus. In IEEE international conference on robotics and automation.
 Alonso-Mora, J., Baker, S., & Rus, D. (2017). Multi-robot formation control and object transport in dynamic environments via constrained optimization. The International Journal of Robotics Research, 36(9), 1000–1021.
 Alonso-Mora, J., Nägeli, T., Siegwart, R., & Beardsley, P. (2015b). Collision avoidance for aerial vehicles in multi-agent scenarios. Autonomous Robots, 39(1), 101–121.
 Augugliaro, F., Schoellig, A. P., & D'Andrea, R. (2012). Generation of collision-free trajectories for a quadrocopter fleet: A sequential convex programming approach. In IEEE/RSJ international conference on intelligent robots and systems.
 Ayanian, N., & Kumar, V. (2010a). Abstractions and controllers for groups of robots in environments with obstacles. In IEEE international conference on robotics and automation, Anchorage, AK (pp. 3537–3542).
 Ayanian, N., & Kumar, V. (2010b). Decentralized feedback controllers for multi-agent teams in environments with obstacles. IEEE Transactions on Robotics, 26(5), 878–887.
 Balch, T., & Hybinette, M. (2000). Social potentials for scalable multi-robot formations. In IEEE international conference on robotics and automation.
 Balch, T., & Arkin, R. C. (1998). Behavior-based formation control for multi-robot teams. IEEE Transactions on Robotics and Automation, 14(6), 926–939.
 Burger, M., Notarstefano, G., Allgower, F., & Bullo, F. (2012). A distributed simplex algorithm for degenerate linear programs and multi-agent assignments. Automatica, 48(9), 2298–2304.
 Chen, Y., Cutler, M., & How, J. P. (2015). Decoupled multiagent path planning via incremental sequential convex programming. In IEEE international conference on robotics and automation (ICRA).
 Deits, R., & Tedrake, R. (2014). Computing large convex regions of obstacle-free space through semidefinite programming. In Workshop on the algorithmic foundations of robotics.
 Derenick, J., Spletzer, J., & Kumar, V. (2010). A semidefinite programming framework for controlling multi-robot systems in dynamic environments. In IEEE conference on decision and control.
 Derenick, J. C., & Spletzer, J. R. (2007). Convex optimization strategies for coordinating large-scale robot formations. IEEE Transactions on Robotics, 23, 1252–1259.
 Desai, J. P., Ostrowski, J. P., & Kumar, V. (2001). Modeling and control of formations of nonholonomic mobile robots. IEEE Transactions on Robotics and Automation, 17(6), 905–908.
 Domahidi, A., & Jerez, J. (2016). FORCES Pro: Code generation for embedded optimization. https://www.embotech.com/FORCESPro.
 Domahidi, A., Zgraggen, A. U., Zeilinger, M. N., Morari, M., & Jones, C. N. (2012). Efficient interior point methods for multistage problems arising in receding horizon control. In IEEE conference on decision and control (CDC) (pp. 668–674). IEEE.
 Dong, X., Yu, B., Shi, Z., & Zhong, Y. (2015). Time-varying formation control for unmanned aerial vehicles: Theories and applications. IEEE Transactions on Control Systems Technology, 23(1), 340–348.
 Erdmann, M., & Lozano-Perez, T. (1987). On multiple moving objects. Algorithmica, 2, 477–521.
 Franchi, A., Masone, C., Grabe, V., Ryll, M., Bülthoff, H. H., & Giordano, P. R. (2012). Modeling and control of UAV bearing formations with bilateral high-level steering. International Journal of Robotics Research, 31, 1504–1525.
 Gill, P. E., Murray, W., & Saunders, M. A. (2002). SNOPT: An SQP algorithm for large-scale constrained optimization. SIAM Journal on Optimization, 12(4), 979–1006.
 Hatanaka, T., Igarashi, Y., Fujita, M., & Spong, M. W. (2012). Passivity-based pose synchronization in three dimensions. IEEE Transactions on Automatic Control, 57(2), 360–375.
 Keviczky, T., Borrelli, F., Fregene, K., Godbole, D., & Balas, G. J. (2008). Decentralized receding horizon control and coordination of autonomous vehicle formations. IEEE Transactions on Control Systems Technology, 16(1), 19–33.
 Kia, S. S., Cortes, J., & Martinez, S. (2016). Distributed convex optimization via continuous-time coordination algorithms with discrete-time communication. Automatica, 55(5), 254–264.
 Kuriki, Y., & Namerikawa, T. (2015). Formation control with collision avoidance for a multi-UAV system using decentralized MPC and consensus-based control. SICE Journal of Control, Measurement, and System Integration, 8(4), 285–294.
 Kushleyev, A., Mellinger, D., Powers, C., & Kumar, V. (2013). Towards a swarm of agile micro quadrotors. Autonomous Robots, 35(4), 287–300.
 Lin, Z., Francis, B., & Maggiore, M. (2005). Necessary and sufficient graphical conditions for formation control of unicycles. IEEE Transactions on Automatic Control, 50(1), 540–545.
 Lynch, N. (1997). Distributed algorithms. Burlington: Morgan Kaufmann Publishers.
 Michael, N., Zavlanos, M. M., Kumar, V., & Pappas, G. J. (2008). Distributed multi-robot task assignment and formation control. In IEEE international conference on robotics and automation.
 Montijano, E., & Mosteo, A. R. (2014). Efficient multi-robot formations using distributed optimization. In IEEE 53rd conference on decision and control.
 Montijano, E., Cristofalo, E., Zhou, D., Schwager, M., & Sagues, C. (2016). Vision-based distributed formation control without an external positioning system. IEEE Transactions on Robotics, 32(2), 339–351.
 Morgan, D., Subramanian, G. P., Chung, S.-J., & Hadaegh, F. Y. (2016). Swarm assignment and trajectory optimization using variable-swarm, distributed auction assignment and sequential convex programming. The International Journal of Robotics Research, 35(10), 1261–1285.
 Moshtagh, N., Michael, N., Jadbabaie, A., & Daniilidis, K. (2009). Vision-based, distributed control laws for motion coordination of nonholonomic robots. IEEE Transactions on Robotics, 25(4), 851–860.
 Mosteo, A. R., Montano, L., & Lagoudakis, M. G. (2008). Guaranteed-performance multi-robot routing under limited communication range. In Distributed autonomous robotic systems (pp. 491–502).
 Nägeli, T., Meier, L., Domahidi, A., Alonso-Mora, J., & Hilliges, O. (2017b). Real-time planning for automated multi-view drone cinematography. ACM Transactions on Graphics (TOG), 36(4), 132.
 Nägeli, T., Alonso-Mora, J., Domahidi, A., Rus, D., & Hilliges, O. (2017a). Real-time motion planning for aerial videography with dynamic obstacle avoidance and viewpoint optimization. IEEE Robotics and Automation Letters, 2(3), 1696–1703.
 Nestmeyer, T., Robuffo Giordano, P., Bülthoff, H. H., & Franchi, A. (2017). Decentralized simultaneous multi-target exploration using a connected network of multiple robots. Autonomous Robots, 41(4), 989–1011.
 Odelga, M., Stegagno, P., & Bülthoff, H. H. (2016). Obstacle detection, tracking and avoidance for a teleoperated UAV. In 2016 IEEE international conference on robotics and automation (ICRA) (pp. 2984–2990). IEEE.
 Oh, K.-K., & Ahn, H.-S. (2011). Formation control of mobile agents based on inter-agent distance dynamics. Automatica, 47(10), 2306–2312.
 Oh, K. K., Park, M. C., & Ahn, H. S. (2015). A survey of multi-agent formation control. Automatica, 53(3), 424–440.
 Quigley, M., Conley, K., Gerkey, B. P., Faust, J., Foote, T., Leibs, J., Wheeler, R., & Ng, A. Y. (2009). ROS: An open-source robot operating system. In IEEE ICRA workshop on open source software.
 Ren, W., & Beard, R. W. (2008). Distributed consensus in multi-vehicle cooperative control. Communications and control engineering. London: Springer.
 Sabattini, L., Secchi, C., & Fantuzzi, C. (2011). Arbitrarily shaped formations of mobile robots: Artificial potential fields and coordinate transformation. Autonomous Robots, 30, 385–397.
 Saha, I., Ramaithitima, R., Kumar, V., Pappas, G. J., & Seshia, S. A. (2014). Automated composition of motion primitives for multi-robot systems from safe LTL specifications. In IEEE/RSJ international conference on intelligent robots and systems.
 Schoch, M., Alonso-Mora, J., Siegwart, R., & Beardsley, P. (2014). Viewpoint and trajectory optimization for animation display with aerial vehicles. In IEEE international conference on robotics and automation (ICRA) (pp. 4711–4716). IEEE.
 Schwager, M., Julian, B. J., Angermann, M., & Rus, D. (2011). Eyes in the sky: Decentralized control for the deployment of robotic camera networks. Proceedings of the IEEE, 99(9), 1541–1561.
 Suzuki, T., Sekine, T., Fujii, T., Asama, H., & Endo, I. (2000). Cooperative formation among multiple mobile robot teleoperation in inspection task. In Proceedings of the 39th IEEE conference on decision and control (Vol. 1, pp. 358–363). IEEE.
 Turpin, M., Mohta, K., Michael, N., & Kumar, V. (2014). Goal assignment and trajectory planning for large teams of interchangeable robots. Autonomous Robots, 37(4), 401–415.
 Urcola, P., Lazaro, M. T., Castellanos, J. A., & Montano, L. (2017). Cooperative minimum expected length planning for robot formations in stochastic maps. Robotics and Autonomous Systems, 87, 38–50.
 Wurman, P. R., D'Andrea, R., & Mountz, M. (2008). Coordinating hundreds of cooperative, autonomous vehicles in warehouses. AI Magazine, 29(1), 9.
Copyright information
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.