On-board range-based relative localization for micro air vehicles in indoor leader–follower flight
Abstract
We present a range-based solution for indoor relative localization by micro air vehicles (MAVs), achieving sufficient accuracy for leader–follower flight. Moving forward from previous work, we removed the dependency on a common heading measurement by the MAVs, making the relative localization accuracy independent of magnetometer readings. We found that this restricts the relative maneuvers that guarantee observability, and that higher accuracy range measurements are required to compensate for the missing heading information, yet both disadvantages can be tackled. Our implementation uses ultra wideband, both for range measurements between MAVs and for sharing their velocities, accelerations, yaw rates, and height with each other. We showcased our implementation on a total of three Parrot Bebop 2.0 MAVs and performed leader–follower flight in a real-world indoor environment. The follower MAVs were autonomous and used only on-board sensors to track the same trajectory as the leader. They could follow the leader MAV in close proximity for the entire duration of the flights.
Keywords
Relative localization · Leader–follower · Micro air vehicles · Autonomous flight · Indoor
1 Introduction
Swarm robotics offers to make micro air vehicle (MAV) applications more robust, flexible, and scalable (Şahin 2005; Brambilla et al. 2013). These properties pertain to a group’s ability to remain operable under loss of individual members and to reconfigure for different missions. Furthermore, a cooperating swarm of MAVs could execute tasks faster than any single MAV. The envisioned applications of such multi-agent robotic systems are plentiful. Examples of interest are: cooperative surveillance and/or mapping (Saska et al. 2016; Schwager et al. 2009a; Achtelik et al. 2012), localization of areas of sensory interest (e.g. chemical plumes) (Hayes et al. 2003; Schwager et al. 2009b), the detection of forest fires (Merino et al. 2006), or search missions in hazardous environments (Beard and McLain 2003). In order to deploy a team of MAVs for such applications, there are certain behaviors that the MAVs should be capable of, such as collision avoidance (Coppola et al. 2018; Roelofsen et al. 2015) or leader–follower/formation flight (Vásárhelyi et al. 2014; Hui et al. 2014; Gu et al. 2006). These tasks are accomplished by the MAVs through knowledge of the relative location of (at least) the neighboring MAVs in the group, for which several solutions can be found in literature.
Often used are external systems that provide a global reference frame within which agents can extract both their own and the other MAVs’ positions. One example is motion capture systems (MCSs) (Schwager et al. 2009b; Mulgaonkar et al. 2015; Kushleyev et al. 2013; Michael et al. 2010; Turpin et al. 2012; Chiew et al. 2015; Hayes and Dormiani-Tabatabaei 2002). MCSs provide highly accurate location data, but only within the limited coverage provided by the system. Alternatively, global navigation satellite systems (GNSS) can be used to provide similar location data (Gu et al. 2006; Saska et al. 2016; Vásárhelyi et al. 2014; Quintero et al. 2013; Hauert et al. 2011). Although GNSS is widely available, it has relatively low accuracy compared to MCSs, and therefore large inter-MAV separations are required to guarantee safe flight (Nägeli et al. 2014). Furthermore, GNSS cannot reliably be used indoors due to signal attenuation (Liu et al. 2007) and can also be subject to multi-path issues in some urban environments or forests (Nguyen et al. 2016).
To increase the versatility of the solution, MAVs should thus use on-board sensors to determine the locations of neighboring MAVs. Often, vision based methods are employed, such as: onboard camera based systems (Nägeli et al. 2014; Iyer et al. 2013; Conroy et al. 2014; Roelofsen et al. 2015), or infrared sensor systems (Kriegleder et al. 2015; Stirling et al. 2012; Roberts et al. 2012). A drawback of these systems is that they have a limited field of view. This issue can be tackled by creating constructs with an array of sensors (Roberts et al. 2012) or by actively tracking neighboring agents (Nägeli et al. 2014) to keep them in the field of view. The first solution introduces a weight penalty, while the second solution severely limits freedom of motion and scalability as a consequence of the need for active tracking of neighbors. Therefore, neither solution is ideal for MAVs. A natively omni-directional sensor would be more advantageous; one such sensor is a wireless radio transceiver.
Guo et al. (2017) recently implemented an ultra wideband (UWB) radio-based system for this. Range measurements are fused with displacement information from each MAV to estimate the relative location between MAVs. However, their method requires each MAV to keep track of its own displacement with respect to an initial launching point. If this displacement is obtained through on-board sensors (for example, by integrating velocities), then it is subject to drift over time.
Alternatively, Coppola et al. (2018) demonstrated a Bluetooth-based relative localization method. Rather than using displacement information, the MAVs communicated their velocities, orientations, and heights to each other, and the signal strength was used as a range measurement.
The main contribution of this paper is an analysis of the consequences of removing the heading dependency in range based relative localization, leading to the development and implementation of a heading-independent relative localization and tracking method that is accurate enough for full on-board indoor leader–follower flight, as shown in Fig. 1. The analysis is provided by a formal observability analysis and by performing limit-case simulations. Differently from the work of Zhou and Roumeliotis (2008) and Martinelli and Siegwart (2005), the analysis also considers the inclusion of acceleration information, since this is commonly known by MAVs from their Inertial Measurement Unit (IMU). Furthermore, our analysis specifically focuses on the implications of removing a heading dependency on the performance of the relative localization filters and on the relative maneuvers that the agents can perform in order to guarantee that the filter remains observable. The observability analysis will show that the task of leader–follower flight is especially difficult with range-based relative localization methods, because it does not allow for the MAVs to fly parallel trajectories. We then use the insights gathered for the development and implementation of a heading-independent leader–follower system that we are able to use on-board of autonomous MAVs operating indoors. The MAVs rely only on on-board sensors, using UWB for both communication and relative ranging.
The structure of the paper is as follows. First, in Sect. 2, we compare the theoretical observability of range based relative localization systems both with and without a reliance on a common heading. The findings from Sect. 2 are verified through simulation in Sect. 3, where we also evaluate the difference in performance that can be expected. We carry this information forward in Sect. 4, where a heading-independent system is implemented on real MAVs, and where we show the results of our leader–follower experiments. The results are further discussed in Sect. 5. Finally, the overall conclusions are drawn in Sect. 6. Future work is discussed in Sect. 7.
2 Observability of the relative localization filter
In this section, an observability analysis is performed that specifically focuses on the practical implications of performing range based relative localization both with and without reliance on a common heading reference. Specifically, we will study the case where one MAV (denoted MAV 1) tracks another MAV (denoted MAV 2). Despite our focus on MAVs in particular, the conclusions that follow hold for any general system that can provide the same sensory information. Furthermore, the results can be extrapolated to more than two MAVs, as will be demonstrated in Sect. 4.
2.1 Preliminaries
We will conduct the analysis by studying the local weak observability of the systems (Hermann and Krener 1977). With an analytical test, briefly introduced in the following, local weak observability can be used to extract whether a specific state can be distinguished from other states in its neighborhood.
2.2 Reference frames
For the analyses that follow, consider the reference frames schematically depicted in Fig. 2. Denoted by \(\mathcal {I}\) is the Earth-fixed North-East-Down (NED) reference frame, which is assumed to be an inertial frame of reference. Denoted by \(\mathcal {H}_i (i=1,2)\) is a body-fixed reference frame belonging to MAV i. Its origin is coincident with MAV i’s centre of gravity, and its location with respect to the \(\mathcal {I}\) frame is represented by the vector \(\mathbf {p_i}\). \(\mathcal {H}_i\) is a horizontal frame of reference, such that the z-axis of the \(\mathcal {H}_i\) frame always remains parallel to that of the \(\mathcal {I}\) frame. The \(\mathcal {H}_i\) frame is rotated with respect to the \(\mathcal {I}\) frame only about the positive z-axis by an angle \(\psi _i\), where \(\psi _i\) is the heading that MAV i has with respect to North, also referred to as its yaw angle. The rate of change of \(\psi _i\) is represented by \(r_i\).
2.3 Nonlinear system description
We shall study the case where MAV 1 attempts to estimate the relative position of MAV 2. We use \(\mathbf {p}\) to denote this relative position, such that \(\mathbf {p}=\mathbf {p_2}-\mathbf {p_1}\) (see Fig. 2). Furthermore, let \(\mathbf {v_i}\) and \(\mathbf {a_i}\) be the linear velocities and accelerations of frame \(\mathcal {H}_i\) with respect to frame \(\mathcal {I}\) expressed in frame \(\mathcal {H}_i\), respectively. Finally, let \({\varDelta }\psi \) represent the difference in heading between MAVs 1 and 2, such that \({\varDelta }\psi = \psi _2 - \psi _1\).
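To make the frame conventions concrete, the planar rotation by \({\varDelta }\psi \) that maps vectors between the horizontal frames \(\mathcal {H}_2\) and \(\mathcal {H}_1\) can be sketched as follows (a minimal NumPy sketch; the variable names and example values are illustrative, not taken from the paper):

```python
import numpy as np

def R(dpsi):
    """Planar rotation by the heading difference dpsi (rad), mapping
    vectors expressed in frame H_2 into frame H_1."""
    c, s = np.cos(dpsi), np.sin(dpsi)
    return np.array([[c, -s],
                     [s,  c]])

# Relative position p = p2 - p1 (illustrative positions).
p1 = np.array([0.0, 0.0])
p2 = np.array([1.0, 1.0])
p = p2 - p1

# Heading difference between the two MAVs: dpsi = psi_2 - psi_1.
dpsi = np.pi / 2
v2_body = np.array([1.0, 0.0])   # velocity of MAV 2 in its own frame H_2
v2_in_H1 = R(dpsi) @ v2_body     # same velocity expressed in frame H_1
```

Since both horizontal frames share the z-axis of \(\mathcal {I}\), a single yaw rotation is sufficient to relate them.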
Since the horizontal plane of \(\mathcal {H}_i\) matches the horizontal plane of \(\mathcal {I}\), height of the MAVs from the ground can be treated as a decoupled dimension. This does not affect the observability result as long as the MAVs are both capable of measuring and comparing their own height, which is the case. Therefore, for brevity, height will not be included in the following analysis. The vectors for the relative position \(\mathbf {p}\), the velocity \(\mathbf {v_i}\), and the acceleration \(\mathbf {a_i}\) can thus be expanded as 2D vectors: \(\mathbf {p}^\intercal = [p_x,p_y]^\intercal \), \(\mathbf {v_i}=[v_{x,i},v_{y,i}]^\intercal \), \(\mathbf {a_i}=[a_{x,i},a_{y,i}]^\intercal \), \(i=1,2\).
- \(\sum _{A}\): The scenario where \(\psi _1\) and \(\psi _2\) are observed is equivalent to \({\varDelta }\psi \) (the difference in headings) being observed. Therefore, for \(\sum _A\), the observation model is:$$\begin{aligned} \mathbf { y_A} = \mathbf {h_A(x)} = \left[ {\begin{array}{*{20}{c}} {h_{A1}(\mathbf {x})} \\ {h_{A2}(\mathbf {x})} \\ {\mathbf {h_{A3}}(\mathbf {x})}\\ {\mathbf {h_{A4}}(\mathbf {x})} \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} {\frac{1}{2}\mathbf {p}^\intercal \mathbf {p}}\\ {{\varDelta }\psi }\\ {\mathbf {v_1}}\\ {\mathbf {v_2}} \end{array}} \right] \end{aligned}$$(12)
Note that the observation equation \(h_{A1}(\mathbf {x})\) is slightly modified with regards to the previously mentioned measurements. Rather than observing the range between the two MAVs (i.e. \(||\mathbf {p}||_2\)), half the squared range is observed (i.e. \(\frac{1}{2}\mathbf {p}^\intercal \mathbf {p}\)). This change makes the observability analysis more convenient without affecting its result. Both \(||\mathbf {p}||_2\) and \(\frac{1}{2}\mathbf {p}^\intercal \mathbf {p}\) contain the same information as far as observability of the system is concerned (Zhou and Roumeliotis 2008).
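The convenience of the squared form can be illustrated by comparing the gradients of the two candidate observations (a small NumPy sketch under the definitions above):

```python
import numpy as np

p = np.array([3.0, 4.0])   # an illustrative relative position

# Plain range vs. half squared range (h_A1 in Eq. 12).
rng = np.linalg.norm(p)    # ||p||_2
h_sq = 0.5 * p @ p         # 0.5 * p^T p

# Gradients with respect to p: the squared form is polynomial in the
# state (the gradient is simply p), which keeps the Lie derivatives
# simple, while the plain range carries a 1/||p|| factor.
grad_sq = p
grad_rng = p / rng

# The two gradients are parallel (they differ by the scalar 1/||p||),
# so the rank of the observability matrix is unchanged.
cross = grad_sq[0] * grad_rng[1] - grad_sq[1] * grad_rng[0]
assert abs(cross) < 1e-12
```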
- \(\sum _{B}\): In this case, the headings of the MAVs are not measured, and it is thus not possible to observe the difference in heading \({\varDelta }\psi \) directly. For \(\sum _B\), the observation model is:$$\begin{aligned} \mathbf {y_B} = \mathbf {h_B(x)} = \left[ {\begin{array}{*{20}{c}} {h_{B1}(\mathbf {x})} \\ {\mathbf {h_{B2}}(\mathbf {x})}\\ {\mathbf {h_{B3}}(\mathbf {x})} \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} {\frac{1}{2}\mathbf {p}^\intercal \mathbf {p}}\\ {\mathbf {v_1}} \\ {\mathbf {v_2}} \end{array}} \right] \end{aligned}$$(13)
2.4 Observability analysis with a common heading reference
From the full observability condition in Eq. 19, three more intuitive (necessary, but not sufficient) conditions can be extracted:
- I. $$\begin{aligned} \mathbf {p} \ne \mathbf {0}_{2\times 1} \end{aligned}$$(20)
- II. $$\begin{aligned} \mathbf {v_1} \ne \mathbf {0}_{2\times 1} \ \mathtt {or} \ \mathbf {v_2} \ne \mathbf {0}_{2\times 1} \end{aligned}$$(21)
- III. $$\begin{aligned} \mathbf {v_1} \ne \mathbf {R}\mathbf {v_2} \end{aligned}$$(22)
Whilst these three conditions are easier to consider, it should be noted that they form only a subset of the conditions imposed by Eq. 19. For example, the scenario where MAV 2 is stationary, and MAV 1 flies straight towards MAV 2, does not violate any of these three conditions. It does, however, violate Eq. 19. Therefore, the observability of a state and input combination should be checked against the full condition in Eq. 19.
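The three intuitive conditions, and the caveat that they are only necessary, can be summarized in a short check (an illustrative NumPy sketch; the function name and tolerance are our own choices):

```python
import numpy as np

def R(dpsi):
    """Rotation by the heading difference dpsi (rad)."""
    c, s = np.cos(dpsi), np.sin(dpsi)
    return np.array([[c, -s], [s, c]])

def intuitive_conditions_A(p, v1, v2, dpsi, tol=1e-9):
    """Check the three intuitive conditions (Eqs. 20-22) for Sigma_A.
    They are necessary but not sufficient: passing all three does not
    guarantee that the full condition of Eq. 19 holds."""
    cond1 = np.linalg.norm(p) > tol                   # I:   p != 0
    cond2 = (np.linalg.norm(v1) > tol or
             np.linalg.norm(v2) > tol)                # II:  some motion
    cond3 = np.linalg.norm(v1 - R(dpsi) @ v2) > tol   # III: v1 != R v2
    return cond1 and cond2 and cond3

# The counter-example from the text: MAV 2 stationary, MAV 1 flying
# straight towards it. All three intuitive conditions hold, yet the
# full condition of Eq. 19 is still violated.
p = np.array([1.0, 0.0])
v1 = np.array([1.0, 0.0])
v2 = np.zeros(2)
assert intuitive_conditions_A(p, v1, v2, dpsi=0.0)
```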
2.5 Observability analysis without a common heading reference
Analogously, the full observability condition for \(\sum _B\) in Eq. 33 yields three intuitive (necessary, but not sufficient) conditions:
- I. $$\begin{aligned} \mathbf {p} \ne \mathbf {0}_{2\times 1} \end{aligned}$$(34)
- II. $$\begin{aligned}&(\mathbf {v_1} \ne \mathbf {0}_{2\times 1} \ \mathtt {or} \ \mathbf {a_1} \ne \mathbf {0}_{2\times 1}) \ \mathtt {and} \nonumber \\&\displaystyle (\mathbf {v_2} \ne \mathbf {0}_{2\times 1} \ \mathtt {or} \ \mathbf {a_2} \ne \mathbf {0}_{2\times 1}) \end{aligned}$$(35)
- III. $$\begin{aligned} \mathbf {v_1} \ne s\mathbf {R v_2} \ \mathtt {or} \ (\mathbf {a_1} \ne \mathbf {0}_{2\times 1} \ \mathtt {or} \ \mathbf {a_2} \ne \mathbf {0}_{2\times 1}) \end{aligned}$$(36)

where s is an arbitrary scalar constant.
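The parallel-up-to-scale test in condition III can be expressed with a 2D cross product (an illustrative sketch; the function name and tolerance are our own):

```python
import numpy as np

def parallel_up_to_scale(v1, v2, dpsi, tol=1e-9):
    """True if v1 = s * R(dpsi) v2 for some scalar s: the velocities are
    parallel once expressed in a common frame, regardless of magnitude."""
    c, sn = np.cos(dpsi), np.sin(dpsi)
    v2r = np.array([c * v2[0] - sn * v2[1],
                    sn * v2[0] + c * v2[1]])
    # 2D cross product: zero iff v1 and R v2 are parallel (or one is zero).
    return abs(v1[0] * v2r[1] - v1[1] * v2r[0]) < tol

# For Sigma_B, different speeds do not help: v1 = 2 v2 is still parallel,
# so condition III can only be rescued by a non-zero acceleration.
v2 = np.array([1.0, 0.0])
assert parallel_up_to_scale(2.0 * v2, v2, dpsi=0.0)
```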
In order to study these intuitive conditions in further detail, we evaluated how the observability of the system is affected once the relative position \(\mathbf {p}\) between the MAVs changes. By varying the \(p_x\) and \(p_y\) values of the vector \(\mathbf {p}\) around the originally set values for \(\mathbf {p}\) (as in Fig. 3), we analyzed the observability of the system for different relative positions, while keeping the velocities and accelerations constant. The measure for observability was obtained by interpreting the meaning of Eq. 33. It essentially tells that the left hand side of the equation should not be parallel to the relative position vector \(\mathbf {p}\). Therefore, a practical measure of observability is how far away the left hand side of Eq. 33 is from being parallel to \(\mathbf {p}\), which can be tested with the cross product. The absolute value of the cross product is then used as a measure of the observability of the system. This paper considers a cross product less than a value of 1 to be unobservable. In theory, only when the cross product is 0 does it actually represent an unobservable condition. However, such a threshold facilitates visibility on the plots and provides insight on what the near-unobservable conditions are and their proportion in relation to the remaining conditions.
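The cross-product observability measure described above can be sketched as follows (function names are illustrative; the threshold of 1 follows the plotting convention of this paper):

```python
import numpy as np

def observability_measure(lhs, p):
    """Absolute 2D cross product between the left-hand side of Eq. 33 and
    the relative position p: zero when the two vectors are parallel,
    which corresponds to an unobservable condition."""
    return abs(lhs[0] * p[1] - lhs[1] * p[0])

def near_unobservable(lhs, p, threshold=1.0):
    # Strictly, only a measure of exactly 0 is unobservable; the
    # threshold of 1 is used to also flag near-unobservable conditions.
    return observability_measure(lhs, p) < threshold

p = np.array([0.0, 3.0])
assert observability_measure(np.array([1.0, 0.0]), p) == 3.0  # observable
assert near_unobservable(np.array([0.0, 1.0]), p)             # parallel
```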
For the case of the second (Eq. (35), Fig. 4a) and the third intuitive condition (Eq. (36), Fig. 4c) it can be seen that a varying \(\mathbf {p}\) does not affect the unobservability in the color map. Once an acceleration vector is added to the state of MAV 1 in both cases, specifically \(\mathbf {a}_1~=[0.3~0.3]^\intercal \), the color plots in Fig. 4b, d show that for a set of relative positions, the system does become observable again. However, the chances of the MAVs ending up in an unobservable state are still significant within an operating area of \(100~\mathrm{m^2}\).
The three intuitive conditions we extracted are only a subset of all conditions imposed by Eq. 33. This means that there exist state and input combinations that satisfy the three intuitive conditions, but that do not satisfy Eq. 33. In order to study what the implications of the full unobservability condition in Eq. 33 are, we used the Nelder–Mead simplex method to find other points in the state and input space that violate the full observability condition. Two examples are shown in Fig. 3c, d. These scenarios do not violate any of the intuitive conditions given by Eqs. 34–36. The relative position is non-zero, both MAVs have non-zero velocities and accelerations, and the velocity vectors are not parallel. Nevertheless, they violate Eq. 33. Based on this, color maps for the unobservable conditions in Fig. 3c, d are given in Fig. 4e, f, respectively.
Both color maps of Fig. 4e, f clearly show a non-linear relationship between the relative position vector \(\mathbf {p}\) and the observability of the system. Moreover, the two maps show different non-linear relationships: Fig. 4e shows a more hyperbolic relationship, whereas the unobservable region in Fig. 4f looks more elliptical. It can be shown that other conditions yield yet other relationships between the observability of the system and the relative position. Moreover, these relationships only show what happens in two dimensions, namely the two entries of the vector \(\mathbf {p}\); in reality, the observability condition in Eq. 33 presents an 11-dimensional problem. It is therefore still difficult to deduce general rules from these results. What the latter two color maps do have in common is that the unobservable relative positions are in all cases vastly outnumbered by the observable relative positions. This is different from what was observed for situations that violate any of the more intuitive conditions in Eqs. 35 and 36.
2.6 Comparison of the two systems
Finally, the results from the observability analysis of both systems will be compared. These will show what the practical implications are when switching from a system that relies on a common heading reference to a system that does not.
A primary result of the analysis is that removing the relative heading measurement results in a system that requires at least one extra Lie derivative in the range observation to make the system locally weakly observable. This is an important result, because it tells us that the heading-independent system \(\sum _B\) relies more heavily on the range equation than \(\sum _A\). Without a heading observation, the range measurement serves to estimate a total of three states, as opposed to two in \(\sum _A\). Some of this information is contained in the second derivative of the range observation, and it is a well known fact that the derivative of a noisy signal will be even noisier. In practice, this means that any system that wishes to perform range-based relative localization without a heading dependency needs an accurate and low-noise range measurement.
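The noise amplification under differentiation can be made concrete with finite differences (an illustrative NumPy sketch; the signal, sampling rate, and noise level are arbitrary examples, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.05                                # e.g. a 20 Hz sampling rate
t = np.arange(0.0, 20.0, dt)
clean = np.sin(t)                        # a smooth range-like signal
noisy = clean + rng.normal(0.0, 0.1, t.size)

d1 = np.diff(noisy) / dt                 # first finite difference
d2 = np.diff(noisy, n=2) / dt**2         # second finite difference

# Each differentiation multiplies the white-noise level by roughly
# sqrt(2)/dt, so the second derivative is dominated by noise even
# though the underlying signal is smooth.
print(np.std(noisy - clean))             # around 0.1
print(np.std(d1))                        # around 3
print(np.std(d2))                        # around 100
```

This is why a heading-independent filter, which draws information from the second derivative of the range, demands a low-noise range sensor.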
Another important result is that the criteria posed for \(\sum _B\) specify that both MAVs must be moving. Contrarily, the criteria for \(\sum _A\) specify that only one of the MAVs must be moving. Whilst this result might not be as relevant for MAV teams, as the MAVs will typically be moving anyway, this result can be important for other applications of range-based relative localization. Think, for example, of the case where a single static beacon is used to estimate the position of a flying MAV using only range sensing and communication. The results of our analysis show that \(\sum _B\) is not observable in this case, and thus a common heading reference must be known for such a system to work or, alternatively, the MAV must track the beacon and then communicate its estimate back to the beacon. Note that, in the case where one of the participants is not moving, if we were to continue our analysis of \(\sum _B\) to higher order Lie derivatives then it would still not be possible to make the observability matrix full rank, so that the condition holds generally.
A third difference is found in the condition for parallel movement of the two MAVs. \(\sum _A\) requires that the MAVs should not move in parallel at the same speed, meaning that there should be a non-zero relative velocity between the two MAVs. Instead, \(\sum _B\) requires that the MAVs should not be moving in parallel regardless of speed. Therefore, even if the second MAV were to be moving twice as fast as the first, the filter would not be observable as long as the direction of movement is the same. However, \(\sum _B\) can bypass this condition in some cases if either of the MAVs is also simultaneously accelerating. Similarly, it can be shown that \(\sum _A\) is able to bypass the parallel motion condition with acceleration, although a second order Lie derivative would be necessary in that case.
3 Verification through simulations
In this section, we further investigate the conclusions drawn from the analytical observability analysis. At first, a kinematic, noise-free study is performed to verify and confirm the differences in the observability conditions for \(\sum _A\) and \(\sum _B\). Afterwards, the influence of noise and disturbances on the filter are studied.
3.1 Filter design
The filter of choice, used throughout the rest of this paper, is an Extended Kalman Filter (EKF), since this type of filter fits intuitively with how the state-space system was described in Sect. 2. The EKF also uses a state differential model and an observation model. The state differential model can thus be kept exactly as the one given earlier in Eq. 9. The observation models for \(\sum _A\) and \(\sum _B\) are also kept almost the same as given in Eqs. 12 and 13, with the only adjustment that now the full range \(||\mathbf {p}||_2\) is observed, rather than half the squared range \(\frac{1}{2}\mathbf {p}^\intercal \mathbf {p}\). Additionally, in line with earlier research on range-based relative localization on real robots (Coppola et al. 2018), we decided to use an EKF on-board of the real-world MAVs because of its low processing and memory requirements.
An EKF has parameters that need to be tuned, namely: the initial state, the system and measurement noise matrices, and the initial state covariance matrix. The initial state is an important setting that will be described where appropriate in the next sections. The matrices are always tuned to correspond to the actual expected values. The measurement noise matrix is tuned based on the expected quality of the measurement variables, and similarly for the system noise matrix. However, since some of the simulations also make use of perfect measurements and since a zero entry in the measurement noise matrix is not possible, the corresponding entries are then given a small value of \(0.1~\mathrm{m}\). We use \(0.1~\mathrm{m}\) based on what is eventually used on the EKF on-board of the real MAVs. By using UWB antennas for range measurements, we can expect standard deviations of 0.1–\(0.3~\mathrm{m}\) around the true value. Our experimental set-up is described in Sect. 4.3.
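The predict/update structure of such a filter can be outlined with a minimal EKF skeleton (an illustrative sketch; the actual state, models, and on-board implementation differ):

```python
import numpy as np

class EKF:
    """Minimal Euler-discretized EKF skeleton (names are illustrative)."""

    def __init__(self, x0, P0, Q, Rm):
        # Initial state, state covariance, system noise, measurement noise.
        self.x, self.P, self.Q, self.R = x0, P0, Q, Rm

    def predict(self, f, F, dt):
        # f: state differential model x_dot = f(x); F: its Jacobian at x.
        self.x = self.x + f(self.x) * dt
        Phi = np.eye(len(self.x)) + F(self.x) * dt
        self.P = Phi @ self.P @ Phi.T + self.Q * dt

    def update(self, z, h, H):
        # h: observation model; H: its Jacobian at x.
        y = z - h(self.x)                       # innovation
        Hx = H(self.x)
        S = Hx @ self.P @ Hx.T + self.R         # innovation covariance
        K = self.P @ Hx.T @ np.linalg.inv(S)    # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ Hx) @ self.P
```

As a sanity check, feeding repeated measurements of a constant scalar state makes the estimate converge to the measured value.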
3.2 Kinematic, noise-free study of unobservable situations
In the first simulated study, the two MAVs that are studied have kinematic trajectories that can be described analytically. The MAVs also have perfect noise-free knowledge of the inputs and measurements. The kinematic and noise-free situation is used to confirm conclusions drawn in the observability analysis performed in Sect. 2.
- 1.
MAV 1 (host) is moving and MAV 2 (tracked) is stationary.
- 2.
MAV 1 (host) is stationary and MAV 2 (tracked) is moving.
- 3.
MAV 1 (host) and MAV 2 (tracked) are both moving parallel to each other at different speeds.
The simulations will show whether these different scenarios have convergent EKFs or not. The focus of this analysis is on the estimation of the relative position \(\mathbf {p}\) and the relative heading \({\varDelta }\psi \). Since the velocities are observed directly, these are observable regardless of the situation, and are thus not shown.
The initial velocities of MAVs 1 and 2 are initialized to their true value, since these are not the variables of interest in this analysis. The initial position and relative heading are initialized with an error, the specifics of which will be given in the respective scenarios. The yaw rates and headings of both MAVs are kept at \(0~\mathrm{rad/s}\) and \(0~\mathrm{rad}\), respectively. The EKF runs at a frequency of \(50~\mathrm{Hz}\).
The error measure throughout this paper is the Mean Absolute Error (MAE). The separate x and y errors in the relative location estimate \(\mathbf {p}\) are combined according to the norm \(||\mathbf {p}||_2\). This choice was made because the separate errors in x and y directions offer little additional insight and are usually identical.
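This error measure can be sketched as follows (illustrative; the function name is our own):

```python
import numpy as np

def position_mae(p_est, p_true):
    """Mean Absolute Error on the combined relative position estimate:
    at each time step the x and y errors are combined via the 2-norm,
    and the result is averaged over the run."""
    err = np.linalg.norm(np.asarray(p_est) - np.asarray(p_true), axis=1)
    return float(err.mean())
```

For example, a constant per-step offset of \([1,0]\) or \([0,1]\) yields an MAE of 1 m.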
3.2.1 MAV 1 (host) moving, MAV 2 (tracked) stationary
In the simulation, MAV 1 (the host) is positioned at \(\mathbf {p}_{1,0}^\intercal = [0,0]^\intercal \) and has a constant velocity \(\mathbf {v_1}^\intercal = [1,0]^\intercal \). MAV 2 (the tracked MAV) is positioned at \(\mathbf {p}_{2,0}^\intercal = [1,1]^\intercal \) with no velocity or acceleration. The initial guess of MAV 1 for the relative position and heading of MAV 2 is \([\hat{\mathbf {p}}_0^\intercal ,\hat{{\varDelta }\psi }_0]^\intercal = [0.1,0.1,1]^\intercal \). The initial estimation errors in \(p_x\), \(p_y\), and \({\varDelta }\psi \) are thus 0.9, 0.9, and 1, respectively.
As can be seen in Fig. 5, for \(\sum _A\) both the relative position error \(\mathbf {p}\) and the relative heading error \({\varDelta }\psi \) quickly converge to 0. The observability analysis of \(\sum _B\), by contrast, has shown that this scenario is not locally weakly observable, because the second condition is violated: one of the MAVs is not moving. However, Fig. 6 shows that the \(||\mathbf {p}||_2\) error converges to 0 just as rapidly as for \(\sum _A\). A more thorough inspection shows that the unobservable state of the system is in fact \({\varDelta }\psi \), which is the one that does not converge. This is a favorable result, since the relative position is typically the variable of interest, rather than the difference in heading.
3.2.2 MAV 1 (host) stationary, MAV 2 (tracked) moving
This time, because \(\mathbf {v_2}\) is not equal to \(\mathbf {0}\), the state differential equation for the relative position of MAV 2 has a dependency on the relative heading state \({\varDelta }\psi \). Since \({\varDelta }\psi \) does not converge to its true value, and eventually settles at an error of approximately \(1.5~\mathrm{rad}\), there is a large inaccuracy in the state differential equation for \(\dot{\mathbf {p}}\). This consequently results in an ever increasing error in \(\mathbf {p}\), because MAV 1 essentially ‘thinks’ that MAV 2 is flying in a different direction than it really is.
3.2.3 MAV 1 (host) and MAV 2 (tracked) moving in parallel at different speeds
Finally, the case where both MAVs are moving in parallel, but at different speeds, is studied. Once more, most of the parameters are kept the same as those presented under case 1. This time, the velocity of MAV 2 is set to \(\mathbf {v_2}^\intercal =[1,0]^\intercal \) and the velocity of MAV 1 is set in a parallel direction, but with twice the magnitude (\(\mathbf {v_1}^\intercal =2\mathbf {v_2}^\intercal =[2,0]^\intercal \)).
According to the observability analysis, this is one of the limit cases where \(\sum _A\) is still just observable, but \(\sum _B\) is not. Indeed, Fig. 9 shows convergent behavior for \(\sum _A\), whereas Fig. 10 shows divergence for \(\sum _B\). Note that the filter for \(\sum _B\) has a decreasing error in \({\varDelta }\psi \). However, the convergence of \({\varDelta }\psi \) is very slow. Furthermore, the error for \(\mathbf {p}\) continues to rise indefinitely.
3.3 Kinematic noisy range measurements study of observable situation
Whilst a noise-free study demonstrates the feasibility of the proposed filter and can verify the differences between \(\sum _A\) and \(\sum _B\), it is also important to study the filter’s performance when presented with noisy data. Not only is this more representative of the filter’s performance in practice, but it can also be used to verify one of the main conclusions drawn in the observability study, namely that \(\sum _B\) needs information present in the second derivative of the range data to be observable, compared to only the first derivative for \(\sum _A\). It is consequently expected that, with all other parameters fixed, \(\sum _B\) will perform increasingly worse as the range measurement noise increases.
In this study, we steer away from unobservable scenarios. The intent now is to compare the performance of the two filters in scenarios where both are known to be observable. For this reason, the trajectories of MAV 1 (host) and MAV 2 (tracked) are designed to stay clear of the unobservable situations and to excite the filter properly through relative motion. The trajectories devised for this study are perfectly circular, and we assume that the MAVs fly at the same height.
By setting \(\rho _2=4~\mathrm{m}\) and \(\omega _2=\frac{2\pi }{20}~\mathrm{rad/s}\), the trajectory of MAV 2 becomes a circle with a radius of \(4~\mathrm{m}\) that is traversed in \(20~\mathrm{s}\). To comply with the previously defined constraints, \(\rho _1\) and \(\omega _1\) are set to \(3~\mathrm{m}\) and \(-\frac{2\pi }{20}~\mathrm{rad/s}\), respectively. These values are representative of what a real MAV should easily be capable of and result in relative velocities of about \(1~\mathrm{m/s}\) in the x and y directions between the two MAVs.
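Under these parameters, the trajectories and the resulting true range can be generated as follows (a sketch assuming zero initial phase for both circles, a detail the text does not specify):

```python
import numpy as np

# Counter-rotating circular trajectories with the rho/omega values above.
rho2, omega2 = 4.0, 2 * np.pi / 20      # MAV 2: 4 m radius, 20 s period
rho1, omega1 = 3.0, -2 * np.pi / 20     # MAV 1: 3 m radius, opposite sense

t = np.arange(0.0, 20.0, 0.05)          # one full revolution at 20 Hz
p1 = np.stack([rho1 * np.cos(omega1 * t),
               rho1 * np.sin(omega1 * t)], axis=1)
p2 = np.stack([rho2 * np.cos(omega2 * t),
               rho2 * np.sin(omega2 * t)], axis=1)

# Relative position and true range between the MAVs at each time step.
p = p2 - p1
rng = np.linalg.norm(p, axis=1)
```

With these phases the inter-MAV range oscillates between 1 m and 7 m over one period, providing the relative motion needed to excite the filter.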
The study tests the performance of the relative localization filter as seen from the perspective of MAV 1, which is tracking MAV 2. The filter is fed perfect information on all state and input values, except for the measurement of the range \(||\mathbf {p}||_2\) between the two MAVs. The range measurements are artificially distorted with increasingly heavy Gaussian white noise. The measured range fed to the filter is thus \(||\mathbf {p}||_{2,m}= ||\mathbf {p}||_2 + n(\sigma _{R})\), where \(n(\sigma _{R})\) is a Gaussian white noise signal with zero mean and standard deviation \(\sigma _{R}\). The standard deviations tested are \(0~\mathrm{m}\) (noise free), \(0.1~\mathrm{m}\), \(0.25~\mathrm{m}\), \(0.5~\mathrm{m}\), \(1~\mathrm{m}\), \(2~\mathrm{m}\), \(4~\mathrm{m}\), and \(8~\mathrm{m}\). In practice, a standard deviation of \(8~\mathrm{m}\) would be considered quite high, but it is chosen intentionally so that a significant difference in the error can be observed. Since this study keeps all other measurements and inputs noise free, the noise on the range measurement needs to be higher to produce a significant increase in the localization error.
This time the EKF runs at \(20~\mathrm{Hz}\), which is more representative of our real-world set-up, discussed later in Sect. 4. The described flight trajectory is simulated for \(20~\mathrm{s}\) each run, which is thus one complete revolution of the circular trajectory. The EKF is initialized to the true state to exclude the effects of initialization.
For each particular noise standard deviation, both the filter for \(\sum _A\) and for \(\sum _B\) are simulated with 1000 different noise realizations. For each realization the MAE of the estimated \(\mathbf {p}\) with respect to its true value is computed, again by considering the combined error in the estimate of \(||\mathbf {p}||_2\). After 1000 realizations, the Average MAE (AMAE) is computed to extract the average performance for all noise realizations.
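The Monte Carlo procedure can be outlined as follows (a sketch; `run_filter` is a placeholder for the full EKF simulation described above, not the paper's code):

```python
import numpy as np

def amae(run_filter, sigma_r, n_runs=1000, seed=0):
    """Average Mean Absolute Error over n_runs noise realizations for
    one range-noise level sigma_r. run_filter(noise) must return the
    per-step ||p_hat - p||_2 errors of one simulated flight."""
    rng = np.random.default_rng(seed)
    maes = []
    for _ in range(n_runs):
        noise = rng.normal(0.0, sigma_r, size=400)  # 20 s at 20 Hz
        maes.append(run_filter(noise).mean())       # MAE of this run
    return float(np.mean(maes))                     # average over runs
```

Averaging the per-run MAEs in this way removes the dependence on any single noise realization.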
Average Mean Absolute Error for \(\sum _A\) and \(\sum _B\) over 1000 runs with different noise standard deviations on the range measurement

| Range noise \(\sigma _{R}\) (m) | 0 | 0.1 | 0.25 | 0.5 | 1 | 2 | 4 | 8 |
|---|---|---|---|---|---|---|---|---|
| \(\sum _A\) AMAE (cm) | 2.3 | 3.4 | 6.2 | 10.8 | 19.3 | 37.7 | 72.9 | 118.2 |
| \(\sum _B\) AMAE (cm) | 2.7 | 4.5 | 8.5 | 15.1 | 27.1 | 52.5 | 101.8 | 172.8 |
This result is in line with the analytical results presented in Sect. 2. However, it also raises the question of whether removing the dependency on a common heading reference poses any advantage, since \(\sum _A\) performs consistently better than \(\sum _B\). The reason for this result lies in the fact that the studied scenario uses perfect measurements for all the sensors except for the measured range. As mentioned in the introduction, the heading observation is notoriously troublesome and unreliable, especially in an indoor environment (Afzal et al. 2010). Therefore, it would be valuable to study what would happen to this analysis in the case where the heading estimate is not perfect. This is presented next.
3.4 Kinematic noisy range measurements and heading disturbance study for observable situation
In order to compare the results obtained with an imperfect heading measurement to those obtained in the previous section, the same trajectories are simulated (as in Eqs. 38 and 37 for MAVs 1 and 2, respectively). All the other simulation parameters are also kept the same, with one exception: this time, a disturbance is introduced on the heading measurement. The simulated disturbance is modeled to resemble the way a real local perturbation in the magnetic field would perturb a heading estimate. The actual magnetic perturbation and the corresponding heading error are taken from the work of Afzal et al. (2010), where indoor magnetic perturbations are studied. The resulting disturbance on the heading estimate resembles a Gaussian curve, and it is therefore modeled as such in this analysis.
Several amplitudes of the disturbance are tested, namely \(0~\mathrm{rad}\), \(0.25~\mathrm{rad}\), \(0.5~\mathrm{rad}\), \(1~\mathrm{rad}\), and \(1.5~\mathrm{rad}\). The final amplitude of \(1.5~\mathrm{rad}\) results in a maximum heading estimate error of almost \(85^{\circ }\), which is approximately equal to the amplitude of the disturbance shown by Afzal et al. (2010). Note that the disturbance is introduced directly on the measurement of \({\varDelta }\psi \) (the difference in headings between the two MAVs). This is the situation that would occur if one of the two MAVs were flying in a locally perturbed area.
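A Gaussian-shaped heading disturbance of this kind can be generated as below; the timing parameters `t_center` and `width` are illustrative assumptions, not values taken from Afzal et al. (2010):

```python
import numpy as np

def heading_disturbance(t, amplitude, t_center=10.0, width=2.0):
    """Gaussian-shaped disturbance on the heading-difference measurement.

    amplitude: peak disturbance in rad (0, 0.25, 0.5, 1, 1.5 were tested).
    t_center, width: hypothetical timing of the perturbed flight segment.
    """
    return amplitude * np.exp(-0.5 * ((t - t_center) / width) ** 2)

t = np.arange(0.0, 20.0, 0.05)           # 20 s flight sampled at 20 Hz
dpsi_true = np.zeros_like(t)             # stand-in true heading difference
dpsi_measured = dpsi_true + heading_disturbance(t, amplitude=1.5)
```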
Since the parameter of interest is how the filter for \(\sum _B\) compares to the filter for \(\sum _A\), the results are represented as a percentage comparison of the relative localization errors between the two filters. This is visually presented in Fig. 14. In the figure, a positive % means that the filter for \(\sum _B\) performs worse than the filter for \(\sum _A\). At 0%, marked by a dotted line, both filters perform equally well.
The comparison shows that as the applied disturbance amplitude on the heading measurement provided to system \(\sum _A\) is increased, the region for which \(\sum _B\) performs better than \(\sum _A\) expands. In the case of the largest disturbance, with \(A_d\) equal to \(1.5~\mathrm{rad}\), filter \(\sum _B\) even performs better at a range noise \(\sigma _R\) equal to \(8~\mathrm{m}\).
4 Leader–follower flight experiment
In this section we demonstrate the heading-independent filter in practice, which is used for leader–follower flight in an indoor scenario.
4.1 Leader–follower flight considerations
- 1.
The first condition (Eq. 34) specifies that the relative position between leader and follower must be non-zero. This condition has little implication for leader–follower flight, other than that the follower must follow the leader at a non-zero horizontal distance, which is typically the objective anyway.
- 2.
The second condition (Eq. 35) tells us that both MAVs must be moving. As far as leader–follower flight is concerned, this is automatically satisfied as long as the leader is not stationary.
- 3.
The third condition (Eq. 36) is especially impactful for leader–follower flight. It specifies that the MAVs should not be moving in parallel (regardless of speed), unless they are also accelerating. A lot of research on leader–follower flight aims to design control laws that result in fixed geometrical formations between the agents. This is typically achieved by specifying desired formation shapes, or desired inter-agent distances for members in the swarm (Turpin et al. 2012; Gu et al. 2006; Chiew et al. 2015; Saska et al. 2014). By their very nature, fixed geometries result in parallel velocity vectors.
This solution should also help to prevent the MAVs from getting stuck in an unobservable situation that is not covered by Eqs. 34 to 36, but that is covered by the full observability condition in Eq. 33. We concluded that for the scenarios that are numerically found to be unobservable according to Eq. 33, changing the relative position \(\mathbf {p}\) only slightly can already result in an observable situation. In the proposed method of having the follower fly a time-delayed version of the leader’s trajectory, the relative position vector \(\mathbf {p}\) will naturally change if the leader’s trajectory is not a straight line.
4.2 Leader–follower formation control design
We want to construct a leader–follower control method that results in the follower flying a delayed version of the leader’s trajectory. As it turns out, this type of control can be directly accomplished with the information provided by the relative localization filter.
Let \(t_n\) indicate the current time at which a control input must be calculated. At the current time, MAV 1 has a body fixed reference frame \(\mathcal {H}_1(t_n)\), whose origin is \(\mathbf {p_1}(t_n)\). At time \(t_n-\tau \), MAV 1 knows the relative position of the leader in its own body fixed frame \(\mathcal {H}_1(t_n-\tau )\), since this information is provided by the relative localization filter. However, for this control method to work, MAV 1 must have knowledge of where the leader’s old position is at the current time \(t_n\). This value of interest is depicted by the vector \(\mathbf {e}(t_n)\) in Fig. 15; it is the positional error with respect to the desired follower’s position at time \(t_n\).
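The transformation of the stored relative position into the follower's current body frame can be sketched as follows (2-D, yaw-only). All names and the exact bookkeeping here are illustrative; in the paper the relative localization filter supplies the equivalent quantities:

```python
import numpy as np

def rotation_2d(angle):
    """Standard 2-D rotation matrix."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s], [s, c]])

def tracking_error(p_rel_old, displacement, dyaw):
    """Positional error e(t_n) expressed in the current body frame H1(t_n).

    p_rel_old:    leader's relative position in the old frame H1(t_n - tau)
    displacement: follower's own displacement over [t_n - tau, t_n],
                  expressed in the old body frame (e.g. integrated velocity)
    dyaw:         follower's yaw change over the same interval
    """
    # Old leader position minus the follower's own motion, in the old frame...
    e_old_frame = p_rel_old - displacement
    # ...then rotated into the current body frame H1(t_n).
    return rotation_2d(-dyaw) @ e_old_frame
```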
4.3 Experimental set-up
One of the main findings in the observability study and the simulation results is that the localization error scales more steeply with range noise for system \(\sum _B\) than for \(\sum _A\). It is therefore important to use sensors that can provide accurate relative ranging measurements.
In this work, we chose to use Ultra Wide Band (UWB) based radio transceivers. UWB has recently gained attention within the domain of ranging. UWB signals are characterized by their fine temporal and spatial resolution (Correal et al. 2003), which allows UWB-based systems to, for example, resolve multipath effects more easily (Win and Scholtz 1998). Ultimately, this leads to an accurate ranging performance, which is important when using the heading-independent filter. Another advantage of UWB is its relative robustness to interference from other radio technologies, owing to the fact that it operates on an (ultra) wide range of frequencies (Liu et al. 2007; Foerster et al. 2001; Molisch et al. 2006).
The UWB ranging hardware used in the experiments is the ScenSor DWM1000 module sold by Decawave.^{4} The ranging algorithm that is employed is a particular implementation of the Two-Way Ranging (TWR) method (Neirynck et al. 2016). In order to fuse ranging data with velocity, acceleration, height, and yaw rate data in the localization filter, these variables are also communicated between MAVs by using the UWB devices. The same UWB messages used in the TWR protocol are also used to communicate these variables.
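For reference, the textbook symmetric TWR formula looks as follows. The DWM1000 implementation used here (Neirynck et al. 2016) is a refined variant that compensates for clock drift, so this sketch is for illustration only:

```python
C = 299_702_547.0  # approximate speed of light in air, m/s

def twr_range(t_round, t_reply):
    """Classic symmetric two-way ranging.

    t_round: initiator time between sending the poll and receiving the reply
    t_reply: responder processing time between poll reception and reply
    """
    time_of_flight = (t_round - t_reply) / 2.0
    return C * time_of_flight
```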
The UWB transceiver module has been installed on the Parrot Bebop 2 platform.^{5} The Bebop 2 runs custom autopilot software built on the open-source autopilot framework Paparazzi UAV.^{6} Paparazzi UAV provides stable inner control loops for the Bebop 2 using Incremental Nonlinear Dynamic Inversion (INDI; Smeur et al. 2015). This allows us to close the outer loop by giving the computed velocity commands to the INDI inner loops.
Velocity and height measurements are also necessary for the relative localization filter. In the initial experiments, they are provided by an overhead motion capture system (MCS) by OptiTrack.^{7} In a second iteration of the experiment, they are fully provided by on-board sensors. The velocity data is obtained from the MAVs’ on-board bottom-facing camera using Lucas–Kanade optical flow. Height is measured using an on-board ultrasonic sensor that the Bebop 2 is equipped with by default. At all times, the acceleration and yaw rate measurements are obtained from the MAVs’ on-board accelerometers and gyroscope, respectively. The experiments are first conducted with two MAVs (one leader and one follower), detailed in Sect. 4.4, and then performed again with three MAVs (one leader and two followers), detailed in Sect. 4.5.
4.4 Leader–follower flight with one follower
The experiment with one follower MAV consists of one Bebop 2 following another Bebop 2 using the control law presented in Sect. 4.2. At first, right after take off, the MAVs fly concentric circles just like the ones shown in Fig. 11. This procedure ensures that the EKF running on board the MAVs has time to converge to the correct result, so that by the time the follower MAV is instructed to start following the leader, it has a correct estimate of the relative location of the leader.
When leader–follower flight is engaged, the trajectory of the leader is designed to sufficiently excite the relative localization filter and to decrease the likelihood of getting stuck in unobservable states. This is done by introducing frequent turns in the trajectory, so that the relative velocities and accelerations keep changing. The follower is instructed to follow the leader's trajectory with a time delay of \(\tau =5~\mathrm{s}\).
It is important to note that, for safety reasons, the norm of the follower's commanded velocity \(||\mathbf {v_{1c}}||_2\) is saturated at \(1.5~\mathrm{m/s}\) during both experiments. This measure was taken because the MAVs were flying in a relatively small confined area (\(10~\mathrm{m}\) by \(10~\mathrm{m}\)). The saturation does, however, have consequences for the follower's tracking performance, which is discussed further in the next sections.
4.4.1 Leader–follower flight with velocity and height information from a MCS
The most clearly identifiable cause for the relative localization error is the occasional dropping of frames by the UWB modules. The update rate of the relative localization filter is equivalent to the UWB messaging rate, because the filter is updated every time that the UWB modules produce a new ranging result (using a callback function). For two UWB modules, this corresponds to an update rate of about \(25~\mathrm{Hz}\), corresponding to a time step of approximately \(40~\mathrm{ms}\). However, the modules occasionally drop frames, causing the time step to spike up. Over the flight, 2% of all messages were received following an interval of more than \(40~\mathrm{ms}\), and 1% of all messages were received following an interval of \(200~\mathrm{ms}\). In one instance, the interval reached \(470~\mathrm{ms}\), an order of magnitude larger than the average. It is not hard to imagine the unfavorable effect that such events can have for the relative localization estimate. It is therefore not coincidental that the largest localization error recorded during the flight also corresponds to one of those times where the UWB modules dropped frames, causing the update rate of the relative localization filter to also drop.
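Statistics like those quoted above can be extracted directly from the message timestamps. The 1.5x-nominal threshold below for flagging a dropped frame is our own assumption for this sketch:

```python
import numpy as np

def interval_stats(timestamps, nominal_dt=0.040):
    """Summarize inter-message intervals for a ~25 Hz UWB ranging stream.

    Intervals well above the nominal 40 ms step indicate dropped frames.
    """
    dt = np.diff(np.asarray(timestamps))
    return {
        "mean_dt": float(dt.mean()),
        "max_dt": float(dt.max()),
        # an interval > 1.5x nominal means at least one frame was lost
        "frac_dropped": float(np.mean(dt > 1.5 * nominal_dt)),
    }
```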
One source of error is the fact that the follower's response to a velocity command \(\mathbf {v_{1c}}\) is modeled as a first order delay. In reality, the MAV has some overshoot with respect to commands, which is not modeled by this first order delay. This model mismatch by itself might not be that harmful to the performance, since the control law would respond with more aggressive velocity commands as a reaction to the MAV not behaving as modeled. However, the control law's freedom is severely restricted by the command saturation at \(1.5~\mathrm{m/s}\), which means that the follower cannot move as fast as the control law demands. This argument is further supported by a qualitative analysis of the follower's trajectory with respect to the leader's trajectory in Fig. 16. The trajectory of the follower often seems to take 'shortcuts' with respect to the leader's trajectory. This falls in line with the expected behavior due to the command saturation. The control law is designed to track the trajectory of the leader not only in space, but also in time. As the follower starts lagging behind the leader by more than the desired \(\tau =5~\mathrm{s}\), it starts to take shortcuts in the trajectory to catch up with the leader. This error would be less prevalent if the command saturation were increased.
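The two effects discussed here, the first-order response model and the command saturation, can be sketched as below; the time constant `tau_m` is a hypothetical value, not one identified for the Bebop 2:

```python
import numpy as np

V_MAX = 1.5  # m/s, command saturation used in the experiments

def saturate(v_cmd):
    """Clip the commanded velocity vector norm to the safety limit."""
    norm = np.linalg.norm(v_cmd)
    return v_cmd if norm <= V_MAX else v_cmd * (V_MAX / norm)

def first_order_response(v_cmd, v, dt, tau_m=0.5):
    """Discrete first-order velocity response: v_dot = (v_cmd - v) / tau_m.

    tau_m is a hypothetical time constant; the real MAV exhibits overshoot
    that this model deliberately ignores, which is the mismatch discussed.
    """
    return v + (dt / tau_m) * (v_cmd - v)
```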
4.4.2 Leader–follower flight with only on-board measurements
We now demonstrate the workings of the proposed methods in this paper when only on-board sensing is used. In this set-up, the follower MAV does not use any MCS information. Instead, the velocity information comes from Lucas–Kanade optical flow measurements while the height is derived from the on-board ultrasonic sensor. Similarly, the leader MAV directly communicates optical flow velocities and ultrasonic height measurements (along with accelerations and yaw rate from the IMU) to the follower MAV for use in the relative localization filter. The MCS is only used to log ground truth data and for the leader to safely fly its trajectory. No MCS data is used by the follower at all. Again, \(200~\mathrm{s}\) of leader–follower flight with full on-board sensing took place successfully and will be analyzed here.
The trajectory of the follower with respect to the delayed leader’s trajectory is compared in Figs. 22 and 23. Furthermore, another time composition for \(5~\mathrm{s}\) of flight where the follower is tracking the leader is given in Fig. 24.
The tracking error distribution for the on-board sensing case is given in Fig. 25. The mean tracking error is \(50.8~\mathrm{cm}\) and the maximum error is \(1.47~\mathrm{m}\). The relative localization error is given in Fig. 26. Here, the mean error is \(22.6~\mathrm{cm}\) and the maximum error is \(75.8~\mathrm{cm}\), at maximum MAV distances up to \(5.2~\mathrm{m}\).
The performance when using only on-board sensing is very similar to when using the MCS for height and velocity data. This can be mainly attributed to the fact that the measurements that have been replaced (the height and velocity of both MAVs) are actually also accurately measured on-board.
4.5 Leader–follower flight with two followers
To demonstrate that the methods in this paper can also scale to more than one follower, the leader–follower flight is also performed with two follower MAVs instead of one. This is done both with MCS height and velocity data and with only on-board sensing.
This time, due to the lack of available space, there is no initialization flight procedure to give the EKFs of the followers time to converge. Instead, the MAVs are placed in starting positions and orientations that roughly match the states to which the on-board EKFs are initialized. Although this placement is done purely by eye, it proved sufficient to safely start the leader–follower flight.
The leader flies the same trajectory as before. The first follower follows this trajectory with a \(\tau =4~\mathrm{s}\) delay, and the second follower with a \(\tau =8~\mathrm{s}\) delay. Once again, \(200~\mathrm{s}\) of successful flight data is logged and analyzed.
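A delayed-trajectory follower of this kind can be implemented with a simple time-stamped buffer; the nearest-sample lookup below is a simplification of the actual controller:

```python
from collections import deque

class DelayedTrajectory:
    """Buffer of (time, position) samples of the leader's estimated path.

    lookup() returns the sample closest to (t_now - tau), i.e. where the
    leader was tau seconds ago; illustrative only, the real controller
    combines this with the relative localization filter output.
    """
    def __init__(self, tau, maxlen=2000):
        self.tau = tau
        self.samples = deque(maxlen=maxlen)

    def push(self, t, pos):
        self.samples.append((t, pos))

    def lookup(self, t_now):
        """Return position at t_now - tau, or None if too early in flight."""
        target = t_now - self.tau
        if not self.samples or self.samples[0][0] > target:
            return None
        return min(self.samples, key=lambda s: abs(s[0] - target))[1]
```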
An overhead camera image for the flight with MCS height and velocity data is presented in Fig. 28, giving an idea of how the experiment looked.^{8} The trajectories for this flight are displayed in Fig. 29 for the leader and two followers. For the flights with only on-board information, the trajectories are shown in Fig. 30.
4.6 Comparison of flights
Comparison of mean localization (loc.) errors and mean tracking (track.) errors for all performed experimental flights, both for MCS and fully on-board (on-b.) flights
| | 1 follower: MCS | 1 follower: on-b. | 2 followers: MCS 1 | 2 followers: MCS 2 | 2 followers: on-b. 1 | 2 followers: on-b. 2 |
|---|---|---|---|---|---|---|
| Loc. error (cm) | 18.4 | 22.6 | 15.8 | 43.9 | 51.8 | 53.6 |
| Track. error (cm) | 46.1 | 50.8 | 42.9 | 70.3 | 58.6 | 98.4 |
All the errors are presented in Table 2. The first noteworthy observation is the fact that, for the experiment with two followers, the tracking performance of the second follower is worse than for the first follower in both the MCS and fully on-board case. This is a byproduct of the fact that the proposed leader–follower control method inherently relies on integration of velocity information in time. As the delay with which the follower must follow the leader increases, so does the period of time over which the follower must integrate its velocity. This is subject to drift, which shows in the tracking performance. This effect is more noticeable in the fully on-board case, since the velocity estimates from optical flow methods are less accurate than the ones computed by the MCS.
A final result that stands out is that both followers 1 and 2 have substantially higher localization errors in the on-board case than was found for the on-board experiment with a single follower. This result appears to be due to a combination of factors. The increased communication traffic caused a decrease in the filter update rate and also resulted in an increase in ranging frames dropped. Follower 2, as mentioned above, showed a worse ranging performance than follower 1. Follower 1, in turn, had slightly less accurate optical flow velocity estimates than were obtained with the single follower flight (\(21~\mathrm{cm/s}\) MAE compared to \(15~\mathrm{cm/s}\) before) and also slightly higher ranging errors than for the single follower flight (\(15~\mathrm{cm}\) MAE compared to \(8~\mathrm{cm}\) before). All factors combined, both followers suffered a comparable degradation in localization performance.
5 Discussion
In this section we revisit the observability analysis from Sect. 2 with the obtained experimental data. We also present some remarks on the scalability of this methodology to larger groups of MAVs.
5.1 Remarks on observability
Section 2.5 showed that for a specific set of velocities, accelerations and relative positions for both MAVs, the system will become unobservable. To directly integrate the full observability condition in the design of a leader–follower system is difficult due to its high dimensionality. By having followers fly a delayed version of the leader’s trajectory, it is possible to naturally vary the relative positions between leader and follower, as long as the leader’s velocity changes in time. Given the sparsity of unobservable relative positions, we therefore postulated that this control behavior would be sufficient to limit unobservable situations. Furthermore, even if an unobservable situation were to occur, this would only be for a short period of time, as the relative position continuously changes and the system automatically transitions back to being observable.
Having performed the experiments and collected all the ground truth data, it is now possible to test whether this assumption is valid. All the parameters needed to evaluate Eq. 33 have been logged during the experiments and can be inserted into Eq. 33 to check the observability of the relative localization filter in time. In line with our previous analysis, the measure of observability of the system is represented by the cross product between the left hand side of Eq. 33 and the relative position vector \(\mathbf {p}\). Once more, we shall take a threshold of 1. Although theoretically only a value of 0 would indicate an unobservable system, the higher threshold is chosen to account for noise in the data.
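The observability check described here reduces to a 2-D cross product and a threshold; `lhs` stands for the logged left-hand-side vector of Eq. 33:

```python
import numpy as np

def observability_measure(lhs, p):
    """2-D cross product between the condition vector (LHS of Eq. 33) and
    the relative position p; a value near zero flags unobservability."""
    return lhs[0] * p[1] - lhs[1] * p[0]

def is_unobservable(lhs, p, threshold=1.0):
    """Binary flag as used in Fig. 33; a threshold above zero is used to
    absorb noise in the logged data."""
    return abs(observability_measure(lhs, p)) < threshold
```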
With the chosen threshold, the unobservable data points for the MCS and the on-board flight are 4.76% and 4.75% of all the data points, respectively. The unobservable points are spread in time, thus giving the system ample observable data in between to recover from the short periods of unobservability. Furthermore, isolated events of unobservability are not expected to cause issues. Instead, they can gradually cause an increase in the localization error in time. This has also been confirmed by the simulations in Sect. 3.
Further qualitative inspection of the data does not show a correlation between the unobservable regions of the flight and the relative localization error. To demonstrate this, the localization error is compared to the observability of the filter in Fig. 33 for a small segment of the flight with MCS information. For easier comparison, the observability has been reduced to a binary value, where a value of ‘1’ indicates that the system is within the threshold of unobservability at that time. It can be seen that there is no apparent correlation between the two parameters.
5.2 Remarks on scalability
The experimental results in Sect. 4 show that the methods in this paper can successfully scale to two followers that follow a leader in a confined area. Even when full on-board sensing is used by the followers, more than three minutes of successful autonomous flight were demonstrated, with no pilot input.
Despite the successful results, analysis of the data does show a substantial rise in localization and tracking errors when scaling up from one follower to two. This raises the question of what would happen if even more MAVs were added to the experiment; would this still be viable?
One of the results we found is that there is a correlation between the tracking performance of a follower and the time delay with which it follows the leader's trajectory. The follower that tracked with a time delay of \(8~\mathrm{s}\) showed consistently larger tracking errors than the followers with \(4~\mathrm{s}\) and \(5~\mathrm{s}\) delays. An alternative solution to the two-follower problem is to have one follower follow the leader and the other follow the first follower. With such an arrangement, both followers could track another MAV with the same time delay. This setup has not been studied in this work, but could prove to be a better alternative to explore in future research.
In our experiments, the update rate dropped when flying with two followers instead of one. It is to be expected that adding more MAVs requires additional data communication, yet a drop from 25 to \(16~\mathrm{Hz}\) is quite significant for adding just one more MAV. In this case, the reduction was due to the communication protocol used during the experiments. Future work should determine how to tackle this, which is a necessary step toward solving the scalability issues that will otherwise arise when introducing even more UWB modules.
As an example, it should be possible to significantly increase the messaging rate to allow for more drones. In these experiments we operated the UWB modules at the lowest data rate setting (\(110~\mathrm{kbps}\)). Furthermore, every message contains a lengthy preamble of \(2048~\mathrm{bits}\), resulting in substantial protocol overhead for every transmitted message, which may not be necessary (the actual payload of the UWB messages is less than \(200~\mathrm{bits}\)). The maximum data rate that the UWB modules support is \(6.8~\mathrm{Mbps}\), and the preamble can be as short as \(64~\mathrm{bits}\). These settings would allow for much higher update rates, even with three or more MAVs. One would, however, need to examine what effect such a change would have on ranging accuracy and stability.
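A back-of-the-envelope airtime estimate illustrates the available headroom. The roughly \(1~\mu\mathrm{s}\) per preamble symbol used below is an order-of-magnitude assumption for long DW1000 preambles, not a datasheet value:

```python
def message_airtime(preamble_symbols, payload_bits, data_rate_bps,
                    preamble_symbol_s=1e-6):
    """Rough on-air duration of one UWB frame.

    Treats each preamble symbol as lasting ~1 us (assumption); the exact
    duration depends on PRF and preamble code, so these are
    order-of-magnitude estimates only.
    """
    return preamble_symbols * preamble_symbol_s + payload_bits / data_rate_bps

slow = message_airtime(2048, 200, 110_000)    # settings used in the paper
fast = message_airtime(64, 200, 6_800_000)    # fastest supported settings
```

Under these assumptions the per-message airtime drops from a few milliseconds to under \(0.1~\mathrm{ms}\), which is why much higher update rates appear feasible.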
6 Conclusion
The work in this paper has shown the feasibility of heading-independent range-based relative localization on MAVs. We now know that removing the dependency on a common heading between MAVs has two main disadvantages: the motion of agents must meet more stringent conditions to be observable and the relative localization becomes more susceptible to noise on the range measurements. The clear advantage, on the other hand, is that the filter is no longer affected by local disturbances in Earth’s magnetic field. As shown by our simulations, small magnetic perturbations can already lead to a large negative impact, showing how a heading-independent method can actually perform better than the heading-dependent method.
The results of our observability analysis have shown that leader–follower flight is a difficult task when using the proposed relative localization method, where a simple fixed geometry formation flight is not possible. Instead, we needed to develop a method that allows one MAV to follow another MAV’s trajectory with a certain time delay while the leader flies in a curved trajectory. This approach has been shown to stay sufficiently clear from unobservable conditions, which has allowed us to successfully demonstrate leader–follower flight in practice.
Using only on-board sensory information, one MAV can localize another MAV with a mean error of just \(22.6~\mathrm{cm}\) over \(200~\mathrm{s}\) of leader–follower flight. This consequently allows the MAV to track another MAV’s trajectory with a mean error of \(50.8~\mathrm{cm}\). The method has been demonstrated to work also with two followers tracking the same leader.
In a wider context, this work showcases a fundamental connection between relative localization and behavior for teams (or swarms) of robots. We have shown that the constraints included in the observability analysis have to be taken into account when designing the behavior of the robots. This enables the robots to make a better use of their sensors, which in turn provides for a better final performance. For example, in our case, the intuitive conditions extracted from the observability analysis informed us that the leader–follower behavior should not be such that the MAVs fly in a fixed geometry. In general, extracting such intuitive conditions can help swarm designers understand, at a higher level, how the behavior of the individual robots should be designed in order to be in harmony with their relative localization sensors.
7 Future work
There are plenty of research opportunities within the domain of range-based relative localization. One such opportunity is the initial convergence behavior of the filter. The initial estimate of the EKF is important for quickly converging to a correct estimate of the relative location of another MAV. If the initial condition differs too much from the real situation, the filter has difficulty converging. One primary problem is that there exist spurious states to which the EKF can initially converge erroneously. In the future, it would thus be interesting to research methods that address this problem. Possible solutions could be to use more powerful estimation filters (e.g., particle filters), or to run multiple filters initialized at the different ambiguous states, which would make it easier to identify the correct estimate. Furthermore, with an eye on scalability to larger swarms, it would be valuable to explore more thoroughly whether the less intuitive unobservable conditions are, as indicated by our analysis, indeed significantly more unlikely than the observable ones.
It would also be valuable to research alternative control algorithms to enable leader–follower flight. A weakness of the current controller is that it stores and uses the entire most recent portion of the leader's trajectory in order to replicate it with a certain delay, which is not memory efficient. An alternative solution might be to perform real-time polynomial fitting on the relative positions of the leader. The resulting polynomial trajectories could then provide velocities and accelerations through analytical differentiation of the polynomials. This might reduce the amount of data that needs to be stored on board the MAVs and might also lead to smoother trajectories. Moreover, we currently require the leader to fly an oscillatory trajectory in order to help the followers avoid unobservable states. However, the controller on board the followers could also automatically adapt their trajectories to preemptively avoid unobservable conditions. This would put fewer requirements on the leader, which would then be free to fly any type of trajectory, and would also be a more general and robust solution. To this end, the leader could also communicate additional information, such as its planned trajectory over a time horizon.
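The proposed polynomial alternative could look as follows per axis, with velocity and acceleration obtained by analytic differentiation of the fitted polynomial (a sketch, not the paper's implementation):

```python
import numpy as np

def fit_leader_segment(t, positions, degree=3):
    """Fit a polynomial to recent leader positions along one axis.

    Returns position, velocity, and acceleration polynomials; the latter
    two are obtained by analytic differentiation, so only the fit
    coefficients need to be stored on board.
    """
    pos_poly = np.polynomial.Polynomial.fit(t, positions, degree)
    vel_poly = pos_poly.deriv(1)
    acc_poly = pos_poly.deriv(2)
    return pos_poly, vel_poly, acc_poly
```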
Finally, considering the hardware used in the experiments, the importance of consistent, high frequency communication and ranging has become apparent. It would be valuable to further optimize the frequency and consistency with which ranging messages are exchanged.
8 Videos
Videos of the experiments can be found at: https://www.youtube.com/playlist?list=PL_KSX9GOn2P--aEr4JtFl7SV3LO5QZY4q
Footnotes
- 2. Note that the approach is also valid when the headings change; this simplification was only made to keep the trajectory design used in the simulations simple.
- 8. Furthermore, videos of our experiments are available at https://www.youtube.com/playlist?list=PL_KSX9GOn2P--aEr4JtFl7SV3LO5QZY4q.
References
- Achtelik, M., Brunet, Y., Chli, M., Chatzichristofis, S., Decotignie, J. D., Doth, K. M., Fraundorfer, F., Kneip, L., Gurdan, D., Heng, L., Kosmatopoulos, E., Doitsidis, L., Lee, G. H., Lynen, S., Martinelli, A., Meier, L., Pollefeys, M., Piguet, D., Renzaglia, A., Scaramuzza, D., Siegwart, R., Stumpf, J., Tanskanen, P., Troiani, C., & Weiss, S.(2012). Sfly: Swarm of micro flying robots. In 2012 IEEE/RSJ international conference on intelligent robots and systems (pp. 2649–2650). https://doi.org/10.1109/IROS.2012.6386281.
- Afzal, M. H., Renaudin, V., & Lachapelle, G. (2010). Assessment of indoor magnetic field anomalies using multiple magnetometers. In 23rd International technical meeting of the satellite division of the institute of navigation (pp 525–533).Google Scholar
- Afzal, M. H., Renaudin, V., & Lachapelle, G. (2011). Use of earths magnetic field for mitigating gyroscope errors regardless of magnetic perturbation. Sensors, 11(12), 11,390–11,414. https://doi.org/10.3390/s111211390.CrossRefGoogle Scholar
- Beard, R. W., & McLain, T. W. (2003). Multiple UAV cooperative search under collision avoidance and limited range communication constraints. In 42nd IEEE international conference on decision and control (Vol 1, pp. 25–30). https://doi.org/10.1109/CDC.2003.1272530.
- Brambilla, M., Ferrante, E., Birattari, M., & Dorigo, M. (2013). Swarm robotics: A review from the swarm engineering perspective. Swarm Intelligence, 7(1), 1–41. https://doi.org/10.1007/s11721-012-0075-2.CrossRefGoogle Scholar
- Chiew, S. H., Zhao, W., & Go, T. H. (2015). Swarming coordination with robust control Lyapunov function approach. Journal of Intelligent and Robotic Systems, 78(3), 499–515. https://doi.org/10.1007/s10846-013-9998-0.CrossRefGoogle Scholar
- Conroy, P., Bareiss, D., Beall, M., & van den Berg, J. (2014). 3-d reciprocal collision avoidance on physical quadrotor helicopters with on-board sensing for relative positioning. arXiv preprint arXiv:1411.3794
- Coppola, M., McGuire, K. N., Scheper, K. Y. W., & de Croon, G. C. H. E. (2018). On-board communication-based relative localization for collision avoidance in micro air vehicle teams. Autonomous Robots, 42(8), 1787–1805. https://doi.org/10.1007/s10514-018-9760-3.
- Cornejo, A., & Nagpal, R. (2015). Distributed range-based relative localization of robot swarms. In H. L. Akin, N. M. Amato, V. Isler, & A. F. van der Stappen (Eds.), Algorithmic foundations of robotics XI: Selected contributions of the eleventh international workshop on the algorithmic foundations of robotics (pp. 91–107). Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-16595-0_6.
- Correal, N. S., Kyperountas, S., Shi, Q., & Welborn, M. (2003). An UWB relative location system. In 2003 IEEE conference on ultra wideband systems and technologies (pp. 394–397). https://doi.org/10.1109/UWBST.2003.1267871.
- Couzin, I. D., & Franks, N. R. (2003). Self-organized lane formation and optimized traffic flow in army ants. Proceedings of the Royal Society of London B: Biological Sciences, 270(1511), 139–146. https://doi.org/10.1098/rspb.2002.2210.
- Degen, J., Kirbach, A., Reiter, L., Lehmann, K., Norton, P., Storms, M., et al. (2016). Honeybees learn landscape features during exploratory orientation flights. Current Biology, 26(20), 2800–2804. https://doi.org/10.1016/j.cub.2016.08.013.
- Foerster, J., Green, E., Somayazulu, S., & Leeper, D. (2001). Ultra-wideband technology for short- or medium-range wireless communications. Intel Technology Journal, 2.
- Gu, Y., Seanor, B., Campa, G., Napolitano, M. R., Rowe, L., Gururajan, S., et al. (2006). Design and flight testing evaluation of formation control laws. IEEE Transactions on Control Systems Technology, 14(6), 1105–1112. https://doi.org/10.1109/TCST.2006.880203.
- Guo, K., Qiu, Z., Meng, W., Xie, L., & Teo, R. (2017). Ultra-wideband based cooperative relative localization algorithm and experiments for multiple unmanned aerial vehicles in GPS-denied environments. International Journal of Micro Air Vehicles, 9(3), 169–186. https://doi.org/10.1177/1756829317695564.
- Hauert, S., Leven, S., Varga, M., Ruini, F., Cangelosi, A., Zufferey, J. C., & Floreano, D. (2011). Reynolds flocking in reality with fixed-wing robots: Communication range vs. maximum turning rate. In 2011 IEEE/RSJ international conference on intelligent robots and systems (pp. 5015–5020). https://doi.org/10.1109/IROS.2011.6095129.
- Hayes, A. T., & Dormiani-Tabatabaei, P. (2002). Self-organized flocking with agent failure: Off-line optimization and demonstration with real robots. In 2002 IEEE international conference on robotics and automation (pp. 3900–3905). https://doi.org/10.1109/ROBOT.2002.1014331.
- Hayes, A. T., Martinoli, A., & Goodman, R. M. (2003). Swarm robotic odor localization: Off-line optimization and validation with real robots. Robotica, 21(4), 427–441. https://doi.org/10.1017/S0263574703004946.
- Hermann, R., & Krener, A. J. (1977). Nonlinear controllability and observability. IEEE Transactions on Automatic Control, 22(5), 728–740. https://doi.org/10.1109/TAC.1977.1101601.
- Hui, C., Yousheng, C., & Shing, W. W. (2014). Trajectory tracking and formation flight of autonomous UAVs in GPS-denied environments using onboard sensing. In 2014 IEEE Chinese guidance, navigation and control conference (pp. 2639–2645). https://doi.org/10.1109/CGNCC.2014.7007585.
- Iyer, A., Rayas, L., & Bennett, A. (2013). Formation control for cooperative localization of MAV swarms (Demonstration). In 2013 international conference on autonomous agents and multi-agent systems (pp. 1371–1372).
- Kriegleder, M., Digumarti, S. T., Oung, R., & D'Andrea, R. (2015). Rendezvous with bearing-only information and limited sensing range. In 2015 IEEE international conference on robotics and automation (pp. 5941–5947). https://doi.org/10.1109/ICRA.2015.7140032.
- Kushleyev, A., Mellinger, D., Powers, C., & Kumar, V. (2013). Towards a swarm of agile micro quadrotors. Autonomous Robots, 35(4), 287–300. https://doi.org/10.1007/s10514-013-9349-9.
- Li, X., Zhou, Q., Lu, S., & Lu, H. (2006). A new method of double electric compass for localization in automobile navigation. In 2006 international conference on mechatronics and automation (pp. 514–519). https://doi.org/10.1109/ICMA.2006.257606.
- Liu, H., Darabi, H., Banerjee, P., & Liu, J. (2007). Survey of wireless indoor positioning techniques and systems. IEEE Transactions on Systems, Man, and Cybernetics, 37(6), 1067–1080. https://doi.org/10.1109/TSMCC.2007.905750.
- Martinelli, A., & Siegwart, R. (2005). Observability analysis for mobile robot localization. In 2005 IEEE/RSJ international conference on intelligent robots and systems (pp. 1471–1476). https://doi.org/10.1109/IROS.2005.1545153.
- Merino, L., Caballero, F., Martínez-de Dios, J., Ferruz, J., & Ollero, A. (2006). A cooperative perception system for multiple UAVs: Application to automatic detection of forest fires. Journal of Field Robotics, 23(3–4), 165–184. https://doi.org/10.1002/rob.20108.
- Michael, N., Mellinger, D., Lindsey, Q., & Kumar, V. (2010). The GRASP Multiple Micro-UAV Test Bed: Experimental evaluation of multirobot aerial control algorithms. IEEE Robotics & Automation Magazine, 17(3), 56–65. https://doi.org/10.1109/MRA.2010.937855.
- Molisch, A. F., Cassioli, D., Chong, C. C., Emami, S., Fort, A., Kannan, B., et al. (2006). A comprehensive standardized model for ultrawideband propagation channels. IEEE Transactions on Antennas and Propagation, 54(11), 3151–3166. https://doi.org/10.1109/TAP.2006.883983.
- Mulgaonkar, Y., Cross, G., & Kumar, V. (2015). Design of small, safe and robust quadrotor swarms. In 2015 IEEE international conference on robotics and automation (ICRA) (pp. 2208–2215). https://doi.org/10.1109/ICRA.2015.7139491.
- Nägeli, T., Conte, C., Domahidi, A., Morari, M., & Hilliges, O. (2014). Environment-independent formation flight for micro aerial vehicles. In 2014 IEEE/RSJ international conference on intelligent robots and systems (pp. 1141–1146). https://doi.org/10.1109/IROS.2014.6942701.
- Neirynck, D., Luk, E., & McLaughlin, M. (2016). An alternative double-sided two-way ranging method. In 2016 13th workshop on positioning, navigation and communications (WPNC) (pp. 1–4). https://doi.org/10.1109/WPNC.2016.7822844.
- Nguyen, T. M., Zaini, A. H., Guo, K., & Xie, L. (2016). An ultra-wideband-based multi-UAV localization system in GPS-denied environments. In 2016 international micro air vehicle competition and conference (pp. 56–61).
- Quintero, S. A. P., Collins, G. E., & Hespanha, J. P. (2013). Flocking with fixed-wing UAVs for distributed sensing: A stochastic optimal control approach. In The American control conference (pp. 2025–2031). https://doi.org/10.1109/ACC.2013.6580133.
- Roberts, J. F., Stirling, T., Zufferey, J. C., & Floreano, D. (2012). 3-D relative positioning sensor for indoor flying robots. Autonomous Robots, 33(1–2), 5–20. https://doi.org/10.1007/s10514-012-9277-0.
- Roelofsen, S., Gillet, D., & Martinoli, A. (2015). Reciprocal collision avoidance for quadrotors using on-board visual detection. In 2015 IEEE/RSJ international conference on intelligent robots and systems (pp. 4810–4817). https://doi.org/10.1109/IROS.2015.7354053.
- Roetenberg, D., Luinge, H. J., Baten, C. T. M., & Veltink, P. H. (2005). Compensation of magnetic disturbances improves inertial and magnetic sensing of human body segment orientation. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 13(3), 395–405. https://doi.org/10.1109/TNSRE.2005.847353.
- Roetenberg, D., Baten, C. T. M., & Veltink, P. H. (2007). Estimating body segment orientation by applying inertial and magnetic sensing near ferromagnetic materials. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 15(3), 469–471. https://doi.org/10.1109/TNSRE.2007.903946.
- Şahin, E. (2005). Swarm robotics: From sources of inspiration to domains of application. In E. Şahin & W. Spears (Eds.), Swarm robotics. SR 2004. Lecture notes in computer science (Vol. 3342, pp. 10–20). Springer. https://doi.org/10.1007/b105069.
- Saska, M., Vakula, J., & Preucil, L. (2014). Swarms of micro aerial vehicles stabilized under a visual relative localization. In 2014 IEEE international conference on robotics and automation (pp. 3570–3575). https://doi.org/10.1109/ICRA.2014.6907374.
- Saska, M., Vonásek, V., Chudoba, J., Thomas, J., Loianno, G., & Kumar, V. (2016). Swarm distribution and deployment for cooperative surveillance by micro-aerial vehicles. Journal of Intelligent & Robotic Systems, 84(1–4), 469–492. https://doi.org/10.1007/s10846-016-0338-z.
- Schwager, M., Julian, B. J., & Rus, D. (2009a). Optimal coverage for multiple hovering robots with downward facing cameras. In 2009 IEEE international conference on robotics and automation (pp. 3515–3522). https://doi.org/10.1109/ROBOT.2009.5152815.
- Schwager, M., McLurkin, J., Slotine, J. J. E., Rus, D. (2009b). From theory to practice: Distributed coverage control experiments with groups of robots. In Experimental robotics (pp. 127–136). Berlin: Springer. https://doi.org/10.1007/978-3-642-00196-3_15.
- Smeur, E. J., Chu, Q., & de Croon, G. C. (2015). Adaptive incremental nonlinear dynamic inversion for attitude control of micro air vehicles. Journal of Guidance, Control, and Dynamics, 38(12), 450–461. https://doi.org/10.2514/1.G001490.
- Stirling, T., Roberts, J., Zufferey, J. C., & Floreano, D. (2012). Indoor navigation with a swarm of flying robots. In 2012 IEEE international conference on robotics and automation (pp. 4641–4647). https://doi.org/10.1109/ICRA.2012.6224987.
- Turpin, M., Michael, N., & Kumar, V. (2012). Decentralized formation control with variable shapes for aerial robots. In 2012 IEEE international conference on robotics and automation (pp. 23–30). https://doi.org/10.1109/ICRA.2012.6225196.
- Vásárhelyi, G., Virágh, C., Somorjai, G., Tarcai, N., Szörényi, T., Nepusz, T., & Vicsek, T. (2014). Outdoor flocking and formation flight with autonomous aerial robots. In 2014 IEEE/RSJ international conference on intelligent robots and systems (pp. 3866–3873). https://doi.org/10.1109/IROS.2014.6943105.
- Werner, A., Stürzl, W., & Zanker, J. (2016). Object recognition in flight: How do bees distinguish between 3D shapes? PLOS ONE, 11(2), 1–13. https://doi.org/10.1371/journal.pone.0147106.
- Win, M. Z., & Scholtz, R. A. (1998). Impulse radio: How it works. IEEE Communications Letters, 2(2), 36–38. https://doi.org/10.1109/4234.660796.
- Yuan, X., Yu, S., Zhang, S., Wang, G., & Liu, S. (2015). Quaternion-based unscented Kalman filter for accurate indoor heading estimation using wearable multi-sensor system. Sensors, 15(5), 10872–10890. https://doi.org/10.3390/s150510872.
- Zhou, X. S., & Roumeliotis, S. I. (2008). Robot-to-robot relative pose estimation from range measurements. IEEE Transactions on Robotics, 24(6), 1379–1393. https://doi.org/10.1109/TRO.2008.2006251.
Copyright information
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.