How a Cutting-Edge Technology Can Benefit the Creative Industries: The Positioning System at Work

The authors explain their innovative positioning system and its application in the indoor aerial and creative industries. First, they analyse the currently available technologies. Then, they explore the possibility of building an IPS for a creative-industries RPAS by making the auto-calibration procedure robust and user-friendly. This challenge comprises three parts: creating specific hardware, creating a highly accurate multi-antenna positioning algorithm and improving the automatic system calibration for increased user-friendliness.


Introduction
Remotely Piloted Aircraft Systems (RPAS) rely on accurate knowledge of their position for decision-making and control (Colomina and Molina 2014), such as: (a) Maintaining a Stable RPAS Position. By nature, without any supporting systems, the flight position of an RPAS is unstable: it slowly moves sideways out of position (referred to as drifting). When this happens, the pilot is constantly forced to compensate by adjusting the controls in the opposite direction, which quickly becomes tedious, depends greatly on the pilot's skills and can lead to unsafe situations. To avoid this drifting and to enable the RPAS to self-stabilise in the air, the RPAS needs at least one point of reference in its surroundings. For outdoor flying, GPS is used, but for indoor flying an IPS is necessary (Li et al. 2016).
(b) Guiding the RPAS to a Known Position. Knowing the exact position of the RPAS within its environment and, in addition, knowing what the environment looks like (a 3D model of the indoor environment) makes it possible to programme the flight path of the RPAS (Jiang and Stefanakis 2018). This is very useful, as a camera shot often needs to be repeated several times. In this case, the RPAS follows the defined flight path within the IPS error, so repeating the scene recording does not depend on the operator's flying skills.
(c) Avoiding Collisions. In a known environment with previously identified static obstacles (e.g. furniture, walls, pillars), the RPAS can swerve around and avoid them, since these obstacles are defined in advance as no-fly zones.
(d) Increasing Safety. Outdoors, GPS can be used for safety features such as safe landing or return-to-launch (e.g. RPAS from AltiGator or DJI) to prevent a crash in an unsafe situation; without an IPS and a 3D indoor environment map, these safety features cannot be offered indoors. In the case of, e.g. a low battery, RPAS are commonly programmed to return to launch (the starting point), so the precise positions of the RPAS and the starting point are needed to execute this command correctly.
Positioning systems can be subdivided into two groups: outdoor and indoor. Outdoor positioning systems have been well explored and standardised, using either the global positioning system (GPS) or techniques that measure user position by means of a cellular network. GPS is based on the known positions of its satellites, which continuously transmit their current time and position. In contrast, cellular network positioning uses the Global System for Mobile Communication (GSM) to calculate position via the observed time difference from two different base transceiver stations to a mobile station. For typical outdoor purposes, the accuracy of GPS (~10 m), cellular networks (50-125 m) or a combination of both (~5 m) is sufficient, as the RPAS moves in free space with few but large obstacles (e.g. buildings, trees). Nevertheless, even if there were sufficient GPS and GSM signal strength indoors, the achieved accuracy of these systems would be inadequate for indoor flights. Creative industries need to work in spaces where the typical ceiling height is between 5 and 20 m (e.g. a television studio set-up), with many small obstacles (e.g. spotlights). Therefore, cm-range accuracy is needed for RPAS use in confined spaces in order to ensure safe flights.
Over the last 10 years, indoor positioning systems have advanced greatly (Kulmer et al. 2017); however, most of the developed technologies have not been specifically designed for use with RPAS. The omnipresent need for indoor positioning in our modern way of life is reflected by the large number of IPS developed for a wide scope of applications, e.g. medical care (e.g. location tracking of medical personnel), guiding vulnerable people (e.g. aiding visually impaired persons), emergency services (e.g. establishing emergency plans, rescue services), logistics and optimisation (e.g. accurate localisation of packages), surveying and geodesy (e.g. setting out and geometry capture of new buildings as well as reconstructions), etc. Despite this potential for multiple applications, current IPS technologies are usually not suitable for use with an RPAS. Up to now, only two indoor positioning systems have been used for RPAS: (a) Camera-Based Indoor Positioning (digital optical and video motion tracking system). This system consists of a large number of high-speed cameras (≥250 fps) installed in the indoor environment where the RPAS will be used, which register the motion of the RPAS in real time to calculate its exact position in three dimensions. The installed cameras need to be referenced and the whole system calibrated (cameras-software-RPAS). These systems offer high precision, speed and resolution, as well as interference-free real-time tracking for engineering-related studies. Nevertheless, it should be noted that they were originally designed for motion capture, e.g. gait analysis when running. As a consequence, when used as an IPS for RPAS, the systems have several drawbacks. They cost hundreds of thousands of euros, which is unaffordable for small companies, especially in the CI sector.
Since the price of the system depends significantly on the size of the environment to control, for large installations such as film sets, hundreds of cameras would be required, increasing the price even further. In addition, setting up these systems is time-consuming and the required infrastructure is space-invading: setup takes about 2 days, and an engineer is needed to calibrate the whole system.
(b) Vision Positioning System (VPS). Unlike the camera-based IPS, where the cameras are mounted in the environment, here the IPS is located on the RPAS. The VPS mounted on an RPAS uses a combination of ultrasonic sensors and optical flow technology to control the position in environments that GPS signals cannot reach. However, VPS has several important drawbacks. The camera creates a real-time map of the ground below; it does not reference the RPAS within a 3D space of a known environment (like GPS does with the help of Google Maps outdoors). Obstacles directly in the flight path are not detected, and the distance to the home base is unknown. The system also does not work properly in low-light or very bright conditions (less than 100 or more than 100,000 lux). Hovering is only effective between 0.5 and 2.5 m, and surfaces must have clear patterns or texture. Transparent or reflective surfaces, or surfaces that absorb sound waves (e.g. carpet), can even lead to severe disorientation of the system. Finally, lighting changes should be avoided for this system to function properly. In summary, VPS is unsuitable for the typical environmental conditions in which creative industries might use RPAS: performing arts often involve changing lighting conditions, concert halls and similar venues have heights over 2.5 m, and filming scenarios often include carpets, bright tiles and so on.

The Pozyx IPS
The Pozyx IPS consists of two types of devices: anchors and tags. The tag is the device being tracked, while the anchors are devices with a fixed position that act as reference points for positioning. The tag can estimate its position once it knows the positions of the anchors, by making ultra-wideband (UWB) distance measurements to each of the anchors within range (Liu et al. 2016). This requires at least four anchors for 3D positioning, but performance is better with six or more anchors.
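As a rough illustration of how a tag position follows from anchor ranges, the following sketch solves the multilateration problem with non-linear least squares; the anchor layout and solver choice are illustrative assumptions, not the actual Pozyx implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def estimate_position(anchors, distances, guess=None):
    """Estimate a 3D tag position from UWB range measurements.

    anchors   : (N, 3) known anchor coordinates (N >= 4 for 3D positioning)
    distances : (N,) measured tag-anchor distances in metres
    """
    anchors = np.asarray(anchors, dtype=float)
    distances = np.asarray(distances, dtype=float)
    if guess is None:
        guess = anchors.mean(axis=0)  # start from the anchor centroid

    # Residual i: modelled distance to anchor i minus the measured range
    def residuals(p):
        return np.linalg.norm(anchors - p, axis=1) - distances

    return least_squares(residuals, guess).x

# Hypothetical setup: six anchors and noiseless ranges to a tag at (2, 3, 1)
anchors = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [10, 10, 0],
                    [0, 0, 5], [10, 10, 5]], dtype=float)
tag = np.array([2.0, 3.0, 1.0])
ranges = np.linalg.norm(anchors - tag, axis=1)
estimate = estimate_position(anchors, ranges)
```

With noiseless ranges the estimate reproduces the true position; with realistic UWB ranging noise of a few centimetres, the least-squares fit averages the error over the available anchors.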
One of the requirements for a drone for the creative industries is fast and easy deployment of the IPS, which requires that determining the positions of the anchors be a fast process. Performing manual calibration with a laser metre is not. For competing indoor positioning systems, deploying and fine-tuning the system for an area the size of a basketball court takes about 2 days.
Because of this, we use automatic calibration, which can perform the same task in seconds. The automatic calibration determines the relative positions of the anchors by making range measurements between the anchors. However, this method is very prone to measurement errors, as any error in the anchor positions will affect the final tag positioning. In addition, it does not work well if there is no clear line-of-sight and good connectivity between all anchors. Thus, a main challenge in making an IPS for a creative industries RPAS is making the auto-calibration procedure robust and user-friendly.
The main requirement for the positioning system to be usable in drone navigation is stable and accurate positioning. Wireless communication has to endure phenomena like scattering, reflections and diffraction, which cause multipath propagation. Although ultra-wideband, thanks to its typical bandwidth of 500 MHz, is quite resistant to multipath signal distortion, the interference of multiple versions of the received signal still leads to some phase delay or even, to a lesser extent, negative phase delay in the detected rising edge of the received signal. In general, UWB provides very accurate range measurements in line-of-sight (LOS) with ideal UWB antennas. In this ideal scenario, the noise behaves as almost Gaussian, with a very small standard deviation of about 3 cm, and a similar accuracy can be achieved for positioning. However, the indoor environment is not always ideal: obstacles blocking the signal may cause non-line-of-sight (NLOS) ranging errors of up to 1 m. Because these errors depend strongly on the environment, they are hard to express in terms of accuracy: the average error will depend very much on how much NLOS is present, i.e. how "challenging" the environment is. In general, metal objects, water and thick concrete walls may cause NLOS. It is therefore better to talk about robustness. Apart from the environment, the antenna can also introduce additional errors of up to 20 cm, depending on its orientation. Both sources of error described above (NLOS and antenna non-idealities) are hard to fix with a single antenna design. Therefore, introducing multiple antennas, and thus spatial diversity, is investigated to significantly reduce the positioning error in challenging environments.
The proposed hardware consists of a central controller unit, equipped with an Inertial Measurement Unit (IMU) and an altimeter, connected to four UWB units to be mounted on the corners of the drone. DecaWave DW1000 chips are used for UWB communication.

Creation of Specific Hardware to Be Mounted on Drones
Both the controller unit and the UWB units include a microcontroller. This design offers the versatility to either work with the controller module as master and the UWB units as slaves (with the possibility of outsourcing part of the computation load to the slaves) or to operate the UWB units as stand-alone tags.
Again, in order to offer versatility, three separate ways of communicating between the controller and UWB units are supported. One can use the UWB signal (decaWave chip to decaWave chip), the SPI bus (central microcontroller to slave decaWave chips) or the I2C (central microcontroller to slave microcontrollers).
The central controller unit is, in essence, a standard Pozyx tag, while the UWB modules are custom designed for use on RPAS. The design for these can be seen in Fig. 2 [UWB module design (right) and device (left)], which includes the decaWave chip on the left-hand side, a microcontroller on the right-hand side, an integrated antenna on the top and both an SPI and an I2C interface at the bottom.

Creation of Highly Accurate Multi-antenna Positioning Algorithm
Initially, each UWB module is positioned independently, resulting in four independent positions. Subsequently, the known configuration of antennas on the drone is mapped onto these measured positions. For this mapping, Procrustes analysis is used. The rotation component is calculated algebraically as follows. First, A is defined as the matrix whose column i contains the coordinates of antenna t_i in the drone coordinate system, and B as the matrix whose column i contains the coordinates of antenna t_i in the anchor coordinate system, after translation to the origin as described above. The rotation component is then found as the rotation matrix R for which ‖RA − B‖ reaches its minimal value. This is known as the orthogonal Procrustes problem in algebra, and its solution is obtained by performing a singular value decomposition of BAᵀ = M = UΣVᵀ, after which R is found as R = UVᵀ. The Euler angles of the drone can subsequently be deduced from its rotation matrix R.
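The rotation step can be reproduced in a few lines with NumPy's SVD; the antenna coordinates below are hypothetical (and deliberately made slightly non-coplanar so the problem stays well conditioned):

```python
import numpy as np

def procrustes_rotation(A, B):
    """Orthogonal Procrustes: find the rotation R minimising ||R A - B||.

    A : 3xN antenna coordinates in the drone frame (antenna i in column i)
    B : 3xN antenna coordinates in the anchor frame, translated so both
        point sets share the same centroid
    """
    U, _, Vt = np.linalg.svd(B @ A.T)  # SVD of B A^T = U Sigma V^T
    return U @ Vt                      # R = U V^T

# Hypothetical antenna layout; the z offsets avoid a rank-deficient A
A = np.array([[ 0.2, -0.2, -0.2,  0.2],
              [ 0.2,  0.2, -0.2, -0.2],
              [ 0.1, -0.1,  0.1, -0.1]])

# Simulate a drone yawed 90 degrees about the z-axis
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
B = Rz @ A

R = procrustes_rotation(A, B)  # recovers Rz
```

The Euler angles then follow from R with standard rotation-matrix conversions.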
This method is computationally relatively light and effectively reduces the random noise on the positioning, as σ_tc = (1/4)·√(σ_t1² + σ_t2² + σ_t3² + σ_t4²) = (1/2)·σ_t, since all σ_ti are considered equal to σ_t. Thus, this approach halves the random noise on the positioning compared to the single-antenna NLLS method.

Fig. 2 UWB module design (right) and device (left). Source: own elaboration
On the other hand, this method is relatively sensitive to outliers and failures in the measurements. An outlier in one of the range measurements, if not detected and rejected, will result in an outlier in one of the antenna positions; and an undetected outlier in one of the antenna positions will in turn cause an outlier in the calculated drone position and orientation. Thus, outlier detection is necessary at both the antenna positioning and the drone positioning level. This can be achieved by calculating the a posteriori aberration of each of the measurements, rejecting aberrations larger than a set threshold, and recalculating the position based only on the non-rejected measurements. For the antenna positioning, this means that the position is first calculated using all measurements. Next, the measured distances to the anchors are compared to the distances derived from the calculated antenna position. If the difference between a calculated and a measured distance is above a predetermined threshold, that measurement is considered an outlier, and the position is recalculated without the outlier distance measurements. This procedure can be repeated until no more outlier measurements are detected, or until the number of valid anchors becomes too low for reliable positioning. The method for outlier rejection for the drone positioning is fully analogous.
Although this is an effective technique for outlier rejection, the hierarchical method of first positioning all antennas and then positioning the drone based on the antenna positions implies that useful measurements can also be discarded. Indeed, when an antenna has insufficient valid range measurements for reliable positioning, its position cannot be calculated, and its valid measurements are discarded as well. Thus, in some cases, this method will not be able to position the drone even though the overall number of valid range measurements would be sufficient, simply because valid range measurements were discarded at the antenna positioning level. In practice, however, this issue does not cause problems in normal environments.
For validation, a comparison with GPS was made in an outdoor environment on a scale of 20 m × 20 m. The results are shown in Fig. 3. From this, it can be seen that both systems position well. However, the GPS and IPS results do not completely match. Due to the lack of an accurate ground truth, it cannot be determined whether this is due to the IPS or the GPS. However, it is known that the GPS is less accurate and relies more on smoothing to obtain a smooth trajectory.
Additionally, drone positioning measurements were made with the drone at three key locations within the convex hull formed by the anchor positions: one at a corner, one at an edge and one in the centre. The resulting CDFs of the positioning error are shown in Fig. 4. Even for the corner position, where the worst positioning quality is expected, the vast majority of errors fall below 25 mm, and occasional positioning outliers still fall below 200 mm. The other locations perform even better. Combined with the GPS comparison, this measurement allows us to conclude that the IPS is performing as expected.

Improvement of Automatic System Calibration for Increased User-Friendliness
In its basic form, automatic calibration relies on all inter-anchor distances to calculate the anchors' relative positions, as shown in an example setup in Fig. 5. Assume anchor a is at position [x_a, y_a, z_a], and the distance between anchors a and b is measured as d̂_ab. The goal of the auto-calibration procedure is to find [x_a, y_a, z_a] for all a, based on d̂_ab for all pairs of anchors a and b that are within range of each other. This can be achieved using a non-linear least squares (NLLS) procedure with the cost function

f_cost = Σ_(a,b) ( ‖[x_a, y_a, z_a] − [x_b, y_b, z_b]‖ − d̂_ab )²,

where the sum runs over all anchor pairs within range of each other. As this is a minimisation in a high-dimensional space (3N_a, for N_a the number of anchors in the system), it is a relatively computationally heavy operation. Unlike positioning, though, it needs to be executed only once, during initialisation of the system.
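A toy version of this auto-calibration can be written directly from the cost function; the gauge-fixing choices (anchor 0 at the origin, anchor 1 on the x-axis) and the four-anchor layout are illustrative assumptions, not the Pozyx implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def auto_calibrate(d_hat, pairs, n_anchors, z_fixed):
    """Recover relative anchor positions from inter-anchor ranges via NLLS.

    d_hat   : measured distance for each anchor pair in `pairs`
    pairs   : (a, b) index pairs of anchors within range of each other
    z_fixed : common anchor height h (the usual extra constraint)
    Anchor 0 is pinned to the origin and anchor 1 to the x-axis to remove
    the translational and rotational ambiguity of a relative solution.
    """
    def unpack(v):
        xy = np.zeros((n_anchors, 2))
        xy[1, 0] = v[0]                    # anchor 1 at (x1, 0)
        xy[2:] = v[1:].reshape(-1, 2)      # remaining anchors at (x, y)
        return np.hstack([xy, np.full((n_anchors, 1), z_fixed)])

    def cost(v):
        p = unpack(v)
        return [np.linalg.norm(p[a] - p[b]) - d
                for (a, b), d in zip(pairs, d_hat)]

    v0 = np.array([5.0, 5.0, 5.0, 1.0, 5.0])  # rough start (4-anchor case)
    return unpack(least_squares(cost, v0).x)

# Hypothetical setup: four anchors in a 10 m x 10 m area at height 2 m
true_p = np.array([[0, 0, 2], [10, 0, 2], [10, 10, 2], [0, 10, 2]], float)
pairs = [(a, b) for a in range(4) for b in range(a + 1, 4)]
d_hat = [np.linalg.norm(true_p[a] - true_p[b]) for a, b in pairs]
est = auto_calibrate(d_hat, pairs, 4, z_fixed=2.0)
```

Up to a possible reflection, the recovered configuration reproduces all measured inter-anchor distances.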
In most practical setups, the basic auto-calibration described above will have trouble finding an optimal configuration, and additional constraints are necessary to make the minimum search converge to a solution. Normally, the assumption is made that all anchors are at the same height, z_a = h for all a, as this is the case for the majority of setups. If this is not the case, the heights of the majority of the anchors will need to be measured manually in order to obtain correct results from the basic auto-calibration algorithm.

Fig. 5 Example setup in which anchors are visualised as squares. The lines connecting them represent the measured inter-anchor distances, to be used in auto-calibration. Source: own elaboration
An alternative could be Simultaneous Localisation and Mapping (SLAM), which is a technique that is often used in robotics and navigation. In the context of this project, it would involve calculating the coordinates of the anchors while positioning the tag at the same time, based on distance measurements between the different tag antennas and the anchors they have within range. Unfortunately, this technique is both computationally very heavy and not very accurate. In addition, it does not take the measured distances between the anchors into account.
Thus, a method was developed in which not only the inter-anchor ranges are used (like in the basic auto-calibration algorithm), but also measured ranges between the tag antennas and the anchors (like in SLAM), as visualised in Fig. 6. Like in the basic auto-calibration algorithm, and unlike in SLAM, the calibration is calculated after all measurements are done.
As the tag has an altimeter on board, it can keep track of its relative height by comparing altimeter data at the times of the range measurements to the anchors within range. If the drone's calibration flight is initiated at zero height, this relative height can be treated as an absolute height measurement by comparing it with the altimeter reading at the beginning of the calibration flight. This z value remains valid as long as the calibration is sufficiently short compared to the random drift of the altimeter. Thus, the auto-calibration method described in the previous section can now be run with an increased number of "virtual anchors": the drone antennas at different places during the calibration flight, of which the z value is known. For smaller setups, in which a short calibration flight suffices to make range measurements from the tag to all anchors, this removes the need to manually measure the z values of all anchors, as they can now be deduced from the known z values of the "virtual anchors".
For larger setups, in which a short calibration flight does not suffice to make range measurements to all anchors in the setup, it can no longer be assumed that the z-coordinates of the "virtual anchors" are known, because of random drift in the pressure sensor data. Differences in z-coordinates between two consecutive ranging measurements from the tag antennas to the anchors are still known, though, as the time difference between them is rather limited. Thus, instead of assuming the z-coordinates of the "virtual anchors" to be known in the auto-calibration cost function, extra terms can be introduced into the cost function to capture the known height differences between them:

f_cost,z = Σ_i ( (z_i − z_(i−1)) − Δz_(i,i−1) )²,

where z_i is the height of "virtual anchor" i, being the tag at the ith consecutive measurement, and Δz_(i,i−1) the height difference between the ith and (i−1)th measurements, as measured by the pressure sensor. The total cost function to be optimised for full auto-calibration is then f_full = f_cost + f_cost,z, with the additional knowledge that, under the assumption of a calibration flight that starts and ends on the ground, z_0 = z_N = 0, with N the number of distance measurements made during the calibration flight. In Fig. 7, an example setup with a long reference flight, including some extra distance measurements, is shown.

Fig. 6 Example setup in which anchors are visualised as squares. The additional distance measurements between tag and anchors during a short reference flight, varying the z value of the tag, are visualised as lines. Source: own elaboration

In order to determine the performance of the auto-calibration, the default auto-calibration process was repeated several times and the variation of the outcome was logged. In Fig. 8, the cumulative distribution function of the anchor positioning errors is shown. In total, the calibration process was repeated 30 times in a static environment with four anchors in an area of 100 m².
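The extra height-difference terms of the full cost function are simple to express in code; the altimeter log below is a made-up, drift-free example.

```python
import numpy as np

def height_difference_residuals(z, dz_meas):
    """Residuals of the extra height terms in the full calibration cost.

    z       : heights z_0 .. z_N of the "virtual anchors" (tag positions)
    dz_meas : pressure-sensor height differences between consecutive
              ranging instants
    The optimiser drives (z_i - z_(i-1)) - dz_meas[i] towards zero, with
    the boundary condition z_0 = z_N = 0 for a flight that starts and
    ends on the ground.
    """
    z = np.asarray(z, dtype=float)
    return (z[1:] - z[:-1]) - np.asarray(dz_meas, dtype=float)

# Made-up short hop: climb to 1 m, hover, descend back to the ground
z_true = np.array([0.0, 0.5, 1.0, 1.0, 0.5, 0.0])
dz_meas = np.diff(z_true)          # ideal, drift-free sensor readings
res = height_difference_residuals(z_true, dz_meas)  # zero at the optimum
```

In the real optimisation these residuals are appended to the inter-anchor distance residuals, so both are minimised jointly.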
The experiment was performed in two scenarios: one with all anchors in LOS of each other, and one with one anchor behind a wall (NLOS). It can be seen that the errors in this experiment are very low, between 10 and 30 mm, even in a partial NLOS environment. This level of accuracy is due to the abundance of measurements used for the auto-calibration: each inter-anchor measurement is repeated 50 times, which significantly reduces the error. These additional measurements take extra time; however, because the calibration process must only be performed once, this is of little importance. In total, the anchor calibration process takes no longer than 5 s.

Fig. 7 Example setup in which anchors are visualised as squares. A selection of additional distance measurements between tag and anchors during the reference flight are visualised as lines. Source: own elaboration
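The error reduction from repeating each inter-anchor measurement can be illustrated with a quick Monte Carlo sketch; the 3 cm LOS ranging noise is the figure quoted earlier in the chapter, while the remaining numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
true_d = 8.0        # metres, a hypothetical inter-anchor distance
sigma = 0.03        # ~3 cm standard deviation of a single LOS UWB range
n_rep = 50          # repetitions per inter-anchor measurement

# Compare the spread of single ranges with the spread of 50-sample means
samples = rng.normal(true_d, sigma, size=(10_000, n_rep))
single_err = samples[:, 0].std()           # roughly sigma
averaged_err = samples.mean(axis=1).std()  # roughly sigma / sqrt(50)
```

Averaging 50 repeats shrinks the random ranging error by about a factor of seven, which is consistent with the millimetre-level calibration errors reported above.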

Conclusion
In this chapter, we have explained Pozyx's positioning system and its application in the indoor aerial and creative industries. We have demonstrated that UWB-based indoor positioning systems are suitable for drone integration and provide sufficient precision to allow professional high-quality filming. After analysing the currently available technologies, we have explained the possibility of building an IPS for a creative-industries RPAS by making the auto-calibration procedure robust and user-friendly. This challenge has been developed in three steps: creating specific hardware, generating a highly accurate multi-antenna positioning algorithm and improving the automatic system calibration for increased user-friendliness.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.