Abstract
The Smartphone Video Guidance Sensor (SVGS) is an emerging technology developed by NASA Marshall Space Flight Center that uses a vision-based approach to accurately estimate the six-state position and orientation vectors of an illuminated target of known dimensions with respect to a coordinate frame fixed to the camera. SVGS is a software-based sensor that can be deployed using a host platform’s resources (CPU and camera) for proximity operations and formation flight of drones or spacecraft. The SVGS output is calculated based on photogrammetric analysis of the light blobs in each image; its accuracy in linear and angular motion at different velocities has previously been successfully demonstrated [9]. SVGS has several potential applications in guidance, navigation, motion control, and proximity operations as a reduced-cost, compact, and reliable alternative to competing technologies such as LiDAR or infrared sensing. One of the applications envisioned by NASA for SVGS is planetary/lunar autonomous landing. This paper aims to compare SVGS performance in autonomous landing with existing technology: a combination of infrared beacon technology (IRLock) and LiDAR. The comparison is based on a hardware-in-the-loop emulation of a precision landing experiment, using a computer-controlled linear motion stage to emulate the approach motion and the ROS/Gazebo environment to emulate the response of the flight controller to the environment during landing. Results suggest that SVGS performs better than the existing IRLock with LiDAR sensor combination.
1 Introduction
The development of vision-based navigation systems has been growing in recent years in aerospace applications. Image processing techniques and depth sensors have quickly advanced due to the rapid growth of the space sector [1,2,3]. One of the problems to be addressed by vision-based sensing is the development of effective technologies to support proximity operations, such as autonomous precision landing, docking, rendezvous, and capture.
Current technologies for proximity operations often rely on infrared beacons and depth sensors to estimate the desired state vector [4]. Previous NASA work has demonstrated success using photogrammetric approaches, such as the Advanced Video Guidance Sensor (AVGS), an inverse perspective algorithm that is sufficiently accurate within the 0–300 m range and can be used in a variety of guidance, navigation, and control operations, as seen on the Demonstration for Autonomous Rendezvous Technology (DART) [5, 6]. AVGS’ limitation involves its hardware requirements: state-of-the-art lasers and CMOS imaging sensors [7]. This prompted the development of a more portable and low-cost substitute, the Smartphone Video Guidance Sensor (SVGS). SVGS is based on an adaptation of the collinearity equations used by its predecessor and improved image processing techniques. SVGS hardware requirements are limited to a CPU in the host platform (in this case, an Android Smartphone), a camera, and a four-point illuminated target [6,7,8].
SVGS range can be scaled by changing the dimensions of the four-point beacon, and SVGS has shown significant accuracy in all 6 DoF for ranges up to 200 m [9]. Its potential applications include autonomous planetary landing, payload delivery, and docking. It has also been compared in the literature with other computer vision techniques, such as binary fiducial markers, for GPS-denied environments, showing overall similar precision landing performance, as reported by Bautista et al. [10].
1.1 SVGS algorithm and concept of operation
SVGS is a software sensor that estimates an illuminated target’s 6 DoF position and attitude vector in a coordinate system attached to the host’s camera. It can be easily integrated into robotic systems and flight controllers. An example deployment of SVGS can be seen in Fig. 1 [9].
Smartphone Video Guidance Sensor (SVGS) Concept [9]
Coordinate system frames; image plane represented in blue [9]
The SVGS algorithm is based on an adaptation of the collinearity equations developed by Rakoczy [7]. Consider the perspective center of the host platform camera (thin lens) located at point \({\textbf {L}}\) and the target at point \({\textbf {A}}\). The representation of \({\textbf {A}}\) in the captured image plane is defined as \({\textbf {a}}\), and the distance between \({\textbf {L}}\) and that plane, \({\textbf {f}}\), is the focal length of the host platform camera. In this context, Fig. 2 displays the target \(\langle\) X, Y, Z\(\rangle\) and image \(\langle\) x, y, z\(\rangle\) coordinate frames; the vector \(v_A\) denotes the vector from \({\textbf {L}}\) to \({\textbf {A}}\) in the target frame, while the vector \(v_a\) denotes the vector from \({\textbf {L}}\) to \({\textbf {a}}\) in the image frame [9].
The above-mentioned vectors can be related using Eq. (2), where M represents the x, y, z spatial rotation matrix for the transformation from the target frame to the image frame, and k is a scaling factor [9].
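With M and k as defined, the relation (Eq. 2) takes the standard collinearity form (a reconstruction consistent with the definitions in the text):

\[ v_a = k\,M\,v_A \]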
Generalizing Eq. (2), i.e., removing the a and A subscripts, reorganizing to solve for the image frame’s x, y, and z, and dividing the result by z to eliminate the scaling factor, one obtains the following equation [9]:
Where \(\varvec{m_{ij}}\) denotes the elements of the rotation matrix M. The 6 DoF vector in the image frame, \(\varvec{V}\), is described below [9]:
Where \(\phi\), \(\theta\), and \(\psi\) are the x, y, and z rotation angles. Linearizing the \(F_x\) and \(F_y\) terms using a Taylor series expansion truncated after the second term results in the following expression [9]:
The initial guess for the 6 DoF vector is expressed by \(V_0\), while V is the actual state vector [9].
The Taylor series approximation error in x and y are defined as \(\epsilon\)x and \(\epsilon\)y. In addition, each of the 4 LEDs yields an individual pair of \(F_x\) and \(F_y\) expressions, resulting in 8 equations per estimation that can be expressed in matrix form as follows [9]:
To estimate the state vector, the system of equations is solved for a value V that minimizes the square of the errors, \(\epsilon\). The solution is added to the initial value of V, and the process is repeated until the residual errors are small enough, resulting in the final V value [9].
1.2 SVGS collinearity simplification
To simplify the process described above, the azimuth and elevation angles are measured relative to the vector that connects the perspective center to the target location, reducing Eqs. (3) and (4) to the following equation [5, 7, 9, 11]:
1.3 SVGS algorithm implementation and robustness
SVGS state estimation commences by capturing an image of the 4-LED target, which is converted using a binary mask that filters the image’s pixels based on a brightness threshold. Light blobs are identified using the resultant binary image via iteration, and their main characteristics are determined, e.g., mass, area, perimeter, inertia, and location. To reduce background noise, the blobs are filtered using geometric alignment, image processing, and statistical filters, yielding an optimized set of four target blobs. The centroid location of the resulting blobs is used in the state estimation algorithm, which provides the 6 DoF state estimation vector (Fig. 3).
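As an illustration of the front end of this pipeline, the sketch below shows the thresholding and blob-centroid steps; the threshold value, 4-connectivity, and minimum-area filter are illustrative assumptions, not the values used by the actual SVGS implementation.

```python
import numpy as np
from collections import deque

def find_blob_centroids(image, threshold=200, min_area=2):
    """Threshold a greyscale image and return brightness-weighted
    centroids of bright blobs (simplified SVGS-style front end)."""
    mask = image >= threshold
    visited = np.zeros_like(mask, dtype=bool)
    blobs = []
    rows, cols = mask.shape
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and not visited[r, c]:
                # Breadth-first search over 4-connected neighbours
                queue = deque([(r, c)])
                visited[r, c] = True
                pixels = []
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            queue.append((ny, nx))
                if len(pixels) >= min_area:
                    # brightness-weighted centroid of the blob
                    w = np.array([image[p] for p in pixels], dtype=float)
                    ys = np.array([p[0] for p in pixels], dtype=float)
                    xs = np.array([p[1] for p in pixels], dtype=float)
                    blobs.append((float((w * xs).sum() / w.sum()),
                                  float((w * ys).sum() / w.sum())))
    return blobs
```

A production implementation would use an optimized connected-component routine, but the filtering logic (brightness threshold, minimum area) is the same in spirit.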
Regarding SVGS robustness, the main issue is light reflections. Using the statistical filter based on the greyscale image definitions, the correct four target blobs can be found by choosing the group with the smallest standard deviation (SD) of the ratio between blob mass and area.
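The mass used in this ratio can be written, in a form consistent with the definitions that follow, as:

\[ M = \sum _{i=1}^{A} I_i \]

where A denotes the blob area in pixels.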
Where M is the mass of the blob, and \(I_i\) represents the brightness of the i-th pixel belonging to a particular blob.
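A minimal sketch of this statistical filter follows; blobs are represented with illustrative "mass" and "area" fields, and the combinatorial search is exhaustive for clarity.

```python
import itertools
import statistics

def select_target_blobs(blobs):
    """Pick the 4-blob group with the smallest standard deviation of
    the mass/area ratio, as in the SVGS statistical filter.

    Each blob is a dict with 'mass' (sum of pixel brightness) and
    'area' (pixel count); the field names are illustrative.
    """
    best_group, best_sd = None, float("inf")
    for group in itertools.combinations(blobs, 4):
        ratios = [b["mass"] / b["area"] for b in group]
        sd = statistics.pstdev(ratios)
        if sd < best_sd:
            best_group, best_sd = group, sd
    return list(best_group)
```

Since genuine LED blobs share similar brightness density while reflections differ, the minimum-SD group reliably excludes reflection blobs.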
SVGS algorithm logic [9]
1.4 Sensitivity to LED pattern
Another aspect of SVGS is its sensitivity to the target LED pattern, which, although accounted for and mitigated by the geometric alignment filter, requires special consideration of geometric configurations that may be difficult for the SVGS algorithm to identify.
In Fig. 4, each blob is identified as P1, P2, P3, P4. Note that \({proj}_\textbf{x} \textit{P4}\) and \({proj}_\textbf{x} \textit{P3}\) should ideally be located at the midpoint of the segment between P1 and P2, where \(\textbf{x}\) is the line \(\overline{\textit{P1} \textit{P2}}\).
The following checks are performed based on the scalar values:
The \(\epsilon _{1,2}\) values can be tuned according to the expected LED target dimensions. Smaller values are less likely to produce a mirrored or rotated misinterpretation of the target by the SVGS algorithm at large orientation angles.
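A minimal sketch of the projection checks above, with normalized tolerances standing in for \(\epsilon _{1,2}\) (the threshold values here are illustrative, not SVGS defaults):

```python
import numpy as np

def check_pattern(p1, p2, p3, p4, eps1=0.1, eps2=0.1):
    """Check that the projections of P3 and P4 onto line P1-P2 fall
    near the midpoint of segment P1-P2, as described in the text.

    eps1/eps2 are normalized tolerances (fractions of |P1P2|); the
    actual SVGS thresholds are tuned to the LED target dimensions.
    """
    p1, p2, p3, p4 = (np.asarray(p, dtype=float) for p in (p1, p2, p3, p4))
    axis = p2 - p1
    length = np.linalg.norm(axis)
    u = axis / length                      # unit vector along P1 -> P2
    mid = 0.5 * length                     # midpoint parameter on the axis
    t3 = np.dot(p3 - p1, u)                # scalar projection of P3
    t4 = np.dot(p4 - p1, u)                # scalar projection of P4
    return bool(abs(t3 - mid) / length <= eps1
                and abs(t4 - mid) / length <= eps2)
```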
1.5 Objectives
This paper aims to emulate precision drone landing using SVGS in hardware-in-the-loop simulation and compare its performance to other existing technology, particularly the IRLock and LiDAR approach, which uses an infrared beacon and a laser range sensor to support automated landing.
2 Sensors and hardware description
2.1 Flight controller unit (FCU)
The flight controller unit used is the Holybro Pixhawk 4, running PX4 firmware [12]. The FCU has an STM32F765 as its central processor and an STM32F100 as an IO processor. The firmware allows several flight modes, but this investigation focuses on the Offboard mode, where the FCU receives the setpoints directly from a ground station computer running the simulation. The Pixhawk 4 uses internal accelerometers, gyroscopes, a magnetometer, and a barometer to estimate attitude and position.
2.2 IRLock + LiDAR
IRLock is a navigation technology that uses a camera (PixyCam) with an embedded IR filter to capture images of an infrared beacon (MarkOne beacon) and estimate its x and y position (Fig. 5). A LiDAR sensor in conjunction with IRLock is necessary to obtain the z distance (range) estimate relative to a datum surface. The range sensor used was a Benewake TFmini Plus LiDAR Rangefinder with a range of 12 m.
2.3 Samsung Galaxy S8 + smartphone video guidance sensor (SVGS)
The CPU host platform used to run the SVGS application is a Samsung Galaxy S8, an Android smartphone equipped with a thin-lens camera (Fig. 6).
2.4 Programmable linear motion stage
The linear motion stage used in this investigation was controlled by a National Instruments motion controller (NI-PXI-7350) connected to a Parker Hannifin ERV56 linear actuator driven by a DC servo motor with a high-resolution built-in encoder (accuracy of ± 2 arc min). The encoder reports the stage position to the motion controller PC.
2.5 Ground-station computer
A computer running Ubuntu 20.04 with ROS Noetic Desktop, Gazebo 11, and QGroundControl is used as the ground station. It is responsible for integrating the virtual environment that fuses the experimental data with the emulated Pixhawk 4 internal sensor data.
3 Software integration
3.1 Robot operating system (ROS)
The hardware-in-the-loop simulation used ROS (Robot Operating System). ROS is an interface that uses a Publisher/Subscription communications architecture and is commonly used in robotics applications (Fig. 7). This method enables discretizing the required software functionality into programs called nodes, which can support different missions. ROS also enables dialog between multiple packages. The packages used in this investigation are:
1. MAVROS: Allows communication with the FCU using MAVLink (FCU communication protocol and serial interpreter) via UART connection over USB [13].

2. AAS_HIL: Package used to represent the sensor data collected from the linear motion stage to emulate autonomous landing. It is also responsible for determining the setpoints sent to the FCU via MAVROS. Relevant nodes are the following:

   (a) “motion_control_AAS”: This node parses data (raw output of sensors) and calculates the absolute error between the encoder and sensor values. It is also responsible for sending the setpoints to the “px4_control” node.

   (b) “px4_control_AAS”: This node subscribes to the “motion_control” node topics. It obtains setpoint data and sends it to the flight computer at 30 Hz.

   (c) “data_recording_AAS”: This node subscribes to relevant variables and records them in a CSV file at 25 Hz for final data and error analysis.
ROS Noetic architecture [14]
3.2 Gazebo 11
Gazebo is an open-source platform that allows the simulation of the flight behavior and dynamics of an Unmanned Aerial Vehicle (UAV) model [15]. It supports integration with PX4 firmware, allowing the internal sensors of the FCU to be simulated according to each environment’s definitions. For this study, the UAV model used was the Iris Quadcopter, and conditions were considered ideal (no external interferences).
3.3 QGroundControl
QGroundControl is the ground control software used to receive the emulated MAVLink messages from the FCU during simulation. It is used to verify the emulated sensors’ output and the UAV behavior, as shown in Gazebo (Fig. 8).
4 Methodology
4.1 Experimental setup and description
Both IRLock + LiDAR and the Android smartphone running SVGS were secured to the programmable linear motion stage, and their corresponding beacons/targets were mounted at a fixed location, either perpendicular or parallel to the direction of motion for each axis being tested (Fig. 9). The linear motion stage received a sinusoidal input command of 0.05 Hz at a different amplitude for each axis, namely 0.42 m, 0.3 m, and 1 m for the x, y, and z-axis, respectively.
The test setup for the IRLock + LiDAR consisted of a flight controller (PX4) connected to an external voltage source, IRLock camera (x and y position estimates), LiDAR (z estimate), with the IRLock beacon mounted to a flat surface (Fig. 10). The SVGS setup consists of the Android smartphone and a 4-point illuminated target mounted at the same distance, Fig. 11.
The test setup is shown in Figs. 12 and 13. The estimated loop delay of data transmission between the external sensors and the FCU is 175 ms to 230 ms, with the mean SVGS delay closer to the upper bound and the mean LiDAR + IRLock delay closer to the lower bound. These values are used as time delays for the Extended Kalman Filter algorithm (EKF2) used by the FCU firmware.
The Hardware-in-the-loop simulation aims to emulate the autonomous precision landing of a UAV with the visual sight of the landing target for each method, following the mission outlined in Figs. 14, 15, 16, 17:
4.2 HIL simulation
Each sensor’s raw output collected during the motion stage experiments is parsed into the motion control ROS node for the x, y, and z-axis, creating a three-dimensional matrix. The hardware-in-the-loop simulation uses the data segment of the sinusoidal motion’s half period that ends at zero velocity. The z-axis encoder values are added to the distance between the motion stage and the landing target (beacon), scaling the values for comparison purposes. The PX4 FCU is connected to the ground-station computer via USB, which connects Gazebo to the FCU. Gazebo and QGroundControl are connected to MAVLink via User Datagram Protocol (UDP) at Gazebo’s port 14550, allowing the latter to communicate with the FCU. ROS communicates with the flight controller via MAVROS/MAVLink using the UDP local port 14557 (Fig. 18).
5 Implementation
Simulated landing motion is generated by half-sinusoidal period for each axis. The values are resampled to enable time alignment, as the sampling frequencies for the encoder, SVGS, and FCU connected to IRLock and LiDAR differ. Data parsed into the motion control node consists of three matrices of thirteen rows and three columns, each representing one axis, x, y, and z. Each matrix row represents a setpoint sent to the FCU.
When the final setpoint is reached (i.e., the last row of the raw sensor data matrix), the landing loop continues to operate such that, after each iteration, the z-axis value of the last setpoint is decreased by 0.1 m until the z-distance reaches 0 (landing achieved). The absolute position error after the final setpoint is reached is not reported, as shown in Fig. 19. The position error between each fused position estimate and the encoder value is defined as:
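The landing loop described above can be sketched as follows; the function and variable names are illustrative, not the actual AAS_HIL node code.

```python
def landing_setpoints(setpoint_rows, step=0.1):
    """Generate the landing setpoint sequence described in the text.

    `setpoint_rows` is the list of (x, y, z) rows parsed from the raw
    sensor data matrix. After the final row, the z value is decreased
    by `step` (0.1 m) each iteration until it reaches 0 (landing).
    """
    sequence = list(setpoint_rows)
    x, y, z = sequence[-1]
    while z > 0:
        # descend a fixed step, clamped at ground level
        z = max(0.0, z - step)
        sequence.append((x, y, z))
    return sequence
```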
5.1 ROS topics and Gazebo emulated sensor data
After starting the simulation, the Gazebo software and PX4 firmware allow the internal FCU’s accelerometers, gyros, magnetometers, and barometers to have their data emulated based on the flight path and dynamics of the UAV, as shown in Fig. 20.
The ROS architecture works as a Publisher/Subscription interface through topics. The topics pertinent to data analysis are:
1. mavros/local_position/pose: Estimated 6 DoF position and attitude of the vehicle. Data comes from the fusion of emulated and experimental data used by the FCU’s IMU.

2. mavros/setpoint_raw/local: Topic where the setpoints are sent to the FCU.

3. mavros/state: Information about the flight mode (stabilized, position, offboard) and whether the vehicle is armed (motors on) or not.

4. aas_hil/currentError: Absolute error between the sensor and encoder setpoint.

5. aas_hil/setpoint: Vector message containing the components of the setpoint parsed by the motion control node.

6. aas_hil/customRoutineDone: Boolean that indicates whether the offboard motion routine (once the switch to offboard is made) has been completed.
5.2 Sensor fusion
The PX4 Autopilot software uses an Extended Kalman Filter algorithm (EKF2) to fuse the available sensor data. The EKF2 algorithm has been tested in several multirotor vehicles [12, 16]. This section presents the algorithm to illustrate fusion with SVGS estimates. The PX4 Extended Kalman Filter is formulated as follows:
State Prediction:
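In standard EKF form, consistent with the symbol definitions used in the text, the prediction step (Eqs. 17 and 18) can be written as:

\[ \hat{x}(k+1\mid k) = f(\hat{x}(k), u(k)) \]

\[ P(k+1\mid k) = F(k)\,P(k)\,F(k)^{T} + Q(k) \]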
The left-hand side term in Eq. 17 refers to the predicted state at \(t_{k+1}\) considering the measurements gathered until \(t_k\), whereas the right-hand side term, \(f( \hat{x}(k), u(k))\), represents the UAV nonlinear model function, which depends on the state estimate \(\hat{x}(k)\) and the control input u(k).
Where \(P(k+1\mid k)\) is the covariance matrix predicted at \(t_{k+1}\), and F(k) is the Jacobian of f with respect to the states at \(\hat{x}(k)\). Finally, Q(k) is the process noise covariance matrix.
State Correction:
The Kalman gain matrix is defined as follows:
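In standard form, consistent with the definitions in the text, the gain is:

\[ K(k+1) = P(k+1\mid k)\,H(k+1)^{T}\left[ H(k+1)\,P(k+1\mid k)\,H(k+1)^{T} + R(k+1)\right] ^{-1} \]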
\(H(k+1)\) denotes the Jacobian of the measurement function h with respect to the states at \(\hat{x}(k+1\mid k)\), and \(R(k+1)\) is the measurement noise covariance matrix. Hence, the corrected state estimate is obtained as follows:
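In standard form, this correction reads:

\[ \hat{x}(k+1\mid k+1) = \hat{x}(k+1\mid k) + K(k+1)\left[ y(k+1) - h(\hat{x}(k+1\mid k))\right] \]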
where, \(y(k+1)\) is the measurement at \(t_{k+1}\), and \(h( \hat{x}(k+1\mid k))\) is the predicted measurement at \(t_{k+1}\) considering the predicted state \(\hat{x}(k+1\mid k)\).
Finally, the covariance matrix is updated once more, as shown in Eq. 21
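In the standard simplified form, this update (Eq. 21) can be written as:

\[ P(k+1\mid k+1) = \left[ I - K(k+1)\,H(k+1)\right] P(k+1\mid k) \]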
5.3 SVGS and IRLock + LiDAR Fusion
To perform sensor fusion within the PX4 framework, the estimation messages must be transmitted at 50 Hz if the associated covariance is not available, or at 30 Hz including the covariance values [16]. Typically, achieving 50 Hz is not a problem for IRLock + LiDAR; however, considering that SVGS running on the Samsung Galaxy S8 has a reliable update frequency between 6 and 15 Hz, proper integration becomes a challenge.
The solution is to use a Kalman Filter with the raw SVGS measurements as inputs to provide SVGS estimates at the required 30 Hz. This technique has some implications, which are discussed further in the Innovation Test section.
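A minimal single-axis sketch of this upsampling approach follows, using a constant-velocity Kalman filter that predicts at the 30 Hz output rate and corrects whenever a new (slower) SVGS measurement arrives; the noise levels and rates are illustrative assumptions.

```python
import numpy as np

def upsample_kf(measurements, meas_dt=1/10, out_dt=1/30, q=1e-3, r=1e-4):
    """Upsample slow position measurements (e.g. ~10 Hz SVGS) to a
    30 Hz output stream with a 1-axis constant-velocity Kalman filter."""
    F = np.array([[1.0, out_dt], [0.0, 1.0]])        # constant-velocity model
    Q = q * np.array([[out_dt**4 / 4, out_dt**3 / 2],
                      [out_dt**3 / 2, out_dt**2]])   # process noise
    H = np.array([[1.0, 0.0]])                       # position-only measurement
    R = np.array([[r]])
    x = np.array([[float(measurements[0])], [0.0]])  # [position, velocity]
    P = np.eye(2)
    out, i = [], 0
    n_out = int(round(meas_dt * len(measurements) / out_dt))
    for k in range(n_out):
        # predict forward one 30 Hz output tick
        x = F @ x
        P = F @ P @ F.T + Q
        t = k * out_dt
        if i < len(measurements) and t >= i * meas_dt - 1e-9:
            # correct with the latest available measurement
            y = measurements[i] - (H @ x)[0, 0]
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K * y
            P = (np.eye(2) - K @ H) @ P
            i += 1
        out.append(float(x[0, 0]))
    return out
```

Between measurements the filter coasts on the predicted velocity, which is the source of the implications discussed in the Innovation Test section.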
5.3.1 SVGS covariance
The SVGS covariance was previously estimated using known motion profiles in accuracy assessment experiments [16]. The covariance estimation requires a test setup similar to the one presented in the Hardware-in-the-loop experiment by comparing sensor values to the motion stage encoder true values. The sensor noise covariance matrix is given by:
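Using n(t) to denote the sensor noise vector (notation assumed here), the matrix takes the form:

\[ R(t,\tau ) = E\left[ n(t)\,n(\tau )^{T}\right] = \mu \,\delta (t-\tau )\,I \]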
Where E denotes the mathematical operator of expectation, \(\mu\) is an arbitrary design parameter, \(\delta (t-\tau )\) is the Kronecker delta function, \(\tau\) is the time delay between the motion stage and the SVGS measurements, and I is the identity matrix. Expanding:
The diagonal elements represent the variance, and the off-diagonal elements represent the covariance between state variables.
Considering \(\mu\) to be equal to 1.6 and the results obtained by Hariri et al., the noise covariance matrix yields:
The values displayed in Eq. 24 were used as a floor covariance matrix transmitted with the SVGS estimates to the flight controller.
5.4 Innovation test
For the sensor values to be successfully fused, they must pass an Innovation Test Ratio [12]. The mathematical formulation for the test is presented below
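In standard form, with S denoting the innovation covariance, the test can be written as:

\[ \nu ^{T} S^{-1} \nu \le g^{2}, \qquad S = H\,P\,H^{T} + R \]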
Where \(\nu\) is the innovation vector. Expanding:
Here, the left-hand side represents the Mahalanobis distance. Therefore, if the distance exceeds the normalized user-defined threshold, the measurement is considered an outlier and is not effectively fused, i.e., rejected by the EKF2.
5.4.1 EKF2 gate size
Moreover, EKF2 allows the user to define a gate size, g, for each sensor, which is directly related to the sensor’s covariance matrix and used to obtain the acceptance region for the Mahalanobis distance. Increasing the gate size causes an overall decrease in fusion accuracy, whereas decreasing it may increase accuracy but reduces the number of measurements considered valid, potentially causing the positive definiteness of the system’s observability matrix not to hold [12] (Table 1).
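A minimal sketch of the gate check, assuming the innovation vector and its covariance are available (function and parameter names are illustrative):

```python
import numpy as np

def passes_innovation_gate(innovation, S, gate_size):
    """EKF2-style innovation consistency check (sketch).

    A measurement is accepted when its squared Mahalanobis distance
    nu^T S^-1 nu is at most gate_size**2, following the test described
    in the text. `S` is the innovation covariance H P H^T + R.
    """
    nu = np.atleast_1d(np.asarray(innovation, dtype=float))
    S = np.atleast_2d(np.asarray(S, dtype=float))
    d2 = float(nu @ np.linalg.solve(S, nu))  # squared Mahalanobis distance
    return d2 <= gate_size ** 2
```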
For this investigation, the following were considered:
6 Results
For all axes, SVGS shows greater similarity to the encoder values. However, for the x and y-axis errors reported for both SVGS and IRLock + LiDAR, the discrepancy is not as significant as that reported for the z-axis (Figs. 21 and 22). This is expected, since both approaches use photogrammetric methods for the x and y axes. Also, the LiDAR sensor model used in the comparison failed to generate accurate results during the experiment, particularly for range values smaller than 4 m (Fig. 23).
Encoder data was also simulated in the Gazebo environment. The data segment used as input to the setpoint matrix is shown in Fig. 24 for all three axes.
The absolute error analysis shown in Figs. 25 and 26 illustrates that SVGS is more accurate than IRLock + LiDAR in autonomous precision landing.
7 Conclusion
The hardware-in-the-loop experiments presented here illustrate the Smartphone Video Guidance Sensor’s potential for navigation, guidance, and proximity operations of UAVs, in particular precision landing. The hardware-in-the-loop methodology enabled the assessment of landing accuracy by comparing simulated results with data collected from controlled motion experiments. It also enabled a performance comparison between SVGS and a leading state-of-the-art landing technology, IRLock + LiDAR.
SVGS’s portability and low cost show great potential to support applications in small satellite (CubeSat) operations, such as space debris removal and asteroid/planetary reconnaissance. Future work entails performing a similar comparison in actual landing missions in a GPS-denied environment.
Data availability
The experimental data used is available upon reasonable request.
References
Lentaris, G., Maragos, K., Stratakos, I., Papadopoulos, L., Papanikolaou, O., Soudris, D., Lourakis, M., Zabulis, X., Gonzalez-Arjona, D., Furano, G.: High-performance embedded computing in space: evaluation of platforms for vision-based navigation. J. Aerosp. Inf. Syst. 15, 178–192 (2018)
Mathurin, J., Peter, N.: Private equity investments beyond Earth orbits: can space exploration be the new frontier for private investments? Acta Astron. 59(1-5), 438–444
Hegadekatti, K.: IBSES: International Bank for Space Exploration and Sciences (March 30, 2017). Available at SSRN: https://ssrn.com/abstract=2943376 or https://doi.org/10.2139/ssrn.2943376
Janousek, J., Marcon, P.: Precision landing options in unmanned aerial vehicles. International Inter-disciplinary PhD Workshop (IIPhDW) 2018, 58–60 (2018). https://doi.org/10.1109/IIPHDW.2018.8388325
Mullins, L., Heaton, A., Lomas, J.: Advanced Video Guidance Sensor Inverse Perspective Algorithm; Marshall Space Flight Center: Huntsville. AL, USA (2003)
Howard, R.T., Bryan, T.C.: DART AVGS flight results. In Sensors and Systems for Space Applications; Howard, R.T., Ed.; SPIE Digital Library Location: Bellingham, WA, USA, Volume 6555, p. 65550L (2007)
Rakoczy, J.: Application of the Photogrammetric Collinearity Equations to the Orbital Express Advanced Video Guidance Sensor Six Degree-of-Freedom Solution; Technical Memorandum, Marshall Space Flight Center: Huntsville. AL, USA (2003)
Becker, C., Howard, R., Rakoczy, J. Smartphone Video Guidance Sensor for Small Satellites. In Proceedings of the 27th Annual AIAA/USU Conference on Small Satellites, Logan, UT, USA, 8-13 August 2013
Hariri, N., Gutierrez, H., Rakoczy, J., Howard, R., Bertaska, I.: Performance characterization of the smartphone video guidance sensor as Vision-based Positioning System. Sensors 20, 5299 (2020)
Bautista, N., Gutierrez, H., Inness, J., Rakoczy, J.: Precision landing of a quadcopter drone by smartphone video guidance sensor in a GPS-denied environment. Sensors 23(4), 1934 (2023). https://doi.org/10.3390/s23041934
Peng, M., Di, K., Wang, Y., Wan, W., Liu, Z., Wang, J., Li, L.: A photogrammetric-photometric stereo method for high-resolution lunar topographic mapping using Yutu-2 rover images. Remote Sens. 13(15), 2975 (2021). https://doi.org/10.3390/rs13152975
PX4 Team, “Px4/px4-autopilot: Px4 autopilot software,” GitHub Available: https://github.com/PX4/PX4-Autopilot
“mavlink/mavros,” GitHub Available: https://github.com/mavlink/mavros
Quigley, M., Conley, K., et al.: ROS: an open-source Robot Operating System. In: ICRA Workshop on Open Source Software (2009)
Koenig, N., Howard, A. (2004, September). Design and use paradigms for gazebo, an open-source multi-robot simulator. In 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (Vol. 3, pp. 2149-2154). IEEE
Hariri, N., Gutierrez, H., Rakoczy, J., Howard, R., Bertaska, I.: Proximity operations and three degree-of-freedom maneuvers using the smartphone video guidance sensor. Robotics 9(3), 70 (2020). https://doi.org/10.3390/robotics9030070
Acknowledgements
This work was supported by NASA Marshall Space Flight Center, Cooperative Agreements 80NSSC20M0169, Dual Use Technology Development CAN 20-109, 2020–2021.
Funding
Open Access funding provided by the MIT Libraries.
Ethics declarations
Conflict of interest
All authors declare no conflict of interest.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Silva Cotta, J.L., Rakoczy, J. & Gutierrez, H. Precision landing comparison between smartphone video guidance sensor and IRLock by hardware-in-the-loop emulation. CEAS Space J 16, 475–489 (2024). https://doi.org/10.1007/s12567-023-00518-8