1 Introduction

The determination of actual wing shapes in midair is essential for the improvement and further development of aerodynamics. After explaining this motivation in detail, we propose and investigate a camera-based monocular approach in the context of related works.

1.1 Motivation

Fuel efficiency is one of the major concerns of aerospace companies to reduce costs as well as CO2 emissions. The aerodynamics of an existing aircraft can be improved by retrofits, especially by the mounting of wingtip devices or the application of surface structures that resemble the skin properties of sharks (Bechert et al. 1986). For the design of these measures, simulations based on computational fluid dynamics have to be performed. These computations require the knowledge of an aircraft’s geometry during the flight—and here the deflections of the wing are of utmost importance. Given a measurement system for this task, a continuous acquisition of a wing’s shape is conceivable and thus the monitoring of the wing’s stress over time.

The acquisition of an aeroplane’s actual shape is easy on the ground, but challenging in midair. Here the positioning of sensors is limited to the cabin, the fuselage, and the aeroplane’s skin. In particular, the mounting of experimental measurement systems is restricted to the interior of the cabin for economic reasons.

The operation of active sensors, such as laser scanners or rangefinders, is impossible inside the cabin due to safety regulations. The passengers’ eye safety has to be guaranteed under all circumstances, which rules out measurements through the window glasses onto reflective surfaces. Furthermore, the angle of incidence between the wing’s upper surface and the laser beam is very large, so that rays are reflected away from the sensor. This inhibits the application of scanning devices and calls for the use of retroreflectors. However, elevated reflectors would disturb the airflow to an unacceptable extent.

Thus, passive measurement technologies are beneficial, and the application of one or more cameras offers a cost-efficient solution. However, since a camera is a bearings-only sensor, depth information is not captured. Therefore, many authors propose a stereoscopic approach with two or more cameras mounted on a stereo rig; see Veerman et al. (2009) and Kirmse (2016) for instance. For such multi-camera setups, long baseline lengths and short object distances are beneficial to obtain large measurable parallaxes and therefore acceptable precision in distance measuring.

It is, however, challenging to establish a long-term stable geometry in the cabin of a wobbling fuselage. In Boden et al. (2013) the elaborate installation of a long and stable baseline behind the windows of a wide-body airliner is shown. With this setup, distance measurements are only possible for object points seen in at least two images. This restricts the workspace; larger angles of aperture lead to a lower spatial resolution. Finally, and importantly, a synchronous acquisition of the imagery is mandatory for multi-camera setups due to the motion of the airfoils.

1.2 Contribution

We pursue a monocular approach, which implies the utilization of a model describing the wing’s deflection caused by the aerodynamic lift and the filling levels of the fuel tanks in the airfoils. By using a single camera only, the efforts for the installation and calibration of the corresponding measuring system can be reduced significantly. Furthermore, this approach enables the application of such a measurement system during passenger flights.

For the bending, we assume that the lift does not affect arc lengths on the upper surface of a wing. This assumption is violated by the compression and tension of the material away from the neutral axis of the bending. However, these effects appear to be negligible compared to the achievable measurement precision. Elongations and contractions because of temperature changes can be considered numerically for known materials.

1.3 Related Work

The principles of aerodynamics and aeroelasticity can be studied with classical textbooks such as Anderson (2010) or Bisplinghoff et al. (1996). In this contribution, we use the approximation for the deflection of a tapered wing as provided and explained by Drela et al. (2006).

For non-intrusive, i.e. noncontact approaches, the exploitation of multi-camera systems appears to be the first choice, especially with cameras mounted in the cabin, in a dorsal camera pod, or in the landing gear of high-wing aeroplanes (Boden et al. 2013; Kirmse 2016; Juretschke et al. 2014; Bakunowicz and Meyer 2016). More choices for the location of cameras are given in experiments performed in the wind tunnel, possibly using just a single camera (Liu et al. 2012; Schairer and Hand 1999; Burner et al. 1996; Burner and Barrows 2005).

More recently, Demoulin et al. (2020) presented a system tested on the ground. In this proposal, the wing’s shape and the positions of markers are determined by the deployment of a drone equipped with a downward-looking camera. Hence, no use of a tachymeter or the like is required. Furthermore, Demoulin et al. (2021) introduced a priori knowledge of the wing’s mechanical behaviour.

As an alternative or additionally, measurements with sensors in contact with the wing can be applied, e.g. inertial measurement units, strain gauges, or fibre grating sensors (Nicolas et al. 2016).

Refraction is one of the chief causes of geometric distortions in multimedia photogrammetry—especially in the context of underwater imaging. Often the radial lens distortion parameters of the cameras’ interior orientations are used to compensate for this effect implicitly. This approach is incorrect, but the effect can be neglected in many applications with moderate requirements concerning the achievable accuracy (Rofallski and Luhmann 2022; Agrafiotis and Georgopoulos 2015). A comprehensive survey of models and monocular approaches for multimedia photogrammetry with flat interfaces is provided by Rofallski and Luhmann (2022). A proper approach modelling the light transport by ray tracing can be found in Mulsow and Maas (2014).

To the best of our knowledge, no photogrammetric approach using a single camera in-flight has been proposed so far.

2 Determination of the Spanwise Wing Bending and Point Coordinates

The determination of a wing’s bending is challenging, especially for hollow and tapered wings. Usually, a numerical integration along the wingspan is performed to compute deflections according to the Bernoulli-Euler beam model with given loads and bending inertias at supporting locations (Timoshenko 1953). However, to model the phenomenon geometrically, a simple approach might be sufficient depending on the specific geometry of the wing under consideration. In the following section, we derive a simple geometric deflection model that is physically motivated.

2.1 Wing Deflection

To determine the wing bending, we propose a simple but applicable model based on the theory of cantilever beams. Subsequently, we explain the estimation of the model parameters and the distances between the camera’s projection centre in the cabin and the marker positions on the airfoil.

2.1.1 Deflection Model

For the modelling of the wing bending, a simple approximation can be utilized, assuming a constant curvature of the cantilevered beam representing the wing, see Fig. 1. The local beam curvature is

$$\begin{aligned} \kappa (x) = \frac{1}{R} = \frac{\textrm{d}\theta }{\textrm{d}s} \approx \frac{\textrm{d} \theta }{\textrm{d} x} \approx \frac{\textrm{d}}{\textrm{d} x} \left( \frac{\textrm{d}y}{\textrm{d}x} \right) = \frac{\textrm{d}^2 y}{\textrm{d}x^2} \end{aligned}$$
(1)

with the spanwise coordinate x, the radius R, the length s of the circular arc, and the slope \(\theta\) of the wing deflection y. Hence, integrating twice relates the deflection y(x) to \(x^2\). Consistent with this, a reasonable physical approximation of the deflection y(x) at position x along the span is

$$\begin{aligned} y(x) = \frac{WL}{12 EI_0^{}} \cdot \frac{1+2\lambda }{1+\lambda } x^2, \qquad \lambda \in [0,1] \end{aligned}$$
(2)

with the total weight W of the wing, the half wingspan L, the taper ratio \(\lambda\), the stiffness \(EI_0^{}\), being the product of Young’s modulus E and the bending inertia \(I_0^{}\) at the root (Drela et al. 2006). Comparing the results obtained for various taper ratios to exact solutions, it can be seen that the assumption of constant curvature is poor for rectangular wings, but reasonable for wings with a taper ratio of \(\lambda \!\approx \! 0.3\), see Figure 6 in Drela et al. (2006). The airfoils of most modern passenger planes exhibit this taper ratio.
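As a minimal numerical sketch, Eq. 2 can be evaluated directly; the input values used in the check below are arbitrary placeholders, since actual weights and stiffnesses are aircraft-specific:

```python
def shape_parameter(W, L, lam, EI0):
    """Coefficient a of the parabola y(x) = a * x**2 according to Eq. 2:
    W total wing weight, L half wingspan, lam taper ratio, EI0 root stiffness."""
    return W * L / (12.0 * EI0) * (1.0 + 2.0 * lam) / (1.0 + lam)
```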

Fig. 1

Small elastic bending of a cantilevered beam with a constant curvature radius

The graph of the function (2) is the parabola \(y(x) \!=\! ax^2\) with the shape parameter a and its vertex at \(x\!=\!0\). Note that the parabola is one of the few curves whose arc length integral can be solved in closed form. This allows a known or measured arc length to be used as an invariant for the determination of the wing bending. The arc length l between the vertex and a point of the parabola with spanwise coordinate x is then

$$\begin{aligned} l(x) &= \int \limits _0^x \sqrt{1+\left( \frac{\textrm{d}y}{\textrm{d}x} \right) ^2} \, \textrm{d} x \end{aligned}$$
(3)
$$\begin{aligned} &= \frac{x}{2}\sqrt{1+4a^2x^2} + \frac{1}{4a} \ln \left( 2 ax +\sqrt{1+4a^2x^2} \right) \end{aligned}$$
(4)
$$\begin{aligned} &= \frac{x}{2}\sqrt{1+4a^2x^2} + \frac{1}{4a} \, \textrm{arsinh} \left( 2ax \right) \end{aligned}$$
(5)

with \(a \!\ne \! 0\). For \(a\!=\!0\), the parabola degenerates to the straight line \(y(x) \!=\! 0\).

Given multiple arc lengths and corresponding x-coordinates, a system of equations according to Eq. 5 can be set up to estimate the value of the parameter a for an instant of time, see Sect. 2.2.1 below.
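The closed-form arc length of Eq. 5 is straightforward to implement; the following sketch handles the degenerate case \(a\!=\!0\) explicitly:

```python
import math

def arc_length(x, a):
    """Arc length of the parabola y = a * x**2 from the vertex to the point
    with spanwise coordinate x (Eq. 5)."""
    if a == 0.0:
        return float(x)          # degenerate case: straight line y = 0
    s = math.sqrt(1.0 + 4.0 * a**2 * x**2)
    return 0.5 * x * s + math.asinh(2.0 * a * x) / (4.0 * a)
```

For small bending (e.g. a = 0.002) the arc length exceeds the spanwise coordinate only slightly, which illustrates why precise observations are needed.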

2.2 Measurements on Ground and in Midair

For the computations, we project all 3D entities and directions into a 2D coordinate system, defined by an origin above the wing root, horizontal spanwise x-coordinates, and an upwards pointing y-axis. Note that this system can be defined purely virtually. In this presentation, we use the projection centre C of the camera above the wing root as the coordinate system’s origin for clarity. Figure 2 illustrates the situation with extremely exaggerated bending geometry and Table 1 lists the corresponding observations and parameters.

Table 1 Observations and model parameters as determined on the ground and in midair
Fig. 2
figure 2

Two target markers \(M_1^{}\) and \(M_2^{}\) on a surface of an airfoil, observed in midair by a camera with projection centre C above the wing root. For the unstressed situation on the ground, we assume arc lengths l to be identical to measured horizontal distances x

For the unstressed situation on ground with unloaded tip tanks, we assume that \(a \!=\! 0\) holds in good approximation and that the arc lengths \(l_i^{} \!=\! l(x_i^{})\), \(i \!=\! 1, \ldots , N\) are given directly by the measured horizontal distances between the wing root and N points on the surface of the wing, hence \(x_i^{} \!=\! l_i^{}\) for all i. The lengths can be measured by any convenient measuring system, e.g. an electronic tachymeter or a photogrammetric survey.

In midair, the locations of points on the wing surface are not measurable utilizing a single camera. However, we measure the directions \({\varvec{r}}_i'\) to the marked and lifted points \(M_i'\) in the coordinate system of the camera. In doing so, the projection centre C is approximately above the wing’s root, close to the fuselage.

For the parameter estimation explained in the following section, we project the measured 3D directions onto the 2D wing coordinate system as shown in Fig. 2.

2.2.1 Parameter Estimation

Since the distances \(d_i'\) between the projection centre C and the N lifted points \(M_i'\) are not measurable during the flight, we treat these distances as unknown parameters within an adjustment process. Thus, in total, we have \(N+1\) unknown parameters for a moment captured by a single photo: the coefficient a of the parabola and N distances \(d_i'\) in 2D, \(i \!=\! 1,\ldots ,N\), between the projection centre and the points on the wing’s surface.

For numerical reasons and to cover the case \(a\!=\!0\), we multiply Eq. 5 by the factor 4a and obtain the system of conditional equations

$$\begin{aligned} -4 a l_i^{} + 2 a x_i' s_i^{} + \textrm{arsinh} \left( 2 a x_i' \right) &= 0 \end{aligned}$$
(6)
$$\begin{aligned} a {x_i'}^2 + y_i' - y_i^{} &= 0 \end{aligned}$$
(7)

with the auxiliary variables \(s_i^{} \!=\! \sqrt{1+4a^2{x_i'}^2}\), and \(i \!=\! 1,\ldots ,N\). The point coordinates \({\varvec{x}}_i' \!=\! [x', y']_i^{{\textsf{T}}}\) are computed via the polar point determination \({\varvec{x}}_i' \!=\! d_i' {\varvec{r}}'_i\) with the observed direction vectors \({\varvec{r}}_i' \!=\! [r'_{x}, r'_{y}]_i^{{\textsf{T}}}\), normalized to unit length, see Fig. 2.

Hence, with the measurements \(l_i^{} \!=\! x_i^{}\) and \(y_i^{}\) taken on the ground and the direction components \(r'_{x_i}\) and \(r'_{y_i}\) measured in midair, we obtain the over-constrained system of 2N conditional equations (Eqs. 6 and 7). Minimizing the sum of squared residuals, we obtain estimates of the distances \(d_i'\) and of the parameter a for the geometry at the moment of acquisition during the flight.

The estimation requires a proper weighting of the observations of different types, i.e. directions derived from image point coordinates in midair and distances measured on the ground. Standard deviations for the point coordinates \((x_i^{}, y_i^{})\) determined on the ground can be specified by considering the specifications of the utilized measuring system, e.g. \(\sigma _x^{} \!=\! \sigma _y^{} \!=\! 5\) mm for point coordinates determined with an electronic tachymeter. The direction measurements in midair are derived from observed image coordinates, see Sect. 2.4. Using repeated measurements, we determined the precisions for manually measured image coordinates \((u_i^{}, v_i^{})\) empirically and set \(\sigma _u^{} \!=\! \sigma _v^{} \!=\! 5\) px for the standard deviations in pixels (px).

2.3 Environmental Influences

Changes in the ambient air result in varying geometries of the aeroplane and varying properties of the optical media. In the following, we assess the impact of these effects, e.g. changes in temperature or pressure, on the measurements and the results.

The optical densities in the cabin, in the ambient air on the ground, and in midair are different – an effect that might be crucial for the measurement of directions. According to Snell’s law of refraction, the angles of incidence depend on the indices of refraction of the involved media, cf. Agrafiotis and Georgopoulos (2015) for instance. We neglect the influence of the window glasses as further media.

Table 2 summarizes the indices of refraction for visible light computed for typical conditions (NIST 2016). The computations depend on the altitude, the wavelength in vacuum, the air temperature, the atmospheric pressure, the relative humidity, and the carbon dioxide (CO2) concentration (Ciddor 1996). Assuming an incidence angle of 30\(^\circ\) between the optical axis and the normal of the window glass and assuming that the atmosphere inside the cabin is identical to the one at sea level, we obtain a change of 0.0066\(^\circ\) for a measured direction. This results in a lateral deviation of 4.0 mm at a distance of 35 m. For a camera with a focal length of 7800 px, this results in a 0.90-pixel offset in the image centre, which is less than the precision of the point localizations in images. Hence, the change of refraction can be neglected.
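The figures above can be reproduced with Snell’s law; the two indices of refraction below are assumed values of the order listed in Table 2:

```python
import math

# Indices of refraction: assumed values of the order listed in Table 2.
n_cabin, n_outside = 1.000277, 1.000077
theta1 = math.radians(30.0)            # incidence angle at the window glass

theta2 = math.asin(n_cabin / n_outside * math.sin(theta1))
dtheta = theta2 - theta1               # change of the measured direction (rad)
dtheta_deg = math.degrees(dtheta)      # ≈ 0.0066 degrees
lateral_mm = 35.0 * dtheta * 1000.0    # lateral deviation at 35 m distance
offset_px = 7800.0 * dtheta            # offset for a focal length of 7800 px
```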

Table 2 Indices of refraction at sea level and at cruising level

We assume that the airfoil is built of aluminium with a known coefficient of expansion and that the change in temperature is − 70 K, resulting in contractions with a magnitude of up to − 50 mm for a 40-meter wing. This effect can easily be considered computationally.

2.4 3D-Point Coordinates

The position \({\varvec{X}}\) of a target marker is obtained by polar point determination via

$$\begin{aligned} {\varvec{X}} = {\varvec{C}} + D \, \varvec{R} \, {\varvec{m}}, \end{aligned}$$
(8)

with the coordinates \({\varvec{C}}\) of the projection centre, the distance D between the projection centre and the target marker, the rotation matrix \(\varvec{R}\) between the camera coordinate system and the tachymeter coordinate system, and the direction \({\varvec{m}}\) measured in the camera coordinate system. The directional vector normalized to unit length is

$$\begin{aligned} {\varvec{m}} = \frac{{\varvec{K}}^{-1} {\varvec{x}}'}{\Vert {\varvec{K}}^{-1} {\varvec{x}}' \Vert }, \quad {\varvec{x}}' = [u', v', 1]^{{\textsf{T}}} \end{aligned}$$
(9)

with the calibration matrix \({\varvec{K}}\) containing the intrinsic camera parameters, and the observed image point \({\varvec{x}}'\) represented with homogeneous coordinates.

The distance D in 3D required for Eq. 8 is computed with the distance \({\widehat{d}}\) estimated according to Sect. 2.2.1 in the two-dimensional coordinate system of the wing

$$\begin{aligned} D = \frac{{\widehat{d}}}{\sqrt{r_x^2 + r_y^2}}, \qquad {\varvec{r}} = \varvec{R} \, {\varvec{m}} = [r_x^{}, r_y^{}, r_z^{}]^{{\textsf{T}}} \end{aligned}$$
(10)

with the directional vector \({\varvec{r}}\) derived from image observations in the coordinate system of the tachymeter.
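Eqs. 8–10 can be combined into a short routine; the sketch below uses hypothetical calibration values and assumes that the 2D wing coordinate system is spanned by the first two axes of the tachymeter frame:

```python
import numpy as np

def polar_point(C, R, K, uv, d):
    """3D marker position via Eqs. 8-10: C projection centre, R rotation from
    camera to tachymeter frame, K calibration matrix, uv observed image
    point, d estimated 2D distance in the wing plane (Sect. 2.2.1)."""
    xh = np.array([uv[0], uv[1], 1.0])        # homogeneous image point x'
    m = np.linalg.solve(K, xh)
    m /= np.linalg.norm(m)                    # unit direction m (Eq. 9)
    r = R @ m                                 # direction in tachymeter frame
    D = d / np.hypot(r[0], r[1])              # 3D from 2D distance (Eq. 10)
    return C + D * r                          # Eq. 8

# Round trip with hypothetical values: identity rotation, camera at origin.
K = np.array([[7800.0, 0.0, 2000.0],
              [0.0, 7800.0, 1500.0],
              [0.0, 0.0, 1.0]])
X_true = np.array([1.0, -0.5, 10.0])
uv = (K @ (X_true / X_true[2]))[:2]           # projected image point
d2 = np.hypot(X_true[0], X_true[1])           # 2D distance in the wing plane
X = polar_point(np.zeros(3), np.eye(3), K, uv, d2)
```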

Both the camera coordinate system and the tachymeter coordinate system are defined by the deployment of the measuring devices with free stationing. For the further processing of the evaluation results, e.g. for simulations using computational fluid dynamics, we relate the coordinates of the target markers to the model of the aircraft provided by computer-aided design (CAD). The transformation is obtained via selected tie points on the aircraft’s hull, which can easily be identified in the CAD model. The real-world coordinates of these tie points are measured with a tachymeter.

3 In-flight Measuring System

In the following, we describe the setup of the applied in-flight measuring system and its calibration. The criteria for the selection of appropriate components—essentially the camera and its objective lens—are discussed. In any case, a camera calibration on-site is necessary to capture the influence of the window glasses.

3.1 Camera System

We discuss the selection criteria for the components of the measurement system and explain their installation and calibration.

3.1.1 Selection of Camera and Objective Lens

With the measuring task at hand, we can state several requirements for the utilized camera system:

  • A portion of the wing as large as possible should be visible in the images. At the same time, the spatial resolution on the wing’s upper surface should be high.

  • The depth of field of the utilized camera should cover distances between 5 and 40 m to capture the wings of passenger planes in pin-sharp pictures.

  • The captured images should feature low noise to ease the precise localization of image points. These image points correspond to the centres of the utilized markers or to salient structures on the airfoil, for example corners of landing flaps.

  • The interior orientation of the camera defines the geometry of the optical paths. In midair, the camera observes deforming structures and nothing else. Thus, it is not possible to perform a continuous in-flight camera calibration. For most camera systems, changing the focus setting slightly changes the focal length. Thus, a non-zooming, fixed-focus system has to be applied and calibrated beforehand; see Sect. 3.1.2 below.

Furthermore, lightweight camera systems with autonomous power supply and built-in memory devices are preferable for a fast and easy deployment during flight.

In parts, the requirements listed above are contradictory and one has to agree on compromises: For instance, a wide field of view is desirable to capture points on the leading and trailing edges close to the wing’s root. Given a specific image sensor, this implies a short focal length and causes a poor resolution in object space. Furthermore, sensor chips with large picture elements are beneficial to avoid image noise. However, the camera’s depth of field decreases with increasing sensor size.

In summary, we selected a lightweight consumer camera with a comparably tiny \(1/2.3''\)-CMOS chip featuring 3000\(\times\)4000 sensor elements, but applied a high-quality prime lens with focal length \(f \!=\! 12\) mm and good light transmission; see Fig. 3. The camera’s horizontal field of view is 28.8°.

Fig. 3

Measurement system mounted on a ball head: a consumer-grade camera, a high-quality prime lens, and a remote-control release

3.1.2 Calibration of the Camera System

The interior orientation of a camera system contains all parameters to model the mapping of an object point onto the image plane. The parameters describing the position and orientation of a sensor inside the camera, and the parameters describing lens distortion have to be determined by calibration. In doing so, we consider the window glasses as part of the lens system and describe the distortion with a standard model for consumer-grade lenses, cf. Brown (1971) or Luhmann et al. (2016).
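For illustration, the distortion mapping of such a standard model with three radial and two tangential coefficients can be sketched as follows; the function maps ideal normalized image coordinates to distorted ones:

```python
def brown_distort(xn, yn, k1, k2, k3, p1, p2):
    """Brown's distortion model: map ideal normalized image coordinates
    (xn, yn) to distorted ones, with three radial (k1..k3) and two
    tangential (p1, p2) coefficients."""
    r2 = xn**2 + yn**2
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = xn * radial + 2.0 * p1 * xn * yn + p2 * (r2 + 2.0 * xn**2)
    yd = yn * radial + p1 * (r2 + 2.0 * yn**2) + 2.0 * p2 * xn * yn
    return xd, yd
```

With all coefficients set to zero, the mapping reduces to the identity, i.e. the undistorted pinhole model.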

For the determination of the parameters, a calibration body can be utilized, which results in an easy to apply and highly automatable calibration procedure. One of the simplest calibration bodies is a flat board with a checker pattern, see Zhang (2000) for instance. Figure 4 illustrates its application and Fig. 5 visualizes an estimated lens distortion obtained by such a calibration. The board with quadratic tiles has to be captured in different orientations and distances and its appearances have to cover all image regions.

Fig. 4

One of the images used for the calibration of the camera system mounted in the cabin. The on-site calibration ensures the consideration of the window glasses as parts of the optical system

Fig. 5

Vector field: estimated lens distortion and influence of the window glasses according to Brown’s distortion model. Black dots: measured positions of the markers in the image for the situation on ground, see Fig. 4

With the parameter values of the interior orientation, the observed image points can be translated into spatial directions using Eq. 9.

3.1.3 Installation and Setup

For safety reasons, the measurement system has to be demounted and stowed during take-off and landing. Hence, the reproducibility of the camera’s position and orientation is crucial since all in-flight measurements rely on an identical setup. Thus, mechanical solutions that introduce only small torques and forces are beneficial; see Fig. 3. To specify the precision of the reproducibility, we mounted and dismounted the camera several times in the lab. By comparing the captured images, we noticed shifts of up to 10 px.

For the calibration procedure, a working distance of about five meters is beneficial. This allows calibration bodies of a handy size with a screen-filling appearance. Thus, the near point of the depth of field should be at this distance to obtain sharp images. The calibration body with its pattern is then presented in front of the aircraft’s window in varying positions, distances, and orientations.

To keep the depth of field constant, we disabled the automatic adjustment of the camera’s aperture. In principle, small apertures are beneficial since they result in large depths of field. They require longer exposure times, which is non-critical since we do not expect high-frequency oscillations of the airfoils. Unfortunately, small apertures also lead to diffraction, i.e. light points are mapped to Airy discs. This effect is observable for small sensors and common f-numbers. We tolerated Airy discs with a diameter of two pixels, i.e. 4.4 μm, and selected the f-number 4 as a compromise.
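The tolerated Airy disc diameter can be reproduced with the classical diffraction formula \(d \!=\! 2.44\,\lambda N\); the wavelength below is an assumption at the blue end of the visible spectrum:

```python
wavelength = 450e-9                            # metres (assumed wavelength)
f_number = 4.0                                 # selected aperture
airy_um = 2.44 * wavelength * f_number * 1e6   # first-minimum diameter in µm
```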

The camera is positioned and oriented in a way that as much as possible of the wing’s surface is captured by the images. In doing so, we have to anticipate the expected lift and torsion of the wing to be observed in the images.

3.2 Coordinate Transformations

For the data evaluation, we have to transfer the image observations to directions given in the camera coordinate system. Then, these measured directions have to be transformed into the coordinate system of the tachymeter; see Eq. (9).

In principle, the camera pose can be determined in the tachymeter coordinate system by spatial resection. However, it is anticipated that this solution is uncertain due to the camera’s narrow field of view. Instead, we determine the position of the camera’s projection centre with the tachymeter, too, to obtain the translation between these two coordinate systems precisely. In this process, we take the sensor’s position inside the camera body from a technical drawing. The rotation, however, is estimated using corresponding directions from the projection centre to the target markers. While these directions are measured in the camera coordinate system directly, they can be computed in the tachymeter coordinate system with the 3D point coordinates. The two bundles of rays are then related by a rotation to be estimated (Arun et al. 1987). For this task, we utilized points on the wing’s surface close to its root at the fuselage since the corresponding image observations are more certain.
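The rotation between the two bundles of rays can be estimated with the SVD-based method of Arun et al. (1987); a minimal sketch:

```python
import numpy as np

def estimate_rotation(p, q):
    """Rotation R with q_i ≈ R p_i for corresponding unit direction vectors
    (rows of p and q), following Arun et al. (1987)."""
    H = p.T @ q                                # correlation matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T                      # proper rotation, no reflection
```

The determinant correction guards against the degenerate case in which the plain SVD solution would yield a reflection instead of a rotation.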

Finally, we project all three-dimensional entities into a two-dimensional wing coordinate system provided in the vertical plane that splits the wing longitudinally. With spanwise x-coordinates and vertical ordinates, the wing deflection is computed according to Sect. 2.1 and Fig. 2.

4 Experimental Results

In this section, we present and discuss the results obtained for the camera calibration, explain the design of the utilized target markers, and eventually show results for in-flight measurements. These measurements have been carried out during a scheduled 8.5-hour transatlantic flight with a long-range wide-body airliner. The aircraft’s right wing with a length of approximately 35 m has been captured by multiple image series. This reveals the temporal change of the wing deflection during the flight due to the fuel consumption and the resulting loss of weight. As the aircraft’s total weight W reduces, the wing’s deflection decreases as predicted by Eq. (2).

4.1 Camera Calibration

The interior orientation of the camera defines the geometry of the optical paths. For a proper modelling of the rays, we are considering the window glasses as a part of the optical system. Hence, we performed an on-site calibration to consider the influence of the window glasses.

Figure 4 shows one of the 21 images utilized for the calibration. We estimated the parameters of the sensor fixture and the parameters of the lens distortion according to Brown’s model.

The three coefficients modelling the radial distortion and the two coefficients modelling the tangential distortion have proven to be statistically significant (Brown 1971). The adjustment provided an estimated back-projection error of about 0.5 px, i.e. the average deviation between the measured and the predicted image coordinates. This quantity is the square root of the estimated variance factor for the image observations and appears to be acceptable.

Figure 5 shows the resulting vector field for the necessary displacements to obtain a central perspective mapping by a pinhole camera.

4.2 Measurements

In the following, we describe the executed measurements on the ground and in midair, i.e. the determination of 3D marker positions by a tachymeter and the identification of marker positions in the images.

4.2.1 Application and Localization of Target Markers

The appearance of target markers attached to the airfoils changes dramatically because of the incidence angle between the line of sight and the surface’s normal. While areas close to the wing’s root are well resolved in the images, areas at the wing’s tip appear extremely distorted. Table 3 compiles the distances between two consecutive pixel centres on a wing’s surface. For this estimate, we positioned a camera conceptually 2 m above a horizontal wing with a vertical winglet 40 m away. Then we computed the resolutions close to the fuselage and at the junction of wing and winglet. As before, the image size is 3000\(\times\)4000 px and the camera constant \(c = 7800\) px. The spanwise resolution varies considerably between 4.4 and 102.8 mm, whereas the streamwise resolution, i.e. parallel to the fuselage, varies just between 1.1 and 5.1 mm.
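The resolutions at the wingtip can be reproduced with the stated camera constant and camera height; the sketch below evaluates the pixel footprints for the conceptual geometry described above:

```python
import math

c = 7800.0                          # camera constant in pixels
height, span = 2.0, 40.0            # camera 2 m above the wing, winglet 40 m away

D = math.hypot(span, height)        # slant distance to the wingtip
streamwise_mm = D / c * 1000.0      # footprint parallel to the fuselage (mm)
grazing = math.atan2(height, span)  # angle between line of sight and surface
spanwise_mm = streamwise_mm / math.sin(grazing)   # foreshortened footprint
```

The strong foreshortening factor of roughly 20 at the wingtip explains why the spanwise resolution degrades so much more than the streamwise one.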

Table 3 Spatial resolution specified by the distances between two consecutive pixel centres on an airfoil’s surface

For the situation on the ground, areas of the wing tip might not be visible at all because of the occlusion induced by a wing’s arching. Due to their different orientation, markers on the wing tip devices are highly visible again.

Hence, the design of the target markers should consider this effect. For target markers close to the fuselage, circles with four quadrants alternating in black and white are chosen. For the signalization of distant points at the wing tip, black-filled circles are used as markers. Because of the curved surface, these circles project approximately into very flat ellipses in the image; see Fig. 6. Note that the centroid of such an ellipse is not the centre of the marker because of the perspective foreshortening.

Fig. 6

Equally scaled display details: appearances of three markers visible in the image shown in Fig. 7, bottom

4.2.2 Image Measurements

During the experiments, series of approximately 10 images each have been captured at intervals of one hour. Two example images are shown in Fig. 7. Due to the ongoing fuel consumption and the resulting loss in weight, a decreasing bending of the wings is expected.

Fig. 7

Example images from two photo series. The images show the variation in appearance due to changing illuminations and the transit of media

We decided to carry out manual measurements, although an automatic detection and measurement of the marker positions in the images is conceivable and feasible. Hence, at this stage of the studies we can neglect the impact of the markers’ visibility on automatic measurement methods, which have to cope with specular reflections, cloud shadows, backlight, perspective foreshortening, etc.

To limit the effort, we did not measure each image point position repeatedly. However, to specify the empirical precisions of the image coordinates \((u_i^{}, v_i^{})\), we marked image points several times in a few sample images. The obtained standard deviations differ, depending on the operators’ skills, their interpretations, and their willingness to measure accurately. As mentioned in Sect. 2.2.1, we set \(\sigma _{u}^{} \!=\! \sigma _{v}^{} \!= 5\) px for all image coordinates.

4.3 Wing Deflection, 3D Point Coordinates and Torsion

We estimate the wing deflection according to Sect. 2.2.1 in the wing coordinate system with horizontal spanwise x-coordinates and the y-axis pointing upwards. For the very first image of the flight, we obtained the estimates \({\widehat{a}} \!=\! 0.00218957\) and \({\widehat{\sigma }}_{{\hat{a}}}^{} \! = \! 4.6263\cdot 10^{-5}\) for the parabola parameter a and its standard deviation. The estimated values of the distances \(d_i'\) are in the range between 7.88 and 35.41 m and feature estimated standard deviations between 41.6 and 42.2 mm. The correlation coefficient with the largest magnitude is − 0.16.

Eight parabolas depicting the states during an 8.5 h-flight are shown in Fig. 8. We see that the wingtip initially lifts by up to 3.10 m at cruising level. In the course of the flight, the ongoing fuel consumption results in a reduction of weight. With the decreasing total weight W of the aircraft, the bending of the wings also decreases, as expected according to Eq. (2).

Fig. 8

Temporal decrease in wing bending due to fuel consumption and the resulting loss of weight during an 8.5-hour transatlantic flight

The results of the polar point determination are shown in Fig. 9 in three orthographic projections. The figures show the marker positions on the ground, the marker positions in midair as captured in the first image, the selected tie points on the aircraft’s hull, and the camera position. Additionally, the lifts of the markers and the lines of sight are drawn. We can spot a lift of the wingtip of about 3 m, as expected from Fig. 8.
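The polar point determination itself reduces to adding the adjusted distance along the unit line-of-sight direction to the projection centre. A minimal sketch, with an illustrative direction vector rather than an actual image observation:

```python
import numpy as np

# Polar point determination: a 3D marker position follows from the
# camera's projection centre, a unit line-of-sight direction derived
# from the image observation, and the adjusted distance d'.
# The direction below is hypothetical, not taken from the paper.
def polar_point(centre, direction, distance):
    u = np.asarray(direction, dtype=float)
    u = u / np.linalg.norm(u)            # normalise the direction
    return np.asarray(centre, dtype=float) + distance * u

centre = np.array([0.0, 0.0, 0.0])       # projection centre (camera)
direction = np.array([0.9, 0.1, 0.42])   # hypothetical line of sight
point = polar_point(centre, direction, 31.65)
```

Since the direction is normalised, the computed point lies exactly at the adjusted distance from the projection centre, which is what makes the distance precision from the adjustment carry over directly to the 3D coordinates.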

Fig. 9

Deflection of an aircraft’s right wing in orthographic projections. In midair the directions from the projection centre (□) to the marker positions (●) are observed. On the ground the positions of the markers (○) and the tie points (\(\times\)) are measured

To specify the occurring torsion, we consider the lifts of two markers on the airfoil close to the wingtip. The first is close to the airfoil’s leading edge at a distance of 31.65 m measured along the line of sight, the second close to the trailing edge at a distance of 30.94 m. Applying the polar point determination (8), we find that the former is lifted by 2.33 m while the latter is lifted by 2.16 m. With a horizontal coordinate difference of 1.54 m between these two points, we obtain a change of approximately 6.3\(^\circ\) in the angle of attack close to the wingtip.
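The quoted torsion angle follows from elementary trigonometry on the two lifts and their horizontal separation:

```python
import math

# Torsion-induced change of the local angle of attack, computed from
# the lift difference of the two markers over their horizontal
# (chordwise) separation, using the values reported above.
lift_leading = 2.33    # m, marker near the leading edge
lift_trailing = 2.16   # m, marker near the trailing edge
chord_dx = 1.54        # m, horizontal distance between the markers

twist_deg = math.degrees(math.atan((lift_leading - lift_trailing) / chord_dx))
# twist_deg is approximately 6.3 degrees, matching the value in the text
```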

5 Conclusions and Outlook

The shape of an aeroplane’s airfoil determines its aerodynamics and, therefore, the engines’ fuel consumption. Hence, the precise geometry in midair is required to perform realistic and meaningful fluid simulations for analyzing and optimizing the aerodynamic performance of an existing aircraft. Thus, measurements like those shown here are essential when changing the aerodynamics of an aircraft by modifying its surface, e.g. by adding a radome or an antenna, modifying a winglet, or applying sharkskin structures.

We created a lightweight and non-intrusive in-flight measurement system to be mounted in an aeroplane’s cabin for the continuous determination of a wing’s deflection. In contrast to conventional stereoscopic approaches, the system operates with a single camera. This feature paves the way for an easy and cost-efficient application during passenger flights. The only high-grade, and therefore expensive, component is a high-quality prime lens with good light transmission.

These advantages are balanced by the need for a deflection model and for the determination of the positions of the target markers on the ground. The latter are stuck to the wing’s surface to facilitate the identification of points. The utilized deflection model assumes the preservation of arc lengths on the wing’s surface. The model is a reasonable approximation for wings with a moderate taper ratio, a prerequisite that holds for most modern passenger aircraft.

The most time-consuming step in the proposed procedure is the determination of the markers’ 3D coordinates on the ground, which was done with a tachymeter performing electro-optical distance measurements. This task can be accelerated on site by using a drone equipped with a measuring camera to carry out a photogrammetric multiview point determination.

The environmental medium changes considerably between the situations on the ground and in midair. To account for this, we calculate the thermal expansion of the wing. The impact of the refraction caused by the window glass is implicitly covered by the camera calibration, and we can neglect the influence of the changing ambient air.
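The order of magnitude of the thermal correction can be sketched with the linear expansion law \(\Delta L = \alpha\,L\,\Delta T\); the coefficient and temperatures below are typical values for an aluminium wing at cruise, not the values used in the paper:

```python
# Back-of-the-envelope linear thermal contraction of the wing between
# ground and cruise conditions; all numbers are illustrative.
alpha_al = 23e-6        # 1/K, linear expansion coefficient of aluminium
span = 30.0             # m, illustrative semi-span
t_ground = 20.0         # degrees C on the ground
t_cruise = -55.0        # degrees C at cruising altitude

delta_l = alpha_al * span * (t_cruise - t_ground)
# delta_l is roughly -0.05 m, i.e. the wing contracts by a few centimetres
```

A contraction of a few centimetres over the semi-span is small compared to the metre-scale deflections, but it is on the order of the reported distance precisions and therefore worth correcting for.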

The in-flight system has been applied on several long-distance flights. The results verify the decreasing wing deflection due to fuel consumption and the resulting weight reduction during flight. The positions of the markers in midair are determined by polar point determination with distances obtained from the adjustment procedure. The directions for this polar point determination are given by the image observations. Thus, we also capture the torsion of the airfoil as seen in the images.

No independent measuring system is available that provides reference data with superior accuracy. Hence, no assessment of the achieved accuracies is feasible. However, the utilized mathematical and statistical methods ensure that the most probable values given the observations have been computed. As expected, the precision of the estimated 3D coordinates is highest next to the airfoil’s root. The precisions of the measurements on the ground (distances) and the measurements in midair (image coordinates) have been verified by estimating variance components for these two groups of observations (Koch 1986).
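A simplified, single-iteration form of variance component estimation can be sketched on synthetic data; the two groups below merely stand in for the ground distances and the image coordinates, and both the model and the numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear adjustment with two observation groups to illustrate a
# simplified variance component estimation: the estimated factor of a
# group should be close to 1 if its a-priori precision is realistic.
A = rng.standard_normal((40, 3))          # design matrix
x_true = np.array([1.0, -2.0, 0.5])
group = np.array([0] * 20 + [1] * 20)     # group label per observation
sigmas = np.where(group == 0, 0.01, 0.05) # a-priori sigma per group
l = A @ x_true + rng.standard_normal(40) * sigmas

P = np.diag(1.0 / sigmas**2)              # a-priori weight matrix
N = A.T @ P @ A
x_hat = np.linalg.solve(N, A.T @ P @ l)   # weighted least squares
v = l - A @ x_hat                         # residuals

# Redundancy numbers: r_i = 1 - H_ii with H = A N^{-1} A^T P
H = A @ np.linalg.solve(N, A.T @ P)
r = 1.0 - np.diag(H)

# Group variance factors: weighted residual sum over group redundancy
var_factors = {}
for g in (0, 1):
    m = group == g
    var_factors[g] = (v[m]**2 / sigmas[m]**2).sum() / r[m].sum()
```

With correctly chosen a-priori precisions, both factors scatter around 1; a factor far from 1 would indicate that the stochastic model of that observation group needs rescaling.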

Several improvements can be envisaged for a more operational application, especially an automatic camera calibration and an automatic detection and measurement of the marker positions in the images. These approaches would also allow for a continuous monitoring of the wing stress in real time, enabling a more precise determination of necessary overhaul and maintenance intervals.