1 Introduction

Structural health monitoring (SHM) aims to provide valuable information for structural assessment and decision support for maintenance through relevant measures of structural response. Deformation is an important metric for structural condition and performance assessment for several reasons. In particular, serviceability is reflected in deformation during normal operation: extreme values and ranges indicate problems that may limit operational use, while time-varying deformation patterns constructed from discrete displacement measurements can provide a wealth of information about structural condition.

Conventional sensors such as linear variable differential transformers (LVDTs) require a stationary reference point and direct access to the monitored structure, which can be challenging on site. Global positioning systems (GPS) are limited in measurement accuracy (i.e. sub-centimetre [1] or centimetre level [2]) and are mostly applied to monitoring campaigns on flexible large-scale structures. Displacement estimation by integration of acceleration measurements is only feasible for short-duration signals and might fail to capture the static or quasi-static components of displacement. Such limitations of more traditional displacement sensing technologies have driven research in non-contact optical sensing. Vision-based monitoring methods have promising features, e.g. simple instrumentation and installation, operation remote from the structure and the capacity for multi-point measurement using a single (camera) sensor.

Although optics-based methods were used earlier for monitoring civil structure deformation, e.g. on the Tacoma Narrows Bridge [3] and the Tagus Bridge [4], among the earliest applications of opto-electronic vision-based continuous structural deformation monitoring using charge-coupled device (CCD) arrays were those on the Humber Bridge and Severn Bridge in the UK [5, 6]. Since then a number of systems have been developed and evaluated for structural deformation monitoring in high-rise buildings [7], short-span bridges [8,9,10,11] and long-span bridges [12,13,14].

Vision-based systems offer significant potential for structural condition assessment, in particular for system identification [15,16,17]. In addition, deformation information has been used for finite element model calibration [18], damage detection [19] and as a contribution to bridge weigh-in-motion systems, with camera assistance for traffic monitoring [20].

Improvements have been investigated in both video acquisition hardware and video-processing software. Feasible video acquisition devices have been expanded to include smartphone cameras [15, 21], while the artificial targets required by conventional systems have been discarded in some recent applications under specific camera configurations [8, 15, 22]. Efficient target tracking techniques from the computer vision field have been validated for structural deformation monitoring [15, 22, 23], and the measured structural displacements have been extended to three-dimensional [17, 24,25,26] and six-degree-of-freedom (DOF) motions [11, 14].

This paper aims to present a summary of key work in the field of vision-based systems for structural displacement monitoring while highlighting the principles, advantages and shortcomings of these systems. Although previous reviews of vision-based structural monitoring exist [27,28,29], the contribution of this work is to provide an overview of system classifications, methodologies and applications in field monitoring.

The paper is organised as follows: Sect. 2 introduces the components of a vision-based system for displacement monitoring, followed by a comparison of the application scopes of several mature vision-based systems. In Sect. 3, vision-based systems are categorised based on methods of video processing, and the three components of the video-processing procedure (i.e. camera calibration, target tracking and structural displacement calculation) are each reviewed in terms of principle, applications, advantages and shortcomings. In Sect. 4, applications to bridge deformation and cable vibration measurement are reviewed, followed by a discussion of measurement challenges in field applications. Finally, important gaps requiring further investigation are presented, e.g. robust tracking methods, non-contact sensing and measurement accuracy evaluation in field conditions.

2 Vision-based displacement monitoring systems

Applying a vision-based system for structural displacement monitoring requires setting up one or more cameras in a stable location, aimed at a ‘target’ on the structure, and deriving the structural displacement through target tracking. Here the ‘target’ could be either artificial (e.g. a pre-installed marker, LED lamp or planar panel with special patterns) or an existing structural feature (e.g. bolts or holes). As shown in Fig. 1, the hardware generally comprises a camera, lens, laptop/portable computer with a video-processing package and some accessories, e.g. a tripod. The video-processing software is critical: its role is to acquire the video frames covering the target region, track the target locations in the image sequence and finally transform the target locations in the image into a time history of structural displacement.

Fig. 1 Vision-based system for structural displacement monitoring of the Humber Bridge [30]: a site configuration of the vision-based monitoring system; and b 10-min time history signal of vertical displacement at the bridge mid-span measured by the vision-based monitoring system

Systems for extracting metric information from images or videos exist in several fields as indicated in Table 1, e.g. digital image correlation (DIC) [9, 31, 32], photogrammetric techniques [33] and motion capture systems (MCS) [17, 34]. DIC is a measurement tool for extracting full-field displacements or strains of a member surface in experimental solid mechanics [32, 35, 36]. Photogrammetry, originally used in the production of topographic maps [37], has been extended to deflection monitoring of bridge structures [38]. Motion capture systems (MCS) are usually applied to capture the movements of a high degree-of-freedom skeleton structure with a number of joints (e.g. human bodies) [39].

Table 1 Summary of vision-based systems

A vision-based system for structural displacement monitoring has its own unique features, as indicated in the last row of Table 1. Researchers have performed several investigations into system development targeted at structural applications, and these studies are reviewed in terms of methodologies in the next section.

3 Review of vision-based structural displacement measurement

In this study, vision-based systems in the literature are classified based on video-processing methodologies. A typical video-processing software package fits into the three-component framework shown in Fig. 2. The derived displacement data can then be interpreted for bridge condition assessment.

Fig. 2 Video-processing procedures for structural displacement measurement and common methods in each step

If the monitoring campaign is only for system identification and exact vibration values [40, 41] are not required, target tracking may be the only part of the whole video-processing procedure needed, but coordinate transformation might be necessary to align the image motion directions with the structural axes.

Next, the methods for camera calibration, target tracking and structural displacement calculation in literature are reviewed separately.

3.1 Camera calibration

Camera calibration concerns building the projection relationship between 3D points in the structural coordinate system and the corresponding 2D points in the image plane. The determined projection transformation can be used to recover the real locations of targets on the structure given the target locations in the image.

Three categories of projection transformation are reported in the literature, including the full projection matrix, planar homography and scale factor, as indicated in Table 2. In most cases the projection transformation follows the full perspective model, while it can be simplified to an affine camera model when cameras are equipped with long focal length lenses [25].

Table 2 Projection transformation from structure to image plane

3.1.1 Full projection matrix

3.1.1.1 Principle

The full projection matrix is the general form of projection transformation from the 3D structural system to the 2D image plane under no constraint on camera orientation and structural movement directions and is usually used to reconstruct the target 3D structural displacement. The projection relationship is demonstrated in Fig. 3 with a point \( P_{S} ({\mathbf{X}}_{W} = [X,Y,Z,1]^{\text{T}} ) \) in the structural coordinate system mapping to a point \( P_{I} ({\mathbf{u}} = [u,v,1]^{\text{T}} ) \) in the 2D image plane,

$$ \alpha \{ {\mathbf{u}}\} = [H]_{3 \times 4} \{ {\mathbf{X}}_{W} \} , $$
(1)

where \( [H]_{3 \times 4} \) is a full projection matrix and \( \alpha \) is an arbitrary coefficient.

Fig. 3 Camera projection model: central perspective projection

The calibration process is shown in Fig. 4 with two main steps. The camera intrinsic matrix is usually estimated in the laboratory by analysing a set of images of a calibration object taken from different viewpoints [42]. The calibration object is typically a flat plane or 3D object with a dot or grid pattern of known spacing, such as the chessboard pattern shown in Fig. 4. At least three views of the calibration object with four corner points are required, but it is suggested to use at least ten images to derive more robust estimates [43]. After laboratory calibration, any lens functions that might change the camera internal parameters, e.g. autofocus and automated image stabilisation, are disabled.
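
As an illustration of this laboratory step, a minimal sketch is given below using the OpenCV calibration functions mentioned at the end of this subsection; the image file names, chessboard size and square spacing are assumptions for the example.

```python
import glob
import cv2
import numpy as np

# Chessboard with 9 x 6 inner corners and 25 mm square spacing (assumed values)
pattern_size, square = (9, 6), 25.0
grid = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
grid[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for fname in glob.glob('calib_*.png'):            # at least ten views are recommended
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
        obj_points.append(grid)
        img_points.append(corners)

# Intrinsic matrix K and lens distortion coefficients (k1, k2, p1, p2, k3)
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print('re-projection error (pixels):', rms)
```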

Fig. 4 Calibration steps for estimation of full projection matrix

Consumer-grade cameras and smartphone cameras often employ wide-angle lenses to increase the field of view [15], leading to distorted images, particularly in the corner regions of the frame as shown in Fig. 5a. The lens distortion parameters can also be estimated in laboratory calibration and applied to correct the image, with the rectified result shown in Fig. 5b. For cameras equipped with lenses producing no apparent lens distortion, the distortion correction step is not necessary. Naturally for the monitoring measurements, it is preferable to locate the target region in the central area of the field of view [10], which suffers less lens distortion, as shown in Fig. 5a.
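
Assuming the intrinsic matrix K and distortion coefficients dist from the laboratory calibration sketch above, a frame can be rectified before tracking; a minimal OpenCV sketch follows (the file names are assumptions):

```python
import cv2

frame = cv2.imread('frame_0001.png')           # raw frame (assumed file name)
undistorted = cv2.undistort(frame, K, dist)    # corrected image, cf. Fig. 5b
cv2.imwrite('frame_0001_rectified.png', undistorted)
```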

Fig. 5 Images of chessboard taken by GoPro Hero 4 session camera: a raw image; and b image after distortion correction

In the second step, the camera extrinsic matrix representing the camera position and orientation is estimated on site through point correspondences, i.e. 3D structural coordinates of control points and 2D image coordinates of their projections in an image. Given at least four sets of point correspondences, least-squares optimisation is used to find the camera extrinsic matrix that minimises the total re-projection error between the detected image points and the calculated image projection points.
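
A minimal sketch of this on-site step with OpenCV is shown below, assuming four coplanar control points with known structural coordinates and their measured image projections (all numeric values are illustrative), and reusing K and dist from the laboratory calibration sketch; the estimated extrinsic parameters are then combined with the intrinsic matrix into the full projection matrix of Eq. (1).

```python
import cv2
import numpy as np

# Control point correspondences (assumed values; metres and pixels)
X_w = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0],
                [1.2, 0.8, 0.0], [0.0, 0.8, 0.0]], np.float32)  # 3D structural coordinates (planar target)
u   = np.array([[512.3, 388.1], [845.6, 392.4],
                [840.2, 610.7], [508.9, 605.3]], np.float32)    # 2D image coordinates

# Extrinsic parameters (rotation, translation) by minimising the re-projection error
ok, rvec, tvec = cv2.solvePnP(X_w, u, K, dist)
R, _ = cv2.Rodrigues(rvec)

# Full 3x4 projection matrix H of Eq. (1): alpha * u = H * X_w (homogeneous coordinates)
H = K @ np.hstack([R, tvec])
```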

The calibration algorithms are available in the Vision System Toolbox of MATLAB and the open-source library OpenCV.

3.1.1.2 Application review

Camera calibration for full projection matrix estimation is commonly used to measure 3D structural displacement, with a few examples illustrating the method: the procedures of laboratory and site camera calibration are described by Kim et al. [44] in an application to structural displacement monitoring of a three-span bridge under truck-induced vibration. The viability of motion capture systems for laboratory vibration measurement was verified [34] using a T-shaped calibration wand for the estimation of camera extrinsic parameters. In the case of a long-span bridge, Martins et al. [14] applied the calibration method to measure the 3D structural displacement at mid-span with the assistance of a set of four active targets. The estimated camera parameters can be refined when multiple cameras with overlapped views are involved. For example, the methodology described by Chang and Ji [24] is based on the epipolar geometry principle of stereoscopic vision, where five points, including the structural point \( P_{S} \), the projection points in the two image planes \( P_{I}^{1} \) and \( P_{I}^{2} \), and the two camera optical centres, should all be coplanar, as shown in Fig. 6.

Fig. 6 Epipolar geometry principle of stereoscopic vision

3.1.1.3 Remarks

The full projection matrix is an accurate representation of the projection relationship and is thus applicable to any configuration of cameras on site. The lens distortion problems common for consumer-grade cameras do not prevent their use for such measurements, since corrections are readily made for distortion using laboratory camera calibration.

Camera calibration on site requires position information for some structural points. In existing studies this has mainly been acquired through the installation of artificial targets. Including artificial targets in laboratory tests is easy, e.g. attaching a planar chessboard target [24, 44] or placing a T-shaped calibration wand in the field of view [17, 34], while the installation effort in field tests [14] is much greater. Two existing field applications are summarised in Table 3, indicating the feasibility of this method for both short-range and long-range monitoring tests.

Table 3 Summary of two field applications in literature using the full projection matrix as projection transformation

3.1.2 Planar homography

3.1.2.1 Principle

For the case where the target moves within a plane contained in the 3D structural system (e.g. the XY plane), the projection relationship could be simplified to a planar homography between a 2D structural plane \( ({\mathbf{X}}_{P} = [X,Y,1]^{\text{T}} ) \) and a 2D image plane \( ({\mathbf{u}} = [u,v,1]^{\text{T}} ) \)

$$ \alpha \{ {\mathbf{u}}\} = [P]_{3 \times 3} \{ {\mathbf{X}}_{P} \} , $$
(2)

where \( [P]_{3 \times 3} \) is the planar homography matrix and \( \alpha \) is an arbitrary coefficient.

The reconstructed results using planar homography are usually the 2D structural displacement of targets.

The calibration process requires at least four sets of 2D-to-2D point correspondences [46], similar to the on-site estimation process in the full projection matrix method.
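
As a sketch of this calibration, assuming four 2D-to-2D point correspondences with illustrative coordinates, the planar homography of Eq. (2) can be estimated with OpenCV and its inverse used to map a tracked image location back to the structural plane.

```python
import cv2
import numpy as np

# Correspondences between the structural plane (metres) and the image (pixels); assumed values
X_p = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 0.5], [0.0, 0.5]], np.float32)
u   = np.array([[210.0, 330.0], [650.0, 325.0], [655.0, 540.0], [205.0, 545.0]], np.float32)

P, _ = cv2.findHomography(X_p, u)      # 3x3 planar homography, structural plane -> image
P_inv = np.linalg.inv(P)

def image_to_plane(u_px, v_px):
    """Map a tracked image location back to 2D structural-plane coordinates."""
    x = P_inv @ np.array([u_px, v_px, 1.0])
    return x[:2] / x[2]

# Displacement = change of structural-plane coordinates between reference and current frames
d = image_to_plane(432.7, 418.2) - image_to_plane(431.9, 415.0)
```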

3.1.2.2 Application review

The planar homography accounts for the geometric distortion in the projection process and thus places no constraint on camera positioning [47]. The 2D direct linear transform is effective for planar homography estimation [48]; for example, the method was applied to monitor the oscillation of a laboratory steel frame with a dense array of markers glued to the surface [48] and the mid-span deformation of a long-span bridge with an attached planar artificial target [49].

3.1.2.3 Remarks

Planar homography applies no constraint on camera positioning and can be used to recover the target 2D structural displacements. In practice, the geometric information needed for calibration is usually provided by attaching artificial planar targets with known dimensions.

This calibration method is based on the assumption that the target moves within a structural plane with negligible motion along the third axis. Any motion not contained within this plane will lead to measurement error unless the motion is purely perpendicular to the camera optical axis.

3.1.3 Scale factor

3.1.3.1 Principle

The scale factor is the simplest projection transformation and assumes an equal depth (camera-to-point distance) for all projected points, or a camera configuration where the optical axis is perpendicular to one structural plane [48]. With this assumption, the mapping process reduces to a 1D–1D projection as indicated in Fig. 7. The scale factor \( {\text{SF}} \) from the structural displacement to the image motion can be determined from a known dimension correspondence or from the camera-to-target distance,

$$ {\text{SF}} = \frac{{\left| {P_{I} Q_{I} } \right|}}{{\left| {P_{S} Q_{S} } \right|}}, $$
(3)
$$ {\text{or}}\;{\text{SF}} = \frac{{f_{\text{pix}} }}{D}, $$
(4)

where \( \left| {P_{S} Q_{S} } \right| \) and \( \left| {P_{I} Q_{I} } \right| \) are the known physical dimension on the structural surface and the corresponding pixel length of the projection in image; \( f_{\text{pix}} \) is the camera lens focal length in terms of pixel units; and \( D \) denotes the distance from the camera optical centre to the structural surface plane.
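
A short sketch of the two calibration options in Eqs. (3) and (4), and of converting tracked image motion to structural displacement, is given below; all numeric values (known dimension, pixel length, focal length, distance) are assumptions for illustration.

```python
# Option 1: known dimension on the structural surface and its pixel length in the image, Eq. (3)
known_dimension_m = 0.50          # |P_S Q_S|, e.g. target panel width (assumed)
pixel_length      = 182.0         # |P_I Q_I|, measured in the image (assumed)
sf_dimension = pixel_length / known_dimension_m           # pixels per metre

# Option 2: focal length in pixel units and camera-to-target distance, Eq. (4)
f_pix = 8500.0                    # focal length in pixels (assumed)
D_m   = 95.0                      # camera-to-target distance in metres (assumed)
sf_distance = f_pix / D_m                                  # pixels per metre

# Convert image motion (pixels) from target tracking into structural displacement (metres)
image_motion_px = 3.6
displacement_m = image_motion_px / sf_dimension
```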

Fig. 7 Camera projection model when the optical axis of camera is perpendicular to the structural surface

For a system combining a camera with a total station, a projection coefficient called the angular resolution [50, 51] is used to perform the transformation; it represents the angle (\( \alpha \) in Fig. 7) between the camera optical axis and a projection line whose projected length (\( \left| {O_{I} P_{I} } \right| \)) is one pixel. In principle, this projection transformation is similar to the scale factor estimated from the camera-to-target distance in Eq. (4), where the distance \( D \) is measured directly by an electronic distance measurement (EDM) instrument and the focal length \( f_{\text{pix}} \) is related to the angular resolution \( \theta \) by

$$ \theta \approx \tan \theta = 1/f_{\text{pix}} . $$
(5)

3.1.3.2 Application review

The scale factor has been widely used to transform image motion to structural displacement, with the features summarised in Table 4. Most often the scale factor is determined via a known dimension on an artificial target attached to the structure [5, 8,9,10, 12, 13, 15, 52,53,54,55,56,57], while the method using the camera-to-target distance [22] is less popular. For 2D structural displacement measurement, the scale factors for the two axes within the target plane are calibrated separately from dimension correspondences [53,54,55,56]. Error analysis indicates that the scale factor from dimension correspondence is less sensitive to tilt of the camera optical axis [8]. However, the scale factor based on the camera-to-target distance has no dependence on artificial targets and is thus an easier way to realise completely non-contact monitoring [22].

Table 4 Features of two calibration methods for scale factor

3.1.3.3 Remarks

The scale factor is the simplest projection transformation, particularly when no artificial target is used [15, 22], and works when the camera optical axis is perpendicular to the structural surface. Camera positioning is less critical [8] when known structural dimensions are used for calibration. However, when applying the scale factor derived from the camera-to-target distance, the tilt angle of the camera optical axis should be less than 10° according to laboratory validation tests at short range (≤ 3.7 m) [58]. Care must be taken that different scale factors are applied to the two axes when measuring 2D displacement. This simple method can also be used with cameras having apparent lens distortion, since the lens distortion correction method previously described can be applied [15, 57].

3.2 Target tracking

Target tracking is the key part of a video-processing software package. In this study, target tracking techniques are categorised into four types based on target characteristics shown in Table 5, partly referring to [59].

Table 5 Categories of target tracking methods

3.2.1 Template matching

3.2.1.1 Principle

Template matching is a classic technique for target tracking that searches in a new frame for the area most closely resembling a predefined template, following the procedures demonstrated in Fig. 8. A rectangular subset of the reference frame is first selected as the template; it could cover either an artificial target [5] or a feature target on the structural surface [8]. A correlation criterion is required to evaluate the similarity between the template and a candidate image subset. Robust matching criteria are zero-mean normalised cross correlation (ZNCC) and zero-mean normalised sum of squared differences (ZNSSD), which are insensitive to offset and linear scaling in illumination [35], while another similarity criterion based on orientation codes is also reported to be effective [60]. The definition of the ZNCC criterion is provided as an example in Eq. (6), while more details are given in [35]

$$ C_{\text{ZNCC}} = \sum\limits_{i = - M}^{M} {\sum\limits_{j = - N}^{N} {\left[ {\frac{{(f(x_{i} ,y_{j} ) - f_{m} )(g(x_{i}^{'} ,y_{j}^{'} ) - g_{m} )}}{\Delta f\Delta g}} \right]} } , $$
(6)

where \( f(x_{i} ,y_{j} ) \) and \( g(x_{i}^{'} ,y_{j}^{'} ) \) denote the image intensity values at the specified pixel locations in the template and in the candidate subset of the new frame; \( f_{m} \) and \( g_{m} \) denote the average intensity values in the template and in the candidate subset; and \( \Delta f \) and \( \Delta g \) denote the corresponding standard deviations of intensity values.
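
A minimal sketch of correlation-based template matching with OpenCV is given below; its TM_CCOEFF_NORMED method implements a zero-mean normalised cross correlation, while the file names and template region are assumptions.

```python
import cv2

ref = cv2.imread('frame_ref.png', cv2.IMREAD_GRAYSCALE)   # reference frame (assumed file)
new = cv2.imread('frame_new.png', cv2.IMREAD_GRAYSCALE)   # new frame (assumed file)

# Template: rectangular subset around the target in the reference frame (assumed coordinates)
v0, u0, h, w = 240, 310, 64, 64
template = ref[v0:v0 + h, u0:u0 + w]

# Correlation map using a zero-mean normalised cross correlation criterion, cf. Eq. (6)
corr = cv2.matchTemplate(new, template, cv2.TM_CCOEFF_NORMED)
_, _, _, max_loc = cv2.minMaxLoc(corr)                     # pixel-level peak location
du, dv = max_loc[0] - u0, max_loc[1] - v0                  # image motion of the target (pixels)
```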

Fig. 8 Procedures of template matching method for target tracking: the horizontal and vertical coordinates of the target centre in the image plane are denoted as U and V, respectively; and the subscripts 0 and 1 represent the image coordinates in the reference and new frames, respectively

The location in the correlation map with the highest similarity is taken as the new image location of the target. The default resolution is at pixel level, so interpolation schemes [8] are used to refine the result to sub-pixel level. Feasible interpolation methods include bi-cubic interpolation [56] and second-order polynomial interpolation [57] in the spatial domain, and zero-padding interpolation in the frequency domain [8]. If the selected target includes robust and identifiable features, Harris corner detection, which identifies edge intersection points through a score value related to the eigenvalues of the image gradient matrix, could be an alternative to refine the initial matching location [24].
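
One of the spatial-domain schemes mentioned above, second-order polynomial (parabolic) interpolation of the correlation peak, can be sketched as follows; corr and max_loc are taken from the template matching sketch above, and border cases are ignored for brevity.

```python
def parabolic_subpixel(corr, peak_u, peak_v):
    """Refine an integer correlation peak to sub-pixel resolution by fitting a
    1D parabola through the peak and its two neighbours in each direction."""
    c = corr
    du = 0.5 * (c[peak_v, peak_u - 1] - c[peak_v, peak_u + 1]) / (
         c[peak_v, peak_u - 1] - 2 * c[peak_v, peak_u] + c[peak_v, peak_u + 1])
    dv = 0.5 * (c[peak_v - 1, peak_u] - c[peak_v + 1, peak_u]) / (
         c[peak_v - 1, peak_u] - 2 * c[peak_v, peak_u] + c[peak_v + 1, peak_u])
    return peak_u + du, peak_v + dv

u_sub, v_sub = parabolic_subpixel(corr, max_loc[0], max_loc[1])
```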

3.2.1.2 Application review

Template matching is an established method widely applied in structural monitoring since the earliest work on the Humber and Severn Bridges in the 1990s [5, 6]. Recent applications include displacement monitoring tests on a railway bridge [8], a long-span bridge [13] and a high-rise building [7].

Digital image correlation (DIC) is an extension of template matching mostly used in experimental mechanics [32, 35], with the difference that DIC considers the shape distortion under large deformation [61], e.g. via Lucas–Kanade template matching [62]. As an example, a short-span railway bridge monitoring exercise [63] used both normalised correlation-based matching and Lucas–Kanade matching and indicated high similarity between the two in both the time and frequency domains.

3.2.1.3 Remarks

Template matching is easy to use without user intervention apart from the initial selection of the template region. It does not have any special requirement for target patterns and has been validated to work well to track artificial planar targets with specific patterns [5, 6, 24], LED lamp targets [13] and feature targets on structural surfaces [8].

Template matching is not robust to changes in shading, illumination [30, 63] and background conditions [64] in the field, although sensitivity to lighting changes might be reduced using camera auto-exposure settings [30]. The method is also not appropriate for tracking slender structural components, since the rectangular subset image used as a template might include background pixels with inconsistent motion.

3.2.2 Feature point matching

3.2.2.1 Principle

Instead of analysing all the locations within the target, feature point matching applies to sparse ‘special’ points within the target region, independently detecting these special points in two images and then finding point correspondences based on their local appearance. ‘Special’ points in an image, termed ‘interest points’ or ‘keypoints’ in computer vision, are those which are stable, distinctive and invariant to image transformation and illumination changes, such as building corners, connection bolts, or other patches with interesting shapes [65].

The procedures are indicated in Fig. 9. A popular keypoint detector in step (1) is the Harris corner detector [66], which is widely used in structural monitoring applications [11, 15, 22, 24, 57]. Instead of using the pixel values directly for similarity comparison, keypoints are often extracted and described by a more complex representation (i.e. a feature descriptor) according to the shape and appearance of a small window around the keypoint [65]. The common descriptors and their matching criteria are indicated in Table 6. Floating point-based descriptors (e.g. the scale-invariant feature transform (SIFT) [67] and speeded up robust features (SURF) [68]) are represented by float vectors, commonly reflecting various local intensity gradients of a pattern around the keypoint. Binary string-based descriptors (e.g. binary robust independent elementary features (BRIEF) [69], oriented FAST and rotated BRIEF (ORB) [70] and the fast retina keypoint descriptor (FREAK) [71]) are represented by binary vectors (with elements of 0 and 1) obtained through pairwise comparisons of image intensities (i.e. whether the former is greater or less than the latter) over a specific sampling pattern around the keypoint. The matching criterion between two binary descriptors is usually the Hamming distance [69], equal to the number of elements which differ between the two vectors.

Fig. 9 Procedures of feature point matching for target tracking

Table 6 Categories of feature descriptors and corresponding matching criteria

To verify the matched keypoint correspondences in step (3), geometric alignment is often used, based on whether the keypoints in the first image fit the keypoints in the second image after a specific geometric transformation. The widely used approaches for discarding outliers are RANdom SAmple Consensus (RANSAC) [72] and least median of squares (LMS) [73]. The tracking output is the average motion of the keypoints, which inherently has sub-pixel resolution and can be converted to the target location in the image.
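
A sketch of the procedure in Fig. 9 using the ORB detector/descriptor, brute-force Hamming matching and RANSAC-based outlier rejection in OpenCV is shown below; the file names and threshold values are assumptions.

```python
import cv2
import numpy as np

ref = cv2.imread('frame_ref.png', cv2.IMREAD_GRAYSCALE)
new = cv2.imread('frame_new.png', cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp0, des0 = orb.detectAndCompute(ref, None)    # keypoints/descriptors in the reference frame
kp1, des1 = orb.detectAndCompute(new, None)    # keypoints/descriptors in the new frame

# Brute-force matching with Hamming distance for binary descriptors
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des0, des1)

pts0 = np.float32([kp0[m.queryIdx].pt for m in matches])
pts1 = np.float32([kp1[m.trainIdx].pt for m in matches])

# Geometric verification: RANSAC discards correspondences inconsistent with a homography
_, inlier_mask = cv2.findHomography(pts0, pts1, cv2.RANSAC, 3.0)
inliers = inlier_mask.ravel().astype(bool)

# Target image motion = average motion of the inlier keypoints (sub-pixel by construction)
du, dv = (pts1[inliers] - pts0[inliers]).mean(axis=0)
```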

3.2.2.2 Application review

Song et al. [74] proposed a target tracking method based on the circular Hough transform for marker detection and the coherent point drift algorithm for marker matching, and the method was applied for system identification of a steel cantilever beam in the laboratory. Field applications include Khuc and Catbas [22, 75], who applied the FREAK and SIFT methods for deformation measurement of a stadium structure and a railway bridge, and Ehrhart and Lienhart [59, 64], who applied the ORB method for deformation measurement of a short-span footbridge.

3.2.2.3 Remarks

Feature point matching is an efficient technique since it deals with sparse points instead of the whole region as in template matching. This technique uses local descriptors to represent keypoints instead of the raw image intensities and is therefore less sensitive to illumination change, shape change and scale variation.

However, feature point matching requires the target region to have rich textures for distinctiveness during the whole recording period. Also several parameters need to be adjusted manually according to users’ experience or judgement, e.g. contrast threshold for feature detector and distance threshold in matching criteria. These parameter adjustments might depend on environmental changes, e.g. the threshold for outlier removal might vary with the illumination condition [22].

Currently the feature point matching technique has only been validated in several short-range measurement tests [22, 59, 64, 75]. Its feasibility for long-range monitoring, in terms of stability over several hours, and the choice of the best feature descriptors remain open questions.

3.2.3 Optical flow estimation

3.2.3.1 Principle

Instead of finding matching locations of a complete region or sparse keypoints, optical flow estimation detects the motions or flows of all pixels within the target region. Optical flow is the apparent velocity of movement in an image resulting from brightness pattern shift [76]. The calculation imposes two constraints: a temporal constraint on image properties (e.g. image intensity constancy for the same pattern over time) and a spatial constraint that models the flow properties in an image (e.g. coherent motion in adjacent pixels) [77]. A function reflecting these two constraints is then defined and optimised to derive a dense estimate of the velocity flow for each pixel. In structural monitoring applications, the output can be converted to image motion instead of velocity by replacing the temporal gradient of image properties in the function with the variation of image properties between two discrete frames. Outlier removal is used to retain only sensible image motions, and the average image motion of the inlier pixels, which inherently has sub-pixel resolution, is converted to the target location.

Optical flow estimation is an established method with several variant techniques, such as ‘differential’, ‘spatiotemporal energy’ and ‘phase-based’. In this section only two methods, the differential technique of Lucas and Kanade (LK) [78] and the phase-based technique [79], are discussed.

LK optical flow estimation [78] is based on the brightness constancy assumption, i.e. the projection of the same point has the same image intensity in every frame. Since corner points or keypoints are mathematically good features for the computation of optical flow, the LK method is usually applied for sparse estimation instead of computation at every pixel. With keypoints detected in the reference frame, usually using the Shi–Tomasi corner detector [80], the LK algorithm is applied to compute the image motion of each keypoint in the new frame from spatial–temporal image brightness variations,

$$ \left[ {\begin{array}{*{20}c} {\sum\nolimits_{i} {I_{xi}^{2} } } & {\sum\nolimits_{i} {I_{xi} I_{yi} } } \\ {\sum\nolimits_{i} {I_{xi} I_{yi} } } & {\sum\nolimits_{i} {I_{yi}^{2} } } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {dx} \\ {dy} \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} { - \sum\nolimits_{i} {I_{xi} I_{ti} } } \\ { - \sum\nolimits_{i} {I_{yi} I_{ti} } } \\ \end{array} } \right], $$
(7)

where \( dx \) and \( dy \) denote the optical flows in the horizontal and vertical directions of the image plane; \( I_{x} \), \( I_{y} \) and \( I_{t} \) represent the spatial and temporal gradients of image intensities; and \( i \) denotes the ith pixel location in a square patch (e.g. \( 3 \times 3 \)) around a feature point \( (x,y) \). The image motion is then estimated after discarding false motion estimates according to RANSAC or LMS, as with feature point matching.
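
A sketch of sparse LK tracking with OpenCV is given below, combining Shi–Tomasi corner detection with pyramidal Lucas–Kanade flow; file names, the target region and parameter values are assumptions, and the RANSAC/LMS outlier removal step is omitted for brevity.

```python
import cv2
import numpy as np

ref = cv2.imread('frame_ref.png', cv2.IMREAD_GRAYSCALE)
new = cv2.imread('frame_new.png', cv2.IMREAD_GRAYSCALE)

# Shi-Tomasi corners detected within the target region of the reference frame
mask = np.zeros_like(ref)
mask[240:340, 310:430] = 255                     # target region (assumed coordinates)
p0 = cv2.goodFeaturesToTrack(ref, maxCorners=100, qualityLevel=0.01,
                             minDistance=5, mask=mask)

# Pyramidal Lucas-Kanade optical flow from the reference frame to the new frame
p1, status, err = cv2.calcOpticalFlowPyrLK(ref, new, p0, None,
                                           winSize=(21, 21), maxLevel=3)

ok = status.ravel() == 1
flow = (p1[ok] - p0[ok]).reshape(-1, 2)          # per-keypoint image motion (pixels)
du, dv = flow.mean(axis=0)                       # average motion after discarding lost points
```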

Phase-based optical flow estimation is based on local phase constancy assumption. The method first proposed by Fleet and Jepson [79] is receiving new attention together with the motion magnification technique [81] which visualises motions in image sequences that are not visible to the naked eye. The mathematical details of phase-based optical flow estimation are explained in [23] and the algorithm is briefly summarised here.

The Fourier shift theorem indicates that a delay of a signal in the time domain corresponds to a linear phase variation in the frequency domain. Similarly, the image motion in spatial domain is also reflected in phase changes in spatial frequency domain. The phase here is the local phase [82] corresponding to a specific spatial location instead of the whole image, usually derived by employing a quadrature pair of filters consisting of an even real part and an odd imaginary part [83], i.e. Gabor filters [84] and Gaussian derivative filters [23] (demonstrated in Fig. 10). The image motion at each pixel is then estimated from the spatial–temporal variations of the local phase for the filtered image.
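
A much-simplified sketch of the core idea is given below, assuming a single quadrature pair of Gabor filters oriented along the image x-direction (file names and filter parameters are assumptions); practical implementations typically combine several filter scales and orientations.

```python
import cv2
import numpy as np

def local_phase(img, wavelength=8.0, sigma=4.0):
    """Local phase along the image x-direction from a quadrature pair of Gabor filters."""
    k_even = cv2.getGaborKernel((31, 31), sigma, 0.0, wavelength, 0.5, psi=0)            # even (cosine) part
    k_odd  = cv2.getGaborKernel((31, 31), sigma, 0.0, wavelength, 0.5, psi=-np.pi / 2)   # odd (sine) part
    even = cv2.filter2D(img.astype(np.float32), cv2.CV_32F, k_even)
    odd  = cv2.filter2D(img.astype(np.float32), cv2.CV_32F, k_odd)
    return np.arctan2(odd, even)

frame0 = cv2.imread('cable_ref.png', cv2.IMREAD_GRAYSCALE)   # assumed file names
frame1 = cv2.imread('cable_new.png', cv2.IMREAD_GRAYSCALE)

# Image motion estimate: temporal phase change divided by spatial phase gradient
phi0 = local_phase(frame0)
phi1 = local_phase(frame1)
dphi_dt = np.angle(np.exp(1j * (phi1 - phi0)))            # wrapped temporal phase difference
dphi_dx = np.gradient(np.unwrap(phi0, axis=1), axis=1)    # spatial phase gradient (x-direction)
motion_x = -dphi_dt / (dphi_dx + 1e-9)                    # per-pixel image motion (pixels)
```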

Fig. 10 Image after filtering by a quadrature pair of Gaussian derivative filters in the image width direction: a the real part of Gaussian derivative filters; b the imaginary part of Gaussian derivative filters; c the raw image of footbridge stay cables; d the real part of filtered image data; and e the imaginary part of filtered image data

3.2.3.2 Application review

LK optical flow estimation was applied in a laboratory test of a multi-storey metal tower [15] for system identification, and in field applications for deformation measurement of a footbridge [59] and bridge stay-cable vibration measurement [85, 86].

Implementations of phase-based optical flow estimation were mostly focused on system identification, i.e. extracting modal frequencies and mode shapes in laboratory tests [23, 87] and identifying modal frequencies of high-rise tower buildings [88].

3.2.3.3 Remarks

Optical flow estimation enables tracking of features on a structural surface without the requirement for artificial targets. It provides fast and accurate results in controlled environmental conditions.

Like feature point matching, optical flow estimation prefers target patterns with distinct and robust features over the whole test period. Edges are not suitable for tracking due to the ‘aperture problem’, i.e. only the motion component perpendicular to the local edge direction can be detected instead of the true motion of the edge. If the structural motion of an edge is a one-dimensional translation with known direction, e.g. bridge stay-cable vibration [85], optical flow estimation is viable.

Phase-based optical flow estimation is mostly applied for system identification in the laboratory but is harder to use in field conditions due to high signal noise [88]. Measurement of image motion is sensitive to the choice of pixel location [89], while a selection strategy to ensure satisfactory measurement has not yet been clearly reported. Changes of lighting and background conditions might lead to apparent measurement error [88].

3.2.4 Shape-based tracking

Besides the general techniques above, there are also target tracking methods that depend on the special shapes of target patterns, which could appear in custom-made artificial targets or in structural components (e.g. line-like cables). Table 7 provides a summary of commonly used target patterns. Lacking generality, these methods have limited applicability.

Table 7 Examples of shape-based tracking

3.2.5 Summary of target tracking performance

In terms of target tracking, the nominal algorithm resolution can be better than 0.01 pixel, while the reported accuracy in practice varies from 0.5 to 0.01 pixel [95]. Real-time processing was realised in [8, 49, 63] using the template matching method, in [16, 86] using the optical flow estimation method and in [13, 51,52,53] using shape-based tracking approaches. Although not reported in the existing applications, the feature point matching approach is also capable of real-time application [70]. Among the four tracking methods, template matching requires the least user intervention apart from the initial selection of the template region, while in the other three methods some threshold values that might be environmentally dependent are required as user inputs.

Ehrhart and Lienhart [64] evaluated the performance of three techniques (optical flow, template matching and feature point matching) by tracking structural features of a footbridge and reported that feature point matching is robust to changes in background conditions (i.e. snowfall), whereas drift over time was observed in the measurements by the other two methods. Busca et al. [96] evaluated three techniques (template matching, edge detection and digital image correlation) on a steel truss railway bridge, concluding that the three techniques provide similar tracking performance, while tracking accuracy is slightly poorer for natural targets. Khaloo and Lattanzi [97] investigated four optical flow estimation methods for dense displacement measurement. The study indicated that the classic + NL method (i.e. introducing a weighted non-local term into the classical optical flow formulation [77]) provided the most consistent and accurate measurement. However, coarse-to-fine schemes (i.e. building image pyramids for each frame and computing optical flow on each pyramid layer to relax the small-motion constraint) are necessary for the Lucas–Kanade and Horn–Schunck methods to deal with large displacements.

3.3 Structural displacement calculation

Structural displacement could be easily derived from the change of structural coordinates given the image location of a target (output of target tracking) and the projection transformation relationship (output of camera calibration). In this case, the projection transformation is a fixed value or matrix without any update during the test.

Another less common method to derive structural displacement is based on the variation of real-time camera extrinsic matrix. The camera extrinsic matrix represents the camera pose, i.e. position and orientation relative to the structural system. Since the camera is physically fixed during the recording, variation of camera extrinsic matrix is related to the change of target pose (position and orientation) and could be used to estimate the target motions in six degrees of freedom (6DOF).

3.3.1 Offline projection transformation

3.3.1.1 Principle

For single-camera applications using a scale factor or planar homography, the 2D structural coordinates/displacements are derived uniquely by transforming the target location/motion in the image to that in the structure via a projection transformation value or matrix.

When two or more cameras with overlapped views are used to monitor the same target, the 3D structural displacement can be extracted based on the triangulation method [46].
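
A sketch of the triangulation step with OpenCV is shown below, assuming the 3 × 4 projection matrices H1 and H2 of the two calibrated cameras are already available (e.g. from the calibration sketch in Sect. 3.1.1.1) and that the same target has been tracked in both synchronised views; the pixel coordinates are illustrative.

```python
import cv2
import numpy as np

# H1, H2: 3x4 projection matrices of the two calibrated cameras (assumed available)
# Tracked image locations of the same target in the two synchronised views
u1 = np.array([[432.7], [418.2]], np.float32)    # camera 1, pixels (assumed)
u2 = np.array([[610.4], [402.9]], np.float32)    # camera 2, pixels (assumed)

X_h = cv2.triangulatePoints(H1, H2, u1, u2)      # homogeneous 4x1 structural coordinates
X = (X_h[:3] / X_h[3]).ravel()                   # 3D structural coordinates of the target

# 3D displacement = difference between triangulated coordinates in current and reference frames
```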

3.3.1.2 Application review

Applications of scale factor and planar homography for 2D structural displacement measurement have been reviewed in Sect. 3.1.

For stereoscopic views or multiple cameras, the triangulation method was used in [24, 33, 98] for 3D structural displacement measurement. A multi-camera arrangement provides more reliable results than a single view, but the measurement quality depends strongly on the time synchronisation of the camera recordings.

3.3.2 Online pose estimation

3.3.2.1 Principle

For single-camera applications, a fixed projection transformation relationship only supports recovery of 2D structural displacement. Some researchers have tried to extract more information about target motion (up to 6DOF) using a single camera by updating the target pose in the structural system in real time.

Estimation of the camera extrinsic matrix is performed for every frame, and the 3D translational and rotational target motions are extracted from the changes of the camera extrinsic matrix relative to the initial frame. The calibration process requires at least four non-collinear points with known dimensions or spacing on the structure, and these points should move consistently as a rigid body.
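
A sketch of this procedure is given below, assuming a planar target with four corner points of known geometry, tracked image points for the reference and current frames (illustrative values) and the intrinsic parameters K and dist from laboratory calibration; the relative rotation and translation with respect to the reference frame give the 6DOF target motion.

```python
import cv2
import numpy as np

# Known geometry of four non-collinear target points (planar target, metres; assumed values)
X_t = np.array([[0.0, 0.0, 0.0], [0.3, 0.0, 0.0],
                [0.3, 0.3, 0.0], [0.0, 0.3, 0.0]], np.float32)

# Tracked image points of the target in the reference and current frames (assumed pixel values)
u_ref = np.array([[512.3, 388.1], [705.8, 390.2], [702.4, 580.6], [509.7, 577.9]], np.float32)
u_cur = np.array([[513.1, 389.0], [706.5, 391.1], [703.0, 581.4], [510.4, 578.8]], np.float32)

def target_pose(u_frame):
    """Camera-to-target pose for one frame (K, dist from laboratory calibration)."""
    _, rvec, tvec = cv2.solvePnP(X_t, u_frame, K, dist)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec

R0, t0 = target_pose(u_ref)    # reference (initial) frame
Rt, tt = target_pose(u_cur)    # current frame

# Rigid-body motion of the target relative to the reference frame,
# expressed in the reference target coordinate system (3 rotations + 3 translations)
R_rel = R0.T @ Rt
t_rel = R0.T @ (tt - t0)
rot_vec, _ = cv2.Rodrigues(R_rel)    # rotation vector (axis-angle, radians)
```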

3.3.2.2 Application review

Greenbaum et al. [99] applied the online pose estimation method for the laboratory 3D motion measurement of an oscillating rigid object with a few targets of known positions distributed on its surface. In field applications, Chang and Xiao [11] used a planar target with square grid patterns attached to a bridge surface for the measurement of 6DOF structural displacement while Martins et al. [14] tracked four non-coplanar LED targets together to reconstruct the 3D structural motion in a long span bridge.

3.3.2.3 Remarks

The greatest advantage of the method is the capacity to extract 6DOF structural motions from a single camera, but it places a high requirement on the tracked targets, which should consist of at least four non-collinear points with precisely known geometry. The target or set of target points should undergo rigid motion and be visible during the whole recording period, e.g. artificial planar targets with salient corner points [11], distributed target points on a structural surface [99] or a set of LED targets [14].

This technique cannot measure translation along the camera optical axis [11]; thus the camera should be positioned so that its optical axis does not align with any motion direction of interest.

The measurement accuracy of this method might be poorer than that of the offline projection transformation method. In a footbridge monitoring test by Chang and Xiao [11], a camera with a 36.4-mm focal length lens placed about 5.2 m from mid-span generated measurement noise with standard deviations of 0.76 and 1.09 mm in two horizontal directions. This is much larger than would be achieved by the offline projection transformation method in a similar test [100] (tracking 0.2 mm bridge vertical displacement with an 85-mm focal length lens and a 27-m camera-to-target distance). Therefore, this method is not recommended for field applications unless the target size is not negligible compared to the camera-to-target distance [96].

4 Field applications and challenges

This section summarises the existing field applications of vision-based systems in two active fields, bridge deformation measurement and cable vibration measurement. A discussion about measurement challenges in field applications is also presented.

4.1 Application examples

Video acquisition devices have now expanded to include smartphone cameras, with applications including vibration measurement of a laboratory multi-floor tower structure [15] and cable vibration measurement of a cable-stayed footbridge [21]. In these two applications, smartphones were used only as the data acquisition system, with the recorded video files post-processed for data extraction. Smartphone applications for real-time video acquisition and processing have also been shown to be viable through experimental validation [101].

The existing applications of vision-based systems in field tests involve the deformation measurement of a wide range of structural types, including high-rise buildings [7, 88], bridges [5, 6, 8, 10,11,12,13,14, 18, 20, 22, 30, 44, 45, 49, 51,52,53,54,55, 59, 63, 64, 75, 98, 102,103,104] and stadium structures [22, 105]. Work in the two most active fields, i.e. bridge deformation measurement and cable vibration measurement, is summarised in Tables 8 and 9, respectively.

Table 8 Review of studies about bridge displacement measurement using vision-based systems
Table 9 Review of studies about cable vibration measurement using vision-based systems

The viability of vision-based systems for bridge displacement measurement has been verified through comparison with traditional displacement sensors, e.g. LVDTs [10, 55, 103], laser sensors [55] and potentiometers [44] for short-span bridges and GPS [13, 30, 49] for long-span bridges. The displacement data can be interpreted for system identification [8, 11, 12, 49, 54, 55, 63, 75], evaluation of load-carrying capacity [53], model calibration [18] and estimation of vehicle weights [20]. Artificial targets are commonly used in existing applications to assist camera calibration, whereas recent investigations [51, 63, 75, 103, 104] overcome the dependence on artificial targets and realise completely non-contact sensing based on a simplified projection transformation, i.e. the scale factor.

Another promising application of vision-based systems is to estimate cable tension forces based on vibration measurement. Measurement accuracy has been verified through comparisons with traditional sensors, e.g. accelerometers [40, 85, 106], velocimeters [41] and load cells [105]. Vision-based systems require no access to cables [16, 30, 40, 85, 86, 90, 105, 106] and are capable of measuring the vibrations of multiple cables using a single camera [16, 86, 105, 106], which is comparable to an array of accelerometers.

4.2 Measurement accuracy and challenges

Measurement accuracy of vision-based systems depends on several parameters, e.g. camera-to-target distance, target pattern features, lighting conditions, camera mounting stability and video-processing methods. Khuc et al. [22] investigated the measurement accuracy of a vision-based system in a laboratory and suggested an accuracy of 0.04 mm at short range (< 14 m). Martins et al. [14] demonstrated the uncertainty evaluation of displacement measurement by a vision-based system in a long-span bridge monitoring test and reported a standard measurement accuracy of 1.7 mm in the vertical direction. The high noise level might limit the field application of vision-based systems for system identification of civil structures, even though high frame rates are available.

The achievable accuracy in field tests might be much poorer than that under controlled conditions. The authors investigated the field challenges through a series of monitoring tests on two short-span and two long-span bridges, which have been reported in [30]. A summary of the main findings from the tests and the literature is presented here.

  • Camera and support motion induced by wind [10] might lead to apparent measurement error. Apart from improving camera mounting configurations [30], a common correction method is to additionally track the ‘nominal’ motion of a fixed target, e.g. bridge towers or adjacent buildings. Recent work [97] indicates another promising approach for camera motion compensation, removing the averaged motion of background pixels based on dense optical flow estimation.

  • Variation in lighting and background conditions is one of the critical challenges during field tests. The influence of lighting variations might be reduced by enabling camera auto-exposure settings [30]. Correlation-based template matching is not robust to this effect except when testing during overcast weather, whilst the feature point matching method was reported to be less sensitive [64].

  • Atmospheric refraction and turbulence affecting light propagating through the air are common error sources for any optical instrument, especially for long-range measurements. Refraction deviation could be minimal for short-term displacement measurement, while air turbulence has a larger influence [52]. Quantification of the induced error based on mathematical models is demonstrated in a vision-based measurement test of a long-span bridge [107].

  • Observations from short-term tests (with durations of less than 12 h) show no apparent influence of temperature variations on measurement accuracy, but this effect must be considered for long-term tests, e.g. with durations of a few months or more. A time–frequency approach indicates the potential for error compensation based on correlation models linking measurement errors and temperatures [108].

5 Summary and prospects

As evidenced by the review, vision-based systems are promising tools for structural displacement measurement, with advantages in cost, installation effort and measurement capacity in terms of frequency range and spatial resolution. Although the potential for field applications has been validated in many articles, a few aspects still need to mature.

  • Robust target tracking methods. Template matching and optical flow estimation are established methods widely used in short-range and long-range measurement tests, but they are not robust to lighting and background changes. Feature point matching is a relatively new and promising tracking method, but investigations of several aspects, e.g. strategies for selecting proper threshold parameters, sensitivity to environmental effects and field viability for long-range measurement, are rare and need to be expanded. The most robust tracking method for dealing with changes in lighting conditions during field tests remains an open question.

  • Completely non-contact measurement. Artificial targets are commonly included to assist the camera calibration process, but dependence on artificial targets is eliminated in a few field applications [15, 22, 75, 104]. These studies were based on a simplified projection transformation, i.e. the scale factor, which is not a general approach and imposes constraints on camera positioning. To develop a non-contact vision-based system for the general case, the main obstacle is the requirement for control points with known locations, which could possibly be resolved with the assistance of surveying instruments such as a total station.

  • Distributed sensing of structural displacement. Vision-based systems have the capacity for simultaneous multi-point displacement measurement that is comparable or superior to an array of accelerometers for system identification. Currently, bridge applications primarily focus on the mid-span displacement measurement, while the potential of distributed sensing and system identification is not well investigated.

  • Measurement uncertainty evaluation. Measurement accuracy and uncertainty are of great importance for a mature measurement system. Quantified descriptions of measurement accuracy have been made in some references (e.g. [8, 22, 54, 64]) through comparisons with reference measurements. However, the quality of vision-based measurements can be time-varying and environmentally dependent, and can differ significantly between test configurations. The influential factors include the test configuration (e.g. the camera-to-target distance and the target features), the video-processing methods used and the environmental conditions (e.g. the lighting conditions, atmospheric refraction and turbulence). A systematic evaluation of vision-based measurement methodologies will require extensive experimental effort by the research community, with publication of case studies contributing to evolving guidance for field applications.