Abstract
A new approach to determine a multipoint deformation of the earth’s surface or objects upon it, represented by point fields measured in two epochs, is presented. The problem of determining which points have been deformed is not approached by testing point-by-point, but by formulating alternative hypotheses that test whether one, two or more subsets of points have been deformed, each subset in its own way. The method is based on the least squares connection adjustment, defines alternative hypotheses and searches for the best one by testing a large number of them. If the best hypothesis is found, a least squares estimation of the deformations is provided. The test results of the presented method are invariant under changes of the S-systems in which the point coordinates are defined. The results of a numerical test of the method applied to a simulated network are given. In designing a geodetic deformation network, minimal detectable deformations, belonging to likely deformation patterns, can be computed. The proposed method leads to a reconsideration of the duality of reference and object points. A comparison with the method of testing confidence ellipsoids is made. The relevance of the difference between geometric and physical interpretations of deformations and the consequences of the presented method for future developments are discussed.
Introduction
Deformation analysis is done in many fields of human activity, e.g. the production of gas and oil, civil and construction engineering, water management, industrial installations, and because of threats of natural phenomena such as landslides. An often applied method is to determine coordinates (one-, two- or three-dimensional) of points that are representative of the earth’s surface or the object that is or may be subject to deformation. This object can be a civil engineering work, a building, a dam, an industrial storage tank, part of the earth’s surface, etc. The object is represented by point coordinates. If the coordinates are acquired by geodetic means, the point field is called in this paper a geodetic network. The deformation analysis looks at the changes of the coordinates in the course of time.
Modern technology offers many possibilities to produce coordinates, e.g. total station measurements, levelling, GPS, terrestrial photogrammetry and 3D laser scanning (terrestrial or airborne). Also, hydrographic techniques, such as networks of transponders on the seafloor, can be considered here.
The approach of comparing coordinates is appropriate, as it is generally natural to describe deformations in terms of coordinates. Where in the past the number of acquired coordinates was often limited to a few tens or hundreds, modern techniques can deliver streams of coordinates almost continuously, both in time and in space.
An approach towards the geometric analysis of the deformation of a point field is presented that differs notably from conventional methods as described in Sect. 2.
The proposed method
The method proposed in this paper is not based on an analysis of displacement vectors, but on testing the results of connection adjustments, and it can test for several deformed points simultaneously. It is invariant under changes of the chosen S-systems (the reference systems of both the coordinates and their covariance matrix), also known as the geodetic datums. The concept of an S-system was introduced by Baarda (1973) and generalised by Teunissen (1985, p. 41).
The proposed method elaborates on the theory in Teunissen (2006b), Teunissen et al. (1987a) and Velsink (1998a) and applies it to the geometric analysis of a geodetic network that has been measured in two epochs. The least squares adjustment of the coordinates, resulting from the measurements at each epoch, and the detection, specification and quantification of existing deformations are treated. The application of the theory of testing multidimensional alternative hypotheses of Teunissen (2006b, p. 71ff.), an extension of the testing of one-dimensional alternative hypotheses of Baarda (1968b), is shown. The need to perform S-transformations during the testing process is avoided by inserting transformation parameters into the adjustment model.
The proposed method follows an approach of formulating alternative hypotheses that allows for complex hypotheses. By testing large numbers of multidimensional hypotheses, it is possible to find the points that have most likely been deformed, without the need for prior information about the deformations. The method is capable of giving least squares estimates of the deformations. Moreover, it can compute the minimal detectable deformations, i.e. the size of the deformations, specified by an alternative hypothesis, that can be found with a specified probability by testing the hypothesis. This makes it an important tool for designing a geodetic network for deformation analysis.
Overview of this paper
After a description of the conventional approaches in Sect. 2, the paper gives in Sect. 3 a review of the adjustment model and its solution for the connection of two epochs of coordinates of a geodetic network. Section 4 describes the theory of formulating one- and multidimensional alternative hypotheses and the way to test them. The least squares estimation of the deformation and the concept of a minimal detectable deformation are treated. Section 5 shows how an alternative hypothesis can be specified that describes the deformation of several points: the deformation of a partial point field. Also, the case of two or more partial point fields, each of which can have a different deformation, is treated. The connection adjustment of Sect. 3 and the test strategy of Sect. 5.2, based on the testing theory of Sect. 4, lie at the heart of the method proposed in this paper. An algorithm, designed to test the method, is described in Sect. 6. The results of the application of this algorithm to a simulated network are given. They show that the method is capable of detecting deformed partial point fields and estimating the size of the deformation. The method of this paper gives rise to a reconsideration of a few aspects of geodetic deformation analysis. They are considered in Sect. 7. Section 8 gives the conclusions of this paper.
Conventional approaches
If deformations are suspected in a geodetic network, but the exact points that have been affected are not known, the conventional analysis method is to determine coordinates of object points in two epochs in the same reference system (S-system), to compute the displacement vectors from these coordinates and to test them (Kamiński and Nowel 2013; Setan and Singh 2001; Welsch et al. 2000; Caspary 2000). The covariance matrix of the displacement vectors is computed as well. Each object point is tested consecutively by determining the 95% confidence ellipse in 2D or ellipsoid in 3D and determining whether the displacement vector lies outside this ellipse or ellipsoid (Koch 1985; Cederholm 2003). Such a test is in general not invariant under a change of S-system, as will be shown in Sect. 7.3. A solution can be sought in performing an S-transformation towards an S-system in which the lengths of the displacement vectors are minimised. The idea is that deformed points will show large displacement vectors most clearly in such an S-system. A possible S-system is the inner constraint solution (Baarda 1960; Pope 1971). Chen (1983) and Caspary and Borutta (1987) use so-called “robust” methods, e.g. minimising the \(L_1\)-norm of the displacement vector lengths, to find an optimal S-system.
Welsch et al. (2000) also describe a different approach. They build an adjustment model in which the observations of two epochs are combined and constraints are imposed on the point coordinates. The constraints state that coordinates of common points should coincide if no deformation has occurred. The quadratic form of the weighted estimated least squares residuals that result from the adjustment is tested. If this test fails, the quadratic form is analysed to determine which points cause the failure. To this end, a decomposition of the quadratic form is performed using Gauss, Cholesky or spectral decomposition.
Typical for all these methods is the search for deformed points one-by-one. Because more than one point can be subject to deformation, Welsch et al. (2000, pp. 395–397) and Niemeier (2008, pp. 446–450) describe a strategy of “backward” and “forward” searching for deformed points. “Backward” means removing points that are suspected of being deformed and “forward” means adding points again that were formerly removed. Koch (1985) describes a similar approach.
The method proposed in this paper provides a test for a deformation pattern of several deformed points that is more powerful (in the sense of a “most powerful test” as defined by Teunissen 2006b, p. 62) than the mentioned conventional test strategies. It is invariant under a change of the S-systems in which the point coordinates are defined. The need for such an invariance is mentioned in the literature (Koch 1985), but it is not present in the described conventional methods.
Review of the connection adjustment of two epochs of a geodetic network
The method proposed in this paper performs a connection adjustment of the adjustment results of two epochs of geodetic measurements. The deformation analysis is done with the results of this connection adjustment.
In this section, the adjustment model of the connection adjustment is reviewed. The theory can also be found in Teunissen (1985), Teunissen et al. (1987b) and Velsink (1998a).
Three partial point fields and two reference systems
Consider a point field of several or even many points, where geodetic measurements have been made with the intention to determine one-, two- or three-dimensional coordinates of all points. It is assumed that a least squares adjustment of the measurements has been performed, resulting in a set of coordinates of the point field with its covariance matrix. At a later moment, the second epoch, the measurements are repeated, and so is the adjustment, resulting again in coordinates and their covariance matrix. Let all coordinates of epoch 1 be collected in a vector a. In a two-dimensional plane, for instance, the column vector a of Cartesian coordinates is written as \(a = (x_1^a \; y_1^a \; x_2^a \; y_2^a \; \cdots \; x_{n_a}^a \; y_{n_a}^a)^*\),
where \(x_1^a\) and \(y_1^a\) are the x and y coordinates of point 1, etc., and \(n_a\) is the number of points in vector a. The asterisk \(^*\) indicates the transpose of the vector. In the same way, all coordinates of epoch 2 are collected in a vector b. Now, vector a is partitioned into two parts: part 1 contains all coordinates of points that have no coordinates in vector b (point field 1) and part 2 contains the coordinates of all connection points (point field 2), i.e. the coordinates of those points that also have coordinates in vector b. Vector b is divided in the same way into two subvectors \(b_2\) and \(b_3\). Subvector \(b_2\) contains the coordinates of the connection points, subvector \(b_3\) the coordinates of the points that have no coordinates in vector a (point field 3), see Fig. 1.
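The partitioning described above can be illustrated with a minimal numerical sketch; the coordinates, point numbers and variable names below are hypothetical and serve only to show the bookkeeping:

```python
import numpy as np

# Hypothetical 2D example: epoch-1 vector a holds points 1..4, epoch-2
# vector b holds points 3..6; points 3 and 4 are the connection points.
a = np.array([10.0, 20.0,   # point 1: x, y  (point field 1)
              15.0, 25.0,   # point 2: x, y  (point field 1)
              30.0, 40.0,   # point 3: x, y  (connection, field 2)
              35.0, 45.0])  # point 4: x, y  (connection, field 2)
b = np.array([30.1, 40.1,   # point 3: x, y  (connection, field 2)
              35.1, 45.1,   # point 4: x, y  (connection, field 2)
              50.0, 60.0,   # point 5: x, y  (point field 3)
              55.0, 65.0])  # point 6: x, y  (point field 3)

a_1, a_2 = a[:4], a[4:]     # point field 1 and connection points in a
b_2, b_3 = b[:4], b[4:]     # connection points and point field 3 in b
n_a = a.size // 2           # number of points in vector a
```

Only \(a_2\) and \(b_2\) carry the information used by the connection adjustment; \(a_1\) and \(b_3\) are merely carried along.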
Only the coordinates in \(a_2\) and \(b_2\), the coordinates of the connection points, give us information by which we can perform a connection adjustment. The adjustment, however, influences all coordinates: \(a_1\), \(a_2\), \(b_2\) and \(b_3\).
Vectors a and b are supposed to be random vectors with a normal distribution, described by covariance matrices. The covariance matrix of a is indicated by \(D_a\). It can be split into a scalar variance factor \(\sigma ^2\) and a cofactor matrix \(Q_a\) as \(D_a = \sigma ^2 Q_a\). In the same way, we have \(D_b = \sigma ^2 Q_b\). The variance factor is seen as a convenient way to get cofactor matrices with elements that are neither too large nor too small. The same variance factor is taken for a and b.
A vector c contains the coordinates of all points as unknown parameters, to be estimated in the least squares adjustment. Vector c is divided into three subvectors \(c_1\), \(c_2\) and \(c_3\), in accordance with the three point fields 1, 2 and 3.
It is assumed that c is defined in the same reference system (S-system) as a. Assume that b is defined in a reference system (S-system) that differs from the reference system of a by a similarity transformation or a congruence transformation. In 2D, a similarity transformation has four degrees of freedom: two translations in the directions of the x and y axes, a rotation and a change of scale. In 3D, there are seven degrees of freedom: three translations, three rotations and a change of scale. In 1D, there are two degrees of freedom: a translation and a change of scale. A congruence transformation has in all dimensions one degree of freedom fewer: the change of scale is missing.
To determine the coordinates in vector a, geodetic measurements have been performed that yield more, fewer or the same number of degrees of freedom in the resulting coordinates as the measurements that resulted in vector b. It should be noted that only the information that is common to both vectors can influence the adjustment. The degrees of freedom of the transformation should encompass all degrees of freedom that both a and b have (Teunissen 1985, p. 70).
Linearised adjustment model and its solution
Nonlinear model and its linearisation
The functional relationship between a, b and c is given by the following equation:
where \(e_{a_1}, e_{a_2}, e_{b_2}, e_{b_3}\) are random errors with a mathematical expectation of zero, f is the vector of transformation parameters from the reference system of b to that of a and t(.) is the nonlinear function describing the transformation.
Applying a nonlinear adjustment to this model is generally not easy, so it is linearised.
To get simpler equations, vector b is first transformed approximately to a vector \(b^{'}\) in such a way that the elements of \(b^{'}_2\) approximate those of \(a_2\):
where \(b^{'}_2 \approx a_2\).
Equation (2) is now written as:
where the transformation t and the transformation parameters f now cause only small changes in the coordinates.
Linearisation of function t yields the following linearised equation:
where I is the unit matrix, 0 is the matrix of zeros, \(E_2, E_3\) are matrices that are examined more closely below, \(\Delta \) is the difference of a vector and its vector of approximate values, and \(e_{a_1}, \cdots , e_{b^{'}_3}\) are random errors. For \(a_1, a_2, b^{'}_2, \cdots , c_2, c_3, f\) see Sect. 3.1.
As approximate values are taken:
For f, approximate values are taken that leave the coordinates unchanged.
Matrix of coefficients of the transformation
The matrices \(E_2\) and \(E_3\) are linearised coefficient matrices of the transformation from the reference system of b to that of a. As described in Sect. 3.1, it is assumed that the transformation is a similarity or a congruence transformation, which means that \(E_2\) and \(E_3\) result from the linearisation of these transformations.
If, for instance, the transformation is a four-parameter similarity transformation in a two-dimensional plane (change of scale, change of orientation, translation along the x axis, translation along the y axis), the matrices \(E_2\) and \(E_3\) have the following structure (Velsink 1998a, p. 60):
where \(x_i^0\) and \(y_i^0\) are approximate coordinates of point i. The first column of E pertains to the change of scale, the second to the change of orientation, the third and fourth to the translation along, respectively, the x and y axis. For each point i, the first row concerns the x coordinate and the second row the y coordinate.
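The structure just described can be sketched in code as a function that builds the linearised coefficient matrix of a 2D four-parameter similarity transformation from approximate coordinates. The function name, the sign convention of the orientation column and the example coordinates are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def e_matrix_2d_similarity(approx_xy):
    """Linearised coefficient matrix of a 2D similarity transformation.

    approx_xy: (n, 2) array of approximate coordinates (x_i^0, y_i^0).
    Columns: change of scale, change of orientation, translation along
    the x axis, translation along the y axis. Per point, the first row
    concerns the x coordinate and the second row the y coordinate.
    The sign of the orientation column is one common convention.
    """
    n = approx_xy.shape[0]
    E = np.zeros((2 * n, 4))
    for i, (x0, y0) in enumerate(approx_xy):
        E[2 * i]     = [x0, -y0, 1.0, 0.0]   # x-coordinate row of point i
        E[2 * i + 1] = [y0,  x0, 0.0, 1.0]   # y-coordinate row of point i
    return E

# Two hypothetical connection points:
E_2 = e_matrix_2d_similarity(np.array([[30.0, 40.0], [35.0, 45.0]]))
```

\(E_3\) would be built in exactly the same way from the approximate coordinates of point field 3.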
Reduced linearised adjustment model
In the following, the deltas and the primes of \(b^{'}\) will be dropped; e.g. \(\Delta a_1\), \(\Delta b^{'}_2\) and \(\Delta c_3\) will be indicated as \(a_1\), \(b_2\) and \(c_3\), respectively. From Eq. (5), the rows concerning vector \(a_1\) and vector \(b_3\) are omitted, because they give no redundant information and do not influence the adjustment results. Equation (5) becomes:
Subtracting the second row from the first and putting \(d = a_2 - b_2\), with its random error vector \(e_d\) and its cofactor matrix \(Q_d\), gives:
where \(Q_{a_2}\) and \(Q_{b_2}\) are the cofactor matrices of \(a_2\) and \(b_2\), respectively. The stochastic vectors a and b (and therefore also \(a_2\) and \(b_2\)) are assumed to be mutually uncorrelated. \(Q_{a_2}\) and \(Q_{b_2}\), however, can be full matrices.
The vector d contains \(2 n_{a_2}\) elements. The vector of transformation parameters f contains p elements, where p is 2, 4 or 7 in 1D, 2D and 3D, respectively, in case of a similarity transformation. For a congruence transformation, p is 1, 3 or 6. The redundancy is therefore \(2 n_{a_2} - p\).
Adjustment
The model consisting of Eqs. (8) and (9) can be adjusted according to the method of least squares. The result is:
where \(\hat{f}\) contains the adjusted transformation parameters; \(\hat{d}\) the adjusted values of d, and \(\hat{e}_d\) the estimated values of \(e_d\).
Because the model is a linearised one, iteration of the computation is necessary until a certain iteration criterion is met.
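The least squares solution of the reduced model can be sketched as follows, assuming a regular \(Q_d\); in the linearised model this computation would be iterated as noted above. Function and variable names are illustrative:

```python
import numpy as np

# Minimal least squares sketch of the reduced model d = E_2 f + e_d,
# with weight matrix inv(Q_d); names are illustrative, not from the paper.
def connect(E2, Qd, d):
    W = np.linalg.inv(Qd)                       # weight matrix
    N = E2.T @ W @ E2                           # normal matrix
    f_hat = np.linalg.solve(N, E2.T @ W @ d)    # adjusted parameters
    d_hat = E2 @ f_hat                          # adjusted observations
    e_hat = d - d_hat                           # estimated residuals
    return f_hat, d_hat, e_hat

# Toy example (hypothetical numbers): two connection points in 2D and a
# pure-translation transformation (p = 2), so E_2 has columns for the
# x and y translations only.
E2 = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
Qd = np.eye(4)
d = np.array([0.1, 0.2, 0.3, 0.2])
f_hat, d_hat, e_hat = connect(E2, Qd, d)
```

With equal weights, the estimated translations are simply the means of the x and y coordinate differences, and the residuals are what remains after subtracting them.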
Adjusted coordinates for all partial point fields
From the adjusted vector \(\hat{d}\), the estimates \(\hat{c}_1\), \(\hat{c}_2\) and \(\hat{c}_3\) can be calculated. The equations to use follow from the equations to estimate the random errors of free variates [Teunissen 2006a, p. 76, equation (14)] and can be written as (Velsink 1998a, p. 79):
As can be seen, \(\hat{c}_2\) can be calculated along two paths. In practical calculations, one path is used. The other one may serve as a check.
The solution of the datum problem
In the previous section, it is shown that a solution for vector c in Eq. (2) can be found via the following steps:

1. Perform an approximate transformation on b in such a way that the transformed vector \(b^{'}_2\) is almost the same as vector \(a_2\).
2. Calculate the adjusted coordinates \(\hat{c}_1\), \(\hat{c}_2\) and \(\hat{c}_3\) with Eqs. (8)–(13).
In performing these steps, a problem occurs if the matrix \(Q_d\) of Eq. (9) is singular and the inverse of \(Q_d\) in Eqs. (10) and (13) cannot be calculated. This may occur, for instance, if both vectors \(a_2\) and \(b_2\) are defined in the same S-system and \(Q_{a_2}\) and \(Q_{b_2}\) are both singular matrices. The singularity of \(Q_d\) is related to the S-systems in which \(a_2\) and \(b_2\) are defined and can therefore be resolved by performing S-transformations, i.e. by changing the geodetic datums. Generally, this is done by having the datum defined by some or all stable points, thus making it possible to test the other points for deformation. This means that a change of datum is necessary if a datum point is detected as being influenced by deformation. This is done by S-transformations (van Mierlo 1978) or generalised inverses (Koch 1985). Another solution is, however, possible.
Matrix \(Q_d\) is calculated from Eq. (9). In Teunissen et al. (1987b, p. 231), it is proven that the same solution of Eq. (13) is arrived at if \(Q_d\) of Eq. (9) is replaced by the regular matrix \(Q_{d^\prime }\):
where \(Q_t\) is any positive definite matrix with the right dimensions, for example, the unit matrix. Changing matrix \(Q_d\) into the regular matrix \(Q_{d^\prime }\) is called the regularisation of \(Q_d\). Almost the same equation is given in Teunissen (1985, Eq. (3.2.14.a)), where the product \(E_2 E_2^*\) is used.
Schaffrin (1975, p. 27) shows that for any adjustment problem, formulated with observation equations, any symmetric positive semidefinite generalised inverse of \(Q_d + k^2 E_2 E_2^*\), with \(k\ne 0\) an arbitrary real scalar, can be used as weight matrix of the observations to arrive at the least squares solution. Because in our case \(Q_d + k^2 E_2 E_2^*\) is a regular matrix, the ordinary (Cayley) inverse can be used. From the proof of Schaffrin (1975, p. 28), it is clear that instead of \(Q_d + k^2 E_2 E_2^*\) also \(Q_{d^\prime } = Q_d + E_2 Q_t E_2^*\) can be taken (the proof requires that \(R(E_2) \subset R(Q_{d^\prime })\); this is true, because \(Q_{d^\prime }\) has full rank and so \(R(Q_{d^\prime }) = \mathbb {R}^m\)).
If \(Q_d\) has a rank defect, i.e. \(\text {rank}(Q_d)<n_{a_2}\), with \(n_{a_2}\) the dimension of \(Q_d\), the rank of \(Q_{d^\prime }\) is larger than that of \(Q_d\), because \(Q_{d^\prime }\) has full rank \(n_{a_2}\). The regularisation of \(Q_d\) can therefore be interpreted as moving the datum outside the point field under consideration and it is unnecessary to perform datum transformations: the datum problem is solved by the transformation that is implicit in the adjustment model (Teunissen 1985, p. 75).
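The regularisation of Eq. (14) can be sketched numerically. \(Q_t\) is taken as the unit matrix, and the 1D toy example with a singular \(Q_d\) is hypothetical:

```python
import numpy as np

# Sketch of the regularisation Q_d' = Q_d + E_2 Q_t E_2^* of Eq. (14),
# with Q_t taken as the unit matrix; the numbers are illustrative.
def regularise(Qd, E2, Qt=None):
    if Qt is None:
        Qt = np.eye(E2.shape[1])   # any positive definite Q_t is allowed
    return Qd + E2 @ Qt @ E2.T

# 1D toy case: two connection coordinates, a translation-only E_2 and a
# singular Q_d (rank 1), as can occur when both epochs share the datum.
E2 = np.array([[1.0], [1.0]])
Qd = np.array([[1.0, -1.0], [-1.0, 1.0]])
Qdp = regularise(Qd, E2)
```

Here \(Q_{d^\prime}\) has full rank, so its ordinary inverse exists and the adjustment can proceed without any S-transformation.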
Testing theory applied to deformation analysis
The way to determine the deformation of an object is to represent the object by a point field and to determine the changes in position, size and form of this point field. When looking at just one point, it is tempting to calculate the differences in coordinates of this one point at two different epochs and to see these differences as the deformation. This is common practice in many deformation analyses, see, for example, the product specification for performing and analysing deformation measurements of civil engineering works in the Netherlands (Rijkswaterstaat 2012).
It is, however, better to perform a connection between the coordinate set of the first epoch and that of the second one. In Sect. 3, the equations are given to fuse both coordinate sets into one vector c by means of the method of least squares. In that case, it is assumed that both coordinate sets describe the same point field, without any deformation. Whether a deformation has taken place should be tested by one or more appropriate statistical tests. If a statistical test leads to rejection, i.e. a deformation is present, the corresponding equations for estimating errors give estimates of the size of the deformation. These equations are given in Sect. 4.2.
Before any deformation measurement is done, it is possible to assess the smallest deformations that can be discovered with a certain probability by means of the designed deformation network. Section 4.3 treats the necessary statistical quantities.
Detection and specification of a deformation
Performing a statistical test on the connection of two coordinate sets and concluding that both coordinate sets describe the same point field differently, i.e. that a deformation has occurred, constitutes the detection phase of the analysis. Closely related to the detection is the specification of the deformation, described by an alternative hypothesis.
Detection of blunders
Before any detection of deformations can be done, both a and b, and therefore d, have to be free of blunders. If the checking for blunders has not been done well, the remaining blunders will lead to wrong conclusions regarding the deformations. A careful analysis of the external reliability (Baarda 1968a, p. 68; van Mierlo 1978, p. 339) of the models by which a and b were acquired is necessary to assess the influence of possible remaining blunders.
Null and alternative hypothesis
The detection of a deformation can be done by performing \(\chi ^2\)-tests on the results of the least squares adjustment. The equations given here are based on Velsink (1998b), which in turn are based on the first student edition of Teunissen (2006b).
The adjustment model for the connection of both coordinate sets is given by Eqs. (8) and (9). It is written as a linear model of observation equations, where d is the vector of observations and \(Q_d\) its cofactor matrix. The matrix \(E_2\) is the matrix of coefficients, f the vector of unknowns and \(e_d\) the vector of random errors.
Solving the model of observation equations by means of the method of least squares gives Eqs. (10), (11) and (12). Testing this solution by means of \(\chi ^2\)-tests is done (Teunissen 2006b, p. 78) by considering the model of Eqs. (8) and (9) as a null hypothesis and opposing it to an alternative hypothesis, defined by a specification matrix C and a vector of unknowns \(\nabla \) (pronounced as “nabla”):
where the random errors are supposed to be normally distributed, described by the cofactor matrix \(Q_d\), and to have an expected value of zero, i.e.: \(E \{ e_d \} = 0\).
Examples of the specification matrix C
As an example, take a two-dimensional plane point field, of which the x and y coordinates have been determined in two epochs. Suppose that all points except points 4 and 5 stayed at the same position. Points 4 and 5, however, have shifted together in the same, but unknown, direction by the same, but unknown, amount. The vector \(\nabla \) now consists of two elements, a shift in the x direction and a shift in the y direction. The matrix C has two columns and twice as many rows as there are points:
where \(\nabla d_{x_i}\) and \(\nabla d_{y_i}\) are the shifts in the x and y direction of point i and \(\nabla _x\) and \(\nabla _y\) the sizes of the shift in both directions. The elements of C that are represented by dots are all zero.
If points 4 and 5 have shifted independently of each other, the vector \(\nabla \) has four elements and we have:
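Both specification matrices of this section can be generated programmatically; a sketch with illustrative function names (points are numbered from 1, as in the example):

```python
import numpy as np

def c_joint_shift(n_points, shifted, dim=2):
    """C for a common, unknown shift of the listed points (q = dim)."""
    C = np.zeros((dim * n_points, dim))
    for p in shifted:                      # 1-based point numbers
        C[dim * (p - 1):dim * p, :] = np.eye(dim)
    return C

def c_independent_shifts(n_points, shifted, dim=2):
    """C for independent shifts: dim columns per shifted point."""
    C = np.zeros((dim * n_points, dim * len(shifted)))
    for k, p in enumerate(shifted):
        C[dim * (p - 1):dim * p, dim * k:dim * (k + 1)] = np.eye(dim)
    return C

# Five points in 2D, points 4 and 5 suspected of deformation:
C_joint = c_joint_shift(5, [4, 5])         # common shift, q = 2
C_indep = c_independent_shifts(5, [4, 5])  # independent shifts, q = 4
```

The same builders work in 1D or 3D by changing `dim`, which is how the point tests of Sect. 4.1.6 can be generated as well.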
Testing the alternative hypothesis
To test the alternative hypothesis against the null hypothesis, the test statistic \(V_q\) is calculated, where q indicates the number of elements of \(\nabla \) (and the rank of C) (Velsink 1998b, p. (3)32):
where the random reciprocal errors \(\hat{r}\) and their cofactor matrix \(Q_{\hat{r}}\) are calculated as:
where \({\hat{e}}_d\) is calculated with Eq. (12) and \(Q_d\) with Eq. (9). The cofactor matrix \(Q_{{\hat{e}}_d}\) is calculated as follows:
with \(E_2\) from Eq. (5) and \(Q_{\hat{d}}\) the cofactor matrix of \(\hat{d}\).
The test statistic \(V_q\) is distributed according to the \(\chi ^2\)-distribution and can therefore be tested by comparing it with a critical value, calculated from a chosen level of significance. It is the uniformly most powerful invariant test (Teunissen 2006b, pp. 69, 78) of the alternative hypothesis against the null hypothesis according to (15). We get the same result if \(V_q\) is divided by q and compared with a critical value computed from the F-distribution. The test is as follows:
with \(\sigma ^2\) the variance factor (variance of unit weight). It is assumed here that the variance factor is known and is not estimated from the adjustment results.
In Sect. 5.2, this test is adapted to take account of the situation where alternative hypotheses of different dimension q are to be compared with each other.
S-system invariance of the test
Test (22) is invariant under changes of the S-systems relative to which vectors a and b are defined. This is proven as follows. In Teunissen et al. (1987b, pp. 231, 232), it is proven that \(\hat{r}\) is invariant under changes of the S-systems in which \(a_2\) and \(b_2\) are defined (hereafter called: \(\hat{r}\) is S-system invariant). Because the proof in Teunissen et al. (1987b) is in Dutch, it is repeated here in English, adapted to the reasoning and formulation of this paper. Subsequently, it is proven that \(Q_{\hat{r}}\) is also S-system invariant. Because \(\hat{r}\) and \(Q_{\hat{r}}\) are S-system invariant, \(V_q\) of Eq. (18) and therefore test (22) are S-system invariant as well.
Lemma 1
\(\hat{r}\) is S-system invariant
Proof
Model (8) includes a vector of transformation parameters f that describes, according to Sects. 3.1 and 3.2, the transformation from the reference system of b to the reference system of a by a similarity or congruence transformation. Suppose that a or b or both are defined in different S-systems, i.e. a and b are replaced by:
where \(E_a\) and \(E_b\) are the linearised coefficient matrices of the similarity or congruence transformation as described in Sect. 3.2 and \(\Delta f_a\) and \(\Delta f_b\) are the vectors of transformation parameters. Considering only the coordinates \(a_2\) and \(b_2\) of the connection points and switching to vector d of Eq. (8), we have:
with \(\Delta f = \Delta f_a - \Delta f_b\). For the cofactor matrix \(Q_{d^\prime }\) of \(d^\prime \), we get:
where \(Q_{d,f}\) is the cofactor matrix between d and \(\Delta f\), \(Q_{f,d}\) its transpose and \(Q_f\) the cofactor matrix of \(\Delta f\).
Model (8), (9) is formulated as a system of observation equations. It is now reformulated as the associated system of condition equations (which yields the same least squares solution). First, the full rank \(2n_{a_2}\times (2n_{a_2} - p)\) matrix \(E_2^\bot \) is introduced as the matrix defined by:
Premultiplying Eq. (8) on both sides with \(E_2^{\bot *}\), we get
where \(t = E_2^{\bot *} e_d\) is the vector of misclosures with \(E\{t\}=0\). Equation (28) is a system of condition equations. Solving it by the method of least squares yields the following equation:
where it should be noted that in solving the model of condition equations first \(\hat{r}\) is calculated and then \(\hat{e}_d\) with:
which means that \(\hat{r}\) can be calculated without using the inverse of \(Q_d\).
If d is replaced in Eq. (29) by \(d^\prime \) and \(Q_d\) by \(Q_{d^\prime }\), we get the changed \(\hat{r}^\prime \), caused by the transition of a and b to other S-systems:
Substituting Eqs. (25) and (26) into Eq. (31), we see that the second term of (25) and the last three terms of (26) lead to terms in (31) that are zero, because \( E_2^* E_2^{\bot } = 0\) and also \( E_2^{\bot *}E_2 = 0\) and therefore:
This means that \(\hat{r}\) remains unchanged and is S-system invariant. \(\square \)
Lemma 2
\(Q_{\hat{r}}\) is S-system invariant
Proof
The cofactor matrix \(Q_{\hat{r}}\) is obtained by applying the law of propagation of cofactors to Eq. (29):
Switching to other S-systems for a and b means replacing \(Q_d\) in Eq. (33) with \(Q_{d^\prime }\) of Eq. (26). Elaborating this shows that the last three terms of Eq. (26) lead to terms in Eq. (33) that are zero, because \( E_2^* E_2^{\bot } = 0\) and also \( E_2^{\bot *}E_2 = 0\). Therefore, \(Q_{\hat{r}}\) remains unchanged and is S-system invariant. \(\square \)
Invariance under regularisation of \(Q_d\)
In Sect. 3.3, it is shown how regularisation of \(Q_d\) can solve the datum problem. Therefore, it is important to prove the following lemma.
Lemma 3
Test (22) is invariant under regularisation of \(Q_d\).
Proof
The regularisation of \(Q_d\) is done with Eq. (14), which is repeated here:
Substituting (34) into Eq. (29) shows that the second term of (34) is postmultiplied with \(E_2^\bot \), which yields a term of zero, because \(E_2^* E_2^\bot =0\). Therefore, \(\hat{r}\) is invariant under the replacement of \(Q_d\) with \(Q_{d^\prime }\).
Equation (33) yields the cofactor matrix \(Q_{\hat{r}}\). Replacing \(Q_d\) in this equation with \(Q_{d^\prime }\) does not change \(Q_{\hat{r}}\) for the same reason it did not change \(\hat{r}\). Therefore, also \(Q_{\hat{r}}\) is invariant under the replacement of \(Q_d\) with \(Q_{d^\prime }\).
The conclusion is that both \(\hat{r}\) and \(Q_{\hat{r}}\) are invariant under regularisation of \(Q_d\), which means that also \(V_q\) of Eq. (18) and therefore test (22) are invariant under regularisation of \(Q_d\). \(\square \)
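Lemma 3 can also be checked numerically: computing \(\hat{r}\) via the condition-equation form gives the same result for \(Q_d\) and for the regularised \(Q_{d^\prime} = Q_d + E_2 E_2^*\), because \(E_2^* E_2^\bot = 0\). The toy matrices below are randomly generated and purely illustrative:

```python
import numpy as np

# Numerical check of invariance under regularisation (a toy example):
# r_hat = E2_perp (E2_perp^* Q E2_perp)^{-1} E2_perp^* d is unchanged
# when Q_d is replaced by Q_d' = Q_d + E_2 E_2^*.
rng = np.random.default_rng(1)
E2 = rng.standard_normal((6, 2))          # coefficient matrix, p = 2
d = rng.standard_normal(6)                # toy observation vector

# Orthogonal complement E2_perp with E2^* E2_perp = 0 (via SVD).
U, _, _ = np.linalg.svd(E2)
E2p = U[:, 2:]                            # 6 x 4, full column rank

A = rng.standard_normal((6, 6))
Qd = A @ A.T                              # a regular Q_d, for simplicity
Qdp = Qd + E2 @ E2.T                      # regularised Q_d'

def r_hat(Q):
    return E2p @ np.linalg.solve(E2p.T @ Q @ E2p, E2p.T @ d)

invariant = np.allclose(r_hat(Qd), r_hat(Qdp))
```

Since \(E_2^{\bot *} E_2 E_2^* E_2^\bot = 0\), the two computations agree to machine precision.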
Types of alternative hypotheses
There are many alternative hypotheses, each defined by its own specification matrix C, that can be tested by the test statistic \(V_q\). The number of elements of vector \(\nabla \) (and so the number of independent errors that can be tested for) is q, and it is limited by the redundancy of the adjustment model (Eqs. 8, 9). In this adjustment model, 2, 4 or 7 transformation parameters (in a one-, two- or three-dimensional connection problem, respectively) are determined by at least as many coordinate differences. The excess coordinate differences determine the redundancy. So the redundancy \(\rho \) is equal to the difference between the number of rows of the matrix \(E_2\) and its number of columns.
The number of elements of vector \(\nabla \) is q. It cannot exceed the redundancy \(\rho \), so:
Overall model test
If \(q = \rho \) the test statistic \(V_q\), now indicated as \(V_\rho \), can be simplified to (Velsink 1998b, p. (3)44):
Because the matrix C is eliminated from the equation, any full-rank matrix C with \(\rho \) columns, whose range space is complementary to the range space of \(E_2\), will give the same test result when used in Eq. (18). This test is therefore a general test of the correctness of the null hypothesis and is called the overall model test.
w-test
If \(q = 1\), the test statistic \(V_q\), written as \(V_1\), can be transformed into the well-known w-test statistic (Baarda 1968b, p. 14). The matrix C is now a vector, indicated by a lowercase letter c. Equation (18) can in this case be written as:
The test statistic w is defined by:
Therefore, the relation between \(V_1\) and w is:
The test statistic w has a standard normal distribution and therefore its expectation is 0 and its standard deviation 1.
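As a numerical check of this relation, the sketch below verifies that \(V_1 = w^2\). It assumes the common quadratic forms \(V_1 = (c^* \hat{r})^2 / (\sigma ^2\, c^* Q_{\hat{r}} c)\) and \(w = c^* \hat{r} / (\sigma \sqrt{c^* Q_{\hat{r}} c})\), which may differ in notational detail from Eqs. (18) and (38); all numbers are randomly generated and hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 6
# hypothetical residual-type vector r-hat and a positive definite cofactor matrix
r_hat = rng.normal(size=n)
A = rng.normal(size=(n, n))
Q_r = A @ A.T + n * np.eye(n)   # force positive definiteness
sigma = 0.01                    # square root of the variance factor

c = rng.normal(size=n)          # specification vector c for a 1-dimensional test

# w-test statistic in its common form
w = (c @ r_hat) / (sigma * np.sqrt(c @ Q_r @ c))

# 1-dimensional quadratic test statistic V_1
V1 = (c @ r_hat) ** 2 / (c @ Q_r @ c) / sigma ** 2

assert np.isclose(V1, w ** 2)
```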
Data snooping and point test
If, in performing the w-test, the vector c in Eq. (38) is defined as:
the test is whether one point has changed in one coordinate direction only, while all other points have not changed position. Testing all coordinates successively with (22), (37) and (40) is called data snooping (Baarda 1968b, p. 30). In a 1D connection of point fields, this is a realistic test. In a 2D or 3D connection, it can be used to check, for example, for an input error. To test for a deformation in 2D and 3D, however, it is less useful, because a deformation generally affects two or three coordinates of one point simultaneously. A more logical test is then a test with \(q = 2\) or \(q = 3\) and a C-matrix that looks, for example in 3D, as follows, with the dots indicating zeros:
Such a test is called a point test.
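Such a specification matrix is easily constructed programmatically. The sketch below assumes the coordinates are ordered per point (x, y, z, x, y, z, ...); this ordering is an assumption for illustration, not taken from the paper.

```python
import numpy as np

def point_test_C(n_points: int, point_index: int, dim: int = 3) -> np.ndarray:
    """Specification matrix C for a point test: the tested point may be
    deformed in all `dim` coordinate directions, all other points are fixed.
    Coordinates are assumed to be ordered per point (x, y, z, x, y, z, ...)."""
    C = np.zeros((dim * n_points, dim))
    C[dim * point_index : dim * (point_index + 1), :] = np.eye(dim)
    return C

# point test for the third point of a 5-point 3D field: q = 3
C = point_test_C(n_points=5, point_index=2)
assert C.shape == (15, 3)
assert np.linalg.matrix_rank(C) == 3   # full column rank, q = 3
```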
Tests with \(1 < q < \rho \)
The overall model test with \(V_\rho \) of Eq. (36) is called a \(\rho \)-dimensional test and the w-test with the test statistic of Eq. (38), where \(q = 1\), is called a 1-dimensional test. Between these two extremes of \(q=1\) and \(q=\rho \), many tests can be devised where \(1 < q < \rho \). The point tests of the previous Sect. 4.1.6, where \(q=2\) or \(q=3\), are just one example. The case where a deformation pattern of many points is tested, by specifying an appropriate C-matrix with q between 1 and \(\rho \), will be treated in Sect. 5.
Quantification of a deformation (its least squares estimation)
As shown in the previous subsections, many alternative hypotheses can be formulated. In Sect. 6, it will be shown that it is worthwhile to automate the process and to test thousands of alternative hypotheses to find the best one, in case the overall model test of Sect. 4.1.6 has rejected the null hypothesis.
If the best alternative hypothesis has been found, the associated C-matrix is known and it is possible to estimate the size of the deformation by estimating \(\hat{\nabla }\):
In 2D, for example, the estimated deformation in each coordinate is:
It is worth noting that this estimate of the deformation in each coordinate direction of a point is in general different from the coordinate difference of that point between the first and second epoch: the least squares estimator of the deformation is a best linear unbiased estimator (Teunissen 2006a) and is in that sense better than the method, often applied in practice, of assessing a deformation by computing differences and at most presenting them graphically. Much, however, depends on the correctness of the alternative hypothesis. Finding the right alternative hypothesis is the subject of Sects. 5 and 6.
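Eq. (42) is not reproduced here; the sketch below assumes the standard framework for bias estimation (Teunissen 2006a), in which the expectation of \(\hat{r}\) under the alternative hypothesis is \(Q_{\hat{r}} C \nabla \), giving the least squares estimate \(\hat{\nabla } = (C^* Q_{\hat{r}} C)^{-1} C^* \hat{r}\). All matrices are randomly generated and hypothetical; the check shows that this estimator recovers an assumed deformation exactly in the noise-free case.

```python
import numpy as np

rng = np.random.default_rng(7)

n, q = 10, 2
A = rng.normal(size=(n, n))
Q_r = A @ A.T + n * np.eye(n)       # hypothetical cofactor matrix of r-hat
C = rng.normal(size=(n, q))         # specification matrix of the hypothesis

nabla_true = np.array([0.237, -0.080])   # an assumed deformation (metres)

# under the alternative hypothesis, the noise-free expectation of r-hat:
r_hat = Q_r @ C @ nabla_true

# least squares (BLUE) estimate of the deformation
nabla_est = np.linalg.solve(C.T @ Q_r @ C, C.T @ r_hat)

assert np.allclose(nabla_est, nabla_true)
```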
Minimal detectable deformation
Designing a geodetic network for the purpose of deformation analysis involves considering the type and size of deformation that can be detected by the analysis. Deformation measurements are subject to stochasticity caused by the measuring instruments, the observer (human or not) and the idealisation precision [the precision by which an object in reality is represented in a mathematical model, i.e. the precision of the “linking-up” of the model (Baarda 1967, p. 6)]. It is difficult to distinguish a deformation from this stochastic variation of measurement values. The way forward is shown by Baarda (1968b) and extended by Teunissen (2006a).
Using the analysis procedure as described in the preceding sections, it is natural to use the concept of the minimal detectable bias (Teunissen 2006b, p. 102) to evaluate the type and size of deformation that can be detected by the analysis. In the context of deformation analysis, we will talk about the minimal detectable deformation \(\nabla _0\), defined by:
where \(\sigma ^2\) is the variance factor (variance of unit weight), \(\lambda _0\) is the reference noncentrality parameter (its computation is explained below), and C and \(Q_{\hat{r}}\) are as defined above.
The reference noncentrality parameter \(\lambda _0\) is dependent on the power \(\gamma \) of test (22), the size \(\alpha \) of this test and the dimension q, symbolically written as:
If a 1-dimensional test is performed (\(q = 1\)) and the size is chosen as \(\alpha = 0.1\,\%\) and the power as \(\gamma = 80\, \%\), a value of \(\lambda _0 = 17.075\) results.
In general, it is desirable that different tests (22) with different dimensions q have the same power \(\gamma \) (indicated as \(\gamma _0\)) and the same noncentrality parameter \(\lambda _0\) (the reference noncentrality parameter). This means that different sizes \(\alpha \) are used for different dimensions. Usually, the value for \(q=1\), indicated by \(\alpha _0\), is fixed at a certain value, often \(\alpha _0 = 0.1\,\%\). This procedure is called the B-method of testing (Baarda 1968b, p. 34), see also (Velsink 1998b, p. (3)54) and (Niemeier 2008, p. 303).
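The value \(\lambda _0 = 17.075\) quoted above can be reproduced with a short computation. For \(q = 1\), the closed form \(\lambda _0 = (z_{1-\alpha _0/2} + z_{\gamma _0})^2\) follows from the standard normal distribution of the two-sided w-test statistic; this is a sketch, not the paper's own code.

```python
from statistics import NormalDist

def reference_noncentrality_1d(alpha0: float, gamma0: float) -> float:
    """Reference non-centrality parameter for a 1-dimensional test:
    lambda_0 = (z_{1 - alpha0/2} + z_{gamma0})^2, where z are standard
    normal quantiles (two-sided test on the w statistic)."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1.0 - alpha0 / 2.0)   # two-sided critical value
    z_gamma = nd.inv_cdf(gamma0)
    return (z_alpha + z_gamma) ** 2

# alpha_0 = 0.1 %, gamma_0 = 80 % reproduces Baarda's classical value
lam0 = reference_noncentrality_1d(alpha0=0.001, gamma0=0.80)
assert abs(lam0 - 17.075) < 0.01
```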
The minimal detectable deformation \(\nabla _0\) of Eq. (44) describes the deformation that, if present, will be detected by test (22) with a probability of \(\gamma _0\) (e.g. 80 %), if the critical value of the test is computed with the chosen reference noncentrality parameter \(\lambda _0\) (e.g. 17.075), using the B-method of testing.
Deformations that are smaller will be detected with a smaller probability than \(\gamma _0\).
The minimal detectable deformation \(\nabla _0\) (MDD) is defined in Eq. (44). In a 1-dimensional test (\(q=1\)), it is a scalar that can be readily derived from Eq. (44), except for its sign:
where c is written with a lowercase letter, because it is a vector in case \(q=1\).
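A small numerical sketch, with a hypothetical cofactor matrix, confirms that this scalar MDD, \(|\nabla _0| = \sigma \sqrt{\lambda _0 / (c^* Q_{\hat{r}} c)}\), substituted back into the quadratic form of Eq. (44), reproduces \(\lambda _0\).

```python
import numpy as np

rng = np.random.default_rng(3)

n = 8
A = rng.normal(size=(n, n))
Q_r = A @ A.T + n * np.eye(n)     # hypothetical cofactor matrix of r-hat
c = np.zeros(n); c[4] = 1.0       # data-snooping vector for one coordinate
sigma, lam0 = 0.01, 17.075

# minimal detectable deformation for a 1-dimensional test (sign undetermined)
mdd = sigma * np.sqrt(lam0 / (c @ Q_r @ c))

# substituting back into the quadratic form of Eq. (44) must give lambda_0
assert np.isclose((mdd * c) @ Q_r @ (mdd * c) / sigma**2, lam0)
```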
If \(q=2\), Eq. (44) describes an ellipse; if \(q=3\), an ellipsoid; and for \(q>3\), a hyperellipsoid.
The principal axes of the hyperellipsoid are determined by computing the eigenvectors of \(C^* Q_{\hat{r}} C\). The lengths of \(\nabla _0\) in the directions of these axes are determined by the eigenvalues of this matrix. The following equation gives the relation (Teunissen 2006b, p. 105):
where \(\nabla _{0_k}\) is the vector that gives the MDD in the direction of the kth eigenvector, \(\sigma \) is the square root of the variance factor, \(\lambda _0\) is the reference noncentrality parameter, \(\lambda _k\) is the kth eigenvalue of \(C^* Q_{\hat{r}} C\), and \(d_k\) is the kth normalised eigenvector of \(C^* Q_{\hat{r}} C\).
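The eigenvalue relation \(\nabla _{0_k} = \sigma \sqrt{\lambda _0 / \lambda _k}\, d_k\) can be checked numerically: each principal-axis MDD must again satisfy Eq. (44). The matrix below stands in for \(C^* Q_{\hat{r}} C\) and is randomly generated.

```python
import numpy as np

rng = np.random.default_rng(5)

q = 3
B = rng.normal(size=(q, q))
M = B @ B.T + q * np.eye(q)       # stands in for C* Q_r C (positive definite)
sigma, lam0 = 0.01, 17.075

eigvals, eigvecs = np.linalg.eigh(M)
for k in range(q):
    d_k = eigvecs[:, k]                               # k-th normalised eigenvector
    nabla_0k = sigma * np.sqrt(lam0 / eigvals[k]) * d_k
    # every principal-axis MDD satisfies Eq. (44): nabla* M nabla / sigma^2 = lambda_0
    assert np.isclose(nabla_0k @ M @ nabla_0k / sigma**2, lam0)
```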
The hyperellipsoid of Eq. (44) gives information about the whole point field, not just about one or a few points. The equation to give information about individual coordinates is:
In the case of data snooping, the matrix C is a vector with only one element different from zero and Eq. (46) can be used. The MDD can then be uniquely attributed to one coordinate. For higher dimensional alternative hypotheses, the quantity \(\nabla _0\) in Eq. (48) can be chosen to be \(\nabla _{0_k}\) of Eq. (47) belonging to the largest eigenvalue of \(C^*Q_{\hat{r}}C\).
Designing a geodetic network for deformation analysis is strongly supported by computing the MDDs for those alternative hypotheses that describe the deformation situations that might occur. An advantage of the equations given is that the MDDs can be computed before the first epoch is measured, i.e. in the design stage of the deformation network.
Testing the deformation of partial point fields
Data snooping strategy
If coordinates of a point field have been determined at two or more epochs, a deformation analysis can be performed. What is an optimal strategy to perform such an analysis? When no specific indication is present about which points have been deformed, it seems appropriate to start with data snooping, if the overall model test rejects the null hypothesis. That means that each coordinate is tested by means of the test statistic w of Eq. (38) with the c-vector from Eq. (40). The idea is that by checking each coordinate successively for a deformation, one will effectively find all deformed coordinates. In the following, these test statistics w are called conventional w-quantities.
Baarda (1968b) introduced the idea of data snooping and immediately warned of its limitations. He writes on page 12: “These “possible” model errors are now described by a number of alternative hypotheses, which in principle do not have to occur simultaneously” (emphasis from Baarda). The alternative hypothesis that checks coordinate i tests whether that coordinate is deformed and all other coordinates are not. This alternative hypothesis cannot be true simultaneously with the alternative hypothesis that coordinate \(j \ne i\) is deformed and all other coordinates are not.
The conventional strategy of data snooping is to compute the conventional w-quantities and consider the coordinate with the largest w-quantity deformed, if its absolute value exceeds the critical value of the normal distribution (3.29 if \(\alpha = 0.1\,\%\)). This coordinate is removed from the data set, the adjustment is repeated, and the then largest w-quantity is tested; its coordinate is removed if the critical value is exceeded. This process is repeated until no critical value is exceeded anymore. This strategy does not provide the uniformly most powerful invariant test as defined in Teunissen (2006b, p. 62), if the deformation concerns more than one coordinate. The uniformly most powerful invariant test is test (22) with a matrix C that describes all coordinates affected by the deformation.
In a 2D or 3D point field, it is more logical to test not individual coordinates but individual points. The alternative hypothesis is in this case that two or three coordinates (for 2D and 3D, respectively) are deformed and all other coordinates are not. Here, we call a strategy that tests every point successively with such an alternative hypothesis point data snooping.
Point data snooping does not provide the uniformly most powerful invariant test if more than one point is subject to deformation. Such a test is test (22) with a matrix C that describes all points affected by the deformation.
Formulating alternative hypotheses to test for a deformation
Usually a deformation affects not just one point, but several points of a geodetic network. Testing for the occurrence of such a deformation can be done by choosing an appropriate C-matrix and using test (22). But how does one know what an appropriate C-matrix is? If information is available about the processes that underlie the deformation, these processes may dictate the C-matrix to use. In many cases, however, the underlying processes are not known well enough, or not known at all. Then, one can simply try several different C-matrices and perform test (22). The C-matrix that delivers the largest value for the test statistic indicates the best alternative hypothesis.
Dimension q differs for different C-matrices
A problem arises if the C-matrices that are tested have different dimensions, i.e. q differs. This means that test (22) is executed with different sizes \(\alpha \) (if the B-method of testing is used). As a consequence, different critical values are used. In that case, the fact that the test statistic \(V_q\) for a certain alternative hypothesis is larger than for any other alternative hypothesis does not mean that that alternative hypothesis is the best one. de Heus et al. (1994b) proposed, for such a situation, to use the ratio of the test statistic to the critical value, while fixing the power \(\gamma \) at 50 % (not 80 % as before). The resulting new test is:
Test strategy
The test strategy now becomes: try several different C-matrices and perform test (49). The C-matrix that gives the largest ratio larger than one indicates the best alternative hypothesis, assuming that no untested C-matrix exists that would give an even larger ratio.
Because test (22) is S-system invariant, test (49) and the above test strategy are also S-system invariant.
A justification for a test strategy using test ratios is given by de Heus et al. (1994a, b).
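The strategy can be sketched as follows. The test-statistic values and hypothesis names are hypothetical; \(\lambda _0 = 10.014\) is the value used later in the validation of Sect. 6.2. The critical value for dimension q is the point at which a test of dimension q has power \(\gamma = 50\,\%\) for non-centrality \(\lambda _0\), i.e. the median of the non-central chi-square distribution, approximated here by Monte Carlo simulation instead of an exact evaluation.

```python
import random
import statistics

random.seed(42)
lam0 = 10.014   # reference non-centrality parameter, as in Sect. 6.2

def critical_value(q: int, n_samples: int = 200_000) -> float:
    """Median of the non-central chi-square distribution chi2(q, lam0),
    i.e. the critical value at which a q-dimensional test has 50 % power,
    approximated by Monte Carlo sampling."""
    mu = lam0 ** 0.5   # put all non-centrality in the first component
    samples = []
    for _ in range(n_samples):
        s = random.gauss(mu, 1.0) ** 2
        for _ in range(q - 1):
            s += random.gauss(0.0, 1.0) ** 2
        samples.append(s)
    return statistics.median(samples)

# hypothetical test statistics V_q of three candidate hypotheses
candidates = {("one point deformed", 2): 31.0,
              ("two points, same shift", 2): 48.0,
              ("two points, individual shifts", 4): 52.0}

# the hypothesis with the largest ratio (larger than one) wins
ratios = {name: T / critical_value(q) for (name, q), T in candidates.items()}
best = max(ratios, key=ratios.get)
```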
Several differently deformed partial point fields
In a geodetic network consisting of many points, designed to detect deformations, it is plausible that one point may deform, but also that two, three, or even more points may deform. These points can deform in exactly the same way, for example, shifting by the same amount in the same direction. It is also possible that some deform in the same way, but others differently. To illustrate these possibilities, three examples of C-matrices in 2D are given.
The first C-matrix says that the first two points have been deformed by the same amount in the same direction. The second C-matrix also says that the first two points have been deformed, but the second point has a deformation that differs from that of the first point. The third C-matrix says that the size of the deformation of the second point is s times the size of the deformation of the first point. s may be, for example, the ratio of the distance of the second point to a certain fixed point and the distance of the first point to that point, reflecting the situation that the deformation of a point may depend on its distance to a certain fixed point.
The vertical dots indicate an unspecified number of rows, consisting entirely of zeros.
Using such specification matrices C, one can test the deformation of a subset of points, or even of several subsets of points. It is, therefore, a multi-point testing method, in contrast to the often-used point-by-point methods described in Sect. 2.
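The three C-matrices described above can be sketched for a small hypothetical 2D point field; the per-point coordinate ordering (x1, y1, x2, y2, ...) is an assumption for illustration.

```python
import numpy as np

n_pts, dim = 6, 2                        # small 2D field, coordinates (x1, y1, x2, y2, ...)
I = np.eye(dim)
Z = np.zeros((dim * (n_pts - 2), dim))   # the "vertical dots": zero rows for other points

# 1) points 1 and 2 deform by the same amount in the same direction (q = 2)
C_same = np.vstack([I, I, Z])

# 2) points 1 and 2 deform, each in its own way (q = 4)
C_diff = np.vstack([np.hstack([I, np.zeros((dim, dim))]),
                    np.hstack([np.zeros((dim, dim)), I]),
                    np.zeros((dim * (n_pts - 2), 2 * dim))])

# 3) point 2 deforms s times as much as point 1 (q = 2)
s = 1.5
C_scaled = np.vstack([I, s * I, Z])

assert C_same.shape == (12, 2) and C_diff.shape == (12, 4) and C_scaled.shape == (12, 2)
```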
How points are deformed depends on the physical processes that underlie the deformations. These processes may cause a simple shift of (subsets of) points that can be described by simple C-matrices like the ones above. It is, however, also possible that more complex deformations result from the physical processes. If the parabolic form of a water storage dam is considered, its deformations might follow a pattern that can be described by a mathematical function. Linearisation of this function gives the C-matrix that can be used for testing. If a subset of points is subject to an elastic deformation, the points of this subset undergo an affine transformation. Linearisation of the equations of the affine transformation results in the following matrix C:
with i up to \(i+j\) the points of the deformed subset.
The six columns of this matrix C correspond to the six parameters of the affine transformation that the points are subject to.
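Such a matrix can be built from the point coordinates. The sketch below uses one plausible 2D parameterisation, \(dx = p_1 x + p_2 y + p_3\), \(dy = p_4 x + p_5 y + p_6\); the row layout and parameter ordering are assumptions for illustration, since the paper's matrix is not reproduced here.

```python
import numpy as np

def affine_C(coords: np.ndarray) -> np.ndarray:
    """Specification matrix C for an affine deformation of a subset of 2D
    points. Each point (x, y) contributes two rows, one per coordinate,
    with six columns for the six affine parameters."""
    rows = []
    for x, y in coords:
        rows.append([x, y, 1.0, 0.0, 0.0, 0.0])   # dx = p1*x + p2*y + p3
        rows.append([0.0, 0.0, 0.0, x, y, 1.0])   # dy = p4*x + p5*y + p6
    return np.array(rows)

# three hypothetical, non-collinear points of the deformed subset
pts = np.array([[10.0, 2.0], [11.5, 3.0], [12.0, 5.5]])
C = affine_C(pts)
assert C.shape == (6, 6)
assert np.linalg.matrix_rank(C) == 6   # six independent affine parameters
```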
B-method of testing
It is assumed in the testing strategy that the B-method of testing is applied (Baarda 1968b, p. 33). This means that first the overall model test is performed. If the null hypothesis is accepted, the testing stops. Only if the null hypothesis is rejected is the search for the best-fitting alternative hypothesis started, beginning with data snooping. The overall model test and the one- and multidimensional tests are linked by the requirement that all tests have the same power if a deformation of the size of the minimal detectable deformation is present. The consequence is that the significance levels of the tests differ for different dimensions. The significance level for a certain dimension is chosen, from which the significance levels for all other dimensions are derived. Starting from the one-dimensional test or starting from the \(\rho \)-dimensional overall model test are two options given by Baarda (1968b, p. 25). Starting from the significance level of the one-dimensional test (\(\alpha _0\)) may result in large values for the significance level \(\alpha _\rho \) of the overall model test (over 50 %), if \(\rho \) is large. Therefore, for the validation in Sect. 6.2, it was chosen to fix \(\alpha _\rho \) and to derive \(\alpha _0\) from it. This may result in very small values for \(\alpha _0\).
Searching the best alternative hypothesis
Automating the process
The assumption in the test strategy of Sect. 5.2 that no untested C-matrix with a larger ratio exists is precarious. The number of possible alternative hypotheses, and thus of possible C-matrices, can be infinitely large. In practice, not all are relevant, but very many can be plausible. It is possible to automate the process of formulating alternative hypotheses (and thus of designing C-matrices) and subsequently testing these hypotheses. Testing one alternative hypothesis can be done very fast on a computer. The number of numerical computations depends on the size of matrix \(C^* Q_{\hat{r}} C\) in Eq. (18). The dimension of this matrix is equal to the number of columns of matrix C, i.e. to q. Computing the test statistic for small-dimensional tests can therefore be done extremely fast. Computing thousands or tens of thousands of test statistics is just a matter of seconds.
An algorithm for the adjustment model of Sect. 3 and its testing with q-dimensional tests, as described in Sects. 4 and 5, has been implemented in a computer program.
The automated testing of large numbers of alternative hypotheses has been incorporated into the algorithm. The algorithm starts by testing all hypotheses that just one point has been deformed and all others have not. Then, it takes every combination of two points and tests whether those two points have been deformed in the same way and all other points have not been deformed. It also tests whether the two points have been deformed in different ways.
Subsequently, it takes every combination of three points and performs analogous tests. It goes on with testing combinations of four points, five points, etc.
At this point, the algorithm currently stops taking combinations of points to be tested. It would be possible to test, for example, whether two points have been deformed in the same way and a certain third point in a different way. The algorithm does not do this, just as many other combinations are not tested.
Because point fields can have very large numbers of points, the number of combinations can rapidly become extremely large, so the algorithm limits the total number of alternative hypotheses to be calculated.
The algorithm calculates test ratios (see test (49)) for every alternative hypothesis and lists the 10 alternative hypotheses with the largest ratios.
Testing the algorithm showed that the deformation patterns of the 10 alternative hypotheses give more information about the deformations present than just data snooping or point data snooping can provide.
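The enumeration described above can be sketched as follows. The hypothesis cap of 50,000 is an arbitrary illustrative value, not taken from the paper; for 14 connection points and subsets of up to seven points, the enumeration yields exactly the 19,800 hypotheses reported in the validation of Sect. 6.2.

```python
from itertools import combinations

def generate_hypotheses(points, max_subset=7, max_hypotheses=50_000):
    """Enumerate alternative hypotheses: single-point deformations, then
    subsets of 2..max_subset points deformed either all in the same way or
    each in its own way; stop when max_hypotheses is reached."""
    hypotheses = []
    for p in points:                                   # one point deformed
        hypotheses.append(("single", (p,)))
    for k in range(2, max_subset + 1):
        for subset in combinations(points, k):
            hypotheses.append(("same", subset))        # common deformation
            hypotheses.append(("different", subset))   # individual deformations
            if len(hypotheses) >= max_hypotheses:
                return hypotheses
    return hypotheses

hyps = generate_hypotheses(list(range(14)))
assert len(hyps) == 19_800
```

In a full implementation, each generated hypothesis would then be turned into its C-matrix and the test ratio of test (49) would be computed and ranked.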
Validation of the method
To validate the proposed automated process, a numerical test is presented. In section 13.5 of Niemeier (2008), a simulated network (Fig. 2), used before in a FIG working group (Welsch 1983), is given. It concerns a 2D geodetic network, in which distances and directions have been measured in two epochs.
Niemeier’s analysis
Niemeier (2008) analyses the network in two ways. First, he assumes that it is unknown if and where points are deformed and performs a one-step analysis, in which the forward and backward strategy mentioned in the introduction is applied. This strategy identifies significantly shifted points. Second, a two-step strategy is described, in which the fault zone is assumed to be known. The points that are supposed to be stable (reference points) are tested with an overall model test, and a subsequent localisation of non-stable points is done. In the second step, it is tested point by point whether a point should be attributed to the reference points. The results are given in Table 1, together with the simulated shifts (deformations). Point 7 does not appear in the table, because it was not measured in both epochs and is therefore not a connection point.
As can be seen in Table 1, the one-step strategy does not succeed in giving satisfactory results. The two-step strategy gives better results, helped, of course, by knowing which points are most probably deformed.
Analysis with the algorithm
To test the algorithm described in Sect. 6.1, coordinates and their covariance matrices were first computed for each epoch. The distance and direction observations and their standard deviations as given in Niemeier (2008) were used, except for the incorrect standard deviation of 1.5 cm for the distance observations, for which the correct standard deviation of 10.5 cm (personal communication of Prof. Niemeier) was used. The adjustment of each epoch was done as a free network adjustment. Both adjustments were tested with an overall model test and accepted. For the connection adjustment, the model of Sect. 3.2 was used with a similarity transformation. The full covariance matrices of the coordinates as computed in both epochs were used in the connection adjustment. The overall model test (Eq. 36) was not accepted, with a square root of \(F_{b,\infty }\) from Eq. (22) equal to 2.33, using the B-method of testing with \(\lambda _0 = 10.014\), \(\gamma _0 = 50\,\%\) and \(\alpha _\rho = 10\,\%\). The square root of the ratio according to test (49) was 1.98.
Because the overall model test was not accepted, the algorithm searched for the best alternative hypothesis. It generated 19,800 alternative hypotheses and tested each of them. The following three types of alternative hypotheses were generated:
1. one point is deformed, all others are not; to be tested for each of the 14 points that were measured in both epochs;
2. two, three, four and up to seven points are deformed; the deformation is the same for each point;
3. two, three, four and up to seven points are deformed; the deformation is different for each point.
The first type amounts to fourteen alternative hypotheses.
The second and third types each amount to \(\sum _{i=2}^{7} \binom{14}{i} = 9{,}893\) alternative hypotheses.
Each alternative hypothesis with \(i > 7\) gives a test result identical to that of an alternative hypothesis with \(i \le 7\), because testing that some specified points are deformed and the others are not is equivalent to testing that those others are deformed and the specified points are not.
The three types of alternative hypotheses together give \(14 + 9,893 + 9,893 = 19,800\) alternative hypotheses.
For each alternative hypothesis, the square root of the ratio according to test (49) was computed. The alternative hypotheses were sorted in order of this ratio. The largest square root of the ratio was 3.09 (Table 2) and belonged to the alternative hypothesis that the five points 3, 5, 11, 39 and 41 were deformed, all in the same way, and that all other points had not been deformed. This hypothesis gives exactly the points that are the deformed points according to Niemeier (2008), see Table 1 and Fig. 2. It is clear from Table 2 that the largest value of 3.09 was notably larger than the other ones.
For the best alternative hypothesis, the deformations of the five deformed points were estimated according to Eqs. (42) and (43). A shift of 23.7 cm in the x-direction and 8.0 cm in the y-direction was estimated. These results are shown in the last two columns of Table 1. The root mean square (rms) of the differences between the estimated shifts and the simulated ones is 3.9 cm, where it is 5.1 cm for the two-step strategy.
Conclusion of the numerical test
The numerical test shows that the method is effective in detecting the points affected by deformation and gives a smaller rms of the remaining coordinate differences than the conventional methods described in Niemeier (2008, section 13). The advantage of this method is that it needs no prior information about which points might have been deformed (no information about the deformation zone is needed).
Not all possible alternative hypotheses were tested in the simulation. For example, the hypothesis that two points were deformed in the same way and one other point in a different way was not tested. Because the true hypothesis was among the tested ones, the method found it.
Considerations
Reference points and object points
It is common in the literature (de Heus et al. 1994b; Welsch and Heunecke 2001; Rüeger 2006) to distinguish between object points and reference points. In an absolute deformation analysis, the stability of the reference points is checked first. Then, the reference points are kept fixed and the object points are checked for deformation. Using the model of this paper, the reference points are just one subset of points. The object points form another subset of points, or they may form several subsets. The search is for the functional relationship between the points and is equivalent to the search for the specification matrix C as described in Sect. 6. The result can be that several subsets emerge that have changed their relative positions. The subset of reference points is one of them. If, for the analysis of the object points, the reference points are fixed, i.e. considered errorless, the stochasticity of the reference points is “pushed” to the object points and in this way disturbs the analysis. Specifying the right alternative hypothesis, i.e. specifying the right matrix C with the appropriate subsets of points and with stochastic reference points, is to be preferred.
S-transformation or implicit transformation
The model for the connection adjustment, as presented in this paper, includes a transformation (Eq. 2). One could approach the connection problem without a transformation, in which case it is necessary that the coordinate vectors a and b are defined in the same reference system. Generally, the geodetic datum is defined by some or all points, and it is necessary to perform S-transformations (Baarda 1973; Koch 1985; Welsch et al. 2000) in the process of searching for the best alternative hypothesis. To avoid this, the connection model includes a transformation. This implicit transformation and the regularisation described in Sect. 3.3 take care of the datum problem, i.e. the transformation between S-systems. Because of the implicit transformation and the regularisation, it is possible to test points that are part of the datum definition in an S-system, and to estimate their deformations, without performing any S-transformation.
Testing with confidence ellipsoids
In this section, the conditions are derived under which the more general test (22) is equal to the testing of confidence ellipsoids (Cederholm 2003).
If a and b are defined in the same Ssystem, transformation f is estimated in Eq. (10) as \(\hat{f} = 0\). This means:
From this, the test quantity of Eq. (18) becomes:
Divide \(Q_d^{-1}\) into four submatrices and d into two subvectors according to:
with \(W_1\) a \((3 \times 3)\) matrix and \(d_1\) a 3-vector.
In case of point data snooping in 3D and testing the first point, matrix C follows from Eq. (41) and we get:
\(V_q\) becomes:
If \(W_3 = 0\), i.e. there is no correlation between the coordinates of the first point and all other points for both a and b, we get:
Performing test (22) is now equal to testing coordinate differences with confidence ellipsoids (Cederholm 2003), if the same level of statistical significance is chosen.
The choice of which point is the first point is arbitrary, which means that Eq. (61) can be used for any point. The reasoning has been done for 3D, but is the same for 1D and 2D.
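The step leading to Eq. (61) rests on a property of block matrix inverses: if \(W_3 = 0\) because \(Q_d\) is block diagonal, the submatrix \(W_1\) of \(Q_d^{-1}\) equals the inverse of the first point's own cofactor block, so the test statistic reduces to the per-point quadratic form of a confidence ellipsoid. A numerical sketch with hypothetical, randomly generated blocks:

```python
import numpy as np

rng = np.random.default_rng(11)

def spd(k: int) -> np.ndarray:
    """Random symmetric positive definite matrix of size k."""
    A = rng.normal(size=(k, k))
    return A @ A.T + k * np.eye(k)

Q1, Qrest = spd(3), spd(9)                 # point 1 block and the remaining points
Q_d = np.block([[Q1, np.zeros((3, 9))],
                [np.zeros((9, 3)), Qrest]])

W = np.linalg.inv(Q_d)
W1 = W[:3, :3]                             # the (3 x 3) submatrix W_1

# with W_3 = 0, W_1 equals the inverse of point 1's own cofactor block
assert np.allclose(W1, np.linalg.inv(Q1))

# hence the quadratic form reduces to the confidence-ellipsoid form of point 1
d1 = rng.normal(size=3)
assert np.isclose(d1 @ W1 @ d1, d1 @ np.linalg.inv(Q1) @ d1)
```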
The conclusion is that testing using confidence ellipsoids (confidence regions in 1D, confidence ellipses in 2D) is the uniformly most powerful invariant test [in accordance with the definition of such a test in Teunissen 2006b, p. 62], if two conditions are fulfilled:
1. The coordinates of a and b are defined in the same S-system.
2. The coordinate difference between a and b of any point is not correlated with any other coordinate difference.
If an S-transformation is performed on the coordinates of all points, they will in general all be correlated with each other after the transformation, even if they were not before. This means that testing with confidence ellipsoids is not the uniformly most powerful invariant test after an S-transformation. It also means that the results of a confidence ellipsoid test (Koch 1985; Cederholm 2003) are in general not invariant under a change of the S-system.
If \(Q_d\) is of full rank, it is not clear whether a and b are defined in the same S-system. In such a case, using model (8), (9) is preferable for two reasons. First, it includes a transformation, which solves the datum problem. Second, it eliminates the uncertainty in the definition of the degrees of freedom of the S-system from the deformation analysis.
Model (8), (9) cannot be used, if the deformation is relative to points (objects, part of the earth’s surface) that lie outside the point field under consideration, i.e. the reference points are not part of \(a_2\) and \(b_2\). This is the case, for instance, if the reference system itself is used as reference for the deformation analysis. In such a case, the transformation should be omitted from model (5). This is not elaborated upon in this paper.
Geometric and physical interpretation
A geometric deformation analysis can be improved significantly if physical causes of the deformation are taken into account. In Sect. 4, it is shown how an alternative hypothesis is defined by a specification matrix C. A method is proposed to find the best C. The method in fact applies “brute force”: it tries to test as many alternative hypotheses as possible to find the one with the largest test ratio. The search for the best alternative hypothesis could, however, be formulated in a general way:
Search 1
Find the alternative hypothesis with a specification matrix C that maximises the test ratio of test (49).
A mathematical approach to this search means finding the derivative of the left-hand side of the inequality of test (49) with respect to the elements of matrix C and equating it to zero. This leads to a system of nonlinear equations, whose solution is not straightforward.
Here, we approach the search by trying different alternative hypotheses. The large number of alternative hypotheses that have to be tested can be reduced by omitting implausible hypotheses. Such implausible hypotheses can be identified by geometric reasoning (e.g. points that are far away from each other probably do not undergo exactly the same deformation). Knowledge about the underlying physical processes that determine the deformations, however, helps considerably to reduce the number of matrices C that have to be tested.
The method proposed in Sect. 4 can be used in the absence of knowledge of the underlying physical processes. If, however, knowledge about them is available, it should be used to reduce the number of alternative hypotheses that have to be tested.
Outlook
Although the situation that two epochs of measurements of a limited amount of 1D, 2D or 3Dpoints are to be analysed still happens often in professional practice, the trend is towards very frequent measuring, even continous monitoring, of objects and towards very dense coverage of an object with measured points (Niemeier 2011). Extension of the presented method towards more epochs than just two and towards a large amount of points is therefore desirable.
With an adjustment model that covers more than two epochs, it is possible to specify alternative hypotheses that describe deformation processes over several epochs. The search for the best alternative hypothesis will be even more complicated. It is therefore important to find ways to reduce the number of alternative hypotheses that have to be tested. Analysing the underlying physical processes and finding functional relationships that describe the deformation processes is one way to do this.
Conclusions
A new approach to determine a multipoint deformation of a geodetic network, measured in two epochs, is presented. Monitoring an object or the earth’s surface for deformations is possible by choosing a representative set of points and measuring them, and, if necessary, reference points outside the object, by geodetic means. If two epochs of measurements are available, it is shown how to test the measured points in such a way that subsets of points can be distinguished. Each subset can have its own deformation behaviour. The reference points are seen as just one subset, not to be treated differently from the other subsets.
The deformation analysis uses the null hypothesis that no deformation has occurred. This hypothesis is tested with an overall model test. If this test leads to rejection, a search starts for the best alternative hypothesis. An alternative hypothesis can concern the deformation of just one point, but in general it will affect more points (i.e. a subset of points). It can even distinguish more subsets of points. If no information is available on possible deformations, a method is given to test by “brute force” as many alternative hypotheses as possible.
The test method is invariant under changes of the S-systems in which the point coordinates are defined.
When the best alternative hypothesis has been detected, the least squares estimate of the deformation can be computed. Equations are also given to compute the minimal detectable deformations, which can be used in the design stage of a geodetic network for deformation analysis.
The method to find the best alternative hypothesis has been tested numerically in a 2D network, where it succeeded in finding the deformation.
The adjustment model used includes a transformation that makes it unnecessary to apply S-transformations in the process of testing for deformations.
The relation between the proposed method and the method of testing confidence ellipsoids is shown.
Finally, it is shown that the proposed method exceeds the boundaries of a purely geometric analysis. Application of the method yields improved results, if the underlying physical processes are taken into account.
References
Baarda W (1960) De betekenis van varianties van coördinaten bij berekening van puntsbepalingsystemen in het platte vlak. Tech. rep., Laboratorium voor Geodetische Rekentechniek, Technische Hogeschool Delft (now: Delft University of Technology)
Baarda W (1967) Statistical concepts in geodesy. Publications on Geodesy New Series, vol 2, no 4. Netherlands Geodetic Commission
Baarda W (1968a) Enkele inleidende beschouwingen tot de “B-Methode” van toetsen. Laboratorium voor Geodetische Rekentechniek, Technische Hogeschool Delft
Baarda W (1968b) A Testing Procedure for Use in Geodetic Networks. Publications on Geodesy New Series, vol 2, no 5. Netherlands Geodetic Commission
Baarda W (1973) S-transformations and Criterion Matrices. Publications on Geodesy New Series, vol 5, no 1. Netherlands Geodetic Commission
Caspary W (2000) Concepts of network and deformation analysis, Monograph 11. School of Surveying, The University of New South Wales, Australia, Tech. rep
Caspary W, Borutta H (1987) Robust estimation in deformation models. Surv Rev 29(223):29–45
Cederholm P (2003) Deformation analysis using confidence ellipsoids. Surv Rev 37(287):31–45
Chen YQ (1983) Analysis of deformation surveys—a generalized method. University of New Brunswick, Department of Geodesy and Geomatics Engineering, Canada, Tech. rep
de Heus HM, Joosten P, Martens MHF, Verhoef HME (1994a) Geodetische deformatie analyse: 1D-deformatieanalyse uit waterpasnetwerken. Tech. Rep. 5, Delft University of Technology, LGR Series, Delft
de Heus HM, Martens MHF, Verhoef HME (1994b) Stabilityanalysis as part of the strategy for the analysis of the Groningen gas field levellings. In: Proceedings of the Perlmutter Workshop on Dynamic Deformation Models, Haifa, pp 259–272
Kamiński W, Nowel K (2013) Local variance factors in deformation analysis of non-homogenous monitoring networks. Surv Rev 45(328):44–50
Koch K (1985) Ein statistisches Auswerteverfahren für Deformationsmessungen. Allgemeine VermessungsNachrichten 92(3):97–108
van Mierlo J (1978) A testing procedure for analysing geodetic deformation measurements. In: Proceedings of the 2nd FIG Symposium on Deformation Measurements by Geodetic Methods. Bonn, Germany, pp 321–353
Niemeier W (2008) Ausgleichungsrechnung: Statistische Auswertemethoden, 2nd edn. de Gruyter, Berlin, New York
Niemeier W (2011) Stand und Entwicklungstendenzen für das “Geometrische Monitoring” von kleinräumigen Objekten. In: Proceedings Geomonitoring
Pope AJ (1971) Transformation of covariance matrices due to changes in minimal control. In: AGU Fall Meeting, San Francisco, California, December 9, 1971. National Ocean Survey Geodetic Research and Development Laboratory
Rijkswaterstaat (2012) Productspecificaties Deformatiemeting Kunstwerken. http://www.rijkswaterstaat.nl/zakelijk/zakendoen_met_rws/werkwijzen/gww/dataeisen/monitoringsgegevens_kunstwerken/
Rüeger J (2006) Overview of geodetic deformation measurements of dams. ANCOLD 2006 Conference
Schaffrin B (1975) Zur Verzerrtheit von Ausgleichungsergebnissen. Universität Bonn, Tech. rep
Setan H, Singh R (2001) Deformation analysis of a geodetic monitoring network. Geomatica 55(3)
Teunissen PJG (1985) The geometry of geodetic inverse linear mapping and nonlinear adjustment. Publications on Geodesy New Series, vol 8, no 1. Netherlands Geodetic Commission
Teunissen PJG (2006a) Adjustment theory, an introduction. VSSD, Delft
Teunissen PJG (2006b) Testing theory, an introduction. VSSD, Delft
Teunissen PJG, Salzmann MA, de Heus HM (1987a) Over het aansluiten van puntenvelden. NGT Geodesia 29(5–12), a series of four articles in Dutch
Teunissen PJG, Salzmann MA, de Heus HM (1987b) Over het aansluiten van puntenvelden (2): De aansluitingsvereffening. NGT Geodesia 29(6):229–235 (in Dutch)
Velsink H (1998a) Coördinaten definiëren, transformeren en schranken. Lecture Notes Hogeschool Utrecht (Utrecht University of Applied Sciences), Course of Geodesy
Velsink H (1998b) Waarnemingsrekening voor toepassing in de geodesie, volumes 1, 2, 3 and 4. Lecture Notes Hogeschool Utrecht (Utrecht University of Applied Sciences), Course of Geodesy
Welsch W (1983) Deformationsanalysen ’83: geometrische Analyse und Interpretation von Deformationen geodätischer Netze: Beiträge zum Geodätischen Seminar, 22. April 1983. Wissenschaftlicher Studiengang Vermessungswesen, Hochschule der Bundeswehr
Welsch WM, Heunecke O (2001) Models and terminology for the analysis of geodetic monitoring observations. Official Report of the AdHoc Committee of FIG Working Group 6.1, the 10th FIG International Symposium on Deformation Measurements
Welsch WM, Heunecke O, Kuhlmann H (2000) Handbuch Ingenieurgeodäsie: Auswertung geodätischer Überwachungsmessungen, 1st edn. Wichmann, Heidelberg
Acknowledgments
The support of the research by the Netherlands Geodetic Commission is gratefully acknowledged.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Velsink, H. On the deformation analysis of point fields. J Geod 89, 1071–1087 (2015). https://doi.org/10.1007/s00190-015-0835-z
Keywords
 Deformation analysis
 Multipoint deformation
 Connection adjustment
 Alternative hypotheses
 Testing partial point fields