## Abstract

In this contribution, we propose a method for statistically evaluating the risk in a deformation monitoring system. When the structure under monitoring moves beyond tolerance, the monitoring system should issue an alert. Only a very small probability is acceptable of the system telling us that no change beyond a critical threshold has taken place, while in reality it has. This probability is referred to as *integrity risk*. We provide a formulation of integrity risk where the interaction between estimation and testing is taken into account, implying the use of conditional probabilities. In doing so, we assume different scenarios with the alerts being dependent on both the identified hypothesis and the threat that the estimated size of deformations entails. It is hereby highlighted that a correct risk evaluation requires estimation and testing to be considered together, as they are intimately linked. In practice, one may, however, find it simpler computation-wise to neglect the estimation–testing link. For this case, we provide an approximation of the integrity risk. This approximation may provide a too optimistic or pessimistic description of the integrity risk depending on the testing procedure and tolerances of the structure at hand. Monitoring systems, besides issuing timely alerts, are also required to provide threat estimates together with their corresponding probabilistic properties. As the testing outcome determines how the threat gets estimated, the threat estimator will then inherit the statistical properties of both estimation and testing. We derive the threat estimator \(\bar{b}_{j}\) and its probability density function, taking the contributions from combined estimation and testing into account. It is highlighted that although the threat estimator under the identified hypothesis \({\mathcal {H}}_{j}\), i.e., \(\hat{b}_{j}\), is normally distributed, the estimator \(\bar{b}_{j}\) is not.
It is explained that working with \(\hat{b}_{j}\) instead of \(\bar{b}_{j}\), thus ignoring the estimation–testing link, may provide a too optimistic description of the threat estimator’s quality. The presented method is illustrated by means of two simple deformation examples.

## Introduction

Monitoring systems for both man-made structures (such as a dam, a dike, or a bridge) and natural Earth structures (such as a volcano, a fault, or tectonic plates), which—upon load or changing circumstances—may be subject to deformation and/or displacement, are safety-critical. The monitoring system should detect a real effect in a timely manner, while, on the other hand, issuing as few false alarms as possible. When a deformation or displacement beyond the tolerance of the structure occurs and goes unnoticed, the structure may ‘suddenly’ fail or collapse with possibly dramatic consequences such as loss of human lives and huge damage. Therefore, in practice, only a very small probability is acceptable of the system telling us that no change beyond a critical threshold has taken place (issuing no alert), while in reality it has. We refer to this probability as *integrity risk*.

The structure under consideration is typically believed to be stable (null hypothesis \({\mathcal {H}}_{0}\)), implying no threat. We need to be alerted, however, upon undesired deformation or displacement (alternative hypotheses \({\mathcal {H}}_{i}\)), in particular when they are beyond tolerances of the structure. The situation in which the structure moves beyond tolerance should obviously be avoided as much as possible, but when it happens, the monitoring system should issue an alert based on the monitoring measurements being carried out. In order to quantify the performance of the monitoring for a specific hypothesis, the corresponding integrity risk needs to be evaluated. Such an evaluation can even be done in the design phase, prior to the operational phase of the monitoring system in which the actual measurements are collected. For example, one may tune the statistical testing procedure such that a (very) small integrity risk is guaranteed.

Many studies have so far been devoted to the analysis of structural deformation measurements and to the design of proper deformation monitoring procedures. Pelzer (1971) was one of the first to set out a mathematical framework for geodetic deformation analysis. The analysis is based on the so-called (global) congruence test, carried out on the difference vector between the coordinates at two epochs in time, formally to find out whether shape and/or size of the pointfield has changed or not. When this test is passed, the conclusion reads that there is no deformation or displacement of the pointfield covering the object/area of interest and its surroundings. The mathematical procedure was extended and elaborated on by van Mierlo (1978). The identification of shifted points was subsequently investigated through a variety of statistical methods (Caspary and Borutta 1987; Chen et al. 1990; Niemeier 1985; Setan and Singh 2001; Sušić et al. 2017; Konakoğlu and Gökalp 2018). In Eichhorn (2007), an overview of techniques and trends in geodetic deformation analysis is presented. A recent overview of geodetic deformation analysis is given in the textbook by Heunecke et al. (2013). This book also covers kinematic, static and dynamic deformation models, with the aim of estimating the deformation parameters of interest; see also Verhoef and De Heus (1995), who propose the use of polynomial models. A comprehensive review of dam deformation monitoring technologies is provided in Scaioni et al. (2018). Recent studies are still inspired by the earlier framework for geodetic deformation analysis, see, e.g., Durdag et al. (2018) and Yavaşoğlu et al. (2018). In the latter paper, once the global congruence test has detected ‘some’ displacement, attempts are made to fit models consisting of position/displacement, velocity and acceleration parameters. Velocity and acceleration estimators are then tested for significance.

This paper presents a new contribution to the field of deformation monitoring and analysis. We propose a method to statistically evaluate the *risk* in deformation monitoring. It is highlighted that in the processing of measurements of a monitoring system, *estimation* and *testing* are intimately linked and should be considered together when presenting the quality of the output of the monitoring system (Teunissen 2018). As such, the risk assessment under a hypothesis, say \({\mathcal {H}}_{j}\), can only be done correctly when all testing decision probabilities are taken into account, as well as the implications of testing on the distributions of the estimators for the parameters involved. This needs eventually to be done for all hypotheses at hand in order to arrive at the overall integrity risk.

This contribution is organized as follows. In Sect. 2, we first describe the null and alternative hypotheses considered for deformation monitoring analysis. The role of the misclosure space partitioning in testing these hypotheses is then highlighted, and the testing procedure is accordingly specified. It is hereby shown how the estimator \(\bar{b}_{j}\) of a deformation parameter is formed, capturing the contributions from both testing and estimation. We also derive the distribution of the estimator \(\bar{b}_{j}\). The integrity risk is mathematically formulated in Sect. 3. For different scenarios, we provide a strict formulation where the estimation–testing link is taken into account, implying the use of conditional probabilities. We then provide an approximation following from neglecting the conditioning on the testing outcome, which might be considered simpler computation-wise. We hereby highlight that this approximation may provide a too optimistic or pessimistic description of the integrity risk depending on the testing procedure and tolerances of the structure under monitoring.

In Sect. 4, for a simple observational model with just a single alternative, the integrity risk is evaluated using both the strict and the approximate approach. We demonstrate in graphical form the factors driving the difference between these two approaches. Assuming that a deformation has taken place, we then provide an analysis of the precision of the deformation parameter estimator, with and without accounting for the conditioning on the testing decision. It is highlighted that neglecting this conditioning may provide a too optimistic description of the estimator’s quality. The integrity risk evaluation is then continued, but now for an actual deformation measurement system example with multiple hypotheses. Finally, a summary with conclusions is presented in Sect. 5.

## Deformation monitoring

### Null and alternative hypotheses

As our starting point, we characterize the null and alternative hypotheses, denoted by \({\mathcal {H}}_{0}\) and \({\mathcal {H}}_{i}\), respectively. Typically in change detection, the null hypothesis \({\mathcal {H}}_{0}\) is the ‘all-stable, no movement’ model, which, here, is assumed to be given as

$$\begin{aligned} {\mathcal {H}}_{0}:\quad {\mathsf {E}}(y)=Ax,\quad {\mathsf {D}}(y)=Q_{yy} \end{aligned}$$(1)

with \({\mathsf {E}}(\cdot )\) the expectation operator, \({\mathsf {D}}(\cdot )\) the dispersion operator, \(y\in {\mathbb {R}}^{m}\) the normally distributed random vector of observables (with the measurements typically collected at multiple epochs in time), \(x\in {\mathbb {R}}^{n}\) the estimable unknown parameters, \(A\in {\mathbb {R}}^{m\times n}\) the design matrix of rank\((A)=n\), and \(Q_{yy}\in {\mathbb {R}}^{m\times m}\) the positive-definite variance matrix of *y*. The redundancy of the model of observation equations under \({\mathcal {H}}_{0}\) is \(r = m - \mathrm{rank}(A) = m - n\).

As alternative hypotheses, we consider those describing different dynamic behavior of the structure under consideration. In this contribution, we limit ourselves—for simplicity of the analyses—to movements which can be characterized by a single scalar. The observational model under \({\mathcal {H}}_{i}\) (for \(i=1,\ldots ,k\)) is of the form

$$\begin{aligned} {\mathcal {H}}_{i}:\quad {\mathsf {E}}(y)=Ax+c_{i}b_{i},\quad {\mathsf {D}}(y)=Q_{yy} \end{aligned}$$(2)

where \(c_{i}\in {\mathbb {R}}^{m}\) describes the presumed movement signature, and \(b_{i}\in {\mathbb {R}}{\setminus }\{0\}\) is the size of the nonzero movement, e.g., a displacement (step or jump), or a velocity (rate of change). With \(b_{i}=0\), we are effectively back at \({\mathcal {H}}_{0}\) in (1). Note that \([A~~c_{i}]\) is a known matrix of full rank, i.e., rank\(([A~~c_{i}])=n+1\), and scalar \(b_{i}\) is unknown. The hypotheses \({\mathcal {H}}_{i}\) (\(i=0,1,\ldots ,k\)) are mutually exclusive, implying that \({\mathsf {E}}(y)\) cannot have the same location (in \({\mathbb {R}}^{m}\)) under different hypotheses. We further assume that the hypotheses at hand do not occur simultaneously, indicating that only one hypothesis is true at a time.

As a simple example, one could imagine that the height of a single point is repeatedly observed at several epochs in time and stacked in vector *y*. The all-stable, no movement null hypothesis \({\mathcal {H}}_{0}\) is then represented by a model in which all observations relate to a single unknown parameter, namely the height of the point under consideration, through the design matrix *A* which equals a vector of all ones. One of the alternative hypotheses, say \({\mathcal {H}}_{i}\), could imply a sudden shift in the point height, which is supposed, for example, to occur after the third epoch. Therefore, the shift is present in the height of the point from the fourth epoch onward. Then, scalar \(b_{i}\) represents the unknown shift, and the vector \(c_{i}\) takes zeros as its first three entries and ones elsewhere.
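The example above can be put into a small numeric sketch. The epoch count (five), the height and the shift size are assumptions for illustration only:

```python
# Assumed for illustration: m = 5 epochs, shift occurring after epoch 3,
# height x = 10 m and shift b_i = 0.03 m.
m = 5
A = [[1.0] for _ in range(m)]          # column of ones: one common height x
shift_epoch = 3
c_i = [0.0 if k < shift_epoch else 1.0 for k in range(m)]   # [0, 0, 0, 1, 1]

x, b_i = 10.0, 0.03
# E(y) = A x + c_i b_i under H_i: the first three epochs keep the original
# height, the last two are offset by the (unknown) shift b_i.
E_y = [A[k][0] * x + c_i[k] * b_i for k in range(m)]
```

The signature vector \(c_{i}\) thus encodes *when* the movement acts, while the scalar \(b_{i}\) encodes *how large* it is.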

Alternative hypotheses (2) imply an *extension* of null hypothesis (1). An extra parameter, namely \(b_{i}\), is introduced in the alternative hypothesis with respect to the null hypothesis, for instance, to accommodate a jump or a rate of change. The very same pair of hypotheses can also be given another interpretation. The alternative hypothesis presents the more general situation, including a possible displacement or rate of change, through parameter \(b_{i}\). In the null hypothesis, this parameter is constrained to zero; with \(b_{i}=0\), the alternative hypothesis reduces to the null hypothesis. According to this interpretation, one is testing the *significance* of the extra parameter \(b_{i}\), for instance the rate of change being zero (null hypothesis) or not, and hence significant (alternative hypothesis).

Finally, we mention that, in practice, alternative hypotheses may also concern incidental outliers and faults in the measurements of the monitoring system, or distortions in individual benchmarks. These hypotheses are omitted in the present contribution for the sake of clarity—we focus on actual deformations.

### Hypothesis testing

All information required to test the hypotheses at hand against one another is contained in the *misclosure* vector \(t\in {\mathbb {R}}^{r}\) given as

$$\begin{aligned} t=B^\mathrm{T}y \end{aligned}$$(3)

where \(B\in {\mathbb {R}}^{m\times r}\) is a full-rank matrix, with rank\((B)=r\), such that \([A~~B]\in {\mathbb {R}}^{m\times m}\) is invertible and \(A^\mathrm{T}B=0\). With \(y\overset{{\mathcal {H}}_{i}}{\sim }{\mathcal {N}}(Ax+c_{i}b_{i},Q_{yy})\) for \(i=0,1,\ldots ,k\) and \(c_{0}b_{0}=0\) (to accommodate also the null hypothesis in (1)), the misclosure vector is then distributed as

$$\begin{aligned} t\overset{{\mathcal {H}}_{i}}{\sim }{\mathcal {N}}\left( c_{t_{i}}b_{i},\,Q_{tt}\right) ,\quad c_{t_{i}}=B^\mathrm{T}c_{i},\quad Q_{tt}=B^\mathrm{T}Q_{yy}B \end{aligned}$$(4)
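A minimal numeric sketch of the misclosure construction, assuming the two-epoch levelling model used later in the example section (\(A=[1,~1]^\mathrm{T}\), so \(m=2\), \(n=1\), \(r=1\)) and the non-unique choice \(B=[-1,~1]^\mathrm{T}\):

```python
A = [1.0, 1.0]     # all-stable model: one common height over two epochs
B = [-1.0, 1.0]    # satisfies A^T B = 0, and [A B] is invertible

y = [10.012, 10.047]                       # illustrative height observations
t = sum(bk * yk for bk, yk in zip(B, y))   # t = B^T y = y_2 - y_1
# Under H0, E(t) = B^T A x = 0 for any height x: the misclosure is free of
# the unknown parameters and reflects only noise and model errors.
```

Any other valid basis matrix \(B\) gives a misclosure that carries the same testing information, since it differs only by an invertible transformation.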

The testing procedure can be established by unambiguously assigning the outcomes of *t* to the statistical hypotheses \({\mathcal {H}}_{i}\) for \(i=0,1,\ldots ,k\), which can be realized through a partitioning of the misclosure space \({\mathbb {R}}^{r}\). Therefore, with \({\mathcal {P}}_{i}\subset {\mathbb {R}}^{r}\) being a partitioning of the misclosure space, i.e., \(\cup _{i=0}^{k}{\mathcal {P}}_{i}={\mathbb {R}}^{r}\) and \({\mathcal {P}}_{i}\cap {\mathcal {P}}_{j}=\emptyset \) for \(i\ne j\), the testing procedure is unambiguously defined as (Teunissen 2018)

$$\begin{aligned} \text {select }{\mathcal {H}}_{i}\quad \Longleftrightarrow \quad t\in {\mathcal {P}}_{i},\quad i=0,1,\ldots ,k \end{aligned}$$(5)

As (5) shows, the decisions of the testing procedure are driven by the outcome of the misclosure vector *t*. If \({\mathcal {H}}_{i}\) is true, then the decision is correct if \(t\in {\mathcal {P}}_{i}\), and wrong if \(t\in {\mathcal {P}}_{j\ne i}\). As such, based on the outcomes of *t*, we have tabulated the set of events under \({\mathcal {H}}_{0}\) and \({\mathcal {H}}_{i}\) in Table 1. The probabilities of the occurrence of these events, denoted by \(\mathrm{P}_{*}\) with \(*=\{\mathrm{CA,FA,MD}_{i},\mathrm{CD}_{i},\mathrm{WI}_{i},\mathrm{CI}_{i}\}\), satisfy

$$\begin{aligned} \mathrm{P}_{\mathrm{CA}}+\mathrm{P}_{\mathrm{FA}}=1,\quad \mathrm{P}_{\mathrm{MD}_{i}}+\mathrm{P}_{\mathrm{CD}_{i}}=1,\quad \mathrm{P}_{\mathrm{CD}_{i}}=\mathrm{P}_{\mathrm{CI}_{i}}+\mathrm{P}_{\mathrm{WI}_{i}} \end{aligned}$$(6)

Except for FA and CA events, the probabilities of other events under an alternative hypothesis, say \({\mathcal {H}}_{i}\), depend on the threat value \(b_{i}\). Also note that for the special case of having only one single alternative, say \({\mathcal {H}}_{1}\), we have \({\mathcal {P}}_{1}={\mathbb {R}}^{r}{\setminus }{\mathcal {P}}_{0}\) which implies \(\mathrm{P}_{\mathrm{WI}_{1}}=0\), thereby \(\mathrm{P}_{\mathrm{CD}_{1}}=\mathrm{P}_{\mathrm{CI}_{1}}\).

In this study, our testing strategy comprises two steps, detection and identification, and is specified as follows.

*Detection*: The validity of the null hypothesis (all stable) is checked through an overall model test (the redundancy needs to be \(r>0\)). The null hypothesis \({\mathcal {H}}_{0}\) is accepted if \(t\in {\mathcal {P}}_{0}\) with$$\begin{aligned} {\mathcal {P}}_{0}=\left\{ t\in {\mathbb {R}}^{r}\bigg |~\Vert t\Vert ^{2}_{Q_{tt}} \le k_{\alpha ,r}\right\} \end{aligned}$$(7)in which \(\Vert .\Vert ^{2}_{Q_{tt}}=(.)^\mathrm{T}Q_{tt}^{-1}(.)\) and \(k_{\alpha ,r}\) is the \(\alpha \)-percentage of the central Chi-square distribution with *r* degrees of freedom. Here, \(\alpha \) is the false alarm probability, i.e., \(\alpha =\mathrm{P}_{\mathrm{FA}}\), which is usually set a priori by the user.

*Identification*: If the default working model \({\mathcal {H}}_{0}\) is rejected in the detection step, a search is carried out among the specified alternatives \({\mathcal {H}}_i\ (i =1,\ldots ,k)\) to pinpoint the potential source of deformation (note that with \(r=1\) identification is not possible). The alternative hypothesis \({\mathcal {H}}_{i\ne 0}\) is selected if \(t\in {\mathcal {P}}_{i\ne 0}\) with$$\begin{aligned} {\mathcal {P}}_{i}=\left\{ t\in {\mathbb {R}}^{r}{\setminus }{\mathcal {P}}_{0}\bigg | ~|w_{i}|=\underset{j\in \{1,\ldots ,k\}}{\max }\;|w_{j}|\right\} ,\;\, i=1,\ldots ,k \end{aligned}$$(8)in which \(w_{i}\) is Baarda’s test statistic computed as (Baarda 1967; Teunissen 2000)

$$\begin{aligned} w_{i} = \dfrac{c^\mathrm{T}_{t_{i}}Q_{tt}^{-1}t}{\sqrt{c^\mathrm{T}_{t_{i}} Q_{tt}^{-1}c_{t_{i}}}};\quad c_{t_{i}}=B^\mathrm{T}c_{i} ,\quad i=1,\ldots ,k \end{aligned}$$(9)
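The two-step procedure can be sketched numerically. The following assumes \(r=2\), \(Q_{tt}=I\), two illustrative signature vectors and \(\alpha =0.01\); for \(r=2\) the Chi-square quantile has the closed form \(k_{\alpha ,2}=-2\ln \alpha \), so no statistics package is needed:

```python
import math

alpha = 0.01
k_alpha = -2.0 * math.log(alpha)      # chi^2(2) upper alpha-point, ~9.21

t = [3.2, 0.4]                        # an illustrative misclosure outcome
c_t = {1: [1.0, 0.0], 2: [1.0, 1.0]}  # assumed signatures c_{t_i} = B^T c_i

# Detection: overall model test ||t||^2_{Q_tt} <= k_alpha (Q_tt = I here)
T = sum(tk * tk for tk in t)
accept_H0 = T <= k_alpha

# Identification: Baarda's w-statistic per alternative, cf. (9) with Q_tt = I
w = {i: abs(sum(c * tk for c, tk in zip(ci, t))) / math.sqrt(sum(c * c for c in ci))
     for i, ci in c_t.items()}
identified = None if accept_H0 else max(w, key=w.get)
```

With these numbers the overall model test rejects \({\mathcal {H}}_{0}\) and the first alternative attains the largest \(|w_{i}|\), so it is the one identified.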

It can be shown that the set of regions \({\mathcal {P}}_{i}\ (i= 0, 1, \ldots , k)\) in (7) and (8) forms a partitioning of the misclosure space if and only if \(c_{t_{i}}\ne \gamma \, c_{t_{j}}\) for any \(i\ne j\) and for any scalar \(\gamma \in {\mathbb {R}}{\setminus }\{0\}\) (Zaminpardaz and Teunissen 2019). This implies that for the case of \(r=1\), where \(c_{t_{i}}\in {\mathbb {R}}\), none of the alternative hypotheses can be separated from the others.

Note that once one of the alternatives, say \({\mathcal {H}}_{i}\), is identified through the above procedure, then follow-on estimations, like deformation estimation, take place according to model (2). This will be discussed in the following subsection.

We remark that since \(t=B^\mathrm{T}y=B^\mathrm{T}\hat{e}_{0}\), with \(\hat{e}_{0}=y-A\hat{x}_{0}\), the above procedure can be (equivalently) formulated in terms of the least-squares residual vector \(\hat{e}_{0}\) as well, providing a more recognizable form of the testing procedure (Teunissen 2000). Also note that here, for simplicity, we work with alternative hypotheses that are 1-dimensional extensions of the null hypothesis (cf. (2)). Nevertheless, our method is equally valid for higher-dimensional cases, provided that the selection of the \({\mathcal {H}}_{i}\)’s can be done unambiguously (cf. (5)). Finally, note that although we use likelihood-ratio-based statistical tests through (7)–(9), our point, that testing and estimation are intimately linked, holds true for any data-driven decision procedure like *p*-values (Lehmann and Lösler 2016) and the Akaike Information Criterion (AIC) (Akaike 1974; Burnham and Anderson 2003).

### Threat estimation

In deformation analyses, monitoring systems have the task of not only issuing timely alerts when the situation is deemed too dangerous, but also providing threat estimates with their corresponding probabilistic properties. Let \(b_{j}\), the movement size under \({\mathcal {H}}_{j}\) (cf. (2)), be the threat one is concerned with. Depending on whether or not the hypothesis \({\mathcal {H}}_{j}\) is selected through the testing procedure in (5), estimation of \(b_{j}\) would be different; \(b_{j}\) is estimated once \({\mathcal {H}}_{j}\) is selected, and kept zero otherwise. Therefore, the outcome of testing determines how the deformation \(b_{j}\) gets estimated. The probabilistic properties of such an estimation–testing combination can be captured through a unifying framework presented by Teunissen (2018). As such, the estimator of \(b_{j}\) is given as

$$\begin{aligned} \bar{b}_{j}=\hat{b}_{j}\,p_{j}(t) \end{aligned}$$(10)

with \(p_{j}(t)\) being the indicator function of region \({\mathcal {P}}_{j}\) (cf. (5)), i.e., \(p_{j}(t)=1\) for \(t\in {\mathcal {P}}_{j}\) and \(p_{j}(t)=0\) for *t* elsewhere, and \(\hat{b}_{j}\) the estimator of \(b_{j}\) under \({\mathcal {H}}_{j}\). In this paper, we make use of Best Linear Unbiased Estimation (BLUE), from which the estimator of \(b_j\) follows as

$$\begin{aligned} \hat{b}_{j}=c_{t_{j}}^{+}\,t \end{aligned}$$(11)

where \(c_{t_{j}}^{+}=(c_{t_{j}}^\mathrm{T}\,Q_{tt}^{-1}c_{t_{j}})^{-1} c_{t_{j}}^\mathrm{T}\,Q_{tt}^{-1}\) is the BLUE-inverse of \(c_{t_{j}}=B^\mathrm{T}c_{j}\). As \(\hat{b}_{j}\) is a linear function of the normally distributed misclosure vector *t*, with (11) and (4), we then have

$$\begin{aligned} \hat{b}_{j}\overset{{\mathcal {H}}_{i}}{\sim }{\mathcal {N}}\left( c_{t_{j}}^{+}c_{t_{i}}b_{i},\;\sigma ^{2}_{\hat{b}_{j}}=\left( c_{t_{j}}^\mathrm{T}Q_{tt}^{-1}c_{t_{j}}\right) ^{-1}\right) \end{aligned}$$(12)

Note that although estimator \(\hat{b}_j\) is normally distributed, estimator \(\bar{b}_{j}\) of (10) is *not*. The estimator \(\bar{b}_{j}\) is namely, next to its dependence on \(\hat{b}_j\), also *nonlinearly* dependent on the misclosure *t* through the indicator function.
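The switching behavior of the combined estimator can be sketched for the binary \(r=1\) case; the values \(\sigma _{t}=c_{t}=1\) and the test outcomes are assumptions for illustration. For a scalar misclosure, the BLUE-inverse reduces to \(1/c_{t}\):

```python
from statistics import NormalDist

alpha, sigma_t, c_t = 0.01, 1.0, 1.0
# chi^2(1) upper alpha-point, obtained from the standard-normal quantile
k_alpha = NormalDist().inv_cdf(1 - alpha / 2) ** 2

def b_bar(t):
    rejected = t * t / sigma_t ** 2 > k_alpha   # t in P_1, i.e., H0 rejected
    b_hat = t / c_t                             # BLUE of b_1 under H1
    return b_hat if rejected else 0.0           # kept zero when H0 is accepted

small, large = b_bar(1.5), b_bar(4.0)   # small -> 0.0, large -> 4.0
```

A small misclosure leaves \({\mathcal {H}}_{0}\) accepted and forces the threat estimate to zero; a large one releases the BLUE under \({\mathcal {H}}_{1}\) unchanged. This hard switch is exactly what makes \(\bar{b}_{j}\) non-normal.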

### The PDF of the threat estimator

To gain an understanding of the properties of the threat estimator \(\bar{b}_{j}\) in (10), its probability density function (PDF) needs to be studied. As (10) shows, the estimator \(\bar{b}_{j}\) is constructed from the misclosure vector *t* and the BLUE \(\hat{b}_{j}\) which, according to (11), is also fully driven by *t*. Therefore, the probabilistic characteristics of \(\bar{b}_{j}\) are governed by those of *t*. In order to derive the PDF of \(\bar{b}_{j}\) under \({\mathcal {H}}_{i}\), we first apply a one-to-one transformation to the misclosure vector *t* as follows

$$\begin{aligned} \left[ \begin{array}{c} \hat{b}_{j}\\ \tilde{t}_{j} \end{array}\right] ={\mathcal {T}}_{j}\,t;\quad {\mathcal {T}}_{j}=\left[ \begin{array}{c} c_{t_{j}}^{+}\\ c_{t_{j}}^{\perp ^\mathrm{T}} \end{array}\right] \end{aligned}$$(13)

where \(c_{t_{j}}^{\perp }\in {\mathbb {R}}^{r\times (r-1)}\) is a full-rank matrix of which the range space is orthogonal to that of \(c_{t_{j}}\), i.e., \(c_{t_{j}}^{\perp ^\mathrm{T}}c_{t_{j}}=0\), which implies that the normally distributed \(\hat{b}_{j}\) and \(\tilde{t}_{j}\) are *independent*. We remark that \(\tilde{t}_{j}\) represents the ‘remaining’ misclosures once parameter \(b_{j}\) is estimated according to model (2) for \({\mathcal {H}}_{j}\). In other words, \(\tilde{t}_{j}\) is the misclosure vector obtained when employing the alternative hypothesis \({\mathcal {H}}_{j}\) for estimation. For the special case of only one redundancy (\(r=1\)), we have \(t\in {\mathbb {R}}\), implying that \(\hat{b}_{j}\) is a scaled version of the misclosure and \(\tilde{t}_{j}\) no longer exists. For this single-redundancy scenario, no identification can be exercised, which means that only one alternative hypothesis can be considered, i.e., \(j=1\) and \(k=1\), such that rejection of \({\mathcal {H}}_{0}\) implies acceptance of \({\mathcal {H}}_{1}\).
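The claimed independence can be checked by a quick Monte Carlo experiment, assuming \(r=2\), \(Q_{tt}=I\) and \(c_{t_{j}}=[1,~1]^\mathrm{T}\), so that \(c_{t_{j}}^{\perp }=[1,~-1]^\mathrm{T}\):

```python
import random

random.seed(0)
n = 200_000
pairs = [(random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)) for _ in range(n)]
bhat = [(t1 + t2) / 2.0 for t1, t2 in pairs]   # c_t^+ t with c_t = [1, 1]^T
ttil = [t1 - t2 for t1, t2 in pairs]           # c_t_perp^T t

mb = sum(bhat) / n
mt = sum(ttil) / n
cov = sum((b - s_mt_b(t := 0)) if False else (b - mb) * (s - mt)
          for b, s in zip(bhat, ttil)) / n if False else \
      sum((b - mb) * (s - mt) for b, s in zip(bhat, ttil)) / n
# cov ~ 0; since (bhat, ttil) are jointly normal, zero covariance
# here indeed means independence.
```

The sample covariance vanishes up to sampling noise, in line with \(c_{t_{j}}^{+}Q_{tt}\,c_{t_{j}}^{\perp }=(c_{t_{j}}^\mathrm{T}Q_{tt}^{-1}c_{t_{j}})^{-1}c_{t_{j}}^\mathrm{T}c_{t_{j}}^{\perp }=0\).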

Applying transformation (13) to the regions \({\mathcal {P}}_{i}\) (\(i=0,1,\ldots ,k\)), we obtain the new regions \(\widetilde{{\mathcal {P}}}_{i}\) (\(i=0,1,\ldots ,k\)) defined as

$$\begin{aligned} \widetilde{{\mathcal {P}}}_{i}=\left\{ {\mathcal {T}}_{j}\,t\in {\mathbb {R}}^{r}~\big |~t\in {\mathcal {P}}_{i}\right\} ,\quad i=0,1,\ldots ,k \end{aligned}$$(14)

These regions, like \({\mathcal {P}}_{i}\) in (5), form a partitioning of \({\mathbb {R}}^{r}\). We are now in a position to derive the PDF of \(\bar{b}_{j}\). In doing so, we discriminate between the two cases \(r=1\) (\(t\in {\mathbb {R}}\)) and \(r>1\) (\(t\in {\mathbb {R}}^{r>1}\)).

### Theorem 1

(PDF of \(\bar{b}_{j}\)) Let \(\bar{b}_{j}\) be given as (10). Then, the PDF of \(\bar{b}_{j}\) under \({\mathcal {H}}_{i}\) can be expressed as

- (i) for \(r=1\) (\(t\in {\mathbb {R}}\)):$$\begin{aligned} f_{\bar{b}_{j}}(b|{\mathcal {H}}_{i})=f_{\hat{b}_{j}}(b|{\mathcal {H}}_{i}) \,p_{j}(c_{t_{j}}\,b)\;+\;\delta (b)\,\mathrm{P}(t\notin {\mathcal {P}}_{j}|{\mathcal {H}}_{i}) \end{aligned}$$(15)
- (ii) for \(r>1\) (\(t\in {\mathbb {R}}^{r>1}\)):$$\begin{aligned} f_{\bar{b}_{j}}(b|{\mathcal {H}}_{i})= & {} f_{\hat{b}_{j}} (b|{\mathcal {H}}_{i})\;{\displaystyle \int _{{\mathbb {R}}^{r-1}} f_{\tilde{t}_{j}}(\tau |{\mathcal {H}}_{i})\,\tilde{p}_{j}(b,\tau )\;\hbox {d}\tau }\nonumber \\&+\delta (b)\,\mathrm{P}(t\notin {\mathcal {P}}_{j}|{\mathcal {H}}_{i}) \end{aligned}$$(16)

with \(\delta (b)\) the Dirac delta distribution, \(\tilde{p}_{j}(b,\tau )=p_{j}\left( {\mathcal {T}}_{j}^{-1} \left[ \begin{array}{l} b\\ \tau \end{array}\right] \right) \) the indicator function of the region \(\widetilde{{\mathcal {P}}}_{j}\), and \(\mathrm{P}(\cdot )\) the probability of occurrence of the event within parentheses.

### Proof

See Appendix. \(\square \)

As was mentioned before, case (i) is of relevance only for binary hypothesis testing as, with \(r=1\), one cannot discriminate between alternative hypotheses. Case (i) can be seen as a special case of (ii), since when \(r=1\), the indicator function \(\tilde{p}_{j}(b, \tau )\) reduces to \(p_{j}(c_{t_{j}}b)\), and by substituting this into (16) one gets (15).

The above theorem shows that the PDF of \(\bar{b}_{j}\) is constructed from two parts. The first part applies when \(t\in {\mathcal {P}}_{j}\), and \(b_j\) gets estimated according to (11), resulting in a normal PDF with no probability mass over a specific interval, while the second part applies when \(t\notin {\mathcal {P}}_{j}\), and hence, \(b_j\) is estimated as zero, which leads to all probability mass getting concentrated at \(b=0\). Equations (15) and (16) imply that even if the misclosure vector *t* of (4), and thus the estimator \(\hat{b}_{j}\) of (12) as well, is normally distributed, \(\bar{b}_{j}\) does *not* have a normal distribution.

### Example

Let \(y\in {\mathbb {R}}^{2}\) contain the observations of a single point height at two epochs, which are uncorrelated and have the same standard deviation \(\sigma \). Under the null hypothesis \({\mathcal {H}}_{0}\), the height of this point, \(x\in {\mathbb {R}}\), is assumed to remain unchanged over time, whereas under the alternative \({\mathcal {H}}_{1}\), it is assumed that a shift of size \(b_{1}\) in the height of the point occurs at the second epoch, i.e., \(c_{1}=[0,~1]^\mathrm{T}\). These two hypotheses are then formulated as

$$\begin{aligned} {\mathcal {H}}_{0}:~{\mathsf {E}}(y)=\left[ \begin{array}{c} 1\\ 1 \end{array}\right] x,\quad {\mathcal {H}}_{1}:~{\mathsf {E}}(y)=\left[ \begin{array}{c} 1\\ 1 \end{array}\right] x+\left[ \begin{array}{c} 0\\ 1 \end{array}\right] b_{1},\quad {\mathsf {D}}(y)=\sigma ^{2}I_{2} \end{aligned}$$(17)

with \(I_{2}\) being the identity matrix of dimension two. The redundancy of \({\mathcal {H}}_{0}\) is \(r=1\), implying that \(t\in {\mathbb {R}}\). For this binary hypothesis example (\(k=1\)), the partitioning of the misclosure space \({\mathbb {R}}\) is formed by two regions, i.e., \({\mathcal {P}}_{0}\) and its complement \({\mathcal {P}}_{1}={\mathbb {R}}{\setminus }{\mathcal {P}}_{0}={\mathcal {P}}^{c}_{0}\). As \(r=1\) (\(t\in {\mathbb {R}}\)), the PDF of \(\bar{b}_{1}\) is obtained from (15). Figure 1 illustrates the PDF of \(\bar{b}_{1}\) under \({\mathcal {H}}_{0}\) and \({\mathcal {H}}_{1}\) assuming \(b_{1}=3~\mathrm{cm}\), for three different sets of values of \(\sigma \) and \(\alpha \), i.e., \(\sigma =1/\sqrt{2}~\mathrm{cm}\) and \(\alpha =0.01\) (left), \(\sigma =1/\sqrt{2}~\mathrm{cm}\) and \(\alpha =0.1\) (middle), \(\sigma =1~\mathrm{cm}\) and \(\alpha =0.01\) (right).

It is observed that the PDF of the threat estimator \(\bar{b}_{1}\) is made of two parts: a curve and a spike (cf. (15)). The former is obtained from the normal PDF \(f_{\hat{b}_{1}}(b| {\mathcal {H}}_{i})\), of which the probability mass is set to zero over the interval where \(p_{1}(c_{t_{1}}b)=0\). With \({\mathcal {P}}_{0} =[-\sqrt{k_{\alpha ,1}}\sigma _{t}, ~\sqrt{k_{\alpha ,1}}\sigma _{t}]\), \(p_{1}(c_{t_{1}}b)\) is zero for \(b\in [-\sqrt{k_{\alpha ,1}}\sigma _{t}/c_{t},~\sqrt{k_{\alpha ,1}}\sigma _{t}/c_{t}]\). For the example at hand, we have \(\sigma _{t}/c_{t}=\sqrt{2}\sigma \), implying that \(p_{1}(c_{t_{1}}b)=0\) for \(b\in [-\sqrt{2\,k_{\alpha , 1}} \sigma ,~\sqrt{2\,k_{\alpha ,1}}\sigma ]\). Therefore, the larger the \(\sigma \), the wider the interval where \(p_{1}(c_{t_{1}}b)=0\). This can also be confirmed by comparing the left and right panels. The second part of the threat estimator PDF is formed by a spike, of which the height is given by \(\mathrm{P}(t\in {\mathcal {P}}_{0}| {\mathcal {H}}_{0})=1-\alpha \) for the top row and by \(\mathrm{P}(t\in {\mathcal {P}}_{0}|{\mathcal {H}}_{1})\) for the bottom row. The former is the CA-probability (see Table 1), which depends solely on the user-chosen \(\alpha \): the larger the \(\alpha \), the smaller the CA-probability. The latter is the MD-probability (see Table 1), which increases when \(\alpha \) decreases and/or \(\sigma \) increases.
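The two-part structure of this PDF can be reproduced by simulation. The sketch below assumes the left-panel setting (\(\sigma =1/\sqrt{2}~\mathrm{cm}\), \(\alpha =0.01\), \(b_{1}=3~\mathrm{cm}\)) and the basis choice \(B=[-1,~1]^\mathrm{T}\), so that \(t\sim {\mathcal {N}}(b_{1},\sigma _{t}^{2})\) under \({\mathcal {H}}_{1}\) with \(\sigma _{t}=1~\mathrm{cm}\), \(c_{t}=1\) and \(\hat{b}_{1}=t\):

```python
import math
import random
from statistics import NormalDist

random.seed(1)
sigma, alpha, b1 = 1 / math.sqrt(2), 0.01, 3.0
sigma_t = math.sqrt(2) * sigma                         # = 1 cm
thr = NormalDist().inv_cdf(1 - alpha / 2) * sigma_t    # sqrt(k_{alpha,1}) sigma_t

n, zeros, curve = 100_000, 0, []
for _ in range(n):
    t = random.gauss(b1, sigma_t)        # c_t = 1, so E(t | H1) = b1
    if abs(t) <= thr:
        zeros += 1                       # H0 accepted: bbar_1 = 0 (the spike)
    else:
        curve.append(t)                  # bbar_1 = bhat_1 = t (the curve)

md_prob = zeros / n                        # empirical spike height under H1
gap_ok = all(abs(b) > thr for b in curve)  # no mass inside [-thr, thr], cf. (15)
```

The empirical fraction at zero approximates the MD-probability (about one third in this setting), while the remaining samples populate the truncated normal curve outside the no-mass interval.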

## Integrity

With the null hypothesis \({\mathcal {H}}_{0}\) in (1) as the ‘all-stable, no movement’ model, the alternative hypotheses \({\mathcal {H}}_{j}\) (\(j=1,\ldots ,k\)) in (2) are assumed to cover potentially dangerous deformations/movements, and their sizes are characterized through the scalars \(b_{j}\). The monitoring system is therefore required to issue an *alert* when a significant movement (e.g., displacement or velocity) has occurred. The term ‘alert’ here should not be confused with the term ‘alarm’ in Table 1. While the latter is driven by the testing procedure only, the former is in addition driven by the threat estimate. The critical or threshold movement is called ‘Alert Limit’ and denoted by AL. A movement of this magnitude is deemed to pose an immediate dangerous threat to the structure. A region of acceptable threat values \(b_{j}\) is then defined as \({\mathcal {B}}_{\mathrm{AL}}=[-\mathrm{AL},~\mathrm{AL}]\), which is a zero-centered region (formally with the origin excluded). Note that, in practice, AL and hence the region \({\mathcal {B}}_{\mathrm{AL}}\) may vary for different alternative hypotheses. In the sequel, however, for simplicity, we assume that the same threshold value AL applies to all alternative hypotheses.

### Definition 1

(*Integrity risk*) The integrity risk for a monitoring system is defined as the probability of not alerting while an alternative hypothesis, say \({\mathcal {H}}_{j}\), holds true, and the corresponding threat \(b_{j}\) goes beyond the alert limit AL.

With the above definition, no risk will be involved if either \({\mathcal {H}}_{0}\) is true or \({\mathcal {H}}_{j}\) holds true while the corresponding threat \(b_{j}\) lies below the alert limit. Therefore, the integrity risk (IR) under \({\mathcal {H}}_{0}\) and \({\mathcal {H}}_{j}\) reads

$$\begin{aligned} \mathrm{IR}|{\mathcal {H}}_{0}=0;\qquad \mathrm{IR}|{\mathcal {H}}_{j}=\mathrm{P}(\mathrm{no~alert}~|~{\mathcal {H}}_{j})\;\iota (b_{j}),\quad j=1,\ldots ,k \end{aligned}$$(18)

where \(\iota (b_{j})\) is the indicator function of the region \({\mathbb {R}}{\setminus }{\mathcal {B}}_{\mathrm{AL}}\) defined as \(\iota (b_{j})=0\) for \(b_{j}\in {\mathcal {B}}_{\mathrm{AL}}\), and \(\iota (b_{j})=1\) elsewhere. Thus, \(\mathrm{IR}|{\mathcal {H}}_{j}=0\) if \(b_{j}\in {\mathcal {B}}_{\mathrm{AL}}\). Through the testing procedure, if \({\mathcal {H}}_{0}\) is selected, then there is no threat to be estimated, whereas if \({\mathcal {H}}_{i\ne 0}\) is selected, \(\hat{b}_{i}\) is provided as the estimate of \(b_{i}\). With this in mind, the event of ‘no alert’ occurs when either ‘\({\mathcal {H}}_{0}\) is selected’ or ‘\({\mathcal {H}}_{i\ne 0}\) is selected and \(\hat{b}_{i}\in {\mathcal {B}}_{\mathrm{AL}}\) (\(i=1,\ldots ,k\))’, see also Table 2. The integrity risk \(\mathrm{IR}|{\mathcal {H}}_{j}\) in (18) for \(j=1,\ldots ,k\) can then be expressed as

$$\begin{aligned} \mathrm{IR}|{\mathcal {H}}_{j}= & {} \left[ \mathrm{P}(t\in {\mathcal {P}}_{0}|{\mathcal {H}}_{j})+\sum _{i=1}^{k}\mathrm{P}(\hat{b}_{i}\in {\mathcal {B}}_{\mathrm{AL}},\,t\in {\mathcal {P}}_{i}|{\mathcal {H}}_{j})\right] \iota (b_{j})\nonumber \\= & {} \Big [\mathrm{P}(t\in {\mathcal {P}}_{0}|{\mathcal {H}}_{j})+\mathrm{P}(\hat{b}_{j}\in {\mathcal {B}}_{\mathrm{AL}}|t\in {\mathcal {P}}_{j},{\mathcal {H}}_{j})\,\mathrm{P}(t\in {\mathcal {P}}_{j}|{\mathcal {H}}_{j})\nonumber \\&+\sum _{i=1,i\ne j}^{k}\mathrm{P}(\hat{b}_{i}\in {\mathcal {B}}_{\mathrm{AL}}|t\in {\mathcal {P}}_{i},{\mathcal {H}}_{j})\,\mathrm{P}(t\in {\mathcal {P}}_{i}|{\mathcal {H}}_{j})\Big ]\,\iota (b_{j}) \end{aligned}$$(19)

where the second equality results from application of the conditional probability rule. We remark that since the events ‘\({\mathcal {H}}_{0}\) is selected’ and ‘\({\mathcal {H}}_{i\ne 0}\) is selected and \(\hat{b}_{i}\in {\mathcal {B}}_{\mathrm{AL}}\) (\(i=1,\ldots ,k\))’ are mutually exclusive, see (5), the probability of their union, i.e., the integrity risk, can be written as the summation of their individual probabilities. The first, second and third terms on the right-hand side of (19) represent the risks incurred by the MD-event, CI-event and WI-event, respectively, see Table 1.

In (19), the integrity risk is presented in case the alternative hypothesis \({\mathcal {H}}_{j}\) holds true. We have to consider this for all alternative models \({\mathcal {H}}_{i}\) with \(i=1,\ldots ,k\), as all alternative hypotheses, once the movement is beyond the Alert Limit, are considered here dangerous. Assuming that \({\mathcal {H}}_{0}\) and \({\mathcal {H}}_{i}\) (\(i=1,\ldots ,k\)) cover all the events that can possibly occur, we then have \(\sum _{i=0}^{k}\,\mathrm{P}({\mathcal {H}}_{i})=1\) with \(\mathrm{P}({\mathcal {H}}_{i})\) being the probability of occurrence of \({\mathcal {H}}_{i}\). The overall integrity risk can then be obtained using the total probability rule (Papoulis 1984) as

$$\begin{aligned} \mathrm{IR}=\sum _{j=1}^{k}\,\mathrm{IR}|{\mathcal {H}}_{j}~\,\mathrm{P}({\mathcal {H}}_{j}) \end{aligned}$$(20)

Note that the above equation contains no \({\mathcal {H}}_{0}\)-related term as under the null hypothesis there would be no integrity issue, i.e., \(\mathrm{IR}|{\mathcal {H}}_{0}=0\), that is, false alarm is not considered an integrity risk. To get a better understanding of the factors contributing to the overall risk, Table 2 visualizes the construction of the \(\mathrm{IR}\) from the testing decisions and threat estimations.

Here, it is important to realize that the expression in (20) depends on the (true) values of \(b_{j}\) under \({\mathcal {H}}_{j}\) for \(j=1,\ldots ,k\) (cf. (19)), which are unknown. Hence, the actual integrity risk cannot be computed; it can only be evaluated as a function of the \(b_{j}\)’s (\(j=1,\ldots ,k\)). To be conservative, one can then look at ‘worst case’ scenarios by considering the largest possible value that the IR can take as a function of the \(b_{j}\)’s (\(j=1,\ldots ,k\)), i.e., maximizing \(\mathrm{IR}|{\mathcal {H}}_{j}\) for each \(j=1,\ldots ,k\). Also, in case the a-priori probabilities \(\mathrm{P}({\mathcal {H}}_{j})\) (\(j=1,\ldots ,k\)) are not known, one can stay with the individual integrity risks \(\mathrm{IR}|{\mathcal {H}}_{j}\) and work with *k* worst-case scenarios, equivalent to setting \(\mathrm{P}({\mathcal {H}}_{j})=1\) for \(j=1,\ldots ,k\).

### Approximate integrity risk

As (19) suggests, computation of the integrity risk for a certain \({\mathcal {H}}_{j}\) requires the computation of *k* conditional probabilities, i.e., probabilities of \(\hat{b}_{i}\in {\mathcal {B}}_{\mathrm{AL}}\) conditioned on \(t\in {\mathcal {P}}_{i}\) and \({\mathcal {H}}_{j}\) (\(i=1,\ldots ,k\)), which may impose a heavy computational burden, particularly when dealing with a large number of alternative hypotheses. One may, however, find it more convenient to *neglect* the correlation between \(\hat{b}_{i}\) (\(i=1,\ldots ,k\)) and *t*, and hence the conditioning on the testing outcome, and arrive at the following approximation of the integrity risk
$$\begin{aligned} {\mathrm{IR}}^{o}|{\mathcal {H}}_{j}=\mathrm{P}(t\in {\mathcal {P}}_{0}|{\mathcal {H}}_{j})+\sum _{i=1}^{k}\,\mathrm{P}(\hat{b}_{i}\in {\mathcal {B}}_{\mathrm{AL}}|{\mathcal {H}}_{j})\,\mathrm{P}(t\in {\mathcal {P}}_{i}|{\mathcal {H}}_{j}) \end{aligned}$$(21)

The overall approximate integrity risk then reads
$$\begin{aligned} {\mathrm{IR}}^{o}=\sum _{j=1}^{k}\,({\mathrm{IR}}^{o}|{\mathcal {H}}_{j})\,\mathrm{P}({\mathcal {H}}_{j}) \end{aligned}$$(22)

It is important to note that whether \(\mathrm{IR}^{o}\) provides a conservative or optimistic approximation of \(\mathrm{IR}\) depends on how the regions \({\mathcal {P}}_{i}\) (\(i=0,1,\ldots ,k\)) and \({\mathcal {B}}_{\mathrm{AL}}\) are defined.

We note that with \(\mathrm{IR}|{\mathcal {H}}_{j}\), one conditions on both the hypothesis and the testing outcome (cf. (19)), while with \({\mathrm{IR}}^{o}|{\mathcal {H}}_{j}\), one conditions only on the hypothesis and *not* on the testing outcome (cf. (21)). The difference between the integrity risk and its approximation, under \({\mathcal {H}}_{j}\), can be expressed as
$$\begin{aligned} \mathrm{IR}|{\mathcal {H}}_{j}-{\mathrm{IR}}^{o}|{\mathcal {H}}_{j}=\sum _{i=1}^{k}\left[ \mathrm{P}(\hat{b}_{i}\in {\mathcal {B}}_{\mathrm{AL}}|t\in {\mathcal {P}}_{i},~{\mathcal {H}}_{j})-\mathrm{P}(\hat{b}_{i}\in {\mathcal {B}}_{\mathrm{AL}}|{\mathcal {H}}_{j})\right] \mathrm{P}(t\in {\mathcal {P}}_{i}|{\mathcal {H}}_{j}) \end{aligned}$$(23)

which is driven by the difference between the conditional non-normal PDFs \(f_{\hat{b}_{i}|t\in {\mathcal {P}}_{i}}(b|t\in {\mathcal {P}}_{i}, ~{\mathcal {H}}_{j})\) and the normal PDFs \(f_{\hat{b}_{i}}(b| {\mathcal {H}}_{j})\) (\(i=1,\ldots ,k\)) over \({\mathcal {B}}_{\mathrm{AL}}\). These PDFs are linked by virtue of the total probability rule as
$$\begin{aligned} f_{\hat{b}_{i}}(b|{\mathcal {H}}_{j})&=f_{\hat{b}_{i}|t\in {\mathcal {P}}_{i}}(b|t\in {\mathcal {P}}_{i},~{\mathcal {H}}_{j})\,\mathrm{P}(t\in {\mathcal {P}}_{i}|{\mathcal {H}}_{j})\nonumber \\&\quad +f_{\hat{b}_{i}|t\notin {\mathcal {P}}_{i}}(b|t\notin {\mathcal {P}}_{i},~{\mathcal {H}}_{j})\,\mathrm{P}(t\notin {\mathcal {P}}_{i}|{\mathcal {H}}_{j}) \end{aligned}$$(24)

which implies that if \(\mathrm{P}(t\in {\mathcal {P}}_{i}| {\mathcal {H}}_{j})\rightarrow 1\), then \(f_{\hat{b}_{i}|t \in {\mathcal {P}}_{i}}(b|t\in {\mathcal {P}}_{i},~{\mathcal {H}}_{j}) \rightarrow f_{\hat{b}_{i}}(b|{\mathcal {H}}_{j})\). The following Lemma gives the conditional PDF \(f_{\hat{b}_{i}|t \in {\mathcal {P}}_{i}}(b|t\in {\mathcal {P}}_{i},~{\mathcal {H}}_{j})\). Here, we again distinguish between \(r=1\) (\(t\in {\mathbb {R}}\)) and \(r>1\) (\(t\in {\mathbb {R}}^{r>1}\)), and emphasize that the former is of relevance only for binary hypothesis testing as, with \(r = 1\), one cannot discriminate between alternative hypotheses.

### Lemma 1

(PDF of \(\hat{b}_{i}|t\in {\mathcal {P}}_{i}\)) Let \(\hat{b}_{i}\) and *t* be linked to each other according to (13). Then, the conditional PDF \(f_{\hat{b}_{i}|t\in {\mathcal {P}}_{i}}(b|t\in {\mathcal {P}}_{i},~{\mathcal {H}}_{j})\) follows as

- (i)
for \(r=1\) (\(t\in {\mathbb {R}}\))

$$\begin{aligned} f_{\hat{b}_{i}|t\in {\mathcal {P}}_{i}}(b|t\in {\mathcal {P}}_{i},~{\mathcal {H}}_{j}) =f_{\hat{b}_{i}}(b|{\mathcal {H}}_{j})\times \dfrac{{p}_{i}(c_{t_{i}}b)}{\mathrm{P}(t\in {\mathcal {P}}_{i}|{\mathcal {H}}_{j})} \end{aligned}$$(25)

- (ii)
for \(r>1\) (\(t\in {\mathbb {R}}^{r>1}\))

$$\begin{aligned} f_{\hat{b}_{i}|t\in {\mathcal {P}}_{i}}(b|t\in {\mathcal {P}}_{i},~{\mathcal {H}}_{j})&=f_{\hat{b}_{i}}(b|{\mathcal {H}}_{j})\nonumber \\&\quad \times {\displaystyle \int _{{\mathbb {R}}^{r-1}} \dfrac{f_{\tilde{t}_{i}} (\tau |{\mathcal {H}}_{j})\,\tilde{p}_{i}(b,\tau )}{\mathrm{P}(t\in {\mathcal {P}}_{i}|{\mathcal {H}}_{j})}\;\hbox {d}\tau } \end{aligned}$$(26)

### Proof

See Appendix. \(\square \)

As the above Lemma shows, the conditional PDF \(f_{\hat{b}_{i}|t\in {\mathcal {P}}_{i}}(b|t\in {\mathcal {P}}_{i}, ~{\mathcal {H}}_{j})\) at each value of *b* is obtained by scaling the corresponding value of the normal PDF \(f_{\hat{b}_{i}} (b|{\mathcal {H}}_{j})\). For example, for the case of \(r=1\), \(f_{\hat{b}_{i}|t\in {\mathcal {P}}_{i}}(b|t\in {\mathcal {P}}_{i},~{\mathcal {H}}_{j})\) equals \(f_{\hat{b}_{i}}(b|{\mathcal {H}}_{j})\) divided by \(\mathrm{P}(t\in {\mathcal {P}}_{i}|{\mathcal {H}}_{j})\) if \(c_{t_{i}}b\in {\mathcal {P}}_{i}\), and zero if \(c_{t_{i}}b\notin {\mathcal {P}}_{i}\).

Figure 2 shows, for the binary example of (17), the PDFs \(f_{\hat{b}_{1}}(b|{\mathcal {H}}_{1})\) (red) and \(f_{\hat{b}_{1}|t\in {\mathcal {P}}_{1}}(b|t\in {\mathcal {P}}_{1},~{\mathcal {H}}_{1})\) (blue). The underlying settings are \(b_{1}=3~\hbox {cm}\) and \(\sigma =1~\hbox {cm}\). The conditional PDF \(f_{\hat{b}_{1}|t\in {\mathcal {P}}_{1}}(b|t \in {\mathcal {P}}_{1},~{\mathcal {H}}_{1})\) is given for two values of \(\alpha \), i.e., \(\alpha =0.1\) (solid curve) and \(\alpha =0.001\) (dashed curve). As was mentioned previously, \(p_{1}(c_{t_{1}}b)\) is zero for \(b\in [-\sqrt{2k_{\alpha ,1}}\sigma ,~\sqrt{2k_{\alpha ,1}}\sigma ]\). Hence, in Fig. 2, given \(\sigma =1~\hbox {cm}\), the probability mass of \(f_{\hat{b}_{1}|t\in {\mathcal {P}}_{1}}(b|t\in {\mathcal {P}}_{1},~{\mathcal {H}}_{1})\) is zero over the interval \([-\sqrt{2\,k_{\alpha ,1}}, ~\sqrt{2\,k_{\alpha ,1}}]\). The PDF \(f_{\hat{b}_{1}|t \in {\mathcal {P}}_{1}}(b|t\in {\mathcal {P}}_{1},~{\mathcal {H}}_{1})\) takes larger values when \(\alpha \) decreases, because decreasing \(\alpha \) also decreases the probability \(\mathrm{P}(t\in {\mathcal {P}}_{1}|{\mathcal {H}}_{1})\), the denominator of (25).
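The scaling construction of Lemma 1 for \(r=1\) can be checked numerically. The sketch below, under the settings quoted for Fig. 2 (with \(\sigma _{\hat{b}_{1}}=\sqrt{2}~\hbox {cm}\) an assumed value consistent with the zero-interval \([-\sqrt{2k_{\alpha ,1}},\sqrt{2k_{\alpha ,1}}]\) above), builds the conditional PDF by scaling the normal PDF with the indicator of the rejection region and verifies that it is a proper PDF with no mass over the acceptance region:

```python
import math

import numpy as np

# standard normal CDF via the error function
Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def conditional_pdf(b, b1, sigma_b, c):
    """PDF of b_hat_1 given t in P_1, per Lemma 1 (r = 1): the normal PDF of
    b_hat_1 ~ N(b1, sigma_b^2) scaled by the indicator of the rejection
    region |b| > c and divided by P(t in P_1 | H_1)."""
    p_cd = 1.0 - (Phi((c - b1) / sigma_b) - Phi((-c - b1) / sigma_b))
    normal = np.exp(-0.5 * ((b - b1) / sigma_b) ** 2) / (sigma_b * math.sqrt(2.0 * math.pi))
    return normal * (np.abs(b) > c) / p_cd

b1, sigma_b = 3.0, math.sqrt(2.0)      # settings in the spirit of Fig. 2 (cm)
b = np.linspace(-20.0, 20.0, 400001)
db = b[1] - b[0]

for alpha, k in {0.1: 2.7055, 0.001: 10.8276}.items():  # chi-square(1) critical values
    c = math.sqrt(k) * sigma_b                          # edge of the acceptance region
    f = conditional_pdf(b, b1, sigma_b, c)
    assert abs(f.sum() * db - 1.0) < 1e-3               # proper PDF: integrates to 1
    assert f[np.abs(b) <= c].max() == 0.0               # no mass over the acceptance region
```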

### Further simplification

Suppose that the goal is to correctly detect a real effect (deformation), say \({\mathcal {H}}_{j}\), no matter the value of the *estimated* threat \(\hat{b}_{j}\). In this case, selection of an alternative hypothesis always goes with an alert. In other words, an alert will be issued if the null hypothesis is rejected and the event of ‘no alert’ reduces to ‘\({\mathcal {H}}_{0}\) is selected.’ The integrity risk for this case then becomes
$$\begin{aligned} \mathrm{IR}|{\mathcal {H}}_{j}=\mathrm{P}(t\in {\mathcal {P}}_{0}|{\mathcal {H}}_{j}) \end{aligned}$$(27)

in which there is no longer an integrity risk associated with the CI-event, nor with the WI-event. In Table 2, only the red cells with ‘\({\textcircled {0}}\)’ then remain to contribute to the IR, and the columns \(\hat{b}_{j}\in {\mathcal {B}}_{\mathrm{AL}}\) (\(j=1,\ldots ,k\)) effectively vanish. This approach was exercised, for instance, in Lepadatu and Tiberius (2014), considering a single alternative hypothesis. One can easily observe that the IR in (27) is smaller than the IR in (19). With the event of ‘no alert’ corresponding to (27), an alert is given for *any* estimated value of the movement \(b_{j}\) (\(j=1,\ldots ,k\)), leading also to a larger number of false alerts than when the event of ‘no alert’ is defined so as to correspond with (19).

### Only subset of alternatives implying threats

So far, it was assumed that all alternative hypotheses \({\mathcal {H}}_{i}\) (\(i=1,\ldots ,k\)) can pose dangerous threats. For the case when only a subset of the alternatives, say \({\mathcal {H}}_{i}\) for \(i=1,\ldots ,q\) with \(q\le k\), is considered dangerous, the event of ‘no alert’ comprises the following events: ‘\({\mathcal {H}}_{0}\) is selected,’ ‘\({\mathcal {H}}_{i}\) is selected and \(\hat{b}_{i}\in {\mathcal {B}}_{\mathrm{AL}}\) (\(i=1,\ldots ,q\)),’ and ‘\({\mathcal {H}}_{i}\) is selected (\(i=q+1,\ldots ,k\)).’ For this scenario, the integrity risk corresponding to \({\mathcal {H}}_{j}\) (\(j=1,\ldots ,q\)) is no longer given by (19), but by
$$\begin{aligned} \mathrm{IR}|{\mathcal {H}}_{j}&=\mathrm{P}(t\in {\mathcal {P}}_{0}|{\mathcal {H}}_{j})+\sum _{i=1}^{q}\mathrm{P}(\hat{b}_{i}\in {\mathcal {B}}_{\mathrm{AL}},~t\in {\mathcal {P}}_{i}|{\mathcal {H}}_{j})\nonumber \\&\quad +\sum _{i=q+1}^{k}\mathrm{P}(t\in {\mathcal {P}}_{i}|{\mathcal {H}}_{j}) \end{aligned}$$(28)

In the special case when only one alternative, say \({\mathcal {H}}_{j}\), is considered dangerous (\(q=1\)) and we are only concerned with the threat \(b_{j}\notin {\mathcal {B}}_{\mathrm{AL}}\) (single-threat scenario), the integrity risk simplifies to
$$\begin{aligned} \mathrm{IR}|{\mathcal {H}}_{j}=1-\mathrm{P}(\hat{b}_{j}\notin {\mathcal {B}}_{\mathrm{AL}},~t\in {\mathcal {P}}_{j}|{\mathcal {H}}_{j}) \end{aligned}$$(29)

## Numerical analysis

In this section, we illustrate the proposed method of evaluating the integrity risk and estimating the threat, by means of two examples. We evaluate the integrity risk given by (19) and also compare it with its approximation in (21). To provide insight into their characteristics, we first consider the simple observational model in (17), and then further continue with a basic, though more realistic deformation model considering multiple alternative hypotheses.

### Single alternative hypothesis

For our analysis in this subsection, we consider the binary hypothesis example given in (17). Since \(r=1\), no identification is possible, only detection, with the consequent partitioning of the misclosure space into \({\mathcal {P}}_{0} =[-\sqrt{k_{\alpha ,1}}\sigma _{t},~\sqrt{k_{\alpha ,1}}\sigma _{t}]\) and its complement \({\mathcal {P}}^{c}_{0}\).

#### Integrity risk

To form the single misclosure *t* under \({\mathcal {H}}_{0}\), we choose matrix \(B=[-1,~1]^\mathrm{T}\), cf. (3), and thus the BLUE of \(b_{1}\) is given by \(\hat{b}_{1}=t\). In this case, we have \(f_{\hat{b}_{1}}(b|{\mathcal {H}}_{1})\;=\;f_{t}(b|{\mathcal {H}}_{1})\). The conditional PDF \(f_{\hat{b}_{1}|t\in {\mathcal {P}}_{1}}(b|t \in {\mathcal {P}}_{1},~{\mathcal {H}}_{1})\), with \((t\in {\mathcal {P}}_{1}|{\mathcal {H}}_{1})=\mathrm{CD}\) (see Table 1), can be expressed as \(f_{\hat{b}_{1}|\mathrm{CD}}(b|\mathrm{CD})\) and is given by (25) which simplifies to
$$\begin{aligned} f_{\hat{b}_{1}|\mathrm{CD}}(b|\mathrm{CD})=f_{t}(b|{\mathcal {H}}_{1})\times \dfrac{p_{1}(b)}{\mathrm{P}_{\mathrm{CD}}} \end{aligned}$$(30)

As there is only one alternative hypothesis \({\mathcal {H}}_{1}\) (\(k=1\)), the events in Table 1 reduce to *four*: CA, FA, MD and CD. Note that the subscript of CD in (30), as in Table 1, is dropped as \({\mathcal {H}}_{1}\) is the only alternative. With \(t\overset{{\mathcal {H}}_{1}}{\sim }{\mathcal {N}} (b_{1},\sigma ^{2}_{t}=2\sigma ^{2})\), Fig. 3 shows the PDFs \(f_{\hat{b}_{1}|\mathrm{CD}}(b|\mathrm{CD})\) (blue) and \(f_{\hat{b}_{1}}(b|{\mathcal {H}}_{1})\) (red) for \(b_{1}=3~\hbox {cm}\), \(\sigma =1/\sqrt{2}~\hbox {cm}\) and \(\alpha =0.01\).

In Fig. 3, it can be observed that the conditional PDF \(f_{\hat{b}_{1}|\mathrm{CD}}(b|\mathrm{CD})\) has no mass over the interval \(\left[ -\sqrt{k_{\alpha ,1}},~\sqrt{k_{\alpha ,1}}\right] \), due to the presence of \(p_{1}(b)\) in (30). Therefore for \(\hbox {AL}\le \sqrt{k_{\alpha ,1}}\), we have \(\mathrm{P}(\hat{b}_{1} \in {\mathcal {B}}_{\mathrm{AL}}|\mathrm{CD})=0\), thus \(\mathrm{IR}^{o}|{\mathcal {H}}_{1}>\mathrm{IR}|{\mathcal {H}}_{1}\), cf. (23). In case \(\hbox {AL}>\sqrt{k_{\alpha ,1}}\), we have

as \(p_{1}(b)=1\) for \(b \in \langle -\infty , -\mathrm{AL}]\) and \(b\in [\mathrm{AL}, \infty \rangle \), and

Denoting the term within brackets by \(\gamma \), we have (32) > (31), as \(\gamma <\gamma /\mathrm{P}_{\mathrm{CD}}\). Therefore, \(\mathrm{IR}^{o}|{\mathcal {H}}_{1}>\mathrm{IR}|{\mathcal {H}}_{1}\) always holds true, implying that \(\mathrm{IR}^{o}|{\mathcal {H}}_{1}\) provides in this case a conservative (i.e., safe) description of the integrity risk.
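The conservatism of the approximation for this binary example can also be verified numerically. The sketch below assumes \(\hat{b}_{1}=t\sim {\mathcal {N}}(b_{1},\sigma _{t}^{2})\), \({\mathcal {B}}_{\mathrm{AL}}=[-\mathrm{AL},\mathrm{AL}]\), and that the approximation replaces \(\mathrm{P}(\hat{b}_{1}\in {\mathcal {B}}_{\mathrm{AL}},~t\in {\mathcal {P}}_{1}|{\mathcal {H}}_{1})\) by the product of the unconditional probabilities, as described in the text:

```python
import math

Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF

def risks(b1, sigma_t, AL, k_alpha):
    """Strict and approximate integrity risks for the binary example with
    b_hat_1 = t ~ N(b1, sigma_t^2): the strict risk keeps the joint event
    {b_hat_1 in B_AL, t in P_1}, the approximation multiplies the
    unconditional probabilities (a sketch, not the paper's general formula)."""
    c = math.sqrt(k_alpha) * sigma_t                     # edge of P_0
    P = lambda lo, hi: Phi((hi - b1) / sigma_t) - Phi((lo - b1) / sigma_t)
    P_MD = P(-c, c)                                      # P(t in P_0 | H_1)
    P_CD = 1.0 - P_MD
    in_AL = P(-AL, AL)                                   # P(t in B_AL | H_1)
    joint = max(in_AL - P_MD, 0.0)                       # P(t in P_0^c ∩ B_AL | H_1)
    IR = P_MD + joint                                    # strict risk
    IRo = P_MD + in_AL * P_CD                            # approximation
    return IR, IRo

# sweep threat values beyond the alert limit; sigma_t = 1 cm, alpha = 0.01
sigma_t, k_alpha, AL = 1.0, 6.6349, 3.0                  # 6.6349: chi2(1) 0.99-quantile
for b1 in (3.5, 5.0, 8.0):
    IR, IRo = risks(b1, sigma_t, AL, k_alpha)
    assert IRo >= IR                                     # approximation is conservative here
```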

Shown in Fig. 4 [top] is the colormap of the difference \(\mathrm{IR}|{\mathcal {H}}_{1}-\mathrm{IR}^{o}|{\mathcal {H}}_{1}\) as a function of \(b_{1}\) horizontally, and AL vertically. The top half of this graph is left empty as integrity risk concerns those situations where the threat goes beyond the Alert Limit (cf. (18)). It is indeed observed that \(\mathrm{IR}^{o}|{\mathcal {H}}_{1}\) is always larger than \(\mathrm{IR}|{\mathcal {H}}_{1}\). Due to this conservatism, and also the lower computational burden of the approximate integrity risk compared with the strict one, one may be inclined to compute and use \(\mathrm{IR}^{o}|{\mathcal {H}}_{1}\) instead of \(\mathrm{IR}|{\mathcal {H}}_{1}\). However, one should also bear in mind the additional costs incurred by too conservative values of the integrity risk (see the blue area in Fig. 4 [top]).

The bottom panel in Fig. 4 shows a cross section of the colormap on top for \(\hbox {AL} = 3~\mathrm{cm}\) (in red), together with the corresponding graph of \(\mathrm{IR}|{\mathcal {H}}_{1}\) (in blue). The integrity risk \(\mathrm{IR}|{\mathcal {H}}_{1}\) (cf. (19)), for a given AL, shows a decreasing behavior as a function of \(b_{1}\), which can be understood by looking at the contributing factors
$$\begin{aligned} \mathrm{IR}|{\mathcal {H}}_{1}=\mathrm{P}_{\mathrm{MD}}+\mathrm{P}(t\in {\mathcal {P}}_{0}^{c}\cap {\mathcal {B}}_{\mathrm{AL}}|{\mathcal {H}}_{1}) \end{aligned}$$(33)

The first term on the right-hand side, i.e., \(\mathrm{P}_{\mathrm{MD}}\), is a decreasing function of the threat value \(b_{1}\). The second term on the right-hand side equals zero if \(\mathrm{AL}\le \sqrt{k_{\alpha ,1}}\). Otherwise, since \(\sqrt{k_{\alpha ,1}}<\mathrm{AL}<b_{1}\), the probability mass of \(f_{t}(\tau |{\mathcal {H}}_{1})\) over \({\mathcal {P}}_{0}^{c}\cap {\mathcal {B}}_{\mathrm{AL}}\) decreases as \(b_{1}\) increases. Therefore, \(\mathrm{IR}|{\mathcal {H}}_{1}\), for a given AL, is a decreasing function of \(b_{1}\). In the extreme case when \(b_{1}\rightarrow \infty \), we have \(\mathrm{IR}|{\mathcal {H}}_{1} \rightarrow 0\). Likewise, for the approximate integrity risk (cf. (21)), which is computed as
$$\begin{aligned} {\mathrm{IR}}^{o}|{\mathcal {H}}_{1}=\mathrm{P}_{\mathrm{MD}}+\mathrm{P}(t\in {\mathcal {B}}_{\mathrm{AL}}|{\mathcal {H}}_{1})\,\mathrm{P}_{\mathrm{CD}} \end{aligned}$$(34)

when \(b_{1}\rightarrow \infty \), we have \({\mathrm{IR}}^{o}|{\mathcal {H}}_{1}\rightarrow 0\) as a result of \(\mathrm{P}(t\in {\mathcal {B}}_{\mathrm{AL}}|{\mathcal {H}}_{1})\rightarrow 0\) and \(\mathrm{P}_{\mathrm{MD}}\rightarrow 0\). Consequently, we would expect that the difference \((\mathrm{IR}|{\mathcal {H}}_{1}-{\mathrm{IR}}^{o}|{\mathcal {H}}_{1})\rightarrow 0\) in case \(b_{1}\rightarrow \infty \). For completeness, we mention again that there is no integrity risk associated with \({\mathcal {H}}_{0}\).
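The decreasing behavior of \(\mathrm{IR}|{\mathcal {H}}_{1}\) and its vanishing limit can be sketched analytically. For \(\mathrm{AL}\ge \sqrt{k_{\alpha ,1}}\sigma _{t}\), the two contributions \(\mathrm{P}_{\mathrm{MD}}\) and \(\mathrm{P}(t\in {\mathcal {P}}_{0}^{c}\cap {\mathcal {B}}_{\mathrm{AL}}|{\mathcal {H}}_{1})\) add up to \(\mathrm{P}(|t|\le \mathrm{AL}\,|\,{\mathcal {H}}_{1})\); the settings below (\(\sigma _{t}=1\), \(\mathrm{AL}=3\)) are assumed illustration values:

```python
import math

Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF

def IR_H1(b1, sigma_t=1.0, AL=3.0):
    # For AL >= sqrt(k_alpha1)*sigma_t, the missed-detection term and the
    # term over P_0^c ∩ B_AL add up to P(|t| <= AL | H_1), t ~ N(b1, sigma_t^2).
    return Phi((AL - b1) / sigma_t) - Phi((-AL - b1) / sigma_t)

vals = [IR_H1(b1) for b1 in (3.0, 4.0, 6.0, 10.0, 20.0)]
assert all(a > b for a, b in zip(vals, vals[1:]))  # strictly decreasing in b1
assert vals[-1] < 1e-12                            # IR | H_1 -> 0 as b1 -> infinity
```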

#### Threat estimation precision

Assuming that model identification is successful, meaning in this case that we have correctly detected the real effect \({\mathcal {H}}_{1}\), we now provide a precision analysis of the estimator of the corresponding deformation parameter \(b_{1}\). For such an analysis, we are interested in the separation between the estimator and its (unknown) true value, and connect this separation to a probability. For instance, in the context of the example of Fig. 3, we define an interval around the true value (which we do know in simulation) as \([b_1 - r_\beta , b_1 + r_\beta ]\), and we are interested in evaluating the probability that this interval contains the estimator for \(b_1\) (10).

We first consider the normal PDF of \(\hat{b}_1\) (in red, in Fig. 3), ignoring the conditioning of estimation on testing. We demand \(95\%\) probability from which \(r_{\beta =0.025}\) is determined. Then, no matter the actual value for \(b_1\), this interval will always represent \(95\%\) probability of containing the estimator \(\hat{b}_{1}\), the red line in Fig. 5. In the next step, we use the correct PDF (in blue, in Fig. 3), acknowledging the conditioning of estimation on testing, to evaluate the probability that the estimator \(\hat{b}_1|\mathrm{CD}\) is inside \([b_1 - r_{0.025}, b_1 + r_{0.025}]\). Figure 5 shows this probability \(\mathrm{P}(|\hat{b}_{1}-b_{1}|<r_{0.025}|\mathrm{CD})\) as a function of the true value \(b_1\in [0,~10]\) (in blue), together with the constant \(95\%\) probability corresponding to the estimator \(\hat{b}_{1}\) (in red). As can be seen, for \(b_{1}<2.5~\hbox {cm}\), the probability \(\mathrm{P}(|\hat{b}_{1}-b_{1}|<r_{0.025}|\mathrm{CD})\) is *smaller* than \(\mathrm{P}(|\hat{b}_{1}-b_{1}|<r_{0.025}| {\mathcal {H}}_{1})\), implying that ignoring the conditioning on the testing decision results in a *too optimistic* description of the estimator’s quality. When the unconditional interval \(\mathrm{P}(|\hat{b}_{1}-b_{1}|<r_{0.025}|{\mathcal {H}}_{1})\) is used to present the estimator’s quality after testing, it should in fact be made *larger* in order to contain \(95\%\) probability (i.e., a larger value is to be taken for \(r_{0.025}\)).
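This coverage comparison can be reproduced with a short sketch. It assumes \(\hat{b}_{1}=t\sim {\mathcal {N}}(b_{1},\sigma _{t}^{2})\) with \(\sigma _{t}=1~\hbox {cm}\) (as in Fig. 3), \(r_{0.025}=1.96\,\sigma _{t}\) for \(95\%\) two-sided probability, and CD the event \(|t|>\sqrt{k_{\alpha ,1}}\sigma _{t}\) with \(\alpha =0.01\):

```python
import math

Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF

def coverage_given_CD(b1, r=1.96, c=2.5758, sigma_t=1.0):
    """P(|b_hat_1 - b1| < r | CD) with b_hat_1 = t ~ N(b1, sigma_t^2) and
    CD the event |t| > c: the probability of the interval (b1-r, b1+r)
    intersected with the rejection region, renormalized by P_CD."""
    P = lambda lo, hi: max(Phi((hi - b1) / sigma_t) - Phi((lo - b1) / sigma_t), 0.0)
    P_CD = 1.0 - P(-c, c)
    # split the interval by the acceptance region [-c, c]
    inside = P(b1 - r, min(b1 + r, -c)) + P(max(b1 - r, c), b1 + r)
    return inside / P_CD

# small threats: conditional coverage falls well below the nominal 95%
assert coverage_given_CD(1.0) < 0.6
# large threats: the conditioning becomes negligible and 95% is recovered
assert abs(coverage_given_CD(8.0) - 0.95) < 1e-3
```

The first assertion mirrors the behavior described above for \(b_{1}<2.5~\hbox {cm}\): the unconditional interval overstates the conditional coverage.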

### Multiple alternative hypotheses

Here, we consider a dam deformation monitoring case inspired by an example in (Heunecke et al. 2013, p. 227). Let the dam shown in Fig. 6 be subject to the load caused by water in the lake. For simplicity, we assume that the dam is vertically stable. To monitor the horizontal displacement of this dam, use is made of a 2D terrestrial survey network of six points: *two* (points 5, 6) are established on the dam as *object* points, and *four* (points 1, 2, 3, 4) are located in a stable area close to this dam as *reference* points. To determine horizontal deformations of the dam, one can then compare the object points’ coordinates obtained at different times. We assume that at two times (or epochs) \(l=1,2\), each point is occupied by a total station taking distance and direction measurements to the rest of the points. With six network points (two object and four reference points), we will then have 60 measurements per epoch: 30 distance measurements and 30 direction measurements. The distance and direction measurements are assumed to be normally distributed with standard deviations of 3 mm and 5 seconds of arc, respectively. All measurements are assumed to be mutually uncorrelated. To make the scale, orientation and location of the 2D survey network estimable, the coordinates of the reference points 1 and 2 (black triangles in Fig. 6) are assumed given. The 60 distance and direction observations at epoch *l* are then used to estimate the Easting and Northing of points \(i=3,\ldots ,6\), together with the unknown instrument scale factor (one for the whole network) and six unknown orientations (one per instrument setup).

As the input for the following deformation analysis, we take the epoch-wise estimated coordinates of points \(i=3,\ldots ,6\) and their corresponding variance matrices. With \(x_{i,l}\in {\mathbb {R}}^{2}\) (for \(i=3,\ldots ,6\) and \(l=1,2\)) containing the unknown Easting and Northing of point *i* at epoch *l*, we define \(x_{l}=[x^\mathrm{T}_{3,l}, ~x^\mathrm{T}_{4,l},~x^\mathrm{T}_{5,l},~x^\mathrm{T}_{6,l}]^\mathrm{T}\in {\mathbb {R}}^{8}\) for \(l=1,2\). Under the null hypothesis \({\mathcal {H}}_{0}\) where no deformation occurs, we assume

The redundancy under \({\mathcal {H}}_{0}\) is \(r=8\). For simplicity of our analysis, we make the following assumptions about the alternative hypotheses that may occur. In case of deformation, we assume that either only one or both of the dam points are unstable, with their deformation being in the direction perpendicular to the dam in this example (the dam is supposed to be subject to load of the water in the lake, and hence points 5 and/or 6 may be pushed back, in the southwest direction). Thus we have, in case only one point is unstable,

with \(u_{i}\in {\mathbb {R}}^{4}\) the canonical unit vector having the 1 as its \((i+2)^{th}\) entry, \(d\in {\mathbb {S}}^{2}\) the known unit vector in the direction perpendicular to the dam, \(b_{i}\in {\mathbb {R}}\) the unknown scalar deformation size parameter, and \(\otimes \) the Kronecker product (Henderson et al. 1983). In case both of the object points 5 and 6 are unstable, we assume that they deform with the same amount as

in which \(u_{3}=u_{1}+u_{2}\) and \(b_{3}\in {\mathbb {R}}\) is the unknown deformation parameter. Note that although in the current example we have considered one-dimensional alternative hypotheses, our proposed risk evaluation method can be applied to more general situations where the alternative hypotheses are of multiple dimensions and differ from each other.

Assuming \(\mathrm{P}({\mathcal {H}}_{i})=0.01\) (\(i=1,2,3\)) and \(\alpha =10^{-3}\), Fig. 7 [top] shows the overall integrity risk \(\mathrm{IR}\), and [bottom] its difference with the approximate one \(\mathrm{IR}-\mathrm{IR}^{o}\), as a function of AL, based on (20) and (22). The results for each value of AL are presented for the threat values \(b_{i}=\hbox {AL}+1\hbox {mm}\) (in blue), and \(\hbox {AL}+5\hbox {mm}\) (in red). We note that since \(r=8>1\), our testing procedure involves both detection and identification steps (7) and (8), see also Table 1. It is observed (on top) that the overall integrity risk decreases as the AL, and thus in this case the deformation magnitudes \(b_{i}\) (\(i=1,2,3\)), increase. This indeed makes sense as larger alert limits imply that the structure under monitoring can stand larger deformations, thus encountering a lower risk of failure, and larger changes \(b_{i}\) are more easily detected (and identified). We again notice the smaller values of the strict integrity risk compared to the approximate one.

When the AL gets larger than a specific value, the strict integrity risk IR and the difference \(\mathrm{IR}-\mathrm{IR}^{o}\) both become stable, which can be explained as follows. When the AL increases, then \(b_{i}\), chosen here as \(b_{i}=\mathrm{AL}+1\,\hbox {mm}\) and \(\mathrm{AL}+5\,\hbox {mm}\), increases as well. This in turn results in a larger CI-probability and lower MD- and WI-probabilities (see Table 1). Therefore, we have \(\mathrm{P}(t\in {\mathcal {P}}_{i}|{\mathcal {H}}_{i})\rightarrow 1\) and \(\mathrm{P}(t\in {\mathcal {P}}_{j\ne i}|{\mathcal {H}}_{i})\rightarrow 0\), thus \(\mathrm{P}(\hat{b}_{i}\in {\mathcal {B}},~t\in {\mathcal {P}}_{i} |{\mathcal {H}}_{i})\rightarrow \mathrm{P}(\hat{b}_{i}\in {\mathcal {B}}|{\mathcal {H}}_{i})\) and \(\mathrm{P}(\hat{b}_{j\ne i}\in {\mathcal {B}},~t\in {\mathcal {P}}_{j\ne i}|{\mathcal {H}}_{i})\rightarrow 0\). As a result, both IR and \(\hbox {IR}^{o}\) tend toward \(\sum _{i=1}^{k}\mathrm{P}(\hat{b}_{i}\in {\mathcal {B}} |{\mathcal {H}}_{i})\mathrm{P}({\mathcal {H}}_{i})\), and thus \(\mathrm{IR}-\mathrm{IR}^{o}\rightarrow 0\). Given the definition of \({\mathcal {B}}_{\mathrm{AL}}\), one can write
$$\begin{aligned} \mathrm{P}(\hat{b}_{i}\in {\mathcal {B}}_{\mathrm{AL}}|{\mathcal {H}}_{i})=\varPhi \left( \frac{\mathrm{AL}-b_{i}}{\sigma _{\hat{b}_{i}}}\right) -\varPhi \left( \frac{-\mathrm{AL}-b_{i}}{\sigma _{\hat{b}_{i}}}\right) \end{aligned}$$(38)

where \(\varPhi (\cdot )\) denotes the cumulative distribution function of the standard normal distribution. Since \(b_{i}=\hbox {AL}+1\,\hbox {mm}\) or \(\hbox {AL}+5\,\hbox {mm}\), the difference \(\hbox {AL}-b_{i}\) remains constant as AL increases, which explains why the IR becomes stable when \(\mathrm{AL}\rightarrow \infty \).
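The stabilization argument is immediate once written out: with \(b_{i}=\mathrm{AL}+\mathrm{const}\), the argument \((\mathrm{AL}-b_{i})/\sigma _{\hat{b}_{i}}\) no longer depends on AL. A minimal sketch (the value \(\sigma _{\hat{b}_{i}}=2~\hbox {mm}\) is a hypothetical illustration value):

```python
import math

Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF

sigma = 2.0     # sigma_bhat_i in mm (hypothetical illustration value)
offset = 1.0    # b_i = AL + 1 mm

# the argument (AL - b_i)/sigma = -offset/sigma is independent of AL
vals = [Phi((AL - (AL + offset)) / sigma) for AL in (10.0, 20.0, 50.0, 100.0)]
assert max(vals) - min(vals) < 1e-15    # identical for every alert limit
```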

To gain an understanding of the contribution of the different hypotheses into the construction of the overall integrity risk, Fig. 8 shows the graphs of \(\hbox {IR}|{\mathcal {H}}_{1}\), \(\hbox {IR}|{\mathcal {H}}_{2}\) and \(\hbox {IR}|{\mathcal {H}}_{3}\), as a function of the alert limit AL for the threat value \(b_{i}=\mathrm{AL}+1\hbox {mm}\). It is observed, for all ranges of AL, that \(\mathrm{IR}|{\mathcal {H}}_{2}>\mathrm{IR}|{\mathcal {H}}_{3}>\mathrm{IR}|{\mathcal {H}}_{1}\). For the sake of simplicity, we explain this behavior for large alert limits where the integrity risk IR\(|{\mathcal {H}}_{i}\) can be approximated by \(\varPhi \left( \frac{\mathrm{AL}-b_{i}}{\sigma _{\hat{b}_{i}}}\right) \) (cf. (38)). According to (12), the variance of \(\hat{b}_{i}\) is characterized through \(\Vert c_{t_{i}}\Vert ^{2}_{Q_{tt}}\) which is also the indicator of minimal detectable bias (MDB) under \({\mathcal {H}}_{i}\) (Baarda 1968; Teunissen 2000); the larger the value of \(\Vert c_{t_{i}}\Vert ^{2}_{Q_{tt}}\), the smaller the MDB, and thus the better the detectability under \({\mathcal {H}}_{i}\). For the model at hand, we have

implying that \(\sigma _{\hat{b}_{2}}>\sigma _{\hat{b}_{3}} >\sigma _{\hat{b}_{1}}\), which with \(b_{1}=b_{2}=b_{3}>\hbox {AL}\) (hence \(\hbox {AL}-b_{i}<0\)) gives \(\frac{\mathrm{AL}-b_{2}}{\sigma _{\hat{b}_{2}}}>\frac{\mathrm{AL}-b_{3}}{\sigma _{\hat{b}_{3}}}>\frac{\mathrm{AL}-b_{1}}{\sigma _{\hat{b}_{1}}}\). As \(\varPhi (\cdot )\) is a monotonically increasing function of its argument, we then have \(\mathrm{IR}|{\mathcal {H}}_{2}>\mathrm{IR}|{\mathcal {H}}_{3}>\mathrm{IR}|{\mathcal {H}}_{1}\).
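The ordering argument can be sketched numerically. The standard deviations below are hypothetical illustration values chosen only to satisfy \(\sigma _{\hat{b}_{2}}>\sigma _{\hat{b}_{3}}>\sigma _{\hat{b}_{1}}\), not the values of the network at hand:

```python
import math

Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF

# hypothetical standard deviations with sigma_2 > sigma_3 > sigma_1 (mm)
sig = {1: 1.0, 2: 3.0, 3: 2.0}
AL, b = 10.0, 12.0                      # common threat b > AL, so AL - b < 0

# large-AL approximation of IR | H_i by Phi((AL - b_i)/sigma_bhat_i)
ir_approx = {i: Phi((AL - b) / s) for i, s in sig.items()}
# a negative argument divided by a larger sigma is closer to zero, and
# Phi is monotonically increasing, hence the ordering of the risks
assert ir_approx[2] > ir_approx[3] > ir_approx[1]
```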

## Summary and conclusion

It is crucial for deformation monitoring systems to detect, in a timely manner, a dangerous displacement beyond the tolerances of the structure under consideration. This contribution presents a method for statistically evaluating the risk in a deformation monitoring system. In order to quantify the performance of the monitoring under a particular deformation, the corresponding *integrity risk* needs to be evaluated. We defined the integrity risk as the probability of the monitoring system failing to issue an alert when in fact one should have been given.

The integrity components of deformation monitoring were introduced and discussed. As deformation monitoring involves statistical testing of multiple hypotheses, the integrity risk was mathematically developed for the multiple hypothesis testing problem. In doing so, the alerts were assumed to be dependent on both the identified hypothesis and the threat that the estimated size of deformations entails. It was thereby highlighted that for a correct evaluation of the risk, estimation and testing should be considered together, as they are intimately linked in practice. This in turn leads to the use of conditional probabilities when computing the integrity risk. One may, however, find it simpler computation-wise to neglect the interaction between estimation and testing. For this case, we provided an approximation of integrity risk. It was emphasized that this approximation may provide a too optimistic or pessimistic description of the integrity risk depending on the testing procedure and tolerances of the structure at hand. The integrity risk was also formulated for some other simplified scenarios and compared with the strict formulation.

In addition to timely detecting hazardous deformations, monitoring systems are also required to provide threat estimates together with their corresponding probabilistic properties. It was shown that the outcome of testing determines how the threat gets estimated. The threat estimator \(\bar{b}_{j}\) and its associated distribution were then derived, capturing the contributions from both testing and estimation. It was emphasized that although the threat estimator under the identified hypothesis \({\mathcal {H}}_{j}\), i.e., \(\hat{b}_{j}\), is normally distributed, the estimator \(\bar{b}_{j}\) is *not* due to its nonlinear dependency on the misclosure.

For a simple observational model with just a single alternative, the integrity risk was evaluated both using the strict and approximate approach. The difference between these two approaches was analyzed, and the role of different contributing factors was highlighted. We pointed out that when choosing one approach over another, one should, besides the computational burden, also take the additional costs incurred by conservatism into account. Assuming that a deformation has taken place, we then analyzed the precision of the threat estimator with and without accounting for conditioning on testing decision. It was explained that negligence of this conditioning process may provide a too optimistic description of the estimator’s quality. Our evaluations were extended to a basic deformation measurement system example with multiple alternative hypotheses, where monitoring measurements were provided by a 2D terrestrial survey network.

Finally, we remark that although our analyses were presented for hypotheses of the same dimension (1D), our risk evaluation method can be applied to more general situations where the alternative hypotheses are of different dimensions. This is because the method is driven by the concept of misclosure space partitioning, irrespective of whether the alternative hypotheses have the same dimension or different dimensions. Hence, as soon as the hypothesis selection has been made unambiguous, the corresponding partitioning of the misclosure space enables a direct application of our risk evaluation method. Moreover, the method can also be used to compare the performance of different partitionings \({\mathcal {P}}_i\) for the same set of hypotheses, and thus to study and compare the performances of different hypothesis-selection mechanisms.

## References

Akaike H (1974) A new look at the statistical model identification. IEEE Trans Autom Control 19(6):716–723

Baarda W (1967) Statistical concepts in geodesy. Netherlands Geodetic Commission, Publ. on geodesy, New series 2(4)

Baarda W (1968) A testing procedure for use in geodetic networks. Netherlands Geodetic Commission, Publ on geodesy, New Series 2(5)

Burnham KP, Anderson DR (2003) Model selection and multimodel inference: a practical information-theoretic approach. Springer, Berlin

Caspary W, Borutta H (1987) Robust estimation in deformation models. Surv Rev 29(223):29–45

Chen Y, Chrzanowski A, Secord J (1990) A strategy for the analysis of the stability of reference points in deformation surveys. CISM J 44(2):39–46

Durdag UM, Hekimoglu S, Erdogan B (2018) Reliability of models in kinematic deformation analysis. J Surv Eng 144(3):04018004

Eichhorn A (2007) Tasks and newest trends in geodetic deformation analysis: a tutorial. In: Proceedings of the 15th European signal processing conference (EUSIPCO 2007), EURASIP, pp 1156–1160

Henderson HV, Pukelsheim F, Searle SR (1983) On the history of the Kronecker product. Linear Multilinear Algebra 14:113–120

Heunecke O, Kuhlmann H, Welsch W, Eichhorn A, Neuner H (2013) Handbuch Ingenieurgeodäsie: Auswertung geodätischer Überwachungsmessungen (in German). Wichmann, Berlin

Konakoğlu B, Gökalp E (2018) Deformation measurements and analysis with robust methods: a case study, deriner dam. Turk J Sci Technol 13:99–103

Lehmann R, Lösler M (2016) Multiple outlier detection: hypothesis tests versus model selection by information criteria. J Surv Eng 142(4):04016017

Lepadatu L, Tiberius CCJM (2014) GPS for structural health monitoring—case study on the Basarab overpass cable-stayed bridge. J Appl Geod 8(1):65–86

Niemeier W (1985) Deformationsanalyse (in German). In: Pelzer H (ed) Geodätische Netze in Landes- und Ingenieursvermessung II: Vorträge des Kontaktstudiums Februar 1985 in Hannover. K. Wittwer Verlag, Stuttgart, pp 559–623 chap 15

Papoulis A (1984) Probability, random variables, and stochastic processes. McGraw-Hill, New York

Pelzer H (1971) Zur Analyse geodätischer Deformationsmessungen, Ph.D. Thesis (in German). Deutsche Geodätische Kommission, Reihe C: Dissertationen - Heft Nr. 164, München, Germany

Scaioni M, Marsella M, Crosetto M, Tornatore V, Wang J (2018) Geodetic and remote-sensing sensors for dam deformation monitoring. Sensors 18(11):3682

Setan H, Singh R (2001) Deformation analysis of a geodetic monitoring network. Geomatica 55:333–346

Sušić Z, Batilović M, Ninkov Y, Bulatović V, Aleksić I, Nikolić G (2017) Geometric deformation analysis in free geodetic networks: case study for Fruška Gora in Serbia. Acta Geodyn Geomater 14:341–355

Teunissen PJG (2000) Testing theory: an introduction. Series on mathematical geodesy and positioning. Delft University Press, Delft

Teunissen PJG (2018) Distributional theory for the DIA method. J Geod 92(1):59–80. https://doi.org/10.1007/s00190-017-1045-7

van Mierlo J (1978) A testing procedure for analysing geodetic deformation measurements. In: Proceedings of the II. International symposium of deformation measurements by geodetic methods. Bonn, Germany, September 25–28, 1978, Konrad Wittwer, Stuttgart, pp 321–353

Verhoef HME, De Heus HM (1995) On the estimation of polynomial breakpoints in the subsidence of the Groningen gasfield. Surv Rev 33(255):17–30

Yavaşoğlu HH, Kalkan Y, Tiryakioğlu I, Yigit CO, Özbey V, Alkan MN, Bilgi S, Alkan RM (2018) Monitoring the deformation and strain analysis on the Ataturk Dam, Turkey. Geomat Nat Hazards Risk 9(1):94–107

Zaminpardaz S, Teunissen PJG (2019) DIA-datasnooping and identifiability. J Geod 93(1):85–101. https://doi.org/10.1007/s00190-018-1141-3

## Author information

### Contributions

S.Z., P.J.G.T and C.C.J.M.T. contributed to the design, implementation of the research, analysis of the results and the writing of the manuscript.

### Corresponding author

Correspondence to P. J. G. Teunissen.

## Appendix

### Proof of Theorem 1

With \(\bar{b}_{j}\) in (10) for \(j=1,\ldots ,k\), and \(\widetilde{{\mathcal {P}}}_{i}\) (\(i=0,1,\ldots ,k\)) being a partitioning of \({\mathbb {R}}^{r}\) (cf. (14)), for any interval \({\mathcal {B}}\subset {\mathbb {R}}\), we have
$$\begin{aligned} \mathrm{P}(\bar{b}_{j}\in {\mathcal {B}}|{\mathcal {H}}_{i})&=\sum _{l=0}^{k}\mathrm{P}(\bar{b}_{j}\in {\mathcal {B}},~t\in {\mathcal {P}}_{l}|{\mathcal {H}}_{i})\nonumber \\&=\mathrm{P}(\hat{b}_{j}\in {\mathcal {B}},~t\in {\mathcal {P}}_{j}|{\mathcal {H}}_{i})+\mathrm{P}(t\notin {\mathcal {P}}_{j}|{\mathcal {H}}_{i})\,{\mathbb {1}}_{{\mathcal {B}}}(0) \end{aligned}$$(40)

The first equality follows from an application of the total probability rule, while the second from \((\bar{b}_{j}|t\in {\mathcal {P}}_{j}) =(\hat{b}_{j}|t\in {\mathcal {P}}_{j})\) and the fact that \((\bar{b}_{j}|t\notin {\mathcal {P}}_{j})=0\). In the second equality, the second term on the right-hand side vanishes if \(0\notin {\mathcal {B}}\).

- (i)
If \(r=1\) (\(t\in {\mathbb {R}}\)), then \(\hat{b}_{j}=t/c_{t_{j}}\) (cf. (11)). With this in mind, (40) can be expressed in terms of the integral of the corresponding PDFs as

$$\begin{aligned}&{\displaystyle \int _{{\mathcal {B}}} f_{\bar{b}_{j}} (b|{\mathcal {H}}_{i})\;\hbox {d}b}\nonumber \\&\quad ={\displaystyle \int _{{\mathcal {B}}}\left\{ f_{\hat{b}_{j}} (b|{\mathcal {H}}_{i}){p}_{j}(c_{t_{j}}b)\;+\;\mathrm{P}( t\notin {\mathcal {P}}_{j}|{\mathcal {H}}_{i})\delta (b)\right\} \;\hbox {d}b} \end{aligned}$$(41)

Since \({\mathcal {B}}\) is arbitrary, (15) follows from (41).

- (ii) Now, we consider the case \(r>1\) (\(t\in {\mathbb {R}}^{r>1}\)). Using the one-to-one link between \(\widetilde{{\mathcal {P}}}_{j}\) and \({\mathcal {P}}_{j}\), see (13) and (14), the first probability on the right-hand side of the second equality in (40) can be rewritten as

$$\begin{aligned} \mathrm{P}(\hat{b}_{j}\in {\mathcal {B}},~t\in {{\mathcal {P}}}_{j}|{\mathcal {H}}_{i}) =\mathrm{P}\left( \left[ \begin{array}{l} \hat{b}_{j} \\ \tilde{t}_{j} \end{array}\right] \in \widetilde{{\mathcal {P}}}_{j} \cap \left[ \begin{array}{c} {\mathcal {B}} \\ {\mathbb {R}}^{r-1} \end{array}\right] \bigg |{\mathcal {H}}_{i}\right) \end{aligned}$$(42)

With the above expression, (40) can be presented in terms of the integral of the corresponding PDFs as

$$\begin{aligned}&{\displaystyle \int _{{\mathcal {B}}} f_{\bar{b}_{j}} (b|{\mathcal {H}}_{i})\;\hbox {d}b}\nonumber \\&\quad =\displaystyle \int _{{\mathcal {B}}}\left\{ f_{\hat{b}_{j}} (b|{\mathcal {H}}_{i}){\displaystyle \int _{{\mathbb {R}}^{r-1}} f_{\tilde{t}_{j}}(\tau |{\mathcal {H}}_{i})\,\tilde{p}_{j} (b,\tau )\,\hbox {d}\tau }\right. \nonumber \\&\qquad \left. +\mathrm{P}(t\notin {\mathcal {P}}_{j}|{\mathcal {H}}_{i}) \delta (b)\right\} \;\hbox {d}b \end{aligned}$$(43)

Since \({\mathcal {B}}\) is arbitrary, (16) follows from (43). \(\square \)
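The mixture character established in (41), a normal density weighted by the acceptance indicator plus a point mass at zero, can be checked by simulation. The sketch below is purely illustrative and uses hypothetical values, not the paper's monitoring setup: for the one-dimensional case \(r=1\) we take \(t\sim {\mathcal {N}}(\mu ,1)\) under \({\mathcal {H}}_{i}\) with \(\mu =4\), region \({\mathcal {P}}_{j}=(k,\infty )\) with \(k=3\), and \(\hat{b}_{j}=t/c_{t_{j}}\) with \(c_{t_{j}}=2\):

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical one-dimensional setting (r = 1):
# t ~ N(mu, 1) under H_i, region P_j = (k, inf), and b_hat_j = t / c_tj.
mu, c_tj, k = 4.0, 2.0, 3.0
n = 200_000

t = rng.normal(mu, 1.0, size=n)
in_Pj = t > k                               # indicator p_j(t): did t land in P_j?
b_bar = np.where(in_Pj, t / c_tj, 0.0)      # b_bar_j = b_hat_j if t in P_j, else 0

# The point mass of b_bar_j at zero should equal P(t not in P_j | H_i),
# the weight of the delta term in (41); here P(t <= k) = Phi(k - mu).
p_not_Pj = 0.5 * (1.0 + math.erf((k - mu) / math.sqrt(2.0)))
print(abs((b_bar == 0.0).mean() - p_not_Pj))  # small Monte Carlo error
```

The empirical mass at zero matches \(\Phi (k-\mu )\approx 0.159\) up to Monte Carlo noise, so \(\bar{b}_{j}\) is not normally distributed even though \(\hat{b}_{j}\) is.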

### Proof of Lemma 1

Using the conditional probability rule, we can write for any \({\mathcal {B}}\subset {\mathbb {R}}\)

$$\begin{aligned} \mathrm{P}(\hat{b}_{j}\in {\mathcal {B}},~t\in {\mathcal {P}}_{j}|{\mathcal {H}}_{i}) =\mathrm{P}(\hat{b}_{j}\in {\mathcal {B}}\,|\,t\in {\mathcal {P}}_{j},~{\mathcal {H}}_{i})\, \mathrm{P}(t\in {\mathcal {P}}_{j}|{\mathcal {H}}_{i}) \end{aligned}$$

The above conditional probability can be expressed in terms of the integral of the corresponding PDF, i.e., \(\mathrm{P}(\hat{b}_{j} \in {\mathcal {B}} |t\in {\mathcal {P}}_{j},~{\mathcal {H}}_{i}) ={\displaystyle \int _{{\mathcal {B}}} f_{\hat{b}_{j}|t \in {\mathcal {P}}_{j}}(b|t\in {\mathcal {P}}_{j},~{\mathcal {H}}_{i})\;\hbox {d}b}\). The probability \(\mathrm{P}(\hat{b}_{j}\in {\mathcal {B}},~t \in {\mathcal {P}}_{j}|{\mathcal {H}}_{i})\) is also given by the first term on the right-hand side of (41) and (43) for, respectively, the cases \(r=1\) and \(r>1\). Substituting these terms into the above equation, (25) and (26) follow since \({\mathcal {B}}\) is arbitrary. \(\square \)
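As a numerical check of the conditional density appearing in this proof for \(r=1\), one may compare the sample mean of \(\hat{b}_{j}\) conditioned on \(t\in {\mathcal {P}}_{j}\) with the truncated-normal mean it implies. All values below are hypothetical stand-ins (\(t\sim {\mathcal {N}}(\mu ,1)\), \({\mathcal {P}}_{j}=(k,\infty )\), \(\hat{b}_{j}=t/c_{t_{j}}\)):

```python
import math
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical setting: t ~ N(mu, 1), P_j = (k, inf), b_hat_j = t / c_tj.
mu, c_tj, k = 4.0, 2.0, 3.0
n = 300_000

t = rng.normal(mu, 1.0, size=n)
b_hat_given = t[t > k] / c_tj            # draws from f_{b_hat_j | t in P_j}

# Truncated-normal mean: E[t | t > k] = mu + phi(a) / (1 - Phi(a)), a = k - mu,
# so the conditional mean of b_hat_j is that value divided by c_tj.
a = k - mu
phi = math.exp(-0.5 * a * a) / math.sqrt(2.0 * math.pi)
Phi = 0.5 * (1.0 + math.erf(a / math.sqrt(2.0)))
expected = (mu + phi / (1.0 - Phi)) / c_tj
print(abs(b_hat_given.mean() - expected))  # small Monte Carlo error
```

The agreement illustrates that, conditioned on acceptance, \(\hat{b}_{j}\) follows a truncated version of its normal density, consistent with the ratio form of (25).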

## Rights and permissions

**Open Access** This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

## About this article

### Cite this article

Zaminpardaz, S., Teunissen, P.J.G. & Tiberius, C.C.J.M. A risk evaluation method for deformation monitoring systems.
*J Geod* **94**, 28 (2020). https://doi.org/10.1007/s00190-020-01356-w


### Keywords

- Deformation
- Monitoring system
- Statistical testing
- Integrity risk
- Threat estimation
- Conditional distribution