1 Introduction

Physical security risk assessment (SRA) has gained importance in recent years; in particular, the vulnerability of critical infrastructures to terrorist threats is regularly assessed. For this purpose, new approaches have emerged that aim at introducing quantitative metrics, e.g., by Flammini et al. (2013) and Landucci et al. (2017). In practice, however, qualitative SRA is still very common. Yet, a lack of evidence from actual attacks with a terrorist background leads to inherent uncertainties regarding threat scenarios as well as the capabilities of security systems (Abrahamsen et al. 2015). As a result, SRA is often backed only by vague data or elicited expert knowledge that may represent a rather subjective perspective.

The occurrence of inherent uncertainties in risk assessment and decision-making is well known in the general fields of risk science and resilience, e.g., in Flage et al. (2014) and Aven and Zio (2021). In this context, the role of these uncertainties is discussed especially with regard to decision-making on risk-reducing measures (Aven and Zio 2011; Yoe 2019). An earlier study by Lichte and Wolf (2018) outlines the consequences of considering uncertainties for qualitative SRA methods that rely on expert knowledge.

In SRA especially, the described inherent uncertainties can significantly influence the results and thus the possible outcome of actual attacks. Merely assigning qualitative or scalar values to characterize security measures, without accounting for the uncertain database or the subjectivity of expert knowledge, can lead to a fatal overestimation of the actual security level. This paper therefore focuses on the impact of considering uncertainties in quantitative SRA, in particular regarding the robustness of the system against the resulting input parameter variance. Two levels of uncertainty can be distinguished: first, the small basis of evidence, which increases uncertainty in the prediction of future attacks; second, the performance of security measures against uncertain attackers, which can at best be estimated. Following Milliken (1987), these can be referred to as effect uncertainty (level 2) and response uncertainty (level 3), respectively. The impact of both levels is analyzed by applying an earlier approach to quantitative vulnerability assessment, introduced by Lichte and Wolf (2017), as part of the SRA process to a notional airport structure.

Initially, we introduce a security measure configuration represented by probability density functions (pdfs) that characterize the performance of the comprised components based on the subjective perspective of experts. Herein, the variance serves as a metric for the uncertainty regarding measure efficiency in deterring potential attacks, resulting from differing or vague expert opinions or scattered data. A first assessment takes into account only the mean values of normal probability density functions (npdfs), so uncertainties are not considered. We then assess the configuration incorporating the given variances of the npdfs resulting from expert knowledge elicitation. A comparison shows the influence on the resulting vulnerability at system level. In a further step, we conduct a Variance Based Sensitivity Analysis (VBSA) as demonstrated by Saltelli et al. (2004) to quantify the influence of the introduced uncertainties at barrier level. Here, we analyze the influence of protection, observation and intervention measures on system vulnerability to reveal their potential impact on the effectiveness of the security system.

Based on this analysis, we propose and formalize a security margin. The security margin concept aims at accounting for uncertainties introduced by vague data or expert knowledge elicitation in SRA. By considering the systemic and barrier-specific impact on security system effectiveness, it supports optimized security system design. The security margin is derived in two steps. First, the influencing security measures are identified by conducting a VBSA of the initial configuration of a security system. Then, the actual security margin is derived, depending only on the introduced uncertainty resulting from measure characterization and a reasonable target effectiveness based on efficiency considerations. Finally, we demonstrate the benefits of the approach for decision-making by optimizing the vulnerability of the initial configuration using the security margin concept.

2 Background

The issue of uncertainties in risk assessment is widely discussed within the general field of risk assessment, especially in the field of safety, e.g., by Fjaeran (2021) and Aven and Zio (2021). In recent years, risk science has extended its scope to the risk management of complex systems facing major hazards, i.e., natural extremes or man-made disasters, e.g., in Aven (2018). The consideration of uncertainty is even more important for these high-impact low-probability events, as their assessment often relies on vague data and information regarding likelihood of occurrence and temporal development.

This lack of knowledge, mostly referred to as epistemic uncertainty, may be considered critical for decision-making, as the development of a hazard scenario is decisive for its outcome. Thus, suited measures rely on little available information (Aven and Zio 2021). Within such scenarios, the rising number of attacks on critical infrastructures led to an increasing focus on security-related questions in business and sociopolitical decision-making, e.g., in Alcaraz and Zeadally (2015), Zsifkovits and Pickl (2016) and Guerra et al. (2008).

In actual security threat scenarios, faultily designed measures and miscalculated forecasts will, at the least, lead to substantially larger damage at the asset under consideration. Threat scenarios missed in the system layout and deficient estimation of influence parameters result in a misrepresentation of real situations (Campbell and Stamp 2004), which might lead to poor decisions in security investments. While qualitative methods are commonly used for assessment in practice, quantitative methods are being developed in research and are gaining ground (Queirós et al. 2017). Such methods allow a better understanding of the interdependencies in security systems. Consequently, modeling the behavior of entire security systems is feasible today, enabling analysis, optimization and simulation of the system (Meritt 1999). Although quantitative methods may depend on the same vague data or expert knowledge as qualitative methods, they allow the resulting uncertainties to be considered explicitly, which may lead to a significantly different outcome of the analysis. Quantitative methods can therefore bring decisive advantages in SRA. An improvement in security performance can be achieved by considering the uncertainties in analysis and design. Additionally, this potentially reveals to what extent large uncertainties, when taken into account, lead to cost-intensive over-optimization.

Despite potential problems in quantification, it is clearly important to consider uncertainties in SRA, since they are likely to influence its results significantly, especially because the reliance on vague data or expert knowledge in SRA induces such uncertainties. It is reasonable to describe the input parameters by degree-of-belief densities based on subjective probabilities, where the probability distributions can be obtained by eliciting expert knowledge (EFSA 2014; Meyer and Booker 2001). In this way, uncertainties can be represented while formally complying with probability theory (Beyerer and Geisler 2016).

Unfortunately, there are only a few quantitative models that consider uncertainties in security-related systems, e.g., the vulnerability assessment introduced by Lichte and Wolf (2017) or the approach introduced by McGill et al. (2007). The influence of these uncertainties on the SRA process has not yet been analyzed. A first approach to analyzing their impact on the output of a quantitative model was introduced by Lichte and Wolf (2018).

A framework proposed by Abrahamsen et al. (2015) considers uncertainties by including them in decision-making on security strategies. Depending on the grade of expected uncertainties and consequences, different strategies for decision-making are proposed. These strategies range from extensive SRA at lower levels of uncertainty and precautionary approaches at medium levels of uncertainty to discursive decision styles. The last strategy should be adopted especially at high levels of uncertainty, e.g., when considering counterterrorism measures, where cause-effect relationships are broadly discussed (van Dongen 2011).

For more complex quantitative models, uncertainty consideration can also be achieved by sensitivity analysis, which is used to assess the influence of the input on the output of a system (Henkel et al. 2012). Within sensitivity analysis, the variability of the model inputs is related to that of the outputs along their cause-and-effect chain. Thus, uncertainties of an output parameter are traced back to the input. Especially when non-linear models are considered, relating the scattering of input and output factors is very challenging (Saltelli et al. 2004).

3 Methods and Exemplary Infrastructure

3.1 Variance Based Sensitivity Analysis

A suitable approach to analyzing the influence of uncertainties is to conduct a Variance Based Sensitivity Analysis (VBSA) on the model under study. VBSA was introduced by Saltelli et al. (2004) and is a numerical method to assess the relative importance of model input factors by measuring the sensitivity across the complete input space. For this purpose, the effect of uncertainty on the output of a model is analyzed with regard to the different sources of uncertain inputs (Henkel et al. 2012). The objective of this method is to find the parameters that have the largest impact on a predefined target function, e.g., the model output.

Within the scope of the presented approach, Monte Carlo Simulation (MCS) with sampling based on Sobol sequences is used to realize the VBSA (Saltelli et al. 2010). Within VBSA, total effect sensitivity indices \(S_{\rm{T}i}\) are used to analyze linear and nonlinear effects of the input parameters \(X_1, X_2, \dots , X_k\) on the model output \(Y = f(X_1, X_2, \dots , X_k)\). For the i-th parameter, \(S_{\rm{T}i}\) is defined as follows:

$$\begin{aligned} S_{\rm{T}i} = 1 - \frac{V_{\mathbf{X}_{\sim i}} \left( E_{X_i} \left( Y \mid {\mathbf{X}}_{\sim i} \right) \right) }{V(Y)} \end{aligned}$$
(1)

\({\mathbf{X}}_{\sim i}\) refers to the sample matrix of all input parameters excluding the i-th parameter, \(E_{X_i}\) refers to the mean value taken over \(X_i\) and \(V_{{\mathbf{X}}_{\sim i}}\) refers to the variance taken over all parameters but \(X_i\).
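The total effect index in Eq. 1 can be estimated numerically from two independent sample matrices. The following is a minimal sketch in Python (standard library only, plain random sampling instead of Sobol sequences, and a simple additive test model rather than the vulnerability model of this paper), using the common estimator \(S_{\rm{T}i} \approx \frac{1}{2N}\sum_{j}\bigl(f(\mathbf{A})_j - f(\mathbf{A}_B^{(i)})_j\bigr)^2 / V(Y)\):

```python
import random

def total_effect_indices(model, k, n=20000, seed=0):
    """Estimate total effect sensitivity indices S_Ti by Monte Carlo.

    Uses two independent sample matrices A and B; the matrix AB_i equals A
    with column i replaced by the corresponding column of B (Saltelli
    scheme). Plain pseudo-random sampling is used here for brevity.
    """
    rng = random.Random(seed)
    A = [[rng.gauss(0, 1) for _ in range(k)] for _ in range(n)]
    B = [[rng.gauss(0, 1) for _ in range(k)] for _ in range(n)]
    yA = [model(x) for x in A]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n
    ST = []
    for i in range(k):
        acc = 0.0
        for j in range(n):
            x = A[j][:]
            x[i] = B[j][i]          # perturb only the i-th input
            acc += (yA[j] - model(x)) ** 2
        ST.append(acc / (2 * n) / var)
    return ST

# Additive test model Y = X1 + 2*X2 with standard normal inputs
ST = total_effect_indices(lambda x: x[0] + 2 * x[1], k=2)
```

For this additive model the analytical values are \(S_{\rm{T}1} = 1/5\) and \(S_{\rm{T}2} = 4/5\), which the estimate approaches with growing sample size.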

In a combined risk assessment, e.g., for security-related investments, the interaction of the design parameters of the security system with regard to the incorporated risk might be very important for decision-making.

3.2 Applied Vulnerability Model

The vulnerability model applied in this paper is based on four basic assumptions, which characterize the most relevant behavior of a security system in an infrastructure (Lichte and Wolf 2017). These assumptions are used in the probabilistic description of the system’s relations.

  1. The weakest path of the security system determines the system's vulnerability, as the chosen path of the attacker is uncertain.

  2. The combination of protection and observation at barriers is necessary, as an attacker is always able to break through a barrier given infinite time without being detected.

  3. The detection of an attack is possible only if the protection is sufficient to prevent a break-through under observation until detection.

  4. After detection, an attack can be stopped only if the residual protection along the remaining attack path lasts long enough to prevent the attacker from reaching the asset until intervention is completed (see Fig. 1 (bottom)).

Considering the four stated principles, the model consists of three main input parameters that characterize the system capabilities provided by the installed security measures on barrier level: protection (P), observation (O) and intervention (I). Each of these parameters is described as a time-based probability density function (pdf). Capabilities are described as relations between these parameters. Figure 1 (top) shows the configuration of barriers along attack paths.

Fig. 1 Principle of security measures based on Garcia (2008). Source: Lichte and Wolf (2017)

Detection of an attacker is triggered if the protection measure at a barrier prevents a break-through until an observation is completed with detection. This is described by the conditional probability D:

$$\begin{aligned} D = P(t_{\rm{O}} < t_{\rm{P}}) \end{aligned}$$
(2)

Herein \(t_{\rm{P}}\) and \(t_{\rm{O}}\) denote the distributed time for protection and observation.

Timely intervention is the second key relation in the vulnerability model. It is based on the time needed for intervention \(t_{\rm{I}}\) and the residual protection \(t_{\rm{RP}}\). The residual protection \(t_{\rm{RP}}\) is the sum of all protection measures along the residual barriers of the system on an attack path.

$$\begin{aligned} t_{\rm{RP}} = \sum _{j=i}^n t_{\rm{P},j} - t_{\rm{O}i} \end{aligned}$$
(3)

The conditional probability for timely intervention T is thus defined by:

$$\begin{aligned} T = P(t_{\rm{I}} < t_{\rm{RP}}) \end{aligned}$$
(4)

Both main principles and the resulting relations between the pdfs of the incorporated parameters are shown in Fig. 2.
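For npdfs, both conditional probabilities have closed forms, since the difference of two independent normal variables is again normally distributed: \(D = \Phi\bigl((\mu_{\rm{P}}-\mu_{\rm{O}})/\sqrt{\sigma_{\rm{P}}^2+\sigma_{\rm{O}}^2}\bigr)\), and analogously for T. A minimal sketch using Python's standard library; the numerical values are the barrier 8 observation and protection parameters that appear later in Eq. 24 and serve purely as illustration:

```python
from math import sqrt
from statistics import NormalDist

def detection_probability(mu_P, sig_P, mu_O, sig_O):
    """D = P(t_O < t_P) for independent normal t_P and t_O (Eq. 2)."""
    # t_P - t_O is normal with mean mu_P - mu_O and variance sig_P^2 + sig_O^2
    return NormalDist().cdf((mu_P - mu_O) / sqrt(sig_P**2 + sig_O**2))

def timely_intervention_probability(mu_I, sig_I, mu_RP, sig_RP):
    """T = P(t_I < t_RP) for independent normal t_I and t_RP (Eq. 4)."""
    return NormalDist().cdf((mu_RP - mu_I) / sqrt(sig_I**2 + sig_RP**2))

# Illustration with the barrier 8 values that appear in Eq. 24
D = detection_probability(mu_P=216, sig_P=33, mu_O=180, sig_O=27)
```

With these values, detection succeeds in roughly 80 % of attempts, which already hints at why uncertainty in the measure parameters matters.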

Fig. 2 Application of normal pdfs (npdfs) for \(t_{\rm{P}}\), \(t_{\rm{O}}\), \(t_{\rm{I}}\)

The vulnerability of a barrier \(V_{\rm{B}}\) is then represented by

$$\begin{aligned} V_{\rm{B}} = 1 - D \cdot T \end{aligned}$$
(5)

The product of the barrier-specific vulnerabilities leads to the vulnerability of the whole attack path \(V_{\rm{P}}\):

$$\begin{aligned} V_{\rm{P}} = \prod _{j=1}^n V_{\rm{B},j} \end{aligned}$$
(6)

Referring to the first assumption, the system vulnerability \(V_{\rm{S}}\) is determined by the weakest path:

$$\begin{aligned} V_{\rm{S}} = \max (V_{\rm{P},1}, \dots , V_{\rm{P},m}) \end{aligned}$$
(7)

For numerical sampling, e.g., Monte Carlo, we reformulate the definition of system vulnerability because path vulnerability is binary at each sample: the system is defined to be vulnerable at a sample if any path is vulnerable. The mean over all samples then describes the overall system vulnerability.
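The per-sample evaluation can be sketched as follows for a single attack path; the barrier parameter values are hypothetical and only illustrate the mechanics (for several paths, a sample would count as vulnerable if any path is vulnerable):

```python
import random

def simulate_path_vulnerability(barriers, n_samples=20000, seed=1):
    """Monte Carlo estimate of the vulnerability of one attack path.

    Each barrier is a dict of npdf parameters (mu, sig) for protection,
    observation and intervention times. Per sample, barrier i stops the
    attack if detection occurs (t_O < t_P) and intervention is timely
    (t_I < t_RP, with the residual protection t_RP from Eq. 3); the path
    is vulnerable in that sample if no barrier stops the attack.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        t_P = [rng.gauss(b["mu_P"], b["sig_P"]) for b in barriers]
        t_O = [rng.gauss(b["mu_O"], b["sig_O"]) for b in barriers]
        t_I = [rng.gauss(b["mu_I"], b["sig_I"]) for b in barriers]
        stopped = False
        for i in range(len(barriers)):
            t_RP = sum(t_P[i:]) - t_O[i]   # residual protection, Eq. 3
            if t_O[i] < t_P[i] and t_I[i] < t_RP:
                stopped = True
                break
        hits += 0 if stopped else 1
    return hits / n_samples

# Hypothetical configurations: strong protection with fast intervention
# versus weak protection with slow observation and intervention
strong = [dict(mu_P=300, sig_P=20, mu_O=100, sig_O=10, mu_I=50, sig_I=10),
          dict(mu_P=300, sig_P=20, mu_O=100, sig_O=10, mu_I=50, sig_I=10)]
weak = [dict(mu_P=100, sig_P=20, mu_O=150, sig_O=10, mu_I=300, sig_I=10)]
V_strong = simulate_path_vulnerability(strong)
V_weak = simulate_path_vulnerability(weak)
```

The strong configuration yields a path vulnerability near zero, the weak one near one, mirroring the binary per-sample definition above.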

3.3 Exemplary Airport Structure and Security System

The airport system and the identified security barriers are depicted in Fig. 3. Additionally, the figure outlines feasible attack paths within this structure. The structure is based on a notional airport which was subject to a security risk assessment in Lichte and Wolf (2017).

Fig. 3 Notional airport structure with feasible attack paths, based on Lichte and Wolf (2017)

4 The Impact of Uncertainties in Security Vulnerability Assessment

In this section, we show how uncertainties influence the results of vulnerability assessment. For this purpose, we analyze an initial configuration of the introduced notional airport structure regarding general model sensitivity to added variance on system parameters. The initial configuration is described in Table 1. The defined values are assumed to be the result of expert knowledge elicitation. Subsequently, we quantify the monitored impact on barrier level by applying a VBSA on all input parameters characterizing the capabilities of the security system.

Table 1 Initial configuration of notional airport security system

4.1 Impact of Uncertainties on System Vulnerability

In this analysis, we initially replace the pdfs with scalars to describe the performance of security measures without changing the basic barrier-oriented structure. Thus, the parameters are fully described by the mean values: \(t_{\rm{P}i} = \mu _{\rm{P}i}\), \(t_{\rm{O}i} = \mu _{\rm{O}i}\) and \(t_{\rm{I}i} = \mu _{\rm{I}i}\) (compare Table 1).

In a second assessment, we add the uncertainty regarding the security measure performance at the single barriers, represented by the variance \(\sigma ^2\). Table 1 shows the npdf-based values for the respective system parameters. It should be noted that we assume no security measures are associated with barrier 1. Hence, it is excluded from the assessment.

With the now established npdfs for \(t_{\rm{P}i}\), \(t_{\rm{O}i}\) and \(t_{\rm{I}i}\), we conduct a re-assessment of the vulnerability considering the added uncertainties. For this purpose, we compute the vulnerability by MCS. The obtained results for both assessments are listed in Table 2.

Table 2 Path vulnerabilities with and without uncertainty consideration

The weakest path determines the system vulnerability \(V_{\rm{S}}\). We calculate the results for both cases: \(V_{\rm{S,nv}}\) for no variance consideration and \(V_{\rm{S,v}}\) using the npdfs.

$$\begin{aligned} V_{\rm{S,nv}}&= 0 \end{aligned}$$
(8)
$$\begin{aligned} V_{\rm{S,v}}&= 0.811 \end{aligned}$$
(9)

The assessment of the vulnerabilities of the feasible attack paths, as well as of the system vulnerability, reveals highly divergent results for the two versions. The difference between \(V_{\rm{S,nv}}\) and \(V_{\rm{S,v}}\) at system level is caused solely by the introduced uncertainties, since the vulnerability model and the mean values of the input parameters remain unchanged. Thus, estimating and considering uncertainties is important, as a system layout based on scalars can produce misleading results and, in consequence, fatal decisions. Additionally, a more detailed understanding of the uncertainty impact is needed for a rational and cost-efficient security system layout. For this reason, we carry out further analyses at barrier and parameter level in the next section.

4.2 Uncertainty Impact Assessment on Barrier Level

In this step, we analyze which uncertain parameters impact system vulnerability. By applying a VBSA, we reveal the influence of all parameters at barrier level. For this purpose, we investigate the total effect sensitivity indices \(S_{\rm{T}i}\) of the model output \(V_{\rm{S}}\) with respect to the input parameters \(t_{\rm{P}i}\), \(t_{\rm{O}i}\) and \(t_{\rm{I}i}\). By generating samples based on Sobol sequences and calculating the sensitivity indices using the software SALib (Herman and Usher 2017), we obtain the results for all input parameters of the initial configuration shown in Table 3.

Table 3 Total effect sensitivity indices \(S_{\rm{T}i}\) for all parameters

On the one hand, the results reveal that the uncertainty added to some of the input factors has no impact on the model output of system vulnerability, as their total effect sensitivity indices are zero or near zero, e.g., all input factors at barrier 4. On the other hand, the uncertainty of some input factors does have an impact on the results, e.g., at barriers 2a, 2b, 2c, 3, 6 and 8. It can thus be concluded that uncertain parameters of security measures have an impact only at certain points, i.e., barriers, within a security system.

5 Approach Toward a System Layout Considering Uncertainty

In this section, we propose an approach that optimizes system security by considering the influence of uncertainties analyzed in Sect. 4. For this purpose, we introduce a security margin concept that is set up in two consecutive steps. In a first step, we use the VBSA-based total effect sensitivity indices \(S_{\rm{T}i}\) to identify the barriers and security measures relevant for optimization. In a second step, we derive a security margin for the performance of the identified measures to account for the associated uncertainties. The process is run successively for detection and intervention capabilities: as security margins applied to protection or observation measures change the residual protection on certain attack paths (see Eq. 3), this sequence enables optimized adjustment of the security margin for intervention measures. The security margins for all measures are based only on the characteristics of the involved pdfs and a targeted level of detection and timely intervention capability. We additionally demonstrate the correlation, made visible through this approach, between the effort needed to account for uncertainties and the achievable security level. This can be used for further assessment of optimization efficiency.

5.1 Step 1: Variance Influence Assessment on Measure Performance at Barrier Level

A rational optimization of security systems should consider the boundary conditions of feasibility, cost-benefit ratio and financial budget constraints. A sensitivity analysis, especially a VBSA for nonlinear systems, is a reasonable first step for cost-benefit considerations, as it provides qualitative knowledge about the influence of the system's variables on its output. Hence, influencing variables can be identified and chosen for further optimization in order to concentrate resources and thus maximize their benefit.

Here, we use the VBSA as shown in Sect. 4.2 to find input parameters that influence the system vulnerability by comparing the calculated total effect sensitivity indices \(S_{\rm{T}i}\).

5.2 Step 2: Security Margin Definition

Identified influencing security measures are optimized to improve the overall security system performance in the second step. This is reached by adding a security margin M considering the uncertainty, i.e., the variance of the characterizing pdfs. The new parameters for protection \(t_{\rm{P}i}^*\) and intervention \(t_{\rm{I}i}^*\) are then given as follows:

$$\begin{aligned} t_{\rm{P}i}^*&= t_{\rm{P}i} + M_{\rm{P}i} \left( \sigma _{\rm{P}i}, \sigma _{\rm{O}i}, D_i^* \right) \end{aligned}$$
(10)
$$\begin{aligned} t_{\rm{I}i}^*&= t_{\rm{I}i} - M_{\rm{I}i} \left( \sigma _{\rm{I}i}, \sigma _{\rm{RP}i}, T_i^* \right) \end{aligned}$$
(11)

Herein, \(\sigma ^2\) marks the variance of the respective pdf at the i-th barrier. \(D_i^*\) and \(T_i^*\) describe a targeted level of probability for detection and timely intervention at barrier i, respectively.

The definition of M depends on the underlying pdfs used to describe the performance of the security measures. Based on the level of knowledge, different pdfs may be suitable, e.g., uniform, triangular or normal distributions. The derivation of M for normal distributions is described in the following based on mean and variance of measure performance as well as the targeted level of the dependent capability. Here, we restrict ourselves to normal distributions, since these are mathematically straightforward to handle.

For all distribution types, the starting point is derived from Eqs. 2, 4, 10 and 11, respectively:

$$\begin{aligned} D^*&= P(t_{\rm{O}} < t_{\rm{P}} + M_{\rm{P}}) \end{aligned}$$
(12)
$$\begin{aligned} T^*&= P(t_{\rm{I}} - M_{\rm{I}} < t_{\rm{RP}}) \end{aligned}$$
(13)

As shown in Lichte and Wolf (2017), D and T can be expressed by pdfs, here extended to include the security margin:

$$\begin{aligned} D^*&= \int _{-\infty }^{\infty } f_{\rm{O}}(t) \; \int _t^{\infty } \!\! f_{\rm{P}}(\tau - M_{\rm{P}}) \; \rm{d}\tau \, \rm{d}t \end{aligned}$$
(14)
$$\begin{aligned} T^*&= \int _{-\infty }^{\infty } f_{\rm{I}}(t + M_{\rm{I}}) \; \int _t^{\infty } \!\! f_{\rm{RP}}(\tau ) \; \rm{d}\tau \, \rm{d}t \end{aligned}$$
(15)

Herein, \(f_{\rm{RP}}\) is obtained by consecutively convoluting the pdfs for protection of the remaining barriers on the attack path. Additionally, we use the following definition to treat the distributed time for first observation at barrier i:

$$\begin{aligned} (f \mathop {\bar{*}} g) (t)&:=\int _{-\infty }^{\infty } f (\tau ) \, g (\tau - t) \, \rm{d} \tau \end{aligned}$$
(16)

Hence, we obtain:

$$\begin{aligned} f_{\rm{RP}}(t) = \left( f_{\rm{P}i} * \dots * f_{\rm{P},n} \mathop {\bar{*}} f_{\rm{O}i} \right) (t) \end{aligned}$$
(17)

For npdfs parametrized by mean \(\mu\) and variance \(\sigma ^2\), the security margin for protection \(M_{\rm{P}}\) follows from Eq. 14:

$$\begin{aligned} M_{\rm{P}} = \mu _{\rm{O}} - \mu _{\rm{P}} - \sqrt{ 2 \left( \sigma _{\rm{O}}^2 + \sigma _{\rm{P}}^2 \right) } \cdot {{\,\rm{erf}\,}}^{-1} (1 - 2 D^*) \end{aligned}$$
(18)

Herein, \({{\,\rm{erf}\,}}^{-1}\) refers to the inverse error function.
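Using the identity \(-\sqrt{2}\,{{\,\rm{erf}\,}}^{-1}(1-2D^*) = \Phi^{-1}(D^*)\), Eq. 18 simplifies to \(M_{\rm{P}} = \mu_{\rm{O}} - \mu_{\rm{P}} + \sqrt{\sigma_{\rm{O}}^2+\sigma_{\rm{P}}^2}\;\Phi^{-1}(D^*)\). A minimal sketch using Python's standard library, checked against the barrier 8 example that appears later in Eq. 24 (≈ 49.2 s):

```python
from math import sqrt
from statistics import NormalDist

def protection_margin(mu_P, sig_P, mu_O, sig_O, D_target):
    """Security margin M_P from Eq. 18 for normal pdfs."""
    # Phi^-1(D*) replaces -sqrt(2) * erfinv(1 - 2*D*)
    z = NormalDist().inv_cdf(D_target)
    return mu_O - mu_P + sqrt(sig_O**2 + sig_P**2) * z

# Barrier 8 values from Table 1 as used in Eq. 24
M_P = protection_margin(mu_P=216, sig_P=33, mu_O=180, sig_O=27, D_target=0.9772)
```

The standard-library formulation avoids a direct inverse error function, which Python does not provide, by using the inverse normal cdf instead.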

Analogously, \(M_{\rm{I}}\) follows from Eq. 15. Since the residual protection time \(t_{\rm{RP}}\) is the result of the convolution of npdfs (Eq. 17), \(t_{\rm{RP}}\) is normally distributed as well:

$$\begin{aligned} M_{\rm{I}} = \mu _{\rm{I}} - \mu _{\rm{RP}} - \sqrt{ 2 \left( \sigma _{\rm{I}}^2 + \sigma _{\rm{RP}}^2 \right) } \cdot {{\,\rm{erf}\,}}^{-1} (1 - 2 T^*) \end{aligned}$$
(19)

The distribution parameters for \(t_{\rm{RP}}\) are:

$$\begin{aligned} \mu _{\rm{RP}}&= \sum _{j=i}^n \mu _{\rm{P},j} - \mu _{\rm{O}i} \end{aligned}$$
(20)
$$\begin{aligned} \sigma _{\rm{RP}}^2&= \sum _{j=i}^n \sigma _{\rm{P},j}^2 + \sigma _{\rm{O}i}^2 \end{aligned}$$
(21)
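The same closed form applies to \(M_{\rm{I}}\) once \(\mu_{\rm{RP}}\) and \(\sigma_{\rm{RP}}^2\) are assembled from Eqs. 20 and 21. A minimal sketch, checked against the worked example that appears later in Eq. 25 (≈ 46.2 s); the assignment of the individual standard deviations to measures follows the ordering in Eq. 25 and is immaterial here, since only their squared sum enters:

```python
from math import sqrt
from statistics import NormalDist

def intervention_margin(mu_I, sig_I, mu_P_list, sig_P_list, mu_O, sig_O, T_target):
    """Security margin M_I from Eqs. 19-21 for normal pdfs."""
    mu_RP = sum(mu_P_list) - mu_O                       # Eq. 20
    var_RP = sum(s**2 for s in sig_P_list) + sig_O**2   # Eq. 21
    # Phi^-1(T*) replaces -sqrt(2) * erfinv(1 - 2*T*)
    z = NormalDist().inv_cdf(T_target)
    return mu_I - mu_RP + sqrt(sig_I**2 + var_RP) * z

# Values from Eq. 25 (barriers 8 and 9 on path 14; the barrier 8
# protection time already includes its security margin, mu = 265.2 s)
M_I = intervention_margin(mu_I=288, sig_I=75,
                          mu_P_list=[265.2, 360], sig_P_list=[33, 54],
                          mu_O=180, sig_O=27,
                          T_target=0.9772)
```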

It should be noted that a detailed optimization of the introduced security margins requires an enhanced cost-benefit assessment using cost functions. Even without an underlying cost function, however, the ratio between effort and benefit regarding the capabilities of detection and timely intervention depends on the distribution used for the description. Figure 4 shows this relation for the introduced npdf in the detection mechanism. It reveals that the needed security margin \(M_{\rm{P}}\) grows nearly linearly with rising target detection probability level \(D^*\) as long as the influence of the inverse error function in Eq. 18 is limited. Consistent with the shape of the npdf, the security margin rises sharply when \(D^* > P(x < \mu + 2 \sigma ) \approx 97.72\,\%\) is required. This implies a direct dependence of the security margin on the variance \(\sigma ^2\) of the respective npdfs that characterize security measure performance, depending on the available level of data or knowledge. This dependency can be used for a first efficiency estimation of the effort needed to account for existing uncertainties.

The graphs for higher variances in Fig. 4 underline that higher levels of variance, i.e., greater uncertainty regarding measure performance, cannot efficiently be tackled by security margins when the required target level for detection probability is high. This can also be seen in Eq. 18, where the term containing the variances is scaled by the inverse error function of the target detection probability. For this case, an upstream reduction of uncertainties appears necessary.
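This behavior can be reproduced directly from Eq. 18 by tabulating \(M_{\rm{P}}\) over the target detection level. A small sketch with illustrative parameter values (not the barrier 3 values underlying Fig. 4):

```python
from math import sqrt
from statistics import NormalDist

def margin_over_target(mu_P, sig_P, mu_O, sig_O, targets):
    """Tabulate M_P (Eq. 18) over a set of target detection levels D*."""
    s = sqrt(sig_O**2 + sig_P**2)
    # Phi^-1(D*) replaces -sqrt(2) * erfinv(1 - 2*D*)
    return {d: mu_O - mu_P + s * NormalDist().inv_cdf(d) for d in targets}

# Illustrative values: the margin grows slowly at first, then sharply
margins = margin_over_target(216, 33, 180, 27, (0.90, 0.95, 0.9772, 0.99, 0.999))
```

The increment from \(D^* = 0.99\) to 0.999 is roughly twice that from 0.90 to 0.95, illustrating the sharp rise beyond \(\mu + 2\sigma\).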

Fig. 4 Security margin as a function of target detection level for barrier 3 and varying distribution parameters of observation time

6 Exemplary Solution for Notional Airport Structure

In the following, we evaluate the introduced security margin approach by applying it to the notional airport infrastructure introduced in Sect. 4. For this purpose, we follow the process outlined in Sect. 5 and set up a new configuration of the security system using calculated security margins. Subsequently, we assess the vulnerability of the newly defined configuration.

Based on the relation between security margin and target level required for detection or timely intervention probability shown in Sect. 5.2, we choose the following values for probability of attacker detection \(D_i^*\) and timely intervention \(T_i^*\), respectively:

$$\begin{aligned} D_i^*&= 97.72\,\% \end{aligned}$$
(22)
$$\begin{aligned} T_i^*&= 97.72\,\% \end{aligned}$$
(23)

6.1 Security Margin for Measures of Detection

6.1.1 Step 1: Assessment of Influencing Variance

In a first step, barriers with a high total effect sensitivity index \(S_{\rm{T}i}\) (see Table 3) are chosen from the results of the VBSA carried out in Sect. 4.2 for security margin definition. The protection measures with the respective protection times \(t_{\rm{P}i}\) at the barriers shown in Table 4 are the subject of further consideration.

Table 4 Identified barriers and protection measures

6.1.2 Step 2: Derivation of Security Margin

In the second step, we calculate the security margin for the protection measures identified in the first step by applying the values of the initial configuration given in Table 1 to Eq. 18. For instance, for barrier 8 we get:

$$\begin{aligned} M_{\rm{P}} = 180 \, \rm{s} - 216 \, \rm{s} - \sqrt{ 2 \cdot \left( 27^2 \, \rm{s}^2 + 33^2 \, \rm{s}^2 \right) } \cdot {{\,\rm{erf}\,}}^{-1} (1 - 2 \cdot 0.9772) = 49.2 \, \rm{s} \end{aligned}$$
(24)

The results for all considered barriers are given in Table 5. Note that the values with added security margin \(\mu _{\rm{P}i}^*\) are further used for the definition of the security margin for timely intervention.

Table 5 Security margins applied to protection measure parameters

6.2 Security Margin for Measures of Timely Intervention at Barrier Level

6.2.1 Step 1: Assessment of Influencing Variance

Based on the updated configuration incorporating the security margins for the protection measures defined in Table 5, we revise the security system further to address the remaining influence of uncertainties on timely intervention. Thus, we conduct a new VBSA and evaluate the remaining total effect sensitivity indices for intervention measures \(S_{\rm{T,I}}\). The results are given in Table 6.

Table 6 Total effect sensitivity indices \(S_{\rm{T}i}\) for all parameters with applied \(M_{\rm{P}}\)

Our results for \(S_{\rm{T,I}}\) show an influence of the intervention measure at barrier 8. In order to identify the weakest path, whose vulnerability is influenced by the uncertainties at barrier 8, we additionally break down its influence at attack path level. Table 7 reveals that attack path 14 is the only influenced path; on it, barriers 8 and 9 shape the residual protection distribution (compare Fig. 3).

Table 7 Total effect sensitivity indices \(S_{\rm{T}i}\) for influence of intervention at barrier 8 on path vulnerability \(V_{\rm{P}}\)

6.2.2 Step 2: Derivation of Security Margin

The security margin for the remaining influence of barrier 8 on timely intervention is calculated by inserting the respective values from Tables 1 and 5 into Eq. 19. We then obtain:

$$\begin{aligned}&M_{\rm{I}} = 288 \, \rm{s} - \left( 265.2 \, \rm{s} + 360 \, \rm{s} - 180 \, \rm{s} \right) \nonumber \\&\quad - \sqrt{ 2 \cdot \left( 75^2 \, \rm{s}^2 + 33^2 \, \rm{s}^2 + 54^2 \, \rm{s}^2 + 27^2 \, \rm{s}^2 \right) } \cdot {{\,\rm{erf}\,}}^{-1} (1 - 2 \cdot 0.9772) = 46.2 \, \rm{s} \end{aligned}$$
(25)

The added security margin for the intervention measure at barrier 8 is listed in Table 8.

Table 8 Security margins applied to intervention measure parameters

6.3 Vulnerability Assessment

Finally, we assess the vulnerability of the new configuration according to the procedure given in Sect. 4.1 and compare it to the initially analyzed security system. Table 9 compares the path vulnerabilities of the initial configuration with those resulting from the application of security margins, once with \(M_{\rm{P}}\) only and once with both \(M_{\rm{P}}\) and \(M_{\rm{I}}\).

Table 9 Comparison of path vulnerabilities for configurations with and without security margins

We calculate the system vulnerability \(V_{\rm{S,v}}^*\) for the newly created system with security margins and compare it to the initial configuration considering variance \(V_{\rm{S,v}}\):

$$\begin{aligned} V_{\rm{S,v}}&= 0.811 \end{aligned}$$
(26)
$$\begin{aligned} V_{\rm{S,v}}^*&= 0.210 \end{aligned}$$
(27)
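The gap between the two system vulnerabilities reflects the general difference between mean-value and distribution-based assessment. The following sketch illustrates this effect in closed form for a single intervention barrier with independent, normally distributed attack and response times; the parameter values are hypothetical and chosen for illustration only, not taken from the airport example.

```python
from math import sqrt
from statistics import NormalDist

def timely_intervention_prob(mu_att, sd_att, mu_resp, sd_resp):
    """P(T_resp < T_att) for independent normally distributed times."""
    diff = NormalDist(mu_att - mu_resp, sqrt(sd_att ** 2 + sd_resp ** 2))
    return 1.0 - diff.cdf(0.0)

# Scalar view: mean response (250 s) beats mean attack time (288 s),
# so the barrier appears reliable. Distribution view: the large spreads
# leave a substantial probability of the intervention arriving too late.
p = timely_intervention_prob(288.0, 75.0, 250.0, 60.0)
print(f"timely intervention probability: {p:.3f}")
```

In this hypothetical setting, roughly a third of the attempts would not be intercepted in time, although a scalar comparison of the mean values suggests a successful intervention.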

The results show that the vulnerability of the new configuration is at a low level. Moreover, the comparison with the initial configuration shows that system vulnerability is reduced significantly by the application of the security margins. As the total effect sensitivity indices \(S_{\rm{T}i}\) reveal which parameters carry the impact of the added variance or uncertainty, we can use this result to establish a new configuration in which only the influential factors are subjected to a security margin M; the values of non-influential parameters are kept from the initial configuration. The resulting configuration of our exemplary system, containing the security margins, is summarized in Table 10.

Table 10 Configuration of notional airport security system containing security margin

7 Discussion

The analysis carried out in this paper demonstrates how the quantitative approach to SRA can be used to take uncertainties into account, even if the data situation is vague or the assessment is based only on expert knowledge. Building on this, we present an approach that aims to minimize the influence of uncertainties, i.e., the lack of knowledge regarding the performance of security measures, on security system design by using quantitative methods in a targeted manner.

Our analysis shows that the difference between scalar and distribution-based vulnerability assessment can be significant. In the example used, the introduced uncertainties lead to a significant rise in vulnerability. Thus, we show that the uncertainty regarding the knowledge of security measure performance may severely influence the results of an SRA and, even more importantly, the outcome of possible attacks. By applying VBSA to the analyzed security system, we reveal that the influence of the uncertainties is in this case limited to a few security barriers and measures within the system. As the underlying vulnerability model is nonlinear, the total effect sensitivity index \(S_{\rm{T}i}\) gives insight into the direct and indirect influence of the analyzed variable and thus of the respective security measure.
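The total effect indices used here can be estimated with the standard Saltelli sampling design and the Jansen estimator. The sketch below demonstrates the mechanics on a hypothetical nonlinear surrogate model, not the paper's vulnerability model: two interacting inputs stand in for influential barriers, a third for a non-influential one.

```python
import random
from statistics import pvariance

random.seed(1)

def surrogate(x):
    # hypothetical nonlinear vulnerability surrogate: x[0] and x[1] interact,
    # x[2] has almost no influence (stands in for a non-influential barrier)
    return x[0] * x[1] + 0.01 * x[2]

N, d = 20_000, 3
A = [[random.random() for _ in range(d)] for _ in range(N)]
B = [[random.random() for _ in range(d)] for _ in range(N)]
fA = [surrogate(a) for a in A]
varY = pvariance(fA)

totals = []
for i in range(d):
    # AB_i: matrix A with column i replaced by column i of B (Saltelli design)
    fAB = [surrogate(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
    # Jansen estimator for the total effect index S_Ti
    S_Ti = sum((ya - yab) ** 2 for ya, yab in zip(fA, fAB)) / (2 * N * varY)
    totals.append(S_Ti)

for i, s in enumerate(totals, start=1):
    print(f"S_T{i} = {s:.2f}")
```

The two interacting inputs receive large total effect indices while the third remains near zero, mirroring how the VBSA singles out the influential barriers in our system.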

The proposed security margin concept aims at tackling the aforementioned influences in security system (re-)configuration. The derivation of the security margin involves two steps that are conducted consecutively for the fundamental capabilities of detection and timely intervention. First, by applying a VBSA, influential security measures are identified. In a second step, the security margin itself is calculated, depending solely on the size of the introduced uncertainty of the measure and the target levels for detection and timely intervention. It should be noted that in this paper we establish the security margin concept for the npdf-based description of security measure performance resulting from expert knowledge. The formalization for other reasonable pdfs, e.g., uniform or triangular distributions, is similar in principle but requires additional computational effort. However, the demonstrated relation between target measure effectiveness and distribution variance, i.e., the introduced uncertainties, can be used to support efficiency considerations.

In the case of the npdf, we show that the effort needed to increase the target effectiveness rises sharply at \(P(x < \mu + 2 \sigma ) \approx 97.72\,\%\). The efficiency estimate for higher variances shows that large uncertainties regarding the properties of security measures entail fundamental problems. On the one hand, taking these uncertainties into account in the system design does not appear to be efficient, since disproportionate effort must be expended to ensure a sufficient security margin. On the other hand, the result shows that poor quality of the input data used, be it a vague database or expert knowledge, may limit the informational value of the evaluation as well as of the proposed security margin concept to the extent that poor (vague) input data lead to questionable results, a valuable insight that is hardly obtained from qualitative methods.
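The sharp rise in effort follows directly from the shape of the normal quantile function: each further increase in the target probability beyond the \(2\sigma\) level costs roughly another standard deviation of margin. A short sketch makes this visible.

```python
from statistics import NormalDist

# z(p): how many standard deviations the design must cover to reach a
# target probability p under a normal pdf; z grows ever more steeply
# as p approaches 1 (diminishing returns per added margin)
targets = (0.90, 0.9772, 0.99, 0.999, 0.9999)
zs = [NormalDist().inv_cdf(p) for p in targets]

for p, z in zip(targets, zs):
    print(f"p = {p:<7} -> z = {z:.2f} sigma")
```

At \(p = 97.72\,\%\) the required coverage is exactly \(2\sigma\); pushing the target toward 99.99 % demands almost twice that margin, which illustrates the disproportionate effort discussed above.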

This strongly suggests that the consideration of uncertainties via the security margin is not sufficient in such a case. Here, a reduction of the corresponding variances seems necessary first. This could be tackled by further evaluation of the implemented security measures in real-world tests, aiming to decrease the input uncertainty by enhancing the database.

The evaluation of the security margin concept using the airport example illustrates its usefulness in principle. By taking into account the uncertainties based on expert knowledge, only the influential barriers are provided with a security margin in a modified configuration. A subsequent vulnerability assessment supports the assumption regarding the differing influence of the input parameters. The significant reduction of system vulnerability shows the effectiveness of the security margin in limiting the influence of uncertainties on system performance.

8 Conclusion

In this paper, we show the usefulness of the quantitative approach in SRA. This is particularly evident when only a vague database or expert knowledge is available, which is common in security assessment. Unlike qualitative analysis, quantitative analysis allows uncertainty to be considered explicitly.

The analysis carried out in the paper shows the potentially large impact of these uncertainties, represented by variances in pdfs, on the results of the SRA and the outcome of possible attacks on the system under consideration. A VBSA shows that this influence can be attributed to certain barriers for the selected configuration. Based on these results, we propose the security margin concept, in which targeted changes are made to influential barriers, taking into account the uncertainties resulting from, for example, vague data or expert knowledge.

Generally, in SRA sufficient attention should be paid to the level of effect uncertainty and its resulting consequences. According to Abrahamsen et al. (2015), potentially severe consequences should lead to precautionary approaches. Here, the introduced security margin can be used for a corresponding security system layout. The introduced formalization supports basic efficiency considerations as well as enhanced optimization methods. However, the results suggest that in the case of large uncertainties, their reduction should be sought first. For this purpose, additional investigation of the security margin concept and its limits is needed. Additionally, the security margin concept should be formalized for different pdfs to carve out further limitations. For enhanced applicability, the non-continuous change of performance between implementable security measures and the associated financial effort should be included, thus enabling enhanced cost-benefit analysis and optimization.

In summary, the understanding and consideration of the described inherent levels of uncertainty in effect and response in SRA is important, since their influence on the outcome of analysis and its validity is potentially significant. The proposed security margin concept is a feasible way to cope with such uncertainties by methodical identification and targeted limitation of their influence on vulnerability of security systems.