1 Introduction

Safety and security share many commonalities. Nevertheless, measures and systems that provide and ensure safety and security are often planned and implemented independently by different experts. If both aspects were treated in an integrated manner, synergies could be realized and costs reduced.

If we want to ensure the safety and security of complex systems such as critical infrastructures and socio-technical systems, many disciplines become stakeholders: engineering, law, economics, the humanities, the social sciences, etc.

Up to now, there is no established formal language common to safety and security, nor a common language across all involved disciplines. The aim of this paper is to propose a quantitative mathematical approach that can serve to describe and analyze safety and security problems in a unified fashion and to plan and optimize dedicated measures and systems.

2 Related Work

The frameworks of statistical decision theory and game theory are mature and proven methodologies that have been applied to many different domains, foremost to economics (Berger 1993). In combination with attack trees, game theory has already been applied to model rational attackers (Buldas et al. 2006). Some aspects of the approach presented in this paper were already proposed in a preliminary, qualitative formulation in Beyerer et al. (2009) and Beyerer (2009).

3 Safety and Security

The terms safety and security only make sense in the face of some danger that can cause damage. Danger emanates from a »source of danger« d, propagates over a certain »path of transmission« and takes effect on a »subject of protection« s (see Fig. 1). The path of transmission is everything between d and s that is needed to transport the hazardous effect. It belongs neither to d nor to s.

Fig. 1 Relation between a source of danger d and a subject of protection s. D and S denote the set of sources of danger and the set of subjects of protection, respectively

In the case of, e.g., a radio-controlled explosive device, this path comprises the radio link between trigger and device as well as the air between the device and the target that the bomb fragments have to pass. In the case of a tsunami, it is the water between the epicenter of an earthquake and the shore.

The danger hits the subject of protection s at some of its »flanks of vulnerability« F, which can be of different qualities (mechanical, chemical, psychological, financial, informational, …). The flanks of vulnerability belong to the subject of protection and are under its control.

The two examples mentioned above—explosive device and tsunami—illustrate two fundamental categories of danger: willful and unintended. If a danger is willfully applied, we are in the domain of security; if it is unintended, we are in the domain of safety. A willful endangerment by human beings can be used on the one hand as a means to achieve some (material) goal, e.g. in the case of robbery, or it can be carried out as an end in itself, e.g. in the case of vandalism or amok. The source of unintended danger may on the one hand be human carelessness that underestimates or even ignores potential damage, or the origin may be a random event such as an unforeseeable technical fault or a natural event (e.g. an earthquake). Figure 2 illustrates this categorization.

Fig. 2 Categorization of dangers with respect to safety and security. \(d \in \text{D}_{\text{W}}\) are called “attackers” and \(d \in \text{D}_{\text{U}}\) are called “causers”. In the case of an attacker \(d \in \text{D}_{\text{WM}}\) or a causer \(d \in \text{D}_{\text{UC}}\), the pertaining risk can be influenced by costs charged to d (penalties, money, …), so that d is deterred from attacking or urged to act more carefully, respectively

From a game-theoretic point of view there is another interesting interpretation of safety and security (Beyerer et al. 2009). With respect to safety, the subject of protection s plays a game against nature (see Fig. 3). Its opponent behaves like a random process. Based on a statistical analysis, the distribution that characterizes the opponent can be learned and countermeasures can be applied to reduce the risk. Especially if the distribution does not change with time, a stationary safety level can be attained with passive measures; the protection process can converge (see Fig. 2, on the right).

Fig. 3 Game theoretic view on safety

In contrast, regarding security, the adversary behaves intelligently (see Fig. 4). In this case, the subject of protection s plays against a strategically acting opponent who evades being understood, analyzes the weaknesses of s and selfishly tries to maximize his benefit. Therefore, measures will be answered with countermeasures and no stationarity will be achieved; the protection process necessarily oscillates (see Fig. 2, on the left).

Fig. 4 Game theoretic view on security

A further issue becomes clear from the discussion so far: an attacker who acts rationally does not attack the flanks of vulnerability at random. Instead, he will attack the flank that is most promising for achieving his goal. For security, this directly leads to the following minimum principle: the weakest flank determines the degree of vulnerability.

Moreover, whether we are in the domain of security or of safety depends only on the source of danger d and neither on the path of transmission nor on the subject of protection s (see Fig. 1). For example, if a fire was caused by an arsonist, we have a security case; if the fire was caused by an electrical short circuit, we assign it to safety. With regard to the path of transmission and the subject of protection, the two cases need not be distinguished, since both lead to the same consequences.

4 Role and Risk Model

4.1 Roles

The goal of every measure to increase safety or security is to protect the subject of protection from harm caused by dangers. Therefore, we define a third role besides the already introduced subject of protection s and source of danger d: the »protector« p. It is first of all a role, not necessarily an entity separate from s. When an s protects itself without any external help, p and s coincide in the same entity. With the introduction of p, we can concentrate all measures of protection in this role. These are (see Fig. 5): to detect and possibly neutralize the source of danger directly, to lengthen the path of transmission in order to weaken the hazardous effect, and to cover the subject of protection and harden its flanks of vulnerability. A necessary precondition for the relation between the subject of protection and its protector is trust, which, in case s and p are separate entities, is often confirmed by a contract.

Fig. 5 Roles and relations between them. Note that the different roles can be played by different entities, but coincidences are also possible. For example, someone can be a danger to himself or can protect himself

To complete the relations between the three roles in Fig. 5, it should be made clear that, except for unintended danger from random events (see Figs. 2, 3), there is always some flow of value from s to d. This is expressed by the relation s »enriches« d.

4.2 Formalization of Ingredients

In this section, the entities, attributes and relations of the considerations above are formalized and quantified using the well-established approach of Bayesian statistical decision theory (Berger 1993).

4.2.1 Degree of Belief Interpretation of Probabilities

Following the compelling argumentation of Lindley (1982), all uncertainties are modelled based on the probability calculus.

In this paper, probability is used in the broader sense of a degree of belief (DoB). This interpretation is a generalization of the classical frequentist meaning of probability and is still compliant with the axioms of Kolmogorov (Bernardo and Smith 1994; Beyerer 1999).

Figure 6 illustrates this concept. Kolmogorov’s famous axioms formally define probability as a measure-theoretic concept. However, they only determine how to calculate with probabilities in a sound manner, i.e. the syntax of probabilistic calculations; they do not explain the meaning of probability. Indeed, a formal system like Kolmogorov’s axioms admits multiple coexisting interpretations (i.e. multiple semantics), as long as each is consistent with the axioms (Hofstadter 1979).

Fig. 6 Different meanings of probability

On the one hand, there is the frequentist interpretation of probability. Here, probability is treated like a physical quantity that can be measured by performing experiments; at least thought experiments should be conceivable for this endeavor. For example, if a die is given and the probabilities of its six faces are to be determined, the die can be thrown N times and the relative frequencies of the faces used as estimates of the pertaining probabilities. As N goes to infinity, the law of large numbers guarantees that the relative frequencies converge to the probabilities.

On the other hand, if someone is asked about the probability of life on Mars, after some intensive consideration the answer could be, for example, 0.0001 or maybe 0.5. Obviously, such answers have no frequentist meaning (Lehner et al. 1996). Either there is life on Mars or not; the point is that it is unknown. No repeated experiment, not even a reasonable thought experiment, is conceivable in which trials could be performed in order to count the cases in which there was life on Mars or not.

The first answer, 0.0001, could be the result of thorough considerations about the physical conditions on Mars and their consequences for the existence of biological life. It quantifies an individual belief. The second answer, 0.5, could express that there is no idea about the possibility of life on Mars at all and therefore expresses complete ignorance (Lehner et al. 1996). Again, it quantifies a belief, or more specifically, a degree of belief (DoB). DoBs are consistent with the axioms of Kolmogorov and generalize the frequentist interpretation: if a frequentist experiment can be performed and relative frequencies calculated, this result can of course be adopted as a DoB, which in this special case is determined empirically.

DoBs can be subdivided into objective and subjective DoBs. In the former case, given evidence is transformed into DoBs in an objective way, so that two individuals faced with the same facts and having the same knowledge would derive the same DoBs. In the latter case, each individual derives his own subjective DoBs about all relevant factors.

Objective DoBs are of special interest, because there are well-understood approaches for establishing DoBs in an impartial way, such as the Maximum Entropy Principle (MEP) (Jaynes 1968); see also Sect. 4.4 for more details. It takes all given facts and knowledge as constraints and calculates the DoB that has maximum entropy while simultaneously fulfilling all constraints. MEP–DoBs are therefore minimally prejudiced and do not implicitly introduce any additional assumptions, i.e. no additional bias. If risk is to be quantified from an objective point of view, the MEP is a suitable approach for importing given facts and knowledge formally into DoBs and thus into the probability calculus.

Subjective DoBs, on the other hand, allow each agent within a scenario to have his own point of view and his own beliefs about the probabilities of events and realizations of variables. Individual beliefs may differ greatly from one individual to another and may also deviate strongly from objective DoBs. But the decisions of each agent clearly depend on that agent’s beliefs. For example, an agent who intends to commit a burglary evaluates his personal risk based on his subjective DoBs about the vulnerability of the house and the probabilities of being successful and of being caught and punished, rather than on the objective values of those quantities, which are usually unknown to him.

4.2.2 Subjects of Protection, Sources of Danger and Protectors

All quantities relate to a particular time interval of length T, within which they are assumed to remain constant.

$$\text{S} = \text{S}_{\text{Persons}} \cup \text{S}_{\text{Objects}} \cup \text{S}_{\text{Systems}} \cup \text{S}_{\text{Legal interests}},$$
(1)

denotes the set of subjects of protection.

These subjects \(s \in \text{S}\) have budgets \(b(s)\) for safety and security measures and flanks of vulnerability \(f \in \text{F}_{s}\).

Dangers (attackers, causers) d are elements of the set of sources of danger

$$\text{D} = \text{D}_{\text{WP}} \cup \text{D}_{\text{WM}} \cup \text{D}_{\text{UC}} \cup \text{D}_{\text{UR}} ,$$
(2)

where the indices have the following meanings: WP denotes willful danger as a purpose (vandalism, amok, …), WM willful danger as a means (burglary, robbery, …), UC unintended danger due to carelessness or negligence (inattention, breach of duty), and UR unintended danger with random character (technical failures, natural disasters).

We define two further subsets \(\text{D}_{\text{U}} : = \text{D}_{\text{UC}} \cup \text{D}_{\text{UR}}\) and \(\text{D}_{\text{W}} : = \text{D}_{\text{WP}} \cup \text{D}_{\text{WM}}\) that structure the dangers \(\text{D} = \text{D}_{\text{W}} \cup \text{D}_{\text{U}}\) into a willful and an unintended subcategory.

In the following, \(d \in \text{D}_{\text{W}}\) are called »attackers«. Attackers perpetrate attacks a, which are pooled in the set of attacks A, \(a \in \text{A}\). An attacker has a budget \(b(d)\) with which he finances the effort of an attack. The attacks a that an attacker d is able to perform are summarized in the subset \(\text{A}_{d} \subseteq \text{A}\).

Sources of danger \(d \in \text{D}_{\text{U}}\) generate incidents i, which are pooled in the set of incidents \(\text{I}\), \(i \in \text{I}\). In the following, \(d \in \text{D}_{\text{U}}\) are called »causers«, because they cause incidents. The incidents attributed to a causer \(d \in \text{D}_{\text{U}}\) are summarized in the subset \(\text{I}_{d} \subseteq \text{I}\).
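
For illustration, the taxonomy of Eq. (2) and the attacker/causer distinction can be encoded directly as data structures. The following minimal Python sketch is purely illustrative; the class and attribute names (DangerCategory, SourceOfDanger, etc.) are hypothetical conveniences, not part of the formal model:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical encoding of D = D_WP ∪ D_WM ∪ D_UC ∪ D_UR from Eq. (2).
class DangerCategory(Enum):
    WP = "willful, as a purpose"   # vandalism, amok, ...
    WM = "willful, as a means"     # burglary, robbery, ...
    UC = "unintended, careless"    # inattention, breach of duty
    UR = "unintended, random"      # technical failures, natural disasters

@dataclass
class SourceOfDanger:
    name: str
    category: DangerCategory
    budget: float = 0.0            # b(d); finances the effort of attacks
    actions: tuple = ()            # A_d for attackers, I_d for causers

    @property
    def is_attacker(self) -> bool:  # d ∈ D_W = D_WP ∪ D_WM
        return self.category in (DangerCategory.WP, DangerCategory.WM)

    @property
    def is_causer(self) -> bool:    # d ∈ D_U = D_UC ∪ D_UR
        return not self.is_attacker

d = SourceOfDanger("burglar", DangerCategory.WM, budget=1_000.0,
                   actions=("break_door", "smash_window"))
assert d.is_attacker and not d.is_causer
```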

If an attack or incident happens, the success (harm) of such an event is quantified by the degree of success \(\beta \in [0,1]\): β = 1 denotes total success and β = 0 stands for no success at all.

An attack or an incident on s via flank f with success β costs s \(c(s,f,\beta) \in [0,\infty)\). Vulnerability with respect to attacks or incidents is modelled as a DoB-density: \(p_{\text{V}}(\beta|i,s,f)\) and \(p_{\text{V}}(\beta|a,s,f)\) describe the DoB-densities for the degree of success β if i or a, respectively, hits s via f (see Fig. 7).

Fig. 7 Vulnerability is modeled by the DoB-density of the success β of a certain attack a on the subject of protection s via its flank of vulnerability f [with the discrete probability \(\Pr_{\text{W}}\) for a willful threat; see Eq. (11)]. Thus the vulnerability does not depend on the source of danger but only on the attack a that an attacker d performs. Analogously, the same holds if attackers are replaced by causers \(d \in \text{D}_{\text{U}}\) and attacks by incidents i

Remark In the case that the costs \(c(s,f,\beta )\) are proportional to the success \(\beta\), i.e.

$$c(s,f,\beta ) = \beta \times c(s,f) ,$$
(3)

costs and vulnerability can be factorized:

$$\int_{0}^{1} c(s,f,\beta) \times p_{\text{V}}(\beta|i,s,f)\,\text{d}\beta = c(s,f) \times v(s,f,i),$$
(4)

where

$$v(s,f,i) := E_{\beta|i,s,f}\{\beta\} = \int_{0}^{1} \beta \times p_{\text{V}}(\beta|i,s,f)\,\text{d}\beta,$$
(5)

is the mean success-DoB of an incident i.
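
The factorization of Eqs. (3)–(5) can be checked numerically. The following sketch assumes an illustrative Beta(2, 5) density for \(p_{\text{V}}(\beta|i,s,f)\) and an invented cost level c(s, f); none of the concrete numbers come from the paper:

```python
from scipy.integrate import quad
from scipy.stats import beta as beta_dist

# Illustrative vulnerability DoB-density p_V(beta|i,s,f); the Beta(2, 5)
# shape and the cost level are invented placeholders.
p_V = beta_dist(2, 5).pdf
c_sf = 100_000.0   # c(s, f): cost of total success (beta = 1), Eq. (3)

# Left-hand side of Eq. (4): expected cost with c(s,f,beta) = beta * c(s,f).
lhs, _ = quad(lambda b: b * c_sf * p_V(b), 0.0, 1.0)

# Right-hand side: c(s,f) times the mean success-DoB v(s,f,i) of Eq. (5).
v, _ = quad(lambda b: b * p_V(b), 0.0, 1.0)   # E{beta} = 2/7 ≈ 0.286
rhs = c_sf * v

print(round(lhs, 1), round(rhs, 1))   # both ≈ 28571.4: factorization holds
```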

Causers of danger due to carelessness \(d \in \text{D}_{\text{UC}}\) are charged with costs \(\kappa(s,f,\beta) \in [0, \kappa_{d\_\text{Ruin}}]\). These costs correspond to a penalty for d for generating an incident \(i \in \text{I}_{d}\) that hits s via f with success β. The higher the costs for d, the lower the probability of an incident generated by d should be (deterrent effect).

A protector p ∈ P provides safety and security measures \(m(s,f) \in \text{M}\) for the flank f of s. M denotes the set of available measures and \(\text{M}^{*} \subseteq \text{M}\) the set of implemented measures. A measure m costs s the amount \(c(m(s,f))\). Of course, s can only undertake measures within his budget, which introduces the constraint \(\sum\nolimits_{m \in \text{M}^{*}} c(m(s,f)) \le b(s)\).

Measures \(m(s,f)\) should reduce the vulnerability, i.e. the success of attacks and/or incidents, and/or the probability of occurrence of attacks and/or incidents. However, \(m(s,f)\) is modelled such that it does not reduce \(c(s,f,\beta)\).
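
The budget constraint above makes the choice of the implemented set M* a small combinatorial (knapsack-type) optimization problem. A brute-force sketch with invented measures, which additionally assumes, purely for simplicity, that the risk reductions of individual measures add up:

```python
from itertools import combinations

# Hypothetical candidate measures m(s, f): (name, cost c(m(s,f)),
# achieved risk reduction Delta R_s(m)). All numbers are invented.
measures = [
    ("fence",  20_000.0, 35_000.0),
    ("alarm",  10_000.0, 25_000.0),
    ("sensor",  5_000.0, 12_000.0),
    ("guard",  40_000.0, 60_000.0),
]
budget = 50_000.0   # b(s)

def net_benefit(subset):
    """Total risk reduction minus measure costs; -inf if over budget."""
    cost = sum(c for _, c, _ in subset)
    reduction = sum(r for _, _, r in subset)
    return reduction - cost if cost <= budget else float("-inf")

best = max(
    (combo for k in range(len(measures) + 1)
           for combo in combinations(measures, k)),
    key=net_benefit,
)
print([name for name, _, _ in best])   # ['fence', 'alarm', 'sensor']
```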

The following quantities are to be understood from the attacker’s point of view. \(g(s,f,\beta)\) denotes the gain due to an attack on s via f with success β. \(p_{\text{Success}}(\beta|a,s,f)\) is the DoB-density for success β if a hits s via f. \(c_{\text{Effort}}(a,s,f)\) describes the costs of the effort of executing an attack a on s via f. \(c_{\text{Penalty}}(s,f,\beta)\) denotes the monetary equivalent of a penalty for an attack on s via f with success β.

And finally, \(\Pr (\text{Penalty} |s,f,\beta ) = 1 - \Pr (\neg \text{Penalty} |s,f,\beta )\) denotes the DoB for a punishment of an attack on s via f with success β.

4.3 Quantification of Risk

The total risk \(R_{s\_\text{total}}\) of a subject of protection s from the point of view of s can be expressed as:

$$R_{s\_\text{total}} := \underbrace{R_{s}}_{\text{Model}} + \underbrace{R_{0}}_{\text{Outside modelling scope}},$$
(6)

where \(R_{s}\) denotes the describable part of the risk and \(R_{0}\) the part of the risk that cannot be modelled. Hopefully, measures m that reduce the modelled part of the risk \(R_{s}\) do not increase \(R_{0}\) by more than this reduction, i.e.:

$$\Delta R_{s\_\text{total}}(m) := R_{s\_\text{total}}(\text{without}\;m) - R_{s\_\text{total}}(\text{with}\;m) \ge 0,$$
(7)

with

$$\Delta R_{s}(m) := R_{s}(\text{without}\;m) - R_{s}(\text{with}\;m) > 0.$$
(8)

The risk \(R_{\text{s}}\) of s from the point of view of s can be expressed as:

$$\begin{aligned} R_{s} & = \sum_{d \in \text{D}_{\text{U}}} \sum_{i \in \text{I}_{d}} \sum_{f \in \text{F}_{s}} \int_{0}^{1} c(s,f,\beta) \times p_{\text{V}}(\beta|i,s,f)\,\text{d}\beta \times \Pr\nolimits_{\text{U}}(i|s,f) \\ & \quad + \sum_{d \in \text{D}_{\text{W}}} \sum_{a \in \text{A}_{d}} \int_{0}^{1} c(s,\tilde{f},\beta) \times p_{\text{V}}(\beta|a,s,\tilde{f})\,\text{d}\beta \times \Pr\nolimits_{\text{W}}(a|s,\tilde{f}) + \sum_{m \in \text{M}^{*}} c(m(s,f)), \end{aligned}$$
(9)

where \(\Pr_{\text{U}}(i|s,f)\) denotes the probability of occurrence (DoB) of an incident i caused by d on s via f, and \(\Pr_{\text{W}}(a|s,f)\) the probability of occurrence (DoB) of an attack a by d on s via f.

The first summand of \(R_{s}\) corresponds to the risk relating to safety, the second quantifies the risk relating to security, and the third quantifies the costs of the deployed measures m. Thus, \(R_{s}\) unites the rating of safety and security and also accounts for the effort of reducing the risk.
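
Equation (9) translates directly into a numerical procedure. The following sketch evaluates it for one subject with two flanks, proportional costs as in Eq. (3), and invented DoBs; the attacker is evaluated only on his most beneficial flank \(\tilde{f}\), as required by Eq. (12):

```python
from scipy.integrate import quad
from scipy.stats import beta as beta_dist

# All densities, DoBs and costs below are invented placeholders.
flanks = ["door", "window"]
cost = {"door": 50_000.0, "window": 30_000.0}   # c(s, f), Eq. (3)
p_V = {"door": beta_dist(2, 8).pdf, "window": beta_dist(3, 4).pdf}

Pr_U = {"door": 0.02, "window": 0.01}   # DoB of one incident i per flank
Pr_W = 0.05                             # DoB of one attack a on f~
f_tilde = "window"                      # attacker's best flank, Eq. (12)
measure_costs = 4_000.0                 # sum of c(m(s, f)) over M*

def expected_cost(f):
    """Inner integral of Eq. (9) with proportional costs c(s,f,beta)."""
    val, _ = quad(lambda b: b * cost[f] * p_V[f](b), 0.0, 1.0)
    return val

R_s = (sum(expected_cost(f) * Pr_U[f] for f in flanks)   # safety summand
       + expected_cost(f_tilde) * Pr_W                   # security summand
       + measure_costs)                                  # cost of measures
print(round(R_s, 2))   # ≈ 4971.43
```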

Compared to classical statistical decision theory (Berger 1993), a third factor comes into play in addition to the classical risk factors probability and cost: the vulnerability, modelled by \(p_{\text{V}}\). This is in accordance with the approaches in Baker (2005) and Broder and Tucker (2012), whereas here the third factor is formulated as a conditional DoB-density, so that compliance with probability theory is preserved. For example,

$$p_{\text{V}}(\beta|i,s,f) \times \Pr\nolimits_{\text{U}}(i|s,f),$$
(10)

is equal to the joint DoB-density \(p(i,\beta|s,f)\) for the occurrence of an incident i with success β given s, f. An attacker will only undertake an attack if he simultaneously has motivation, power and occasion. Therefore, \(\Pr_{\text{W}}(a|s,f)\) is modelled as a product of three DoB factors:

$$\text{Pr}_{\text{W}} = \text{Pr}_{\text{Motivation}} \times \text{Pr}_{\text{Power}} \times \text{Pr}_{\text{Occasion}} ,$$
(11)
$$\tilde{f} := \mathop{\arg\max}\limits_{f \in \text{F}_{s}} \Big\{ \max_{a \in \text{A}_{d}} \{ U_{d}(a,s,f) \} \Big\},$$
(12)

denotes the most beneficial flank of vulnerability of s from the viewpoint of the attacker d.

To quantify the expected benefit for an attacker d of perpetrating an attack a on s via f, the utility \(U_{d}(a,s,f) \in [U_{\min,d}, U_{\max,d}]\) is modelled as:

$$\begin{aligned} U_{d}(a,s,f) & := \int_{0}^{1} g(s,f,\beta) \times p_{\text{Success}}(\beta|a,s,f)\,\text{d}\beta - c_{\text{Effort}}(a,s,f) \\ & \quad - \int_{0}^{1} c_{\text{Penalty}}(s,f,\beta) \times \Pr(\text{Penalty}|s,f,\beta) \times p_{\text{Success}}(\beta|a,s,f)\,\text{d}\beta \\ & = \int_{0}^{1} \left[ g(s,f,\beta) - c_{\text{Penalty}}(s,f,\beta) \times \Pr(\text{Penalty}|s,f,\beta) \right] \times p_{\text{Success}}(\beta|a,s,f)\,\text{d}\beta \\ & \quad - c_{\text{Effort}}(a,s,f), \end{aligned}$$
(13)

where \(c_{\text{Effort}}(a,s,f) \le b(d)\) holds. Obviously, it is straightforward to apply the risk modelling approach also to sets S of subjects of protection s that are endangered by D. In this case, the risk can simply be calculated by summing over S: \(R_{\text{S}} = \sum\nolimits_{s \in \text{S}} R_{s}\).
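
The attacker’s calculus of Eqs. (11)–(13) can be sketched in the same way. The gains, penalty terms, and success densities below are invented; only the structure follows Eq. (13), and the flank choice follows Eq. (12):

```python
from scipy.integrate import quad
from scipy.stats import beta as beta_dist

# Invented attacker-side quantities for two flanks of one subject s.
flanks = ["door", "window"]
gain = 20_000.0                         # g(s,f,beta) = beta * gain
p_success = {"door": beta_dist(2, 6).pdf, "window": beta_dist(5, 2).pdf}
c_effort = {"door": 500.0, "window": 800.0}   # c_Effort(a,s,f) <= b(d)
c_penalty = 15_000.0                    # monetary equivalent of the penalty
pr_penalty = {"door": 0.3, "window": 0.1}     # Pr(Penalty|s,f,beta), constant

def utility(f):
    """U_d(a,s,f) according to the second form of Eq. (13)."""
    integrand = lambda b: ((b * gain - c_penalty * pr_penalty[f])
                           * p_success[f](b))
    val, _ = quad(integrand, 0.0, 1.0)
    return val - c_effort[f]

f_tilde = max(flanks, key=utility)      # Eq. (12), for a single attack a
print(f_tilde, round(utility(f_tilde), 2))   # window 11985.71

# Eq. (11): the occurrence DoB of the attack factorizes into three DoBs,
# here with invented values for motivation, power and occasion.
Pr_W = 0.6 * 0.8 * 0.4
```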

4.4 Determination of Probabilities

The crucial challenge of the presented framework is the determination of the probabilities, or more specifically, the DoBs that are constituents of the risk terms. This is especially difficult if the probabilities are very low, so that there is not enough data to estimate the DoBs with statistical methods. From a methodological point of view, there are different options for managing this task.

4.4.1 Maximum Entropy Principle (MEP)

To define the DoBs in an objective manner, the Maximum Entropy Principle (MEP) can be applied (Jaynes 1968). Shannon’s entropy

$$H := -\sum_{\omega \in \Omega} \Pr(\omega) \log(\Pr(\omega)),$$
(14)

in the discrete case and the differential entropy

$$h := -\int_{\Omega} p(\omega) \log(p(\omega))\,\text{d}\omega,$$
(15)

for continuous variables ω quantify the concentration of the DoB on the domain Ω. The lower the concentration, the higher the pertaining entropy. Without any constraints, the DoB-distribution with a constant DoB value for each \(\omega \in \Omega\) achieves the maximum entropy. If we know any facts about \(\omega \in \Omega\), those facts are employed as constraints with respect to which the DoB with maximum entropy is calculated. Thus, the resulting DoB maps the given facts into the probabilistic calculus in a way that avoids any additional implicit assumptions. Therefore, the MEP–DoB is impartial beyond the evidence of the considered facts.
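
As a concrete illustration of this constrained maximization, consider Jaynes’s classic dice example (the constraint value 4.5 is the usual textbook choice, not a fact from this paper): find the DoB over the six faces with maximum entropy, given only that the mean roll is 4.5. A sketch using numerical optimization:

```python
import numpy as np
from scipy.optimize import minimize

faces = np.arange(1, 7)

def neg_entropy(p):
    """Negative Shannon entropy of Eq. (14), to be minimized."""
    p = np.clip(p, 1e-12, 1.0)   # guard against log(0)
    return float(np.sum(p * np.log(p)))

constraints = (
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},    # normalization
    {"type": "eq", "fun": lambda p: faces @ p - 4.5},  # known fact: mean 4.5
)
res = minimize(neg_entropy, x0=np.full(6, 1 / 6), method="SLSQP",
               bounds=[(0.0, 1.0)] * 6, constraints=constraints)
print(res.x.round(4))   # MEP-DoB increases monotonically toward face 6
```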

The adoption of the MEP is strongly justified by a set of axioms from which the MEP can be derived unambiguously (Paris 1999). The axioms are formulated in a generally understandable way and can be considered commonsense reasoning principles. According to Beierle et al. (2015), these principles are:

  1. Irrelevant information principle: knowledge that is entirely irrelevant to the problem under consideration can be ignored.

  2. Renaming principle: renaming all variables used to describe the problem does not influence the choice of the best model.

  3. Obstinacy principle: receiving information that is already known is redundant and does not change the best model.

  4. Equivalence principle: if two knowledge bases are semantically equivalent according to the axioms of probability theory, they should have the same best model.

  5. Relativization principle: probabilistic knowledge about an event is not affected by knowledge that assumes that the event has not happened.

  6. Weak independence principle: if two events A and B cannot occur together, then probabilistic knowledge about B does not affect the chosen probability for any event that happens together with A.

  7. Continuity principle: very small changes in the facts of the given probabilistic knowledge base can only result in very small changes in the resulting probabilities of the best model.

»Best model« here means the probability distribution that complies with all given facts (i.e. with the probabilistic knowledge base mentioned above) and with principles 1–7. A rational agent (individual, reasoner) who uses probabilities and complies with these principles should choose the MEP to determine his probability distributions (Beierle et al. 2015).

4.4.2 Conditioning on Rare Events (CORE)

To cope with the problem of determining the probabilities of extremely rare events (incidents or attacks), their estimation and/or assessment can also be omitted completely and the risk formulated conditionally, i.e. for each event the risk is expressed under the condition that an event i or a has occurred. Given, e.g., that an incident i has occurred, the first summand of Eq. (9) changes to

$$\sum_{f \in \text{F}_{s}} \int_{0}^{1} c(s,f,\beta) \times p_{\text{V}}(\beta|i,s,f)\,\text{d}\beta,$$
(16)

and then constitutes a component of the conditional risk \(R_{s}|i\).

4.5 Subjective Views of Agents

Objective cost functions and probabilities of occurrence must be clearly distinguished from subjective assessments of those quantities. A rational agent makes his decisions according to his subjective view, i.e. according to his beliefs about the costs in case an incident or an attack happens and his DoBs with respect to the probability of occurrence. According to Mainzer (2016) and Tversky and Kahneman (2000), individuals rate costs and probabilities of occurrence with a cognitive bias. On the one hand, the probabilities of very infrequent events are usually overestimated and those of very frequent events are underestimated. On the other hand, the costs are also distorted in a nonlinear manner, because an increment of costs is rated relative to the absolute cost level, which leads approximately to a logarithmic scale and therefore to a strong flattening of the subjective cost functions for higher values. Furthermore, the readiness to take risks, or conversely, the risk aversion of an individual introduces asymmetries between positive and negative costs (i.e. profits). If \(c_{\text{objective}}\) and \(p_{\text{objective}}\) are the objective costs and objective probabilities, respectively, the transition to subjective costs, subjective probabilities (DoBs) and thus to subjective risk can be accomplished mathematically using value functions υ(.) and π(.):

$$c_{\text{subjective}} = \upsilon(c_{\text{objective}}),$$
(17)
$$p_{\text{subjective}} = \pi(p_{\text{objective}}),$$
(18)
$$R_{\text{subjective}} = \varPsi\{\upsilon(c_{\text{objective}})\,\pi(p_{\text{objective}})\},$$
(19)

where \(\varPsi\{.\}\) denotes an ensemble functional, e.g. an integral or a selection operator. Within the presented framework, quantities from the point of view of an individual are always to be understood as subjective quantities. In the case of probabilities, the notion of DoB encapsulates the individual’s assessment of the frequency of events as well as the individual’s cognitive bias.
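
As an illustration of Eqs. (17)–(19), plausible value and weighting functions can be borrowed from prospect theory; the functional forms and parameter values below are the published estimates of Tversky and Kahneman, used here only as a sketch, and the loss scenarios are invented:

```python
import numpy as np

# Prospect-theory-style value function v(.) and weighting function pi(.);
# the parameters are Tversky/Kahneman's estimates, not from this paper.
ALPHA, LAMBDA, GAMMA = 0.88, 2.25, 0.61

def v(c):
    """Subjective cost: concave flattening for large magnitudes; losses
    (positive costs) weigh more than equal-sized profits (asymmetry)."""
    c = np.asarray(c, dtype=float)
    return np.where(c >= 0, LAMBDA * c ** ALPHA, -((-c) ** ALPHA))

def pi(p):
    """Probability weighting: overweights rare, underweights frequent events."""
    p = np.asarray(p, dtype=float)
    return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA)

# Eq. (19) with Psi{.} chosen as a plain sum over discrete loss scenarios.
c_objective = np.array([1_000.0, 50_000.0])   # invented costs of two events
p_objective = np.array([0.10, 0.001])         # their objective probabilities
R_subjective = float(np.sum(v(c_objective) * pi(p_objective)))

print(round(float(pi(0.001)), 4))   # ≈ 0.0144: rare event overweighted ~14x
```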

4.6 Introduction of Temporal Dynamics

Up to now, all quantities have been treated as if they were constant within a time interval of duration T. In order to cover real-world problems, it is necessary to equip the approach with time dependency. If, for example, a measure m is implemented to improve the security level of s, this will influence the behavior of an intelligent opponent d. Over a longer time period T this would couple the different quantities implicitly and would obscure the interplay between s and d.

A straightforward approach is to model all quantities as time series. An upper index \(k \in {\mathbb{N}}_{0}\) denotes the discrete instant of time. Additionally, a transition operator \(\Phi^{k}\) is introduced that maps the relevant quantities from time step k to k + 1:

$$(b^{k}(s), m^{k}, \ldots, p_{\text{V}}^{k}, \Pr\nolimits_{\text{U}}^{k}, \Pr\nolimits_{\text{W}}^{k}, R_{s}^{k}, U_{d}^{k}) \xrightarrow{\;\Phi^{k}\;} (b^{k+1}(s), m^{k+1}, \ldots, p_{\text{V}}^{k+1}, \Pr\nolimits_{\text{U}}^{k+1}, \Pr\nolimits_{\text{W}}^{k+1}, R_{s}^{k+1}, U_{d}^{k+1}).$$
(20)

It is assumed that the time discretization is fine enough to keep pace with the dynamics of the modelled system, so that all quantities can be treated as constant within a time step k.

For example, the influence of a security measure \(m^{k}\) implemented at time step k on \(b(s), p_{\text{V}}, \Pr_{\text{W}}, R_{s}\) and \(U_{d}\) is modelled by the change from \(b^{k}(s), p_{\text{V}}^{k}, \Pr_{\text{W}}^{k}, R_{s}^{k}\) and \(U_{d}^{k}\) to \(b^{k+1}(s), p_{\text{V}}^{k+1}, \Pr_{\text{W}}^{k+1}, R_{s}^{k+1}\) and \(U_{d}^{k+1}\), accomplished by the transition operator \(\Phi^{k}\).
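
A minimal sketch of such a transition operator \(\Phi^{k}\); the coupling rules (a measure halves \(\Pr_{\text{W}}\), the attacker adapts by 10% per step, and the risk is re-evaluated from \(\Pr_{\text{W}}\)) are invented for illustration only:

```python
# Invented state and transition rules; phi maps time step k to k + 1 as
# in Eq. (20). The risk update is a stand-in for re-evaluating Eq. (9).
state = {"b_s": 100_000.0, "Pr_W": 0.20, "R_s": 25_000.0 * 0.20}

def phi(state, measure_cost=None):
    """Transition operator Phi^k: map the quantities from k to k + 1."""
    nxt = dict(state)
    if measure_cost is not None and measure_cost <= nxt["b_s"]:
        nxt["b_s"] -= measure_cost          # the budget pays for the measure
        nxt["Pr_W"] *= 0.5                  # the measure deters the attacker
    nxt["Pr_W"] = min(1.0, nxt["Pr_W"] * 1.1)   # ... who adapts over time
    nxt["R_s"] = 25_000.0 * nxt["Pr_W"]     # stand-in for Eq. (9)
    return nxt

for k in range(5):
    state = phi(state, measure_cost=10_000.0 if k == 0 else None)
    print(k + 1, round(state["Pr_W"], 4), round(state["R_s"], 2))
```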

5 Conclusions, Challenges, and Summary

Based on a role concept, we have introduced a mathematical framework that makes it possible to model the risk of a subject of protection with respect to both safety and security in a unified manner. The roles and quantities have clear semantics, which is a helpful prerequisite for determining the model parameters quantitatively when the framework is applied to real problems. Nevertheless, in practice it is very challenging to estimate the involved quantities with sufficient precision. Especially the estimation of the different probabilities is far from trivial. If attacks or incidents occur very seldom, there is frequently not enough data available to perform a standard statistical analysis. The only way out is to adopt the wider interpretation of probabilities as degrees of belief (DoB). Within Bayesian statistics this is the usual semantics of probability. In the extreme case, it allows probabilities to express the subjective beliefs of an agent (Bernardo and Smith 1994), as long as the syntactic rules for calculating with probabilities, i.e. Kolmogorov’s axioms, are not violated.

The quantitative formulation of the risk of the subjects of protection and of the utility of attackers should make it possible to run simulations, e.g. Monte Carlo or agent-based simulations, in order to compute the risk numerically and to generate plausible event sequences from a simulated game between instances of the introduced roles.
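
A minimal Monte Carlo sketch along these lines, sampling attack occurrence and degree of success per simulated period; all parameters are invented placeholders:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
N = 100_000            # number of simulated periods of length T
Pr_W = 0.05            # occurrence DoB of a single attack, Eq. (11)
cost_full = 30_000.0   # c(s, f~) for beta = 1, proportional costs (Eq. 3)

attacks = rng.random(N) < Pr_W      # does the attack occur in this period?
beta = rng.beta(3, 4, size=N)       # degree of success, drawn from p_V
losses = np.where(attacks, beta * cost_full, 0.0)

# The empirical mean estimates the security summand of Eq. (9):
print(round(losses.mean(), 2))      # ≈ 0.05 * (3/7) * 30000 ≈ 642.86
```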

Future work will focus on methods for estimating the parameters of the model and on applying the approach to real-world safety and security tasks. Furthermore, we strive for a UML-based conceptualization of all terms of the model according to the ideas proposed in Schnieder and Schnieder (2009, 2013). The further development of the modelling approach will be pursued especially within the working group “Themennetzwerk Sicherheit” of acatech, the German National Academy of Science and Engineering.