On Quantitative Analysis of Attack–Defense Trees with Repeated Labels
Abstract
Ensuring the security of complex systems is a difficult task that requires the use of numerous tools originating from various domains. Among those tools we find attack–defense trees, a simple yet practical model for the analysis of scenarios involving two competing parties. Enhancing the well-established model of attack trees, attack–defense trees are trees with labeled nodes, offering an intuitive representation of possible ways in which an attacker can harm a system, and of the means of countering the attacks that are available to the defender. The growing palette of methods for quantitative analysis of attack–defense trees provides security experts with tools for determining the most threatening attacks and the best ways of securing the system against them. Unfortunately, many of those methods may fail or provide the user with distorted results if the underlying attack–defense tree contains multiple nodes bearing the same label. We address this issue by studying conditions ensuring that the standard bottom-up evaluation method for quantifying attack–defense trees yields meaningful results in the presence of repeated labels. For the case when those conditions are not satisfied, we devise an alternative approach for the quantification of attacks.
1 Introduction
Beginning with 19th century chemistry and the groundbreaking work of Cayley, who used them for the enumeration of isomers, trees – connected acyclic graphs – have a long history of application to various domains. Those include safety analysis of systems using the model of fault trees [10], developed in the 1960s, and security analysis with the assistance of attack trees, which fault trees inspired. Attack trees were introduced by Schneier in [26] for the purpose of analyzing the security of systems and organizations. Seemingly simple, attack trees offer a compromise between expressiveness and usability, which not only makes them applicable for industrial purposes [23], but also puts them at the core of many more complex models and languages [11, 24]. An extensive overview and comparison of attack tree-based graphical models for security can be found in [20]. A survey focusing on scalability, complexity analysis and practical usability of such models has recently been provided in [12].
Attack–defense trees [18] are one of the most well-studied extensions of attack trees, with new methods for their analysis developed yearly [2, 3, 8, 21]. Attack–defense trees enhance attack trees with nodes labeled with goals of a defender, thus enabling the modeling of interactions between the two competing actors. They have been used to evaluate the security of real-life systems, such as ATMs [7], RFID-managed warehouses [4] and cyber-physical systems [16]. Both the theoretical developments and the practical studies have proven that attack–defense trees offer a promising methodology for security evaluation, but they have also highlighted room for improvement. The objective of the current paper is to address the problem of quantitative analysis of attack–defense trees with repeated labels.
Related Work. It is well-known that the analysis of an attack–defense tree becomes more difficult if the tree contains repeated labels. This difficulty is sometimes recognized, e.g., in [2, 21], where the authors explicitly assume the absence of repeated labels in order for their methods to be valid. In some works the problem is avoided (or overlooked) by interpreting repeated labels as distinct instances of the same goal, thus, de facto, as distinct goals (e.g., [8, 13, 18, 22]), or by distinguishing between the repetitions occurring in specific subtrees of a tree, as in [3]. Recently, Bossuat and Kordy have established a classification of repeated labels in attack–defense trees, depending on whether the corresponding nodes represent exactly the same instance or different instances of a goal [5]. They point out that, if the meaning of repeated labels is not properly specified, then the fast, bottom-up method for identifying attacks that optimize an attribute (e.g., minimal cost, probability of success, etc.), as used in [15, 18, 22], might yield tainted results.
Repeated labels are also problematic in other tree-based models, for instance fault trees. While some methods for the qualitative analysis of fault trees with repeated basic events (or, more generally, shared subtrees) have been developed [6, 27], their quantification might rely on approximate methods. For example, the probability of a system failure can be evaluated using the rare event approximation approach (see [10], Chap. XI), while a simple bottom-up procedure gives an exact result in fault trees with no shared subtrees [1]. This last observation is consistent with the results previously obtained for attack–defense trees (see Theorems 2–4 in [2]).
Contribution. The contribution of this work is threefold. First, we determine sufficient conditions ensuring that the standard quantitative bottom-up analysis of attack–defense trees with repeated labels is valid. Second, we prove that some of these conditions are in fact necessary for the analysis to be compatible with a selected semantics for attack–defense trees. Finally, for the case when these conditions are not satisfied, we propose a novel, alternative method for evaluating attributes that takes the presence of repeated labels into account.
Paper Structure. The model of attack–defense trees is introduced in detail in the next section. In Sect. 3, the attributes and existing methods for their evaluation are explained. In Sect. 4, we present our main results on the quantification of attack–defense trees with repeated labels. We give proofs of these results in Sect. 5, and conclude in Sect. 6.
2 Attack–Defense Trees
Attack–defense trees are rooted trees with labeled nodes that allow for an intuitive graphical representation of scenarios involving two competing actors, usually called attacker and defender. Nodes of a tree are labeled with goals of the actors, with the label of the root of the tree being the main goal of the modeled scenario. The actor whose goal is represented by the root is called proponent and the other one is called opponent. The aim of the proponent is to achieve the root goal, whereas the opponent tries to make this impossible.
When several nodes bear the same label, two interpretations are possible:
1. either the nodes represent the same single instance of the goal – e.g., cutting the power off in a building can be done once and has multiple consequences; thus a number of refined nodes might have a node labeled cutPowerOff among their child nodes, but all these nodes will represent exactly the same action of cutting the power off;
2. or else each of the nodes is treated as a distinct instance of the goal. For instance, while performing an attack, the attacker might need to pass through a door twice – once to enter and a second time to leave a building. Since these actions refer to the same door and the same attacker, the corresponding nodes will, in most cases, hold the same label goThroughDoor. However, it is clear that they represent two different instances of the same goal.
In this work we assume the first of these interpretations. In particular, following [5], we call a basic action that serves as a label for at least two nodes a clone or a cloned basic action, and interpret all of its occurrences as the same instance of a goal. Nodes representing distinct instances of the same goal, or distinct goals, are assumed in this work to have different labels.
An example of attack–defense tree^{1} is represented in Fig. 1. In this tree, the proponent is the attacker and the opponent is the defender. According to the attack–defense trees’ convention, nodes representing goals of the attacker are depicted using red circles, and those of the defender using green rectangles. Children of an \(\mathtt {AND}\) node are joined by an arc, and countermeasures are attached to nodes they are supposed to counter via dotted edges.
Example 1
In the attack–defense scenario represented by the attack–defense tree from Fig. 1, the proponent wants to steal money from the opponent’s account. To achieve this goal, they can use physical means, i.e., force the opponent to reveal their PIN, steal the opponent’s card and then withdraw money from an ATM. One way of learning the PIN would be to eavesdrop on the victim when they enter the PIN. This could be prevented by covering the keypad with a hand. Covering the keypad fails if the proponent monitors the keypad with a hidden micro-camera installed at an appropriate spot. Another way of getting the PIN would be to force the opponent to reveal it.
Instead of attacking from a physical angle, the proponent can steal money by exploiting online banking services. In order to do so, they need to learn the opponent’s user name and password. Both of these goals can be achieved by creating a fake bank website and using phishing techniques to trick the opponent into entering their credentials. The proponent could also try to guess what the password and the user name are. Using a very strong password would counter such a guessing attack. Once the proponent obtains the credentials, they use them to log into the online banking services and execute a transfer. Transfer dispositions might be additionally secured with two-factor authentication using mobile phone text messages. This security measure could be countered by the proponent by stealing the opponent’s phone.
Note that even though there are two nodes labeled with phishing in the tree, they actually represent the same instance of the same action. The proponent does not need to perform two different phishing attacks to get the password and the user name—setting up one phishing website and sending one phishing email will suffice for the proponent to get both credentials. Thus, the two nodes labeled phishing are clones.
Let us now introduce a formal notation for attack–defense trees, which we will use throughout this paper. Such notation is necessary to formally define the meaning of attack–defense trees in terms of formal semantics and to specify the algorithms for their quantitative analysis.
Example 2
We denote the set of trees generated by grammar (1) with \(\mathbb {T}\).
In order to analyze possible attacks in an attack–defense tree, in particular, to determine the cheapest ones, or the ones that require the least amount of time to execute, one needs to decide what is considered to be an attack. This can be achieved with the help of semantics that provide formal interpretations of attack–defense trees. Several semantics for attack–defense trees have been proposed in [18]. Below, we recall two ways of interpreting attack–defense trees and the notions of attack they entail.
Definition 1
Definition 1 formalizes one of the most intuitive and widely used ways of interpreting attack–defense trees, where every basic action is assigned a propositional variable indicating whether or not the action is satisfiable. In light of the propositional semantics, an attack in an attack–defense tree \({T} \) is any assignment of values to the propositional variables such that the formula \(\mathcal {P}({T})\) evaluates to true. We note that this natural approach is often used without invoking the propositional semantics explicitly (e.g., in [2] or [8]). Observe also that due to the idempotency of the logical operators \(\vee \) and \(\wedge \), and the fact that every basic action is assigned a single variable, when the propositional semantics is used, cloned actions are indeed treated as the same instance of the same action. In particular, this implies that the trees \(\mathtt {AND} ^{\mathrm {p}}(\mathtt {b}, \mathtt {OR} ^{\mathrm {p}}(\mathtt {b},\mathtt {b} '))\) and \(\mathtt {b} \) are equivalent under the propositional interpretation. Such an approach might not always be desirable, especially when we want to know not only whether attacks are possible, but also how they can be achieved. To accommodate this point of view, the set semantics has recently been introduced in [5]. We briefly recall its construction below.
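The propositional equivalence mentioned above can be checked mechanically by truth-table enumeration. The following sketch encodes the two trees as Boolean functions (the encoding is ours, not the paper's; b2 stands for \(\mathtt{b}'\)) and verifies that \(\mathtt{AND}^{\mathrm{p}}(\mathtt{b}, \mathtt{OR}^{\mathrm{p}}(\mathtt{b},\mathtt{b}'))\) and \(\mathtt{b}\) denote the same Boolean function:

```python
from itertools import product

def equivalent(f, g, num_vars):
    """Check propositional equivalence by exhaustive truth-table enumeration."""
    return all(f(*vals) == g(*vals)
               for vals in product([False, True], repeat=num_vars))

# One propositional variable per basic action, so both occurrences of the
# cloned action b share the same variable.
t1 = lambda b, b2: b and (b or b2)   # AND^p(b, OR^p(b, b'))
t2 = lambda b, b2: b                 # the tree consisting of b alone
assert equivalent(t1, t2, 2)         # equivalent under the propositional semantics
```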
Definition 2
The meaning of a pair (P, O) belonging to \(\mathcal {S}({T})\) is that if the proponent executes all actions from P and the opponent does not execute any of the actions from O, then the root goal of the tree \({T} \) is achieved. In particular, if \((P,\emptyset )\in \mathcal {S}({T})\), then the opponent cannot prevent the proponent from achieving the root goal when they execute all actions from P.
Example 3
Throughout the rest of the paper, by an attack in an attack–defense tree \(T\) we mean an element of its set semantics \(\mathcal {S}({T})\).
Grammar (1) ensures that attack–defense trees are well-typed with respect to the two players, i.e., \(\mathrm {p}\) and \(\mathrm {o}\). However, not every well-typed tree is necessarily well-formed wrt the labels used. In particular, it should be ensured that the usage of repeated labels is consistent throughout the whole tree. For instance, if the action coverKey, of covering an ATM’s keypad with a hand, can be countered by monitoring with a camera, then this countermeasure should also be attached to every other node labeled coverKey. Similarly, if the execution of the action logIn&execTrans contributes to the achievement of the proponent’s goal of stealing money via the online banking services, then this information should be kept in every subtree rooted in a node labeled via online banking. Thus, to ensure that the results of the methods developed further in the paper indeed reflect the intended aspects of a modeled scenario, in the following we assume that subtrees of an attack–defense tree that are rooted in identically labeled nodes are equivalent wrt the set semantics.
3 Quantitative Analysis Using Attributes
Among the methods for quantitative analysis of scenarios modeled with attack–defense trees are so-called attributes, introduced informally by Schneier in [26] and formalized for attack trees in [15, 22], and for attack–defense trees in [18]. Attributes represent quantitative aspects of the modeled scenario, such as the minimal cost of executing an attack or the maximal damage caused by an attack. Numerous methods for evaluating the value of an attribute on attack–defense trees exist [2, 8]; the most frequently used approach is based on the so-called bottom-up evaluation [18]. The idea behind the bottom-up evaluation is to assign attribute values to the basic actions and to propagate them up to the root of the tree using appropriate operations at the intermediate nodes. The notions of attribute and bottom-up evaluation are formalized using attribute domains.
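The bottom-up propagation just described can be sketched in a few lines. The following is a minimal illustration, assuming a tree encoded as nested tuples and an attribute domain given as a dictionary of binary operators; the encoding and the names (ORp, ANDp, Cp, ...) are ours, not the paper's notation:

```python
# A minimal sketch of the bottom-up evaluation: look up leaf values with the
# basic assignment beta, and fold child values with the node's operator.
def bottom_up(tree, domain, beta):
    tag = tree[0]
    if tag == "b":                       # leaf: basic action
        return beta[tree[1]]
    values = [bottom_up(child, domain, beta) for child in tree[1:]]
    result = values[0]
    for v in values[1:]:                 # intermediate node: combine child values
        result = domain[tag](result, v)
    return result

# Minimal attack cost (cf. Table 1): OR^p = min, AND^p = +, C^p = +, C^o = min.
plus = lambda x, y: x + y
cost_domain = {"ORp": min, "ANDp": plus, "ORo": plus, "ANDo": min,
               "Cp": plus, "Co": min}

# Illustrative tree and costs: OR^p(bribe, AND^p(break_in, steal)).
tree = ("ORp", ("b", "bribe"), ("ANDp", ("b", "break_in"), ("b", "steal")))
beta = {"bribe": 100, "break_in": 30, "steal": 20}
print(bottom_up(tree, cost_domain, beta))  # min(100, 30 + 20) = 50
```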
Definition 3
1. \(\mathtt {OP} _{\alpha }^{\mathrm {s}}\) is an unranked function on \(D_{\alpha }\),
2. \(\mathtt {C} ^{\mathrm {s}}_{\alpha }\) is a binary function on \(D_{\alpha }\).
Let \(A_{\alpha } = (D_{\alpha }, \mathtt {OR} ^{\mathrm {p}}_{\alpha }, \mathtt {AND} ^{\mathrm {p}}_{\alpha }, \mathtt {OR} ^{\mathrm {o}}_{\alpha }, \mathtt {AND} ^{\mathrm {o}}_{\alpha }, \mathtt {C} ^{\mathrm {p}}_{\alpha }, \mathtt {C} ^{\mathrm {o}}_{\alpha })\) be an attribute domain. A function \(\beta _{\alpha }:\mathbb {B} \rightarrow D_{\alpha }\) that assigns values from the set \(D_{\alpha }\) to basic actions of attack–defense trees is called a basic assignment for attribute \(\alpha \).
Definition 4
Table 1. Selected attribute domains for attack–defense trees
| Attribute | \(D_{\alpha }\) | \(\mathtt {OR} ^{\mathrm {p}}_{\alpha }\) | \(\mathtt {AND} ^{\mathrm {p}}_{\alpha }\) | \(\mathtt {OR} ^{\mathrm {o}}_{\alpha }\) | \(\mathtt {AND} ^{\mathrm {o}}_{\alpha }\) | \(\mathtt {C} ^{\mathrm {p}}_{\alpha }\) | \(\mathtt {C} ^{\mathrm {o}}_{\alpha }\) | \(\beta _{\alpha }(\mathtt {b}^{\mathrm {o}})\) |
|---|---|---|---|---|---|---|---|---|
| min. attack cost | \(\mathbb {R}_{\ge 0}\cup \{+\infty \}\) | \(\min \) | \(+\) | \(+\) | \(\min \) | \(+\) | \(\min \) | \(+\infty \) |
| max. damage | \(\mathbb {R}_{\ge 0}\cup \{\infty \}\) | \(\max \) | \(+\) | \(+\) | \(\max \) | \(+\) | \(\max \) | \(\infty \) |
| min. skill level | \(\mathbb {N}\cup \{0, +\infty \}\) | \(\min \) | \(\max \) | \(\max \) | \(\min \) | \(\max \) | \(\min \) | \(+\infty \) |
| min. nb of experts | \(\mathbb {N}\cup \{0, +\infty \}\) | \(\min \) | \(+\) | \(+\) | \(\min \) | \(+\) | \(\min \) | \(+\infty \) |
| satisfiability for \(\mathrm {p}\) | \(\{0,1\}\) | \(\vee \) | \(\wedge \) | \(\wedge \) | \(\vee \) | \(\wedge \) | \(\vee \) | 0 |
Example 4 illustrates the bottom-up procedure on the tree from Fig. 1.
Example 4
As already noticed in [22], the value of an attribute for a tree can also be evaluated directly on its semantics. For our purposes we define this evaluation as follows.
Definition 5
Example 5
Notice that the values obtained for the same tree in Examples 4 and 5 are different, despite the fact that the same basic assignment and the same attribute domain have been used. This is due to the fact that the tree from Fig. 1 contains cloned nodes, which the standard bottom-up evaluation cannot handle properly. In the next section, we provide conditions under which the standard evaluation remains valid, and develop a method for properly evaluating attributes on attack–defense trees with cloned nodes.
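The discrepancy between the two evaluations is easy to reproduce on a toy tree. The sketch below follows one plausible reading of the evaluation on the set semantics for the minimal cost attribute (combine the actions of an attack with \(+\), the attacks with each other with \(\min\), and price opponent actions at \(+\infty\)); the tree and numbers are illustrative:

```python
# Minimal cost evaluated directly on the set semantics: each attack is a pair
# (P, O); an attack whose O is non-empty is priced +inf (cf. Remark 2 below).
INF = float("inf")

def cost_on_semantics(attacks, beta):
    best = INF
    for P, O in attacks:
        total = sum(beta[b] for b in P) + (INF if O else 0)
        best = min(best, total)
    return best

# Set semantics of AND^p(b, OR^p(b, b2)): the clone b is a single instance, so
# the attacks are ({b}, {}) and ({b, b2}, {}).
attacks = [({"b"}, set()), ({"b", "b2"}, set())]
beta = {"b": 5, "b2": 1}
print(cost_on_semantics(attacks, beta))   # 5
# The bottom-up procedure on the same tree counts b twice: 5 + min(5, 1) = 6.
```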
4 Quantification on Attack–Defense Trees with Clones
Depending on what is considered to be an attack in an attack–defense tree, different semantics can be used. Note that a semantics for attack–defense trees naturally induces an equivalence relation on \(\mathbb {T} \). It is thus of great importance to select a method of quantitative analysis that is consistent with the chosen semantics, i.e., a method that returns the same result for any two trees equivalent wrt the employed semantics. This issue was recognized by the authors of [22] for attack trees, and addressed in the case of attack–defense trees in [18], with the notion of compatibility between an attribute domain and a semantics. Below, we adapt the definition of compatibility from [18] to the bottom-up computation.
Definition 6
Let \(A_{\alpha } = (D_{\alpha }, \mathtt {OR} ^{\mathrm {p}}_{\alpha }, \mathtt {AND} ^{\mathrm {p}}_{\alpha }, \mathtt {OR} ^{\mathrm {o}}_{\alpha }, \mathtt {AND} ^{\mathrm {o}}_{\alpha }, \mathtt {C} ^{\mathrm {p}}_{\alpha }, \mathtt {C} ^{\mathrm {o}}_{\alpha })\) be an attribute domain. The bottomup procedure, defined in Definition 4, is compatible with a semantics \(\equiv \) for attack–defense trees, if for every two trees \({T} _1\), \({T} _2\) satisfying \({T} _1 \equiv {T} _2\), the equality \({\alpha }_{B}({T} _1, \beta _{\alpha }) ={\alpha }_{B}({T} _2, \beta _{\alpha }) \) holds for any basic assignment \(\beta _{\alpha }\).
For instance, it is well-known that the bottom-up computation of the minimal cost using the domain from Table 1 is not compatible with the propositional semantics. Indeed, consider the trees \({T} _1=\mathtt {OR} ^{\mathrm {p}}(\mathtt {b}, \mathtt {AND} ^{\mathrm {p}}(\mathtt {b} ', \mathtt {b} ''))\) and \({T} _2=\mathtt {AND} ^{\mathrm {p}}(\mathtt {OR} ^{\mathrm {p}}(\mathtt {b}, \mathtt {b} '), \mathtt {OR} ^{\mathrm {p}}(\mathtt {b}, \mathtt {b} ''))\), whose corresponding propositional formulæ are equivalent. However, for the basic assignment \(\beta _{\mathrm {cost}}(\mathtt {b})=3, \ \beta _{\mathrm {cost}}(\mathtt {b} ')=4,\ \beta _{\mathrm {cost}}(\mathtt {b} '')=1\), the values \({\alpha }_{B}({T} _1, \beta _{\mathrm {cost}}) =3\) and \({\alpha }_{B}({T} _2, \beta _{\mathrm {cost}}) =4\) are different. Similarly, the bottom-up computation of the minimal cost attribute is not compatible with the set semantics. This can be shown by considering the trees \({T} _3=\mathtt {AND} ^{\mathrm {p}}(\mathtt {b},\mathtt {b})\) and \({T} _4=\mathtt {b} \), and will further be discussed in Corollary 1.
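The incompatibility with the propositional semantics can be checked numerically. With \(T_1=\mathtt{OR}^{\mathrm{p}}(\mathtt{b}, \mathtt{AND}^{\mathrm{p}}(\mathtt{b}',\mathtt{b}''))\) and \(T_2=\mathtt{AND}^{\mathrm{p}}(\mathtt{OR}^{\mathrm{p}}(\mathtt{b},\mathtt{b}'), \mathtt{OR}^{\mathrm{p}}(\mathtt{b},\mathtt{b}''))\), the two trees have equivalent Boolean formulas (one is the distributed form of the other), yet the bottom-up minimal cost differs (b1 and b2 stand for \(\mathtt{b}'\) and \(\mathtt{b}''\)):

```python
# OR^p -> min and AND^p -> + in the minimal cost domain (cf. Table 1).
cost = {"b": 3, "b1": 4, "b2": 1}

t1 = min(cost["b"], cost["b1"] + cost["b2"])                  # T1: min(3, 4 + 1)
t2 = min(cost["b"], cost["b1"]) + min(cost["b"], cost["b2"])  # T2: min(3,4) + min(3,1)

assert t1 == 3 and t2 == 4   # same Boolean meaning, different bottom-up values
```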
The notion of compatibility defined in Definition 6 can be generalized to arbitrary computations on attack–defense trees.
Definition 7
Let \(\mathcal {D}\) be a set and let f be a function on \(\mathbb {T} \times \mathcal {D}\). We say that f is compatible with a semantics \(\equiv \) for attack–defense trees, if for every two trees \({T} _1\), \({T} _2\) satisfying \({T} _1 \equiv {T} _2\) the equality \(f({T} _1, d)=f({T} _2,d)\) holds for any \(d\in \mathcal {D}\).
To illustrate the difference between the compatibility notions defined in Definitions 6 and 7, one can consider the method for computing the so-called attacker’s expected outcome, proposed by Jürgenson and Willemson in [17]. Since this method is not based on an attribute domain, it cannot be simulated using the bottom-up evaluation. However, the authors show that the outcome of their computations is independent of the Boolean representation of an attack tree. This means that the method proposed in [17] is compatible with the propositional semantics for attack trees.
Remark 1
As has been observed in [18, 19], there is a wide class of attribute domains of the form \((D_{\alpha }, \oplus , \otimes , \otimes , \oplus , \otimes , \oplus )\), where \((D_{\alpha }, \oplus , \otimes )\) constitutes a commutative idempotent semiring. Recall that an algebraic structure \((R, \oplus , \otimes )\) is a commutative idempotent semiring if \(\oplus \) is an idempotent operation, both operations \(\oplus \) and \(\otimes \) are associative and commutative, their neutral elements, denoted here by \(\mathtt {e}_{\oplus } \) and \(\mathtt {e}_{\otimes } \), belong to R, the operation \(\otimes \) distributes over \(\oplus \), and the absorbing element of \(\otimes \), denoted \(\mathtt {a}_{\otimes } \), is equal to \(\mathtt {e}_{\oplus } \).
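As a quick sanity check, the semiring underlying the minimal cost domain, \((\mathbb{R}_{\ge 0}\cup\{+\infty\}, \min, +)\), can be tested against these axioms on a handful of sample values (an illustration, not a proof):

```python
# Checking the commutative idempotent semiring axioms for (min, +) on samples.
import itertools

INF = float("inf")
samples = [0, 1, 2.5, 7, INF]

for a, b, c in itertools.product(samples, repeat=3):
    assert min(a, b) == min(b, a) and a + b == b + a      # commutativity
    assert min(min(a, b), c) == min(a, min(b, c))         # associativity of (+) = min
    assert (a + b) + c == a + (b + c)                     # associativity of (x) = +
    assert min(a, a) == a                                 # idempotency of (+)
    assert a + min(b, c) == min(a + b, a + c)             # (x) distributes over (+)

# Neutral and absorbing elements: e_x = 0, and a_x = +inf coincides with e_+.
assert min(0 + 5, 5) == 5 and INF + 3 == INF and min(INF, 3) == 3
```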
Remark 2
In order for the computations performed using the bottom-up evaluation to be consistent with intuition, the basic actions of the opponent are assigned a specific value. In the case of an attribute domain \((D_{\alpha }, \oplus , \otimes , \otimes , \oplus , \otimes , \oplus )\) based on a commutative idempotent semiring \((D_{\alpha }, \oplus , \otimes )\), this value is equal to \(\mathtt {a}_{\otimes } \). One of the consequences of this choice is that if for every attack \((P,O)\in \mathcal {S}({T})\) the set O is not empty, then \({\alpha }_{\mathcal {S}}({T}, \beta _{\alpha }) =\mathtt {a}_{\otimes } =\mathtt {e}_{\oplus } \), reflecting the fact that the proponent cannot achieve the root goal if the opponent executes all of their actions present in the tree. Note that this is closely related to the choice of the functions \(\mathtt {C} ^{\mathrm {p}}_{\alpha }=\otimes \) and \(\mathtt {C} ^{\mathrm {o}}_{\alpha }=\oplus \).
Example 6
For instance, in the case of the minimal cost attribute domain (cf. Table 1), which is based on the commutative idempotent semiring \((\mathbb {R}_{\ge 0}\cup \{+\infty \}, \min , +)\), the basic actions of the opponent are originally assigned \(+\infty \), which is both the neutral element for the \(\min \) operation and the absorbing element for the addition. This implies that, if on a certain path there is an opponent’s action which is not countered by the proponent, the corresponding branch will result in the value \(+\infty \), modeling the fact that this branch is impossible (infinitely costly) for the proponent. This is due to the fact that \(\mathtt {C} ^{\mathrm {p}}_{\mathrm {cost}}=+\). However, if the opponent’s action is countered by the proponent’s action, the corresponding branch will yield a real value different from \(+\infty \), because the \(\min \) operator, used for \(\mathtt {C} ^{\mathrm {o}}_{\mathrm {cost}}\), will be applied to the real number assigned to the proponent’s counter and to \(+\infty \).
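The two cases can be traced by hand with the minimal cost operators (the costs are illustrative):

```python
# Countered vs. uncountered opponent action in the minimal cost domain.
INF = float("inf")

attack_cost = 10    # proponent's action at a C^p node
counter_cost = 25   # proponent's counter to the opponent's defense

# C^p(attack, defense) with an uncountered defense: C^p = + applied to (10, +inf).
uncountered = attack_cost + INF
assert uncountered == INF             # the branch is infeasible for the proponent

# C^p(attack, C^o(defense, counter)): first C^o = min applied to (+inf, 25).
countered = attack_cost + min(INF, counter_cost)
assert countered == 35                # the branch costs 10 + 25
```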
The first contribution of this work is presented in Theorem 1. It establishes a relation between the evaluation of attributes via the bottom-up procedure and their evaluation on the set semantics. Its proof is postponed to Sect. 5.
Theorem 1
– there are no repeated labels in \({T} \), or
– the operator \(\otimes \) is idempotent,
Note that the assumptions of Theorem 1 are satisfied by any commutative idempotent semiring; thus, the same result also holds for attributes whose attribute domains are based on commutative idempotent semirings. Furthermore, one can compare the assumption on the lack of repeated labels in Theorem 1 with the linearity of an attack–defense tree, considered in [2]. The authors of [2] have proven that under this strong assumption, the evaluation method that they have developed for multi-parameter attributes coincides with their bottom-up evaluation.
Remark 3
Consider again the attribute domain specified in Theorem 1. Suppose that the operation \(\otimes \) is not idempotent. Then there exists \(d\in D_{\alpha }\) such that \(d\otimes d\ne d\). In consequence, for \(\beta _{\alpha }(\mathtt {b})=d\) and the trees \({T} _1=\mathtt {b} \) and \({T} _2=\mathtt {AND} ^{\mathrm {p}}(\mathtt {b}, \mathtt {b})\), which are equivalent wrt the set semantics, we have \({\alpha }_{B}({T} _1, \beta _{\alpha }) \ne {\alpha }_{B}({T} _2, \beta _{\alpha }) \). This shows that if the operation \(\otimes \) is not idempotent, then the bottom-up evaluation based on an attribute domain satisfying the remaining assumptions of Theorem 1 is not compatible with the set semantics.
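Instantiated with the minimal cost domain, the argument of Remark 3 becomes a two-line computation (the value of d is illustrative):

```python
# T1 = b and T2 = AND^p(b, b) are equivalent under the set semantics, but since
# + is not idempotent, the bottom-up procedure prices the clone b twice.
d = 3          # beta(b) = d, with d + d != d
t1 = d         # bottom-up on T1 = b
t2 = d + d     # bottom-up on T2 = AND^p(b, b): AND^p = +
assert t1 == 3 and t2 == 6 and t1 != t2
```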
Theorem 1 and Remarks 1 and 3 immediately yield the following corollary.
Corollary 1
Let \(A_{\alpha }=(D_{\alpha }, \oplus , \otimes , \otimes , \oplus , \otimes , \oplus )\) be an attribute domain such that the operations \(\oplus \) and \(\otimes \) are associative and commutative, \(\oplus \) is idempotent, and \(\otimes \) distributes over \(\oplus \). The bottom-up procedure based on \(A_{\alpha }\) is compatible with the set semantics if and only if the operation \(\otimes \) is idempotent.
We can also notice that if the assumptions of Corollary 1 are satisfied but the operation \(\otimes \) is not idempotent, then the bottom-up procedure is compatible with the so-called multiset semantics (introduced for attack trees in [22] and for attack–defense trees in [18]), which uses pairs of multisets instead of pairs of sets.
Some of the domains based on idempotent semirings have a specific property that we encapsulate in the notion of a non-increasing domain.
Definition 8
Let \(A_{\alpha }\) be an attribute domain. We say that \(A_{\alpha }\) is non-increasing if \(A_{\alpha }=(D_{\alpha }, \oplus , \otimes , \otimes , \oplus , \otimes , \oplus )\), \((D_{\alpha }, \oplus , \otimes )\) is a commutative idempotent semiring, and for every \(d,c\in D_{\alpha }\), the inequality \(d\otimes c \preceq d\) holds, where \(\preceq \) stands for the canonical partial order on \(D_{\alpha }\), i.e., the order defined by \(d\preceq c\) if and only if \(d\oplus c = c\).
Example 7
Of the attribute domains presented in Table 1, all but one are non-increasing; the only exception is the maximal damage domain.
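The contrast between the two domains can be spot-checked on sample values, writing \(x \preceq y\) as \(x \oplus y = y\) (a sketch, not a proof):

```python
# Definition 8 on samples: minimal cost is non-increasing (adding cost can only
# keep or worsen an attack), maximal damage is not.
import itertools

INF = float("inf")
samples = [0, 1, 4, INF]

# Minimal cost: (+) = min, (x) = +; d (x) c <= d means min(d + c, d) == d.
assert all(min(d + c, d) == d
           for d, c in itertools.product(samples, repeat=2))

# Maximal damage: (+) = max, (x) = +; the inequality fails as soon as c > 0.
d, c = 4, 1
assert max(d + c, d) != d     # 5 != 4: the domain is not non-increasing
```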
Note that in order to be able to evaluate the value of an attribute on the set semantics \(\mathcal {S}({T})\), one needs to construct the semantics itself. This task might be computationally expensive, since, in the worst case, the number of elements of \(\mathcal {S}({T})\) is exponential in the number of nodes of \({T} \). In contrast, the complexity of the bottom-up procedure is linear in the number of nodes of the underlying tree (provided that the operations performed at the intermediate nodes are linear in the number of arguments). Thus, it is desirable to ensure that \({\alpha }_{B}({T}, \beta _{\alpha }) ={\alpha }_{\mathcal {S}}({T}, \beta _{\alpha }) \). By Theorem 1, this equality holds for a wide class of attributes, provided that there are no clones in \({T} \). If \({T} \) contains clones, then the two methods might return different values (as illustrated in Remark 3).
To deal with this issue, we present the second contribution of this work. In Algorithm 1, we propose a method for evaluating attributes with non-increasing domains on attack–defense trees that takes the repetition of labels into account. The algorithm relies on the following notion of necessary clones.
Definition 9
Let \(\mathtt {b} \) be a cloned basic action of the proponent in an attack–defense tree \({T} \). If \(\mathtt {b} \) is present in every attack of the form \((P,\emptyset )\in \mathcal {S}({T})\), then \(\mathtt {b} \) is a necessary clone; otherwise it is an optional clone.
It is easy to see that the tree from Fig. 1 does not contain any necessary clones. Indeed, this tree contains only one clone – phish – and there exists the attack \((\{\texttt {force}, \texttt {stealCard}, \texttt {withdrawCash} \}, \emptyset )\), which does not make use of the corresponding phishing action.
The sets of all necessary and optional clones in a tree \({T} \) are denoted by \(\mathcal {C}_N ({T})\) and \(\mathcal {C}_O ({T})\), respectively. When there is no danger of ambiguity, we use \(\mathcal {C}_N \) and \(\mathcal {C}_O \) instead of \(\mathcal {C}_N ({T})\) and \(\mathcal {C}_O ({T})\). The idea behind Algorithm 1 is to first recognize the set \(\mathcal {C}_N \) of necessary clones and temporarily ensure that the attribute values assigned to them do not influence the result of the bottom-up procedure. Then the values of the optional clones are also temporarily modified, and the corresponding bottom-up evaluations are performed. Only then is the result adjusted in such a way that the original values of the necessary clones are taken into account. Before explaining Algorithm 1 in detail, the following lemma provides a simple method for determining whether a cloned basic action of the proponent is a necessary clone.
Lemma 1
Proof
Observe that under the given basic assignment the value of \(\mathrm {skill}_{\mathcal {S}}({{T}},{\beta _{\mathrm {skill}}})\) is equal to 1 if and only if \(\texttt {a}\) is a necessary clone. Since \(\max \) is an idempotent operation, \(\mathrm {skill}_B({{T}},{\beta _{\mathrm {skill}}})=\mathrm {skill}_{\mathcal {S}}({{T}},{\beta _{\mathrm {skill}}})\), by Theorem 1. The lemma follows. \(\square \)
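The test suggested by Lemma 1 can be sketched as follows: assign skill level 1 to the candidate clone, 0 to every other proponent action, and run the bottom-up procedure for the minimal skill domain (cf. Table 1: \(\mathtt{OR}^{\mathrm{p}} = \min\), \(\mathtt{AND}^{\mathrm{p}} = \max\)). The result is 1 exactly when the clone is necessary. The tuple-based tree encoding is an illustrative assumption, not the paper's notation:

```python
# Necessary-clone test via the bottom-up evaluation of the min. skill domain.
skill_domain = {"ORp": min, "ANDp": max, "ORo": max, "ANDo": min,
                "Cp": max, "Co": min}

def bottom_up(tree, domain, beta):
    if tree[0] == "b":
        return beta[tree[1]]
    vals = [bottom_up(c, domain, beta) for c in tree[1:]]
    out = vals[0]
    for v in vals[1:]:
        out = domain[tree[0]](out, v)
    return out

def is_necessary_clone(tree, actions, clone):
    beta = {a: (1 if a == clone else 0) for a in actions}
    return bottom_up(tree, skill_domain, beta) == 1

# b occurs in every attack of AND^p(b, OR^p(b, b2)) -> necessary; in
# OR^p(AND^p(b, c), OR^p(AND^p(b, d), e)) the attack {e} avoids b -> optional.
t_nec = ("ANDp", ("b", "b"), ("ORp", ("b", "b"), ("b", "b2")))
t_opt = ("ORp", ("ANDp", ("b", "b"), ("b", "c")),
         ("ORp", ("ANDp", ("b", "b"), ("b", "d")), ("b", "e")))
assert is_necessary_clone(t_nec, ["b", "b2"], "b")
assert not is_necessary_clone(t_opt, ["b", "c", "d", "e"], "b")
```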
In lines 6–7, an assignment \(\beta _{\alpha }'\) is created for every subset \(\mathcal {C}\) of the set of optional clones \(\mathcal {C}_O \). The clones from \(\mathcal {C}\) are assigned \(\mathtt {a}_{\otimes } \), which intuitively ensures that they are ignored by the bottom-up procedure, and the remaining optional clones are assigned \(\mathtt {e}_{\otimes } \) (again, to ensure that their values under \(\beta _{\alpha }\) will eventually be counted exactly once). The result of the computations performed in the for loop is multiplied (in the sense of performing the operation \(\otimes \)) in line 11 by the product of the values assigned to the necessary clones. (Note that the index \(\mathcal {A}\) in the notation \({\alpha }_{\mathcal {A}}({T}, \beta _{\alpha }) \) refers to the evaluation using Algorithm 1.)
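Since the listing of Algorithm 1 is given as a figure and is not reproduced here, the following is only a plausible reconstruction of the evaluation it describes, specialized to the minimal cost domain (\(\mathtt{e}_{\otimes } = 0\), \(\mathtt{a}_{\otimes } = +\infty\)): clones ignored in a given iteration are priced \(+\infty\), the remaining optional clones are priced 0 so that their true cost is added back exactly once, and the necessary clones are re-priced at the very end. The tree encoding and helper names are illustrative assumptions:

```python
# A clone-aware minimal cost evaluation, sketched after the description of
# Algorithm 1 in the text (not a verbatim transcription of the algorithm).
from itertools import chain, combinations

INF = float("inf")

def bottom_up_cost(tree, beta):
    """Plain bottom-up minimal cost on a proponent-only tree of nested tuples."""
    if tree[0] == "b":
        return beta[tree[1]]
    vals = [bottom_up_cost(c, beta) for c in tree[1:]]
    return min(vals) if tree[0] == "ORp" else sum(vals)   # ORp -> min, ANDp -> +

def clone_aware_cost(tree, beta, necessary, optional):
    base = dict(beta, **{b: 0 for b in necessary})        # silence necessary clones
    result = INF
    for subset in chain.from_iterable(combinations(optional, r)
                                      for r in range(len(optional) + 1)):
        bp = dict(base, **{b: (INF if b in subset else 0) for b in optional})
        r = bottom_up_cost(tree, bp)
        r += sum(beta[b] for b in optional if b not in subset)  # kept clones, once
        result = min(result, r)
    return result + sum(beta[b] for b in necessary)       # re-price necessary clones

# AND^p(b, OR^p(b, b2)): naive bottom-up pays the clone b twice (5 + min(5,1) = 6),
# while the cheapest attack {b} under the set semantics costs 5.
tree = ("ANDp", ("b", "b"), ("ORp", ("b", "b"), ("b", "b2")))
beta = {"b": 5, "b2": 1}
assert bottom_up_cost(tree, beta) == 6
assert clone_aware_cost(tree, beta, necessary={"b"}, optional=set()) == 5

# An optional clone: in OR^p(AND^p(b, c), OR^p(AND^p(b, d), e)) the attack {e}
# avoids b, and the cheapest attack {b, c} costs 6.
t2 = ("ORp", ("ANDp", ("b", "b"), ("b", "c")),
      ("ORp", ("ANDp", ("b", "b"), ("b", "d")), ("b", "e")))
beta2 = {"b": 5, "c": 1, "d": 2, "e": 10}
assert clone_aware_cost(t2, beta2, necessary=set(), optional={"b"}) == 6
```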
Example 8
We illustrate Algorithm 1 on the tree \(T\) from Fig. 1 and the minimal cost attribute domain. Consider the basic assignment of cost given in Example 4. Observe that \(\mathcal {C}_N =\emptyset \) and \(\mathcal {C}_O =\{\texttt {phish} \}\).
The value of \(\mathrm {cost}_\mathcal {A}({{T}},{\beta _{\mathrm {cost}}})\) after the for loop is \(\min \{140,165\}\). Since \(\mathcal {C}_N =\emptyset \), the algorithm returns \(\mathrm {cost}_\mathcal {A}({{T}},{\beta _{\mathrm {cost}}})=140\). This value corresponds to the cost of the cheapest attack in the tree under the given basic assignment, as already illustrated in Example 5. Notice furthermore that \(\mathrm {cost}_\mathcal {A}({{T}},{\beta _{\mathrm {cost}}})= {\alpha }_{\mathcal {S}}({T}, \beta _{\mathrm {cost}}) \) (cf. Example 5).
We now turn our attention to the complexity of Algorithm 1. Let k be the number of distinct clones of the proponent in \({T} \), and let n be the number of nodes in \({T} \). We assume that the complexity of the operations \(\oplus \) and \(\otimes \) is linear in the number of arguments, which is a reasonable assumption in view of the existing attribute domains (cf. Table 1). This implies that the result of a single bottom-up procedure in \({T} \) is obtained in time \(\mathcal {O}(n)\). Thus, among the operations performed in lines 1–4, the most complex one is the initialization of the sets \(\mathcal {C}_N \) and \(\mathcal {C}_O \), the time complexity of which is in \(\mathcal {O}(kn)\) (by Lemma 1). Since the for loop starting in line 5 iterates over all subsets of the set of optional clones, and the operations inside the loop are linear in n, the overall time complexity of Algorithm 1 is in \(\mathcal {O}(n2^k)\).
In Theorem 2 we give sufficient conditions for the result \({\alpha }_{\mathcal {A}}({T}, \beta _{\alpha }) \) of Algorithm 1 to be equal to the result \({\alpha }_{\mathcal {S}}({T}, \beta _{\alpha }) \) of evaluation on the set semantics. Its proof is presented in Sect. 5.
Theorem 2
Let \({T} \) be an attack–defense tree generated by grammar (1) and \(A_{\alpha }\) be a non-increasing attribute domain. Then the equality \({\alpha }_{\mathcal {A}}({T}, \beta _{\alpha }) ={\alpha }_{\mathcal {S}}({T}, \beta _{\alpha }) \) holds for every basic assignment \(\beta _{\alpha }:\mathbb {B} \rightarrow D_{\alpha }\) satisfying \(\left. \beta _{\alpha }\right|_{\mathbb {B} ^{\mathrm {o}}}\equiv \mathtt {a}_{\otimes } \).
Remark 1 and Theorem 2 imply the following corollary.
Corollary 2
Let \(A_{\alpha }=(D_{\alpha }, \oplus , \otimes , \otimes , \oplus , \otimes , \oplus )\) be a non-increasing attribute domain and let \(\beta := \{\beta _{\alpha }:\mathbb {B} \rightarrow D_{\alpha } \text { s.t. } \left. \beta _{\alpha }\right|_{\mathbb {B} ^{\mathrm {o}}}\equiv \mathtt {a}_{\otimes } \}\). Then, the evaluation procedure \(\alpha _{\mathcal {A}}:\mathbb {T} \times \beta \rightarrow D_{\alpha }\) specified by Algorithm 1 is compatible with the set semantics (in the sense of Definition 7).
5 Proofs of Theorems 1 and 2
Throughout this section it is assumed that \({T} \) is an attack–defense tree generated by grammar (1) and that \(A_{\alpha }=(D_{\alpha }, \oplus , \otimes , \otimes , \oplus , \otimes , \oplus )\) is an attribute domain such that the operations \(\oplus \) and \(\otimes \) are associative and commutative, \(\oplus \) is idempotent, and \(\otimes \) distributes over \(\oplus \). We begin by examining parallels between attribute domains of this type and the set semantics.
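For intuition, these assumed laws can be spot-checked for the minimal cost domain, where \(\oplus = \min \) and \(\otimes = +\) over \(\mathbb {R}_{\ge 0} \cup \{\infty \}\). The following snippet is an illustrative sanity check on a sample of values, not part of the proofs; the sample values are chosen to be exactly representable as floats so that addition is exactly associative.

```python
import itertools

# Spot-check the laws assumed above for the minimal cost domain:
# oplus = min, otimes = +, on a small sample of values (including infinity).
oplus = min
otimes = lambda x, y: x + y
sample = [0.0, 1.0, 2.5, 7.0, float("inf")]

for x, y, z in itertools.product(sample, repeat=3):
    assert oplus(oplus(x, y), z) == oplus(x, oplus(y, z))               # oplus associative
    assert otimes(otimes(x, y), z) == otimes(x, otimes(y, z))           # otimes associative
    assert oplus(x, y) == oplus(y, x)                                   # oplus commutative
    assert otimes(x, y) == otimes(y, x)                                 # otimes commutative
    assert oplus(x, x) == x                                             # oplus idempotent
    assert otimes(x, oplus(y, z)) == oplus(otimes(x, y), otimes(x, z))  # distributivity
```

Note that idempotence of \(\oplus \) is what makes \(\min \) suitable for OR nodes (an attack available twice is no cheaper), while distributivity is what licenses the factoring steps used in the proofs below.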
Finally, we denote the \(i\)-th term of representation (3) by \(\alpha _i\). Now we are ready to prove Theorem 1.
Proof of Theorem 1
We finish this section by providing the proof of Theorem 2.
Proof of Theorem 2
6 Conclusion
The goal of the work presented in this paper was to tackle the issue of quantitative analysis of attack–defense trees in which a basic action can appear multiple times. We have presented conditions ensuring that, in this setting, the classical, fast bottom-up procedure for attribute evaluation yields a valid result. For a subclass of attributes, we have identified a necessary and sufficient condition for the compatibility of the bottom-up evaluation with the set semantics. Finally, we have presented a constructive evaluation method, taking the presence of repeated labels into account, for a wide and important subclass of attributes.
This work addresses only the tip of the iceberg of a much larger problem, which is the analysis and quantification of attack–defense trees with dependent actions. The notion of clones captures the strongest type of dependency between goals, namely the one where the nodes bearing the same label represent exactly the same instance of the same goal. It is thus obvious that the attribute values of the clones should be considered only once in the attribute computations. However, in practice, weaker dependencies between goals may also be present. For instance, when the attacker has access to a computer with sufficient computation power, the attack consisting in guessing a password becomes, de facto, a brute-force attack and can be performed within a reasonable time for most of the passwords used in practice. In contrast, if this attack is performed manually, it will most probably take much longer to succeed. Similarly, if the attacker knows the victim, guessing their password manually will, in most cases, be faster than in the situation when the attacker is a stranger to the victim. Of course, this problem can be solved by relabeling the nodes and using differently named goals for the two situations. However, this solution is not in line with the practical usage of attack(–defense) trees, whose construction often relies on pre-existing libraries of attack patterns, where the nodes are already labeled and the labels are as simple as possible. We are currently working on improving the standard bottom-up evaluation procedure for attributes (in the spirit of Algorithm 1) to accommodate such weakly dependent nodes.
Furthermore, it would be interesting to try to generalize Algorithm 1 to the approaches proposed in the past for the restricted class of attack–defense trees without repeated labels. Such approaches include, for instance, the multi-objective optimization defined in [2] and a method for selecting the most suitable set of countermeasures, based on integer linear programming, developed in [21].
Acknowledgments
We would like to thank Angèle Bossuat for fruitful discussions on the interpretation of repeated labels in attack–defense trees and on possible approaches to the problem of quantification in the presence of clones.
References
 1. Ruijters, E., Stoelinga, M.: Fault tree analysis: a survey of the state-of-the-art in modeling, analysis and tools. Comput. Sci. Rev. 15–16, 29–62 (2015)
 2. Aslanyan, Z., Nielson, F.: Pareto efficient solutions of attack-defence trees. In: Focardi, R., Myers, A. (eds.) POST 2015. LNCS, vol. 9036, pp. 95–114. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-46666-7_6
 3. Aslanyan, Z., Nielson, F., Parker, D.: Quantitative verification and synthesis of attack-defence scenarios. In: CSF, pp. 105–119. IEEE Computer Society (2016)
 4. Bagnato, A., Kordy, B., Meland, P.H., Schweitzer, P.: Attribute decoration of attack-defense trees. IJSSE 3(2), 1–35 (2012)
 5. Bossuat, A., Kordy, B.: Evil twins: handling repetitions in attack–defense trees – a survival guide. In: Liu, P., Mauw, S., Stølen, K. (eds.) GraMSec 2017. LNCS, vol. 10744, pp. 17–37. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-74860-3_2
 6. Codetta-Raiteri, D.: BDD based analysis of parametric fault trees. In: Proceedings of the Annual Reliability and Maintainability Symposium (RAMS 2006), pp. 442–449. IEEE Computer Society, Washington, DC (2006)
 7. Fraile, M., Ford, M., Gadyatskaya, O., Kumar, R., Stoelinga, M., Trujillo-Rasua, R.: Using attack-defense trees to analyze threats and countermeasures in an ATM: a case study. In: Horkoff, J., Jeusfeld, M.A., Persson, A. (eds.) PoEM 2016. LNBIP, vol. 267, pp. 326–334. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-48393-1_24
 8. Gadyatskaya, O., Hansen, R.R., Larsen, K.G., Legay, A., Olesen, M.C., Poulsen, D.B.: Modelling attack-defense trees using timed automata. In: Fränzle, M., Markey, N. (eds.) FORMATS 2016. LNCS, vol. 9884, pp. 35–50. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-44878-7_3
 9. Gadyatskaya, O., Jhawar, R., Kordy, P., Lounis, K., Mauw, S., Trujillo-Rasua, R.: Attack trees for practical security assessment: ranking of attack scenarios with ADTool 2.0. In: Agha, G., Van Houdt, B. (eds.) QEST 2016. LNCS, vol. 9826, pp. 159–162. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-43425-4_10
 10. Haasl, D.F., Roberts, N.H., Vesely, W.E., Goldberg, F.F.: Fault tree handbook. Technical report, Systems and Reliability Research, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission (1981)
 11. Hermanns, H., Krämer, J., Krčál, J., Stoelinga, M.: The value of attack defence diagrams. In: Piessens, F., Viganò, L. (eds.) POST 2016. LNCS, vol. 9635, pp. 163–185. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-49635-0_9
 12. Hong, J.B., Kim, D.S., Chung, C.J., Huang, D.: A survey on the usability and practical applications of graphical security models. Comput. Sci. Rev. 26, 1–16 (2017)
 13. Horne, R., Mauw, S., Tiu, A.: Semantics for specialising attack trees based on linear logic. Fundam. Inform. 153(1–2), 57–86 (2017)
 14. Ivanova, M.G., Probst, C.W., Hansen, R.R., Kammüller, F.: Transforming graphical system models to graphical attack models. In: Mauw, S., Kordy, B., Jajodia, S. (eds.) GraMSec 2015. LNCS, vol. 9390, pp. 82–96. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-29968-6_6
 15. Jhawar, R., Kordy, B., Mauw, S., Radomirović, S., Trujillo-Rasua, R.: Attack trees with sequential conjunction. In: Federrath, H., Gollmann, D. (eds.) SEC 2015. IFIP AICT, vol. 455, pp. 339–353. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-18467-8_23
 16. Ji, X., Yu, H., Fan, G., Fu, W.: Attack-defense trees based cyber security analysis for CPSs. In: SNPD, pp. 693–698. IEEE Computer Society (2016)
 17. Jürgenson, A., Willemson, J.: Computing exact outcomes of multi-parameter attack trees. In: Meersman, R., Tari, Z. (eds.) OTM 2008, Part II. LNCS, vol. 5332, pp. 1036–1051. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-88873-4_8
 18. Kordy, B., Mauw, S., Radomirović, S., Schweitzer, P.: Attack-defense trees. J. Log. Comput. 24(1), 55–87 (2014)
 19. Kordy, B., Mauw, S., Schweitzer, P.: Quantitative questions on attack–defense trees. In: Kwon, T., Lee, M.K., Kwon, D. (eds.) ICISC 2012. LNCS, vol. 7839, pp. 49–64. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-37682-5_5
 20. Kordy, B., Piètre-Cambacédès, L., Schweitzer, P.: DAG-based attack and defense modeling: don't miss the forest for the attack trees. Comput. Sci. Rev. 13–14, 1–38 (2014)
 21. Kordy, B., Wideł, W.: How well can I secure my system? In: Polikarpova, N., Schneider, S. (eds.) IFM 2017. LNCS, vol. 10510, pp. 332–347. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-66845-1_22
 22. Mauw, S., Oostdijk, M.: Foundations of attack trees. In: Won, D.H., Kim, S. (eds.) ICISC 2005. LNCS, vol. 3935, pp. 186–198. Springer, Heidelberg (2006). https://doi.org/10.1007/11734727_17
 23. National Electric Sector Cybersecurity Organization Resource (NESCOR): Analysis of selected electric sector high risk failure scenarios, version 2.0 (2015). http://smartgrid.epri.com/doc/NESCOR
 24. Paja, E., Dalpiaz, F., Giorgini, P.: The socio-technical security requirements modelling language for secure composite services. In: Brucker, A.D., Dalpiaz, F., Giorgini, P., Meland, P.H., Rios, E. (eds.) Secure Service Composition. LNCS, vol. 8900. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-13518-2_5
 25. Pinchinat, S., Acher, M., Vojtisek, D.: ATSyRa: an integrated environment for synthesizing attack trees (tool paper). In: Mauw, S., Kordy, B., Jajodia, S. (eds.) GraMSec 2015. LNCS, vol. 9390, pp. 97–101. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-29968-6_7
 26. Schneier, B.: Attack trees. Dr Dobb's J. Softw. Tools 24, 21–29 (1999)
 27. Stecher, K.: Evaluation of large fault-trees with repeated events using an efficient bottom-up algorithm. IEEE Trans. Reliab. 35, 51–58 (1986)
 28. Vigo, R., Nielson, F., Nielson, H.R.: Automated generation of attack trees. In: IEEE 27th Computer Security Foundations Symposium, CSF 2014, Vienna, Austria, 19–22 July 2014, pp. 337–350 (2014)
Copyright information
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.