We start by defining masking simulation. In [11], we defined a state-based simulation for masking fault-tolerance; here we recast this definition using labelled transition systems. First, let us introduce some concepts needed for defining masking fault-tolerance. For any vocabulary \(\varSigma \) and any set of labels \(\mathcal {F} = \{F_0, \dots , F_n\}\) with \(\mathcal {F} \cap \varSigma = \emptyset \), we consider \(\varSigma _{\mathcal {F}}= \varSigma \cup \mathcal {F}\). Intuitively, the elements of \(\mathcal {F}\) indicate the occurrence of a fault in a faulty implementation. Furthermore, it will sometimes be useful to consider the set \(\varSigma ^i = \{ e^i \mid e \in \varSigma \}\), containing the elements of \(\varSigma \) indexed with superscript i. Moreover, for any vocabulary \(\varSigma \) we consider \(\varSigma _{M}= \varSigma \cup \{M\}\), where \(M \notin \varSigma \); intuitively, this label is used to identify masking transitions.
Given a transition system \(A = \langle S, \varSigma , E, s_0 \rangle \) over a vocabulary \(\varSigma \), we denote \(A^M = \langle S, \varSigma _{M}, E^M, s_0 \rangle \) where \(E^M = E \cup \{s \xrightarrow {M} s \mid s \in S\}\).
3.1 Strong Masking Simulation
Definition 1
Let \(A =\langle S, \varSigma , E, s_0\rangle \) and \(A' =\langle S', \varSigma _{\mathcal {F}}, E', s_0' \rangle \) be two transition systems. \(A'\) is strong masking fault-tolerant with respect to A if there exists a relation \(\mathbin {\mathbf {M}}\subseteq S \times S'\) between \(A^M\) and \(A'\) such that:
- (A) \(s_0 \mathrel {\mathbin {\mathbf {M}}} s'_0\), and
- (B) for all \(s \in S, s' \in S'\) with \(s \mathrel {\mathbin {\mathbf {M}}} s'\) and all \(e \in \varSigma \) the following holds:
  - (1) if \((s \xrightarrow {e} t) \in E\) then \(\exists \; t' \in S': (s' \xrightarrow {e} t' \wedge t \mathrel {\mathbin {\mathbf {M}}}t')\);
  - (2) if \((s' \xrightarrow {e} t') \in E'\) then \(\exists \; t \in S: (s \xrightarrow {e} t \wedge t \mathrel {\mathbin {\mathbf {M}}} t')\);
  - (3) if \((s' \xrightarrow {F} t')\) for some \(F \in \mathcal {F}\) then \(\exists \; t \in S: (s \xrightarrow {M} t \wedge t \mathrel {\mathbin {\mathbf {M}}} t').\)
If such a relation exists, we say that \(A'\) is a strong masking fault-tolerant implementation of A, denoted by \(A \preceq _{m}A'\).
We say that state \(s'\) is masking fault-tolerant for s when \(s~\mathbin {\mathbf {M}}~s'\). Intuitively, the definition states that, starting in \(s'\), faults can be masked in such a way that the behavior exhibited is the same as that observed when starting from s and executing transitions without faults. In other words, a masking relation ensures that every faulty behavior in the implementation can be simulated by the specification. Let us explain the above definition in more detail. First, note that conditions A, B.1, and B.2 imply that we have a bisimulation when A and \(A'\) do not exhibit faulty behavior. In particular, condition B.1 says that the normal execution of A can be simulated by an execution of \(A'\). On the other hand, condition B.2 says that the implementation does not add normal (non-faulty) behavior. Finally, condition B.3 states that every outgoing faulty transition (F) from \(s'\) must be matched by an outgoing masking transition (M) from s.
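To make these conditions concrete, the following sketch (in Python, purely illustrative and not part of the formal development) checks whether a given relation is a strong masking simulation between two explicitly enumerated transition systems. The representation of transition systems as sets of triples, and all identifiers such as `is_strong_masking`, are assumptions made for the example.

```python
# Hypothetical sketch: checking Definition 1 on explicitly enumerated
# transition systems, each given as a set of (source, label, target) triples.
# `sigma` is the non-faulty vocabulary, `faults` the fault labels of F.
# In A^M the only M-transitions are the self-loops s -M-> s, so condition
# (B.3) amounts to the fault target being related to s itself.

def post(E, s, label):
    """Successors of state s via transitions labelled `label`."""
    return {t for (p, a, t) in E if p == s and a == label}

def is_strong_masking(E, E_impl, sigma, faults, M_rel):
    """Check conditions (B.1)-(B.3) for every pair in M_rel; condition (A) is
    just membership of the pair of initial states, left to the caller."""
    for (s, s_impl) in M_rel:
        for e in sigma:
            # (B.1): every step of A is matched by A' and the targets stay related.
            for t in post(E, s, e):
                if not any((t, t2) in M_rel for t2 in post(E_impl, s_impl, e)):
                    return False
            # (B.2): every non-faulty step of A' is matched by A.
            for t2 in post(E_impl, s_impl, e):
                if not any((t, t2) in M_rel for t in post(E, s, e)):
                    return False
        # (B.3): every fault of A' is matched by the masking self-loop of A^M.
        for f in faults:
            for t2 in post(E_impl, s_impl, f):
                if (s, t2) not in M_rel:
                    return False
    return True
```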
3.2 Weak Masking Simulation
For analysing nontrivial systems, a weak version of the masking simulation relation is needed. The main idea is that a weak masking simulation abstracts away from internal behaviour, which is modelled by a special action \(\tau \). Note that internal transitions are common in fault-tolerance: the actions performed as part of a fault-tolerant procedure in a component are usually not observable by the rest of the system.
The weak transition relation \({\Rightarrow } \subseteq S \times (\varSigma \cup \{\tau \} \cup \{M\} \cup \mathcal {F}) \times S\), also denoted \(E_W\), takes the silent step \(\tau \) into account and is defined as follows:
$$\begin{aligned} s \xRightarrow {e} s'&\;\text { iff }\; s \ (\xrightarrow {\tau })^{*} \circ \xrightarrow {e} \circ \ (\xrightarrow {\tau })^{*} \ s'&\text {if } e \notin \{\tau ,M\}\cup \mathcal {F}, \\ s \xRightarrow {\tau } s'&\;\text { iff }\; s \ (\xrightarrow {\tau })^{*} \ s',&\\ s \xRightarrow {e} s'&\;\text { iff }\; s \xrightarrow {e} s'&\text {if } e\in \{M\}\cup \mathcal {F}. \end{aligned}$$
The symbol \(\circ \) stands for composition of binary relations and \((\xrightarrow {\tau })^{*}\) is the reflexive and transitive closure of the binary relation \(\xrightarrow {\tau }\). Intuitively, if \(e \notin \{\tau ,M\}\cup \mathcal {F}\), then \(s \xRightarrow {e} s'\) means that there is a sequence of zero or more \(\tau \) transitions starting in s, followed by one transition labelled by e, followed again by zero or more \(\tau \) transitions eventually reaching \(s'\). \(s \xRightarrow {\tau } s'\) states that s can transition to \(s'\) via zero or more \(\tau \) transitions; in particular, \(s \xRightarrow {\tau } s\) for every s. For the case in which \(e\in \{M\}\cup \mathcal {F}\), \(s \xRightarrow {e} s'\) is equivalent to \(s \xrightarrow {e} s'\), and hence no \(\tau \) step is allowed before or after the e transition.
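The relation \(E_W\) can be computed explicitly by a simple closure computation; the following Python sketch follows the case distinction above (the representation and all names, e.g. `weak_transitions`, are again illustrative assumptions).

```python
# Hypothetical sketch of the weak transition relation E_W on a transition
# system given as a set of (source, label, target) triples.  TAU and MASK are
# the silent and masking labels; `faults` is the set F of fault labels.
TAU, MASK = "tau", "M"

def tau_closure(E, s):
    """States reachable from s via zero or more tau transitions."""
    reach, frontier = {s}, {s}
    while frontier:
        frontier = {t for (p, a, t) in E
                    if p in frontier and a == TAU} - reach
        reach |= frontier
    return reach

def weak_transitions(E, labels, faults):
    """Compute E_W: tau padding for ordinary labels, none for M and faults."""
    EW = set()
    states = {s for (s, _, _) in E} | {t for (_, _, t) in E}
    for s in states:
        pre = tau_closure(E, s)
        EW |= {(s, TAU, t) for t in pre}          # zero or more tau steps
        for e in labels:
            if e == TAU:
                continue
            if e == MASK or e in faults:
                # no tau step allowed before or after M and fault transitions
                EW |= {(s, e, t) for (p, a, t) in E if p == s and a == e}
            else:
                # (-->tau)* o -->e o (-->tau)*
                mids = {t for p in pre for (q, a, t) in E if q == p and a == e}
                EW |= {(s, e, t) for m in mids for t in tau_closure(E, m)}
    return EW
```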
Definition 2
Let \(A =\langle S, \varSigma , E, s_0\rangle \) and \(A' =\langle S', \varSigma _{\mathcal {F}}, E', s_0' \rangle \) be two transition systems with \(\varSigma \) possibly containing \(\tau \). \(A'\) is weak masking fault-tolerant with respect to A if there is a relation \(\mathbin {\mathbf {M}}\subseteq S \times S'\) between \(A^M\) and \(A'\) such that:
- (A) \(s_0 \mathrel {\mathbin {\mathbf {M}}} s'_0\), and
- (B) for all \(s \in S, s' \in S'\) with \(s \mathrel {\mathbin {\mathbf {M}}} s'\) and all \(e \in \varSigma \cup \{\tau \}\) the following holds:
  - (1) if \((s \xrightarrow {e} t) \in E\) then \(\exists \; t' \in S': (s' \xRightarrow {e} t' \wedge t \mathrel {\mathbin {\mathbf {M}}} t')\);
  - (2) if \((s' \xrightarrow {e} t') \in E'\) then \(\exists \; t \in S: (s \xRightarrow {e} t \wedge t \mathrel {\mathbin {\mathbf {M}}} t')\);
  - (3) if \((s' \xrightarrow {F} t') \in E'\) for some \(F \in \mathcal {F}\) then \(\exists \; t \in S: (s \xrightarrow {M} t \in E^M \wedge t \mathrel {\mathbin {\mathbf {M}}} t').\)
If such a relation exists, we say that \(A'\) is a weak masking fault-tolerant implementation of A, denoted by \(A \preceq ^w_{m}A'\).
The following theorem makes a strong connection between strong and weak masking simulation. It states that weak masking simulation becomes strong masking simulation whenever the transition relation \(\xrightarrow {}\) is replaced by \({\Rightarrow }\) in the original automata.
Theorem 2
Let \(A =\langle S, \varSigma , E, s_0\rangle \) and \(A' =\langle S', \varSigma _{\mathcal {F}}, E', s_0' \rangle \). \(\mathbin {\mathbf {M}}\subseteq S \times S'\) between \(A^M\) and \(A'\) is a weak masking simulation if and only if:
- (A) \(s_0 \mathrel {\mathbin {\mathbf {M}}} s'_0\), and
- (B) for all \(s \in S, s' \in S'\) with \(s \mathrel {\mathbin {\mathbf {M}}} s'\) and all \(e \in \varSigma \cup \{\tau \}\) the following holds:
  - (1) if \(s \xRightarrow {e} t\) then \(\exists \; t' \in S': (s' \xRightarrow {e} t' \wedge t \mathrel {\mathbin {\mathbf {M}}} t')\);
  - (2) if \(s' \xRightarrow {e} t'\) then \(\exists \; t \in S: (s \xRightarrow {e} t \wedge t \mathrel {\mathbin {\mathbf {M}}} t')\);
  - (3) if \(s' \xRightarrow {F} t'\) for some \(F \in \mathcal {F}\) then \(\exists \; t \in S: (s \xRightarrow {M} t \wedge t \mathrel {\mathbin {\mathbf {M}}} t').\)
The proof is straightforward, following the same ideas as Milner in [25].
A natural way to check weak bisimilarity is to saturate the transition system [14, 25] and then check strong bisimilarity on the saturated transition system. Similarly, Theorem 2 allows us to compute weak masking simulation by reducing this problem to that of computing strong masking simulation on the saturated transition systems, i.e., on the systems whose transition relation is \({\Rightarrow }\) instead of \(\xrightarrow {}\).
As a running example, we consider a memory cell that stores a bit of information and supports reading and writing operations, presented in a state-based form in [11]. A state in this system maintains the current value of the memory cell (\(m=i\), for \(i=0,1\)), writing allows one to change this value, and reading returns the stored value. Obviously, in this system the result of a reading depends on the value stored in the cell. Thus, a property that one might associate with this model is that the value read from the cell coincides with that of the last writing performed in the system.
A potential fault in this scenario occurs when a cell unexpectedly loses its charge, and its stored value turns into another one (e.g., it changes from 1 to 0 due to charge loss). A typical technique to deal with this situation is redundancy: use three memory bits instead of one. Writing operations are performed simultaneously on the three bits. Reading, on the other hand, returns the value that is repeated at least twice in the memory bits; this is known as voting.
We take the following approach to model this system. Labels \(\text {W}_0, \text {W}_1, \text {R}_0,\) and \(\text {R}_1\) represent writing and reading operations. Specifically, \(\text {W}_0\) (resp. \(\text {W}_1\)) writes a zero (resp. one) to the memory, and \(\text {R}_0\) (resp. \(\text {R}_1\)) reads a zero (resp. one) from the memory. Figure 1 depicts four transition systems. The leftmost one represents the nominal system for this example (denoted as A). The second one from the left characterizes the nominal transition system augmented with masking transitions, i.e., \(A^M\). The third and fourth transition systems are fault-tolerant implementations of A, named \(A'\) and \(A''\), respectively. Note that \(A'\) contains one fault, while \(A''\) considers two faults. Both implementations use triple redundancy; intuitively, state \(\text {t}_0\) contains the three bits with value zero and \(\text {t}_1\) contains the three bits with value one. Moreover, state \(\text {t}_2\) is reached when one of the bits was flipped (either 001, 010 or 100). In \(A''\), state \(\text {t}_3\) is reached after a second bit is flipped (either 011, 101 or 110) starting from state \(\text {t}_0\). It is straightforward to see that there exists a relation of masking fault-tolerance between \(A^M\) and \(A'\), as witnessed by the relation \(\mathbin {\mathbf {M}}= \{(\text {s}_0, \text {t}_0), (\text {s}_1, \text {t}_1), (\text {s}_0, \text {t}_2)\}\). It is routine to check that \(\mathbin {\mathbf {M}}\) satisfies the conditions of Definition 1. On the other hand, there does not exist a masking relation between \(A^M\) and \(A''\) because state \(\text {t}_3\) needs to be related to state \(\text {s}_0\) in any masking relation. This state can only be reached by executing faults, which are necessarily masked with M-transitions. However, note that, in state \(\text {t}_3\), we can read a 1 (transition \(\text {t}_3 \xrightarrow {\text {R}_1} \text {t}_3\)) whereas, in state \(\text {s}_0\), we can only read a 0.
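As a usage sketch, the relation \(\mathbin {\mathbf {M}}\) above can be checked mechanically with the hypothetical `is_strong_masking` function sketched in Sect. 3.1. The encoding of states, labels and transitions below is our own reading of Fig. 1 (in particular, only the fault from \(\text {t}_0\) described in the text is included) and is given only for illustration.

```python
# Nominal memory cell A and the 3-bit implementation A' with a single fault,
# encoded as explicit transition sets (state and label names are assumptions).
W0, W1, R0, R1, F = "w0", "w1", "r0", "r1", "fault"

A = {("s0", W0, "s0"), ("s0", R0, "s0"), ("s0", W1, "s1"),
     ("s1", W1, "s1"), ("s1", R1, "s1"), ("s1", W0, "s0")}

A_impl = {("t0", W0, "t0"), ("t0", R0, "t0"), ("t0", W1, "t1"),
          ("t1", W1, "t1"), ("t1", R1, "t1"), ("t1", W0, "t0"),
          ("t0", F,  "t2"),                      # one bit flips in 000
          ("t2", R0, "t2"), ("t2", W0, "t0"), ("t2", W1, "t1")}

M_rel = {("s0", "t0"), ("s1", "t1"), ("s0", "t2")}

assert ("s0", "t0") in M_rel                      # condition (A)
assert is_strong_masking(A, A_impl, {W0, W1, R0, R1}, {F}, M_rel)
```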
3.3 Masking Simulation Game
We define a masking simulation game for two transition systems (the specification of the nominal system and its fault-tolerant implementation) that captures masking fault-tolerance. We first define the masking game graph, where we have two players, named for convenience the refuter (\(R\)) and the verifier (\(V\)).
Definition 3
Let \(A=\langle S, \varSigma , E, s_0\rangle \) and \(A'=\langle S', \varSigma _{\mathcal {F}}, E', s'_0 \rangle \). The strong masking game graph \(\mathcal {G}_{A^M,A'} = \langle S^G, S_R, S_V, \varSigma ^G, E^G, {s_0}^G \rangle \) for two players is defined as follows:
- \(\varSigma ^G = \varSigma _{M}\cup \varSigma _{\mathcal {F}}\),
- \(S^G = (S \times ( \varSigma _{M}^1 \cup \varSigma _{\mathcal {F}}^2 \cup \{\#\}) \times S' \times \{ R, V \}) \cup \{s_{err}\}\),
- The initial state is \(s_0^G = \langle s_0, \#, s'_0, R \rangle \), where the refuter starts playing,
- The refuter’s states are \(S_R = \{ (s, \#, s', R) \mid s \in S \wedge s' \in S' \} \cup \{s_{err}\}\),
- The verifier’s states are \(S_V = \{ (s, \sigma , s', V) \mid s \in S \wedge s' \in S' \wedge \sigma \in \varSigma ^G\setminus \{M\}\}\),
and \(E^G\) is the minimal set satisfying:
- \(\{ (s, \#, s', R) \xrightarrow {\sigma } (t, \sigma ^{1}, s', V) \mid \exists \;\sigma \in \varSigma : s \xrightarrow {\sigma } t \in E \} \subseteq E^G\),
- \(\{ (s, \#, s', R) \xrightarrow {\sigma } (s, \sigma ^{2}, t', V) \mid \exists \;\sigma \in \varSigma _{\mathcal {F}}: s' \xrightarrow {\sigma } t' \in E' \} \subseteq E^G\),
- \(\{ (s, \sigma ^2, s', V) \xrightarrow {\sigma } (t, \#, s', R) \mid \exists \;\sigma \in \varSigma : s \xrightarrow {\sigma } t \in E \} \subseteq E^G\),
- \(\{ (s, \sigma ^1, s', V) \xrightarrow {\sigma } (s, \#, t', R) \mid \exists \;\sigma \in \varSigma : s' \xrightarrow {\sigma } t' \in E' \} \subseteq E^G\),
- \(\{ (s, F^2, s', V) \xrightarrow {M} (t, \#, s', R) \mid \exists \;s \xrightarrow {M} t \in E^M \} \subseteq E^G\), for any \(F \in \mathcal {F}\),
- If there is no outgoing transition from some state s, then transitions \(s \xrightarrow {\sigma } s_{err}\) and \(s_{err}\xrightarrow {\sigma } s_{err}\) are added for every \(\sigma \in \varSigma \).
The intuition of this game is as follows. The refuter chooses transitions of either the specification or the implementation to play, and the verifier tries to match her choice; this is similar to the bisimulation game [28]. However, when the refuter chooses a fault, the verifier must match it with a masking transition (M). The intuitive reading of this is that the fault-tolerant implementation masks the fault in such a way that the occurrence of this fault cannot be noticed from the users’ side. \(R\) wins if the game reaches the error state, i.e., \(s_{err}\). On the other hand, \(V\) wins when \(s_{err}\) is not reached during the game. (This is basically a reachability game [26]).
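A hypothetical sketch of how the game graph of Definition 3 could be built on the fly from the two transition relations follows (same explicit representation as in the earlier sketches; the function name `build_game` and the encoding of game states as 4-tuples are assumptions). Only game states reachable from \(s_0^G\) are generated.

```python
# Hypothetical sketch of the strong masking game graph of Definition 3.
# Refuter states carry '#'; verifier states carry the pending move as a pair
# (label, side), where side 1 means the move came from the specification and
# side 2 that it came from the implementation.
from collections import deque

MASK, ERR = "M", "s_err"

def succ(E, s):
    return [(a, t) for (p, a, t) in E if p == s]

def build_game(E, E_impl, sigma, faults, s0, s0_impl):
    init = (s0, "#", s0_impl, "R")
    EG, seen, todo = set(), {init}, deque([init])
    while todo:
        v = todo.popleft()
        if v[3] == "R":                                    # refuter to move
            s, _, s2, _ = v
            moves = [(a, (t, (a, 1), s2, "V")) for (a, t) in succ(E, s)] \
                  + [(a, (s, (a, 2), t2, "V")) for (a, t2) in succ(E_impl, s2)]
        else:                                              # verifier to move
            s, (a, side), s2, _ = v
            if side == 1:          # refuter moved in A: answer in A'
                moves = [(a, (s, "#", t2, "R"))
                         for (b, t2) in succ(E_impl, s2) if b == a]
            elif a in faults:      # fault in A': answer with the M self-loop of A^M
                moves = [(MASK, (s, "#", s2, "R"))]
            else:                  # non-faulty move in A': answer in A
                moves = [(a, (t, "#", s2, "R"))
                         for (b, t) in succ(E, s) if b == a]
        if not moves:              # dead ends are completed with the error state
            moves = [(b, ERR) for b in sigma]
        for (lbl, w) in moves:
            EG.add((v, lbl, w))
            if w != ERR and w not in seen:
                seen.add(w)
                todo.append(w)
    EG |= {(ERR, b, ERR) for b in sigma}                   # s_err is absorbing
    return EG, init
```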
A weak masking game graph \(\mathcal {G}^W_{A^M,A'}\) is defined in the same way as the strong masking game graph in Definition 3, with the exception that \(\varSigma _{M}\) and \(\varSigma _{\mathcal {F}}\) may contain \(\tau \), and the set of labelled transitions (denoted as \(E_W^G\)) is now defined using the weak transition relations (i.e., \(E_W\) and \(E_W'\)) from the respective transition systems.
Figure 2 shows a part of the strong masking game graph for the running example, considering the transition systems \(A^M\) and \(A''\). We can clearly observe in the game graph that the verifier cannot mimic the transition \((s_0, \#, t_3, R) \xrightarrow {R_1^2} (s_0, R_1^2, t_3, V)\) selected by the refuter, which reads a 1 at state \(t_3\) in the fault-tolerant implementation. This is because the verifier can only read a 0 at state \(s_0\). Then, \(s_{err}\) is reached and the refuter wins.
As expected, there is a strong masking simulation between A and \(A'\) if and only if the verifier has a winning strategy in \(\mathcal {G}_{A^M,A'}\).
Theorem 3
Let \(A=\langle S, \varSigma , E, s_0\rangle \) and \(A'=\langle S', \varSigma _{\mathcal {F}}, E', s_0' \rangle \). \(A \preceq _{m}A'\) iff the verifier has a winning strategy for the strong masking game graph \(\mathcal {G}_{A^M,A'}\).
By Theorems 2 and 3, the result carries over to the weak masking game.
Theorem 4
Let \(A=\langle S, \varSigma \cup \{\tau \}, E, s_0\rangle \) and \(A'=\langle S', \varSigma _{\mathcal {F}}\cup \{\tau \}, E', s_0' \rangle \). \(A \preceq ^w_{m}A'\) iff the verifier has a winning strategy for the weak masking game graph \(\mathcal {G}^W_{A^M,A'}\).
Using the standard properties of reachability games we get the following property.
Theorem 5
For any A and \(A'\), the strong (resp. weak) masking game graph \(\mathcal {G}_{A^M, A'}\) (resp. \(\mathcal {G}^W_{A^M, A'}\)) can be determined in time \(O(|E^G|)\) (resp. \(O(|E_W^G|)\)).
The set of winning states for the refuter can be defined in a standard way from the error state [26]. We adapt ideas in [26] to our setting. For \(i,j\ge 0\), sets \(U^j_i\) are defined as follows:
$$\begin{aligned} U^0_i&= U^j_0 = \emptyset , \qquad U_1^1 = \{s_{err}\}, \\ U_{i+1}^{j+1}&= \{v' \mid v' \in S_R \wedge post(v') \cap U_{i+1}^j \ne \emptyset \} \\&\textstyle {}\cup \{v' \mid v' \in S_V \wedge post(v') \subseteq \bigcup _{j'\le j} U_{i+1}^{j'} \wedge post(v') \cap U^j_{i+1} \ne \emptyset \wedge \pi _2(v') \notin \mathcal {F} \} \\&\textstyle {}\cup \{v' \mid v' \in S_V \wedge post(v') \subseteq \bigcup _{i'\le i, j' \le j}U_{i'}^{j'} \wedge post(v') \cap U^j_{i} \ne \emptyset \wedge \pi _2(v') \in \mathcal {F} \} \end{aligned}$$
(1)
then \(U^k = \bigcup _{i \ge 0} U^k_i\) and \(U = \bigcup _{k \ge 0} U^k\). Intuitively, the subindex i in \(U^k_i\) indicates that \(s_{err}\) is reached after at most \(i-1\) faults have occurred. The following lemma is straightforwardly proven using standard techniques of reachability games [9].
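Rather than materializing every set \(U^j_i\), the following sketch computes, for each game state, the least fault index i for which the state belongs to some \(U^j_i\) (the quantity w used in Theorem 7 below), as a Bellman-style backward iteration from \(s_{err}\). This reformulation of Eq. (1), and the helper name `min_fault_index`, are our own assumptions; it operates on the explicit game graph produced by the `build_game` sketch above.

```python
import math

# Hypothetical sketch: least fault index fi(v) = min{ i | exists j : v in U^j_i },
# computed as a backward fixpoint on the game graph returned by build_game.
# Refuter states minimize over successors, verifier states maximize, and a
# verifier state whose pending move is a fault adds 1 (one more masked fault).
def min_fault_index(EG, faults, err="s_err"):
    post, states = {}, set()
    for (v, _, w) in EG:
        post.setdefault(v, set()).add(w)
        states |= {v, w}
    fi = {v: math.inf for v in states}
    fi[err] = 1                                   # s_err: reached with 0 faults, i = 1
    changed = True
    while changed:
        changed = False
        for v in states:
            succs = post.get(v, set())
            if v == err or not succs:
                continue
            if v[3] == "R":                       # refuter picks the cheapest successor
                val = min(fi[w] for w in succs)
            else:                                 # verifier picks her best response
                val = max(fi[w] for w in succs)
                if v[1][0] in faults:             # the pending fault is masked: count it
                    val += 1
            if val < fi[v]:
                fi[v], changed = val, True
    return fi
```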
Lemma 1
The refuter has a winning strategy in \(\mathcal {G}_{A^M, A'}\) (or \(\mathcal {G}^W_{A^M, A'}\)) iff \(s_{init} \in U^k\), for some k.
3.4 Quantitative Masking
In this section, we extend the strong masking simulation game introduced above with quantitative objectives to define the notion of masking fault-tolerance distance. Note that we use the attribute “quantitative” in a non-probabilistic sense.
Definition 4
For transition systems A and \(A'\), the quantitative strong masking game graph \(\mathcal {Q}_{A^M, A'} = \langle S^G, S_R, S_V, \varSigma ^G, E^G, s_{0}^G, v^G \rangle \) is defined as follows:
- \(\mathcal {G}_{A^M, A'}=\langle S^G, S_R, S_V, \varSigma ^G, E^G, s_{0}^G \rangle \) is defined as in Definition 3,
- \( v^G(s \xrightarrow {e} s') = (\chi _{\mathcal {F}} (e), \chi _{s_{err}}(s'))\), where \(\chi _{\mathcal {F}}\) is the characteristic function over the set \(\mathcal {F}\), returning 1 if \(e \in \mathcal {F}\) and 0 otherwise, and \(\chi _{s_{err}}\) is the characteristic function over the singleton set \(\{s_{err}\}\).
Note that the cost function returns a pair of numbers instead of a single number. It is straightforward to encode this pair as a single number, but we refrain from doing so for the sake of clarity. We remark that the quantitative weak masking game graph \(\mathcal {Q}^W_{A^M, A'}\) is defined in the same way as the game graph defined above, but using the weak masking game graph \(\mathcal {G}^W_{A^M, A'}\) instead of \(\mathcal {G}_{A^M, A'}\).
Given a quantitative strong masking game graph with the weight function \(v^G\) and a play \(\rho = \rho _0 \sigma _0 \rho _1 \sigma _1 \rho _2 \ldots \), for all \(i \ge 0\), let \(v_i = v^G(\rho _i \xrightarrow {\sigma _i} \rho _{i+1})\). We define the masking payoff function as follows:
$$ f_{m}(\rho ) = \lim _{n \rightarrow \infty } \frac{\text{ pr }_{1}(v_n)}{1+ \sum ^{n}_{i=0} \text{ pr }_{0}(v_i)}, $$
which is proportional to the inverse of the number of masking movements made by the verifier. To see this, note that the numerator of \(\frac{\text{ pr }_{1}(v_n)}{1+ \sum ^{n}_{i=0} \text{ pr }_{0}(v_i)}\) will be 1 when we reach the error state; that is, on those paths not reaching the error state this formula returns 0. Furthermore, if the error state is reached, then the denominator counts the number of fault transitions taken until the error state. All of them, except the last one, were masked successfully. The last fault, though the verifier attempts to mask it, eventually leads to the error state. That is, the transitions with value \((1,\_)\) are those corresponding to faults; the others are mapped to \((0,\_)\). Notice also that if \(s_{err}\) is reached in \(v_n\) without the occurrence of any fault, the nominal part of the implementation does not match the nominal specification, in which case \(\frac{\text{ pr }_{1}(v_n)}{1+ \sum ^{n}_{i=0} \text{ pr }_{0}(v_i)}=1\). The refuter wants to maximize the value of any run, that is, she will try to execute faults leading to the state \(s_{err}\). In contrast, the verifier wants to avoid \(s_{err}\), and hence she will try to mask faults with actions that take her away from the error state. More precisely, the value of the quantitative strong masking game for the refuter is defined as \(val_R(\mathcal {Q}_{A^M,A'}) = \sup _{\pi _R \in \varPi _R} \; \inf _{\pi _V \in \varPi _V} f_{m}(out(\pi _R, \pi _V))\). Analogously, the value of the game for the verifier is defined as \(val_V(\mathcal {Q}_{A^M,A'}) = \inf _{\pi _V \in \varPi _V} \; \sup _{\pi _R \in \varPi _R} f_{m}(out(\pi _R, \pi _V))\). Then, we define the value of the quantitative strong masking game, denoted by \(\mathop {val }(\mathcal {Q}_{A^M,A'})\), as the value of the game either for the refuter or the verifier, i.e., \(\mathop {val }(\mathcal {Q}_{A^M,A'}) = val_R(\mathcal {Q}_{A^M,A'}) = val_V(\mathcal {Q}_{A^M,A'})\). This can be done because quantitative strong masking games are determined, as we prove below in Theorem 6.
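For instance, consider a schematic play of the running example's game \(\mathcal {Q}_{A^M,A''}\) in which the refuter injects two faults before forcing the error state (given here purely for illustration; it agrees with the masking distance reported for this example below). All weights along the play are \((0,0)\), except the two fault transitions, which are weighted \((1,0)\), and the final step into \(s_{err}\), which is weighted \((0,1)\); once in \(s_{err}\), the self-loops keep the second component equal to 1, so the limit exists and
$$ f_{m}(\rho ) = \frac{\text{ pr }_{1}(v_n)}{1+ \sum ^{n}_{i=0} \text{ pr }_{0}(v_i)} = \frac{1}{1+2} = \frac{1}{3}. $$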
Definition 5
Let A and \(A'\) be transition systems. The strong masking distance between A and \(A'\), denoted by \(\delta _{m}(A, A')\), is defined as: \(\delta _{m}(A, A') = \mathop {val }(\mathcal {Q}_{A^M,A'}).\)
We remark that the weak masking distance \(\delta _{m}^W\) is defined in the same way for the quantitative weak masking game graph \(\mathcal {Q}^W_{A^M,A'}\). Roughly speaking, we are interested in measuring the number of faults that can be masked. The value of the game is essentially determined by the faulty and masking labels on the game graph and by whether the players can find strategies that lead to (or avoid) the state \(s_{err}\), independently of whether silent actions are present.
In the following, we state some basic properties of this kind of game. As already anticipated, quantitative strong masking games are determined:
Theorem 6
For any quantitative strong masking game \(\mathcal {Q}_{A^M, A'}\) with payoff function \(f_{m}\):
$$ \textstyle \inf _{\pi _V \in \varPi _V} \; \sup _{\pi _R \in \varPi _R} f_{m}(out(\pi _R, \pi _V)) = \sup _{\pi _R \in \varPi _R} \; \inf _{\pi _V \in \varPi _V} f_{m}(out(\pi _R, \pi _V))$$
The value of the quantitative strong masking game can be calculated as stated below.
Theorem 7
Let \(\mathcal {Q}_{A^M,A'}\) be a quantitative strong masking game. Then, \(\mathop {val }(\mathcal {Q}_{A^M,A'}) = \frac{1}{w}\), with \(w = \min \{ i \mid \exists j : s_{init} \in U^j_i \}\), whenever \(s_{init} \in U\), and \(\mathop {val }(\mathcal {Q}_{A^M,A'})=0\) otherwise, where sets \(U^j_i\) and U are defined in Eq. (1).
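Under the hypothetical reformulation sketched after Eq. (1), the value of Theorem 7 can then be read off directly from the fault index of the initial game state (again an illustrative helper, not the implementation used in the paper):

```python
import math

# Hypothetical helper: the game value of Theorem 7 from the fault indices
# computed by min_fault_index; the value is 0 when the verifier wins, i.e.
# when the initial game state has an infinite fault index.
def masking_game_value(EG, faults, s_init, err="s_err"):
    w = min_fault_index(EG, faults, err)[s_init]
    return 0.0 if math.isinf(w) else 1.0 / w
```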
Note that the sets \(U^j_i\) can be calculated using a bottom-up breadth-first search from the error state. Thus, the strategies for the refuter and the verifier can be defined using these sets, without taking into account the history of the play. That is, we have the following theorem:
Theorem 8
Players \(R\) and \(V\) have memoryless winning strategies for \(\mathcal {Q}_{A^M,A'}\).
Theorems 6, 7, and 8 apply as well to \(\mathcal {Q}^W_{A^M,A'}\). The following theorem states the complexity of determining the value of the two types of games.
Theorem 9
The quantitative strong (resp. weak) masking game can be determined in time \(O(|S^G| + |E^G|)\) (resp. \(O(|S^G| + |E_{W}^{G}|)\)).
Theorems 5 and 9 describe the complexity of solving the quantitative and standard masking games. However, in practice, one needs to bear in mind that \(|S^G| = |S|*|S'|\) and \(|E^G| = |E|+|E'|\), so constructing the game takes \(O(|S|^2*|S'|^2)\) steps in the worst case. Additionally, for the weak games, the transitive closure of the original model needs to be computed, which with the best known algorithm takes \(O(\text {max}(|S|,|S'|)^{2.3727})\) time [30].
Recall that, by using \(\mathcal {Q}^W_{A^M,A'}\) instead of \(\mathcal {Q}_{A^M,A'}\) in Definition 5, we obtain the weak masking distance \(\delta _{m}^W\). The next theorem states that A and \(A'\) are at distance 0 if and only if there is a strong (resp. weak) masking simulation between them.
Theorem 10
For any transition systems A and \(A'\): (i) \(\delta _{m}(A,A') = 0\) iff \(A \preceq _{m}A'\), and (ii) \(\delta _{m}^W(A,A') = 0\) iff \(A \preceq ^w_{m}A' \).
This follows from Theorem 7. Noting that \(A \preceq _{m}A\) (and \(A \preceq ^w_{m}A\)) for any transition system A, we obtain that \(\delta _{m}(A,A)=0\) (resp. \(\delta _{m}^W(A,A)=0\)) by Theorem 10, i.e., both distances are reflexive.
For our running example, the masking distance is 1/3 with a redundancy of 3 bits and considering two faults. This means that only one fault can be masked by this implementation. We can prove a version of the triangle inequality for our notion of distance.
Theorem 11
Let \(A = \langle S, \varSigma , E, s_0 \rangle \), \(A' = \langle S', \varSigma _{\mathcal {F}'}, E', s'_0 \rangle \), and \(A'' = \langle S'', \varSigma _{\mathcal {F}''}, E'', s''_0 \rangle \) be transition systems such that \(\mathcal {F}' \subseteq \mathcal {F}''\). Then \(\delta _{m}(A,A'') \le \delta _{m}(A,A') + \delta _{m}(A', A'')\) and \(\delta _{m}^W(A,A'') \le \delta _{m}^W(A,A') + \delta _{m}^W(A', A'').\)
Reflexivity and the triangle inequality imply that both masking distances are directed semi-metrics [7, 10]. Moreover, it is interesting to note that the triangle inequality property has practical applications. When developing critical software, it is quite common to build a first version of the software taking into account some anticipated faults. Later, after testing and running the system, further plausible faults may be observed. Consequently, the system is modified with additional fault-tolerance capabilities to overcome them. Theorem 11 states that incrementally measuring the masking distance between these different versions of the software provides an upper bound to the actual distance between the nominal system and its last fault-tolerant version. That is, if the sum of the distances obtained between the different versions is a small number, then we can ensure that the final system will exhibit an acceptable masking tolerance to faults w.r.t. the nominal system.