## Abstract

Implementation attacks like side-channel and fault attacks pose a considerable threat to cryptographic devices that are physically accessible by an attacker. As a consequence, devices like smart cards implement corresponding countermeasures like redundant computation and masking. Recently, statistical ineffective fault attacks (SIFA) have been shown to be able to circumvent these classical countermeasure techniques. We present a new approach for verifying the SIFA protection of arbitrary masked implementations in both hardware and software. The proposed method uses Boolean dependency analysis, factorization, and known properties of masked computations to show whether the fault detection mechanism of redundant masked circuits can leak information about the processed secret values. We implemented this new method in a tool called *Danira*, which can show the SIFA resistance of cryptographic implementations like AES S-Boxes within minutes.

## Introduction

Cryptographic primitives are primarily designed to withstand mathematical attacks in a black-box setting. However, when these primitives are deployed in the real world, they find themselves in a grey-box setting in which an attacker may try to force faulty computations or observe additional physical side-channel information, such as instantaneous power consumption. This improved attacker capability simplifies the extraction of secrets like cryptographic keys.

Active implementation attacks, such as fault analysis [1, 2], and passive side-channel attacks, like power or electromagnetic (EM) analysis [3, 4], are among the most serious threats for implementations of cryptographic algorithms. A common algorithmic countermeasure strategy against these attacks is the combination of masking against power analysis with redundant computation against fault attacks. *Masking* is a secret-sharing technique where one splits a cryptographic computation into \(d+1\) random shares. This technique ensures that the observation of up to *d* intermediate values of that masked computation does not reveal any information about native (unmasked) values [5,6,7,8,9]. *Redundant computation* tries to prevent the release of faulty cryptographic computations caused by environmental influences or malicious tampering such as voltage glitches, lasers, or rapid temperature variations. Without this countermeasure, an attacker with access to faulty computations can learn information about the used cryptographic key [10,11,12].
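The share-splitting idea behind masking can be sketched in a few lines of Python. This is a minimal illustration, not any discussed implementation; the function names `mask` and `unmask` and the use of `secrets.randbits` are our own choices:

```python
import secrets

def mask(secret_bit: int, d: int = 1) -> list[int]:
    """Split a secret bit into d+1 Boolean shares whose XOR is the secret."""
    shares = [secrets.randbits(1) for _ in range(d)]
    last = secret_bit
    for s in shares:
        last ^= s                # choose the last share so the XOR sum is the secret
    return shares + [last]

def unmask(shares: list[int]) -> int:
    """Recombine the shares by XOR to recover the native value."""
    out = 0
    for s in shares:
        out ^= s
    return out

shares = mask(1, d=2)            # second-order masking: three shares
assert len(shares) == 3
assert unmask(shares) == 1
```

Any *d* of the \(d+1\) shares are uniformly random on their own; only the XOR of all of them reveals the native value.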

Researchers long believed that the combination of redundancy and masking could adequately deal with active and passive implementation attacks. However, it was recently shown that when using *statistical ineffective fault attacks* (SIFA), even such protected cryptographic implementations are vulnerable to rather straightforward implementation attacks [13,14,15]. The key observation behind SIFA attacks is that a cryptographic key may correlate with the suppression of a faulted cryptographic computation. Thus, the attacker can obtain information about this key by observing whether the output of a faulted cryptographic computation is suppressed by a redundancy countermeasure or not. For example, if a 1-bit signal carries a secret value and the attacker can force this signal to zero, they can learn the secret value by observing whether or not this fault is detected. While this simplified example is obvious, SIFA is interesting because it works even if the fault injection targets just one share of a masked secret. In fact, SIFA is exploitable even if the attacker does not know the exact effect of a fault injection on the faulted value [14].

Most proposed mitigation techniques against SIFA so far use error correction, which is, however, costly when combined with masking [16, 17]. Another recently proposed SIFA mitigation tries to solve this issue with a careful combination of redundancy, masking, and reversible computing [18], achieving protection against SIFA without significant overheads. The authors give detailed *circuit* descriptions of protected cipher components that can be mapped into concrete software or hardware implementations. However, even minor modifications of the circuit description due to human error, compilers, or synthesis tools, although preserving functional equivalence, may make the circuit vulnerable to SIFA. Consequently, there is a high demand for tooling that can support designers in building efficient cryptographic implementations resistant against power analysis and fault attacks, including SIFA.

### Related work

The empirical and formal verification of power analysis and fault attack countermeasures is an already well-established topic in the cryptographic research community [19,20,21,22,23,24,25]. On a conceptual level, the verification of masking countermeasures—ensuring that individual computations are unrelated to any cryptographic secret—performs statistical independence checks that could also be adapted for verifying SIFA protection, i.e., for checking that cryptographic secrets do not correlate with the suppression of a faulted cryptographic computation. However, in the following we argue that such existing tools either cannot be easily adapted for SIFA verification or would come with performance overheads that make them unattractive for practical use.

Tools like Rebecca [21] and its successor Coco [22] use correlation tracking to show statistical independence in (sequential) masked hardware circuits. Although their method ignores the *strength* and *sign* of correlations for performance reasons, the remaining information is still sufficient to show standard probing resistance of masked circuits. However, these approximations are not applicable for SIFA verification. Since Rebecca and Coco do not track the *sign* of correlations, there is no way to distinguish the correlation sets of a negated value from those of a non-negated value. Given that SIFA verification must reason about bit-flip faults inside the fault-detection mechanism, this approximation leads to falsely reported leaks. Similarly, tools like maskVerif [20] rely on security proofs for a gate’s input signals to prove the gate’s security. According to our investigation, because the fault-detection mechanism combines the shares in its sub-formulas, maskVerif triggers a leakage report even though the value cannot be observed.

Exact methods like SILVER [25] use some form of model counting to track exact probability distributions of values within masked circuits and check whether the correlation strength is zero for all secret values. These methods could be adapted for SIFA verification, e.g., by using a strategy as outlined in Fig. 1, but this would lead to verification runtimes significantly higher than those of the approach we present in this paper.

Besides masking verification tools, there also exists VerFI [26], a verification tool dedicated to fault attacks that, among other things, can verify the SIFA protection of a given circuit in certain scenarios. More precisely, VerFI can detect SIFA vulnerabilities of a given circuit using an empirical, simulation-based approach that essentially checks whether either (1) all fault injections are corrected through error correction methods, or (2) all fault injections are detected via redundancy methods. This empirical approach works for error-correction-based SIFA countermeasures; however, VerFI is not suited for the verification of, e.g., the more efficient SIFA countermeasure design by Daemen et al. [18], which does not need to be able to correct every possible fault injection.

### Contribution

The contribution of this paper is threefold and consists of a method and its implementation, its evaluation, and resulting SIFA-resistant circuit artifacts.

#### Method

We present a formal verification approach to determine whether a masked redundant cipher implementation is SIFA resistant within a well-defined attacker model. Our verification approach checks whether the output of the fault-detection mechanism correlates with secrets used in the computation. We present three properties and their respective checking methods that serve as sufficient conditions for SIFA protection. *(Incompleteness):* If a function \(\delta \) does not functionally depend on all shares of a secret *s*, it cannot leak the secret. *(Hiding):* If a function \(\delta \) can be written as \(m \oplus \delta '\), where *m* is a uniformly distributed random variable and \(\delta '\) is functionally independent of *m*, \(\delta \) does not leak information about any secrets. *(Inferred independence):* For a function \(\delta = \bigvee _i \delta _i\), if all linear combinations of its partial functions \(\delta _i\) are statistically independent of a secret *s*, \(\delta \) cannot leak the secret *s*. We present an algorithm that uses these sufficient but not necessary conditions to prove the security of circuits. Our tool *Danira* implements this algorithm and is, to our knowledge, the first tool for formal verification of SIFA resistance of masked redundant circuits.

#### Evaluation

We provide an experimental evaluation of our method. Since our conditions are sufficient but not necessary, they may fail to prove SIFA resistance for some secure circuits; our experiments show, however, that the approach gives precise results for a representative range of secure circuits. If *Danira* cannot prove resistance, it provides fault locations that might leak information about the secrets. We show that *Danira* can accurately prove security or find bugs in S-Boxes, the nonlinear parts of cryptographic implementations, in minutes or even seconds. With respect to SIFA verification, masked linear layers do not need any further analysis as fault injections in these components are not exploitable with SIFA. Ultimately, we give practical examples illustrating that, even when a design is secure against SIFA on paper, vulnerabilities may arise as a result of simple compiler/synthesis optimizations, which can then, however, be identified with *Danira*.

#### Artifacts

As a direct result of this work, we present the first SIFA-resistant Verilog implementations of the designs by Daemen et al. [18] for a masked AES S-Box, the Keccak \(\chi _3\) S-Box, and all classes of quadratic 4-bit S-Boxes.

## Preliminaries

*Masking* is an algorithmic countermeasure that, while primarily intended to prevent power analysis attacks, also plays an essential role in SIFA attacks. In a masked cipher implementation, each input, output, and intermediate variable is split into \(d+1\) *shares* so that their Xor is equal to the original *native variable* [5]. In Boolean masking, a native variable *s* is split into \(d+1\) random shares \(s_0, \ldots , s_d\) that satisfy \(s = s_0 \oplus \ldots \oplus s_d\). As long as an attacker cannot observe a set of values statistically dependent on all \(d+1\) shares of a native value, the computation is secure against classical power analysis techniques. Dealing with linear functions is trivial as they can be computed on each share individually. However, implementing masking for nonlinear functions (S-Boxes) requires computations on all shares, which is more challenging to implement securely and correctly, and is thus the main interest in the literature.

*Redundant computation* is a fault attack countermeasure for cryptographic computations. The main idea is to perform the same computation several times and release a result only if the redundant computations match. This check prevents cases where an attacker forces faults in the computation, leading to incorrect results that correlate with native secrets [10]. Figure 1 shows the structure of a fault detection mechanism for redundant computations. If an attacker introduces a fault and the outputs do not match, the output \(\delta \) signals the fault and prevents the release of the result.
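The detect-and-suppress structure of Fig. 1 can be mimicked with a toy sketch; the 2-bit S-Box and the single-bit fault below are invented placeholders for the redundant computation units:

```python
def sbox(x: int) -> int:
    # Invented 2-bit bijection standing in for the protected computation.
    return [0b01, 0b00, 0b11, 0b10][x]

def redundant_call(x: int, fault_second_copy: bool = False):
    r1 = sbox(x)                     # first redundant computation
    r2 = sbox(x)                     # second redundant computation
    if fault_second_copy:
        r2 ^= 0b01                   # attacker flips one bit in the second copy
    delta = int(r1 != r2)            # fault-detection value δ
    result = None if delta else r1   # suppress the output on mismatch
    return result, delta

assert redundant_call(2) == (3, 0)            # fault-free: result released
assert redundant_call(2, True) == (None, 1)   # fault detected: result suppressed
```

Note that with SIFA, \(\delta \) itself becomes the attacker's observation channel, which is exactly what the rest of this paper analyzes.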

*Statistical ineffective fault attacks (SIFA),* first presented at CHES 2018 by Dobraunig et al., are a relatively new type of fault attack capable of circumventing common fault and power analysis countermeasures, while being applicable to a wide variety of block ciphers and AEAD schemes [13,14,15, 27]. When performing SIFA, an attacker calls a cryptographic operation (e.g., a block cipher) with varying inputs, injects a fault during each of the computations, and only collects outputs in cases where the fault injection did not cause a faulty computation result (i.e., the output is not suppressed). This *filtered* set of outputs can then be used to perform a key recovery attack on a block cipher as follows.

A typical block cipher design consists of an iterated round function, comprising a linear and a nonlinear layer, that mixes the current state with the cryptographic key such that, in the end, each bit of the block cipher output is uniformly distributed. If we now consider, e.g., an AND computation that occurs in the nonlinear layer of a (later) round function, one can observe that a fault-induced difference in one operand only propagates to the AND output if the other operand is ‘1.’ Hence, if an attacker repeatedly calls a block cipher with varying inputs, injects the same difference in each computation, and collects only the outputs that are correct (not suppressed), a certain intermediate value will show a bias toward ‘0.’ Given such a set of faulted but correct block cipher outputs, an attacker can now make a partial key guess of the last round key and calculate back to the faulted operation for each collected output (ciphertext). If the partial key guess was correct, the observed distribution of an intermediate value at that location should be biased. Otherwise, if the observed distribution is uniform, the key guess was wrong. For a more complete attack description targeting the AES-128 block cipher, we refer to the description in [14].
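The filtering effect at a single AND gate can be checked exhaustively. This tiny enumeration is our illustration of the bias, not part of any cited attack code:

```python
from itertools import product

# Flip operand a of z = a AND b and keep only the inputs where the output
# is unchanged, i.e., where the fault injection was ineffective.
kept = [(a, b) for a, b in product((0, 1), repeat=2)
        if (a & b) == ((a ^ 1) & b)]

# The surviving (non-suppressed) cases are exactly those with b = 0,
# so the filtered set is fully biased toward b = 0.
assert kept == [(0, 0), (1, 0)]
```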

If we now additionally consider masked implementations where each intermediate value is split into multiple random shares, filtering outputs based on the operand of one AND gate is not sufficient anymore. In fact, for SIFA to work in masked scenarios, the attacker needs to work with fault inductions that cause a difference that propagates into multiple AND gates that use the shares of one native value as other operands. We show this with a small example inspired by Daemen et al. [18].

### Example 1

Consider a masked S-Box implementation that operates on shared inputs and outputs. For simplicity, assume that we repeatedly call this S-Box with uniformly distributed inputs and observe the corresponding outputs. Since an S-Box is a bijective function, uniformly distributed inputs should give uniformly distributed outputs. Figure 2 shows a reduced depiction of a masked \(\chi _3\) S-Box, the smaller version of the \(\chi _5\) S-Box used in Keccak (SHA-3). The S-Box takes a 3-bit input, represented by bits *a*, *b* and *c*. Therefore, a first-order masked version of \(\chi _3\) takes the bits \(a_0, a_1, b_0, b_1, c_0, c_1\) as input, with \(a = a_0 \oplus a_1\), etc. If we assume a fault targeting \(a_0\) at the specified location, the induced bit-difference propagates into three And-gates that take the bits \(b_0\), \(b_1\), and \(c_1\) as the other inputs. In this case, the bit-difference cancels out, and the fault remains undetected, exactly when the fault-detection value \(\delta = b_0 \vee b_1 \vee c_1\) is zero. When a fault is not detected, an attacker knows that \(b_0\), \(b_1\), and \(c_1\) are all zero, and therefore, *b* is zero as well. In this concrete case, the attacker uses the fault injection to obtain a filtered set of computations in which the distribution of *b* is biased, and uses them to recover the key.
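A brute-force check of this example, with \(\delta \) as derived above (the enumeration itself is our illustrative sketch):

```python
from itertools import product

# Fault-detection value for a bit-flip on a0 in Example 1: the difference
# propagates into AND gates whose other operands are b0, b1, and c1.
undetected = [(b0, b1, c1) for b0, b1, c1 in product((0, 1), repeat=3)
              if (b0 | b1 | c1) == 0]      # δ = 0: the fault passes unnoticed

# The fault goes undetected only for b0 = b1 = c1 = 0, so the native bit
# b = b0 XOR b1 is zero in every surviving computation.
assert undetected == [(0, 0, 0)]
assert all(b0 ^ b1 == 0 for b0, b1, _ in undetected)
```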

*Efficient SIFA countermeasures* were presented at CHES 2020 [18]. Their SIFA mitigation strategy has almost no overhead and builds upon a careful combination of masking, redundant computation, and reversible computing. They show that, by building nonlinear operations from incomplete and invertible building blocks, they achieve implementations where a single fault in the computation is either: (1) not exploitable by SIFA, or (2) detectable via redundant computations. This approach is comparably easy to implement for small S-Boxes and can also be extended to larger S-Boxes such as the AES S-Box.

*Boolean formulas* are a symbolic composition of Boolean variables using logic operators. For a propositional Boolean formula *f*, we write \({\text {Var}}(f)\) to refer to the variables that occur in *f*. When clear from context, we write *V* to denote a superset of all used variables, *i.e.*, \({\text {Var}}(f) \subseteq V\). The *partial evaluation* of *f*, where a variable *q* is assigned a value \(p \in \mathbb {B}\) is written as \(f\left[ q \leftarrow p \right] \). Given a set of variables *Q* and an assignment \(\alpha : Q \rightarrow \mathbb {B}\), we write \(f\left[ \alpha \right] \) to denote the partial evaluation of *f* where each variable in *Q* is assigned according to \(\alpha \).

We say that a formula *f* is *functionally dependent* on a variable *x* if and only if the concrete value of \(x \in \mathbb {B}\) has an influence on the value of \(f \in \mathbb {B}\). Henceforth, for a given formula *f*, we write \(\mathcal {D}(f) \subseteq {\text {Var}}(f)\) to denote the set of variables that *f* functionally depends on. That is, \(x \in \mathcal {D}(f)\) if and only if there exists \(\alpha : {\text {Var}}(f) \setminus \left\{ x\right\} \rightarrow \mathbb {B}\), such that \(f[\alpha ]\left[ x \leftarrow \bot \right] \oplus f[\alpha ]\left[ x \leftarrow \top \right] = \top \). The above property can be checked by a SAT solver.
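A brute-force stand-in for this check (exhaustive enumeration instead of a SAT solver; the helper name `depends_on` and the dictionary encoding of assignments are our own):

```python
from itertools import product

def depends_on(f, variables, x):
    """x ∈ D(f): some assignment to the other variables makes f differ
    between x = ⊥ and x = ⊤ (exhaustive version of the SAT query)."""
    rest = [v for v in variables if v != x]
    for bits in product((False, True), repeat=len(rest)):
        alpha = dict(zip(rest, bits))
        if f({**alpha, x: False}) != f({**alpha, x: True}):
            return True
    return False

# f mentions q syntactically but does not functionally depend on it.
f = lambda a: a["p"] and (a["q"] or not a["q"])
assert depends_on(f, ["p", "q"], "p")
assert not depends_on(f, ["p", "q"], "q")
```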

To discuss information leakage caused by a fault, we first define what it means for a formula *f* to contain information about another formula *g*. We define the weight of a Boolean function as \({\#}_{V}(f) = \left| \left\{ \alpha : V \rightarrow \mathbb {B} \mid f[\alpha ] = \top \right\} \right| \). Formulas *f* and *g* are *statistically dependent* if and only if \({\#}_{V}{(f \wedge g)} \cdot {\#}_{V}{(\lnot f)} \ne {\#}_{V}{(\lnot f \wedge g)} \cdot {\#}_{V}{(f)}\). Conversely, *f* and *g* are statistically independent if and only if the proportion of assignments \(\alpha \) for which \(g\left[ \alpha \right] = \top \) is the same regardless of the observed value of *f*.

### Example 2

Let \(V = \{a, b, c\}\) be a set of variables. Let \(f = a \wedge b\), \(g = \lnot a \vee c\), and \(h = b \oplus c\) be Boolean formulas. Formulas *f* and *g* are statistically dependent because \({\#}_{V}{(f \wedge g)}\cdot {\#}_{V}{(\lnot f)} = 6\) and \({\#}_{V}{(\lnot f \wedge g)} \cdot {\#}_{V}{(f)} = 10\). Indeed, if \(f\left[ \alpha \right] = \top \), then probably \(g\left[ \alpha \right] = \bot \), whereas if \(f\left[ \alpha \right] = \bot \), then \(g\left[ \alpha \right] = \top \) is just as likely as \(g\left[ \alpha \right] = \bot \). The formulas *f* and *h* are statistically independent because \({\#}_{V}{(f \wedge h)}\cdot {\#}_{V}{(\lnot f)} = 6\) and \({\#}_{V}{(\lnot f \wedge h)} \cdot {\#}_{V}{(f)} = 6\).
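The weights in this example can be recomputed directly from the definition; the helper `weight` is our illustrative implementation of \({\#}_{V}(\cdot )\):

```python
from itertools import product

V = ("a", "b", "c")

def weight(f):
    """#_V(f): the number of assignments over V that satisfy f."""
    return sum(f(dict(zip(V, bits)))
               for bits in product((False, True), repeat=len(V)))

f = lambda m: m["a"] and m["b"]
g = lambda m: (not m["a"]) or m["c"]
h = lambda m: m["b"] != m["c"]        # b XOR c

# f and g are statistically dependent: 6 != 10.
assert weight(lambda m: f(m) and g(m)) * weight(lambda m: not f(m)) == 6
assert weight(lambda m: (not f(m)) and g(m)) * weight(lambda m: f(m)) == 10
# f and h are statistically independent: 6 == 6.
assert weight(lambda m: f(m) and h(m)) * weight(lambda m: not f(m)) == 6
assert weight(lambda m: (not f(m)) and h(m)) * weight(lambda m: f(m)) == 6
```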

We say that a Boolean formula *f* is *balanced* if and only if \({\#}_{V}{(f)} = {\#}_{V}{(\lnot f)} = 2^{\left| V\right| - 1}\). A Boolean variable *x*, interpreted as a formula, is inherently balanced for any variable set *V* with \(x \in V\), as there are \(2^{\left| V\right| -1}\) assignments \(\alpha : V \rightarrow \mathbb {B}\) with \(\alpha (x) = \top \). Lemma 1 states that this can be extended to functions of the form \(f = x \oplus g\).

### Lemma 1

Let \(f = x \oplus g\) be a Boolean formula with \(x \notin {\text {Var}}(g)\). We have that *f* is balanced.

### Proof

Formula *f* is balanced if and only if \({\#}_{V}{(x \oplus g)} = {\#}_{V}{(x \wedge \lnot g)} + {\#}_{V}{(\lnot x \wedge g)} = 2^{\left| V\right| - 1}\). Since \(x \notin {\text {Var}}(g)\), we know that \({\#}_{V}{(x \wedge \lnot g)} = {\#}_{\{x\}}{(x)} \cdot {\#}_{V \setminus \{x\}}{(\lnot g)}\) and similarly \({\#}_{V}{(\lnot x \wedge g)} = {\#}_{\{x\}}{(\lnot x)} \cdot {\#}_{V \setminus \{x\}}{(g)}\). Therefore, since \({\#}_{\{x\}}{(x)} = {\#}_{\{x\}}{(\lnot x)} = 1\), we have \({\#}_{V}{(f)} = {\#}_{V \setminus \{x\}}{(\lnot g)} + {\#}_{V \setminus \{x\}}{(g)} = 2^{\left| V\right| - 1}\).

\(\square \)

We measure the *Boolean distance* of two formulas *f* and *g* as the number of assignments where their values are different. This is equivalent to the weight of their difference \({\#}_{V}{(f \oplus g)}\). Lemma 2 states the connection between statistical independence and Boolean distance.

### Lemma 2

Let *f* and *g* be Boolean formulas and let *f* be balanced. Formulas *f* and *g* are statistically independent if and only if their difference is balanced.

### Proof

We start from the definition of statistical independence between *f* and *g*, \({\#}_{V}{(f \wedge g)} \cdot {\#}_{V}{(\lnot f)} = {\#}_{V}{(\lnot f \wedge g)} \cdot {\#}_{V}{(f)}\). Since *f* is balanced, \({\#}_{V}{(f)} = {\#}_{V}{(\lnot f)} = 2^{\left| V\right| - 1}\), so the condition simplifies to \({\#}_{V}{(f \wedge g)} = {\#}_{V}{(\lnot f \wedge g)}\).

We know that the assignments \(\alpha \) that satisfy \(f \wedge g\) are all those that satisfy *f*, without those that satisfy \(f \wedge \lnot g\), *i.e.*, \({\#}_{V}{(f \wedge g)} = {\#}_{V}{(f)} - {\#}_{V}{(f \wedge \lnot g)}\). Substituting this into the simplified condition shows that it holds if and only if the difference of *f* and *g* is balanced: \({\#}_{V}{(f \oplus g)} = {\#}_{V}{(f \wedge \lnot g)} + {\#}_{V}{(\lnot f \wedge g)} = {\#}_{V}{(f \wedge \lnot g)} + {\#}_{V}{(f \wedge g)} = {\#}_{V}{(f)} = 2^{\left| V\right| - 1}\).

\(\square \)
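Lemma 2 can also be confirmed exhaustively for all Boolean functions over two variables, representing each function by its truth table. This is an illustrative check of ours, not part of the proof:

```python
from itertools import product

# Each Boolean function over two variables is a truth table with one entry
# per assignment; the weight is then simply the number of 1-entries.
tables = list(product((0, 1), repeat=4))
checked = 0

for f in tables:
    if sum(f) != 2:                  # Lemma 2 assumes f is balanced
        continue
    for g in tables:
        conj = sum(a & b for a, b in zip(f, g))           # #(f AND g)
        nconj = sum((1 - a) & b for a, b in zip(f, g))    # #(NOT f AND g)
        indep = conj * (4 - sum(f)) == nconj * sum(f)     # product definition
        diff_balanced = sum(a ^ b for a, b in zip(f, g)) == 2
        assert indep == diff_balanced                     # Lemma 2
        checked += 1

assert checked == 6 * 16    # 6 balanced choices of f, 16 choices of g
```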

## Verification method

In this section, we introduce a method for verifying resistance against SIFA. That is, we show how to verify whether the fault-detection mechanism could give away information about native secrets processed by a software computation or hardware circuit. Our method focuses on proving the statistical independence of the fault-detection value \(\delta \) and any of the secrets \(s \in S\). We do not show this directly and instead try to prove the statistical independence using the *incompleteness*, *hiding*, and *inferred independence* properties we introduce in this section. However, we first define the exact attack model considered in this verification approach.

### Attack model

Formally proving resistance against SIFA requires a definition of the attacker’s capabilities and the exact information they observe. We use an attack model that is very similar to the one introduced by Daemen et al. [18]. We consider redundant masked implementations of S-Boxes that the attacker can query. Figure 1 shows a diagram of such an implementation, where the outputs of the two computation units are used to compute the fault-detection value \(\delta \). With SIFA, the value of \(\delta \) is the only information the attacker receives from the computation. The goal of an attacker is to learn information about the native secret values processed by the computation. The inputs of the computation are categorized as masks and secret shares. In the rest of the section, we say that \(M\) is the set of mask variables, and \(S\) is the set of formulas representing the secrets. We, therefore, have the set of input variables \(V = M \cup \bigcup _{s \in S} {\text {Var}}(s)\).

As SIFA is a fault attack, the attacker has the technical capability to introduce a fault that changes the value of an intermediate computation. If we represent \(\delta \) as a computational circuit, a fault modifies the output of precisely one logic gate used during the computation. In our attack model, we consider faults that negate the value of a gate by causing a bit-flip, which also captures many other fault models such as stuck-at faults for masked circuits [18]. The attacker’s goal is to find a fault location that would cause a statistical dependency between \(\delta \) and one of the formulas \(s \in S\). Our verification does not currently take into account the possible effects of “glitchy” fault injections, i.e., faults with specific timing behavior that cause the output of gates to change (glitch) several times before reaching a stable logic state. While it has been shown that such effects need to be taken into account for implementing masking correctly in hardware, it is currently not clear if, or to what extent, they are relevant for SIFA attacks in realistic attacker settings.

### Proposition 1

A computation with a fault-detection value \(\delta \) is SIFA resistant against a fault-inducing attacker if \(\delta \) is statistically independent of all native secrets \(s \in S\).

### Incompleteness

First, we prove that a fault-detection formula \(\delta \) that does not functionally depend on all shares of a secret *s* cannot be statistically dependent on *s*. A syntactic version of this property is known as *non-interference* in the literature [20, 28]. Intuitively, if one of the shares is absent from the formula \(\delta \), then an attacker cannot infer anything about *s* without this missing piece of information. Definition 1 formally states this intuition of *incomplete* secrets. Lemma 3 states that *incompleteness* is sufficient for statistical independence.

### Definition 1

Let *f* be a formula, and *s* be a secret represented by the formula \(s_0 \oplus \ldots \oplus s_d\), where the shares \(s_i\) are variables. We say that a secret *s* is *incomplete* in formula *f* whenever \(\mathcal {D}(s) \not \subseteq \mathcal {D}(f)\).

### Lemma 3

Let secret \(s = s_0 \oplus \ldots \oplus s_d\) be incomplete in the fault-detection formula \(\delta \). Then, \(\delta \) and *s* are statistically independent.

### Proof

Since *s* is balanced per Lemma 1, \(\delta \) and *s* are statistically independent if their difference \(\delta \oplus s\) is balanced per Lemma 2. Let \(\delta ' = \delta \left[ {\text {Var}}(\delta ) \setminus \mathcal {D}(\delta ) \mapsto \bot \right] \) be a simplified version of \(\delta \) from which the functionally irrelevant variables have been removed. Due to incompleteness, there must be an \(s_i \in \mathcal {D}(s)\) such that \(s_i \notin {\text {Var}}(\delta ' \oplus \bigoplus _{j \ne i} s_j)\), and therefore per Lemma 1, \(\delta \oplus s\) is balanced. \(\square \)
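The incompleteness check can be prototyped by computing \(\mathcal {D}(\delta )\) exhaustively. This is again a brute-force stand-in for the SAT-based check; the helper name `dependent_vars` is ours:

```python
from itertools import product

def dependent_vars(f, variables):
    """D(f): the variables whose value can change f (exhaustive check)."""
    deps = set()
    for x in variables:
        rest = [v for v in variables if v != x]
        for bits in product((False, True), repeat=len(rest)):
            alpha = dict(zip(rest, bits))
            if f({**alpha, x: False}) != f({**alpha, x: True}):
                deps.add(x)
                break
    return deps

# δ depends on s0 and m but not on s1, so the secret s = s0 XOR s1 is
# incomplete in δ and, per Lemma 3, δ cannot leak s.
V = ("s0", "s1", "m")
delta = lambda a: a["s0"] != a["m"]       # s0 XOR m
assert dependent_vars(delta, V) == {"s0", "m"}
assert not {"s0", "s1"} <= dependent_vars(delta, V)
```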

### Hiding

Assume that the formula \(\delta \) is functionally dependent on all shares of a secret \(s = s_0 \oplus \ldots \oplus s_d\), *i.e.*, \(\mathcal {D}(s) \subseteq \mathcal {D}(\delta )\). Incompleteness, as defined in Definition 1, is thus not fulfilled. However, \(\delta \) and *s* could still be statistically independent. Intuitively, if \(\delta \) is balanced and masked by some uniformly random value, it cannot statistically correlate with any secret \(s \in S\).

### Definition 2

A uniformly random variable *x* *hides* a secret \(s \in S\) in the error-detection formula \(\delta \) whenever \(\delta = x \oplus f\), with \(x \notin \mathcal {D}(s) \cup \mathcal {D}(f)\).

Not all variables can hide secrets. Masks hide secrets because they are uniformly random by definition. Although individual shares \(s_i\) of a secret \(s \in S\) are guaranteed to be uniformly random, their corresponding native secrets are not. Consequently, when investigating the hiding property from Definition 2, we only consider masks and shares of incomplete secrets in \(\delta \), as stated in Lemma 4.

### Lemma 4

Let \(\delta \) be a formula, \(S'\) be the set of secrets that are incomplete in \(\delta \), *i.e.*, \(S' = \{s \in S \mid \mathcal {D}(s) \not \subseteq \mathcal {D}(\delta )\}\), \(M\) be the uniformly random mask variables, and *X* be the union \(X = M \cup \bigcup _{s \in S'} \mathcal {D}(s)\). If there exists an \(x \in X\) that hides a secret \(s \in S\), then \(\delta \) and *s* are statistically independent.

### Proof

Because *x* hides *s* in \(\delta \), we know that \(\delta \oplus s = x \oplus f \oplus s\) is balanced according to Lemma 1. As the difference of \(\delta \) and *s* is balanced, they are statistically independent according to Lemma 2. \(\square \)

Lemma 5 presents a method that tests whether the factorization needed for the *hiding* property is possible. The method uses a SAT solver and is similar to the method that checks functional dependencies.

### Lemma 5

Let *f* be a Boolean formula and \(x \in {\text {Var}}(f)\) be a variable. Then, \(f = x \oplus f\left[ x \leftarrow \bot \right] \) if and only if \(f\left[ x \leftarrow \bot \right] \oplus f\left[ x \leftarrow \top \right] = \top \).

### Proof

We assume that \(f = x \oplus f\left[ x \leftarrow \bot \right] \) and show \(f\left[ x \leftarrow \bot \right] \oplus f\left[ x \leftarrow \top \right] = \top \) using substitution: \(f\left[ x \leftarrow \bot \right] \oplus f\left[ x \leftarrow \top \right] = \left( \bot \oplus f\left[ x \leftarrow \bot \right] \right) \oplus \left( \top \oplus f\left[ x \leftarrow \bot \right] \right) = \bot \oplus \top = \top \).

Similarly, assume that \(f\left[ x \leftarrow \bot \right] \oplus f\left[ x \leftarrow \top \right] = \top \). We prove \(f = x \oplus f\left[ x \leftarrow \bot \right] \) by showing that the two sides are equal under all values of *x*. For \(x = \bot \), we show this through the introduction of \(\bot \) and transformations: \(f\left[ x \leftarrow \bot \right] = \bot \oplus f\left[ x \leftarrow \bot \right] = \left( x \oplus f\left[ x \leftarrow \bot \right] \right) \left[ x \leftarrow \bot \right] \).

For \(x = \top \), we show this using the assumed equality: \(f\left[ x \leftarrow \top \right] = \top \oplus f\left[ x \leftarrow \bot \right] = \left( x \oplus f\left[ x \leftarrow \bot \right] \right) \left[ x \leftarrow \top \right] \).

\(\square \)
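Lemma 5 translates directly into an exhaustive check; the helper name `hides` is our illustrative stand-in for the solver query:

```python
from itertools import product

def hides(f, variables, x):
    """Lemma 5: f = x XOR f[x<-0] iff f[x<-0] XOR f[x<-1] is a tautology."""
    rest = [v for v in variables if v != x]
    return all(
        f({**dict(zip(rest, bits)), x: False})
        != f({**dict(zip(rest, bits)), x: True})
        for bits in product((False, True), repeat=len(rest))
    )

V = ("m", "s0", "s1")
delta = lambda a: a["m"] != (a["s0"] and a["s1"])   # m XOR (s0 AND s1)
assert hides(delta, V, "m")        # the mask m can be factored out
assert not hides(delta, V, "s0")   # the share s0 cannot
```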

It is enough to find one uniformly random variable *x* to show that \(\delta \) is statistically independent of all secrets \(s \in S\). As discussed earlier, not all variables in \({\text {Var}}(\delta )\) are eligible for the hiding property. Thus, our verification method only checks the hiding property after determining incomplete secrets first.

### Inferred independence

Although incompleteness and hiding suffice in most cases, the structure of \(\delta \) can make them inapplicable. That is, it is possible that \(\delta \) functionally depends on some secret *s* while no uniformly random value hides *s* in \(\delta \). Example 3 illustrates this situation.

### Example 3

Let \(\delta \) be the fault-detection formula with \(\delta = \delta _0 \vee \delta _1\), \(\delta _0 = x \oplus s_0\) and \(\delta _1 = y \oplus s_1\) be its sub-formulas, \(M = \left\{ x, y\right\} \) be the masks, and \(s = s_0 \oplus s_1\) be a secret. Formula \(\delta \) is functionally dependent on both \(s_0\) and \(s_1\), since there exist assignments \(\alpha : {\text {Var}}(\delta ) \setminus \{s_i\} \rightarrow \mathbb {B}\) such that \(\delta [\alpha ]\left[ s_i \leftarrow \bot \right] \oplus \delta [\alpha ]\left[ s_i \leftarrow \top \right] = \top \). Similarly, \(\delta \) cannot be factorized into either \(\delta = x \oplus \delta \left[ x \leftarrow \bot \right] \) or \(\delta = y \oplus \delta \left[ y \leftarrow \bot \right] \), so neither *x* nor *y* hides *s*. However, \(\delta \) is indeed statistically independent of *s* because \({\#}_{{\text {Var}}(\delta )}{(\delta \wedge s)}\cdot {\#}_{{\text {Var}}(\delta )}{(\lnot \delta )} = {\#}_{{\text {Var}}(\delta )}{(\lnot \delta \wedge s)}\cdot {\#}_{{\text {Var}}(\delta )}{(\delta )} = 24\).
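The counts of this example can be reproduced by enumeration (our sketch, reusing the weight-counting idea from Example 2):

```python
from itertools import product

V = ("x", "y", "s0", "s1")

def weight(f):
    """#_V(f) over the four variables of the example."""
    return sum(f(dict(zip(V, bits)))
               for bits in product((0, 1), repeat=len(V)))

delta = lambda a: (a["x"] ^ a["s0"]) | (a["y"] ^ a["s1"])
s = lambda a: a["s0"] ^ a["s1"]

# Both products equal 24, so δ and s are statistically independent even
# though neither incompleteness nor hiding applies to δ.
assert weight(lambda a: delta(a) and s(a)) * weight(lambda a: not delta(a)) == 24
assert weight(lambda a: (not delta(a)) and s(a)) * weight(lambda a: delta(a)) == 24
```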

Therefore, because of the structure of the fault-detection formula \(\delta \), there is a real possibility that the incompleteness and hiding checks are not sufficient to show that \(\delta \) does not statistically depend on any secrets. However, this can be mitigated by inferring whether \(\delta \) is statistically independent of *s* by looking at its sub-formulas \(\delta _i\) instead. Lemma 6 introduces a method for inferring the statistical independence of two Boolean formulas *f* and *g*, where one has the topmost operation Or, just like \(\delta \), and the other is a balanced function, just like a secret. This property is inspired by correlation propagation used in Rebecca [21].

### Lemma 6

Let \(f = a \vee b\) and *g* be Boolean formulas with the variable sets \({\text {Var}}(f) \subseteq V\) and \({\text {Var}}(g) \subseteq V\). If \(\bot \), *a*, *b*, and \(a \oplus b\) are statistically independent of *g*, then *f* is also statistically independent of *g*.

### Proof

First, \(\bot \) and *g* are statistically independent if and only if *g* is balanced, so \({\#}_{V}{(g)} = {\#}_{V}{(\lnot g)} = 2^{\left| V\right| - 1}\). Since *g* must be balanced, *a*, *b*, and \(a \oplus b\) are statistically independent of *g* if and only if their differences are balanced per Lemma 2, *i.e.*, \({\#}_{V}{(a \oplus g)} = {\#}_{V}{(b \oplus g)} = {\#}_{V}{(a \oplus b \oplus g)} = 2^{\left| V\right| - 1}\). Summing up all of these conditions and rearranging them yields \({\#}_{V}{(f \oplus g)} = \tfrac{1}{2} \left( {\#}_{V}{(a \oplus g)} + {\#}_{V}{(b \oplus g)} + {\#}_{V}{(a \oplus b \oplus g)} - {\#}_{V}{(g)} \right) = 2^{\left| V\right| - 1}\), so the difference of *f* and *g* is balanced and, per Lemma 2, *f* and *g* are statistically independent.

\(\square \)

Therefore, at least in the case where \(\delta = \delta _0 \vee \delta _1\), we can infer that \(\delta \) is statistically independent of a secret *s*, as long as \(\delta _0\), \(\delta _1\), and *s* fulfill the conditions of Lemma 6. Example 4 illustrates this.

### Example 4

Let \(\delta \), \(\delta _0\), \(\delta _1\) and *s* be as in Example 3. By Lemma 1, *s* is balanced. The hiding property applies for \(\delta _0\), \(\delta _1\) and \(\delta _0 \oplus \delta _1\), where *x*, *y*, and \(x \oplus y\) can be factorized out, respectively. According to Lemma 2, all of the prerequisites for Lemma 6 are met, so we are able to show that \(\delta \) is indeed statistically independent of *s*, without testing the statistical independence definition explicitly.

However, in general, \(\delta \) will be a formula of the form \(\delta = \bigvee _{i=1}^{n} \delta _i\). Although it is possible to apply Lemma 6 recursively, it is not ideal because we run into the same problem we demonstrated in Example 3, just one recursive application later. Luckily, Lemma 6 can be generalized to Or operations with multiple arguments, as shown in Theorem 1.

### Theorem 1

Let \(\Phi = \{\phi _1, \ldots , \phi _n\}\) be a set of Boolean formulas, \(f = \bigvee _{i=1}^{n} \phi _i\) be their disjunction, *g* be another Boolean formula, and \({\text {Var}}(f) \subseteq V\) and \({\text {Var}}(g) \subseteq V\) be their variables. If for all \(\Psi \in \mathcal {P}(\Phi )\), where \(\mathcal {P}(\cdot )\) is the power-set operation, \(f' = \bigoplus _{\psi \in \Psi } \psi \) is statistically independent of *g*, then so is *f*.

### Proof

The proof proceeds by induction over the size of \(\Phi \), where we write \(\Phi _n = \left\{ \phi _1, \ldots , \phi _n\right\} \) when referring to the set with *n* sub-formulas. *Basis.* The conditions are trivially fulfilled for \(\Phi _1 = \{\phi _1\}\), whereas \(\Phi _2 = \{\phi _1, \phi _2\}\) directly applies Lemma 6. *Step.* We show that the conditions hold for \(\Phi _{n+1}\), under the assumption that they hold for \(\Phi _n\). We want to apply Lemma 6 to \(f = a \vee b\) where \(a = \phi _{n+1}\) and \(b = \bigvee _{i=1}^{n} \phi _i\). The statistical independence of \(\bot \), *a* and *b* from *g* is guaranteed by, respectively, \(\emptyset \in \mathcal {P}(\Phi _{n+1})\), \(\{\phi _{n+1}\} \in \mathcal {P}(\Phi _{n+1})\), and \(\mathcal {P}(\Phi _n) \subset \mathcal {P}(\Phi _{n+1})\) together with the induction hypothesis for \(\Phi _n\). Lastly, if *g* is balanced and statistically independent of \(\phi _{n+1}\), as demonstrated above, then \(a \oplus b\) is statistically independent of *g* if and only if *b* is statistically independent of \(a \oplus g\). This is guaranteed by \(\left\{ \Psi \cup \{\phi _{n+1}\} \mid \Psi \in \mathcal {P}(\Phi _n)\right\} \subset \mathcal {P}(\Phi _{n+1})\) and the induction hypothesis. Therefore, if \(f' = \bigoplus _{\psi \in \Psi } \psi \) is statistically independent of *g* for all \(\Psi \in \mathcal {P}(\Phi _{n+1})\), then, according to Lemma 6 and the previous observations, \(f = \bigvee _{i=1}^{n+1} \phi _i\) is also statistically independent of *g*. \(\square \)
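On a toy instance, Theorem 1's condition can be checked exhaustively. The sketch below uses hypothetical formulas and relies on the Lemma 2 shortcut: for a balanced *g*, statistical independence of *f* and *g* is equivalent to \(f \oplus g\) being balanced.

```python
from itertools import product, combinations

def count(pred, n):
    return sum(1 for bits in product([False, True], repeat=n) if pred(bits))

def indep(f, g, n):
    """Independence via Lemma 2: for a balanced g, f is statistically
    independent of g iff f xor g is balanced."""
    return count(lambda b: f(b) != g(b), n) == 2 ** (n - 1)

# Hypothetical fault-detection lines over b = (s, x, y):
g   = lambda b: b[0]                       # a balanced secret
phi = [lambda b: b[0] ^ b[1],
       lambda b: b[0] ^ b[2],
       lambda b: b[1] ^ b[2]]

def xor_of(subset):
    return lambda b: sum(p(b) for p in subset) % 2 == 1

# Theorem 1's condition: every Xor over the power set is independent of g...
cond = all(indep(xor_of(sub), g, 3)
           for r in range(len(phi) + 1)
           for sub in combinations(phi, r))
# ...which implies that the disjunction is independent of g as well.
assert cond and indep(lambda b: any(p(b) for p in phi), g, 3)
```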

Theorem 1 suggests that if we prove that all linear combinations of the error lines \(\delta _i\) are statistically independent of a secret *s*, then we have indirectly shown that their disjunction \(\delta \) is also statistically independent of *s*. Additionally, the condition of Theorem 1 can be simplified further because some of the linear combinations produced by \(\Psi \in \mathcal {P}(\Phi )\) may be equivalent. Instead of considering all of \(\Phi \), it suffices to consider a maximal linearly independent subset of \(\Phi \).

### Lemma 7

Let \(\Phi \) and *g* be as in Theorem 1. Let \(\Phi ' \subseteq \Phi \) be a linearly independent subset of \(\Phi \), *i.e.*, \(\forall \phi \in \Phi '.\ \forall \Psi \subseteq \Phi ' \setminus \{\phi \}.\ \phi \ne \bigoplus _{\psi \in \Psi } \psi \), and let \(\Phi '\) be maximal, *i.e.*, \(\forall \phi \in \Phi \setminus \Phi '.\ \exists \Psi \subseteq \Phi '.\ \phi = \bigoplus _{\psi \in \Psi } \psi \). If for all \(\Psi \subseteq \Phi '\), \(\bigoplus _{\psi \in \Psi } \psi \) is statistically independent of *g*, then the same holds for \(\Psi \subseteq \Phi \).

### Proof

The stated implication relies on the maximality of the linearly independent subset \(\Phi '\). In vector-space terms, \(\Phi '\) is a basis of \(\Phi \): every element of \(\Phi \) can be represented as an Xor of a subset of \(\Phi '\), *i.e.*, \(\forall \phi \in \Phi .\ \exists \Psi \subseteq \Phi '.\ \phi = \bigoplus _{\psi \in \Psi } \psi \). Hence, for each \(\Psi \subseteq \Phi \), we can construct \(\Psi ' \subseteq \Phi '\) such that \(\bigoplus _{\psi \in \Psi } \psi = \bigoplus _{\psi ' \in \Psi '} \psi '\) by replacing every \(\phi \in \Psi \) with its representation over \(\Phi '\) and canceling duplicated elements. \(\square \)

As stated in Lemma 7, instead of considering all linear combinations in \(\Phi \), it is sufficient to consider only linear combinations of its maximal linearly independent subset \(\Phi '\) when applying Theorem 1. In many cases, this substantially reduces the number of checks our verification method performs.
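For small variable counts, a maximal linearly independent subset as in Lemma 7 can be computed by treating each formula's truth table as a vector over GF(2) and performing Gaussian elimination. The sketch below is a brute-force illustration with hypothetical formulas; a tool like *Danira* would work symbolically instead.

```python
from itertools import product

def truth_table(f, n):
    """Truth table of f packed into an integer bit mask (bit i is the
    value of f on the i-th assignment)."""
    tt = 0
    for i, bits in enumerate(product([False, True], repeat=n)):
        if f(bits):
            tt |= 1 << i
    return tt

def independent_subset(formulas, n):
    """Gaussian elimination over GF(2): a formula is dropped iff its
    truth table is an Xor of the truth tables of formulas kept so far."""
    pivots = {}                      # highest set bit -> reduced vector
    kept = []
    for phi in formulas:
        v = truth_table(phi, n)
        while v:
            hb = v.bit_length() - 1
            if hb not in pivots:     # new pivot: phi is independent
                pivots[hb] = v
                kept.append(phi)
                break
            v ^= pivots[hb]          # eliminate the leading bit
        # v reduced to 0: phi is a linear combination of kept formulas
    return kept

# Hypothetical fault-detection lines where phi2 = phi0 xor phi1:
phi0 = lambda b: b[0]
phi1 = lambda b: b[1]
phi2 = lambda b: b[0] ^ b[1]
assert independent_subset([phi0, phi1, phi2], 2) == [phi0, phi1]
```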

### Approximating independence

Theorem 1, together with the optimized condition from Lemma 7, is powerful enough to show that, given the mentioned conditions for \(\delta _i\), \(\delta \) is statistically independent of a secret *s*. The statistical independence of the linear combinations of \(\delta _i\) can be shown using the *incompleteness* and *hiding* properties discussed in Sects. 3.2 and 3.3. However, issuing the exponentially many satisfiability queries required by Theorem 1 is still undesirable. Therefore, we introduce an over-approximation which only calls the SAT solver to perform factorization and functional dependency tests for each relevant \(\delta _i\) with all variables in \({\text {Var}}(\delta _i)\). We then use the gathered data to over-approximate the incompleteness and hiding properties for all linear combinations of \(\delta _i\).

In general, a Boolean formula *f* can be rewritten as an equivalent formula \(f = g \oplus h\). Here \(g = \bigoplus _{x \in X} x\) is the linear sub-formula, where \(X \subseteq {\text {Var}}(f)\) is a set of variable symbols for which Lemma 5 applies, *i.e.*, \(f\left[ x \leftarrow \bot \right] \oplus f\left[ x \leftarrow \top \right] = \top \). Consequently, *h* is the remaining sub-formula of *f*, *i.e.*, \(h = f\left[ \alpha \right] \) where \(\alpha : X \mapsto \bot \) assigns \(\bot \) to all variables in *X*. Henceforth, we write \(\mathcal {C}(f)\) to denote the maximal set of variables that can be factorized out of *f* via Lemma 5, *i.e.*, \(\mathcal {C}(f) = \left\{ x \mid x \in {\text {Var}}(f), f\left[ x \leftarrow \bot \right] \oplus f\left[ x \leftarrow \top \right] = \top \right\} \). Furthermore, call \(f = {f}^{{\text {lin}}} \oplus {f}^{{\text {nl}}}\) the maximal factorization, where \({f}^{{\text {lin}}} = \bigoplus _{x \in \mathcal {C}(f)} x\), \({f}^{{\text {nl}}} = f\left[ \alpha \right] \) and \(\alpha : \mathcal {C}(f) \mapsto \bot \). Knowing both \(\mathcal {C}(f)\) and the set of functional dependencies \(\mathcal {D}(f)\) allows us to perform easy hiding and incompleteness checks for *f* against some linear formula \(f'\). Additionally, \(\mathcal {C}(\cdot )\) and \(\mathcal {D}(\cdot )\) allow us to approximate the maximal factorization for linear combinations \(f = \bigoplus _{i = 1}^n \phi _i\), where the \(\phi _i\) themselves are also formulas.
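Both \(\mathcal {C}(f)\) and \(\mathcal {D}(f)\) can be computed by brute force on small formulas (whereas *Danira* queries a SAT solver). A sketch with a hypothetical formula \(f = x \oplus (y \wedge z)\):

```python
from itertools import product

def factorizable(f, n):
    """C(f) from Lemma 5: variables x with f[x<-0] xor f[x<-1] == T,
    i.e., flipping x always flips f, so x can be factored out."""
    return {x for x in range(n)
            if all(f(b[:x] + (False,) + b[x+1:]) !=
                   f(b[:x] + (True,) + b[x+1:])
                   for b in product([False, True], repeat=n))}

def dependencies(f, n):
    """D(f): variables f functionally depends on (flipping x can flip f)."""
    return {x for x in range(n)
            if any(f(b[:x] + (False,) + b[x+1:]) !=
                   f(b[:x] + (True,) + b[x+1:])
                   for b in product([False, True], repeat=n))}

# f = x xor (y and z): x factors out linearly; y and z are essential but
# cannot be factored out, so f_lin = x and f_nl = y and z.
f = lambda b: b[0] ^ (b[1] and b[2])
assert factorizable(f, 3) == {0}
assert dependencies(f, 3) == {0, 1, 2}
```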

### Lemma 8

Let \(f = \bigoplus _{i = 1}^n \phi _i\) be a formula with sub-formulas \(\phi _i\). The variable set \(\widehat{\mathcal {C}}(f) = {\bigtriangleup _{i=1}^n \mathcal {C}(\phi _i)} \setminus \bigcup _{i=1}^n \mathcal {D}({\phi _i}^{{\text {nl}}})\) is an under-approximation of \(\mathcal {C}(f)\). Similarly, the set \(\widehat{\mathcal {D}}(f) = {\bigtriangleup _{i=1}^n \mathcal {C}(\phi _i)} \cup \bigcup _{i=1}^n \mathcal {D}({\phi _i}^{{\text {nl}}})\) is an over-approximation of \(\mathcal {D}(f)\). Here, the operator \(\triangle \) is the symmetric difference of two sets, *i.e.*, \(A \triangle B = (A \cup B) \setminus (A \cap B)\), and \(\bigtriangleup _{i=1}^n\) applies it *n* times starting with neutral element \(\emptyset \).

### Proof

We rearrange the linear and nonlinear parts of \(\phi _i\) into the formulas \(g = \bigoplus _{i = 1}^{n} {\phi _i}^{{\text {lin}}}\) and \(h = \bigoplus _{i = 1}^n {\phi _i}^{{\text {nl}}}\), with \(f = g \oplus h\). The function *g* is linear by construction, and because of the properties of Xor, only variables that appear in an odd number of sets \(\mathcal {C}({\phi _i}^{{\text {lin}}}) = \mathcal {C}(\phi _i)\) are part of \(\mathcal {C}(g)\), and thus \(\mathcal {C}(g) = {\bigtriangleup _{i=1}^n \mathcal {C}(\phi _i)}\). For the function *h*, we assume that no functional dependencies of any \({\phi _i}^{{\text {nl}}}\) are lost, and thus get an over-approximation \(\mathcal {D}(h) \subseteq X = \bigcup _{i=1}^n \mathcal {D}({\phi _i}^{{\text {nl}}})\). It follows that \(\widehat{\mathcal {C}}(f) = \mathcal {C}(g) \setminus X \subseteq \mathcal {C}(f)\), as \(\widehat{\mathcal {C}}(f)\) only contains variables that are factorizable in *g* and are not a functional dependency of *h*. Along the same lines, we get \(\mathcal {D}(f) \subseteq \widehat{\mathcal {D}}(f) = \mathcal {C}(g) \cup X\) because it contains all variables except those that are essential in neither *g* nor *h*. \(\square \)

These two approximations are much easier to compute than the real variable sets \(\mathcal {C}(\delta )\) and \(\mathcal {D}(\delta )\). Concretely, we first compute \(\mathcal {D}(\delta _i)\) and \(\mathcal {C}(\delta _i)\) for each of the fault-detection values \(\delta _i\) using a SAT solver. Afterward, when checking all their linear combinations, we only use the fast set operations from Lemma 8. Since \(\widehat{\mathcal {D}}(\cdot )\) is an over-approximation, it must contain all functional dependencies and possibly some spurious ones. If we show the incompleteness of a secret *s* with \(\widehat{\mathcal {D}}(\cdot )\), we would have gotten the same result with \(\mathcal {D}(\cdot )\). Similarly, \(\widehat{\mathcal {C}}(\cdot )\) contains a subset of the variables that can be factorized out of the formula. It still yields a valid factorization, although it is not guaranteed to be maximal like \(\mathcal {C}(\cdot )\). Therefore, if we show that a secret is hidden by some uniformly random variable using \(\widehat{\mathcal {C}}(\cdot )\), it is guaranteed to be hidden. However, as shown in Example 5, there are cases where \(\widehat{\mathcal {C}}(\cdot )\) and \(\mathcal {C}(\cdot )\) are not equivalent, and verifying the hiding property might fail as a result.

### Example 5

Let \(\delta _0 = \lnot x \wedge y\) and \(\delta _1 = x \wedge \lnot y\) be the values of error-detection lines. By the definitions of \(\mathcal {C}(\cdot )\) and \(\mathcal {D}(\cdot )\), we get \(\mathcal {C}(\delta _0) = \mathcal {C}(\delta _1) = \emptyset \) and \(\mathcal {D}(\delta _0) = \mathcal {D}(\delta _1) = \left\{ x, y\right\} \). In this case, the approximation \(\widehat{\mathcal {C}}(\delta _0 \oplus \delta _1)\) is not tight, since \(\widehat{\mathcal {C}}(\delta _0 \oplus \delta _1) = \emptyset \subsetneq \left\{ x, y\right\} = \mathcal {C}(\delta _0 \oplus \delta _1)\).
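The set computations of Lemma 8 are cheap enough to run for every linear combination. A minimal sketch, reusing the variable names of Example 5:

```python
from functools import reduce

def approx(C_sets, D_nl_sets):
    """(C_hat, D_hat) for f = phi_1 xor ... xor phi_n, given C(phi_i)
    and D(phi_i^nl) for every sub-formula -- pure set arithmetic,
    no SAT calls needed."""
    sym = reduce(lambda a, b: a ^ b, C_sets, set())  # symmetric difference
    dep = set().union(*D_nl_sets)
    return sym - dep, sym | dep

# Example 5: delta_0 = (not x) and y, delta_1 = x and (not y).
C0, C1 = set(), set()              # nothing factors out of either line
D0, D1 = {'x', 'y'}, {'x', 'y'}    # both depend on x and y
C_hat, D_hat = approx([C0, C1], [D0, D1])
assert C_hat == set()              # not tight: C(d0 xor d1) = {x, y}
assert D_hat == {'x', 'y'}
```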

### Verification algorithm

In this section, we summarize how the verification algorithm works. In particular, we focus on the order of checks performed by the algorithm and show how they correspond to the previous exposition. As described in Sect. 3.1, the attacker can introduce a fault in any sub-formula \(\phi \) of \(\delta \). The verification method summarized in Algorithm 1 is given the faulted \(\delta \) and its sub-formulas \(\delta _i\), the set of masks \(M\), and the set of formulas \(S\) representing each secret as a linear combination of its shares. The algorithm considers only one fault at a time, and our tool *Danira* runs it separately for each possible fault location.

First, the algorithm computes the set *K* of complete secrets, *i.e.*, secrets for which \(\delta \) functionally depends on all its shares. Simultaneously, the algorithm computes the set *R* of uniformly random values that contains all masks \(M\) and shares of incomplete secrets \(s \notin K\). In the rest of the algorithm, only values in *R* can hide secrets. If there are no complete secrets in *K* or a uniformly random variable from *R* can be factorized out of \(\delta \) and hides all secrets in *K*, we know that \(\delta \) is statistically independent of the secrets \(S\).

Next, the algorithm computes a maximal linearly independent subset *G* of the fault-detection values \(\delta _i\). As discussed in Lemma 7, it is sufficient to apply Theorem 1 to this subset when proving statistical independence. The algorithm computes the approximations \(\widehat{\mathcal {D}}(\phi )\) and \(\widehat{\mathcal {C}}(\phi )\) for all possible linear combinations \(\phi \) from *G*. It uses the approximations to check whether any of the secrets in *K* are complete in \(\widehat{\mathcal {D}}(\phi )\), and if they are, whether any of the random values from *R* appear in \(\widehat{\mathcal {C}}(\phi )\) and hide them. If statistical independence of the secrets can be shown for every \(\phi \), Algorithm 1 declares the computation secure for the given fault.
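The check order described above can be condensed into a short sketch. The interface below is hypothetical (it is not *Danira*'s API): the dependency and factorization sets are assumed to be precomputed, and the hiding check is simplified to membership of a random value in the factorization set.

```python
def check_fault(D_delta, C_delta, shares, masks, combos):
    """shares: secret name -> set of its share names; combos: pairs
    (D_hat, C_hat), one per linear combination of the basis G.
    Returns True if the fault cannot be exploited via SIFA."""
    # Step 1: complete secrets -- delta depends on all of their shares.
    K = {s for s, sh in shares.items() if sh <= D_delta}
    # Step 2: potential hiders -- masks and shares of incomplete secrets.
    R = set(masks).union(*(sh for s, sh in shares.items() if s not in K))
    if not K or (C_delta & R):
        return True        # no complete secret, or delta hidden outright
    # Step 3: every linear combination must be incomplete or hidden.
    return all(not any(shares[s] <= D_hat for s in K) or bool(C_hat & R)
               for D_hat, C_hat in combos)

shares, masks = {'c': {'c0', 'c1'}}, {'m'}
# Delta depends on both shares, but a mask factors out: secure.
assert check_fault({'c0', 'c1', 'm'}, {'m'}, shares, masks, [])
# A combination stays complete and nothing hides it: potentially insecure.
assert not check_fault({'c0', 'c1'}, set(), shares, masks,
                       [({'c0', 'c1'}, set())])
```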

### Theorem 2

Algorithm 1 is sound: if it returns **secure**, the analyzed fault in the attack model from Sect. 3.1 is not exploitable via SIFA. \(\square \)

Algorithm 1 uses a SAT solver to compute functional dependencies and factorize formulas. Therefore, instead of a traditional runtime analysis, we give bounds on the number of such calls. Computing \(\mathcal {D}(\delta )\) and \(\mathcal {C}(\delta )\) requires \(\Omega \left( |{\text {Var}}(\delta )|\right) \) calls and is the lower bound for the whole algorithm. Constructing the basis *G* requires *O*(*n*) calls where *n* is the number of fault-detection sub-formulas \(\delta _i\). Functional dependencies and factorizations are computed for each \(g \in G\), requiring up to \(O\left( |G|\cdot |{\text {Var}}(\delta )|\right) \) calls, making the upper bound \(O\left( |G|\cdot |{\text {Var}}(\delta )|+ n\right) \).

## Case studies

We evaluate our new verification approach on the secured implementations presented by Daemen et al. [18]. *Danira* takes the netlist of a combinational circuit as input. It interprets the inputs as variables and the intermediate computations as Boolean formulas. From a theoretical standpoint, because we only consider computation outputs, it does not matter whether the analyzed circuit has state elements, *i.e.*, registers.

In the rest of this section, we consider the SIFA-resistant masked implementations of Keccak permutations, all classes of quadratic 4-bit S-Boxes, and an AES S-Box [18]. We argue that without a sophisticated verification method, it is extremely easy to introduce bugs that produce correct computations but break the theoretical SIFA-resistance guarantees. Finally, we summarize the performance of *Danira* on several versions of the same designs.

### Masked Keccak

The Keccak permutations \(\chi _3\) and \(\chi _5\) are simple permutation circuits with, respectively, three and five inputs. They, along with their affine-equivalent counterparts, are used in many lightweight ciphers, most notably in Ascon [29].

The first-order masked implementation proposed by Daemen et al. [18] splits the inputs and all intermediate signals into two shares. It requires uniformly random inputs for re-sharing, and uses the output signals for fault detection. Each gate in the resulting design is a possible fault location according to our attack model in Sect. 3.1. Our verification method goes through each of the fault locations, introduces a bit-flip, generates the fault-detection formulas, and runs Algorithm 1 to see if the considered fault could leak the secrets.

We implemented a netlist for the masked circuits manually, and *Danira* was able to verify that both \(\chi _3\) and \(\chi _5\) were indeed SIFA resistant. However, when we synthesized an equivalent RTL design with Yosys [30], *Danira* reported that it could not prove SIFA resistance. The synthesizer noticed that one of the input shares was negated in two separate places and de-duplicated this computation. Although this optimizes the circuit, it breaks the SIFA resistance: a fault at the new location in the synthesized design corresponds to two faults in the hand-written design. As a result, \(\delta \) becomes statistically dependent on the secret *c*, which the attacker can exploit. This demonstrates that *(1)* an analysis on the gate level is unavoidable and *(2)* protected circuits must be implemented manually at that level, as synthesis tools or compilers can break SIFA resistance while maintaining functional correctness.

### Masked AES S-Box

Compared to the Keccak permutations, the AES S-Box is a significantly more complex circuit of high polynomial degree. The authors of [18] propose a high-level sketch of a SIFA-resistant masked AES S-Box. There are many ways to implement this high-level description, and achieving SIFA resistance is not trivial. After several failed attempts, we managed to implement a protected version of the proposed AES S-Box with the help of our new verification tool. We are convinced that correctly protecting a circuit as large as an AES S-Box is infeasible without the help of an automated verification method such as *Danira*.

### Performance evaluation

This section gives a breakdown of *Danira*’s performance on correctly (and incorrectly) protected implementations. As a point of comparison, we have extended the existing masking verifier SILVER [25] with the construction from Fig. 1, making it able to check SIFA resistance. We performed all experiments on a notebook with an eight-core *Intel i7-8550U 1.8GHz* CPU and *16* GiB of memory.

As shown in Table 1, *Danira* instantly verified (or falsified) all tested Keccak \(\chi _3\), \(\chi _5\), and quadratic 4-bit S-Box designs. We also demonstrate that for Keccak \(\chi _3\) and the AES S-Box, even a single re-used gate leads to vulnerabilities. *Danira* verifies the SIFA resistance of our AES S-Box implementation in about three minutes, performing significantly better than our modified SILVER implementation. Although this comparison shows *Danira*'s potential, our extension of SILVER is not heavily optimized and could likely be improved further by its authors.

In summary, the results of our experiments in Table 1 indicate that: (1) the over-approximation we introduce in this paper is strong enough to prove SIFA resistance for secure designs, and (2) our verification method applied by *Danira* is fast enough for complex masked implementations.

## Conclusion

Protecting masked implementations against SIFA is not straightforward. Designers can make mistakes when implementing a specification that is supposed to be secure. Additionally, compilers and synthesis tools can introduce simplifications that break the SIFA-resistance guarantees. *Danira* solves these problems using simple yet effective properties of redundant masked implementations to show whether they are SIFA resistant. As demonstrated by our case studies, *Danira* is able to verify designs that may be used in actual embedded systems. In cases where *Danira* cannot prove the security of a design, it gives a developer detailed debugging information about a problematic fault location.

## References

Boneh D, DeMillo RA, Lipton RJ (1997) On the importance of checking cryptographic protocols for faults. In: EUROCRYPT

Biham E, Shamir A (1997) Differential fault analysis of secret key cryptosystems. In: CRYPTO

Kocher PC, Jaffe J, Jun B (1999) Differential power analysis. In: CRYPTO

Quisquater J-J, Samyde D (2001) Electromagnetic analysis (EMA): measures and counter-measures for smart cards. In: E-smart

Ishai Y, Sahai A, Wagner DA (2003) Private circuits: securing hardware against probing attacks. In: CRYPTO

Groß H, Iusupov R, Bloem R (2018) Generic low-latency masking in hardware. TCHES

Groß H, Mangard S (2017) Reconciling d+1 masking in hardware and software. In: CHES

Barthe G, Dupressoir F, Faust S, Grégoire B, Standaert F-X, Strub P-Y (2017) Parallel implementations of masking schemes and the bounded moment leakage model. In: EUROCRYPT

Faust S, Grosso V, Pozo SMD, Paglialonga C, Standaert F-X (2018) Composable masking schemes in the presence of physical defaults & the robust probing model. TCHES

Bar-El H, Choukri H, Naccache D, Tunstall M, Whelan C (2006) The sorcerer’s apprentice guide to fault attacks. Proc IEEE 94(2):586

Hutter M, Schmidt J-M (2013) The temperature side channel and heating fault attacks. In: CARDIS

Fuhr T, Jaulmes E, Lomné V, Thillard A (2013) Fault attacks on AES with faulty ciphertexts only. In: FDTC

Dobraunig C, Eichlseder M, Korak T, Mangard S, Mendel F, Primas R (2018) SIFA: exploiting ineffective fault inductions on symmetric cryptography. TCHES

Dobraunig C, Eichlseder M, Groß H, Mangard S, Mendel F, Primas R (2018) Statistical ineffective fault attacks on masked AES with fault countermeasures. In: ASIACRYPT

Dobraunig C, Mangard S, Mendel F, Primas R (2018) Fault attacks on nonce-based authenticated encryption: application to Keyak and Ketje. In: SAC

Dhooghe S, Nikova S (2020) My gadget just cares for me— how NINA can prove security against combined attacks. In: CT-RSA

Saha S, Jap D, Roy DB, Chakraborty A, Bhasin S, Mukhopadhyay D (2020) A framework to counter statistical ineffective fault analysis of block ciphers using domain transformation and error correction. TIFS

Daemen J, Dobraunig C, Eichlseder M, Groß H, Mendel F, Primas R (2020) Protecting against statistical ineffective fault attacks. TCHES

Arribas V, Nikova S, Rijmen V (2018) VerMI: verification tool for masked implementations. In: ICECS

Barthe G, Belaïd S, Cassiers G, Fouque PA, Grégoire B, Standaert F-X (2019) maskVerif: automated verification of higher-order masking in presence of physical defaults. In: ESORICS

Bloem R, Groß H, Iusupov R, Könighofer B, Mangard S, Winter J (2018) Formal verification of masked hardware implementations in the presence of glitches. In: EUROCRYPT

Gigerl B, Hadzic V, Primas R, Mangard S, Bloem R (2021) Coco: co-design and co-verification of masked software implementations on CPUs. In: USENIX

Gao P, Xie H, Zhang J, Song F, Chen T (2019) Quantitative verification of masked arithmetic programs against side-channel attacks. In: TACAS

Gao P, Zhang J, Song F, Wang C (2019) Verifying and quantifying side-channel resistance of masked software implementations. TOSEM 28(3):478

Knichel D, Sasdrich P, Moradi A (2020) SILVER—statistical independence and leakage verification. In: ASIACRYPT

Arribas V, Wegener F, Moradi A, Nikova S (2019) Cryptographic fault diagnosis using VerFI. IACR Cryptol ePrint Arch

Ramezanpour K, Ampadu P, Diehl W (2019) A statistical fault analysis methodology for the Ascon authenticated cipher. In: HOST

Barthe G, Belaïd S, Dupressoir F, Fouque P, Grégoire B, Strub P (2015) Verified proofs of higher-order masking. In: EUROCRYPT

Dobraunig C, Eichlseder M, Mendel F, Schläffer M (2021) Ascon v1.2: lightweight authenticated encryption and hashing. J Cryptol

Wolf C (2022) Yosys open SYnthesis suite. http://bygone.clairexen.net/yosys/. Accessed 25 Jan 2022

## Acknowledgements

This work was supported by the *Austrian Research Promotion Agency (FFG)* via the FERMION project (Grant Number 867542, ICT of the Future), and the K-project DeSSnet (funded in the context of COMET).

## Funding

Open access funding provided by Graz University of Technology.



## Rights and permissions

**Open Access** This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

## About this article

### Cite this article

Hadžić, V., Primas, R. & Bloem, R. Proving SIFA protection of masked redundant circuits.
*Innovations Syst Softw Eng* (2022). https://doi.org/10.1007/s11334-022-00448-4


### Keywords

- Masking
- Fault attacks
- SIFA
- Verification