
Model Checking Indistinguishability of Randomized Security Protocols

  • Matthew S. Bauer
  • Rohit Chadha
  • A. Prasad Sistla
  • Mahesh Viswanathan
Open Access
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10982)

Abstract

The design of security protocols is extremely subtle and vulnerable to potentially devastating flaws. As a result, many tools and techniques for the automated verification of protocol designs have been developed. Unfortunately, these tools lack the ability to model and reason about protocols with randomization, which are becoming increasingly prevalent in systems providing privacy and anonymity guarantees. The security guarantees of these systems are often formulated by means of the indistinguishability of two protocols. In this paper, we give the first practical algorithms for model checking indistinguishability properties of randomized security protocols against the powerful threat model of a bounded Dolev-Yao adversary. Our techniques are implemented in the Stochastic Protocol ANalyzer (Span) and evaluated on several examples. As part of our evaluation, we conduct the first automated analysis of an electronic voting protocol based on the 3-ballot design.

1 Introduction

Security protocols are highly intricate and vulnerable to design flaws. This has led to a significant effort in the construction of tools for the automated verification of protocol designs. In order to make automation feasible [8, 12, 15, 23, 34, 48, 55], the analysis is often carried out in the Dolev-Yao threat model [30], where the assumption of perfect cryptography is made. In the Dolev-Yao model, the omnipotent adversary has the ability to read, intercept, modify and replay all messages on public channels, remember the communication history as well as non-deterministically inject its own messages into the network while remaining anonymous. In this model, messages are symbolic terms modulo an equational theory (as opposed to bit-strings) and cryptographic operations are modeled via equations in the theory.

A growing number of security protocols employ randomization to achieve privacy and anonymity guarantees. Randomization is essential in protocols/systems for anonymous communication and web browsing such as Crowds [49], mix-networks [21], onion routers [37] and Tor [29]. It is also used in fair exchange [11, 35], vote privacy in electronic voting [6, 20, 52, 54] and denial of service prevention [40]. In the example below, we demonstrate how randomization is used to achieve privacy in electronic voting systems.

Example 1

Consider a simple electronic voting protocol for two voters, Alice and Bob, two candidates and an election authority. The protocol is as follows. Initially, the election authority generates two private tokens \(t_{A}\) and \(t_B\) and sends them to Alice and Bob encrypted under their respective public keys. These tokens will be used by the voters as proofs of their eligibility. After receiving a token, each voter sends his/her choice, along with the proof of eligibility, to the election authority, encrypted under its public key. Once all votes have been collected, the election authority tosses a fair private coin. The order in which Alice and Bob’s votes are published depends on the result of this coin toss. Vote privacy demands that an adversary not be able to deduce how each voter voted.

All the existing Dolev-Yao analysis tools are fundamentally limited to protocols that are purely non-deterministic, where non-determinism models concurrency as well as the interaction between protocol participants and their environment. There are currently no analysis tools that can faithfully reason about protocols like those in Example 1, a limitation that has long been identified by the verification community. In the context of electronic voting protocols, [28] identifies three main classes of techniques for achieving vote privacy: blind signature schemes, homomorphic encryption and randomization. There the authors concede that protocols based on the latter technique are “hard to address with our methods that are purely non-deterministic.” Catherine Meadows, in her summary of the over 30 year history of formal techniques in cryptographic protocol analysis [46, 47], identified the development of formal analysis techniques for anonymous communication systems, almost exclusively built using primitives with randomization, as a fundamental and still largely unsolved challenge. She writes, “it turned out to be difficult to develop formal models and analyses of large-scale anonymous communication. The main stumbling block is the threat model”.

In this work, we take a major step towards overcoming this long-standing challenge and introduce the first techniques for automated Dolev-Yao analysis of randomized security protocols. In particular, we propose two algorithms for determining indistinguishability of randomized security protocols and implement them in the Stochastic Protocol ANalyzer (Span). Several works [7, 9, 28, 32, 41] have identified indistinguishability as the natural mechanism to model security guarantees such as anonymity, unlinkability, and privacy. Consider the protocol from Example 1, designed to preserve vote privacy. Such a property holds if the executions of the protocol in which Alice votes for candidate 1 and Bob votes for candidate 2 cannot be distinguished from the executions of the protocol in which Alice votes for candidate 2 and Bob votes for candidate 1.

Observe that in Example 1, it is crucial that the result of the election authority’s coin toss is not visible to the adversary. Indeed if the adversary is allowed to “observe” the results of private coin tosses, then the analysis may reveal “security flaws” in correct security protocols (see examples in [13, 17, 19, 22, 36]). Thus, many authors [10, 13, 17, 18, 19, 22, 26, 36] have proposed that randomized protocols be analyzed with respect to adversaries that are forced to schedule the same action in any two protocol executions that are indistinguishable to them.

For randomized security protocols, [10, 18, 53] have proposed that trace equivalence from the applied \(\pi \)-calculus [5] serve as the indistinguishability relation on traces. In this framework, the protocol semantics are described by partially observable Markov decision processes (POMDPs) where the adversary’s actions are modeled non-deterministically. The adversary is required to choose its next action based on the partial information that it can observe about the execution thus far. This allows us to model the privacy of coin tosses. Two security protocols are said to be indistinguishable [18, 53] if their semantic descriptions as POMDPs are indistinguishable. Two POMDPs \(\mathcal {M}\) and \(\mathcal {M}'\) are said to be indistinguishable if for any adversary \(\mathcal {A}\) and trace \(\overline{o}\), the probability of the executions that generate the trace \(\overline{o}\) with respect to \(\mathcal {A}\) are the same for both \(\mathcal {M}\) and \(\mathcal {M}'\).

Our algorithms for indistinguishability in randomized security protocols are built on top of techniques for solving indistinguishability in finite POMDPs. Our first result shows that indistinguishability of finite POMDPs is P-complete. Membership in P is established by a reduction of POMDP indistinguishability to equivalence in probabilistic finite automata (PFAs), which is known to be P-complete [31, 45, 57]. Further, we show that the hardness result continues to hold for acyclic POMDPs. An acyclic POMDP is a POMDP that has a set of “final” absorbing states and the only cycles in the underlying graph are self-loops on these states.

For acyclic finite POMDPs, we present another algorithm for checking indistinguishability based on the technique of translating a POMDP \(\mathcal {M}\) into a fully observable Markov decision process (MDP), known as the belief MDP \(\mathcal {B}(\mathcal {M})\) of \(\mathcal {M}\). It was shown in [14] that two POMDPs are indistinguishable if and only if the belief MDPs they induce are bisimilar as labeled Markov decision processes. When \(\mathcal {M}\) is acyclic and finite, its belief MDP \(\mathcal {B}(\mathcal {M})\) is finite and acyclic and its bisimulation relation can be checked recursively.

Protocols in Span are described by a finite set of roles (agents) that interact asynchronously by passing messages. Each role models an agent in a protocol session and hence we only consider a bounded number of sessions. An action in a role performs a message input, a message output, or a test on messages. The adversary schedules the order in which these actions are executed and generates input recipes comprised of public information and messages previously output by the agents. In general, there are an unbounded number of input recipes available at each input step, resulting in POMDPs that are infinitely branching. Span, however, searches for bounded attacks by bounding the size of attacker messages. Under this assumption, protocols give rise to finite acyclic POMDPs. Even with this assumption, protocols specified in Span describe POMDPs that are exponentially larger than their description. Nevertheless, we show that when considering protocols defined over subterm convergent equational theories, indistinguishability of randomized security protocols is in PSPACE for bounded Dolev-Yao adversaries. We further show that the problem is harder than \(\#\mathsf {SAT_D}\) and hence it is both NP-hard and coNP-hard.

The main engine of Span translates a randomized security protocol into an acyclic finite POMDP by recursively unrolling all protocol executions and grouping states according to those that are indistinguishable. We implemented two algorithms for checking indistinguishability in Span. The first algorithm, called the PFA algorithm, checks indistinguishability of P and \(P'\) by converting them to corresponding PFAs \(\mathsf {A}\) and \(\mathsf {A}'\) as in the proof of decidability of indistinguishability of finite POMDPs. PFA equivalence can then be solved through a reduction to linear programming [31]. The second algorithm, called the on-the-fly (OTF) algorithm, is based on the technique of checking bisimulation of belief MDPs. Although asymptotically less efficient than the PFA algorithm, the recursive procedure for checking bisimulation in belief MDPs can be embedded into the main engine of Span with little overhead, allowing one to analyze indistinguishability on-the-fly as the POMDP models are constructed.

In our evaluation of the indistinguishability algorithms in Span, we conduct the first automated Dolev-Yao analysis for several new classes of security protocols including dining cryptographers networks [38], mix networks [21] and a 3-ballot electronic voting protocol [54]. The analysis of the 3-ballot protocol, in particular, demonstrates that our techniques can push symbolic protocol verification to new frontiers. The protocol is a full-scale, real-world example, which to the best of our knowledge, has not been analyzed using any existing probabilistic model checker or protocol analysis tool.

Summary of Contributions. We show that the problem of checking indistinguishability of POMDPs is P-complete. The indistinguishability problem for bounded instances of randomized security protocols over subterm convergent equational theories (bounded number of sessions and bounded adversarial non-determinism) is shown to be in PSPACE and \(\#\mathsf {SAT_D}\)-hard. We propose and implement two algorithms in the Span protocol analysis tool for deciding indistinguishability in bounded instances of randomized security protocols and compare their performance on several examples. Using Span, we conduct the first automated verification of a 3-ballot electronic voting protocol.

Related Work. As alluded to above, techniques for analyzing security protocols have remained largely disjoint from techniques for analyzing systems with randomization. Using probabilistic model checkers such as PRISM [44], STORM [27] and APEX [42], some works have attempted to verify protocols that explicitly employ randomization [56]. These ad-hoc techniques fail to capture powerful threat models, such as a Dolev-Yao adversary, and do not provide a general verification framework. Other works in the Dolev-Yao framework [28, 43] simply abstract away essential protocol components that utilize randomization, such as anonymous channels. The first formal framework combining Dolev-Yao analysis with randomization appeared in [10], where the authors studied the conditions under which security properties of randomized protocols are preserved by protocol composition. In [53], the results were extended to indistinguishability.

Complexity-theoretic results on verifying secrecy and indistinguishability properties of bounded sessions of randomized security protocols against unbounded Dolev-Yao adversaries were studied in [18]. There the authors considered protocols with a fixed equational theory and no negative tests (else branches). Both secrecy and indistinguishability were shown to be in coNEXPTIME, with secrecy being coNEXPTIME-hard. The analogous problems for purely non-deterministic protocols are known to be coNP-complete [25, 33, 51]. When one fixes, a priori, the number of coin tosses, secrecy and indistinguishability in randomized protocols again become coNP-complete. In our asymptotic complexity results and in the Span tool, we consider a general class of equational theories and protocols that allow negative tests.

2 Preliminaries

We assume that the reader is familiar with probability distributions. For a set X, \(\mathsf {Dist}(X)\) shall denote the set of all discrete distributions \(\mu \) on X such that \(\mu (x)\) is a rational number for each \(x\in X\). For \(x\in X\), \(\delta _{x}\) will denote the Dirac distribution, i.e., the measure \(\mu \) such that \(\mu (x)=1\). The support of a discrete distribution \(\mu \), denoted \(\mathsf {support}(\mu )\), is the set of all elements x such that \(\mu (x)\ne 0\).

Markov Decision Processes (MDPs). MDPs are used to model processes that exhibit both probabilistic and non-deterministic behavior. An MDP \(\mathcal {M}\) is a tuple \(( Z, z_s, \mathsf {Act}, \varDelta )\) where Z is a countable set of states, \(z_s \in Z\) is the initial state, \(\mathsf {Act}\) is a countable set of actions and \(\varDelta : Z \times \mathsf {Act}\rightarrow \mathsf {Dist}(Z)\) is the probabilistic transition function. \(\mathcal {M}\) is said to be finite if the sets Z and \(\mathsf {Act}\) are finite. An execution of an MDP is a sequence \(\rho = z_0 \xrightarrow {\alpha _1} z_1 \xrightarrow {\alpha _2} \cdots \xrightarrow {\alpha _m} z_m \) such that \(z_0 = z_s\) and \(z_{i+1} \in \mathsf {support}(\varDelta (z_i, \alpha _{i+1}))\) for all \(i \in \{0, \ldots , m{-}1\}\). The measure of \(\rho \), denoted \(\mathsf {prob}_{\mathcal {M}}(\rho )\), is \(\prod _{i=0}^{m-1} \varDelta (z_i, \alpha _{i+1})(z_{i+1})\). For the execution \(\rho \), we write \(\mathsf {last}(\rho ) = z_m\) and say that the length of \(\rho \), denoted \(|\rho |\), is m. The set of all executions of \(\mathcal {M}\) is denoted as \(\mathsf {Exec}(\mathcal {M})\).
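The measure of an execution is just the product of its step probabilities along the path. A minimal sketch (the dictionary encoding of \(\varDelta \) and the state names are ours, for illustration only):

```python
from fractions import Fraction

# delta[state][action] maps successor states to (rational) probabilities.
def prob_of_execution(delta, execution):
    """Measure of an execution z0 -a1-> z1 ... -am-> zm,
    given as (z0, [(a1, z1), (a2, z2), ...])."""
    z, steps = execution
    p = Fraction(1)
    for action, z_next in steps:
        p *= delta[z][action].get(z_next, Fraction(0))  # Delta(z_i, a_{i+1})(z_{i+1})
        z = z_next
    return p

# Toy MDP: from s, action 'a' moves to t or u with probability 1/2 each;
# t and u are absorbing.
delta = {
    's': {'a': {'t': Fraction(1, 2), 'u': Fraction(1, 2)}},
    't': {'a': {'t': Fraction(1)}},
    'u': {'a': {'u': Fraction(1)}},
}
print(prob_of_execution(delta, ('s', [('a', 't'), ('a', 't')])))  # 1/2
```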

Partially Observable Markov Decision Processes (POMDPs). A POMDP \(\mathcal {M}\) is a tuple \((Z, z_s, \mathsf {Act}, \varDelta , \mathcal {O}, \mathsf {obs})\) where \(\mathcal {M}_0 = (Z, z_s, \mathsf {Act}, \varDelta )\) is an MDP, \(\mathcal {O}\) is a countable set of observations and \(\mathsf {obs}: Z\rightarrow \mathcal {O}\) is a labeling of states with observations. \(\mathcal {M}\) is said to be finite if \(\mathcal {M}_0\) is finite. The set of executions of \(\mathcal {M}_0\) is taken to be the set of executions of \(\mathcal {M}\), i.e., we define \(\mathsf {Exec}(\mathcal {M})\) as the set \(\mathsf {Exec}(\mathcal {M}_0)\). Given an execution \(\rho = z_0 \xrightarrow {\alpha _1} z_1 \xrightarrow {\alpha _2} \cdots \xrightarrow {\alpha _m} z_m\) of \(\mathcal {M}\), the trace of \(\rho \) is \(\mathsf {tr}(\rho ) = \mathsf {obs}(z_0) \alpha _1 \mathsf {obs}(z_1) \alpha _2 \cdots \alpha _m \mathsf {obs}(z_m)\). For a POMDP \(\mathcal {M}\) and a sequence \(\overline{o}\in \mathcal {O}\cdot (\mathsf {Act}\cdot \mathcal {O})^{*}\), the probability of \(\overline{o}\) in \(\mathcal {M}\), written \(\mathsf {prob}_\mathcal {M}(\overline{o})\), is the sum of the measures of executions in \(\mathsf {Exec}(\mathcal {M})\) with trace \(\overline{o}\). Given two POMDPs \(\mathcal {M}_0\) and \(\mathcal {M}_1\) with the same set of actions \(\mathsf {Act}\) and the same set of observations \(\mathcal {O}\), we say that \(\mathcal {M}_0\) and \(\mathcal {M}_1\) are distinguishable if there exists \(\overline{o}\in \mathcal {O}\cdot (\mathsf {Act}\cdot \mathcal {O})^{*}\) such that \(\mathsf {prob}_{\mathcal {M}_0}(\overline{o}) \ne \mathsf {prob}_{\mathcal {M}_1}(\overline{o})\). If \(\mathcal {M}_0\) and \(\mathcal {M}_1\) cannot be distinguished, they are said to be indistinguishable. We write \(\mathcal {M}_0 \approx \mathcal {M}_1\) if \(\mathcal {M}_0\) and \(\mathcal {M}_1\) are indistinguishable. 
As is the case in [18, 53], indistinguishability can also be defined through a notion of an adversary. Our formulation is equivalent, even when the adversary is allowed to toss coins [18].

Probabilistic Finite Automata (PFAs). A PFA is like a finite-state deterministic automaton except that the transition function from a state on a given input is described as a probability distribution. Formally, a PFA \(\mathsf {A}\) is a tuple \((Q, \varSigma , q_s, \varDelta , F)\) where \(Q\) is a finite set of states, \(\varSigma \) is a finite input alphabet, \(q_s \in Q\) is the initial state, \(\varDelta : Q\times \varSigma \rightarrow \mathsf {Dist}(Q)\) is the transition relation and \(F\subseteq Q\) is the set of accepting states. A run \(\rho \) of \(\mathsf {A}\) on an input word \(u = a_1a_2\cdots a_m \in \varSigma ^*\) is a sequence \( q_0q_1\cdots q_{m} \in Q^*\) such that \(q_0 = q_s\) and \(q_i \in \mathsf {support}(\varDelta (q_{i-1}, a_i))\) for each \(1\le i \le m\). For the run \(\rho \) on word \(u\), its measure, denoted \(\mathsf {prob}_{\mathsf {A},u}(\rho )\), is \(\prod _{i=1}^{m} \varDelta (q_{i-1}, a_i)(q_i)\). The run \(\rho \) is called accepting if \(q_{m} \in F\). The probability of accepting a word \(u\in \varSigma ^*\), written \(\mathsf {prob}_\mathsf {A}(u)\), is the sum of the measures of the accepting runs on \(u\). Two PFAs \(\mathsf {A}_0\) and \(\mathsf {A}_1\) with the same input alphabet \(\varSigma \) are said to be equivalent, denoted \(\mathsf {A}_0 \equiv \mathsf {A}_1\), if \(\mathsf {prob}_{\mathsf {A}_0}(u) = \mathsf {prob}_{\mathsf {A}_1}(u)\) for all input words \(u\in \varSigma ^*\).
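Rather than summing over individual runs, \(\mathsf {prob}_\mathsf {A}(u)\) can be computed by pushing a distribution over states through the transition function one letter at a time and then summing the mass on accepting states. A sketch, under the same ad hoc dictionary encoding as above:

```python
from fractions import Fraction

def prob_accept(delta, q_s, F, word):
    """Acceptance probability of `word`: forward pass of the state
    distribution, then sum over accepting states F."""
    dist = {q_s: Fraction(1)}
    for a in word:
        nxt = {}
        for q, p in dist.items():
            for q2, p2 in delta[q][a].items():
                nxt[q2] = nxt.get(q2, Fraction(0)) + p * p2
        dist = nxt
    return sum(p for q, p in dist.items() if q in F)

# Toy PFA over {a}: q0 moves to q1 (accepting) or q2 with probability 1/2 each.
delta = {
    'q0': {'a': {'q1': Fraction(1, 2), 'q2': Fraction(1, 2)}},
    'q1': {'a': {'q1': Fraction(1)}},
    'q2': {'a': {'q2': Fraction(1)}},
}
print(prob_accept(delta, 'q0', {'q1'}, ['a', 'a']))  # 1/2
```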

3 POMDP Indistinguishability

In this section, we study the underlying semantic objects of randomized security protocols, POMDPs. The techniques we develop for analyzing POMDPs provide the foundation for the indistinguishability algorithms we implement in the Span protocol analysis tool. Our first result shows that indistinguishability of finite POMDPs is decidable in polynomial time by a reduction to PFA equivalence, which is known to be decidable in polynomial time [31, 57].

Proposition 1

Indistinguishability of finite POMDPs is in \(\mathbf {P}\).

Proof

(sketch). Consider two POMDPs \(\mathcal {M}_i =(Z_i, z_s^i, \mathsf {Act}, \varDelta _i, \mathcal {O}, \mathsf {obs}_i)\) for \(i \in \{0,1\}\) with the same set of actions \(\mathsf {Act}\) and the same set of observations \(\mathcal {O}\). We shall construct PFAs \(\mathsf {A}_0\) and \(\mathsf {A}_1\) such that \(\mathcal {M}_0 \approx \mathcal {M}_1\) iff \(\mathsf {A}_0 \equiv \mathsf {A}_1\) as follows. For \(i\in \{ 0,1 \}\), let “\(\mathsf {bad}_i\)” be a new state and define the PFA \(\mathsf {A}_i = (Q_i, \varSigma , q^i_s, {\varDelta '_i}, F_i)\) where \(Q_i = Z_i \cup \{\mathsf {bad}_i\}\), \(\varSigma = \mathsf {Act}\times \mathcal {O}\), \(q^i_s = z^i_s\), \(F_i = Z_i\) and \(\varDelta '_i\) is defined as follows.
$$\begin{aligned} {{\varDelta '_i}(q,(\alpha , o))(q') } = {\left\{ \begin{array}{ll} \varDelta _i(q,\alpha )(q') &{} \text { if } \;q,q' \in Z_i \;\text { and } \;\mathsf {obs}_i(q) = o \\ 1 &{} \text { if } \;q\in Z_i \text {, } \mathsf {obs}_i(q) \ne o \text { and } q' = \textsf {bad}_i \\ 1 &{} \text { if }\; q,q' = \mathsf {bad}_i \\ 0 &{} \text { otherwise } \end{array}\right. }. \end{aligned}$$
Let \(u= (\alpha _1, o_0) \ldots (\alpha _k, o_{k-1})\) be a non-empty word on \(\varSigma \). For the word \(u\), let \(\overline{o}_u\) be the trace \(o_0 \alpha _1 o_1 \alpha _2 {\cdots } \alpha _{k-1} o_{k-1}\). The proposition follows immediately from the observation that \(\mathsf {prob}_{\mathsf {A}_i}(u)= \mathsf {prob}_{\mathcal {M}_i}(\overline{o}_u)\).    \(\square \)
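The transition function \(\varDelta '_i\) above can be transcribed directly: a state moves per \(\varDelta (q,\alpha )\) when its observation matches \(o\), and otherwise falls into the absorbing reject state \(\mathsf {bad}\). A sketch with our own dictionary-based representation:

```python
from fractions import Fraction

BAD = 'bad'  # the extra absorbing, non-accepting state

def pfa_from_pomdp(states, actions, observations, delta, obs):
    """Build Delta' over the alphabet Act x O from a POMDP's Delta and obs."""
    delta_prime = {}
    for q in list(states) + [BAD]:
        for alpha in actions:
            for o in observations:
                if q == BAD:
                    row = {BAD: Fraction(1)}      # bad is absorbing
                elif obs[q] == o:
                    row = dict(delta[q][alpha])   # observation matches: keep Delta
                else:
                    row = {BAD: Fraction(1)}      # mismatch: reject
                delta_prime[(q, (alpha, o))] = row
    return delta_prime

# Toy POMDP with two states and one action; every row still sums to 1.
dp = pfa_from_pomdp(['s', 't'], ['a'], ['o1', 'o2'],
                    {'s': {'a': {'t': Fraction(1)}},
                     't': {'a': {'t': Fraction(1)}}},
                    {'s': 'o1', 't': 'o2'})
print(dp[('s', ('a', 'o1'))])
```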

An MDP \(\mathcal {M}= (Z, z_s, \mathsf {Act}, \varDelta )\) is said to be acyclic if there is a set of absorbing states \(Z_{\mathsf {abs}} \subseteq Z\) such that for all \(\alpha \in \mathsf {Act}\) and \(z \in Z_{\mathsf {abs}}\), \(\varDelta (z, \alpha )(z) = 1\) and for all \(\rho = z_0 \xrightarrow {\alpha _1} {\cdots } \xrightarrow {\alpha _m} z_m\in \mathsf {Exec}(\mathcal {M})\) if \(z_i=z_j\) for \(i\ne j\) then \(z_i\in Z_{\mathsf {abs}}\). Intuitively, acyclic MDPs are MDPs that have a set of “final” absorbing states and the only cycles in the underlying graph are self-loops on these states. A POMDP \(\mathcal {M}= (Z, z_s, \mathsf {Act}, \varDelta , \mathcal {O}, \mathsf {obs})\) is acyclic if the MDP \(\mathcal {M}_0= (Z, z_s, \mathsf {Act}, \varDelta )\) is acyclic. We have the following result, which can be shown from the P-hardness of the PFA equivalence problem [45].

Proposition 2

Indistinguishability of finite acyclic POMDPs is \({\mathbf {P}}\)-hard. Hence indistinguishability of finite POMDPs is \({\mathbf {P}}\)-complete.

Thanks to Proposition 1, we can check indistinguishability for finite POMDPs by reducing it to PFA equivalence. We now present a new algorithm for indistinguishability of finite acyclic POMDPs. A well-known POMDP analysis technique is to translate a POMDP \(\mathcal {M}\) into a fully observable belief MDP \(\mathcal {B}(\mathcal {M})\) that emulates it. One can then analyze \(\mathcal {B}(\mathcal {M})\) to infer properties of \(\mathcal {M}\). The states of \(\mathcal {B}(\mathcal {M})\) are probability distributions over the states of \(\mathcal {M}\). Further, given a state \(b\in \mathcal {B}(\mathcal {M})\), if states \(z_1,z_2\) of \(\mathcal {M}\) are such that \(b(z_1), b(z_2)\) are non-zero then \(z_1\) and \(z_2\) must have the same observation. Hence, by abuse of notation, we can define \(\mathsf {obs}(b)\) to be \(\mathsf {obs}(z)\) if \(b(z) \ne 0\). Intuitively, an execution \(\rho = b_0 \xrightarrow {\alpha _1} b_1 \xrightarrow {\alpha _2} \cdots \xrightarrow {\alpha _m} b_m\) of \(\mathcal {B}(\mathcal {M})\) corresponds to the set of all executions \(\rho '\) of \(\mathcal {M}\) such that \(\mathsf {tr}(\rho ')= \mathsf {obs}(b_0) \alpha _1 \mathsf {obs}(b_1) \alpha _2 \cdots \alpha _m \mathsf {obs}(b_m)\). The measure of execution \(\rho \) in \(\mathcal {B}(\mathcal {M})\) is exactly \(\mathsf {prob}_\mathcal {M}( \mathsf {obs}(b_0) \alpha _1 \mathsf {obs}(b_1) \alpha _2 \cdots \alpha _m \mathsf {obs}(b_m))\).

The initial state of \(\mathcal {B}(\mathcal {M})\) is the distribution that assigns 1 to the initial state of \(\mathcal {M}\). Intuitively, on a given belief state b and an action \(\alpha \), there is at most one successor state \(b^{\alpha ,o}\) for each observation o. The probability of transitioning from b to \(b^{\alpha ,o}\) is the probability that o is observed given that the distribution on the states of \(\mathcal {M}\) is b and action \(\alpha \) is performed; \(b^{\alpha ,o}(z)\) is the conditional probability that the actual state of the POMDP is z. The formal definition follows.

Definition 1

Let \(\mathcal {M}= (Z, z_s, \mathsf {Act}, \varDelta , \mathcal {O}, \mathsf {obs})\) be a POMDP. The belief MDP of \(\mathcal {M}\), denoted \(\mathcal {B}(\mathcal {M})\), is the tuple \((\mathsf {Dist}(Z), \delta _{z_s}, \mathsf {Act}, \varDelta ^{\mathcal {B}} )\) where \(\varDelta ^{\mathcal {B}}\) is defined as follows. For \(b\in \mathsf {Dist}(Z)\), action \(\alpha \in \mathsf {Act}\) and \(o\in \mathcal {O}\), let
$$p_{b,\alpha ,o}= \sum _{z\in Z} b(z) \cdot \bigg ( \sum _{z' \in Z\wedge \mathsf {obs}(z')=o} \varDelta (z, \alpha )(z')\bigg ).$$
\(\varDelta ^{\mathcal {B}}(b, \alpha )\) is the unique distribution such that for each \(o\in \mathcal {O}\), if \(p_{b,\alpha ,o}\ne 0\) then \(\varDelta ^{\mathcal {B}}(b, \alpha )(b^{\alpha , o}) = p_{b,\alpha ,o}\) where for all \(z'\in Z\),
$$\begin{aligned} b^{\alpha , o} (z') = {\left\{ \begin{array}{ll} \frac{\sum _{z \in Z} b(z) \cdot \varDelta (z, \alpha )(z') }{p_{b,\alpha ,o}} &{} { if }\,\, \mathsf {obs}(z') = o\\ 0 &{} \,{ otherwise} \\ \end{array}\right. }. \end{aligned}$$
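One step of this definition can be sketched as follows: given a belief b (a dict from states to probabilities), an action \(\alpha \), the transition function \(\varDelta \) and the labeling \(\mathsf {obs}\), compute \(p_{b,\alpha ,o}\) and the successor belief \(b^{\alpha ,o}\) for every observation o with non-zero probability. The data layout is ours, not Span's.

```python
from fractions import Fraction

def belief_step(b, alpha, delta, obs):
    """Return {o: (p_{b,alpha,o}, b^{alpha,o})} for observations with p > 0."""
    rows = {}                        # o -> unnormalized successor weights
    for z, bz in b.items():
        for z2, p in delta[z][alpha].items():
            row = rows.setdefault(obs[z2], {})
            row[z2] = row.get(z2, Fraction(0)) + bz * p
    out = {}
    for o, row in rows.items():
        p_bao = sum(row.values())    # p_{b,alpha,o}
        out[o] = (p_bao, {z2: w / p_bao for z2, w in row.items()})
    return out

# Toy POMDP: from s, action 'a' reaches t (observation h) or u (observation t),
# each with probability 1/2; the belief splits into two Dirac successors.
delta = {'s': {'a': {'t': Fraction(1, 2), 'u': Fraction(1, 2)}}}
obs = {'s': 'i', 't': 'h', 'u': 't'}
print(belief_step({'s': Fraction(1)}, 'a', delta, obs))
```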
Let \(\mathcal {M}_i =(Z_i, z_s^i, \mathsf {Act}, \varDelta _i, \mathcal {O}, \mathsf {obs}_i)\) for \(i \in \{0,1\}\) be POMDPs with the same set of actions and observations. In [14] the authors show that \(\mathcal {M}_0\) and \(\mathcal {M}_1\) are indistinguishable if and only if the beliefs \(\delta _{z_s^0}\) and \(\delta _{z_s^1}\) are strongly belief bisimilar. Strong belief bisimilarity coincides with the notion of bisimilarity of labeled MDPs: a pair of states \((b_0, b_1) \in \mathsf {Dist}(Z_0) \times \mathsf {Dist}(Z_1)\) is said to be strongly belief bisimilar if (i) \(\mathsf {obs}(b_0)=\mathsf {obs}(b_1)\), (ii) \(p_{b_0,\alpha ,o} = p_{b_1,\alpha ,o}\) for all \(\alpha \in \mathsf {Act}\) and \(o \in \mathcal {O}\), and (iii) the pair \((b_0^{\alpha ,o}, b_1^{\alpha ,o})\) is strongly belief bisimilar whenever \(p_{b_0,\alpha ,o}=p_{b_1,\alpha ,o}>0\). Observe that, in general, belief MDPs are defined over an infinite state space. It is easy to see that, for a finite acyclic POMDP \(\mathcal {M}\), \(\mathcal {B}(\mathcal {M})\) is acyclic and has a finite number of reachable belief states. Let \(\mathcal {M}_0\) and \(\mathcal {M}_1\) be as above and assume further that \(\mathcal {M}_0, \mathcal {M}_1\) are finite and acyclic with absorbing states \(Z_{\mathsf {abs}} \subseteq Z_0 \cup Z_1\). As a consequence of the result from [14] and the observations above, we can determine if two states \((b_0, b_1) \in \mathsf {Dist}(Z_0) \times \mathsf {Dist}(Z_1)\) are strongly belief bisimilar using the on-the-fly procedure from Algorithm 1.
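The recursive check can be sketched in the spirit of this on-the-fly procedure (we do not reproduce Algorithm 1 itself; the representation of beliefs as dictionaries, and the names, are ours). Termination relies on acyclicity: every branch eventually reaches beliefs supported only on the absorbing states.

```python
from fractions import Fraction

def belief_step(b, alpha, delta, obs):
    """Return {o: (p_{b,alpha,o}, b^{alpha,o})} for observations with p > 0."""
    rows = {}
    for z, bz in b.items():
        for z2, p in delta[z][alpha].items():
            row = rows.setdefault(obs[z2], {})
            row[z2] = row.get(z2, Fraction(0)) + bz * p
    out = {}
    for o, row in rows.items():
        total = sum(row.values())
        out[o] = (total, {z2: w / total for z2, w in row.items()})
    return out

def bisimilar(b0, b1, actions, delta0, obs0, delta1, obs1, absorbing):
    """Strong belief bisimilarity of two beliefs in finite acyclic POMDPs."""
    if {obs0[z] for z in b0} != {obs1[z] for z in b1}:
        return False                       # condition (i): observations differ
    if set(b0) <= absorbing and set(b1) <= absorbing:
        return True                        # both final: recursion bottoms out
    for alpha in actions:
        s0 = belief_step(b0, alpha, delta0, obs0)
        s1 = belief_step(b1, alpha, delta1, obs1)
        if set(s0) != set(s1):
            return False
        for o in s0:
            (p0, c0), (p1, c1) = s0[o], s1[o]
            if p0 != p1:                   # condition (ii)
                return False
            if not bisimilar(c0, c1, actions, delta0, obs0, delta1, obs1, absorbing):
                return False               # condition (iii)
    return True

# Toy comparison: a fair coin versus a 1/3-biased coin, each revealed by
# the observation of the absorbing state reached.
delta0 = {'s': {'a': {'t': Fraction(1, 2), 'u': Fraction(1, 2)}},
          't': {'a': {'t': Fraction(1)}}, 'u': {'a': {'u': Fraction(1)}}}
delta1 = {'s': {'a': {'t': Fraction(1, 3), 'u': Fraction(2, 3)}},
          't': {'a': {'t': Fraction(1)}}, 'u': {'a': {'u': Fraction(1)}}}
obs = {'s': 'init', 't': 'heads', 'u': 'tails'}
print(bisimilar({'s': Fraction(1)}, {'s': Fraction(1)}, ['a'],
                delta0, obs, delta0, obs, {'t', 'u'}))  # True
print(bisimilar({'s': Fraction(1)}, {'s': Fraction(1)}, ['a'],
                delta0, obs, delta1, obs, {'t', 'u'}))  # False
```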

4 Randomized Security Protocols

We now present our core process calculus for modeling security protocols with coin tosses. The calculus closely resembles the ones from [10, 53]. First proposed in [39], it extends the applied \(\pi \)-calculus [5] by the inclusion of a new operator for probabilistic choice. As in the applied \(\pi \)-calculus, the calculus assumes that messages are terms in a first-order signature identified up-to an equational theory.

4.1 Terms, Equational Theories and Frames

A signature \({\mathcal {F}}\) contains a finite set of function symbols, each with an associated arity. We assume \({\mathcal {F}}\) contains two special disjoint sets, \(\mathcal {N}_{\mathsf {pub}}\) and \(\mathcal {N}_{\mathsf {priv}}\), of 0-ary symbols. The elements of \(\mathcal {N}_{\mathsf {pub}}\) are called public names and represent public nonces that can be used by the Dolev-Yao adversary. The elements of \(\mathcal {N}_{\mathsf {priv}}\) are called names and represent secret nonces and secret keys. We also assume a set of variables that are partitioned into two disjoint sets \(\mathcal {X}\) and \(\mathcal {X}_w\). The variables in \(\mathcal {X}\) are called protocol variables and are used as placeholders for messages input by protocol participants. The variables in \(\mathcal {X}_w\) are called frame variables and are used to point to messages received by the Dolev-Yao adversary. Terms are built by the application of function symbols to variables and terms in the standard way. Given a signature \({\mathcal {F}}\) and \(\mathcal {Y}\subseteq \mathcal {X}\cup \mathcal {X}_w\), we use \(\mathcal {T}({\mathcal {F}}, \mathcal {Y})\) to denote the set of terms built over \({\mathcal {F}}\) and \(\mathcal {Y}\). The set of variables occurring in a term u is denoted by \(\mathsf {vars}(u)\). A ground term is a term that contains no variables.

A substitution \(\sigma \) is a partial function with a finite domain that maps variables to terms. \(\mathsf {dom}(\sigma )\) will denote the domain and \(\mathsf {ran}(\sigma )\) will denote the range. For a substitution \(\sigma \) with \(\mathsf {dom}(\sigma ) = \{x_1, \ldots , x_k\}\), we denote \(\sigma \) as \(\{x_1 \mapsto \sigma (x_1), \ldots , x_k \mapsto \sigma (x_k) \}\). A substitution \(\sigma \) is said to be ground if every term in \(\mathsf {ran}(\sigma )\) is ground and a substitution with an empty domain will be denoted as \(\emptyset \). Substitutions can be applied to terms in the usual way and we write \(u \sigma \) for the term obtained by applying the substitution \(\sigma \) to the term u.

Our process algebra is parameterized by an equational theory \(({\mathcal {F}}, E)\), where E is a set of \({\mathcal {F}}\)-equations. By an \({\mathcal {F}}\)-equation, we mean a pair \(u = v\) where \(u,v \in \mathcal {T}({\mathcal {F}}\setminus \mathcal {N}_{\mathsf {priv}}, \mathcal {X})\) are terms that do not contain private names. We will assume that the equations of \(({\mathcal {F}}, E)\) can be oriented to produce a convergent rewrite system. Two terms u and v are said to be equal with respect to an equational theory \(({\mathcal {F}}, E)\), denoted \(u =_E v\), if \(E \vdash u = v\) in the first order theory of equality. We often identify an equational theory \(({\mathcal {F}}, E)\) by E when the signature is clear from the context.

In the calculus, all communication is mediated through an adversary: all outputs first go to an adversary and all inputs are provided by the adversary. Hence, processes are executed in an environment that consists of a frame \(\varphi : \mathcal {X}_w \rightarrow \mathcal {T}({\mathcal {F}}, \emptyset )\) and a ground substitution \(\sigma : \mathcal {X}\rightarrow \mathcal {T}({\mathcal {F}}, \emptyset )\). Intuitively, \(\varphi \) represents the sequence of messages an adversary has received from protocol participants and \(\sigma \) records the binding of the protocol variables to actual input messages. An adversary is limited to sending only those messages that it can deduce from the messages it has received thus far. Formally, a term \(u \in \mathcal {T}({\mathcal {F}}, \emptyset )\) is deducible from a frame \(\varphi \) with recipe \(r \in \mathcal {T}({\mathcal {F}}\setminus \mathcal {N}_{\mathsf {priv}}, \mathsf {dom}(\varphi ))\) in equational theory E, denoted \(\varphi \vdash _E^r u\), if \(r \varphi =_E u\). We will often omit r and E and write \(\varphi \vdash u\) if they are clear from the context.

We now recall an equivalence on frames, called static equivalence [5]. Intuitively, two frames are statically equivalent if the adversary cannot distinguish them by performing tests. The tests consist of checking whether two recipes deduce the same term. Formally, two frames \(\varphi _1\) and \(\varphi _2\) are said to be statically equivalent in equational theory E, denoted \(\varphi _1 \equiv _E \varphi _2\), if \(\mathsf {dom}(\varphi _1) = \mathsf {dom}(\varphi _2)\) and for all \(r_1, r_2 \in \mathcal {T}({\mathcal {F}}\setminus \mathcal {N}_{\mathsf {priv}}, \mathcal {X}_w)\) we have \(r_1 \varphi _1 =_E r_2 \varphi _1\) iff \(r_1 \varphi _2 =_E r_2 \varphi _2\).
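As a toy illustration only (this brute-force enumeration is not how Span, or any practical tool, decides static equivalence), one can approximate the check for small frames by enumerating recipes up to a fixed depth and comparing, in the two frames, which pairs of recipes collapse to the same normal form. We use the classic symmetric-encryption equation \(\mathsf {dec}(\mathsf {enc}(x,k),k)=x\), oriented left-to-right into a convergent rewrite rule; the term encoding is ours.

```python
from collections import defaultdict
from itertools import product

# Terms are atoms (strings) or tuples ('f', arg1, ..., argn).
def normalize(t):
    """Normal form under the rule dec(enc(x, k), k) -> x, applied bottom-up."""
    if isinstance(t, str):
        return t
    head, *args = t
    args = [normalize(a) for a in args]
    if head == 'dec' and isinstance(args[0], tuple) and args[0][0] == 'enc' \
            and args[0][2] == args[1]:
        return args[0][1]
    return (head, *args)

def apply_frame(r, phi):
    """Instantiate the frame variables of recipe r with the frame's terms."""
    if isinstance(r, str):
        return phi.get(r, r)
    return (r[0], *(apply_frame(a, phi) for a in r[1:]))

def recipes(atoms, depth):
    """All recipes over public atoms and frame variables, up to `depth`."""
    terms = set(atoms)
    for _ in range(depth):
        terms |= {(f, u, v) for f in ('enc', 'dec')
                  for u, v in product(terms, repeat=2)}
    return terms

def partition(rs, phi):
    """Group recipes by the normal form of their instantiation in phi."""
    groups = defaultdict(set)
    for r in rs:
        groups[normalize(apply_frame(r, phi))].add(r)
    return set(map(frozenset, groups.values()))

def statically_equivalent(phi1, phi2, pub, depth=1):
    rs = recipes(pub | set(phi1), depth)
    return partition(rs, phi1) == partition(rs, phi2)

# Without the key, enc(m, k) looks like enc(m, k2); revealing the key breaks it.
phi1 = {'w1': ('enc', 'm', 'k'), 'w2': 'k'}
phi2 = {'w1': ('enc', 'm', 'k2'), 'w2': 'k'}
print(statically_equivalent({'w1': phi1['w1']}, {'w1': phi2['w1']}, {'m'}))  # True
print(statically_equivalent(phi1, phi2, {'m'}))  # False
```

With the key \(k\) in the frame, the recipe \(\mathsf {dec}(w_1, w_2)\) collapses with the public name m in the first frame but not in the second, which is exactly the kind of test the definition quantifies over.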

4.2 Process Syntax

Processes in our calculus are the parallel composition of roles. Intuitively, a role models a single actor in a single session of the protocol. Syntactically, a role is derived from the grammar:
$$\begin{aligned} R\,\, {:}{:=} \, \, 0 \, \big \vert \, \mathtt {in}(x)^{\ell } \, \big \vert \, \mathtt {out}(u_0 \cdot R +_p u_1 \cdot R)^{\ell }\, \big \vert \,\mathsf {ite }([c_1 \wedge \ldots \wedge c_k ], R, R)^{\ell }\, \big \vert \, (R \cdot R) \end{aligned}$$
where p is a rational number in the unit interval [0, 1], \(\ell \in \mathbb {N}\), \(x \in \mathcal {X}\), \(u_0, u_1 \in \mathcal {T}({\mathcal {F}}, \mathcal {X})\) and \(c_i\) is \(u_i=v_i\) with \(u_i, v_i \in \mathcal {T}({\mathcal {F}}, \mathcal {X})\) for all \(i \in \{ 1,\ldots ,k \}\). The constructs \(\mathtt {in}(x)^{\ell }\), \(\mathtt {out}(u_0 \cdot R +_p u_1 \cdot R)^{\ell }\) and \(\mathsf {ite}([c_1 \wedge \ldots \wedge c_k ],R,R)^{\ell }\) are said to be labeled operations and \(\ell \in \mathbb {N}\) is said to be their label. The role 0 does nothing. The role \(\mathtt {in}(x)^{\ell }\) reads a term u from the public channel and binds it to x. The role \(\mathtt {out}(u_0 \cdot R +_p u_1 \cdot R')^{\ell }\) outputs the term \(u_0\) on the public channel and becomes R with probability p and it outputs the term \(u_1\) and becomes \(R'\) with probability \(1-p\). A test \([c_1 \wedge \ldots \wedge c_k ]\) is said to pass if for all \(1\le i \le k\), the equality \(c_i\) holds. The conditional role \(\mathsf {ite }([c_1 \wedge \ldots \wedge c_k ], R, R')^{\ell }\) becomes R if \([c_1 \wedge \ldots \wedge c_k ]\) passes and otherwise it becomes \(R'\). The role \(R \cdot R'\) is the sequential composition of role R followed by role \(R'\). The set of variables of a role R is the set of variables occurring in R. The construct \(\mathtt {in}(x)^\ell \cdot R\) binds variable x in R. The set of free and bound variables in a role can be defined in the standard way. We will assume that the set of free variables and bound variables of a role are disjoint and that a bound variable is bound only once in a role. A role R is said to be well-formed if every labeled operation occurring in R has the same label \(\ell \); the label \(\ell \) is said to be the label of the well-formed role R.

A process is the parallel composition of a finite set of roles \(R_1, \ldots , R_{n}\), denoted \(R_1 \mid \ldots \mid R_{n}\). We will use P and Q to denote processes. A process \(R_1 \mid \ldots \mid R_n\) is said to be well-formed if each role is well-formed, the sets of variables of \(R_i\) and \(R_j\) are disjoint for \(i\ne j\), and the labels of roles \(R_i\) and \(R_j\) are different for \(i\ne j\). For the remainder of this paper, processes are assumed to be well-formed. The set of free (resp. bound) variables of P is the union of the sets of free (resp. bound) variables of its roles. P is said to be ground if the set of its free variables is empty. We shall omit labels when they are not relevant in a particular context.
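As a hypothetical rendering of this syntax, roles can be represented as an abstract syntax tree and the label-based well-formedness conditions checked directly; conditionals, the role 0 and the variable disjointness check are elided in this sketch.

```python
from dataclasses import dataclass

@dataclass
class In:                 # in(x)^l
    var: str
    label: int

@dataclass
class Out:                # out(u0 . R +_p u1 . R')^l, continuations expressed via Seq
    u0: str
    p: float
    u1: str
    label: int

@dataclass
class Seq:                # R . R'
    first: object
    rest: object

def labels(role):
    """Set of labels of the labeled operations occurring in a role."""
    if isinstance(role, Seq):
        return labels(role.first) | labels(role.rest)
    return {role.label}

def well_formed(process):
    """A process (list of roles) is well-formed if every role carries exactly
    one label and distinct roles carry distinct labels."""
    seen = set()
    for role in process:
        ls = labels(role)
        if len(ls) != 1 or ls & seen:
            return False
        seen |= ls
    return True
```

A role mixing two labels, or two roles sharing one label, is rejected.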

Example 2

We model the electronic voting protocol from Example 1 in our formalism. The protocol is built over the equational theory with signature \({\mathcal {F}}= \{\mathsf {sk}/1, \text { }\mathsf {pk}/1,\text { } \mathsf {aenc}/3, \text { } \mathsf {adec}/2, \text { } \mathsf {pair}/2, \text { } \mathsf {fst}/1, { } \mathsf {snd}/1 \}\) and the equations
$$\begin{aligned} E= & {} \{\mathsf {adec}(\mathsf {aenc}(m, r, \mathsf {pk}(k)), \mathsf {sk}(k)) = m, \\&\quad \mathsf {fst}(\mathsf {pair}(m_1, m_2)) = m_1, \,\,\mathsf {snd}(\mathsf {pair}(m_1, m_2)) = m_2 \} . \end{aligned}$$
The function \(\mathsf {sk}\) (resp. \(\mathsf {pk}\)) is used to generate a secret (resp. public) key from a nonce. For the generation of their public key pairs, Alice, Bob and the election authority hold private names \(k_A\), \(k_B\) and \(k_{EA}\), respectively. The candidates will be modeled using public names \(c_0\) and \(c_1\) and the tokens will be modeled using private names \(t_A\) and \(t_B\). Additionally, we will write \(y_i\) and \(r_i\) for \(i \in \mathbb {N}\) to denote fresh input variables and private names, respectively. The roles of Alice, Bob and the election authority are as follows. In the roles above, we write \(\mathtt {out}(u_0)\) as shorthand for \(\mathtt {out}(u_0 \cdot 0 +_1 u_0 \cdot 0)\). The entire protocol is \(\mathsf {evote}(c_A, c_B) = A(c_A) \mid B(c_B) \mid EA\).
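The equations of E can be read computationally. The following hypothetical encoding (terms as tuples, constructors as functions, destructors implementing the equations) checks that the election authority recovers Alice's vote-token pair from her ballot, and that any other key gets stuck.

```python
# Constructors build terms; destructors apply the equations of E.
def pk(k):            return ("pk", k)
def sk(k):            return ("sk", k)
def aenc(m, r, pub):  return ("aenc", m, r, pub)
def pair(a, b):       return ("pair", a, b)
def fst(t):           return t[1]   # fst(pair(m1, m2)) = m1
def snd(t):           return t[2]   # snd(pair(m1, m2)) = m2

def adec(c, key):
    """adec(aenc(m, r, pk(k)), sk(k)) = m; any other application is stuck."""
    if c[0] == "aenc" and key[0] == "sk" and c[3] == pk(key[1]):
        return c[1]
    raise ValueError("decryption failed")
```

For example, a ballot \(\mathsf {aenc}(\mathsf {pair}(c_A, t_A), r_1, \mathsf {pk}(k_{EA}))\) decrypts under \(\mathsf {sk}(k_{EA})\) to the pair, from which \(\mathsf {fst}\) recovers the vote.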

4.3 Process Semantics

An extended process is a 3-tuple \((P, \varphi , \sigma )\) where P is a process, \(\varphi \) is a frame and \(\sigma \) is a ground substitution whose domain contains the free variables of P. We will write \(\mathcal {E}\) to denote the set of all extended processes. Semantically, a ground process P with n roles is a POMDP \([\![P]\!] = (Z, z_s, \mathsf {Act}, \varDelta , \mathcal {O}, \mathsf {obs})\), where \(Z= \mathcal {E}\cup \{\mathsf {error}\}\), \(z_s\) is \((P, \emptyset , \emptyset )\), \(\mathsf {Act}= (\mathcal {T}({\mathcal {F}}\setminus \mathcal {N}_{\mathsf {priv}}, \mathcal {X}_w) \cup \{\tau \}) \times \{1, \ldots , n\}\), \(\varDelta \) is a function that maps an extended process and an action to a distribution on \(\mathcal {E}\), \(\mathcal {O}\) is the set of equivalence classes on frames over the static equivalence relation \(\equiv _E\) and \(\mathsf {obs}\) is as follows. Let \([\varphi ]\) denote the equivalence class of \(\varphi \) with respect to \(\equiv _E\). Define \(\mathsf {obs}\) to be the function such that for any extended process \(\eta = (P, \varphi , \sigma )\), \(\mathsf {obs}(\eta ) = [\varphi ]\). We now give some additional notation needed for the definition of \(\varDelta \). Given a measure \(\mu \) on \(\mathcal {E}\) and a role R, we define \(\mu \cdot R\) to be the distribution \(\mu _1\) on \(\mathcal {E}\) such that \(\mu _1(P', \varphi , \sigma ) = \mu (P, \varphi , \sigma )\) if \(\mu (P, \varphi , \sigma )>0\) and \(P'\) is \(P \cdot R\), and 0 otherwise. Given a measure \(\mu \) on \(\mathcal {E}\) and a process Q, we define \(\mu \mid Q\) to be the distribution \(\mu _1\) on \(\mathcal {E}\) such that \(\mu _1(P', \varphi , \sigma ) = \mu (P, \varphi , \sigma )\) if \(\mu (P, \varphi , \sigma )>0\) and \(P'\) is \(P \mid Q\), and 0 otherwise. The distribution \(Q \mid \mu \) is defined analogously.
For distributions \(\mu _1, \mu _2\) over \(\mathcal {E}\) and a rational number \(p \in [0,1]\), the convex combination \(\mu _1 +_p^{\mathcal {E}} \mu _2\) is the distribution \(\mu \) on \(\mathcal {E}\) such that \(\mu (\eta ) = p \cdot \mu _1(\eta ) + (1-p) \cdot \mu _2(\eta )\) for all \(\eta \in \mathcal {E}\). The definition of \(\varDelta \) is given in Fig. 1, where we write \((P, \varphi , \sigma ) \xrightarrow {\alpha } \mu \) if \(\varDelta ((P, \varphi , \sigma ), \alpha ) = \mu \). If \(\varDelta ((P, \varphi , \sigma ), \alpha )\) is undefined in Fig. 1 then \(\varDelta ((P, \varphi , \sigma ), \alpha ) = \delta _{{\mathsf {error}}}\). Note that \(\varDelta \) is well-defined, as roles are deterministic.
Fig. 1. Process semantics
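The distribution operations used in the definition of \(\varDelta \) can be sketched directly; a minimal illustrative version with exact arithmetic (not Span's code) follows.

```python
from fractions import Fraction

def dirac(eta):
    """Point distribution delta_eta."""
    return {eta: Fraction(1)}

def convex(mu1, mu2, p):
    """mu1 +_p mu2: assigns p*mu1(eta) + (1-p)*mu2(eta) to each eta."""
    support = set(mu1) | set(mu2)
    mu = {eta: p * mu1.get(eta, 0) + (1 - p) * mu2.get(eta, 0) for eta in support}
    return {eta: w for eta, w in mu.items() if w > 0}

def lift(mu, f):
    """Apply f to each state in the support, e.g. eta -> eta . R or eta -> eta | Q."""
    out = {}
    for eta, w in mu.items():
        out[f(eta)] = out.get(f(eta), 0) + w
    return out
```

For instance, the output step \(\mathtt {out}(u_0 \cdot R +_{1/2} u_1 \cdot R')\) yields `convex(dirac(s0), dirac(s1), Fraction(1, 2))` over the two successor states.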

4.4 Indistinguishability in Randomized Cryptographic Protocols

Protocols P and \(P'\) are said to be indistinguishable if \([\![P]\!] \approx [\![P']\!]\). Many interesting properties of randomized security protocols can be specified using indistinguishability. For example, consider the simple electronic voting protocol from Example 2. We say that the protocol satisfies the vote privacy property if \(\mathsf {evote}(c_0, c_1)\) and \(\mathsf {evote}(c_1, c_0)\) are indistinguishable.

In the remainder of this section, we study the problem of deciding when two protocols are indistinguishable by a bounded Dolev-Yao adversary. We restrict our attention to indistinguishability of protocols over subterm convergent equational theories [4]. Before presenting our results, we give some relevant definitions. \(({\mathcal {F}}, E)\) is said to be subterm convergent if for every equation \(u = v \in E\) oriented as a rewrite rule \(u \rightarrow v\), either v is a proper subterm of u or v is a public name. A term u can be represented as a directed acyclic graph (dag), denoted \(\mathsf {dag}(u)\) [4, 51]. Every node in \(\mathsf {dag}(u)\) is labeled by a function symbol, a name or a variable. Nodes labeled by names and variables have out-degree 0. A node labeled with a function symbol f has out-degree equal to the arity of f, where the outgoing edges of the node are labeled from 1 to the arity of f. Every node of \(\mathsf {dag}(u)\) represents a unique subterm of u. The depth of a term u, denoted \(\mathsf {depth}(u)\), is the length of the longest simple path from the root in \(\mathsf {dag}(u)\). Given an action \(\alpha \), \(\mathsf {depth}(\alpha ) = 0\) if \(\alpha = (\tau , j)\) and \(\mathsf {depth}(\alpha ) = m\) if \(\alpha = (r, j)\) and \(\mathsf {depth}(r) = m\).
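These definitions can be made concrete with a short sketch (terms as nested tuples, `?`-prefixed strings as variables): the nodes of \(\mathsf {dag}(u)\) are exactly the distinct subterms of u, depth is the longest root path, and subterm convergence is a per-rule check.

```python
def subterms(term):
    """Distinct subterms of a term = nodes of dag(term), with maximal sharing."""
    nodes = {term}
    if not isinstance(term, str):            # names and variables have out-degree 0
        for arg in term[1:]:
            nodes |= subterms(arg)
    return nodes

def depth(term):
    """Length of the longest simple path from the root in dag(term)."""
    if isinstance(term, str):
        return 0
    return 1 + max(depth(a) for a in term[1:])

def subterm_convergent(rules, public_names):
    """Each oriented rule l -> r must have r a proper subterm of l or a public name."""
    return all((r != l and r in subterms(l)) or r in public_names
               for l, r in rules)
```

Note how sharing makes the dag smaller than the tree: \(\mathsf {pair}(\mathsf {h}(n), \mathsf {h}(n))\) has five tree nodes but only three dag nodes.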

Let P be a protocol such that \([\![P]\!] = (Z, z_s, \mathsf {Act}, \varDelta , \mathcal {O}, \mathsf {obs})\). Define \([\![P]\!]_{d}\) to be the POMDP \((Z, z_s, \mathsf {Act}_d, \varDelta , \mathcal {O}, \mathsf {obs})\) where \(\mathsf {Act}_d \subseteq \mathsf {Act}\) is such that every \(\alpha \in \mathsf {Act}\) has \(\mathsf {depth}(\alpha ) \le d\). For a constant \(d \in \mathbb {N}\), we define \(\mathsf {InDist}(d)\) to be the decision problem that, given a subterm convergent theory \(({\mathcal {F}},E)\) and protocols P and \(P'\) over \(({\mathcal {F}},E)\), determines if \([\![P]\!]_{d}\) and \([\![P']\!]_{d}\) are indistinguishable. We assume that the arity of the function symbols in \({\mathcal {F}}\) is given in unary. We have the following.

Theorem 1

For any constant \(d \in \mathbb {N}\), \(\mathsf {InDist}(d)\) is in \({\mathbf {PSPACE}}\).

We now show that \(\mathsf {InDist}(d)\) is both NP-hard and coNP-hard by giving a reduction from \(\#\mathsf {SAT_D}\) to \(\mathsf {InDist}(d)\). \(\#\mathsf {SAT_D}\) is the decision problem that, given a 3CNF formula \(\phi \) and a constant \(k \in \mathbb {N}\), checks whether the number of satisfying assignments of \(\phi \) is equal to k.

Theorem 2

There is a \(d_0\in \mathbb {N}\) such that \(\#\mathsf {SAT_D}\) reduces to \(\mathsf {InDist}(d)\) in logspace for every \(d>d_0\). Thus, \(\mathsf {InDist}(d)\) is NP-hard and coNP-hard for every \(d>d_0\).
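To pin down the source problem of the reduction, here is a brute-force (exponential-time) reference decider for \(\#\mathsf {SAT_D}\); it is only meant to fix the semantics, not to be efficient.

```python
from itertools import product

def count_sat(clauses, n):
    """Number of satisfying assignments of a CNF formula over variables 1..n.
    A literal is a nonzero integer +/-i; a clause is a tuple of literals."""
    def val(lit, bits):
        return bits[abs(lit) - 1] ^ (lit < 0)
    return sum(1 for bits in product([False, True], repeat=n)
               if all(any(val(l, bits) for l in c) for c in clauses))

def sat_d(clauses, n, k):
    """#SAT_D: does the formula have exactly k satisfying assignments?"""
    return count_sat(clauses, n) == k
```

For the single clause \(x_1 \vee x_2 \vee x_3\), all assignments satisfy it except the all-false one, so the count is 7.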

5 Implementation and Evaluation

Using (the proof of) Proposition 1, we can solve the indistinguishability problem for randomized security protocols as follows. For protocols \(P,P'\), translate \([\![P]\!], [\![P']\!]\) into PFAs \(\mathsf {A}, \mathsf {A}'\) and determine if \(\mathsf {A}\equiv \mathsf {A}'\) using the linear programming algorithm from [31]. We will henceforth refer to this approach as the PFA algorithm and to the approach from Algorithm 1 as the OTF algorithm. We have implemented both the PFA and OTF algorithms as part of the Stochastic Protocol ANalayzer (Span), a Java-based tool for analyzing randomized security protocols. The tool is available for download at [1]. The main engine of Span translates a protocol into a POMDP, belief MDP or PFA by exploring all protocol executions and grouping equivalent states using one of two static equivalence engines, Kiss [4] or Akiss [16]. Kiss is guaranteed to terminate for subterm convergent theories and Akiss provides support for XOR while handling a slightly larger class of equational theories called optimally reducing. Operations from rewriting logic are provided by queries to Maude [24] and support for arbitrary precision numbers is given by Apfloat [2]. Our experiments were conducted on an Intel Core i7 dual quad-core processor at 2.67 GHz with 12 GB of RAM. The host operating system was 64-bit Ubuntu 16.04.3 LTS.

Our comparison of the PFA and OTF algorithms began by examining how each approach scaled on a variety of examples (detailed at the end of this section). The results of the analysis are given in Fig. 2. For each example, we consider a fixed recipe depth and report the running times for 2 parties as well as the maximum number of parties for which one of the algorithms terminates within the timeout bound of 60 min. On small examples for which the protocols were indistinguishable, we found that the OTF and PFA algorithms were roughly equivalent. On large examples where the protocols were indistinguishable, such as the 3-ballot protocol, the PFA algorithm did not scale as well as the OTF algorithm. In particular, an out-of-memory exception often occurred during construction of the automata or the linear programming constraints. On examples for which the protocols were distinguishable, the OTF algorithm demonstrated a significant advantage. This was a result of the fact that the OTF approach analyzed the model as it was constructed. If at any point during model construction the bisimulation relation was determined not to hold, model construction was halted. By contrast, the PFA algorithm required the entire model to be constructed and stored before any analysis could take place.
Fig. 2.

Experimental Results: Columns 1 and 2 describe the example being analyzed. Column 3 gives the maximum recipe depth and column 4 indicates whether the example protocols were indistinguishable. Columns 5–8 give the running time (in seconds) for the respective algorithms and static equivalence engines. We report OOM for an out-of-memory exception and TO for a timeout, which occurs if no solution is generated within 60 min. Column 9 gives the number of states in the protocol’s POMDP and column 10 gives the number of belief states explored in the OTF algorithm. When information could not be determined due to a failure of the tool to terminate, we report n/a. For protocols using equational theories that are not subterm convergent, we write n/s (not supported) for the Kiss engine.

In addition to stress-testing the tool, we also examined how each algorithm performed under various parameters of the mix-network example. The results are given in Fig. 3, where we examine how running times are affected by scaling the number of protocol participants and the recipe depth. Our results coincided with the observations from above. One interesting observation is that the number of beliefs explored on the 5 party example was identical for recipe depth 4 and recipe depth 10. The reason is that, for a given protocol input step, Span generates a minimal set of recipes, minimal in the sense that if recipes \(r_0,r_1\) are generated at an input step with frame \(\varphi \), then \(r_0\varphi \not =_E r_1\varphi \). For the given number of public names available to the protocol, changing the recipe depth from 4 to 10 did not alter the number of unique terms that could be constructed by the attacker. We conclude this section by describing our benchmark examples, which are available at [3]. Evote is the simple electronic voting protocol derived from Example 2; the DC-net, mix-net and 3-ballot protocols are described below.
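This pruning can be sketched as follows (hypothetical code, not Span's; free theory, so equality of deduced terms is syntactic): keep one representative recipe per distinct deducible term.

```python
from itertools import product

def value(r, frame):
    """Evaluate a recipe against the frame (free theory: =_E is syntactic)."""
    if isinstance(r, str):
        return frame[r]
    return (r[0],) + tuple(value(a, frame) for a in r[1:])

def minimal_recipes(frame, funcs, depth):
    """One representative recipe per distinct deducible term, up to `depth`:
    if distinct recipes r0 and r1 are both returned, their deduced terms
    differ, mirroring the pruning of recipes with r0.phi =_E r1.phi."""
    reps = {}
    for w in sorted(frame):
        reps.setdefault(value(w, frame), w)
    for _ in range(depth):
        snapshot = list(reps.values())
        for f, arity in funcs:
            for args in product(snapshot, repeat=arity):
                r = (f,) + args
                reps.setdefault(value(r, frame), r)
    return sorted(reps.values(), key=str)
```

With two handles bound to the same name, every recipe built through the second handle is pruned, which is why a larger depth bound need not enlarge the explored belief set.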

Dining Cryptographers Networks. In a simple DC-net protocol [38], two parties Alice and Bob want to anonymously publish two confidential bits \(m_A\) and \(m_B\), respectively. To achieve this, Alice and Bob agree on three private random bits \(b_0\), \(b_1\) and \(b_2\) and output a pair of messages according to the following scheme. In our model of the protocol, the private bits are generated by a trusted third party who communicates them to Alice and Bob using symmetric encryption.
From the protocol output, the messages \(m_A\) and \(m_B\) can be retrieved as \(M_{A,0} \oplus M_{B,0}\) and \(M_{A,1} \oplus M_{B,1}\). The party to which the messages belong, however, remains unconditionally private, provided the exchanged secrets are not revealed.
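One concrete two-party instantiation consistent with the recovery equations above can be sketched as follows (an assumed variant of [38], not necessarily the exact output table of our model): \(b_0\) and \(b_1\) pad the two slots, and \(b_2\) decides which party uses which slot.

```python
import random

def dc_net_round(m_a, m_b, rng=random):
    """One assumed two-party DC-net round: returns Alice's and Bob's pairs."""
    b0, b1, b2 = rng.randrange(2), rng.randrange(2), rng.randrange(2)
    if b2 == 0:
        alice = (m_a ^ b0, b1)
        bob = (b0, m_b ^ b1)
    else:
        alice = (b0, m_a ^ b1)
        bob = (m_b ^ b0, b1)
    return alice, bob

def recover(alice, bob):
    """XOR the published pairs slot-wise: reveals {m_a, m_b} but not ownership."""
    return alice[0] ^ bob[0], alice[1] ^ bob[1]
```

In both branches the pads cancel, so the observer learns the pair of bits; only \(b_2\), which stays secret, determines which slot belongs to whom.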
Fig. 3.

Detailed Experimental Results for Mix Networks: The columns have an identical meaning to the ones from Fig. 2. We report OOM for an out of memory exception and when information could not be determined due to a failure of the tool to terminate, we report n/a.

Mix Networks. A mix-network [21] is a routing protocol used to break the link between a message’s sender and the message. This is achieved by routing messages through a series of proxy servers, called mixes. Each mix collects a batch of encrypted messages, privately decrypts each message and forwards the resulting messages in random order. More formally, consider a sender Alice (A) who wishes to send a message m to Bob (B) through Mix (M). Alice prepares a cipher-text of the form \(\mathsf {aenc}(\mathsf {aenc}(m, n_1, \mathsf {pk}(B)), n_0, \mathsf {pk}(M))\) where \(\mathsf {aenc}\) is asymmetric encryption, \(n_0, n_1\) are nonces and \(\mathsf {pk}(M)\), \(\mathsf {pk}(B)\) are the public keys of the Mix and Bob, respectively. Upon receiving a batch of N such cipher-texts, the Mix unwraps the outer layer of encryption on each message using its secret key, randomly permutes and forwards the messages. A passive attacker, who observes all the traffic but does not otherwise modify the network, cannot (with high probability) correlate messages entering and exiting the Mix. Unfortunately, this simple design, known as a threshold mix, is vulnerable to a very simple active attack. To expose Alice as the sender of the message \(\mathsf {aenc}(m, n_1, \mathsf {pk}(B))\), an attacker simply forwards Alice’s message along with \(N{-}1\) dummy messages to the Mix. In this way, the attacker can distinguish which of the Mix’s N output messages is not a dummy message and hence must have originated from Alice.

3-Ballot Electronic Voting. We have modeled and analyzed the 3-ballot voting system from [54]. To simplify the presentation of this model, we first describe the major concepts behind 3-ballot voting schemes, as originally introduced in [50]. At the polling station, each voter is given 3 ballots at random. A ballot is comprised of a list of candidates and a ballot ID. When casting a vote, a voter begins by placing exactly one mark next to each candidate on one of the three ballots, chosen at random. An additional mark is then placed next to the desired candidate on one of the ballots, again chosen at random. At the completion of the procedure, at least one mark should have been placed on each ballot and two ballots should have marks corresponding to the desired candidate. Once all of the votes have been cast, ballots are collected and released to a public bulletin board. Each voter retains a copy of one of the three ballots as a receipt, which can be used to verify that his/her vote was counted.
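The marking and tallying procedure can be sketched as follows (an illustrative model, not our Span encoding): each voter contributes one spurious mark per candidate and one extra mark for the desired candidate, so the tally subtracts the number of voters from each candidate's mark count.

```python
import random

def mark_ballots(candidates, desired, rng=random):
    """Mark one of three ballots at random for each candidate, then add one
    extra mark for `desired` on a ballot not already marked for it."""
    ballots = [{c: 0 for c in candidates} for _ in range(3)]
    for c in candidates:
        ballots[rng.randrange(3)][c] = 1
    extra = rng.choice([b for b in ballots if b[desired] == 0])
    extra[desired] = 1
    return ballots

def tally(all_ballots, candidates, n_voters):
    """Marks per candidate minus the one spurious mark each voter contributed."""
    return {c: sum(b[c] for b in all_ballots) - n_voters for c in candidates}
```

The tally is independent of the random choices: the desired candidate always ends up with two marks across a voter's three ballots and every other candidate with one.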

In the full protocol, a registration agent is responsible for authenticating voters and receiving ballots and ballot IDs generated by a vote manager. Once a voter marks his/her set of three ballots, they are returned to the vote manager, who forwards them to one of three vote repositories. The vote repositories store the ballots they receive in a random position. After all votes have been collected in the repositories, they are released to a bulletin board by a vote collector. Communication between the registration agent, vote manager, vote repositories and vote collector is encrypted using asymmetric encryption and authenticated using digital signatures. In our model, we assume all parties behave honestly.

6 Conclusion

In this paper, we have considered the problem of model checking indistinguishability in randomized security protocols that are executed with respect to a Dolev-Yao adversary. We have presented two different algorithms for the indistinguishability problem assuming bounded recipe sizes. The algorithms have been implemented in the Span protocol analysis tool, which has been used to verify some well-known randomized security protocols. We propose the following as future work: (i) extension of the current algorithms as well as the tool to the case of unbounded recipe sizes; (ii) application of the tool to checking other randomized protocols; (iii) tight upper and lower bounds for the indistinguishability problem for randomized protocols.

Footnotes

  1. The operations considered are pairing, hashing, encryption and decryption.

  2. As we assume \({\mathcal {F}}\) is finite, we allow the adversary only a fixed number of public nonces.

References

  1.
  2.
  3.
  4. Abadi, M., Cortier, V.: Deciding knowledge in security protocols under equational theories. Theor. Comput. Sci. 367(1–2), 2–32 (2006)
  5. Abadi, M., Fournet, C.: Mobile values, new names, and secure communication. In: ACM SIGPLAN Notices, vol. 36, pp. 104–115. ACM (2001)
  6. Adida, B.: Helios: web-based open-audit voting. In: USENIX Security Symposium, vol. 17, pp. 335–348 (2008)
  7. Arapinis, M., Chothia, T., Ritter, E., Ryan, M.: Analysing unlinkability and anonymity using the applied pi calculus. In: Computer Security Foundations, pp. 107–121 (2010)
  8. Armando, A., Compagna, L.: SAT-based model-checking for security protocols analysis. Int. J. Inf. Secur. 7(1), 3–32 (2008)
  9. Basin, D., Dreier, J., Sasse, R.: Automated symbolic proofs of observational equivalence. In: Computer and Communications Security, pp. 1144–1155 (2015)
  10. Bauer, M.S., Chadha, R., Viswanathan, M.: Composing protocols with randomized actions. In: Piessens, F., Viganò, L. (eds.) POST 2016. LNCS, vol. 9635, pp. 189–210. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-49635-0_10
  11. Ben-Or, M., Goldreich, O., Micali, S., Rivest, R.L.: A fair protocol for signing contracts. IEEE Trans. Inf. Theory 36(1), 40–46 (1990)
  12. Blanchet, B., Abadi, M., Fournet, C.: Automated verification of selected equivalences for security protocols. J. Log. Algebraic Program. 75(1), 3–51 (2008)
  13. Canetti, R., Cheung, L., Kaynar, D., Liskov, M., Lynch, N., Pereira, O., Segala, R.: Task-structured probabilistic I/O automata. In: Discrete Event Systems (2006)
  14. Castro, P.S., Panangaden, P., Precup, D.: Equivalence relations in fully and partially observable Markov decision processes. In: International Joint Conference on Artificial Intelligence, vol. 9, pp. 1653–1658 (2009)
  15. Chadha, R., Cheval, V., Ciobâcă, Ş., Kremer, S.: Automated verification of equivalence properties of cryptographic protocols. ACM Trans. Comput. Log. 17(4), 23 (2016)
  16. Chadha, R., Ciobâcă, Ş., Kremer, S.: Automated verification of equivalence properties of cryptographic protocols. In: Seidl, H. (ed.) ESOP 2012. LNCS, vol. 7211, pp. 108–127. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-28869-2_6
  17. Chadha, R., Sistla, A.P., Viswanathan, M.: Model checking concurrent programs with nondeterminism and randomization. In: Foundations of Software Technology and Theoretical Computer Science, pp. 364–375 (2010)
  18. Chadha, R., Sistla, A.P., Viswanathan, M.: Verification of randomized security protocols. In: Logic in Computer Science, pp. 1–12. IEEE (2017)
  19. Chatzikokolakis, K., Palamidessi, C.: Making random choices invisible to the scheduler. In: Caires, L., Vasconcelos, V.T. (eds.) CONCUR 2007. LNCS, vol. 4703, pp. 42–58. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-74407-8_4
  20. Chaum, D., Ryan, P.Y.A., Schneider, S.: A practical voter-verifiable election scheme. In: di Vimercati, S.C., Syverson, P., Gollmann, D. (eds.) ESORICS 2005. LNCS, vol. 3679, pp. 118–139. Springer, Heidelberg (2005). https://doi.org/10.1007/11555827_8
  21. Chaum, D.L.: Untraceable electronic mail, return addresses, and digital pseudonyms. Commun. ACM 24(2), 84–90 (1981)
  22. Cheung, L.: Reconciling nondeterministic and probabilistic choices. Ph.D. thesis, Radboud University of Nijmegen (2006)
  23. Cheval, V.: APTE: an algorithm for proving trace equivalence. In: Ábrahám, E., Havelund, K. (eds.) TACAS 2014. LNCS, vol. 8413, pp. 587–592. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-642-54862-8_50
  24. Clavel, M., Durán, F., Eker, S., Lincoln, P., Martí-Oliet, N., Meseguer, J., Quesada, J.F.: Maude: specification and programming in rewriting logic. Theor. Comput. Sci. 285(2), 187–243 (2002)
  25. Cortier, V., Delaune, S.: A method for proving observational equivalence. In: Computer Security Foundations, pp. 266–276. IEEE (2009)
  26. de Alfaro, L.: The verification of probabilistic systems under memoryless partial-information policies is hard. Technical report (1999)
  27. Dehnert, C., Junges, S., Katoen, J.-P., Volk, M.: A storm is coming: a modern probabilistic model checker. In: Majumdar, R., Kunčak, V. (eds.) CAV 2017. LNCS, vol. 10427, pp. 592–600. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63390-9_31
  28. Delaune, S., Kremer, S., Ryan, M.: Verifying privacy-type properties of electronic voting protocols. J. Comput. Secur. 17(4), 435–487 (2009)
  29. Dingledine, R., Mathewson, N., Syverson, P.: Tor: the second-generation onion router. Technical report, DTIC Document (2004)
  30. Dolev, D., Yao, A.: On the security of public key protocols. IEEE Trans. Inf. Theory 29(2), 198–208 (1983)
  31. Doyen, L., Henzinger, T.A., Raskin, J.-F.: Equivalence of labeled Markov chains. Int. J. Found. Comput. Sci. 19(3), 549–563 (2008)
  32. Dreier, J., Duménil, C., Kremer, S., Sasse, R.: Beyond subterm-convergent equational theories in automated verification of stateful protocols. In: Maffei, M., Ryan, M. (eds.) POST 2017. LNCS, vol. 10204, pp. 117–140. Springer, Heidelberg (2017). https://doi.org/10.1007/978-3-662-54455-6_6
  33. Durgin, N., Lincoln, P., Mitchell, J., Scedrov, A.: Multiset rewriting and the complexity of bounded security protocols. Comput. Secur. 12(2), 247–311 (2004)
  34. Escobar, S., Meadows, C., Meseguer, J.: Maude-NPA: cryptographic protocol analysis modulo equational properties. In: Aldini, A., Barthe, G., Gorrieri, R. (eds.) FOSAD 2007-2009. LNCS, vol. 5705, pp. 1–50. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-03829-7_1
  35. Even, S., Goldreich, O., Lempel, A.: A randomized protocol for signing contracts. Commun. ACM 28(6), 637–647 (1985)
  36. Garcia, F.D., van Rossum, P., Sokolova, A.: Probabilistic anonymity and admissible schedulers. arXiv preprint arXiv:0706.1019 (2007)
  37. Goldschlag, D.M., Reed, M.G., Syverson, P.F.: Hiding routing information. In: Anderson, R. (ed.) IH 1996. LNCS, vol. 1174, pp. 137–150. Springer, Heidelberg (1996). https://doi.org/10.1007/3-540-61996-8_37
  38. Golle, P., Juels, A.: Dining cryptographers revisited. In: Cachin, C., Camenisch, J.L. (eds.) EUROCRYPT 2004. LNCS, vol. 3027, pp. 456–473. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-24676-3_27
  39. Goubault-Larrecq, J., Palamidessi, C., Troina, A.: A probabilistic applied pi-calculus. In: Shao, Z. (ed.) APLAS 2007. LNCS, vol. 4807, pp. 175–190. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-76637-7_12
  40. Gunter, C.A., Khanna, S., Tan, K., Venkatesh, S.S.: DoS protection for reliably authenticated broadcast. In: Network and Distributed System Security (2004)
  41. Hirschi, L., Baelde, D., Delaune, S.: A method for verifying privacy-type properties: the unbounded case. In: Security and Privacy, pp. 564–581 (2016)
  42. Kiefer, S., Murawski, A.S., Ouaknine, J., Wachter, B., Worrell, J.: APEX: an analyzer for open probabilistic programs. In: Madhusudan, P., Seshia, S.A. (eds.) CAV 2012. LNCS, vol. 7358, pp. 693–698. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-31424-7_51
  43. Kremer, S., Ryan, M.: Analysis of an electronic voting protocol in the applied pi calculus. In: Sagiv, M. (ed.) ESOP 2005. LNCS, vol. 3444, pp. 186–200. Springer, Heidelberg (2005). https://doi.org/10.1007/978-3-540-31987-0_14
  44. Kwiatkowska, M., Norman, G., Parker, D.: PRISM 4.0: verification of probabilistic real-time systems. In: Gopalakrishnan, G., Qadeer, S. (eds.) CAV 2011. LNCS, vol. 6806, pp. 585–591. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-22110-1_47
  45. Lenhardt, R.: Probabilistic automata with parameters. Master's thesis (2009)
  46. Meadows, C.: Formal methods for cryptographic protocol analysis: emerging issues and trends. IEEE J. Sel. Areas Commun. 21(1), 44–54 (2003)
  47. Meadows, C.: Emerging issues and trends in formal methods in cryptographic protocol analysis: twelve years later. In: Martí-Oliet, N., Ölveczky, P.C., Talcott, C. (eds.) Logic, Rewriting, and Concurrency. LNCS, vol. 9200, pp. 475–492. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-23165-5_22
  48. Meier, S., Schmidt, B., Cremers, C., Basin, D.: The TAMARIN prover for the symbolic analysis of security protocols. In: Sharygina, N., Veith, H. (eds.) CAV 2013. LNCS, vol. 8044, pp. 696–701. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-39799-8_48
  49. Reiter, M.K., Rubin, A.D.: Crowds: anonymity for web transactions. ACM Trans. Inf. Syst. Secur. 1(1), 66–92 (1998)
  50. Rivest, R.L.: The ThreeBallot voting system (2006)
  51. Rusinowitch, M., Turuani, M.: Protocol insecurity with finite number of sessions is NP-complete. Ph.D. thesis, INRIA (2001)
  52. Ryan, P.Y.A., Bismark, D., Heather, J., Schneider, S., Xia, Z.: Prêt à Voter: a voter-verifiable voting system. IEEE Trans. Inf. Forensics Secur. 4(4), 662–673 (2009)
  53. Bauer, M.S., Chadha, R., Viswanathan, M.: Modular verification of protocol equivalence in the presence of randomness. In: Foley, S.N., Gollmann, D., Snekkenes, E. (eds.) ESORICS 2017. LNCS, vol. 10492, pp. 187–205. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-66402-6_12
  54. Santin, A.O., Costa, R.G., Maziero, C.A.: A three-ballot-based secure electronic voting system. IEEE Secur. Priv. 6(3), 14–21 (2008)
  55. Schmidt, B., Meier, S., Cremers, C., Basin, D.: Automated analysis of Diffie-Hellman protocols and advanced security properties. In: Computer Security Foundations, pp. 78–94 (2012)
  56. Shmatikov, V.: Probabilistic analysis of anonymity. In: Computer Security Foundations, pp. 119–128. IEEE (2002)
  57. Tzeng, W.-G.: A polynomial-time algorithm for the equivalence of probabilistic automata. SIAM J. Comput. 21(2), 216–227 (1992)

Copyright information

© The Author(s) 2018

Open Access. This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Authors and Affiliations

  • Matthew S. Bauer (1, 4)
  • Rohit Chadha (2)
  • A. Prasad Sistla (3)
  • Mahesh Viswanathan (1)

  1. University of Illinois at Urbana-Champaign, Champaign, USA
  2. University of Missouri, Columbia, USA
  3. University of Illinois at Chicago, Chicago, USA
  4. Galois Inc., Arlington, USA
