
Secure Two-Party Computation in Applied Pi-Calculus: Models and Verification

Sergiu Bursuc
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9533)

Abstract

Secure two-party computation allows two distrusting parties to compute a function without revealing their inputs to each other. Traditionally, the security properties desired in this context, and the corresponding security proofs, are based on a notion of simulation, which can be symbolic or computational. Either way, the proofs of security are intricate, requiring one first to find a simulator, and then to prove a notion of indistinguishability. Furthermore, even for classic protocols such as Yao’s (based on garbled circuits and oblivious transfer), we do not have adequate symbolic models for cryptographic primitives and protocol roles that can form the basis for automated security proofs.

We propose new models in applied pi-calculus to address these gaps. Our contributions, formulated in the context of Yao’s protocol, include: an equational theory for specifying the primitives of garbled computation and oblivious transfer; process specifications for the roles of the two parties in Yao’s protocol; definitions of security that are clearer and more direct: result integrity, input agreement (both based on correspondence assertions) and input privacy (based on observational equivalence). We put these models together and illustrate their use with ProVerif, providing a first automated verification of security for Yao’s two-party computation protocol.

Keywords

Garbled Circuit · Oblivious Transfer · Correspondence Assertions · Input Agreement · Garbled Input

1 Introduction

In secure two-party computation, two parties with inputs a and b wish to compute a function f(a, b) such that each party can both preserve the privacy of its inputs and be sure to receive the correct result of the computation [1]. Moreover, each party would like assurance that the other party does not learn more from the protocol, such as the evaluation of the function f on other inputs, e.g. \(f(a',b')\), or the evaluation of another function on the same inputs, e.g. g(a, b). A classic approach to secure two-party computation, and still among the most efficient, is Yao’s protocol [2]. It allows two parties to exchange a garbled circuit and garbled inputs for a function, and compute the corresponding output, without leaking private inputs. In addition, zero-knowledge proofs can be incorporated into this protocol to ensure that neither party can cheat [3, 4].

Security Proofs in Computational Models. The active security of Yao’s protocol has been defined and proved in the simulation-based model [3, 5, 6], which states that, by executing a two-party computation protocol for a function f, an attacker can obtain essentially nothing more than the output of the function. First, this requires the definition of an ideal model where the desired functionality can be securely computed in a trivial manner, for instance relying on a trusted third party and private channels. Second, one has to show that the view of every attacker on the real protocol can be matched by a computationally indistinguishable view that comes from the ideal model. This requires a simulator, whose role is to decorate an ideal run with innocuous data that makes it look like a real run to any polynomially bounded adversary. This level of generality comes at a cost, however: the security proofs are complex and challenging to automate.

Security Proofs in Symbolic Models. On the other hand, significant progress has been made in the field of automated verification of security protocols in formal (or symbolic) models [7, 8]. However, even symbolic definitions of simulation-based security, e.g. [9, 10] or [11, 12] (in applied pi-calculus), remain a challenging task for such methods, which are tailored for basic properties like secrecy, authentication or privacy. Indeed, recent work aiming to automate verification for multi-party computation protocols either relies on additional manual input [12, 13] or only captures properties of correctness [14]. For Yao’s protocol in particular, we also lack symbolic models for the required cryptographic primitives of garbled computation and oblivious transfer. Overall, we do not yet have models that could be given directly to a verification tool in order to answer the basic question: is a particular two-party computation protocol secure or not? We propose such models for Yao’s protocol.

Our Approach and Contributions. The main challenge in automating simulation-based security proofs comes from the fact that a simulator first needs to be found and, for some methods (e.g. [12, 13]), processes need to be rearranged to have the same structure in order to check indistinguishability - this requires human input to remain tractable for tools. In this paper, we propose an alternative approach, formulating two-party computation security for Yao’s protocol as a conjunction of three basic properties: result integrity, input agreement and input privacy (Sect. 5). They are based on the standard symbolic notions of correspondence assertions and observational equivalence (of two processes with the same structure), do not require a simulator, and are directly amenable to automation. We also propose formal models in applied pi-calculus for the cryptographic primitives (Sect. 3) and the processes (Sect. 4) of Yao’s two-party computation protocol. We show that our models can be combined and verified with ProVerif, deriving a first automated proof of security for Yao’s protocol.

Relations Among Notions. Computational soundness results in [9, 10, 13, 14] show that it is sufficient to prove security in the symbolic model, in order to derive security guarantees in the corresponding computational model. The models in [11, 12] have not yet been shown to be computationally sound, to our knowledge. Our models are related to [11, 12, 13, 14], being formulated in the same language of applied pi-calculus. In future work, we aim to show an even stronger relation, deriving conditions under which our properties imply, or not, simulation-based security in these formal models. We discuss this open problem and related work in more detail in Sect. 6.

2 Preliminaries

2.1 Secure Two-Party Computation with Garbled Circuits

Assume two parties \(\mathcal {A}\) (with secret input x) and \(\mathcal {B}\) (with secret input y) want to compute \(f(x,y)\), for a function f. The basic tool in Yao’s two-party computation protocol [2, 6] is a garbling construction that can be applied to any circuit representing the function f. For a fresh key k, it generates a garbled circuit \({GF}(f,k)\) and garbled input wires \({GW}(x,k,a),{GW}(y,k,b)\), where a and b mark the circuit wires corresponding to the input of \(\mathcal {A}\) or \(\mathcal {B}\). Then: (i) the output of the circuit \({GF}(f,k)\) on inputs \({GW}(x,k,a),{GW}(y,k,b)\) is equal to \(f(x,y)\), as depicted in the left part of Fig. 1; and (ii) without access to the key k, \(f(x,y)\) is the only meaningful information that can be derived from \({GF}(f,k),{GW}(x,k,a),{GW}(y,k,b)\). In particular, the values x and y remain secret and, for any \(\{x',y'\}\ne \{x,y\}\), these garbled values do not allow the computation of \(f(x',y')\). Relying on garbling, one of the two parties, say \(\mathcal {A}\), can play the role of a sender and the other party, say \(\mathcal {B}\), can play the role of a receiver. The role of the sender, as depicted in the right part of Fig. 1, is to garble the circuit and the inputs of the two parties. The role of the receiver is to execute the garbled computation and send the result back to \(\mathcal {A}\). Note, however, that the party \(\mathcal {A}\) does not have access to the private input of \(\mathcal {B}\), so we need another tool to ensure that \(\mathcal {A}\) and \(\mathcal {B}\) can agree on a garbled input for \(\mathcal {B}\).
Fig. 1. Garbled computation and Yao’s protocol for two parties

This is where \(\mathcal {A}\) and \(\mathcal {B}\) rely on oblivious transfer [15, 16]. An oblivious transfer protocol allows a receiver to obtain a message from a set computed by the sender such that: (i) only one message can be received and (ii) the sender does not know which message has been chosen by the receiver. In Yao’s protocol, the receiver \(\mathcal {B}\) can then get one message, which is the garbling of his desired input for the function, and nothing else, whereas the sender \(\mathcal {A}\) does not learn what value \(\mathcal {B}\) has chosen as input. Having obtained \({GF}(f,k)\), \({GW}(x,k,a)\) and \({GW}(y,k,b)\), \(\mathcal {B}\) can evaluate the garbled circuit and obtain \(f(x,y)\), which can be sent back to \(\mathcal {A}\) as the result of the computation.

Active Security. In the case when \(\mathcal {B}\) might be malicious, we have to ensure that \(\mathcal {A}\) obtains the correct result from \(\mathcal {B}\). For this, the functionality of the garbled circuit is modified such that its output is a pair of values \(f(x,y)\) and \({enc}(f(x,y),k)\), where k is a fresh secret key chosen by \(\mathcal {A}\) for each protocol session. Then, instead of \(f(x,y)\), \(\mathcal {B}\) returns \({enc}(f(x,y),k)\) to \(\mathcal {A}\): the result \(f(x,y)\) is authenticated by the key k. To counter the case of a malicious \(\mathcal {A}\), the sender \(\mathcal {A}\) can prove that the garbling is correct, relying on cut-and-choose techniques [3, 17] or zero-knowledge proofs [4, 18].

2.2 Applied Pi-Calculus and ProVerif [19, 20, 21, 22, 23]

Term Algebra. We are given a set of names, a set of variables and a signature \(\mathcal {F}\) formed of a set of constants and function symbols. Names, constants and variables are basic terms and new terms are built by applying function symbols to already defined terms. The signature \(\mathcal {F}\) can be partitioned into public and private symbols. A substitution \(\sigma \) is a function from variables to terms, whose application to a term T is the term \(T\sigma \), called an instance of T, obtained by replacing every variable x with the term \(x\sigma \). A term context is a term \(\mathcal {C}[\__1,\ldots ,\__n]\) containing special constants \(\__1,\ldots ,\__n\) (also called holes). For a context \(\mathcal {C}[\__1,\ldots ,\__n]\) and a sequence of terms \(T_1,\ldots ,T_n\), we denote by \(\mathcal {C}[T_1,\ldots ,T_n]\) the term obtained by replacing each \(\__i\) with the corresponding \(T_i\) in \(\mathcal {C}\).

An equational theory is a pair \(\mathcal {E}=(\mathcal {F},\mathcal {R})\), for a signature \(\mathcal {F}\) and a set \(\mathcal {R}\) of rewrite rules of the form \(U\rightarrow V\), where U, V are terms. A term \(T_1\) rewrites to \(T_2\) in one step, denoted by \(T_1\rightarrow T_2\), if there is a context \(\mathcal {C}[\_]\), a substitution \(\sigma \) and a rule \(U\rightarrow V\) such that \(T_1=\mathcal {C}[U\sigma ]\) and \(T_2=\mathcal {C}[V\sigma ]\). More generally, \(T_1\rightarrow ^*T_2\), if \(T_1\) rewrites to \(T_2\) in several steps [24]. We assume convergent theories: for any term T, there is a unique irreducible term \(T{\downarrow }\) such that \(T\rightarrow ^* T{\downarrow }\). We write \(U=_\mathcal {E}V\) if \(U{\downarrow }=V{\downarrow }\). A term T can be deduced from a sequence of terms S, denoted by \(S\vdash _\mathcal {E}T\) (or simply \(S\vdash T\)), if there is a context \(\mathcal {C}[\__1,\ldots ,\__n]\) and terms \(T_1,\ldots ,T_n\) in S such that \(\mathcal {C}[T_1,\ldots ,T_n]{\downarrow }=T\) and \(\mathcal {C}\) does not contain function symbols in \(\mathcal {F}^{priv}\). Such a context, together with the positions of terms \(T_1,\ldots ,T_n\) in S, is called a proof of \(S\vdash _\mathcal {E}T\).
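For instance, in ProVerif's input language a rewrite rule \(U\rightarrow V\) can be declared as a destructor. The following minimal sketch is only an illustration of the notation (the symbols pair and fst are not part of the theory defined later):

    fun pair(bitstring, bitstring): bitstring.   (* constructor *)
    reduc forall x: bitstring, y: bitstring;     (* rewrite rule fst(pair(x,y)) -> x *)
      fst(pair(x, y)) = x.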
Fig. 2. Process algebra

Processes of the calculus, denoted by \(P,Q,\ldots \), are built according to the grammar in Fig. 2, where c, n are names, x is a variable, and T, U, V are terms. Replication allows the creation of any number of instances of a process. Names introduced by \({new}\,\) are called private, or fresh; otherwise they are public, or free. The term T in \({event}\;T\) is usually of the form \(\mathcal {A}(T_1,\ldots ,T_n)\), where \(\mathcal {A}\) is a special symbol representing the name of an occurring event (e.g. the start of a protocol session), while the terms \(T_1,\ldots ,T_n\) represent the parameters of the event (e.g. the names or inputs of parties). A variable x is free in a process P if x is not bound in P by an input action or by a term evaluation of the form \(x=T\). A process P with free variables \(x_1,\ldots ,x_n\) is denoted by \(P(x_1,\ldots ,x_n)\), i.e. \(x_1,\ldots ,x_n\) are parameters of P that will be instantiated in the context where P is used. We denote by \({sig}(P)\) the set of function symbols that appear in P. A process context \(\mathcal {C}[\_]\) is defined similarly to a term context.

Formally, the operational semantics of processes is defined as a relation on tuples of the form \((\mathcal{N},\mathcal {M},\mathcal {L},\mathcal {P})\), called configurations, whose elements represent the following information in the execution of a process: \(\mathcal{N}\) is the set of freshly generated names; \(\mathcal {M}\) is the sequence of terms output on public channels (i.e. to the attacker); \(\mathcal {L}\) is the set of occurred events; \(\mathcal {P}\) is the multiset of processes being executed in parallel. The rules that define the operational semantics, presented in the associated research report [25] and adapted from [21, 22], are quite standard and correspond to the informal meaning previously discussed. We write \(P\rightarrow ^*(\mathcal{N},\mathcal {M},\mathcal {L},\mathcal {P})\) if the configuration \((\mathcal{N},\mathcal {M},\mathcal {L},\mathcal {P})\) can be reached from the initial configuration of P, which is \((\emptyset ,\emptyset ,\emptyset ,\{P\})\).

Security Properties. We rely on correspondence assertions [21] and observational equivalence [22]. Correspondence assertions allow one to specify constraints on events occurring in the execution of the protocol. They are based on formulas \(\varPhi ,\varPsi \) whose syntax is defined as follows:
$$\begin{aligned} \varPhi ,\varPsi \;{:}{:}{=}\;\; ev:T \;\mid \; att:T \;\mid \; U=V \;\mid \; \varPhi \wedge \varPsi \;\mid \; \varPhi \vee \varPsi \;\mid \; \lnot \varPhi \end{aligned}$$
Their semantics, for a configuration \(\mathcal {C}=(\mathcal{N},\mathcal {M},\mathcal {L},\mathcal {P})\) and equational theory \(\mathcal {E}\), is defined by \(\mathcal {C}\models _\mathcal {E}ev:T \; \Leftrightarrow \; \exists T'\in \mathcal {L}.\;T'=_\mathcal {E}T\), \(\mathcal {C}\models _\mathcal {E}U=V \Leftrightarrow U=_\mathcal {E}V\) and \(\mathcal {C}\models _\mathcal {E}att:T\; \Leftrightarrow \;\mathcal {M}\vdash _\mathcal {E}T\), plus the usual semantics of boolean operators. Note that a predicate \(ev:T\) is true for a configuration if the event T occurred in the execution trace leading to it, and \(att:T\) is true if the attacker can deduce T from the public messages of the configuration. A correspondence assertion is a formula of the form \(\varPhi \leadsto \varPsi \). Such a formula is satisfied for a process P if and only if, for every process Q, with \({sig}(Q)\cap \mathcal {F}^{priv}=\emptyset \), every configuration \(\mathcal {C}\) reachable from \(P\;|\;Q\), i.e. \(P\;|\;Q\rightarrow ^* \mathcal {C}\), and any substitution \(\sigma \), we have that \(\mathcal {C}\models \varPhi \sigma \) implies \(\mathcal {C}\models \varPsi \sigma '\), for some substitution \(\sigma '\) that extends \(\sigma \), i.e. if \(x\sigma \) is defined, then \(x\sigma '=x\sigma \). Intuitively, a correspondence assertion requires that every time the formula \(\varPhi \) is true during the execution of a process, the constraints specified in \(\varPsi \) must also be true for the same parameters. The process Q stands for any computation that may be performed by the attacker.
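In ProVerif, correspondence assertions of this kind are written as queries over events. As a small illustration (the events beginSession and endSession are placeholders, not events of the protocol studied in this paper), the following asks that every completed session was started with the same parameter:

    event beginSession(bitstring).
    event endSession(bitstring).
    query x: bitstring; event(endSession(x)) ==> event(beginSession(x)).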

Observational equivalence, denoted by \(P_1\sim P_2\), specifies the inability of the attacker to distinguish between two processes \(P_1\) and \(P_2\). Formally, \(P_1\sim P_2\) is true if and only if, for every process Q, with \({sig}(Q)\cap \mathcal {F}^{priv}=\emptyset \), and every configuration \((\mathcal{N}_1,\mathcal {M}_1,\mathcal {L}_1,\mathcal {P}_1)\) reachable from \(P_1\;|\;Q\), there is a configuration \((\mathcal{N}_2,\mathcal {M}_2,\mathcal {L}_2,\mathcal {P}_2)\) reachable from \(P_2\;|\;Q\), such that for any term \(T_1\) and any two different proofs \(\pi _1,\pi _2\) of \(\mathcal {M}_1\vdash _\mathcal {E}T_1\), there is a term \(T_2\) such that \(\pi _1,\pi _2\) are also proofs of \(\mathcal {M}_2\vdash _\mathcal {E}T_2\) [19, 22, 23, 26].

3 Equational Theory for Garbled Computation

In this section we present an equational theory to model the cryptographic primitives used in garbled computation protocols like [2, 3, 6]. We will refer to a party \(\mathcal {A}\) as the sender (who garbles and transmits data), and to a party \(\mathcal {B}\) as the receiver (who receives and ungarbles data). The equational theory, presented in Fig. 3 and discussed below, allows \(\mathcal {B}\) to evaluate a garbled circuit on garbled inputs; \(\mathcal {A}\) to prove that the circuit and its inputs are correctly garbled; and \(\mathcal {B}\) to obtain its garbled input by oblivious transfer.

Garbled Circuit Evaluation. The term \(eval(T_\mathcal {F},T_\mathcal {A},T_\mathcal {B})\) represents the result of evaluating a circuit, represented by the term \(T_\mathcal {F}\), on inputs of \(\mathcal {A}\) and \(\mathcal {B}\), represented by terms \(T_\mathcal {A}\) and \(T_\mathcal {B}\) respectively.

The term \(gf(T_\mathcal {F},T_\mathcal {K})\) represents the garbling of a circuit \(T_\mathcal {F}\), given a garbling key \(T_\mathcal {K}\). The term \(gw(T,T_\mathcal {K}, i)\), with \(i\in \{a,b\}\), represents a garbling of the input T with a key \(T_\mathcal {K}\), where T corresponds to the input wires of party \(\mathcal {A}\), when i is \(a\), or of party \(\mathcal {B}\), when i is \(b\).

The term \({geval}(gf(T_\mathcal {F},T_\mathcal {K}),gw(T_\mathcal {A},T_\mathcal {K},a),gw(T_\mathcal {B},T_\mathcal {K},b))\) represents the computation performed on the garbled function and garbled inputs given as arguments to \({geval}\), the result of which is \(eval(T_\mathcal {F},T_\mathcal {A},T_\mathcal {B})\), as specified by the rewrite rule \(\mathcal {R}_1\).
Fig. 3. Equational theory \(\mathcal {E}_{GC}\) for garbled computation

In addition, the function \({geval}'\) specified by \(\mathcal {R}_2\) provides an encryption of the function output. As explained in Sect. 2.1, this ciphertext can be sent as response to \(\mathcal {A}\), providing confidence that the final result correctly reflects \(\mathcal {A}\)’s inputs in the protocol, even while interacting with a malicious \(\mathcal {B}\). For brevity, the key in the encryption returned by \(\mathcal {R}_2\) is the same as the one used for garbling, but the model can be easily adapted for more complex scenarios.

Overall, \(\mathcal {R}_1\) and \(\mathcal {R}_2\) are the only operations that can be performed on garbled values without the key, thereby enforcing several security properties. First, the function and the inputs of the garbled circuit cannot be modified. Second, the computation in rules \(\mathcal {R}_1,\mathcal {R}_2\) succeeds only for circuits and inputs that are garbled with the same key (otherwise, a malicious party could combine garbled values from different sessions of the protocol in order to derive more information than it should). Third, the inputs must be used consistently, e.g. the garbled input of \(\mathcal {A}\) cannot be substituted with a garbled input for \(\mathcal {B}\) (ensured by the constants \(a\) and \(b\)). Garbled data can only be ungarbled by the key holder, as specified by the rule \(\mathcal {R}_3\) for garbled functions and the rule \(\mathcal {R}_4\) for garbled inputs.

These features ensure that a malicious receiver cannot cheat. In addition, we need to ensure that a malicious sender cannot cheat. This is the role of \(\mathcal {R}_5\), which allows a party to check that a function is correctly garbled, without access to the garbling key. Cryptographically, there are various ways in which this abstraction can be instantiated, e.g. by zero-knowledge proofs [4] or cut-and-choose techniques [3, 27]. The model of oblivious transfer that we explain next will also allow the receiver to be convinced that his input is correctly garbled.
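Since Fig. 3 is not reproduced here, the following ProVerif sketch gives one plausible rendering of the rules discussed so far. The function names follow the text, but the exact signatures are our assumptions rather than the published model; in particular, \(\mathcal {R}_5\) is rendered as a destructor revealing which function is garbled (one possible reading of the check), and a decryption destructor for enc is assumed so that the authenticated result can be recovered:

    type key.
    const a, b: bitstring.
    fun eval(bitstring, bitstring, bitstring): bitstring.  (* abstract function evaluation *)
    fun enc(bitstring, key): bitstring.
    fun gf(bitstring, key): bitstring.                     (* garbled circuit *)
    fun gw(bitstring, key, bitstring): bitstring.          (* garbled input wire *)

    (* R1: evaluate a garbled circuit on consistently garbled inputs *)
    reduc forall f: bitstring, x: bitstring, y: bitstring, k: key;
      geval(gf(f, k), gw(x, k, a), gw(y, k, b)) = eval(f, x, y).

    (* R2: same evaluation, returning the result encrypted under the garbling key *)
    reduc forall f: bitstring, x: bitstring, y: bitstring, k: key;
      geval'(gf(f, k), gw(x, k, a), gw(y, k, b)) = enc(eval(f, x, y), k).

    (* decryption of the authenticated result (assumed) *)
    reduc forall m: bitstring, k: key; dec(enc(m, k), k) = m.

    (* R3, R4: only the key holder can ungarble *)
    reduc forall f: bitstring, k: key; ungf(gf(f, k), k) = f.
    reduc forall x: bitstring, k: key, t: bitstring; ungw(gw(x, k, t), k) = x.

    (* R5: anyone can check which function a garbled circuit computes *)
    reduc forall f: bitstring, k: key; checkgf(gf(f, k)) = f.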

Garbled oblivious transfer is modeled using the functions \({gwot},{get},com\) and the rewrite rule \(\mathcal {R}_6\), as follows: the term \(com(T_\mathcal {B},V)\) represents a commitment to a term \(T_\mathcal {B}\), which cannot be modified, and is hidden by a nonce V; such a term will be used by \(\mathcal {B}\) to request a garbled version of \(T_\mathcal {B}\) without disclosing it.

The term \({gwot}(com(T_\mathcal {B},V), T_\mathcal {K}, T)\) is an oblivious transfer term, obtained from a committed input \(com(T_\mathcal {B},V)\) and a garbling key \(T_\mathcal {K}\); such a term will be constructed by \(\mathcal {A}\) and sent in response to \(\mathcal {B}\)’s commitment.

The term \({get}({gwot}(com(T_\mathcal {B},V), T_\mathcal {K}, T), T_\mathcal {B}, V)\) allows a party to obtain \(gw(T_\mathcal {B},T_\mathcal {K},T)\) from an oblivious transfer term, if it holds the secret input \(T_\mathcal {B}\) and the nonce V that have been used to construct the corresponding commitment. The term T would be equal to the constant \(b\) in a normal execution of the protocol.

This way, we capture formally the security properties of oblivious transfer protocols like [15, 16, 27, 28], for a sender \(\mathcal {A}\) and a receiver \(\mathcal {B}\): \(\mathcal {B}\) should only learn one garbled value among many possible ones, and \(\mathcal {A}\) should not learn which value \(\mathcal {B}\) has chosen. The first property is ensured in our model by the fact that a dishonest \(\mathcal {B}\) cannot change the commitment \(com(T_\mathcal {B},V)\) in an oblivious transfer term \({gwot}(com(T_\mathcal {B}, V), T_\mathcal {K}, T)\). The only way to obtain a garbling of a second message would be to run a second instance of the protocol with \(\mathcal {A}\), involving another commitment and corresponding oblivious transfer term - this is a legitimate behaviour that is also allowed by our model. The second property is ensured by the fact that a commitment \(com(T_\mathcal {B},V)\) does not reveal \(T_\mathcal {B}\) or V. Furthermore, only the holder of \(T_\mathcal {B}\) and V can extract the respective garbled value from an oblivious transfer term, ensuring that \(\mathcal {B}\) is in fact the only party that can obtain \(gw(T_\mathcal {B},T_\mathcal {K},T)\).
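Continuing the sketch above, the oblivious transfer primitives and the rule \(\mathcal {R}_6\) could be declared as follows (again an assumed rendering, not the published model):

    fun com(bitstring, bitstring): bitstring.        (* commitment hidden by a nonce *)
    fun gwot(bitstring, key, bitstring): bitstring.  (* oblivious transfer term *)

    (* R6: only the holder of the committed value and nonce extracts the garbling *)
    reduc forall xB: bitstring, v: bitstring, k: key, t: bitstring;
      get(gwot(com(xB, v), k, t), xB, v) = gw(xB, k, t).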

4 Formal Protocol Specification

In this section, we show how the equational theory from Sect. 3 is integrated into higher level protocols modeled by processes communicating over a public network. Figure 4 contains the process specifications of the two roles in Yao’s protocol for secure two-party computation: the sender process \(\mathcal {A}\) and the receiver process \(\mathcal {B}\). Text within (* and *) represents comments. The public parameter of \(\mathcal {A}\) and \(\mathcal {B}\) is the function to be evaluated, represented by the free variable \(x_\mathcal {F}\). The private parameters of \(\mathcal {A}\) and \(\mathcal {B}\) are their respective inputs, represented by the free variables \(x_\mathcal {A}\) and \(x_\mathcal {B}\), respectively. The goal of \(\mathcal {A}\) and \(\mathcal {B}\) is therefore to obtain \(eval(x_\mathcal {F},x_\mathcal {A},x_\mathcal {B})\), without disclosing \(x_\mathcal {A}\) to \(\mathcal {B}\) or \(x_\mathcal {B}\) to \(\mathcal {A}\). A public name c represents the communication channel between the two parties, possibly controlled by an attacker.
Fig. 4. Processes for two-party computation

Sender. The sender \(\mathcal {A}\) creates a new key \(k_\mathcal {A}\), which it uses to garble the circuit \(x_\mathcal {F}\), its input \(x_\mathcal {A}\) and, obliviously, the input of \(\mathcal {B}\). As part of the oblivious transfer, \(\mathcal {A}\) first receives the committed input of \(\mathcal {B}\). The garbled values, as well as the corresponding oblivious transfer term, are sent to \(\mathcal {B}\) over the public channel c. In response, \(\mathcal {A}\) receives from \(\mathcal {B}\) the result of the computation encrypted with \(k_\mathcal {A}\).

Receiver. The receiver \(\mathcal {B}\) obtains garbled data from \(\mathcal {A}\) and, to get a garbled version \(x_{gb}\) of its own input \(x_\mathcal {B}\), engages in the oblivious transfer protocol: it makes a commitment to \(x_\mathcal {B}\), sends the commitment to \(\mathcal {A}\) and receives in response the corresponding oblivious transfer term containing the garbled input. Next, \(\mathcal {B}\) verifies that the function is correctly garbled and performs the garbled computation. The value \(x_{res}\) is the result obtained by \(\mathcal {B}\), while \(y_\mathcal {A}\) is the encrypted result that is sent back to \(\mathcal {A}\).

Events. The events \(\mathcal {A}_{in}\), \(\mathcal {A}_{res}\), \(\mathcal {B}_{in}\) and \(\mathcal {B}_{res}\) are used as part of the formal specification of security properties that we present in Sect. 5. The event \(\mathcal {A}_{in}(x_\mathcal {F}, x_\mathcal {A}, x_c)\) records that \(\mathcal {A}\) has engaged in a protocol session for the computation of \(x_\mathcal {F}\), having \(\mathcal {A}\)’s input equal to \(x_\mathcal {A}\), and \(\mathcal {B}\)’s input being committed to \(x_c\). The event \(\mathcal {A}_{res}(x_\mathcal {F}, x_\mathcal {A}, x_c,x_{res})\) records in addition that \(\mathcal {A}\) has obtained the result \(x_{res}\) as outcome of the protocol session.

The event \(\mathcal {B}_{in}(x_\mathcal {F}, x_{ga}, x_\mathcal {B})\) records that \(\mathcal {B}\) has engaged in a protocol session for the computation of \(x_\mathcal {F}\), having \(\mathcal {B}\)’s input equal to \(x_\mathcal {B}\), and \(\mathcal {A}\)’s input being garbled as \(x_{ga}\). The event \(\mathcal {B}_{res}(x_\mathcal {F}, x_{ga}, x_\mathcal {B},x_{res})\) records in addition that \(\mathcal {B}\) has obtained the result \(x_{res}\) as outcome of the protocol session.
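Since Fig. 4 is not reproduced here, the following is a plausible ProVerif rendering of the two roles, building on the theory sketched in Sect. 3; the message formats and the use of the assumed dec destructor to recover the authenticated result are our own modeling choices:

    free c: channel.
    event Ain(bitstring, bitstring, bitstring).
    event Ares(bitstring, bitstring, bitstring, bitstring).
    event Bin(bitstring, bitstring, bitstring).
    event Bres(bitstring, bitstring, bitstring, bitstring).

    let A(xF: bitstring, xA: bitstring) =
      new kA: key;                    (* fresh garbling key *)
      in(c, xc: bitstring);           (* B's committed input *)
      event Ain(xF, xA, xc);
      out(c, (gf(xF, kA), gw(xA, kA, a), gwot(xc, kA, b)));
      in(c, yA: bitstring);           (* encrypted result from B *)
      let xres = dec(yA, kA) in
      event Ares(xF, xA, xc, xres).

    let B(xF: bitstring, xB: bitstring) =
      new v: bitstring;               (* nonce hiding the commitment *)
      out(c, com(xB, v));
      in(c, (ygf: bitstring, xga: bitstring, yot: bitstring));
      event Bin(xF, xga, xB);
      let xgb = get(yot, xB, v) in    (* garbled input via oblivious transfer *)
      if checkgf(ygf) = xF then       (* verify the garbling (R5) *)
      let xres = geval(ygf, xga, xgb) in
      let yA = geval'(ygf, xga, xgb) in
      event Bres(xF, xga, xB, xres);
      out(c, yA).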

Attacker. As usual, the attacker can execute any of the operations that we have described, as well as any other operations allowed by the equational theory, and (pretend to) play the role of any party, while interacting with an honest party \(\mathcal {A}\) or \(\mathcal {B}\) on the public channel c. This is captured formally by the semantics of the applied pi-calculus and the definition of the security properties that we present in the next section.

5 Formal Models of Security for Two-Party Computation

Informally, we require the following security properties for a two-party computation protocol:
  1. The dishonest parties should not learn too much:

     (a) The only leakage about the input of an honest party should come from the result of the evaluated function (Input privacy).

     (b) A dishonest party should be able to evaluate a function on honest inputs only as agreed by the corresponding honest party (Input agreement).

  2. The honest parties learn the correct result (Result integrity).

The distinction between input privacy and input agreement separates the task of input protection for honest parties into (a) protecting the honest input during the protocol flow (without bothering about the output of the function); and (b) ensuring that function outputs are released only as agreed to by the owners of private inputs. This distinction helps to address automated verification problems when the public output of the protocol depends on the private input of parties. For example, automating privacy proofs for electronic voting protocols is known to be problematic, because care must be taken to separate legitimate information flow (e.g. the result of the election) from illegitimate information flow [29, 30]. This is also a problem for automating simulation-based proofs, where an ideal functionality models exactly what can be leaked by the protocol, and a simulator needs to be found that shows the protocol does not leak more [11, 12, 13]. Our separation of this property into (a) and (b) is a new way of addressing this problem, and makes more explicit the properties that are achieved, without requiring a simulator as in [11, 12, 13] or additional honest parties as in [29, 30].

These security properties can be formalized in a general setting, but for brevity we present them in relation to the models of Sects. 3 and 4, and leave their generalization as future work. In this setting, a specification of a two-party computation protocol is given by a triple \((\mathcal {A},\mathcal {B},\mathcal {E})\), where \(\mathcal {E}\) is an equational theory containing \(\mathcal {E}_{GC}\) from Sect. 3, \(\mathcal {A}\) is a sender process with free variables \(x_\mathcal {F},x_\mathcal {A}\), \(\mathcal {B}\) is a receiver process with free variables \(x_\mathcal {F},x_\mathcal {B}\), and these processes are enriched with events \(\mathcal {A}_{in},\mathcal {B}_{in},\mathcal {A}_{res},\mathcal {B}_{res}\) presented in Sect. 4.

5.1 Result Integrity

Result integrity should ensure that the final result obtained by an honest party \(\mathcal {P}\in \{\mathcal {A},\mathcal {B}\}\) after a session of the protocol is consistent with the function that \(\mathcal {P}\) expects to be evaluated, with the input of \(\mathcal {P}\) in this session, and with the input of the other party, which has responded to this session or initiated it. Formally, the events \(\mathcal {A}_{res}(x_\mathcal {F}, x_\mathcal {A}, x_c,x_{res})\) and \(\mathcal {B}_{res}(x_\mathcal {F}, x_{ga}, x_\mathcal {B}, x_{res})\) capture the views of \(\mathcal {A}\) and \(\mathcal {B}\) after a session of the protocol has ended, recording all the relevant data, in particular the result obtained by the respective party, and the committed (resp. garbled) input of the other party. Therefore, we can specify the requirement of result integrity by the correspondence assertions \(\varPhi _{int}^\mathcal {A}\) and \(\varPhi _{int}^\mathcal {B}\) presented in Definition 1.

Definition 1

(Result Integrity). Let \((\mathcal {A},\mathcal {B},\mathcal {E})\) be a specification of a two-party computation protocol. We define the correspondence assertions \(\varPhi _{int}^\mathcal {A}\) and \(\varPhi _{int}^\mathcal {B}\) as follows:
$$ \begin{array}{rclcl} \varPhi _{int}^\mathcal {A} & \doteq & ev:\mathcal {A}_{res}(x, y, z, w) & \leadsto & z=com(z_1,z_2)\; \wedge \; w = {eval}(x,y,z_1)\\ \varPhi _{int}^\mathcal {B} & \doteq & ev:\mathcal {B}_{res}(x, y, z, w) & \leadsto & y=gw(y_1,y_2,a)\; \wedge \; w = {eval}(x,y_1,z) \end{array} $$
We say that \((\mathcal {A},\mathcal {B},\mathcal {E})\) satisfies result integrity if
$$ \begin{array}{lclr} !\;(\; {in}(c,x_\mathcal {F});{in}(c,x_\mathcal {A});\mathcal {A}(x_\mathcal {F},x_\mathcal {A})\;) & \models _\mathcal {E} & \varPhi _{int}^\mathcal {A} & \;\;\,and \\ !\;(\;{in}(c,x_\mathcal {F});{in}(c,x_\mathcal {B});\mathcal {B}(x_\mathcal {F},x_\mathcal {B})\;) & \models _\mathcal {E} & \varPhi _{int}^\mathcal {B} & \end{array} $$

The specification lets the attacker execute any number of sessions of an honest party \(\mathcal {A}\) or \(\mathcal {B}\), with any function \(x_\mathcal {F}\) and any values \(x_\mathcal {A},x_\mathcal {B}\) as inputs, and requires the correspondence assertions \(\varPhi _{int}^\mathcal {A}\) and \(\varPhi _{int}^\mathcal {B}\) to be satisfied by this process. In turn, \(\varPhi _{int}^\mathcal {A}\) and \(\varPhi _{int}^\mathcal {B}\) require that for any occurrence of the event \(\mathcal {A}_{res}\) or \(\mathcal {B}_{res}\), the result obtained by the respective honest party, recorded in the variable w, correctly reflects the function and relevant messages of the corresponding session, recorded in the variables x, y, z. Note that the variables \(z_1,z_2,y_1,y_2\) in \(\varPhi _{int}^\mathcal {A},\varPhi _{int}^\mathcal {B}\) are existentially quantified implicitly. This allows the specified property to hold for any message choices in the protocol, as long as the desired constraints are satisfied.
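As an illustration, \(\varPhi _{int}^\mathcal {A}\) could be phrased for ProVerif as follows. This is a sketch over our assumed encoding: it relies on recent ProVerif versions accepting equalities in query conclusions, with the variables z1, z2 playing the role of the implicitly existential ones:

    query x: bitstring, y: bitstring, z: bitstring, w: bitstring,
          z1: bitstring, z2: bitstring;
      event(Ares(x, y, z, w)) ==> z = com(z1, z2) && w = eval(x, y, z1).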

5.2 Input Agreement

Input agreement should ensure that the function outputs obtained by a dishonest party after executing a session of the protocol are consistent with the expectation of an honest party when it releases its private inputs. Specifically, consider the case where an honest party \(\mathcal {A}\) supplied an input \(T_\mathcal {A}\) in order to compute a function \(T_\mathcal {F}\). Then, the other party should only be able to obtain \({eval}(T_\mathcal {F},T_\mathcal {A}, T_\mathcal {B})\), where \(T_\mathcal {B}\) is its own input when playing the role of \(\mathcal {B}\) in the corresponding protocol session. In particular, the other party should not be able to obtain \({eval}(T_\mathcal {F}, T_\mathcal {A}, T_\mathcal {B}')\), for a different input \(T_\mathcal {B}'\), or \({eval}(T_\mathcal {F}',T_\mathcal {A}, T_\mathcal {B})\), for a different function \(T_\mathcal {F}'\). Similar guarantees should hold for an honest party \(\mathcal {B}\).

We formally define these requirements as correspondence assertions. The fact that the attacker knows a particular function output can be expressed by the formula \(att:eval(x,y,z)\). To express the constraints associated with this formula, we rely on the events \(\mathcal {A}_{in}(x_\mathcal {F}, x_\mathcal {A}, x_c)\) and \(\mathcal {B}_{in}(x_\mathcal {F}, x_{ga}, x_\mathcal {B})\), which record the parameters of each honest party in a started protocol session. In particular, the event \(\mathcal {A}_{in}\) records the committed input of \(\mathcal {B}\), received by \(\mathcal {A}\), and \(\mathcal {B}_{in}\) records the garbled input of \(\mathcal {A}\), received by \(\mathcal {B}\). Therefore, these events fully determine the result that each party (and in particular a dishonest party) should obtain from the respective protocol session. Then, in Definition 2 we require that to any function output \({eval}(x,y,z)\) obtained by the attacker, there corresponds an initial event recording the agreement of the respective honest party \(\mathcal {A}\) or \(\mathcal {B}\).

Definition 2

(Input Agreement). Let \((\mathcal {A},\mathcal {B},\mathcal {E})\) be a specification of a two-party computation protocol. We define the correspondence assertions \(\varPhi _{agr}^\mathcal {A}\) and \(\varPhi _{agr}^\mathcal {B}\) as follows:
$$ \begin{array}{rclcl} \varPhi _{agr}^\mathcal {A} & \doteq & att:{eval}(x, y, z) & \leadsto & (\;ev:\mathcal {A}_{in}(x, y, z_1)\;\wedge \; z_1=com(z,z_2)\;)\; \vee \; att:y \\ \varPhi _{agr}^\mathcal {B} & \doteq & att:{eval}(x, y, z) & \leadsto & (\;ev:\mathcal {B}_{in}(x, y_1, z)\;\wedge \;y_1={gw}(y,y_2,a)\;)\; \vee \; att:z \end{array} $$
We say that a specification \((\mathcal {A},\mathcal {B},\mathcal {E})\) of a two-party computation protocol satisfies input agreement if:
$$ \begin{array}{lclr} !\;(\;{in}(c,x_\mathcal {F});{new}\,i_\mathcal {A};\mathcal {A}(x_\mathcal {F},i_\mathcal {A})\;) & \models _\mathcal {E} & \varPhi _{agr}^\mathcal {A} & \;\;\,and \\ !\;(\;{in}(c,x_\mathcal {F});{new}\,i_\mathcal {B};\mathcal {B}(x_\mathcal {F},i_\mathcal {B})\;) & \models _\mathcal {E} & \varPhi _{agr}^\mathcal {B} & \end{array} $$

Note, however, that this property cannot be achieved if the input of the honest party is known to the attacker, who can obtain \({eval}(x,y,z)\) from x, y, z by simply evaluating the function. Therefore, input agreement as defined here makes sense only for honest input values that are not available to the attacker. This is captured by the disjunction in the correspondence assertions \(\varPhi _{agr}^\mathcal {A}\) and \(\varPhi _{agr}^\mathcal {B}\) of Definition 2, and by the fact that the inputs \(i_\mathcal {A},i_\mathcal {B}\) of honest parties in the test processes \(\mathcal {A}(x_\mathcal {F},i_\mathcal {A}),\mathcal {B}(x_\mathcal {F},i_\mathcal {B})\) are locally generated for each session.
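A corresponding ProVerif phrasing of \(\varPhi _{agr}^\mathcal {A}\) might look as follows (again a sketch over our assumed encoding; the predicate \(att:M\) is rendered by ProVerif's attacker fact, and the conjunct \(z_1=com(z,z_2)\) is folded into the event pattern):

    query x: bitstring, y: bitstring, z: bitstring, z2: bitstring;
      attacker(eval(x, y, z)) ==> event(Ain(x, y, com(z, z2))) || attacker(y).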

5.3 Input Privacy

Traditionally, e.g. for verifying strong secrecy [31] or vote privacy [29, 30], the privacy of an input x in a process \(\mathcal {P}(x)\) is defined as a property of indistinguishability between two of its instances, say \(\mathcal {P}(a_1)\) and \(\mathcal {P}(a_2)\). In our case, we have to make the indistinguishability notion robust in order to take into account information flow that is inherent from the functionality of the protocol. In fact, we will require that the only leakage about the input of an honest party comes from the evaluated function. In other words, if the output of the function is withheld from the attacker, no leakage should occur about the honest inputs. This amounts to a standard requirement of strong secrecy, which can be formalized as an observational equivalence.

It remains to formalize what it means for the output of the function to be withheld from the attacker. The attacker might be able to compute the output by combining data gathered throughout the protocol (for example, an attacker playing the role of \(\mathcal {B}\) in Yao’s protocol can evaluate the function output from the received garbled data). In such cases, it is not clear what data can be legitimately withheld from the attacker when defining input privacy. Instead, we will enrich the equational theory such that, for honest inputs, all corresponding function outputs are equivalent, i.e. the attacker cannot observe the difference between them. Therefore, rather than suppressing the function output in the protocol specification, we suppress the attacker’s ability to gain information from this output. The enriched equational theory relies on special function symbols \(\alpha \) and \(\beta \) that will decorate the private inputs of an honest party \(\mathcal {A}\), respectively \(\mathcal {B}\). The additional rewrite rules for \({eval}\) declare function evaluations of these inputs to be equivalent, relying on the constants \(\alpha _0,\beta _0\).

Definition 3

Let \(\mathcal {E}\) be an equational theory. Consider the function symbols \(\alpha ,\beta \) and the constants \(\alpha _0,\beta _0\). We define the equational theories \(\mathcal {E}_\alpha =\mathcal {E}\cup \{{eval}(x,\alpha (y),z) \rightarrow {eval}(x,\alpha _0,z)\}\) and \(\mathcal {E}_\beta =\mathcal {E}\cup \{{eval}(x,y,\beta (z)) \rightarrow {eval}(x,y,\beta _0)\}\).

The specification in Definition 4 considers two versions of a process: for any number of sessions and any choice of terms \(x^0, x^1\) per session, in the first version an honest party \(\mathcal {A}\) (respectively \(\mathcal {B}\)) inputs \(\alpha (x^0)\) (respectively \(\beta (x^0)\)); in the second version it inputs \(\alpha (x^1)\) (respectively \(\beta (x^1)\)). We say that the protocol satisfies input privacy if these two versions are in observational equivalence, i.e. indistinguishable for the attacker.

Definition 4

(Input Privacy). Let \((\mathcal {A},\mathcal {B},\mathcal {E})\) be a specification of a two-party computation protocol and \(\mathcal {E}_\alpha ,\mathcal {E}_\beta \) be the equational theories from Definition 3. Let \(\mathcal {C}_{in}[\_]\) be the process context \({in}(c,x_\mathcal {F});\;{in}(c,x^0);\;{in}(c,x^1);\;[\_]\). We say that \((\mathcal {A},\mathcal {B},\mathcal {E})\) satisfies input privacy if
$$ \begin{array}{lclr} !\;\mathcal {C}_{in}[\;\mathcal {A}(x_\mathcal {F},\alpha (x^0))\;] & \sim _{\mathcal {E}_\alpha } & !\; \mathcal {C}_{in}[\;\mathcal {A}(x_\mathcal {F},\alpha (x^1))\;] & \;\;\,and \\ !\;\mathcal {C}_{in}[\;\mathcal {B}(x_\mathcal {F},\beta (x^0))\;] & \sim _{\mathcal {E}_\beta } & !\; \mathcal {C}_{in}[\;\mathcal {B}(x_\mathcal {F},\beta (x^1))\;] & \end{array} $$

Note that \(\alpha (x^0)\) and \(\alpha (x^1)\) remain distinct terms with respect to \(\mathcal {E}_\alpha \) when considered in any context other than in terms of the form \(eval(y,\alpha (x^0),z)\), \(eval(y,\alpha (x^1),z)\); and similarly for \(\mathcal {E}_\beta \). That is why, if there is a privacy weakness in the protocol, the attacker will be able to spot the difference between the two experiments in Definition 4, for either \(\mathcal {A}\) or \(\mathcal {B}\).
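In ProVerif, the two experiments of Definition 4 can be encoded as a single biprocess using the choice operator, with the extra rule of \(\mathcal {E}_\alpha \) declared as an equation. A sketch for the \(\mathcal {A}\) side, under the assumptions of our earlier encoding:

    fun alpha(bitstring): bitstring.
    const alpha0: bitstring.
    equation forall x: bitstring, y: bitstring, z: bitstring;
      eval(x, alpha(y), z) = eval(x, alpha0, z).

    process
      ! in(c, xF: bitstring); in(c, x0: bitstring); in(c, x1: bitstring);
        A(xF, alpha(choice[x0, x1]))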

6 Conclusion and Related Work

The ProVerif code for the models introduced in this paper is available online and in the associated research report [25]. ProVerif returns positive results for all queries within seconds, and we also perform reachability tests to ensure that all parties can execute the protocol correctly. Our models and results differ from related work in several aspects, and also open new research questions:

The model of Backes et al. [14] considers multi-party computation functionalities abstractly, allowing one to reason about their use in larger protocols, without necessarily representing the cryptographic primitives that realize the functionality. Their framework comes equipped with a computational soundness result and is applied to the case study of an auction protocol [32]. A property of robust safety, which can be related to our property of result integrity, is verified automatically by type-checking.

Dahl and Damgård [13] propose a computationally sound formal framework for two-party computation protocols in applied pi-calculus and use ProVerif to verify an oblivious transfer protocol based on homomorphic encryption [28]. In order to use ProVerif, they have to find a simulator and, additionally, to transform the processes manually. In contrast, we do not require a simulator and our models can be given as input directly to automated tools. Our case study is also different, allowing the evaluation of any given function, relying on garbled circuits and on oblivious transfer as a sub-protocol. However, we do not provide a soundness result, and the relation of our models to simulation-based security remains an open question. In that direction, we can also explore extensions of our models into a general framework allowing the verification of other protocols, for two or multiple parties, relying on various cryptographic primitives.

Delaune et al. [11] and Böhl and Unruh [12] study definitions of simulation-based security in applied pi-calculus, showing their application to the analysis of several protocols. Although quite general, their frameworks are not easily amenable to automation. As in [13], the authors of [12] have to perform a significant amount of manual proof before applying ProVerif. Earlier computationally sound symbolic models for simulation-based security are yet more complex [9, 10, 33]. Our paper proposes a different approach: rather than directly expressing simulation-based security in formal models, we propose several security notions whose conjunction should be sufficient for secure two-party computation, while it remains to be seen under what conditions they imply simulation-based security. This methodology promises not only better automation, but also a better understanding of what security properties are achieved. In turn, this may aid the design of new protocols, where some of the properties can be relaxed.

A formal model for oblivious transfer in applied pi-calculus is presented by Dahl and Damgård [13]. Their specification is a process modeling a particular protocol, whereas we propose a more abstract equational theory. However, our theory only models oblivious transfer of garbled values; automated verification modulo a more general equational theory for oblivious transfer remains future work. Conversely, the model of Goubault-Larrecq et al. [34] aims to capture formally the probabilistic aspect of some oblivious transfer protocols.


Acknowledgement

We thank the reviewers for their valuable comments.

References

  1. Yao, A.: Protocols for secure computations (extended abstract). In: FOCS, pp. 160–164. IEEE Computer Society (1982)
  2. Yao, A.: How to generate and exchange secrets (extended abstract). In: FOCS, pp. 162–167. IEEE Computer Society (1986)
  3. Lindell, Y., Pinkas, B.: An efficient protocol for secure two-party computation in the presence of malicious adversaries. In: Naor [35], pp. 52–78
  4. Jarecki, S., Shmatikov, V.: Efficient two-party secure computation on committed inputs. In: Naor [35], pp. 97–114
  5. Canetti, R.: Universally composable security: a new paradigm for cryptographic protocols. In: FOCS, pp. 136–145. IEEE Computer Society (2001)
  6. Lindell, Y., Pinkas, B.: A proof of security of Yao’s protocol for two-party computation. J. Cryptol. 22(2), 161–188 (2009)
  7. Abadi, M., Blanchet, B., Comon-Lundh, H.: Models and proofs of protocol security: a progress report. In: Bouajjani, A., Maler, O. (eds.) CAV 2009. LNCS, vol. 5643, pp. 35–49. Springer, Heidelberg (2009)
  8. Cortier, V., Kremer, S. (eds.): Formal Models and Techniques for Analyzing Security Protocols. Cryptology and Information Security Series. IOS Press, Amsterdam (2011)
  9. Canetti, R., Herzog, J.: Universally composable symbolic security analysis. J. Cryptol. 24(1), 83–147 (2011)
  10. Backes, M., Pfitzmann, B., Waidner, M.: A composable cryptographic library with nested operations. In: Proceedings of the 10th ACM Conference on Computer and Communications Security, CCS 2003, Washington, DC, USA, 27–30 October 2003 (2003)
  11. Delaune, S., Kremer, S., Pereira, O.: Simulation based security in the applied pi calculus. In: Kannan, R., Narayan Kumar, K. (eds.) FSTTCS. LIPIcs, vol. 4, pp. 169–180 (2009)
  12. Böhl, F., Unruh, D.: Symbolic universal composability. In: 2013 IEEE 26th Computer Security Foundations Symposium, New Orleans, LA, USA, 26–28 June 2013, pp. 257–271. IEEE (2013)
  13. Dahl, M., Damgård, I.: Universally composable symbolic analysis for two-party protocols based on homomorphic encryption. In: Nguyen, P.Q., Oswald, E. (eds.) EUROCRYPT 2014. LNCS, vol. 8441, pp. 695–712. Springer, Heidelberg (2014)
  14. Backes, M., Maffei, M., Mohammadi, E.: Computationally sound abstraction and verification of secure multi-party computations. In: Lodaya, K., Mahajan, M. (eds.) FSTTCS. LIPIcs, vol. 8, pp. 352–363 (2010)
  15. Rabin, M.O.: How to exchange secrets with oblivious transfer. IACR Cryptol. ePrint Arch. 2005, 187 (2005)
  16. Even, S., Goldreich, O., Lempel, A.: A randomized protocol for signing contracts. Commun. ACM 28(6), 637–647 (1985)
  17. Huang, Y., Katz, J., Evans, D.: Efficient secure two-party computation using symmetric cut-and-choose. In: Canetti, R., Garay, J.A. (eds.) CRYPTO 2013, Part II. LNCS, vol. 8043, pp. 18–35. Springer, Heidelberg (2013)
  18. Goldreich, O., Micali, S., Wigderson, A.: How to play any mental game or a completeness theorem for protocols with honest majority. In: Aho, A.V. (ed.) STOC, pp. 218–229. ACM (1987)
  19. Abadi, M., Fournet, C.: Mobile values, new names, and secure communication. In: Proceedings of the 28th ACM Symposium on Principles of Programming Languages (POPL 2001), pp. 104–115, January 2001
  20. Blanchet, B.: An efficient cryptographic protocol verifier based on Prolog rules. In: Computer Security Foundations Workshop (CSFW 2001) (2001)
  21. Blanchet, B.: Automatic verification of correspondences for security protocols. J. Comput. Secur. 17(4), 363–434 (2009)
  22. Blanchet, B., Abadi, M., Fournet, C.: Automated verification of selected equivalences for security protocols. J. Log. Algebr. Program. 75(1), 3–51 (2008)
  23. Ryan, M., Smyth, B.: Applied pi calculus. In: Cortier, V., Kremer, S. (eds.) Formal Models and Techniques for Analyzing Security Protocols. Cryptology and Information Security Series. IOS Press (2011)
  24. Dershowitz, N., Jouannaud, J.-P.: Rewrite systems. In: Handbook of Theoretical Computer Science, Volume B: Formal Models and Semantics (B), pp. 243–320. MIT Press (1990)
  25. Bursuc, S.: Secure two-party computation in applied pi-calculus: models and verification. Cryptology ePrint Archive, Report 2015/782 (2015). http://eprint.iacr.org/
  26. Cortier, V., Delaune, S.: A method for proving observational equivalence. In: Computer Security Foundations Symposium (CSF), Port Jefferson, New York, USA, 8–10 July 2009, pp. 266–276. IEEE Computer Society (2009)
  27. Lindell, Y., Pinkas, B.: Secure two-party computation via cut-and-choose oblivious transfer. J. Cryptol. 25(4), 680–722 (2012)
  28. Damgård, I., Nielsen, J.B., Orlandi, C.: Essentially optimal universally composable oblivious transfer. In: Lee, P.J., Cheon, J.H. (eds.) ICISC 2008. LNCS, vol. 5461, pp. 318–335. Springer, Heidelberg (2009)
  29. Delaune, S., Kremer, S., Ryan, M.: Verifying privacy-type properties of electronic voting protocols. J. Comput. Secur. 17(4), 435–487 (2009)
  30. Backes, M., Hriţcu, C., Maffei, M.: Automated verification of remote electronic voting protocols in the applied pi-calculus. In: Computer Security Foundations Symposium (CSF), pp. 195–209. IEEE Computer Society (2008)
  31. Blanchet, B.: Automatic proof of strong secrecy for security protocols. In: 2004 IEEE Symposium on Security and Privacy (S&P 2004), 9–12 May 2004, Berkeley, CA, USA, p. 86. IEEE Computer Society (2004)
  32. Bogetoft, P., Christensen, D.L., Damgård, I., Geisler, M., Jakobsen, T., Krøigaard, M., Nielsen, J.D., Nielsen, J.B., Nielsen, K., Pagter, J., Schwartzbach, M., Toft, T.: Secure multiparty computation goes live. In: Dingledine, R., Golle, P. (eds.) FC 2009. LNCS, vol. 5628, pp. 325–343. Springer, Heidelberg (2009)
  33. Backes, M., Pfitzmann, B., Waidner, M.: The reactive simulatability (RSIM) framework for asynchronous systems. Inf. Comput. 205(12), 1685–1720 (2007)
  34. Goubault-Larrecq, J., Palamidessi, C., Troina, A.: A probabilistic applied pi-calculus. In: Shao, Z. (ed.) APLAS 2007. LNCS, vol. 4807, pp. 175–190. Springer, Heidelberg (2007)
  35. Naor, M. (ed.): EUROCRYPT 2007. LNCS, vol. 4515. Springer, Heidelberg (2007)

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. School of Computer Science, University of Bristol, Bristol, UK
