Equivalence Properties by Typing in Cryptographic Branching Protocols

Recently, many tools have been proposed for automatically analysing, in symbolic models, equivalence of security protocols. Equivalence is a property needed to state privacy properties or game-based properties like strong secrecy. Tools for a bounded number of sessions can decide equivalence but typically suffer from efficiency issues. Tools for an unbounded number of sessions like Tamarin or ProVerif prove a stronger notion of equivalence (diff-equivalence) that does not properly handle protocols with else branches. Building upon a recent approach, we propose a type system for reasoning about branching protocols and dynamic keys. We prove our type system to entail equivalence, for all the standard primitives. Our type system has been implemented and shows a significant speedup compared to the tools for a bounded number of sessions, and compares similarly to ProVerif for an unbounded number of sessions. Moreover, we can also prove security of protocols that require a mix of bounded and unbounded number of sessions, which ProVerif cannot properly handle.

In recent years, attention has also been given to equivalence properties, which are crucial to model privacy properties such as vote privacy [8,32], unlinkability [5], or anonymity [9]. For example, consider an authentication protocol P_pass embedded in a biometric passport. P_pass preserves anonymity of passport holders if an attacker cannot distinguish an execution with Alice from an execution with Bob. This can be expressed by the equivalence P_pass(Alice) ≈_t P_pass(Bob). Equivalence is also used to express properties closer to cryptographic games, like strong secrecy.
Two main classes of tools have been developed for equivalence. First, in the case of an unbounded number of sessions (when the protocol is executed arbitrarily many times), equivalence is undecidable. Instead, the tools ProVerif [13,15] and Tamarin [37,11] try to prove a stronger property, namely diff-equivalence, that may be too strong, e.g. in the context of voting. Tamarin covers a larger class of protocols but may require some guidance from the user. Maude-NPA [34,39] also proves diff-equivalence but may have non-termination issues. Another class of tools aims at deciding equivalence for a bounded number of sessions. This is the case in particular of SPEC [31], APTE [23], Akiss [22], and SatEquiv [26]. SPEC, APTE, and Akiss suffer from efficiency issues and can typically not handle more than 3-4 sessions. SatEquiv is much more efficient but is limited to symmetric encryption and requires protocols to be well-typed, which often assumes some additional tagging of the protocol.
Our contribution. Following the approach of [28], we propose a novel technique for proving equivalence properties for a bounded number of sessions as well as an unbounded number of sessions (or a mix of both), based on typing.
[28] proposes a first type system that entails trace equivalence P ≈_t Q, provided protocols use fixed (long-term) keys, identical in P and Q. In this paper, we target a larger class of protocols, that includes in particular key-exchange protocols and protocols whose security relies on branching on the secret. This is the case e.g. of the private authentication protocol [3], where agent B returns a true answer to A, encrypted with A's public key if A is one of his friends, and sends a decoy message (encrypted with a dummy key) otherwise.
We devise a new type system for reasoning about keys. In particular, we introduce bikeys to cover behaviours where keys in P differ from the keys in Q. We design new typing rules to reason about protocols that may branch differently (in P and Q), depending on the input. Following the approach of [28], our type system collects sent messages into constraints that are required to be consistent. Intuitively, the type system guarantees that any execution of P can be matched by an execution of Q, while consistency imposes that the resulting sequences of messages are indistinguishable for an attacker. We had to entirely revisit the approach of [28] and prove a finer invariant in order to cope with the case where keys are used as variables. Specifically, most of the rules for encryption, signature, and decryption had to be adapted to accommodate the flexible usage of keys. For messages, we had to modify the rules for keys and encryption, in order to encrypt messages with keys of different type (bi-key type), instead of only fixed keys. We show that our type system entails equivalence for the standard notion of trace equivalence [24] and we devise a procedure for proving consistency. This yields an efficient approach for proving equivalence of protocols for a bounded and an unbounded number of sessions (or a combination of both).
We implemented a prototype of our type-checker that we evaluate on a set of examples, that includes private authentication, the BAC protocol (of the biometric passport), as well as Helios together with the setup phase. Our tool requires a light type annotation that specifies which keys and names are likely to be secret or public and the form of the messages encrypted by a given key. This can be easily inferred from the structure of the protocol. Our type-checker outperforms even the most efficient existing tools for a bounded number of sessions by two (for examples with few processes) to three (for examples with more processes) orders of magnitude. Note however that these tools decide equivalence while our type system is incomplete. In the case of an unbounded number of sessions, on our examples, the performance is comparable to ProVerif, one of the most popular tools. We consider in particular vote privacy in the Helios protocol, in the case of a dishonest ballot board, with no revote (as the protocol is insecure otherwise). ProVerif fails to handle this case as it cannot (faithfully) consider a mix of bounded and unbounded number of sessions. Compared to [28], our analysis includes the setup phase (where voters receive the election key), which could not be considered before.
2 High-level description

2.1 Background

Trace equivalence of two processes is a property that guarantees that an attacker observing the execution of either of the two processes cannot decide which one it is. Previous work [28] has shown how trace equivalence can be proved statically using a type system combined with a constraint checking procedure. The type system consists of typing rules of the form Γ ⊢ P ∼ Q → C, meaning that in an environment Γ two processes P and Q are equivalent if the produced set of constraints C, encoding the attacker observables, is consistent.
The typing environment Γ is a mapping from nonces, keys, and variables to types. Nonces are assigned security labels with a confidentiality and an integrity component, e.g. HL for high confidentiality and low integrity. Key types are of the form key^l(T) where l is the security label of the key and T is the type of the payload. Key types are crucial to convey typing information from one process to another. Normally, we cannot make any assumptions about values received from the network -- they might possibly originate from the attacker. If we however successfully decrypt a message using a secret symmetric key, we know that the result is of the key's payload type. This is enforced on the sender side, whenever outputting an encryption.
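To illustrate how key payload types convey information between sender and receiver, here is a minimal sketch in Python (our own illustrative encoding, not the authors' implementation; all names are hypothetical):

```python
# A key type key^l(T) is encoded as ("key", l, T): l is the key's security
# label, T the payload type of messages encrypted under it.
def type_after_decrypt(key_type):
    """If decryption with a key of type key^HH(T) succeeds, the plaintext
    can be assumed to have type T (enforced on the sender side); with an
    untrusted key we can only assume type LL."""
    kind, label, payload = key_type
    assert kind == "key"
    return payload if label == "HH" else "LL"

# Decrypting with a secret key whose payload type is the pair HL * LL:
print(type_after_decrypt(("key", "HH", ("pair", "HL", "LL"))))
```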
A core assumption of virtually any efficient static analysis for equivalence is uniform execution, meaning that the two processes of interest always take the same branch in a branching statement. For instance, this means that all decryptions must always succeed or fail equally in the two processes. For this reason, previous work introduced a restriction to allow only encryption and decryption with keys whose equality could be statically proved.

Limitation
There are however protocols that require non-uniform execution for a proof of trace equivalence, e.g., the private authentication protocol [3]. The protocol aims at authenticating B to A, anonymously w.r.t. other agents. More specifically, agent B may refuse to communicate with agent A, but a third agent D should not learn whether B declines communication with A or not. The protocol can be informally described as follows, where pk(k) denotes the public key associated with key k, and aenc(M, pk(k)) denotes the asymmetric encryption of message M with this public key.
A → B : aenc(⟨N_a, pk(k_a)⟩, pk(k_b))
B → A : aenc(⟨N_a, N_b, pk(k_b)⟩, pk(k_a))   if B accepts A's request
B → A : aenc(N_b, pk(k))                      if B declines A's request

The key types of Fig. 1 are:

Γ(k_b, k_b) = key^HH(HL * LL)   initial message uses the same key on both sides
Γ(k_a, k)   = key^HH(HL)        authentication succeeded on the left, failed on the right
Γ(k, k_c)   = key^HH(HL)        authentication succeeded on the right, failed on the left
Γ(k_a, k_c) = key^HH(HL)        authentication succeeded on both sides
Γ(k, k)     = key^HH(HL)        authentication failed on both sides
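To make the role of the key-type assignments of Fig. 1 concrete, the environment can be sketched as a finite map from bikeys to key types. This is a hypothetical Python encoding of our own, not the authors' implementation:

```python
# Hypothetical encoding of the environment of Fig. 1: Gamma maps a *bikey*
# (key used in the left process, key used in the right process) to a key
# type key^l(T), encoded as ("key", l, T).
def key(label, payload):
    return ("key", label, payload)

Gamma = {
    ("kb", "kb"): key("HH", ("pair", "HL", "LL")),  # same key on both sides
    ("ka", "k"):  key("HH", "HL"),  # auth succeeded on the left, failed on the right
    ("k", "kc"):  key("HH", "HL"),  # auth succeeded on the right, failed on the left
    ("ka", "kc"): key("HH", "HL"),  # auth succeeded on both sides
    ("k", "k"):   key("HH", "HL"),  # auth failed on both sides
}

# Typing B's positive answer (left) against the decoy message (right)
# looks up the bikey (ka, k):
print(Gamma[("ka", "k")])  # ('key', 'HH', 'HL')
```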

Encrypting with different keys
Let P_a(k_a, pk(k_b)) model agent A willing to talk with B, and P_b(k_b, pk(k_a)) model agent B willing to talk with A (and declining requests from other agents). We model the protocol as:

P_a(k_a, pk_b) = new N_a. out(aenc(⟨N_a, pk(k_a)⟩, pk_b)). in(z)
P_b(k_b, pk_a) = new N_b. in(x).
                 let y = adec(x, k_b) in
                 let y_1 = π_1(y) in let y_2 = π_2(y) in
                 if y_2 = pk_a then out(aenc(⟨y_1, N_b, pk(k_b)⟩, pk_a))
                 else out(aenc(N_b, pk(k)))

where adec(M, k) denotes asymmetric decryption of message M with private key k. We model anonymity as the following equivalence, intuitively stating that an attacker should not be able to tell whether B accepts requests from the agent A or C:

P_a(k_a, pk(k_b)) | P_b(k_b, pk(k_a)) ≈_t P_a(k_a, pk(k_b)) | P_b(k_b, pk(k_c))

We now show how we can type the protocol in order to show trace equivalence. The initiator P_a trivially executes uniformly, since it does not contain any branching operations. We hence focus on typing the responder P_b.
The beginning of the responder protocol can be typed using standard techniques. Then however, we perform the test y_2 = pk(k_a) on the left side and y_2 = pk(k_c) on the right side. Since we cannot statically determine the result of the two equality checks -- and thus guarantee uniform execution -- we have to typecheck the four possible combinations of then and else branches. This means we have to typecheck outputs of encryptions that use different keys on the left and the right side.
To deal with this we do not assign types to single keys, but rather to pairs of keys (k, k′) -- which we call bikeys -- where k is the key used in the left process and k′ is the key used in the right process. The key types used for typing are presented in Fig. 1.
As an example, we consider the combination of the then branch on the left with the else branch on the right. This combination occurs when A is successfully authenticated on the left side, while being rejected on the right side. We then have to typecheck B's positive answer together with the decoy message: Γ ⊢ aenc(⟨y_1, N_b, pk(k_b)⟩, pk(k_a)) ∼ aenc(N_b, pk(k)) : LL. For this we need the type for the bikey (k_a, k).

Decrypting non-uniformly
When decrypting a ciphertext that was potentially generated using two different keys on the left and the right side, we have to take all possibilities into account. Consider the following extension of the process P_a where agent A decrypts B's message.

P_a(k_a, pk_b) = new N_a. out(aenc(⟨N_a, pk(k_a)⟩, pk_b)). in(z).
                 let z′ = adec(z, k_a) in out(1) else out(0)

In the decryption, there are the following possible cases:
- The message is a valid encryption supplied by the attacker (using the public key pk(k_a)), so we check the then branch on both sides with Γ(z′) = LL.
- The message is not a valid encryption supplied by the attacker, so we check the else branch on both sides.
- The message is a valid response from B. The keys used on the left and the right are then one of the four possible combinations (k_a, k), (k_a, k_c), (k, k_c) and (k, k).
  • In the first two cases the decryption will succeed on the left and fail on the right. We hence check the then branch on the left, with Γ(z′) = HL, together with the else branch on the right. If the type Γ(k_a, k) were different from Γ(k_a, k_c), we would check this combination twice, using the two different payload types.
  • In the remaining two cases the decryption will fail on both sides. We hence would have to check the two else branches (which however we already did).
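The case analysis above can be enumerated mechanically from the bikey assignments in Γ. A sketch under our hypothetical dict encoding of Γ (in practice, duplicates with equal payload types, such as (k_a, k) and (k_a, k_c) here, would be collapsed):

```python
def decryption_cases(gamma, kl, kr):
    """Branch combinations to typecheck when decrypting with key kl in the
    left process and kr in the right process."""
    cases = [("then", "then", "LL"),   # ciphertext forged by the attacker
             ("else", "else", None)]   # not a valid ciphertext on either side
    for (l, r), (_, label, payload) in gamma.items():
        if label != "HH":
            continue
        if l == kl and r == kr:
            cases.append(("then", "then", payload))  # succeeds on both sides
        elif l == kl:
            cases.append(("then", "else", payload))  # succeeds left, fails right
        elif r == kr:
            cases.append(("else", "then", payload))  # fails left, succeeds right
    return cases

gamma = {("ka", "k"): ("key", "HH", "HL"), ("ka", "kc"): ("key", "HH", "HL"),
         ("k", "kc"): ("key", "HH", "HL"), ("k", "k"): ("key", "HH", "HL")}
# A decrypts with ka on both sides: only (ka, k) and (ka, kc) yield mixed cases.
print(decryption_cases(gamma, "ka", "ka"))
```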
While checking the then branch together with the else branch, we have to check Γ ⊢ 1 ∼ 0 : LL, which rightly fails, as the protocol does not guarantee trace equivalence.

Model
In symbolic models, security protocols are typically modelled as processes of a process algebra, such as the applied pi-calculus [2]. We present here a calculus used in [28] and inspired by the calculus underlying the ProVerif tool [14]. This section is mostly an excerpt of [28], recalled here for the sake of completeness, and illustrated with the private authentication protocol.

Terms
Messages are modelled as terms. We assume an infinite set of names N for nonces, further partitioned into the set FN of free nonces (created by the attacker) and the set BN of bound nonces (created by the protocol parties), an infinite set of names K for keys similarly split into FK and BK, and an infinite set of variables V. Cryptographic primitives are modelled through a signature F, that is, a set of function symbols, given with their arity (i.e. the number of arguments). Here, we consider the following signature: F_c = {pk, vk, enc, aenc, sign, ⟨·,·⟩, h} that models respectively public and verification key, symmetric and asymmetric encryption, signature, concatenation, and hash. The companion primitives (symmetric and asymmetric decryption, signature check, and projections) are represented by the following signature: F_d = {dec, adec, checksign, π_1, π_2}. We also consider a set C of (public) constants (used as agent names for instance). Given a signature F, a set of names N, and a set of variables V, the set of terms T(F, V, N) is the set inductively defined by applying functions to variables in V and names in N. We denote by names(t) (resp. vars(t)) the set of names (resp. variables) occurring in t.
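A small term algebra capturing this signature can be sketched as follows (an illustrative Python encoding of our own, not tied to any tool):

```python
# Terms: strings are names/constants/variables; tuples ("f", t1, ..., tn)
# are function applications.
ARITY = {"pk": 1, "vk": 1, "enc": 2, "aenc": 2, "sign": 2, "pair": 2, "h": 1,  # F_c
         "dec": 2, "adec": 2, "checksign": 2, "proj1": 1, "proj2": 1}          # F_d

def names_of(t):
    """The set of atoms (names/variables) occurring in a term t."""
    if isinstance(t, str):
        return {t}
    assert len(t) == ARITY[t[0]] + 1, "wrong arity"
    return set().union(*(names_of(a) for a in t[1:]))

# First message of the private authentication protocol:
msg = ("aenc", ("pair", "Na", ("pk", "ka")), ("pk", "kb"))
print(sorted(names_of(msg)))  # ['Na', 'ka', 'kb']
```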
A term is ground if it does not contain variables. We consider the set T(F_c ∪ F_d ∪ C, V, N ∪ K) of cryptographic terms, simply called terms. Messages are terms with constructors from T(F_c ∪ C, V, N ∪ K). We assume the set of variables to be split into two subsets V = X ⊎ AX, where X are variables used in processes while AX are variables used to store messages. An attacker term is a term from T(F_c ∪ F_d ∪ C, AX, FN ∪ FK). In particular, an attacker term cannot use nonces and keys created by the protocol's parties.
A substitution σ = {M_1/x_1, ..., M_k/x_k} is a mapping from variables x_1, ..., x_k ∈ V to messages M_1, ..., M_k. We let dom(σ) = {x_1, ..., x_k}. We say that σ is ground if all messages M_1, ..., M_k are ground. We let names(σ) = ∪_{1≤i≤k} names(M_i). The application of a substitution σ to a term t is denoted tσ and is defined as usual.
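Substitution application tσ can be sketched directly on the tuple representation of terms (hypothetical helper of our own):

```python
def apply_subst(t, sigma):
    """Compute t*sigma: replace each variable in dom(sigma) by its image,
    homomorphically through function applications."""
    if isinstance(t, str):
        return sigma.get(t, t)
    return (t[0],) + tuple(apply_subst(a, sigma) for a in t[1:])

sigma = {"x": ("pk", "ka")}
print(apply_subst(("pair", "x", "Na"), sigma))  # ('pair', ('pk', 'ka'), 'Na')
```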
The evaluation of a term t, denoted t ↓, corresponds to the bottom-up application of the cryptographic primitives and is recursively defined as follows.
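The defining equations are omitted here; the standard behaviour of these destructors can be sketched as follows (a reconstruction of the usual definitions, not necessarily the paper's exact figure):

```python
FAIL = None  # evaluation failure

def is_app(t, f):
    return isinstance(t, tuple) and t[0] == f

def evaluate(t):
    """Bottom-up evaluation t|down: evaluate arguments first, then apply
    the destructor; FAIL propagates upwards."""
    if isinstance(t, str):
        return t
    head, args = t[0], [evaluate(a) for a in t[1:]]
    if FAIL in args:
        return FAIL
    if head == "dec" and is_app(args[0], "enc") and args[0][2] == args[1]:
        return args[0][1]                      # dec(enc(m, k), k) -> m
    if head == "adec" and is_app(args[0], "aenc") and args[0][2] == ("pk", args[1]):
        return args[0][1]                      # adec(aenc(m, pk(k)), k) -> m
    if head == "checksign" and is_app(args[0], "sign") and args[1] == ("vk", args[0][2]):
        return args[0][1]                      # checksign(sign(m, k), vk(k)) -> m
    if head == "proj1" and is_app(args[0], "pair"):
        return args[0][1]
    if head == "proj2" and is_app(args[0], "pair"):
        return args[0][2]
    if head in ("dec", "adec", "checksign", "proj1", "proj2"):
        return FAIL                            # destructor application fails
    return (head, *args)                       # constructors evaluate to themselves

print(evaluate(("adec", ("aenc", "m", ("pk", "k")), "k")))  # m
print(evaluate(("dec", ("enc", "m", "k"), "k2")))           # None
```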

Processes
Security protocols describe how messages should be exchanged between participants. We model them through a process algebra, whose syntax is displayed in Fig. 2. We identify processes up to α-renaming, i.e., capture-avoiding renaming of bound names and variables, which are defined as usual. Furthermore, we assume that all bound names, keys, and variables in the process are distinct. A configuration of the system is a tuple (P; φ; σ) where:
- P is a multiset of processes that represents the current active processes;
- φ is a substitution with dom(φ) ⊆ AX such that, for any x ∈ dom(φ), φ(x) (also denoted xφ) is a message that only contains variables in dom(σ); φ represents the terms that have been sent;
- σ is a ground substitution.
The semantics of processes is given through a transition relation. Intuitively, process new n.P creates a fresh nonce or key and behaves like P. Process out(M).P emits M and behaves like P, provided that the evaluation of M is successful. The corresponding message is stored in the frame φ, which corresponds to the attacker knowledge. A process may input any message that the attacker can forge (rule IN) from her knowledge φ, using a recipe R to compute a new message from φ. Note that all names are initially assumed to be secret. Process P | Q corresponds to the parallel composition of P and Q. Process let x = d in P else Q behaves like P in which x is replaced by d if d can be successfully evaluated, and behaves like Q otherwise. Process if M = N then P else Q behaves like P if M and N correspond to two equal messages, and behaves like Q otherwise. The replicated process !P behaves as an unbounded number of copies of P.
A trace of a process P is any possible sequence of transitions in the presence of an attacker that may read, forge, and send messages. Formally, the set of traces trace(P) is defined as follows.

Example 1. Consider the private authentication protocol (PA) presented in Section 2. The process P_b(k_b, pk(k_a)) corresponding to responder B answering a request from A has already been defined in Section 2.3. The process P_a(k_a, pk(k_b)) corresponding to A willing to talk to B is:

P_a(k_a, pk_b) = new N_a. out(aenc(⟨N_a, pk(k_a)⟩, pk_b)). in(z)

In both processes, we assign a potentially different type to every different combination of keys (k, k′) used in the left and right process -- so-called bikeys. This is an important non-standard feature that enables us to type protocols using different encryption and decryption keys.
The types for messages are defined in Fig. 4 and explained below. Selected subtyping rules are given in Fig. 5. We assume three security labels HH, HL and LL, ranged over by l, whose first (resp. second) component denotes the confidentiality (resp. integrity) level. Intuitively, values of high confidentiality may never be output to the network in plain, and values of high integrity are guaranteed not to originate from the attacker. Pair types T * T′ describe the types of their components, and the type T ∨ T′ is given to messages that can have type T or type T′.
The type τ_n^{l,a} describes nonces and constants of security level l: the label a ranges over {∞, 1}, denoting whether the nonce is bound within a replication or not (constants are always typed with a = 1). We assume a different identifier n for each constant and restriction in the process. The type τ_n^{l,1} is populated by a single name (i.e., n describes a constant or a non-replicated nonce) and τ_n^{l,∞} is a special type, that is instantiated to τ_{n_j}^{l,1} in the j-th replication of the process. Type τ_n^{l,a}; τ_m^{l′,a′} is a refinement type that restricts the set of possible values of a message to values of type τ_n^{l,a} on the left and type τ_m^{l′,a′} on the right. For a refinement type τ_n^{l,a}; τ_n^{l,a} with equal types on both sides we simply write τ_n^{l,a}. Keys can have three different types, ranged over by KT, ordered by a subtyping relation (SEQKEY, SSESKEY): seskey^{l,a}(T) <: eqkey^l(T) <: key^l(T). For all three types, l denotes the security label of the key (SKEY) and T is the type of the payload that can be encrypted or signed with these keys. This allows us to transfer typing information from one process to another: e.g. when encrypting, we check that the payload type is respected, so that we can be sure to get a value of the payload type upon decryption. The three different types encode different relations between the left and the right component of a bikey (k, k′). While type key^l(T) can be given to bikeys with different components k ≠ k′, type eqkey^l(T) ensures that the keys are equal on both sides in the specific typed instruction. Type seskey^{l,a}(T) additionally guarantees that the key is always the same on the left and the right throughout the whole process. We allow for dynamic generation of keys of type seskey^{l,a}(T) and use a label a to denote whether the key is generated under replication or not -- just like for nonce types.
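The key-type hierarchy seskey^{l,a}(T) <: eqkey^l(T) <: key^l(T) can be sketched as a small subtyping check (our own hypothetical encoding):

```python
# Key types as tuples; the constructor records how strongly the left and
# right components of a bikey are tied together.
ORDER = {"seskey": 0, "eqkey": 1, "key": 2}  # seskey <: eqkey <: key

def subtype_key(kt1, kt2):
    """kt1 <: kt2 iff labels and payload types agree and the constructor
    only gets weaker (rules SSESKEY, SEQKEY)."""
    return (kt1[1] == kt2[1] and kt1[-1] == kt2[-1]
            and ORDER[kt1[0]] <= ORDER[kt2[0]])

print(subtype_key(("seskey", "HH", "a", "T"), ("key", "HH", "T")))  # True
print(subtype_key(("key", "HH", "T"), ("eqkey", "HH", "T")))        # False
```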
For a key of type T, we use types pkey(T) and vkey(T) for the corresponding public key and verification key, and types (T′)_T and {T′}_T for symmetric and asymmetric encryptions of messages of type T′ with this key. Public keys and verification keys can be output in plain, as they are public.

Constraints

When typing messages, we generate constraints of the form (M ∼ N), meaning that the attacker may see M and N in the left and right process, respectively, and these two messages are thus required to be indistinguishable. Due to space reasons we only present a few selected rules that are characteristic of the typing of branching protocols. The omitted rules are similar in spirit to the presented ones or are standard rules for equivalence typing [28].

Typing messages
The typing judgement for messages is of the form Γ ⊢ M ∼ N : T → c, which reads as follows: under the environment Γ, M and N are of type T, and either this is a high confidentiality type (i.e., M and N are not disclosed to the attacker) or M and N are indistinguishable for the attacker, assuming the set of constraints c is consistent.
Confidential nonces can be given their label from the typing environment in rule TNONCE. Since their label prevents them from being released in clear, the attacker cannot observe them and we do not need to add constraints for them. They can however be output in encrypted form and will then appear in the constraints of the encryption. Public nonces (labeled as LL) can be typed if they are equal on both sides (rule TNONCEL). These are standard rules, as well as the rules TVAR, TSUB, TPAIR and THIGH [28].
A non-standard rule that is crucial for the typing of branching protocols is rule TKEY. As the typing environment contains types for bikeys (k, k′), this rule allows us to type two potentially different keys with their type from the environment. With the standard rule TPUBKEYL we can only type the public key of the same key on both sides, while rule TPUBKEY allows us to type different public keys pk(M), pk(N), provided we can show that there exists a valid key type for the terms M and N. This highlights another important technical contribution of this work, as compared to existing type systems for equivalence: we do not only support a fixed set of keys, but also allow for the usage of keys in variables, that have been received from the network.
To show that a message is of type {T}_{T′} -- a message of type T encrypted asymmetrically with a key of type T′ -- we have to show that the corresponding terms have exactly these types in rule TAENC. The generated constraints are simply propagated. In addition, we need to show that T′ is a valid type for a public key, or LL, which models untrusted keys received from the network. Note that this rule allows us to encrypt messages with different keys in the two processes. For encryptions with honest keys (label HH) we can use rule TAENCH to give type LL to the messages, if we can show that the payload type is respected. In this case we add the entire encryptions to the constraints, since the attacker can check different encryptions for equality, even if he cannot obtain the plaintext. Rule TAENCL allows us to give type LL to encryptions even if we do not respect the payload type, or if the key is corrupted. However, we then have to type the plaintexts with type LL, since we cannot guarantee their confidentiality. Additionally, we have to ensure that the same key is used in both processes, because the attacker might possess the corresponding private keys and test which decryption succeeds. Since we already add constraints for giving type LL to the plaintext, we do not need to add any additional constraints.
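A sketch of the behaviour of rule TAENCH described above, in our hypothetical encoding of terms and environments (not the actual typing rule, which also requires a derivation for the payload type):

```python
def taench(gamma, m_left, m_right, kl, kr):
    """If the bikey (kl, kr) is honest (label HH), an asymmetric encryption
    of a well-typed payload gets type LL, and the two whole ciphertexts are
    recorded as a constraint (the attacker can compare ciphertexts)."""
    kt = gamma.get((kl, kr))
    if kt is None or kt[1] != "HH":
        return None  # rule not applicable
    left = ("aenc", m_left, ("pk", kl))
    right = ("aenc", m_right, ("pk", kr))
    return ("LL", {(left, right)})

gamma = {("ka", "k"): ("key", "HH", "HL")}
ty, c = taench(gamma, ("pair", "y1", "Nb"), "Nb", "ka", "k")
print(ty)  # LL
```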

Typing processes
From now on, we assume that processes assign a type to freshly generated nonces and keys. That is, new n.P is now of the form new n : T.P. This requires a (very light) type annotation from the user. The typing judgement for processes is of the form Γ ⊢ P ∼ Q → C and can be interpreted as follows: if two processes P and Q can be typed in Γ and the generated constraint set C is consistent, then P and Q are trace equivalent. We present selected rules in Fig. 7.
Rule POUT states that we can output messages to the network if we can type them with type LL, i.e., they are indistinguishable to the attacker, provided that the generated set c of constraints is consistent. The constraints of c are then added to all constraints in the constraint set C. We define C ∪_∀ c := {(c′ ∪ c, Γ) | (c′, Γ) ∈ C}. This rule, as well as the rules PZERO, PIN, PNEW, PPAR, and PLET, are standard rules [28].
Rule PNEWKEY allows us to generate new session keys at runtime, which models security protocols more faithfully. It also allows us to generate infinitely many keys, by introducing new keys under replication.
    Γ, n : τ_n^{l,a} ⊢ P ∼ Q → C
  ----------------------------------------------------
    Γ ⊢ new n : τ_n^{l,a}.P ∼ new n : τ_n^{l,a}.Q → C

Rule PLETADECSAME treats asymmetric decryptions where we use the same fixed honest key (label HH) for decryption in both processes. Standard type systems for equivalence have a simplifying (and restrictive) invariant that guarantees that encryptions are always performed using the same keys in both processes, and hence guarantee that both processes always take the same branch in decryption (compare rule PLET). In our system however, we allow encryptions with potentially different keys, which requires cross-case validation in order to retain soundness. Still, the number of possible combinations of encryption keys is limited by the assignments in the typing environment Γ. To cover all the possibilities, we type the following combinations of continuation processes:
- Both then branches: In this case we know that key k was used for encryption on both sides. Because of Γ(k, k) = key^HH(T), we know that in this case the payload type is T and we type the continuation with Γ, x : T. Because the message may also originate from the attacker (who also has access to the public key), we have to type the two then branches also with Γ, x : LL.
- Both else branches: If decryption fails on both sides, we type the two else branches without introducing any new variables.
- Left then, right else: The encryption may have been created with key k on the left side and another key k′ on the right side. Hence, for each k′ ≠ k such that Γ(k, k′) maps to a key type with label HH and payload type T′, we have to typecheck the left then branch and the right else branch with Γ, x : T′.
- Left else, right then: This case is analogous to the previous one.
The generated set of constraints is simply the union of all generated constraints for the subprocesses. Rule PIFALL lets us typecheck any conditional by simply checking the four possible branch combinations. In contrast to the other rules for conditionals that we present in Appendix A, this rule does not require any other preconditions or checks on the terms M, M′, N, N′.
Destructor Rules

The rule PLET requires that a destructor application succeeds or fails equally in the two processes. To ensure this property, it relies on additional rules for destructors. We present selected rules in Fig. 8. Rule DADECL is a standard rule that states that a decryption of a variable of type LL with an untrusted key (label LL) yields a result of type LL. Decryption with a trusted (label HH) session key gives us a value of the key's payload type, or type LL in case the encryption was created by the attacker using the public key. Here it is important that the key is of type seskey^{HH,a}(T), since this guarantees that the key is never used in combination with a different key, and hence decryption will always equally succeed or fail in both processes. Rule DADECL' is similar to rule DADECL except that it uses a variable for decryption instead of a fixed key. Rule DADECT treats the case in which we know that the variable x is an asymmetric encryption of a specific type. If the type of the key used for decryption matches the key type used for encryption, we know the exact type of the result of a successful decryption. DADECT' is similar to DADECT, with a variable as key. In Appendix A we present similar rules for symmetric decryption and verification of signatures.

The type derivation shown in Fig. 9 reads:

⟨y_1, N_b, pk(k_b)⟩, N_b well formed
Γ ⊢ ⟨y_1, N_b, pk(k_b)⟩ ∼ N_b : HL → ∅                                        (THIGH)
Γ(k_a, k) = key^HH(HL)
Γ ⊢ k_a ∼ k : key^HH(HL) → ∅                                                  (TKEY)
Γ ⊢ pk(k_a) ∼ pk(k) : pkey(key^HH(HL)) → ∅                                    (TPUBKEY)
Γ ⊢ aenc(⟨y_1, N_b, pk(k_b)⟩, pk(k_a)) ∼ aenc(N_b, pk(k)) : {HL}_{pkey(key^HH(HL))} → ∅   (TAENC)
Γ ⊢ aenc(⟨y_1, N_b, pk(k_b)⟩, pk(k_a)) ∼ aenc(N_b, pk(k)) : LL → C            (TAENCH)

where C = {aenc(⟨y_1, N_b, pk(k_b)⟩, pk(k_a)) ∼ aenc(N_b, pk(k))}.

Typing the private authentication protocol
We now show how our type system can be applied to type the private authentication protocol presented in Section 2.3, by showing the most interesting parts of the derivation. We type the protocol using the initial environment Γ presented in Fig. 1. We focus on the responder process P_b and start with the asymmetric decryption. As we use the same key k_b in both processes, we apply rule PLETADECSAME. We have Γ(x) = LL by rule PIN and Γ(k_b, k_b) = key^HH(HL * LL). We do not have any other entry using key k_b in Γ. We hence typecheck the two then branches once with Γ, y : HL * LL and once with Γ, y : LL, as well as the two else branches (which are just 0 in this case).
Typing the let expressions is straightforward using rule PLET. In the conditional we check y_2 = pk(k_a) in the left process and y_2 = pk(k_c) in the right process. Since we cannot guarantee which branches are taken, or even whether the same branch is taken in the two processes, we use rule PIFALL to typecheck all four possible combinations of branches. We now focus on the case where A is successfully authenticated in the left process and is rejected in the right process. We then have to typecheck B's positive answer together with the decoy message: Γ ⊢ aenc(⟨y_1, N_b, pk(k_b)⟩, pk(k_a)) ∼ aenc(N_b, pk(k)) : LL. Fig. 9 presents the type derivation for this example. We apply rule TAENCH to give type LL to the two terms, adding the two encryptions to the constraint set. Using rule TAENC we can show that the encryptions are well-typed with type {HL}_{pkey(key^HH(HL))}. The type of the payload is trivially shown with rule THIGH. To type the public key, we use rule TPUBKEY followed by rule TKEY, which looks up the type for the bikey (k_a, k) in the typing environment Γ.

Consistency
Our type system collects constraints that intuitively correspond to (symbolic) messages that the attacker may see (or deduce). Therefore, two processes are in trace equivalence only if the collected constraints are in static equivalence for any plausible instantiation.
However, checking static equivalence of symbolic frames for any instantiation corresponding to a real execution may be as hard as checking trace equivalence [24]. Conversely, checking static equivalence for any instantiation may be too strong and may prevent proving equivalence of processes. Instead, we use again the typing information gathered by our type system and we consider only instantiations that comply with the type. Actually, we even restrict our attention to instantiations where variables of type LL are only replaced by deducible terms. This last part is a key ingredient for considering processes with dynamic keys. Hence, we define a constraint to be consistent if the corresponding two frames are in static equivalence for any instantiation that can be typed and produces constraints that are included in the original constraint.
Formally, we first introduce the following ingredients: φ_l(c) and φ_r(c) denote the frames composed of the left and the right terms of the constraints, respectively (in the same order); φ_Γ^{LL} denotes the frame composed of all low confidentiality nonces and keys in Γ, as well as all public encryption keys and verification keys in Γ. This intuitively corresponds to the initial knowledge of the attacker.
- Two ground substitutions σ, σ' are well-typed in Γ with constraint c_σ if they preserve the types of the variables in Γ, i.e., Γ_{N,K} ⊢ σ ∼ σ' : Γ_X → c_σ.
- If c is a set of constraints and σ, σ' are two substitutions, the instantiation of c by σ on the left and σ' on the right is c_{σ,σ'} = {Mσ ∼ Nσ' | M ∼ N ∈ c}.

Definition 3 (Consistency). A set of constraints c is consistent in an environment Γ if, for all substitutions σ, σ' well-typed in Γ with a constraint c_σ such that c_σ ⊆ c_{σ,σ'}, the frames φ^{LL}_Γ ∪ φ_l(c)σ and φ^{LL}_Γ ∪ φ_r(c)σ' are statically equivalent. We say that (c, Γ) is consistent if c is consistent in Γ, and that a constraint set C is consistent if each element (c, Γ) ∈ C is consistent.
Compared to [28], we now require c_σ ⊆ c_{σ,σ'}. This means that instead of considering arbitrary (well-typed) instantiations, we only consider instantiations that use fragments of the constraints. For example, this imposes that low variables are instantiated by terms deducible from the constraint. This refinement of consistency provides a tighter definition and is needed for non-fixed keys, as explained in the next section.

Soundness
In this section, we provide our main results. First, soundness of our type system: whenever two processes can be typed with consistent constraints, they are in trace equivalence. Then we show how to automatically prove consistency. Finally, we explain how to lift these first two results from finite processes to processes with replication. But first, we discuss why we cannot directly apply the results from [28], developed for processes with long-term keys.

Example
Consider the following example, typical of a key-exchange protocol: Alice receives a key and uses it to encrypt, e.g., a nonce. Here, we consider a semi-honest session, where an honest agent A receives a key from a dishonest agent C. Such sessions are typically considered in combination with honest sessions.

C → A : aenc(⟨k, C⟩, pk(A))
A → C : enc(n, k)

The process modelling the role of Alice is as follows.
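The displayed process is missing from this version of the text; a hedged sketch consistent with the message flow above (the constructs in, out, adec, π_1 and the key name k_A are our assumptions, not the paper's exact definition) would be:

```latex
% Hedged reconstruction of P_A (the exact process is elided in this copy):
P_A \;=\; \mathsf{in}(x).\;
  \mathsf{let}\; y = \mathsf{adec}(x, k_A) \;\mathsf{in}\;
  \mathsf{let}\; y_1 = \pi_1(y) \;\mathsf{in}\;
  \mathsf{new}\; n.\; \mathsf{out}(\mathsf{enc}(n, y_1))
% A decrypts the input with her private key, projects out the received
% key y_1, and encrypts a fresh nonce n with it, which matches the term
% enc(n, y) appearing in the constraint discussed in the text.
```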
When typechecking P_A ∼ P_A (as part of a more general process with honest sessions), we collect the constraint enc(n, y) ∼ enc(n, y), where y comes from the adversary and is therefore a low variable (that is, of type LL). The approach of [28] consisted in opening messages as much as possible. In this example, this would yield the constraint y ∼ y, which typically renders the constraint inconsistent, as exemplified below. When typechecking the private authentication protocol, we obtain constraints containing aenc(⟨y_1, N_b, pk(k_b)⟩, pk(k_a)) ∼ aenc(N_b, pk(k)) (as seen in Fig. 9), where y_1 has type HL. Assume now that the constraint also contains y ∼ y for some variable y of type LL, and consider the following instantiations of y and y_1: σ(y_1) = σ'(y_1) = a for some constant a, and σ(y) = σ'(y) = aenc(N_b, pk(k)). Note that such an instantiation complies with the types, since Γ ⊢ σ(y) ∼ σ'(y) : LL → c for some constraint c. The instantiated constraint would then contain {aenc(⟨a, N_b, pk(k_b)⟩, pk(k_a)) ∼ aenc(N_b, pk(k)), aenc(N_b, pk(k)) ∼ aenc(N_b, pk(k))}, and the corresponding frames are not statically equivalent, which makes the constraint inconsistent for the consistency definition of [28].
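Concretely, the failure of static equivalence for these instantiated frames comes down to a single equality test (w_1, w_2 are illustrative handle names):

```latex
% Left frame:  w_1 = aenc(<a, N_b, pk(k_b)>, pk(k_a)),  w_2 = aenc(N_b, pk(k))
% Right frame: w_1 = aenc(N_b, pk(k)),                  w_2 = aenc(N_b, pk(k))
% The attacker's equality test
w_1 \stackrel{?}{=} w_2
% fails on the left frame but succeeds on the right one, so the two frames
% are distinguishable, i.e. not statically equivalent.
```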
Therefore, our first idea is to prove that we only collect constraints that are saturated w.r.t. deduction: any deducible subterm can already be constructed from the terms of the constraint. Second, we show that, in any execution, low variables are instantiated by terms deducible from the constraints. This guarantees that our new notion of consistency is sound. These two results are reflected in the next section.

Soundness
Our type system, together with consistency, implies trace equivalence.
Theorem 1 (Typing implies trace equivalence). For all P, Q, C, and for all Γ containing only keys, if Γ ⊢ P ∼ Q → C and C is consistent, then P ≈_t Q.
Example 3. We can typecheck the two instances of PA, where Γ has been defined in Fig. 1, assuming that nonce N_a of process P_a has been annotated with type τ^{HH,1}_{N_a} and nonce N_b of P_b with type τ^{HH,1}_{N_b}. The constraint set C_{PA} can be proved consistent using the procedure presented in the next section. Therefore, we can conclude that the two instances are trace equivalent, which shows anonymity of the private authentication protocol.
The first key ingredient in the proof of Theorem 1 is the fact that any well-typed low term is deducible from the constraint generated when typing it. The second key ingredient is a finer invariant on protocol executions: for any typable pair of processes P, Q, any execution of P can be mimicked by an execution of Q such that low variables are instantiated by well-typed terms constructible from the constraint.
Lemma 2. For all processes P, Q, for all φ, σ, for all multisets of processes P, constraint sets C, sequences s of actions, and for all Γ containing only keys, if Γ ⊢ P ∼ Q → C, C is consistent, and ({P}, ∅, ∅) −s→* (P, φ, σ), then there exist a sequence s' of actions, a multiset Q', a frame φ', a substitution σ', an environment Γ', and a constraint c such that ({Q}, ∅, ∅) −s'→* (Q', φ', σ') and low variables in σ, σ' are instantiated by well-typed terms constructible from c. Note that this finer invariant guarantees that we can restrict our attention to the instantiations considered for defining consistency.
As a by-product, we obtain a finer type system for equivalence, even for processes with long term keys (as in [28]). For example, we can now prove equivalence of processes where some agent signs a low message that comes from the adversary. In such a case, we collect sign(x, k) ∼ sign(x, k) in the constraint, where x has type LL, which we can now prove to be consistent (depending on how x is used in the rest of the constraint).

Procedure for consistency
We devise a procedure check_const(C) for checking consistency of a constraint set C, depicted in Fig. 10. Compared to [28], the procedure is actually simplified: thanks to Lemmas 1 and 2, there is no need to open constraints anymore. The rest is very similar and works as follows:
- First, variables of refined type τ^{l,1}_m ; τ^{l',1}_n are replaced by m on the left-hand side of the constraint and by n on the right-hand side.
- Second, we check that terms have the same shape (encryption, signature, hash) on the left and on the right, and that asymmetric encryptions and hashes cannot be reconstructed by the adversary (that is, they contain some fresh nonce).
- The most important step consists in checking that the terms on the left satisfy the same equalities as the ones on the right. Whenever two left terms M and N are unifiable, their corresponding right terms M' and N' should be equal after applying a similar instantiation.
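As an illustration of this last step, here is a minimal Python sketch (not the actual TypeEq implementation; the term representation, the function names, and the simplification of applying the same unifier on the right are all our assumptions):

```python
# Hedged sketch of the core equality check of check_const: whenever two
# left-hand terms unify, the corresponding right-hand terms must be equal
# under the corresponding instantiation. Terms are tuples ('f', arg1, ...),
# variables are ('var', name). No occurs check, for brevity.

def is_var(t):
    return isinstance(t, tuple) and t[0] == 'var'

def walk(t, subst):
    # Follow variable bindings in subst.
    while is_var(t) and t[1] in subst:
        t = subst[t[1]]
    return t

def unify(s, t, subst=None):
    """Syntactic unification; returns an mgu dict or None."""
    if subst is None:
        subst = {}
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if is_var(s):
        subst[s[1]] = t
        return subst
    if is_var(t):
        subst[t[1]] = s
        return subst
    if s[0] != t[0] or len(s) != len(t):
        return None
    for a, b in zip(s[1:], t[1:]):
        subst = unify(a, b, subst)
        if subst is None:
            return None
    return subst

def apply_subst(t, subst):
    t = walk(t, subst)
    if is_var(t):
        return t
    return (t[0],) + tuple(apply_subst(a, subst) for a in t[1:])

def same_equalities(constraints):
    """constraints: list of (left, right) term pairs. Check that every
    equality between two left terms is matched on the right."""
    for i, (l1, r1) in enumerate(constraints):
        for l2, r2 in constraints[i + 1:]:
            theta = unify(l1, l2, {})
            if theta is not None and \
               apply_subst(r1, theta) != apply_subst(r2, theta):
                return False
    return True
```

For instance, enc(x, k) ∼ enc(x, k) together with enc(a, k) ∼ enc(b, k) fails the check: the left terms unify with x ↦ a, but the right terms then differ.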
For constraint sets without infinite nonce types, check_const entails consistency.
Theorem 2. Let C be a set of constraints without infinite nonce types. If check_const(C) = true, then C is consistent.
Example 4. Continuing Example 3, typechecking the PA protocol yields the set C_{PA} of constraint sets. C_{PA} contains in particular a set of the form {aenc(⟨N_a, pk(k_a)⟩, pk(k_b)) ∼ aenc(⟨N_a, pk(k_a)⟩, pk(k_b)), …}, where variable y_1 has type HL (we also have the same constraint but where y_1 has type LL). The other constraint sets of C_{PA} are similar and correspond to the various cases (else branch of P_a with then branch of P_b, etc.). The procedure check_const returns true since no two left terms can be unified, which proves consistency. Similarly, the other constraints generated for PA can be proved consistent by applying check_const.

From finite to replicated processes
The previous results apply to processes without replication only. In the spirit of [28], we lift our results to replicated processes. We proceed in two steps.
1. Whenever Γ ⊢ P ∼ Q → C, we show that [Γ]_i ⊢ [P]_i ∼ [Q]_i → [C]_i for any session index i, where [Γ]_i is intuitively a copy of Γ in which each variable x has been replaced by x_i, and each nonce or key n of infinite type τ^{l,∞}_n (or seskey^{l,∞}(T)) has been replaced by n_i. The copies [P]_i, [Q]_i, and [C]_i are defined similarly. 2. We cannot directly check consistency of the infinitely many constraints of the form [C]_1 ∪× ⋯ ∪× [C]_n. Instead, we show that it is sufficient to check consistency of two copies [C]_1 ∪× [C]_2 only. The reason why we need two copies (and not just one) is to detect when messages from different sessions may become equal.
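As a toy illustration (ours, not the paper's) of why a single copy does not suffice, suppose each session i collects the constraint enc(x_i, k) ∼ n_i, with x_i a low variable:

```latex
% Left terms of two copies unify, forcing an equality across sessions:
\mathsf{enc}(x_1,k)\,\sigma = \mathsf{enc}(x_2,k)\,\sigma
\quad\text{for } \sigma = \{x_1 \mapsto x_2\},
% while the corresponding right terms n_1 and n_2 are distinct nonces, so
% the equality fails on the right. A single copy [C]_1 contains only one
% such constraint, hence no pair to unify, and the collision goes unnoticed.
```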
Formally, we can prove trace equivalence of replicated processes.
Theorem 3. Consider P, Q, P', Q', C, C', such that P, Q and P', Q' do not share any variable. Consider Γ containing only keys and nonces with finite types. Assume that P and Q bind only nonces and keys with infinite nonce types, i.e., using new m : τ^{l,∞}_m and new k : seskey^{l,∞}(T) for some label l and type T, while P' and Q' bind only nonces and keys with finite types, i.e., using new m : τ^{l,1}_m and new k : seskey^{l,1}(T).
Let us abbreviate by new n̄ the sequence of declarations of each nonce m ∈ dom(Γ) and each session key k such that Γ(k, k) = seskey^{l,1}(T) for some l, T. The full statement, together with fully precise definitions and the proof, can be found in Appendices A and B. Interestingly, Theorem 3 allows us to consider a mix of finite and replicated processes.

Experimental results
We implemented our typechecker as well as our procedure for consistency in a prototype tool, TypeEq. We adapted the original prototype of [28] to implement the additional cases corresponding to the new typing rules. This also required designing new heuristics w.r.t. the order in which typing rules should be applied. Of course, we also had to add support for the new bikey types, and for arbitrary terms as keys. This represented a change of about 40% of the code of the software. We ran our experiments on a single Intel Xeon E5-2687Wv3 3.10GHz core, with 378GB of RAM (shared with the 19 other cores). Our own prototype does not actually require a large amount of RAM; however, some of the other tools we consider use more than 64GB of RAM on some examples (at which point we stopped the experiment). More precise figures about our tool are provided in the table of Fig. 11. The corresponding files can be found at [27]. We tested TypeEq on two symmetric-key protocols that include a handshake on the key (the Yahalom-Lowe and Needham-Schroeder symmetric-key protocols). In both cases, we prove key usability of the exchanged key: intuitively, we show that an attacker cannot distinguish between two encryptions of public constants, P.out(enc(a, k)) ≈_t P.out(enc(b, k)). We also consider one standard asymmetric-key protocol (the Needham-Schroeder-Lowe protocol), showing strong secrecy of the exchanged nonce.
Helios [4] is a well-known voting protocol. We show ballot privacy in the presence of a dishonest board, assuming that voters do not revote (otherwise the protocol is subject to a copy attack [38], a variant of [29]). We consider a more precise model than previous Helios models, which assume that voters initially know the election public key: here, we model the fact that voters actually receive the (signed) freshly generated election public key from the network. The BAC protocol is one of the protocols embedded in the biometric passport [1]. We show anonymity of the passport holder, P(A) ≈_t P(B). Actually, the only data that distinguish P(A) from P(B) are the private keys; therefore, we consider an additional step where the passport sends the identity of the agent to the reader, encrypted with the exchanged key. Finally, we consider the private authentication protocol, as described in this paper. In our scenarios, agents A and B are honest while C is dishonest. This yields 14 sessions for symmetric-key protocols with two agents and one server, and 8 sessions for a protocol with two agents. In some cases, we further increase the number of sessions (replicating identical scenarios) to better compare tool performance. The results of our experiments are reported in Fig. 11. Note that SatEquiv fails to cover several cases because it handles neither asymmetric encryption nor else branches.

Unbounded number of sessions
We then compare TypeEq with ProVerif. As shown in Fig. 12, the performance is similar, except that ProVerif cannot prove Helios. The reason is that Helios is actually subject to a copy attack if voters revote, and ProVerif cannot properly handle processes that are executed only once. Similarly, Tamarin cannot properly handle the else branch of Helios (which models that the ballot box rejects duplicated ballots): Tamarin fails to prove that the underlying check either succeeds or fails on both sides.

Conclusion and discussion
We devise a new type system to reason about keys in the context of equivalence properties. Our new type system significantly enhances the preliminary work of [28], covering a larger class of protocols that includes key-exchange protocols, protocols with setup phases, as well as protocols that branch differently depending on the decryption key.

Fig. 12. Comparison with ProVerif (x denotes failure):

Protocols                  ProVerif   TypeEq
Helios                     x          0.005s
Needham-Schroeder (sym)    0.23s      0.016s
Needham-Schroeder-Lowe     0.08s      0.008s
Yahalom-Lowe               0.48s      0.020s
Private Authentication     0.034s     0.008s
BAC                        0.038s     0.005s
Our type system requires a light type annotation that can be directly inferred from the structure of the messages. As future work, we plan to develop an automatic type inference system. In our case study, the only intricate case is the Helios protocol where the user has to write a refined type that corresponds to an over-approximation of any encrypted message. We plan to explore whether such types could be inferred automatically.
We also plan to study how to add phases to our framework, in order to cover more properties (such as unlinkability). This would require generalizing our type system to account for the fact that the type of a key may depend on the phase in which it is used.
Another limitation of our type system is that it does not address processes with too dissimilar structure. While our type system goes beyond diff-equivalence, e.g. allowing else branches to be matched with then branches, we cannot prove equivalence of processes where traces of P are dynamically mapped to traces of Q, depending on the attacker's behaviour. Such cases occur for example when proving unlinkability of the biometric passport. We plan to explore how to enrich our type system with additional rules that could cover such cases, taking advantage of the modularity of the type system.
Conversely, the fact that our type system discards some processes that are equivalent shows that it actually proves something stronger than trace equivalence: processes P and Q have to follow some form of uniformity. We could exploit this to prove stronger properties like oblivious execution, probably by further restricting our typing rules, in order to prove, e.g., the absence of side channels of a certain form.

A Typing rules and definitions
We give in Figures 13, 14, 15, 16, 17, 18 and 19 a complete version of our types and typing rules, as well as the formal definition of the well-formedness judgement for typing environments. We write keys(Γ) for the set of all keys in Γ, and define the set of all session keys in Γ analogously. In this section, we also provide additional definitions (or more precise versions of previous definitions) regarding constraints, and especially their consistency, that the proofs require.

Fig. 16. Subtyping Rules
Definition 4 (Constraint). A constraint is a couple of messages M, N separated by the symbol ∼, written M ∼ N. We consider sets of constraints, which we usually denote by c. We also consider couples (c, Γ) composed of such a set and a typing environment Γ. Finally, we denote sets of such couples by C, and call them constraint sets.
Definition 5 (Compatible environments). We say that two typing environments Γ, Γ' are compatible if they are equal on the intersection of their domains, i.e., if Γ(x) = Γ'(x) for all x ∈ dom(Γ) ∩ dom(Γ').

Definition 6 (Union of environments). Let Γ, Γ' be two compatible environments. Their union Γ ∪ Γ' is defined by (Γ ∪ Γ')(x) = Γ(x) if x ∈ dom(Γ), and (Γ ∪ Γ')(x) = Γ'(x) otherwise. Note that this function is well defined since Γ and Γ' are assumed to be compatible.
Definition 7 (Operations on constraint sets). We define two operations on constraint sets:
- the product union of constraint sets: C ∪× C' = {(c ∪ c', Γ ∪ Γ') | (c, Γ) ∈ C, (c', Γ') ∈ C', and Γ, Γ' are compatible};
- the addition of a set of constraints c to all elements of a constraint set C: C ∪∀ c = {(c' ∪ c, Γ) | (c', Γ) ∈ C}.

Fig. 19. Destructor Rules
Definition 8. For any typing environment Γ, we denote by Γ_X its restriction to variables, and by Γ_{N,K} its restriction to names and key couples.
Definition 9 (Well-typed substitutions). Let Γ be a typing environment, θ, θ' two substitutions, and c a set of constraints. We say that θ, θ' are well-typed in Γ, and write Γ_{N,K} ⊢ θ ∼ θ' : Γ_X → c, if they are ground and c = ⋃_{x ∈ dom(Γ_X)} c_x where, for all x ∈ dom(Γ_X), Γ_{N,K} ⊢ θ(x) ∼ θ'(x) : Γ(x) → c_x. Definition 10 (LL substitutions). Let Γ be an environment, φ, φ' two substitutions, and c a set of constraints.
Definition 11 (Frames associated to a set of constraints). If c is a set of constraints, let φ_l(c) and φ_r(c) be the frames composed of the terms respectively on the left and on the right of the ∼ symbol in the constraints of c (in the same order).
Definition 12 (Instantiation of constraints). If c is a set of constraints, and σ, σ' are two substitutions, let c_{σ,σ'} be the instantiation of c by σ on the left and σ' on the right, i.e., c_{σ,σ'} = {Mσ ∼ Nσ' | M ∼ N ∈ c}.
Similarly, for a constraint set C, we write C_{σ,σ'} = {(c_{σ,σ'}, Γ) | (c, Γ) ∈ C}. Definition 13 (Frames associated to environments). If Γ is a typing environment, we denote by φ^{LL}_Γ the frame containing all the keys k such that Γ(k, k) <: key_{LL}(T) for some T, all the public keys pk(k) and verification keys vk(k) for k ∈ keys(Γ), and all the nonces n such that Γ(n) = τ^{LL,a}_n (for a ∈ {∞, 1}).
Definition 14 (Branches of a type). If T is a type, we write branches(T) for the set of all types T' such that T' is not a union type and either T = T', or T is a union of types among which T' occurs. Definition 15 (Branches of an environment). For a typing environment Γ, we write branches(Γ) for the set of all environments Γ' obtained from Γ by replacing the type of each variable with one of its branches. Definition 16 (Consistency). We say that c is consistent in a typing environment Γ if, for all subsets c' ⊆ c and all substitutions σ, σ' well-typed in Γ with a constraint c_σ such that c_σ ⊆ c'_{σ,σ'}, the frames φ^{LL}_Γ ∪ φ_l(c')σ and φ^{LL}_Γ ∪ φ_r(c')σ' are statically equivalent. We say that a constraint set C is consistent if each element (c, Γ) ∈ C is consistent.

B Proofs
In this section, we provide the detailed proofs to all of our theorems. Unless specified otherwise, the environments Γ considered in the lemmas are implicitly assumed to be well-formed.

B.1 General results and soundness
In this subsection, we prove soundness for non replicated processes, as well as several results regarding the type system that this proof uses.
Lemma 3 (Subtyping properties). The following properties of subtyping hold: is LL, HL, HH or a pair type.
Proof. All these properties have simple proofs by induction on the subtyping derivation.
Lemma 4 (Terms of type T ∨ T'). For all Γ, T, T', for all ground terms t, t', for all c, if Γ ⊢ t ∼ t' : T ∨ T' → c, then Γ ⊢ t ∼ t' : T → c or Γ ⊢ t ∼ t' : T' → c.

Proof. We prove this property by induction on the derivation of Γ ⊢ t ∼ t' : T ∨ T' → c.
In the TSUB case, we know that Γ ⊢ t ∼ t' : T'' → c (with a shorter derivation) for some T'' <: T ∨ T'; thus, by Lemma 3, T'' = T ∨ T', and the claim holds by the induction hypothesis.
Finally, in the TOR case, the premise of the rule directly proves the claim.

Lemma 5 (Terms and branch types). For all Γ, T, c, for all ground terms t, t', if Γ ⊢ t ∼ t' : T → c, then there exists T' ∈ branches(T) such that Γ ⊢ t ∼ t' : T' → c.

Proof. This property is a corollary of Lemma 4: we prove it by successively applying this lemma to Γ ⊢ t ∼ t' : T → c until T is not a union type.
Lemma 6 (Substitutions type in a branch). For all Γ, c, for all ground substitutions σ, σ', if Γ_{N,K} ⊢ σ ∼ σ' : Γ_X → c, then there exists Γ' ∈ branches(Γ) such that Γ'_{N,K} ⊢ σ ∼ σ' : Γ'_X → c.

Proof. This property follows from Lemma 5. Indeed, by definition, c = ⋃_{x ∈ dom(Γ_X)} c_x for some c_x such that for all x ∈ dom(Γ_X) (= dom(σ) = dom(σ')), Γ_{N,K} ⊢ σ(x) ∼ σ'(x) : Γ(x) → c_x. Hence, by applying Lemma 5, we obtain for each x a type T_x ∈ branches(Γ(x)) such that Γ_{N,K} ⊢ σ(x) ∼ σ'(x) : T_x → c_x.

Lemma 7 (Typing terms in branches). For all Γ, T, c, for all terms t, t', for all Γ' ∈ branches(Γ), if Γ ⊢ t ∼ t' : T → c, then Γ' ⊢ t ∼ t' : T → c.

Proof. We prove this property by induction on the derivation of Γ ⊢ t ∼ t' : T → c. In most cases for the last rule applied, Γ(x) is not directly involved in the premises, for any variable x. Rather, Γ appears only in other typing judgements, or is used as Γ(k, k') or Γ(n) for some keys k, k' or nonce n, and keys or nonces cannot have union types. Hence, since the typing rules for terms do not change Γ, the claim directly follows from the induction hypothesis. For instance, in the TPAIR case, t and t' are pairs typed componentwise. Thus, by the induction hypothesis, Γ' ⊢ t_1 ∼ t'_1 : T_1 → c_1 and Γ' ⊢ t_2 ∼ t'_2 : T_2 → c_2; and therefore, by rule TPAIR, Γ' ⊢ t ∼ t' : T → c. The cases of rules TPUBKEY, TVKEY, TKEY, TENC, TENCH, TENCL, TAENC, TAENCH, TAENCL, THASHL, TSIGNH, TSIGNL, TLR', TLRL', TLRVAR, TSUB, TOR are similar.
Finally, in the TVAR case, t = t' = x for some variable x such that Γ(x) = T, and c = ∅. Rule TVAR also proves that Γ' ⊢ x ∼ x : Γ'(x) → ∅; since Γ'(x) ∈ branches(Γ(x)), by applying rule TOR as many times as necessary, we obtain Γ' ⊢ x ∼ x : T → ∅. The corollary then follows, again by induction on the typing derivation. If T is not a union type, branches(T) = {T} and the claim is directly the previous property. Otherwise, the last rule applied in the typing derivation can only be TVAR, TSUB, or TOR. The TSUB case follows trivially from the induction hypothesis since, T being a union type, it is its own only subtype. In the TVAR case, Γ(x) = T and Γ'(x) ∈ branches(T), and this proves the claim.

Lemma 8 (Typing destructors in branches). For all Γ, for all Γ' ∈ branches(Γ), any destructor typing judgement derivable in Γ is also derivable in Γ'.

Proof. This property is immediate by examining the typing rules for destructors. Indeed, Γ and Γ' only differ on variables, and the rules for destructors only involve Γ(x), for x ∈ X, in conditions of the form Γ(x) = T for some type T which is not a union type.
Hence, in these cases, Γ'(x) is also T, and the same rule can be applied in Γ' to prove the claim.
Lemma 9 (Typing processes in branches). For all Γ, C, for all processes P, Q, for all Γ' ∈ branches(Γ), if Γ ⊢ P ∼ Q → C, then Γ' ⊢ P ∼ Q → C' for some C' ⊆ C.

Proof. We prove this lemma by induction on the derivation of Γ ⊢ P ∼ Q → C. In all the cases for the last rule applied in this derivation, we can show that the conditions of this rule still hold in Γ' (instead of Γ) using:
- Lemma 7 for the conditions of the form Γ ⊢ M ∼ N : T → c;
- the fact that if Γ(x) is not a union type, then Γ'(x) = Γ(x), for conditions such as "Γ(x) = LL", "Γ(x) = τ^{l,a}_m ; τ^{l',a}_n", or "Γ(x) <: key^l(T)" (in the PLETLRK case);
- the induction hypothesis for the conditions of the form Γ ⊢ P' ∼ Q' → C'. In this case, the induction hypothesis produces a C'' ⊆ C', which can then be used to show the inclusion in C, since the resulting constraint sets are usually the ones from the premises with some terms added.
We detail here the cases of rules POUT, PPAR, and POR. The other cases are similar, as explained above.
If the last rule is POUT, then we have P = out(M).P', Q = out(N).Q', and C = C' ∪∀ c for some P', Q', M, N, C', c such that Γ ⊢ P' ∼ Q' → C' and Γ ⊢ M ∼ N : LL → c. Hence, by Lemma 7, Γ' ⊢ M ∼ N : LL → c, and by the induction hypothesis applied to P', Q', we get Γ' ⊢ P' ∼ Q' → C'' for some C'' such that C'' ⊆ C'. Therefore, by rule POUT, Γ' ⊢ P ∼ Q → C'' ∪∀ c, and since C'' ∪∀ c ⊆ C' ∪∀ c (= C), this proves the claim.
If the last rule is PPAR, then P and Q are parallel compositions, and the claim follows by applying the induction hypothesis twice, once to each parallel component.
If the last rule is POR, then there exist Γ'', x, T_1, T_2, C_1 and C_2 such that Γ = Γ'', x : T_1 ∨ T_2 and C = C_1 ∪ C_2. We write the proof for the case where Γ' ∈ branches(Γ'', x : T_1); the other case is analogous. The claim then follows by applying the induction hypothesis to Γ'', x : T_1.

Lemma 10 (Environments in the constraints). For all Γ, C, for all processes P, Q, if Γ ⊢ P ∼ Q → C, then the environments in C extend Γ only with the bound variables, names, and keys of P and Q (where bvars(P), nnames(P), nkeys(P) respectively denote the sets of bound variables, names, and key names in P).
Proof. We prove this lemma by induction on the typing derivation of Γ P ∼ Q → C.
If the last rule applied in this derivation is PZERO, we have C = {(∅, Γ )}, and the claim clearly holds.
The cases of rules PNEW and PNEWKEY are similar, extending Γ with a nonce or key instead of a variable.
In the PIFL case, there exist P', Q', C', C'' such that C = C' ∪ C''. We write the proof for the case where (c', Γ') ∈ C'; the other case is analogous. By the induction hypothesis, the claim holds for C', and since bvars(P') ⊆ bvars(P), nnames(P') ⊆ nnames(P), and similarly for Q, this proves the claim.
Lemma 11 (Environments in the constraints do not contain union types). For all Γ , C, for all processes P , Proof. This property is immediate by induction on the typing derivation.
Lemma 12 (Typing is preserved by extending the environment).
-The second point is immediate by examining the typing rules for destructors.
-The third point is immediate by induction on the type derivation of the processes. In the PZERO case, to satisfy the condition that the environment is its own only branch, rule POR needs to be applied first, in order to split all the union types in Γ , which yields the environments branches(Γ ∪ Γ ) in the constraints.
Lemma 13 (Consistency for Subsets). The following statements about constraints hold: and similarly for ∪, ∪ ∀ . 6. If σ 1 and σ 1 are ground and have disjoint domains, as well as σ 2 and σ 2 , then for all c, c σ1,σ2 Proof. Points 1 and 2 follow immediately from the definition of consistency and of static equivalence.
Point 3 follows from point 1: for every (c, Γ) ∈ C, (c ∪ c', Γ) is in C ∪∀ c', and is therefore consistent. Hence, by point 1, (c, Γ) is consistent as well.
Point 7 follows from the definitions of ·_{σ,σ'} and of consistency. Indeed, let (c', Γ') ∈ C_{σ,σ'}. There exists c such that c' = c_{σ,σ'} and (c, Γ') ∈ C. Let c_1 ⊆ c', and let θ, θ' be well-typed substitutions for c_1. Note that since σ, σ' are ground, c' is also ground.
Then the frames φ^{LL}_{Γ'} ∪ φ_l(c_1)θ and φ^{LL}_{Γ'} ∪ φ_r(c_1)θ' will also be statically equivalent, which proves the consistency of C_{σ,σ'}.
It only remains to prove that there exists c_3 such that Γ_{N,K} ⊢ σθ ∼ σ'θ' : Γ_X → c_3 and c_3 ⊆ c_2 ∪ c_{σθ,σ'θ'}.
In the PPAR case, we have P and Γ 1 , Γ 2 are compatible. By the induction hypothesis, both C 1 and C 2 contain a branch of Γ . The claim holds, as these are necessarily the same branch, since Γ 1 and Γ 2 are compatible.
In the POR case, we have Γ = Γ , x : , and the claim holds.
In the PIFL case, there exist P', Q', C', C'' such that C = C' ∪ C''. We write the proof for the case where (c', Γ') ∈ C'; the other case is analogous. By applying the induction hypothesis to Γ ⊢ P' ∼ Q' → C', there exists Γ'' ∈ branches(Γ) such that Γ'' ⊆ Γ', which proves the claim.
All remaining cases are similar. We write the proof for the PIFLR* case. In this case, there exist P', P'', Q', Q'', C', C'' such that C = C' ∪ C''. We write the proof for the case where (c, Γ') ∈ C'; the other case is analogous. By applying the induction hypothesis to Γ ⊢ P' ∼ Q' → C', there exists Γ'' ∈ branches(Γ) such that Γ'' ⊆ Γ', which proves the claim.
Lemma 15 (All branches are represented in the constraints). For all Γ, C, for all processes P, Q, if Γ ⊢ P ∼ Q → C, then for all Γ' ∈ branches(Γ), there exists (c, Γ'') ∈ C such that Γ' ⊆ Γ''.
Proof. We prove this property by induction on the type derivation of Γ P ∼ Q → C. In the PZERO case, C = {(∅, Γ )}, and by assumption branches(Γ ) = {Γ }, hence the claim trivially holds.
All remaining cases are similar. We write the proof for the PIFLR* case. In this case, there exist P , P , If (c, Γ ) ∈ C, we thus know that (c, Γ ) ∈ C or (c, Γ ) ∈ C . We write the proof for the case where (c, Γ ) ∈ C , the other case is analogous. By applying the induction hypothesis to Γ P ∼ Q → C , there exists Γ ∈ branches(Γ ) such that Γ ⊆ Γ , which proves the claim.
Lemma 16 (Refinement types). For all Γ, for all terms t, t', for all m, n, a, l, l', c, if Γ ⊢ t ∼ t' : τ^{l,a}_m ; τ^{l',a}_n → c, then c = ∅ and either:
- t = m, t' = n, a = ∞, Γ(m) = τ^{l,a}_m and Γ(n) = τ^{l',a}_n; or
- t = m, t' = n, a = 1, and (Γ(m) = τ^{l,a}_m) ∨ (m ∈ FN ∪ C ∧ l = LL), and (Γ(n) = τ^{l',a}_n) ∨ (n ∈ FN ∪ C ∧ l' = LL); or
- t and t' are variables x, y ∈ X, and there exist labels l'', l''' and names m', n' such that Γ(x) = τ^{l,a}_m ; τ^{l'',a}_{m'} and Γ(y) = τ^{l''',a}_{n'} ; τ^{l',a}_n.
In particular, if t, t' are ground, then the last case cannot occur.
Proof. The proof of this property is immediate by induction on the typing derivation for the terms. Indeed, because of the form of the type, and by well-formedness of Γ, the only rules which can lead to Γ ⊢ t ∼ t' : τ^{l,a}_m ; τ^{l',a}_n → c are TVAR, TLR¹, TLR∞, TLRVAR, and TSUB. In the TVAR, TLR¹, TLR∞ cases, the claim directly follows from the premises of the rule. In the TLRVAR case, t and t' are necessarily variables, and their types in Γ are obtained directly by applying the induction hypothesis to the premises of the rule.
Finally, in the TSUB case, Γ ⊢ t ∼ t' : T → c and T <: τ^{l,a}_m ; τ^{l',a}_n. By Lemma 3, T = τ^{l,a}_m ; τ^{l',a}_n, and we conclude by the induction hypothesis.
Lemma 17 (Encryption types). For all environment Γ , types T , T , messages M , N , M 1 , M 2 and set of constraints c: The symmetric properties to the previous four points, i.e. when the term on the right is an encryption, also hold.
Proof. We prove point 1 by induction on the derivation of Γ ⊢ M ∼ N : (T)_{T'} → c. Because of the form of the type, and by well-formedness of Γ, the only possibilities for the last rule applied are TVAR, TENC, and TSUB. The claim clearly holds in the TVAR and TENC cases. In the TSUB case, we have Γ ⊢ M ∼ N : T'' → c for some T'' <: (T)_{T'}. Hence, by Lemma 3, there exists T''' <: T such that T'' = (T''')_{T'}. Therefore, by applying the induction hypothesis to Γ ⊢ M ∼ N : T'' → c, either M and N are two variables, and the claim holds, or they are two encryptions satisfying the stated property. Point 2 has a similar proof to point 1.
We now prove point 3 by induction on the derivation of Γ ⊢ enc(M_1, M_2) ∼ N : T → c. Because of the form of the terms, the last rule applied can only be THIGH, TOR, TENC, TENCH, TENCL, TAENCH, TAENCL, TLR', TLRL' or TSUB.
The THIGH, TLR', TOR, TENC cases are actually impossible by Lemma 3, since T <: LL. In the TSUB case, we have Γ ⊢ enc(M_1, M_2) ∼ N : T' → c for some T' such that T' <: T. By transitivity of <:, T' <: LL, and the induction hypothesis proves the claim. In all other cases, T = LL and the claim holds.
Point 4 has a similar proof to point 3.
3. The symmetric properties to the previous points, i.e. when the term on the right is a signature, also hold.
Proof. We prove point 1 by induction on the derivation of Γ ⊢ sign(M, k) ∼ N : T → c. Because of the form of the terms, the last rule applied can only be THIGH, TOR, TENCH, TENCL, TAENCH, TAENCL, TSIGNH, TSIGNL, TLR', TLRL' or TSUB.
The THIGH, TLR', TOR cases are actually impossible by Lemma 3, since T <: LL. In the TSUB case, we have Γ ⊢ sign(M_1, M_2) ∼ N : T' → c for some T' such that T' <: T. By transitivity of <:, T' <: LL, and the induction hypothesis proves the claim. In all other cases, T = LL and the claim holds.
We prove point 2 by induction on the derivation of Γ ⊢ sign(M_1, M_2) ∼ N : LL → c. Because of the form of the terms and of the type (i.e., LL), the last rule applied can only be TENCH, TENCL, TAENCH, TAENCL, TSIGNH, TSIGNL, TLRL' or TSUB.
The TLRL' case is impossible, since by Lemma 16 it would imply that sign(M_1, M_2) is a variable or a nonce.
In the TSUB case, we have Γ ⊢ sign(M_1, M_2) ∼ N : T → c for some T such that T <: LL. By point 1, T = LL, and the premise of the rule thus gives a shorter derivation of Γ ⊢ sign(M_1, M_2) ∼ N : LL → c. The induction hypothesis applied to this shorter derivation proves the claim.
The TENCH, TENCL, TAENCH and TAENCL cases are impossible, since the condition of the rule would then imply Γ ⊢ sign(M_1, M_2) ∼ N : (T')_{T''} → c' (or {T'}_{T''}) for some T', T'', c', which is not possible by Lemma 17.
Finally, in the TSIGNH and TSIGNL cases, the premises of the rule directly prove the claim.
The symmetric properties, as described in point 3, have analogous proofs.
Lemma 19 (Pair types). For all environment Γ , for all M , N , T , c: 4. The symmetric properties to the previous two points (i.e. when the term on the right is a pair) also hold.
Proof. Let us prove point 1 by induction on the typing derivation of Γ ⊢ M ∼ N : T_1 * T_2 → c. Because of the form of the type, and by well-formedness of Γ, the only possibilities for the last rule applied are TVAR, TPAIR, and TSUB.
The claim clearly holds in the TVAR and TPAIR cases.
In the TSUB case, Γ ⊢ M ∼ N : T → c for some T <: T_1 * T_2, and by Lemma 3, T = T'_1 * T'_2 for some T'_1, T'_2 such that T'_1 <: T_1 and T'_2 <: T_2. Therefore, by applying the induction hypothesis to Γ ⊢ M ∼ N : T'_1 * T'_2 → c, M and N are either two variables, and the claim holds, or two pairs, i.e., there exist M_1, M_2, N_1, N_2, c_1, c_2 such that M = ⟨M_1, M_2⟩, N = ⟨N_1, N_2⟩, c = c_1 ∪ c_2, and for i ∈ {1, 2}, Γ ⊢ M_i ∼ N_i : T'_i → c_i. Hence, by subtyping, Γ ⊢ M_i ∼ N_i : T_i → c_i, and the claim holds.
We now prove point 2 by induction on the proof of Γ M 1 , M 2 ∼ N : T → c. Because of the form of the terms, the last rule applied can only be THIGH, TOR, TPAIR, TENCH, TENCL, TAENCH, TAENCL, TLR', TLRL' or TSUB.
The THIGH, TLR', and TOR cases are actually impossible by Lemma 3, since T <: LL.
The TLRL' case is also impossible, since by Lemma 16 it would imply that M 1 , M 2 is either a variable or a nonce.
The TENCH, TENCL, TAENCH, TAENCL cases are impossible, since the condition of the rule would then imply Γ M 1 , M 2 ∼ N : (T′) T″ → c′ (or {T′} T″ ) for some T′, T″, c′, which is not possible by Lemma 17. In the TPAIR case, the claim clearly holds. Finally, in the TSUB case, we have Γ M 1 , M 2 ∼ N : T′ → c for some T′ such that T′ <: T . By transitivity of <:, T′ <: LL, and we may apply the induction hypothesis to Γ M 1 , M 2 ∼ N : T′ → c. Hence either T′ = LL or T′ = T′ 1 * T′ 2 for some T′ 1 , T′ 2 . By Lemma 3, this implies in the first case that T = LL, and in the second case that T = LL or T is also a pair type (the cases T = HL and T = HH are excluded in both cases, since we already know that T <: LL).
We prove point 3 as a consequence of the first two points, by induction on the derivation of Γ M 1 , M 2 ∼ N : LL → c. The last rule in this derivation can only be TENCH, TENCL, TAENCH, TAENCL, TLR', TLRL' or TSUB by the form of the types and terms, but similarly to the previous point TENCH, TENCL, TAENCH, TAENCL, TLR' and TLRL' are actually not possible.
Hence the last rule of the derivation is TSUB. We have Γ M 1 , M 2 ∼ N : T′ → c for some T′ such that T′ <: LL. By point 2, either T′ = LL or there exist T 1 , T 2 such that T′ = T 1 * T 2 . If T′ = LL, we have a shorter proof of Γ M 1 , M 2 ∼ N : LL → c and we conclude by the induction hypothesis. Otherwise, since T′ <: LL, by Lemma 3, T 1 <: LL and T 2 <: LL. Moreover, by point 1, there exist N 1 , N 2 , c 1 , c 2 such that N = N 1 , N 2 , c = c 1 ∪ c 2 , and Γ M i ∼ N i : T i → c i for i ∈ {1, 2}. Thus by subtyping, Γ M 1 ∼ N 1 : LL → c 1 and Γ M 2 ∼ N 2 : LL → c 2 , which proves the claim.
The symmetric properties, as described in point 4, have analogous proofs.
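To illustrate the inversion property behind Lemma 19, here is a minimal executable sketch. The encoding (atoms as strings, a pair type T 1 * T 2 as a Python 2-tuple) and the function name `subtype` are mine, and only the reflexivity and pair clauses of Lemma 3 used above are modelled:

```python
# Types: atoms are strings ("LL", "HH", "HL"); a pair type T1 * T2 is
# the 2-tuple (T1, T2).  Only the fragment of the subtyping relation
# needed for the pair-inversion argument is modelled here.

def subtype(s, t):
    """Return True if s <: t in the modelled fragment."""
    if s == t:
        return True  # reflexivity
    if isinstance(s, tuple) and isinstance(t, tuple):
        # T1' * T2' <: T1 * T2 iff T1' <: T1 and T2' <: T2 (Lemma 3)
        return subtype(s[0], t[0]) and subtype(s[1], t[1])
    return False
```

In this model, `subtype(s, (T1, T2))` can only hold when `s` is itself a pair type, which is exactly the inversion step used in the TSUB cases above: a subtype of a pair type is a pair type with component-wise subtyping.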
Lemma 20 (Type for keys, nonces and constants). For all well-formed environments Γ , for all messages M , N , for all keys k ∈ K, for all nonces or constants n ∈ N ∪ C, for all c, l, the following properties hold: 8. If Γ n ∼ N : LL → c, then N = n, c = ∅, and either there exists a ∈ {1, ∞} such that Γ (n) = τ LL,a n , or n ∈ FN ∪ C. 9. The symmetric properties to points 4 to 8 (i.e. with k (resp. pk(M ), vk(M ), n) on the right) also hold.
Proof. Point 1 is easily proved by induction on the derivation of Γ M ∼ N : T → c. Indeed, by the form of the type, using Lemma 3, the last rule can only be TKEY, TVAR, or TSUB. In the TKEY and TVAR cases the claim clearly holds. In the TSUB case, we have T′ <: T such that Γ M ∼ N : T′ → c with a shorter derivation. By transitivity, T′ <: key l (T ). Thus, by applying the induction hypothesis, either M , N are variables, or they are keys and Γ (M, N ) <: T′ <: key l (T ), and in both cases the claim holds.
Similarly, we prove point 2 by induction on the derivation of Γ M ∼ N : pkey(T ) → c. Indeed, by the form of the type, and since Γ is well-formed, the last rule can only be TPUBKEY, TVAR, or TSUB. In the TPUBKEY and TVAR cases the claim clearly holds. In the TSUB case, we have T′ <: pkey(T ) such that Γ M ∼ N : T′ → c with a shorter derivation. By Lemma 3, T′ = pkey(T ). We conclude the proof by applying the induction hypothesis to the shorter derivation of Γ M ∼ N : pkey(T ) → c.
We prove point 4 by induction on the derivation of Γ k ∼ N : l → c. Because of the form of the terms and type, and by well-formedness of Γ , the last rule applied can only be TCSTFN, TENCH, TENCL, TAENCH, TAENCL, TLR', TLRL' or TSUB.
In the TCSTFN case, k = N ∈ FK and the claim holds. The TENCH, TENCL, TAENCH, TAENCL cases are impossible since they would imply that Γ k ∼ N : (T′) k′,k″ → c′ (or {T′} k′,k″ ) for some T′, k′, k″, c′, which is impossible by Lemma 17.
The TLR' and TLRL' cases are impossible. Indeed, in these cases, we have Γ k ∼ N : τ l,a m ; τ l′,a′ n → ∅ for some m, n. Lemma 16 then implies that m = k (and n = N ), i.e. that k is a nonce, which is contradictory since k ∈ K.
Finally, in the TSUB case, we have Γ k ∼ N : T′ → c for some T′ such that T′ <: l. By Lemma 3, this implies that T′ is either a pair type, an encryption type, a public or verification key type, a key type, or l. The pair, encryption, public and verification key cases are impossible, respectively by Lemma 19, Lemma 17, and point 2, since k ∈ K. The case T′ = l is trivial by the induction hypothesis. Only the case where T′ <: key l (T″) (for some T″) remains. By point 1, in that case, since k is not a variable, we have N ∈ K and Γ (k, N ) <: key l (T″), and therefore the claim holds.
Similarly, we prove point 5 by induction on the derivation of Γ pk(M ) ∼ N : LL → c. Because of the form of the terms and type, and by well-formedness of Γ , the last rule applied can only be TENCH, TENCL, TAENCH, TAENCL, TLRL', TPUBKEYL or TSUB.
The TENCH, TENCL, TAENCH, TAENCL cases are impossible since they would imply that Γ pk(M ) ∼ N : (T′) k → c′ (or {T′} k ) for some T′, k, c′, which is impossible by Lemma 17.
The TLRL' case is also impossible. Indeed, in this case, we have Γ pk(M ) ∼ N : τ l,a m ; τ l′,a′ n → ∅ for some m, n. Lemma 16 then implies that m = pk(M ) (and n = N ), i.e. that pk(M ) is a nonce, which is contradictory.
In the TSUB case, we have Γ pk(M ) ∼ N : T′ → c for some T′ such that T′ <: LL. By Lemma 3, this implies that T′ is either a pair type, an encryption type, a public or verification key type, a key type, or LL. Just as in the previous point, the pair, encryption, verification key, and key cases are impossible. If T′ = LL, the claim trivially holds by the induction hypothesis. The case where T′ = pkey(T″) (for some T″) remains. Since Γ pk(M ) ∼ N : pkey(T″) → c, by point 2 we have N = pk(N′) for some N′, c = ∅ and Γ M ∼ N′ : T″ → ∅. In addition, by Lemma 3, since T′ <: LL, there exist l, T‴ such that T″ <: eqkey l (T‴), and the claim holds.
Finally in the TPUBKEYL case, the claim clearly holds.
Point 6 has a similar proof to point 5.
The remaining properties have proofs similar to that of point 4. For point 7, i.e. if Γ n ∼ t : HH → c, only the TNONCE, TSUB, and TLR' cases are possible. The claim clearly holds in the TNONCE case.
In the TLR' case, we have Γ n ∼ t : τ HH,a m ; τ HH,a′ p → ∅ for some m, p. Lemma 16 then implies that m = n and p = t, so that Γ (n) = τ HH,a n and Γ (t) = τ HH,a′ t , which proves the claim. In the TSUB case, Γ n ∼ t : T′ → c for some T′ <: HH; thus by Lemma 3, T′ is either a pair type (impossible by Lemma 19), an encryption type (impossible by Lemma 17), a public key, verification key, or key type (impossible by points 1 to 3), or HH (and we conclude by the induction hypothesis).
For point 8, similarly, only the TNONCEL, TCSTFN, TSUB, TLRL' cases are possible. The TSUB case is proved in the same way as for the third property. The TLRL' case is proved similarly to the previous point. Finally the claim clearly holds in the TNONCEL and TCSTFN cases.
The symmetric properties, as described in point 9, have analogous proofs.
Lemma 21 (Application of destructors). For all Γ , for all t, t′, T , c, for all ground substitutions σ, σ′ such that Γ d t ∼ t′ : T . Proof. Since Γ d t ∼ t′ : T , by examining the typing rules for destructors, we can distinguish four cases for t, t′.
t = dec(x, M ) and t′ = dec(x, M′), for some variable x ∈ X , and some M , M′ ∈ X ∪ K. We know that: · either Γ N ,K k ∼ N′ : key HH (T′) → c′ for some T′, c′: in that case, by Lemma 20, N′ ∈ K and Γ (k, N′) <: key HH (T′). We have already shown that Γ (k, k) is either a subtype of LL, or seskey l,a (T″) for some l, T″. By well-formedness of Γ , only the second case is possible, and it implies that N′ = k. · or Γ N ,K k ∼ N′ : LL → c′ for some c′: in that case, by Lemma 20, N′ = k. · or Γ N ,K k ∼ N′ : seskey l,a (T″) → ∅: thus, by Lemma 20, N′ ∈ K and Γ (k, N′) = seskey l,a (T″), and by well-formedness of Γ , this implies that N′ = k. In any case we have σ′(x) = enc(N″, k) for some N″. Hence t′σ′ = dec(enc(N″, k), k). By assumption, σ, σ′ are well-typed, and σ(x) ↓ ≠ ⊥. Thus by Lemma 22, σ′(x) ↓ ≠ ⊥, and therefore N″ ↓ ≠ ⊥. Hence t′σ′ ↓ = N″ ↓ ≠ ⊥, which proves the first direction of 1). The other direction is analogous. Moreover, since Γ is well-formed, M σ = k. Let us now show that there exists a ground message N such that σ′(x) = aenc(N , pk(k)).
Thus, by Lemma 17, we have Γ N ,K M′ ∼ N′ : T → c′ for some c′ ⊆ c, and the claim holds. * In the DADECL and DADECL' cases, we have T = LL, and Γ N ,K σ(x) ∼ σ′(x) : LL → c′ for some c′ ⊆ c, i.e. Γ N ,K aenc(M′, pk(k)) ∼ aenc(N′, pk(k)) : LL → c′. In addition we have Γ M ∼ M : LL → ∅, and thus Γ k ∼ k : LL → c″ for some c″. Therefore, by Lemma 17, we have Γ N ,K M′ ∼ N′ : LL → c‴ for some c‴ ⊆ c′ (the case where Γ k ∼ k : key HH (T′) → c‴ is impossible by Lemma 20, since we already know that Γ k ∼ k : LL → c″), and the claim holds. * In the DADECH' case, we have T = T′ ∨ LL for some type T′. In addition we know that Γ N ,K σ(x) ∼ σ′(x) : LL → c′ for some c′ ⊆ c, i.e.
In addition, Γ M ∼ M : seskey HH,a (T′) → ∅, and thus Γ k ∼ k : seskey HH,a (T′) → c′ for some c′. Therefore, by Lemma 17, we know that · either there exist types T″, T‴, and constraints c″, c‴ ⊆ c′ such that T″ is a subtype of key HH (T‴), Γ pk(k) ∼ pk(k) : pkey(T″) → c″, and Γ M′ ∼ N′ : T‴ → c‴. Since Γ pk(k) ∼ pk(k) : pkey(T″) → ∅, by Lemma 20, we have Γ (k, k) <: key HH (T‴). As we already know that Γ k ∼ k : seskey HH,a (T′) → c′, by the same lemma and Lemma 3, we have T‴ = T′. Thus Γ M′ ∼ N′ : T′ → c‴, and by rule TOR, we have Γ M′ ∼ N′ : T′ ∨ LL → c‴. · or Γ M′ ∼ N′ : LL → c″, and by rule TOR we have Γ M′ ∼ N′ : T′ ∨ LL → c″. In all cases, Γ M′ ∼ N′ : T → c″ for some c″ ⊆ c′, and point 2) holds, which concludes this case.
t = t′ = checksign(x, M ). We know that Γ d t ∼ t′ : T , which can be proved using either DCHECKH, DCHECKH', DCHECKL, or DCHECKL'. In all cases, Γ (x) = LL, and M is either a verification key or a variable. By assumption we have Γ N ,K σ(x) ∼ σ′(x) : LL → c x for some c x ⊆ c.
We will now show that M σ = M σ′ = vk(k). It is clear if M is a verification key. Otherwise, M ∈ X , which means the rule applied to prove Γ d t ∼ t′ : T is DCHECKL' or DCHECKH'. In either case, from the form of the rule we have Γ M ∼ M : LL → ∅. Since σ, σ′ are well-typed, by Lemma 20, and since Γ is well-formed, M σ = M σ′ = vk(k). In addition, Lemma 18 guarantees that there exist N′, N″ such that σ′(x) = sign(N′, N″), and either Γ k ∼ N″ : eqkey HH (T′) → ∅ for some T′ or Γ k ∼ N″ : LL → c′ for some c′. In both cases, Lemma 20 implies that N″ = k. Thus we have σ′(x) = sign(N′, k). Hence, t′σ′ = checksign(sign(N′, k), vk(k)). By assumption, σ, σ′ are well-typed, and σ(x) ↓ ≠ ⊥. Thus by Lemma 22, σ′(x) ↓ ≠ ⊥, and then N′ ↓ ≠ ⊥. Therefore we have t′σ′ ↓ = N′ ↓ ≠ ⊥, which proves the first direction of 1). The other direction is analogous. t = t′ = π 1 (x). We know that Γ d t ∼ t′ : T , which can be proved using either rule DFST or DFSTL. In the first case, Γ (x) = T 1 * T 2 is a pair type, and in the second case Γ (x) = LL.
t = t′ = π 2 (x). This case is similar to the previous one. In the THIGH case, the premise of the rule implies that M ↓ ≠ ⊥ and N ↓ ≠ ⊥.
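The case analysis of Lemma 21 repeatedly evaluates destructor applications and compares the results with ⊥. The small interpreter below is an illustrative model of the normalization operation ↓ for the destructors considered here (the term encoding and function names are mine; the sketch covers symmetric and asymmetric decryption, signature checking, and the projections):

```python
BOT = "bot"  # stands for the failure value, written ⊥ in the paper

def is_app(t, head):
    """True if t is an application of the constructor `head`."""
    return isinstance(t, tuple) and t[0] == head

def normalize(t):
    """The ↓ operation, restricted to the destructors of Lemma 21.
    Terms are nested tuples: ("enc", m, k), ("aenc", m, ("pk", k)),
    ("sign", m, k), ("vk", k), ("pair", a, b); atoms are strings."""
    if not isinstance(t, tuple):
        return t
    t = tuple(normalize(x) for x in t)
    if any(x == BOT for x in t[1:]):
        return BOT  # failure of a subterm propagates
    op = t[0]
    if op == "dec":       # dec(enc(m, k), k) ↓ = m
        u, k = t[1], t[2]
        return u[1] if is_app(u, "enc") and u[2] == k else BOT
    if op == "adec":      # adec(aenc(m, pk(k)), k) ↓ = m
        u, k = t[1], t[2]
        return u[1] if is_app(u, "aenc") and u[2] == ("pk", k) else BOT
    if op == "checksign": # checksign(sign(m, k), vk(k)) ↓ = m
        u, v = t[1], t[2]
        if is_app(u, "sign") and is_app(v, "vk") and u[2] == v[1]:
            return u[1]
        return BOT
    if op == "fst":
        return t[1][1] if is_app(t[1], "pair") else BOT
    if op == "snd":
        return t[1][2] if is_app(t[1], "pair") else BOT
    return t  # constructors are left in place
```

For instance, `normalize(("dec", ("enc", "m", "k1"), "k2"))` fails (returns `BOT`), which is the situation distinguished in the decryption cases of the proof.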
Proof. We show this property by induction on the attacker term R. Induction hypothesis: the statement holds for all subterms of R.
There are several cases for R. The base cases are the cases where R is a variable, a name in FN or a constant in C.
Hence the claim holds. 3. R = pk(K) We apply the induction hypothesis to K and distinguish three cases.
(b) Sφ ↓ ≠ ⊥. In this case, by the induction hypothesis, we also have Sφ′ ↓ ≠ ⊥, and we also know that there exists c′ ⊆ c such that Γ Sφ ↓ ∼ Sφ′ ↓ : LL → c′. Thus, by rule THASHL, Γ Rφ ↓ ∼ Rφ′ ↓ : LL → c′, which proves this case. 10. R = π 1 (S). We apply the induction hypothesis to S and distinguish three cases.
(b) Sφ ↓ ≠ ⊥ and is not a pair. Then by IH there exists c′ ⊆ c such that Γ Sφ ↓ ∼ Sφ′ ↓ : LL → c′, which implies that Sφ′ ↓ is not a pair either, by Lemma 19. Hence Rφ ↓ = Rφ′ ↓ = ⊥. (c) Sφ ↓ = M 1 , M 2 is a pair. Then by IH there exists c′ ⊆ c such that Γ Sφ ↓ ∼ Sφ′ ↓ : LL → c′. This implies, by Lemma 19, that Sφ′ ↓ = M′ 1 , M′ 2 is also a pair, and that Γ M 1 ∼ M′ 1 : LL → c″ for some c″ ⊆ c′. Since Rφ ↓ = M 1 and Rφ′ ↓ = M′ 1 , this proves the case. 11. R = π 2 (S). This case is analogous to case 10. 12. R = dec(S, K). We apply the induction hypothesis to K and, similarly to case 6, we distinguish several cases.
14. R = checksign(S, K). We apply the induction hypothesis to K and, similarly to case 7, we distinguish several cases. (a) If Kφ ↓ = ⊥ or is not a verification key then, as in case 7, we can show that Rφ ↓ = Rφ′ ↓ = ⊥.
(b) If Kφ ↓ is a verification key vk(k) for some k ∈ K, then similarly to case 7 we can show that Kφ ↓ = Kφ′ ↓, and k ∈ keys(Γ ) ∪ FK. We then apply the IH to S, which creates two cases. Either Sφ ↓ = Sφ′ ↓ = ⊥, or there exists c′ ⊆ c such that Γ Sφ ↓ ∼ Sφ′ ↓ : LL → c′. In the first case, the claim holds, since Rφ ↓ = Rφ′ ↓ = ⊥. In the second case, by Lemmas 18 and 20, and since Γ is well-formed, we know that Sφ ↓ is a signature by k (= Kφ ↓) if and only if Sφ′ ↓ also is a signature by this key. Consequently, if Sφ ↓ is not signed by k, then neither is Sφ′ ↓, and Rφ ↓ = Rφ′ ↓ = ⊥. Otherwise, Sφ ↓ = sign(M, k) and Sφ′ ↓ = sign(N, k) for some M , N . Thus by IH we have Γ sign(M, k) ∼ sign(N, k) : LL → c′. Therefore, by Lemma 18 (point 2), we know that there exists c″ ⊆ c′ such that Γ M ∼ N : LL → c″. That is to say Γ Rφ ↓ ∼ Rφ′ ↓ : LL → c″. Hence the claim holds in this case.
Proof. Note that Γ N ,K = ∅, and Γ X = Γ . This proof is done by induction on the typing derivation for the terms. The claim clearly holds in the TNONCE, TNONCEL, TCSTFN, TPUBKEYL, TVKEYL, TKEY, TLR 1 , and TLR ∞ cases, since their conditions do not use Γ (x) (for any variable x) or another type judgement, and they still apply to the messages M σ and N σ′.
The claim follows directly from the induction hypothesis in all other cases except the TVAR and TLRVAR cases, which are the base cases.
In the TVAR case, the claim also holds, since M = N = x for some variable x ∈ dom(Γ ) ∪ dom(Γ ′). If x ∈ dom(Γ ), then xσ = xσ′ = x, and T = Γ (x). Thus, by rule TVAR, Γ ∪ Γ ′ xσ ∼ xσ′ : Γ (x) → ∅ and the claim holds. If x ∈ dom(Γ ′), then T = Γ ′(x), and, since by hypothesis the substitutions are well-typed, there exists c′ such that xσ and xσ′ can be given type Γ ′(x) with constraint c′. Indeed, (as Γ is well-formed) the only possible cases are TNONCE, TSUB, and TLR'. In the TNONCE case the claim clearly holds. In the TSUB case we use Lemma 3 followed by Lemma 20. In the TLR' case we apply Lemma 16 and the claim directly follows.
Proof. We prove this lemma by induction on the typing derivation of Γ M ∼ N : T → c. We distinguish several cases for the last rule in this derivation.
-TNONCE, THIGH, TOR, TLR 1 , TLR ∞ , TLR', TLRVAR: these cases are not possible, since the type they give to terms is never a subtype of LL by Lemma 3. -TVAR: this case is not possible since M , N are ground.
-TSUB: this case is directly proved by applying the induction hypothesis to the judgement Γ M ∼ N : T′ → c where T′ <: T <: LL, which appears in the conditions of this rule, and has a shorter derivation. -TLRL': in this case, Γ M ∼ N : τ LL,a n ; τ LL,a n → c′ for some nonce n, some a ∈ {∞, 1}, some c′, and c = ∅. By Lemma 16, this implies that M = N = n, and Γ (n) = τ LL,a n . By definition, there exists x such that φ Γ LL (x) = n and the claim holds with R = x. -TKEY: then M = N = k for some k ∈ keys(Γ ). By definition, there exists x such that φ Γ LL (x) = k and the claim holds with R = x. -TPUBKEYL, TVKEYL: then M = N = pk(k) (resp. vk(k)) for some k ∈ keys(Γ ) ∪ FK. If k ∈ keys(Γ ), by definition, there exists x such that φ Γ LL (x) = pk(k) (resp. vk(k)) and the claim holds with R = x. If k ∈ FK, the claim holds with R = pk(k) (resp. vk(k)).
-TPUBKEY, TVKEY: these two cases are similar; we write the proof for the TPUBKEY case. The form of this rule application gives Γ M ∼ N : pkey(T′) → ∅ for some T′ such that pkey(T′) <: LL. By Lemma 3, this implies that there exist T″, l such that T′ <: eqkey l (T″). Thus, Γ M ∼ N : eqkey l (T″) → ∅. By Lemma 20, this implies M = N = k ∈ keys(Γ ). By definition, there exists x such that φ Γ LL (x) = pk(k), and the claim holds with R = x. -TPAIR, THASHL: these cases are similar. We detail the TPAIR case. In that case, T = T 1 * T 2 for some T 1 , T 2 . By Lemma 3, T 1 , T 2 are subtypes of LL. In addition, there exist M 1 , M 2 , N 1 , N 2 , c 1 , c 2 such that M = M 1 , M 2 , N = N 1 , N 2 , c = c 1 ∪ c 2 , and Γ M i ∼ N i : T i → c i (for i ∈ {1, 2}). By applying the induction hypothesis to these two judgements (which have shorter proofs), we obtain R 1 , R 2 satisfying the claim for each pair M i , N i . Therefore the claim holds with R = R 1 , R 2 .
Therefore, the claim holds with the recipe enc(R, R′).
-TENC, TAENC: these two cases are similar; we write the proof for the TENC case. The form of this rule application gives a type (T′) T″ for some T′, T″ such that (T′) T″ <: LL. By Lemma 3, T′ <: LL and T″ <: LL. We conclude the proof of this case similarly to the TENCL case.
Therefore, the claim holds with the recipe sign(R, R′).
Lemma 27 (Low frames with consistent constraints are statically equivalent). For all ground φ, φ′, for all c, Γ , if Γ φ ∼ φ′ : LL → c and c is consistent in Γ N ,K , then φ and φ′ are statically equivalent.
Proof. We can first notice that since φ and φ′ are ground, so is c (this is easy to see by examining the typing rules for terms). Let R, R′ be two attacker recipes such that vars(R) ∪ vars(R′) ⊆ dom(φ) (= dom(φ′)).
For all x ∈ dom(φ) (= dom(φ′)), by assumption, there exist c x and a recipe R x satisfying the claim of the previous lemma for xφ and xφ′. Let R and R′ be the recipes obtained by replacing every occurrence of x with R x in respectively R and R′, for all variables x ∈ dom(φ) (= dom(φ′)).

We then have
Since c is ground, and consistent in Γ N ,K , by definition of consistency, the frames φ l (c) ∪ φ Γ LL and φ r (c) ∪ φ Γ LL are statically equivalent. Hence, by definition of static equivalence, the equality tests agree on both sides. Therefore, φ and φ′ are statically equivalent.
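The notion of static equivalence invoked in this last step can be stated as follows (a standard formulation, reconstructed here for readability; R, R′ range over attacker recipes over dom(φ)):

```latex
\[
\phi \approx_s \phi'
\;\iff\;
\forall R, R'.\;
\big( R\phi\!\downarrow\; =\; R'\phi\!\downarrow \big)
\;\Longleftrightarrow\;
\big( R\phi'\!\downarrow\; =\; R'\phi'\!\downarrow \big).
\]
```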
We now prove the following invariant, which corresponds to Lemma 2.
– the sets of bound variables in P i and P j (resp. Q i and Q j ) are disjoint, and similarly for the names;
– σ 1 P , σ 1 Q are ground, and there exist ground σ′ P ⊇ σ 1 P and σ′ Q ⊇ σ 1 Q such that:
• for all x ∈ dom(σ′ P ), σ′ P (x) ↓ = σ′ P (x), and similarly for σ′ Q ;
• c σ ⊆ c φ σ′ P ,σ′ Q ;
then there exist a word w, a multiset Q′ = {Q′ i }, constraint sets {C′ i }, a frame φ′ Q , a substitution σ 2 Q , an environment Γ ′, and constraints c′ φ and c′ σ such that:
– w = τ ∗ α τ ∗ ;
– |P′| = |Q′|;
– for all i ≠ j, the sets of bound variables in P′ i and P′ j (resp. Q′ i and Q′ j ) are disjoint, and similarly for the bound names;
– σ 2 P , σ 2 Q are ground and there exist σ″ P ⊇ σ 2 P and σ″ Q ⊇ σ 2 Q such that:
• (dom(σ″ P ) \ dom(σ 2 P )) ∩ (vars(P′) ∪ vars(φ′ P )) = ∅;
• for all x ∈ dom(σ″ P ), σ″ P (x) ↓ = σ″ P (x), and similarly for σ″ Q .
Proof. Note that the assumption that σ′ P (resp. σ′ Q ) extends σ 1 P (resp. σ 1 Q ) only with variables not appearing in P or φ P (resp. Q or φ Q ) implies that σ′ P and σ 1 P agree on P and φ P (and similarly for Q). First, we show that it is sufficient to prove this lemma in the case where Γ does not contain any union types. Indeed, assume we know the property holds in that case. Let us show that the lemma then also holds in the other case, i.e. if Γ contains union types. By hypothesis, σ′ P , σ′ Q are ground, and Γ N ,K σ′ P ∼ σ′ Q : Γ X → c σ . Hence we know by Lemma 6 that there exists a branch Γ ′ ∈ branches(Γ ) (thus Γ ′ does not contain union types), such that (Γ ′) N ,K σ′ P ∼ σ′ Q : (Γ ′) X → c σ .
Moreover, by Lemma 9, ∀i, Γ ′ P i ∼ Q i → C′ i ⊆ C i ; and by Lemma 7, Γ ′ φ P ∼ φ Q : LL → c φ . In addition, by Lemma 13, (∪ × i C′ i )∪ ∀ c φ σ′ P ,σ′ Q is a subset of (∪ × i C i )∪ ∀ c φ σ′ P ,σ′ Q and is therefore consistent. Thus, if the lemma holds when the environment does not contain union types, it can be applied to the same processes, frames, substitutions and to Γ ′, which directly concludes the proof.
Therefore, we may assume that Γ does not contain any union types. Note that, since by assumption c σ ⊆ c φ σ′ P ,σ′ Q , by Lemma 13, (∪ × i C i σ′ P ,σ′ Q ∪ ∀ c σ ) is consistent. Thus the assumption on the disjointness of the sets of bound variables (and names) in the processes implies, using Lemma 10, that each of the C i σ′ P ,σ′ Q ∪ ∀ c σ is also consistent. Moreover, this disjointness property for P′ and Q′ follows from the other points, as it is easily proved by examining the reduction rules that it is preserved by reduction.
By hypothesis, (P, φ P , σ 1 P ) reduces to (P′, φ′ P , σ 2 P ). We know from the form of the reduction rules that exactly one process P i ∈ P is reduced, while the others are unchanged. By the assumptions, there is a corresponding process Q i ∈ Q and a derivation Π i of Γ P i ∼ Q i → C i . We continue the proof by a case disjunction on the last rule of Π i . Let us first consider the cases of the rules PZERO, PPAR, PNEW, and POR.
-PZERO: then P i = Q i = 0. Hence, the reduction rule applied to P is Zero, and P′ = P \ {P i }, φ′ P = φ P , and σ 2 P = σ 1 P . The same reduction can be performed in Q. Since the other processes, the frames, environments and substitutions do not change in this reduction, all the claims clearly hold in this case (keeping σ′ P , σ′ Q , and choosing c′ φ = c φ , c′ σ = c σ ). In particular, the consistency of the constraints follows from the consistency hypothesis, since Γ is already contained in the environments appearing in each C j (by Lemma 14). -PPAR: then P i = P 1 i | P 2 i and Q i = Q 1 i | Q 2 i . Hence, the reduction rule applied to P is Par. We choose Γ ′ = Γ . The same reduction rule can be applied to Q. In this case again, the claims on the substitutions and frames, as well as the claim that c σ ⊆ c φ σ 2 P ,σ 2 Q , hold since they do not change in the reduction. Moreover the processes in P′ and Q′ are still pairwise typably equivalent. Indeed, all the processes from P and Q are unchanged, except for P i and Q i which are reduced to P 1 i , P 2 i , Q 1 i , Q 2 i , and those are typably equivalent using Π 1 and Π 2 . Finally the constraint set is still consistent. -PNEW: then P i = new n : τ l,a n .P′ i and Q i = new n : τ l,a n .Q′ i . P i is reduced to P′ i by rule New. We choose Γ ′ = Γ, n : τ l,a n . The same reduction rule can be applied to Q. The claim clearly holds. Indeed the processes are still pairwise typable: • using Π′ i in the case of P′ i and Q′ i ; • using Π j , for j ≠ i, as well as Lemma 12, for the other processes, since n does not appear in these processes by assumption. In addition, all the frames, substitutions, and constraints are unchanged; and σ, σ′ are well-typed in Γ if and only if they are well-typed in Γ ′.
-PNEWKEY: This case is analogous to the PNEW case.
-POR: this case is not possible, since we have already eliminated the case where Γ contains union types.
In all the other cases for the last rule in Π i , we know that the head symbol of P i is not | , 0 or new. Hence, the form of the reduction rules implies that P i ∈ P is reduced to exactly one process P′ i ∈ P′, while the other processes in P do not change (i.e. P′ j = P j for j ≠ i). If we show in each case that the same reduction rule that is applied to P i can be applied to reduce Q to a multiset Q′ by reducing process Q i into Q′ i , we will also have Q′ j = Q j for j ≠ i. Therefore the claim on the cardinality of the process multisets will hold.
Since P i , Q i can be typed and the head symbol of P i is not new, it is clear by examining the typing rules that the head symbol of Q i is not new either. Hence, we will choose a Γ ′ containing the same nonces and keys as Γ .
The proofs for these cases follow the same structure: -The typing rule gives us information on the form of P i and Q i .
-The form of P i gives us information on which reduction rule was applied to P.
-The form of Q i is the same as that of P i . Hence (although additional conditions may need to be checked, depending on the rule) Q i can be reduced to some process Q′ i by applying the same reduction rule that was applied to P i (or at least, a reduction rule with the same action); thus Q can be reduced too, with the same actions as P. We then check the additional conditions on the typing of the processes, frames and substitutions, and the consistency condition.
First, let us consider the POUT case.
-POUT: then P i = out(M ).P′ i and reduces to P′ i via the Out rule, and Q i = out(N ).Q′ i for some N and Q′ i . Since the Out rule can be applied to P i , M σ 1 P ↓ ≠ ⊥. We have σ 2 P = σ 1 P , φ′ P = φ P ∪ {M/ax n }, and α = new ax n .out(ax n ). Since Γ M ∼ N : LL → c and Γ N ,K σ′ P ∼ σ′ Q : Γ X → c σ , by Lemma 24, we have Γ N ,K M σ′ P ∼ N σ′ Q : LL → c′ for some c′. That is to say Γ N ,K M σ 1 P ∼ N σ 1 Q : LL → c′. Since we also know that M σ 1 P ↓ ≠ ⊥, by Lemma 22, we also have N σ 1 Q ↓ ≠ ⊥. Hence, the same reduction rule Out can be applied to reduce the process Q i into Q′ i , and the claim on the reduction of Q holds. We choose Γ ′ = Γ . We have σ 2 Q = σ 1 Q , and φ′ Q = φ Q ∪ {N/ax n }. We also keep σ′ P , σ′ Q unchanged, and choose c′ φ = c φ ∪ c′ and c′ σ = c σ . The substitutions σ 1 P , σ 1 Q are not extended by the reduction, and the typing environment does not change, which trivially proves the claim regarding the substitutions. In addition, since by assumption c σ ⊆ c φ σ 1 P ,σ 1 Q , the claim on c′ σ holds. Moreover, since only M and N are added to the frames in the reduction, Π suffices to prove the claim that Γ ′ φ′ P ∼ φ′ Q : LL → c′ φ . Since all processes other than P i and Q i are unchanged by the reduction (and since the typing environment is also unchanged), Π suffices to prove the claim that ∀j ≠ i, Γ ′ P′ j ∼ Q′ j → C j .
Thus, in this case, it only remains to prove the consistency of the resulting constraint set, which holds by hypothesis. Hence the claim holds in this case.
In the remaining cases, from the form of the typing rules for processes, the head symbol of neither P i nor Q i is out. Thus, the reduction applied to P i (from the assumption), as well as the one applied to Q i (which, as we will show, has the same action as the rule for P i ), cannot be Out. Therefore no new term is output on either side, and φ′ P = φ P and φ′ Q = φ Q . Hence the claim on the domains of the frames holds by assumption. Moreover, as we will see, in all cases Γ ′ is either Γ , or Γ, x : T where x is a variable bound in (the head of) P i and Q i , and T is not a union type. Besides, in the cases where we choose Γ ′ = Γ , it is true (by hypothesis) that for j ≠ i, Γ ′ P j ∼ Q j → C j . In the cases where we choose Γ ′ = Γ, x : T , where x is bound in P i and Q i , then, since the processes are assumed to use different variable names, x does not appear in P j or Q j (for j ≠ i). Hence, if j ≠ i, using the assumption that Γ P j ∼ Q j → C j , by Lemma 12, we have Γ ′ P j ∼ Q j → C j . Hence, for each remaining possible last rule of Π i , we only have to show that: a) The same reduction rule can be applied to Q i as to P i , with the same action. (Except in the case of the rule PIFLR, as we will see, where rule If-Then may be applied on one side while rule If-Else is applied on the other side, but this has no influence on the argument, as these two rules both represent a silent action, and have a very similar form.) b) There exist σ′ P and σ′ Q , ground, and containing σ 2 P and σ 2 Q respectively, that satisfy the conditions on the domains, contain only messages that do not reduce to ⊥, and such that Γ ′ N ,K σ′ P ∼ σ′ Q : Γ ′ X → c′ σ for some set of constraints c′ σ . Since at most one variable x is added to the substitutions in the reduction, we will show in each case that we can choose these substitutions such that either σ′ P and σ′ Q are unchanged, or σ′ P is extended with {M/x} and σ′ Q with {N/x} for some messages M , N .
In all cases, it is clear from the reduction rules that M ↓ ≠ ⊥ and N ↓ ≠ ⊥. We will then only need to check the well-typedness condition on variable x, i.e. Γ ′ N ,K σ′ P (x) ∼ σ′ Q (x) : Γ ′ (x) → c x for some c x . We can then choose c′ σ = c σ ∪ c x . As we will see in the proof, we will always have c x ⊆ c φ σ′ P ,σ′ Q ∪ c σ .
In addition, c′ φ = c φ , and by assumption, x cannot appear in c φ ; thus c φ σ′ P ,σ′ Q = c φ σ P ,σ Q . Therefore, since by assumption c σ ⊆ c φ σ P ,σ Q , the claim that c′ σ ⊆ c′ φ σ′ P ,σ′ Q will always hold.
c) The new processes obtained by reducing P i and Q i are typably equivalent in Γ ′, with a constraint set C′ i such that (∪ × j≠i C j )∪ × C′ i ∪ ∀ c φ σ′ P ,σ′ Q is consistent. The actual claim, from the statement of the lemma, is that (∪ × j≠i C j )∪ × C′ i ∪ ∀ c′ φ σ′ P ,σ′ Q is consistent, but we can show that the previous condition is sufficient.
In the case where Γ ′ = Γ , we have σ′ P , σ′ Q unchanged, C′ j = C j for j ≠ i, and c′ σ = c σ . Thus the proposed condition is clearly sufficient (it is even necessary in this case).
In the case where Γ ′ = Γ, x : T for some T which is not a union type, and the substitutions σ′ P , σ′ Q are the previous ones extended with a term associated to x, the proof that the condition is sufficient is more involved. Assuming we show that Γ, x : T P′ i ∼ Q′ i → C′ i , by Lemma 14, we will also have that all the Γ i appearing in the elements of C′ i contain x : T (since T is not a union type). Hence it is sufficient to ensure the consistency of S∪ ∀ c φ σ′ P ,σ′ Q , where S denotes the combined constraint set. Since σ′ P extends σ P with {σ′ P (x)/x} (and similarly for Q), it then suffices to show the consistency of S∪ ∀ c φ σ P ,σ Q σ′ P (x)/x,σ′ Q (x)/x .
The messages σ′ P (x) and σ′ Q (x) are ground, and Γ ′ N ,K σ′ P (x) ∼ σ′ Q (x) : T → c x (which we will show for each case as point b)). Hence by Lemma 13, if S∪ ∀ c φ σ P ,σ Q ∪ ∀ c x is consistent, then S∪ ∀ c φ σ P ,σ Q σ′ P (x)/x,σ′ Q (x)/x is consistent. Moreover, as explained in point b), we will show in each case that c x ⊆ c φ σ′ P ,σ′ Q . Thus S∪ ∀ c φ σ P ,σ Q ∪ ∀ c x = S∪ ∀ c φ σ P ,σ Q . Therefore, by the previous implication, it is sufficient to prove that S∪ ∀ c φ σ P ,σ Q is consistent to ensure that S∪ ∀ c φ σ P ,σ Q σ′ P (x)/x,σ′ Q (x)/x is consistent. This is the condition stated at the beginning of this point. We can now prove the remaining cases for the last rule of Π i , that is to say the cases of the rules PIN, PLET, PLETDEC, PLETADECSAME, PLETADECDIFF, PLETLRK, PIFL, PIFLR, PIFS, PIFLR*, PIFP, PIFI, PIFLR'*, and PIFALL.
-PIN: then P i = in(x).P′ i and reduces to P′ i via the In rule, and Q i = in(x).Q′ i for some Q′ i . We have α = in(R) for some attacker recipe R such that vars(R) ⊆ dom(φ P ), and Rφ P σ 1 P ↓ = Rφ P σ′ P ↓ ≠ ⊥. We also have σ 2 P = σ 1 P ∪ {Rφ P σ 1 P ↓ /x}, φ′ P = φ P . The same reduction rule In can be applied to reduce the process Q i into Q′ i . Indeed, this follows from Lemma 23, using the fact that, by Lemma 24, Γ N ,K φ P σ′ P ∼ φ Q σ′ Q : LL → c, for some c ⊆ c φ σ′ P ,σ′ Q ∪ c σ . Therefore point a) holds. We choose Γ ′ = Γ, x : LL. We have σ 2 Q = σ 1 Q ∪ {Rφ Q σ 1 Q ↓ /x}. Lemmas 24 and 23, previously evoked, guarantee that point b) holds. Moreover, Π′ and the fact that the resulting constraint set is consistent by hypothesis prove point c) and conclude this case.
-PLET: then P i = let x = t in P′ i else P″ i , and Q i = let x = t′ in Q′ i else Q″ i . P i reduces to either P′ i via the Let-In rule, or P″ i via the Let-Else rule. We have α = τ . Let σ = σ′ P | vars(t)∪vars(t′) , σ′ = σ′ Q | vars(t)∪vars(t′) , and Γ ″ = Γ N ,K ∪ (Γ | vars(t)∪vars(t′) ). Since by assumption Γ N ,K σ′ P ∼ σ′ Q : Γ X → c σ , we have Γ N ,K σ ∼ σ′ : Γ ″ X → c for some c ⊆ c σ . Hence, by Lemma 21, using Π, we have tσ ↓ ≠ ⊥ ⇐⇒ t′σ′ ↓ ≠ ⊥, i.e. tσ 1 P ↓ ≠ ⊥ ⇐⇒ t′σ 1 Q ↓ ≠ ⊥. Therefore, if rule Let-In is applied to P i then it can also be applied to reduce Q i into Q′ i , and if the rule applied to P i is Let-Else then it can also be applied to reduce Q i into Q″ i . This proves point a). We prove here the Let-In case. The Let-Else case is similar (although slightly easier, since no new variable is added to the substitutions). In this case we have σ 2 P = σ 1 P ∪ {tσ 1 P ↓ /x} and σ 2 Q = σ 1 Q ∪ {t′σ 1 Q ↓ /x}. In addition, by hypothesis, tσ′ P = tσ 1 P = tσ and t′σ′ Q = t′σ 1 Q = t′σ′. By Lemma 21, we know in this case that there exists c′ ⊆ c such that Γ N ,K tσ′ P ↓ ∼ t′σ′ Q ↓ : T → c′. Thus, by Lemma 4, there exists T′ ∈ branches(T ) such that Γ N ,K tσ′ P ↓ ∼ t′σ′ Q ↓ : T′ → c′. We choose Γ ′ = Γ, x : T′, and extend σ′ P with {tσ′ P ↓ /x} and σ′ Q with {t′σ′ Q ↓ /x}. Since Γ does not contain union types, Γ ′ ∈ branches(Γ, x : T ). Since c′ ⊆ c ⊆ c σ and Γ N ,K tσ′ P ↓ ∼ t′σ′ Q ↓ : T′ → c′, point b) holds. We now prove that point c) holds. Using Π′, we have Γ, x : T P′ i ∼ Q′ i → C′ i , and by Lemma 9 the corresponding constraint set is consistent by hypothesis.
Hence, by Lemma 13, the extended constraint set is also consistent. This proves point c) and concludes this case. -PLETDEC: then there exist y, P′ i , P″ i , Q′ i , Q″ i such that P i = let x = dec(y, k 1 ) in P′ i else P″ i , and Q i = let x = dec(y, k 2 ) in Q′ i else Q″ i , and Γ (y) = LL. P i reduces to either P′ i via the Let-In rule, or P″ i via the Let-Else rule. We have α = τ . In addition, by hypothesis, σ′ P (y) = σ 1 P (y) and σ′ Q (y) = σ 1 Q (y). We consider two cases.
• If dec(yσ P , k 1 ) ↓ ≠ ⊥ then the reduction applied to P i is Let-In, and P i is reduced to P′ i . This also implies that there exists M such that yσ P = enc(M, k 1 ). Since Γ (y) = LL, we know by assumption that Γ N ,K yσ P ∼ yσ Q : LL → c for some constraint c ⊆ c σ . Hence, by Lemma 17, there exist N , k, T , c′ ⊆ c σ such that yσ Q = enc(N, k), Γ (k 1 , k) = key HH (T ), and Γ N ,K M ∼ N : T → c′ . Thus, by Lemma 4, there exists T′ ∈ branches(T ) such that Γ N ,K M ∼ N : T′ → c′ . Two cases are possible. * If k = k 2 , then dec(yσ Q , k 2 ) ↓ = N , and rule Let-In can be applied to reduce Q i into Q′ i , which proves point a). In this case we have σ 2 P = σ 1 P ∪ {M/x} and σ 2 Q = σ 1 Q ∪ {N/x}. We choose Γ′ = Γ, x : T′ , σ′ P = σ P ∪ {M/x} and σ′ Q = σ Q ∪ {N/x}. Since Γ′ does not contain union types, Γ′ ∈ branches(Γ, x : T′). Since c′ ⊆ c σ and Γ N ,K M ∼ N : T′ → c′ , point b) holds. We now prove that point c) holds. Using Π i , we have Γ, x : T′ P′ i ∼ Q′ i → C′ i . This last constraint set is consistent by hypothesis. Hence, by Lemma 13, (∪ × j ≠ i C j ) ∪ × C′ i ∪ ∀ c φ σ P ,σ Q is also consistent. This proves point c) and concludes this case. * If k ≠ k 2 , then dec(yσ Q , k 2 ) ↓ = ⊥, and rule Let-Else can be applied to reduce Q i into Q″ i , which proves point a). In this case we have σ 2 P = σ 1 P ∪ {M/x} and σ 2 Q = σ 1 Q . We choose Γ′ = Γ, x : T′ , σ′ P = σ P ∪ {M/x} and σ′ Q = σ Q ∪ {N/x} (by well-formedness of the processes, x does not appear in Q″ i ). Since Γ′ does not contain union types, Γ′ ∈ branches(Γ, x : T′). Since c′ ⊆ c σ and Γ N ,K M ∼ N : T′ → c′ , point b) holds. We now prove that point c) holds. Using Π 1,k i , we have Γ, x : T′ P′ i ∼ Q″ i → C′ i . This last constraint set is consistent by hypothesis. Hence, by Lemma 13, (∪ × j ≠ i C j ) ∪ × C′ i ∪ ∀ c φ σ P ,σ Q is also consistent. This proves point c) and concludes this case.
• If dec(yσ P , k 1 ) ↓ = ⊥ then the reduction applied to P i is Let-Else, and P i is reduced to P″ i . Again we distinguish two cases. * If dec(yσ Q , k 2 ) ↓ ≠ ⊥ then rule Let-In can be applied to reduce Q i into Q′ i . This also implies that there exists N such that yσ Q = enc(N, k 2 ). Since Γ (y) = LL, we know by assumption that Γ N ,K yσ P ∼ yσ Q : LL → c for some constraint c ⊆ c σ . Hence, by Lemma 17, there exist M , k, T , c′ ⊆ c σ such that yσ P = enc(M, k), Γ (k, k 2 ) = key HH (T ), and Γ N ,K M ∼ N : T → c′ . Thus, by Lemma 4, there exists T′ ∈ branches(T ) such that Γ N ,K M ∼ N : T′ → c′ .
In this case we have σ 2 P = σ 1 P and σ 2 Q = σ 1 Q ∪ {N/x}. We choose Γ′ = Γ, x : T′ , σ′ P = σ P ∪ {M/x} and σ′ Q = σ Q ∪ {N/x} (by well-formedness of the processes, x does not appear in P″ i ). Since Γ′ does not contain union types, Γ′ ∈ branches(Γ, x : T′). Since c′ ⊆ c σ and Γ N ,K M ∼ N : T′ → c′ , point b) holds. We now prove that point c) holds. Using Π 2,k i , we have Γ, x : T′ P″ i ∼ Q′ i → C k . Hence, by Lemma 9, there exists C′ i ⊆ C k such that Γ′ P″ i ∼ Q′ i → C′ i . This last constraint set is consistent by hypothesis. Hence, by Lemma 13, (∪ × j ≠ i C j ) ∪ × C′ i ∪ ∀ c φ σ P ,σ Q is also consistent. This proves point c) and concludes this case. * If dec(yσ Q , k 2 ) ↓ = ⊥ then rule Let-Else can be applied to reduce Q i into Q″ i . In this case we have σ 2 P = σ 1 P and σ 2 Q = σ 1 Q . We choose Γ′ = Γ , σ′ P = σ P and σ′ Q = σ Q . Since the substitutions and environments do not change, point b) clearly holds. We now prove that point c) holds.
This last constraint set is consistent by hypothesis. Hence, by Lemma 13, (∪ × j ≠ i C j ) ∪ × C′ i ∪ ∀ c φ σ P ,σ Q is also consistent. This proves point c) and concludes this case.
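The case analysis above hinges on how symbolic decryption reduces: dec(enc(M, k), k) yields M, and every other application of dec yields the failure constant ⊥, which selects the Let-In or Let-Else branch. The following sketch (an illustration only, not the paper's formal semantics; the tuple encoding of terms and the names `enc`, `dec`, `let_branch` are choices of this sketch) makes that case split concrete.

```python
# Illustrative sketch of the symbolic reduction behind rule PLetDec:
# terms are tuples, dec(enc(M, k), k) -> M, anything else -> bot.
BOT = "bot"  # stands for the failure constant ⊥

def enc(m, k):
    return ("enc", m, k)

def dec(t, k):
    # Decryption succeeds only on a ciphertext built with the same key.
    if isinstance(t, tuple) and t[0] == "enc" and t[2] == k:
        return t[1]  # the plaintext M
    return BOT

def let_branch(t, k):
    # Mirrors the Let-In / Let-Else case split of the proof above.
    return "Let-In" if dec(t, k) != BOT else "Let-Else"

# Matching keys: decryption succeeds, the then branch (Let-In) is taken.
assert let_branch(enc("M", "k1"), "k1") == "Let-In"
# k != k2: decryption yields bot, so the else branch (Let-Else) is taken.
assert let_branch(enc("M", "k1"), "k2") == "Let-Else"
# Applying dec to a non-ciphertext (e.g. a nonce) also fails, as in PLetLrK.
assert let_branch("n", "k1") == "Let-Else"
```

The third assertion also previews the PLETLRK case below, where a destructor applied to a nonce or key necessarily fails on both sides.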
-PLETADECSAME and PLETADECDIFF: these two cases are similar to the PLETDEC case.
-PLETLRK: then P i = let x = d(y) in P′ i else P″ i and Q i = let x = d(y) in Q′ i else Q″ i for some P′ i , P″ i , Q′ i , Q″ i . P i reduces to either P′ i via the Let-In rule, or P″ i via the Let-Else rule.
We have α = τ . By assumption we also have σ P (y) = σ 1 P (y) and σ Q (y) = σ 1 Q (y). By hypothesis, σ P , σ Q are ground and Γ N ,K σ P ∼ σ Q : Γ X → c σ . Hence, by definition of the well-typedness of substitutions, there exists c y ⊆ c σ such that Γ N ,K σ P (y) ∼ σ Q (y) : τ l,a m ; τ l′,a′ n → c y , or Γ N ,K σ P (y) ∼ σ Q (y) : key l (T ) → c y . In the first case, by Lemma 16, σ P (y) = m and σ Q (y) = n. Since m, n are nonces, d(m) ↓ = d(n) ↓ = ⊥, and we thus have d(σ P (y)) ↓ = d(σ Q (y)) ↓ = ⊥. Similarly, in the second case, by Lemma 20, σ P (y) and σ Q (y) are both keys in K, and thus d(σ P (y)) ↓ = d(σ Q (y)) ↓ = ⊥. Therefore the reduction rule applied to P i can only be Let-Else, and P i is reduced to P″ i . Since we also have d(σ Q (y)) ↓ = ⊥, this rule can also be applied to reduce Q i into Q″ i . This proves point a). We therefore have σ 2 P = σ 1 P and σ 2 Q = σ 1 Q . We choose Γ′ = Γ . Since the substitutions and typing environments are unchanged by the reduction, point b) clearly holds. Moreover, Π′ and the fact that (∪ × j ≠ i C j ) ∪ × C′ i ∪ ∀ c φ σ P ,σ Q is consistent by hypothesis prove point c) and conclude this case.
Thus, by Lemma 27, φ and φ′ are statically equivalent. Hence, in particular, M σ P = M′σ P ⇐⇒ N σ Q = N′σ Q . Therefore, if rule If-Then is applied to P i then it can also be applied to reduce Q i into Q ⊤ i , and if the rule applied to P i is If-Else then it can also be applied to reduce Q i into Q ⊥ i . This proves point a). We prove here the If-Then case. The If-Else case is similar. We choose Γ′ = Γ . We have σ 2 P = σ 1 P and σ 2 Q = σ 1 Q . Since the substitutions and environments do not change in this reduction, point b) trivially holds. Moreover, by hypothesis, the original constraint set is consistent. Thus, by Lemma 13, (∪ × j ≠ i C j ) ∪ × C′ i ∪ ∀ c φ σ P ,σ Q is consistent. Π′ and this fact prove point c) and conclude this case.
The case where M σ P ↓ = N σ Q ↓ = ⊥ or M′σ P ↓ = N′σ Q ↓ = ⊥ remains. In that case, the rule applied to P i is necessarily If-Else, and this rule can also be applied to Q i . We conclude the proof similarly to the previous case.
i . P i reduces to P′ i which is either P ⊤ i via the If-Then rule, or P ⊥ i via the If-Else rule. We have α = τ in any case. In addition, by assumption, tσ P = tσ 1 P for t ∈ {M 1 , M 2 } and t′σ Q = t′σ 1 Q for t′ ∈ {N 1 , N 2 }. By hypothesis, σ P , σ Q are ground and Γ N ,K σ P ∼ σ Q : Γ X → c σ . Hence, by Lemma 24, using Π, there exists c′ such that Γ N ,K M 1 σ P ∼ N 1 σ Q : τ l,1 m ; τ l′,1 n → c′ . Therefore by Lemma 16, M 1 σ P = m and N 1 σ Q = n. Similarly we can show that M 2 σ P = m′ and N 2 σ Q = n′ . There are four cases for b and b′ , which are all similar. We write the proof for the case where b = ⊤ and b′ = ⊥, i.e. m = m′ and n ≠ n′ . Thus the reduction rule applied to P i is If-Then and P′ i = P ⊤ i . On the other hand, rule If-Else can be applied to reduce Q i into Q′ i = Q ⊥ i . This proves point a) (these rules both correspond to silent actions). We choose Γ′ = Γ . We have σ 2 P = σ 1 P and σ 2 Q = σ 1 Q .
Since the substitutions and environments do not change in this reduction, point b) trivially holds. Moreover, Π′ and the fact that (∪ × j ≠ i C j ) ∪ × C′ i ∪ ∀ c φ σ P ,σ Q is consistent by hypothesis prove point c) and conclude this case.
i . P i reduces to P′ i which is either P ⊤ i via the If-Then rule, or P ⊥ i via the If-Else rule. We have α = τ in any case. In addition, by hypothesis, tσ P = tσ 1 P for t ∈ {M, M′} and t′σ Q = t′σ 1 Q for t′ ∈ {N, N′}. By hypothesis, σ P , σ Q are ground and Γ N ,K σ P ∼ σ Q : Γ X → c σ . Hence, by Lemma 24, using Π, there exists c′ such that Γ N ,K M σ P ∼ N σ Q : LL → c′ . Similarly we can show that Γ N ,K M′σ P ∼ N′σ Q : HH → c″ for some c″ . Hence, by Lemma 22, either M σ P ↓ = N σ Q ↓ = ⊥; or M σ P ↓ = M σ P ≠ ⊥ and N σ Q ↓ = N σ Q ≠ ⊥. Similarly, either M′σ P ↓ = N′σ Q ↓ = ⊥; or M′σ P ↓ = M′σ P ≠ ⊥ and N′σ Q ↓ = N′σ Q ≠ ⊥. Let us first consider the case where M σ P ↓ ≠ ⊥, M′σ P ↓ ≠ ⊥, N σ Q ↓ ≠ ⊥ and N′σ Q ↓ ≠ ⊥. Therefore by Lemma 25, M σ P ≠ M′σ P and N σ Q ≠ N′σ Q . Hence the reduction for P i is necessarily If-Else, which is also applicable to reduce Q i to Q ⊥ i . This proves point a). We choose Γ′ = Γ . We have σ 2 P = σ 1 P and σ 2 Q = σ 1 Q . Since the substitutions and typing environments do not change in this reduction, point b) trivially holds. Moreover, Π′ and the fact that (∪ × j ≠ i C j ) ∪ × C′ i ∪ ∀ c φ σ P ,σ Q is consistent by hypothesis prove point c) and conclude this case.
The case where M σ P ↓ = N σ Q ↓ = ⊥ or M′σ P ↓ = N′σ Q ↓ = ⊥ remains. In that case, the rule applied to P i is necessarily If-Else, and this rule can also be applied to Q i . We conclude the proof similarly to the previous case.
This case is similar to the PIFS case: the incompatibility of the types of M , N and M′ , N′ ensures that the processes can only follow the else branch. P i reduces to P′ i which is either P ⊤ i via the If-Then rule, or P ⊥ i via the If-Else rule. We have α = τ in any case. In addition, by hypothesis, tσ P = tσ 1 P for t ∈ {M, M′} and t′σ Q = t′σ 1 Q for t′ ∈ {N, N′}. By hypothesis, σ P , σ Q are ground and Γ N ,K σ P ∼ σ Q : Γ X → c σ . Hence, by Lemma 24, using Π, there exists c′ such that Γ N ,K M σ P ∼ N σ Q : T ∗ T′ → c′ .
Hence, by Lemma 22, either M σ P ↓ = N σ Q ↓ = ⊥; or M σ P ↓ = M σ P ≠ ⊥ and N σ Q ↓ = N σ Q ≠ ⊥. Let us first consider the case where M σ P ↓ ≠ ⊥ and N σ Q ↓ ≠ ⊥. By Lemma 19, M σ P and N σ Q both are pairs. Similarly we can show that Γ N ,K M′σ P ∼ N′σ Q : τ l,a m ; τ l′,a′ n → c″ for some c″ . By Lemma 16, this implies that M′σ P = m and N′σ Q = n. Thus neither of these two terms is a pair. Therefore M σ P ≠ M′σ P and N σ Q ≠ N′σ Q . The end of the proof for this case is then the same as for the PIFS case.
The case where M σ P ↓ = N σ Q ↓ = ⊥ or M′σ P ↓ = N′σ Q ↓ = ⊥ remains. In that case, the rule applied to P i is necessarily If-Else, and this rule can also be applied to Q i . We conclude the proof similarly to the previous case.
i , some messages M , N , and some t ∈ C ∪ K ∪ N . P i reduces to P′ i which is either P ⊤ i via the If-Then rule, or P ⊥ i via the If-Else rule. We have α = τ in any case. In addition, by assumption, M σ P = M σ 1 P and N σ Q = N σ 1 Q . By hypothesis, σ P , σ Q are ground and Γ N ,K σ P ∼ σ Q : Γ X → c σ . Hence, by Lemma 24, using Π, there exists c′ ⊆ c φ σ P ,σ Q ∪ c σ such that Γ N ,K M σ P ∼ N σ Q : LL → c′ . Hence, by Lemma 22, either M σ P ↓ = N σ Q ↓ = ⊥; or M σ P ↓ = M σ P ≠ ⊥ and N σ Q ↓ = N σ Q ≠ ⊥. Similarly, either t ↓ = ⊥ or t ↓ = t ≠ ⊥. Let us first consider the case where M σ P ↓ ≠ ⊥, N σ Q ↓ ≠ ⊥, and t ↓ ≠ ⊥. We then show that M σ P = t if and only if N σ Q = t (note that since t is ground, t = tσ P = tσ Q ). If M σ P = t, then Γ N ,K t ∼ N σ Q : LL → c′ . In all possible cases for t, i.e. t ∈ K, t ∈ N , and t ∈ C, Lemma 20 implies that N σ Q = t. This proves the first direction of the equivalence; the other direction is similar. Therefore, if rule If-Then is applied to P i then it can also be applied to reduce Q i into Q ⊤ i , and if the rule applied to P i is If-Else then it can also be applied to reduce Q i into Q ⊥ i . This proves point a). We prove here the If-Then case. The If-Else case is similar. We choose Γ′ = Γ . We have σ 2 P = σ 1 P and σ 2 Q = σ 1 Q . Since the substitutions and typing environments do not change in this reduction, point b) trivially holds. Moreover, by hypothesis, (∪ × j ≠ i C j ) ∪ × C′ i ∪ ∀ c φ σ P ,σ Q is consistent. This fact proves point c) and concludes this case.
The case where M σ P ↓ = N σ Q ↓ = ⊥ or t ↓ = ⊥ remains. In that case, the rule applied to P i is necessarily If-Else, and this rule can also be applied to Q i . We conclude the proof similarly to the previous case.
i . P i reduces to P′ i which is either P ⊤ i via the If-Then rule, or P ⊥ i via the If-Else rule. We have α = τ in any case. In addition, by assumption, tσ P = tσ 1 P for t ∈ {M 1 , M 2 } and t′σ Q = t′σ 1 Q for t′ ∈ {N 1 , N 2 }. By hypothesis, σ P , σ Q are ground and Γ N ,K σ P ∼ σ Q : Γ X → c σ . Hence, by Lemma 24, using Π, there exists c′ such that Γ N ,K M 1 σ P ∼ N 1 σ Q : τ l,∞ m ; τ l′,∞ n → c′ . Therefore by Lemma 16, M 1 σ P = m and N 1 σ Q = n. Similarly we can show that M 2 σ P = m and N 2 σ Q = n. Hence M 1 σ P = M 2 σ P and N 1 σ Q = N 2 σ Q . Thus the reduction rule applied to P i is If-Then and P′ i = P ⊤ i . On the other hand, rule If-Then can also be applied to reduce Q i into Q′ i = Q ⊤ i . This proves point a). Note that we still need to type the other branch, even though it is not used here, as when replicating the process this test may fail if M 1 , N 1 and M 2 , N 2 are nonces from different sessions. We choose Γ′ = Γ . We have σ 2 P = σ 1 P and σ 2 Q = σ 1 Q . Since the substitutions and environments do not change in this reduction, point b) trivially holds. Moreover, Π′ and the fact that (∪ × j ≠ i C j ) ∪ × C′ i ∪ ∀ c φ σ P ,σ Q , with C′ i = C i , is consistent by hypothesis prove point c) and conclude this case.
i . P i reduces to P′ i which is either P ⊤ i via the If-Then rule, or P ⊥ i via the If-Else rule. We have α = τ in any case. In addition, by assumption, tσ P = tσ 1 P for t ∈ {M 1 , M 2 } and t′σ Q = t′σ 1 Q for t′ ∈ {N 1 , N 2 }. By hypothesis, σ P , σ Q are ground and Γ N ,K σ P ∼ σ Q : Γ X → c σ . Hence, by Lemma 24, using Π, there exists c′ such that Γ N ,K M 1 σ P ∼ N 1 σ Q : τ l,a m ; τ l′,a′ n → c′ . Therefore by Lemma 16, M 1 σ P = m and N 1 σ Q = n. Similarly, using Lemma 16, we can show that M 2 σ P = m′ and N 2 σ Q = n′ .
Moreover, since the nonce types of M 1 and M 2 differ, we know that m ≠ m′ (by well-formedness of the processes), and similarly n ≠ n′ . Hence, M 1 σ P ≠ M 2 σ P and N 1 σ Q ≠ N 2 σ Q . Thus the reduction rule applied to P i is If-Else and P′ i = P ⊥ i . On the other hand, rule If-Else can also be applied to reduce Q i into Q′ i = Q ⊥ i . This proves point a). We choose Γ′ = Γ . We have σ 2 P = σ 1 P and σ 2 Q = σ 1 Q . Since the substitutions and environments do not change in this reduction, point b) trivially holds. Moreover, Π′ and the fact that (∪ × j ≠ i C j ) ∪ × C′ i ∪ ∀ c φ σ P ,σ Q is consistent by hypothesis prove point c) and conclude this case.
i . P i reduces to P′ i which is either P ⊤ i via the If-Then rule, or P ⊥ i via the If-Else rule. In addition, by hypothesis, tσ P = tσ 1 P for t ∈ {M, M′} and t′σ Q = t′σ 1 Q for t′ ∈ {N, N′}. Four cases are possible: • M σ P ↓ ≠ ⊥, M′σ P ↓ ≠ ⊥, N σ Q ↓ ≠ ⊥, N′σ Q ↓ ≠ ⊥, M σ P = M′σ P and N σ Q = N′σ Q ; • or M σ P ↓ ≠ ⊥, M′σ P ↓ ≠ ⊥, M σ P = M′σ P and (N σ Q ≠ N′σ Q or N σ Q ↓ = ⊥ or N′σ Q ↓ = ⊥); • or N σ Q ↓ ≠ ⊥, N′σ Q ↓ ≠ ⊥, N σ Q = N′σ Q and (M σ P ≠ M′σ P or M σ P ↓ = ⊥ or M′σ P ↓ = ⊥); • or (M σ P ≠ M′σ P or M σ P ↓ = ⊥ or M′σ P ↓ = ⊥) and (N σ Q ≠ N′σ Q or N σ Q ↓ = ⊥ or N′σ Q ↓ = ⊥). In any case, we have α = τ . These four cases are similar; we detail the proof for the second case, where M σ P = M′σ P and N σ Q ≠ N′σ Q . Since M σ P ↓ ≠ ⊥, M′σ P ↓ ≠ ⊥, and M σ P = M′σ P , the reduction applied to P i can only be If-Then, and P i is reduced to P ⊤ i . Since N σ Q ≠ N′σ Q , N σ Q ↓ = ⊥, or N′σ Q ↓ = ⊥, rule If-Else can be applied to reduce Q i into Q ⊥ i . This proves point a). We choose Γ′ = Γ . We have σ 2 P = σ 1 P and σ 2 Q = σ 1 Q . Since the substitutions and environments do not change in this reduction, point b) trivially holds. Moreover, by hypothesis, the corresponding constraint set is consistent, which proves point c) and concludes this case.
Theorem 4 (Typing implies trace inclusion). For all processes P , Q, for all φ P , φ Q , σ P , σ Q , for all multisets of processes P, Q, for all constraints C, for all sequences s of actions, for all Γ containing only keys, if the typing and reduction hypotheses hold, then there exists a sequence s′ of actions, a multiset Q′, a frame φ′ Q , and a substitution σ′ Q , such that Q reduces accordingly. Proof. We successively apply Lemma 28 to each of the reduction steps in the reduction of P. The lemma can indeed be applied successively: at each reduction step of P we obtain a sequence of reduction steps for Q with the same actions, and the conclusions the lemma provides imply the conditions needed for its next application.
It is clear, for the first application, that all the hypotheses of this lemma are satisfied.
In the end, we know that there exist Γ′ , some constraint sets C i , some c φ , c σ , and a reduction with s = τ s′ , such that (among other conclusions) σ P , σ Q are ground, and there exist ground σ′ P ⊇ σ P , σ′ Q ⊇ σ Q such that • (dom(σ′ P )\dom(σ P )) ∩ (vars(P) ∪ vars(φ P )) = ∅, • for all x ∈ dom(σ P ), σ′ P (x) ↓ = σ P (x), and similarly for σ′ Q ; and for all i ≠ j, the sets of bound variables in P i and P j (resp. Q i and Q j ) are disjoint, and similarly for the bound names. To prove the claim, it is then sufficient to show that φ P σ′ P and φ Q σ′ Q are statically equivalent. Note that since σ P ⊆ σ′ P and (dom(σ′ P )\dom(σ P )) ∩ (vars(P) ∪ vars(φ P )) = ∅, we have φ P σ′ P = φ P σ P . Similarly φ Q σ′ Q = φ Q σ Q . In addition we also know that c φ σ′ P ,σ′ Q = c φ σ P ,σ Q .
We will now show that (c, Γ N ,K ) is consistent. Let Γ′ ∈ branches(Γ ). By Lemma 14, for all i, since Γ P i ∼ Q i → C i , there exists (c i , Γ i ) ∈ C i such that Γ′ ⊆ Γ i . The disjointness condition on the bound variables implies by Lemma 10 that for all i, j, Γ i and Γ j are compatible. Therefore, as c ⊆ c φ σ P ,σ Q , by Lemma 13, (c, Γ′ ) is consistent. Since c is ground, it follows from the definition of consistency that (c, Γ N ,K ) is also consistent. Moreover, we know that Γ′ ⊆ Γ , and Γ′ is a branch of Γ . It is then clear that Γ N ,K ⊆ Γ′ . Hence, by Lemma 12, we have Γ′ φ P σ P ∼ φ Q σ Q : LL → c with (c, Γ N ,K ) consistent. Moreover, φ P σ P and φ Q σ Q are ground (by well-formedness of the processes). Therefore, by Lemma 27, the frames φ P σ P and φ Q σ Q are statically equivalent.
This theorem corresponds to Theorem 1.
Theorem 5 (Typing implies trace equivalence). For all Γ containing only keys, for all P and Q, if the assumptions of Theorem 4 hold, then P ≈ t Q. Proof. Theorem 4 proves that under these assumptions, P ⊑ t Q. This is sufficient to prove the theorem. Indeed, it is clear from the typing rules for processes and terms that the symmetric judgement holds, where C′ is the constraint set obtained from C by swapping the left and right hand sides of all of its elements, and Γ′ is the environment obtained from Γ by swapping the left and right types in all refinement types, as well as swapping all pairs of keys in its domain. Clearly from the definition of consistency, C′ is consistent if and only if C is. Therefore, by symmetry, proving that the assumptions imply P ⊑ t Q also proves that they imply Q ⊑ t P , and thus P ≈ t Q.
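The symmetry argument rests on swapping the left and right sides of every element of the constraint set, an operation that is an involution. A minimal sketch (the pair-of-terms representation of a constraint is an assumption of this sketch, not the paper's definition):

```python
# Sketch of the swap used in the proof of Theorem 5: each constraint is
# represented here as a pair of terms (M, N); swapping exchanges the
# left and right components of every element.
def swap(C):
    return {(N, M) for (M, N) in C}

C = {("m", "n"), ("enc_m", "enc_n")}
assert swap(C) == {("n", "m"), ("enc_n", "enc_m")}
assert swap(swap(C)) == C  # swapping is an involution
```

Because swap is an involution and preserves consistency, establishing the inclusion in one direction yields it in the other, hence equivalence.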

B.2 Typing replicated processes
In this subsection, we prove the soundness result for replicated processes. In this subsection, as well as the following ones, without loss of generality we assume, for each infinite nonce type τ l,∞ m appearing in the processes we consider, that N contains an infinite number of fresh names, which we will denote by {m i | i ∈ N}, such that the m i do not appear in the processes or environments considered. We will denote by N 0 the set of unindexed names and by N i the set of indexed names. We similarly assume that, for each variable x appearing in the processes, the set X of all variables also contains variables {x i | i ∈ N}. We denote by X 0 the set of unindexed variables, and by X i the set of indexed variables. Finally, we assume, for every key k declared in the processes with type seskey l,∞ (T ), that the set BK contains keys {k i | i ∈ N}.
Definition 17 (Renaming of a term). We denote by [ t ] Γ i , the term t in which names n such that Γ (n) = τ l,∞ n for some l are replaced by n i , keys k such that Γ (k, k) = seskey l,∞ (T ) for some l, T are replaced by k i , and variables x are replaced by x i .
Definition 18 (Expansion of a type). Given a type T , we define its expansion to n sessions, denoted [ T ] n , as follows.
Definition 19 (Renaming of a process). For every process P , for all i ∈ N, for every environment Γ , we define [ P ] Γ i , the renaming of P for session i with respect to Γ , as the process obtained from P by: for each nonce n declared in P by new n : τ l,∞ n , and each nonce n such that Γ (n) = τ l,∞ n for some l, replacing every occurrence of n with n i , and the declaration new n : τ l,∞ n with new n i : τ l,1 ni ; for each key k declared in P by new k : seskey l,∞ (T ), and each key k such that Γ (k, k) = seskey l,∞ (T ) for some l, T , replacing every occurrence of k with k i , and the declaration new k : seskey l,∞ (T ) with new k i : seskey l,1 ([ T ] n ); and replacing every occurrence of a variable x with x i .
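Definitions 17 and 19 both amount to indexing the "per-session" material (infinite nonces, session keys, variables) with the session number i, while leaving long-term material untouched. A sketch of the term-level renaming of Definition 17, under a toy representation chosen for this sketch (strings are atoms, tuples are function applications; the sets `infinite` and `variables` are inputs standing for what the typing environment Γ determines):

```python
# Illustrative sketch of Definition 17 (renaming a term for session i).
# 'infinite' lists the names/keys whose type in Gamma is an infinite
# nonce or session-key type; 'variables' lists the process variables.
def rename(term, i, infinite, variables):
    if isinstance(term, tuple):  # f(t1, ..., tn): rename the arguments
        return (term[0],) + tuple(
            rename(t, i, infinite, variables) for t in term[1:]
        )
    if term in infinite or term in variables:
        return f"{term}_{i}"  # n -> n_i, k -> k_i, x -> x_i
    return term  # finite nonces, long-term keys, constants are unchanged

t = ("enc", ("pair", "n", "x"), "k")
# Session 3 copy: the infinite nonce n, the session key k and the
# variable x get indexed; nothing else changes.
assert rename(t, 3, {"n", "k"}, {"x"}) == ("enc", ("pair", "n_3", "x_3"), "k_3")
```

Definition 19 applies the same renaming under the binders of a process, additionally turning each infinite declaration into a finite one for session i.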
Definition 20 (Renaming and expansion of typing environments). For any typing environment Γ , we define its renaming for session i as { m i : τ l,1 mi | Γ (m) = τ l,∞ m }, and then its expansion to n sessions as the union of the renamings for sessions 1 to n. This is propagated to constraint sets as follows: the renaming of C for session i is defined pointwise. There are several possible cases for the last rule applied in Π.
-TNONCE: then M = m and N = p for some m, p ∈ N , and T = l for some l ∈ {HH, HL}; the claim follows by applying the induction hypothesis to Π′ . In addition it is clear by definition of [ · ] n and Lemma 3 that since T <: key HH (T′ ) we have [ T ] n <: key HH ([ T′ ] n ). Therefore by rule TENCH, the claim holds. -TPAIR, TPUBKEY, TVKEY, TENC, TENCL, TAENC, TAENCH, TAENCL, TSIGNH, TSIGNL, as well as TOR, THASH, THASHL: similarly to the TENCH case, the claim is proved directly by applying the induction hypothesis to the type judgements appearing in the conditions of the last rule in these cases. -TVAR: then M = N = x for some x ∈ X . Hence by rule TVAR, Γ′ x i ∼ x i : Γ′ (x i ) → ∅. Therefore, by rule TOR, the claim holds. -TLR': let us distinguish the case where a is 1 from the case where a is ∞.
If a is 1: by applying the induction hypothesis to Π′ , since [ τ l,1 m ; τ l′,1 p ] n = τ l,1 m ; τ l′,1 p , the claim follows by rule TLR'. If a is ∞: by applying the induction hypothesis to Π′ , since [ τ l,∞ m ; τ l′,∞ p ] n = ∨ 1≤j≤n τ l,1 mj ; τ l′,1 pj , by Lemma 7 there exist j ∈ 1, n and a proof Π″ of the corresponding judgement. Thus, by rule TLR', the claim holds.
-TLRVAR: this case is similar to the TLR' case, but only the case where a is 1 is possible.
-TSUB: then there exists T′ such that T′ <: T and Π′ proves the corresponding judgement for T′ . By applying the induction hypothesis to Π′ , and since it is clear by induction on the subtyping rules that T′ <: T implies [ T′ ] n <: [ T ] n , the TSUB rule can be applied and proves the claim. In addition, [ τ l,∞ m ; τ l′,∞ p ] n = ∨ 1≤j≤n τ l,1 mj ; τ l′,1 pj . Therefore, by applying rule TOR, the claim follows.
Lemma 30 (Typing destructors with replicated names). For all Γ , t, t′ , T , if the destructor judgement holds, then so does its renaming. Proof. The first point is proved by induction on T .
If T = T′ ∨ T″ for some T′ , T″ , then the claim holds for T′ and T″ by the induction hypothesis. Since branches(T ) = branches(T′ ) ∪ branches(T″ ), this proves the claim. Otherwise, branches(T ) = {T } and the claim trivially holds.
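The induction above follows the shape of the branches(·) function used throughout this subsection: a union type contributes the branches of both sides, and any other type is its own single branch. A sketch, with union types encoded as `("or", T, T')` (an encoding chosen for this sketch only):

```python
# Sketch of branches(.): flatten nested union types into the set of
# their non-union components; a non-union type is its own branch.
def branches(T):
    if isinstance(T, tuple) and T[0] == "or":
        return branches(T[1]) | branches(T[2])
    return {T}

T = ("or", ("or", "HL", "LL"), "HH")
assert branches(T) == {"HL", "LL", "HH"}
assert branches("LL") == {"LL"}  # base case: a single branch
```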
The second point directly follows from the first point, using the definition of [ Γ ] n i .
Lemma 32 (Typing processes in all branches). For all P , Q, Γ , the typing judgement holds in every branch of Γ . Consequently, if C Γ′ ⊆ C for some C and all Γ′ , then there exists C′ ⊆ C such that the combined judgement holds. Proof. The first point is easily proved by successive applications of rule POR. The second point is a direct consequence of the first point.
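The proofs in this subsection (and the next lemma) repeatedly manipulate the two constraint-set operators ∪× and ∪∀. A sketch of both, under the assumed representation of a constraint set as a set of pairs (c, Γ), with both components modelled as frozensets; the compatibility side-conditions on environments are omitted here:

```python
# Sketch of the two operators on constraint sets used in this proof.
def product_union(C1, C2):
    # C1 ux C2: pairwise union of constraints and of environments.
    return {(c1 | c2, g1 | g2) for (c1, g1) in C1 for (c2, g2) in C2}

def forall_union(C, c):
    # C uA c: add the constraint set c to every element of C.
    return {(c1 | c, g) for (c1, g) in C}

C1 = {(frozenset({"a"}), frozenset({"x:LL"}))}
C2 = {(frozenset({"b"}), frozenset({"y:HH"}))}
assert product_union(C1, C2) == {
    (frozenset({"a", "b"}), frozenset({"x:LL", "y:HH"}))
}
assert forall_union(C1, frozenset({"c"})) == {
    (frozenset({"a", "c"}), frozenset({"x:LL"}))
}
```

This makes concrete why subset relations are preserved by both operators (as used via Lemma 13): each element of the result is the union of one element from each argument.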
-For all C, C′ , such that ∀(c, Γ ) ∈ C ∪ C′ . branches(Γ ) = {Γ }, i.e. such that Γ does not contain union types, and names(c) ⊆ dom(Γ ) ∪ FN , and Γ contains only nonce types with names from N 0 (i.e. unindexed names), the renaming commutes with ∪ × . Proof. The first point follows from the definition of [ · ] n i and ∪ × . Indeed, if C, C′ are as assumed in the claim, the last step is proved by directly showing both inclusions.
n i , which proves the claim.
The second point directly follows from the definition of [ · ] n i and ∪ ∀ . Indeed, for all C, c, Γ satisfying the assumptions, the required equality holds, since Γ and Γ′ give the same types to names and keys. Theorem 6 (Typing processes with expanded types). For all Γ , P , Q and C, if Γ P ∼ Q → C, then for all i there exists C′ ⊆ [ C ] n i such that [ Γ ] n i [ P ] Γ i ∼ [ Q ] Γ i → C′ . Proof. We prove this theorem by induction on the derivation Π of Γ P ∼ Q → C. We distinguish several cases for the last rule applied in this derivation. -PZERO: by splitting [ Γ ] n i into all of its branches, followed by rule PZERO, we have the claim. -POUT: then P = out(M ).P′ , Q = out(N ).Q′ for some messages M , N and some processes P′ , Q′ . Therefore, using Π Γ′ , Π′ Γ′ and rule POUT, we have the claim for all Γ′ ∈ branches([ Γ ] n i ) (by Lemma 33, whose conditions are satisfied by Lemma 14).
n i , which proves the claim.
-PIN: then P = in(x).P′ , Q = in(x).Q′ for some variable x and some processes P′ , Q′ . By applying the induction hypothesis, we obtain a proof for [ Γ, x : LL ] n i . Therefore, using Π′ and rule PIN, the claim holds; and similarly for Q. -PNEW: therefore, using Π′ and rule PNEW, the claim holds. -PNEWKEY: this case is similar to the PNEW case.
-PPAR: then P = P′ | P″ , Q = Q′ | Q″ for some processes P′ , Q′ , P″ , Q″ . By applying the induction hypothesis to Π′ , there exists a proof for [ Γ ] n i , by Lemma 33 (using Lemma 11 to ensure the condition that the environments do not contain union types). Therefore, using Π′ , Π″ and rule PPAR, the claim holds. -POR: then Γ = Γ′ , x : T ∨ T′ for some Γ′ , some x ∈ X and some types T , T′ . By applying the induction hypothesis to Π T , there exist proofs for [ Γ′ , x : T ] n i , and similarly for Q. Thus by rule POR, the claim holds in this case. -PLET: then P = let x = t in P′ else P″ , Q = let x = t′ in Q′ else Q″ for some variable x and some processes P′ , Q′ , P″ , Q″ . By applying the induction hypothesis to Π′ , there exist the corresponding proofs; by Lemma 30 applied to Π d , the destructor judgement also holds. Therefore, using Π′ and rule PLET, the claim holds. -PLETDEC: then P = let x = dec(y, k 1 ) in P′ else P″ , Q = let x = dec(y, k 2 ) in Q′ else Q″ for some variable x, some keys k 1 , k 2 , and some processes P′ , Q′ , P″ , Q″ . Let us write the proof for the case where Γ (k 1 , k 2 ) = seskey HH,∞ (T ). The other case is similar, although the keys are renamed, and slightly easier, since by well-formedness of Γ there is no k 3 satisfying the assumptions of the last two premises. We thus have the renamings [ k 1 ] Γ i and [ k 2 ] Γ i , and for any k 3 satisfying the assumptions of either of the last two premises, by applying the induction hypothesis to Π′ , there exist C 1 ⊆ [ C ] n i and a proof Π′ 1 of the renamed judgement. Similarly, there exist C′ 1 ⊆ [ C′ ] n i and a proof Π″ 1 . Similarly, for each k 3 ≠ k 2 such that Γ (k 1 , k 3 ) = key HH (T′ ) for some T′ , there exist C k3,1 ⊆ [ C k3 ] n i and a proof Π 1,k3 1 ; and for each k 3 ≠ k 1 such that Γ (k 3 , k 2 ) = key HH (T′ ) for some T′ , there exist C′ k3,1 ⊆ [ C′ k3 ] n i and a proof Π 2,k3 1 .

Moreover, by definition, [ Γ ] n i (y i ) = LL, and for all l, T , and all keys k, k′ that are either k 1 , k 2 , or a k 3 such as in the premises of the rule, if Γ (k, k′ ) <: key l (T ), then the renamed keys satisfy the corresponding relation in [ Γ ] n i . Therefore, using Π′ 1 , Π″ 1 , the Π 1,k3 1 and Π 2,k3 1 , and rule PLETDEC, the claim holds. -PLETADECSAME, PLETADECDIFF: these cases are similar to the PLETDEC case.
-PLETLRK: then P = let x = d(y) in P′ else P″ , Q = let x = d(y) in Q′ else Q″ for some variable x ∈ X and some processes P′ , Q′ , P″ , Q″ , with Γ (y) = τ l,a m ; τ l′,a′ p or Γ (y) <: key l (T ) for some m, p. By applying the induction hypothesis to Π′ , there exists the corresponding proof. Let us first prove the case where Γ (y) = τ l,a m ; τ l′,a′ p . We distinguish two cases, depending on whether the types in the refinement τ l,a m ; τ l′,a′ p are finite nonce types or infinite nonce types, i.e. whether a is 1 or ∞. • If a is ∞: by definition, there exists j ∈ 1, n such that Γ′ (y i ) = τ l,1 mj ; τ l′,1 pj . Using Π′ and Lemma 9, there exist the corresponding constraint sets. Thus, by Lemma 32, there exists C′ ⊆ [ C ] n i , which proves the claim in this case.
In all cases we have [ Γ ] n i (y i ) <: key l ([ T ] n ). Hence, using Π′ and rule PLETLRK, the claim holds. -PIFL: then P = if M = M′ then P′ else P″ , Q = if N = N′ then Q′ else Q″ for some messages M , N , M′ , N′ , and some processes P′ , Q′ , P″ , Q″ .

Thus, by Lemma 32, there exists C′ ⊆ [ C ] n i , which proves the claim in this case. • if a is ∞ and a′ is 1: this case is similar to the symmetric one.
• if a and a′ are both ∞: this case is similar to the case where a is 1 and a′ is ∞.
-PIFALL: this case is similar to the PIFL case.
The next theorem corresponds to the first step mentioned in Subsection 6.4.
Theorem 7 (Typing n sessions). For all Γ , P , Q and C such that Γ P ∼ Q → C, for all n ≥ 1, there exists C′ ⊆ ∪ × 1≤i≤n [ C ] n i such that the parallel composition of the n renamed copies can be typed with constraint C′ , where [ Γ ] n is defined as ∪ 1≤i≤n [ Γ ] n i .
Proof. Let us assume Γ , P , Q and C are such that Γ P ∼ Q → C.
The claim clearly holds (using Theorem 6) if n = 1. Let then n ≥ 2.
Note that the union ∪ 1≤i≤n [ Γ ] n i is well defined. It only remains to be proved that ∪ × 1≤i≤n C i ⊆ ∪ × 1≤i≤n [ C ] n i . Since for all i ∈ 1, n we have C i ⊆ [ C ] n i , by Lemma 13 we know that ∪ × 1≤i≤n C i ⊆ ∪ × 1≤i≤n [ C ] n i .
For all i, (c i , Γ i ) ∈ C i . Thus, by definition of C i , there exist Γ′ i and Γ″ i such that (c i , Γ′ i ) ∈ C i , Γ″ i ∈ branches( ∪ j ≠ i ([ Γ ] n j )| di,j ), and Γ i = Γ′ i ∪ Γ″ i . Since for all i ≠ j, Γ′ i and Γ′ j are compatible, we know that Γ″ i and Γ″ j also are, as well as Γ i and Γ j .
This next theorem, together with Theorem 11, entails Theorem 3. Theorem 8. Consider P , Q, P′ , Q′ , C, C′ , such that P , Q and P′ , Q′ do not share any variable. Consider Γ , containing only keys and nonces with types of the form τ l,1 n . Assume that P and Q only bind nonces and keys with infinite nonce types, i.e. using new m : τ l,∞ m and new k : seskey l,∞ (T ) for some label l and type T ; while P′ and Q′ only bind nonces and keys with finite types, i.e. using new m : τ l,1 m and new k : seskey l,1 (T ). Let us abbreviate by new n the sequence of declarations of each nonce m ∈ dom(Γ ) and session key k such that Γ (k, k) = seskey l,1 (T ) for some l, T . If the corresponding judgements hold, then, by Lemma 12, the extended judgement holds, where C″ is C′ in which all the environments have been extended with ∪ 1≤i≤n ([ Γ ] n i ) N ,K (note that this environment still only contains nonces and keys).
Therefore, by rules PPAR and PNEW, where Γ is the restriction of [ Γ ] n to keys.
Since ∪ × 1≤i≤n ([ C ] n i ) is consistent, similarly to the reasoning in the proof of Theorem 7, C′ ∪ × C″ also is.

B.3 Checking consistency
In this subsection, we first recall in detail the check_const procedure described in Section 6.3, and prove its correctness in the non-replicated case. Here, Γ′ is the environment obtained by extending the restriction of Γ to dom(Γ )\F with Γ′ (n) = τ l,1 n for every nonce n such that τ l,1 n occurs in Γ . This is well defined, since by assumption on the well-formedness of the processes and by definition of the processes, a name n is always associated with the same label. As we have just shown, for all x ∈ dom(θ′ ), there exists i ∈ 1, n such that xσ l = m i and xσ r = p i , and µ′ (x) is either m i or a variable. By definition of dom(θ′ ), only the case where µ′ (x) = m i is actually possible, and we have θ′ (x) = p i . Thus, ∀x ∈ dom(θ′ ). σ r (x) = θ′ (x). It is then clear from the definitions of the domains of θ′ and σ r that there exists τ such that σ r = θ′ τ .
We can now prove the following theorem, which corresponds to the second step necessary for Theorem 3, mentioned in Subsection 6.4. Theorem 11. Let C and C′ be two constraint sets without any common variable. Let us show that [ C ] n is consistent. By assumption, we know that check_const succeeds on the given constraint sets. That is to say, for each (c 1 , Γ 1 ) ∈ C, (c 2 , Γ 2 ) ∈ C′ , (c 3 , Γ 3 ) considered by the procedure, the check holds. Thus, by Lemma 44, the claim follows for all (c 1 , Γ 1 ) ∈ C, (c 2 , Γ 2 ) ∈ C′ , (c 3 , Γ 3 ).