Joint State Composition Theorems for Public-Key Encryption and Digital Signature Functionalities with Local Computation

In frameworks for universal composability, complex protocols can be built from sub-protocols in a modular way using composition theorems. However, as first pointed out and studied by Canetti and Rabin, this modular approach often leads to impractical implementations. For example, when using a functionality for digital signatures within a more complex protocol, parties have to generate new verification and signing keys for every session of the protocol. This motivates generalizing composition theorems to so-called joint state (composition) theorems, where different copies of a functionality may share some state, e.g., the same verification and signing keys. In this paper, we present a joint state theorem which is more general than the original theorem of Canetti and Rabin, for which several problems and limitations are pointed out. We apply our theorem to obtain joint state realizations for three functionalities: public-key encryption, replayable public-key encryption, and digital signatures. Unlike most other formulations, our functionalities model that ciphertexts and signatures are computed locally, rather than being provided by the adversary. To obtain the joint state realizations, the functionalities have to be designed carefully. Other formulations proposed in the literature are shown to be unsuitable. Our work is based on the IITM model. Our definitions and results demonstrate the expressivity and simplicity of this model. For example, unlike Canetti’s UC model, in the IITM model no explicit joint state operator needs to be defined and the joint state theorem follows immediately from the composition theorem in the IITM model.


Introduction
In frameworks for universal composability (see, e.g., [6,7,9,15,18–21,24,26]) the security of protocols is defined in terms of an ideal protocol (also called an ideal functionality). A real protocol securely realizes the ideal protocol if every attack on the real protocol can be translated to an "equivalent" attack on the ideal protocol, where equivalence is specified based on an environment trying to distinguish the real attack from the ideal one. That is, for every real adversary on the real protocol, there must exist an ideal adversary (also called a simulator) on the ideal protocol such that no environment can distinguish whether it interacts with the real protocol and the real adversary or the ideal protocol and the ideal adversary. So the real protocol is as secure as the ideal protocol (which, by definition, is secure) in all environments. At the core of the universal composability approach are composition theorems which say that if a protocol uses one or more (independent) instances of an ideal functionality, then all instances of the ideal functionality can be replaced by instances of the real protocol that realizes the ideal functionality. In this way, more and more complex protocols can be designed and analyzed in a modular way based on ideal functionalities, which later can be replaced by their realizations.
However, as first pointed out and studied by Canetti and Rabin [14] (see the related work), this modular approach often leads to impractical implementations since the composition theorems assume that different instances of a protocol have disjoint state. In particular, the random coins used in different instances have to be chosen independently. Consequently, when, for example, using a functionality for digital signatures within a more complex protocol, e.g., a key exchange protocol, parties have to generate new verification and signing keys for every instance of the protocol. This is completely impractical and motivates generalizing composition theorems to so-called joint state (composition) theorems, where different instances of a protocol may share some state, such as the same verification and signing keys.
The main goal of this paper is to obtain a general joint state theorem and to apply it to (novel) public-key encryption, replayable public-key encryption, and digital signature functionalities with local computation. In these functionalities, ciphertexts and signatures are computed locally, rather than being provided by the adversary, a feature often needed in applications. To obtain the joint state realizations, the functionalities have to be designed carefully. Other formulations proposed in the literature are shown to be unsuitable.

Contribution of this paper.
In a nutshell, our contributions include (i) novel and rigorous formulations of ideal (replayable) public-key encryption and digital signature functionalities with local computation, along with their implementations, (ii) a joint state theorem which is more general than other formulations and corrects flaws in these formulations, and (iii) based on this theorem, joint state realizations and theorems for (replayable) public-key encryption and digital signatures.
Unfortunately, all other joint state theorems claimed in the literature for such functionalities with local computation can be shown to be flawed. An overall distinguishing feature of our work is the rigorous treatment, the simplicity of our definitions, and the generality of our results, which is due to the expressivity and simplicity of the model for universal composability that we use, the IITM model [21,24]. For example, unlike Canetti's UC model [6,7], in the IITM model no explicit joint state operator needs to be defined and the joint state theorem follows immediately from the composition theorems of the IITM model. More precisely, our contributions are as follows.
(i) We formulate three functionalities: digital signatures, public-key encryption, and replayable public-key encryption. Our formulation of replayable public-key encryption is meant to model, in a universal composability setting, the notion of replayable IND-CCA2 security (IND-RCCA security) [12]. This relaxation of IND-CCA2 security permits anyone to generate new ciphertexts that decrypt to the same plaintext as a given ciphertext. As argued in [12], IND-RCCA security suffices for most existing applications of IND-CCA2 security. In our formulations of the above-mentioned functionalities, ciphertexts and signatures are determined by local computation, and hence, as needed in many applications, a priori do not reveal signed messages or ciphertexts. In other formulations, e.g., those in [1,8,12,14,17], signatures and ciphertexts are determined by interaction with the adversary, with the disadvantage that the adversary learns all signed messages and all ciphertexts. Hence, such functionalities cannot be used, for example, in the context of secure message transmission where a message is first signed and then encrypted, or in protocols with nested encryptions. Although there exist formulations of non-replayable public-key encryption and digital signature functionalities with local computation in the literature, these formulations have several deficiencies, in particular, as mentioned, concerning joint state realizations (see below).
We show that a public-key encryption scheme implements our (replayable) public-key encryption functionality if and only if it is IND-CCA2 secure (IND-RCCA secure, respectively), in the case of static corruptions. We also prove equivalence between UF-CMA security of digital signature schemes and our digital signature functionality, in the case of adaptive corruptions.
(ii) In the spirit of Canetti and Rabin [14], we state a general joint state theorem. However, in contrast to Canetti's UC model as employed in [14] and the new versions of his model [6], within the IITM model we do not need to explicitly define a specific joint state operator. Also, our joint state theorem, unlike the one in the UC model, immediately follows from the composition theorem in the IITM model; no extra proof is needed. In addition to the seamless treatment of the joint state theorem within the IITM model, which exemplifies the simplicity and expressivity of the IITM model, our theorem is even more general than the ones in [6,14] (see Sect. 3). We also note in Sect. 3 that, due to the kind of ITMs used in the UC model, the assumptions of the joint state theorems in the UC models can in many interesting cases not be satisfied, and in the cases where they are satisfied, the theorem does not necessarily hold true.
We note that, similarly to the UC model, in the proposed GNUC model [20] dealing with joint state is quite cumbersome as well. In this model, in a run of a system, machines have to form a call tree (every machine must have a unique caller), which is not the case in settings with joint state. Hence, unlike the IITM model, this model does not allow for dealing with joint state in a natural and smooth way. For example, the general joint state theorem does not immediately follow from the composition theorem in the GNUC model. It rather requires a non-trivial proof, which has to take into account details fixed in the GNUC model, such as corruption and so-called invited messages.

(iii) We apply our general joint state theorem to obtain joint state theorems for our (replayable) public-key encryption and digital signature functionalities. These joint state theorems are based on our ideal functionalities alone, and hence work for all implementations of these functionalities. While the core of our joint state realizations is quite standard, their construction and the proofs need care; as already mentioned, all other joint state theorems claimed in the literature for such functionalities with local computation are flawed.

Related work.
As mentioned, Canetti and Rabin [14] were the first to explicitly study the problem of joint state, based on Canetti's original UC model [7]. They propose a general joint state theorem and apply it to a digital signature functionality with non-local computation (see also [1,13]), i.e., the adversary is asked to provide a signature for every message. While the basic ideas in this work are interesting and useful, their general joint state theorem has several problems and limitations, as discussed in Sect. 3.
While most formulations of digital signatures and public-key encryption proposed in the literature use non-local computation, some formulations with local computations exist, which however, as already mentioned, are unsuitable for obtaining joint state realizations (see Sect. 6 for a detailed discussion).
For example, in [6] (version of December 2005), Canetti proposes functionalities for public-key encryption and digital signatures with local computation. He sketches a functionality for replayable public-key encryption in a few lines. However, this formulation only makes sense in a setting with non-local computation, as proposed in [12]. As for joint state, Canetti only points to [14], with the limitations and problems inherited from this work. Moreover, as further discussed in Sect. 6, the joint state theorems claimed for the public-key encryption and digital signature functionalities in [6] are flawed. The same is true for the work by Canetti and Herzog in [11], where another public-key encryption functionality with local computation is proposed and a joint state theorem is claimed.
We note that, despite the problems with the joint state theorem and its application in the UC model pointed out in this work (see Sects. 3.1 and 6 for detailed discussions), the basic ideas and contributions in that model are important and useful. However, we believe that it is crucial to equip that body of work with a more rigorous and elegant framework. This is one of the goals of this work.
In [10], Canetti et al. study universal composability with global setup. We note that they have to extend the UC model to allow the environment to access the functionality for the global setup. In the IITM model, this is not necessary (see the discussion in [24]). The global setup can be considered as joint state. But it is a joint state shared across all entities, unlike the joint state settings considered here, where the joint state is only shared within instances of functionalities. Therefore the results proved in [10] do not apply to the problem studied in this paper.
The present paper is an extended and updated version of [22]. In contrast to [22], where we use the original version of the IITM model [21], here we use the new version [24].

Structure of the paper.
In the following section, we briefly recall the IITM model. The general joint state theorem is presented in Sect. 3, along with a discussion of the joint state theorem of Canetti and Rabin [14]. In Sect. 4, we present our formulations of ideal functionalities for digital signatures, public-key encryption, and replayable public-key encryption along with realizations of these functionalities. Joint state realizations of these functionalities are provided in Sect. 5. In Sect. 6, we discuss further related work and provide more details for the related work mentioned above. Some more details are provided in the appendix.

Notation and basic terminology.
For a bit string a ∈ {0, 1}* we denote by |a| the length of a. Given bit strings a_1, . . . , a_n, we denote by (a_1, . . . , a_n) the tuple consisting of these bit strings. We assume that tuples have a simple bit string representation and that converting a tuple to its bit string representation and vice versa is efficient. We do not distinguish between a tuple and its bit string representation.

The IITM Model
In this section, we recall the IITM model [21,24], a simple and expressive model for universal composability. More precisely, here we use the IITM model as presented in [24], which equips the original IITM model [21] with a more general notion of runtime. This allows us to formulate protocols and ideal functionalities in a more intuitive way, without technical artifacts concerning runtime. As discussed in [24], the (new) IITM model has several advantages compared to other models for universal composability. In particular, it resolves problems in Canetti's UC model and does not suffer from restrictions imposed in the GNUC model [20]. As already mentioned in the introduction and further discussed in Sect. 6, these problems and restrictions also affect the joint state theorems.

We note that the definition of negligibility used in this paper is equivalent to the following: f is negligible if and only if for all positive polynomials p(η) and q(η) in η ∈ N (i.e., p(η) > 0 and q(η) > 0 for all η ∈ N) there exists η_0 ∈ N such that for all η > η_0 and all a ∈ ∪_{η′ ≤ q(η)} {0, 1}^{η′} it holds that f(η, a) < 1/p(η). We further note that negligible functions are closed under addition: if f and g are negligible, then f + g is negligible.
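To make the quantifier structure of negligibility concrete, the following Python snippet numerically checks the condition for the standard example f(η) = 2^(−η) against one positive polynomial. The helper name, the concrete η_0, and the sampled range are our own choices for this sketch, and for simplicity f ignores its second argument a.

```python
# Toy numeric check of the negligibility condition, not part of the model.

def below_inverse_poly(f, p, eta0, etas):
    # f(eta) < 1/p(eta) must hold for every sampled eta > eta0.
    return all(f(eta) < 1.0 / p(eta) for eta in etas if eta > eta0)

f = lambda eta: 2.0 ** (-eta)   # exponentially small, hence negligible
p = lambda eta: eta ** 3 + 1    # a positive polynomial

# eta_0 = 15 works here: for eta = 16, 2^-16 is below 1/(16^3 + 1),
# and 2^-eta falls faster than 1/p(eta) from then on.
assert below_inverse_poly(f, p, 15, range(1, 200))
```

By contrast, an inverse-polynomial function such as 1/η fails this test for every choice of η_0, which is exactly why it is not negligible.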
As discussed in [24], the IITM model does not fix details such as addressing of machines by party/session IDs or corruption. Such details can be specified in a flexible and general way as part of the protocol specification. The IITM model also does not impose any specific structure, e.g., a hierarchical structure with protocols and subroutines, on systems. Altogether, this makes the model more expressive. It also makes the theorems proven in the IITM model, such as composition and joint state theorems, more general as they hold true for a large class of protocols and no matter how certain details are fixed.
Since the IITM model is in the spirit of Canetti's UC model, we note that conceptually the results presented in this paper also carry over to other models for universal composability.

The General Computational Model
In the IITM model, security notions and composition theorems are formalized based on a simple, expressive general computational model, in which IITMs (inexhaustible interactive Turing machines) and systems of IITMs are defined.

Inexhaustible interactive Turing machines.
An inexhaustible interactive Turing machine (IITM) is a probabilistic Turing machine with named input and output tapes as well as an associated polynomial. The tape names determine how different machines are connected in a system of IITMs (see below). Tapes named start and decision serve a particular purpose when running a system of IITMs. It is required that only input tapes can be named start and only output tapes can be named decision. Tapes named start are used to provide a system with external input and to trigger an IITM if no other IITM was triggered. An IITM is triggered by another IITM if the latter sends a message to the former. An IITM with an input tape named start is called master IITM. On tapes named decision the final output of a system of IITMs will be written. An IITM runs in one of two modes, CheckAddress and Compute. The CheckAddress mode is used as a generic mechanism for addressing copies of IITMs in a system of IITMs, as explained below. In this mode, an IITM may perform, in every activation, a deterministic polynomial-time computation in the length of the security parameter plus the length of the current input plus the length of its current configuration, where the polynomial is the one associated with the IITM. The IITM is supposed to output "accept" or "reject" at the end of the computation in this mode, indicating whether the received message is processed further or ignored. The actual processing of the message, if accepted, is done in mode Compute. In this mode, a machine may only output at most one message on an output tape (and hence, only at most one other machine is triggered). The runtime in this mode is not a priori bounded. Later the runtime of systems and their subsystems will be defined in such a way that the overall runtime of a system of IITMs is polynomially bounded in the security parameter. 
We note that in both modes, an IITM cannot be exhausted (hence, the name): in every activation, it can perform actions and cannot be forced to stop. This property, while not satisfied in all other models, is crucial to obtain a reasonable model for universal composability (see, e.g., [24] for more discussion).
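As an informal illustration (not part of the model), the two modes can be pictured as two methods of a machine object. The class layout, method names, and message encoding below are our own choices for this sketch:

```python
class IITM:
    """Toy sketch of an IITM with named tapes and its two modes."""

    def __init__(self, name, in_tapes, out_tapes):
        self.name = name
        self.in_tapes = set(in_tapes)    # may contain "start" (master IITM)
        self.out_tapes = set(out_tapes)  # may contain "decision"
        self.state = {}

    def check_address(self, tape, msg):
        # Mode CheckAddress: a deterministic, polynomially bounded test
        # deciding whether this copy processes the message. Default:
        # accept everything, as required for machines in a protocol
        # system that are not in the scope of a bang.
        return True

    def compute(self, tape, msg):
        # Mode Compute: process the accepted message; at most one
        # message on one output tape may be produced, returned here as
        # (tape, msg), or None if the machine stops without output.
        return None


class EchoMachine(IITM):
    # Minimal example machine: echoes its input on its output tape.
    def compute(self, tape, msg):
        out = next(iter(self.out_tapes))
        return (out, msg)
```

An `EchoMachine("M", ["c_in"], ["c_out"])` accepts every message in mode CheckAddress and, in mode Compute, produces exactly one output message, mirroring the "at most one message per activation" rule.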

Systems of IITMs.
A system S of IITMs is of the form S = M_1 | · · · | M_k | !M′_1 | · · · | !M′_{k′}, where M_i, i ∈ {1, . . . , k}, and M′_j, j ∈ {1, . . . , k′}, are IITMs such that, for every tape name c, at most two of these IITMs have a tape named c and, if two IITMs have a tape named c, this tape is an input tape in one of the machines and an output tape in the other. That is, two IITMs can be connected via tapes with the same name and opposite directions. These tapes are called internal and all other tapes are called external. The IITMs M′_j are said to be in the scope of a bang operator. This operator indicates that in a run of a system an unbounded number of (fresh) copies of a machine may be generated. Conversely, machines which are not in the scope of a bang operator may not be copied. Systems in which multiple copies of a machine may be generated are often needed, e.g., in case of multi-party protocols or in case a system describes the concurrent execution of multiple instances of a protocol. The above conditions imply that in every system at most one IITM may be a master IITM, i.e., may have an input tape named start; there may be several copies of such a machine in a run of a system though.
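The syntactic condition on tape names can be pictured as a small well-formedness check. The encoding of machines as pairs of tape-name lists is our own choice for this sketch:

```python
def well_formed(machines):
    """Check the tape condition on a system of IITMs: every tape name
    occurs in at most two machines and, if it occurs in two, it is an
    input tape of one and an output tape of the other.

    `machines` is a list of (in_tapes, out_tapes) pairs (a hypothetical
    encoding chosen for this sketch)."""
    occurrences = {}  # tape name -> list of directions ("in"/"out")
    for in_tapes, out_tapes in machines:
        for t in in_tapes:
            occurrences.setdefault(t, []).append("in")
        for t in out_tapes:
            occurrences.setdefault(t, []).append("out")
    for dirs in occurrences.values():
        if len(dirs) > 2:
            return False          # tape name shared by three machines
        if len(dirs) == 2 and dirs[0] == dirs[1]:
            return False          # same direction on both machines
    return True
```

For example, a master machine writing on tape c and a second machine reading c is well formed, while two machines both reading c is not.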

Running a system.
In a run of a system S with security parameter η and external input a (such a system is denoted by S(1^η, a)), at any time only one (copy of an) IITM is active and all other (copies of) IITMs wait for new input. (We emphasize the difference between the description of a machine and an instance/copy of a machine. The description of a machine M specifies the behavior of a machine and is part of the specification of a system S. In a run of S, instances of M are created; these instances have a specific state (configuration), receive input on their input tapes, process the input according to their specification (program code), thereby updating their state, and produce output. In what follows, for simplicity, we do not always distinguish between the description of a machine and its instances when the meaning is clear from the context.) The active copy, say M′, which is a copy of a machine M defined in S, may write at most one message, say m, on one of its output tapes, say c. This message is then delivered to another (copy of an) IITM with an input tape named c; say N is the machine specified in S with an input tape named c. (By the convention on the names of input tapes in systems of IITMs, there can be at most one such machine.) In the current configuration of the system, there may be several copies of N. In the order of creation, the copies of N are run in mode CheckAddress with input m. Once one copy accepts m, this copy gets to process m, i.e., it runs in mode Compute with input m, and in particular may produce output on one output tape, which is then sent to another copy, and so on. If no copy of N accepts m and N is in the scope of a bang, a fresh copy of N is created and run in mode CheckAddress. If this copy accepts m, it gets to process m in mode Compute. Otherwise, the new copy of N is deleted, m is dropped, and a master IITM is activated (with empty input). If N is not in the scope of a bang (and the only copy of N does not accept m), then, too, a master IITM is activated. The first IITM to be activated in a run is a master IITM; it gets the bit string a as external input (on tape start). A master IITM is also activated if the currently active machine does not produce output (i.e., stops in its activation without writing to any output tape). A run stops if a master IITM, after being activated, does not produce output, or if output was written by some machine on an output tape named decision. The overall output of the run is defined to be the message that is output on decision. The probability that, in runs of S(1^η, a), the overall output is m ∈ {0, 1}* is denoted by Pr[S(1^η, a) = m]. (Formally, S(1^η, a) is a random variable that describes the overall output of runs of S(1^η, a), based on a standard probability space for runs of systems; see [24] for details.)

To illustrate runs of systems, consider, for example, the system S = M_1 | !M_2 and assume that M_1 has an output tape named c, M_2 has an input tape named c, and M_1 is the master IITM. (There may be other tapes connecting M_1 and M_2.) Furthermore, assume that in the run of S executed so far, two copies of M_2, say M′_2 and M″_2, have been generated, with M′_2 generated before M″_2, and that M_1 just sent a message m on tape c. This message is delivered to M′_2 (as the first copy of M_2). First, M′_2 runs in mode CheckAddress with input m; as mentioned, this is a deterministic polynomial-time computation which outputs "accept" or "reject". If M′_2 accepts m, then M′_2 gets to process m in mode Compute and could, for example, send a message back to M_1. Otherwise, m is given to M″_2, which then runs in mode CheckAddress with input m. If M″_2 accepts m, then M″_2 gets to process m in mode Compute. Otherwise (if both M′_2 and M″_2 do not accept m), a new copy M‴_2 of M_2 with fresh randomness is generated and run in mode CheckAddress with input m. If M‴_2 accepts m, it gets to process m. Otherwise, M‴_2 is removed again, the message m is dropped, and the master IITM, in this case M_1, is activated (with empty input), and so on.
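The delivery procedure just described (try existing copies in order of creation, then possibly a fresh copy, else drop the message and activate a master IITM) can be sketched as follows; the helper signatures and the toy `Copy` machine are our own choices:

```python
def deliver(message, copies, fresh_copy, banged, activate_master):
    """Sketch of message delivery to the copies of a machine N.
    `copies` are tried in order of creation in mode CheckAddress; the
    first copy to accept processes the message in mode Compute. If no
    copy accepts and N is under a bang, a fresh copy is tried; if it
    also rejects, it is deleted, the message is dropped, and a master
    IITM is activated."""
    for c in copies:
        if c.check_address(message):
            return c.compute(message)
    if banged:
        c = fresh_copy()
        if c.check_address(message):
            copies.append(c)   # the new copy becomes part of the run
            return c.compute(message)
        # fresh copy rejected: it is deleted again
    return activate_master()


class Copy:
    # Toy machine copy addressed by a session ID carried in the message.
    def __init__(self, sid):
        self.sid = sid

    def check_address(self, message):
        return message[0] == self.sid

    def compute(self, message):
        return ("out", self.sid, message[1])
```

With `copies = [Copy("a")]`, delivering `("a", "x")` is handled by the existing copy, delivering `("b", "y")` spawns a fresh copy (if the factory produces one that accepts), and a message no copy accepts falls through to the master IITM.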

Equivalence/indistinguishability of systems.
Two systems that produce overall output 1 with almost the same probability are called equivalent or indistinguishable:

Definition 1.
[24] Let f : N × {0, 1}* → R_{≥0} be a function. Two systems P and Q are called f-equivalent or f-indistinguishable (P ≡_f Q) if and only if for every security parameter η ∈ N and external input a ∈ {0, 1}*: |Pr[P(1^η, a) = 1] − Pr[Q(1^η, a) = 1]| ≤ f(η, a). Two systems P and Q are called equivalent or indistinguishable (P ≡ Q) if and only if there exists a negligible function f such that P ≡_f Q.
It is easy to see that for every two functions f, f′ as in Definition 1 the relation ≡_f is reflexive and that P ≡_f Q and Q ≡_{f′} S implies P ≡_{f+f′} S. In particular, ≡ is reflexive and transitive.

Composition of systems.
We say that a system P is connectable or can be connected to a system Q if P connects only to the external tapes of Q, i.e., tapes with the same name in P and Q are external tapes of P and Q, respectively, and they have opposite directions (an input tape in one system is an output tape in the other). By P | Q we denote the composition of the systems P and Q, defined in the obvious way. When writing P | Q, we implicitly assume that the internal tapes of P and Q are renamed in such a way that the sets of internal tapes of P and Q are disjoint. This guarantees that P and Q communicate only over their external tapes.
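For one fixed η and a, Definition 1 simply compares two numbers, and the f + f′ bound behind transitivity is the triangle inequality. A minimal numeric illustration (the probabilities are chosen arbitrarily for this sketch):

```python
def advantage(p_P, p_Q):
    """|Pr[P(1^eta, a) = 1] - Pr[Q(1^eta, a) = 1]| for one fixed eta
    and a, with the two probabilities given directly (toy view of
    Definition 1)."""
    return abs(p_P - p_Q)

# P ≡_f Q and Q ≡_f' S give P ≡_{f+f'} S via the triangle inequality:
p_P, p_Q, p_S = 0.50, 0.52, 0.55
assert advantage(p_P, p_S) <= advantage(p_P, p_Q) + advantage(p_Q, p_S)
```

In the actual definition this bound is applied pointwise for every η and a, which is why the sum f + f′ of two negligible functions being negligible makes ≡ transitive.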

Polynomial Time and Properties of Systems
So far, the runtime of IITMs in mode Compute has not been restricted in any way. To define notions of universal composability, it has to be enforced that systems run in polynomial time (except maybe with negligible probability). This will be done based on the following runtime notions.
A system S is called strictly bounded if there exists a polynomial p such that, for every security parameter η and external input a, the overall runtime of S in mode Compute (i.e., the overall number of transitions taken in this mode) is bounded by p(η + |a|) in every run of S(1 η , a). If this holds only for an overwhelming set of runs, S is still called almost bounded. As shown in [24], every almost/strictly bounded system can be simulated (except maybe with a negligible error) by a probabilistic polynomial-time Turing machine.
A system E is called universally bounded if there exists a polynomial p such that, for every security parameter η, external input a, and system S that can be connected to E, the overall runtime of E in mode Compute is bounded by p(η + |a|) in every run of (E | S)(1^η, a). (Environmental systems, defined below, will be required to be universally bounded.) A system P is called environmentally (almost) bounded if E | P is almost bounded for every universally bounded system E that can be connected to P. Similarly, P is called environmentally strictly bounded if E | P is strictly bounded for every universally bounded system E that can be connected to P. As discussed in [24], since the runtime of universally bounded systems is polynomially bounded, this definition is equivalent to the following: P is environmentally almost/strictly bounded iff, for every universally bounded system E, there exists a polynomial p such that, for every η and a, the overall runtime of P in mode Compute (i.e., the overall number of transitions taken by machines of P in mode Compute) is bounded by p(η + |a|) in every run of (E | P)(1^η, a) (in the case of almost boundedness, except for a negligible set of runs). (Protocol systems, defined below, will be required to be environmentally bounded. This guarantees that a protocol, together with an environment, runs in polynomial time.)

Notions of Universal Composability
To define notions of universal composability, we first introduce the following terminology. For a system S, the external tapes are grouped into I/O and network tapes. Three different types of systems are considered: protocol systems, adversarial systems, and environmental systems, modeling (i) real and ideal protocols/functionalities, (ii) adversaries and simulators, and (iii) environments, respectively. All three types of systems have an I/O and a network interface, i.e., they may have I/O and network tapes. Environmental systems have to be universally bounded and protocol systems have to be environmentally bounded. (Protocol systems, as defined in [24], are per se not required to be environmentally bounded; instead, to obtain more general results, this is explicitly stated where needed. However, in most applications, and throughout this paper, protocol systems are always environmentally bounded, or even environmentally strictly bounded. Therefore, we simply require protocol systems to be environmentally bounded here.) Protocol systems and adversarial systems may not have a tape named start or decision; only environmental systems may have such tapes, i.e., only environmental systems may contain a master IITM and may determine the overall output of a run. Furthermore, for every IITM M that occurs in a protocol system and is not in the scope of a bang, it is required that M accepts every incoming message in mode CheckAddress. (The motivation behind this condition is that if M does not occur in the scope of a bang, then, in every run of the protocol system in some context, there will be at most one copy of M. Hence, there is no reason to address different copies of M, and therefore, in mode CheckAddress, M should accept every incoming message. This condition is needed in the proofs of the composition theorems for unbounded self-composition.)

Given a system S, the set of all environmental systems that can be connected to S (on the network or I/O interface) is denoted by Env(S). For two protocol systems P and F, Sim_P(F) denotes the set of all adversarial systems A such that A can be connected to F, the set of external tapes of A is disjoint from the set of I/O tapes of F (i.e., A only connects to the network interface of F), A | F and P have the same external network and I/O interface, and A | F is environmentally bounded.

We now recall the definition of strong simulatability; other, equivalent security notions, such as UC and dummy UC, can be defined in a similar way [24]. The systems considered in this definition are depicted in Fig. 1.

Definition 2.
[24] Let P and F be protocol systems, the real and the ideal protocol, respectively. Then, P realizes F (P ≤ F) if and only if there exists a simulator S ∈ Sim_P(F) (also called ideal adversary) such that E | P ≡ E | S | F for every environment E ∈ Env(P).

As shown in [24], this relation is reflexive and transitive.

Composition Theorems
The first composition theorem handles concurrent composition of a fixed number of (possibly different) protocol systems. The second one guarantees secure composition of an unbounded number of copies of a protocol system.

Theorem 1. [24] Let k ≥ 1. Let Q, P_1, . . . , P_k, F_1, . . . , F_k be protocol systems such that they connect only via their I/O interfaces, Q | P_1 | · · · | P_k is environmentally bounded, and P_i ≤ F_i for i ∈ {1, . . . , k}. Then, Q | P_1 | · · · | P_k ≤ Q | F_1 | · · · | F_k.
Note that this theorem does not require that the protocols P i /F i are subprotocols of Q, i.e., that Q has matching external I/O tapes for all of these protocols. How these protocols connect to each other via their I/O interfaces is not restricted in any way, even the environment could connect directly to (the full or the partial) I/O interface of these protocols. Clearly, the theorem also holds true if the system Q is dropped.
For the following composition theorem, we introduce the notion of a session version of a protocol in order to be able to address copies of the protocol. Given an IITM M, the session version M̃ of M is an IITM which internally simulates M and acts as a "wrapper" for M. More precisely, in mode CheckAddress, M̃ accepts an incoming message m′ only if one of the following conditions is satisfied: (i) M̃ has not accepted a message yet (in mode CheckAddress), m′ is of the form (id, m), and m is accepted by the simulated M in mode CheckAddress. (In this case, later, when activated in mode Compute, the ID id will be stored by M̃.) (ii) M̃ has accepted a message before, m′ is of the form (id′, m), id′ coincides with the ID id that M̃ has stored before (in mode Compute), and m is accepted by M when simulated in mode CheckAddress. In mode Compute, if M̃ is activated for the first time in this mode, i.e., the incoming message, say m′ = (id, m), was just accepted in mode CheckAddress for the first time, then first id is stored and then M is simulated with input m. Otherwise (if M̃ was activated in mode Compute before), M is directly simulated with input m. If the simulated M produces output on some tape, then M̃ prefixes this output with id and outputs the resulting message on the corresponding tape.
The ID id typically is some session ID (SID) or some party ID (PID) or a combination of both. Clearly, it is not essential that messages are of the form (id, m). Other forms are possible as well. In fact, everything checkable in polynomial time works.
To illustrate the notion of a session version of an IITM, assume that M specifies some ideal functionality. Then !M̲ denotes the multi-session version of M, i.e., a system in which an unbounded number of copies of M can be created, where every copy of M can be addressed by a unique ID. The ID could be a PID (then an instance of M might model one party running M) or an SID (then an instance of M models one session of M). We sometimes require IDs to belong to a specific (polynomially decidable) domain D. In this case, we refer to a session version with domain D. For such a session version, in mode CheckAddress only those SIDs are accepted that belong to D. With this, we could, for example, define a session version M̲ of an IITM M which only accepts SIDs of the form (sid, pid), where pid denotes a party and sid identifies the session in which this party runs. Hence, in a run of the system !M̲ (in some environment) all instances of M would have SIDs of this form. In this case, an instance of M with ID (sid, pid) models an instance of party pid running M in session sid.
In statements involving session versions, such as composition theorems, details of how the domains of SIDs are chosen are typically not important, as long as they are chosen consistently. We therefore omit such details in the statements.
Given a system S, its session version S̲ is obtained by replacing all IITMs in S by their session versions. For example, we obtain S̲ = M̲ | !M̲′ for S = M | !M′. Now, the following composition theorem says that if a protocol P realizes F, then the multi-session version of P realizes the multi-session version of F.

Theorem 2. [24] Let P and F be protocol systems such that !P̲ is environmentally bounded and P ≤ F. Then, !P̲ ≤ !F̲.
We note that the extra proof obligation that !P̲ is environmentally bounded is typically easy to show. If P is environmentally strictly bounded (which should be the case in most applications), it even follows immediately that !P̲ is environmentally (strictly) bounded, as further discussed below.
Theorems 1 and 2 can be applied iteratively to construct more and more complex systems. For example, as an immediate consequence of Theorems 1 and 2, we obtain that if (an unbounded number of sessions of) an ideal protocol F is used as a component in a more complex system Q, then it can be replaced by its realization P:

Corollary 1. Let Q, P, and F be protocol systems such that P and F have the same I/O interface, Q only connects to the I/O interface of !P̲ (and, hence, !F̲), and Q | !P̲ is environmentally bounded. If P ≤ F, then Q | !P̲ ≤ Q | !F̲.

When addressing a session version M̲ of a machine M, the machine M simulated within M̲ is not aware of its ID and cannot use it. For example, it cannot put the ID into a message that M creates. However, sometimes this is desirable. Therefore, another, more general, composition theorem is considered, where machines are aware of their IDs. While these IDs can, as already mentioned above, be interpreted in different ways, they will often be referred to as SIDs.
To this end, [24] first generalized the notion of a session version. They consider (polynomial-time computable) session identifier (SID) functions which, given a message and a tape name, output an SID (a bit string) or ⊥. For example, the following function takes the prefix of a message as its SID: σ_prefix(m, c) := s if m = (s, m′) for some s, m′, and σ_prefix(m, c) := ⊥ otherwise, for all m, c. Clearly, many more examples are conceivable. The reason that σ, besides a message, also takes a tape name as input is that the way SIDs are extracted from messages may depend on the tape a message is received on.
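A sketch of σ_prefix, assuming messages of the form (s, m′) are represented as Python pairs and ⊥ as None:

```python
# Sketch of the SID function sigma_prefix: it extracts the prefix s of
# a message of the form (s, m') and ignores the tape name. None models ⊥.

def sigma_prefix(msg, tape):
    if isinstance(msg, tuple) and len(msg) == 2:
        s, _ = msg
        return s
    return None
```

An SID function depending on the tape name could, e.g., extract the SID from a different position of the message for network tapes than for I/O tapes.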
Given an SID function σ, an IITM M is called a σ-session machine (or a σ-session version) if, roughly speaking, M accepts (in mode CheckAddress) only messages to which σ assigns an SID different from ⊥, all messages accepted in one run carry the same SID according to σ, and all messages output by M carry this SID as well. A system S is a σ-session version if all IITMs defined in S are. It is easy to see that session versions are specific forms of σ-session versions: given an IITM M, we have that M̲ is a σ_prefix-session version. The crucial difference is that while σ-session versions look like session versions from the outside, inside they are aware of their SID.
Before the composition theorem can be stated, a notion of single-session realizability needs to be introduced.
An environmental system E is called σ-single session if it outputs messages only with the same SID according to σ. Hence, when interacting with a σ-session version, such an environmental system invokes at most one protocol session. Given a system S and an SID function σ, Env_{σ-single}(S) denotes the set of all environments E ∈ Env(S) such that E is σ-single session, i.e., Env_{σ-single}(S) is the set of all σ-single session environmental systems that can be connected to S.
For two protocol systems P and F and an SID function σ, Sim^P_{σ-single}(F) denotes the set of all adversarial systems A such that A can be connected to F, the set of external tapes of A is disjoint from the set of I/O tapes of F (i.e., A connects only to the network interface of F), A | F has the same external tapes as P, and E | A | F is almost bounded for every E ∈ Env_{σ-single}(A | F). We note that Sim^P(F) ⊆ Sim^P_{σ-single}(F); the only difference between these two sets is that the runtime condition on A | F is relaxed in Sim^P_{σ-single}(F).

Let P and F be protocol systems, which in the setting considered here would typically describe multiple sessions of a protocol. Moreover, we assume that P and F are σ-session versions. Now, it is defined what it means for a single session of P to realize a single session of F. This is defined just as P ≤ F (Definition 2), with the difference that only σ-single session environments are considered, and hence, environments that invoke at most one session of P and F.

Definition 3. [24] Let σ be an SID function and let P and F be protocol systems, the real and ideal protocol, respectively, such that P and F are σ-session versions. Then, P single-session realizes F w.r.t. σ (P ≤_{σ-single} F) if and only if there exists S ∈ Sim^P_{σ-single}(F) such that E | P ≡ E | S | F for every σ-single session environment E ∈ Env_{σ-single}(P).

Now, analogously to Theorem 2, the following theorem says that if P realizes F w.r.t. a single session, then P realizes F w.r.t. multiple sessions. As mentioned before, in the setting considered here P and F would typically model multi-session versions of a protocol/functionality.

Theorem 3. [24] Let σ be an SID function and let P and F be protocol systems such that P and F are σ-session versions and P ≤_{σ-single} F. Then, P ≤ F.
Clearly, this theorem can be combined with the other composition theorems to construct more and more complex systems. For example, similar to the above corollary, we obtain the following corollary:

Corollary 2. Let Q, P, and F be protocol systems such that P and F are σ-session versions for some SID function σ, P and F have the same I/O interface, Q connects only to the I/O interface of P (and, hence, F), and Q | P is environmentally bounded. If P ≤_{σ-single} F, then Q | P ≤ Q | F.

As discussed in [24], the composition of two environmentally bounded systems is not necessarily environmentally bounded. For instance, two systems, each of which on its own is environmentally bounded, could play ping-pong, i.e., send messages back and forth between each other. However, in applications the composition of environmentally almost/strictly bounded systems is basically always environmentally almost/strictly bounded. Moreover, in applications, it is typically easy to see whether a system, including the composition of two environmentally almost/strictly bounded systems, is environmentally almost/strictly bounded. As also observed in [24], in applications protocol systems are typically strictly bounded, and for such systems we obtain useful general composability statements, which are briefly recalled next.

Lemma 1. [24] Let P and Q be two environmentally strictly bounded protocol systems such that the sets of external tapes of P and Q are disjoint. Then, P | Q is environmentally strictly bounded.
This lemma can be generalized to the case where P and Q can communicate via tapes, provided that the information flow from P to Q is polynomially bounded in the security parameter, the length of the external input, and the overall length of messages P gets from the environment.
The following lemma says that the notion of environmental strict boundedness is closed under unbounded self-composition.

Lemma 2. [24] Let S be an environmentally strictly bounded protocol system. Then, !S is environmentally strictly bounded.

The Joint State Theorem
As already sketched in the introduction, joint state theorems are needed for the following reason. Composition theorems (for unbounded self-composition) state that it suffices to prove that a real protocol realizes an ideal functionality in a single session in order to conclude that multiple sessions of the real protocol realize multiple sessions of the ideal functionality. The problem is that this requires the states of the different sessions of the protocols/functionalities to be disjoint. In particular, the random coins used in different sessions have to be chosen independently. For digital signatures or public-key encryption, for example, this means that a party would have to choose new key pairs for every session, which is completely impractical.
Canetti and Rabin [14] proposed composition theorems with joint state, or joint state (composition) theorems for short, to solve this problem.
In this section, we first recall the general joint state theorem proposed by Canetti and Rabin in [14] and discuss several (partly severe) problems of this theorem. We then present a general joint state theorem in the IITM model. As we will see, this theorem does not suffer from the problems in the UC model and it can be stated in a more elegant and general way, and, unlike in the UC model, it follows immediately from the composition theorem as a simple special case.

The Joint State Theorem in the UC Model
To state the general joint state theorem proposed by Canetti and Rabin in the UC model, let Q be a protocol which uses multiple sessions with multiple parties of some ideal functionality F, i.e., Q works in an F-hybrid model. Let P̂ be a realization of F̂, where F̂ is a single machine which simulates the multi-session, multi-party version of F. Now, Q[P̂] denotes the JUC composition of Q and P̂, where calls from Q to F are translated to calls to P̂ and where for each party there is only one copy of P̂ and this copy handles all sessions of this party, i.e., P̂ may make use of joint state. Now, Canetti and Rabin obtain the following theorem: if P̂ realizes F̂, then Q[P̂] realizes Q in the F-hybrid model.

The typical use case of this theorem is that P̂ realizes F̂ in the F-hybrid model in such a way that P̂ creates only one copy of F per party and that this copy handles all sessions of this party. The protocol P̂ then plays the role of a kind of multiplexer which maps all sessions of one party to the corresponding copy of F. In this sense, P̂ is a joint state realization of the multi-session and multi-party version of F. Now, the theorem says that if Q uses the multi-session and multi-party version of F (i.e., Q works in the F-hybrid model where there is one fresh copy of F per party and session), then Q can instead use the joint state realization P̂ where only one copy of F is used per party and this copy is used across all sessions of that party. For example, if F is an ideal functionality for digital signatures which allows one party to sign messages and allows all parties to verify signatures of that party, then the theorem says that the protocol Q which uses one "signing box" per party (through the joint state realization P̂) realizes the protocol Q when it uses a new signing box per party and session.
As further discussed in Sect. 3.2, due to the restricted expressivity of the UC model and unlike the IITM model, formulating the joint state theorem in the UC model requires some new notions, such as the notion of JUC composition, and a non-trivial proof.
Moreover, unfortunately there are some partly severe technical problems with this theorem in the UC model as discussed next, which are mainly due to the way the runtime of (systems of) ITMs is defined.
We note that the JUC theorem from [14] has been shown only for the initial version of the UC model from 2001. Thus, technically speaking, there currently is no JUC theorem for any of the more recent versions. However, as we argue below, the fundamental problems of the JUC theorem still exist even if one were to transfer the theorem to more recent versions of the UC model.

Problems of the joint state theorem in the UC model. In the UC model, the overall runtime of an ITM is bounded by a polynomial in the security parameter alone in the original UC model [7], or in the security parameter and the overall length of the input on the I/O interface in the newer versions of the model [6], including the most recent one. Consequently, once the overall bound is hit, the ITMs are forced to stop. In particular, it is easy to force an ITM to stop by sending many (useless) messages (on the network interface). This, among others, results in the following problem in the UC model. In general, a single ITM, say M, cannot simulate a concurrent composition of a fixed finite number of ITMs, say M_1, ..., M_n, or an unbounded number of (instances of) ITMs: by sending many messages to M intended for M_1, say, M will eventually stop, and hence, cannot simulate the other machines anymore, even though, in the actual composition, these machines could still take actions. Now, this causes problems in the joint state theorem of the UC model: although the ITM F̂ in the joint state theorem is intended to simulate the multi-party, multi-session version of F, for the reason explained above, it cannot do this in general; it can only simulate some approximated version. The same is true for P̂.
This, as further explained below, has several negative consequences: (A) For many interesting functionalities, including existing versions of digital signatures and public-key encryption, it is not always possible to find a P̂ that realizes F̂ (for a reasonable functionality F), and hence, in these cases the precondition of the joint state theorem cannot be satisfied. (B) In some cases, the joint state theorem in the UC model itself fails.

Ad (A). We first illustrate the problem of realizing F̂ in the original UC model, i.e., the one presented in [7], on which the work in [14] is based. We then explain the corresponding problem for the new versions of the UC model [6].
The ITM F̂ is intended to simulate the multi-party, multi-session version of F, e.g., a digital signature functionality. The realization P̂ is intended to do the same, but it contains an ITM for every party. Now, consider an environment that sends many requests to one party, e.g., verification requests such that the answer to all of them is ok. Eventually, F̂ will be forced to stop, as it runs out of resources. Consequently, requests to other parties cannot be answered anymore. However, such requests can still be answered in P̂, because these requests are handled by other ITMs, which are not exhausted. Consequently, an environment can easily distinguish between the ideal world (F̂) and the real world (P̂). This argument works independently of the simulator. The situation just described is very common. Therefore, strictly speaking, for many functionalities of interest it is not possible to find a realization of F̂ in the original UC model.
In the new versions of the UC model [6], the problem of realizing F̂ is similar. However, ITMs cannot be exhausted (forced to stop) via communication on the I/O interface. Nevertheless, exhaustion is possible via the network interface. Assume that P̂ tries to realize F̂ in an F-hybrid model, where for every party one instance of P̂ and F is generated, if any. (This, as already mentioned before, is the typical setting for joint state realizations. Our arguments also apply in many cases where P̂ does not work in the F-hybrid model, which is, however, quite uncommon; the whole point of modular protocol analysis and design is to use the ideal functionalities.) The environment (via a dummy adversary) can access any copy of F in the F-hybrid model directly via the network interface. In this way, the environment can send many messages to a copy of F, and hence, exhaust this copy, i.e., force it to stop, after some time. Even when the copy has stopped, the environment can keep sending messages to this copy, which in the hybrid model does not have any effect. On the ideal side, the simulator, say S, has to know when a copy of F would stop in the hybrid model, because it then must not forward messages addressed to this copy of F to F̂. Otherwise, F̂ would get exhausted as well and the environment could distinguish between the hybrid and the ideal world as above: it simply contacts another copy of F in the F-hybrid world (via P̂ and the I/O interface or directly via the network interface). This copy (since it is another ITM and not exhausted) would still be able to react, while F̂ is not. However, in general S does not necessarily know if an instance in the hybrid model is exhausted, e.g., because the simulator does not know how many resources have been provided to the functionalities on the I/O interface, to which S does not have access, and how many resources the functionality has consumed. Hence, in this case S always has to forward messages, because the functionality might still have enough resources to react. But this then leads to the exhaustion of F̂, with the consequence that the environment can distinguish between the hybrid and the ideal world as described above. It is easy to come up with functionalities where the problem just described occurs, including reasonable formulations of public-key encryption and digital signature functionalities. Typically, formulations of functionalities in the UC model are not precise about the runtime of functionalities, e.g., whether a functionality stops as soon as it gets a message of a wrong format or whether it ignores messages until it gets the expected message and only stops if it runs out of runtime. Ill-defined functionalities or different interpretations of how the runtime is defined can then lead to the mentioned problems. Even if there is a realization of F̂ that would work, proving this can become quite tricky because of the described exhaustion problem and its consequences.
We note that even if one were to prove a different JUC theorem that, e.g., changes how F̂ is defined, it would still be hard or even impossible to show that the precondition of the theorem is fulfilled for many interesting functionalities. This is due to the following general problem caused by the runtime notion employed by the UC model: in every type of joint state realization, there is one instance i in the real world that corresponds to multiple instances/sessions s_1, ..., s_n in the ideal world. The runtime notion of the UC model allows the environment to exhaust the runtime of i in the real world such that i does not perform any actions anymore. The simulator S has to simulate the same behavior for the instances s_1, ..., s_n in the ideal world. That is, S typically has to learn how much runtime each of the instances/sessions s_1, ..., s_n has obtained so far, compute the runtime bound of i from this, learn how much runtime is left for each s_1, ..., s_n, then send more runtime to those s_j that would stop earlier than i, and stop those s_j that would run longer than i. Thus, ideal functionalities in the UC model generally have to leak their runtime and provide some means to the simulator to stop sessions in order to enable joint state realizations. This is typically not done by functionalities found in the literature and is not feasible in cases where the runtime depends on secret information.

Ad (B). Having discussed the problem of meeting the assumptions of the joint state theorem in the UC model, we now turn to flaws of the joint state theorem itself. For this, assume that P̂ realizes F̂ within the F-hybrid model, with the (usual) intention that P̂ creates only one copy of F per party. Such a copy handles all sessions of F for that party. In contrast, F̂ simulates a new copy of F per party and session.
According to the joint state theorem in the UC model, we should have that Q[P̂] (real world) realizes Q in the F-hybrid model (ideal world). However, the following problems occur: an environment can directly access (via a dummy adversary) a copy of F in the real world. By sending many messages to this copy, this copy will be exhausted. This copy of F, let us call it F[pid], which together with P̂ handles all sessions of a party pid, corresponds to several copies F[pid, sid] of F, for SIDs sid, in the ideal world. Hence, once F[pid] in the real world is exhausted, the simulator also has to exhaust all its corresponding copies F[pid, sid] in the ideal world for every sid, because otherwise an environment could easily distinguish the two worlds. (While F[pid] cannot respond, some of the copies F[pid, sid] otherwise still could.) Consequently, for the simulation to work, F will have to provide to the simulator a way to be terminated, a feature typically not contained in formulations of functionalities in the UC model. Hence, for such functionalities the joint state theorem would typically fail. However, this could be fixed by assuming this feature for functionalities (even though this might be quite artificial). A more serious problem is that the simulator might not know whether F[pid] in the real model is exhausted (the simulator does not necessarily see how many resources F[pid] gets from the I/O interface and how many resources F[pid] has used), and hence, the simulator does not know when to terminate the corresponding copies in the ideal model. So, in these cases again the joint state theorem fails. In fact, just as in the case of realizing F̂, it is not hard to come up with functionalities where the joint state theorem fails, including reasonable formulations of public-key encryption and digital signature functionalities. So, the joint state theorem cannot simply be applied to arbitrary functionalities.
One has to reprove this theorem on a case-by-case basis or characterize classes of functionalities for which the theorem holds true.

We finally note that in the original UC model [7] there is yet another, but smaller, problem with the joint state theorem. Since in the original UC model the number of copies of F that F̂ can simulate is bounded by a polynomial in the security parameter, this number typically also has to be bounded in the realization P̂. However, now the environment can instruct Q to generate many copies of F for one party. In the real world, after some time no new copies of F for this party can be generated because P̂ is bounded. However, an unbounded number of copies can be generated in the ideal world, which allows the environment to distinguish between the real and the ideal world. The above argument uses that the runtime of Q is big enough such that the environment can generate, through Q, more copies than P̂ can produce. So, this problem can easily be fixed by assuming that the runtime of Q is bounded appropriately. Conversely, given Q, the runtime of P̂ should be made big enough. This, however, has not been mentioned in the joint state theorem in [14].
As already mentioned in the introduction, despite the various problems with the joint state theorem in the UC model, useful and interesting results have been obtained within that model. However, it is crucial to equip that body of work with a coherent as well as more rigorous and elegant framework. We believe that the IITM model provides such a framework.

The Joint State Theorem in the IITM Model
In order to present the joint state theorem in the IITM model, assume that F is a protocol system (modeling an ideal functionality). For our joint state theorem any protocol system can be used. In applications, F will typically model an ideal functionality that can be used by multiple parties in one session. For example, F could be some σ_prefix-session version which expects messages of the form (pid, m), where pid is a party ID (PID). A specific instance of such a functionality would be a functionality of the form !F′, where F′ is a protocol system which describes an ideal functionality that can be used by one party in one session. Runs of !F′ can thus contain multiple instances of F′, where every instance can be addressed by some ID, which in this case would be interpreted as a PID. In particular, messages to !F′ would be of the form (pid, m), and such a message would be sent to the instance of F′ corresponding to pid, and this instance would be given the message m.

Fig. 2. A run of Q | P_js^F (left) and Q | !F̲ (right), respectively, with three sessions with SIDs sid_1, sid_2, sid_3. The runs are with respect to some environment that is not displayed. By !F̲[sid_i] we denote the copy of F̲ that is addressed by sid_i. The arrows denote the connections between the systems via I/O tapes and addressing with SIDs. In addition, all systems may be connected to the environment via I/O and network tapes; these connections are not displayed.
Given some ideal functionality F, the system !F̲ models a multi-session version of F: a run of !F̲ can contain multiple sessions of F. In order to send a message m to session sid, one would send the message (sid, m) to !F̲. If F is a multi-party, single-session formulation of an ideal functionality, as explained above, then in order to send a message m to party pid in session sid one would send the message (sid, (pid, m)) to !F̲.
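The addressing in such a multi-session version can be sketched as follows; the dispatcher class, the factory make_f standing in for F, and the message representation are illustrative assumptions:

```python
# Sketch of the addressing in a multi-session version !F: a message
# (sid, m) is routed to the copy of F for sid, created on demand.
# Answers are prefixed with the SID again.

class MultiSession:
    def __init__(self, make_f):
        self.make_f = make_f
        self.copies = {}     # sid -> instance of F

    def send(self, sid, m):
        if sid not in self.copies:
            self.copies[sid] = self.make_f()   # fresh copy of F for sid
        out = self.copies[sid].compute(m)
        return (sid, out)
```

For a multi-party, single-session F, the inner message m would itself be a pair (pid, m′), so that each copy can dispatch further to the party pid.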
In the formulation of our joint state theorem we use !F̲ to denote a multi-session version of the functionality F. However, the specific form of the multi-session version does not matter; we could replace !F̲ by any protocol system. We use !F̲ because this system is closest to the intended application of the theorem. Now, our joint state theorem can be stated as follows (see also Fig. 2 for an illustration of the runs of the systems considered in this theorem).

Theorem 5. Let Q and P_js^F be protocol systems such that they connect only via their I/O interfaces, Q | P_js^F is environmentally bounded, and P_js^F ≤ !F̲. Then, Q | P_js^F ≤ Q | !F̲.
The fact that Theorem 5 immediately follows from Theorem 1 shows that, in the IITM model, there is no need for an explicit (general) joint state theorem.
The reason that such a theorem is needed in the UC model lies in the restricted expressivity of this model: First, one has to define a single ITM F̂ which simulates the multi-party, multi-session version of F. One cannot simply write !F̲ because multi-party, multi-session versions only exist as part of a hybrid model. In particular, one cannot write P_js^F ≤ !F̲ directly, but has to say that P̂ realizes F̂. Second, the JUC operator has to be defined explicitly since it cannot be directly stated that only one instance of P_js^F is invoked by Q; in the IITM model we can simply write Q | P_js^F. Also, a composition theorem corresponding to Theorem 1, which is used to show that P_js^F can be replaced by !F̲, is not directly available in the UC model, only a composition theorem corresponding to Corollaries 1 and 2. Finally, due to the addressing mechanism employed in the UC model, redirections of messages have to be made explicit.
We note that despite the trivial proof of Theorem 5 in the IITM model (given the composition theorem), the statement that Theorem 5 makes is stronger than that of the joint state theorem in the UC model [6,14]. Inherited from our composition theorems, and unlike the theorem in the UC model, Theorem 5 does not require that Q completely shields the sub-protocol from the environment, and hence, from super-protocols on higher levels. This can lead to simpler systems and more efficient implementations.
As already mentioned in the introduction and further explained in [24], in the recently proposed GNUC model [20] it is also necessary to explicitly state a joint state theorem. The main problem in that model is that it imposes a tree structure on protocols, which for joint state (and global state) is too restrictive and requires a quite artificial workaround in that model.

Applying the joint state theorem. Theorem 5, just like the joint state theorem in the UC model, does not by itself yield practical joint state realizations, as it does not answer the question of how a practical realization P_js^F can be found. A desirable instantiation of P_js^F would be of the form !P_js | F, where !P_js is a very simple protocol in which for every party only one copy of P_js is generated and this copy handles, as a multiplexer, all sessions of this party via the single instance of the ideal multi-party, single-session functionality F. Hence, the goal is to find a protocol system !P_js (with one copy per party) such that

!P_js | F ≤ !F̲.   (1)

The protocol !P_js | F will be called a (practical) joint state realization of !F̲ in what follows. Now, assume that P ≤ F. Provided that F is a multi-party, single-session functionality, note that P too is a multi-party protocol which realizes a single session of F. By (1), the composition theorems, and the transitivity of ≤, we immediately obtain that !P_js | P ≤ !F̲. That is, we obtain a realization of the multi-session version of F where only one session of P is used (in combination with the multiplexer P_js) to realize all sessions of F.
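The multiplexer role of a copy of P_js for one party can be sketched as follows; the class, the way requests are tagged with the SID, and how F disambiguates sessions are illustrative assumptions of this sketch, not the actual realizations defined later in the paper:

```python
# Illustrative sketch of the multiplexer P_js for one party: all sessions
# of this party share the single instance `f` of the ideal multi-party,
# single-session functionality F. Requests are tagged with the SID so
# that answers can be routed back to the right session.

class PJs:
    def __init__(self, f, pid):
        self.f = f           # the shared (joint-state) instance of F
        self.pid = pid

    def handle(self, sid, m):
        # Forward the request of session sid to the single copy of F ...
        out = self.f.compute((self.pid, (sid, m)))
        # ... and route the answer back to session sid.
        return (sid, out)
```

The point of the sketch is that the per-party state of F (e.g., a key pair) is created once and reused across all sessions, while from the outside the system still looks like one session of F per SID.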
Moreover, if F = !F̲′ is the multi-party, single-session version of the single-party, single-session functionality F′, and P′ realizes F′, i.e., P′ ≤ F′, then !P_js | !P̲′ ≤ !P_js | !F̲′ ≤ !F̲, where P̲′ denotes the party version of P′, F̲′ the party version of F′, and !F̲ (for F = !F̲′) the session and party version of F′. That is, to realize the multi-session and multi-party version of F′, we obtain a joint state realization where only one copy of P′ is used per party. This copy handles all sessions of that party.
(Strictly speaking, one has to rename the I/O tapes of F on the right-hand side of (1) (or the I/O tapes of P_js on the left-hand side) to ensure that both sides have the same external I/O interface.)
The seamless treatment of joint state in the IITM model allows for iterative applications of the joint state theorem. Consider a protocol Q that uses the multi-session version !F̲ of a (multi-party) ideal functionality F. That is, we consider the system Q | !F̲. Furthermore, assume that multiple sessions of Q are used within a more complex protocol. Hence, such a protocol uses the system !(Q | !F̲)̲ = !Q̲ | !F̲̲. In this system, in every session of Q several sub-sessions of F can be used. Now, iterated application of the composition theorems/joint state theorem and (1) yields that !P_js | !P_js | F is a joint state realization of !F̲̲. Note that in this realization only a single instance of F is used to realize all sessions of F in the system !F̲̲. Messages sent to !F̲̲ (and hence, to !P_js | !P_js | F) are of the form (sid_1, sid_2, pid), where sid_1 denotes the SID of a session of Q, sid_2 the session of F within the session sid_1, and pid the party running in session (sid_1, sid_2). While in !F̲̲ there is a new copy of F for each SID pair (sid_1, sid_2), in the joint state realization all such sessions are handled by a single copy of F. If F = !F̲′, then all sessions (sid_1, sid_2) for party pid would be handled by the copy of F′ for pid. If, for example, F′ is an ideal (single-party, single-session) public-key encryption functionality, then this means that there is only one decryption/encryption box for every party, which is used across all sessions of Q.

Ideal Functionalities
We now present ideal functionalities for digital signatures, public-key encryption, and replayable public-key encryption, along with their realizations.

Notation for the Definition of IITMs
We start with notational conventions that we use in the following to define IITMs.

Pseudocode
To define IITMs (and algorithms in general), we use standard pseudocode with the obvious semantics.
By x := y we denote the deterministic assignment of a variable or constant y to a variable x. By x ← A we denote probabilistic assignment to a variable x according to the distribution of the output of an algorithm A. By x ←$ S we denote that x is chosen uniformly at random from a finite set S.
All values that are manipulated are bit strings or special symbols such as the symbol ⊥. We only use very basic data structures. For example, we often use tuples and sets of bit strings. As already mentioned at the end of the introduction, for tuples, we assume an efficient encoding as bit strings. Furthermore, we assume an efficient implementation of sets (e.g., by lists or tuples) that allows us (i) to add a bit string to a set, (ii) to remove a bit string from a set, (iii) to test if a bit string is an element of a set, and (iv) to iterate over all elements of a set. We denote the empty set by ∅.

Specification of IITMs
Most of our definitions of IITMs are divided into six parts (where some are optional): Parameters, Tapes, State, CheckAddress, Initialization, and Compute.

Parameters.
In this part, we list all parameters of the IITM. That is, when defining a system that contains this IITM, these parameters have to be instantiated. This part is omitted if the IITM has no parameters. For example, our ideal functionalities are typically parameterized by a number n > 0 that defines the I/O interface (more precisely, the number of I/O tape pairs, see below).

Tapes.
This part lists all input and output tapes. Unless otherwise stated, I/O tapes are named io^y_x and network tapes are named net^y_x for some decorations x, y. The IITMs we define in this paper have a corresponding output tape for every input tape. The intuition is that, upon receiving a message on some input tape, the response is sent on the corresponding output tape. Furthermore, we typically give a name (this name is independent of the tape names) to every such pair of input and output tapes: We write "from/to z: (c, c′)" to denote that the pair of tapes (c, c′) is named z. Then, we refer to the input tape c by "from z" and to the output tape c′ by "to z". We use the generic names IO and NET to refer to general I/O and network tapes to which an environment or adversary/simulator, respectively, connects. If the tapes connect to a known machine/system, we typically use the name of this machine/system. For example, the ideal signature functionality F sig (see Sect. 4.2.1) has the I/O input tapes io^in_i (for all i ∈ {1, . . . , n}, where the number n is a parameter of F sig), the network input tape net^in_{F sig}, and the corresponding output tapes io^out_i and net^out_{F sig}. We give the name IO_i to the pair (io^in_i, io^out_i) and the name NET to (net^in_{F sig}, net^out_{F sig}). So, "from IO_i" refers to the tape io^in_i, "to NET" refers to net^out_{F sig}, etc.

State.
Here, we list all state variables of the machine. These are variables that define the state of this copy of the IITM and are saved on its work tapes (i.e., they are local to the copy of the IITM and cannot be accessed by other copies). These state variables are set to some initial value when a copy of this machine is created. Typically, the initial value is ⊥ (undefined) for bit strings and tuples of bit strings and the empty set ∅ for sets. In mode Compute, the machine may modify the values of these variables. We always use sans-serif font for state variables.
For example, all ideal functionalities that we define in this paper have a state variable corrupted ∈ {false, true} which holds the corruption status of the ideal functionality.

CheckAddress.
In this part, we define the mode CheckAddress of the machine.

Initialization.
This part is optional. If it exists and (this copy of) the machine is activated for the first time in mode Compute, then the machine executes the code in this part. When the code finishes, the machine then processes the incoming message as defined in the part Compute, see below.
Initialization is used for example to tell the adversary (or simulator) that a new copy of this machine has been created and to allow her to corrupt this copy of the machine right from the start.

Compute.
The description in mode Compute consists of a sequence of blocks where every block is of the form "recv m_t on c s.t. condition: code", where m_t is an input template (see below), c is an input tape (see above), condition is a condition on the input, and code is the code of this block that is executed if the input template matches and the condition is satisfied (see below).
An input template is recursively defined as follows: It is either an unbound variable, a constant bit string, a state variable (see above), or a tuple of input templates. We say that a bit string m matches an input template m_t if there exists a mapping σ from the unbound variables in m_t to bit strings such that m equals m′_t, where m′_t is obtained from m_t by replacing every unbound variable x in m_t by the bit string σ(x) and every state variable x in m_t by the value of that state variable (according to the state of the machine). We say that σ is the matcher of m and m_t. To distinguish unbound variables from constant bit strings and state variables, we use sans-serif font for constant bit strings and state variables and cursive font for unbound variables. For example, the input template (Enc, x) is matched by every tuple that consists of the constant bit string Enc and an arbitrary bit string.
Upon activation, the blocks are checked one after the other. The (copy of the) machine executes the code of the first block that matches the input (see below). If no block matches the input, the machine stops for this activation without producing output. In the next activation, the machine will again go through the sequence of blocks, starting with the first one, and so on.
A block, as above, matches some input, say message m on input tape c′, if c′ = c, m matches m_t (as defined above), and condition is satisfied. The condition may use state variables of the machine and the unbound variables contained in m_t (these are instantiated by the matcher σ of m and m_t). Similarly, when executing the code, the unbound variables contained in m_t are instantiated by the matcher σ of m and m_t.
Every execution of code ends with a send command: "send m on c", where m is a bit string and c is an output tape. This means that the machine outputs the message m on tape c and stops for this activation. In the next activation, the machine will not proceed at the point where it stopped but again go through the sequence of blocks, starting with the first one, as explained above. However, if the send command is directly followed by a receive command, as in "send m on c; recv m′_t on c′ s.t. condition′" (where m′_t is an input template, c′ an input tape, and condition′ a condition, as above), then the machine does the following: It outputs m on tape c and stops for this activation. In the next activation, it checks whether it received a message on input tape c′ that matches m′_t and satisfies condition′ (as above). If so, the computation continues at this point in the code. Otherwise, the machine stops for this activation without producing output. In every subsequent activation, it again checks whether it received a message on input tape c′ that matches m′_t and satisfies condition′, and so on, until it receives an expected message.
For named pairs of input and output tapes, as described above in the Tapes part, we use the following notation: Let z be the name of the pair (c, c′) of an input tape c and an output tape c′. Then, we write "recv m_t from z s.t. condition" for "recv m_t on c s.t. condition" and "send m to z" for "send m on c′".
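As an illustration of the matching and dispatch semantics above, the following Python sketch models templates as nested tuples and bit strings as Python strings. All names here (Var, match, dispatch) are ours, chosen for illustration; they are not part of the IITM model.

```python
from dataclasses import dataclass

# Unbound variables in templates are marked with Var; constants and
# (already-resolved) state-variable values are plain strings.
@dataclass(frozen=True)
class Var:
    name: str

def match(template, msg, binding=None):
    """Return the matcher (a dict binding variable names to values) if msg
    matches template, or None. Templates are Vars, constants, or tuples."""
    if binding is None:
        binding = {}
    if isinstance(template, Var):
        bound = binding.get(template.name)
        if bound is not None and bound != msg:
            return None          # the same variable must match consistently
        binding[template.name] = msg
        return binding
    if isinstance(template, tuple):
        if not isinstance(msg, tuple) or len(msg) != len(template):
            return None
        for t, m in zip(template, msg):
            if match(t, m, binding) is None:
                return None
        return binding
    return binding if template == msg else None   # constant / state value

def dispatch(blocks, tape, msg):
    """Execute the first block whose tape, template, and condition match;
    blocks are (tape, template, condition, code) tuples."""
    for btape, template, condition, code in blocks:
        if btape != tape:
            continue
        sigma = match(template, msg)
        if sigma is not None and condition(sigma):
            return code(sigma)
    return None                  # no block matches: stop without output

# The template (Enc, x) from the text: constant "Enc" with an unbound x.
blocks = [("io_in_1", ("Enc", Var("x")), lambda s: True,
           lambda s: ("ciphertext-for", s["x"]))]
print(dispatch(blocks, "io_in_1", ("Enc", "m1")))  # matches the template
print(dispatch(blocks, "io_in_1", ("Dec", "c1")))  # no match -> None
```

Note that, as in the model, blocks are tried strictly in order and a non-matching input simply produces no output.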

Running External Code
Sometimes, an IITM M obtains the description of an algorithm A as input on some tape and has to execute it (e.g., all ideal functionalities defined in this paper receive algorithms from the adversary/simulator). We write y ← A^(p)(x), where p is a polynomial, to say that M simulates algorithm A on input x for p(η + |x|) steps, where η is the security parameter and |x| the length of x. The random coins that might be used by A are chosen by M uniformly at random. The variable y is set to the output of A if A terminates after at most p(η + |x|) steps. Otherwise, y is set to the error symbol ⊥. If we want to enforce that M simulates A in a deterministic way, we write y := A^(p)(x). In this case, M sets the random coins of A to zero.
Typically, we are interested in environmentally bounded systems. If such a system contains an IITM M that executes external code A (e.g., A is provided by the adversary or simulator), then M is only allowed to perform a polynomial number of steps for executing the algorithm A (except with negligible probability). So, M has to be parameterized by a polynomial p and simulates A as described above. We note that at least the degree of the polynomial that bounds the runtime of the algorithm has to be fixed in advance because it must not depend on the security parameter. This holds true for any definition of polynomial time and is not a limitation of the definition of polynomial time in the IITM model.
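The step-bounded simulation of external code can be sketched as follows. We model an adversary-supplied algorithm as a Python generator that yields once per computation step; the names run_bounded and copy_alg, and the one-yield-per-step cost model, are our illustrative assumptions, not part of the model.

```python
BOT = None  # stands in for the error symbol ⊥

def run_bounded(alg, x, eta, p, coins):
    """Simulate external algorithm alg on input x for at most p(eta + len(x))
    steps. alg is a generator function alg(x, coins) that yields once per
    computation step and finally returns its output."""
    budget = p(eta + len(x))
    gen = alg(x, coins)
    try:
        for _ in range(budget):
            next(gen)
    except StopIteration as done:
        return done.value        # finished within the step budget
    return BOT                   # budget exhausted: output is ⊥

# A toy "algorithm": copies its input bit by bit, one step per bit.
def copy_alg(x, coins):
    out = ""
    for bit in x:
        out += bit
        yield
    return out

p = lambda n: 2 * n              # the fixed runtime bound
print(run_bounded(copy_alg, "1011", eta=4, p=p, coins=[]))            # "1011"
print(run_bounded(copy_alg, "1011", eta=4, p=lambda n: 0, coins=[]))  # ⊥
```

As in the text, the polynomial p is fixed in advance; an algorithm that does not finish within its budget yields ⊥ rather than running longer.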

Digital Signatures
In this section, we present our ideal functionality for digital signatures with local computation as explained in the introduction and show that a digital signature scheme realizes this functionality if and only if it is UF-CMA secure; see Sect. 6.1 for a comparison of our digital signature functionality with other functionalities in the literature.

An Ideal Functionality F sig for Digital Signatures
The basic idea of an ideal functionality for digital signatures is that verification only succeeds if the message has actually been signed using the functionality. This ideally prevents forgery of signatures, see, e.g., [6,8].
Our ideal signature functionality F sig (n, p) is an IITM which is parametrized by a number n > 0 and a polynomial p. We often omit n and p and just write F sig instead of F sig (n, p). The number n defines the I/O interface: for every i ∈ {1, . . . , n}, F sig has an I/O input tape and an I/O output tape. These I/O tapes allow (machines of) a protocol that uses F sig to send requests to F sig (and to receive the responses). For example, these tapes can be used by a protocol system that consists of n machines such that the i-th machine connects to the i-th I/O input and output tape of F sig . We note that these tapes are only for addressing purposes, to allow n different machines to connect to F sig ; F sig does not interpret input on different I/O tapes differently. If a request is sent on the i-th I/O input tape, F sig outputs the response on the i-th I/O output tape. 15 Furthermore, F sig has a network input tape and a network output tape to communicate with the adversary (or simulator). In mode CheckAddress, F sig accepts all input on all tapes. As usual for machines that run external code (see Sect. 4.1.3), the polynomial p bounds the runtime of the signing and verification algorithms provided by the adversary. Since every potential signing and verification algorithm has polynomial runtime, p can always be chosen in such a way that the algorithms run as expected.
The functionality F sig is defined in pseudocode in Fig. 3. Upon the first request (initialization), F sig first asks the adversary for a signature and verification algorithm, a public/private key pair, and whether it is corrupted (this allows corruption upon initialization but later corruption is allowed too, see below). We note that, when F sig executes these algorithms, F sig executes them as described in Sect. 4.1.3 where the polynomial p is used to bound their runtime and the execution of the verification algorithm is forced to be deterministic. After the initialization, the first request is executed just as all later requests. We now describe the operations that F sig provides in more detail. See also the remarks below for the typical usage of this functionality.
Public key request PubKey?: Upon this request on an I/O input tape, F sig returns (on the corresponding I/O output tape) the recorded public key (provided by the adversary upon initialization). This request allows the "owner" of the public/private key pair to obtain its public key (e.g., to distribute it) and can also be used to model certain setup assumptions such as a public-key infrastructure (see the remarks below).
15 Alternatively, one could use a single pair of I/O tapes and send messages of the form (r, m), where r is an identifier for a role. In this way, one can model an arbitrary number of roles that can use F sig.
Signature generation request (Sign, x): Upon a signature generation request for a message x on an I/O input tape, F sig computes a signature for x using the recorded signature generation algorithm and private key (both provided by the adversary upon initialization). Then, F sig checks that the signature verifies (using the recorded verification algorithm and public key). If this check fails 16 and F sig is uncorrupted (note that upon corruption, F sig does not guarantee anything, not even that the public and private key belong together), F sig returns an error message. Otherwise, F sig records the message x (to prevent forgery, see below) and returns the signature.
Verification request (Verify, pk, x, σ): Upon a signature verification request on an I/O input tape, F sig verifies the signature σ for x using the provided public key pk and the recorded verification algorithm (provided by the adversary). If the verification succeeds, F sig is not corrupted, pk equals the recorded public key (provided by the adversary), and x has not been recorded (upon signature generation), then F sig returns an error message. This ideally prevents forgery (if F sig is uncorrupted and the correct public key is used) because it guarantees that signatures only verify if the message has previously been signed using F sig. Otherwise, F sig returns the verification result.
Corruption status request CorrStatus?: Upon a corruption status request on an I/O input tape, F sig returns its corruption status, i.e., true if it is corrupted and false otherwise.
As always in universal composability settings, the distinguishing environment should have the possibility to know which functionalities are corrupted because, otherwise, a simulator could always corrupt a functionality and then no security guarantees would be provided by the functionality. As a result, in the case of F sig , even insecure digital signature schemes would realize F sig .
Corrupt request Corrupt: Upon a corruption request on the network input tape (i.e., from the adversary), F sig records that it is corrupted and returns an acknowledgment message (on the network output tape). This models adaptive corruption. We could have defined F sig to output its entire state (in particular all recorded messages) to the adversary upon corruption. However, this would only make the simulator stronger, and it is not needed to realize F sig, as we will see below.
Remarks. As mentioned in the introduction, since signatures are determined by local computations, the signatures and the signed messages are a priori not revealed to the adversary. This, for example, is needed to reason about protocols where signatures or signed messages should remain secret.
16 Note that every reasonable digital signature scheme satisfies that this check never fails. However, as we do not put any restrictions on the algorithms provided by the adversary, F sig does not know whether they have this property. This test guarantees that every verification request to F sig succeeds for signatures that have been created by F sig (if the correct message and public key are provided upon verification).
The functionality F sig is formulated for a single public/private key pair. The "owner" of this key pair is not made explicit in F sig because it is irrelevant for the tasks provided by F sig . Instead, the environment has to use F sig appropriately, i.e., only the party that "owns" this key pair should be allowed to send Sign requests. Of course, every party should be allowed to send Verify requests. If parties other than the "owner" are allowed to send PubKey? requests to F sig , then this models that the public key is distributed among the parties, e.g., by some kind of a public-key infrastructure.
To make the "owner" explicit and to obtain multiple instances of F sig , one can consider the system !F sig , where F sig is the multi-party version of F sig . Recall that for every PID pid (in a run of !F sig with some environment) there can be a copy of F sig . Let us denote this copy by F sig [pid]. The "owner" of F sig [pid] is the party with PID pid. Every message sent to/received from this copy is prefixed with pid. For example, if a party wants to verify a message, it would send a message of the form (pid, (Verify, pk, x, σ )) to F sig [pid]. Only the owner of F sig [pid] should have unrestricted access to all commands provided by F sig [pid]. Other parties should, for example, not be able to issue signing requests to F sig [pid]. As mentioned, this should be guaranteed by the protocols that use F sig [pid].
Alternatively, one can restrict access to F sig in the following way. One can add a wrapper M access that controls access to F sig. The machine M access connects to all I/O tapes of F sig and has the same number of I/O tapes for connecting to the environment. An instance of M access expects inputs that are prefixed with two PIDs (pid sender , pid owner ), where pid sender is the ID of the sender of the input and pid owner is the ID of the owner of the instance F sig [pid owner ]. Now, M access forwards verification requests to F sig [pid owner ] for any combination of pid sender and pid owner; signing requests, however, are blocked unless pid sender = pid owner. All responses from F sig are returned via M access to the environment. Thus, a higher-level protocol can sign messages only in the name of its own PID, but can verify messages for any PID. Note that this assumes that higher-level protocols are defined in such a way that parties cannot lie about their PIDs at the I/O interface.
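The access control performed by M access can be sketched as follows. This is a simplified illustration: PIDs and requests are modeled as Python strings and tuples, and the stub instance standing in for F sig [pid owner ] is our assumption.

```python
# Sketch of the access wrapper M_access: verification requests pass for any
# sender, signing requests only when the sender owns the key pair.
def m_access(instances, pid_sender, pid_owner, request):
    f_sig = instances[pid_owner]          # the copy F_sig[pid_owner]
    kind = request[0]
    if kind == "Sign" and pid_sender != pid_owner:
        return "Blocked"                  # only the owner may sign
    return f_sig(request)                 # forward and return the response

# A stub copy of F_sig for party "alice" that just echoes the request.
instances = {"alice": lambda req: ("ok", req)}
print(m_access(instances, "bob", "alice", ("Verify", "pk", "x", "s")))
print(m_access(instances, "bob", "alice", ("Sign", "x")))   # "Blocked"
```

Anyone may verify with alice's instance, but only alice herself may sign with it.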
The multi-session, multi-party version of F sig can be described by the system !F sig .
In this system, to address different copies of F sig all messages are prefixed by a SID and a PID. It is easy to see that F sig is environmentally strictly bounded. Hence, by Lemma 2, both the multi-party version !F sig and the multi-session, multi-party version !F sig are environmentally strictly bounded.

Realizing F sig by UF-CMA Secure Digital Signature Schemes
In this section, we show that a (digital) signature scheme (more precisely, the protocol system induced by it) realizes the ideal signature functionality F sig if and only if the signature scheme is UF-CMA secure (unforgeability under chosen-message attacks). UF-CMA security is a standard security notion for signature schemes, see, e.g., [16]. We recall the definition of signature schemes and UF-CMA security in "Appendix A.1".
Every signature scheme Σ = (gen, sig, ver) induces a realization P sig (n, Σ) of F sig (n, p) (where p depends on Σ) in a straightforward way. This realization is defined as follows (see also Fig. 9 in the appendix): Upon initialization (i.e., when receiving the first message), P sig asks the adversary whether it is corrupted. If the adversary decides to corrupt P sig upon initialization, she provides a public/private key pair. Otherwise, P sig generates a fresh key pair itself (using gen). The key pair, say (pk, sk), is recorded in P sig. The adversary can also corrupt P sig adaptively by sending the message Corrupt to P sig, upon which P sig returns the recorded key pair (pk, sk). Upon a signature generation request of the form (Sign, x) from some party (i.e., from the environment on an I/O input tape), P sig computes a signature σ ← sig(sk, x) and returns σ. Upon a signature verification request of the form (Verify, pk′, x, σ) from some party, P sig verifies the signature, b := ver(pk′, x, σ), and returns b. Upon a public key request (PubKey?) from some party, P sig returns the recorded public key pk. Upon a corruption status request from some party, P sig returns true if it has been corrupted by the adversary (upon initialization or by receiving the message Corrupt) and false otherwise. It is easy to see that P sig is environmentally strictly bounded.
We obtain the following theorem. A proof is provided in "Appendix B.1". The proof is similar to other proofs for realizations of digital signatures [1,6,8].
Theorem 6. Let n > 0, Σ be a signature scheme, and p be a polynomial that bounds the runtime of the algorithms in Σ (in the length of their inputs). Then, Σ is UF-CMA secure if and only if P sig (n, Σ) ≤ F sig (n, p).
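For reference, the structure of the UF-CMA experiment that Theorem 6 relates to F sig can be sketched as follows. As a stand-in scheme we use HMAC-SHA256, which is a MAC rather than a public-key signature scheme, but it suffices to exercise the shape of the experiment; the function names are ours.

```python
import os, hmac, hashlib

# UF-CMA experiment sketch: the adversary gets a signing oracle and wins by
# outputting a valid signature on a message it never queried.
def gen():
    k = os.urandom(16)
    return k, k                        # (pk, sk); symmetric toy scheme

def sig(sk, x):
    return hmac.new(sk, x, hashlib.sha256).digest()

def ver(pk, x, s):
    return hmac.compare_digest(sig(pk, x), s)

def uf_cma(adversary):
    pk, sk = gen()
    queried = []
    oracle = lambda x: (queried.append(x), sig(sk, x))[1]
    x, s = adversary(pk, oracle)
    return ver(pk, x, s) and x not in queried   # True iff a forgery

# A trivial adversary that merely replays an oracle answer: it never wins.
replayer = lambda pk, oracle: (b"m", oracle(b"m"))
print(uf_cma(replayer))                # False
```

The experiment returns True exactly when the adversary produced a fresh valid signature, which is what F sig ideally rules out.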

Public-Key Encryption
In this section, we present our ideal functionality for public-key encryption with local computation as explained in the introduction. Our functionality is parametrized by what we call a leakage algorithm, which allows one to define the amount of information that may be leaked by an encryption. We first define and discuss leakage algorithms. Then, we present our ideal public-key encryption functionality and show that a public-key encryption scheme realizes this functionality (given an appropriate leakage algorithm) if and only if it is IND-CCA2 secure; see Sect. 6.2 for a comparison of our public-key encryption functionality with other functionalities in the literature.

Leakage Algorithms
We now introduce leakage algorithms that are used by our ideal functionality for public-key encryption. In this functionality, instead of the actual plaintext, its leakage is encrypted. The leakage is computed by a leakage algorithm and captures the amount of information that may be leaked about the plaintexts even in the ideal setting. A leakage algorithm L with polynomial-time decidable 17 domain D is a probabilistic, polynomial-time algorithm that takes as input 1^η for some security parameter η ∈ N and a plaintext x ∈ D(η) and returns a bit string x̄ ∈ D(η), the leakage of x.
The plaintext domain associated with a leakage algorithm L is denoted by D L.
17 That is, there exists a deterministic algorithm A and a polynomial p such that, for all η ∈ N and x ∈ {0, 1}*, A decides whether x ∈ D(η) within p(η + |x|) steps.
Example 1. Two simple leakage algorithms are (i) the algorithm that returns a constant bit string of the same length as the plaintext (e.g., 0^|x|) and (ii) the algorithm that returns a random bit string of length |x|. They both leak exactly the length of a plaintext. The domain of these leakage algorithms is the domain of all bit strings.
We sometimes require leakage algorithms to have some of the following properties.

Definition 5.
We call a leakage algorithm L length preserving if |x̄| = |x| for every η ∈ N, x ∈ D L (η), and leakage x̄ produced by L(1^η, x).
We say that a leakage algorithm leaks at most the length of a plaintext if the leakage of a plaintext does not reveal any information about the actual bits of the plaintext. Formally, this is defined as follows: Definition 6. A leakage algorithm L leaks at most the length of a plaintext if there exists a probabilistic, polynomial-time algorithm T such that, for every η ∈ N and every x ∈ {0, 1}*, the output distribution of T(1^η, 1^|x|) equals the output distribution of L(1^η, x) (where the probability is over the random coins of T and L, respectively).

Definition 7.
We say that a leakage algorithm L leaks exactly the length of a plaintext if it is length preserving and leaks at most the length of a plaintext.
Definition 8. We say that a leakage algorithm has high entropy if collisions of the leakage occur only with negligible probability.
For example, both leakage algorithms from Example 1 are length preserving and leak at most the length of a plaintext (for any plaintext domain), i.e., they leak exactly the length of a plaintext. Moreover, the second leakage algorithm, which returns a random bit string of the same length as the plaintext, has high entropy if its plaintext domain only contains "long" plaintexts, e.g., only bit strings of length ≥ η. 18 We note that deterministic leakage algorithms (e.g., the first leakage algorithm from Example 1, which returns a constant bit string of the same length as the plaintext) do not have high entropy if they are associated with any non-empty domain of plaintexts.
18 For this leakage algorithm, the probability of collisions of leakages is 2^−l for plaintexts that have the same length l and 0 if they are of different length. Hence, for a plaintext domain {D(η)} η∈N with |x| ≥ η for all η ∈ N and x ∈ D(η), the probability in Definition 8 is at most 2^−η, which is negligible.
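The two leakage algorithms of Example 1, together with a simulator T as in Definition 6 for the second one, can be sketched as follows (bit strings are modeled as Python strings of '0'/'1'; the function names are ours).

```python
import secrets

# Leakage algorithm (i) of Example 1: a constant bit string of length |x|.
def leak_const(eta, x):
    return "0" * len(x)

# Leakage algorithm (ii) of Example 1: a random bit string of length |x|.
def leak_random(eta, x):
    return "".join(secrets.choice("01") for _ in range(len(x)))

# Simulator T for leak_random (Definition 6): it sees only the length |x|,
# yet its output distribution equals that of leak_random(eta, x).
def simulator_T(eta, length):
    return "".join(secrets.choice("01") for _ in range(length))

x = "110101"
assert len(leak_const(4, x)) == len(x)    # length preserving (Definition 5)
assert len(leak_random(4, x)) == len(x)
print(leak_const(4, x))                   # "000000"
```

Both algorithms are length preserving, and since T depends on x only through |x|, they leak at most (hence exactly) the length of a plaintext; only leak_random additionally has high entropy on domains of long plaintexts.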

An Ideal Functionality F pke for Public-Key Encryption
Our ideal functionality F pke for public-key encryption with local computation is in the spirit of the one proposed by Canetti in [6] (version of December 2005) in that, other than providing an encryption and decryption algorithm as well as a public/private key pair (Canetti does not distinguish between a public key and the encryption algorithm), the simulator is not involved in the execution of the functionality. In particular, all encryptions and decryptions are performed locally within the functionality. However, our formulation differs in essential ways from the one by Canetti, e.g., Canetti's formulation is not suitable for joint state realizations (see Sect. 6.2).
We now present our ideal public-key encryption functionality F pke (n, p, L). In many technical matters the formulation is similar to F sig . The IITM F pke (n, p, L) is parametrized by a number n > 0, a polynomial p, and a leakage algorithm L. We often omit some or all parameters if they are clear from the context and, for example, just write F pke instead of F pke (n, p, L). Just like for F sig , n determines the I/O interface of F pke and p bounds the runtime of the encryption and decryption algorithms provided by the adversary. Since every potential encryption and decryption algorithm has polynomial runtime, p can always be chosen in such a way that the algorithms run as expected. Furthermore, just like F sig , F pke has a network input and output tape to communicate with the adversary (or simulator).
The functionality F pke is defined in pseudocode in Fig. 4. We now describe the operations that F pke provides in more detail. Upon the first request (initialization), F pke first asks the adversary for an encryption and decryption algorithm, a public/private key pair, and whether it is corrupted (this models static corruption). We note that, when F pke executes these algorithms, F pke executes them as described in Sect. 4.1.3 where the polynomial p is used to bound their runtime and the execution of the decryption algorithm is forced to be deterministic. After the initialization, the first request is executed just as all later requests.
Public key request PubKey?: Just as F sig , upon this request on an I/O input tape, F pke returns the recorded public key (on the corresponding I/O output tape). This request allows the "owner" of the public/private key pair to obtain its public key (e.g., to distribute it) and can also be used to model certain setup assumptions such as a public-key infrastructure (see the remarks below).
Encryption request (Enc, pk, x): Upon an encryption request for a plaintext x ∈ D L (η) (recall that D L = {D L (η)} η∈N is the plaintext domain associated with the leakage algorithm L) under a public key pk on an I/O input tape, F pke does the following. If F pke is corrupted or pk is not the recorded public key (that has been provided by the adversary upon initialization), F pke encrypts x under pk (using the encryption algorithm provided by the adversary upon initialization) and returns the ciphertext. Otherwise, F pke generates the ciphertext by encrypting the leakage x̄ ← L(1^η, x) of x. Then, F pke checks that the decryption of the ciphertext yields the leakage x̄ again. If this check fails, F pke returns an error message. Otherwise, F pke records the plaintext x for that ciphertext (for later decryption) and returns the ciphertext.
We note that every reasonable encryption scheme satisfies that the decryption of the encryption yields the plaintext again. However, as we do not put any restrictions on the algorithms provided by the adversary, F pke does not know whether they have this property. In the remarks below we explain why the decryption test performed by F pke is useful and sometimes needed. In particular, it is needed for our joint state realization, see Sect. 5.2.
Decryption request (Dec, y): Upon a decryption request for a ciphertext y on an I/O input tape, F pke does the following. If F pke is corrupted or there is no recorded message for y, F pke decrypts y using the recorded private key and decryption algorithm (both provided by the adversary upon initialization) and returns the resulting plaintext. Otherwise, the plaintext that is recorded for y is returned (an error message is returned if there is more than one recorded plaintext for y because unique decryption is not possible in this case).
Corruption status request CorrStatus?: Just as F sig, upon a corruption status request on an I/O input tape, F pke returns true if it is corrupted and false otherwise; see the description of F sig for a discussion on corruption status requests.
Remarks. The same remarks as for F sig (see Sect. 4.2.1) also apply to F pke: It is left to the environment to use F pke appropriately, i.e., only the "owner" of the public/private key pair should use F pke to decrypt messages. Alternatively, one can use a wrapper similar to M access for F sig to control encryption and decryption requests. As mentioned for F sig, a multi-party version of F pke where every party (with PID) pid owns one copy of F pke can be modeled by the system !F pke and a multi-session, multi-party version of F pke can be modeled by !F pke.
If F pke (L) is used with a leakage algorithm L with high entropy, then an uncorrupted F pke guarantees that ciphertexts stored in H cannot be guessed. For example, if one ciphertext, say y, is given to the adversary only encrypted (nested encryption), then the adversary is not able to guess y. The reason that F pke (L) has this property, provided that L has high entropy, is as follows: the ciphertext has to contain as much information as the leakage L(1^η, x) because of the decryption test performed in F pke (L) (decryption of a ciphertext must yield the encrypted leakage). Since the leakage has high entropy, L(1^η, x) is sufficiently random and can be guessed only with negligible probability.
It can be shown that a realization of F pke is impossible if it is adaptively corruptible [25]. Therefore, our formulation of F pke , unlike F sig , only allows for corruption upon initialization.
It is easy to see that F pke is environmentally strictly bounded. Hence, by Lemma 2, both the multi-party version !F pke and the multi-session, multi-party version !F pke are environmentally strictly bounded.
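The essential encryption/decryption bookkeeping of F pke can be sketched as follows. This is a simplified illustration of the behavior described above, not the pseudocode of Fig. 4: corruption handling, addressing, and runtime bounds are omitted, all names are ours, and the toy algorithms stand in for those provided by the adversary.

```python
class FPke:
    """Sketch of the core of F_pke: the leakage of the plaintext is encrypted
    and the plaintext is recorded for the resulting ciphertext, so decryption
    of recorded ciphertexts is an ideal table lookup."""
    def __init__(self, enc, dec, pk, sk, leak, eta):
        self.enc, self.dec, self.pk, self.sk = enc, dec, pk, sk
        self.leak, self.eta = leak, eta
        self.table = []                       # recorded (ciphertext, plaintext)

    def encrypt(self, pk, x):
        if pk != self.pk:
            return self.enc(pk, x)            # unknown key: encrypt x itself
        bar_x = self.leak(self.eta, x)        # encrypt the leakage instead
        y = self.enc(pk, bar_x)
        if self.dec(self.sk, y) != bar_x:
            return "Error"                    # the decryption test of F_pke
        self.table.append((y, x))
        return y

    def decrypt(self, y):
        hits = [x for (c, x) in self.table if c == y]
        if len(hits) == 1:
            return hits[0]                    # ideal decryption via lookup
        if len(hits) > 1:
            return "Error"                    # no unique decryption
        return self.dec(self.sk, y)           # unrecorded: real decryption

# Toy algorithms standing in for those the adversary provides.
f = FPke(enc=lambda pk, m: ("c", pk, m), dec=lambda sk, y: y[2],
         pk="pk0", sk="sk0", leak=lambda eta, x: "0" * len(x), eta=4)
y = f.encrypt("pk0", "1101")
print(y)               # a ciphertext of the leakage "0000", not of "1101"
print(f.decrypt(y))    # "1101", recovered from the table
```

The ciphertext carries only the leakage of the plaintext; the actual plaintext is recoverable only through the functionality's own table.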

Realizing F pke by IND-CCA2 Secure Public-Key Encryption Schemes
In this section, we show that a public-key encryption scheme realizes the ideal public-key encryption functionality F pke (given an appropriate leakage algorithm) if and only if the encryption scheme is IND-CCA2 secure (indistinguishability under chosen-ciphertext attacks). IND-CCA2 security is a standard security notion for public-key encryption schemes, see, e.g., [3,4]. We recall the definition of public-key encryption schemes and IND-CCA2 security in "Appendix A.2". Similar to leakage algorithms, we assume that every public-key encryption scheme Σ is associated with a polynomial-time decidable domain of plaintexts D Σ = {D Σ (η)} η∈N for some D Σ (η) ⊆ {0, 1}* for every security parameter η ∈ N.
Every public-key encryption scheme Σ = (gen, enc, dec) induces in a straightforward way a realization P pke (n, Σ) of F pke. The realization P pke (n, Σ) is defined in Fig. 10 (in the appendix). Informally, it is described as follows: Upon initialization (i.e., when the first message is received), P pke asks the adversary whether it is corrupted. If the adversary decides to corrupt P pke upon initialization, she provides a public/private key pair. Otherwise, P pke generates a fresh key pair itself (using gen). The key pair, say (pk, sk), is recorded in P pke. As already mentioned above, F pke is not realizable under adaptive corruption due to the commitment problem [25]. Therefore, the adversary can only corrupt P pke upon initialization. Upon an encryption request of the form (Enc, pk′, x) with x ∈ D Σ (η) from some party (i.e., from the environment on an I/O input tape), P pke computes the ciphertext y ← enc(pk′, x) and returns y. Upon a decryption request of the form (Dec, y) from some party, P pke computes the plaintext x := dec(sk, y) (where sk is the recorded private key) and returns x. Upon a public key request (PubKey?) from some party, P pke returns the recorded public key pk. Upon a corruption status request from some party, P pke returns true if it has been corrupted by the adversary upon initialization and false otherwise. It is easy to see that P pke is environmentally strictly bounded.
The following theorem shows that F pke (L) exactly captures the standard security notion IND-CCA2 if the leakage algorithm leaks exactly the length of a plaintext (Definition 7). A proof of the following theorem is provided in "Appendix B.2".

Theorem 7. Let n > 0, Σ be a public-key encryption scheme, p be a polynomial that bounds the runtime of the algorithms in Σ (in the length of their inputs), and L be a leakage algorithm such that D Σ = D L (i.e., Σ and L have the same plaintext domain) and L leaks exactly the length of a plaintext (e.g., L is one of the algorithms from Example 1). Then, Σ is IND-CCA2 secure if and only if P pke (n, Σ) ≤ F pke (n, p, L).
The direction from left to right holds for any length preserving leakage algorithm L and the direction from right to left holds for any leakage algorithm L that leaks at most the length of a plaintext.
We note that Bellare et al. [4] define two security notions for public-key encryption schemes (namely IND-CCA-BP and IND-CCA-BE) that are shown to be strictly weaker than IND-CCA2 security (which is called IND-CCA-SE in the taxonomy of [4]). Theorem 7 now shows that these weaker notions do not suffice to realize F pke (if L leaks at most the length of a plaintext).

Replayable Public-Key Encryption
In this section, we present our replayable public-key encryption functionality with local computation, as explained in the introduction, and show that a public-key encryption scheme realizes this functionality (given an appropriate leakage algorithm) if and only if it is IND-RCCA secure. We refer to Sect. 6.3 for a comparison of our replayable public-key encryption functionality with other functionalities in the literature.

An Ideal Functionality F rpke for Replayable Public-Key Encryption
Our ideal functionality F rpke with local computation for replayable public-key encryption is defined as follows.
The functionality F rpke (n, p, L) (or F rpke for short) is, just as F pke , parametrized by a number n > 0 which defines the I/O interface, a polynomial p that bounds the runtime of the algorithms provided by the adversary (or simulator), and a leakage algorithm L. A definition of F rpke in pseudocode is given in Fig. 5. The only difference between F rpke and F pke is that upon encryption of a plaintext x, the pair (x, dec(sk, y)) is stored instead of (x, y), where y is the ciphertext (i.e., the encryption of the leakage of x), and that upon decryption the lookup is based not on the ciphertext y but on the decryption dec(sk, y) of the ciphertext. Hence, it might be possible for an adversary to produce a ciphertext y′ ≠ y such that the decryption x of y′ is the same as the one of y, without knowing x. This models replayable encryption.
We note that the decryption test upon encryption (to test that the decryption yields the leakage again) is not needed for the joint state theorem for F rpke (see below), so it could be omitted. However, it is sometimes useful, e.g., when reasoning about protocols with nested encryption, as discussed for F pke . It is easy to see that F rpke is environmentally strictly bounded. Hence, just as for F pke , by Lemma 2, both the multi-party version of F rpke and the multi-session, multi-party version of F rpke are environmentally strictly bounded.
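The difference between the two decryption rules can be made concrete with a small hypothetical sketch. The recorded sets and the algorithm dec below are toy stand-ins (dec here simply strips a trailing marker, so "mauling" a ciphertext by appending "!" preserves its decryption); the point is only the lookup rule: F pke matches the submitted ciphertext itself, while F rpke matches its decryption.

```python
def ideal_dec_pke(H, dec, sk, y):
    # F_pke records pairs (plaintext, ciphertext); an ideal answer is given
    # only for exactly the ciphertexts produced by ideal encryption.
    hits = [x for (x, y_rec) in H if y_rec == y]
    return hits[0] if len(hits) == 1 else None   # None: fall back / fail

def ideal_dec_rpke(H, dec, sk, y):
    # F_rpke records pairs (plaintext, dec(sk, ciphertext)); the lookup is
    # based on the decryption of y, so a mauled ciphertext y' with
    # dec(sk, y') == dec(sk, y) still matches ("replayable" behaviour).
    m = dec(sk, y)
    hits = [x for (x, m_rec) in H if m_rec == m]
    return hits[0] if len(hits) == 1 else None
```

With a toy dec that ignores a trailing "!", the mauled ciphertext "CT!" fails the F pke lookup but still succeeds under the F rpke lookup, which is exactly the replayable behaviour described above.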

Realizing F rpke by IND-RCCA Secure Public-Key Encryption Schemes
We now show that a public-key encryption scheme realizes the ideal replayable public-key encryption functionality F rpke (given an appropriate leakage algorithm) if and only if the encryption scheme is IND-RCCA (replayable IND-CCA2) secure. IND-RCCA security, which has been introduced by Canetti et al. [12], is a relaxed form of IND-CCA2 security where modifications of the ciphertext that yield the same plaintext are permitted. In particular, IND-CCA2 security implies IND-RCCA security [12]. As explained by Canetti et al., IND-RCCA security suffices in many applications where IND-CCA2 security is used. We recall the definition of public-key encryption schemes and IND-RCCA security in "Appendix A.2". As mentioned above, similar to leakage algorithms, we assume that every public-key encryption scheme Σ is associated with a polynomial-time decidable domain of plaintexts D Σ .
The realization P pke (n, Σ) (see Sect. 4.3.3) of F rpke is the same as for F pke ; only the requirements on Σ are milder, namely IND-RCCA security instead of IND-CCA2 security.
The following theorem shows that F rpke (L) exactly captures IND-RCCA security if the leakage algorithm L leaks exactly the length of a plaintext (Definition 7) and has high entropy (Definition 8). For example, this condition on L is satisfied if L is the leakage algorithm that returns a random bit string of the length of the plaintext and the domain of plaintexts only contains "long" plaintexts, e.g., only plaintexts of length ≥ η (where η is the security parameter).

Theorem 8. Let n > 0, Σ be a public-key encryption scheme, p be a polynomial that bounds the runtime of the algorithms in Σ (in the length of their inputs), and L be a leakage algorithm such that D Σ = D L , L leaks exactly the length of a plaintext, and L has high entropy. Then, Σ is IND-RCCA secure if and only if P pke (n, Σ) ≤ F rpke (n, p, L).

The direction from left to right holds for any length preserving leakage algorithm L that has high entropy and the direction from right to left holds for any leakage algorithm L that leaks at most the length of a plaintext.
We note that if a length preserving leakage algorithm has high entropy, then its domain of plaintexts contains only "long" plaintexts (e.g., only plaintexts of length ≥ η for security parameter η). So, our result is consistent with the result by Canetti et al. [12], where large plaintext domains are assumed. We further remark that Canetti et al. showed that IND-RCCA security is not sufficient to realize F rpke if plaintext domains have only polynomial size.
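A length preserving, high-entropy leakage algorithm of the kind discussed above can be sketched in a few lines. This is an illustration only (bytes stand in for bit strings); the name and interface are hypothetical.

```python
import os

def random_bits_leakage(eta, x):
    """A length preserving leakage algorithm with high entropy: it leaks
    exactly the length of the plaintext x by returning fresh random bytes
    of the same length. If the plaintext domain contains only plaintexts
    of length >= eta (the security parameter), two independent leakages
    collide only with negligible probability."""
    return os.urandom(len(x))
```

Since the output is uniformly random of length |x|, nothing beyond the length of the plaintext is leaked, matching the condition required by the theorem above.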

Joint State Realizations
In this section, we present joint state realizations of the ideal functionalities presented in the previous section, i.e., for digital signatures, public-key encryption, and replayable public-key encryption. We refer to Sect. 6 for a comparison of our joint state theorems with others proposed in the literature. The explanations given in Sect. 6 will also motivate and justify the definitions of our functionalities and the way our joint state theorems are stated.

A Joint State Realization for Digital Signatures
We now present a joint state realization P js sig of F sig . This realization uses a single copy of F sig per party to realize multiple sessions of F sig per party. The joint state theorem for digital signatures basically says that !P js sig | !F′ sig ≤ !F sig , where F′ sig is obtained from F sig by renaming all input and output tapes. As described in Sect. 3, on the right-hand side we have the multi-session, multi-party version of F sig , where in a run of this system we can have multiple sessions of F sig per party: altogether there can be one copy of F sig , denoted by F sig [sid, pid], per session (with SID) sid and per party (with PID) pid in a run of !F sig with some environment. On the left-hand side, we consider only the multi-party version !F′ sig of F′ sig , where in a run of this system we can have one copy of F′ sig per party, and the "multiplexer" !P js sig ; in a run of !P js sig | !F′ sig with some environment there can be at most one copy of P js sig , denoted by P js sig [pid], for every party pid and this copy handles all sessions of this party through one copy of F′ sig , namely the copy for party pid, which we denote by F sig [pid]. Hence, the multi-session, multi-party version of F sig is realized by a system (the joint state realization) with only a multi-party version of F′ sig where the copy of F′ sig for one party handles all sessions of that party. This is illustrated in Fig. 6.
The basic idea of P js sig is simple and follows the one by Canetti and Rabin [14] (see also Fig. 6): SIDs are added by P js sig to the messages to be signed so that signatures cannot be mixed between different sessions. More specifically, if a party pid in session sid sends a request to the joint state realization to sign a message x, i.e., a message of the form (sid, (pid, (Sign, x))) is sent to it, and hence, to P js sig [pid], then P js sig [pid] replaces the message x by (sid, x) and forwards the request to the copy of F sig for this party, i.e., to F sig [pid]. 19 Similarly, when a party pid in session sid sends a verification request for a signature σ , a message x, and a public key pk, then P js sig [pid] forwards the request to F sig [pid] but replaces x by (sid, x). However, this simple idea only works given an appropriate formulation of the digital signature functionality (see Sect. 6.1) and if some technical details are taken care of, see below.
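The SID-prefixing idea can be sketched as follows. Both classes are hypothetical toy models: Fsig mimics the ideal behaviour of one party's signature functionality (only messages that were actually signed verify), and PjsSig plays the role of the multiplexer that prefixes the SID to every message.

```python
class Fsig:
    """Hypothetical stand-in for the party's single copy of the signature
    functionality: it records which messages were signed and, ideally,
    accepts only signatures on recorded messages (toy signatures, NOT a
    real signature scheme)."""
    def __init__(self):
        self.signed = set()

    def sign(self, msg):
        self.signed.add(msg)
        return ("sig", msg)                  # toy signature

    def verify(self, msg, sigma):
        return sigma == ("sig", msg) and msg in self.signed

class PjsSig:
    """Sketch of the multiplexer P_js_sig[pid]: all sessions of one party
    share one Fsig copy, and the SID is prefixed to every message."""
    def __init__(self):
        self.fsig = Fsig()

    def sign(self, sid, x):
        return self.fsig.sign((sid, x))      # sign (sid, x) instead of x

    def verify(self, sid, x, sigma):
        return self.fsig.verify((sid, x), sigma)
```

A signature produced in session "sid1" does not verify in session "sid2", even though both sessions share one key: the prefixed SIDs differ, which is exactly why signatures cannot be mixed between sessions.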
We define P js sig (n, D sid ) to be an IITM that is parametrized (i) by a number n > 0 which, like in the case of F sig , defines the I/O interface of P js sig and (ii) by a polynomial-time decidable domain of SIDs D sid = {D sid (η)} η∈N . Requests with an SID not in D sid (η) (where η is the security parameter) are ignored by P js sig ; see below why this is needed. We often omit n and/or D sid and just write, e.g., P js sig instead of P js sig (n, D sid ). The machine P js sig additionally has an I/O interface to connect to !F sig (n) such that P js sig (n) | !F sig (n) and the multi-session, multi-party version of F sig (n) have the same external I/O interface (which they must have because P js sig (n) | !F sig (n) is meant to realize the latter). Following the above basic idea, P js sig is defined in pseudocode in Fig. 7. It is easy to see that P js sig | !F sig is environmentally strictly bounded. 19 We note that the actual encoding of (sid, x) as a bit string is not important. In fact, we could parametrize P js sig by such an encoding.
We emphasize a technical detail of P js sig which is necessary for the joint state theorem to hold. If the environment sends the first request to P js sig [pid], for some PID pid, with some SID sid, then P js sig [pid] forwards it to F sig [pid] which in turn sends an initialization request to the adversary (on the network tape) and waits for a response from the adversary (because this is the first request sent to it). Now, while waiting for this response, the environment might send another request to P js sig [pid], with some other SID sid′ ≠ sid. If this happens, F sig [pid] is still blocked because it is waiting for a response from the adversary. In the ideal world (i.e., in an interaction of the environment with !F sig and a simulator) there would now be two copies, namely F sig [sid, pid] and F sig [sid′, pid], waiting for a response to the initialization request from the simulator. The environment could provide a response to F sig [sid′, pid] which could then continue its work, while F sig [sid, pid] is still blocked. Also, the environment could provide different responses to F sig [sid, pid] and F sig [sid′, pid]. This is not possible in the real world where there is only one copy F sig [pid] which is used to realize both F sig [sid, pid] and F sig [sid′, pid]. To make the joint state realization indistinguishable from the ideal world in this case, we define P js sig [pid] to record sid′ as blocked and to ignore this last request, i.e., to end this activation without producing output. All later requests with blocked SIDs are ignored too. Accordingly, the simulator will be defined to never complete initialization for F sig [sid′, pid]. This guarantees that the environment cannot exploit such race conditions to distinguish between the joint state realization and the ideal world. It basically forces the environment to first finish the initialization before it can use F sig [pid].
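The blocked-SID bookkeeping just described can be sketched as a small state machine. This is a hypothetical illustration of the control flow only (the actual definition is the pseudocode in Fig. 7); names and return values are assumptions of the sketch, with None standing for "end this activation without producing output".

```python
class PjsInit:
    """Sketch of the blocked-SID bookkeeping of P_js_sig[pid]: while the
    initialization triggered by the first SID is still pending, requests
    with any other SID are recorded as blocked and ignored, now and
    forever."""
    def __init__(self):
        self.blocked = set()       # the set of blocked SIDs
        self.pending = None        # SID whose initialization is in progress
        self.ready = False         # has initialization completed?

    def request(self, sid):
        if sid in self.blocked:
            return None                          # ignored forever
        if not self.ready and self.pending not in (None, sid):
            self.blocked.add(sid)                # race during initialization
            return None
        if not self.ready and self.pending is None:
            self.pending = sid                   # first request: start init
        return ("forward", sid)                  # hand over to F_sig[pid]

    def init_done(self):
        """Called when the adversary's initialization response arrives."""
        self.ready = True
        self.pending = None
```

Note that an SID blocked during the race stays blocked even after initialization completes, matching the requirement that the simulator never completes initialization for the corresponding ideal instance.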
Note that this problem of race conditions is limited to the initialization phase of F sig [pid]. During normal operation, i.e., after initialization, this instance answers all requests of P js sig [pid] immediately without involving the environment. Thus, each request from the environment to P js sig [pid] is also answered immediately, which prevents the environment from activating P js sig [pid] in several different sessions simultaneously. 20

Next, we state and prove the joint state theorem for digital signatures. In this theorem, we have to restrict the length of SIDs to be polynomially bounded in the security parameter. This is needed to prove the theorem because the algorithms that are provided by the simulator and executed by F sig get different inputs: In the joint state realization, they obtain input of the form (sid, x), while in the ideal world they just obtain input of the form x and have to add the SID themselves (see the proof for details). Therefore, we require that the domain of SIDs D sid = {D sid (η)} η∈N is polynomially bounded: The domain D sid = {D sid (η)} η∈N is called polynomially bounded if there exists a polynomial q such that |sid| ≤ q(η) for all η ∈ N and sid ∈ D sid (η).

We note that the following theorem can be applied iteratively as described in Sect. 3 in order to reason about more and more complex systems. We also emphasize that the proof of this theorem uses Theorem 3 (composition theorem). By Theorem 3, it suffices to reason about only one party in order to obtain a result for multiple parties.

Theorem 9. Let n > 0, p be a polynomial, and D sid be a polynomially bounded domain of SIDs. Then, there exists a polynomial p′ such that !P js sig (n, D sid ) | !F′ sig (n, p) ≤ !F sig (n, p′), where !F′ sig (n, p) is the multi-party version of F′ sig (i.e., F sig with all input and output tapes renamed as described above) and !F sig (n, p′) is the multi-session, multi-party version of F sig where the domain of SIDs is D sid . 21

20 The problem could also be solved by restricting the environment. In particular, we could use the extended IITM model with responsive environments [5], which allows for requiring that the environment directly replies to certain requests without otherwise interfering with a protocol/functionality. In our specific case, this can be used to force the environment to directly reply to initialization requests from instances of F sig , making the blocking of SIDs unnecessary; this would be a natural assumption, as potential race conditions caused by non-immediate responses to initialization requests do not correspond to any actual attacks in reality. The concept of responsive environments has been developed after the original submission of this work, and therefore is not considered here, although it would have been useful for simplifying the modeling. 21 Recall the definition of session versions with domain from Sect. 2.4.
Proof. Let n > 0 and p be a polynomial as required by the theorem; we will show the existence of an appropriate polynomial p′ below. By Theorem 3, it suffices to reason about environments that use only a single PID: The protocol systems P := !P js sig (n, D sid ) | !F′ sig (n, p) and F := !F sig (n, p′) are σ -session versions (as defined in Sect. 2.4) for the following SID function σ : σ (m, c) := pid if (i) m = (sid, (pid, m′)) for some sid, pid, m′ and c is an external tape of F (or an external I/O tape of P because F and P have the same I/O interface) or (ii) m = (pid, m′) for some pid, m′ and c is an external tape of !F′ sig (i.e., an internal tape of P, that connects P js sig with !F′ sig , or an external network tape of P).
Otherwise, σ (m, c):=⊥. So, to prove P ≤ F, by Theorem 3, it suffices to show that P is environmentally bounded (which is easy to see, as mentioned above) and that P ≤ σ -single F, i.e., that there exists a simulator S ∈ Sim P σ -single (F) such that E | P ≡ E | S | F for every environment E ∈ Env σ -single (P) that only uses a single PID pid (of course, E may use multiple SIDs).
This "single-PID" simulator S that we define below will, when the environment sends algorithms sig, ver and keys pk, sk (to F sig [pid]), provide the algorithms sig (sid) , ver (sid) and the keys pk, sk to the instance F sig [sid, pid] for every SID sid.
Let η ∈ N be a security parameter, sid ∈ D sid (η) be an SID, and sig and ver be descriptions of algorithms. We now define the algorithms sig (sid) and ver (sid) : • sig (sid) (sk, x) computes σ ← sig(sk, (sid, x)) and counts the steps needed. If at most p(η + |sk| + |(sid, x)|) steps are needed, then it returns σ . Otherwise, it enters an infinite loop. • ver (sid) (pk, x, σ ) computes b ← ver(pk, (sid, x), σ ) and counts the steps needed. If at most p(η + |pk| + |(sid, x)| + |σ |) steps are needed, then it returns b. Otherwise, it enters an infinite loop.
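The wrapped algorithms can be sketched as follows. This is a hypothetical model: step counting on an IITM cannot be reproduced literally in Python, so the supplied sig and ver are assumed to report their own step count, the length function |·| is crudely approximated, and the "infinite loop" is replaced by an exception so the sketch remains testable.

```python
def make_sid_wrappers(sig, ver, p, eta, sid):
    """Sketch of the simulator's algorithms sig^(sid) and ver^(sid): run the
    environment-provided algorithms on (sid, x) and enforce the polynomial
    step bound p. The supplied sig and ver return a pair (steps, result)."""

    def size(v):
        return len(repr(v))                        # crude stand-in for |v|

    def sig_sid(sk, x):
        steps, sigma = sig(sk, (sid, x))           # sign (sid, x), not x
        if steps <= p(eta + size(sk) + size((sid, x))):
            return sigma
        raise RuntimeError("step bound exceeded")  # models the infinite loop

    def ver_sid(pk, x, sigma):
        steps, b = ver(pk, (sid, x), sigma)        # verify against (sid, x)
        if steps <= p(eta + size(pk) + size((sid, x)) + size(sigma)):
            return b
        raise RuntimeError("step bound exceeded")

    return sig_sid, ver_sid
```

The point of the wrappers is visible in the signatures of sig_sid and ver_sid: the caller supplies only x, and the fixed SID is added internally, exactly as the simulator must do in the ideal world.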
We now define the "single-PID" simulator S ∈ Sim P σ -single (F). Recall that this simulator has to work for environments that only use a single PID. The task of S is merely to forward initialization requests, to provide algorithms and keys, and to perform corruptions. For the simulation, S maintains a set I of SIDs (initially ∅), a flag corrupted ∈ {false, true} (initially false), and variables sig, ver, pk, sk (initially undefined).
• When the simulator S receives the first initialization request from F, i.e., the message (sid, (pid, Init)) from F sig [sid, pid] for some sid, pid, then S sends (pid, Init) to E.
• If another initialization request arrives from F, say with SID sid′ (i.e., from F sig [sid′, pid]), but S has not yet received a response to the first initialization request from E, then S records sid′ as blocked and ends this activation with empty output (S will never complete initialization for F sig [sid′, pid], i.e., this instance is "blocked", which corresponds to the fact that P js sig [pid] would record sid′ as blocked in this case).
• When S receives a response to the initialization request from E, i.e., a message of the form (pid, (corrupt, sig, ver, pk, sk)) with corrupt ∈ {false, true}, then S adds sid to the set of initialized SIDs I, sets sig:=sig; ver:=ver; pk:=pk; sk:=sk, and, if corrupt = true, sets corrupted:=true (if corrupted is already true, it remains true no matter what value corrupt has). Then, S sends (sid, (pid, (corrupted, sig (sid) , ver (sid) , pk, sk))) to F where sid is the SID contained in the first initialization request S received from F. That is, S completes initialization for F sig [sid, pid].
• When another initialization request arrives from F, say with SID sid′ and PID pid, and S has already received an initialization response from E (i.e., I ≠ ∅ and sig, ver, pk, sk are defined), then S sends (sid′, (pid, (corrupted, sig (sid′) , ver (sid′) , pk, sk))) to F and adds sid′ to I. That is, S completes initialization for F sig [sid′, pid] without sending a request to E. • When S receives a corrupt request from E, i.e., the message (pid, Corrupt) for some pid, then S distinguishes the following cases: (i) If S has sent an initialization request to E but not yet received a response (i.e., I = ∅), then S ignores the corrupt request (i.e., it ends this activation without producing output). (ii) If S has already received an initialization response from E (i.e., I ≠ ∅), then S sets corrupted := true and, for every sid ∈ I, sends (sid, (pid, Corrupt)) to F and waits for receiving (sid, (pid, Corrupted)) from F (which, by definition of F sig , is the immediate response of F to corrupt requests). That is, S corrupts all existing instances of F sig . Then, S returns (pid, Corrupted) to E.
(iii) If S has not received any initialization request from F so far, then S sets corrupted:=true and returns (pid, Corrupted) to E.
• Upon any other input, S ends this activation without producing output.
It is easy to see that S | F is environmentally strictly bounded, and hence, S ∈ Sim P σ -single (F). Let E ∈ Env σ -single (P), i.e., E uses only a single PID. Furthermore, let η ∈ N be a security parameter and a ∈ {0, 1} * be some external input. We now prove that Pr [(E | P)(1 η , a) = 1] = Pr [(E | S | F)(1 η , a) = 1] by showing that there exists a bijective mapping that maps every run ρ of (E | P)(1 η , a) to a run ρ′ of (E | S | F)(1 η , a) such that both runs have the same probability and overall output. In fact, defining the bijection is simple. Let ρ be a run of (E | P)(1 η , a). We define ρ′ as follows: First, we note that P js sig is deterministic and, since E only uses a single PID, there exists at most one instance of F sig in ρ. Let α E be the random coins used by E and α F sig be the random coins used by the instance of F sig in ρ. 22 Furthermore, S is deterministic, that is, a run of (E | S | F)(1 η , a) is fixed by defining the random coins of E and F (note that F only uses random coins in the simulation of the signature algorithm). We define ρ′ by defining the random coins of E to be α E (i.e., E in ρ′ uses the same randomness as in ρ) and the random coins of F to be such that F uses the same random coins to sign messages as the instance of F sig in ρ uses to sign the messages. That is, the first message that is signed in ρ′ is signed using the same random coins as those used to sign the first message in ρ. This also holds for the second message and so on. By induction on the length of runs ρ, it is easy to see that the view of E is the same in both runs ρ and ρ′ using the following arguments: 1. E does not observe any difference regarding blocked SIDs: It holds that sid ∈ B in P js sig [pid] (in ρ), where pid is the PID E uses, iff S (in ρ′) recorded sid as blocked.
Therefore, if E sends a request with a blocked SID sid, then, in ρ, P js sig [pid] will end its activation with empty output and, in ρ′, F sig [sid, pid] will end its activation with empty output because it never completed initialization. Hence, in both runs the master IITM (in E) is activated with empty input. 2. E does not observe any difference regarding Corrupted? requests: F sig [pid] (in ρ) is corrupted iff F sig [sid, pid] (in ρ′) is corrupted, for all sid such that sid is not blocked and this instance exists. 3. The signing algorithm is executed on the same messages using the same random coins: Let x be a message that is signed in ρ with some SID sid. Let sig be the signing algorithm and sk be the secret key (both provided previously by E). Then, the signature σ is computed in ρ by simulating sig(sk, (sid, x)) at most p(η + |sk| + |(sid, x)|) steps and in ρ′ by simulating sig (sid) (sk, x) at most p′(η + |sk| + |x|) steps. In both runs the same random coins are used. Hence, by definition of sig (sid) and (a), the same signature is created in both runs. 4. Similarly to signing, the verification algorithm is executed on the same messages in both runs. Hence, by definition of ver (sid) and (b), the algorithm returns the same verification result in both runs. Furthermore, the check to prevent forgery produces the same result in both runs because, for all sid and x, the message (sid, x) is recorded in F sig [pid] (in ρ) iff x is recorded in F sig [sid, pid] (in ρ′). From this, we obtain that E | P ≡ E | S | F. Hence, P ≤ σ -single F. By Theorem 3, we conclude P ≤ F. Using the composition theorems, we can immediately replace the ideal functionality in the joint state realization by its realization as stated in Theorem 6, resulting in an actual joint state realization (without any ideal functionality):

Corollary 3. Let n > 0, D sid be a polynomially bounded domain of SIDs, and Σ be an UF-CMA secure signature scheme.
Then, there exists a polynomial p such that !P js sig (n, D sid ) | !P sig (n, Σ) ≤ !F sig (n, p), where !P sig (n, Σ) is the multi-party version of P sig (n, Σ) where all input and output tapes are renamed just as for F′ sig and, as above, !F sig (n, p) is the multi-session, multi-party version of F sig where the domain of SIDs is D sid .
Proof. By Theorem 6 (because Σ is UF-CMA secure), it holds that P sig ≤ F sig (p′) for any polynomial p′ that bounds the runtime of the algorithms in Σ, and the same holds for the correspondingly renamed versions. From this, by the composition theorems (Theorems 1 and 2), we obtain that !P js sig | !P sig ≤ !P js sig | !F′ sig (p′). By Theorem 9 and transitivity of ≤, we conclude that !P js sig | !P sig ≤ !F sig (p) for some polynomial p.

A Joint State Realization for Public-Key Encryption
The joint state realization P js pke presented in this section is similar to the joint state realization P js sig for digital signatures. It uses one copy of F pke per party for all sessions of this party and realizes the multi-session, multi-party version of F pke where one copy of F pke is used per party per session. Hence, similarly to the case of digital signatures, the joint state theorem for public-key encryption states that !P js pke | !F′ pke ≤ !F pke , where F′ pke is obtained from F pke by renaming all input and output tapes. The basic idea for P js pke , which again is similar to the case of digital signatures and first appeared in [11], but without any details or proofs (see Sect. 6.2 for further discussion), is as follows: The SID sid is prefixed to the plaintext x prior to encryption, i.e., instead of encrypting x in session sid under a separate key for this session, (sid, x) is encrypted (under the same key for every session). Upon decryption of a ciphertext y in session sid it is checked whether y decrypts to (sid, x) with the correct SID sid. 23 While the main idea is simple, it only works given an appropriate formulation of the public-key encryption functionality (see also Sect. 6.2).
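The prefix-and-check idea can be sketched in a few lines. The scheme (enc, dec) below is a hypothetical toy stand-in, and None stands for the decryption error symbol; the only substance is that (sid, x) is encrypted and the SID prefix is checked upon decryption.

```python
def js_encrypt(enc, pk, sid, x):
    """Joint state encryption: (sid, x) is encrypted under the single joint
    key instead of x under a separate per-session key."""
    return enc(pk, (sid, x))

def js_decrypt(dec, sk, sid, y):
    """Joint state decryption: accept only plaintexts carrying the correct
    SID; everything else is a decryption error (None stands in for the
    error symbol)."""
    m = dec(sk, y)
    if isinstance(m, tuple) and len(m) == 2 and m[0] == sid:
        return m[1]
    return None
```

A ciphertext produced in session "s1" decrypts to an error in session "s2" even though both sessions use the same key pair, which is the separation between sessions the joint state theorem relies on.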
Following this basic idea, P js pke is defined in pseudocode in Fig. 8. Just as P js sig , P js pke is parametrized by a number n that defines the I/O interface and a domain of SIDs D sid . We write P js pke (n, D sid ) to denote P js pke with parameters n and D sid but often omit one or both parameters. Analogously to the case of digital signatures, P js pke (n) connects to the I/O interface of !F′ pke (n). 23

We can now state and prove the joint state theorem for public-key encryption. As mentioned above, the joint state realization P js pke is based on the multi-party version !F′ pke of the ideal functionality F pke . Recall that F′ pke is obtained from F pke by renaming all external tapes. More importantly, F′ pke will use the leakage algorithm L′ that in addition to the leakage algorithm L in the ideal world ( !F pke ) also leaks the SID of the session in which the message was encrypted. This, in conjunction with the decryption test performed in F′ pke (to guarantee that the decryption of the encryption of a leakage yields the leakage again), guarantees that ciphertexts generated in different sessions are different. This is crucial for the joint state theorem to hold (see below). Just as in the case of digital signatures, the domain of SIDs has to be restricted. We further remark that the theorem can be applied iteratively, as described in Sect. 3.

Theorem 10. Let n > 0, p be a polynomial, L be a leakage algorithm, and D sid be a polynomially bounded domain of SIDs. Then, there exists a polynomial p′ such that !P js pke (n, D sid ) | !F′ pke (n, p, L′) ≤ !F pke (n, p′, L).

23 We note that the actual encoding of (sid, x) as a bit string is not important. In fact, we could parametrize P js pke by such an encoding.

Proof. The proof is similar to the proof of Theorem 9.
The basic idea is that the usage of the SID in every plaintext, in conjunction with the definition of the leakage algorithm L′ (i.e., the SID is part of the leakage) and the decryption test performed in F′ pke (i.e., ciphertexts are guaranteed to contain all the information contained in the leakage, in particular the SID), guarantees that ciphertexts generated in different sessions are different and that ideal decryption (i.e., decryption that returns a recorded plaintext) in some session only succeeds if the ciphertext has been generated in this session.
As in the proof of Theorem 9, to show that P := !P js pke (n, D sid ) | !F′ pke (n, p, L′) realizes F := !F pke (n, p′, L), by Theorem 3, it suffices to show that there exists a simulator S ∈ Sim P σ -single (F) such that E | P ≡ E | S | F for every environment E ∈ Env σ -single (P) that only uses a single PID pid (of course, E may use multiple SIDs), where σ is the SID function defined in the proof of Theorem 9.
The "single-PID" simulator S is defined analogously to the one in the proof of Theorem 9, except for the following: • S ignores corrupt requests from the environment (i.e., messages of the form (pid, Corrupt)) because F pke is only corruptible upon initialization. • We replace the algorithms sig and ver, obtained from the environment, by the algorithms enc and dec. Also, instead of sig (sid) and ver (sid) , S provides the algorithms enc (sid) and dec (sid) , defined below, to F.
Let η ∈ N be a security parameter, sid ∈ D sid (η) be an SID, and enc and dec be descriptions of algorithms. We now define the algorithms enc (sid) and dec (sid) : • enc (sid) (pk, x) computes y ← enc(pk, (sid, x)) and counts the steps needed. If at most p(η + |pk| + |(sid, x)|) steps are needed, then it returns y. Otherwise, it enters an infinite loop. • dec (sid) (sk, y) computes x ← dec(sk, y) and counts the steps needed. If more than p(η + |sk| + |y|) steps are needed, it enters an infinite loop. Otherwise, if x = (sid, x′) for some x′, then it returns x′; otherwise, it returns the error symbol ⊥.
Let E ∈ Env σ -single (P), i.e., E uses only a single PID. Furthermore, let η ∈ N be a security parameter and a ∈ {0, 1} * be some external input. As in the proof of Theorem 9, to prove that Pr [(E | P)(1 η , a) = 1] = Pr [(E | S | F)(1 η , a) = 1], we show that there exists a bijective mapping that maps every run ρ of (E | P)(1 η , a) to a run ρ′ of (E | S | F)(1 η , a) such that both runs have the same probability and overall output. Again, defining such a bijection is simple. Let ρ be a run of (E | P)(1 η , a). Now, ρ′ is defined by defining the random coins of E and F (note that S is deterministic and F only uses random coins in the simulation of the leakage and encryption algorithms) such that E uses the same random coins as in ρ and F uses the same random coins to encrypt messages as the instance F pke [pid] uses in ρ. That is, the ciphertext for the first message that is encrypted in ρ′ is computed using the same random coins as those used to compute the ciphertext for the first message in ρ. This also holds for the second message and so on. By induction on the length of runs ρ, it is easy to see that the view of E is the same in both runs ρ and ρ′ using the following arguments: 1. As in the proof of Theorem 9, E does not observe any difference regarding blocked SIDs or Corrupted? requests. 2. Upon encryption, in both runs, the leakage and encryption algorithms are executed on the same messages using the same random coins, and hence, the same ciphertext is returned: Let x be a plaintext that is encrypted in ρ under the public key pk′ with some SID sid and PID pid. Furthermore, let enc be the encryption algorithm and pk be the public key provided previously by E. We distinguish two cases: (i) Ideal encryption, i.e., pk′ = pk and F pke [pid] (in ρ) is not corrupted: In this case, the ciphertext y returned to E is computed in ρ by computing the leakage x̄ ← L′(1 η , (sid, x)) and simulating enc(pk, x̄) at most p(η + |pk| + |x̄|) steps.
We note that in this case, by definition of S, pk is the recorded public key in F pke [sid, pid] (in ρ′) and F pke [sid, pid] is not corrupted. Hence, in ρ′, the ciphertext that is returned to E is computed by computing the leakage x̄′ ← L(1 η , x) and simulating enc (sid) (pk, x̄′) at most p′(η + |pk| + |x̄′|) steps. The same random coins as in ρ are used to compute the leakage. Hence, by definition of L′, x̄ = (sid, x̄′). Furthermore, the same random coins as in ρ are used to simulate the encryption algorithm. Hence, by definition of enc (sid) and by (a), the same ciphertext y is returned.
(ii) Non-ideal encryption, i.e., pk′ ≠ pk or F pke [pid] in ρ is corrupted: First, we note that in this case, by definition of S, pk′ is not the recorded public key in F pke [sid, pid] (in ρ′) or F pke [sid, pid] is corrupted. Now, the ciphertext y is computed in ρ by simulating enc(pk′, (sid, x)) at most p(η + |pk′| + |(sid, x)|) steps and in ρ′ by simulating enc (sid) (pk′, x) at most p′(η + |pk′| + |x|) steps. Since in both runs the same random coins are used to simulate the algorithm, by definition of enc (sid) and by (a), the same ciphertext is created in both runs.
3. Upon decryption, in both runs, the same algorithms are executed on the same bit strings, and hence, the same plaintext is returned: Let y be a ciphertext that is decrypted in ρ with some SID sid and PID pid. Furthermore, let enc, dec be the algorithms and pk, sk be the key pair provided previously by E. We now distinguish the following cases: (i) Ideal decryption, i.e., F pke [pid] (in ρ) is not corrupted and there exists x such that (x, y) ∈ H in F pke [pid]: First, we note that in this case F pke [sid, pid] (in ρ′) is not corrupted. We now distinguish two cases: • Ciphertexts collide, i.e., there exist x 0 , x 1 such that x 0 ≠ x 1 and (x 0 , y), (x 1 , y) ∈ H (in F pke [pid] in ρ). In this case, decryption fails in ρ. We now show that decryption also fails in ρ′. For this purpose, we first show that x 0 and x 1 "belong" to the same session. More precisely, we show that there exist sid′, x′ 0 , x′ 1 such that x 0 = (sid′, x′ 0 ) and x 1 = (sid′, x′ 1 ). By definition of P js pke , x 0 = (sid 0 , x′ 0 ) and x 1 = (sid 1 , x′ 1 ) for some sid 0 , x′ 0 , sid 1 , x′ 1 . Since (x 0 , y), (x 1 , y) ∈ H (in F pke [pid] in ρ), x 0 and x 1 have been encrypted to y such that y has been computed once as x̄ 0 ← L′(1 η , x 0 ); y ← enc(pk, x̄ 0 ) and another time as x̄ 1 ← L′(1 η , x 1 ); y ← enc(pk, x̄ 1 ). Furthermore, the decryption check succeeded in both cases, i.e., x̄ 0 = dec(sk, y) = x̄ 1 (decryption is deterministic). By definition of L′ (because it leaks the SID), we have that x̄ 0 = (sid 0 , x̄′ 0 ) and x̄ 1 = (sid 1 , x̄′ 1 ) for some x̄′ 0 , x̄′ 1 . Hence, sid 0 = sid 1 =: sid′. Now, if sid = sid′ (i.e., x 0 and x 1 "belong" to session sid), then, because ideal encryption is performed identically in ρ and ρ′ (as shown above), we have that (x′ 0 , y), (x′ 1 , y) ∈ H in F pke [sid, pid]. Since x′ 0 ≠ x′ 1 (because x 0 ≠ x 1 ), we conclude that decryption also fails in ρ′. Otherwise, i.e., sid ≠ sid′, decryption also fails in ρ′ because dec(sk, y) = (sid′, x̄′) for some x̄′. Hence, by definition of dec (sid) , we obtain that dec (sid) (sk, y) = ⊥.
• Ciphertexts do not collide, i.e., there exists x such that (x, y) ∈ H in F pke [pid] and x is unique with this property. If x = (sid, x′) for some x′, then the plaintext x′ is returned (to E) in ρ. Since ideal encryption is performed identically in ρ and ρ′ (as shown above), (x′, y) ∈ H in F pke [sid, pid], and there is no other recorded plaintext for y. Hence, x′ is returned in ρ′ too. Otherwise, i.e., x = (sid′, x′) for some sid′, x′ such that sid′ ≠ sid, by definition of P js pke , decryption fails in ρ (i.e., ⊥ is returned to E). Since ideal encryption is performed identically in ρ and ρ′ (as shown above), (x′′, y) ∉ H in F pke [sid, pid] for any x′′ ((x′, y) is only recorded in F pke [sid′, pid]). Hence, in ρ′, the plaintext is computed as dec^(sid)(sk, y). Since dec(sk, y) = (sid′, z) for some z (by definition of L′, as in the previous case) and sid′ ≠ sid, we obtain dec^(sid)(sk, y) = ⊥, i.e., decryption fails in ρ′ too. (ii) Non-ideal decryption, i.e., F pke [pid] (in ρ) is corrupted or there does not exist x such that (x, y) ∈ H in F pke [pid]: In ρ, the plaintext is computed by simulating dec(sk, y) for at most p(η + |sk| + |y|) steps; if the result is of the form (sid, x′) for some x′, then P js pke returns x′. Otherwise, P js pke returns ⊥ (decryption error). We note that in this case, by definition of S, F pke [sid, pid] (in ρ′) is corrupted or, because ideal encryption is performed identically in ρ and ρ′ (as shown above), there does not exist x such that (x, y) ∈ H in F pke [sid, pid] (in fact, y is not recorded in any instance of F pke ). Hence, in ρ′, the plaintext is computed by simulating dec^(sid)(sk, y) for at most p′(η + |sk| + |y|) steps. By definition of dec^(sid) and by (b), the same plaintext x (possibly x = ⊥) is returned.
From this, we obtain that E | P ≡ E | S | F. Hence, P ≤ σ -single F. By Theorem 3, we conclude that P ≤ F.
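The SID handling analyzed in the proof above can be sketched in code. The following is our own illustrative Python sketch (names such as js_encrypt and the fixed 8-character SID encoding are our assumptions, not the paper's formal definition of P js pke): the joint state protocol prefixes the SID to the plaintext before encryption, and on decryption it strips the prefix and returns ⊥ (here None) for ciphertexts belonging to a different session.

```python
SID_LEN = 8  # assumption: fixed-length SIDs, simply prepended to messages

def pair(sid: str, x: str) -> str:
    # encode the pair (sid, x) by prepending the fixed-length SID
    assert len(sid) == SID_LEN
    return sid + x

def unpair(m: str):
    # inverse of pair; None if m is too short to carry an SID prefix
    return (m[:SID_LEN], m[SID_LEN:]) if len(m) >= SID_LEN else None

def js_encrypt(enc, pk, sid, x):
    # encrypt (sid, x) instead of x, as P js pke does
    return enc(pk, pair(sid, x))

def js_decrypt(dec, sk, sid, y):
    # decrypt, then check that the plaintext belongs to session sid
    x = dec(sk, y)
    if x is None:
        return None
    p = unpair(x)
    if p is None or p[0] != sid:
        return None  # dec^(sid) returns ⊥ for foreign sessions
    return p[1]
```

With any (toy) encryption/decryption pair, decrypting under the wrong SID yields None, mirroring the case analysis in the proof.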
We note that the above theorem implies that we obtain joint state realizations for all realizations of F pke . In particular, by Theorem 7 (and the composition theorems), we obtain that the joint state realization !P js pke | !P pke (Σ) of an IND-CCA2 secure public-key encryption scheme Σ realizes the multi-party, multi-session version of F pke . In this corollary, we assume that the leakage algorithm L′ (recall that L′(1^η, (sid, x)) = (sid, L(1^η, x))) is length preserving (i.e., |L′(1^η, (sid, x))| = |(sid, L(1^η, x))| = |(sid, x)|). Note that L′ is length preserving if L is length preserving (e.g., L is one of the leakage algorithms from Example 1) and the pairing (·, ·) is length preserving. We say that the pairing (·, ·) is length preserving if |(sid, x)| = |(sid, x′)| for all η ∈ N, sid ∈ D sid (η), and x, x′ ∈ D L (η) such that |x| = |x′|. This is a natural assumption.
Corollary 4. Let n > 0, D sid be a polynomially bounded domain of SIDs, Σ be an IND-CCA2 secure public-key encryption scheme, and L be a leakage algorithm such that L′ (as defined in Theorem 10) is length preserving and D Σ = D L (i.e., Σ and L have the same plaintext domain). Then, there exists a polynomial p such that:
!P js pke (n, D sid ) | !P pke (n, Σ) ≤ !F pke (n, p, L).
Proof. By Theorem 7 (because Σ is IND-CCA2 secure, L is length preserving, and D Σ = D L ), it holds that P pke (Σ) ≤ F pke (p′, L′) for any polynomial p′ that bounds the runtime of the algorithms in Σ. From this, by the composition theorems (Theorems 1 and 2), we obtain that !P js pke | !P pke (Σ) ≤ !P js pke | !F pke (p′, L′). By Theorem 10 and transitivity of ≤, we conclude that !P js pke | !P pke (Σ) ≤ !F pke (p, L) for some polynomial p.
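The length-preservation condition on L′ can be illustrated with a small sketch (our own toy encoding, assuming fixed-length SIDs; none of these function names come from the paper): the pairing prepends a fixed-length SID, so |(sid, x)| depends only on the SID and |x|, and a length-preserving L yields a length-preserving L′.

```python
import secrets

SID_LEN = 8  # assumption: all SIDs have this fixed length

def pair(sid: str, x: str) -> str:
    # length-preserving pairing: |(sid, x)| depends only on sid and |x|
    assert len(sid) == SID_LEN
    return sid + x

def L(x: str) -> str:
    # toy length-preserving leakage: a random string of the same length
    return ''.join(secrets.choice('01') for _ in x)

def L_prime(sid: str, x: str) -> str:
    # L'(1^eta, (sid, x)) = (sid, L(1^eta, x))
    return pair(sid, L(x))
```

One can check that L_prime is length preserving exactly as required in the corollary: |L′(sid, x)| = |(sid, x)| whenever |L(x)| = |x|.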

A Joint State Realization for Replayable Public-Key Encryption
The joint state realization for replayable public-key encryption is analogous to the joint state realization for public-key encryption. In fact, the same protocol P js pke (see Sect. 5.2) can be used. As before, the joint state theorem for replayable public-key encryption states that !P js pke | !F′rpke (L′) (where F′rpke is obtained from F rpke by renaming all external tapes) realizes the multi-session, multi-party version !F rpke (L) where, again, L′ leaks the SID plus the information that L leaks (L′(1^η, (sid, x)) = (sid, L(1^η, x)) for all η, sid, x) and the domain of SIDs has to be restricted. The joint state theorem can be applied iteratively, as described in Sect. 3.
Theorem 11. Let n > 0, D sid be a polynomially bounded domain of SIDs, and L be a leakage algorithm. Then, for every polynomial p there exists a polynomial p′ such that:
!P js pke (n, D sid ) | !F′rpke (n, p, L′) ≤ !F rpke (n, p′, L)
where the leakage algorithm L′ is defined as in Theorem 10, !F′rpke (n, p, L′) is the multi-party version of F rpke where all input and output tapes are renamed, as described above, and !F rpke (n, p′, L) is the multi-session, multi-party version of F rpke where the domain of SIDs is D sid . 25
Proof. The proof is similar to the proof of Theorem 10. To show that P := !P js pke (n, D sid ) | !F′rpke (n, p, L′) realizes F := !F rpke (n, p′, L), by Theorem 3, it suffices to show that there exists a simulator S ∈ Sim P σ-single (F) such that E | P ≡ E | S | F for every environment E ∈ Env σ-single (P) that only uses a single PID pid (of course, E may use multiple SIDs), where σ is the SID function defined in the proof of Theorem 9. This "single-PID" simulator S is defined as in the proof of Theorem 10 and we use the same polynomial p′. In particular, S uses the same algorithms enc^(sid) and dec^(sid).
Let E ∈ Env σ-single (P), i.e., E uses only a single PID. Furthermore, let η ∈ N be a security parameter and a ∈ {0, 1}* be some external input. To prove that Pr [(E | P)(1^η, a) = 1] = Pr [(E | S | F)(1^η, a) = 1], we show that there exists a bijective mapping that maps every run ρ of (E | P)(1^η, a) to a run ρ′ of (E | S | F)(1^η, a) such that both runs have the same probability and overall output. The definition of such a bijection coincides with the one in the proof of Theorem 10, except that F pke is replaced by F rpke . The proof that this bijection has the desired properties is similar to the one in the proof of Theorem 10: By induction on the length of runs ρ, it can be shown that E has the same view in both runs ρ and ρ′. The proof only differs from the one of Theorem 10 in the argument for the case of decryption: Let y be a ciphertext that is decrypted in ρ with some SID sid and PID pid and let enc, dec be the algorithms and pk, sk be the key pair provided previously by E. Furthermore, let x be the plaintext/leakage computed by F rpke [pid] (in ρ), i.e., by simulating dec(sk, y) for at most p(η + |sk| + |y|) steps (if more steps would be needed, x = ⊥). Analogously, let x* be the plaintext/leakage computed by F rpke [sid, pid] (in ρ′), i.e., by simulating dec^(sid)(sk, y) for at most p′(η + |sk| + |y|) steps (if more steps would be needed, x* = ⊥). By definition of dec^(sid) and p′ (see (b) in the proof of Theorem 10), we have that
x* = x′ if x = (sid, x′) for some x′, and x* = ⊥ otherwise.   (2)
Hence, if x = ⊥, then x* = ⊥ and, therefore, decryption fails both in ρ and ρ′ (i.e., ⊥ is returned to E). Now, assume that x ≠ ⊥. We distinguish the following cases: (i) Ideal decryption, i.e., F rpke [pid] (in ρ) is not corrupted and there exists a plaintext x′ such that (x′, x) ∈ H in F rpke [pid]: First, we note that in this case F rpke [sid, pid] (in ρ′) is not corrupted. Furthermore, by definition of L′ (since L′ leaks the SID), x = (sid′, x′′) for some sid′, x′′.
We now distinguish two cases: • Leakages collide, i.e., there exist x_0, x_1 such that x_0 ≠ x_1 and (x_0, x), (x_1, x) ∈ H in F rpke [pid]. In this case, decryption fails in ρ. We now show that decryption also fails in ρ′. First, it is easy to see that x_0 and x_1 "belong" to the same session sid′, i.e., x_0 = (sid′, x′_0) and x_1 = (sid′, x′_1) for some x′_0, x′_1. 26 Now, if sid′ = sid (i.e., x_0 and x_1 "belong" to session sid), then, by (2), x = (sid, x*). Since ideal encryption is performed identically in ρ and ρ′, we have that (x′_0, x*), (x′_1, x*) ∈ H in F rpke [sid, pid]. Since x′_0 ≠ x′_1 (because x_0 ≠ x_1), we conclude that decryption also fails in ρ′. Otherwise, i.e., sid′ ≠ sid, decryption also fails in ρ′ because, by (2), x* = ⊥. • Leakages do not collide, i.e., there exists x′ such that (x′, x) ∈ H in F rpke [pid] and x′ is unique with this property. If x′ = (sid, x′′) for some x′′, then the plaintext x′′ is returned (to E) in ρ. Furthermore, sid′ = sid and, by (2), x = (sid, x*). Since ideal encryption is performed identically in ρ and ρ′, we have that (x′′, x*) ∈ H in F rpke [sid, pid] and there is no other recorded plaintext for x*. Hence, x′′ is returned in ρ′ too. Otherwise, i.e., x′ = (sid′′, x′′) for some sid′′, x′′ such that sid′′ ≠ sid, by definition of P js pke , decryption fails in ρ (i.e., ⊥ is returned to E). Furthermore, sid′ = sid′′ ≠ sid, i.e., x is not of the form (sid, ·). Hence, by (2), x* = ⊥, i.e., decryption fails in ρ′ too.
(ii) Non-ideal decryption, i.e., F rpke [pid] (in ρ) is corrupted or there does not exist x′ such that (x′, x) ∈ H in F rpke [pid]: If x = (sid, x′) for some x′, then, by definition of P js pke , the plaintext x′ is returned to E in ρ. In this case, by (2), x′ = x* and this plaintext is returned to E in ρ′ too. Otherwise, i.e., x ≠ (sid, x′) for any x′, decryption fails in ρ (by definition of P js pke ). By (2), x* = ⊥ in this case and, hence, decryption fails in ρ′ too. From this, we obtain that E | P ≡ E | S | F. Hence, P ≤ σ-single F. By Theorem 3, we conclude that P ≤ F.
Similarly to the case of public-key encryption, we note that the above theorem implies that we obtain joint state realizations for all realizations of F rpke . In particular, by Theorem 8 (and the composition theorems), we obtain that the joint state realization !P js pke | !P pke (Σ) with an IND-RCCA secure public-key encryption scheme Σ realizes the multi-party, multi-session version of F rpke .
26 We note that in the proof of Theorem 10, to show this, the decryption test upon encryption was required. This is not needed here. Hence, the decryption test could be omitted in F rpke , as mentioned above.
Corollary 5. Let n > 0, D sid be a polynomially bounded domain of SIDs, Σ be an IND-RCCA secure public-key encryption scheme, and L be a leakage algorithm such that the leakage algorithm L′ (as defined in Theorem 10) is length preserving, has high entropy, and D Σ = D L (i.e., Σ and L have the same plaintext domain). Then, there exists a polynomial p such that:
!P js pke (n, D sid ) | !P pke (n, Σ) ≤ !F rpke (n, p, L)
where !P pke (n, Σ) is the multi-party version of P pke where all input and output tapes are renamed as for F rpke and, as above, !F rpke (n, p, L) is the multi-session, multi-party version of F rpke where the domain of SIDs is D sid .
Proof. By Theorem 8 (because Σ is IND-RCCA secure, L is length preserving and has high entropy, and D Σ = D L ), it holds that P pke (Σ) ≤ F rpke (p′, L′) for any polynomial p′ that bounds the runtime of the algorithms in Σ. From this, by the composition theorems (Theorems 1 and 2), we obtain that !P js pke | !P pke (Σ) ≤ !P js pke | !F rpke (p′, L′). By Theorem 11 and transitivity of ≤, we conclude that !P js pke | !P pke (Σ) ≤ !F rpke (p, L) for some polynomial p.
We note that the above corollary holds in particular if L is the leakage algorithm that returns a random bit string of the same length as the plaintext, if the plaintext domain contains only "long" plaintexts (e.g., only plaintexts of length ≥ η for security parameter η), and if the pairing (·, ·) is length preserving, as defined in Sect. 5.2 (i.e., the length of a pair of an SID and a plaintext does not depend on the actual bits of the plaintext but only on the SID and the length of the plaintext). In this case, it is easy to see that L′ is length preserving and has high entropy.

Related Work
We have already discussed some related work in the introduction. In this section, we compare our ideal functionalities and results with the ones from the literature in more detail.

Digital Signatures
We first compare our formulation of the digital signature functionality with other formulations in the literature.
As mentioned in the introduction, most other formulations of digital signature functionalities are defined in a non-local way [1,7,13,14], i.e., all signatures are provided by the adversary, with the mentioned disadvantages. The only formulations with local computation in the literature, besides the one in the present paper, are the ones in [6] (see the version of December 2005) and [2].
The digital signature functionality in [2] is part of a Dolev-Yao style cryptographic library. A user does not obtain the actual signature but only a handle to this signature within the library. By this, the use of the signature is restricted for the user to the operations provided in the cryptographic library. The implementation for the digital signature functionality within the library does not use a standard UF-CMA secure digital signature scheme, but requires a specific stronger construction. Joint state realizations have not been considered. In fact, the library is expressed within the model by Pfitzmann and Waidner [26] which does not explicitly talk about copies of protocols/functionalities.
One problem of the formulation in [6] is that it does not seem to have any reasonable joint state realization, contrary to what is claimed in [6]: The signature functionality in [6] uses only the signing and verification algorithms sig and ver, but no public/private keys pk/sk. It is argued that the public/private keys can be incorporated into the algorithms ver/sig. That is, the verification algorithm plays the role of the public key. Thus, in order to verify a message-signature pair (x, σ), in addition to this pair a verification algorithm ver′ has to be provided, and the functionality then checks whether ver′ = ver. If ver′ ≠ ver, the algorithm ver′ is run on (x, σ), and the result of this algorithm is returned. While this integration of the keys into the algorithms works for the private key, it does not work for the public key. As argued next, failing to distinguish between the verification algorithm and the public key prevents obtaining joint state realizations following the "concatenate and sign" approach or any approach that manipulates the signed messages in an observable way.
An environment E that distinguishes a joint state realization from the multi-session, multi-party version of the digital signature functionality in the ideal world works as follows: It sends an initialization message to some copy of the digital signature functionality and provides some algorithms sig and ver. It then requests to verify the message-signature pair (x, σ), where x is not of the form (sid, x′) for any sid, x′, with the verification algorithm ver′, where ver′ ≠ ver is defined as follows: ver′(x, σ) outputs true if the message x is of the shape (sid, x′) and false otherwise. If E obtains (VerResult, true), it outputs 1, and 0 otherwise. It is easy to see that if E communicates with the joint state realization it will always output 1, since this realization forwards (sid, x) to the digital signature functionality. Since ver′ ≠ ver, the functionality will run ver′((sid, x), σ), and so true is returned. Conversely, in the ideal world, where E communicates directly with a copy of the digital signature functionality, E will always output 0 since this copy runs ver′(x, σ).
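The distinguishing test can be sketched as follows. This is a toy model of our own (the fixed-length SID encoding and the shape test has_sid_prefix are our assumptions standing in for "x is of the shape (sid, x′)"); it only illustrates why ver′ answers differently in the two worlds.

```python
SID_LEN = 8  # assumption: SIDs have fixed length and are prepended

def has_sid_prefix(x: str) -> bool:
    # toy stand-in for "x is of the shape (sid, x')"
    return len(x) > SID_LEN

def ver_prime(x: str, sigma) -> bool:
    # the environment's malicious verification algorithm ver':
    # accept exactly the messages that carry an SID prefix
    return has_sid_prefix(x)

def joint_state_verify(sid: str, x: str, sigma) -> bool:
    # the joint state realization forwards (sid, x) to the functionality,
    # which then runs ver' on the prefixed message
    return ver_prime(sid + x, sigma)

def ideal_verify(sid: str, x: str, sigma) -> bool:
    # the per-session ideal copy runs ver' on x directly
    return ver_prime(x, sigma)
```

For any short x that is not itself an encoded pair, the joint state world answers true and the ideal world answers false, so E distinguishes with probability 1.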
Another problem in Canetti's formulation of the digital signature functionality in [6] is that the signing algorithm sig is allowed to preserve some state (i.e., the signature values may depend on the messages signed so far). Note that in our formulation sig is stateless. It is easy to prove that with a stateful sig, joint state realizations, such as "concatenate and sign" or similar approaches, fail, depending on the kind of state that is used. The problem is that the signing algorithms in the real and the ideal world will have different states, and that this cannot be prevented by the simulator. If states of signing algorithms are predictable and observable to some extent, then an environment can easily distinguish between the real and the ideal world. Note that Canetti's joint state realization is based on his ideal digital signature functionality and this functionality accepts any signature and verification algorithms from the environment/simulator. Hence, one in particular has to deal with the described "problematic" algorithms, which, however, is not possible. An alternative would be to restrict the kind of stateful signing algorithms that may be provided by the environment/simulator. This class would have to be carefully defined in order to fulfill certain closure properties to be useful in the context of joint state realizations. In any case, it would have to exclude several existing stateful signature schemes as they are problematic in the sense described. Also, the analysis of complex protocols based on functionalities which are parametrized by certain classes of signing/verification algorithms would be more complex.
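The stateful-signing problem can be made concrete with a toy counter-based signer (entirely our own construction, not a scheme from the paper): in the joint state world one signer is shared by all sessions, so its counter runs across sessions, whereas in the ideal world each session has an independent signer with its own counter.

```python
class StatefulSigner:
    """Toy stateful signing: the 'signature' embeds a message counter."""
    def __init__(self):
        self.count = 0

    def sign(self, x):
        self.count += 1
        return (self.count, x)

def joint_state_counters():
    # one signer shared by all sessions: the counter runs across sessions
    js = StatefulSigner()
    return [js.sign("sid1m")[0], js.sign("sid2m")[0]]

def ideal_counters():
    # an independent signer (and hence an independent state) per session
    return [StatefulSigner().sign("m")[0], StatefulSigner().sign("m")[0]]
```

An environment that signs one message in each of two sessions and inspects the counter in the second signature distinguishes the two worlds, since no simulator can make the shared state match the per-session states.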
While we define corruption very thoroughly, other formulations of signature functionalities fail to do so. But when it comes to joint state realizations, this is crucial. For example, if corruption reveals the order of the messages that have been signed so far, then the environment is able to distinguish the joint state world from the ideal world because the simulator has no chance to determine the order in which messages of different sessions were signed in the ideal world. If the messages are revealed in random order or in some order that is independent of the moment of activation (e.g., in lexicographical order), the joint state theorem for digital signatures still holds because the simulator is able to obtain the messages from each copy of the digital signature functionality and can combine them such that they respect the expected ordering.

Public-Key Encryption
In the proof of the joint state theorem for public-key encryption, several subtleties come up which were overlooked in other works, in particular [6,11]. In these works, joint state theorems, similar to Theorem 10, for public-key encryption functionalities with local computation were mentioned. However, the joint state realizations were only sketched and no proofs were provided. It turns out, in fact, that the joint state theorems for these functionalities do not hold. Let us first explain this for [6] (see the version of December 2005) and then for [11]. These explanations motivate and justify the definition of our functionality and the way our joint state theorem is stated. Problems with the joint state realization in [6]. There are two problems: 1. The public-key encryption functionality in [6], unlike our functionality, identifies the public/private keys with the encryption/decryption algorithms enc/dec. While this works for the private key, it is problematic for the public key, as explained next. If the environment wishes to encrypt a message x, it is supposed to also present an encryption algorithm enc′ (not just a key, as in our functionality). If enc′ ≠ enc, i.e., enc′ is different from the algorithm associated with the functionality, then the ciphertext returned is enc′(x). Now, assume that the environment asks to encrypt some message x with enc′ in session sid, where, say, enc′ coincides with enc except that enc′ uses a different public key. In the joint state world (i.e., in an interaction with !P js pke | !F pke ), the ciphertext is computed as enc′((sid, x)). In the ideal world (i.e., in an interaction with the simulator and !F pke ), the ciphertext is computed as enc′(x). Since the two ciphertexts have different lengths, the environment can easily distinguish between the joint state and the ideal world no matter what simulator is chosen.
2. In [6], the leakage is fixed to be the length of a message, i.e., instead of a message x a fixed message μ_|x| of length |x| is encrypted (e.g., μ_|x| = 1^|x|). In particular, this also holds in the joint state world. Hence, the SID is not leaked. This is problematic: The kind of encryption and decryption algorithms that may be provided by the simulator/environment in the joint state and ideal world to the public-key encryption functionality is not restricted in any way. In particular, the encryption algorithm that is provided may be deterministic. But then, if the environment asks to encrypt two different messages of the same length in two different sessions for the same party, the resulting ciphertexts will be the same, since in both cases the same fixed message is encrypted. In the ideal world, the two ciphertexts can be decrypted, since they are stored in different sessions. In the joint state world, decryption fails: The decryption box has two entries with the same ciphertext but different plaintexts. (The leakage that we use prevents this.) Consequently, the environment can easily distinguish between the ideal and the joint state world.
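The clash described above can be sketched with a toy deterministic "encryption" (our own illustration; sha256 merely stands in for an arbitrary deterministic algorithm provided by the environment): since only the length |(sid, x)| enters the ideal encryption, two sessions produce the very same ciphertext.

```python
import hashlib

def det_enc(pk: str, m: str) -> str:
    # toy deterministic encryption: equal plaintexts give equal ciphertexts
    return hashlib.sha256((pk + m).encode()).hexdigest()

def mu(n: int) -> str:
    # fixed leakage mu_n = 1^n: the SID is not part of the leakage
    return "1" * n

def ideal_encrypt_joint_state(pk: str, sid: str, x: str) -> str:
    # in the joint state world, (sid, x) is encrypted ideally, i.e.,
    # only its length |(sid, x)| enters the computation
    return det_enc(pk, mu(len(sid + x)))
```

Two equal-length messages in different sessions yield identical ciphertexts, so the shared decryption box ends up with one ciphertext and two recorded plaintexts, and decryption fails in the joint state world only.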
To circumvent the second problem, one might think that it suffices to restrict the environment to provide only encryption and decryption algorithms that originate from probabilistic encryption schemes for which the probability of clashes between ciphertexts is negligible. Let us call such an encryption scheme a valid encryption scheme. However, this does not solve the problem if, as in [6], SIDs are not leaked in the joint state world, even if the algorithms provided by the environment/simulator are assumed to be IND-CCA2 secure.
Upon encryption of some message x_0 with the proper public key pk in some session sid_0 in the joint state world, the ciphertext y is computed as enc(pk, μ_|(sid_0,x_0)|). Depending on μ_n and how pairings are encoded, we have that
μ_|(sid_0,x_0)| = (sid_1, x_1)   (3)
for some SID sid_1 and some plaintext x_1. This is, for example, the case if SIDs are assumed to have fixed length (e.g., the length of the security parameter) and are simply appended at the beginning of a message. This is a natural encoding, but our argument also works for other encodings and choices of μ_n (see below). Note that the environment can even try to choose x_0 and sid_0 in order to make (3) true. When trying to prove the joint state theorem, the obvious candidate for a simulator, subsequently called the standard simulator, is the following: If the standard simulator receives algorithms enc(·, ·), dec(·, ·) and the private/public key pair sk/pk from the environment, then, for all SIDs sid, it provides the algorithms enc^(sid)(·, ·) and dec^(sid)(·, ·) and the key pair sk/pk to the instance of F pke with SID sid, where
enc^(sid)(pk′, x): if pk′ = pk and not corrupted 27 then return enc(pk, μ_|(sid,x)|) else return enc(pk′, (sid, x))
dec^(sid)(sk, y): x := dec(sk, y); if x = (sid′, x′) for some x′ and sid′ = sid then return x′ else return ⊥
This seems to be the only reasonable simulator because in the joint state world a ciphertext for a message x in session sid is computed as enc(pk, μ_|(sid,x)|) and the plaintext of a ciphertext y that was not output by the functionality is computed as x = dec(sk, y); the joint state realization then checks whether x = (sid′, x′) and outputs x′ if sid′ = sid and ⊥ otherwise.
27 Technically, enc^(sid) cannot know whether the functionality is corrupted or not, but if we assume only static corruption, then the simulator knows whether the functionality is corrupted at the moment it is requested to present the algorithms and can hard-code this into enc^(sid).
Now, we provide an environment E that distinguishes between the joint state and the ideal world for such a simulator. As we will see, E does not corrupt any parties. Therefore, the simulator will not do so either: In the UC model, the simulator is prohibited from doing so by the control function, and in the IITM model, E could check this by asking the functionality whether it is corrupted and then distinguish between the joint state and the ideal world.
First, E initializes two instances for the same party, say with PID pid; one in session (with SID) sid 0 and one in session sid 1 . Furthermore, E provides algorithms enc(·, ·), dec(·, ·) and the public/private keys pk/sk where enc, dec, pk, and sk originate from a valid encryption scheme (e.g., they could belong to an IND-CCA2 secure encryption scheme). Then, E requests to encrypt the plaintext x 0 under the (correct) public key pk of party pid in session sid 0 . Let y denote the resulting ciphertext. Finally, E sends a decryption request for y and party pid in session sid 1 . It outputs "joint state" (or 1) if the returned plaintext is ⊥, and "ideal" (or 0) otherwise.
It is easy to see that E determines correctly whether it interacts with the joint state or the ideal world: In the joint state world, the plaintext returned by the ideal functionality upon the decryption request by E is (sid_0, x_0) (this is the plaintext recorded in the ideal functionality along with y). Since sid_0 ≠ sid_1, the joint state realization returns ⊥ as plaintext. In the ideal world, since y has not been recorded in session sid_1, y is decrypted using dec^(sid_1)(sk, y). That is, first dec(sk, y) = μ_|(sid_0,x_0)| is computed. Then, it is checked whether the first component of μ_|(sid_0,x_0)| is sid_1, which, because of (3), is the case. Therefore, x_1 is returned as plaintext by dec^(sid_1)(sk, y).
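The failure of the standard simulator hinges on equation (3). A minimal sketch (our own toy encoding, assuming fixed-length SIDs prepended to messages) shows how the fixed message μ parses as a pair for a "foreign" session and is therefore accepted by dec^(sid_1):

```python
SID_LEN = 4  # assumption: SIDs have fixed length and are simply prepended

def mu(n: int) -> str:
    return "1" * n  # the fixed message encrypted in place of the plaintext

def unpair(m: str):
    # parse a message as (sid, x) under the fixed-length-SID encoding
    return m[:SID_LEN], m[SID_LEN:]

def dec_sid(sid: str, plain: str):
    # the standard simulator's dec^(sid): strip the SID prefix if it matches
    s, x = unpair(plain)
    return x if s == sid else None

# E encrypts x0 in session sid0; ideally, mu of length |(sid0, x0)| is used
sid0, x0 = "aaaa", "hello"
leaked = mu(len(sid0 + x0))  # "111111111": parses as (sid, x) with sid = "1111"
```

Decrypting leaked in session sid_1 = "1111" thus succeeds in the ideal world while the joint state realization returns ⊥, which is exactly the discrepancy E exploits.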
We note that even if, in F pke , randomly chosen messages instead of a constant message are encrypted in the case of ideal encryption, the above argument still works if SIDs are short; the success probability of the environment will be smaller but still non-negligible.
Problems with the joint state realization in [11]. In [11], a (certified) public-key encryption functionality with local computation is proposed which is parametrized by fixed encryption and decryption algorithms; the keys are embedded in the algorithms, and hence, are also fixed (below we discuss the case that keys are not fixed). For this functionality, a theorem similar to Theorem 10 is stated only informally and without a proof. Such a theorem can only be expected to hold if one assumes that in the ideal world the ideal functionality is defined in such a way that its SID is given to the encryption and decryption algorithms by the functionality, and that the encryption and decryption algorithms make use of the SID in the same way as prescribed by the simulator in the proof of Theorem 10. So, already the ideal functionality has to mimic the joint state realization. However, the ideal functionality in the joint state world should be defined differently: It should ignore SIDs, because in the joint state world SIDs are handled outside of the ideal functionality. Hence, the joint state theorem would have to be stated with different ideal functionalities in the joint state and ideal world. This has not been mentioned in [11].
But even if this is done, the theorem would still not hold if in the joint state world SIDs are not leaked. The reasoning is similar to the one above for the joint state theorem in [6]. Note that since the keys as well as encryption and decryption algorithms are fixed, the environment can still decrypt messages on its own. To fix this problem, the ideal functionality in the joint state world would have to be modified to account for the leakage. Altogether, these modifications would mimic what is happening in Theorem 10 and our proof of this theorem.
Alternatively, instead of parametrizing the functionality with a fixed public key and fixed encryption and decryption algorithms, one could have the functionality generate its own keys. In this case, in the ideal world, different public keys would be used for encryption in different sessions for the same party, while in the joint state world the same key would be used for all sessions of this party. For the joint state theorem to hold, this would require the encryption scheme to hide the public key, which is not a property that IND-CCA2 secure schemes have in general.
Remarks on other functionalities in the literature.
As mentioned in the introduction, other formulations of public-key encryption functionalities, e.g., those in [7,17], are defined in a non-local way, i.e., all ciphertexts are provided by the adversary, with the mentioned disadvantages. Formulations with local computation, besides the one discussed above, have been proposed in [2,26].
The public-key encryption functionality in [2] is part of a Dolev-Yao style cryptographic library. It has similar restrictions as the digital signatures in this library: A user does not obtain the actual ciphertexts but only a handle to the ciphertexts within the library. By this, the use of ciphertexts by the user is restricted to the operations provided in the library. The implementation of the public-key encryption functionality within the library does not use a standard IND-CCA2 secure scheme, but requires a specific stronger construction.
In [26], formulations of public-key encryption functionalities with local computation are proposed which are parametrized by specific encryption and decryption algorithms, with the same drawbacks (concerning joint state) mentioned for [11].
We note that in [2,26] joint state realizations of the proposed functionalities have not been considered.
General remarks. One general remark concerning joint state theorems is that specifying corruption precisely is vital, as we do in our work, since some forms of corruption do not allow for joint state realizations. For example, if upon corruption all messages encrypted so far were given to the adversary in order of occurrence, the joint state and ideal world could be distinguished because the order in the joint state world cannot be reconstructed by the simulator in the ideal world. (See also the discussion of corruption for joint state for digital signatures in Sect. 6.1.)

Replayable Public-Key Encryption
Canetti et al. [12] define and motivate IND-RCCA secure encryption schemes and propose a public-key functionality with non-local computation that captures IND-RCCA security. In [6] (see the version of December 2005), Canetti sketches in a few lines how his public-key encryption functionality with local computation should be modified to obtain a functionality that mimics IND-RCCA security. However, the modification that Canetti proposes only makes sense in a setting with non-local computation of ciphertexts. A proof of equivalence of his functionality with IND-RCCA security is not provided. Also, neither in [12] nor in [6] is the issue of joint state mentioned in the context of IND-RCCA security. So, our formulation of replayable public-key encryption with local computation is the first such formulation. Also, we are the first to propose a joint state realization (see Sect. 5.3) in the context of IND-RCCA security.
The general remarks in Sects. 6.1 and 6.2 about the features and advantages of our formulations of digital signature and public-key encryption functionalities compared to other formulations also apply to our formulation of the functionality for replayable public-key encryption.

Joint State Theorems Without Pre-established Session Identifiers
The ideal functionalities and the realizations proposed here (and in other works) require parties to have pre-established and globally unique SIDs before using the functionalities/realizations. This implicitly requires protocols to use these SIDs in some essential way in order to prevent "interference" between different protocol sessions. In joint state realizations such SIDs are prefixed to the messages to be encrypted/signed.
While this is a good design principle, not all protocols use pre-established SIDs. This is, for example, the case for most real-world authentication, key exchange, and secure channel protocols. Therefore, in [23], an alternative way of addressing multiple protocol sessions without pre-established and globally unique SIDs is presented within the IITM model. Also, composition and joint state theorems without such SIDs are presented. In the formulation in [23], parties merely use locally chosen and managed SIDs.

Acknowledgements
Open Access funding provided by Projekt DEAL. We thank Ran Canetti for many interesting discussions on the UC model and joint state. This work was in part funded by the Deutsche Forschungsgemeinschaft (DFG) through Grant KU 1434/9-1.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

A Security Definitions for Cryptographic Primitives
In this section, we recall standard security notions for cryptographic schemes. They will be used to realize the ideal functionalities for cryptographic primitives that we present in this paper.
Traditionally, security notions for cryptographic primitives are defined with respect to adversaries that do not obtain external input, except for the security parameter. Universal composability frameworks such as the UC model [6,7] or the IITM model deal with environments that receive external input (and where the runtime of the environment might depend on the security parameter and the length of the external input). In order to realize ideal functionalities for cryptographic primitives, reduction proofs to the security of the underlying primitives are necessary, and hence, the notions have to be compatible. We chose to adapt the standard notions of security, i.e., we formulate them with respect to adversaries that receive external input. We note, however, that all our results carry over to the setting without external input, i.e., where all environments and adversaries do not receive external input (except for the security parameter).
To define the security notions, as usual in the cryptographic literature, we use the following notation: By Pr[y ← A(x) : B] we denote the probability of an event B where the probability distribution is given by a probabilistic algorithm A with input x; that is, y is a random variable that is distributed according to the probability distribution induced by A. This notation is extended naturally to allow for a sequence of algorithms A_1, . . . , A_n instead of A. For example, given algorithms gen, enc, and A; a security parameter η ∈ N; and a bit string x, the probability that A on input y outputs 1, where (the probability distribution of) y is obtained from running gen on input 1^η (to obtain k) and then running enc on input k and x, is denoted by Pr[k ← gen(1^η); y ← enc(k, x) : A(y) = 1].
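The sampling semantics of this notation can be illustrated with a small Monte-Carlo sketch. The algorithms gen and enc and the predicate A below are hypothetical toy stand-ins (not the schemes considered in this paper); they only demonstrate how the chain of algorithms inside Pr[· ; · : ·] is run left to right before the event is tested:

```python
import random

# Toy stand-ins (hypothetical, for illustrating the notation only).
def gen(eta):
    return random.randrange(2 ** eta)       # k <- gen(1^eta): a random eta-bit key

def enc(k, x):
    return k ^ x                            # y <- enc(k, x): a toy "ciphertext"

def A(y):
    return y % 2 == 0                       # the event "A(y) = 1"

def estimate_probability(eta, x, trials=10000):
    """Estimate Pr[k <- gen(1^eta); y <- enc(k, x) : A(y) = 1] by sampling:
    run the algorithms in sequence, then test the event on the result."""
    hits = 0
    for _ in range(trials):
        k = gen(eta)
        y = enc(k, x)
        if A(y):
            hits += 1
    return hits / trials

random.seed(0)                              # fixed seed for reproducibility
p = estimate_probability(8, 5)              # true probability is exactly 1/2 here
```

For this toy chain the low bit of k is uniform, so the estimated probability converges to 1/2.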
Furthermore, we use the notion of negligible functions as used in the IITM model (which we recalled at the end of the introduction).

A.1 Digital Signatures
In this section, following [16], we recall standard notions for digital signature schemes.

Definition 9.
A (digital) signature scheme Σ = (gen, sig, ver) consists of three polynomial-time algorithms. The probabilistic key generation algorithm gen expects a security parameter (in unary form) and returns a pair of keys (pk, sk), the public key pk and the private key sk. The (possibly) probabilistic signing algorithm sig expects a private key and a message and returns a signature. The deterministic verification algorithm ver expects a public key, a message, and a signature and returns true (verification succeeds) or false (verification fails).
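Definition 9 fixes only the interface (gen, sig, ver). As a concrete, self-contained illustration of that interface, the following sketch implements Lamport's classic one-time signature scheme with Python's standard library. It is a toy: being one-time, it is not a scheme for signing arbitrarily many messages, and it only serves to show the shape of the three algorithms:

```python
import hashlib, secrets

def H(b):
    return hashlib.sha256(b).digest()

def gen(eta=256):
    """Key generation: 256 pairs of random preimages (sk) and their hashes (pk)."""
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for (a, b) in sk]
    return pk, sk

def sig(sk, msg):
    """Sign: reveal, per bit of the message hash, one of the two preimages."""
    bits = bin(int.from_bytes(H(msg), "big"))[2:].zfill(256)
    return [sk[i][int(bit)] for i, bit in enumerate(bits)]

def ver(pk, msg, signature):
    """Verify: each revealed preimage must hash to the committed value."""
    if len(signature) != 256:
        return False
    bits = bin(int.from_bytes(H(msg), "big"))[2:].zfill(256)
    return all(H(s) == pk[i][int(bit)]
               for i, (s, bit) in enumerate(zip(signature, bits)))
```

A fresh key pair signs one message; verification succeeds for that message and fails for any other, since the message hashes differ in at least one bit position.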
We note that we do not restrict the domain of messages, i.e., any bit string is a valid message. However, all results presented in this paper could easily be extended to deal with other domains.

Definition 10.
A signature scheme Σ = (gen, sig, ver) is called UF-CMA secure (unforgeable under chosen-message attacks) if for every probabilistic, polynomial-time algorithm A (the adversary) which expects a security parameter 1^η, external input a, and a public key pk as input and has access to a signing oracle, the UF-CMA advantage of A against Σ

Adv^{uf-cma}_{A,Σ}(1^η, a) := Pr[(pk, sk) ← gen(1^η); (x, σ) ← A^{sig(sk,·)}(1^η, a, pk) : ver(pk, x, σ) = true and A has not previously called sig(sk, ·) with input x]

is negligible (as a function in η and a).
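The UF-CMA experiment behind this notion (an adversary with a signing oracle wins iff it outputs a valid signature on a message it never queried) can be sketched as a small game harness that is generic in the scheme and the adversary. The constant-signature toy "scheme" and the two adversaries below are hypothetical and only demonstrate the win condition:

```python
def uf_cma_experiment(gen, sig, ver, adversary, eta=128):
    """One run of the UF-CMA experiment: the adversary gets pk and a signing
    oracle and wins iff it outputs a valid signature on an unqueried message."""
    pk, sk = gen(eta)
    queried = set()
    def oracle(msg):
        queried.add(msg)
        return sig(sk, msg)
    x, s = adversary(pk, oracle)
    return x not in queried and ver(pk, x, s)

# Toy, obviously forgeable "scheme" (hypothetical, for illustration only):
# every signature is the constant b"sig", so forgery is trivial.
def toy_gen(eta):
    return b"pk", b"sk"
def toy_sig(sk, m):
    return b"sig"
def toy_ver(pk, m, s):
    return s == b"sig"

def replayer(pk, oracle):
    oracle(b"hello")
    return b"hello", b"sig"          # replays a queried message: never a forgery

def forger(pk, oracle):
    return b"fresh message", b"sig"  # valid signature on an unqueried message
```

Running the experiment with `replayer` yields False (a replay is not a forgery), while `forger` wins against the broken toy scheme.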

A.2 Public-Key Encryption

Definition 11.
A public-key encryption scheme = (gen, enc, dec) consists of three polynomial-time algorithms. The probabilistic key generation algorithm gen expects a security parameter 1 η and returns a pair of keys (pk, sk), the public key pk and the private key sk. The probabilistic encryption algorithm enc expects a public key and a plaintext and returns a ciphertext. The deterministic decryption algorithm dec expects a private key and a ciphertext and returns a plaintext if decryption succeeds. Otherwise, it returns the special symbol ⊥.
We require that, for every security parameter η ∈ N, every public/private key pair (pk, sk) generated by gen(1^η), every plaintext x ∈ D_Σ(η), and every ciphertext y generated by enc(pk, x), it holds that dec(sk, y) = x.
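A minimal sketch of the (gen, enc, dec) interface and this correctness requirement is textbook RSA with fixed toy primes. The parameters are hypothetical and the scheme is NOT secure (deterministic, tiny modulus); it only illustrates the syntax and that dec(sk, enc(pk, x)) = x on the plaintext domain:

```python
# Textbook RSA with tiny fixed primes: a toy (gen, enc, dec) triple matching
# the interface of Definition 11 (insecure; for illustration only).
def gen(eta=None):
    p, q, e = 1009, 1013, 65537          # fixed toy parameters (hypothetical)
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))    # d = e^{-1} mod phi(n)  (Python 3.8+)
    return (n, e), (n, d)                # (pk, sk)

def enc(pk, x):
    n, e = pk
    assert 0 <= x < n                    # toy plaintext domain: {0, ..., n-1}
    return pow(x, e, n)

def dec(sk, y):
    n, d = sk
    return pow(y, d, n)                  # handling of the symbol ⊥ omitted here

pk, sk = gen()
# Correctness: for every plaintext x in the domain, dec(sk, enc(pk, x)) == x.
```

Note that Definition 11 requires enc to be probabilistic; this deterministic toy already fails IND-CPA, which the IND-CCA2 sketch below the next definition makes explicit.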

IND-CCA2 security.
Following [3], IND-CCA2 security is defined as follows. We note that this notion is called IND-CCA-SE in the taxonomy of [4].

Definition 12.
A public-key encryption scheme Σ = (gen, enc, dec) is called IND-CCA2 secure if for every adversary A = (A_1, A_2) that is a pair of probabilistic, polynomial-time algorithms such that:

1. A_1^{O(·)} expects a security parameter 1^η, external input a, and a public key pk as input, has access to a (decryption) oracle O, and produces output of the form (x_0, x_1, s) consisting of two plaintexts x_0, x_1 ∈ D_Σ(η) of the same length (|x_0| = |x_1|) and a bit string s (some information A_1 wants to pass to A_2), and

2. A_2^{O(·)} expects a bit string s and a ciphertext y ∈ {0, 1}* as input, has access to an oracle O but never queries O with input y, and outputs a bit b' ∈ {0, 1},

the IND-CCA2 advantage of A against Σ

Adv^{ind-cca2}_{A,Σ}(1^η, a) := 2 · Pr[(pk, sk) ← gen(1^η); b ← {0, 1}; (x_0, x_1, s) ← A_1^{dec(sk,·)}(1^η, a, pk); y ← enc(pk, x_b); b' ← A_2^{dec(sk,·)}(s, y) : b' = b] − 1

is negligible (as a function in η and a).
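The experiment in this definition can be sketched as a harness that is generic in the scheme and the adversary. The deterministic toy scheme below is hypothetical; it also shows why the definition forces probabilistic encryption: an adversary that simply re-encrypts x_0 and compares with the challenge ciphertext wins every run:

```python
import random

def ind_cca2_experiment(gen, enc, dec, A1, A2, eta=128):
    """One run of the IND-CCA2 experiment; returns True iff the adversary's
    guess b' equals the challenge bit b. The decryption oracle answers any
    query except (for A2) the challenge ciphertext itself."""
    pk, sk = gen(eta)
    challenge = []
    def oracle(y):
        if challenge and y == challenge[0]:
            raise ValueError("A2 may not query the challenge ciphertext")
        return dec(sk, y)
    x0, x1, s = A1(pk, oracle)           # A1: pk + oracle -> (x0, x1, state)
    b = random.randrange(2)
    y_star = enc(pk, (x0, x1)[b])
    challenge.append(y_star)
    b_guess = A2(s, y_star, oracle)      # A2: state + y* + oracle -> bit
    return b_guess == b

# Toy *deterministic* scheme (hypothetical): XOR with a fixed key.
def toy_gen(eta):
    k = 0b101010
    return k, k
def toy_enc(pk, x):
    return x ^ pk
def toy_dec(sk, y):
    return y ^ sk

def A1(pk, oracle):
    return 0, 1, pk                      # challenge plaintexts 0 and 1; state = pk
def A2(s, y_star, oracle):
    return 0 if toy_enc(s, 0) == y_star else 1   # re-encrypt and compare
```

Against this deterministic scheme the adversary guesses b correctly in every run, i.e., its IND-CCA2 advantage is 1.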

IND-RCCA security.
The notion of IND-RCCA security (replayable IND-CCA2 security) for public-key encryption schemes is a relaxed form of IND-CCA2 security where modifications of a ciphertext that yield the same plaintext are permitted. In particular, IND-CCA2 security implies IND-RCCA security [12]. IND-RCCA security has been introduced by Canetti, Krawczyk, and Nielsen in [12]. As explained in [12], IND-RCCA security suffices in many applications where IND-CCA2 security is used. IND-RCCA security is defined as follows. A public-key encryption scheme Σ = (gen, enc, dec) is called IND-RCCA secure if for every adversary A = (A_1, A_2) that is a pair of probabilistic, polynomial-time algorithms such that:

1. A_1^{O(·)} expects a security parameter 1^η, external input a, and a public key pk as input, has access to an oracle O, and produces output of the form (x_0, x_1, s) consisting of two plaintexts x_0, x_1 ∈ D_Σ(η) of the same length (|x_0| = |x_1|) and a bit string s (some information A_1 wants to pass to A_2), and

2. A_2^{O(·)} expects a bit string s and a ciphertext y ∈ {0, 1}* as input, has access to an oracle O, and outputs a bit b' ∈ {0, 1},

the IND-RCCA advantage of A against Σ

Adv^{ind-rcca}_{A,Σ}(1^η, a) := 2 · Pr[(pk, sk) ← gen(1^η); b ← {0, 1}; (x_0, x_1, s) ← A_1^{dec(sk,·)}(1^η, a, pk); y ← enc(pk, x_b); b' ← A_2^{dec'(x_0,x_1,sk,·)}(s, y) : b' = b] − 1

is negligible (as a function in η and a), where the oracle dec' is defined as follows: dec'(x_0, x_1, sk, y): if dec(sk, y) ∈ {x_0, x_1}: return test; else: return dec(sk, y).
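The modified decryption oracle of this definition, which answers with the special symbol test whenever the queried ciphertext decrypts to one of the challenge plaintexts, can be sketched as follows (the XOR "scheme" is a hypothetical stand-in used only to exercise the oracle):

```python
# The oracle dec' from the IND-RCCA definition: queries that decrypt to either
# challenge plaintext are answered with the special symbol `test` instead of
# the plaintext, so "replays" of the challenge ciphertext reveal nothing.
TEST = object()   # special symbol, distinct from every bit string and from ⊥

def make_rcca_oracle(dec, sk, x0, x1):
    def oracle(y):
        x = dec(sk, y)
        return TEST if x in (x0, x1) else x
    return oracle

# Tiny hypothetical stand-in for decryption: XOR with the (secret) key.
def toy_dec(sk, y):
    return y ^ sk

oracle = make_rcca_oracle(toy_dec, 7, x0=1, x1=2)
# toy_dec(7, 6) == 1 == x0, so oracle(6) yields TEST; oracle(4) == 3 is released.
```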
(The symbol test is a special symbol which is not confused with any bit string or ⊥.)

B.1 Proof of Theorem 6
We now prove Theorem 6, i.e., that P_sig realizes F_sig if and only if Σ is UF-CMA secure (Fig. 9).

B.1.1 Σ is UF-CMA Secure ⇒ P_sig ≤ F_sig
First, we assume that Σ is UF-CMA secure and show that P_sig ≤ F_sig.
We use the following straightforward simulator S ∈ Sim_{P_sig}(F_sig): S is a single IITM that accepts all messages in mode CheckAddress. In mode Compute, upon receiving Init from F_sig, S forwards Init to the environment and waits for receiving a message of the form (corrupt, pk, sk) from the environment where corrupt ∈ {false, true} and pk, sk ∈ {0, 1}*. Any message not of this form is ignored by S, i.e., S ends the activation with empty output. When receiving (corrupt, pk, sk) with corrupt = false, i.e., the environment does not want to corrupt P_sig upon initialization, then S generates a new public/private key pair (pk', sk') ← gen(1^η) using the key generation algorithm of Σ and sends (false, sig, ver, pk', sk') to F_sig (where sig and ver are descriptions of the signing and verification algorithms of Σ). When receiving (corrupt, pk, sk) with corrupt = true, i.e., the environment wants to corrupt P_sig upon initialization, then S sends (true, sig, ver, pk, sk) to F_sig (i.e., S corrupts F_sig upon initialization). Then, S waits for receiving Corrupt from the environment (any other input is ignored by S, as above). If it receives Corrupt, S forwards Corrupt to F_sig, waits for receiving Corrupted from F_sig, and sends (Corrupted, pk, sk) to the network, where (pk, sk) is the key pair provided previously to F_sig (i.e., either the key pair provided by the environment or the key pair generated by S). Then, S again waits for receiving Corrupt from the environment and this request is processed in the same way as described above.
It is easy to see that S | F sig is environmentally strictly bounded, i.e., S ∈ Sim P sig (F sig ).
Let E ∈ Env(P_sig) be an environment of P_sig. Using E, we construct a UF-CMA adversary A on Σ such that, basically, A is successful if E successfully distinguishes between P_sig and S | F_sig.
We define A O(·) (1 η , a, pk) (where O is a signing oracle) as follows: A simulates a run of (E | P sig )(1 η , a) with the following exceptions: (a) If E corrupts P sig (i.e., either upon initialization or later by sending the request Corrupt to P sig on the network tape), then A aborts (i.e., in this case, A fails to produce a forgery). (b) Instead of using the public key generated by P sig , the public key pk is used. (The private key generated by P sig is never used because O is used for signing, see below.) (c) Whenever P sig wants to compute the signature σ of a message x, then A instead computes σ ← O(x) using its signing oracle O. (d) Whenever P sig verifies a signature σ for a message x, then A checks whether (x, σ ) constitutes a forgery, i.e., x has never been signed before and ver(pk, x, σ ) = true.
If (x, σ ) is a forgery, then A terminates with output (x, σ ). Otherwise, A continues the simulation of E | P sig .
It is easy to see that A is polynomial-time because E | P_sig is strictly bounded. Let B(1^η, a) be the set of runs of (E | P_sig)(1^η, a) where, at some point during the run, P_sig is uncorrupted and verifies a signature σ for a message x where x has never been signed before and ver(pk, x, σ) = true. By B̄(1^η, a) we denote the complement, i.e., the set of runs of (E | P_sig)(1^η, a) which are not in B(1^η, a). Since every run of (E | P_sig)(1^η, a) that is simulated by A corresponds to a run of (E | P_sig)(1^η, a), it is easy to see that:

Pr[B(1^η, a)] = Pr[(pk, sk) ← gen(1^η); (x, σ) ← A^{sig(sk,·)}(1^η, a, pk) : ver(pk, x, σ) = true and A has not previously called sig(sk, x)] = Adv^{uf-cma}_{A,Σ}(1^η, a).
By assumption, is UF-CMA secure and, hence, Pr [B(1 η , a)] is negligible (as a function in η and a).
Because the behavior of P_sig and S | F_sig is identical as long as no forgery occurs or once P_sig (and hence, by definition of S, F_sig) is corrupted, it is easy to see that every run ρ of (E | P_sig)(1^η, a) which is not in B(1^η, a) corresponds to a run ρ' of (E | S | F_sig)(1^η, a) such that both ρ and ρ' have the same probability and the view and probabilistic choices of E are the same in both runs. Formally, there exists an injective mapping from runs ρ of (E | P_sig)(1^η, a) excluding B(1^η, a) to runs ρ' of (E | S | F_sig)(1^η, a) such that both ρ and ρ' have the same probability and the same overall output on decision. It immediately follows that:

|Pr[(E | P_sig)(1^η, a) = 1] − Pr[(E | S | F_sig)(1^η, a) = 1]| ≤ Pr[B(1^η, a)].
From this, we conclude that:

|Pr[(E | P_sig)(1^η, a) = 1] − Pr[(E | S | F_sig)(1^η, a) = 1]| ≤ Adv^{uf-cma}_{A,Σ}(1^η, a),

which is negligible (as a function in η and a) because Σ is UF-CMA secure. Hence, E | P_sig ≡ E | S | F_sig and P_sig ≤ F_sig.

B.1.2 P_sig ≤ F_sig ⇒ Σ is UF-CMA Secure
We now show that Σ is UF-CMA secure if P_sig ≤ F_sig. Therefore, we assume that Σ is not UF-CMA secure, i.e., there exists a UF-CMA adversary A^{O(·)} such that the UF-CMA advantage of A against Σ is not negligible (see Definition 10). Using A, we construct an environment E ∈ Env(P_sig) such that E distinguishes P_sig from S | F_sig for every simulator S ∈ Sim_{P_sig}(F_sig).
We define E to be a master IITM (i.e., E has an input tape named start) which has an output tape named decision and connects to the I/O and network tapes of P sig (and, hence, S | F sig ). In mode CheckAddress, E accepts every message. Next, we describe the mode Compute of E in an interaction with P sig , but, of course, P sig can be replaced by S | F sig : (a) Upon the first activation with external input a on tape start, E sends PubKey? to P sig on an I/O tape, say on io in 1 . Then, E waits for receiving Init from P sig on the network tape (i.e., on net out P sig ) and replies with (false, ε, ε) (where ε is the empty bit string) on net in P sig , i.e., E does not corrupt P sig and P sig generates a fresh key pair. Then, E waits for receiving (PubKey, pk) from P sig on io out 1 . (b) Then, E simulates the algorithm A O(·) (1 η , a, pk). Whenever A asks its signing oracle O to sign a message x, then E sends (Sign, x) to P sig on io in 1 and waits for receiving (Signature, σ ) from P sig on io out 1 . Then, E continues the simulation of A as if O returned σ . The output of A will be a pair (x 0 , σ 0 ). (c) Then, E checks whether (x 0 , σ 0 ) constitutes a forgery, i.e., x 0 has not been signed and P sig , upon request (Verify, pk, x 0 , σ 0 ) on io in 1 , returns (VerResult, true). If (x 0 , σ 0 ) constitutes a forgery and P sig is not corrupted (E can check this by sending CorrStatus? to P sig on io in 1 ), then E terminates with output 1 on decision, otherwise with output 0. (Note that E never corrupts P sig but S might have corrupted F sig .) If at some point in the description above, E waits for receiving a message but the input is not as expected or on an unexpected tape (this will never happen in the real world, i.e., in a run of E | P sig , but possibly in the ideal world, i.e., in a run of E | S | F sig ), then E terminates with output 0 on decision.
In the ideal world (i.e., in a run of E | S | F_sig), E will never output 1 on decision, i.e.:

Pr[(E | S | F_sig)(1^η, a) = 1] = 0,

because, by definition, an uncorrupted F_sig will never return (VerResult, true) upon a Verify request for a message that has not previously been signed using F_sig. The probability that E outputs 1 on decision in the real world is exactly the advantage of A against Σ:

Pr[(E | P_sig)(1^η, a) = 1] = Pr[(pk, sk) ← gen(1^η); (x, σ) ← A^{sig(sk,·)}(1^η, a, pk) : ver(pk, x, σ) = true and A has not previously called sig(sk, x)] = Adv^{uf-cma}_{A,Σ}(1^η, a).
We obtain that:

Pr[(E | P_sig)(1^η, a) = 1] − Pr[(E | S | F_sig)(1^η, a) = 1] = Adv^{uf-cma}_{A,Σ}(1^η, a).

Hence, by assumption that A is successful, we conclude that E | P_sig ≢ E | S | F_sig, i.e., P_sig ≰ F_sig. This concludes the proof of Theorem 6.

B.2 Proof of Theorem 7
We prove Theorem 7, i.e., that P_pke realizes F_pke if and only if Σ is IND-CCA2 secure (Fig. 10). This proof is along the lines of the proof in [6] (version of December 2005). Let n, Σ = (gen, enc, dec), p, and L be given as in the theorem.
B.2.1 Σ is IND-CCA2 Secure ⇒ P_pke ≤ F_pke
First, we assume that Σ is IND-CCA2 secure and show that P_pke ≤ F_pke. We use a straightforward simulator S ∈ Sim_{P_pke}(F_pke) that accepts all messages in mode CheckAddress, forwards the Init request from F_pke to the environment, and completes initialization with a freshly generated key pair in the uncorrupted case and with the key pair provided by the environment upon corruption. More formally, S is defined exactly as the simulator in the proof of Theorem 6 (see "Appendix B.1.1"), except that Corrupt requests from the environment are not forwarded to F_pke (recall that P_pke and F_pke both do not allow adaptive corruption, in contrast to P_sig and F_sig). It is easy to see that S | F_pke is environmentally strictly bounded, i.e., S ∈ Sim_{P_pke}(F_pke).
Let E ∈ Env(P_pke) be an environment of P_pke. Using E, we construct an IND-CCA2 adversary A on Σ such that, basically, A is successful if E successfully distinguishes between P_pke and S | F_pke.
To simplify the presentation of the adversary A, without loss of generality, we assume the following: (i) The first request E sends to P_pke (or S | F_pke) in any run is a PubKey? request. Then, E receives Init from P_pke (or S | F_pke) on the network tape and completes initialization of P_pke (or S | F_pke) by sending (false, ε, ε) (ε is the empty bit string) to the network interface of P_pke (or S | F_pke). That is, E does not corrupt P_pke (or F_pke) and directly completes initialization. (It is easy to see that, upon corruption, P_pke and S | F_pke are indistinguishable; in fact, the observational behavior of P_pke and S | F_pke would be exactly the same.) Furthermore, E never sends a second PubKey? request. (ii) In any run, E never sends corruption status requests (CorrStatus?). (By definition of S, P_pke and F_pke always agree on the corruption status. Hence, this request would not help E to distinguish between P_pke and S | F_pke.) (iii) In any run, E only sends encryption requests with the correct public key, i.e., the public key pk in every Enc request is the public key that E received as response to the PubKey? request. (It is easy to see that E gains no advantage from sending Enc requests with a different key.) (iv) There exists a polynomial n_Enc such that the overall number of encryption requests that E sends in any run (with security parameter η and external input a) is exactly n_Enc(η + |a|). (Note that the number of Enc requests sent by E is polynomially bounded in η + |a| because E is universally bounded.) In the following, we just write n_Enc to denote n_Enc(η + |a|).
We now define an IND-CCA2 adversary A = (A_1, A_2) against Σ. The first part A_1^{O(·)}(1^η, a, pk) (where O(·) is a decryption oracle) is defined as follows: At first, A_1 chooses h ∈ {1, . . . , n_Enc} uniformly at random. Then, A_1 simulates a run of E as follows:

• A_1 starts the simulation of E with security parameter η and external input a.
• When E sends the PubKey? request, A_1 sends Init on the network tape to E. (This is what E expects in a run with P_pke or S | F_pke.)
• When E replies with (false, ε, ε), A_1 sends the public key (PubKey, pk) to E.
• When E sends an Enc request, say the i-th encryption request with plaintext x_i for some i < h, then A_1 computes y ← enc(pk, x_i), records the pair (x_i, y) (for later decryption), and sends (Ciphertext, y) to E.
• When E sends a Dec request for a ciphertext y, then A_1 proceeds similarly to F_pke: If there exists exactly one recorded pair (x, y), then A_1 returns x to E. If there exists more than one such pair, A_1 sends an error message to E. Otherwise, A_1 computes x ← O(y) using its decryption oracle and returns x to E.
• When E sends the h-th encryption request, say with plaintext x_h, then A_1 computes the leakage x'_h ← L(1^η, x_h), halts, and outputs (x_h, x'_h, s), where s is a bit string that encodes all information A_2 needs to continue the simulation of E.

In the IND-CCA2 experiment, A_2 then receives a ciphertext y* which is an encryption of x_h (if b = 0) or of x'_h (if b = 1). The second part A_2^{O(·)}(s, y*) reconstructs the information stored in s, records the pair (x_h, y*), and continues the simulation of the run of E as follows:

• First, A_2 sends (Ciphertext, y*) to E. (Recall that E just sent an encryption request and is waiting for a ciphertext.)
• When E sends an Enc request, say the i-th encryption request and the plaintext is x_i (for some i ∈ {h + 1, . . . , n_Enc}), then, similarly to F_pke, A_2 computes x'_i ← L(1^η, x_i) and y ← enc(pk, x'_i). Then, A_2 records the pair (x_i, y) (for later decryption) and sends (Ciphertext, y) to E.
• When E sends a Dec request, then A_2 behaves exactly as A_1, see above.
• When the simulated run stops, then A 2 outputs 1 if E has output 1 on decision, otherwise, A 2 outputs 0.
Note that it always holds that |x_h| = |x'_h| because L is length preserving. Furthermore, A_2, by definition, never asks its oracle O for the decryption of y*. Since E is universally bounded, A_1 and A_2 are polynomial-time. Hence, A is a valid IND-CCA2 adversary against Σ. Before we analyze the advantage of A against Σ, we note that the following is easy to see, where Exp^{ind-cca2-b}_{A,Σ}(1^η, a) denotes the IND-CCA2 experiment with challenge bit b:

Exp^{ind-cca2-b}_{A,Σ}(1^η, a): (pk, sk) ← gen(1^η); (x_0, x_1, s) ← A_1^{dec(sk,·)}(1^η, a, pk); y ← enc(pk, x_b); b' ← A_2^{dec(sk,·)}(s, y); return b'.

By construction of A, it is easy to see that for all η, a:

Pr[(E | S | F_pke)(1^η, a) = 1] = Pr[Exp^{ind-cca2-1}_{A,Σ}(1^η, a) = 1 | h = 1] and  (6)
Pr[(E | P_pke)(1^η, a) = 1] = Pr[Exp^{ind-cca2-0}_{A,Σ}(1^η, a) = 1 | h = n_Enc].  (7)

Furthermore, it is easy to see that for all η, a and all i ∈ {1, . . . , n_Enc − 1}:

Pr[Exp^{ind-cca2-0}_{A,Σ}(1^η, a) = 1 | h = i] = Pr[Exp^{ind-cca2-1}_{A,Σ}(1^η, a) = 1 | h = i + 1],  (8)

because in both experiments (the left one with h = i, the right one with h = i + 1), it is the case that the first i encryptions are encryptions of the real messages and all later encryptions are encryptions of leakages. Since h is chosen uniformly, combining (6), (7), and (8) yields:

|Pr[(E | P_pke)(1^η, a) = 1] − Pr[(E | S | F_pke)(1^η, a) = 1]| = n_Enc · |Pr[Exp^{ind-cca2-1}_{A,Σ}(1^η, a) = 1] − Pr[Exp^{ind-cca2-0}_{A,Σ}(1^η, a) = 1]| = n_Enc · |Adv^{ind-cca2}_{A,Σ}(1^η, a)|,

which is negligible (as a function in η and a) because Σ is IND-CCA2 secure and n_Enc is a polynomial. Hence, E | P_pke ≡ E | S | F_pke, i.e., P_pke ≤ F_pke.
B.2.2 P_pke ≤ F_pke ⇒ Σ is IND-CCA2 Secure
We now show that Σ is IND-CCA2 secure if P_pke ≤ F_pke. Therefore, we assume that Σ is not IND-CCA2 secure, i.e., there exists an IND-CCA2 adversary A = (A_1, A_2) with non-negligible IND-CCA2 advantage against Σ (see Definition 12). Using A, we construct an environment E ∈ Env(P_pke) such that E distinguishes P_pke from S | F_pke for every simulator S ∈ Sim_{P_pke}(F_pke).
We define E to be a master IITM (i.e., an IITM with a tape named start) with an output tape named decision and tapes to connect to P_pke (or S | F_pke for any S ∈ Sim_{P_pke}(F_pke)). In what follows, when we say E encrypts/decrypts using P_pke, we mean using P_pke or S | F_pke, depending on which system E is connected to. In mode CheckAddress, E accepts every incoming message and in mode Compute it operates as follows: • Upon the first activation with external input a on tape start, E sends PubKey? to P_pke on an I/O tape, say on io_in_1. Then, E waits for receiving Init from P_pke on the network tape (i.e., on net_out_{P_pke}) and replies with (false, ε, ε) (where ε is the empty bit string) on net_in_{P_pke}, i.e., E does not corrupt P_pke and P_pke generates a fresh key pair. Then, E waits for receiving (PubKey, pk) from P_pke on io_out_1. • Then, E simulates a run of the adversary A_1^{O(·)}(1^η, a, pk) as follows: · Whenever A_1 asks its decryption oracle O to decrypt a ciphertext, say y, then E decrypts y using P_pke, i.e., E sends (Dec, y) to P_pke and waits for receiving (Plaintext, x) from P_pke. Then, E continues simulating A_1 as if O returned x. · When A_1 halts and outputs (x_0, x_1, s), then E chooses a bit b ∈ {0, 1} uniformly at random, encrypts x_b under pk using P_pke by sending the request (Enc, pk, x_b), and waits for receiving (Ciphertext, y*).
• Then, E simulates a run of A_2^{O(·)}(1^η, s, y*) as follows: · Whenever A_2 asks its decryption oracle O to decrypt a ciphertext, say y, then E decrypts y using P_pke and continues simulating A_2 as described above for A_1. · When A_2 halts and outputs a bit b' ∈ {0, 1}, then E does the following: First, E checks whether P_pke is corrupted (it should not be corrupted because E did not corrupt P_pke, but if E interacts with S | F_pke instead of P_pke, then S might have corrupted F_pke), i.e., E sends CorrStatus? to P_pke and waits for receiving (Corrupted, corrupt) from P_pke. If corrupt = false and b = b', then E outputs 1 on the tape decision. Otherwise, if corrupt = false (i.e., b ≠ b'), then E outputs 0 on decision. Otherwise (i.e., corrupt = true), E outputs a random bit on decision (i.e., E chooses a bit uniformly at random and outputs it on decision).
If at some point above E waits for receiving a message but this message is not as expected, then E outputs a random bit on decision.
It is easy to see that E is universally bounded, i.e., E ∈ Env(P pke ).
In the real world (i.e., in runs of E | P_pke), E always receives what it expects because of the definition of P_pke. Furthermore, P_pke never gets corrupted (i.e., corrupt = false) and, hence, the simulation of A is exactly as in the IND-CCA2 experiment; i.e., for every b ∈ {0, 1}, the run of E | P_pke in which E chooses the bit b corresponds to the experiment Exp^{ind-cca2-b}_{A,Σ}(1^η, a). We obtain:

Pr[(E | P_pke)(1^η, a) = 1] = 1/2 · (Pr[Exp^{ind-cca2-1}_{A,Σ}(1^η, a) = 1] + 1 − Pr[Exp^{ind-cca2-0}_{A,Σ}(1^η, a) = 1]).

In the ideal world (i.e., in runs of E | S | F_pke), E outputs 1 with probability exactly 1/2: If E receives some unexpected input or if F_pke gets corrupted (i.e., corrupt = true), then, by definition of E, E outputs a random bit. Otherwise, E always receives what it expects and F_pke is uncorrupted. In this case, E outputs 1 iff b = b'. Because the leakage algorithm L leaks at most the length, by definition, there exists a PPT algorithm T such that for all x (in particular for x ∈ {x_0, x_1}) the distribution of T(1^η, 1^{|x|}) equals the distribution of L(1^η, x). That is, since A_1 outputs plaintexts of the same length (i.e., |x_0| = |x_1|), the input to A_2 (as a random variable) is independent of b (as a random variable) and, hence, the output of A_2 (as a random variable) is independent of b. We conclude that b = b' occurs with probability exactly 1/2 and we obtain:

Pr[(E | S | F_pke)(1^η, a) = 1] = 1/2.

We conclude that:

Pr[(E | P_pke)(1^η, a) = 1] − Pr[(E | S | F_pke)(1^η, a) = 1] = 1/2 · (Pr[Exp^{ind-cca2-1}_{A,Σ}(1^η, a) = 1] − Pr[Exp^{ind-cca2-0}_{A,Σ}(1^η, a) = 1]) = 1/2 · Adv^{ind-cca2}_{A,Σ}(1^η, a),

which, by assumption, is non-negligible (as a function in η and a). Hence, E | P_pke ≢ E | S | F_pke and we conclude that P_pke ≰ F_pke.
This concludes the proof of Theorem 7.

B.3 Proof of Theorem 8
We now prove Theorem 8, i.e., that P_pke realizes F_rpke if and only if Σ is IND-RCCA secure. Let n, Σ = (gen, enc, dec), p, and L be given as in the theorem.
B.3.1 Σ is IND-RCCA Secure ⇒ P_pke ≤ F_rpke
First, we assume that Σ is IND-RCCA secure and show that P_pke ≤ F_rpke. Let S be the simulator defined in "Appendix B.2.1" (to prove that P_pke ≤ F_pke). It is easy to see that S ∈ Sim_{P_pke}(F_rpke), i.e., S is also a valid simulator for F_rpke.
Let E ∈ Env(P_pke) be an environment of P_pke. Using E, we construct an IND-RCCA adversary A on Σ such that, basically, A is successful if E successfully distinguishes between P_pke and S | F_rpke.
As in "Appendix B.2.1", to simplify the presentation of the adversary A, without loss of generality, we assume the following: (i) The first request E sends to P_pke (or S | F_rpke) in any run is a PubKey? request. Then, E receives Init from P_pke (or S | F_rpke) on the network tape and completes initialization of P_pke (or S | F_rpke) by sending (false, ε, ε) (ε is the empty bit string) to the network interface of P_pke (or S | F_rpke). That is, E does not corrupt P_pke (or F_rpke) and directly completes initialization. (It is easy to see that, upon corruption, P_pke and S | F_rpke are indistinguishable; in fact, the observational behavior of P_pke and S | F_rpke would be exactly the same.) Furthermore, E never sends a second PubKey? request. (ii) In any run, E never sends corruption status requests (CorrStatus?). (By definition of S, P_pke and F_rpke always agree on the corruption status. Hence, this request would not help E to distinguish between P_pke and S | F_rpke.) (iii) In any run, E only sends encryption requests with the correct public key, i.e., the public key pk in every Enc request is the public key that E received as response to the PubKey? request. (It is easy to see that E gains no advantage from sending Enc requests with a different key.) (iv) There exists a polynomial n_Enc such that the overall number of encryption requests that E sends in any run (with security parameter η and external input a) is exactly n_Enc(η + |a|). (Note that the number of Enc requests sent by E is polynomially bounded in η + |a| because E is universally bounded.) In the following, we just write n_Enc to denote n_Enc(η + |a|).
We now define an IND-RCCA adversary A = (A_1, A_2) against Σ. It is defined similarly to the adversary A in "Appendix B.2.1"; it only differs slightly upon encryption and decryption requests. A_1^{O(·)}(1^η, a, pk) first chooses h ∈ {1, . . . , n_Enc} uniformly at random and then simulates a run of E with security parameter η and external input a: Upon the PubKey? request of E, A_1 sends the public key pk to E. The first h − 1 encryption requests (i.e., for the plaintexts x_1, . . . , x_{h−1}) are answered by simply encrypting the plaintext under the public key pk. In contrast to A in "Appendix B.2.1", the plaintext/ciphertext pair is not recorded for later decryption. Decryption requests are answered by using the decryption oracle O of A_1. When E sends the h-th encryption request, then A_1 halts and outputs (x_h, x'_h, s), where x_h is the plaintext in this encryption request, x'_h is the leakage x'_h ← L(1^η, x_h) of x_h, and s is a bit string that encodes all information that A_2 needs to continue the simulation of E. In the IND-RCCA experiment, A_2^{O(·)}(1^η, s, y*) then receives the encryption y* of x_h if b = 0 and of x'_h if b = 1 and has to guess b. Similarly to A_1, A_2 continues the simulation of E as follows: Recall that E has just sent an encryption request and is still waiting to receive a ciphertext. A_2 returns y* to E as the ciphertext and continues the simulation of E. Now, encryption requests are handled as in F_rpke. More precisely: When E sends the i-th encryption request for the plaintext x_i, with i ∈ {h + 1, . . . , n_Enc}, then A_2 computes the leakage x'_i ← L(1^η, x_i) of x_i, records the pair (x_i, x'_i) for later decryption, encrypts x'_i under pk, and returns the obtained ciphertext to E. When E sends a decryption request, say for the ciphertext y, then, similarly to F_rpke, A_2 decrypts y using its decryption oracle O; let x := O(y).
If x = test, then A_2 sends x_h (recall that x_h is the plaintext in the h-th encryption request, i.e., the left part of the challenge output by A_1) to E. Otherwise (i.e., x ≠ test), A_2 does the following: If there exists exactly one recorded pair whose leakage component equals x, then A_2 returns the plaintext of this pair to E. Otherwise, if there exists more than one such pair, A_2 sends an error message to E. Otherwise (i.e., there exists no such pair), A_2 sends x to E.
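The case distinction that A_2 performs when answering a decryption request of E can be sketched as a small helper. The function name and the dictionary representation of the recorded (plaintext, leakage) pairs are illustrative assumptions, not notation from the paper:

```python
# Sketch of A_2's answer to a decryption request in the simulation above:
# x is the oracle's answer, `recorded` maps each leakage to the list of real
# plaintexts recorded with it upon encryption, and x_h is the challenge plaintext.
TEST = "test"     # stand-in for the special symbol `test`
ERROR = "error"   # stand-in for the error message

def answer_dec_request(x, recorded, x_h):
    if x is TEST:
        return x_h                      # challenge replay: answer with x_h
    matches = recorded.get(x, [])
    if len(matches) == 1:
        return matches[0]               # unique recorded plaintext for leakage x
    if len(matches) > 1:
        return ERROR                    # ambiguous: more than one recorded pair
    return x                            # no recorded pair: release x itself
```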
When the simulated run stops, then A 2 outputs 1 if E has output 1 on decision, otherwise, A 2 outputs 0.
Note that it always holds that |x_h| = |x'_h| because L is length preserving. Since E is universally bounded, A_1 and A_2 are polynomial-time. Hence, A is a valid IND-RCCA adversary against Σ. Before we analyze the advantage of A against Σ, we define the experiment

Exp^{ind-rcca-b}_{A,Σ}(1^η, a): (pk, sk) ← gen(1^η); (x_0, x_1, s) ← A_1^{dec(sk,·)}(1^η, a, pk); y ← enc(pk, x_b); b' ← A_2^{dec'(x_0,x_1,sk,·)}(s, y); return b',  (13)

and note that it is easy to see that Adv^{ind-rcca}_{A,Σ}(1^η, a) = Pr[Exp^{ind-rcca-1}_{A,Σ}(1^η, a) = 1] − Pr[Exp^{ind-rcca-0}_{A,Σ}(1^η, a) = 1]. For i ∈ {1, . . . , n_Enc}, let Exp^b_i denote the experiment Exp^{ind-rcca-b}_{A,Σ}(1^η, a) conditioned on the event that A_1 chose h = i; furthermore, let Exp^0_0 := (E | S | F_rpke)(1^η, a) and Exp^1_{n_Enc+1} := (E | P_pke)(1^η, a). Since h is chosen uniformly, it holds that:

n_Enc · Adv^{ind-rcca}_{A,Σ}(1^η, a) = Σ_{i=1}^{n_Enc} (Pr[Exp^1_i = 1] − Pr[Exp^0_i = 1]).  (14)

Now, as in the proof of Theorem 7, we would like to prove that Pr[Exp^0_i = 1] = Pr[Exp^1_{i+1} = 1] for all i ∈ {0, . . . , n_Enc} because this would imply |Pr[(E | P_pke)(1^η, a) = 1] − Pr[(E | S | F_rpke)(1^η, a) = 1]| = n_Enc · |Adv^{ind-rcca}_{A,Σ}(1^η, a)|. This, however, is not possible because the systems differ slightly (see below). Instead, we show that there exists a negligible function f_B such that:

|Pr[Exp^0_i = 1] − Pr[Exp^1_{i+1} = 1]| ≤ f_B(η, a) for all i ∈ {0, . . . , n_Enc}.  (15)

Before we prove (15), we show how this implies P_pke ≤ F_rpke. It holds that:

|Pr[(E | P_pke)(1^η, a) = 1] − Pr[(E | S | F_rpke)(1^η, a) = 1]| = |Σ_{i=0}^{n_Enc} (Pr[Exp^1_{i+1} = 1] − Pr[Exp^0_i = 1]) − Σ_{i=1}^{n_Enc} (Pr[Exp^1_i = 1] − Pr[Exp^0_i = 1])| ≤ (n_Enc + 1) · f_B(η, a) + n_Enc · |Adv^{ind-rcca}_{A,Σ}(1^η, a)|,

where the last step uses (14) and (15). By the assumption that Σ is IND-RCCA secure, Adv^{ind-rcca}_{A,Σ}(1^η, a) is negligible (as a function in η and a). Hence, because n_Enc is a polynomial (in η and a) and f_B is negligible, we conclude that E | S | F_rpke ≡ E | P_pke, i.e., P_pke ≤ F_rpke.
Finally, we show (15); here we will need that L has high entropy (Definition 8). For every i ∈ {0, . . . , n_Enc}, we define B_i(1^η, a) (B_i for short) to be the event that, in a run of Exp^0_i(1^η, a), (at least) one of the following things happens:

1. Collision of a leakage with the i-th plaintext: x_i ∈ {x'_{i+1}, . . . , x'_{n_Enc}} and 0 < i < n_Enc.
2. Collision of a leakage with the (i+1)-st plaintext: x_{i+1} ∈ {x'_{i+2}, . . . , x'_{n_Enc}} and i < n_Enc − 1.
3. Collision of a leakage with the (i+1)-st leakage: x'_{i+1} ∈ {x'_{i+2}, . . . , x'_{n_Enc}} and i < n_Enc − 1.
4. The ciphertext in a decryption request of E decrypts to the i-th leakage: i > 0 and E sends a decryption request for some ciphertext y such that dec(sk, y) = x'_i, where sk is the private key that has been generated in the experiment.

Note that B_0 is the event that in a run of E | S | F_rpke = Exp^0_0 it holds that x_1 ∈ {x'_2, . . . , x'_{n_Enc}} (i.e., the first plaintext collides with some leakage) or x'_1 ∈ {x'_2, . . . , x'_{n_Enc}} (i.e., the first leakage collides with some other leakage).
We show that the event B_i occurs with negligible probability. More precisely, there exists a negligible function f_B such that:

Pr[B_i(1^η, a)] ≤ f_B(η, a) for all i ∈ {0, . . . , n_Enc}.  (16)
It is easy to see that (16) holds: Since L has high entropy, freshly generated leakages do not collide (except with negligible probability) with x_i, x_{i+1}, and x'_{i+1}. Furthermore, E does not (except with negligible probability) send a decryption request for a ciphertext y such that y decrypts to x'_i, because x'_i is a leakage and the view of E is independent (as a random variable) of x'_i until E sends this decryption request. Note that x'_i is only used in the decryption oracle O because in Exp^0_i the ciphertext y* is the encryption of x_i and not the encryption of x'_i. Hence, we find a polynomial q (because E is universally bounded) such that

f_B(η, a) := q(η + |a|) · sup_{x ∈ D_L(η), z ∈ {0,1}*} Pr[x' ← L(1^η, x) : x' = z],

where D_L is the domain of plaintexts associated with L (and Σ), satisfies (16). Since L has high entropy (i.e., the supremum in the definition of f_B is negligible) and q is a polynomial (in η and a), f_B is negligible. For every i ∈ {0, . . . , n_Enc}, it is now easy to see that every run of Exp^0_i where B_i does not occur corresponds to a run of Exp^1_{i+1} such that both runs have the same probability and the same overall output. More formally, we can define an injective mapping from runs of Exp^0_i where B_i does not occur to runs of Exp^1_{i+1} such that E and every call to the encryption and leakage algorithms uses the same randomness in both runs. One can then show that E has the same view in both runs (the view of E would only differ if B_i occurred). Furthermore, both runs have the same probability. Hence:

|Pr[Exp^0_i = 1] − Pr[Exp^1_{i+1} = 1]| ≤ Pr[B_i(1^η, a)] ≤ f_B(η, a).

This proves (15).

B.3.2 P_pke ≤ F_rpke ⇒ Σ is IND-RCCA Secure
We now show that Σ is IND-RCCA secure if P_pke ≤ F_rpke. The proof is very similar to the corresponding part of the proof of Theorem 7 (Appendix B.2.2). Assuming that Σ is not IND-RCCA secure, we use a successful adversary A = (A_1, A_2) against Σ to construct an environment E ∈ Env(P_pke) that distinguishes between P_pke and S | F_rpke for any simulator S ∈ Sim_{P_pke}(F_rpke). The environment E is defined as in "Appendix B.2.2" except that whenever A_2 asks its decryption oracle to decrypt a ciphertext y, then E does the following: It asks P_pke to decrypt y; let x be the returned plaintext. If x = x_0 or x = x_1 (where x_0, x_1 are the challenge plaintexts that have been output by A_1), then E continues the simulation of A_2 as if the decryption oracle returned test. Otherwise, E continues the simulation as if the oracle returned x. Analogously to the proof in "Appendix B.2.2", we can show that in the real world (i.e., in runs of E | P_pke), the simulation of A is exactly like in the IND-RCCA experiment and we obtain:

Pr[(E | P_pke)(1^η, a) = 1] = 1/2 · (Pr[Exp^{ind-rcca-1}_{A,Σ}(1^η, a) = 1] + 1 − Pr[Exp^{ind-rcca-0}_{A,Σ}(1^η, a) = 1]).  (17)

(See (13) in "Appendix B.3.1" for the definition of Exp^{ind-rcca-b}_{A,Σ} for b ∈ {0, 1}.) As for the ideal world, again analogously to the proof in "Appendix B.2.2", one can show that in runs of E | S | F_rpke, E outputs 1 with probability exactly 1/2:

Pr[(E | S | F_rpke)(1^η, a) = 1] = 1/2.  (18)

We conclude that:

Pr[(E | P_pke)(1^η, a) = 1] − Pr[(E | S | F_rpke)(1^η, a) = 1] = 1/2 · (Pr[Exp^{ind-rcca-1}_{A,Σ}(1^η, a) = 1] − Pr[Exp^{ind-rcca-0}_{A,Σ}(1^η, a) = 1]) = 1/2 · Adv^{ind-rcca}_{A,Σ}(1^η, a),

where the first equality follows from (17) and (18). By assumption, this is non-negligible (as a function in η and a). Hence, E | P_pke ≢ E | S | F_rpke and we conclude that P_pke ≰ F_rpke.
This concludes the proof of Theorem 8.