1 Introduction

When designing a cryptographic security notion, it is of central importance to keep in mind the purpose and applications it is developed for. For \(\mathsf {CCA\text {-}2}\) secure encryption schemesFootnote 1, the most important historical application is to enable confidential communication: assuming an insecure channel from Alice to Bob (over which ciphertexts are sent), and an authenticated channel from Bob to Alice (over which the public key can be transmitted authentically), the scheme should construct a confidential channel, i.e. an idealized object with the property that whatever Alice sends to Bob does not leak any information to an attacker (except possibly the length of the message), and where the only active capability of the attacker is to inject new messages (uncorrelated to Alice’s inputs)Footnote 2. Coretti, Maurer, and Tackmann [10] proved that \(\mathsf {CCA\text {-}2}\) security is indeed sufficient for this construction: Bob generates a key-pair, sends the public key authentically to Alice, and Alice encrypts all messages with respect to the obtained public key. It is also known that \(\mathsf {CCA\text {-}2}\) security is actually too strong for this task: a \(\mathsf {CCA\text {-}2}\) secure scheme can easily be modified, for example by appending a single bit to ciphertexts which is ignored by the decryption algorithm, to yield a scheme that is not \(\mathsf {CCA\text {-}2}\) secure but still allows one to achieve a confidential channel.

To address the question of which weaker security notion(s) would more closely match the application of secure communication, Canetti, Krawczyk, and Nielsen [8] study relaxed \(\mathsf {CCA\text {-}2}\) security notions and their relationships; they formalize an entire spectrum: at the weakest end, they propose \(\mathsf {RCCA}\) security, which for large message spaces (size super-polynomial in the security parameter) is known to achieve confidential channels [10]. This fact has established \(\mathsf {RCCA}\) security as the default security notion in settings where \(\mathsf {CCA\text {-}2}\) is not achievable, such as in rerandomizable encryption schemes [14, 22] and updatable encryption schemes [16]. Intuitively, a scheme can be \(\mathsf {RCCA}\) secure even if it is easy to create from a known ciphertext another one that still decrypts to the same message. Inheriting from prior work on relaxing \(\mathsf {CCA\text {-}2}\) security, most notably [1, 17, 24], they further provide formalizations for intermediate notions between \(\mathsf {CCA\text {-}2}\) and \(\mathsf {RCCA}\). These so-called detectable notions of \(\mathsf {RCCA}\) security further demand that modifications of an already known ciphertext can be efficiently detected—either with the help of the secret key (\(\mathsf {sd}\)-\(\mathsf {RCCA}\)) or the public key only (\(\mathsf {pd}\)-\(\mathsf {RCCA}\)), yielding two separate security notions. These notions of detectable \(\mathsf {RCCA}\) security, and in particular \(\mathsf {pd}\)-\(\mathsf {RCCA}\), are designed to capture an appealing property of \(\mathsf {CCA\text {-}2}\) security, namely that replays can be efficiently detected. This not only admits a more precise language to specify the types of replays a scheme admits, but is furthermore a useful property in applications like voting or access-control encryption, where a trusted third party must perform the filtering without access to the secret key [3]. We elaborate on the former aspect in Remark 1 at the end of Sect. 1.1.

It has however never been formally investigated whether the detectable notions are suitable to capture the security of the intended application of replay detection. Moreover, our analysis shows that these detectable \(\mathsf {RCCA}\) notions (i.e. \(\mathsf {pd}\)-\(\mathsf {RCCA}\) and \(\mathsf {sd}\)-\(\mathsf {RCCA}\)) are actually not proper relaxations of \(\mathsf {CCA\text {-}2}\), in that they are not implied by \(\mathsf {CCA\text {-}2}\).

In this work, we fill this gap and provide a systematic treatment of these relaxations of \(\mathsf {CCA\text {-}2}\) security using the Constructive Cryptography framework by Maurer and Renner [18, 19] and building upon the work of Coretti et al. [10]. We formalize the intuitive security goals that \(\mathsf {RCCA}\) security and the detectable \(\mathsf {RCCA}\) security notions aim to achieve, yielding what we call benchmarks to assess whether the existing security notions are adequate. We observe that none of the previous notions seems to admit a proof that it meets this level of security. We therefore propose new security notions for detectable \(\mathsf {RCCA}\) security (which can be regarded as corrections of the existing ones), show which benchmarks they achieve, and prove that they are implied by \(\mathsf {CCA\text {-}2}\). In summary, this shows that the newly introduced notions are placed correctly in the spectrum between \(\mathsf {CCA\text {-}2}\) and \(\mathsf {RCCA}\) and that they can be safely used in the intended applications.

1.1 Overview of Contributions

A Systematic Approach to \(\mathsf {RCCA}\) and Replay Protection. Following the constructive paradigm, a construction consists of three elements: the assumed resources (such as an insecure communication channel), the constructed or ideal resource (such as a confidential channel), and the real-world protocol. A protocol is said to achieve the construction if there is a simulator such that the real world (consisting of the protocol running with the assumed resources) is indistinguishable from the ideal system (consisting of the ideal resource and the simulator). This ensures that any attack on the real system can be translated into an attack on the ideal system, the latter being secure by definition.

Building upon the work of Coretti et al. [10], we present three benchmarks to approach the intended security of \(\mathsf {RCCA}\) and replay protection:

  • The construction of a confidential channel between Alice and Bob from an insecure communication channel (and an authenticated channel to distribute the public key). This is arguably the most natural goal of confidential (and non-malleable) communication. An encryption scheme should achieve this construction by having Bob generate the key-pair and send the public key to Alice over the authenticated channel. Alice sends encryptions of the messages over the insecure channel to Bob, who can decrypt the ciphertexts and output the resulting messages. This benchmark is formalized in Sect. 3.1.

  • The construction of a replay-protected confidential channel from (essentially) the same resources as above. A replay-protected confidential channel is a channel that only allows an attacker to deliver each message sent by Alice at most once to Bob. This construction captures the most basic form of replay protection. An encryption scheme can be applied as above, except that Bob must make use of the secret key (and a memory resource to store received ciphertexts) to detect and filter out replays. This construction is formalized in Sect. 3.2.

  • The construction of a replay-protected confidential channel from basically the same resources, but where the task of detecting replays is done by a third-party, say Charlie, that does not need to have access to Bob’s secret key. Hence, an encryption scheme is employed as above, but the task of filtering and detecting replays can be outsourced to any party possessing the public key (having sufficient memory to store the received ciphertexts). This benchmark is formalized in Sect. 3.3.

We note that only the first benchmark is taken from existing literature [10] (it is an abstract version of the UC-formalization \(\mathcal {F}_\text {RPKE}\) defined in [8])Footnote 3, while the other benchmarks are new formulations and variants of the known goal of replay protection. The benefit of our benchmarks is that they yield a precise way to assess the guarantees provided by a security notion for an encryption scheme: does a scheme secure with respect to a certain notion achieve the above construction(s)?

We propose three game-based security notions, each designed to suffice for achieving the intended benchmark. The abbreviations stand for confidential (\(\mathsf {cl}\)), secret-key replay protection (\({\mathsf {srp}}^{}\)), and public-key replay protection (\({\mathsf {prp}}^{}\)):

  • We first propose \(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\), a security notion which is sufficient to achieve confidential communication even for small message spaces, which we prove in Sect. 6.1. This is the weakest new notion we introduce and we prove that it achieves the first benchmark; \(\mathsf {cl}\)-\(\mathsf {RCCA}\) should then take the role of \(\mathsf {RCCA}\) as the default security notion when one aims at the design of schemes that enable confidential communication (in particular when the message space size is small). Note that \(\mathsf {cl}\)-\(\mathsf {RCCA}\) is strictly stronger than \(\mathsf {RCCA}\) since the latter does not achieve confidential communication for small message spaces (see Theorem 1).Footnote 4

  • The second security notion we introduce is \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\) and it achieves the second benchmark: realizing a replay-protected confidential channel. The notion is hence designed to enable the implementation of a replay-protection mechanism by the receiver, who knows the secret decryption key. We also argue why the strengthening compared to \(\mathsf {cl}\)-\(\mathsf {RCCA}\) (and \(\mathsf {sd}\)-\(\mathsf {RCCA}\)) is needed to achieve replay protection: from a conceptual perspective, implementing a replay protector as part of the receiver requires detecting replays without necessarily ever seeing the original ciphertext produced by the sender, a security requirement that is not captured by \(\mathsf {cl}\)-\(\mathsf {RCCA}\) (nor \(\mathsf {sd}\)-\(\mathsf {RCCA}\)).Footnote 5 The notion and the construction proof appear in Sect. 6.2.

  • We finally propose a security notion to capture the idea of publicly-detectable \(\mathsf {RCCA}\) that we call \(\mathsf {IND}\)-\({\mathsf {prp}}^{}\)-\(\mathsf {RCCA}\). This notion is sufficient to achieve the third benchmark and therefore captures the outsourced replay-protection mechanism that was originally envisioned from \(\mathsf {pd}\)-\(\mathsf {RCCA}\). This notion and the construction proof appear in Sect. 6.3.

We finally show that all these notions can be strictly separated: \(\mathsf {IND}\)-\(\mathsf {RCCA}\) security, the weakest notion considered in this work, is strictly weaker than \(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\). The latter is strictly weaker than \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\), which is in turn strictly weaker than \(\mathsf {IND}\)-\({\mathsf {prp}}^{}\)-\(\mathsf {RCCA}\). Finally, \(\mathsf {IND}\)-\({\mathsf {prp}}^{}\)-\(\mathsf {RCCA}\) is strictly weaker than \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) security. These results are proven in Sect. 7; Fig. 1 illustrates all these new notions, their relations to each other and to the benchmarks.

Fig. 1.

New notions of security between \(\mathsf {CCA\text {-}2}\) and \(\mathsf {RCCA}\), and their relations to each other and to the benchmarks. Solid black arrows denote implications and dashed red arrows denote separations. The new security notions introduced in this paper are marked with *.

Technical Inconsistencies with Existing \(\mathsf {pd}\)-\(\mathsf {RCCA}\) and \(\mathsf {sd}\)-\(\mathsf {RCCA}\) Notions. Numerous weaker versions of \(\mathsf {CCA\text {-}2}\) security have been proposed [1, 8, 17, 24] which are essentially equivalent versions of what is formalized in [8] as publicly detectable (\(\mathsf {pd}\))-\(\mathsf {RCCA}\) and secretly detectable (\(\mathsf {sd}\))-\(\mathsf {RCCA}\). We show for the given formalizations that the notions are generally not implied by \(\mathsf {CCA\text {-}2}\) security (unless one restricts, for example, explicitly to the case of deterministic decryption [1], or alternatively to the case of perfect correctness), which seems to be a rather unintended artifact of the concrete definitions, as we show in Sect. 5. While these shortcomings can be fixed, the existing notions do not appear to suffice to achieve the intended benchmarks for replay protection (see Sect. 6), leaving the state of affairs unclear, as depicted in Fig. 2. This justifies the need for new intermediate notions that provably avoid these shortcomings: on the one hand, our notions are implied by \(\mathsf {CCA\text {-}2}\), and on the other hand, they deliver the level of security required by a replay protection mechanism. The security notions and results of this paper clean up the space between \(\mathsf {CCA\text {-}2}\) and \(\mathsf {RCCA}\) security, yielding, as mentioned above, a clean hierarchy of security notions as depicted in Fig. 1: not only are all notions separated, but we also show that each of the notions we introduce suffices to achieve its corresponding benchmark.

Fig. 2.

Relations between the notions of security from [8]. The solid black arrows denote implications whilst the dashed red arrows denote separations.

Remark 1

Recall that the original motivation for introducing relaxed versions of CCA security stems from the observation that CCA is much stronger than the composable confidentiality requirement [8]. RCCA has the built-in assumption that generating replays of a (challenge) ciphertext is generally easy; therefore, in the security game the adversary is denied decryption of a broad class of ciphertexts. Detectable RCCA, as introduced in [8, Definition 7], develops a language to talk about the ability to detect specific kinds of replays and introduces a relation among ciphertexts accompanied by an efficient algorithm to evaluate it. Therefore, to capture detectable RCCA security, aside from the ordinary three algorithms of a PKE system, there is by definition an additional one to detect replays. While in this work we develop a composable understanding of what [8] calls the ability to detect replays, our \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\) and \(\mathsf {IND}\)-\({\mathsf {prp}}^{}\)-\(\mathsf {RCCA}\) notions can equivalently be seen as ordinary PKE notions. Confidentiality then means that no adversary learns anything about the plaintext when the challenger denies decryption queries that the replay detection algorithm considers to be replays of the challenge ciphertext.

1.2 Further Related Work

The investigation of relaxed, enhanced, and modified versions of \(\mathsf {CCA\text {-}2}\) security has a rich history and has found numerous applications in proxy re-encryption, updatable encryption, attribute-based encryption, rerandomizable encryption, and steganography [2, 5, 7, 9, 12, 13, 14, 16, 22, 23].

The main relaxations of \(\mathsf {CCA\text {-}2}\), upon which the formalization of [8] builds, have been proposed in [24] as benign malleability and in [1] as generalized \(\mathsf {CCA\text {-}2}\) security, and also relate to loose ciphertext-unforgeability [17]. All these versions essentially fall into the formalization of public detectability discussed above and suffer from analogous technical issues; hence in this work we focus on the formalization given in [8]. Three different flavours of \(\mathsf {RCCA}\) have been introduced: \(\mathsf {IND}\)-\(\mathsf {RCCA}\), \(\mathsf {UC}\)-\(\mathsf {RCCA}\) and \(\mathsf {NM}\)-\(\mathsf {RCCA}\). In this work we focus on \(\mathsf {IND}\)-\(\mathsf {RCCA}\). Our first benchmark is an abstract version of \(\mathsf {UC}\)-\(\mathsf {RCCA}\). While the third flavour, \(\mathsf {NM}\)-\(\mathsf {RCCA}\), is a strengthening of \(\mathsf {IND}\)-\(\mathsf {RCCA}\) (since it captures one additional attack vector), it does not seem to suffice to construct a confidential channel (or to imply \(\mathsf {UC}\)-\(\mathsf {RCCA}\) for small message spaces) and is superseded in our treatment by \(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\), which provably constructs the confidential channel for any message space.

A further relaxation of \(\mathsf {CCA\text {-}2}\) security, only loosely related to this work, is called detectable \(\mathsf {CCA\text {-}2}\) [15] and formalizes the detection of “dangerous” queries in \(\mathsf {CCA\text {-}2}\) (without considering replayable properties). This notion provides a rather weak level of security on its own (in that it does not imply \(\mathsf {RCCA}\)) [15].

Another line of research has studied q-bounded security definitions [11], wherein a scheme is assumed to only be used to decrypt at most q messages. Cramer et al. [11] give a black-box construction of an \(\mathsf {IND}\)-q-bounded-\(\mathsf {CCA\text {-}2}\) secure PKE scheme from any \(\mathsf {IND}\)-\(\mathsf {CPA}\) secure one. The proposed construction crucially relies on knowing the value q in advance, as it is hardcoded into the scheme.

2 Preliminaries

2.1 Constructive Cryptography

The Constructive Cryptography (CC) framework [18, 19] is a composable security framework which views cryptography as a resource theory: a protocol transforms the assumed resources into the constructed resources.Footnote 6 For example, if Alice and Bob have (access to) a shared secret key and an authentic channel, by running a one-time pad they construct a secure channel—this example is treated more formally further in this section.

In this view, encryption is the task of constructing channel resources. We thus start by defining various channels—used and constructed in this work—here below. Then we give the formal definition of a construction in CC.

Fig. 3.

A depiction of the channels used in this work. From top-left to bottom right: an insecure channel \(\mathbf {INS}\), an authentic channel \(\mathbf {AUT}\), a (replay protected) confidential channel (\(\mathbf {RP}\)-)\(\mathbf {CONF}\), and a secure channel \(\mathbf {SEC}\).

  • INS. The weakest channel we consider is the (completely) insecure channel \(\mathbf {INS}\), where any message input by the sender goes straight to the adversary, and the adversary may insert any messages into the channel, which are then delivered to the receiver. This is drawn in the top left in Fig. 3.

  • AUT. In order to distribute the public keys used by PKE schemes, the players will also need an authentic channel \(\mathbf {AUT}\), which guarantees that anything received by the legitimate receiver was sent by the legitimate sender, but an adversary may also receive a copy of these messages. For simplicity, in our model we do not allow the adversary to either block an authentic channel or insert any replays. Such a channel is drawn in the top right of Fig. 3.

  • CONF. A confidential channel \(\mathbf {CONF}\) only leaks the message length (denoted \(|m|\)) to the adversary, i.e. when the message m is input by the sender, the adversary receives |m| at her interface. She can choose which message \(j \le i\) is delivered to the receiver, where i is the total number of messages input by the sender so far, or—since the channel is only confidential, but does not provide authenticity—the adversary may also inject a message of her own with \((\mathtt {inj},m')\), and \(m'\) is then delivered to the receiver. This is depicted in the bottom left of Fig. 3.

  • RP-CONF. The \(\mathbf {CONF}\) channel described above allows the adversary to deliver multiple times the same message to the receiver by inserting multiple times \((\mathtt {dlv},j)\). We define a stronger channel, the replay protected confidential channel \(\mathbf {RP}\hbox {-}\mathbf {CONF}\), which will only process each \((\mathtt {dlv},j)\) query at most once.

  • SEC. Finally, the secure channel \(\mathbf {SEC}\) is both confidential and authentic, and is drawn in the bottom right of Fig. 3.

We will often consider channels that only transmit n messages, i.e. the sender may only input n messages. These channels will be denoted with the number of messages in square brackets, e.g. \(\mathbf {CONF}[n]\). The main properties of these channels are summarized in Fig. 4.
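To make the behaviour of these channels concrete, the following is a minimal Python sketch of the \(\mathbf {CONF}\) and \(\mathbf {RP}\hbox {-}\mathbf {CONF}\) resources. This is our own illustration, not part of the formal model; all class and method names (e.g. input_A, input_E) are ours and purely illustrative.

```python
# Minimal sketch of the CONF and RP-CONF channels in the 3-interface setting
# A (Alice, sender), B (Bob, receiver), E (Eve). All names are illustrative.

class CONF:
    """Confidential channel: leaks only |m| to Eve; Eve may deliver any sent
    message (possibly several times) or inject a message of her own."""
    def __init__(self):
        self.sent = []                       # messages input at interface A

    def input_A(self, m):
        self.sent.append(m)
        return ("E", len(m))                 # Eve only learns the length |m|

    def input_E(self, query):
        kind, arg = query
        if kind == "dlv" and 1 <= arg <= len(self.sent):
            return ("B", self.sent[arg - 1]) # deliver Alice's j-th message to Bob
        if kind == "inj":
            return ("B", arg)                # inject Eve's own message m'
        return None

class RP_CONF(CONF):
    """Replay-protected variant: each (dlv, j) query is processed at most once."""
    def __init__(self):
        super().__init__()
        self.delivered = set()

    def input_E(self, query):
        kind, arg = query
        if kind == "dlv":
            if arg in self.delivered:
                return None                  # replayed delivery request: dropped
            self.delivered.add(arg)
        return super().input_E(query)
```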

Fig. 4.

A summary of the channel properties used in this work. Leak is the information about the message given to Eve, where |m| denotes the length of the message. Insert denotes whether Eve is allowed to insert messages of her own into the channel. Replay denotes whether Eve can force a channel to deliver multiple times a message that was sent only once.

Formally, a resource (e.g. a channel) in an N-player setting is an interactive system with N interfaces, where each player may interact with the system at their interface by receiving outputs and providing inputs. These may be mathematically modeled as random systems [20, 21] and can be specified by pseudo-code or an informal description as the channels above. In this work we consider the 3 player setting, and the interfaces are labeled \(A\), \(B\), and \(E\) for Alice, Bob, and Eve.

If multiple resources \(\mathbf {R}_1,\dotsc ,\mathbf {R}_\ell \) are accessible to players, we write \([\mathbf {R}_1,\dotsc ,\mathbf {R}_\ell ]\) for the new resource resulting from having all resources accessible in parallel to the parties.

Operations run locally by some party (e.g. encrypting or decrypting a message) are modeled by interactive systems with two interfaces and are called converters. The inner interface connects to the available resources, whereas the outer interface is accessible to the corresponding party to provide inputs and receive outputs. The composition of a resource and a converter is a new resource. For example, let \(\mathbf {R}\) be a resource, and let \(\alpha \) be a converter which we connect at the \(A\)-interface of \(\mathbf {R}\); then we write \(\alpha ^{A}\mathbf {R}\) for the new resource resulting from this connection. Formally, a converter is thus a map between resources.

To illustrate this, we draw the real system corresponding to a one-time pad encryption in Fig. 5. Here, the players have access to a secret key \(\mathbf {KEY}\) and an authentic channel \(\mathbf {AUT}\). Alice runs the encryption converter \(\mathsf {enc}_{\text {otp}}\), which sends the ciphertext on the authentic channel. Bob runs the decryption converter \(\mathsf {dec}_{\text {otp}}\), which outputs the result of the decryption. The entire resource drawn on the left in Fig. 5 is denoted \(\mathsf {enc}_{\text {otp}}^{A}\mathsf {dec}_{\text {otp}}^{B}[\mathbf {KEY},\mathbf {AUT}]\), where the order of \(\mathsf {enc}_{\text {otp}}\) and \(\mathsf {dec}_{\text {otp}}\) does not matter since converters at different interfaces commute.

Fig. 5.

The real and ideal systems for the one-time pad. Viewed as a black box, the real and ideal systems are indistinguishable.

In order to argue that the protocol \(\mathsf {otp} = (\mathsf {enc}_{\text {otp}},\mathsf {dec}_{\text {otp}})\) constructs a secure channel \(\mathbf {SEC}\) from a shared secret key \(\mathbf {KEY}\) and an authentic channel \(\mathbf {AUT}\), we need to find a converter \(\sigma _{\text {otp}}\) (called a simulator) such that when this simulator is attached to the adversarial interface of the constructed resource \(\mathbf {SEC}\) (resulting in \(\sigma ^E_{\text {otp}}\mathbf {SEC}\)), the real and ideal systems are indistinguishable. As illustrated in Fig. 5, a simulator \(\sigma _{\text {otp}}\) which outputs a random string of the right length is sufficient for proving that the one-time pad constructs a secure channel.
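The one-time pad example can be made concrete with the following minimal sketch. This is our own illustration, assuming byte-string messages of a fixed length; the function names are ours.

```python
import secrets

def enc_otp(key: bytes, m: bytes) -> bytes:
    # Alice's converter: XOR the message with the shared key and send the
    # resulting ciphertext over the authentic channel AUT.
    return bytes(k ^ x for k, x in zip(key, m))

def dec_otp(key: bytes, c: bytes) -> bytes:
    # Bob's converter: XOR the ciphertext received from AUT with the shared key.
    return bytes(k ^ x for k, x in zip(key, c))

def sim_otp(leaked_length: int) -> bytes:
    # Simulator at Eve's interface of SEC: a fresh uniform string of the leaked
    # length is distributed exactly like a real one-time-pad ciphertext.
    return secrets.token_bytes(leaked_length)
```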

Distinguishability between two systems \(\mathbf {R}\) and \(\mathbf {S}\) is defined with respect to a distinguisher \(\mathbf {D}\) which interacts with one of the systems, and has to output a bit corresponding to its guess. Let \(\mathbf {D}[\mathbf {R}]\) and \(\mathbf {D}[\mathbf {S}]\) denote the random variables corresponding to the output of \(\mathbf {D}\) when interacting with \(\mathbf {R}\) and \(\mathbf {S}\), respectively. Then its advantage in distinguishing between the two is given by

$$ \varDelta ^{\mathbf {D}}(\mathbf {R},\mathbf {S}) := \Pr [\mathbf {D}[\mathbf {R}] = 0] - \Pr [\mathbf {D}[\mathbf {S}] = 0 ].$$

In the case of the one-time pad example with \(\mathbf {R}\) denoting the real system and \(\mathbf {S}\) the ideal system (drawn on the left and right in Fig. 5) we have that for all \(\mathbf {D}\), \(\varDelta ^{\mathbf {D}}(\mathbf {R},\mathbf {S}) = 0\).

We now have all the elements needed to define a cryptographic construction in the three party setting.

Definition 1

(Asymptotic security [18, 19]). Let \(\pi = \{\pi _k\}_{k \in \mathbb {N}}\) be an efficient family of converters, and let \(\mathbf {R} = \{\mathbf {R}_k\}_{k \in \mathbb {N}}\) and \(\mathbf {S} = \{\mathbf {S}_k\}_{k \in \mathbb {N}}\) be two efficient families of resources. We say that \(\pi \) asymptotically constructs \(\mathbf {S}\) from \(\mathbf {R}\) if there exists an efficient family of simulators \(\sigma = \{\sigma _k\}_{k \in \mathbb {N}}\) such that for any efficient family of distinguishers \(\mathbf {D} = \{\mathbf {D}_k\}_{k \in \mathbb {N}}\),

$$ \varepsilon (k) = \varDelta ^{\mathbf {D}_k}(\pi _k\mathbf {R}_k,\sigma _k\mathbf {S}_k) $$

is negligible. The construction is information-theoretically secure if the same holds for all (possibly inefficient) families of distinguishers.

For clarity we have made the security parameter k explicit in Definition 1, though in most of the technical part of this work we leave this parameter implicit to simplify the notation.

2.2 Public Key Encryption

We recap the basic definitions of correctness and CCA/RCCA security for public-key encryption (PKE) schemes.

Definition 2

A Public Key Encryption (PKE) scheme \(\varPi \) with message space \(\mathcal {M} \subseteq \{0,1\}^{*}\), is a triple \(\varPi = \left( G,E,D\right) \) of Probabilistic Polynomial-Time algorithms (PPTs) such that for any PPT adversary \(\mathbf {A}\), the function \(\text {Corr}\left( k\right) \) defined below is at most negligible in (the security parameter) k

$$\text {Corr}\left( k\right) := \Pr \left[ \left. \quad \begin{matrix} \left( \mathtt {pk},\mathtt {sk}\right) \leftarrow G\left( 1^{k}\right) \\ m \leftarrow \mathbf {A}\left( 1^{k},\mathtt {pk}\right) \end{matrix}\quad \right| \quad \begin{matrix} D_{\mathtt {sk}}\left( E_{\mathtt {pk}}\left( m\right) \right) \ne m \end{matrix} \quad \right] $$

We point out that the above condition is a succinct expression that captures the correctness of communication protocols in general: intuitively, it says that even with knowledge of the sampled public key of the system, no one can find (except with negligible probability) a message that would violate the correctness condition (where the error term can be understood as the computational distance to a perfectly correct channel). Furthermore, the correctness requirement often holds with respect to all (even computationally unbounded) adversaries.
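The following is a minimal sketch of the experiment underlying \(\text {Corr}(k)\). It is an illustration under the assumption that G, E, D and the adversary are given as Python callables; the function name is ours.

```python
def correctness_experiment(G, E, D, A, k):
    # One run of the experiment underlying Corr(k): the adversary sees pk and
    # tries to name a message whose fresh encryption fails to decrypt to itself.
    pk, sk = G(k)
    m = A(k, pk)
    return D(sk, E(pk, m)) != m   # True iff this run violates correctness
```

\(\text {Corr}(k)\) is then the probability that this experiment returns True, taken over all random choices involved.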

Definition 3

A PKE scheme \(\varPi = \left( G,E,D\right) \) is \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) secure if no PPT distinguisher \(\mathbf {D}\) distinguishes the two game systems \({\mathbf {G}}_{0}^{\varPi \hbox {-}\mathsf {IND}\hbox {-}\mathsf {CCA\text {-}2}}\) and \({\mathbf {G}}_{1}^{\varPi \hbox {-}\mathsf {IND}\hbox {-}\mathsf {CCA\text {-}2}}\) (specified below) with non-negligible advantage (in the security parameter k) over random guessing (i.e. if \(\varDelta ^{\mathbf {D}}\left( {\mathbf {G}}_{0}^{\varPi \hbox {-}\mathsf {IND}\hbox {-}\mathsf {CCA\text {-}2}},{\mathbf {G}}_{1}^{\varPi \hbox {-}\mathsf {IND}\hbox {-}\mathsf {CCA\text {-}2}}\right) \le \text {negl}\left( k\right) \)). For \(b \in \{0,1\}\), game system \({\mathbf {G}}_{b}^{\varPi \hbox {-}\mathsf {IND}\hbox {-}\mathsf {CCA\text {-}2}}\) is as follows:

  • Initialization: \({\mathbf {G}}_{b}^{\varPi \hbox {-}\mathsf {IND}\hbox {-}\mathsf {CCA\text {-}2}}\) generates a key-pair \(\left( \mathtt {pk},\mathtt {sk}\right) \leftarrow G\left( 1^{k}\right) \), and sends \(\mathtt {pk}\) to \(\mathbf {D}\).

  • First decryption stage: Whenever \(\mathbf {D}\) queries \(\left( \mathtt {ciphertext},c\right) \), the game system \({\mathbf {G}}_{b}^{\varPi {\hbox {-}\mathsf {IND}\hbox {-}\mathsf {CCA\text {-}2}}}\) computes \(m = D_{\mathtt {sk}}\left( c\right) \) and sends m to \(\mathbf {D}\).

  • Challenge stage: When \(\mathbf {D}\) queries \(\left( \mathtt {test\ messages},m_{0},m_{1}\right) \), for \(m_{0},m_{1} \in \mathcal {M}\) such that \(|m_{0}| = |m_{1}|\), \({\mathbf {G}}_{b}^{\varPi {\hbox {-}\mathsf {IND}\hbox {-}\mathsf {CCA\text {-}2}}}\) computes \(c^{*} = E_{\mathtt {pk}}\left( m_{b}\right) \), and sends \(c^{*}\) to \(\mathbf {D}\).Footnote 7

  • Second decryption stage: Whenever \(\mathbf {D}\) queries \(\left( \mathtt {ciphertext},c\right) \), the game system \({\mathbf {G}}_{b}^{\varPi {\hbox {-}\mathsf {IND}\hbox {-}\mathsf {CCA\text {-}2}}}\) replies \(\mathtt {test}\) if \(c = {c}^{*}\) and replies \(m = D_{\mathtt {sk}}\left( c\right) \) (i.e. the decryption of c) otherwise.

For simplicity, throughout the paper we will omit the prefix \(\varPi \) from the notation of the game systems, unless needed for clarity.

Definition 4

A PKE scheme \(\varPi = \left( G,E,D\right) \) is \(\mathsf {IND}\)-\(\mathsf {RCCA}\) secure if it is secure according to the definition of \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) security (Definition 3), but where the \(\mathsf {IND}\)-\(\mathsf {RCCA}\) game systems differ from the \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) game systems in the second decryption stage, which now works as follows (a code sketch of both game systems is given after this definition). Let \(m_{0},m_{1}\) be the two challenge messages queried by the distinguisher \(\mathbf {D}\) during the Challenge stage:

  • Second decryption stage: When \(\mathbf {D}\) queries \(\left( \mathtt {ciphertext},c\right) \), the game system computes \(m = D_{\mathtt {sk}}\left( c\right) \). If \(m \in \{m_{0},m_{1}\}\), then the game system replies with the special response \(\mathtt {test}\) to \(\mathbf {D}\), and otherwise sends m to \(\mathbf {D}\).
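As a concrete illustration of Definitions 3 and 4 (our own sketch, not the paper's formalization), the following code implements the two game systems for a PKE scheme given as callables G, E, D; class and method names are illustrative.

```python
class CCA2Game:
    # Game system G_b of Definition 3 for a PKE scheme given by callables G, E, D.
    def __init__(self, G, E, D, k, b):
        self.E, self.D, self.b = E, D, b
        self.pk, self.sk = G(k)          # initialization: pk is given to the distinguisher
        self.c_star = None
        self.challenge = None            # (m0, m1), set by the challenge query

    def decrypt(self, c):
        # First/second decryption stage: refuse only the challenge ciphertext itself.
        if self.c_star is not None and c == self.c_star:
            return "test"
        return self.D(self.sk, c)

    def challenge_query(self, m0, m1):
        assert len(m0) == len(m1)
        self.challenge = (m0, m1)
        self.c_star = self.E(self.pk, (m0, m1)[self.b])
        return self.c_star

class RCCAGame(CCA2Game):
    # IND-RCCA (Definition 4) differs only in the second decryption stage:
    # any ciphertext decrypting to m0 or m1 is answered with "test".
    def decrypt(self, c):
        m = self.D(self.sk, c)
        if self.challenge is not None and m in self.challenge:
            return "test"
        return m
```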

2.3 Public Key Encryption with Replay Filtering

We now introduce two new types of PKE schemes, namely ones in which ciphertext replays can be efficiently detected by an algorithm F that is defined as part of the scheme. For the correctness condition of these schemes we require, in addition to the usual correctness condition of PKE schemes, that with high probability F cannot relate two fresh encryptions of any messages. This is an essential requirement such that F can be used for filtering out ciphertext replays, because the correctness condition guarantees that it will not filter out honestly generated ciphertexts (later in Sect. 6.2 we couple such schemes with the proper security notions).

Definition 5

A PKE scheme with Secret (Replay) Filtering (PKESF) \(\varPi \) with message space \(\mathcal {M} \subseteq \{0,1\}^{*}\), is a 4-tuple \(\varPi = \left( G,E,D,F\right) \) of PPT algorithms such that for any PPT adversary \(\mathbf {A}\), the function \(\text {Corr}\left( k\right) \) defined below is at most negligible in (the security parameter) k

$$\text {Corr}\left( k\right) := \Pr \left[ \left. \quad \begin{matrix} \left( \mathtt {pk},\mathtt {sk}\right) \leftarrow G(1^{k}) \\ (m,{m}') \leftarrow \mathbf {A}(1^{k},\mathtt {pk}) \end{matrix}\quad \right| \quad \begin{matrix} F\left( \mathtt {pk},\mathtt {sk},E_{\mathtt {pk}}\left( m\right) ,E_{\mathtt {pk}}\left( {m}'\right) \right) = 1\\ \vee \quad D_{\mathtt {sk}}\left( E_{\mathtt {pk}}\left( m\right) \right) \ne m \end{matrix} \quad \right] $$

A PKE scheme with Public (Replay) Filtering (PKEPF) \(\varPi \) is just like a PKESF except that F now does not receive the secret key \(\mathtt {sk}\).

As one might note, from any correct and \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) secure PKE scheme \(\varPi = \left( G,E,D\right) \), one can define a correct PKEPF scheme \({\varPi }' = \left( G,E,D,F\right) \) where \(F\left( \mathtt {pk},c,{c}'\right) = 1\) if and only if \(c = {c}'\); the correctness of \({\varPi }'\) with respect to Definition 5 follows from the correctness and \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) security of \(\varPi \).
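A minimal sketch of this equality-based filter (our illustration; the function name is ours):

```python
def F_equality(pk, c1, c2):
    # Public replay filter obtained from any IND-CCA-2 secure PKE: flag c2 as a
    # replay of c1 exactly when the two ciphertexts are identical. The extra
    # correctness condition of Definition 5 then amounts to two fresh encryptions
    # colliding only with negligible probability, which follows from the security
    # of the underlying scheme as noted above.
    return c1 == c2
```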

2.4 Reductions

Most of the proofs in this work consist in showing reductions between various security definitions. Both the constructive statements introduced in Sect. 2.1 and game-based definitions such as \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) (Definition 3) can be viewed as distinguishing problems—distinguishing the real world \(\mathbf {W}_0\) from the ideal world \(\mathbf {W}_1\) and game \(\mathbf {G}_0\) from game \(\mathbf {G}_1\), respectively. A reduction between two such definitions consists in proving that if a distinguisher \(\mathbf {D}\) can succeed in one task, then a (related) distinguisher \(\mathbf {D}'\) can succeed in the other. We only give explicit reductions with single black-box access to \(\mathbf {D}\) in this work, i.e. we define \(\mathbf {D}' := \mathbf {DC}\), where \(\mathbf {DC}\) denotes the composition of the two systems \(\mathbf {D}\) and \(\mathbf {C}\). \(\mathbf {C}\) is called the reduction system (or simply the reduction).

For example, if we wish to reduce the task of breaking a constructive definition (with real and ideal systems \(\mathbf {W}_0 = \pi ^{AB}\mathbf {R}\) and \(\mathbf {W}_1=\sigma ^E\mathbf {S}\) for some simulator \(\sigma \)) to a game-based definition (with games \(\mathbf {G}_0\) and \(\mathbf {G}_1\)), we will typically fix \(\sigma \) and find a system \(\mathbf {C}\) such that \(\mathbf {W}_0=\mathbf {CG}_0\) and \(\mathbf {W}_1=\mathbf {CG}_1\). Then

$$ \varDelta ^{\mathbf {D}}(\mathbf {W}_0,\mathbf {W}_1) = \varDelta ^{\mathbf {D}}(\mathbf {CG}_0, \mathbf {CG}_1) = \varDelta ^{\mathbf {DC}}(\mathbf {G}_0,\mathbf {G}_1),$$

i.e. given a distinguisher \(\mathbf {D}\) that can distinguish \(\mathbf {W}_0\) from \(\mathbf {W}_1\) with non-negligible advantage, we get an explicit new distinguisher \(\mathbf {DC}\) that can win the game with non-negligible advantage. Or, taking the contrapositive, if \(\mathbf {G}_0\) and \(\mathbf {G}_1\) are hard to distinguish, then in particular they are hard to distinguish for all distinguishers of the form \(\mathbf {DC}\) (for any efficient \(\mathbf {D}\) and fixed \(\mathbf {C}\)). This means that no efficient distinguisher \(\mathbf {D}\) can tell \(\mathbf {W}_0\) from \(\mathbf {W}_1\) for the given simulator \(\sigma \).

3 Benchmarking Confidentiality

In this section we present three benchmark constructions to capture the security of confidential communication and replay protected confidential communication.

3.1 Benchmark 1: The \(\mathbf {CONF}\) Channel

The first channel we want to construct is the confidential channel \(\mathbf {CONF}\) introduced in Sect. 2.1. The ideal system thus simply consists of this channel and a simulator \(\sigma \), as depicted on the right in Fig. 6, and is denoted \(\sigma ^{E}\mathbf {CONF}\).

Fig. 6.

Real and ideal systems for (replay protected) confidential channel construction. Capital letters (A, B, E.1, E.2) represent interface labels and small letters (m, \(\tilde{m}\), c, \(c'\), j, \(\mathtt {pk}\)) represent values that are in- or output.

In order to achieve this, Alice and Bob need an authentic channel for one message \(\mathbf {AUT}[1]\) (from Bob to Alice), so that Bob can send his public key authentically to Alice. They also use a completely insecure channel \(\mathbf {INS}\) to transmit the ciphertexts. Alice’s converter \(\mathsf {enc}\) encrypts any messages with the public key obtained from \(\mathbf {AUT}[1]\), and sends the resulting ciphertext on \(\mathbf {INS}\) (i.e. for a PKE \(\varPi = (G,E,D)\), \(\mathsf {enc}\) runs E). Bob’s converter \(\mathsf {dec}\) generates the key-pair \((\mathtt {pk},\mathtt {sk})\), sends \(\mathtt {pk}\) over \(\mathbf {AUT}[1]\) to Alice, and decrypts any ciphertext received from \(\mathbf {INS}\) using \(\mathtt {sk}\) (i.e. \(\mathsf {dec}\) runs G and D). The resulting message is output at Bob’s outer interface \(B\) (to the environment/distinguisher). This real system is drawn on the left in Fig. 6, and is denoted \(\mathsf {enc}^{A}\mathsf {dec}^{B}[\mathbf {AUT}[1],\mathbf {INS}]\).
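A minimal sketch of the two converters for Benchmark 1 (our illustration, assuming the PKE algorithms are given as callables; class and method names are ours):

```python
class DecConverter:
    # Bob's converter dec: generate the key-pair, publish pk over AUT[1], and
    # decrypt every ciphertext received from INS.
    def __init__(self, G, D, k):
        self.D = D
        self.pk, self.sk = G(k)

    def public_key(self):
        return self.pk                 # value sent over AUT[1] to Alice

    def on_ciphertext(self, c):
        return self.D(self.sk, c)      # plaintext output at Bob's interface B

class EncConverter:
    # Alice's converter enc: encrypt each message under the pk received over
    # AUT[1] and send the ciphertext over INS.
    def __init__(self, E, pk):
        self.E, self.pk = E, pk

    def on_message(self, m):
        return self.E(self.pk, m)      # ciphertext sent over INS
```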

As already mentioned, we will often parameterize channels by the number of messages that can be input at Alice’s interface. As an example, we will denote by \(\mathbf {CONF}[n]\) the confidential channel where at most n messages can be input at Alice’s interface.

3.2 Benchmark 2: The \(\mathbf {RP}\hbox {-}\mathbf {CONF}\) Channel

As explained in Sect. 1.1, our second benchmark is the construction of a stronger channel, namely a replay protected confidential channel, i.e. one in which an adversary’s input \((\mathtt {dlv},j)\) may only be processed once for each j. The ideal system \(\sigma ^{E}\mathbf {RP}\hbox {-}\mathbf {CONF}\) is thus similar to the one of Benchmark 1, only differing in the underlying ideal channel which now is the stronger \(\mathbf {RP}\hbox {-}\mathbf {CONF}\) channel.

The real system is similar to the real system from Benchmark 1 in that we want to construct \(\mathbf {RP}\hbox {-}\mathbf {CONF}\) from a single use authentic channel \(\mathbf {AUT}\)[1] and an insecure channel \(\mathbf {INS}\). However, the replay detection algorithm requires memory to store the ciphertexts it has already processed. We model this memory use explicitly by providing a memory resource \(\mathbf {M}\) to the decryption converter. This is drawn in Fig. 7. The real system is thus \(\mathsf {enc}^{A}\mathsf {dec}^{B}[\mathbf {AUT}[1],\mathbf {INS},\mathbf {M}]\).

Fig. 7.

Real system for constructing a replay protected confidential channel. Capital letters (A, B, E.1, E.2) represent interface labels and small letters (m, \(\tilde{m}\), c, \(c'\), \(\mathtt {pk}\)) represent values that are in- or output.

If one uses a public key encryption scheme with replay filtering defined by an algorithm F (see Sect. 2.3), then Alice’s converter \(\mathsf {enc}\) runs the encryption algorithm as for a normal PKE, but Bob’s converter additionally runs the filtering algorithm F before decrypting to detect (and filter out) replays.
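A minimal sketch of Bob's modified converter for Benchmark 2 (our illustration; names are ours), where the memory resource \(\mathbf {M}\) is modelled as a simple list of stored ciphertexts:

```python
class FilteringDecConverter:
    # Bob's converter for Benchmark 2, built from a PKESF scheme (G, E, D, F):
    # compare each incoming ciphertext against the memory M via F, drop replays,
    # and decrypt only fresh ciphertexts.
    def __init__(self, G, D, F, k):
        self.D, self.F = D, F
        self.pk, self.sk = G(k)
        self.memory = []                           # memory resource M

    def on_ciphertext(self, c):
        for c_old in self.memory:
            if self.F(self.pk, self.sk, c_old, c):
                return None                        # detected replay: filtered out
        self.memory.append(c)
        return self.D(self.sk, c)                  # fresh ciphertext: output plaintext
```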

3.3 Benchmark 3: The \(\mathbf {RP}\hbox {-}\mathbf {CONF}\) Channel with Outsourceable Replay Protection

In this section we again want to construct a replay protected confidential channel \(\mathbf {RP}\hbox {-}\mathbf {CONF}\)—but where the job of filtering out ciphertext replays is outsourced to a third party. The ideal system is thus identical to Benchmark 2, i.e. \(\sigma ^{E} \mathbf {RP}\hbox {-}\mathbf {CONF}\).

The real system now has three honest parties, Alice the sender, Bob the receiver, and Charlie the replay-filterer, where each runs its own converter \(\mathsf {enc}\), \(\mathsf {dec}\) and \(\mathsf {rp}\), respectively. As before, a public key \(\mathtt {pk}\) is generated by \(\mathsf {dec}\) and sent on an authentic channel \(\mathbf {AUT}[1]_{B}\) to both Alice and Charlie—but Eve gets a copy as well—where the index \(B\) denotes the origin of the authenticated message. And as before, \(\mathsf {enc}\) encrypts the message and sends it on an insecure channel \(\mathbf {INS}\), but this time Charlie is on the receiving end of \(\mathbf {INS}\). Charlie then runs \(\mathsf {rp}\), which decides if the message should be forwarded to Bob through \(\mathbf {AUT}_C\) or if it gets filtered out—this channel needs to be authenticated so that Eve cannot change the messages or inject replays again.Footnote 8 To do this, \(\mathsf {rp}\) needs access to the memory resource \(\mathbf {M}\) so that it can store the previously forwarded (i.e. not filtered out) ciphertexts. Finally, \(\mathsf {dec}\) decrypts the ciphertexts received. This is depicted in Fig. 8.

Fig. 8.

Real system for constructing a replay protected confidential channel with outsourced replay filtering. As in previous figures, the sender Alice is on the left, the receiver Bob is on the right and the eavesdropper Eve is below. In this setting we have another party, Charlie, above in the picture, to whom replay detection is outsourced, and who runs the converter \(\mathsf {rp}\). Capital letters (A, B, E.1, E.2) represent interface labels and small letters (m, \(\tilde{m}\), c, \(c'\), \(\mathtt {pk}\)) represent values that are in- or output.

Note that in this setup, \(\mathsf {rp}\) does not have access to the secret key and so it must detect replays with the public key only; since \(\mathsf {dec}\) does not have access to the memory \(\mathbf {M}\), it cannot perform the replay filtering itself. In the case where the players use a PKEPF \(\varPi = (G,E,D,F)\), \(\mathsf {enc}\) runs E, \(\mathsf {dec}\) runs G and D, and \(\mathsf {rp}\) runs F.
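A minimal sketch of Charlie's converter \(\mathsf {rp}\) (our illustration; names are ours), which only uses the public key and the memory \(\mathbf {M}\):

```python
class ReplayFilter:
    # Charlie's converter rp for Benchmark 3, built from a PKEPF scheme
    # (G, E, D, F): forward a ciphertext over AUT_C unless the public filter F
    # relates it to a previously forwarded one.
    def __init__(self, F, pk):
        self.F, self.pk = F, pk
        self.memory = []                  # previously forwarded ciphertexts (M)

    def on_ciphertext(self, c):
        for c_old in self.memory:
            if self.F(self.pk, c_old, c):
                return None               # replay: not forwarded to Bob
        self.memory.append(c)
        return c                          # forwarded to Bob over AUT_C
```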

4 \(\mathsf {IND}\)-\(\mathsf {RCCA}\) Is Not Sufficient for Benchmark 1

In this section we give a correct and \(\mathsf {IND}\)-\(\mathsf {RCCA}\) secure PKE scheme which does not achieve Benchmark 1 (see Sect. 3.1). As already mentioned, this separation result is in the spirit of the separation proven in [8] between \(\mathsf {UC}\)-\(\mathsf {RCCA}\) and \(\mathsf {IND}\)-\(\mathsf {RCCA}\) for small message spaces.

Theorem 1

There is a correct and \(\mathsf {IND}\)-\(\mathsf {RCCA}\) secure PKE scheme \({\varPi }'\) for which there is an efficient distinguisher \(\mathbf {D}\) such that for any simulator \(\sigma \),

$$ \varDelta ^{\mathbf {D}}\left( \mathsf {enc}^{A}\mathsf {dec}^{B}[\mathbf {AUT}[1],\mathbf {INS}],\ \sigma ^{E}\mathbf {CONF}[1]\right) \ \text {is non-negligible in the security parameter } k. $$

At a high level, we construct an \(\mathsf {IND}\)-\(\mathsf {RCCA}\) secure PKE scheme \({\varPi }'\) for the binary message space that is malleable, in that an adversary can maul a ciphertext into another one that decrypts to a related message. While such tampering attacks do not help an adversary win the \(\mathsf {IND}\)-\(\mathsf {RCCA}\) game for \({\varPi }'\)Footnote 9, we show that Benchmark 1 cannot be achieved using \({\varPi }'\), as it still allows an attacker to tamper with what Alice sends.

Let \(\varPi = \left( G,E,D\right) \) be a correct and \(\mathsf {IND}\)-\(\mathsf {RCCA}\) secure PKE scheme for the binary message space \(\mathcal {M} = \{0,1\}\). From \(\varPi \), we construct a PKE scheme \({\varPi }' = \left( {G}',{E}',{D}'\right) \), which works just as \(\varPi \), except that now \({E}'\) appends an extra bit 0 to the ciphertexts, and during decryption \({D}'\) uses D internally to decrypt the input ciphertext (ignoring the last bit appended by \({E}'\)), and then XORs the plaintext output by D with the extra bit found on the received ciphertext (unless D outputs \(\bot \), in which case \({D}'\) also outputs \(\bot \)). It is easy to see, on one hand, that if \({\varPi }\) is correct and \(\mathsf {IND}\)-\(\mathsf {RCCA}\) secure, then so is \({\varPi }'\). On the other hand, it is also easy to come up with a distinguisher that, for any simulator \(\sigma \), distinguishes the real-world system from the ideal-world system \(\sigma ^{E}\mathbf {CONF}[1]\), where the protocol \(\pi = \left( \mathsf {enc},\mathsf {dec}\right) \) uses \({\varPi }'\) as the underlying PKE scheme. A formal proof of Theorem 1 can be found in [4].
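The modification of \(\varPi \) described above can be sketched as follows (our own illustration of the construction in the text; function names are ours). Flipping the appended bit turns an encryption of m into a ciphertext that decrypts to \(1-m\); this breaks the \(\mathbf {CONF}[1]\) construction but, as noted in Footnote 9, does not help win the \(\mathsf {IND}\)-\(\mathsf {RCCA}\) game.

```python
def E_prime(E, pk, m):
    # Encryption of Pi': encrypt m in {0,1} with the underlying scheme and
    # append the bit 0.
    return (E(pk, m), 0)

def D_prime(D, sk, c):
    # Decryption of Pi': decrypt the inner ciphertext and XOR the result with
    # the appended bit as found on the received ciphertext (pass on a failure,
    # modelled here as None).
    inner, b = c
    m = D(sk, inner)
    return None if m is None else m ^ b
```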

5 Technical Issues with \(\mathsf {pd}\)-\(\mathsf {RCCA}\) and \(\mathsf {sd}\)-\(\mathsf {RCCA}\)

In [8], Canetti et al. introduce \(\mathsf {pd}\)-\(\mathsf {RCCA}\) and \(\mathsf {sd}\)-\(\mathsf {RCCA}\) as supposedly relaxed versions of \(\mathsf {CCA\text {-}2}\) security. Although other supposedly relaxed versions of \(\mathsf {CCA\text {-}2}\), such as Benign Malleability [24] and generalized \(\mathsf {CCA\text {-}2}\) security [1], had been introduced before, these notions are subsumed by the definition of \(\mathsf {pd}\)-\(\mathsf {RCCA}\) and suffer from the same technical issues we uncover in this section. For this reason, we will focus only on the \(\mathsf {pd}\)-\(\mathsf {RCCA}\) and \(\mathsf {sd}\)-\(\mathsf {RCCA}\) security notions. We now recall the definition of \(\mathsf {IND}\)-\(\mathsf {pd}\)-\(\mathsf {RCCA}\) and \(\mathsf {IND}\)-\(\mathsf {sd}\)-\(\mathsf {RCCA}\) [8].

Definition 6

Let \(\varPi = \left( G,E,D\right) \) be an encryption scheme.

  1.

    Say that a family of binary relations \(\equiv _{\mathtt {pk}}\) (indexed by the public keys of \(\varPi \)) on ciphertext pairs is a compatible relation for \(\varPi \) if for all key-pairs \(\left( \mathtt {pk},\mathtt {sk}\right) \) of \(\varPi \):

    (a)

      For any two ciphertexts \(c,{c}'\), if \(c \equiv _{\mathtt {pk}} {c}'\), then \(D_{\mathtt {sk}}\left( c\right) = D_{\mathtt {sk}}\left( {c}'\right) \), except with negligible probability over the random choices of D.

    (b)

      For any plaintext \(m \in \mathcal {M}\), if c and \({c}'\) are two ciphertexts obtained as independent encryptions of m (i.e. two applications of algorithm E on m using independent random bits), then \(c \equiv _{\mathtt {pk}} {c}'\) only with negligible probability.

  2.

    We say that a relation family as above is publicly computable (resp. secretly computable) if for all key pairs \(\left( \mathtt {pk},\mathtt {sk}\right) \) and ciphertext pairs \(\left( c,{c}'\right) \) it can be determined whether \(c \equiv _{\mathtt {pk}} {c}'\) using a PPT algorithm taking inputs \(\left( \mathtt {pk},c,{c}'\right) \) (resp. \(\left( \mathtt {pk},\mathtt {sk},c,{c}'\right) \)).

  3.

    We say that \(\varPi \) is publicly-detectable Replayable-CCA (\(\mathsf {IND}\)-\(\mathsf {pd}\)-\(\mathsf {RCCA}\)) if there exists a compatible and publicly computable relation family \(\equiv _{\mathtt {pk}}\) such that \(\varPi \) is secure according to the standard definition of \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) (Definition 3), but where the game systems differ from the \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) game systems in the second decryption stage, which now works as follows: In the following, let \(c^{*}\) be the challenge ciphertext output by the game system:

    • Second decryption stage: When \(\mathbf {D}\) queries \(\left( \mathtt {ciphertext},c\right) \), the game system replies \(\mathtt {test}\) if \({c}^{*} \equiv _{\mathtt {pk}} c\), and otherwise computes \(m = D_{\mathtt {sk}}\left( c\right) \) and then sends m to \(\mathbf {D}\).

    Similarly, we say that \(\varPi \) is secretly-detectable Replayable-CCA (\(\mathsf {IND}\)-\(\mathsf {sd}\)-\(\mathsf {RCCA}\)) if the above holds for a secretly computable relation family \(\equiv _{\mathtt {pk}}\).

Remark 2

Note that Condition 1b, which demands two fresh encryptions of any plaintext not to be detected as replays of one another, is equivalent to the additional correctness condition imposed for PKESF and PKEPF schemes (see Definition 5). As mentioned in [8], and as we will see later, the correctness of the replay filtering algorithm follows from the semantic security of the underlying PKE scheme.

It is claimed in [8] that \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) security implies \(\mathsf {IND}\)-\(\mathsf {pd}\)-\(\mathsf {RCCA}\) security (with the equality relation serving as the compatible relation), which in turn implies \(\mathsf {IND}\)-\(\mathsf {sd}\)-\(\mathsf {RCCA}\) security. However, as we now show, Definition 6 is not an actual relaxation of the \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) security notion. More concretely, we prove that \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) security does not entail \(\mathsf {IND}\)-\(\mathsf {pd}\)-\(\mathsf {RCCA}\) nor even \(\mathsf {IND}\)-\(\mathsf {sd}\)-\(\mathsf {RCCA}\) security, according to their definition.

Theorem 2

If there is a correct and \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) secure PKE scheme, then there is a correct and \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) secure PKE scheme which is not \(\mathsf {IND}\)-\(\mathsf {pd}\)-\(\mathsf {RCCA}\) nor \(\mathsf {IND}\)-\(\mathsf {sd}\)-\(\mathsf {RCCA}\) secure.

Throughout the rest of the section, let \(\varPi = \left( G,E,D\right) \) be a correct and \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) secure PKE scheme. Without loss of generality, assume that all messages in \(\varPi \)’s message space have the same length. We create a scheme \({\varPi }' = \left( {G}',{E}',{D}'\right) \) (see Algorithm 1) that is a correct and \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) secure PKE scheme, but is not \(\mathsf {IND}\)-\(\mathsf {pd}\)-\(\mathsf {RCCA}\) nor \(\mathsf {IND}\)-\(\mathsf {sd}\)-\(\mathsf {RCCA}\) secure.

Algorithm 1. The modified PKE scheme \({\varPi }' = \left( {G}',{E}',{D}'\right) \) built from \(\varPi \).

Lemma 1

If \(\varPi \) is correct and \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) secure, then so is \({\varPi }'\).

Proof

It is easy to see that if \(\varPi \) is correct and \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) secure then \({\varPi }'\) is a correct PKE scheme. We now prove that \({\varPi }'\) is \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) secure.

Let \(\mathbf {D}\) be a distinguisher for the \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) game systems for \({\varPi }'\). We construct a distinguisher \({\mathbf {D}}'\), which internally uses \(\mathbf {D}\), for the \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) game systems for \(\varPi \) such that

$$ \varDelta ^{\mathbf {D}}\left( {\mathbf {G}}_{0}^{\varPi '\hbox {-}\mathsf {IND}\hbox {-}\mathsf {CCA\text {-}2}},{\mathbf {G}}_{1}^{\varPi '\hbox {-}\mathsf {IND}\hbox {-}\mathsf {CCA\text {-}2}}\right) = \varDelta ^{{\mathbf {D}}'}\left( {\mathbf {G}}_{0}^{\varPi \hbox {-}\mathsf {IND}\hbox {-}\mathsf {CCA\text {-}2}},{\mathbf {G}}_{1}^{\varPi \hbox {-}\mathsf {IND}\hbox {-}\mathsf {CCA\text {-}2}}\right) \qquad (5.1)$$

\({\mathbf {D}}'\) works as follows: When \({\mathbf {D}}'\) receives \(\mathtt {pk}\) from the game, it picks a plaintext \(\tilde{m}\) uniformly at random from \(\mathcal {M}\), generates a ciphertext \(c = E_{\mathtt {pk}}\left( \tilde{m}\right) \), and forwards \(\mathtt {pk}\) to \(\mathbf {D}\). Before the challenge ciphertext is set, whenever \(\mathbf {D}\) queries \(\left( \mathtt {ciphertext}, {c}'\right) \), \({\mathbf {D}}'\) first checks if \(c = {c}'\): if this is the case then \({\mathbf {D}}'\) flips a coin uniformly at random and (depending on the outcome of the coin) either returns \(\bot \) as the result of the query, or forwards it to the \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) game. If \(c \ne {c}'\) then \({\mathbf {D}}'\) simply forwards the query to the game. Upon receiving the result of the decryption query, \({\mathbf {D}}'\) forwards it to \(\mathbf {D}\). When \(\mathbf {D}\) issues the challenge query, \({\mathbf {D}}'\) forwards it to the game, and, upon receiving the challenge ciphertext \(c^{*}\) from the game, \({\mathbf {D}}'\) forwards it back to \(\mathbf {D}\). After the challenge ciphertext is set, whenever \(\mathbf {D}\) issues a decryption query \(\left( \mathtt {ciphertext}, {c}'\right) \), \({\mathbf {D}}'\) behaves just as before, unless \({c}' = c^{*}\). In such case, \({\mathbf {D}}'\) simply forwards the decryption query to the \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) game and returns the result to \(\mathbf {D}\). When \(\mathbf {D}\) outputs a guess b, \({\mathbf {D}}'\) outputs the same guess and terminates. Clearly, (5.1) holds, and thus, if \(\varPi \) is \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) secure, then so is \({\varPi }'\).    \(\square \)

We now show that a compatible relation for \({\varPi }'\) cannot relate any freshly generated ciphertext to itself.

Lemma 2

Let \(\equiv _{\mathtt {pk}}\) be any family of compatible relations for \({\varPi }'\) (indexed by the public keys of \({\varPi }'\)). Then, for each \(\mathtt {pk}\) in the support of \({\varPi }'\)’s public keys, we have: for any fresh encryption c of some plaintext \(m \in \mathcal {M}\) under \(\mathtt {pk}\), \(c \not \equiv _{\mathtt {pk}} c\).

Proof

For each public key \(\mathtt {pk}\) in the support of \({\varPi }'\)’s public keys, let \(\equiv _{\mathtt {pk}}\) be a compatible relation for \({\varPi }'\) with respect to \(\mathtt {pk}\). For each ciphertext c that can be generated as a fresh encryption of some plaintext m by \({E}'\) under \(\mathtt {pk}\), there is a key-pair \(\left( \mathtt {pk},\mathtt {sk}\right) \) (for the same public key \(\mathtt {pk}\)) such that two independent executions of \({D}'_{\mathtt {sk}}\) on c yield different outputs with non-negligible probability (over the random choices of \({D}'\)). Hence, by the compatibility condition of Definition 6, \(c \not \equiv _{\mathtt {pk}} c\).    \(\square \)

Lemma 3

\({\varPi }'\) is not \(\mathsf {IND}\)-\(\mathsf {pd}\)-\(\mathsf {RCCA}\) nor \(\mathsf {IND}\)-\(\mathsf {sd}\)-\(\mathsf {RCCA}\) secure.

Proof

By the definitions of \(\mathsf {IND}\)-\(\mathsf {pd}\)-\(\mathsf {RCCA}\) and \(\mathsf {IND}\)-\(\mathsf {sd}\)-\(\mathsf {RCCA}\), the challenge ciphertext \(c^{*}\) is always a fresh encryption of some plaintext. By Lemma 2 it then follows \(c^{*} \not \equiv _{\mathtt {pk}} c^{*}\). As such, a distinguisher is allowed to simply ask for the decryption of the challenge \(c^{*}\) and thus distinguish the two game systems.    \(\square \)

Lemmas 1 and 3 conclude the proof of Theorem 2.

A way to avoid this technical issue with the definitions of \(\mathsf {IND}\)-\(\mathsf {pd}\)-\(\mathsf {RCCA}\) and \(\mathsf {IND}\)-\(\mathsf {sd}\)-\(\mathsf {RCCA}\) is to restrict the class of schemes one considers. For instance, if one required the decryption algorithm to be deterministic, then the equality relation between ciphertexts would be a compatible relation. Alternatively, one could require PKE schemes to have perfect correctness. In this case, the equality relation between ciphertexts that are in the support of the encryption algorithm (for some public key \(\mathtt {pk}\) and message \(m \in \mathcal {M}\)) would be a compatible relation. It however appears more natural to have security notions that do not depend on such restrictions (as is the case for most if not all confidentiality notions). Furthermore, it might not always be feasible to have perfect correctness or detectability [3], and avoiding this dependence is therefore crucial.

6 Relaxing Chosen Ciphertext Security

As discussed in Sect. 1, while \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) is generally too strong a security notion, \(\mathsf {IND}\)-\(\mathsf {RCCA}\) security is too weak, in that it is not even sufficient to achieve the weakest of our benchmarks, Benchmark 1, for small message spaces. In this section we introduce three new security notions—which are provably between \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) and \(\mathsf {IND}\)-\(\mathsf {RCCA}\), see Sect. 7—and prove that they are sufficient to achieve the three benchmarks introduced in Sect. 3.

6.1 Achieving Benchmark 1: Constructing the \(\mathbf {CONF}\) Channel

A game-based security notion that captures the confidentiality of an encryption scheme against active adversaries is one which is sufficiently strong to achieve a confidential channel (as defined in Sect. 3.1). Yet, it must also be as weak as possible so that it does not exclude any schemes which provide confidentiality. To achieve this, we introduce the \(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) security notion, and its multi-challenge version [n]\(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\).

Definition 7

We say that a PKE scheme \(\varPi = \left( G,E,D\right) \) is \(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) secure if there exists an efficient algorithm v that takes as input a key-pair \(\left( \mathtt {pk},\mathtt {sk}\right) \) and a pair of ciphertexts \(c,{c}'\) and outputs a boolean (corresponding to whether the ciphertexts seem related or not), such that no PPT distinguisher \(\mathbf {D}\) distinguishes the game systems \({\mathbf {G}}_{0}^{{\mathsf {IND}\hbox {-}\mathsf {cl}\hbox {-}\mathsf {RCCA}}}\) and \({\mathbf {G}}_{1}^{{\mathsf {IND}\hbox {-}\mathsf {cl}\hbox {-}\mathsf {RCCA}}}\) (specified below) with non-negligible advantage (in the security parameter k) over random guessing. For \(b \in \{0,1\}\), game system \({\mathbf {G}}_{b}^{{\mathsf {IND}\hbox {-}\mathsf {cl}\hbox {-}\mathsf {RCCA}}}\) is as follows:

  • Initialization: \({\mathbf {G}}_{b}^{{\mathsf {IND}\hbox {-}\mathsf {cl}\hbox {-}\mathsf {RCCA}}}\) generates a key-pair \(\left( \mathtt {pk},\mathtt {sk}\right) \leftarrow G\left( 1^{k}\right) \), and sends \(\mathtt {pk}\) to \(\mathbf {D}\).

  • First decryption stage: Whenever \(\mathbf {D}\) queries \(\left( \mathtt {ciphertext},c\right) \), the game system \({\mathbf {G}}_{b}^{{\mathsf {IND}\hbox {-}\mathsf {cl}\hbox {-}\mathsf {RCCA}}}\) computes \(m = D_{\mathtt {sk}}\left( c\right) \) and sends m to \(\mathbf {D}\).

  • Challenge stage: When \(\mathbf {D}\) queries \(\left( \mathtt {test\ messages},m_{0},m_{1}\right) \), for \(m_{0},m_{1} \in \mathcal {M}\) such that \(|m_{0}| = |m_{1}|\), \({\mathbf {G}}_{b}^{{\mathsf {IND}\hbox {-}\mathsf {cl}\hbox {-}\mathsf {RCCA}}}\) computes \(c^{*} = E_{\mathtt {pk}}\left( m_{b}\right) \), and sends \(c^{*}\) to \(\mathbf {D}\).

  • Second decryption stage: Whenever \(\mathbf {D}\) queries \(\left( \mathtt {ciphertext},c\right) \), the game system \({\mathbf {G}}_{b}^{{\mathsf {IND}\hbox {-}\mathsf {cl}\hbox {-}\mathsf {RCCA}}}\) calls \(v\left( \mathtt {pk},\mathtt {sk},{c}^{*},c\right) \) and decrypts c, obtaining a plaintext \(m = D_{\mathtt {sk}}\left( c\right) \). If v’s output is 1 and \(m = m_{b}\), the game system replies \(\mathtt {test}\) to \(\mathbf {D}\), and in all other cases the game replies with m.

At a high level, the job of algorithm v is to disallow strategies that an adversary could use to win the security game but that would not help break the confidentiality of the encryption. In the context of the \(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) game, v is used to prevent adversaries from pursuing strategies in which they ask for the decryption of a ciphertext that decrypts to the challenge message (a so-called replay). Thus, the game can only refuse to answer a decryption query for a ciphertext c if both of the following two conditions are met: 1. according to v, c is a replay of the challenge ciphertext; and 2. c indeed decrypts to the same plaintext as the challenge ciphertext. Note that if one relaxed the second condition to checking whether c decrypts to one of the (two) challenge plaintexts, the resulting security notion would be equivalent to \(\mathsf {RCCA}\) security; allowing the adversary to make decryption queries for ciphertexts that do not decrypt to the same plaintext as the challenge ciphertext is key to capturing the non-malleability feature of confidential channels.

\(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) security is sufficient for achieving Benchmark 1 for a single message (i.e. constructing an ideal \(\mathbf {CONF}\)[1] channel)—this follows from Theorem 3 below. However, it is not clear whether it is also sufficient for achieving Benchmark 1 for multiple messages: since v requires the secret key in order to check whether two ciphertexts are related, it appears infeasible for a reduction to detect relations between pairs of arbitrary ciphertexts, which is crucial for a hybrid argument reducing the task of distinguishing the real-world system from \(\mathbf {CONF}\)[n] to that of distinguishing the two \(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) game systems. To achieve Benchmark 1 for multiple messages, we now present the multi-challenge version of \(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) security, which we denote by [n]\(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) security, where n is the maximum number of challenge queries that a distinguisher can make.

Definition 8

We say that a PKE scheme \(\varPi = \left( G,E,D\right) \) is [n]\(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) secure if it is secure according to Definition 7, but where, for \(b \in \{0,1\}\), the game system \({\mathbf {G}}_{b}^{{[n]\mathsf {IND}\hbox {-}\mathsf {cl}\hbox {-}\mathsf {RCCA}}}\), which now accepts n challenge queries, behaves as follows:

  • Initialization: First, \({\mathbf {G}}_{b}^{{[n]\mathsf {IND}\hbox {-}\mathsf {cl}\hbox {-}\mathsf {RCCA}}}\) creates and initializes a table t of plaintext-ciphertext pairs which is initially empty. Then, \({\mathbf {G}}_{b}^{{[n]\mathsf {IND}\hbox {-}\mathsf {cl}\hbox {-}\mathsf {RCCA}}}\) runs \(\left( \mathtt {pk},\mathtt {sk}\right) \leftarrow G\left( 1^{k}\right) \), and sends \(\mathtt {pk}\) to \(\mathbf {D}\).

  • Decryption queries: Whenever \(\mathbf {D}\) queries \(\left( \mathtt {ciphertext},c\right) \), the game system calls, for each plaintext-ciphertext pair \(\left( m_{b,j},c_{j}^{*}\right) \) stored in t, \(v\left( \mathtt {pk},\mathtt {sk},{c}_{j}^{*},c\right) \) and decrypts c, obtaining a plaintext \(m = D_{\mathtt {sk}}\left( c\right) \). If for every plaintext-ciphertext pair stored in t, either v’s output is 0 or \(m \ne m_{b,j}\), then the game system replies with m to \(\mathbf {D}\). Otherwise, let \(\left( m_{b,l},c_{l}^{*}\right) \) be the plaintext-ciphertext pair stored in t with the smallest l such that both \(v\left( \mathtt {pk},\mathtt {sk},{c}_{l}^{*},c\right) = 1\) and \(m = m_{b,l}\). Then, \({\mathbf {G}}_{b}^{{[n]\mathsf {IND}\hbox {-}\mathsf {cl}\hbox {-}\mathsf {RCCA}}}\) replies \(\left( \mathtt {test},l\right) \) to \(\mathbf {D}\).

  • i-th challenge query (for \(i \le n\)): Whenever the distinguisher \(\mathbf {D}\) issues a challenge query \(\left( \mathtt {test\ messages},m_{0,i},m_{1,i}\right) \), where \(m_{0,i},m_{1,i} \in \mathcal {M}\) such that \(|m_{0,i}| = |m_{1,i}|\), the game system computes \(c_{i}^{*} = E_{\mathtt {pk}}\left( m_{b,i}\right) \), stores \(\left( m_{b,i},c_{i}^{*}\right) \) in table t, and sends \(c_{i}^{*}\) to \(\mathbf {D}\).
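The bookkeeping of the multi-challenge game can be sketched analogously to the single-challenge sketch above; the table t becomes a list of stored pairs, and each decryption query is checked against every stored challenge (again, all identifiers are illustrative).

```python
# Sketch of the [n]IND-cl-RCCA game system (Definition 8); identifiers are
# illustrative.  The table t stores the pairs (m_{b,i}, c_i*) in query order.
class MultiClRccaGame:
    def __init__(self, b, G, E, D, v, k, n):
        self.b, self.E, self.D, self.v, self.n = b, E, D, v, n
        self.pk, self.sk = G(k)
        self.table = []                          # table t, initially empty

    def challenge_query(self, m0, m1):
        assert len(self.table) < self.n and len(m0) == len(m1)
        m_b = m1 if self.b == 1 else m0
        c_star = self.E(self.pk, m_b)
        self.table.append((m_b, c_star))         # store (m_{b,i}, c_i*)
        return c_star

    def decrypt(self, c):
        m = self.D(self.sk, c)
        for l, (m_b, c_star) in enumerate(self.table, start=1):
            # reply ("test", l) for the smallest l whose stored challenge
            # both relates to c (according to v) and decrypts to the same m
            if self.v(self.pk, self.sk, c_star, c) == 1 and m == m_b:
                return ("test", l)
        return m                                 # unrelated to all stored challenges
```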

We now show that [n]\(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) security is sufficient for achieving Benchmark 1 when Alice is restricted to sending up to n messages. Thus, we need to prove that the construction is indistinguishable from the ideal \(\mathbf {CONF}\)[n] channel up to the [n]\(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) security of the underlying PKE scheme.

Remark 3

Note that the above security notion stands in sharp contrast with the q-bounded security notions from [11], which bound to q the number of decryption queries an adversary can make. Even if a PKE scheme is only [1]\(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) secure—the weakest security notion introduced in this paper—the adversary is not restricted in the number of decryption queries it can issue to the game. Note that in order to achieve our benchmarks, no such restriction can be imposed, as it would be a restriction on the distinguisher (sending at most q ciphertexts at Eve’s interface) which would impede general composability.

Let \(\varPi = \left( G,E,D\right) \) be a correct and [n]\(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) secure PKE scheme, and let the protocol \(\pi = \left( \mathsf {enc},\mathsf {dec}\right) \) be such that Alice’s converter \(\mathsf {enc}\) runs the encryption algorithm E to encrypt plaintexts, and Bob’s converter \(\mathsf {dec}\) runs the key-pair generation algorithm G to generate a public-secret key-pair and runs D to decrypt the received ciphertexts.

To prove that \(\pi \) constructs \(\mathbf {CONF}\)[n] from \(\mathbf {AUT}[1]\) and \(\mathbf {INS}[n]\) (Definition 1), we show how to create, from any algorithm v that satisfies Definition 8, an efficient simulator \(\sigma \) which internally uses v, such that any distinguisher \(\mathbf {D}\) for the real-world system and \(\sigma ^{E}\mathbf {CONF}[n]\) can be transformed into an equally good distinguisher for the [n]\(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) game systems. Then, from the [n]\(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) security of \(\varPi \), it follows that there is such an algorithm v, implying that no efficient distinguisher \(\mathbf {D}\) can distinguish the real world from the ideal world \(\sigma ^{E}\mathbf {CONF}[n]\) with simulator \(\sigma \) attached. In turn, this implies that Benchmark 1 is achieved.

Theorem 3

Let v be an algorithm that suits [n]\(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) (Definition 8). There exists an efficient simulator \(\sigma \) and an efficient reduction \(\mathbf {R}\) such that for every distinguisher \(\mathbf {D}\),

the advantage of \(\mathbf {D}\) in distinguishing the real-world system from \(\sigma ^{E}\mathbf {CONF}[n]\) is at most the advantage of \(\mathbf {D}\) with the reduction \(\mathbf {R}\) attached in distinguishing \({\mathbf {G}}_{0}^{{[n]\mathsf {IND}\hbox {-}\mathsf {cl}\hbox {-}\mathsf {RCCA}}}\) from \({\mathbf {G}}_{1}^{{[n]\mathsf {IND}\hbox {-}\mathsf {cl}\hbox {-}\mathsf {RCCA}}}\).

Proof

Consider the following simulator \(\sigma \) for interface \(E\) of \(\mathbf {CONF}\)[n], which has two sub-interfaces denoted by \(E.1\) and \(E.2\) on the outside (since the real-world system also has two sub-interfaces at \(E\)): Initially, \(\sigma \) generates a key-pair \(\left( \mathtt {pk},\mathtt {sk}\right) \) and outputs \(\mathtt {pk}\) at \(E.1\). When it receives the i-th input \(l_{i}\) at the inside interface \(in\) (which is connected to \(\mathbf {CONF}\)[n]), \(\sigma \) generates an encryption \(c \leftarrow E_{\mathtt {pk}}\left( \tilde{m}\right) \) of a randomly chosen message \(\tilde{m}\) of length \(l_{i}\), records \(\left( i,\tilde{m},c\right) \) and outputs c at \(E.2\). When \({c}'\) is input at \(E.2\), \(\sigma \) proceeds as follows: First, it decrypts \({c}'\), obtaining some plaintext \({m}'\). If \(\left( j,\tilde{m},c\right) \) has been recorded for some j such that \(\tilde{m} = {m}'\) and \(v\left( \mathtt {pk},\mathtt {sk},c,{c}'\right) = 1\), then \(\sigma \) outputs \(\left( \mathtt {dlv},j\right) \) at \(in\) (where j is the smallest index satisfying this condition). If no such triple has been recorded, \(\sigma \) outputs \(\left( \mathtt {inj},{m}'\right) \) at \(in\) (unless \({m}' = \bot \)).
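The simulator just described can be sketched as follows; random_message is a hypothetical helper that samples a uniformly random plaintext of a given length, and the sub-interfaces \(E.1\), \(E.2\) and the inside interface are modelled as method calls.

```python
# Sketch of the simulator sigma from the proof of Theorem 3; all identifiers,
# in particular the helper random_message, are illustrative assumptions.
class Simulator:
    def __init__(self, G, E, D, v, k, random_message):
        self.E, self.D, self.v, self.rand = E, D, v, random_message
        self.pk, self.sk = G(k)
        self.records = []                        # recorded triples (i, m~, c)

    def outer_E1(self):
        return self.pk                           # output pk at sub-interface E.1

    def inside_input(self, i, length):           # i-th leakage l_i from CONF[n]
        m_tilde = self.rand(length)              # random message of length l_i
        c = self.E(self.pk, m_tilde)
        self.records.append((i, m_tilde, c))
        return c                                 # output c at sub-interface E.2

    def outer_E2(self, c_prime):                 # ciphertext injected at E.2
        m_prime = self.D(self.sk, c_prime)
        if m_prime is None:                      # decryption yields bottom: do nothing
            return None
        for j, m_tilde, c in self.records:       # smallest matching index j
            if m_tilde == m_prime and self.v(self.pk, self.sk, c, c_prime) == 1:
                return ("dlv", j)                # deliver Alice's j-th message
        return ("inj", m_prime)                  # otherwise inject m'
```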

Having defined the simulator \(\sigma \), we now introduce a reduction system \(\mathbf {R}\), such that for any efficient distinguisher \(\mathbf {D}\)

  1. \(\mathbf {R}{\mathbf {G}}_{0}^{{[n]\mathsf {IND}\hbox {-}\mathsf {cl}\hbox {-}\mathsf {RCCA}}}\) is equivalent to the real-world system; and

  2. \(\mathbf {R}{\mathbf {G}}_{1}^{{[n]\mathsf {IND}\hbox {-}\mathsf {cl}\hbox {-}\mathsf {RCCA}}} \equiv \sigma ^{E}\mathbf {CONF}[n]\).

Consider the following reduction system \(\mathbf {R}\) (which processes at most n inputs at the outside \(A\) interface): Initially, \(\mathbf {R}\) forwards the public key \(\mathtt {pk}\) generated by the game system to the \(E.1\) interface. When the j-th message m is input at the \(A\) interface of \(\mathbf {R}\): \(\mathbf {R}\) chooses a message \(\tilde{m}\) of length \(|m|\) uniformly at random, and makes the challenge query \(\left( \mathtt {test\ messages}, m, \tilde{m}\right) \) to the game system, which replies with some ciphertext c. Then, \(\mathbf {R}\) records \(m_{j}^{*} = m\). Next, \(\mathbf {R}\) outputs c at the outside \(E.2\) interface. When \(\left( \mathtt {inj},{c}'\right) \) is input at interface \(E.2\), \(\mathbf {R}\) behaves as follows. First, \(\mathbf {R}\) makes a decryption query for \({c}'\) to the game, obtaining some \({m}'\). If \({m}' = \left( \mathtt {test},j\right) \), then \(\mathbf {R}\) outputs \(m_{j}^{*}\) at interface \(B\). If \({m}' = \bot \), \(\mathbf {R}\) ignores the injection, and nothing happens. Else, \(\mathbf {R}\) outputs \({m}'\) at the \(B\) interface. It is easy to see that indeed \(\mathbf {R}{\mathbf {G}}_{0}^{{[n]\mathsf {IND}\hbox {-}\mathsf {cl}\hbox {-}\mathsf {RCCA}}}\) is equivalent to the real-world system and \(\mathbf {R}{\mathbf {G}}_{1}^{{[n]\mathsf {IND}\hbox {-}\mathsf {cl}\hbox {-}\mathsf {RCCA}}} \equiv \sigma ^{E}\mathbf {CONF}[n]\). Using the above facts, the claimed bound follows.

   \(\square \)
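To complement the proof, here is a minimal sketch of the reduction system \(\mathbf {R}\), written against the multi-challenge game sketch given after Definition 8 and using the same hypothetical random_message helper as in the simulator sketch.

```python
# Sketch of the reduction system R from the proof of Theorem 3.  `game` is one
# of the two [n]IND-cl-RCCA game systems (e.g. MultiClRccaGame above).
class Reduction:
    def __init__(self, game, random_message):
        self.game, self.rand = game, random_message
        self.sent = []                           # recorded plaintexts m_j*

    def outer_E1(self):
        return self.game.pk                      # forward pk to interface E.1

    def interface_A(self, m):                    # j-th message input at A
        m_tilde = self.rand(len(m))
        c = self.game.challenge_query(m, m_tilde)
        self.sent.append(m)                      # record m_j* = m
        return c                                 # output c at interface E.2

    def inject_E2(self, c_prime):                # (inj, c') input at E.2
        reply = self.game.decrypt(c_prime)
        if isinstance(reply, tuple) and reply[0] == "test":
            return self.sent[reply[1] - 1]       # deliver m_j* at interface B
        if reply is None:                        # bottom: ignore the injection
            return None
        return reply                             # otherwise output m' at B
```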

6.2 Achieving Benchmark 2: Constructing the \(\mathbf {RP}\hbox {-}\mathbf {CONF}\) Channel

Another use of \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) security is for achieving replay protected confidential communication. As hinted by Benchmarks 2 and 3, replay protection comes in two flavours: 1. private detection and filtering of replays; and 2. public detection and filtering of replays. We begin by looking into the setting where Bob is the one responsible for filtering out ciphertext replays (Benchmark 2).

Before introducing a new security notion, we first look into why \(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) does not seem to suffice for constructing the \(\mathbf {RP}\hbox {-}\mathbf {CONF}\) channel. First, note that the \(\mathbf {RP}\hbox {-}\mathbf {CONF}\) channel construction (Benchmark 2) has to protect not only against replays of ciphertexts sent by Alice, but also against replays of ciphertexts injected by Eve. This is so since the receiving end (i.e. the \(\mathsf {dec}\) converter) does not know where the ciphertexts originated.Footnote 10 Hence, for each ciphertext that the converter receives, it has to make sure that it is not a replay of any previously received ciphertext, implying that the converter has to filter out all ciphertext replays. When one tries to reduce distinguishing the real-world construction from the ideal-world channel \(\mathbf {RP}\hbox {-}\mathbf {CONF}\) to winning the \(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) game, two critical issues arise:

  1. The algorithm v used by the game systems might not compute an equivalence relation: Consider the case where Alice inputs a message m into the channel, which results in a ciphertext c being output at the \(E\) interface. Eve can create two distinct replays of the ciphertext c, say \({c}'\) and \({c}''\), and input them into the \(E\) interface. While, by \(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) security, v should detect that ciphertext c is related to both \({c}'\) and \({c}''\), it does not necessarily detect that \({c}'\) is related to \({c}''\). In such a case, v cannot be used to detect ciphertext replays, as it would allow Eve to replay what Alice sends by generating different replays of c and injecting them into the channel (without ever injecting c itself).

  2. The reduction does not have access to the secret key generated by the game system: Even assuming that v computes an equivalence relation, it is not clear how one could reduce distinguishing the real and ideal worlds to distinguishing the two underlying \(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) game systems. Since any reduction system \(\mathbf {R}\) that one would attach to the game systems does not have access to the secret key, it is not clear how \(\mathbf {R}\) could check whether an arbitrary pair of ciphertexts \({c}'\) and \({c}''\) are related according to v (i.e. how \(\mathbf {R}\) could compute \(v\left( \mathtt {pk},\mathtt {sk},{c}',{c}''\right) \) without knowing \(\mathtt {sk}\)).

Interestingly, these remarks also apply to the \(\mathsf {IND}\)-\(\mathsf {sd}\)-\(\mathsf {RCCA}\) notion from [8], hinting that the \(\mathsf {IND}\)-\(\mathsf {sd}\)-\(\mathsf {RCCA}\) security notion does not capture what it was meant to capture. Another interesting remark is that, as for \(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\), the single-challenge and multi-challenge versions of \(\mathsf {IND}\)-\(\mathsf {sd}\)-\(\mathsf {RCCA}\) security do not seem to be necessarily equivalent.Footnote 11 With this, we now introduce \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\) security, which captures the secret detectability of ciphertext replays.

Definition 9

A PKE scheme \(\varPi = \left( G,E,D\right) \) is \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\) secure if there exists an efficient algorithm v that computes, for each key-pair \(\left( \mathtt {pk},\mathtt {sk}\right) \), an equivalence relation over ciphertexts \(c,{c}'\) such that for every key-pair \(\left( \mathtt {pk},\mathtt {sk}\right) \) in the support of \(G(1^k)\) and every pair of ciphertexts \(c,{c}'\), if \(v\left( \mathtt {pk},\mathtt {sk},c,{c}'\right) = 1\) then \(\delta \left( D_{\mathtt {sk}}\left( c\right) , D_{\mathtt {sk}}\left( {c}'\right) \right) \le \text {negl}\left( k\right) \) (where the randomness is over the internal randomness of D), and if no efficient distinguisher \(\mathbf {D}\) distinguishes the game systems \({\mathbf {G}}_{0}^{{\mathsf {IND}\hbox {-}{\mathsf {srp}}^{}\hbox {-}\mathsf {RCCA}}}\) and \({\mathbf {G}}_{1}^{{\mathsf {IND}\hbox {-}{\mathsf {srp}}^{}\hbox {-}\mathsf {RCCA}}}\) (specified below) with non-negligible advantage (in the security parameter k) over random guessing. The \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\) game systems work just as the \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) game systems, except that the \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\) game systems give distinguisher \(\mathbf {D}\) oracle access to v throughout the entire game (so that \(\mathbf {D}\) can check whether any two ciphertexts \(c,{c}'\) are related according to v with respect to the key-pair \(\mathtt {pk},\mathtt {sk}\) generated by the game system), and also except for the second decryption stage, which now works as follows:

  • Second decryption stage: Whenever \(\mathbf {D}\) queries \(\left( \mathtt {ciphertext},c\right) \), the game system replies \(\mathtt {test}\) if \(v\left( \mathtt {pk},\mathtt {sk},{c}^{*},c\right) = 1\) and replies \(m = D_{\mathtt {sk}}\left( c\right) \) otherwise.

Definition 9 addresses both of the issues we mentioned above: on the one hand by giving the distinguisher oracle access to v, and on the other hand by requiring that v computes an equivalence relation. The requirement that for any key-pair \(\mathtt {pk},\mathtt {sk}\) and any pair of ciphertexts \(c,{c}'\), if \(v\left( \mathtt {pk},\mathtt {sk},c,{c}'\right) = 1\) then \(\delta \left( D_{\mathtt {sk}}\left( c\right) , D_{\mathtt {sk}}\left( {c}'\right) \right) \le \text {negl}\left( k\right) \) captures that the two ciphertexts c and \({c}'\) can only be considered as replays of one another if they “carry essentially the same information”.
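For comparison with Definition 7, the modified second decryption stage can be sketched in a few lines: the reply is \(\mathtt {test}\) whenever v relates the queried ciphertext to the challenge ciphertext, regardless of what it decrypts to (identifiers are again illustrative).

```python
# Sketch of the second decryption stage of the IND-srp-RCCA game systems
# (Definition 9): v alone decides whether the query is refused.
def srp_second_decryption_stage(pk, sk, c_star, c, D, v):
    if v(pk, sk, c_star, c) == 1:
        return "test"                  # flagged as a replay of c*
    return D(sk, c)                    # otherwise decrypt as usual
```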

The definition of \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\) security is written for a PKE scheme \(\varPi = \left( G,E,D\right) \), but by taking the algorithm v required to exist by Definition 9 as a replay-filtering algorithm, we obtain a PKESF scheme \({\varPi }' = \left( G,E,D,v\right) \). Conversely, a PKESF scheme \({\varPi }' = \left( G,E,D,F\right) \) is \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\) secure if the underlying PKE scheme \(\varPi = \left( G,E,D\right) \) is \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\) secure with respect to the filtering algorithm F of \({\varPi }'\). Correctness of an \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\) secure PKESF \(\varPi '\) then follows from the correctness of the corresponding PKE \(\varPi = \left( G,E,D\right) \).

It is instructive to see why \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\) security does indeed require the filtering algorithm v to be meaningful. Consider, e.g., a trivial filtering algorithm such as the one that always sets \(v(\mathtt {pk},\mathtt {sk},c,c')=0\). This algorithm does not satisfy the definition above: since the game then never replies \(\mathtt {test}\), a distinguisher can simply ask for the decryption of the challenge ciphertext and win the game. But more importantly, it turns out that the above definition implies that Benchmark 2 is satisfied (see Theorem 4 further below), and by definition, Benchmark 2 requires the filtering algorithm to be meaningful (as otherwise the real and ideal systems are trivially distinguishable).

Lemma 4

Consider any correct PKE scheme \(\varPi = \left( G,E,D\right) \) that is \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\) secure, and let v be an algorithm with respect to which \(\varPi \) is \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\) secure. Then, \({\varPi }' = \left( G,E,D,v\right) \) is a correct PKESF scheme.

Proof

We show a slightly stronger statement. The event \(D_{\mathtt {sk}}\left( E_{\mathtt {pk}}\left( m\right) \right) \ne m \vee v\left( \mathtt {pk},\mathtt {sk},E_{\mathtt {pk}}\left( m\right) ,E_{\mathtt {pk}}\left( {m}'\right) \right) = 1\) can only occur if at least one of \(D_{\mathtt {sk}}\left( E_{\mathtt {pk}}\left( m\right) \right) \ne m\) or \(v\left( \mathtt {pk},\mathtt {sk},E_{\mathtt {pk}}\left( m\right) ,E_{\mathtt {pk}}\left( {m}'\right) \right) = 1\) occurs (for any adversary producing such messages). From the correctness of \(\varPi \), it follows that \(D_{\mathtt {sk}}\left( E_{\mathtt {pk}}\left( m\right) \right) \ne m\) only occurs with negligible probability. Thus, it now only remains to show that \(v\left( \mathtt {pk},\mathtt {sk},E_{\mathtt {pk}}\left( m\right) ,E_{\mathtt {pk}}\left( {m}'\right) \right) = 1\) occurs with at most negligible probability too.

Letting \(c = E_{\mathtt {pk}}\left( m\right) \) and \({c}' = E_{\mathtt {pk}}\left( {m}'\right) \), from the correctness of \(\varPi \) we have that \(\delta \left( m,D_{\mathtt {sk}}\left( c\right) \right) \le \text {negl}\left( k\right) \) and \(\delta \left( {m}',D_{\mathtt {sk}}\left( {c}'\right) \right) \le \text {negl}\left( k\right) \). From the definition of \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\) security we have that if \(v\left( \mathtt {pk},\mathtt {sk},c,{c}'\right) = 1\) then \(\delta \left( D_{\mathtt {sk}}\left( c\right) ,D_{\mathtt {sk}}\left( {c}'\right) \right) \le \text {negl}\left( k\right) \). Combining these last three inequalities with the triangle inequality, we find that \(\delta \left( m,{m}'\right) \le \text {negl}\left( k\right) \). But m and \(m'\) are deterministic values (unlike \(D_{\mathtt {sk}}\left( c\right) \) and \(D_{\mathtt {sk}}\left( {c}'\right) \), which are random variables over the encryption and decryption randomness), so \(\delta \left( m,m'\right) \in \{0,1\}\); being negligible, it must equal 0, i.e. \(m = m'\). Putting this together, we have just shown that if \(v\left( \mathtt {pk},\mathtt {sk},E_{\mathtt {pk}}\left( m\right) ,E_{\mathtt {pk}}\left( {m}'\right) \right) = 1\) then \(m = {m}'\).

Now, suppose that for some \(m \in \mathcal {M}\) we have that with non-negligible probability \(v\left( \mathtt {pk},\mathtt {sk},E_{\mathtt {pk}}\left( m\right) ,E_{\mathtt {pk}}\left( m\right) \right) = 1\) (i.e. v declares two fresh encryptions of m as related). Then it is easy to create an efficient distinguisher \(\mathbf {D}\) that has non-negligible advantage in distinguishing the two \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\) game systems of \(\varPi \) with respect to v: First, \(\mathbf {D}\) makes a challenge query \(\left( \mathtt {test\ messages}, m, \bar{m}\right) \) to the game system (where \(m \ne \bar{m}\)), and then \(\mathbf {D}\) generates a fresh encryption \(c = E_{\mathtt {pk}}\left( m\right) \) of m and asks the game system for the decryption of c. If the game system replies \(\mathtt {test}\), then \(\mathbf {D}\) outputs 0, and otherwise it outputs 1. It is easy to see that \(\mathbf {D}\)’s advantage in distinguishing the two game systems is at least half of the probability that the event \(v\left( \mathtt {pk},\mathtt {sk},E_{\mathtt {pk}}\left( m\right) ,E_{\mathtt {pk}}\left( m\right) \right) = 1\) occurs, which by our assumption is non-negligible. Thus, \(\mathbf {D}\) has non-negligible advantage in distinguishing the two game systems, contradicting that \(\varPi \) is \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\) secure with respect to v. From this contradiction, it follows that for any m, \(v\left( \mathtt {pk},\mathtt {sk},E_{\mathtt {pk}}\left( m\right) ,E_{\mathtt {pk}}\left( m\right) \right) = 1\) can only occur with negligible probability.    \(\square \)

The following result states that the \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\) security of a PKESF \(\varPi = (G,E,D,F)\) suffices for constructing an \(\mathbf {RP}\hbox {-}\mathbf {CONF}\)[n] channel, i.e. for satisfying Benchmark 2. To prove this, one creates a simulator \(\sigma \) which internally uses F such that any distinguisher \(\mathbf {D}\) for the real-world system and \(\sigma ^{E}\mathbf {RP}\hbox {-}\mathbf {CONF}[n]\) can be transformed into an equally good distinguisher for the \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\) game systems. A formal proof of Theorem 4 can be found in [4].

Theorem 4

Let \(\varPi = \left( G,E,D,F\right) \) be a correct PKESF scheme that is \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\) secure. There exists an efficient simulator \(\sigma \) and for any \(n \in \mathbb {N}\) there exists an efficient reduction \(\mathbf {R}\) such that for every distinguisher \(\mathbf {D}\),

the advantage of \(\mathbf {D}\) in distinguishing the real-world system from \(\sigma ^{E}\mathbf {RP}\hbox {-}\mathbf {CONF}[n]\) is at most the advantage of \(\mathbf {D}\) with the reduction \(\mathbf {R}\) attached in distinguishing the two \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\) game systems.

6.3 Achieving Benchmark 3: Constructing the \(\mathbf {RP}\hbox {-}\mathbf {CONF}\) Channel with Outsourceable Replay Protection

We now look into the setting where a third party who does not possess the secret key is responsible for filtering out ciphertext replays (Benchmark 3). In this setting, \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\) security seems too weak, as the algorithm v which the \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\) game systems use for detecting ciphertext replays (i.e. for checking whether two ciphertexts are replays of one another) has access to the secret key. For this reason, we now introduce the \(\mathsf {IND}\)-\({\mathsf {prp}}^{}\)-\(\mathsf {RCCA}\) security notion, which is the analogue of \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\) security for public detection of ciphertext replays.

Definition 10

A scheme \(\varPi = \left( G,E,D\right) \) is \(\mathsf {IND}\)-\({\mathsf {prp}}^{}\)-\(\mathsf {RCCA}\) secure if there is an efficient algorithm v that computes, for each public key \(\mathtt {pk}\), an equivalence relation over ciphertexts such that: 1. for every \(\mathtt {pk}\) in the support of \(G(1^k)\) and every pair of ciphertexts \(c,{c}'\), if \(v\left( \mathtt {pk},c,{c}'\right) = 1\) then \(\delta \left( D_{\mathtt {sk}}\left( c\right) , D_{\mathtt {sk}}\left( {c}'\right) \right) \le \text {negl}\left( k\right) \) (where the randomness is over the internal randomness of D and over the conditional distribution of the secret key \(\mathtt {sk}\) given the public key \(\mathtt {pk}\) under the key-pair distribution of \(G(1^k)\)); and 2. no efficient distinguisher \(\mathbf {D}\) distinguishes the two \(\mathsf {IND}\)-\({\mathsf {prp}}^{}\)-\(\mathsf {RCCA}\) game systems (described below) with non-negligible advantage (in the security parameter k) over random guessing. The \(\mathsf {IND}\)-\({\mathsf {prp}}^{}\)-\(\mathsf {RCCA}\) game systems work just as the \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\) game systems, except that the game system no longer needs to provide the distinguisher with oracle access to v, as the distinguisher can check by itself whether any two ciphertexts are related according to v.

Recall that \(\mathsf {IND}\)-\(\mathsf {pd}\)-\(\mathsf {RCCA}\) security was introduced to capture efficient public detectability of ciphertext replays [8]. However, apart from the technical issues we already identified with its definition, it turns out to be crucial, as in the previous section, that the replay-detection algorithm computes an equivalence relation over ciphertexts in order to meet the benchmark.

Just like for \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\), Definition 10 is written for a PKE scheme \(\varPi = \left( G,E,D\right) \), but by taking the algorithm v required to exist by Definition 10 as a replay-filtering algorithm, we get a PKEPF scheme \({\varPi }' = \left( G,E,D,v\right) \). Correctness of an \(\mathsf {IND}\)-\({\mathsf {prp}}^{}\)-\(\mathsf {RCCA}\) secure PKEPF \(\varPi '\) then follows from the correctness of the corresponding PKE \(\varPi = \left( G,E,D\right) \).

Lemma 5

Consider any correct PKE scheme \(\varPi = \left( G,E,D\right) \) that is \(\mathsf {IND}\)-\({\mathsf {prp}}^{}\)-\(\mathsf {RCCA}\) secure, and let v be an algorithm with respect to which \(\varPi \) is \(\mathsf {IND}\)-\({\mathsf {prp}}^{}\)-\(\mathsf {RCCA}\) secure. Then, \({\varPi }' = \left( G,E,D,v\right) \) is a correct PKEPF scheme.

We omit the proof of Lemma 5 as it resembles the one of Lemma 4.

Theorem 5 states that the \(\mathsf {IND}\)-\({\mathsf {prp}}^{}\)-\(\mathsf {RCCA}\) security of a PKEPF scheme \(\varPi = (G,E,D,F)\) suffices for constructing an \(\mathbf {RP}\hbox {-}\mathbf {CONF}\)[n] channel even when the filtering is run by a third party without access to the secret key, i.e. it satisfies Benchmark 3. To prove this, one would create a simulator \(\sigma \) which internally uses F such that any distinguisher \(\mathbf {D}\) for the real-world system and \(\sigma ^{E}\mathbf {RP}\hbox {-}\mathbf {CONF}[n]\) can be transformed into an equally good distinguisher for the \(\mathsf {IND}\)-\({\mathsf {prp}}^{}\)-\(\mathsf {RCCA}\) game systems. This result can be obtained along the lines of Theorem 4, whose proof can be found in the full version of this paper [4].

Theorem 5

Let \(\varPi = \left( G,E,D,F\right) \) be a correct and \(\mathsf {IND}\)-\({\mathsf {prp}}^{}\)-\(\mathsf {RCCA}\) secure PKEPF scheme. There exists an efficient simulator \(\sigma \) and for any \(n \in \mathbb {N}\) there exists an efficient reduction \(\mathbf {R}\) such that for every distinguisher \(\mathbf {D}\), the advantage of \(\mathbf {D}\) in distinguishing the real-world system from \(\sigma ^{E}\mathbf {RP}\hbox {-}\mathbf {CONF}[n]\) is at most the advantage of \(\mathbf {D}\) with the reduction \(\mathbf {R}\) attached in distinguishing the two \(\mathsf {IND}\)-\({\mathsf {prp}}^{}\)-\(\mathsf {RCCA}\) game systems.

7 Relating the Security Games

In this section we prove all the implications and separations between the game-based security notions that are depicted in Fig. 1.

Lemma 6

\(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) \(\Rightarrow \) \(\mathsf {IND}\)-\({\mathsf {prp}}^{}\)-\(\mathsf {RCCA}\).

Proof

Define v so that \(v\left( \mathtt {pk},c,{c}'\right) = 1\) if and only if \(c = {c}'\). This v computes an equivalence relation, and if \(v\left( \mathtt {pk},c,{c}'\right) = 1\) then \(\delta \left( D_{\mathtt {sk}}\left( c\right) , D_{\mathtt {sk}}\left( {c}'\right) \right) = 0\). Moreover, with this choice of v the two \(\mathsf {IND}\)-\({\mathsf {prp}}^{}\)-\(\mathsf {RCCA}\) game systems behave exactly as the \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) game systems (the second decryption stage replies \(\mathtt {test}\) precisely when the challenge ciphertext itself is queried), so the indistinguishability requirement follows from the \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) security of the scheme.    \(\square \)

Lemma 7

\(\mathsf {IND}\)-\({\mathsf {prp}}^{}\)-\(\mathsf {RCCA}\) \(\Rightarrow \) \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\).

Proof

Any algorithm v that satisfies \(\mathsf {IND}\)-\({\mathsf {prp}}^{}\)-\(\mathsf {RCCA}\) security also satisfies \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\) security, when viewed as an algorithm that additionally takes, and ignores, the secret key \(\mathtt {sk}\). In particular, the oracle access to v that the \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\) game systems provide can be simulated by the distinguisher itself, since v only requires the public key.    \(\square \)

The proof of the following result can be found in [4].

Lemma 8

Any correct and \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\) secure PKE scheme \(\varPi \) is [n]\(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) secure.

Lemma 9

[n]\(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) \(\Rightarrow \) [\(n-1\)]\(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\).

Proof

Any distinguisher for the [\(n-1\)]\(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) game systems is also a distinguisher for the [n]\(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) systems with the same advantage.    \(\square \)

Lemma 10

[1]\(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) \(\Rightarrow \) \(\mathsf {IND}\)-\(\mathsf {RCCA}\).

Proof

From any distinguisher \(\mathbf {D}\) for the \(\mathsf {IND}\)-\(\mathsf {RCCA}\) game systems we create a distinguisher \({\mathbf {D}}'\) for the [1]\(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) game systems: \({\mathbf {D}}'\) uses \(\mathbf {D}\) internally, forwarding every query between \(\mathbf {D}\) and the [1]\(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) game, except for decryption queries, where it behaves as follows: If, after the challenge plaintexts \(m_{0}\) and \(m_{1}\) are set, \(\mathbf {D}\) makes a decryption query of some ciphertext such that the [1]\(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) game replies with either \(m_{0}\) or \(m_{1}\), then \({\mathbf {D}}'\) sends \(\mathtt {test}\) to \(\mathbf {D}\), and otherwise it forwards the reply of the [1]\(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) game to \(\mathbf {D}\). In this way, \({\mathbf {D}}'\) simulates the \(\mathsf {IND}\)-\(\mathsf {RCCA}\) game systems towards \(\mathbf {D}\) and retains \(\mathbf {D}\)’s advantage.    \(\square \)

Lemma 11

\(\mathsf {IND}\)-\(\mathsf {RCCA}\) \(\not \Rightarrow \) [1]\(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\).

Proof

By Theorem 3, [1]\(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) security suffices for achieving Benchmark 1 for a single message. By Theorem 1, \(\mathsf {IND}\)-\(\mathsf {RCCA}\) does not suffice for achieving Benchmark 1 for a single message. Hence, \(\mathsf {IND}\)-\(\mathsf {RCCA}\) security cannot imply [1]\(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) security.    \(\square \)

For the sake of simplicity, the two following results (Lemmata 12 and 13) assume the existence of an \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) secure PKE scheme. We note that both results can be generalized to only assume an [n]\(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) (\(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\), respectively) secure scheme at the price of having a less elegant proof.

Lemma 12

[n]\(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) \(\not \Rightarrow \) \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\).

Proof

From an \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) secure scheme \(\varPi = \left( G,E,D\right) \), we create a scheme \({\varPi }' = \left( {G}',{E}',{D}'\right) \) that is [n]\(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) secure but not \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\) secure. \({\varPi }'\) works just as \(\varPi \), except that during encryption \({E}'\) appends a bit 0 to the ciphertexts generated by E, and during decryption, if the last bit of the ciphertext is 0 then \({D}'\) ignores it and decrypts the ciphertext using D, and otherwise, with probability \(\frac{1}{2}\) \({D}'\) outputs \(\bot \) and with the remaining probability \(\frac{1}{2}\) \({D}'\) ignores the last bit and decrypts the ciphertext using D.
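A minimal Python sketch of the modified scheme \({\varPi }'\), on top of an arbitrary PKE scheme (G, E, D); the appended bit is modelled as the second component of a pair, and the names are illustrative.

```python
import secrets

# Sketch of the scheme Pi' from the proof of Lemma 12.  A ciphertext of Pi'
# is modelled as a pair (c, bit) where c is a ciphertext of Pi.
def E_prime(E, pk, m):
    return (E(pk, m), 0)                         # E' appends the bit 0

def D_prime(D, sk, ct):
    c, bit = ct
    if bit == 0:
        return D(sk, c)                          # bit 0: ignore it and decrypt
    # bit 1: output bottom with probability 1/2, otherwise decrypt as usual
    return None if secrets.randbits(1) == 0 else D(sk, c)
```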

Clearly, it is easy to create an algorithm v that suits [n]\(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) such that no distinguisher has non-negligible advantage in distinguishing the two [n]\(\mathsf {IND}\)-\(\mathsf {cl}\)-\(\mathsf {RCCA}\) game systems for \({\varPi }'\) with respect to v: for \(b \in \{0,1\}\), \(v\left( \mathtt {pk},\mathtt {sk},c \mid \mid 0,{c}' \mid \mid b\right) = 1\) if and only if \(c = {c}'\). On the other hand, any algorithm \({v}'\) that suits \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\) cannot relate the ciphertexts \(c \mid \mid 0\) and \(c \mid \mid 1\), since \(\delta \left( {D}'_{\mathtt {sk}}\left( c \mid \mid 0\right) , {D}'_{\mathtt {sk}}\left( c \mid \mid 1\right) \right) \) is no longer negligible. As such, a distinguisher can ask for the decryption of \(c \mid \mid 1\), where \(c \mid \mid 0\) is the challenge ciphertext; with probability \(\frac{1}{2}\) the reply reveals the challenge plaintext, which allows it to distinguish the game systems.   \(\square \)

Lemma 13

\(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\) \(\not \Rightarrow \) \(\mathsf {IND}\)-\({\mathsf {prp}}^{}\)-\(\mathsf {RCCA}\).

Proof

From an \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) secure scheme \(\varPi = \left( G,E,D\right) \), we create a scheme \({\varPi }' = \left( {G}',{E}',{D}'\right) \) that is \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\) secure but not \(\mathsf {IND}\)-\({\mathsf {prp}}^{}\)-\(\mathsf {RCCA}\) secure. \({\varPi }'\) works just as \(\varPi \), except that \({G}'\) additionally picks a bit b uniformly at random and sets the key-pair to be \(\left( \mathtt {pk},\left( \mathtt {sk},b\right) \right) \), where \(\left( \mathtt {pk},\mathtt {sk}\right) \) is the key-pair generated by G. Moreover, during encryption \({E}'\) uses E internally to generate a ciphertext c and outputs \(\left( c,c\right) \) as the ciphertext, and during decryption, on input \(\left( c_0,c_1\right) \), \({D}'\) uses D internally to decrypt \(c_b\) (where b is the bit of the secret key that was sampled by \({G}'\)).
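Analogously, a minimal sketch of this second separating scheme (identifiers illustrative; the secret key of \({\varPi }'\) is modelled as the pair (sk, b)):

```python
import secrets

# Sketch of the scheme Pi' from the proof of Lemma 13: the secret key is
# extended by a uniformly random bit b, and ciphertexts are duplicated.
def G_prime(G, k):
    pk, sk = G(k)
    return pk, (sk, secrets.randbits(1))         # secret key (sk, b)

def E_prime(E, pk, m):
    c = E(pk, m)
    return (c, c)                                # output the pair (c, c)

def D_prime(D, sk_prime, ct):
    sk, b = sk_prime
    return D(sk, ct[b])                          # decrypt the component chosen by b
```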

It is easy to create an algorithm v that suits \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\) such that no distinguisher has non-negligible advantage in distinguishing the two \(\mathsf {IND}\)-\({\mathsf {srp}}^{}\)-\(\mathsf {RCCA}\) game systems for \({\varPi }'\) with respect to v: for \(b \in \{0,1\}\), \(v\left( \mathtt {pk},\mathtt {sk},\left( c_0,c_1\right) ,\left( {c_0}',{c_1}'\right) \right) = 1\) if and only if \(c_b = {c_b}'\), where b is again the bit of the secret key.

On the other hand, any algorithm \({v}'\) that suits \(\mathsf {IND}\)-\({\mathsf {prp}}^{}\)-\(\mathsf {RCCA}\) cannot relate the ciphertext \(\left( c,c\right) \) to any of the following ciphertexts: \(\left( c,{c_0}'\right) \), \(\left( c,{c_1}'\right) \), \(\left( {c_0}',c\right) \) and \(\left( {c_1}',c\right) \), where \({c_0}'\) and \({c_1}'\) are fresh encryptions of the messages 0 and 1, respectively. This is so since, otherwise, either one could use \({v}'\) to break the semantic security of \(\varPi \) (contradicting that it is \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) secure), or \({v}'\) would not be suitable for \(\mathsf {IND}\)-\({\mathsf {prp}}^{}\)-\(\mathsf {RCCA}\), as one of \(\delta \left( {D}'_{\mathtt {sk}}\left( c,c\right) , {D}'_{\mathtt {sk}}\left( c,{c_0}'\right) \right) \), \(\delta \left( {D}'_{\mathtt {sk}}\left( c,c\right) , {D}'_{\mathtt {sk}}\left( c,{c_1}'\right) \right) \), \(\delta \left( {D}'_{\mathtt {sk}}\left( c,c\right) , {D}'_{\mathtt {sk}}\left( {c_0}',c\right) \right) \) and \(\delta \left( {D}'_{\mathtt {sk}}\left( c,c\right) , {D}'_{\mathtt {sk}}\left( {c_1}',c\right) \right) \) is no longer negligible. As such, a distinguisher can ask for the decryption of these four ciphertexts and use the replies to distinguish the \(\mathsf {IND}\)-\({\mathsf {prp}}^{}\)-\(\mathsf {RCCA}\) game systems.    \(\square \)

Lemma 14

\(\mathsf {IND}\)-\({\mathsf {prp}}^{}\)-\(\mathsf {RCCA}\) \(\not \Rightarrow \) \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\).

Proof

Consider an \(\mathsf {IND}\)-\({\mathsf {prp}}^{}\)-\(\mathsf {RCCA}\) secure PKE scheme \(\varPi = \left( G,E,D\right) \); we create a scheme \({\varPi }' = \left( {G}',{E}',{D}'\right) \) that is \(\mathsf {IND}\)-\({\mathsf {prp}}^{}\)-\(\mathsf {RCCA}\) secure but not \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) secure: \({\varPi }'\) works exactly as \(\varPi \), except that \({E}'\) appends a bit 0 to the ciphertexts generated by E, and during decryption \({D}'\) ignores the last bit added by \({E}'\). Since \(\varPi \) is \(\mathsf {IND}\)-\({\mathsf {prp}}^{}\)-\(\mathsf {RCCA}\) secure, so is \({\varPi }'\) (with a replay-detection algorithm that simply ignores the appended bit). However, \({\varPi }'\) is not \(\mathsf {IND}\)-\(\mathsf {CCA\text {-}2}\) secure: flipping the appended bit of the challenge ciphertext yields a different ciphertext that decrypts to the challenge plaintext, and querying it in the second decryption stage reveals \(m_{b}\).    \(\square \)