In this section we present a construction of a public-key deterministic encryption scheme that is secure according to our notion of adaptive security even when adversaries can access a decryption oracle. As discussed in Sect. 1.3, our construction is inspired by that of Boldyreva et al. [5] combined with the approach of Boneh and Boyen [1] (and its refinement by Cash et al. [9]) for converting a large class of selectively secure IBE schemes to fully secure ones, and the notion of \(\mathcal {R}\)-lossy trapdoor functions that we introduced in Sect. 6 following Boyle et al. [8]. In what follows we formally describe the scheme, discuss the parameters that we obtain using known instantiations of its building blocks, and prove its security.
The scheme \(\varvec{\mathcal {D}\mathcal {E}_\mathrm{CCA}}\) Let \(n = n(\lambda )\), \(\ell = \ell (\lambda )\), \(v=v(\lambda )\), \(t_1 = t_1(\lambda )\), \(t_2=t_2(\lambda )\), \(\delta _1 = \delta _1(\lambda )\), and \(\delta _2=\delta _2(\lambda )\) be functions of the security parameter \(\lambda \in \mathbb {N}\). Our construction relies on the following building blocks:
1. A collection \(\mathcal {H}_\lambda \) of admissible hash functions \(h:\{0,1\}^n \rightarrow \{0,1\}^v\) for every \(\lambda \in \mathbb {N}\).
2. A collection \((\mathsf{Gen}_0, \mathsf{Gen}_1, \mathsf{F}, \mathsf{F}^{-1})\) of \((n, \ell )\)-lossy trapdoor functions.
3. A collection \((\mathsf{Gen}_\mathtt {BM},\mathsf{G},\mathsf{G}^{-1})\) of \(\mathcal {R}^\mathtt {BM}\)-\((n,\ell )\)-lossy trapdoor functions.
4. A \(t_1\)-wise \(\delta _1\)-dependent collection \(\Pi ^{(1)}_\lambda \) of permutations over \(\{0,1\}^n\) for every \(\lambda \in \mathbb {N}\).
5. A \(t_2\)-wise \(\delta _2\)-dependent collection \(\Pi ^{(2)}_\lambda \) of permutations over \(\{0,1\}^n\) for every \(\lambda \in \mathbb {N}\).
Our scheme \(\mathcal {D}\mathcal {E}_\mathrm{CCA} = (\mathsf{KeyGen}, \mathsf{Enc}, \mathsf{Dec})\) is defined as follows:
- Key generation The key-generation algorithm \(\mathsf{KeyGen}\) on input \(1^{\lambda }\) samples \(h \leftarrow \mathcal {H}_\lambda \), \((\sigma _f,\tau _f) \leftarrow \mathsf{Gen}_1(1^\lambda )\), \(K \leftarrow \mathcal {K}_\lambda \), \((\sigma _g, \tau _g) \leftarrow \mathsf{Gen}_\mathtt {BM}(1^{\lambda }, K)\), \(\pi _1 \leftarrow \Pi ^{(1)}_\lambda \), and \(\pi _2 \leftarrow \Pi ^{(2)}_\lambda \). Then, it outputs \(pk = \left( h, \sigma _f, \sigma _g, \pi _1, \pi _2 \right) \) and \(sk= (\tau _f, \tau _g)\).
- Encryption The encryption algorithm \(\mathsf{Enc}\) on input a public key \(pk = (h, \sigma _f, \sigma _g, \pi _1,\pi _2)\) and a message \(m \in \{0,1\}^n\) outputs
$$\begin{aligned} c = \Big (h(\pi _1(m)),\;\; \mathsf{F}\big (\sigma _f,\pi _2(m)\big ),\;\; \mathsf{G}\big (\sigma _g, h(\pi _1(m)), \pi _2(m)\big ) \Big ) . \end{aligned}$$
- Decryption The decryption algorithm \(\mathsf{Dec}\) on input a secret key \(sk = (\tau _f, \tau _g)\) and a ciphertext tuple \((c_h,c_f,c_g)\) first computes \(m = \pi _2^{-1} \left( \mathsf{F}^{-1}(\tau _f, c_f) \right) \). Then, if \(\mathsf{Enc}_{pk}(m)=(c_h,c_f,c_g)\) it outputs m, and otherwise it outputs \(\bot \).
In other words, the decryption algorithm inverts \(c_f\) using the trapdoor \(\tau _f\), and outputs m if the ciphertext is well-formed.
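For concreteness, the following minimal Python sketch summarizes the structure of the encryption and decryption procedures. The attributes of pk and sk (e.g., pk.h, pk.F, pk.pi2_inv) are hypothetical stand-ins for the building blocks listed above, not concrete instantiations.

```python
# A structural sketch of DE_CCA (not a secure instantiation): pk.h, pk.F,
# pk.G, pk.pi1, pk.pi2, pk.pi2_inv, and pk.F_inv are hypothetical stand-ins
# for the admissible hash function, the lossy TDF, the R^BM-lossy TDF, and
# the t-wise almost-independent permutations.

def encrypt(pk, m):
    """Compute the three ciphertext components (c_h, c_f, c_g)."""
    tag = pk.h(pk.pi1(m))                    # c_h = h(pi1(m))
    c_f = pk.F(pk.sigma_f, pk.pi2(m))        # c_f = F(sigma_f, pi2(m))
    c_g = pk.G(pk.sigma_g, tag, pk.pi2(m))   # c_g = G(sigma_g, c_h, pi2(m))
    return (tag, c_f, c_g)

def decrypt(pk, sk, c):
    """Invert c_f with the trapdoor tau_f, then verify by re-encryption."""
    _, c_f, _ = c
    m = pk.pi2_inv(pk.F_inv(sk.tau_f, c_f))  # m = pi2^{-1}(F^{-1}(tau_f, c_f))
    # Since Enc is deterministic, re-encrypting m detects malformed ciphertexts.
    return m if encrypt(pk, m) == c else None  # None plays the role of bottom
```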
Theorem 7.1
The scheme \(\mathcal {D}\mathcal {E}_\mathrm{CCA}\) is block-wise (p, T, k)-ACD-CCA-secure for any \(n = n(\lambda )\), \(\ell = \ell (\lambda )\), \(v=v(\lambda )\), \(p=p(\lambda )\), and \(T = T(\lambda )\) by setting
$$\begin{aligned} t_1&=p+(T-1)\cdot n+v+\omega (\log {\lambda }),&\delta _1&=2^{-nt_1},\\ t_2&=p+(T-1)\cdot n +v + n - (2\ell - n)+\omega (\log {\lambda }),&\delta _2&=2^{-nt_2},\\ k&=\max \big (n-(2\ell -n),v\big )+2\log {t_2}+\omega (\log {\lambda }). \end{aligned}$$
Parameters Using existing constructions of admissible hash functions and lossy trapdoor functions (see Sects. 2.2 and 2.3, respectively), and using our construction of \(\mathcal {R}^\mathtt {BM}\)-lossy trapdoor functions (see Sect. 6), for any \(n = n(\lambda )\) and for any constant \(0< \epsilon < 1\) we can instantiate our scheme with \(v = n^{\epsilon }\) and \(\ell = n - n^{\epsilon }\). Therefore, for any \(n = n(\lambda )\), \(p=p(\lambda )\), and \(T = T(\lambda )\), we obtain schemes with
$$\begin{aligned} t_1&=p+(T-1)\cdot n+ n^{\epsilon } +\omega (\log {\lambda }),&\delta _1&=2^{-nt_1},\\ t_2&=p+(T-1)\cdot n +3 n^{2\epsilon }+\omega (\log {\lambda }),&\delta _2&=2^{-nt_2},\\ k&=2 n^{2\epsilon }+\omega (\log {\lambda }). \end{aligned}$$
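As a quick sanity check on these expressions, one can evaluate the parameter formulas of Theorem 7.1 for concrete values. In the following sketch the particular numbers, and the replacement of the \(\omega (\log \lambda )\) slack by \(\log ^2 \lambda \), are illustrative assumptions only.

```python
import math

# Evaluate the parameter formulas of Theorem 7.1 for illustrative values.
lam = 128                         # security parameter
n, v = 4096, 64                   # message length and admissible-hash output length
ell = n - 64                      # lossiness parameter of the (n, ell)-lossy TDFs
p, T = lam, 4                     # 2^p sources, T blocks per source
slack = int(math.log2(lam) ** 2)  # stand-in for the omega(log lambda) term

residual = n - (2 * ell - n)      # = 2(n - ell), the residual image exponent
t1 = p + (T - 1) * n + v + slack
t2 = p + (T - 1) * n + v + residual + slack
k = max(residual, v) + 2 * math.ceil(math.log2(t2)) + slack

print(f"t1 = {t1}, t2 = {t2}, required block min-entropy k = {k}")
```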
Proof overview. On a high level, an encryption of a message m in our scheme consists of three ciphertext components. The first ciphertext component is a short tag \(h(\pi _1(m))\), where h is an admissible hash function and \(\pi _1\) is a permutation. Looking ahead, our high-moment crooked leftover hash lemma will enable us to argue that such a tag reveals essentially no information on m, as h is a compressing function. The second ciphertext component is \(f(\pi _2(m))\), where f is an injective function sampled from a collection of lossy trapdoor functions, and \(\pi _2\) is a permutation. The third ciphertext component is \(g(h(\pi _1(m)), \pi _2(m))\) where g is sampled from a collection of \(\mathcal {R}^\mathtt {BM}\)-lossy trapdoor functions, and is evaluated on \(\pi _2(m)\) using the tag \(h(\pi _1(m))\). The role of the second and third components is to allow us to prove security using a generalization of the “all-but-one” simulation paradigm, as discussed in Sect. 1.3, to our setting of adaptive adversaries.
Specifically, in our proof of security, the combination of the admissible hash function and the \(\mathcal {R}^\mathtt {BM}\)-lossy trapdoor function enables us to generate a public key for which, with a non-negligible probability, all decryption queries correspond to injective tags for g, while the challenge ciphertext corresponds to a lossy tag for g—even when the challenge plaintext is not known in advance. This is done via a subtle artificial abort argument, similar to the one of Cash et al. [9]. Looking ahead, such a partitioning of the tags will enable us to simulate the decryption oracle for answering all decryption queries, and apply our high-moment crooked leftover hash lemma to argue that the second and third ciphertext components, \(f(\pi _2(m))\) and \(g(h(\pi _1(m)), \pi _2(m))\), reveal essentially no information on m. For applying our lemma, we observe that f can be replaced by a lossy function \(\widetilde{f}\) (while answering decryption queries through the trapdoor for g—as all decryption queries correspond to injective tags for g), and that g is evaluated on \(\pi _2(m)\) using a lossy tag \(h(\pi _1(m))\).
Proof of Theorem 7.1. Using Theorem 3.7, it suffices to prove that \(\mathcal {D}\mathcal {E}_\mathrm{CCA}\) is block-wise (p, T, k)-ACD1-CCA-secure. Let \(\mathcal {A}\) be a \(2^p\)-bounded (T, k)-block-source chosen-ciphertext adversary that queries the real-or-random oracle \(\mathsf {RoR}\) exactly once. We assume without loss of generality that \(\mathcal {A}\) always makes q decryption queries for some polynomial \(q=q(\lambda )\). We denote by \(c^{(1)},\ldots ,c^{(q)}\) the random variables corresponding to these decryption queries, and by \(\varvec{c}^\mathbf {*}= \left( c^*_1, \ldots , c^*_T \right) \) the vector of random variables corresponding to the challenge ciphertexts returned by the \(\mathsf {RoR}\) oracle.
For every \(i \in \{0, \ldots , T\}\) we define an experiment \(\mathsf {Expt}^{(i)}\) that is obtained from the experiment \(\mathsf {Expt}^{\mathsf {realCCA}}_{\mathcal {D}\mathcal {E}_\mathrm{CCA}, \mathcal {A}}\) by modifying the distribution of the challenge ciphertext. Recall that in the experiment \(\mathsf {Expt}^{\mathsf {realCCA}}_{\mathcal {D}\mathcal {E}_\mathrm{CCA}, \mathcal {A}}\) the oracle \(\mathsf {RoR}\) is given a block-source \(\varvec{M}\), samples \((m_1, \ldots , m_T) \leftarrow \varvec{M}\), and outputs the challenge ciphertext \(\big ( \mathsf{Enc}_{pk}(m_1), \ldots , \mathsf{Enc}_{pk}(m_T) \big )\). In the experiment \(\mathsf {Expt}^{(i)}\), the oracle \(\mathsf {RoR}\) on input a block-source \(\varvec{M}\), samples \((m_1, \ldots , m_T) \leftarrow \varvec{M}\) and \((u_1, \ldots , u_T) \leftarrow \left( \{0,1\}^n \right) ^T\), and outputs the challenge ciphertext \(\left( \mathsf{Enc}_{pk}(m_1), \ldots , \mathsf{Enc}_{pk}(m_{T-i}), \mathsf{Enc}_{pk}(u_{T-i+1}), \ldots , \mathsf{Enc}_{pk}(u_{T}) \right) \). That is, the first \(T-i\) challenge messages are sampled according to \(\varvec{M}\), and the remaining messages are sampled independently and uniformly at random. Then, observe that \(\mathsf {Expt}^{(0)} = \mathsf {Expt}^{\mathsf {realCCA}}_{\mathcal {D}\mathcal {E}_\mathrm{CCA}, \mathcal {A}}\) and \(\mathsf {Expt}^{(T)} = \mathsf {Expt}^{\mathsf {randCCA}}_{\mathcal {D}\mathcal {E}_\mathrm{CCA}, \mathcal {A}}\). Therefore, it suffices to prove that for every \(i \in \{0, \ldots , T-1\}\) the expression
$$\begin{aligned} \left| \Pr \! \left[ {\mathsf {Expt}^{(i)}(\lambda ) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(i+1)}(\lambda ) = 1} \right] \right| \end{aligned}$$
(7.1)
is negligible in the security parameter \(\lambda \). For the remainder of the proof we fix the value of i and focus on the experiments \(\mathsf {Expt}^{(i)}\) and \(\mathsf {Expt}^{(i+1)}\). We denote by \(\mathsf {RoR}(i, pk, \cdot )\) and \(\mathsf {RoR}(i+1, pk, \cdot )\) the encryption oracles of these two experiments, respectively, and observe that the only difference between them is the distribution of the challenge message \(m_{T-i}\).
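In code, the hybrid real-or-random oracle of \(\mathsf {Expt}^{(i)}\) can be summarized as follows. This is a sketch in which M.sample() is a hypothetical helper returning a draw \((m_1, \ldots , m_T)\) from the block-source, and encrypt is the sketch from above.

```python
import secrets

def ror_hybrid(i, pk, M, T, n):
    """RoR oracle of Expt^(i): the first T-i challenge messages are sampled
    from the block-source M, the last i are independent and uniform."""
    msgs = list(M.sample())                 # (m_1, ..., m_T) <- M
    for idx in range(T - i, T):             # replace the last i blocks
        msgs[idx] = secrets.randbits(n)     # u <- {0,1}^n, fresh and uniform
    return [encrypt(pk, m) for m in msgs]
```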
In what follows, for each \(j \in \{i, i+1\}\) we describe eight experiments, \(\mathsf {Expt}^{(j)}_0, \ldots , \mathsf {Expt}^{(j)}_7\), and derive a series of claims relating them. We then combine these claims to bound the expression in Eq. (7.1).
Experiment \(\varvec{\mathsf {Expt}^{(j)}_0}\) This experiment is the experiment \(\mathsf {Expt}^{(j)}\) as defined above.
Experiment \(\varvec{\mathsf {Expt}^{(j)}_1}\) This experiment is obtained from \(\mathsf {Expt}^{(j)}_0\) by outputting an independently and uniformly sampled bit whenever the \((T-i)\)th challenge message and the messages corresponding to the decryption queries \(c^{(1)}, \ldots , c^{(q)}\) define a “bad” sequence of inputs for the admissible hash function h (recall the efficiently recognizable set \(\mathsf{Unlikely}_h\) from Definition 2.3).
Formally, let \(x^* = \pi _1(m_{T-i})\) for \(j = i\) and let \(x^* = \pi _1(u_{T-i})\) for \(j = i+1\). In addition, for any \(\zeta \in [q]\), if \(\mathsf{Dec}_{sk}(c^{(\zeta )}) \ne \bot \) then let \(x_\zeta = \pi _1\left( \mathsf{Dec}_{sk}\left( c^{(\zeta )}\right) \right) \), and if \(\mathsf{Dec}_{sk}\left( c^{(\zeta )}\right) = \bot \) then let \(x_\zeta \) be an arbitrary value that is different from \(x^*, x_1, \ldots , x_{\zeta -1}\). The experiment \(\mathsf {Expt}^{(j)}_1\) is defined by running \(\mathsf {Expt}^{(j)}_0\), and then outputting either an independently and uniformly sampled bit if \((x^*,x_1,\ldots ,x_q) \in \mathsf{Unlikely}_h\), or the output of \(\mathsf {Expt}^{(j)}_0\) if \((x^*,x_1,\ldots ,x_q) \notin \mathsf{Unlikely}_h\).
Claim 7.2
For each \(j \in \{i,i+1\}\), it holds that
$$\begin{aligned} \left| \Pr \! \left[ {\mathsf {Expt}^{(j)}_0({\lambda }) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(j)}_1({\lambda }) = 1} \right] \right| \le \mathrm {negl}({\lambda }) . \end{aligned}$$
Proof
By the definition of admissible hash functions (see Definition 2.3), the probability that \((x^*,x_1,\ldots ,x_q) \in \mathsf{Unlikely}_h\) is some negligible function \(\nu (\lambda )\). Let \(\mathtt {Bad}^{(j)}\) denote the event in which \((x^*,x_1,\ldots ,x_q) \in \mathsf{Unlikely}_h\) in the experiment \(\mathsf {Expt}^{(j)}_0\). Then,
$$\begin{aligned}&\left| \Pr \! \left[ {\mathsf {Expt}^{(j)}_1(\lambda )=1} \right] -\Pr \! \left[ {\mathsf {Expt}^{(j)}_0(\lambda )=1} \right] \right| \\&\quad \le \Pr \! \left[ {\varvec{\lnot }\mathtt {Bad}^{(j)}} \right] \cdot \left| \Pr \! \left[ {\mathsf {Expt}^{(j)}_1(\lambda )=1\;\Big |\;\varvec{\lnot }\mathtt {Bad}^{(j)}} \right] -\Pr \! \left[ {\mathsf {Expt}^{(j)}_0(\lambda )=1\;\Big |\;\varvec{\lnot }\mathtt {Bad}^{(j)}} \right] \right| \\&\qquad + \Pr \! \left[ {\mathtt {Bad}^{(j)}} \right] \cdot \left| \Pr \! \left[ {\mathsf {Expt}^{(j)}_1(\lambda )=1\;\Big |\;\mathtt {Bad}^{(j)}} \right] -\Pr \! \left[ {\mathsf {Expt}^{(j)}_0(\lambda )=1\;\Big |\;\mathtt {Bad}^{(j)}} \right] \right| \\&\quad = \Pr \! \left[ {\varvec{\lnot }\mathtt {Bad}^{(j)}} \right] \cdot 0 + \Pr \! \left[ {\mathtt {Bad}^{(j)}} \right] \cdot \left| \frac{1}{2} - \Pr \! \left[ {\mathsf {Expt}^{(j)}_0(\lambda )=1\;\Big |\;\mathtt {Bad}^{(j)}} \right] \right| \\&\quad \le \frac{\Pr \! \left[ {\mathtt {Bad}^{(j)}} \right] }{2} \\&\quad = \frac{\nu (\lambda )}{2} , \end{aligned}$$
which is negligible as required. \(\square \)
Experiment \(\varvec{\mathsf {Expt}^{(j)}_2}\) This experiment is obtained from \(\mathsf {Expt}^{(j)}_1\) by outputting the output of \(\mathsf {Expt}^{(j)}_1\) with probability \(1/\Delta \), and outputting an independent and uniform bit with probability \(1 - 1/\Delta \), where \(\Delta = \Delta (\lambda )\) is the polynomial corresponding to q from the definition of admissible hash functions (see Definition 2.3). The following claim follows in a straightforward manner.
Claim 7.3
It holds that
$$\begin{aligned} \left| \Pr \! \left[ {\mathsf {Expt}^{(i)}_2({\lambda }) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(i+1)}_2({\lambda }) = 1} \right] \right| = \frac{1}{\Delta } \cdot \left| \Pr \! \left[ {\mathsf {Expt}^{(i)}_1({\lambda }) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(i+1)}_1({\lambda }) = 1} \right] \right| . \end{aligned}$$
Proof
For each \(j \in \{i,i+1\}\) it holds that
$$\begin{aligned} \Pr \! \left[ {\mathsf {Expt}^{(j)}_2({\lambda }) = 1} \right] = \frac{1}{\Delta } \cdot \Pr \! \left[ {\mathsf {Expt}^{(j)}_1({\lambda }) = 1} \right] + \left( 1 - \frac{1}{\Delta } \right) \cdot \frac{1}{2} . \end{aligned}$$
The claim follows by subtracting this equality for \(j=i+1\) from the one for \(j=i\), as the additive term \(\left( 1 - \frac{1}{\Delta } \right) \cdot \frac{1}{2}\) cancels. \(\square \)
Now, from the triangle inequality and Claim 7.3, we have the following series of inequalities.
$$\begin{aligned}&\left| \Pr \! \left[ {\mathsf {Expt}^{(i)}_0(\lambda ) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(i+1)}_0(\lambda ) = 1} \right] \right| \\&\quad \le \left| \Pr \! \left[ {\mathsf {Expt}^{(i)}_0(\lambda ) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(i)}_1(\lambda ) = 1} \right] \right| \\&\qquad + \left| \Pr \! \left[ {\mathsf {Expt}^{(i)}_1(\lambda ) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(i+1)}_1(\lambda ) = 1} \right] \right| \\&\qquad + \left| \Pr \! \left[ {\mathsf {Expt}^{(i+1)}_1(\lambda ) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(i+1)}_0(\lambda ) = 1} \right] \right| \\&\quad \le \left| \Pr \! \left[ {\mathsf {Expt}^{(i)}_0(\lambda ) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(i)}_1(\lambda ) = 1} \right] \right| \\&\qquad + \Delta \cdot \left| \Pr \! \left[ {\mathsf {Expt}^{(i)}_2(\lambda ) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(i+1)}_2(\lambda ) = 1} \right] \right| \\&\qquad + \left| \Pr \! \left[ {\mathsf {Expt}^{(i+1)}_1(\lambda ) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(i+1)}_0(\lambda ) = 1} \right] \right| , \end{aligned}$$
which, together with Claim 7.2, leads to the following corollary.
Corollary 7.4
It holds that
$$\begin{aligned}&\left| \Pr \! \left[ {\mathsf {Expt}^{(i)}_0(\lambda ) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(i+1)}_0(\lambda ) = 1} \right] \right| \\&\qquad \qquad \le \Delta \cdot \left| \Pr \! \left[ {\mathsf {Expt}^{(i)}_2(\lambda ) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(i+1)}_2(\lambda ) = 1} \right] \right| + \mathrm {negl}({\lambda }) . \end{aligned}$$
Experiment \(\varvec{\mathsf {Expt}^{(j)}_3}\) This experiment is obtained from \(\mathsf {Expt}^{(j)}_2\) by changing the abort condition. Specifically, at the end of experiment \(\mathsf {Expt}^{(j)}_2\), we sample an independent initialization value \(K'\) (in addition to K that is used by the key-generation algorithm), and denote by \(\mathsf {Partition}^{(j)}_{K',h}\) the event in which \(P_{K'}(h(x^*))=\mathtt {Lossy}\) and \(P_{K'}(h(x_\zeta ))=\mathtt {Inj}\) for any \(\zeta \in [q]\) such that \(\mathsf{Dec}_{sk}\left( c^{(\zeta )}\right) \ne \bot \), where \(P_{K'}:\{0,1\}^v \rightarrow \{\mathtt {Lossy},\mathtt {Inj}\}\) is the partitioning function of the admissible hash function (recall that the values \(x^*, x_1, \ldots , x_q\) were defined in \(\mathsf {Expt}^{(j)}_1\)).
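The event \(\mathsf {Partition}^{(j)}_{K',h}\) can be phrased as the following check. Here P is a hypothetical stand-in for the partitioning function of the admissible hash function, and the list decrypted records, for each decryption query, whether it decrypted successfully.

```python
def partition_event(P, K, h, x_star, xs, decrypted):
    """Check the event Partition_{K,h}: the challenge input x* maps to a
    lossy tag, while every decryption query that decrypts successfully
    maps to an injective tag. `P` is a hypothetical stand-in for the
    partitioning function P_K : {0,1}^v -> {Lossy, Inj}."""
    if P(K, h(x_star)) != "Lossy":
        return False
    return all(P(K, h(x)) == "Inj"
               for x, ok in zip(xs, decrypted) if ok)
```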
We would like to replace the abort condition from experiment \(\mathsf {Expt}^{(j)}_2\) (which is independent of the adversary’s view) with one that depends on the event \(\mathsf {Partition}^{(j)}_{K',h}\). Unfortunately, all we are guaranteed is that the event \(\mathsf {Partition}^{(j)}_{K',h}\) occurs with probability that is at least \(1/\Delta \) (assuming that \((x^*,x_1,\ldots ,x_q) \notin \mathsf{Unlikely}_h\)). Therefore, if \((x^*,x_1,\ldots ,x_q) \notin \mathsf{Unlikely}_h\), we first approximate the value
$$\begin{aligned} p^{(j)} = \Pr _{K' \leftarrow \mathcal {K}_\lambda }\left[ \mathsf {Partition}^{(j)}_{K',h}\; |\; (h(x^*), h(x_1),\ldots , h(x_q))\right] \end{aligned}$$
by sampling a sufficient number of independent initialization values \(K''\leftarrow \mathcal {K}_\lambda \) and observing whether or not the event \(\mathsf {Partition}^{(j)}_{K'',h}\) occurs (with respect to the fixed values \(h(x^*), h(x_1),\ldots , h(x_q)\)). For any polynomial S, Hoeffding's inequality yields that with \(\lceil \lambda \cdot S^2 \Delta ^2 \rceil \) samples we can obtain an approximation \(\tilde{p}^{(j)} \ge 1/\Delta \) of \(p^{(j)}\) such that
$$\begin{aligned} \Pr \! \left[ {\left| p^{(j)} - \tilde{p}^{(j)} \right| \ge \frac{1}{\Delta \cdot S} } \right] \le \frac{1}{2^{\lambda }} . \end{aligned}$$
(7.2)
Then, looking all the way back to experiment \(\mathsf {Expt}^{(j)}_1\), the output of \(\mathsf {Expt}^{(j)}_3\) is computed as follows:
1. If \((x^*,x_1,\ldots ,x_q) \in \mathsf{Unlikely}_h\) or if the event \(\mathsf {Partition}^{(j)}_{K',h}\) does not occur, then we output an independently and uniformly sampled bit.
2. If \((x^*,x_1,\ldots ,x_q) \notin \mathsf{Unlikely}_h\) and the event \(\mathsf {Partition}^{(j)}_{K',h}\) does occur, then we output the output of \(\mathsf {Expt}^{(j)}_1\) with probability \(1/(\Delta \tilde{p}^{(j)})\), and we “artificially” enforce an abort and output an independent and uniform bit with probability \(1 - 1/(\Delta \tilde{p}^{(j)})\) (a sketch of this step follows the list).
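Putting the pieces together, a sketch of the approximation-and-abort step is given below (the \(\mathsf{Unlikely}_h\) case is omitted). Here sample_key and partition_event are the hypothetical helpers from above, and the concrete sample count is an assumption chosen to satisfy Eq. (7.2) via Hoeffding's inequality.

```python
import math
import secrets

def artificial_abort(P, K_prime, h, x_star, xs, decrypted, sample_key,
                     Delta, S, lam):
    """Return True if Expt_3 aborts (outputs a fresh uniform bit) and
    False if it keeps the output of Expt_1."""
    if not partition_event(P, K_prime, h, x_star, xs, decrypted):
        return True                           # no partition: always abort
    # Estimate p = Pr_{K''}[Partition_{K'',h}] by Monte Carlo sampling.
    N = math.ceil(lam * (S * Delta) ** 2)     # assumed count for Eq. (7.2)
    hits = sum(partition_event(P, sample_key(), h, x_star, xs, decrypted)
               for _ in range(N))
    p_tilde = max(hits / N, 1.0 / Delta)      # clamp so that p_tilde >= 1/Delta
    keep = 1.0 / (Delta * p_tilde)            # keep Expt_1's output w.p. keep
    return secrets.randbelow(2**53) >= keep * 2**53
```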
Claim 7.5
For each \(j \in \{i,i+1\}\) and for any polynomial \(S = S(\lambda )\) it holds that
$$\begin{aligned} \left| \Pr \! \left[ {\mathsf {Expt}^{(j)}_2({\lambda }) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(j)}_3({\lambda }) = 1} \right] \right| \le \frac{1}{\Delta S} + \frac{1}{2^{\lambda }}. \end{aligned}$$
Proof
Denote by \(\varvec{\lnot }\mathtt {Abort}^{(j)}_2\) and \(\varvec{\lnot }\mathtt {Abort}^{(j)}_3\) the events in which the experiments \(\mathsf {Expt}^{(j)}_2\) and \(\mathsf {Expt}^{(j)}_3\) output the output of \(\mathsf {Expt}^{(j)}_1\), respectively. Then,
$$\begin{aligned} \Pr \! \left[ {\varvec{\lnot }\mathtt {Abort}^{(j)}_2} \right] = \frac{1}{\Delta } \text { and } \Pr \! \left[ {\varvec{\lnot }\mathtt {Abort}^{(j)}_3} \right] = p^{(j)} \cdot \frac{1}{\Delta \tilde{p}^{(j)}} = \frac{1}{\Delta } \cdot \frac{p^{(j)}}{\tilde{p}^{(j)}} .\end{aligned}$$
Equation (7.2) implies that with probability at least \(1 - 2^{-\lambda }\) it holds that
$$\begin{aligned} \left| \Pr \! \left[ {\varvec{\lnot }\mathtt {Abort}^{(j)}_2} \right] - \Pr \! \left[ {\varvec{\lnot }\mathtt {Abort}^{(j)}_3} \right] \right| = \frac{1}{\Delta } \cdot \left| \frac{\tilde{p}^{(j)} - p^{(j)}}{\tilde{p}^{(j)}} \right| \le \frac{1}{\Delta ^2 S \tilde{p}^{(j)}} \le \frac{1}{\Delta S} .\quad \end{aligned}$$
(7.3)
As (7.3) holds for any \((x^*, x_1,\ldots , x_q)\) with probability at least \(1 - 2^{-\lambda }\), we obtain that the statistical distance between the outputs of experiments \(\mathsf {Expt}^{(j)}_2\) and \(\mathsf {Expt}^{(j)}_3\) is at most \(1/(\Delta S) + 2^{-\lambda }\). \(\square \)
Now, from the triangle inequality, we get
$$\begin{aligned}&\Delta \cdot \left| \Pr \! \left[ {\mathsf {Expt}^{(i)}_2(\lambda ) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(i+1)}_2(\lambda ) = 1} \right] \right| \\&\qquad \qquad \qquad \le \Delta \cdot \left| \Pr \! \left[ {\mathsf {Expt}^{(i)}_2(\lambda ) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(i)}_3(\lambda ) = 1} \right] \right| \\&\qquad \qquad \qquad \qquad \qquad + \Delta \cdot \left| \Pr \! \left[ {\mathsf {Expt}^{(i)}_3(\lambda ) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(i+1)}_3(\lambda ) = 1} \right] \right| \\&\qquad \qquad \qquad \qquad \qquad + \Delta \cdot \left| \Pr \! \left[ {\mathsf {Expt}^{(i+1)}_3(\lambda ) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(i+1)}_2(\lambda ) = 1} \right] \right| . \end{aligned}$$
Combined with Claim 7.5, this gives us the following corollary.
Corollary 7.6
For any polynomial \(S=S(\lambda )\) it holds that
$$\begin{aligned}&\Delta \cdot \left| \Pr \! \left[ {\mathsf {Expt}^{(i)}_2(\lambda ) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(i+1)}_2(\lambda ) = 1} \right] \right| \\&\qquad \qquad \le 2 \cdot \left( \frac{1}{S} + \frac{\Delta }{2^{\lambda }} \right) + \Delta \cdot \left| \Pr \! \left[ {\mathsf {Expt}^{(i)}_3(\lambda ) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(i+1)}_3(\lambda ) = 1} \right] \right| . \end{aligned}$$
Experiment \(\varvec{\mathsf {Expt}^{(j)}_4}\) This experiment is obtained from \(\mathsf {Expt}^{(j)}_3\) by replacing the event \(\mathsf {Partition}_{K',h}\) with the event \(\mathsf {Partition}_{K,h}\). That is, we do not sample a new initialization value \(K'\) for the partitioning, but rather consider the partition defined by the initialization value K used by the key-generation algorithm.
Claim 7.7
For each \(j \in \{i,i+1\}\) it holds that
$$\begin{aligned} \left| \Pr \! \left[ {\mathsf {Expt}^{(j)}_3(\lambda ) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(j)}_4(\lambda ) = 1} \right] \right| \le \mathrm {negl}({\lambda }) .\end{aligned}$$
Proof
We observe that any adversary \(\mathcal {A}\) for which the above difference is non-negligible can be used to distinguish initialization values of the \(\mathcal {R}\)-lossy trapdoor function family. The distinguisher first chooses two initialization values \(K, K' \leftarrow \mathcal {K}_\lambda \) independently and uniformly at random. Then, upon receiving a public index \(\sigma \) sampled from one of the two ensembles \(\{ \sigma : (\sigma , \tau ) \leftarrow \mathsf{Gen}_\mathtt {BM}(1^\lambda , K_\lambda ) \}_{\lambda \in \mathbb {N}}\) or \(\{ \sigma : (\sigma , \tau ) \leftarrow \mathsf{Gen}_\mathtt {BM}(1^\lambda , K'_\lambda ) \}_{\lambda \in \mathbb {N}}\), the distinguisher efficiently simulates \(\mathcal {A}\) as follows: it samples two permutations and a lossy trapdoor function as in \(\mathsf {Expt}^{(j)}_3\), but uses \(\sigma _g=\sigma \) (one of the two possible function indices returned by the \(\mathcal {R}\)-lossy challenge) to set up the public key pk. It then proceeds to simulate \(\mathsf {Expt}^{(j)}_3\) with the initialization value K.
If \(\sigma \) was sampled from the ensemble corresponding to \(K'\) then the adversary participates exactly in \(\mathsf {Expt}^{(j)}_3\). However, if \(\sigma \) was sampled from the ensemble corresponding to K then the simulation proceeds exactly as in \(\mathsf {Expt}^{(j)}_4\). \(\square \)
Corollary 7.8
It holds that
$$\begin{aligned}&\left| \Pr \! \left[ {\mathsf {Expt}^{(i)}_3(\lambda ) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(i+1)}_3(\lambda ) = 1} \right] \right| \\&\quad \quad \le \left| \Pr \! \left[ {\mathsf {Expt}^{(i)}_4(\lambda ) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(i+1)}_4(\lambda ) = 1} \right] \right| + \mathrm {negl}({\lambda }). \end{aligned}$$
Experiment \(\varvec{\mathsf {Expt}^{(j)}_5}\) This experiment is obtained from \(\mathsf {Expt}^{(j)}_4\) by not taking into account the event \((x^*,x_1,\ldots ,x_q) \in \mathsf{Unlikely}_h\) when computing the output of the experiment. Looking all the way back to experiment \(\mathsf {Expt}^{(j)}_0\), the output of \(\mathsf {Expt}^{(j)}_5\) is computed as follows:
1. If the event \(\mathsf {Partition}^{(j)}_{K,h}\) does not occur, then we output an independent uniform bit.
2. If the event \(\mathsf {Partition}^{(j)}_{K,h}\) does occur, then we output the output of \(\mathsf {Expt}^{(j)}_0\) with probability \(1/(\Delta \tilde{p}^{(j)})\), and we “artificially” enforce an abort and output an independent and uniform bit with probability \(1 - 1/(\Delta \tilde{p}^{(j)})\).
Note that the event \((x^*,x_1,\ldots ,x_q) \in \mathsf{Unlikely}_h\) has the same probability in the experiments \(\mathsf {Expt}^{(j)}_4\) and \(\mathsf {Expt}^{(j)}_5\), and that this probability is upper bounded by some negligible function \(\nu (\lambda )\) (see Claim 7.2). Therefore, for each \(j \in \{i, i+1\}\) we have that \(\left| \Pr \! \left[ {\mathsf {Expt}^{(j)}_4(\lambda ) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(j)}_5(\lambda ) = 1} \right] \right| \le \mathrm {negl}({\lambda })\), implying the following corollary.
Corollary 7.9
It holds that
$$\begin{aligned}&\left| \Pr \! \left[ {\mathsf {Expt}^{(i)}_4(\lambda ) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(i+1)}_4(\lambda ) = 1} \right] \right| \\&\quad \quad \le \left| \Pr \! \left[ {\mathsf {Expt}^{(i)}_5(\lambda ) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(i+1)}_5(\lambda ) = 1} \right] \right| + \mathrm {negl}({\lambda }). \end{aligned}$$
Looking ahead, the modification of ignoring the (negligible probability) event \((x^*,x_1,\ldots ,x_q) \in \mathsf{Unlikely}_h\) ensures that the abort conditions of experiments \(\mathsf {Expt}^{(i)}_5\) and \(\mathsf {Expt}^{(i+1)}_5\) are computed in an identical manner given K, h, and the challenge ciphertexts. Previously, the abort condition relied on \(x^*\) (which was defined as \(\pi _1(m_{T-i})\) for \(j = i\) and as \(\pi _1(u_{T-i})\) for \(j = i+1\)), and now it relies on \(h(x^*)\) which is given as part of the challenge ciphertext (therefore, given the challenge ciphertexts, the abort condition is now completely independent of whether \(j = i\) or \(j = i+1\)).
Experiment \(\varvec{\mathsf {Expt}^{(j)}_6}\) This experiment is obtained from \(\mathsf {Expt}^{(j)}_5\) by changing the decryption oracle to decrypt using the trapdoor \(\tau _g\) of the \(\mathcal {R}\)-lossy trapdoor function, instead of using the trapdoor \(\tau _f\) of the lossy trapdoor function. Specifically, we define the oracle \(\widetilde{\mathsf{Dec}}(sk,\cdot )\) that on input a decryption query \(c^{(\zeta )}=\left( c^{(\zeta )}_h,c^{(\zeta )}_f,c^{(\zeta )}_g\right) \) computes \(m=\pi _2^{-1}\left( \mathsf{G}^{-1}\left( \tau _g,c^{(\zeta )}_g\right) \right) \), and checks whether the ciphertext components are well-formed. Note, however, that for a decryption query \(c^{(\zeta )}\) that corresponds to a lossy tag it is impossible to (efficiently) decrypt using \(\tau _g\). In this case the decryption oracle outputs \(\bot \), and the output of the experiment is an independent and uniform bit.
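A sketch of the alternative oracle \(\widetilde{\mathsf{Dec}}\), reusing the hypothetical helpers from the earlier sketches (pk.G_inv is a hypothetical stand-in for \(\mathsf{G}^{-1}\)):

```python
def decrypt_via_g(pk, sk, P, K, c):
    """Expt_6's oracle: invert c_g with the trapdoor tau_g of the
    R^BM-lossy TDF (possible only for injective tags), then verify
    by re-encryption."""
    tag, _, c_g = c
    if P(K, tag) == "Lossy":                 # lossy tag: inversion infeasible
        return None                          # the oracle answers with bottom
    m = pk.pi2_inv(pk.G_inv(sk.tau_g, c_g))  # m = pi2^{-1}(G^{-1}(tau_g, c_g))
    return m if encrypt(pk, m) == c else None
```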
Claim 7.10
For each \(j \in \{i,i+1\}\), we have \(\Pr \! \left[ {\mathsf {Expt}_5^{(j)}(\lambda ) = 1} \right] = \Pr \! \left[ {\mathsf {Expt}_6^{(j)}(\lambda ) = 1} \right] \).
Proof
Note that whenever the event \(\mathsf {Partition}^{(j)}_{K,h}\) occurs, all decryption queries that are well-formed correspond to injective tags and can therefore be decrypted using \(\tau _g\). Thus, conditioned on the event \(\mathsf {Partition}^{(j)}_{K,h}\) (which has the exact same probability in \(\mathsf {Expt}_5^{(j)}\) and \(\mathsf {Expt}_6^{(j)}\)), the oracles \(\mathsf{Dec}\) and \(\widetilde{\mathsf{Dec}}\) are identical, from which the claim follows. \(\square \)
Corollary 7.11
It holds that
$$\begin{aligned} \bigg |\Pr \! \left[ {\mathsf {Expt}_5^{(i)}=1} \right] -\Pr \! \left[ {\mathsf {Expt}_5^{(i+1)}=1} \right] \bigg | = \bigg |\Pr \! \left[ {\mathsf {Expt}_6^{(i)}=1} \right] -\Pr \! \left[ {\mathsf {Expt}_6^{(i+1)}=1} \right] \bigg | . \end{aligned}$$
Experiment \(\varvec{\mathsf {Expt}^{(j)}_7}\) This experiment is obtained from \(\mathsf {Expt}^{(j)}_6\) by sampling the public key as follows: instead of an injective function \(\sigma _f\) sampled via \(\mathsf{Gen}_1\), sample a lossy function \(\tilde{\sigma }_f\) via \(\mathsf{Gen}_0\). The rest of the experiment is identical to \(\mathsf {Expt}^{(j)}_6\).
Claim 7.12
For each \(j \in \{i,i+1\}\) it holds that
$$\begin{aligned} \left| \Pr \! \left[ {\mathsf {Expt}^{(j)}_6(\lambda ) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(j)}_7(\lambda ) = 1} \right] \right| \le \mathrm {negl}({\lambda }) .\end{aligned}$$
Proof
Observe that \(\sigma _f\) is no longer used by the decryption oracle, and thus replacing \(\sigma _f\) with \(\tilde{\sigma }_f\) does not affect the answers to decryption queries. Therefore, any efficient adversary for which the claim is false can be used to distinguish a randomly sampled injective function \(\sigma _f\) from a randomly sampled lossy function \(\tilde{\sigma }_f\). \(\square \)
As a corollary, we get
Corollary 7.13
It holds that
$$\begin{aligned}&\left| \Pr \! \left[ {\mathsf {Expt}^{(i)}_6(\lambda )=1} \right] -\Pr \! \left[ {\mathsf {Expt}^{(i+1)}_6(\lambda )=1} \right] \right| \\&\qquad \qquad \qquad \le \left| \Pr \! \left[ {\mathsf {Expt}^{(i)}_7(\lambda )=1} \right] -\Pr \! \left[ {\mathsf {Expt}^{(i+1)}_7(\lambda )=1} \right] \right| +\mathrm {negl}({\lambda }). \end{aligned}$$
The final claim we require is as follows.
Claim 7.14
It holds that
$$\begin{aligned} \left| \Pr \! \left[ {\mathsf {Expt}^{(i)}_7(\lambda )=1} \right] -\Pr \! \left[ {\mathsf {Expt}^{(i+1)}_7(\lambda )=1} \right] \right| \le \mathrm {negl}({\lambda }) . \end{aligned}$$
Proof
We prove the claim by upper bounding the statistical distance between the output distributions of \(\mathsf {Expt}^{(i)}_7\) and \(\mathsf {Expt}^{(i+1)}_7\). We observe that these output distributions can be computed by applying the exact same stochastic (and possibly inefficient) map to the joint distribution of the public key \(\widetilde{pk}\) and the challenge ciphertext \(\varvec{c}^\mathbf {*}\) in each experiment. The difference between the resulting distributions will follow from the difference between the challenge ciphertexts: in \(\mathsf {Expt}^{(i)}_7\) the \((T-i)\)th challenge message is \(m_{T-i}\), whereas in \(\mathsf {Expt}^{(i+1)}_7\) it is a uniform message \(u_{T-i}\). This follows since, as discussed above, the modification of ignoring the (negligible probability) event \((x^*,x_1,\ldots ,x_q) \in \mathsf{Unlikely}_h\) ensures that the abort conditions of experiments \(\mathsf {Expt}^{(i)}_5\) and \(\mathsf {Expt}^{(i+1)}_5\) are computed in an identical manner given K, h, and the challenge ciphertexts, and this continues to hold in the remaining experiments: given the challenge ciphertexts, the abort condition depends only on \(h(x^*)\), which is part of the challenge ciphertext, and is therefore completely independent of whether \(j = i\) or \(j = i+1\).
Therefore, it suffices to consider the statistical distance between the distribution \((\widetilde{pk}, \varvec{c}^\mathbf {*})\) in the experiment \(\mathsf {Expt}^{(i)}_7\) and the same distribution in the experiment \(\mathsf {Expt}^{(i+1)}_7\) (since applying the same stochastic map to a pair of distributions cannot increase the statistical distance between them). Moreover, we prove that this statistical distance is negligible in the security parameter even when fixing all components of the public key \(\widetilde{pk}\) other than the two permutations \(\pi _1\) and \(\pi _2\). Specifically, we prove that for any set \(\mathcal {X}\) of at most \(2^p\) (T, k)-block-sources, with an overwhelming probability over the choice of \(\pi _1\) and \(\pi _2\), for any \(\varvec{M}\in \mathcal {X}\), the distribution of the challenge ciphertext \(\varvec{c}^\mathbf {*}\) resulting from \(\varvec{M}\) in \(\mathsf {Expt}^{(i)}_7\) and the distribution of the challenge ciphertext \(\varvec{c}^\mathbf {*}\) resulting from \(\varvec{M}\) in \(\mathsf {Expt}^{(i+1)}_7\) lead these two experiments to statistically close outputs.
Recall that the challenge ciphertexts for \(\mathsf {Expt}^{(j)}_7\) are of the form \(\varvec{c}^\mathbf {*}=(c_1^*,\ldots ,c_T^*)\) where the components \(c_1^*, \ldots ,c_{T-i-1}^*\) and \(c_{T-i+1}^*, \ldots , c_T^*\) are identically distributed for \(j \in \{i,i+1\}\). Moreover, in both experiments the components \(c_{T-i+1}^*, \ldots , c_T^*\) are encryptions of independent and uniformly distributed messages. Therefore, it suffices to consider the distribution of \(c_{T-i}^*\) conditioned on \(c_1^*,\ldots ,c_{T-i-1}^*\) in each experiment. Recall from our definitions that,
$$\begin{aligned} c_{T-i}^* = \left\{ \begin{array}{rl} (c^*_h,c^*_f,c^*_g) &{}{\mathop {=}\limits ^\mathsf{def}}\Big (h(\pi _1(m_{T-i})), \mathsf{F}\big (\tilde{\sigma }_f,\pi _2(m_{T-i})\big ), \mathsf{G}\big (\sigma _g, h(\pi _1(m_{T-i})), \pi _2(m_{T-i})\big ) \Big ) \\ &{} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \text {for } j=i,\\ \\ (u^*_h,u^*_f,u^*_g) &{}{\mathop {=}\limits ^\mathsf{def}}\Big (h(\pi _1(u_{T-i})), \mathsf{F}\big (\tilde{\sigma }_f,\pi _2(u_{T-i})\big ), \mathsf{G}\big (\sigma _g, h(\pi _1(u_{T-i})), \pi _2(u_{T-i})\big ) \Big ) \\ &{}\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \text {for } j=i+1. \end{array} \right. \end{aligned}$$
Denote by \(C^*_h\), \(C^*_f\), and \(C^*_g\) the random variables corresponding to \(c^*_h\), \(c^*_f\), and \(c^*_g\), respectively, and similarly by \(U^*_h\), \(U^*_f\), and \(U^*_g\) the random variables corresponding to \(u^*_h\), \(u^*_f\), and \(u^*_g\), where in both cases the probability is taken over the choice of \(\pi _1\), \(\pi _2\), \(m_{T-i}\), and \(u_{T-i}\). In what follows, we fix \(m_1, \ldots , m_{T-i-1}\), and argue that the two distributions \((C^*_h,C^*_f,C^*_g)\) and \((U^*_h,U^*_f,U^*_g)\) conditioned on the first \(T-i-1\) challenge messages \(m_1, \ldots , m_{T-i-1}\) are statistically close.
We begin by focusing on the distributions \(C^*_h = h(\pi _1(M_{T-i}))\) and \(U^*_h = h(\pi _1(U_{T-i}))\). Observe that \(h:\{0,1\}^n \rightarrow \{0,1\}^v\) is an \((n,n-v)\)-lossy function, and let Z denote the indicator of the event in which \(M_1=m_1,\ldots ,M_{T-i-1}=m_{T-i-1}\). Consider the set \(\mathcal {Z}\) defined as the set of distributions \(M_{T-i}|_{Z=1}\) for all \(\varvec{M}= (M_1,\ldots ,M_{T-i},\ldots ,M_T) \in \mathcal {X}\) and for all possible values of \(m_1, \ldots , m_{T-i-1}\). Then, we have,
$$\begin{aligned} |\mathcal {Z}| \le |\mathcal {X}| \cdot 2^{(T-i-1)n} \le |\mathcal {X}| \cdot 2^{(T-1)n} \le 2^{p+n(T-1)}. \end{aligned}$$
Applying Theorem 4.6 (for \(T=1\)) with our choice of parameters implies that with an overwhelming probability over the choice of \(\pi _1 \leftarrow \Pi ^{(1)}_\lambda \), for any such \(M_{T-i}\) we have
$$\begin{aligned} \mathbf {SD}\left( h(\pi _1(M_{T-i}))|_{Z=1},\; h(\pi _1(U_{T-i}))|_{Z=1}\right) \le 2^{-\omega (\log {\lambda })} . \end{aligned}$$
(7.4)
We now fix any \(\pi _1 \in \Pi ^{(1)}_\lambda \) for which (7.4) holds. Consider now any possible value \(\alpha _h\) that the random variables \(h(\pi _1(M_{T-i}))\) and \(h(\pi _1(U_{T-i}))\) may obtain in the experiments \(\mathsf {Expt}^{(i)}_7\) and \(\mathsf {Expt}^{(i+1)}_7\), respectively. If \(\alpha _h\) corresponds to an injective tag for \(\mathsf{G}\), then in particular the event \(\mathsf {Partition}_{K,h}\) will not occur in either one of the experiments, and thus the output of both experiments is an independent and uniform bit. Moreover, (7.4) above implies that the probabilities of having an \(\alpha _h\) that corresponds to an injective tag in \(\mathsf {Expt}^{(i)}_7\) and \(\mathsf {Expt}^{(i+1)}_7\) are negligibly close. Therefore, it remains to show that for all but a negligible probability of the \(\alpha _h\)’s that correspond to lossy tags for \(\mathsf{G}\), the distributions \((C^*_f,C^*_g)|_{C^*_h = \alpha _h}\) and \((U^*_f,U^*_g)|_{U^*_h = \alpha _h}\) are statistically close (once again, conditioned on the first \(T-i-1\) challenge messages \(m_1, \ldots , m_{T-i-1}\) as before).
The following straightforward claim shows that the message distributions of the \((T-i)\)th challenge message in \(\mathsf {Expt}^{(i)}_7\) and \(\mathsf {Expt}^{(i+1)}_7\) (denoted \(M_{T-i}\) and \(U_{T-i}\), respectively) have sufficient min-entropy even when conditioned on \(\alpha _h\).
Claim 7.15
For any \(\epsilon >0\), with probability at least \(1-\epsilon \) over the choice of \(\alpha _h \leftarrow C^*_h\) conditioned on \(P_{K}(\alpha _h)=\mathtt {Lossy}\), it holds that
$$\begin{aligned} \mathbf {H}_{\infty }\! \left( M_{T-i} \;\Bigg |\; C^*_h = \alpha _h, M_1=m_1,\ldots ,M_{T-i-1}=m_{T-i-1} \right) \ge k - v - \log (1/\epsilon ) .\end{aligned}$$
Similarly, for any \(\epsilon >0\), with probability at least \(1-\epsilon \) over the choice of \(\alpha _h \leftarrow U^*_h\) conditioned on \(P_{K}(\alpha _h)=\mathtt {Lossy}\), it holds that
$$\begin{aligned} \mathbf {H}_{\infty }\! \left( U_{T-i} \;\Bigg |\; U^*_h = \alpha _h, M_1=m_1,\ldots ,M_{T-i-1}=m_{T-i-1} \right) \ge n - v - \log (1/\epsilon ) .\end{aligned}$$
Proof
As the output length of h is v bits, the claim follows from applying Lemma 2.1 to the distribution \(M_{T-i}|_{M_1=m_1,\ldots ,M_{T-i-1}=m_{T-i-1}}\) (recall that \(\varvec{M}\) is a (T, k)-block-source) and to the uniform distribution \(U_{T-i}\). \(\square \)
Returning to the proof of Claim 7.14, fix some \(\epsilon = 2^{-\omega (\log \lambda )}\) and any \(\alpha _h\) for which both parts of Claim 7.15 hold, and let \(k' = k - v - \log (1/\epsilon )\). Then, since \(P_{K}(\alpha _h)=\mathtt {Lossy}\), we have that \(\alpha _h\) corresponds to a lossy tag for \(\mathsf{G}\), and therefore for the function \(f_h:\{0,1\}^n \rightarrow \{0,1\}^{n'}\) defined as \(f_h(\cdot )=(\mathsf{F}(\tilde{\sigma }_f,\cdot ), \mathsf{G}(\sigma _g,\alpha _h,\cdot ))\) it holds that \(|\mathrm{Im}(f_h)| \le 2^{2n-2\ell }\) (each of \(\mathsf{F}(\tilde{\sigma }_f,\cdot )\) and \(\mathsf{G}(\sigma _g,\alpha _h,\cdot )\) is lossy, and thus each has an image of size at most \(2^{n-\ell }\)). Let Y denote the indicator of the event in which \(C^*_h = \alpha _h\), \( M_1=m_1,\ldots ,M_{T-i-1}=m_{T-i-1}\), and consider the set \(\mathcal {Y}\) defined as the set of distributions \(M_{T-i}|_{Y=1}\) for all \(\varvec{M}= (M_1,\ldots ,M_{T-i},\ldots ,M_T) \in \mathcal {X}\) and for all possible values of \(\alpha _h, m_1, \ldots , m_{T-i-1}\). Then, we have,
$$\begin{aligned} |\mathcal {Y}| \le |\mathcal {X}| \cdot 2^{v} \cdot 2^{(T-i-1)n} \le |\mathcal {X}| \cdot 2^{v+(T-1)n} \le 2^{p+v+n(T-1)}. \end{aligned}$$
Now, applying Theorem 4.6 (setting \(T=1\)) with our choice of parameters implies that with an overwhelming probability over the choice of \(\pi _2\), for any such \(M_{T-i}\) and Y we have
$$\begin{aligned} \mathbf {SD}\left( f_h(\pi _2(M_{T-i}))|_{Y=1}, f_h(U_n)\right) \le 2^{-\omega (\log {\lambda })} . \end{aligned}$$
An essentially identical argument holds for \(U_{T-i}\), and from this it follows that
$$\begin{aligned} \mathbf {SD}\left( \left( C^*_f,C^*_g\right) \Big |_{C^*_h=\alpha _h}, \left( U^*_f,U^*_g\right) \Big |_{U^*_h = \alpha _h}\right) \le \mathrm {negl}({\lambda }) \end{aligned}$$
(7.5)
for all but a negligible probability of the \(\alpha _h\)’s that correspond to lossy tags for \(\mathsf{G}\), as required. \(\square \)
Completing the proof of Theorem 7.1 To complete the proof of the theorem, recall that it suffices to bound the expression in (7.1). For any polynomial \(S=S(\lambda )\), collecting negligible terms \(\mathrm {negl}({\lambda })\), we have
$$\begin{aligned}&\bigg | \Pr \! \left[ {\mathsf {Expt}^{(i)}(\lambda ) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(i+1)}(\lambda ) = 1} \right] \bigg | \\&\quad {\mathop {=}\limits ^\mathsf{def}}\bigg | \Pr \! \left[ {\mathsf {Expt}^{(i)}_0(\lambda ) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(i+1)}_0(\lambda ) = 1} \right] \bigg | \\&\quad \le \Delta \cdot \bigg | \Pr \! \left[ {\mathsf {Expt}^{(i)}_2(\lambda ) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(i+1)}_2(\lambda ) = 1} \right] \bigg | + \mathrm {negl}({\lambda }) \quad (\text {from Cor.}~7.4)\\&\quad \le 2 \left( \frac{1}{S} + \frac{\Delta }{2^{\lambda }} \right) + \Delta \\&\qquad \cdot \bigg | \Pr \! \left[ {\mathsf {Expt}^{(i)}_3(\lambda ) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(i+1)}_3(\lambda ) = 1} \right] \bigg | +\mathrm {negl}({\lambda }) \quad (\text {from Cor.}~7.6)\\&\quad \le \Delta \cdot \bigg | \Pr \! \left[ {\mathsf {Expt}^{(i)}_4(\lambda ) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(i+1)}_4(\lambda ) = 1} \right] \bigg | \\&\qquad + 2 \left( \frac{1}{S} + \frac{\Delta }{2^{\lambda }} \right) + \mathrm {negl}({\lambda }) \quad (\text {from Cor.}~7.8)\\&\quad \le \Delta \cdot \bigg | \Pr \! \left[ {\mathsf {Expt}^{(i)}_5(\lambda ) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(i+1)}_5(\lambda ) = 1} \right] \bigg |\\&\qquad + 2 \left( \frac{1}{S} + \frac{\Delta }{2^{\lambda }} \right) + \mathrm {negl}({\lambda }) \quad (\text {from Cor.}~7.9)\\&\quad = \Delta \cdot \bigg | \Pr \! \left[ {\mathsf {Expt}^{(i)}_6(\lambda ) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(i+1)}_6(\lambda ) = 1} \right] \bigg | \\&\qquad + 2 \left( \frac{1}{S} + \frac{\Delta }{2^{\lambda }} \right) + \mathrm {negl}({\lambda }) \quad (\text {from Cor.}~7.11)\\&\quad \le \Delta \cdot \bigg | \Pr \! \left[ {\mathsf {Expt}^{(i)}_7(\lambda ) = 1} \right] - \Pr \! \left[ {\mathsf {Expt}^{(i+1)}_7(\lambda ) = 1} \right] \bigg | \\&\qquad + 2 \left( \frac{1}{S} + \frac{\Delta }{2^{\lambda }} \right) + \mathrm {negl}({\lambda }) \quad (\text {from Cor.}~7.13)\\&\quad \le 2 \left( \frac{1}{S} + \frac{\Delta }{2^{\lambda }} \right) + \mathrm {negl}({\lambda }) . \quad (\text {from Claim 7.14}) \end{aligned}$$
As \(\Delta =\Delta (\lambda )\) is some fixed polynomial, and the above holds for any polynomial \(S=S(\lambda )\), this completes the proof of Theorem 7.1. \(\square \)