
1 Introduction

In this paper, we study selective-opening-attack (SOA) security of some deterministic primitives, namely hash functions, (public-key) deterministic encryption, and trapdoor functions. In particular, we extend the work of Hoang et al. [20] in addition to answering some open questions there. We also provide a new analysis of Chaum’s blind signature scheme [12].

1.1 Background and Motivation

SOA security. Roughly, SOA security of a cryptographic primitive refers to giving the adversary the power to adaptively choose instances of the primitive to corrupt, and considering security of the uncorrupted instances. SOA grew out of work on non-committing and deniable primitives [6, 9, 10, 11, 14, 16, 26, 27, 31], which provide even stronger forms of security. SOA itself has been studied in a line of work on public-key encryption and commitments started by Bellare, Hofheinz, and Yilek [2, 3, 7, 19, 21, 22]. When considering adaptive corruption, SOA arguably captures the security one wants in practice. Here we only consider sender SOA (i.e., sender, not receiver, corruption), which we refer to simply as SOA security in the remainder of the paper.

SOA for deterministic encryption. SOA security has usually been studied for randomized primitives, where the parties use random coins that are given to the adversary upon corruption, in particular randomized encryption. The study of SOA for deterministic primitives, namely deterministic encryption, was initiated by Bellare et al. [1], who showed an impossibility result with respect to a simulation-based definition. Subsequently, Hoang et al. [20] proposed a comparison-based definition and showed positive results in the non-programmable random oracle (RO) model [5, 25]. They left open the problem of constructions in the standard (RO-devoid) model, which we study in this work. In particular, Hoang et al. emphasized that this problem is open even for uniform and independent messages.

SOA for hash functions. In addition to randomized encryption, SOA security has often been considered for randomized commitments. Note that a simple construction of a commitment in the RO model is \(H(x \Vert r)\), where x is the input and r is the randomness (decommitment). Analogously to the case of encryption, we study SOA security of hash functions. This can also be seen as studying a more basic case than deterministic encryption, as Goyal et al. [18] did in the non-SOA setting. The practical motivation is password hashing: some passwords may be recovered by coercion, and one would like to say something about the security of the other passwords.

One-more RSA inversion problem. Finally, an influential problem that we cast in the framework of SOA (this problem has not been explicitly connected to SOA before, as far as we are aware) is the one-more RSA inversion problem of Bellare et al. [4]. Informally, the problem asks that an adversary with many RSA challenges and an inversion oracle cannot produce more preimages than the number of oracle calls it makes. Bellare et al. show that hardness of this problem yields a proof of security of Chaum's blind signature scheme in the RO model.

Challenges. For randomized primitives, a key challenge in security proofs has been that at the time the simulator prepares the challenge ciphertexts it does not know the subset that the adversary will corrupt. Compared to randomized primitives, deterministic primitives additionally present some unique challenges in the SOA setting. To see why, consider encryption: a common strategy is for the simulator to “lie” about the randomness in order to make a message encrypt to the right ciphertext. However, in the deterministic case there is no randomness to fake.

1.2 Our Contributions

Results for hash functions. We start with the study of a more basic primitive than deterministic encryption, namely hash functions (which in some sense are the deterministic analogue of commitments). We note that the SOA notion for hash functions is stronger than the one-wayness notion: an SOA adversary that makes no openings can simply run the one-wayness adversary on each image challenge and recover the preimages. Thus, the SOA notion is strictly stronger than one-wayness. Here we show results for an unbounded number of “t-correlated” messages, meaning each set of up to t messages may be arbitrarily correlated. Namely, we show that 2t-wise independent hash functions, which can be realized information-theoretically by a classical polynomial-evaluation construction, are SOA secure in this setting. We also consider the notion of t-correlated messages to be interesting in its own right; it captures a password-hashing setting where each password is correlated with a small number of others (and it is even stronger than that, in that a password may be correlated with any small set of others).

To show that 2t-wise independent hash functions are SOA secure, we first show that in the information-theoretic setting, knowing the content of the opened messages increases the upper bound on the adversary's advantage by at most a factor of 2. This is because the messages are independent, and knowing the opened messages does not increase the adversary's advantage in guessing the unopened messages. Then, we show that for any hash key s in the set of “good hash keys”, the probability that \(H(s,X)=y\) is almost the same for every hash value y. Therefore, we can show that for any hash key s in the set of “good hash keys” and any vector of hash values, opening does not increase the upper bound on the adversary's advantage. Thus, it suffices to bound the adversary's advantage without any opening. Note that this strategy avoids the exponential (in the number of messages) blow-up in the bound incurred by the naïve strategy of guessing the subset the adversary will open.

Constructions in the standard model. In the setting of deterministic encryption, it is easy to see that the same strategy as above works using lossy trapdoor functions [30] that are 2t-wise independent in the lossy mode. However, for \(t > 1\) we are not aware of any such construction and highlight this as an interesting open problem. Hence, we turn to building a D-SO-CPA secure scheme in the standard model. We give a new DPKE scheme using 2t-wise independent hash functions and a regular lossy trapdoor function [30], which has practical instantiations; e.g., RSA is regular lossy [24]. A close variant of our scheme is shown to be D-SO-CPA secure in the NPROM [20]. The proof strategy here is very similar to the hash function case above: we start by switching to the lossy mode and then bound the adversary's advantage in the information-theoretic setting.

Results for one-more-RSA. Bellare et al. [4] were the first to introduce the one-more-RSA problem. They show that assuming hardness of the one-more-RSA inversion problem yields a proof of security of Chaum's blind signature scheme [12] in the random oracle model. This problem is a natural SOA extension of the one-wayness of RSA. Intuitively, in the one-more inversion problem, the adversary gets a number of image points and has access to a corruption oracle that allows it to get preimages for image points of its choice. It needs to produce one more correct preimage than the number of queries it makes. We show that the one-more inversion problem is hard for RSA with a large enough encryption exponent e. More generally, we show that the one-more inversion problem is hard for any regular lossy trapdoor function. Intuitively, we show that in the lossy mode the images are uniformly distributed. Then we show that inverting even one of the images is hard, since any preimage x is equally likely. RSA is known to be regular lossy under the \(\varPhi \)-Hiding Assumption [24]. Thus, by the result of [4], we obtain a security proof for Chaum's scheme. Interestingly, this result avoids an impossibility result of Pass [29], because if RSA is lossy then Chaum's scheme does not have unique signatures. Analogously, in a different context, Kakvi and Kiltz [23] used non-uniqueness of RSA-FDH signatures under \(\varPhi \)-Hiding to show tight security, getting around an impossibility result of Coron [13].

1.3 Seeing us as Replacing Random Oracles

Another way of seeing our treatment of hash functions is as isolating a property of random oracles and realizing it in the standard model, building on a line of work in this vein started by Canetti [8]. In this context, it would be interesting to consider adaptive SOA security for hash functions similar to [28] who consider adaptive commitments. We leave this as another open problem. Additionally, it would be interesting to see if our results allow replacing ROs in any particular higher-level protocols.

2 Preliminaries

2.1 Notation and Conventions

For a probabilistic algorithm A, by \(y \leftarrow \$\, A(x)\) we mean that A is executed on input x and the output is assigned to y. We sometimes use \(y \leftarrow A(x;r)\) to make A's random coins explicit. If A is deterministic we denote this instead by \(y \leftarrow A(x)\). We denote by [A(x)] the set of all possible outputs of A when run on input x. For a finite set S, we denote by \(s \leftarrow \$\, S\) the choice of a uniformly random element from S and its assignment to s.

Let \({{\mathbb N}}\) denote the set of all non-negative integers. For any \(n \in {{\mathbb N}}\) we denote by [n] the set \(\{1,\ldots ,n\}\). For a vector \(\mathbf {x}\), we denote by \(|\mathbf {x}|\) its length (number of components) and by \(\mathbf {x}[i]\) its i-th component. For a vector \(\mathbf {x}\) of length n and any \(I \subseteq [n]\), we denote by \(\mathbf {x}[I]\) the vector of length |I| such that \(\mathbf {x}[I] = (\mathbf {x}[i])_{i \in I}\), and by \(\mathbf {x}[\overline{I}]\) the vector of length \(n - | I |\) such that \(\mathbf {x}[\overline{I}] = (\mathbf {x}[i])_{i \notin I}\). For a string X, we denote by |X| its length.

Let X, Y be random variables taking values on a common finite domain. The statistical distance between X and Y is given by

$$ \varDelta (X,Y) = \frac{1}{2} \sum _{ x} \big | {\Pr \left[ \,{X = x}\,\right] } - {\Pr \left[ \,{Y = x}\,\right] } \big | \;. $$

We also define \(\varDelta (X,Y\mid S) = \frac{1}{2} \sum _{x\in S} \big | {\Pr \left[ \,{X = x}\,\right] } - {\Pr \left[ \,{Y = x}\,\right] } \big |\), for a set S. The min-entropy of a random variable X is \(\text {H}_\infty (X) = -\log (\max _x {\Pr \left[ \,{X=x}\,\right] })\). The average conditional min-entropy of X given Y is

$$ \widetilde{\text {H}}_\infty (X|Y) = -\log (\sum _y P_Y(y) \max _x {\Pr }\left[ \, X=x\,\left| \right. \,Y=y\,\right] ) . $$
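For concreteness, a small Python sketch (ours, not from the paper) computes these quantities for finite distributions represented as {outcome: probability} dictionaries:

```python
import math

def stat_dist(P, Q):
    """Delta(X, Y) = (1/2) * sum_x |Pr[X = x] - Pr[Y = x]|."""
    support = set(P) | set(Q)
    return 0.5 * sum(abs(P.get(a, 0.0) - Q.get(a, 0.0)) for a in support)

def min_entropy(P):
    """H_inf(X) = -log2(max_x Pr[X = x])."""
    return -math.log2(max(P.values()))

def avg_cond_min_entropy(joint):
    """H~_inf(X | Y) = -log2(sum_y max_x Pr[X = x, Y = y])."""
    ys = {y for (_, y) in joint}
    return -math.log2(sum(max(p for (x, yy), p in joint.items() if yy == y)
                          for y in ys))

# Example: X uniform on {0,...,3} and Y = X mod 2. Then H_inf(X) = 2 while
# H~_inf(X | Y) = 1: conditioning on the 2-valued Y costs exactly one bit,
# matching Lemma 1(a) below.
joint = {(x, x % 2): 0.25 for x in range(4)}
print(min_entropy({x: 0.25 for x in range(4)}), avg_cond_min_entropy(joint))
```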

Entropy after information leakage. Dodis et al. [15] characterized the effect of auxiliary information on average min-entropy:

Lemma 1

[15] Let X, Y, Z be random variables and \(\delta > 0\) be a real number.

(a) If Y has at most \(2^\lambda \) possible values then we have \(\widetilde{\text {H}}_\infty (X \mid Z, Y) \ge \widetilde{\text {H}}_\infty (X \mid Z) - \lambda \).

(b) Let S be the set of values b such that \(\text {H}_\infty (X \mid Y = b) \ge \widetilde{\text {H}}_\infty (X \mid Y) - \log (1/\delta )\). Then it holds that \(\Pr [Y \in S] \ge 1 - \delta \).
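For instance, in the proofs of Theorems 4 and 6 below, part (a) is applied with Y the boolean output of the adversary's distinguishing function, so \(\lambda = 1\) and conditioning costs at most one bit of average min-entropy.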

2.2 Public-Key Encryption

Public-key encryption. A public-key encryption scheme \({\mathsf {PKE}}\) with message space \({\mathsf {Msg}}\) is a tuple of algorithms \((\mathsf {Kg},\mathsf {Enc},\mathsf {Dec})\) defined as follows. The key-generation algorithm \(\mathsf {Kg}\) on input a unary encoding of the security parameter \(1^k\) outputs a public key \( pk \) and matching secret key \( sk \). The encryption algorithm \(\mathsf {Enc}\) on inputs a public key \( pk \) and a message \(m \in {\mathsf {Msg}}(1^k)\) outputs a ciphertext c. The deterministic decryption algorithm \(\mathsf {Dec}\) on inputs a secret key \( sk \) and a ciphertext c outputs a message m or \(\bot \). We require that for all \(( pk , sk ) \in [\mathsf {Kg}(1^k)]\) and all \(m \in {\mathsf {Msg}}(1^k)\), it holds that \(\mathsf {Dec}( sk ,\mathsf {Enc}( pk ,m)) = m\). We say that \({\mathsf {PKE}}\) is deterministic if \(\mathsf {Enc}\) is deterministic.

\(\text {D-SO-CPA}\) security. Let \({\mathsf {DE}}= (\mathsf {Kg},\mathsf {Enc},\mathsf {Dec})\) be a D-PKE scheme. To a message sampler \({\mathcal M}\) and an adversary \(A = (A.\mathrm {pg}, A.\mathrm {cor}, A.\mathrm {g}, A.\mathrm {f})\), we associate the experiment in Fig. 1 for every \(k \in {{\mathbb N}}\). We say that \({\mathsf {DE}}\) is \(\text {D-SO-CPA}\) secure for a class \(\mathscr {M}\) of efficiently resamplable message samplers and a class \(\mathscr {A}\) of adversaries if for every \({\mathcal M}\in \mathscr {M}\) and any \(A\in \mathscr {A}\),

$$\begin{aligned}&\mathbf {Adv}^{\text {d-so-cpa}}_{{\mathsf {DE}},A, {\mathcal M}}(k) \\= & {} {\Pr \left[ \,{\text {D-CPA1-REAL}^{A, {\mathcal M}}_{\mathsf {DE}}(k) \Rightarrow 1}\,\right] } - {\Pr \left[ \,{\text {D-CPA1-IDEAL}^{A, {\mathcal M}}_{\mathsf {DE}}(k) \Rightarrow 1}\,\right] } \; \end{aligned}$$

is negligible in k.

Fig. 1. Games to define the \(\text {D-SO-CPA}\) security.

2.3 Lossy Trapdoor Functions and Their Security

Lossy trapdoor functions. A lossy trapdoor function [30] with domain \(\mathsf {LDom}\), range \(\mathsf {LRng}\) and lossiness \(\tau \) is a tuple of algorithms \({\mathsf {LT}}= (\mathsf {IKg}, \mathsf {LKg}\), \(\textsf {Eval}, \mathsf {Inv})\) that work as follows. Algorithm \(\mathsf {IKg}\) on input a unary encoding of the security parameter \(1^k\) outputs an “injective” evaluation key \( ek \) and matching trapdoor \( td \). Algorithm \(\mathsf {LKg}\) on input \(1^k\) outputs a “lossy” evaluation key \( ek \). Algorithm \(\textsf {Eval}\) on inputs an (either injective or lossy) evaluation key \( ek \) and \(x \in \mathsf {LDom}(k)\) outputs \(y \in \mathsf {LRng}(k)\). Algorithm \(\mathsf {Inv}\) on inputs a trapdoor \( td \) and a \(y \in \mathsf {LRng}(k)\) outputs \(x \in \mathsf {LDom}(k)\). For a lossy key \( ek \), we refer to the set of values that \(\textsf {Eval}( ek , \cdot )\) can output as its co-domain. We require the following properties:

Correctness: For all \(k \in {{\mathbb N}}\), all \(( ek , td ) \in [\mathsf {IKg}(1^k)]\) and all \(x \in \mathsf {LDom}(k)\) it holds that \( \mathsf {Inv}( td , \textsf {Eval}( ek ,x)) = x \).

Key Indistinguishability: We require that for every PPT distinguisher D, the following advantage be negligible in k:

$$ \mathbf {Adv}^{\mathrm {ltdf}}_{{\mathsf {LT}},D}(k) = {\Pr \left[ \,{D(1^k, ek _1) \Rightarrow 1}\,\right] } - {\Pr \left[ \,{D(1^k, ek _0) \Rightarrow 1}\,\right] } $$

where \(( ek _1, td ) \leftarrow \$\, \mathsf {IKg}(1^k)\) and \( ek _0 \leftarrow \$\, \mathsf {LKg}(1^k)\).

Lossiness: The size of the co-domain of \(\textsf {Eval}( ek , \cdot )\) is at most \(|\mathsf {LRng}(k)| / 2^{\tau (k)}\) for all \(k \in {{\mathbb N}}\) and all lossy keys \( ek \in [\mathsf {LKg}(1^k)]\). We call \(\tau \) the lossiness of \({\mathsf {LT}}\).

t-wise independent. Let \({\mathsf {LT}}\) be a lossy trapdoor function with domain \(\mathsf {LDom}\), range \(\mathsf {LRng}\) and lossiness \(\tau \). We say \({\mathsf {LT}}\) is t-wise independent if for all \(k \in {{\mathbb N}}\) and all distinct \(x_1, \ldots , x_{t(k)} \in \mathsf {LDom}(k)\)

$$ \varDelta \left( (\textsf {Eval}( ek ,x_1), \ldots , \textsf {Eval}( ek ,x_{t(k)})), (U_1,\ldots ,U_{t(k)})\right) = 0 $$

where \( ek \leftarrow \$\, \mathsf {LKg}(1^k)\) and \(U_1, \ldots , U_{t(k)}\) are uniform and independent on \(\mathsf {LRng}(k)\).

Regularity. Let \({\mathsf {LT}}\) be a lossy trapdoor function with domain \(\mathsf {LDom}\), range \(\mathsf {LRng}\) and lossiness \(\tau \). We say \({\mathsf {LT}}\) is regular if for all \(k \in {{\mathbb N}}\) and all lossy keys \( ek \in [\mathsf {LKg}(1^k)]\), the random variable \(\textsf {Eval}( ek , U)\) is uniformly distributed on the co-domain of \(\textsf {Eval}( ek , \cdot )\), where U is uniform on \(\mathsf {LDom}(k)\).

2.4 Hash Functions and Associated Security Notions

Hash functions. A hash function with domain \(\mathsf {HDom}\) and range \(\mathsf {HRng}\) is a pair of algorithms \({\mathsf {H}}= ({\mathsf {HKg}}, {\mathsf {h}})\) that work as follows. Algorithm \({\mathsf {HKg}}\) on input a unary encoding of the security parameter \(1^k\) outputs a key K. Algorithm \({\mathsf {h}}\) on inputs a key K and \(x \in \mathsf {HDom}(k)\) outputs \(y \in \mathsf {HRng}(k)\). We say that \({\mathsf {H}}\) is t-wise independent if for all \(k \in {{\mathbb N}}\) and all distinct \(x_1, \ldots , x_{t(k)} \in \mathsf {HDom}(k)\)

$$ \varDelta \left( ({\mathsf {h}}(K,x_1), \ldots , {\mathsf {h}}(K,x_{t(k)})), (U_1,\ldots ,U_{t(k)})\right) = 0 $$

where \(K \leftarrow \$\, {\mathsf {HKg}}(1^k)\) and \(U_1, \ldots , U_{t(k)}\) are uniform and independent in \(\mathsf {HRng}(k)\).
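The classical polynomial-evaluation construction realizes this information-theoretically: a random polynomial of degree at most \(t-1\) over a finite field gives a t-wise independent family with \(\mathsf {HDom}= \mathsf {HRng}= {\mathbb Z}_p\). A minimal Python sketch (the prime and the parameters are illustrative assumptions of ours):

```python
import secrets

P = 2**61 - 1  # an illustrative Mersenne prime; HDom = HRng = Z_P

def hkg(t: int) -> list:
    """Sample a key: t uniform coefficients, i.e., a random polynomial
    of degree at most t - 1 over Z_P."""
    return [secrets.randbelow(P) for _ in range(t)]

def h(key: list, x: int) -> int:
    """h(K, x) = K[0] + K[1]*x + ... + K[t-1]*x^(t-1) mod P, evaluated by
    Horner's rule. For any t distinct inputs, the outputs are uniform and
    independent over the random choice of K."""
    y = 0
    for coeff in reversed(key):
        y = (y * x + coeff) % P
    return y

K = hkg(2 * 3)   # e.g., a 2t-wise independent family with t = 3, as
print(h(K, 42))  # Theorem 6 requires for (mu, 3)-correlated messages
```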

3 Selective Opening Security for Hash Functions

Bellare, Dowsley, and Keelveedhi [1] were the first to consider selective-opening security of deterministic PKE. They propose a “simulation-based” semantic-security notion, but then show that this definition is unachievable in both the standard model and the non-programmable random-oracle model. Later, Hoang et al. [20] introduce an alternative, “comparison-based” semantic-security notion and show that it is achievable in the non-programmable random-oracle model, but leave the standard model open. In this section, we extend their definitions to hash function families and show that 2t-wise independent hash functions are selective opening secure under this notion for t-correlated messages.

3.1 Security Notion

Message samplers. A message sampler \({\mathcal M}\) is a PPT algorithm that takes as input the unary representation \(1^k\) of the security parameter and a string \(\omega \) of random coins, and outputs a vector \(\mathbf {m}\) of messages. We require that \({\mathcal M}\) be associated with functions v and n such that for any \(k \in {{\mathbb N}}\) and any coins \(\omega \), the output \(\mathbf {m}= {\mathcal M}(1^k, \omega )\) satisfies \(|\mathbf {m}| = v(k)\) and \(|\mathbf {m}[i]| = n(k)\), for every \(i \le |\mathbf {m}|\). Moreover, the components of \(\mathbf {m}\) must be distinct. Let \(\mathsf {Coins}[k]\) be the set of coins for \({\mathcal M}(1^k, \cdot )\); sampling \(\mathbf {m}\leftarrow \$\, {\mathcal M}(1^k)\) means choosing \(\omega \leftarrow \$\, \mathsf {Coins}[k]\) and returning \({\mathcal M}(1^k, \omega )\).

A message sampler \({\mathcal M}\) is \((\mu , d)\)-correlated if

  • For any \(k \in {{\mathbb N}}\) and any \(i \in [v(k)]\), the message \(\mathbf {m}[i]\) has min-entropy at least \(\mu \) and is independent of at least \(v(k)-d\) of the messages.

  • The messages \(\mathbf {m}[1], \ldots , \mathbf {m}[v(k)]\) must be distinct, for any \(k \in {{\mathbb N}}\) and any coins \(\omega \in \mathsf {Coins}[k]\).

Note that in this definition, d can be 0, which corresponds to a message sampler in which each message is independent of all other messages and has at least \(\mu \) bits of min-entropy.

Resampling. Following [3], let \(\mathsf {Resamp}\) be the algorithm that, on input \((1^k, I, \mathbf {m}[I])\), samples \(\mathbf {m}' \leftarrow \$\, {\mathcal M}(1^k)\) conditioned on \(\mathbf {m}'[I] = \mathbf {m}[I]\) and returns \(\mathbf {m}'\). (We note that \(\mathsf {Resamp}\) may run in exponential time.) A resampling algorithm of \({\mathcal M}\) is an algorithm \(\mathsf {Rsmp}\) such that the output of \(\mathsf {Rsmp}(1^k, I, \mathbf {m}[I])\) is identically distributed as that of \(\mathsf {Resamp}(1^k, I, \mathbf {m}[I])\). A message sampler \({\mathcal M}\) is efficiently resamplable if it admits a PT resampling algorithm.
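As a toy illustration (our sketch; sample_one is a hypothetical per-coordinate sampler, and the distinctness requirement is omitted for brevity), a \((\mu , 0)\)-correlated sampler draws each message independently, so an efficient resampler can simply redraw the unopened coordinates and keep the opened ones:

```python
import secrets

def sample_one(i: int, n_bits: int) -> int:
    """Hypothetical per-coordinate distribution; a uniform stand-in here."""
    return secrets.randbits(n_bits)

def sampler(v: int, n_bits: int) -> list:
    """A (mu, 0)-correlated sampler: v independent messages."""
    return [sample_one(i, n_bits) for i in range(v)]

def rsmp(opened: dict, v: int, n_bits: int) -> list:
    """Rsmp: return m' distributed as the sampler's output conditioned on
    m'[i] = opened[i] for every opened index i. Independence of the
    coordinates is what makes this efficient; for general correlated
    samplers, Resamp may require exponential time."""
    return [opened[i] if i in opened else sample_one(i, n_bits)
            for i in range(v)]

m = sampler(4, 16)
m_resampled = rsmp({0: m[0], 2: m[2]}, 4, 16)  # open the set I = {0, 2}
```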

\(\text {H-SO}\) security. Let \({\mathsf {H}}= ({\mathsf {HKg}}, {\mathsf {h}})\) be a hash function family with domain \(\mathsf {HDom}\) and range \(\mathsf {HRng}\). To an adversary \(A = (A.\mathrm {pg}, A.\mathrm {cor}, A.\mathrm {g}, A.\mathrm {f})\) and a message sampler \({\mathcal M}\), we associate the experiment in Fig. 2 for every \(k \in {{\mathbb N}}\). We say that \({\mathsf {H}}\) is \(\text {H-SO}\) secure for a class \(\mathscr {M}\) of efficiently resamplable message samplers and a class \(\mathscr {A}\) of adversaries if for every \({\mathcal M}\in \mathscr {M}\) and any \(A\in \mathscr {A}\),

$$\begin{aligned}&\mathbf {Adv}^{\text {h-so}}_{{\mathsf {H}},A,{\mathcal M}}(k) \\= & {} {\Pr \left[ \,{\text {H-SO-REAL}^{A,{\mathcal M}}_{\mathsf {H}}(k) \Rightarrow 1}\,\right] } - {\Pr \left[ \,{\text {H-SO-IDEAL}^{A,{\mathcal M}}_{\mathsf {H}}(k) \Rightarrow 1}\,\right] } \; \end{aligned}$$

is negligible in k.

Fig. 2. Games to define the \(\text {H-SO}\) security.

Discussion. We refer to the messages indexed by I as the “opened” messages. For every message \(\mathbf {m}[i]\) that adversary A opens, we require that every message correlated with \(\mathbf {m}[i]\) also be opened.

We show that it suffices to consider balanced \(\text {H-SO}\) adversaries, for which the output of \(A.\mathrm {f}\) is boolean. We call A a \(\delta \)-balanced boolean \(\text {H-SO}\) adversary if for all \(b \in \{0,1\}\),

$$ {\Pr \left[ \,{A.\mathrm {f}(\mathbf {m}) = b}\,\right] } \ge \frac{1}{2} - \delta $$

for all parameters and all \(\mathbf {m}\) output by \(A.\mathrm {pg}\) and \({\mathcal M}\), respectively.

Theorem 2

Let \({\mathsf {H}}= ({\mathsf {HKg}}, {\mathsf {h}})\) be a hash function family with domain \(\mathsf {HDom}\) and range \(\mathsf {HRng}\). Let A be an \(\text {H-SO}\) adversary against \({\mathsf {H}}\) with respect to message sampler \({\mathcal M}\). Then for any \(0 \le \delta < 1/2\), there is a \(\delta \)-balanced boolean \(\text {H-SO}\) adversary B such that for all \(k \in {{\mathbb N}}\)

$$ \mathbf {Adv}^{\text {h-so}}_{{\mathsf {H}},A,{\mathcal M}}(k) \le \Big (\frac{2\sqrt{2}}{\delta }+\sqrt{2}\Big )^2 \cdot \mathbf {Adv}^{\text {h-so}}_{{\mathsf {H}},B,{\mathcal M}}(k) . $$

where the running time of B is about that of A plus \(\mathcal {O}(1/\delta )\).

We refer to Appendix A for the proof of Theorem 2. Next, we give a useful lemma that we later use in our proofs.

Lemma 3

Let X, Y be random variables where \(\widetilde{\text {H}}_\infty (X\mid Y) \ge \mu \), and suppose Y is a \(\delta \)-balanced boolean random variable for some \(0 \le \delta < 1/2\). Then \(\text {H}_\infty (X\mid Y=b) \ge \mu + \log (\frac{1}{2}-\delta )\) for all \(b \in \{0,1\}\).

Proof

We know that \({\Pr \left[ \,{Y=b}\,\right] } \ge 1/2-\delta \), for all \(b \in \{0,1\}\). We also have that \(\sum _b {\Pr \left[ \,{Y=b}\,\right] } \max _x {\Pr \left[ \,{X=x\mid Y=b}\,\right] } \le 2^{-\mu } \). Since each summand is non-negative, it follows that \(\max _x {\Pr \left[ \,{X=x\mid Y=b}\,\right] } \le 2^{-\mu }/(1/2-\delta )\) for all \(b \in \{0,1\}\). Taking logarithms, we get \(\text {H}_\infty (X\mid Y=b) \ge \mu + \log (\frac{1}{2}-\delta )\) for all \(b \in \{0,1\}\). \(\Box \)
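For the value \(\delta = 1/4\) used below, the entropy loss is \(\log 4 = 2\) bits; combined with the one-bit loss from Lemma 1(a) for a boolean Y, this yields the bound \(\mu - 3\) invoked in the proofs of Theorems 4 and 6.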

3.2 Achieving H-SO Security

We show in Theorem 4 that pair-wise independent hash functions are selective opening secure when the messages are independent and have high min-entropy. Specifically, we give an upper bound on the advantage of an H-SO adversary attacking a pair-wise independent hash function. We first show that in the information-theoretic setting, knowing the content of the opened messages increases the upper bound on the adversary's advantage by at most a factor of 2. This is because the messages are independent, so knowing the opened messages does not increase the adversary's advantage in guessing the unopened messages. We point out that for any vector of hash values and any hash key, the set I is uniquely defined (an unbounded adversary can be assumed deterministic), and based on the independence of the messages, we can drop the probability of the opened messages from the upper bound on the adversary's advantage. Note that the adversary may still try to increase its advantage by choosing I adaptively without seeing the opened messages; we later prove this does not help.

We show in Lemma 5 that for any hash key s in the set of “good hash keys”, the probability that \(H(s,X)=y\) is almost the same for every hash value y. Therefore, we can show that for any hash key s in the set of “good hash keys” and any vector of hash values, opening does not increase the upper bound on the adversary's advantage. Thus, it suffices to bound the adversary's advantage without any opening.

Theorem 4

Let \({\mathsf {H}}= ({\mathsf {HKg}}, {\mathsf {h}})\) be a family of pair-wise independent hash functions with domain \(\mathsf {HDom}\) and range \(\mathsf {HRng}\). Let \({\mathcal M}\) be a \((\mu , 0)\)-correlated, efficiently resamplable message sampler. Then for any computationally unbounded adversary A,

$$ \mathbf {Adv}^{\text {h-so}}_{{\mathsf {H}},A,{\mathcal M}}(k) \le 2592 v \root 3 \of {2^{1-\mu }|\mathsf {HRng}(k)|^2} . $$

Proof

We need the following lemma, whose proof we give later.

Lemma 5

Let \({\mathsf {H}}= ({\mathsf {HKg}}, {\mathsf {h}})\) be a pair-wise independent hash function with domain \(\mathsf {HDom}\) and range \(\mathsf {HRng}\). Let X be a random variable over \(\mathsf {HDom}\) such that \(\text {H}_\infty (X) \ge \eta \). Then, for all \(y \in \mathsf {HRng}(k)\) and for any \(\epsilon >0\),

$$ \Big |{\Pr \left[ \,{H(K,X)=y}\,\right] }-|\mathsf {HRng}(k)|^{-1} \Big |\ge \epsilon |\mathsf {HRng}(k)|^{-1} . $$

for at most a \(2^{-u}\) fraction of \(K \in [{\mathsf {HKg}}(1^k)]\), where \(u = \eta - 2\log |\mathsf {HRng}(k)| - 2 \log (1/\epsilon )\).

We begin by showing that \({\mathsf {H}}\) is \(\text {H-SO}\) secure against any \(\frac{1}{4}\)-balanced boolean adversary B. Observe that for a computationally unbounded adversary B, we can assume wlog that \(B.\mathrm {cor}, B.\mathrm {g}\) and \(B.\mathrm {f}\) are deterministic. Moreover, we can also assume that adversary \(B.\mathrm {cor}\) passes \(K,\mathbf {h}[\bar{I}]\) as the state st to adversary \(B.\mathrm {g}\). We denote by \(\mathbf {Adv}^{\text {h-so}}_{{\mathsf {H}},B,{\mathcal M},s}(k)\) the advantage of B when \(K=s\). For any fixed key s we have

$$\begin{aligned}&\mathrm {Pr} [\text {H-SO-REAL}^{B}_{{\mathsf {H}},s}(k) \Rightarrow 1]\\= & {} \sum _{b=0}^{1}\sum _{I}\mathrm {Pr}[B.\mathrm {cor}(s,\mathbf {h}) \Rightarrow I ~\wedge ~B.\mathrm {g}(s,\mathbf {m}_1[I],\mathbf {h}[\bar{I}])\Rightarrow b ~\wedge ~B.\mathrm {f}(\mathbf {m}_1)\Rightarrow b ] \end{aligned}$$

For any \(\mathbf {y}\in (\mathsf {HRng}(k))^{\times v}\) and \(s \in [{\mathsf {HKg}}(1^k)]\), we define \(I_{s,\mathbf {y}}\) to be the output of \(B.\mathrm {cor}\) on input \(s,\mathbf {y}\). We also define \(M^b_{s,\mathbf {y}}=\{ \mathbf {m}[I_{s,\mathbf {y}}] \mid B.\mathrm {g}(s,\mathbf {m}[I_{s,\mathbf {y}}],\mathbf {y}) \Rightarrow b \}\), for \(b \in \{0,1\}\). Thus,

$$\begin{aligned}&\mathrm {Pr} [\text {H-SO-REAL}^{B}_{{\mathsf {H}},s}(k) \Rightarrow 1]\\= & {} \sum _{b=0}^{1} \sum _{\mathbf {y}} \mathrm {Pr}[\mathbf {h}= \mathbf {y}~\wedge ~ \mathbf {m}_1[I_{s,\mathbf {y}}] \in M^b_{s,\mathbf {y}} ~\wedge ~B.\mathrm {f}(\mathbf {m}_1)\Rightarrow b ] \end{aligned}$$

The above probability is over the choice of \(\mathbf {m}_1\). Similarly, we can define the probability of the experiment \(\text {H-SO-IDEAL}\) outputting 1. Therefore, we obtain

$$\begin{aligned} \mathbf {Adv}^{\text {h-so}}_{{\mathsf {H}},B,{\mathcal M},s}(k)= & {} \sum _{b=0}^{1} \sum _{\mathbf {y}} \mathrm {Pr}[\mathbf {h}= \mathbf {y}~\wedge ~ \mathbf {m}_1[I_{s,\mathbf {y}}] \in M^b_{s,\mathbf {y}} ~\wedge ~B.\mathrm {f}(\mathbf {m}_1)\Rightarrow b ] \\&~~~~~~~~~~ - \mathrm {Pr}[\mathbf {h}= \mathbf {y}~\wedge ~ \mathbf {m}_1[I_{s,\mathbf {y}}] \in M^b_{s,\mathbf {y}} ~\wedge ~B.\mathrm {f}(\mathbf {m}_0)\Rightarrow b ] \end{aligned}$$

Assume wlog that the above difference is maximized when \(b=1\). For \(d \in \{0,1\}\), we define \(E_d\) as the event that \(\mathbf {h}[I_{s,\mathbf {y}}] = \mathbf {y}[I_{s,\mathbf {y}}]\), \( \mathbf {m}_1[I_{s,\mathbf {y}}] \in M^1_{s,\mathbf {y}}\), and \(B.\mathrm {f}(\mathbf {m}_d)=1\). Note that the messages are independent and each has \(\mu \) bits of min-entropy. For convenience, we write I instead of \(I_{s,\mathbf {y}}\). Then, we obtain

$$\begin{aligned} \mathbf {Adv}^{\text {h-so}}_{{\mathsf {H}},B,{\mathcal M},s}(k)\le & {} 2 \cdot \sum _{\mathbf {y}} \mathrm {Pr}[E_1] \cdot \mathrm {Pr}[\mathbf {h}[\overline{I}] = \mathbf {y}[\overline{I}] \mid B.\mathrm {f}(\mathbf {m}_1)=1 ] \\&~~~~~~~~~~ - \mathrm {Pr}[E_0] \cdot \mathrm {Pr}[\mathbf {h}[\overline{I}] = \mathbf {y}[\overline{I}] ] \end{aligned}$$

Note that \(\mathbf {m}_0\) and \(\mathbf {m}_1\) have the same distribution. Then, we have \(\mathrm {Pr}[E_0] = \mathrm {Pr}[E_1]\) and \(\mathrm {Pr}[E_0] \le \mathrm {Pr}[\mathbf {h}[I] = \mathbf {y}[I]]\). Therefore, we obtain

$$\begin{aligned}&\mathbf {Adv}^{\text {h-so}}_{{\mathsf {H}},B,{\mathcal M},s}(k)\\\le & {} 2 \cdot \sum _{\mathbf {y}} \mathrm {Pr}[\mathbf {h}[I] = \mathbf {y}[I]] \cdot \Big ( \mathrm {Pr}[\mathbf {h}[\overline{I}] = \mathbf {y}[\overline{I}] \mid B.\mathrm {f}(\mathbf {m}_1)=1 ] - \mathrm {Pr}[\mathbf {h}[\overline{I}] = \mathbf {y}[\overline{I}] ] \Big ) \end{aligned}$$

We define the random variable \(\mathbf {X}[i] = (\mathbf {m}_1[i] \mid B.\mathrm {f}(\mathbf {m}_1)=1)\), for all \(i \in [v]\). From property (a) of Lemma 1 (applied with \(\lambda = 1\), as \(B.\mathrm {f}\) is boolean) and Lemma 3 (with \(\delta = 1/4\)), we obtain that \(\text {H}_\infty (\mathbf {X}[i]) \ge \mu - 1 - 2 = \mu - 3\). For all \(i \in [v]\), we also have \(\text {H}_\infty (\mathbf {m}_1[i]) \ge \mu \ge \mu - 3\). Moreover, we know that the exceptional event of Lemma 5 occurs for at most a \(2^{-u}\) fraction of \(K \in [{\mathsf {HKg}}(1^k)]\), where \(u = \mu -3 - 2\log |\mathsf {HRng}(k)| - 2 \log (1/\epsilon )\); we shall determine the value of \(\epsilon \) later. Using the union bound, for all \(\mathbf {X}[i], \mathbf {m}[i]\), where \(i \in [v]\), and for any \(\epsilon >0\), we obtain that for at least a \(1-2v\cdot 2^{-u}\) fraction of K, we have \( \big |{\Pr \left[ \,{H(K,x[i])=\mathbf {y}[i]}\,\right] }-|\mathsf {HRng}(k)|^{-1} \big |\le \epsilon |\mathsf {HRng}(k)|^{-1} \) for all \(i \in [v]\) and \(x \in \{\mathbf {m}_1,\mathbf {X}\}\). Let S be the set of such K.

Now, for all \(s \in S\) and \(i \in [v]\), we have \((1 - \epsilon ) |\mathsf {HRng}(k)|^{-1} \le {\Pr \left[ \,{\mathbf {h}[i] = \mathbf {y}[i]}\,\right] } \le (1 + \epsilon ) |\mathsf {HRng}(k)|^{-1}\). Let \(|I_{s,\mathbf {y}}|= \ell \). Then,

$$\begin{aligned} \mathbf {Adv}^{\text {h-so}}_{{\mathsf {H}},B,{\mathcal M},s}(k)\le & {} 2 \cdot \sum _{\mathbf {y}} |\mathsf {HRng}(k)|^{-v} (1+\epsilon )^{\ell }\Big ((1+\epsilon )^{v-\ell }- (1-\epsilon )^{v-\ell }\Big ) \\\le & {} 2 \Big ((1+\epsilon )^{v}- (1-\epsilon )^{v}\Big ) \end{aligned}$$

We also have \( (1+\epsilon )^v = 1+ \sum _{i \ge 1} \left( {\begin{array}{c}v\\ i\end{array}}\right) \epsilon ^i \le 1+ \sum _{i \ge 1} (\epsilon v)^i \). For \(\epsilon v < 1/2\), the geometric series gives \(\sum _{i \ge 1} (\epsilon v)^i \le \epsilon v/(1-\epsilon v) < 2\epsilon v\), so \((1+\epsilon )^v \le 1+2\epsilon v\). Similarly, we obtain that \((1-\epsilon )^v \ge 1-2\epsilon v\). Therefore, we have that \(\mathbf {Adv}^{\text {h-so}}_{{\mathsf {H}},B,{\mathcal M},s}(k) \le 8\epsilon v\). Then,

$$\begin{aligned} \mathbf {Adv}^{\text {h-so}}_{{\mathsf {H}},B,{\mathcal M}}(k)= & {} \sum _{s\in S} {\Pr \left[ \,{K =s}\,\right] } \cdot \mathbf {Adv}^{\text {h-so}}_{{\mathsf {H}},B,{\mathcal M},s}(k)\\&+~ \sum _{s\in \overline{S}} {\Pr \left[ \,{K =s}\,\right] } \cdot \mathbf {Adv}^{\text {h-so}}_{{\mathsf {H}},B,{\mathcal M},s}(k) \\\le & {} \max _{s\in S} \mathbf {Adv}^{\text {h-so}}_{{\mathsf {H}},B,{\mathcal M},s}(k) + 2v2^{-u} . \end{aligned}$$

Finally, by substituting \(\epsilon = \root 3 \of {2^{1-\mu }|\mathsf {HRng}(k)|^{2}}\), which balances the two terms (as then \(2v\cdot 2^{-u} = 8v\cdot 2^{1-\mu }|\mathsf {HRng}(k)|^2/\epsilon ^2 = 8\epsilon v\)), we obtain

$$ \mathbf {Adv}^{\text {h-so}}_{{\mathsf {H}},B,{\mathcal M}}(k) \le 16 v \root 3 \of {2^{1-\mu }|\mathsf {HRng}(k)|^2} . $$

Using Theorem 2, we obtain for any unbounded adversary A

$$ \mathbf {Adv}^{\text {h-so}}_{{\mathsf {H}},A,{\mathcal M}}(k) \le 2592 v \root 3 \of {2^{1-\mu }|\mathsf {HRng}(k)|^2} . $$

This completes the proof of Theorem 4.

Proof of Lemma 5. We will need the following tail inequality for pair-wise independent random variables.

Claim

Let \(A_1,\cdots ,A_n\) be pair-wise independent random variables in the interval [0, 1]. Let \(A = \sum _i{A_i}\), let \(\mu = \mathbb {E}(A)\), and let \(\delta >0\). Then,

$$ {\Pr \left[ \,{|A-\mu | > \delta \mu }\,\right] } \le \frac{1}{\delta ^2\mu } . $$

Proof of Claim 3.2. From Chebyshev's inequality, for any \(\delta >0\) we have

$$ {\Pr \left[ \,{|A-\mu | > \delta \mu }\,\right] } \le \frac{{\text{ Var }}[A]}{\delta ^2\mu ^2} . $$

Note that \(A_1,\cdots ,A_n\) are pair-wise independent random variables. Thus, we have \({\text{ Var }}[A] = \sum _i{{\text{ Var }}[A_i]}\). Moreover, we know that \({\text{ Var }}[A_i]\le \mathbb {E}(A_i)\) for all \(i \in [n]\), since the random variable \(A_i\) is in the interval [0, 1]. Therefore, we have \({\text{ Var }}[A] \le \mu \). This completes the proof of Claim 3.2.
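Before continuing, a quick Monte Carlo illustration of the claim (our sketch; the family \(h_{a,b}(x) = (ax+b) \bmod p\) and all parameters are illustrative): pairwise-independent indicator variables built from a random degree-1 polynomial obey the stated tail bound.

```python
import random

# Indicators A_x = 1 if (a*x + b) mod p < r, for x in {0,...,n-1} with n <= p.
# Over random (a, b) these are pairwise independent, each with mean r/p, so
# A = sum_x A_x has E[A] = mu = n*r/p and Pr[|A-mu| > delta*mu] <= 1/(delta^2*mu).
p, n, r = 4099, 4096, 64       # prime modulus; domain size; "hit" range
delta, trials = 0.5, 2000
mu = n * r / p                 # about 64
bad = 0
for _ in range(trials):
    a, b = random.randrange(p), random.randrange(p)
    A = sum(1 for x in range(n) if (a * x + b) % p < r)
    if abs(A - mu) > delta * mu:
        bad += 1
print(f"empirical {bad / trials:.4f} <= bound {1 / (delta**2 * mu):.4f}")
```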

We define \(p_{x}={\Pr \left[ \,{X=x}\,\right] }\), for any \(x \in \mathsf {HDom}(k)\). We consider the probability over the choice of key K. For every \(x \in \mathsf {HDom}(k)\) and \(y \in \mathsf {HRng}(k)\), we also define the following random variable

$$ Z_{x,y}= {\left\{ \begin{array}{ll} p_x &{} \text {if }\ H(K,x)=y \\ 0 &{} \text {otherwise} \end{array}\right. } $$

We define the random variable \(A_{x,y} = Z_{x,y}2^\eta \); note that \(A_{x,y} \in [0,1]\) since \(p_x \le 2^{-\eta }\). Note that for every x, H(K, x) is uniformly distributed, over the uniformly random choice of K. Therefore, we have \(\mathbb {E}(Z_{x,y}) = p_x/|\mathsf {HRng}(k)|\), for every x, y. Let \(Z_y = \sum _x{Z_{x,y}}\) and \(A_y = \sum _x{A_{x,y}}\). Then, we have \(\mathbb {E}(Z_y) = 1/|\mathsf {HRng}(k)|\) and \(\mathbb {E}(A_y) = 2^\eta /|\mathsf {HRng}(k)|\). Moreover, for every y, the variables \(A_{x,y}\) are pair-wise independent. Applying Claim 3.2, we obtain that for every y and \(\delta >0\)

$$ {\Pr \left[ \,{\left| A_y - \frac{2^\eta }{|\mathsf {HRng}(k)|}\right| \ge \frac{\delta 2^\eta }{|\mathsf {HRng}(k)|}}\,\right] } \le \frac{|\mathsf {HRng}(k)| }{\delta ^2 2^\eta } . $$

Substituting \(Z_y\) for \(A_y\) and choosing \(\delta =\epsilon \), we obtain that for every \(\epsilon >0\),

$$ {\Pr \left[ \,{\left| Z_y - \frac{1}{|\mathsf {HRng}(k)|}\right| \ge \frac{\epsilon }{|\mathsf {HRng}(k)|}}\,\right] } \le \frac{|\mathsf {HRng}(k)| }{\epsilon ^2 2^\eta } . $$

Using the union bound over all \(y\in \mathsf {HRng}(k)\), we obtain that with probability at most \(|\mathsf {HRng}(k)|^2 / (\epsilon ^2 2^\eta ) = 2^{-u}\) over the choice of K, there is some \(y\in \mathsf {HRng}(k)\) with \(\left| Z_y - 1/|\mathsf {HRng}(k)|\right| \ge \epsilon /|\mathsf {HRng}(k)|\). This completes the proof of Lemma 5. \(\Box \)

We show in Theorem 6 that 2d-wise independent hash functions are selective opening secure for \((\mu , d)\)-correlated message samplers.

Theorem 6

Let \({\mathsf {H}}= ({\mathsf {HKg}}, {\mathsf {h}})\) be a family of 2d-wise independent hash functions with domain \(\mathsf {HDom}\) and range \(\mathsf {HRng}\). Let \({\mathcal M}\) be a \((\mu , d)\)-correlated, efficiently resamplable message sampler. Then for any computationally unbounded adversary A,

$$ \mathbf {Adv}^{\text {h-so}}_{{\mathsf {H}},A,{\mathcal M}}(k) \le 2592 v \root 3 \of {2^{1-\mu }|\mathsf {HRng}(k)|^{2d}} . $$

Proof

We need the following lemma, whose proof we give later.

Lemma 7

Let \({\mathsf {H}}= ({\mathsf {HKg}}, {\mathsf {h}})\) be a 2d-wise independent hash function with domain \(\mathsf {HDom}\) and range \(\mathsf {HRng}\). Let \(\mathbf {X}= (X_1,\cdots ,X_t)\), where \(t \le d\) and \(X_i\) is a random variable over \(\mathsf {HDom}\) such that \(\text {H}_\infty (X_i) \ge \eta \), for \(i \in [t]\). Then, for all \(\mathbf {y}= (y_1,\cdots , y_t)\), where \(y_i \in \mathsf {HRng}(k)\) and for any \(\epsilon >0\),

$$ \Big |{\Pr \left[ \,{H(K,\mathbf {X})=\mathbf {y}}\,\right] }-|\mathsf {HRng}(k)|^{-t} \Big |\ge \epsilon |\mathsf {HRng}(k)|^{-t} . $$

for at most a \(2^{-w}\) fraction of \(K \in [{\mathsf {HKg}}(1^k)]\), where \(w = \eta - 2t\log |\mathsf {HRng}(k)| - 2 \log (1/\epsilon )\).

We begin by showing that \({\mathsf {H}}\) is \(\text {H-SO}\) secure against any \(\frac{1}{4}\)-balanced boolean adversary B. Observe that for a computationally unbounded adversary B, we can assume wlog that \(B.\mathrm {cor}, B.\mathrm {g}\) and \(B.\mathrm {f}\) are deterministic. Moreover, we can also assume that adversary \(B.\mathrm {cor}\) passes \(K,\mathbf {h}[\bar{I}]\) as the state st to adversary \(B.\mathrm {g}\). We denote by \(\mathbf {Adv}^{\text {h-so}}_{{\mathsf {H}},B,{\mathcal M},s}(k)\) the advantage of B when \(K=s\). For any fixed key s we have

$$\begin{aligned}&\mathrm {Pr} [\text {H-SO-REAL}^{B}_{{\mathsf {H}},s}(k) \Rightarrow 1]\\= & {} \sum _{b=0}^{1}\sum _{I}\mathrm {Pr}[B.\mathrm {cor}(s,\mathbf {h}) \Rightarrow I ~\wedge ~B.\mathrm {g}(s,\mathbf {m}_1[I],\mathbf {h}[\bar{I}])\Rightarrow b ~\wedge ~B.\mathrm {f}(\mathbf {m}_1)\Rightarrow b ] \end{aligned}$$

For any \(\mathbf {y}\in (\mathsf {HRng}(k))^{\times v}\) and \(s \in [{\mathsf {HKg}}(1^k)]\), we define \(I_{s,\mathbf {y}}\) to be the output of \(B.\mathrm {cor}\) on input \(s,\mathbf {y}\). We also define \(M^b_{s,\mathbf {y}}=\{ \mathbf {m}[I_{s,\mathbf {y}}] \mid B.\mathrm {g}(s,\mathbf {m}[I_{s,\mathbf {y}}],\mathbf {y}) \Rightarrow b \}\), for \(b \in \{0,1\}\). Thus,

$$\begin{aligned}&\mathrm {Pr} [\text {H-SO-REAL}^{B}_{{\mathsf {H}},s}(k) \Rightarrow 1]\\= & {} \sum _{b=0}^{1} \sum _{\mathbf {y}} \mathrm {Pr}[\mathbf {h}= \mathbf {y}~\wedge ~ \mathbf {m}_1[I_{s,\mathbf {y}}] \in M^b_{s,\mathbf {y}} ~\wedge ~B.\mathrm {f}(\mathbf {m}_1)\Rightarrow b ] \end{aligned}$$

The above probability is over the choice of \(\mathbf {m}_1\). Similarly, we can define the probability of the experiment \(\text {H-SO-IDEAL}\) outputting 1. Therefore, we obtain

$$\begin{aligned} \mathbf {Adv}^{\text {h-so}}_{{\mathsf {H}},B,{\mathcal M},s}(k)= & {} \sum _{b=0}^{1} \sum _{\mathbf {y}} \mathrm {Pr}[\mathbf {h}= \mathbf {y}~\wedge ~ \mathbf {m}_1[I_{s,\mathbf {y}}] \in M^b_{s,\mathbf {y}} ~\wedge ~B.\mathrm {f}(\mathbf {m}_1)\Rightarrow b ] \\&~~~~~~~~~~ - \mathrm {Pr}[\mathbf {h}= \mathbf {y}~\wedge ~ \mathbf {m}_1[I_{s,\mathbf {y}}] \in M^b_{s,\mathbf {y}} ~\wedge ~B.\mathrm {f}(\mathbf {m}_0)\Rightarrow b ] \end{aligned}$$

Assume wlog that the above difference is maximized when \(b=1\). For \(b' \in \{0,1\}\), we define \(E_{b'}\) as the event that \(\mathbf {h}[I_{s,\mathbf {y}}] = \mathbf {y}[I_{s,\mathbf {y}}]\), \( \mathbf {m}_1[I_{s,\mathbf {y}}] \in M^1_{s,\mathbf {y}}\), and \(B.\mathrm {f}(\mathbf {m}_{b'})=1\). Note that each message has \(\mu \) bits of min-entropy and is independent of at least \(v-d\) of the messages. For convenience, we write I instead of \(I_{s,\mathbf {y}}\). Then, we obtain

$$\begin{aligned} \mathbf {Adv}^{\text {h-so}}_{{\mathsf {H}},B,{\mathcal M},s}(k)\le & {} 2 \cdot \sum _{\mathbf {y}} \mathrm {Pr}[E_1] \cdot \mathrm {Pr}[\mathbf {h}[\overline{I}] = \mathbf {y}[\overline{I}] \mid B.\mathrm {f}(\mathbf {m}_1)=1 ] \\&~~~~~~~~~~ - \mathrm {Pr}[E_0] \cdot \mathrm {Pr}[\mathbf {h}[\overline{I}] = \mathbf {y}[\overline{I}] ] \end{aligned}$$

Note that \(\mathbf {m}_0\) and \(\mathbf {m}_1\) have the same distribution. Then, we have \(\mathrm {Pr}[E_0] = \mathrm {Pr}[E_1]\) and \(\mathrm {Pr}[E_0] \le \mathrm {Pr}[\mathbf {h}[I] = \mathbf {y}[I]]\). We define the random variable \(\mathbf {X}[i] = (\mathbf {m}_1[i] \mid B.\mathrm {f}(\mathbf {m}_1)=1)\), for all \(i \in [v]\). From property (a) of Lemma 1 and Lemma 3, we obtain that \(\text {H}_\infty (\mathbf {X}[i]) \ge \mu - 3\). For all \(i \in [v]\), we also have \(\text {H}_\infty (\mathbf {m}_1[i]) \ge \mu \ge \mu - 3\).

Moreover, we know that the exceptional event of Lemma 7 occurs for at most a \(2^{-u}\) fraction of \(K \in [{\mathsf {HKg}}(1^k)]\), where \(u = \mu - 3 - 2d\log |\mathsf {HRng}(k)| - 2 \log (1/\epsilon )\); we shall determine the value of \(\epsilon \) later. Partition [v] into \(L_1, \ldots ,L_v\) such that \(|L_k| \le d\) and, for all \(i,j \in L_k\), the messages \(\mathbf {m}[i]\) and \(\mathbf {m}[j]\) are correlated. Using the union bound, for all \(\mathbf {y}[L_i] \in (\mathsf {HRng}(k))^{\times |L_i|}\), where \(i \in [v]\), and for any \(\epsilon >0\), we obtain that for at least a \(1-2v\cdot 2^{-u}\) fraction of K, we have \( \big |{\Pr \left[ \,{H(K,x[L_i])=\mathbf {y}[L_i]}\,\right] }-|\mathsf {HRng}(k)|^{-|L_i|} \big |\le \epsilon |\mathsf {HRng}(k)|^{-|L_i|} \) for all \(i \in [v]\) and \(x \in \{\mathbf {m}_1,\mathbf {X}\}\). Let S be the set of such K.

Now, for all \(s \in S\) and \(i \in [v]\), we have \((1 - \epsilon ) |\mathsf {HRng}(k)|^{-|L_i|} \le {\Pr \left[ \,{\mathbf {h}[L_i] = \mathbf {y}[L_i]}\,\right] } \le (1 + \epsilon ) |\mathsf {HRng}(k)|^{-|L_i|}\). Let \(|I_{s,\mathbf {y}}|= \ell \). Then,

$$\begin{aligned} \mathbf {Adv}^{\text {h-so}}_{{\mathsf {H}},B,{\mathcal M},s}(k)\le & {} 2 \cdot \sum _{\mathbf {y}} |\mathsf {HRng}(k)|^{-v} (1+\epsilon )^{\ell }\Big ((1+\epsilon )^{v-\ell }- (1-\epsilon )^{v-\ell }\Big ) \\\le & {} 2 \Big ((1+\epsilon )^{v}- (1-\epsilon )^{v}\Big ) \end{aligned}$$

We also have \( (1+\epsilon )^v = 1+ \sum _{i \ge 1} \left( {\begin{array}{c}v\\ i\end{array}}\right) \epsilon ^i \le 1+ \sum _{i \ge 1} (\epsilon v)^i \). For \(\epsilon v < 1/2\), the geometric series gives \(\sum _{i \ge 1} (\epsilon v)^i \le \epsilon v/(1-\epsilon v) < 2\epsilon v\), so \((1+\epsilon )^v \le 1+2\epsilon v\). Similarly, we obtain that \((1-\epsilon )^v \ge 1-2\epsilon v\). Therefore, we have that \(\mathbf {Adv}^{\text {h-so}}_{{\mathsf {H}},B,{\mathcal M},s}(k) \le 8\epsilon v\). Then,

$$\begin{aligned} \mathbf {Adv}^{\text {h-so}}_{{\mathsf {H}},B,{\mathcal M}}(k)= & {} \sum _{s\in S} {\Pr \left[ \,{K =s}\,\right] } \cdot \mathbf {Adv}^{\text {h-so}}_{{\mathsf {H}},B,{\mathcal M},s}(k)\\&+~ \sum _{s\in \overline{S}} {\Pr \left[ \,{K =s}\,\right] } \cdot \mathbf {Adv}^{\text {h-so}}_{{\mathsf {H}},B,{\mathcal M},s}(k) \\\le & {} \max _{s\in S} \mathbf {Adv}^{\text {h-so}}_{{\mathsf {H}},B,{\mathcal M},s}(k) + 2v2^{-u} . \end{aligned}$$

Finally, by substituting \(\epsilon = \root 3 \of {2^{1-\mu }|\mathsf {HRng}(k)|^{2d}}\), we obtain

$$ \mathbf {Adv}^{\text {h-so}}_{{\mathsf {H}},B,{\mathcal M}}(k) \le 16 v \root 3 \of {2^{1-\mu }|\mathsf {HRng}(k)|^{2d}} . $$

Using Theorem 2, we obtain for any unbounded adversary A

$$ \mathbf {Adv}^{\text {h-so}}_{{\mathsf {H}},A,{\mathcal M}}(k) \le 2592 v \root 3 \of {2^{1-\mu }|\mathsf {HRng}(k)|^{2d}} . $$

This completes the proof of Theorem 6.

Proof of Lemma 7. We define \(p_{\mathbf {x}}={\Pr \left[ \,{\mathbf {X}=\mathbf {x}}\,\right] }\), for any \(\mathbf {x}= (x_1,\cdots ,x_t)\), where \(x_i \in \mathsf {HDom}(k)\). We consider the probability over the choice of key K. For every \(\mathbf {x}\) and \(\mathbf {y}\), we also define the following random variable

$$ Z_{\mathbf {x},\mathbf {y}}= {\left\{ \begin{array}{ll} p_\mathbf {x}&{} \text {if }\ H(K,\mathbf {x})=\mathbf {y}\\ 0 &{} \text {otherwise} \end{array}\right. } $$

Let \(A_{\mathbf {x},\mathbf {y}} = Z_{\mathbf {x},\mathbf {y}}2^{\eta }\). Note that for all \(i \in [t]\) and for every \(x_i\), \(H(K,x_i)\) is uniformly distributed, over the uniformly random choice of K. Moreover, \({\mathsf {H}}\) is t-wise independent. Therefore, we have \(\mathbb {E}(Z_{\mathbf {x},\mathbf {y}}) = p_\mathbf {x}/|\mathsf {HRng}(k)|^t\), for every \(\mathbf {x}, \mathbf {y}\). Let \(Z_\mathbf {y}= \sum _\mathbf {x}{Z_{\mathbf {x},\mathbf {y}}}\) and \(A_\mathbf {y}= \sum _\mathbf {x}{A_{\mathbf {x},\mathbf {y}}}\). Then, we have \(\mathbb {E}(Z_\mathbf {y}) = 1/|\mathsf {HRng}(k)|^t\) and \(\mathbb {E}(A_\mathbf {y}) = 2^{\eta }/|\mathsf {HRng}(k)|^t\). Moreover, for every \(\mathbf {x},\mathbf {y}\), we know \(A_{\mathbf {x},\mathbf {y}} \in [0,1]\) and for every \(\mathbf {y}\), the variables \(A_{\mathbf {x},\mathbf {y}}\) are pair-wise independent. Applying Claim 3.2, we obtain that for every \(\mathbf {y}\) and \(\delta >0\)

$$ {\Pr \left[ \,{\left| A_\mathbf {y}- \frac{2^\eta }{|\mathsf {HRng}(k)|^t}\right| \ge \frac{\delta 2^\eta }{|\mathsf {HRng}(k)|^t}}\,\right] } \le \frac{|\mathsf {HRng}(k)|^t }{\delta ^2 2^\eta } . $$

Substituting \(Z_\mathbf {y}\) for \(A_\mathbf {y}\) and choosing \(\delta =\epsilon \), we obtain that for every \(\epsilon >0\),

$$ {\Pr \left[ \,{\left| Z_\mathbf {y}- \frac{1}{|\mathsf {HRng}(k)|^t}\right| \ge \frac{\epsilon }{|\mathsf {HRng}(k)|^t}}\,\right] } \le \frac{|\mathsf {HRng}(k)|^t }{\epsilon ^2 2^\eta } . $$

Using the union bound over all \(\mathbf {y}\), we obtain that with probability at most \(|\mathsf {HRng}(k)|^{2t} / (\epsilon ^2 2^\eta ) = 2^{-w}\) over the choice of K, there is some \(\mathbf {y}\) with \(\left| Z_\mathbf {y}- |\mathsf {HRng}(k)|^{-t} \right| \ge \epsilon |\mathsf {HRng}(k)|^{-t}\). Thus,

$$ \left| {\Pr \left[ \,{H(K,\mathbf {X})=\mathbf {y}}\,\right] }- |\mathsf {HRng}(k)|^{-t} \right| \ge \epsilon |\mathsf {HRng}(k)|^{-t} . $$

with probability at most \(2^{-w}\) over the choice of K. This completes the proof of Lemma 7. \( \Box \)

4 Selective Opening Security for Deterministic Encryption

In this section, we give two different constructions of deterministic public key encryption and show that they achieve \(\text {D-SO-CPA}\) security. First, we show that lossy trapdoor functions that are 2t-wise independent in the lossy mode are selective opening secure for t-correlated messages. However, it is an open problem to construct them for \(t >1\).

Hence, we give another construction of deterministic public key encryption using hash functions and a lossy trapdoor permutation, and show that it is selective opening secure. A close variant of this scheme is shown to be \(\text {D-SO-CPA}\) secure in the NPROM [20]. Our scheme is efficient, and the only public-key primitive that it uses is a regular lossy trapdoor function, which has practical instantiations; e.g., both Rabin and RSA are regular lossy.

4.1 Achieving D-SO-CPA Security

We start by showing that 2t-wise independent lossy trapdoor functions are selective opening secure. It was previously shown by Hoang et al. [20] that the \(\text {D-SO-CPA}\) notion is achievable in the random-oracle model; they leave it open to construct a \(\text {D-SO-CPA}\) secure scheme in the standard model. Here, we show that a pair-wise independent lossy trapdoor function is \(\text {D-SO-CPA}\) secure for independent messages. We also show that a 2d-wise independent lossy trapdoor function is \(\text {D-SO-CPA}\) secure for \((\mu ,d)\)-correlated message samplers.

First, we show in Theorem 8 that a pair-wise independent lossy trapdoor function is \(\text {D-SO-CPA}\) secure for \((\mu , 0)\)-correlated message samplers.

Theorem 8

Let \({\mathcal M}\) be a \((\mu , 0)\)-correlated, efficiently resamplable message sampler. Let \({\mathsf {LT}}\) be a lossy trapdoor function with domain \(\mathsf {LDom}\), range \(\mathsf {LRng}\) and lossiness \(\tau \). Suppose \({\mathsf {LT}}\) is pair-wise independent. Then for any adversary A,

$$ \mathbf {Adv}^{\text {d-so-cpa}}_{{\mathsf {LT}},A, {\mathcal M}}(k) \le 2 \cdot \mathbf {Adv}^{\mathrm {ltdf}}_{{\mathsf {LT}},B}(k) + 2592 v \root 3 \of {2^{1-\mu -2\tau }|\mathsf {LRng}(k)|^2} . $$

Proof

Consider games \(G_0,G_1\) in Fig. 3. Then

$$ \mathbf {Adv}^{\text {d-so-cpa}}_{{\mathsf {LT}},A, {\mathcal M}}(k) = 2\cdot {\Pr \left[ \,{G_0(k) \Rightarrow 1}\,\right] } - 1 . $$
Fig. 3. Games \(G_0, G_1\) of the proof of Theorem 8.

We now explain the game chain. Game \(G_1\) is identical to game \(G_0\), except that instead of generating an injective key for the lossy trapdoor function, we generate a lossy one. Consider the following adversary B attacking the key indistinguishability of \({\mathsf {LT}}\). It simulates game \(G_0\), but uses its given key instead of generating a new one. It outputs 1 if the simulated game returns 1, and outputs 0 otherwise. Then

$$ \Pr [G_0(k) \Rightarrow 1] - \Pr [G_1(k) \Rightarrow 1] \le \mathbf {Adv}^{\mathrm {ltdf}}_{{\mathsf {LT}},B}(k) . $$

Note that game \(G_1\) is identical to games \(\text {H-SO-REAL}\) or \(\text {H-SO-IDEAL}\), when \(b=1\) or \(b=0\), respectively. Then

$$ \mathbf {Adv}^{\text {h-so}}_{{\mathsf {LT}},A, {\mathcal M}}(k) = 2\cdot {\Pr \left[ \,{G_1(k) \Rightarrow 1}\,\right] } - 1 . $$

Note that \({\mathsf {LT}}\) is pair-wise independent and \(\tau \)-lossy. Then, the size of the range of \({\mathsf {LT}}\) in the lossy mode is at most \(2^{-\tau }|\mathsf {LRng}(k)|\). From Theorem 4,

$$ \mathbf {Adv}^{\text {h-so}}_{{\mathsf {LT}},A, {\mathcal M}}(k) \le 2592 v \root 3 \of {2^{1-\mu -2\tau }|\mathsf {LRng}(k)|^2} . $$

Summing up,

$$ \mathbf {Adv}^{\text {d-so-cpa}}_{{\mathsf {LT}},A, {\mathcal M}}(k) \le 2 \cdot \mathbf {Adv}^{\mathrm {ltdf}}_{{\mathsf {LT}},B}(k) + 2592 v \root 3 \of {2^{1-\mu -2\tau }|\mathsf {LRng}(k)|^2} . $$

This completes the proof of Theorem 8.

Next, we show in Theorem 9 that a 2d-wise independent lossy trapdoor function is \(\text {D-SO-CPA}\) secure for \((\mu , d)\)-correlated message samplers.

Theorem 9

Let \({\mathcal M}\) be a \((\mu , d)\)-correlated, efficiently resamplable message sampler. Let \({\mathsf {LT}}\) be a lossy trapdoor function with domain \(\mathsf {LDom}\), range \(\mathsf {LRng}\) and lossiness \(\tau \). Suppose \({\mathsf {LT}}\) is 2d-wise independent. Then for any adversary A,

$$ \mathbf {Adv}^{\text {d-so-cpa}}_{{\mathsf {LT}},A, {\mathcal M}}(k) \le 2 \cdot \mathbf {Adv}^{\mathrm {ltdf}}_{{\mathsf {LT}},B}(k) + 2592 v \root 3 \of {2^{1-\mu -2d\tau }|\mathsf {LRng}(k)|^{2d}} . $$

The proof of Theorem 9 is very similar to the proof of Theorem 8.

Although 2t-wise independent lossy trapdoor functions would be very efficient and secure against selective opening attack, it is an open problem to construct them for \(t >1\). Hence, we give a new construction of deterministic public key encryption that is selective opening secure. Our scheme \({\mathsf {DE}}[{\mathsf {H}}, {\mathsf {G}}, {\mathsf {LT}}]\) is shown in Fig. 4, where \({\mathsf {LT}}\) is a lossy trapdoor function and \({\mathsf {H}},{\mathsf {G}}\) are hash functions. We begin by showing in Theorem 10 that \({\mathsf {DE}}\) is \(\text {D-SO-CPA}\) secure for independent messages when \({\mathsf {H}}\), \({\mathsf {G}}\) are pair-wise independent hash functions and \({\mathsf {LT}}\) is a regular lossy trapdoor function.

Fig. 4. D-PKE scheme \({\mathsf {DE}}[{\mathsf {H}}, {\mathsf {G}}, {\mathsf {LT}}]\).
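Fig. 4 itself is not reproduced here. The following Python sketch shows one natural composition consistent with the stated domains and with the proof of Lemma 11 below (the exact composition, and the toy stand-ins for \({\mathsf {h}}\), \({\mathsf {g}}\) and \({\mathsf {LT}}\), are our assumptions, not necessarily the authors' Fig. 4): hash the message with \({\mathsf {h}}\), pad it with \({\mathsf {g}}\) applied to the digest, and apply the injective-mode lossy trapdoor function to the concatenation.

```python
import secrets

N, L = 16, 8   # toy message length n and digest length l

def h(K, x):   # stand-in for pairwise-independent h: {0,1}^N -> {0,1}^L
    a, b = K
    return (a * x + b) % (2**L)   # toy only, not actually pairwise independent

def g(K, y):   # stand-in for g: {0,1}^L -> {0,1}^N
    a, b = K
    return (a * y + b) % (2**N)

def eval_lt(ek, w):               # stand-in for an injective-mode LT on N+L bits
    return w ^ ek

def inv_lt(td, c):
    return c ^ td

def kg():
    ek = secrets.randbits(N + L)
    K_H = (secrets.randbits(L), secrets.randbits(L))
    K_G = (secrets.randbits(N), secrets.randbits(N))
    return (ek, K_H, K_G), (ek, K_H, K_G)   # td = ek for the toy LT

def enc(pk, x):
    ek, K_H, K_G = pk
    y = h(K_H, x)                     # y = h(K_H, x)
    a = x ^ g(K_G, y)                 # pad the message with g(K_G, y)
    return eval_lt(ek, (a << L) | y)  # c = Eval(ek, a || y)

def dec(sk, c):
    td, K_H, K_G = sk
    w = inv_lt(td, c)
    a, y = w >> L, w & ((1 << L) - 1)
    x = a ^ g(K_G, y)
    return x if h(K_H, x) == y else None   # reject inconsistent ciphertexts

pk, sk = kg()
x = secrets.randbits(N)
assert dec(sk, enc(pk, x)) == x
```

This reading matches how the proof of Lemma 11 uses the two hash keys: \({\mathsf {h}}(K_H,x)\) is uniform over the choice of \(K_H\), and \({\mathsf {g}}(K_G,\cdot )\) is uniform over the choice of \(K_G\), making the input to \(\textsf {Eval}\) uniform on \(\{0,1\}^{n+\ell }\) for each fixed message, at which point regularity of \({\mathsf {LT}}\) applies.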

Theorem 10

Let \({\mathcal M}\) be a \((\mu , 0)\)-correlated, efficiently resamplable message sampler. Let \({\mathsf {H}}= ({\mathsf {HKg}}, {\mathsf {h}})\) with domain \(\{0,1\}^n\) and range \(\{0,1\}^{\ell }\) and \({\mathsf {G}}= ({\mathsf {GKg}}, {\mathsf {g}})\) with domain \(\{0,1\}^\ell \) and range \(\{0,1\}^n\) be hash function families. Suppose \({\mathsf {H}}\) and \({\mathsf {G}}\) are pair-wise independent. Let \({\mathsf {LT}}\) be a regular lossy trapdoor function with domain \(\{0,1\}^{n+\ell }\), range \(\{0,1\}^p\) and lossiness \(\tau \). Let \({\mathsf {DE}}[{\mathsf {H}}, {\mathsf {G}}, {\mathsf {LT}}]\) be as above. Then for any adversary A,

$$ \mathbf {Adv}^{\text {d-so-cpa}}_{{\mathsf {DE}},A, {\mathcal M}}(k) \le 2 \cdot \mathbf {Adv}^{\mathrm {ltdf}}_{{\mathsf {LT}},B}(k) + 2592 v \root 3 \of {2^{1-\mu -2\tau +2p}} . $$

Proof

We begin by showing the following lemma.

Lemma 11

Let \({\mathsf {H}}= ({\mathsf {HKg}}, {\mathsf {h}})\) with domain \(\{0,1\}^n\) and range \(\{0,1\}^{\ell }\) and \({\mathsf {G}}= ({\mathsf {GKg}}, {\mathsf {g}})\) with domain \(\{0,1\}^\ell \) and range \(\{0,1\}^n\) be hash function families. Suppose \({\mathsf {H}}\) and \({\mathsf {G}}\) are pair-wise independent. Let \({\mathsf {LT}}\) be a regular lossy trapdoor function with domain \(\{0,1\}^{n+\ell }\), range \(\{0,1\}^p\) and lossiness \(\tau \). Let X be a random variable over \(\{0,1\}^n\) such that \(\text {H}_\infty (X) \ge \eta \). Then, for all lossy keys \( ek \in [\mathsf {LKg}(1^k)]\), all c in the co-domain of \(\textsf {Eval}( ek , \cdot )\), and any \(\epsilon >0\),

$$ \Big |{\Pr \left[ \,{\mathsf {DE.Enc}( pk , X)=c}\,\right] }- 2^{\tau -p} \Big |\ge \epsilon 2^{\tau -p} . $$

for at most a \(2^{-u}\) fraction of public keys \( pk \) (over the choice of the hash keys \(K_H, K_G\)), where \(u = \eta +2 \tau -2p- 2 \log (1/\epsilon )\).

Proof of Lemma 11. We define \(p_{x}={\Pr \left[ \,{X=x}\,\right] }\), for any \(x \in \{0,1\}^n\). Fixing the lossy key \( ek \), we consider the probability over the choice of the hash keys \(K_H,K_G\). For every \(x \in \{0,1\}^n\) and every c in the co-domain of \(\textsf {Eval}( ek , \cdot )\), we also define the following random variable

$$ Z_{x,c}= {\left\{ \begin{array}{ll} p_x &{} \text {if }\ \mathsf {DE.Enc}( pk , x)=c \\ 0 &{} \text {otherwise} \end{array}\right. } $$

Let \(A_{x,c} = Z_{x,c}2^\eta \). Note that for every x, \({\mathsf {h}}(K_H,x)\) is uniformly distributed, over the uniformly random choice of \(K_H\). Moreover, for every x and \(K_H\), \({\mathsf {g}}(K_G,{\mathsf {h}}(K_H,x))\) is uniformly distributed, over the uniformly random choice of \(K_G\). Since \({\mathsf {LT}}\) is a regular LTDF, we have \(\mathbb {E}(Z_{x,c}) = p_x\cdot 2^{\tau -p}\), for every x, c. Let \(Z_c = \sum _x{Z_{x,c}}\) and \(A_c = \sum _x{A_{x,c}}\). Then, we have \(\mathbb {E}(Z_c) = 2^{\tau -p}\) and \(\mathbb {E}(A_c) = 2^{\eta +\tau -p}\). Moreover, for every x, c, we know \(A_{x,c} \in [0,1]\), and for every c, the variables \(A_{x,c}\) are pair-wise independent. Applying Claim 3.2, we obtain that for every c and \(\delta >0\)

$$ {\Pr \left[ \,{\left| A_c - 2^{\eta +\tau -p}\right| \ge \delta \cdot 2^{\eta +\tau -p}}\,\right] } \le \frac{2^{p-\eta -\tau } }{\delta ^2} . $$

Substituting \(Z_c\) for \(A_c\) and choosing \(\delta =\epsilon \), we obtain that for every \(\epsilon >0\),

$$ {\Pr \left[ \,{\left| Z_c - 2^{\tau -p} \right| \ge \epsilon \cdot 2^{\tau -p}}\,\right] } \le \frac{2^{p-\eta -\tau } }{\epsilon ^2} . $$

Using the union bound over all c in the co-domain of \(\textsf {Eval}( ek , \cdot )\), we obtain that with probability at most \(2^{2p-\eta -2\tau } / \epsilon ^2 = 2^{-u}\) over the choice of \(K_H,K_G\), there is some c with \(\left| Z_c - 2^{\tau -p} \right| \ge \epsilon \cdot 2^{\tau -p}\). This completes the proof of Lemma 11. \(\Box \)

Consider games \(G_0,G_1\) in Fig. 5. Then

$$ \mathbf {Adv}^{\text {d-so-cpa}}_{{\mathsf {DE}},A, {\mathcal M}}(k) = 2\cdot {\Pr \left[ \,{G_0(k) \Rightarrow 1}\,\right] } - 1 . $$
Fig. 5. Games \(G_0, G_1\) of the proof of Theorem 10.

We now explain the game chain. Game \(G_1\) is identical to game \(G_0\), except that instead of generating an injective key for the lossy trapdoor function, we generate a lossy one. Consider the following adversary B attacking the key indistinguishability of \({\mathsf {LT}}\). It simulates game \(G_0\), but uses its given key instead of generating a new one. It outputs 1 if the simulated game returns 1, and outputs 0 otherwise. Then

$$ \Pr [G_0(k) \Rightarrow 1] - \Pr [G_1(k) \Rightarrow 1] \le \mathbf {Adv}^{\mathrm {ltdf}}_{{\mathsf {LT}},B}(k) . $$

Similar to the proof of Theorem 4, using Lemma 11, we obtain that

$$ {\Pr \left[ \,{G_1(k) \Rightarrow 1}\,\right] } \le 1296 v \root 3 \of {2^{1-\mu -2\tau +2p}} + \frac{1}{2} . $$

Summing up,

$$ \mathbf {Adv}^{\text {d-so-cpa}}_{{\mathsf {DE}},A, {\mathcal M}}(k) \le 2 \cdot \mathbf {Adv}^{\mathrm {ltdf}}_{{\mathsf {LT}},B}(k) + 2592 v \root 3 \of {2^{1-\mu -2\tau +2p}} . $$

This completes the proof of Theorem 10.

We now extend our result to correlated messages. We show that it is enough to use 2t-wise independent hash functions to extend the security to t-correlated messages. Let \({\mathsf {DE}}[{\mathsf {H}}, {\mathsf {G}}, {\mathsf {LT}}]\) be the D-PKE scheme shown in Fig. 4, where \({\mathsf {LT}}\) is a lossy trapdoor function and \({\mathsf {H}},{\mathsf {G}}\) are hash functions. We show in Theorem 12 that \({\mathsf {DE}}\) is \(\text {D-SO-CPA}\) secure for t-correlated messages when \({\mathsf {H}}, {\mathsf {G}}\) are 2t-wise independent hash functions and \({\mathsf {LT}}\) is a regular lossy trapdoor function.

Theorem 12

Let \({\mathcal M}\) be a \((\mu , d)\)-correlated, efficiently resamplable message sampler. Let \({\mathsf {H}}= ({\mathsf {HKg}}, {\mathsf {h}})\) with domain \(\{0,1\}^n\) and range \(\{0,1\}^{\ell }\) and \({\mathsf {G}}= ({\mathsf {GKg}}, {\mathsf {g}})\) with domain \(\{0,1\}^\ell \) and range \(\{0,1\}^n\) be hash function families. Suppose \({\mathsf {H}}\) and \({\mathsf {G}}\) are 2d-wise independent. Let \({\mathsf {LT}}\) be a regular lossy trapdoor function with domain \(\{0,1\}^{n+\ell }\), range \(\{0,1\}^p\) and lossiness \(\tau \). Let \({\mathsf {DE}}[{\mathsf {H}}, {\mathsf {G}}, {\mathsf {LT}}]\) be as above. Then for any adversary A,

$$ \mathbf {Adv}^{\text {d-so-cpa}}_{{\mathsf {DE}},A, {\mathcal M}}(k) \le 2 \cdot \mathbf {Adv}^{\mathrm {ltdf}}_{{\mathsf {LT}},B}(k) + 2592 v \root 3 \of {2^{1-\mu +2d(-\tau +p)}} . $$

The proof of Theorem 12 is very similar to the proof of Theorem 10.

5 Results for One-More-RSA Inversion Problem

In this section, we recall the definition of the one-more-RSA inversion problem. This problem is a natural extension of the RSA problem to a setting where the adversary has access to a corruption oracle. Bellare et al. [4] first introduced this notion and showed that assuming hardness of the one-more-RSA inversion problem yields a proof of security of Chaum's blind signature scheme in the random-oracle model. Here we show that the one-more inversion problem is hard for RSA with a large enough encryption exponent e. More generally, we show that the one-more inversion problem is hard for any regular lossy trapdoor function.

5.1 Security Notion

Here we give a formal definition of the one-more-RSA inversion problem. Our definition is more general and considers this problem for any trapdoor function. Intuitively, in the one-more inversion problem, the adversary gets a number of image points and must output the preimages of all of them, while it has access to a corruption oracle that returns the preimage of any image point of its choice. To win, the number of corruption queries must be less than the number of image points. We also note that a special case of the one-more inversion problem in which there is only one image point (and no corruption query) is exactly the problem underlying the notion of one-wayness.

One-more inversion problem. Let \(\mathsf {TDF}= (\mathsf {Kg}, \mathsf {Eval}, \mathsf {Inv})\) be a trapdoor function with domain \(\mathsf {TDom}(\cdot )\) and range \(\mathsf {TRng}(\cdot )\). To an adversary A, we associate the experiment in Fig. 6 for every \(k \in {{\mathbb N}}\). We say that \(\mathsf {TDF}\) is one-more[v] secure for a class \(\mathscr {A}\) of adversaries if for every \(A\in \mathscr {A}\),

$$ \mathbf {Adv}^{\text {one-more}}_{\mathsf {TDF},A,v}(k) ={\Pr \left[ \,{\text {ONE-MORE-INV}^{A,v}_\mathsf {TDF}(k) \Rightarrow 1}\,\right] } \; $$

is negligible in k.

Fig. 6. Games to define the One-More security.
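To make the experiment concrete, here is a toy Python sketch of the one-more inversion game instantiated with textbook RSA (all names and the tiny parameters are illustrative assumptions of ours, not the paper's formalization in Fig. 6):

```python
import secrets

P, Q, E = 1009, 2003, 65537         # toy RSA parameters (illustrative only)
N = P * Q
D = pow(E, -1, (P - 1) * (Q - 1))   # inversion trapdoor (Python 3.8+)

def one_more_game(adversary, v):
    ys = [secrets.randbelow(N - 2) + 2 for _ in range(v)]   # v image points
    queries = 0
    def corrupt(i):                 # corruption (inversion) oracle
        nonlocal queries
        queries += 1
        return pow(ys[i], D, N)
    xs = adversary(ys, corrupt)     # candidate preimages for all points
    ok = all(pow(x, E, N) == y for x, y in zip(xs, ys))
    return ok and queries < v       # win: all correct, fewer queries than v

def naive(ys, corrupt):
    """Opens all but one point; must then guess the last preimage."""
    xs = [corrupt(i) for i in range(len(ys) - 1)]
    return xs + [secrets.randbelow(N)]

print(one_more_game(naive, 5))      # almost always False
```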

5.2 Achieving One-More Security

We show in Theorem 13 that a regular lossy trapdoor function is one-more secure. We point out that, for large enough encryption exponent e, RSA is a regular lossy trapdoor function [24].

Pass [29] showed that the one-more inversion problem for any certified, homomorphic trapdoor permutation cannot be reduced to a more “standard” assumption, meaning one that consists of a fixed number of rounds between challenger and adversary. As noted by Kakvi and Kiltz [23], RSA is not certified unless e is a prime larger than N so there is no contradiction.

Theorem 13

Let \({\mathsf {LT}}\) be a regular lossy trapdoor function with domain \(\mathsf {LDom}\), range \(\mathsf {LRng}\) and lossiness \(\tau \). Then for any adversary A and any \(v \in {{\mathbb N}}\),

$$ \mathbf {Adv}^{\text {one-more}}_{{\mathsf {LT}},A,v}(k) \le \mathbf {Adv}^{\mathrm {ltdf}}_{{\mathsf {LT}},B}(k) + v \cdot 2^{-\tau } . \; $$

Proof

Consider games \(G_0\)–\(G_3\), of which \(G_2\) and \(G_3\) are shown in Fig. 7. Then

$$ \mathbf {Adv}^{\text {one-more}}_{{\mathsf {LT}},A,v}(k) = {\Pr \left[ \,{G_0(k) \Rightarrow 1}\,\right] } . $$

We now explain the game chain. Game \(G_1\) is identical to game \(G_0\), except that instead of generating an injective key for the lossy trapdoor function, we generate a lossy one. Consider the following adversary B attacking the key indistinguishability of \({\mathsf {LT}}\). It simulates game \(G_0\), but uses its given key instead of generating a new one. It outputs 1 if the simulated game returns 1, and outputs 0 otherwise. Then

$$ \Pr [G_0(k) \Rightarrow 1] - \Pr [G_1(k) \Rightarrow 1] \le \mathbf {Adv}^{\mathrm {ltdf}}_{{\mathsf {LT}},B}(k) . $$
Fig. 7. Games \(G_2, G_3\) of the proof of Theorem 13.

Let \(R\) denote the co-domain of \(\textsf {Eval}( ek , \cdot )\) for the lossy key \( ek \). In game \(G_2\), we reorder the code of game \(G_1\) producing the vector \(\mathbf {y}\). Note that \({\mathsf {LT}}\) is a regular lossy trapdoor function. Then, the distribution of the vector \(\mathbf {y}\) is uniformly random on \(R\) in game \(G_1\). Thus, the vectors \(\mathbf {x}\) and \(\mathbf {y}\) have the same distribution in games \(G_1\) and \(G_2\). Hence, the change is conservative, meaning that \(\Pr [G_1(k) \Rightarrow 1] = \Pr [G_2(k) \Rightarrow 1]\). Moreover, game \(G_3\) is identical to game \(G_2\). Thus, we have \(\Pr [G_2(k) \Rightarrow 1] = \Pr [G_3(k) \Rightarrow 1]\).

Let \(\mathbf {y}[\overline{I}]\) be the unopened images, where \(|\overline{I}| \ge 1\). Note that in game \(G_3\), for all \(i \in \overline{I}\), \(\mathbf {x}[i]\) is chosen uniformly at random after adversary A outputs \(\mathbf {x}'\). Therefore, we obtain \(\Pr [G_3(k) \Rightarrow 1] \le |\overline{I}| \cdot 2^{-\tau }\). Summing up,

$$ \mathbf {Adv}^{\text {one-more}}_{{\mathsf {LT}},A,v}(k) \le \mathbf {Adv}^{\mathrm {ltdf}}_{{\mathsf {LT}},B}(k) + v \cdot 2^{-\tau } . \; $$

This completes the proof of Theorem 13.