Memory-Tight Reductions
Abstract
Cryptographic reductions typically aim to be tight by transforming an adversary \(\mathsf{A}\) into an algorithm that uses essentially the same resources as \(\mathsf{A}\). In this work we initiate the study of memory efficiency in reductions. We argue that the amount of working memory used (relative to the initial adversary) is a relevant parameter in reductions, and that reductions that are inefficient with memory will sometimes yield less meaningful security guarantees. We then point to several common techniques in reductions that are memory-inefficient and give a toolbox for reducing memory usage. We review common cryptographic assumptions and their sensitivity to memory usage. Finally, we prove an impossibility result showing that reductions between some assumptions must unavoidably be either memory- or time-inefficient. This last result follows from a connection to data streaming algorithms for which unconditional memory lower bounds are known.
Keywords
Memory tightness · Provable security · Black-box reduction
1 Introduction
Cryptographic reductions support the security of a cryptographic scheme \(\mathsf{S}\) by showing that any attack against \(\mathsf{S}\) can be transformed into an algorithm for solving a problem \(\mathsf{P}\). The tightness of a reduction is in general some measure of how closely the reduction relates the resources of attacks against \(\mathsf{S}\) to the resources of the algorithm for \(\mathsf{P}\). A tighter reduction gives a better algorithm for \(\mathsf{P}\), ruling out a larger class of attacks against \(\mathsf{S}\). Typically one considers resources like runtime, success probability, and sometimes the number of queries (to oracles defined in \(\mathsf{P}\)) of the resultant algorithm when evaluating the tightness of a reduction.
This work revisits how we measure the resources of the algorithm produced by a reduction. We observe that memory usage is an important but often overlooked metric in evaluating cryptographic reductions. Consider typical “tight” reductions from the literature, which start with an attack against a scheme \(\mathsf{S}\) that uses (say) time \(t_S\) to achieve success probability \(\varepsilon _S\), and transform the attack into an algorithm for problem \(\mathsf{P}\) running in time \(t_P \approx t_S\) and succeeding with probability \(\varepsilon _P \approx \varepsilon _S\). We observe that reductions tight in this sense are sometimes highly memory-loose: if the attack against \(\mathsf{S}\) used \(m_S\) bits of working memory, the reduction may produce an algorithm using \(m_P \gg m_S\) bits of memory to solve \(\mathsf{P}\). Depending on \(\mathsf{P}\), this changes the conclusions we can draw about the security of the scheme.
In this paper we investigate memory-efficiency in cryptographic reductions in various settings. We show that some standard decisions in security definitions have a bearing on the memory efficiency of possible reductions. We give several simple techniques for improving the memory efficiency of certain classes of reductions, and finally turn to a connection between streaming algorithms and memory/time-efficient reductions.
Tightness, memory-tightness, and security. Reductions between a problem \(\mathsf{P}\) and a cryptographic scheme \(\mathsf{S}\) that approximately preserve runtime and success probability are usually called tight (cf. [6, 8, 17]). Tight reductions are preferred because they provide stronger assurance for the security of \(\mathsf{S}\). Specifically, let us call an algorithm running in time t and succeeding with probability \(\varepsilon \) a \((t,\varepsilon )\)-algorithm (for a given problem, or to attack a given scheme). Suppose that a reduction converts a \((t_S,\varepsilon _S)\)-adversary against scheme \(\mathsf{S}\) into a \((t_P,\varepsilon _P)\)-algorithm for \(\mathsf{P}\), where \((t_P,\varepsilon _P)\) are functions of \((t_S,\varepsilon _S)\). If it is believed that no \((t_P,\varepsilon _P)\)-algorithm should exist for \(\mathsf{P}\), then one concludes that no \((t_S,\varepsilon _S)\)-adversary can exist against \(\mathsf{S}\).
If a reduction is not tight, then in order to conclude that scheme \(\mathsf{S}\) is secure against \((t_S,\varepsilon _S)\)-adversaries one must adjust the parameters of the instance of \(\mathsf{P}\) on which \(\mathsf{S}\) is built, leading to a less efficient construction. In some extreme cases, obtaining a reasonable security level for a scheme with a non-tight reduction leads to an impractical construction. Addressing this issue has become an active area of research in the last two decades (e.g. [4, 5, 6, 8, 11, 12, 18]).
In this work we keep track of the amount of memory used in reductions. To see when memory usage becomes relevant, let a \((t,m,\varepsilon )\)-algorithm use t time steps, m bits of memory, and succeed with probability \(\varepsilon \). A tight reduction from \(\mathsf{S}\) to \(\mathsf{P}\) transforms \((t_S,m_S,\varepsilon _S)\)-adversaries into \((t_P,m_P,\varepsilon _P)\)-algorithms, where “tight” guarantees \(t_S \approx t_P\) and \(\varepsilon _S \approx \varepsilon _P\), but permits \(m_P \gg m_S\), up to the worst case \(m_P \approx t_P\).
Now, suppose concretely that we want \(\mathsf{S}\) to be secure against \((2^{256},2^{128},O(1))\)-adversaries, based on very conservative estimates of the resources available to a powerful government. Consider two possible “tight” reductions: one that is additionally “memory-tight” and transforms a \((2^{256},2^{128},O(1))\)-adversary \(\mathsf{A}\) against \(\mathsf{S}\) into a \((2^{256},2^{128},O(1))\)-algorithm \(\mathsf {B}_{\mathrm {mt}}\) for \(\mathsf{P}\), and one that is “memory-loose” and instead only yields a \((2^{256},2^{256},O(1))\)-algorithm \(\mathsf {B}_{\mathrm {nmt}}\) for \(\mathsf{P}\).
The crucial point is that some problems \(\mathsf{P}\) can be solved faster when larger amounts of memory are used. In our example above, it may be that \(\mathsf{P}\) is impossible to solve with \(2^{256}\) time and \(2^{128}\) memory for some specific security parameter \(\lambda \). But with both time and memory up to \(2^{256}\), the best algorithm may be able to solve instances of \(\mathsf{P}\) with security parameter \(\lambda \), and even with larger parameters up to some \(\lambda '> \lambda \). The memory-looseness of the reduction now bites, because to achieve the original security goal for \(\mathsf{S}\) we must use the larger parameter \(\lambda '\) for \(\mathsf{P}\), resulting in a slower instantiation of the scheme. When \(\mathsf{P}\) is a problem involving a symmetric primitive, where the “security parameter” cannot be changed, the issue is more difficult to address.
Memory-sensitive problems and memory-tightness. Many, but not all, problems \(\mathsf{P}\) relevant to cryptography can be solved more quickly with large memory than with small. In the public-key realm these include factoring, discrete logarithm in prime fields, Learning Parity with Noise (LPN), Learning With Errors (LWE), the approximate Shortest Vector Problem, and Short Integer Solution (SIS). In symmetric-key cryptography such problems include key recovery against multiple encryption, finding multi-collisions in hash functions, and the computation of memory-hard functions. We refer to problems like these as memory-sensitive. (We refer to Sect. 6 for more discussion.)
On the other hand, there exist problems \(\mathsf{P}\) where the best known algorithm also uses small memory: discrete logarithm in elliptic curve groups over prime fields [16], finding (single) collisions in hash functions [23], finding a preimage in hash functions (exhaustive search), and key recovery against block ciphers (also exhaustive search).
Let us consider some specific examples to illustrate the impact of a memory-loose reduction to a non-memory-sensitive versus a memory-sensitive problem. Let \(\mathsf {CR}_k\) be the problem of finding a k-way collision in a hash function \(\mathsf{H}\) with \(\lambda \) output bits, that is, finding k distinct domain points \(x_1,\ldots ,x_k\) such that \(\mathsf{H}(x_1) = \mathsf{H}(x_2) = \cdots = \mathsf{H}(x_k)\) for some fixed \(k\ge 2\).
First suppose we reduce the security of a scheme \(\mathsf{S}\) to \(\mathsf {CR}_2\), which is standard collision resistance. The problem \(\mathsf {CR}_2\) is not memory-sensitive, and the best known attack is a \((2^{\lambda /2}, O(1), O(1))\)-algorithm. In the left plot of Fig. 1 we visualize the “feasible” region for \(\mathsf {CR}_2\), where the shaded region is unsolvable. Now we consider two possible reductions. One is a memory-tight reduction which maps an adversary \(\mathsf{A}\) (with some time and memory complexity, possibly with much less memory than time) to an algorithm \(\mathsf {B}_{\mathrm {mt}}\) for \(\mathsf {CR}_2\) with the same time and memory. The other reduction is memory-loose (but time-tight) and maps \(\mathsf{A}\) to an adversary \(\mathsf {B}_{\mathrm {nmt}}\) that uses time and memory approximately equal to the time of \(\mathsf{A}\). We plot the effect of these reductions in the left part of the figure. A tight reduction leaves the point essentially unchanged, while a memory-loose reduction moves the point horizontally to the right. Both reductions produce an algorithm in the region not known to be solvable, thus giving a meaningful security statement that amounts to ruling out the shaded region of adversaries. We do note that there is a possible quantitative difference in the guarantees of the reductions, since it is only harder to produce an algorithm with smaller memory, but this benefit is difficult to measure.
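The constant-memory collision attack mentioned above can be realized with cycle finding. The following is a minimal illustrative sketch (our own, not from the paper) using Floyd's tortoise-and-hare on the iteration \(x \mapsto \mathsf{H}(x)\); the toy hash is SHA-256 truncated to a small range so the demo finishes quickly, and all function names are our own:

```python
import hashlib

def toy_hash(x: int, out_bits: int = 20) -> int:
    """Toy hash with a small range so a collision appears after ~2^(out_bits/2) steps."""
    digest = hashlib.sha256(x.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") % (1 << out_bits)

def _floyd(start: int, out_bits: int):
    """Floyd's cycle detection on x -> H(x): O(2^{out_bits/2}) time, O(1) memory.
    Returns (a, b) with H(a) == H(b); a == b only in the rare case that
    `start` already lies on the cycle."""
    H = lambda x: toy_hash(x, out_bits)
    tortoise, hare = H(start), H(H(start))
    while tortoise != hare:                  # phase 1: meet somewhere on the cycle
        tortoise, hare = H(tortoise), H(H(hare))
    tortoise = start                         # phase 2: walk toward the cycle entry
    while tortoise != hare and H(tortoise) != H(hare):
        tortoise, hare = H(tortoise), H(hare)
    return tortoise, hare                    # the colliding pair straddles the entry

def find_collision(out_bits: int = 20):
    """Retry from fresh starting points until the colliding pair is distinct."""
    start = 0
    while True:
        start += 1
        a, b = _floyd(start, out_bits)
        if a != b:
            return a, b

a, b = find_collision()
assert a != b and toy_hash(a) == toy_hash(b)
```

Only the two current points are ever stored, which is exactly why \(\mathsf {CR}_2\) is not memory-sensitive.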
Now suppose instead that we reduce the security of a scheme \(\mathsf{S}\) to \(\mathsf {CR}_3\). The best known attack against \(\mathsf {CR}_3\) is a \((2^{(1-\alpha )\lambda },2^{\alpha \lambda },O(1))\)-algorithm due to Joux and Lucks [20], for any \(\alpha \le 1/3\). We visualize this time-memory trade-off in the middle plot of Fig. 1, and again any adversary with time and memory in the shaded region would be a cryptanalytic advance. We again consider a memory-tight versus a memory-loose reduction. The memory-tight reduction preserves the point for the adversary \(\mathsf{A}\) in the plot and thus rules out \((t_S,m_S,O(1))\)-adversaries for any \(t_S,m_S\) in the shaded region. A memory-loose (but time-tight) reduction mapping \(\mathsf{A}\) to \(\mathsf {B}_{\mathrm {nmt}}\) for \(\mathsf {CR}_3\) that blows up memory usage up to time usage will move the point horizontally to the right. We can see that there are drastic consequences when the original adversary \(\mathsf{A}\) lies in the triangular region with time \({>}2^{2\lambda /3}\) and memory \({<}2^{\lambda /3}\), because the reduction produces an adversary \(\mathsf {B}_{\mathrm {nmt}}\) using resources for which \(\mathsf {CR}_3\) is known to be broken. In summary, the reduction only rules out adversaries \(\mathsf{A}\) below the horizontal line with time \(=2^{2\lambda /3}\).
Finally we consider an example instantiation of parameters for the learning parity with noise (LPN) problem, which is memory-sensitive and where a memory-loose reduction would diminish security guarantees. In Sect. 6 we recall this problem and the best attacks, and in the right plot of Fig. 1 the shaded region represents the infeasible region for the problem in dimension 1024 with error rate 1/4. (For simplicity, all hidden constants are ignored in the plot.) For this problem the effect of memory-looseness is more stark: despite using a large dimension, a memory-loose reduction can only rule out attacks running in time \({<}2^{85}\). A memory-tight reduction, however, gives a much stronger guarantee for adversaries with memory less than \(2^{85}\).
Memory-loose reductions. Reductions are often memory-loose, and small decisions in definitions can lead to memory usage being artificially high. We start with an illustrative example.
Suppose we have a tight security reduction (in the traditional sense) in the random oracle model [7] between a problem \(\mathsf{P}\) and some cryptographic scheme \(\mathsf{S}\). More concretely, suppose a reduction transforms a \((t_S,m_S,\varepsilon _S)\)-adversary \(\mathsf{A}_S\) in the random-oracle model into a \((t_P,m_P,\varepsilon _P)\)-algorithm \(\mathsf{A}_P\) for \(\mathsf{P}\). A typical reduction has \(\mathsf{A}_P\) simulate a security game for \(\mathsf{A}_S\), including the random oracle, usually via a table that stores responses to queries issued by \(\mathsf{A}_S\). Naively omitting the table is usually not an option, for various reasons: for example, if \(\mathsf{A}_S\) queries the oracle on the same input twice, then it expects to see the same output twice, or perhaps the reduction needs to “program” the random oracle with responses that must be remembered.
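To make the memory cost concrete, here is a minimal sketch (our own illustration, not the paper's code) of lazy sampling of a random oracle; the consistency requirement forces a table that grows with the number of distinct queries:

```python
import secrets

class LazyRandomOracle:
    """Lazy sampling: answers are consistent across repeated queries, but
    memory grows with the number of distinct queries -- the source of
    memory-looseness in the reduction described above."""
    def __init__(self, out_bits: int = 128):
        self.out_bits = out_bits
        self.table = {}                      # stores every distinct query ever made

    def query(self, x: bytes) -> int:
        if x not in self.table:              # fresh query: sample and remember
            self.table[x] = secrets.randbits(self.out_bits)
        return self.table[x]                 # repeated query: same answer

ro = LazyRandomOracle()
assert ro.query(b"hello") == ro.query(b"hello")   # consistency forces the table
assert len(ro.table) == 1
```

After \(q_H\) distinct queries the table holds \(q_H\) entries, so the simulating algorithm uses memory \(\varOmega (q_H)\) regardless of how little memory \(\mathsf{A}_S\) itself uses.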
This example is only the start. Memory-looseness is sometimes, but not always, easily fixed, and seems to occur simply because memory has not previously been measured in reductions. Below we will furnish examples of other reductions that are (sometimes implicitly) memory-loose. We will also discuss some decisions in definitions and modeling that dramatically affect memory usage but are not usually stressed.
1.1 Our Results
Even though there exists an extensive literature on the tightness of cryptographic security reductions (e.g. [5, 8, 11, 12]), memory has, to the best of our knowledge, not been considered in the context of security reductions. In this paper we first identify the problems related to non-memory-tight security reductions. To overcome these problems, we initiate a systematic study of how to make known security reductions memory-tight. Concretely, we provide several techniques to obtain memory-efficient reductions and give examples where they can be applied. Our techniques can be used to make many security reductions memory-tight, but not all of them. Furthermore, we show that this is inherent, i.e., that there exist natural cryptographic problems that do not have a fully tight security reduction. Finally, we examine various memory-sensitive problems such as the learning parity with noise (LPN) problem, the factoring problem, and the discrete logarithm problem over finite fields.
The Random Oracle Technique. Recall that a classical simulation of the random oracle using the lazy sampling technique requires the reduction to store \(O(q_H)\) values, where \(q_H\) is the number of random oracle queries. The idea is to replace the response H(x) to a random oracle query x by \(\mathsf {PRF}(k,x)\), where \(\mathsf {PRF}\) is a pseudorandom function and k is its key. The limitation of this technique is that it can only be applied in very restricted cases of a programmable random oracle.
The Rewinding Technique. The idea of the rewinding technique is to use the adversary as a “memory device.” Concretely, whenever the reduction would like to access values previously output by the adversary that it did not store in its memory, it simply rewinds the adversary, executing it again with the same random coins and the same input. This way the reduction’s running time doubles, but (unlike in previous applications of rewinding in cryptography, e.g., [22]) the overall success probability does not decrease. The rewinding technique can be applied multiple times, providing a trade-off between the memory efficiency and the running time of the reduction. To exemplify the techniques, we give a memory-tight security reduction for the RSA full-domain hash signature scheme in the appendix.
A Lower Bound. Some reductions appear (to us at least) to inherently require increased memory. We take a first step towards formalizing this intuition by proving a lower bound on the memory usage of a class of black-box reductions in two scenarios.
First, we revisit a reduction implicitly used to justify the standard unforgeability notion for digital signatures, which reduces a game with several chances to produce a valid forgery to the standard game with only one chance. One can take this as a possible indication that signatures with memory-tight reductions in the more permissive model may be preferred. Second, we prove a similar lower bound on the memory usage of a class of reductions between a “multi-challenge” variant of collision resistance and standard collision resistance.
Interestingly, our lower bound follows from a result on streaming algorithms, which are designed to use small space while working with sequential access to a large stream of data.
Open problems. This work initiates the study of memory-tight reductions in cryptography. We give a number of techniques to obtain such reductions, but many open problems remain. There are likely other reductions in the literature that we have not covered, and to which our techniques do not apply. It is even unclear how one should consider basic definitions, like unforgeability for signatures, since the generic reductions from more complicated (but more realistic) definitions may be tight but not memory-tight.
One reduction we did consider, but could not improve, is the IND-CCA security proof for Hashed ElGamal in the random oracle model [1] under the gap Diffie-Hellman assumption. This reduction (and some others that use “gap” assumptions) uses its random oracle table in a way that our techniques cannot address. We conjecture that a memory-tight reduction does not exist in this case, and leave (dis)proving our conjecture as an open problem.
2 Complexity Measures
We denote random sampling from a finite set A according to the uniform distribution by \(a \leftarrow _\$A\). By \(\mathrm {Ber}(\alpha )\) we denote the Bernoulli distribution with parameter \(\alpha \), i.e., the distribution of a random variable that takes value 1 with probability \(\alpha \) and value 0 with probability \(1-\alpha \); by \( {\mathbb {P}}_{\ell } \) the set of primes of bit size \(\ell \); and by \(\log \) the logarithm with base 2.
2.1 Computational Model
Computational model. All algorithms in this paper are taken to be RAMs. These programs have access to memory with words of size \(\lambda \), along with a constant number of registers that each hold one word. In this paper \(\lambda \) will always be the security parameter of a construction or a problem under consideration.
We define probabilistic algorithms to be RAMs with a special instruction that fills a distinguished register with random bits (independent of other calls to the special instruction). We note that this instruction does not allow for rewinding of the random bits, so if the algorithm wants to access previously used random bits then it must store them. Running an algorithm \(\mathsf{A}\) means executing a RAM machine with input written in its memory (starting at address 0). If \(\mathsf{A}\) is randomized, we write \(y \leftarrow _\$\mathsf{A}(I)\) to denote the random variable y that is obtained by running \(\mathsf{A}\) on input I (which may consist of a tuple \(I=(I_1, \ldots , I_n)\)). If \(\mathsf{A}\) is deterministic, we write \(\leftarrow \) instead of \(\leftarrow _\$\). We sometimes give an algorithm \(\mathsf{A}\) access to stateful oracles \({\mathsf {O}}_1,{\mathsf {O}}_2,\ldots ,{\mathsf {O}}_n\). Each \({\mathsf {O}}_i\) is defined by a RAM \(M_i\). We also define an associated string \(\mathsf {st}_{\mathsf {O}}\) called the oracle state that is stored in a protected region of the memory of \(\mathsf{A}\) that can only be read by the oracles. Initially \(\mathsf {st}_{\mathsf {O}}\) is defined to be empty. An algorithm \(\mathsf{A}\) calls an oracle \({\mathsf {O}}_i\) via a special instruction, which runs the corresponding RAM on input from a fixed region of memory of \(\mathsf{A}\) along with the oracle state \(\mathsf {st}_{\mathsf {O}}\). The RAM \(M_i\) uses its own protected working memory; finally its output is written into a fixed region of memory of \(\mathsf{A}\), the updated state is written to \(\mathsf {st}_{\mathsf {O}}\), and control is transferred back to \(\mathsf{A}\).
Games. Most of our security definitions and proofs use codebased games [9]. A game \(\mathsf{G}\) consists of a RAM defining an \(\mathsf {Init}\) oracle, zero or more stateful oracles \({\mathsf {O}}_1,\ldots ,{\mathsf {O}}_n\), and a \(\mathsf {Fin}\) RAM oracle. An adversary \(\mathsf{A}\) is said to play game \(\mathsf{G}\) if its first instruction calls \(\mathsf {Init}\) (handing over its own input) and its last instruction calls \(\mathsf {Fin}\), and in between these calls it only invokes \({\mathsf {O}}_1,\ldots ,{\mathsf {O}}_n\) and performs local computation. We further require that \(\mathsf{A}\) outputs whatever \(\mathsf {Fin}\) outputs.
Executing game \(\mathsf{G}\) with \(\mathsf{A}\) is formally just running \(\mathsf{A}\) with input \(\lambda \), the security parameter. Keeping with convention, we denote the random variable induced by executing \(\mathsf{G}\) with \(\mathsf{A}\) as \(\mathsf{G}^\mathsf{A}\) (where the sample space is the randomness of \(\mathsf{A}\) and the associated oracles). By \(\mathsf{G}^\mathsf{A} \Rightarrow {\texttt {out}}\) we denote the event that \(\mathsf{G}\) executed with \(\mathsf{A}\) outputs \({\texttt {out}}\). In our games we sometimes use a “Stop” command that takes an argument. When Stop is invoked, its argument is considered the output of the game (and the execution of the adversary is halted). If a game description omits the \(\mathsf {Fin}\) procedure, it means that when \(\mathsf{A}\) calls \(\mathsf {Fin}\) on some input x, \(\mathsf {Fin}\) simply invokes Stop with argument x. By default, integer variables are initialized to 0, set variables to \(\emptyset \), strings to the empty string and arrays to the empty array.
2.2 Complexity Measures
This work is concerned with measuring the resource consumption of an adversary in a way that allows for meaningful conclusions about security. Success probability and time are widely used in the cryptographic literature, with general agreement on the details; we recall these first. Measuring the memory consumption of reductions is, however, new, so we then discuss the possible options for measuring memory and their implications.
Success Probability. We define the success probability of \(\mathsf{A}\) playing game \(\mathsf{G}\) as \(\mathbf {Succ}(\mathsf{G}^\mathsf{A}) := \Pr [\mathsf{G}^\mathsf{A}\Rightarrow 1]\).
Runtime. Let \(\mathsf{A}\) be an algorithm (RAM) with no oracles. The runtime of \(\mathsf{A}\), denoted \(\mathbf {Time}(\mathsf{A})\), is the worst-case number of computation steps of \(\mathsf{A}\) over all inputs of bit-length \(\lambda \) and all possible random choices. Now let \(\mathsf{G}\) be a game and \(\mathsf{A}\) be an adversary that plays game \(\mathsf{G}\). The runtime of executing \(\mathsf{G}\) with \(\mathsf{A}\) is usually taken to be the number of computation steps of \(\mathsf{A}\) plus the number of computation steps of each RAM used to respond to oracle queries: we denote this as \(\mathbf {TotalTime}(\mathsf{G}^\mathsf{A})\) or \(\mathbf {TotalTime}(\mathsf{A})\). One may prefer not to include the time used by the oracles, and in this case we denote by \(\mathbf {LocalTime}(\mathsf{G}^\mathsf{A})\) or \(\mathbf {LocalTime}(\mathsf{A})\) the number of steps of \(\mathsf{A}\) only.
Memory. We define the memory consumption of a RAM program \(\mathsf{A}\) without oracles, denoted \(\mathbf {Mem}(\mathsf{A})\), to be the size (in words of length \(\lambda \)) of the code of \(\mathsf{A}\) plus the worst-case number of registers used in memory at any step in the computation, over all inputs of bit-length \(\lambda \) and all random choices. Now let \(\mathsf{G}\) be a game and \(\mathsf{A}\) be an adversary that plays game \(\mathsf{G}\). The memory required to execute game \(\mathsf{G}\) with \(\mathsf{A}\) includes the memory needed for the input and output of \(\mathsf{A}\), as well as the input and output of each oracle, along with the working memory and state of each oracle. We denote this as \(\mathbf {TotalMem}(\mathsf{G}^\mathsf{A})\) or \(\mathbf {TotalMem}(\mathsf{A})\). Alternatively, one may measure only the code and memory consumed by \(\mathsf{A}\), but not its oracles. We denote this measure by \(\mathbf {LocalMem}(\mathsf{A})\).
One advantage of the \(\mathbf {LocalMem}\) measure is that it can avoid small details of security definitions drastically changing the meaning of memorytightness in reductions.
Sometimes it will be convenient to measure the memory consumption in bits, in which case we use \(\mathbf {Mem}_2(\mathsf{A})\), \(\mathbf {LocalMem}_2(\mathsf{A})\), and \(\mathbf {TotalMem}_2(\mathsf{A})\).
2.3 Case Study I: Unforgeability of Digital Signatures
Let \((\mathsf {Gen},\mathsf {Sign},\mathsf {Ver})\) be a digital signature scheme (see Sect. 5 for the exact syntax of signatures, which is standard). On the left side of Fig. 2 we recall the game \(\mathsf {UF\text{-}CMA}\) that defines the standard notion of (existential) unforgeability under chosen-message attacks. The advantage of an adversary \(\mathsf{A}\) is defined by \(\mathbf {Adv}(\mathsf {UF\text{-}CMA}^{\mathsf{A}}) = \mathbf {Succ}(\mathsf {UF\text{-}CMA}^\mathsf{A})\), and a signature scheme where \(\mathbf {Adv}(\mathsf {UF\text{-}CMA}^{\mathsf{A}})\) is “small” for some class of adversaries is usually defined to be “secure”. In order for the definition to be meaningful, the game \(\mathsf {UF\text{-}CMA}\) checks that the signature \(\sigma ^*\) on \(m^*\) is valid, and also that \(m^*\) was not queried to the signing oracle. In our version of the definition, the signing oracle maintains a set S of messages that were queried, and the game uses S to check if \(m^*\) was queried.
The \(\mathsf {UF\text{-}CMA}\) game is an example where we prefer \(\mathbf {LocalMem}\) to \(\mathbf {TotalMem}\). Any adversary \(\mathsf{A}\) playing \(\mathsf {UF\text{-}CMA}\) will always have \(\mathbf {TotalMem}(\mathsf{A}) = \varOmega (q_S)\), where \(q_S\) is the number of signature queries it issues, while its \(\mathbf {LocalMem}(\mathsf{A})\) may be much smaller. Restricting the number of signing queries \(q_S\) is an option, but it weakens the definition.
An alternative style of definition for unforgeability is to limit the class of adversaries \(\mathsf{A}\) considered to those that are “well behaved” in that they never submit an \(m^*\) that was previously queried. The game no longer needs to track which messages were queried to the signing oracle in order to be meaningful. This definition is equivalent up to a small increase in (local) running time, but it is not clear if the same is true for memory. To convert any adversary to be well behaved, natural approaches mimic our version of the game, storing a set S and checking the final forgery locally before submitting.
Stronger unforgeability. Games in many cryptographic definitions are chosen to be simple and compact but also general. The game \(\mathsf {UF\text{-}CMA}\) allows only a single attempt at a forgery in order to shorten proofs, but the definition also tightly implies (up to a small increase in runtime) a version of unforgeability where the attacker gets many attempts, which more closely models usages where an attacker has many chances to produce a forgery.
It is less clear how \(\mathsf {UF\text{-}CMA}\) relates to more general definitions when memory-tightness is taken into account. To make this more concrete, consider the game \(\mathsf {mUF\text{-}CMA}\) (for “many \(\mathsf {UF\text{-}CMA}\)”) on the right side of Fig. 2. In this game the adversary has an additional verification oracle. If it ever submits a fresh forgery to this oracle, it wins the game. It is easy to give a tight, but non-memory-tight, reduction converting any \((t,m,\varepsilon )\)-adversary playing \(\mathsf {mUF\text{-}CMA}\) into a \((t',m',\varepsilon )\)-adversary playing \(\mathsf {UF\text{-}CMA}\) for \(t' \approx t\) but \(m' \gg m\). Other trade-offs are also possible, but achieving tightness in all three parameters seems difficult.
For the reasons described in the introduction, a memory-tight reduction from winning \(\mathsf {mUF\text{-}CMA}\) to winning \(\mathsf {UF\text{-}CMA}\) is desirable. In Sect. 4, we show that a certain class of black-box reductions for these problems in fact cannot be simultaneously tight in runtime, memory, and success probability. We conclude that signatures with dedicated memory-tight proofs against adversaries in the \(\mathsf {mUF\text{-}CMA}\) game may provide stronger security assurance, especially when security is reduced to a memory-sensitive problem like RSA.
We remark that the common reduction from multi-challenge to single-challenge \(\mathsf {IND}\text {-}\mathsf {CPA}\)/\(\mathsf {IND}\text {-}\mathsf {CCA}\) security for public-key encryption is memory-tight (but not tight in terms of the success probability).
2.4 Case Study II: Collision-Resistance Definitions
Collision resistance and multi-collision resistance of hash functions are used in security reductions in many contexts. Let \(\mathsf{H}\) be a keyed hash function (with \(\kappa \)-bit keys), with standard syntax. On the left side of Fig. 3 we recall the game \(\mathsf {CR}_t\) used to define t-collision resistance. The game provides no extra oracles, and \(\mathsf{A}\) wins if it can find t distinct domain points that are mapped to the same point by \(\mathsf{H}\).
Naive attempts at the reduction to collision resistance are, however, not memory-tight. One can run the adversary attacking \(\mathsf{F}^*\) and record its queries, checking for any collisions, but this increases memory usage.
Returning to the proof for \(\mathsf{F}^*\), one can easily construct an adversary playing \(\mathsf {mCR}_2\) from any PRF adversary. The resulting reduction will be memory-tight. Thus it would be desirable to have a memory-tight reduction from \(\mathsf {mCR}_2\) to \(\mathsf {CR}_2\) to complete the proof. This however seems difficult or even impossible, and in Sect. 4 we show that a class of black-box reductions cannot be memory-tight. As discussed in the introduction, t-collision resistance is not memory-sensitive for \(t=2\), and thus the benefit of a memory-tight reduction is somewhat diminished (i.e., it does not justify more aggressive parameter settings). For \(t>2\) the effect of memory-tightness is more significant.
3 Techniques to Obtain Memory Efficiency
In this section we describe four techniques for obtaining memory-efficient reductions. In Sect. 5 we show how to apply these techniques to prove, in a memory-tight way, the security of the RSA Full Domain Hash signature scheme [7]. Using this example we also point to technical challenges that may arise when applying multiple techniques in the same proof.
3.1 Pseudorandom Functions
First, we formally define pseudorandom functions. They are the main tool used in this section to make reductions memory efficient.
Definition 1
If the range of \(\mathsf{F}\) is just a single bit \(\{0,1\}\), we define the \(\alpha \)-PRF advantage with bias \(0 \le \alpha \le 1\) of \(\mathsf{A}\) as \( \mathbf {Adv}(\mathsf {PRF}_\alpha ^\mathsf{A}) := \mathbf {Succ}(\mathsf {Real}^\mathsf{A})-\mathbf {Succ}(\mathsf {Random}_\alpha ^\mathsf{A}), \) where \(\mathsf {Real}\) and \(\mathsf {Random}_\alpha \) are the games in Fig. 4.
Note that a \(2^{-\rho }\)-PRF can be easily constructed from a standard PRF with range \(\{0,1\}^{\rho }\) by mapping \(1^\rho \) to 1 and all other values to 0. A \(1/q\)-PRF for arbitrary q can be constructed in a similar way from a standard PRF with a sufficiently large range.
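As a concrete illustration, the mapping above can be sketched as follows, instantiating the underlying PRF with HMAC-SHA256 (our choice for the sketch; any PRF with range \(\{0,1\}^{\rho }\) would do, and the function names are our own):

```python
import hmac, hashlib

def prf(key: bytes, x: bytes, rho: int) -> int:
    """Standard PRF with range {0,1}^rho, instantiated for this sketch
    with HMAC-SHA256 truncated to rho bits."""
    assert 0 < rho <= 256
    mac = hmac.new(key, x, hashlib.sha256).digest()
    return int.from_bytes(mac, "big") >> (256 - rho)

def biased_prf(key: bytes, x: bytes, rho: int) -> int:
    """2^{-rho}-PRF: output 1 iff the underlying PRF outputs 1^rho, so a
    random function over the same range outputs 1 with probability 2^{-rho}."""
    return 1 if prf(key, x, rho) == (1 << rho) - 1 else 0

assert biased_prf(b"k", b"x", 4) in (0, 1)
assert biased_prf(b"k", b"x", 4) == biased_prf(b"k", b"x", 4)  # deterministic
```

A \(1/q\)-PRF is obtained analogously by testing whether the \(\rho \)-bit output falls into a fixed subset of measure approximately \(1/q\).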
3.2 Generating (Pseudo)random Coins
Our first technique is the simplest: we observe that the random coins used by adversaries can be replaced with pseudorandom coins, and that this substitution saves memory in certain reductions.
Consider a security game \(\mathsf{G}\) and an adversary \(\mathsf{A}\). Both are probabilistic processes and therefore require randomness. When considering memory efficiency, the details of storing random coins can come to dominate memory usage. Specifically, some reductions run an adversary multiple times with the same random tape, which must be stored in between runs. One possibility is to sample all randomness required in game \(\mathsf {G^A}\) (including the randomness used by \(\mathsf{A}\)) in advance. More formally, let \( L \le 2^\lambda \) be an upper bound on the number of executions of the instruction that fills a register with random bits in \(\mathsf{G}^\mathsf{A}\). Then the sampling of random coins can be replaced by filling and storing L registers (memory units) with random bits at the beginning of \(\mathsf {Init}\), and in the rest of the game replacing the i-th call to the instruction with a procedure \(\mathsf {Coins}\) returning the contents of the i-th register. This is formalized in game \(\mathsf {G}_0\) of Fig. 5.
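The memory cost of storing the L coins can then be removed by deriving the i-th coin pseudorandomly from a short key. The following is a minimal sketch of the idea (our own illustration; HMAC-SHA256 stands in for the PRF, and the class names are hypothetical):

```python
import hmac, hashlib, secrets

class StoredCoins:
    """Sample all L coins up front and store them: O(L) memory."""
    def __init__(self, L: int):
        self.coins = [secrets.randbits(64) for _ in range(L)]
    def coin(self, i: int) -> int:
        return self.coins[i]

class PRFCoins:
    """Memory-efficient replacement: the i-th coin is PRF(k, i), so nothing
    beyond the key is stored, and rerunning with the same key reproduces
    identical coins (useful when the adversary is run several times)."""
    def __init__(self, key: bytes = b""):
        self.key = key or secrets.token_bytes(16)
    def coin(self, i: int) -> int:
        mac = hmac.new(self.key, i.to_bytes(8, "big"), hashlib.sha256).digest()
        return int.from_bytes(mac[:8], "big")

assert PRFCoins(b"fixed key").coin(7) == PRFCoins(b"fixed key").coin(7)
```

The replacement trades L stored registers for one PRF key plus one PRF evaluation per coin.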
Running Time. Game \(\mathsf {G}_1\) needs to evaluate the PRF (via algorithm \(\mathsf{F}\)) L times, hence we have \(\mathbf {TotalTime}(\mathsf {G_1^A}) \le \mathbf {TotalTime}(\mathsf {G_0^A})+L \cdot \mathbf {Time}(\mathsf{F})\).
3.3 Random Oracles
In the following paragraphs we analyze how success probability, running time, and memory consumption change if we apply this technique.
Running Time. Let \(q_H\) be the number of random oracle queries posed by the adversary. Then game \(\mathsf {G_1}\) needs to evaluate the PRF \(q_H\) times, hence we have \(\mathbf {TotalTime}(\mathsf {G_1^A}) \le \mathbf {TotalTime}(\mathsf {G_0^A})+q_H \cdot \mathbf {Time}(\mathsf{F})\).
3.4 Random Oracle Index Guessing Technique
This technique is used when random oracle queries are answered in two different ways, e.g. in a reduction where challenge values, like a discrete logarithm challenge \(X=g^x\), are embedded in the programmable random oracle. Usually this is done by guessing some index \(i^*\) between 1 and \(q_H\) in the beginning, where \(q_H\) is the number of random oracle queries posed by the adversary. During the simulation, the challenge value is then embedded in the reduction’s response to the \(i^*\)-th random oracle query.
To do this, the game needs to keep a list of all queries and responses. Regardless of how the game answers the queries other than the \(i^*\)-th one, simply keeping a counter is not sufficient: an adversary posing the same query repeatedly would then receive two different responses, and the random oracle would no longer be well defined. An example of such a game using the index guessing technique is game \(\mathsf {{G}^{{}}_{{0}}}\) of Fig. 7, where two deterministic procedures \(\mathsf{P}_0\) and \(\mathsf{P}_1\) are used to program H depending on \(i^*\).
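One way to avoid storing the list, in the spirit of the \(\alpha \)-PRF tool defined above, is to make the embed-or-not decision a pseudorandom function of the query value itself, so that repeated queries deterministically take the same branch. A sketch under that interpretation (the function names and the HMAC instantiation are ours):

```python
import hmac
import hashlib

def embed_here(key: bytes, query: bytes, q: int) -> bool:
    # 1/q-biased, per-query pseudorandom decision: the same query always
    # takes the same branch, so H stays well defined without a query list.
    r = int.from_bytes(hmac.new(key, query, hashlib.sha256).digest(), "big")
    return r % q == 0

def answer(key: bytes, query: bytes, q: int) -> str:
    # "P1" embeds the challenge, "P0" answers normally; both deterministic.
    return "P1" if embed_here(key, query, q) else "P0"
```

Only the PRF key is stored, and each query is programmed via \(\mathsf{P}_1\) with probability roughly 1/q.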
We now compare the two games in terms of success probability, running time and memory efficiency.
3.5 Single Rewinding Technique
This technique can be used for games containing a procedure \(\mathsf {Query}\), which can be called by an adversary \(\mathsf{A}\) up to q times on inputs \( x_1,\dots ,x_q \). When \(\mathsf{A}\) terminates, it queries \(\mathsf {Fin}\) on a value \(x^*\). Procedure \(\mathsf {Fin}\) then checks whether there exists \(i\in \{1,\dots ,q\}\) such that \(\mathsf{R}(x_i,x^*)=1\), where \(\mathsf{R}\) is an efficiently computable relation specific to the game. If so, it invokes Stop with 1. If no such i exists, it invokes Stop with 0. Note that we do not specify how queries to \(\mathsf {Query}\) are answered since it is not relevant here. To be able to check whether there exists an i such that \(\mathsf{R}(x_i,x^*)=1 \), the game usually stores the values \( x_1,\dots ,x_q \) as described in \(\mathsf {{G}^{{}}_{{0}}}\) in Fig. 9.
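The memory-efficient alternative reruns \(\mathsf{A}\) instead of storing \( x_1,\dots ,x_q \): a first run records only \(x^*\), then \(\mathsf{A}\) is rewound with the same tape and \(\mathsf{R}(x_i,x^*)\) is checked query by query. A toy sketch (the deterministic adversary and the relation are hypothetical stand-ins):

```python
def run_adversary(tape: int):
    # Toy deterministic adversary: its queries and final output x* are a
    # fixed function of its random tape, so a rerun reproduces the transcript.
    queries = [(tape * (i + 1)) % 97 for i in range(5)]
    return queries, queries[2]

def fin_single_rewinding(tape: int, relation) -> int:
    _, x_star = run_adversary(tape)       # first run: remember only x*
    queries, _ = run_adversary(tape)      # rewind: same tape, same queries
    return int(any(relation(x, x_star) for x in queries))
```

In the real game the first run streams the \(x_i\) past without storing them; the list is materialized here only because the toy adversary returns it.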
4 Streaming Algorithms and MemoryEfficiency
In this section we prove two lower bounds on the memory usage of black-box reductions between certain problems. The first shows that any reduction from \(\mathsf {mUFCMA}\) to \(\mathsf {UFCMA}\) must either use more memory, run the adversary many times, or obey some tradeoff between the two options. The second gives a similar result for \(\mathsf {mCR}_t\)-to-\(\mathsf {CR}_t\) reductions. We start by recalling results from the data-stream model of computation, which provide the principal tools for our lower bounds.
In this section we also deal with bit-memory (\(\mathbf {Mem}_2\)), which measures the number of bits used, rather than \(\mathbf {Mem}\), which measures the number of \(\lambda \)-bit words used.
4.1 The Data Stream Model
The data stream model is typically used to reason about algorithmic challenges where a very large input can only be accessed in discrete pieces in a given order, possibly over multiple passes. For instance, data arriving over a high-rate network connection may be too large to store and thus can only be accessed in sequence.
Streaming formalization. We adopt the following notation for a streaming problem: An input is a vector \(\mathbf {y}\in U^n\) of dimension n over some finite universe U. We say that the number of elements in the stream is n. An algorithm \(\mathsf{B}\) accesses \(\mathbf {y}\) via a stateful oracle \({\mathsf {O}_\mathbf {y}}\) that works as follows: On the first call it saves an initial state \(i \leftarrow 0\) and returns \(\mathbf {y}[0]\). On future calls, \({\mathsf {O}_\mathbf {y}}\) sets \(i \leftarrow (i + 1 \mod n)\), and returns \(\mathbf {y}[i]\). The oracle models accessing a stream of data, one entry at a time. When the counter i is set to 0 (either at the start or by wrapping modulo n), the algorithm \(\mathsf{B}\) is said to be initiating a pass on the data. The number of passes during a computation \(\mathsf{B}^{\mathsf {O}_\mathbf {y}}\) is thus defined as \(p=\left\lceil q/n \right\rceil \), where q is the number of queries issued by \(\mathsf{B}\) to its oracle.
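The oracle \({\mathsf {O}_\mathbf {y}}\) is easy to phrase as code; a direct Python transcription of the description above:

```python
from math import ceil

class StreamOracle:
    # Models O_y: returns the stream entries in order, wrapping modulo n,
    # and counts queries so that passes = ceil(q / n).
    def __init__(self, y):
        self.y, self.i, self.q = list(y), 0, 0

    def query(self):
        v = self.y[self.i]
        self.i = (self.i + 1) % len(self.y)
        self.q += 1
        return v

    def passes(self):
        return ceil(self.q / len(self.y))
```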
A streaming lower bound. Below we will use a well-known result lower bounding the tradeoff between the number of passes and the memory required to determine the most frequent element in a stream. We will also use a lower bound on a related problem that can be proven with the same techniques.
Theorem 1
This theorem is actually a simple corollary of a celebrated result on the communication complexity of the disjointness problem, which has several other applications. See also the lecture notes by Roughgarden [25] that give an accessible theorem statement and discussion after Theorem 4.11 of that document.
The standard version of this theorem only states that computing \({F_\infty }\) requires the stated space, so we sketch how to obtain our easy corollary. The full proof is omitted from this version due to the page limit. The proof for \({F_\infty }\) works by showing that any p-pass streaming algorithm with local memory m can be used to construct a p-round two-party protocol to compute whether sets \(S_1,S_2\) held by the parties are disjoint. One then proves a communication lower bound on any protocol to test for disjointness.
A simple modification of this argument shows that an algorithm computing G also gives such a protocol: it easily allows two parties to compute whether \(S_2\setminus S_1\) is empty, which is equivalent to computing whether \(\overline{S_1}\) and \(S_2\) are disjoint. Thus one can reduce disjointness to this problem by having the first party take the complement of its set.
The modification for \({F_{\infty ,t}}\) is slightly more subtle. The essential idea is that one party can copy its set \(t-1\) times when feeding it to the streaming algorithm. Then \({F_{\infty ,t}}\) equals 1 if the parties’ sets are not disjoint, and 0 otherwise. Since t is a constant, this affects the lower bound by only a constant factor.
4.2 \(\mathsf {mUFCMA}\)-to-\(\mathsf {UFCMA}\) Lower Bound
Black-box reductions for \(\mathsf {mUFCMA}\) to \(\mathsf {UFCMA}\). Let \(\mathsf{R}\) be an algorithm playing the \(\mathsf {UFCMA}\) game. Recall that \(\mathsf{R}\) receives input \( pk \) and has access to an oracle \(\mathsf {ProcSign}\), and stops the game by querying \(\mathsf {Fin}(m^*,\sigma ^*)\). Below for an adversary \(\mathsf{A}\) playing \(\mathsf {mUFCMA}\), we write \(\mathsf{R}^\mathsf{A}\) to mean that \(\mathsf{R}\) has additionally “oracle access to \(\mathsf{A}\)”, which means an oracle \(\mathsf {NxtQ}_\mathsf{A}\) that returns the “next query” of \(\mathsf{A}\) after accepting a response to the previous query from \(\mathsf{R}\). When \(\mathsf{A}\) halts (i.e. \(\mathsf {NxtQ}_\mathsf{A}\) returns a query to \(\mathsf {Fin}\)), the oracle resets itself to start again with the same random tape and input \( pk \).
Definition 2
 1.
\(\mathsf{R}^\mathsf{A}\) starts by forwarding its initial input (consisting of the security parameter and public key) to \(\mathsf {NxtQ}_\mathsf{A}\).
 2.
When the oracle \(\mathsf {NxtQ}_\mathsf{A}\) emits a query for \(\mathsf {ProcSign}(m)\), \(\mathsf{R}\) forwards m to its own signing oracle \(\mathsf {ProcSign}\) and returns the result to \(\mathsf {NxtQ}_\mathsf{A}\), possibly after some computation.
 3.
When \(\mathsf {NxtQ}_\mathsf{A}\) emits a query for \(\mathsf {ProcVer}(m^*,\sigma ^*)\), \(\mathsf{R}\) performs some computation then returns an empty response to \(\mathsf {NxtQ}_\mathsf{A}\).
 4.
When \(\mathsf{R}\) queries \(\mathsf {Fin}(m^*,\sigma ^*)\), the value \((m^*,\sigma ^*)\) will be amongst the values that \(\mathsf {NxtQ}_\mathsf{A}\) returned as a query to \(\mathsf {ProcVer}\).
These restrictions force \(\mathsf{R}\) to behave in a combinatorial manner that is amenable to a connection to streaming lower bounds. The final condition, requiring \(\mathsf{R}\) to preserve the advantage of \(\mathsf{A}\) for all random tapes, is especially restrictive. At the end of the section we discuss directions for considering more general \(\mathsf{R}\).
Theorem 2
Proof
We start by fixing the adversary \(\mathsf{A}^*\). It takes as input the security parameter \(\lambda \) and public key \( pk \). Then \(\mathsf{A}^*\) selects q random messages \(m_1,\ldots ,m_q\), queries them to \(\mathsf {ProcSign}\), and ignores the outputs. Next \(\mathsf{A}^*\) selects q more random messages \(m'_1,\ldots ,m'_q\), and for each \(m'_j\) it forges a signature \(\sigma '_j\) by brute force and queries \((m'_j,\sigma '_j)\) to \(\mathsf {ProcVer}\). After the verification queries, it halts.
We record two facts about \(\mathsf{A}^*\). Let \(\mathbf {y}\in (\{0,1\}^\lambda )^{2q}\) be the vector consisting of all of its queried messages, in order (the first q to \(\mathsf {ProcSign}\), and the second q to \(\mathsf {ProcVer}\) along with signatures). First, if \(G(\mathbf {y}) = 0\), then \(\mathbf {Succ}(\mathsf {mUFCMA}^{\mathsf{A}^*} \mid \mathbf {y}) = 0\) because \(\mathsf{A}^*\) will not issue any queries with a fresh forgery. If however \(G(\mathbf {y}) = 1\), then \(\mathbf {Succ}(\mathsf {mUFCMA}^{\mathsf{A}^*}\mid \mathbf {y}) = 1\) because \(\mathsf{A}^*\) will issue at least one fresh forgery to the verification oracle.
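The formal definition of G accompanies Theorem 1, which is omitted from this excerpt; as used here, \(G(\mathbf {y})=1\) iff some entry of the second half of the stream does not occur in the first half. A direct (and deliberately memory-hungry) computation of that predicate:

```python
def G(y):
    # 1 iff some verification-phase message (second half of the stream) is
    # "fresh", i.e. never appeared among the signing-phase messages.
    n = len(y)
    first = set(y[: n // 2])   # stores half the stream: memory-hungry
    return int(any(m not in first for m in y[n // 2 :]))
```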

\(\mathsf{B}\) starts by initializing a \(\log n\)-bit counter \(i\leftarrow 0\), generating a key pair \(( pk , sk )\), and running \(\mathsf{R}\) on input \( pk \).
\(\mathsf{B}\) responds to the oracle query \(\mathsf {ProcSign}(m)\) from \(\mathsf{R}\) by returning \(\mathsf {Sign}( sk ,m)\).
When \(\mathsf{R}\) queries \(\mathsf {NxtQ}_{\mathsf{A}^*}\), \(\mathsf{B}\) ignores the input and responds as follows:
 1. If \(i< n/2\), then \(\mathsf{B}\) queries \({\mathsf {O}_\mathbf {y}}\), which returns \(\mathbf {y}_1[i]\), and has \(\mathsf {NxtQ}_{\mathsf{A}^*}\) return \(\mathsf {ProcSign}(\mathbf {y}_1[i])\) as the next query.
 2. If \(i \ge n/2\), it queries \({\mathsf {O}_\mathbf {y}}\) to get \(\mathbf {y}_2[j]\) (where \(j=i-n/2\)). Then \(\mathsf{B}\) computes a valid signature \(\sigma _j\) by brute force, and increments i modulo n. It then has \(\mathsf {NxtQ}_{\mathsf{A}^*}\) return \(\mathsf {ProcVer}(\mathbf {y}_2[j],\sigma _j)\) as the next query.
When \(\mathsf{R}\) queries \(\mathsf {Fin}(m^*,\sigma ^*)\), \(\mathsf{B}\) performs another pass on its stream and checks if \(m^*\) appears anywhere in \(\mathbf {y}_1\). If it does, then it outputs 0 and otherwise it outputs 1.
We now verify (3). If \(G(\mathbf {y}) = 0\) then \(\mathsf{B}^{{\mathsf {O}_\mathbf {y}}}\) will output 0 with probability 1. This is because of our restrictions on \(\mathsf{R}\), which confine it to outputting a value \(m^*\) that was queried by \(\mathsf{A}^*\) to \(\mathsf {ProcVer}\). On the other hand, if \(G(\mathbf {y})= 1\) then \(\mathsf{B}^{{\mathsf {O}_\mathbf {y}}}\) will output 1 with probability at least c. This is because \(\mathsf{A}^*\) will have success probability 1 when such a \(\mathbf {y}\) is fixed, so by (2) \(\mathsf{R}^{\mathsf{A}^*}\) has success probability at least c, and \(\mathsf{B}\) outputs 1 whenever \(\mathsf{R}\) succeeds in the simulated \(\mathsf {mUFCMA}\) game.
4.3 \(\mathsf {mCR}_t\)-to-\(\mathsf {CR}_t\) Lower Bound
Black-box reductions for \(\mathsf {mCR}_t\) to \(\mathsf {CR}_t\). Similar to the case of signatures, we formalize a class of reductions from \(\mathsf {mCR}_t\) to \(\mathsf {CR}_t\) for a hash function \(\mathsf{H}\). Let \(\mathsf{R}\) be an oracle algorithm \(\mathsf{R}^{\mathsf{A}}\) that plays the \(\mathsf {CR}_t\) game (with the only oracle being \(\mathsf {Fin}\)), and additionally has access to an oracle \(\mathsf {NxtQ}_{\mathsf{A}}\) that returns the next query of some adversary playing the game \(\mathsf {mCR}_t\). The only oracles in \(\mathsf {mCR}_t\) are \(\mathsf {ProcInput}\) and \(\mathsf {Fin}\), so \(\mathsf {NxtQ}_{\mathsf{A}}\) either returns a domain point m or halts \(\mathsf{A}\). As before, the oracle resets itself after the last query by \(\mathsf{A}\), with the same input and random tape.
Definition 3
 1.
\(\mathsf{R}^\mathsf{A}\) starts by forwarding its initial input (consisting of the security parameter and hashing key) to \(\mathsf {NxtQ}_\mathsf{A}\).
 2.
When \(\mathsf{R}\) queries \(\mathsf {Fin}(m_1,\ldots ,m_t)\), the values \(m_1,\ldots ,m_t\) will be amongst the values that \(\mathsf {NxtQ}_\mathsf{A}\) returned as a query to \(\mathsf {ProcInput}\).
Theorem 3
Proof
The adversary \(\mathsf{A}^*\) works as follows: On input \(\lambda \) (and empty hash key), it chooses q random messages \(m_1,\ldots ,m_q\) and queries \(m_i\Vert i\) to its \(\mathsf {ProcInput}\) oracle, where i is encoded in \(\lambda \) bits. It then queries \(\mathsf {Fin}\) and halts.
Let \(\mathbf {y}\in (\{0,1\}^\lambda )^{q}\) be the vector consisting of all messages queried to \(\mathsf {ProcInput}\). If \({F_{\infty ,t}}(\mathbf {y}) = 0\), then \(\mathbf {Succ}(\mathsf {mCR}_t^{\mathsf{A}^*}\mid \mathbf {y}) = 0\) because there will be no t-collision in the queries of \(\mathsf{A}^*\). If however \({F_{\infty ,t}}(\mathbf {y}) = 1\), then \(\mathbf {Succ}(\mathsf {mCR}_t^{\mathsf{A}^*}\mid \mathbf {y}) = 1\) because there will be a t-collision, as the hash function \(\mathsf{H}\) is defined to truncate the final \(\lambda \) bits of its inputs, which consist of the counter value.
The streaming algorithm \(\mathsf{B}^{{\mathsf {O}_\mathbf {y}}}(2^\lambda ,q)\) works as follows. It initializes a counter i to 0 and runs \(\mathsf{R}\). When \(\mathsf{R}\) requests an input from \(\mathsf {NxtQ}_{\mathsf{A}^*}\), \(\mathsf{B}^{{\mathsf {O}_\mathbf {y}}}\) queries its oracle for \(\mathbf {y}[i]\) and returns \(\mathbf {y}[i]\Vert i\) to \(\mathsf{R}\). When \(\mathsf{R}\) halts by calling \(\mathsf {Fin}(m_1,\ldots ,m_t)\), \(\mathsf{B}^{{\mathsf {O}_\mathbf {y}}}\) simply checks if the messages are all of the form \(y\Vert i\) for a fixed y and different values of i. If so, it outputs 1 and otherwise it outputs 0.
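The final check performed by \(\mathsf{B}\) is purely syntactic and needs only the t messages from the \(\mathsf {Fin}\) call. A sketch, with messages modeled as strings whose last lam characters encode the counter (our modeling choice):

```python
def is_t_collision(messages, lam: int) -> int:
    # 1 iff all messages share one prefix y and carry pairwise-distinct
    # counter suffixes, i.e. they have the form y||i for different i.
    prefixes = {m[:-lam] for m in messages}
    suffixes = [m[-lam:] for m in messages]
    return int(len(prefixes) == 1 and len(suffixes) == len(set(suffixes)))
```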
Sharpness of the bounds. We observe that when one is not concerned with memory-tightness it is trivial to reduce t-multi-collision-resistance to t-collision-resistance, by simply storing all inputs to \(\mathsf {ProcInput}\) and checking for collisions. This will however be memory-loose if the \(\mathsf {mCR}_t\) adversary uses small memory but produces a large number of domain points (i.e. q is large). Memory-tightness can be achieved by rewinding O(q) times, but this increases the runtime of the reduction.
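The trivial memory-loose reduction just described can be sketched directly; the hash H and the stream of \(\mathsf {ProcInput}\) values are parameters here:

```python
def trivial_mcr_to_cr(stream, t, H):
    # Store every domain point, bucketed by hash value, and stop as soon as
    # some bucket holds t distinct inputs -- a t-collision for H.
    buckets = {}
    for x in stream:
        s = buckets.setdefault(H(x), set())
        s.add(x)
        if len(s) >= t:
            return sorted(s)[:t]
    return None
```

The buckets grow with q, which is exactly the memory-looseness the lower bound above says is unavoidable without extra passes.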
Theorem 4
If we choose \(c=1\) and \(m=q/p\), this theorem proves that the lower bound from Theorem 3 is sharp.
Proof
So overall, \(\mathsf{B}\) runs \(\mathsf{A}\) at most \(2p+1\) times and the hash algorithm \(\mathsf{H}\) at most \(p(m+mn)+q\) times. It needs to store \(2m+3\) counters of size \(\log q\le \lambda \) (i.e. \( 2m+3 \) memory units), m values from \(\mathsf{H}\)’s range \(\{0,1\}^\rho \) (i.e. m memory units), and the t elements from \(\{0,1\}^\delta \) that collide under \(\mathsf{H}\) (i.e. t memory units), and it must additionally provide memory for \(\mathsf{A}\) and \(\mathsf{H}\). \(\square \)
Limitations, extensions, and open problems. Our notion of blackbox reductions assumes that the reduction will only run the adversary \(\mathsf{A}\) from beginning to end, each time with the same random tape. It would be interesting to generalize the reduction to allow for partial rewinding of \(\mathsf{A}\), and also for saving “snapshots” of the state of \(\mathsf{A}\) that allow for rewinding.
Our restrictions on blackbox reductions confine them to essentially work like combinatorial streaming algorithms. It seems likely that these restrictions can be greatly relaxed by using a different notion of blackbox reduction and using pathological (unbounded) signature schemes and hash functions to enforce the combinatorial behavior of the reduction with high probability. We pursued our version of the results for simplicity.
5 Memory-Tight Reduction for RSA Full Domain Hash Signatures
This section gives an example of a memory-tight reduction obtained via the techniques of Sect. 3. We first recall the syntax of signature schemes and the RSA assumption. Then we show how the RSA Full Domain Hash signature scheme can be proven secure in the random oracle model using coin replacement, random oracle replacement, single rewinding, and the random oracle index guessing technique. For subtle reasons we implement all techniques using a single PRF to obtain a memory-tight reduction.
Signature schemes. A signature scheme consists of algorithms \(\mathsf {Gen}\), \(\mathsf {Sign}\), \(\mathsf {Ver}\) such that: algorithm \(\mathsf {Gen}\) generates a verification key \( pk \) and a signing key \( sk \); on input of a signing key \( sk \) and a message m, algorithm \(\mathsf {Sign}\) generates a signature \(\sigma \) or the failure indicator \(\bot \); on input of a verification key \( pk \), a message m, and a candidate signature \(\sigma \), the deterministic algorithm \(\mathsf {Ver}\) outputs 0 or 1 to indicate rejection and acceptance, respectively. A signature scheme is correct if for all \( sk , pk ,m\), if \(\mathsf {Sign}( sk ,m)\) outputs a signature then \(\mathsf {Ver}\) accepts it. Recall that the standard security notion of existential unforgeability against chosen message attacks is defined in Sect. 2.3 via the game of Fig. 2.
RSA assumption. Let \(\mathsf {GenRSA}_\lambda \) be an algorithm that returns \((N=pq, e, d)\), where p and q are distinct primes of bit size \( \lambda /2 \) and e, d are such that \(e=d^{-1} \bmod \varPhi (N)\).
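For concreteness, a toy instantiation of \(\mathsf {GenRSA}\) together with RSA Full Domain Hash signing and verification; the fixed small primes and the use of SHA-256 reduced mod N as the "full domain" hash are illustrative shortcuts, not the paper's parameters:

```python
import hashlib
from math import gcd

def gen_rsa_toy(p: int, q: int, e: int = 65537):
    # Toy GenRSA on given primes: d = e^{-1} mod phi(N). The real GenRSA
    # samples random lambda/2-bit primes.
    phi = (p - 1) * (q - 1)
    assert gcd(e, phi) == 1
    return p * q, e, pow(e, -1, phi)   # pow(e, -1, m) needs Python 3.8+

def fdh(m: bytes, N: int) -> int:
    # "Full domain" hash into Z_N (illustrative: SHA-256 reduced mod N).
    return int.from_bytes(hashlib.sha256(m).digest(), "big") % N

def sign(N: int, d: int, m: bytes) -> int:
    return pow(fdh(m, N), d, N)

def verify(N: int, e: int, m: bytes, sigma: int) -> int:
    return int(pow(sigma, e, N) == fdh(m, N))
```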
Definition 4
Theorem 5
Proof
Consider the sequence of games of Fig. 13. For computations in \(\mathbb {Z}_N\) we omit writing \(\bmod \, N\) when it is clear from the context. We assume without loss of generality that any message on which \(\mathsf {ProcSign}\) or \( \mathsf {Fin}\) is queried was previously queried to \(\mathsf {Hash}\).
6 Memory-Sensitive Problems
In this section we discuss the memory sensitivity of two cryptographic problems, multi-collision-resistance and learning parity with noise. In the full version of this paper [3], we also analyze the memory sensitivity of the discrete logarithm problem in prime fields and of the factoring problem.
To quantify the memory sensitivity of a problem \(\mathsf{P}\) we plot time/memory tradeoffs as in Fig. 1. The horizontal axis is memory consumption and the vertical axis is running time, both on a log scale. A point (x, y) is labeled either “solvable” or “unsolvable”, where solvable means that there exists an algorithm with memory consumption at most \(2^x\) and running time at most \(2^y\) that solves the problem. We refer to the boundary between the solvable and unsolvable regions as the transition line.
Algorithm \(\mathsf{A}\)  \(\mathbf {Mem}(\mathsf {CR}_k^{\mathsf{A}})\)  \(\mathbf {Time}(\mathsf {CR}_k^{\mathsf{A}})\)
Birthday (\(k=2\))  O(1)  \(2^{\lambda /2}\)
Joux–Lucks [20] (\(k=3\))  \(2^{\alpha \lambda }\)  \(2^{\lambda (1-\alpha )}\) (\(\alpha \le 1/3\))
From the table we derive the time/memory graph of \(\mathsf {CR}_k\) in Fig. 16. \(\mathsf {CR}_3\) is memory-sensitive, whereas \(\mathsf {CR}_2\) is not (as it has a horizontal transition line).
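The O(1)-memory birthday entry in the table corresponds to cycle-finding collision search. A sketch on a truncated hash (the 16-bit range and Floyd's algorithm are our illustrative choices; phase two can fail in the rare case that the start point already lies on the cycle):

```python
import hashlib

def h(x: int, bits: int = 16) -> int:
    # Toy hash with a bits-bit range (truncated SHA-256).
    d = hashlib.sha256(x.to_bytes(8, "big")).digest()
    return int.from_bytes(d, "big") % (1 << bits)

def rho_collision(start: int = 1, bits: int = 16):
    # Floyd cycle finding on x -> h(x): ~2^{bits/2} expected time, O(1) memory.
    f = lambda x: h(x, bits)
    tort, hare = f(start), f(f(start))
    while tort != hare:                  # phase 1: find a meeting point
        tort, hare = f(tort), f(f(hare))
    tort = start                         # phase 2: walk to the cycle entry;
    for _ in range(1 << bits):           # its two predecessors collide
        if f(tort) == f(hare):
            return (tort, hare) if tort != hare else None
        tort, hare = f(tort), f(hare)
    return None
```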
Learning Parity with Noise. Another example of a memory-sensitive problem is the well-known Learning Parity with Noise (LPN) problem. Let \(\lambda \in {\mathbb {N}}\) be the dimension and let the constant \(\tau \) define the error probability. The problem \(\mathsf {LPN}_{\lambda ,\tau }\) is to compute a random secret \(s \in \{0,1\}^\lambda \), given “noisy” random inner products with s, i.e. samples \((a_i,\nu _i)\) where \(a_i \in \{0,1\}^\lambda \) is uniformly random and \(\nu _i = \langle {a_i,s}\rangle +e_i\) for an error bit \(e_i\) that equals 1 with probability \(\tau \).
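A minimal sample generator for \(\mathsf {LPN}_{\lambda ,\tau }\), with all arithmetic over \(\mathbb {F}_2\) (the function name and the use of Python's random module are ours):

```python
import random

def lpn_samples(secret, tau: float, n: int, rng: random.Random):
    # n samples (a_i, nu_i): a_i uniform in {0,1}^lambda and
    # nu_i = <a_i, s> + e_i mod 2, where e_i = 1 with probability tau.
    lam = len(secret)
    samples = []
    for _ in range(n):
        a = [rng.randrange(2) for _ in range(lam)]
        e = 1 if rng.random() < tau else 0
        nu = (sum(x & y for x, y in zip(a, secret)) + e) % 2
        samples.append((a, nu))
    return samples
```

With \(\tau = 0\) this degenerates to plain Gaussian-solvable linear equations; the noise is what forces the time/memory tradeoffs discussed here.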
Figure 16 provides the corresponding time/memory graph. Note that the recent work [15] also considers a hybrid between the \(\mathsf {Well}\)-\(\mathsf {Pooled}\) \(\mathsf {Gauss}\) and \(\mathsf {BKW}\) algorithms, but the interval where the hybrid actually performs better is so small that we decided to ignore it.
We note that the situation with the Learning with Errors (LWE), the Shortest Integer Solution (SIS), and the approximate SVP problem is similar to that of the LPN problem [2, 13, 19].
Acknowledgments
The motivation for considering memory in the context of security reductions stems from the talk “Practical LPN Cryptanalysis”, given by Alexander May at the Dagstuhl Seminar 16371 on Public-Key Cryptography. We thank Elena Kirshanova, Robert Kübler, and Alexander May for their help with assessing the memory sensitivity of a number of hard problems. Finally, we are grateful to one of the CRYPTO 2017 reviewers for his/her very detailed and thoughtful review.
Auerbach was supported by the NRW Research Training Group SecHuman; Cash was supported in part by NSF grant CNS-1453132; Fersch and Kiltz were supported in part by ERC Project ERCC (FP7/615074) and by DFG SPP 1736 Big Data.
References
 1. Abdalla, M., Bellare, M., Rogaway, P.: The oracle Diffie-Hellman assumptions and an analysis of DHIES. In: Naccache, D. (ed.) CT-RSA 2001. LNCS, vol. 2020, pp. 143–158. Springer, Heidelberg (2001)
 2. Albrecht, M.R., Player, R., Scott, S.: On the concrete hardness of learning with errors. J. Math. Cryptol. 9(3), 169–203 (2015)
 3. Auerbach, B., Cash, D., Fersch, M., Kiltz, E.: Memory-tight reductions. Cryptology ePrint Archive, Report 2017/??? (2017). http://eprint.iacr.org/2017/???
 4. Bader, C., Jager, T., Li, Y., Schäge, S.: On the impossibility of tight cryptographic reductions. In: Fischlin, M., Coron, J.-S. (eds.) EUROCRYPT 2016. LNCS, vol. 9666, pp. 273–304. Springer, Heidelberg (2016)
 5. Bellare, M., Boldyreva, A., Micali, S.: Public-key encryption in a multi-user setting: security proofs and improvements. In: Preneel, B. (ed.) EUROCRYPT 2000. LNCS, vol. 1807, pp. 259–274. Springer, Heidelberg (2000)
 6. Bellare, M., Ristenpart, T.: Simulation without the artificial abort: simplified proof and improved concrete security for Waters’ IBE scheme. In: Joux, A. (ed.) EUROCRYPT 2009. LNCS, vol. 5479, pp. 407–424. Springer, Heidelberg (2009)
 7. Bellare, M., Rogaway, P.: Random oracles are practical: a paradigm for designing efficient protocols. In: Ashby, V. (ed.) ACM CCS 1993, Fairfax, Virginia, USA, 3–5 November 1993, pp. 62–73. ACM Press (1993)
 8. Bellare, M., Rogaway, P.: The exact security of digital signatures - how to sign with RSA and Rabin. In: Maurer, U. (ed.) EUROCRYPT 1996. LNCS, vol. 1070, pp. 399–416. Springer, Heidelberg (1996)
 9. Bellare, M., Rogaway, P.: The security of triple encryption and a framework for code-based game-playing proofs. In: Vaudenay, S. (ed.) EUROCRYPT 2006. LNCS, vol. 4004, pp. 409–426. Springer, Heidelberg (2006)
 10. Blum, A., Kalai, A., Wasserman, H.: Noise-tolerant learning, the parity problem, and the statistical query model. J. ACM 50(4), 506–519 (2003)
 11. Chatterjee, S., Koblitz, N., Menezes, A., Sarkar, P.: Another look at tightness II: practical issues in cryptography. Cryptology ePrint Archive, Report 2016/360 (2016). http://eprint.iacr.org/2016/360
 12. Chatterjee, S., Menezes, A., Sarkar, P.: Another look at tightness. In: Miri, A., Vaudenay, S. (eds.) SAC 2011. LNCS, vol. 7118, pp. 293–319. Springer, Heidelberg (2012)
 13. Chen, Y., Nguyen, P.Q.: BKZ 2.0: better lattice security estimates. In: Lee, D.H., Wang, X. (eds.) ASIACRYPT 2011. LNCS, vol. 7073, pp. 1–20. Springer, Heidelberg (2011)
 14. Coron, J.-S.: On the exact security of full domain hash. In: Bellare, M. (ed.) CRYPTO 2000. LNCS, vol. 1880, pp. 229–235. Springer, Heidelberg (2000)
 15. Esser, A., Kübler, R., May, A.: LPN decoded. In: Katz, J., Shacham, H. (eds.) CRYPTO 2017. LNCS, vol. 10402, pp. 486–514. Springer, Cham (2017)
 16. Galbraith, S.D., Gaudry, P.: Recent progress on the elliptic curve discrete logarithm problem. Cryptology ePrint Archive, Report 2015/1022 (2015). http://eprint.iacr.org/2015/1022
 17. Galindo, D.: The exact security of pairing based encryption and signature schemes. Based on a talk at Workshop on Provable Security, INRIA, Paris (2004). http://www.dgalindo.es/galindoEcrypt.pdf
 18. Gay, R., Hofheinz, D., Kiltz, E., Wee, H.: Tightly CCA-secure encryption without pairings. In: Fischlin, M., Coron, J.-S. (eds.) EUROCRYPT 2016. LNCS, vol. 9665, pp. 1–27. Springer, Heidelberg (2016)
 19. Herold, G., Kirshanova, E., May, A.: On the asymptotic complexity of solving LWE. Des. Codes Crypt. 1–29 (2017)
 20. Joux, A., Lucks, S.: Improved generic algorithms for 3-collisions. In: Matsui, M. (ed.) ASIACRYPT 2009. LNCS, vol. 5912, pp. 347–363. Springer, Heidelberg (2009)
 21. Kalyanasundaram, B., Schnitger, G.: The probabilistic communication complexity of set intersection. SIAM J. Discrete Math. 5(4), 545–557 (1992)
 22. Pointcheval, D., Stern, J.: Security arguments for digital signatures and blind signatures. J. Cryptol. 13(3), 361–396 (2000)
 23. Pollard, J.M.: A Monte Carlo method for factorization. BIT Numer. Math. 15(3), 331–334 (1975)
 24. Razborov, A.A.: On the distributional complexity of disjointness. Theor. Comput. Sci. 106(2), 385–390 (1992)
 25. Roughgarden, T.: Communication complexity (for algorithm designers) (2015). http://theory.stanford.edu/~tim/w15/l/w15.pdf