Impossibility of Black-Box Simulation Against Leakage Attacks
Abstract
In this work, we show how to use the positive results on succinct argument systems to prove impossibility results on leakage-resilient black-box zero knowledge. This recently proposed notion of zero knowledge deals with an adversary that can make leakage queries on the state of the prover. Our result holds for black-box simulation only, and we also give some insights on the non-black-box case. Additionally, we show that, for several functionalities, leakage-resilient multi-party computation is impossible (regardless of the number of players and even if just one player is corrupted).
In more detail, we achieve the above results by extending a technique of [Nielsen, Venturi, Zottarel – PKC 13] to prove lower bounds for leakage-resilient security. Indeed, we use leakage queries to run an execution of a communication-efficient protocol in the head of the adversary. Moreover, to defeat the black-box simulator we connect the above technique for leakage resilience to security against reset attacks.
Our results show that the open problem of [Ananth, Goyal, Pandey – Crypto 14] (i.e., continual leakage-resilient proofs without a common reference string) has a negative answer when security through black-box simulation is desired. Moreover, our results close the open problem of [Boyle et al. – STOC 12] for the case of black-box simulation (i.e., the possibility of continual leakage-resilient secure computation without a leak-free interactive preprocessing).
Keywords
Zero knowledge · MPC · Resettability · Succinct arguments · Impossibility results · Black-box vs. non-black-box simulation

1 Introduction
The intriguing notion of a zero-knowledge proof introduced by Goldwasser, Micali and Rackoff [31] has been for almost three decades a source of fascinating open questions in Cryptography and Complexity Theory. Indeed, motivated by new real-world attacks, the notion has been studied in different flavors (e.g., non-interactive zero knowledge [8], non-malleable zero knowledge [21], concurrent zero knowledge [23], resettable zero knowledge [16]), and each of them required extensive research to figure out the proper definition and its (in)feasibility. Moreover, all such real-world attacks have also been considered for the natural generalization of the concept of zero knowledge: secure computation [30].
Leakage Attacks. Leakage resilience deals with modeling real-world attacks where the adversary manages, through some physical observations, to obtain side-channel information on the state (e.g., private input, memory content, randomness) of the honest player (see, for example, [42]). Starting with the works of [25, 34, 35, 41], leakage resilience has been a mainstream research topic in Cryptography, and recently the gap between theory and practice has been significantly reduced [22, 40, 43].
The notions of leakage-resilient zero knowledge [28] (LRZK) and secure multi-party computation [10] (LRMPC) have also been considered. Despite the above intensive research on leakage resilience, LRZK and LRMPC are still rich in interesting open problems.
1.1 Previous Work and Open Problems
Leakage Resilience vs. Tolerance. The first definition for leakage-resilient zero knowledge (LRZK, in short) was given by Garg et al. in [28]. In their definition, the simulator is allowed to make leakage queries in the ideal world. This was justified by the observation that an adversary can, through leakage queries, easily obtain some of the bits of the witness used by the prover in the real world. Clearly, these bits of information cannot be simulated, unless the simulator is allowed to make queries in the ideal model. Therefore, the best one can hope for is that a malicious verifier does not learn anything from the protocol beyond the validity of the statement being proved and the leakage obtained from the prover. This formalization of security has been extensively studied by Bitansky et al. in [6] for the case of universally composable secure computation [15]. Similar definitions have been used in [9, 11, 12, 36].
In [28], constructions for LRZK in the standard model and for non-interactive LRZK in the common reference string (CRS) model were given. The simulator of [28] for LRZK asks for a total of \((1+\epsilon )\cdot l\) bits in the ideal world, where l is the number of bits obtained by the adversarial verifier. Thus the simulator is allowed to obtain more bits than the verifier, and this seems to be necessary, as Garg et al. show that it is impossible to obtain a simulator that asks for fewer than l bits in the ideal world. Very recently, Pandey [39] gave a constant-round construction for LRZK under the definition of [28].
Nowadays, leakage tolerance is the commonly accepted term for the security notion used in [6, 28, 39], as it does not prevent a leakage attack but only guarantees that a protocol does not leak more than what can be obtained through leakage queries. Bitansky et al. [7] obtained UC-secure continual leakage tolerance using an input-independent leak-free preprocessing phase.
Open Problems: Leakage Resilience with Leak-Free Encoding. The motivation to study leakage-tolerant Cryptography is based on the observation that a private input cannot be fully protected from a leakage query. However, this notion is quite extreme and does not necessarily fit all real-world scenarios. Indeed, it is commonly expected that an adversary attacks the honest player during the execution of the protocol, while they are connected through some communication channel. It is thus reasonable to assume that an honest player receives his input in a preliminary phase, before having ever had any interaction with the adversary. Once this input is received, the honest player can encode it in order to make it somewhat unintelligible to leakage queries but still usable for the execution of a protocol. This encoding phase can be considered leak-free since, as stressed before, the honest player has never been in touch with the adversary^{1}. Later on, when the interaction with the adversary starts, leakage queries will be possible, but they will affect the current state of the honest player, which contains an encoding of the input. The need for a leak-free phase to protect a secret from leakage queries was considered also in [26, 32, 33].
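As a toy illustration of such a leak-free input-encoding phase (a minimal sketch for intuition only, not a construction from this work or from the cited ones), an input can be split into two random XOR shares: each share in isolation is uniformly random, so a leakage function applied to a single share reveals nothing about the input, while the two shares together still determine it.

```python
import os

def encode_input(w: bytes) -> tuple[bytes, bytes]:
    # Leak-free phase: split w into two shares whose XOR equals w.
    # Each share, taken alone, is a uniformly random string.
    s1 = os.urandom(len(w))
    s2 = bytes(a ^ b for a, b in zip(w, s1))
    return s1, s2

def reconstruct(s1: bytes, s2: bytes) -> bytes:
    # The encoded state remains usable: the protocol can recompute w.
    return bytes(a ^ b for a, b in zip(s1, s2))

s1, s2 = encode_input(b"private-input")
assert reconstruct(s1, s2) == b"private-input"
```

Of course, once leakage functions may read the whole state (both shares at once), this simple encoding no longer helps by itself; that is exactly the setting studied in this paper.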
The above realistic scenario circumvents the argument that leakage tolerance is the best one can hope for, and opens the following challenging open questions:
Open Question 1: “Assuming players can encode their inputs during a leak-free phase, is it possible to construct LRZK argument/proof systems?”
Open Question 2: “Assuming players can encode their inputs during a leak-free phase, is it possible to construct protocols for leakage-resilient Multi-Party Computation (LRMPC)?”
Leakage Resilience Assuming the Existence of a CRS. Very recently, Ananth et al. [1] showed that in the CRS (common reference string) model it is possible to have an interactive argument system that remains non-transferable even in the presence of continual leakage attacks. More precisely, in their model a prover encodes the witness in a leak-free environment and, later on, the prover runs the protocol with a verifier using the encoded witness. During the execution of the protocol, the adversarial verifier is allowed to launch leakage queries. Once the protocol has been completed, the prover can refresh (again, in a leak-free environment) its encoded witness and then it can play again with the verifier (under leakage attacks). Non-transferability means that an adversarial verifier that mounts the above attack against an honest prover does not get enough information to later prove the same statement to an honest verifier. The main contribution of [1] is the construction of an encoding/refreshing mechanism and a protocol for non-transferable arguments against such continual leakage attacks. They left explicitly open the following problem (see page 167 of [1]): is it possible to obtain non-transferable arguments/proofs that remain secure against continual leakage attacks without relying on a CRS? This problem has similarities with Open Question 1. Indeed, zero knowledge (without a CRS) implies non-transferability, and therefore solving Open Question 1 in the positive and with continual leakage would solve the problem opened by [1] in a strong sense, since non-transferability would be achieved through zero knowledge, and this goes even beyond the security definition of [1]^{2}. However, as we show later, we give a negative answer to Open Question 1 for the case of black-box simulation.
Even in light of our negative results, the open problem of [1] remains open, as one might be able to construct leakage-resilient non-black-box zero knowledge (which is clearly non-transferable) or leakage-resilient witness hiding/indistinguishable proofs (which can still be non-transferable since non-malleable proofs can be achieved with non-malleable forms of WI, as shown in [37]).
The Model of Boyle et al. [10]. Boyle et al. [10] study LRMPC in a model consisting of three phases:
 1.
a leak-free interactive preprocessing to be run only once, obliviously w.r.t. inputs and functions;
 2.
a leak-free stand-alone input-encoding phase to be run when a new input arrives (and of course after the interactive preprocessing), obliviously w.r.t. the functions to be computed later;
 3.
an online phase where parties, on input the states generated during the last executions of the input-encoding phases, and on input a function f, run a protocol that aims at securely computing the output of f.
In the model of [10], leakage attacks are not possible during the first two phases but are possible at any other moment, including the third phase and between phases.
Reference [10] showed (a) the impossibility of leakage-resilient two-party computation and, more in general, of n-party LRMPC when \(n-1\) players are corrupted; (b) the feasibility of leakage-resilient MPC when the number of players is polynomial and a constant fraction of them is honest.
The positive result works for an even stronger notion of leakage resilience referred to as “continual leakage”, which has recently been investigated in several papers [13, 14, 19, 20, 24]. Continual leakage means that the same input can be reused through unboundedly many executions of the protocol, each allowing for a bounded leakage, as long as the state can be refreshed after each execution. Leakage queries are allowed also during the refreshing.
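As a minimal sketch of what a refresh can look like (a hypothetical toy, not the refreshing mechanism of the cited works), consider a state stored as two XOR shares of the input: XORing the same fresh mask into both shares re-randomizes the state while leaving the encoded value unchanged, so a bounded leakage budget applies afresh to the new shares.

```python
import os

def refresh(s1: bytes, s2: bytes) -> tuple[bytes, bytes]:
    # Re-randomize both shares with one fresh mask; since the mask
    # cancels out, s1 ^ s2 (the encoded input) is unchanged.
    mask = os.urandom(len(s1))
    return (bytes(a ^ m for a, m in zip(s1, mask)),
            bytes(b ^ m for b, m in zip(s2, mask)))

xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))

s1, s2 = os.urandom(8), os.urandom(8)
t1, t2 = refresh(s1, s2)
assert xor(t1, t2) == xor(s1, s2)  # encoded value preserved
```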
Boyle et al. explicitly leave open (see paragraph “LRMPC with Non-Interactive Preprocessing” on page 1240 of [10]) the problem of achieving their results without the preprocessing (i.e., Open Question 2), and implicitly left open the case of zero-knowledge arguments/proofs (i.e., Open Question 1), since when restricting to the ZK functionality only, the function is known in advance and therefore their impossibility for the two-party case does not directly apply.
We notice that the result of [1] does not yield a continual leakage-resilient non-transferable proof system for the model of [10]. Indeed, while the preprocessing of [10] can be used to establish the CRS needed by [1], the refresh of the state in [1] requires a leak-free phase that is not available in the model of [10]. We finally stress that the construction of [1] is not proved to be LRZK.
However, the interesting open question in the model of [10] consists in achieving continual LRZK without an interactive preprocessing. Indeed, if an interactive preprocessing is allowed, continual LRZK can be trivially achieved as follows. The preprocessing can be used to run a secure two-party computation for generating a shared random string. The input-encoding phase can replace the witness with a non-interactive zero-knowledge proof of knowledge (NIZKPK). The online phase can be implemented by simply sending the previously computed NIZKPK. This trivial solution would allow the leakage of the entire state, therefore guaranteeing continual leakage (i.e., no refresh is needed).
Impossibility Through Obfuscation. In the model studied by Garg et al. [28], the simulator is allowed to see the leakage queries issued by the adversarial verifier (but not the replies) and, based on these, it decides its own leakage queries in the ideal model. Nonetheless, the actual simulator constructed by [28] does not use this possibility; such a simulator is called leakage-oblivious. In our setting (in which the simulator is not allowed to ask queries) leakage-oblivious simulators are very weak: an adversarial verifier that asks the query for the function \(R(x,\cdot )\) applied to the witness w (here R is the relation associated to the \(\mathbb {NP}\) language L and x is the common input) cannot be simulated. Notice though that in the model we are interested in, the leak-free encoding phase might invalidate this approach, since the encoded witness could have a completely different structure and therefore could make R evaluate to 0. Despite this issue (which is potentially fixable), the main problem is that in our setting the simulator can read the query of the adversarial verifier and could easily answer 1 (the honest prover always has a valid witness). Given the recent construction of circuit obfuscators [27], one could then think of forcing simulators to be leakage-oblivious by considering an adversary that obfuscates its leakage queries. While this approach has potential, we point out that our goal is to show the impossibility under standard assumptions (e.g., the existence of a family of CRHFs).
The Technique of Nielsen et al. [36]. We finally discuss the very relevant work of Nielsen et al. [36], which showed a lower bound on the size of a secret key for leakage-tolerant adaptively secure message transmission. Nielsen et al. introduced in their work a very interesting attack consisting of asking, through a leakage query, for a collision-resistant hash of the state of an honest player. Then a succinct argument of knowledge is run through leakage queries, in which the honest player is asked to prove knowledge of a state that is consistent with the previously sent hash value. As we will discuss later, we will extend this technique to achieve our main result. CRHFs and succinct arguments of knowledge were also used to prove impossibility of leakage resilience in [18], but in a very different context. Indeed, in [18] the above tools are used to check consistency with the transcript of played messages, with the goal of proving that full adaptive security is needed in multi-party protocols as soon as some small amount of leakage must be tolerated.
1.2 Our Results
In this paper we study the above open questions and show the following results.
Black-Box LRZK Without CRS/Preprocessing. As a main result, we show that, if a family of collision-resistant hash functions exists, then black-box LRZK is impossible for non-trivial languages if we only rely on a leak-free input-encoding phase (i.e., without CRS/preprocessing). In more detail, with respect to the works of [1, 10], our results show that, by removing the CRS/preprocessing, not only is non-transferable continual black-box LRZK impossible, but even ignoring non-transferability and continual leakage, the simple notion of one-time black-box LRZK is impossible. Extending the techniques of [36], we design an adversarial verifier \({\mathsf {V}}^\star \) that uses leakage queries to obtain a very small amount of data compared to the state of the prover and whose view cannot be simulated in a black-box manner. The impossibility holds even when it is already known at the input-encoding phase which protocol will be played later.
Overview of Our Techniques. We prove the above impossibility result by extending the previously discussed technique of [36]: the adversary will attack the honest player without running the actual protocol at all! Indeed, the adversary will only run an execution of another (insecure) protocol in its head, using leakage queries to get messages from the other player for the “virtual” execution of the (insecure) protocol.
In more detail, assuming by contradiction the existence of a protocol \(({\mathsf {P}},{\mathsf {V}})\) for a language \(L\not \in {\mathsf {BPP}}\), we show an adversary \({\mathsf {V}}^{\star }\) that first runs a leakage query to obtain a collision-resistant (CR) hash \(\tilde{w}\) of the state \(\hat{w}\) of the prover. Then it takes a communication-efficient (insecure) protocol \(\varPi =(\varPi .{\mathsf {P}},\varPi .{\mathsf {V}})\) and, through leakage queries, \({\mathsf {V}}^{\star }\) runs in its head an execution of \(\varPi \), playing as an honest verifier \(\varPi .{\mathsf {V}}\), while the prover \({\mathsf {P}}\) will have to play as \(\varPi .{\mathsf {P}}\), proving that the hash is a good one: namely, it corresponds to a state that would convince an honest verifier \({\mathsf {V}}\) of the membership of the instance in L. We stress that this technique was introduced in [36].
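The shape of this attack can be sketched as follows (a toy Python sketch; `next_message` is a hypothetical stand-in for \(\varPi .{\mathsf {P}}\)'s next-message function, and `leakage_query` simply models the definition of a leakage query): the first query returns a short collision-resistant hash of the prover's state, and each subsequent query evaluates the next-message function of \(\varPi \) on that state, so the entire “virtual” execution of \(\varPi \) happens inside leakage responses.

```python
import hashlib

def leakage_query(circuit, prover_state: bytes):
    # The leakage oracle of the model: an adversary-chosen function
    # evaluated on the prover's current state.
    return circuit(prover_state)

def next_message(state: bytes, transcript: bytes, v_msg: bytes) -> bytes:
    # Hypothetical stand-in for Pi.P's next-message function; a real
    # instantiation would run the succinct-argument prover on state.
    return hashlib.sha256(state + transcript + v_msg).digest()

state = b"encoded-witness-and-coins" * 100  # a large prover state

# First leakage query: a short CR hash binding P to its state.
w_tilde = leakage_query(lambda s: hashlib.sha256(s).digest(), state)

# Subsequent queries: V* runs Pi in its head, one query per round.
transcript = b""
for v_msg in (b"v1", b"v2", b"v3"):
    p_msg = leakage_query(
        lambda s, t=transcript, v=v_msg: next_message(s, t, v), state)
    transcript += v_msg + p_msg

assert len(w_tilde) == 32  # far shorter than the prover's state
```

Because \(\varPi \) is communication-efficient on the prover's side, the total number of bits leaked this way stays far below the size of the prover's state.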
Notice that in the real-world execution \({\mathsf {P}}\) would convince \({\mathsf {V}}^{\star }\) during the “virtual” execution of \(\varPi \), since \({\mathsf {P}}\) runs on input an encoded witness that, by the completeness of \(({\mathsf {P}},{\mathsf {V}})\), convinces \({\mathsf {V}}\).
Therefore, a black-box simulator will have to do the same without having the encoding of a witness, relying only on its rewinding capabilities. To show our impossibility, we extend the technique of [36] by rendering the capabilities of the simulator useless. This is done by connecting leakage resilience with resettability. Indeed, we choose \(\varPi \) not only to be communication-efficient on \(\varPi .{\mathsf {P}}\)’s side (this ensures that the outputs of the leakage queries are small compared to the state of \({\mathsf {P}}\)), but also to be a resettable argument of knowledge (and therefore resettably sound). Such arguments of knowledge admit an extractor \(\varPi .\mathsf{Ext}\) that works even against a resetting prover \(\varPi .{\mathsf {P}}^{\star }\) (in our impossibility, such an adversary will be the simulator \(\mathsf{Sim}\) of \(({\mathsf {P}},{\mathsf {V}})\)).
The existence of a family of CR hash functions gives not only the CR hash function required by the first leakage query but also the communication-efficient resettable argument of knowledge for \(\mathbb {NP}\). Indeed, we can use Barak’s public-coin universal argument [3], which enjoys a weak argument of knowledge property when used for languages in NEXP. When used for \(\mathbb {NP}\) languages, instead, Barak’s construction is a regular argument of knowledge with a black-box extractor. We can finally make it extractable also in the presence of a resetting prover by using the transformation of Barak et al. [4], which only requires the existence of one-way functions.
Summing up, we will show that the existence of a blackbox simulator for \(({\mathsf {P}},{\mathsf {V}})\) implies either that the language is in \({\mathsf {BPP}}\), or that \(({\mathsf {P}},{\mathsf {V}})\) is not sound or that the family of hash functions is not collision resistant.
The Non-Black-Box Case. Lower bounds in the case of non-black-box simulation are rare in Cryptography, and indeed we cannot rule out the existence of an LRZK argument whose security is based on the existence of a non-black-box simulator. We will, however, discuss some evidence that achieving a positive result under standard assumptions requires a breakthrough on non-black-box simulation that goes beyond Barak’s non-black-box techniques.
Impossibility of Leakage-Resilient MPC for Several Functionalities. Additionally, we address Open Question 2 by showing that for many functionalities LRMPC with a leak-free input-encoding phase (and without an interactive preprocessing phase) is impossible. This impossibility holds regardless of the number of players involved in the computation and only assumes that one player is corrupted. It applies to functionalities that, when executed multiple times with the input \(x_{i}\) of an honest player \(P_{i}\) kept unchanged, produce outputs delivered to the dishonest players that reveal more information on \(x_{i}\) than a single output would reveal. Similar functionalities were studied in [17]. We also require outputs to be short.
Our impossibility is actually even stronger, since it holds also in case the functionality and the corresponding protocol to be run later are already known during the input-encoding phase.
For simplicity, we will discuss a direct example of such a class of functionalities: a variation of Yao’s Millionaires’ Problem, where n players send their inputs to the functionality that will then send as output a bit b specifying whether player \(P_{1}\) is the richest one.
High-Level Overview. The adversary will focus on attacking player \(P_{1}\), who has an input to protect. By means of a single leakage query, the adversary can play the entire protocol in its head, selecting inputs and randomness for all other players, and obtaining as output of the leakage query the output of the function (i.e., the bit b). This “virtual” execution can be repeated multiple times, therefore extracting more information on the input of the player. Indeed, by playing multiple times and changing the inputs of the other players while the input of \(P_{1}\) remains the same, it is possible to restrict the possible input of \(P_{1}\) to a much smaller range of values than what can be inferred from a single execution.
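For the two-player variant of the functionality above, this extraction can be sketched as follows (toy code for intuition; in the real attack each “virtual” execution is one leakage query on \(P_{1}\)'s state): each query reveals only the single output bit b, yet repeating the execution with different inputs for the other player binary-searches \(P_{1}\)'s wealth exactly.

```python
def leakage_query(circuit, state):
    # Leakage oracle: an adversary-chosen function of P1's state.
    return circuit(state)

def functionality(x1: int, others: list[int]) -> int:
    # The variant of the Millionaires' Problem: output b = 1
    # iff player P1 is (strictly) the richest.
    return int(all(x1 > x for x in others))

def extract_wealth(state: int, bits: int = 16) -> int:
    # Each iteration runs one whole "virtual" execution via a single
    # leakage query, learning one bit; log-many runs pin down x1.
    lo, hi = 0, 2**bits - 1
    while lo < hi:
        mid = (lo + hi) // 2
        b = leakage_query(lambda x1, m=mid: functionality(x1, [m]), state)
        if b:
            lo = mid + 1   # x1 > mid
        else:
            hi = mid       # x1 <= mid
    return lo

assert extract_wealth(12345) == 12345
```

Sixteen virtual executions here suffice to recover the input exactly, far more than any single output of the functionality reveals.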
The above attack is clearly impossible to simulate, since simulating it would require the execution of multiple queries in the ideal world, but the simulator by definition can make only one query.
When running the protocol through leakage queries, we are of course assuming that authenticated channels do not need to be simulated by the adversary^{3}, since their management is transparent to the state of the players running the leakage-resilient protocol. This is already assumed in previous work like [10], since otherwise leakage-resilient authenticated channels would have been required, while instead [10] only requires an authenticated broadcast channel (see Sect. 3 of [10]).
We will give only a sketch of this additional simpler result.
2 Definitions
We will denote by “\(\alpha \circ \beta \)” the string resulting from appending \(\beta \) to \(\alpha \), and by [k] the set \(\{1,\ldots ,k\}\). A polynomial-time relation R is a relation for which it is possible to verify in time polynomial in |x| whether \(R(x,w)=1\). We will consider \(\mathbb {NP}\)-languages L and denote by \(R_L\) the corresponding polynomial-time relation such that \(x\in L\) if and only if there exists w such that \(R_L(x,w)=1\). We will call such a w a valid witness for \(x\in L\) and denote by \(W_L(x)\) the set of valid witnesses for \(x\in L\). We will slightly abuse notation and, whenever L is clear from the context, we will simply write W(x) instead of \(W_L(x)\). A negligible function \(\nu (k)\) is a function such that for every constant \(c>0\) and for all sufficiently large k, \(\nu (k)< k^{-c}\).
We will now give all definitions required for the main result of our work, the impossibility of black-box LRZK. Since we will only sketch the additional result on LRMPC, we refer the reader to [10] for the additional definitions.
2.1 Interactive Proof Systems
An interactive proof system [31] for a language L is a pair of interactive Turing machines \(({\mathsf {P}},{\mathsf {V}})\), satisfying the requirements of completeness and soundness. Informally, completeness requires that for any \(x\in L\), at the end of the interaction between \({\mathsf {P}}\) and \({\mathsf {V}}\), where \({\mathsf {P}}\) has on input a valid witness for \(x \in L\), \({\mathsf {V}}\) rejects with negligible probability. Soundness requires that for any \(x\not \in L\), for any computationally unbounded \({\mathsf {P}}^{\star }\), at the end of the interaction between \({\mathsf {P}}^{\star }\) and \({\mathsf {V}}\), \({\mathsf {V}}\) accepts with negligible probability. When \({\mathsf {P}}^{\star }\) is only probabilistic polynomial-time, we have an argument system. We denote by \(\langle {\mathsf {P}},{\mathsf {V}}\rangle (x)\) the output of the verifier \({\mathsf {V}}\) when interacting on common input x with prover \({\mathsf {P}}\). Also, sometimes we will use the notation \(\langle {\mathsf {P}}(w),{\mathsf {V}}\rangle (x)\) to stress that prover \({\mathsf {P}}\) receives as additional input a witness w for \(x\in L\). We will write \(\langle {\mathsf {P}}(w;r_{P}),{\mathsf {V}}(r_{V}) \rangle (x)\) to make explicit the randomness used by \({\mathsf {P}}\) and \({\mathsf {V}}\). We will also write \({\mathsf {V}}^{\star }(z)\) to denote an adversarial verifier \({\mathsf {V}}^{\star }\) that runs on input an auxiliary string z.
Definition 1
A pair of interactive Turing machines \(({\mathsf {P}},{\mathsf {V}})\) is an interactive proof system for a language L if the following two conditions hold:
 1.
Completeness: There exists a negligible function \(\nu (\cdot )\) such that for every \(x\in L\) and for every \(w\in W(x)\): \(\text{ Prob }\left[ \;\langle {\mathsf {P}}(w),{\mathsf {V}}\rangle (x)=1\;\right] \ge 1-\nu (|x|).\)
 2.
Soundness: For every \(x\not \in L\) and for every interactive Turing machine \({\mathsf {P}}^{\star }\) there exists a negligible function \(\nu (\cdot )\) such that \(\text{ Prob }\left[ \;\langle {\mathsf {P}}^{\star },{\mathsf {V}}\rangle (x)=1\;\right] \le \nu (|x|).\)
If the soundness condition holds only with respect to probabilistic polynomialtime interactive Turing machines \({\mathsf {P}}^{\star }\) then \(({\mathsf {P}},{\mathsf {V}})\) is called an argument.
We now define the notions of reset attack and of resetting prover.
Definition 2
[4]. A reset attack of a prover \({\mathsf {P}}^{\star }\) on \({\mathsf {V}}\) is defined by the following two-step random process, indexed by a security parameter k.
 1.
Uniformly select and fix \(t = \mathsf{poly}(k)\) random tapes, denoted by \(r_1,\ldots ,r_t\), for \({\mathsf {V}}\), resulting in deterministic strategies \({\mathsf {V}}^{(i)}(x) = {\mathsf {V}}_{x,r_{i}}\) defined by \({\mathsf {V}}_{x,r_{i}}(\alpha ) = {\mathsf {V}}(x,r_i, \alpha )\), where \(x \in \{0,1\}^{k}\) and \(i \in \{1,\ldots ,t\}\). Each \({\mathsf {V}}^{(i)}(x)\) is called an incarnation of \({\mathsf {V}}\).
 2.
On input \(1^k\), machine \({\mathsf {P}}^{\star }\) is allowed to initiate \(\mathsf{poly}(k)\)-many interactions with \({\mathsf {V}}\). The activity of \({\mathsf {P}}^{\star }\) proceeds in rounds. In each round \({\mathsf {P}}^{\star }\) chooses \(x \in \{0,1\}^k\) and \(i \in \{1,\ldots ,t\}\), thus defining \({\mathsf {V}}^{(i)}(x)\), and conducts a complete session with it (a session is complete if it is either terminated or aborted).
A prover that launches a reset attack is called a resetting prover.
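As a small illustrative sketch (toy code, not part of the definition), an incarnation is just the verifier with its random tape fixed, so its answers are a deterministic function of the conversation so far; a resetting prover can therefore restart an incarnation and replay messages at will.

```python
import hashlib

def make_incarnation(x: bytes, r_i: bytes):
    # V^(i)(x): the verifier V with input x and random tape r_i fixed.
    # Its reply depends only on the messages received so far.
    def V(transcript: bytes) -> bytes:
        return hashlib.sha256(x + r_i + transcript).digest()
    return V

V1 = make_incarnation(b"x", b"tape-1")
# Resetting: rerunning the same incarnation on the same prefix
# yields identical verifier messages.
assert V1(b"first-message") == V1(b"first-message")
```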
We now define proofs/arguments of knowledge, in particular considering the case of a prover launching a reset attack.
Definition 3
An interactive verifier \({\mathsf {V}}\) is a knowledge verifier for a relation R with negligible knowledge error if the following two conditions hold:
 
Non-triviality: There exists a probabilistic polynomial-time interactive machine \({\mathsf {P}}\) such that for every \((x,w) \in R\), with overwhelming probability an interaction of \({\mathsf {V}}\) with \({\mathsf {P}}\) on common input x, where \({\mathsf {P}}\) has auxiliary input w, is accepting.

Validity (or Knowledge Soundness) with Negligible Error \(\epsilon \): for every probabilistic polynomial-time machine \({\mathsf {P}}^{\star }\) there exists an expected polynomial-time machine \(\mathsf{Ext}\) such that for every \(x,aux,r \in \{0,1\}^{\star }\), \(\mathsf{Ext}\) satisfies the following condition: Denote by p(x, aux, r) the probability (over the random tape of \({\mathsf {V}}\)) that \({\mathsf {V}}\) accepts upon input x, when interacting with the prover \({\mathsf {P}}^{\star }\) who has input x, auxiliary input aux and random tape r. Then, machine \(\mathsf{Ext}\), upon input (x, aux, r), outputs a solution \(w \in W(x)\) with probability at least \(p(x, aux, r)-\epsilon (|x|)\).
A pair \(({\mathsf {P}}, {\mathsf {V}})\) such that \({\mathsf {V}}\) is a knowledge verifier with negligible knowledge error for a relation R and \({\mathsf {P}}\) is a machine satisfying the non-triviality condition (with respect to \({\mathsf {V}}\) and R) is called an argument of knowledge for the relation R. If the validity condition holds with respect to any (not necessarily polynomial-time) machine \({\mathsf {P}}^{\star }\), then \(({\mathsf {P}},{\mathsf {V}})\) is called a proof of knowledge for R. If the validity condition holds with respect to a polynomial-time machine \({\mathsf {P}}^{\star }\) launching a reset attack, then \(({\mathsf {P}},{\mathsf {V}})\) is called a resettable argument of knowledge for R.
If, in the above definition, the extractor \(\mathsf{Ext}\) does not depend on the code of the prover (i.e., the same extractor works with all possible provers), then the interactive argument/proof system is a black-box (resettable) argument/proof of knowledge.
The Input-Encoding Phase. Following previous work, we will assume that the prover receives the input and encodes it running in a leak-free environment. This is unavoidable since otherwise a leakage query can ask for some bits of the witness, and therefore zero knowledge would be trivially impossible to achieve unless the simulator is allowed to ask leakage queries in the ideal world (i.e., leakage tolerance). After this leak-free phase, which we call the input-encoding phase, the prover has a state consisting only of the encoded witness and is ready to start the actual leakage-resilient protocol.
Leakage-Resilient Protocol [39]. As in previous work, we assume that random coins are available only in the specific step in which they are needed. In more detail, the prover \({\mathsf {P}}\) at each round of the protocol obtains fresh randomness r for the computations related to that round. However, unlike in previous work, we do not require the prover to update its state by appending r to it. We allow the prover to erase randomness and change its state during the protocol execution. This makes our impossibility results even stronger.
The adversarial verifier performs a leakage query by specifying a polynomial-sized circuit C that takes as input the current state of the prover. The verifier immediately gets the output of C and can adaptively decide how to continue. An attack of the verifier that includes leakage queries is called a leakage attack.
Definition 4
Given a polynomial p, we say that an interactive argument/proof system \(({\mathsf {P}},{\mathsf {V}})\) for a language \(L \in \mathbb {NP}\) with a witness relation R is p(|x|)-leakage-resilient zero knowledge if for every probabilistic polynomial-time machine \({\mathsf {V}}^{*}\) launching a leakage attack on \({\mathsf {P}}\) after the input-encoding phase, obtaining at most p(|x|) bits, there exists a probabilistic polynomial-time machine \(\mathsf{Sim}\) such that for every \(x \in L\), every w such that \(R(x, w)=1\), and every \(z \in \{0, 1\}^{*}\), the distributions \(\langle {\mathsf {P}}(w),{\mathsf {V}}^{\star }(z)\rangle (x)\) and \(\mathsf{Sim}(x, z)\) are computationally indistinguishable.
The definition of standard zero knowledge is obtained by enforcing that no leakage query is allowed to any machine and by removing the input-encoding phase.
If, in the above definition, the simulator \(\mathsf{Sim}\) does not depend on the code of the verifier (i.e., the same simulator works with all possible verifiers), then the interactive argument/proof system is leakage-resilient black-box zero knowledge. We will denote by \(\mathsf{Sim}^{{\mathsf {V}}^\star }\) an execution of \(\mathsf{Sim}\) having oracle access to \({\mathsf {V}}^\star \).
3 Impossibility of Leakage-Resilient Zero Knowledge
Here we prove that black-box LRZK argument systems exist only for \({\mathsf {BPP}}\) languages.
Tools. In our proof we assume the existence of a communication-efficient argument system \(\varPi =(\varPi .{\mathsf {P}},\varPi .{\mathsf {V}})\) for a specific auxiliary \(\mathbb {NP}\) language (to be defined later). Moreover, we require such an argument system to be a resettable argument of knowledge. Specifically, we require that on common input x, \(\varPi .{\mathsf {P}}\) sends \(O(|x|^\epsilon )\) bits to \(\varPi .{\mathsf {V}}\) for an arbitrarily chosen constant \(\epsilon >0\). We denote, with a slight abuse of notation, by \(\varPi .{\mathsf {P}}\) the prover’s next-message function; that is, \(\varPi .{\mathsf {P}}\) on input x, randomness \(r_1,\ldots ,r_{i-1}\) used in the previous \(i-1\) rounds, fresh randomness \(r_i\) and verifier messages \(v_1,\ldots ,v_{i}\) received so far, outputs \(\mathtt {msg}_i\), the prover’s i-th message. Similarly, we denote the verifier’s next-message function by \(\varPi .{\mathsf {V}}\). Finally, we denote by \(\varPi .\mathsf{Ext}\) the extractor that in expected polynomial time outputs a witness for \(x \in L\) whenever a polynomial-time prover can make \(\varPi .{\mathsf {V}}\) accept \(x \in L\) with non-negligible probability.
Such a resettable argument of knowledge \(\varPi \) exists based on the existence of a family of collision-resistant hash functions. It can be obtained by starting with the public-coin universal argument of [3], which for \(\mathbb {NP}\) languages is also an argument of knowledge. Then, by applying the transformation of [4], which requires one-way functions, we have that the resulting protocol is still communication efficient and, moreover, is a resettable argument of knowledge.
Theorem 1
Assume the existence of a family of collision-resistant hash functions. If an \(\mathbb {NP}\) language L admits an \((x^\epsilon )\)-leakage-resilient black-box zero-knowledge argument system \(\varPi _{LRZK}=({\mathsf {P}},{\mathsf {V}})\) for some constant \(\epsilon >0\), then \(L\in {\mathsf {BPP}}\).
Proof. For the sake of contradiction, we assume that a language \(L\not \in {\mathsf {BPP}}\) admits an \((x^\epsilon )\)-leakage-resilient zero-knowledge argument system \(({\mathsf {P}},{\mathsf {V}})\) with black-box simulator \(\mathsf{Sim}\) for some constant \(\epsilon >0\). We now describe an adversarial verifier \({\mathsf {V}}^\star ={\mathsf {V}}^\star _{x,s,h,t}\), parameterized by the input x, strings s and t, and a function h from a family of collision-resistant hash functions. In the description of \({\mathsf {V}}^\star \), we let \(\{F_s\}\) be a pseudorandom family of functions.
Our proof makes use of the auxiliary language \(\varLambda \) consisting of the tuples \(\tau =(h,\tilde{w},\mathtt {rand}^{{\mathsf {P}}},\mathtt {rand}^{{\mathsf {V}}})\) for which there exists \(\hat{w}\) such that \(h(\hat{w})=\tilde{w}\) and \(\langle {\mathsf {P}}(\hat{w};\mathtt {rand}^{{\mathsf {P}}}),{\mathsf {V}}(\mathtt {rand}^{{\mathsf {V}}})\rangle (x)=1\). Clearly, \(\varLambda \in \mathbb {NP}\). Let \(\varPi =(\varPi .{\mathsf {P}},\varPi .{\mathsf {V}})\) be a communication-efficient argument system for \(\varLambda \). We assume w.l.o.g. that the number of rounds of \(\varPi \) is \(2\ell \) (i.e., \(\ell \) messages played by the verifier and \(\ell \) messages played by the prover) where \(\ell >1\), and that the verifier speaks first.
 1.
At the start of the interaction between \({\mathsf {P}}\) and \({\mathsf {V}}^\star \) on an n-bit input x with \(n=\mathsf{poly}(k)\), the state of \({\mathsf {P}}\) consists solely of the encoding \(\hat{w}\) of the witness w for \(x\in L\), where \(|\hat{w}|=\mathsf{poly}(n)\).
 2.
\({\mathsf {V}}^\star \) issues leakage query \(Q_0\) by specifying function h; as a reply, \({\mathsf {V}}^\star \) receives \(\tilde{w}=h(\hat{w})\), a hash of the encoding of the witness used by \({\mathsf {P}}\).
 3.
\({\mathsf {V}}^\star \) then selects randomness$$\mathtt {rand}=( \mathtt {rand}^{{\mathsf {P}}},\mathtt {rand}^{{\mathsf {V}}}, \mathtt {rand}^{\varPi .{\mathsf {P}}}_1, \ldots ,\mathtt {rand}^{\varPi .{\mathsf {P}}}_{\ell }, \mathtt {rand}^{\varPi .{\mathsf {V}}}_1,\ldots ,\mathtt {rand}^{\varPi .{\mathsf {V}}}_{\ell },\mathtt {rand}^{\varPi .{\mathsf {V}}}_{\ell +1} )$$by setting \(\mathtt {rand}=F_s(\tilde{w}\circ x)\).
 4.
\({\mathsf {V}}^\star \) performs, by means of leakage queries, an execution of the protocol \(\varPi \) on common input \((h,\tilde{w},\mathtt {rand}^{{\mathsf {P}}},\mathtt {rand}^{{\mathsf {V}}})\).
Specifically, for round \(i=1,\ldots ,\ell \), \({\mathsf {V}}^\star \) computes$$v_{i}=\varPi .{\mathsf {V}}\left( (h,\tilde{w},\mathtt {rand}^{{\mathsf {P}}},\mathtt {rand}^{{\mathsf {V}}}), \{\mathtt {msg}_{j}\}_{0<j<i}, \{\mathtt {rand}^{\varPi .{\mathsf {V}}}_{j}\}_{0<j\le i} \right) $$and issues leakage query \(Q_i\) for the prover’s next-message function$$\varPi .{\mathsf {P}}\left( (h,\tilde{w},\mathtt {rand}^{{\mathsf {P}}},\mathtt {rand}^{{\mathsf {V}}}),\ \cdot \ , \{v_j\}_{0<j\le i}, \{\mathtt {rand}^{\varPi .{\mathsf {P}}}_{j}\}_{0<j\le i} \right) $$that is to be applied to the state \(\hat{w}\) of prover \({\mathsf {P}}\). In other words, the query computes the prover’s ith message \(\mathtt {msg}_i\) of an interaction of protocol \(\varPi \) in which prover \(\varPi .{\mathsf {P}}\) (running on randomness \(\mathtt {rand}^{\varPi .{\mathsf {P}}}_{1},\ldots ,\mathtt {rand}^{\varPi .{\mathsf {P}}}_\ell \)) tries to convince verifier \(\varPi .{\mathsf {V}}\) (running on randomness \(\mathtt {rand}^{\varPi .{\mathsf {V}}}_{1},\ldots ,\mathtt {rand}^{\varPi .{\mathsf {V}}}_{\ell },\mathtt {rand}^{\varPi .{\mathsf {V}}}_{\ell +1}\)) that \((h,\tilde{w},\mathtt {rand}^{{\mathsf {P}}},\mathtt {rand}^{{\mathsf {V}}})\in \varLambda \). After receiving prover \(\varPi .{\mathsf {P}}\)’s last message, \({\mathsf {V}}^\star \) computes \(\varPi .{\mathsf {V}}\)’s output in this interaction:$$b=\varPi .{\mathsf {V}}((h,\tilde{w},\mathtt {rand}^{\mathsf {P}},\mathtt {rand}^{\mathsf {V}}), \mathtt {msg}_{1},\ldots ,\mathtt {msg}_{\ell }, \mathtt {rand}^{\varPi .{\mathsf {V}}}_{1},\ldots ,\mathtt {rand}^{\varPi .{\mathsf {V}}}_{\ell +1}).$$  5.
If \(b=1\) then \({\mathsf {V}}^\star \) outputs t; otherwise, \({\mathsf {V}}^\star \) outputs \(\bot \).
This concludes the description of \({\mathsf {V}}^\star \).
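To make the structure of the attack concrete, the following Python sketch shows how \({\mathsf {V}}^\star \) drives a complete "virtual" execution of \(\varPi \) using only leakage queries on the prover's state, leaking far fewer bits than the size of that state. All names, the toy stand-ins for h and for \(\varPi \)'s next-message functions, and the toy acceptance check are illustrative assumptions, not part of the paper.

```python
import hashlib

ELL = 3  # toy number of prover/verifier rounds (the paper's ℓ)

def h(state: bytes) -> bytes:
    """Stand-in for the collision-resistant hash: a short digest of the state."""
    return hashlib.sha256(state).digest()

def pi_prover_next(tau: bytes, state: bytes, v_msgs, rand_p) -> bytes:
    """Toy stand-in for Π.P's next-message function: the i-th prover message
    depends on the (large) prover state and the verifier messages so far."""
    i = len(v_msgs)
    return hashlib.sha256(tau + state + b"".join(v_msgs) + rand_p[i - 1]).digest()[:4]

def pi_verifier_next(tau: bytes, p_msgs, rand_v) -> bytes:
    """Toy stand-in for Π.V's next-message function."""
    i = len(p_msgs)
    return hashlib.sha256(tau + b"".join(p_msgs) + rand_v[i]).digest()[:4]

def leak(prover_state: bytes, f):
    """Model of a leakage query: V* specifies a function f and learns f(state)."""
    return f(prover_state)

def malicious_verifier(prover_state: bytes, t: bytes = b"secret-t"):
    leaked_bits = 0
    # Query Q0: a short hash w~ of the prover's encoded witness.
    w_tilde = leak(prover_state, h)
    leaked_bits += 8 * len(w_tilde)
    # Randomness for the virtual execution (derived via F_s(w~ ∘ x) in the
    # paper; fixed toy values here).
    rand_p = [bytes([i]) for i in range(ELL)]
    rand_v = [bytes([100 + i]) for i in range(ELL + 1)]
    tau = w_tilde  # toy common input for Π
    v_msgs, p_msgs = [], []
    for _ in range(ELL):
        # V* computes its own next message of Π locally (no leakage needed)...
        v_msgs.append(pi_verifier_next(tau, p_msgs, rand_v))
        # ...and issues query Qi: Π.P's next-message function applied to the
        # prover's state; only the short prover message is leaked.
        msg = leak(prover_state,
                   lambda st: pi_prover_next(tau, st, list(v_msgs), rand_p))
        leaked_bits += 8 * len(msg)
        p_msgs.append(msg)
    # Toy placeholder for Π.V's final verdict b.
    b = 1 if all(len(m) == 4 for m in p_msgs) else 0
    return (t if b == 1 else None), leaked_bits

out, leaked = malicious_verifier(b"encoded-witness-" * 100)
print(out, leaked)  # a few hundred leaked bits vs. a 1600-byte state
```

The point of the sketch is the accounting: the adversary sees only the hash digest and the \(\ell \) short prover messages, exactly the quantities bounded in the next paragraph.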
Counting the Number of Bits Leaked. The total number of bits leaked equals the length of the output of the first leakage query (i.e., the length in bits of a range element of the collision-resistant hash function), \(|\tilde{w}|=k\), plus the number of bits sent by the prover in \(\varPi \), which, for inputs of length n, is \(O(n^{\epsilon '})\) for an arbitrarily small constant \(\epsilon '>0\). Since \(n=\mathsf{poly}(k)\), the amount of leakage can be made smaller than \(n^\epsilon \) for any \(\epsilon >0\).
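Assuming, for concreteness, that \(n=k^{c}\) for a constant c (so that \(k=n^{1/c}\)), the bound can be made explicit: choosing \(\epsilon '<\epsilon \) and \(c>1/\epsilon \) gives, for all sufficiently large n,$$\underbrace{k}_{\text{query } Q_0} \;+\; \underbrace{O(n^{\epsilon '})}_{\text{queries } Q_1,\ldots ,Q_\ell } \;=\; n^{1/c}+O(n^{\epsilon '}) \;<\; n^{\epsilon }.$$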
\(\mathsf{Sim}\) Can Get t Only by Succeeding in \(\varPi \), Therefore Properly Answering the Leakage Queries. We continue by observing that the output of the real game (i.e., when \({\mathsf {P}}\) and \({\mathsf {V}}^\star _{x,s,h,t}\) interact) is t. Therefore, \(\mathsf{Sim}\) must output t when interacting with \({\mathsf {V}}^\star _{x,s,h,t}\) with overwhelming probability. Since \(\mathsf{Sim}\) is a black-box simulator, and since all messages of \({\mathsf {V}}^{\star }_{x,s,h,t}\), except for the last one, are independent of t, the only way \(\mathsf{Sim}\) can obtain t from \({\mathsf {V}}^\star _{x,s,h,t}\) is by replying with a value \(\tilde{w}\) to the first leakage query and by replying to queries \(Q_1,\ldots ,Q_\ell \) so as to define a transcript \(\mathsf {Conv}=(v_{1},\mathtt {msg}_{1},\ldots ,v_\ell ,\mathtt {msg}_{\ell })\) that, for common input \((h,\tilde{w},\mathtt {rand}^{\mathsf {P}},\mathtt {rand}^{\mathsf {V}})\), produces \(1=\varPi .{\mathsf {V}}((h,\tilde{w},\mathtt {rand}^{\mathsf {P}},\mathtt {rand}^{\mathsf {V}}), \mathtt {msg}_{1},\ldots ,\mathtt {msg}_{\ell }, \mathtt {rand}^{\varPi .{\mathsf {V}}}_{1},\ldots ,\mathtt {rand}^{\varPi .{\mathsf {V}}}_{\ell +1})\).
By the security of the pseudorandom function, we can consider the same experiment except that \(\mathtt {rand}=\mathcal{R}(\tilde{w}\circ x)\) (computed by \({\mathsf {V}}^\star \) in step 3 of its description), where \(\mathcal R\) is a truly random function (i.e., each time \(\tilde{w}\circ x\) is new, \(\mathtt {rand}\) is computed by sampling fresh randomness).
We denote by \(\mathsf{Sim}_{R}^{{\mathsf {V}}^\star }\) the simulation in such a modified game. We can show (the proofs of the following lemmas are omitted for lack of space) that \(\mathsf{Sim}_{R}^{{\mathsf {V}}^\star }\) still outputs t with overwhelming probability.
Lemma 1
The output of \(\mathsf{Sim}_{R}^{{\mathsf {V}}^\star }\) is computationally indistinguishable from the output of \(\mathsf{Sim}^{{\mathsf {V}}^\star }\).
We can then show that \(\mathsf{Sim}_{R}^{{\mathsf {V}}^\star }(x,z)\) outputs t also for some \(x\not \in L\).
Lemma 2
If \(L \not \in {\mathsf {BPP}}\) then there exists some \(x\not \in L\) such that \(\mathsf{Sim}_{R}^{{\mathsf {V}}^\star }(x,z)\) outputs t with probability greater than 2/3.
Let \(x\not \in L\) be a special statement such that \(\mathsf{Sim}_{R}^{{\mathsf {V}}^\star }(x,z)\) outputs t with probability greater than 2/3 (such an x exists since we are assuming that \(L\not \in {\mathsf {BPP}}\)). This means that \(\mathsf{Sim}_{R}\) feeds \({\mathsf {V}}^\star \) with a transcript of messages that with non-negligible probability produces t as output.
Let \(\mathsf{time}_{\mathsf{Sim}_{R}}\) be the expected running time of \(\mathsf{Sim}_{R}\). Consider the strict polynomial-time machine \(\mathsf{Sim}_{pR}\) that consists of running the first \(3\,\mathsf{time}_{\mathsf{Sim}_{R}}\) steps of \(\mathsf{Sim}_{R}\).
We can prove the following lemma.
Lemma 3
If \(L \not \in {\mathsf {BPP}}\) then there exists some \(x\not \in L\) such that \(\mathsf{Sim}_{pR}^{{\mathsf {V}}^\star }(x,z)\) outputs t with probability greater than 1/3.
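The underlying step is the standard Markov averaging argument: \(\mathsf{Sim}_{R}\) exceeds \(3\,\mathsf{time}_{\mathsf{Sim}_{R}}\) steps with probability at most 1/3, hence$$\text{ Prob }\left[ \mathsf{Sim}_{pR}^{{\mathsf {V}}^\star }(x,z) \text{ outputs } t\right] \ge \text{ Prob }\left[ \mathsf{Sim}_{R}^{{\mathsf {V}}^\star }(x,z) \text{ outputs } t\right] - \frac{1}{3} > \frac{2}{3}-\frac{1}{3} = \frac{1}{3}.$$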
For notational purposes, we say that a query of \(\mathsf{Sim}_{pR}\) to \({\mathsf {V}}^{\star }\) belongs to the ith session if it is a tuple \((h,\tilde{w},\ldots )\) where \(\tilde{w}\) is the ith distinct value played by \(\mathsf{Sim}_{pR}\) as the first message of \(\varPi .{\mathsf {P}}\) when answering a leakage query of \({\mathsf {V}}^{\star }\). Let \(\mathsf{time}_{\mathsf{Sim}_{pR}}\) be the strict polynomial bounding the running time of \(\mathsf{Sim}_{pR}\).
We can then prove the existence of a critical session i.
Lemma 4
There exist \(x\not \in L\) and \(i \in [\mathsf{time}_{\mathsf{Sim}_{pR}}]\) such that \(\mathsf{Sim}_{pR}^{{\mathsf {V}}^{\star }}\) obtains t after answering a query of the ith session with non-negligible probability.
Consider now the augmented simulator \(\mathsf{Sim}_{pR}^{i^{{\mathsf {V}}^{\star }}}\) that works as \(\mathsf{Sim}_{pR}^{{\mathsf {V}}^{\star }}\) except that, in the ith session, \({\mathsf {V}}^{\star }\) will only send h, while all other messages of \({\mathsf {V}}^{\star }\) will be obtained from an external oracle that plays the honest verifier of \(\varPi \). Let \(\mathsf{time}_{\varPi .\mathsf{Ext}}\) be the expected running time of \(\varPi .\mathsf{Ext}\).
We can prove the following lemma.
Lemma 5
There exist \(x\not \in L\) and \(i \in [\mathsf{time}_{\mathsf{Sim}_{pR}}]\) such that the extractor \(\varPi .\mathsf{Ext}\) of \(\varPi \), running in expected polynomial time, outputs a witness \(\hat{w}\) for \(\tau =(h,\tilde{w},\mathtt {rand}^{{\mathsf {P}}},\mathtt {rand}^{{\mathsf {V}}}) \in \varLambda \) with non-negligible probability. Moreover \(\text{ Prob }\left[ \;\langle {\mathsf {P}}(\hat{w}),{\mathsf {V}}\rangle (x)=1\;\right] \) is non-negligible.
We now show an adversarial prover \({\mathsf {P}}^{\star }\) that violates the soundness of \(\varPi _{LRZK}\). Let \(\varPi .\mathsf{Ext}_{p}\) be the strict polynomial-time extractor that behaves precisely as \(\varPi .\mathsf{Ext}\) (up to a given polynomial number of steps), as specified in the last part of the proof of Lemma 5.
 1.
\({\mathsf {P}}^{\star }\) picks at random \(i \in [\mathsf{time}_{\mathsf{Sim}_{pR}}]\) and then runs \(\varPi .\mathsf{Ext}_{p}\) with respect to \(\mathsf{Sim}_{pR}^{i^{{\mathsf {V}}^{\star }}}\). If \(\varPi .\mathsf{Ext}_{p}\) does not output a state \(\hat{w}\) as part of a witness proving that \(\tau \in \varLambda \), then \({\mathsf {P}}^{\star }\) aborts.
 2.
\({\mathsf {P}}^{\star }\) then runs the honest prover \({\mathsf {P}}\) of \(\varPi _{LRZK}\) on input \(\hat{w}\) to prove to an honest verifier \({\mathsf {V}}\) that \(x \in L\), where x is the above special statement (i.e., \(x \not \in L\)).
First of all, the running time of \({\mathsf {P}}^{\star }\) is clearly polynomial since both of the above steps take polynomial time. Then, we notice that by Lemma 5, both Step 1 and Step 2 complete without aborting with non-negligible probability. This is due to the fact that the extractor \(\varPi .\mathsf{Ext}_{p}\) fails only with negligible probability and that the extracted state \(\hat{w}\) gives an honest prover of \(({\mathsf {P}},{\mathsf {V}})\) a non-negligible probability of convincing the verifier. Therefore \({\mathsf {P}}^{\star }\) succeeds in proving a false statement to the honest \({\mathsf {V}}\) with non-negligible probability.
We have proved that if \(L \not \in {\mathsf {BPP}}\) then \(\varPi _{LRZK}\) cannot be both LRZK and sound.
3.1 Discussion on Non-Black-Box LRZK
Since we have shown that LRZK is impossible when security is proved through black-box simulation, a natural question is whether non-black-box simulation can be useful to overcome this impossibility result.
The technique that we have shown for the black-box case is based on an adversarial verifier \({\mathsf {V}}^\star \) that uses leakage queries to perform an execution of a resettably sound communication-efficient argument of knowledge \(\varPi \) against an honest prover. This makes the rewinding capabilities of the simulator ineffective, therefore showing the impossibility of a black-box simulation.
However, the technique proposed by Barak in [2] allows for non-black-box straight-line simulation, thus bypassing the difficulty of simulating a protocol where rewinds are useless. The construction and simulator proposed by Barak in [2] yield public-coin constant-round zero knowledge with a straight-line simulator, going therefore beyond the limits of black-box simulation [29]. It is also known that non-black-box simulation allows for resettably sound zero knowledge [4], where a prover can reset a verifier while the protocol still remains sound and zero knowledge. This is similar to the setting in which our black-box impossibility result holds. Indeed our adversarial verifier \({\mathsf {V}}^\star \) is resilient to rewinds of the black-box simulator.
Having in mind the goal of overcoming the above impossibility result through non-black-box simulation, recall that in order to properly answer the leakage queries of our adversarial verifier, a simulator either must simulate the execution of the universal argument^{4} or must use a special trapdoor. Such a trapdoor must allow an honest prover of \(\varPi _{LRZK}\) to succeed in convincing an honest verifier that runs on input a randomness r. Such randomness is revealed by \({\mathsf {V}}^\star \) only after seeing the short representation \(\tilde{w}\) of the state. Barak’s construction does not allow running the prover with an input different from a witness for \(x \in L\); however, we next present a simple variant of it that does.
A Variation of Barak’s Construction. Consider the following variant of Barak’s protocol: (1) the verifier sends the description of a CRHF h; (2) the prover sends \(h_w=h(\mathsf{{Com}}(w,u))\) to the verifier^{5}, where w is its private input, \(\mathsf{{Com}}\) is the commitment function of a non-interactive commitment scheme, and u is a random string; (3) the verifier sends a random string z; (4) the prover runs a witness-indistinguishable universal argument proving that either \(x \in L\) or \(h_w\) corresponds to the hash of a commitment of a machine M that in at most \(n^{\log \log n}\) steps outputs z; the prover uses its private input w and u as witness in the universal argument.
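The message flow above can be sketched as follows. In this toy Python rendering, H and Com are illustrative stand-ins (not the actual primitives), and the witness-indistinguishable universal argument of step (4) is abstracted to a comment, since it is the only nontrivial component of the protocol.

```python
import hashlib, os

def H(data: bytes) -> bytes:
    """Stand-in for the CRHF h sent by the verifier in step (1)."""
    return hashlib.sha256(b"crhf:" + data).digest()

def Com(value: bytes, u: bytes) -> bytes:
    """Toy non-interactive commitment (illustrative only)."""
    return hashlib.sha256(b"com:" + value + u).digest()

# Step (2): the prover commits to its *private input* w; the variation is
# exactly here, since Barak's original protocol commits to 0^n instead.
w = b"a-witness-for-x-OR-the-code-of-the-verifier"
u = os.urandom(32)
h_w = H(Com(w, u))

# Step (3): the verifier sends a random challenge string z.
z = os.urandom(32)

# Step (4): the prover would now give a WI universal argument that
#   x in L,  OR  h_w hashes a commitment to a machine M that outputs z
#   within n^{log log n} steps,
# using (w, u) as witness; here we only record the commitment relation.
assert h_w == H(Com(w, u))
print(len(h_w), len(z))
```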
Notice that the variation is really minimal: it just consists of asking the prover to use its private input when computing \(h_w\). The impact of this variation is that the prover can now run the protocol successfully both when receiving as input a witness for \(x\in L\) and when receiving as input the code of the verifier.
The above small variation does not affect the zero-knowledge property (the proof is the same as Barak’s), but it allows the simulator to answer the leakage queries of \({\mathsf {V}}^\star \), since the description of \({\mathsf {V}}^\star \) can be used as a legitimate encoded state that a prover can use in order to convince a verifier using a randomness r (again, such r is revealed by \({\mathsf {V}}^\star \) upon receiving through a leakage query the short representation of the state of the prover).
We stress that the discussion so far does not propose an LRZK protocol; rather, it shows that the impossibility result given for the black-box case fails spectacularly when Barak’s non-black-box techniques are considered.
Defeating Barak’s Non-Black-Box Simulation Technique. While the above discussion seems to say that Barak’s techniques could be used to design an LRZK protocol, we argue here that a breakthrough on non-black-box simulation^{6} is required in order to obtain an LRZK protocol. Notice that the above variation of Barak’s construction allowed the prover to use a special trapdoor (the code of the verifier) instead of a witness to successfully run the protocol. Moreover, notice that the size of such a trapdoor is not bounded by a fixed polynomial in the length of the common input, since it depends on the size of the adversarial verifier. Instead, there exists a constant \(c>0\) such that the length of a legitimate encoded witness of an LRZK protocol for a common input of length n is at most \(n^c\). Therefore, let us consider an adversarial verifier that, just as in the impossibility proof for black-box LRZK, uses the leakage queries to execute a special protocol with a prover. In such a protocol, in addition to proving that the encoded state (that is consistent with the commitment already sent) makes the verifier accept, the prover also proves that the committed value is the hash of an encoded state of length at most \(n^c\). Then the code of the adversarial verifier cannot be used anymore, as the simulation fails for adversarial verifiers whose code is longer than \(n^c\). In other words, Barak’s technique turns out to be insufficient. Additionally, the adversarial verifier might send a long vector of random strings \(r_1,\ldots ,r_\ell \), therefore asking the prover to prove in the universal argument that the verifier would have accepted the proof when running with any of those \(\ell \) randomnesses. Since \(\ell \) can be greater than the upper bound on the encoded witness, there is no way to commit to a small machine that can predict all such strings.
In other words, we would need a non-black-box simulation technique that relies on standard assumptions and allows one to construct a protocol where the trapdoor used by the simulator is of an a-priori fixed, bounded size and can thus be given as input to the prover. Notice that it is exactly because of this limitation (or, rather, because of the lack of it) on the size of the trapdoor that the construction from [2] requires the use of witness-indistinguishable universal arguments instead of witness-indistinguishable arguments of knowledge. In turn, this implies that the straight-line simulation of [2] can only be extended to bounded concurrency, leaving still unsolved the question of achieving constant-round concurrent zero knowledge under standard assumptions.
In conclusion, as for many other lower bounds in zero knowledge, when taking into account non-black-box simulation we cannot rule out the existence of a non-black-box LRZK argument system, but at the same time we have given evidence that, to obtain such a result, new breakthroughs on non-black-box simulation are required.
4 Impossibility of LRMPC
We now use again the technique of running a protocol in the head of the adversary through leakage queries to show that LRMPC is impossible, therefore solving a problem left open in [10]. For this simpler result we give only a sketch of the proof and we defer the additional definitions to [10]. We stress that the only variation here is that the interactive preprocessing does not take place (as required in the formulation of the open problem in [10]).
We can show that for many functionalities LRMPC with a leak-free input-encoding phase is impossible. The functionalities involved are those for which, when they are run multiple times keeping the input \(x_{i}\) of an honest player \(P_{i}\) unchanged, the (short) outputs delivered to the dishonest players reveal more information on \(x_{i}\) than a single output would reveal. Our impossibility requires just one dishonest player.
For simplicity we will now consider one such functionality: a variation of Yao’s Millionaires’ Problem, where n players \(P_{1},\ldots ,P_{n}\) send their inputs to the functionality \(\mathcal {F}\) and then \(\mathcal {F}\) outputs to all players a bit b specifying whether \(P_{1}\) is the richest one.
Theorem 2
Consider the n-party functionality \(\mathcal {F}\) that on input n k-bit strings \(x_{1},\ldots ,x_{n}\) outputs to all players the bit \(b=1\) when \(x_{1}\ge x_{j}\) for every \(j \in \{2,\ldots ,n\}\), and \(b=0\) otherwise. If at least one player among \(P_{2},\ldots ,P_{n}\) is corrupted and can get two bits as the total output of leakage queries, then there exists no LRMPC for \(\mathcal {F}\).
Proof
We will sketch the proof since the main ideas were already used in the proof of the impossibility of LRZK.
Assume by contradiction that there exists a secure multiparty protocol \(\varPi \). Assume w.l.o.g. that all players are honest except \(P_{n}\). The adversary \(\mathsf{Adv}\) controls \(P_{n}\) and works as follows.
 1.
It sends a leakage query that includes different encodings of the same value \(x_{2}=\cdots =x_{n}=2^{k-1}\) for players \(P_{2},\ldots ,P_{n}\); the leakage query asks for a “virtual” execution of the protocol in which \(P_{1}\) uses its state \(\hat{x}_{1}\), and requires as output the output of \(P_{n}\).
 2.
It repeats Step 1 changing the value to be used for the \(n-1\) encodings of \(P_{2},\ldots ,P_{n}\) (still a unique value for all of them) according to binary search (i.e., \(2^{k-1}+2^{k-2}\) if the previous output was 1, or \(2^{k-2}\) otherwise).
 3.
\(\mathsf{Adv}\) ends the protocol by outputting the first two bits of the original (i.e., pre-encoding) input of \(P_{1}\).
The communication complexity (from honest players to the adversary) of this execution through leakage queries is the constant 2. Notice that the above leakage attack can be mounted with two queries each obtaining one bit as output, or with one single query obtaining two bits as output. As a result of the above leakage attack, \(\mathsf{Adv}\) in the real world obtains the first two bits of \(x_{1}\), the original input of \(P_{1}\). \(\mathsf{Sim}\) in the ideal world does not have such information since it can perform only one query to \(\mathcal {F}\), therefore getting at most one bit.
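The binary search above can be sketched as follows. The function names are hypothetical; `richest_oracle` models one "virtual" execution of the protocol obtained through a single one-bit leakage query, and each query reveals one more bit of \(x_1\).

```python
def richest_oracle(x1: int, guess: int) -> int:
    """Models one virtual execution via a leakage query: P2..Pn all use the
    value `guess`, and the adversary learns the bit b = 1 iff x1 >= guess."""
    return 1 if x1 >= guess else 0

def extract_top_bits(k: int, num_bits: int, leak_query) -> int:
    """Recover the top `num_bits` bits of P1's k-bit input x1,
    one leaked bit per query, by binary search on the threshold."""
    top = 0
    guess = 0
    for i in range(1, num_bits + 1):
        # First probe 2^{k-1}; then 2^{k-1}+2^{k-2} or 2^{k-2}, and so on.
        probe = guess + 2 ** (k - i)
        b = leak_query(probe)
        top = (top << 1) | b
        if b:
            guess = probe  # x1 >= probe: this bit of x1 is set
    return top

k = 8
x1 = 0b10110011                      # P1's secret (pre-encoding) input
leak = lambda g: richest_oracle(x1, g)
bits = extract_top_bits(k, 2, leak)
print(bits)  # → 2, i.e. 0b10, the two most significant bits of x1
```

Note that with k queries of one bit each the same search recovers \(x_1\) entirely, which is why the ideal-world simulator, limited to a single output of \(\mathcal {F}\), cannot match the real-world adversary.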
Footnotes
 1.
Moreover such a phase can be run on a different device disconnected from the network, running an operating system installed on some read-only disk.
 2.
Their definition does not require zero knowledge.
 3.
In more detail, we are assuming that the encoded state of the player does not include any information useful to check whether a message supposedly coming from a player \(P_{j}\) is genuine.
 4.
 5.
Note that in Barak’s protocol the prover uses \(0^n\) instead of w.
 6.
We stress that our work sticks with the use of standard/falsifiable assumptions.
Acknowledgments
We thank the anonymous reviewers for their useful comments. The full version of this work appears in [38].
Part of this work was done while the second and third authors were visiting the Computer Science Department of UCLA.
This work has been supported by NSF grants 09165174, 1065276, 1118126 and 1136174, US-Israel BSF grant 2008411, OKAWA Foundation Research Award, IBM Faculty Research Award, Xerox Faculty Research Award, B. John Garrick Foundation Award, Teradata Research Award, and Lockheed Martin Corporation Research Award. This material is based upon work supported by the Defense Advanced Research Projects Agency through the U.S. Office of Naval Research under Contract N00014-11-1-0392. The views expressed are those of the author and do not reflect the official policy or position of the Department of Defense or the U.S. Government.
References
 1.Ananth, P., Goyal, V., Pandey, O.: Interactive proofs under continual memory leakage. In: Garay, J.A., Gennaro, R. (eds.) CRYPTO 2014, Part II. LNCS, vol. 8617, pp. 164–182. Springer, Heidelberg (2014)
 2.Barak, B.: How to go beyond the black-box simulation barrier. In: 42nd Annual Symposium on Foundations of Computer Science, FOCS 2001, pp. 106–115. IEEE Computer Society (2001)
 3.Barak, B.: Non-black-box techniques in cryptography. Ph.D. Thesis (2004). http://www.boazbarak.org/Papers/thesis.pdf
 4.Barak, B., Goldreich, O., Goldwasser, S., Lindell, Y.: Resettably-sound zero-knowledge and its applications. In: 42nd Annual Symposium on Foundations of Computer Science, FOCS 2001, pp. 116–125. IEEE Computer Society (2001)
 5.Bellare, M., Goldreich, O.: On defining proofs of knowledge. In: Brickell, E.F. (ed.) CRYPTO 1992. LNCS, vol. 740, pp. 390–420. Springer, Heidelberg (1993)
 6.Bitansky, N., Canetti, R., Halevi, S.: Leakage-tolerant interactive protocols. In: Cramer, R. (ed.) TCC 2012. LNCS, vol. 7194, pp. 266–284. Springer, Heidelberg (2012)
 7.Bitansky, N., Dachman-Soled, D., Lin, H.: Leakage-tolerant computation with input-independent preprocessing. In: Garay, J.A., Gennaro, R. (eds.) CRYPTO 2014, Part II. LNCS, vol. 8617, pp. 146–163. Springer, Heidelberg (2014)
 8.Blum, M., De Santis, A., Micali, S., Persiano, G.: Non-interactive zero knowledge. SIAM J. Comput. 20(6), 1084–1118 (1991)
 9.Boyle, E., Garg, S., Jain, A., Kalai, Y.T., Sahai, A.: Secure computation against adaptive auxiliary information. In: Canetti, R., Garay, J.A. (eds.) CRYPTO 2013, Part I. LNCS, vol. 8042, pp. 316–334. Springer, Heidelberg (2013)
 10.Boyle, E., Goldwasser, S., Jain, A., Kalai, Y.T.: Multiparty computation secure against continual memory leakage. In: Proceedings of the 44th Symposium on Theory of Computing Conference, STOC 2012, pp. 1235–1254. ACM (2012)
 11.Boyle, E., Goldwasser, S., Kalai, Y.T.: Leakage-resilient coin tossing. In: Peleg, D. (ed.) Distributed Computing. LNCS, vol. 6950, pp. 181–196. Springer, Heidelberg (2011)
 12.Boyle, E., Goldwasser, S., Kalai, Y.T.: Leakage-resilient coin tossing. Distrib. Comput. 27(3), 147–164 (2014)
 13.Boyle, E., Segev, G., Wichs, D.: Fully leakage-resilient signatures. J. Cryptol. 26(3), 513–558 (2013)
 14.Brakerski, Z., Kalai, Y.T., Katz, J., Vaikuntanathan, V.: Overcoming the hole in the bucket: public-key cryptography resilient to continual memory leakage. In: 51st Annual IEEE Symposium on Foundations of Computer Science, FOCS 2010, pp. 501–510. IEEE Computer Society (2010)
 15.Canetti, R.: Universally composable security: a new paradigm for cryptographic protocols. In: 42nd Annual Symposium on Foundations of Computer Science, FOCS 2001, pp. 136–145. IEEE Computer Society (2001)
 16.Canetti, R., Goldreich, O., Goldwasser, S., Micali, S.: Resettable zero-knowledge (extended abstract). In: Proceedings of the Thirty-Second Annual ACM Symposium on Theory of Computing, STOC 2000, pp. 235–244. ACM (2000)
 17.Dagdelen, Ö., Mohassel, P., Venturi, D.: Rate-limited secure function evaluation: definitions and constructions. In: Kurosawa, K., Hanaoka, G. (eds.) PKC 2013. LNCS, vol. 7778, pp. 461–478. Springer, Heidelberg (2013)
 18.Damgård, I., Dupuis, F., Nielsen, J.B.: On the orthogonal vector problem and the feasibility of unconditionally secure leakage resilient computation. IACR Cryptology ePrint Archive 2014 (2014). http://eprint.iacr.org/2014/282
 19.Dodis, Y., Haralambiev, K., López-Alt, A., Wichs, D.: Cryptography against continuous memory attacks. In: 51st Annual IEEE Symposium on Foundations of Computer Science, FOCS 2010, pp. 511–520. IEEE Computer Society (2010)
 20.Dodis, Y., Lewko, A.B., Waters, B., Wichs, D.: Storing secrets on continually leaky devices. In: IEEE 52nd Annual Symposium on Foundations of Computer Science, FOCS 2011, pp. 688–697. IEEE (2011)
 21.Dolev, D., Dwork, C., Naor, M.: Non-malleable cryptography (extended abstract). In: Proceedings of the 23rd Annual ACM Symposium on Theory of Computing, STOC 1991, pp. 542–552. ACM (1991)
 22.Duc, A., Dziembowski, S., Faust, S.: Unifying leakage models: from probing attacks to noisy leakage. In: Nguyen, P.Q., Oswald, E. (eds.) EUROCRYPT 2014. LNCS, vol. 8441, pp. 423–440. Springer, Heidelberg (2014)
 23.Dwork, C., Naor, M., Sahai, A.: Concurrent zero-knowledge. In: Proceedings of the Thirtieth Annual ACM Symposium on the Theory of Computing, STOC 1998, pp. 409–418. ACM (1998)
 24.Dziembowski, S., Faust, S.: Leakage-resilient circuits without computational assumptions. In: Cramer, R. (ed.) TCC 2012. LNCS, vol. 7194, pp. 230–247. Springer, Heidelberg (2012)
 25.Dziembowski, S., Pietrzak, K.: Leakage-resilient cryptography. In: 49th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2008, pp. 293–302. IEEE Computer Society (2008)
 26.Faust, S., Rabin, T., Reyzin, L., Tromer, E., Vaikuntanathan, V.: Protecting circuits from leakage: the computationally-bounded and noisy cases. In: Gilbert, H. (ed.) EUROCRYPT 2010. LNCS, vol. 6110, pp. 135–156. Springer, Heidelberg (2010)
 27.Garg, S., Gentry, C., Halevi, S., Raykova, M., Sahai, A., Waters, B.: Candidate indistinguishability obfuscation and functional encryption for all circuits. In: 54th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2013, pp. 40–49. IEEE Computer Society (2013)
 28.Garg, S., Jain, A., Sahai, A.: Leakage-resilient zero knowledge. In: Rogaway, P. (ed.) CRYPTO 2011. LNCS, vol. 6841, pp. 297–315. Springer, Heidelberg (2011)
 29.Goldreich, O., Krawczyk, H.: On the composition of zero-knowledge proof systems. SIAM J. Comput. 25(1), 169–192 (1996)
 30.Goldreich, O., Micali, S., Wigderson, A.: How to play any mental game or a completeness theorem for protocols with honest majority. In: Proceedings of the 19th Annual ACM Symposium on Theory of Computing, STOC 1987, pp. 218–229. ACM (1987)
 31.Goldwasser, S., Micali, S., Rackoff, C.: The knowledge complexity of interactive proof-systems (extended abstract). In: Proceedings of the 17th Annual ACM Symposium on Theory of Computing, STOC 1985, pp. 291–304. ACM (1985)
 32.Goldwasser, S., Rothblum, G.N.: Securing computation against continuous leakage. In: Rabin, T. (ed.) CRYPTO 2010. LNCS, vol. 6223, pp. 59–79. Springer, Heidelberg (2010)
 33.Goldwasser, S., Rothblum, G.N.: How to compute in the presence of leakage. In: 53rd Annual IEEE Symposium on Foundations of Computer Science, FOCS 2012, pp. 31–40. IEEE Computer Society (2012)
 34.Ishai, Y., Sahai, A., Wagner, D.: Private circuits: securing hardware against probing attacks. In: Boneh, D. (ed.) CRYPTO 2003. LNCS, vol. 2729, pp. 463–481. Springer, Heidelberg (2003)
 35.Micali, S., Reyzin, L.: Physically observable cryptography. In: Naor, M. (ed.) TCC 2004. LNCS, vol. 2951, pp. 278–296. Springer, Heidelberg (2004)
 36.Nielsen, J.B., Venturi, D., Zottarel, A.: On the connection between leakage tolerance and adaptive security. In: Kurosawa, K., Hanaoka, G. (eds.) PKC 2013. LNCS, vol. 7778, pp. 497–515. Springer, Heidelberg (2013)
 37.Ostrovsky, R., Persiano, G., Visconti, I.: Constant-round concurrent non-malleable zero knowledge in the bare public-key model. In: Aceto, L., Damgård, I., Goldberg, L.A., Halldórsson, M.M., Ingólfsdóttir, A., Walukiewicz, I. (eds.) ICALP 2008, Part II. LNCS, vol. 5126, pp. 548–559. Springer, Heidelberg (2008)
 38.Ostrovsky, R., Persiano, G., Visconti, I.: Impossibility of black-box simulation against leakage attacks. IACR Cryptology ePrint Archive 2014 (2014). http://eprint.iacr.org/2014/865
 39.Pandey, O.: Achieving constant round leakage-resilient zero-knowledge. In: Lindell, Y. (ed.) TCC 2014. LNCS, vol. 8349, pp. 146–166. Springer, Heidelberg (2014)
 40.Standaert, F.-X., Malkin, T., Yung, M.: Does physical security of cryptographic devices need a formal study? (Invited Talk). In: Safavi-Naini, R. (ed.) ICITS 2008. LNCS, vol. 5155, p. 70. Springer, Heidelberg (2008)
 41.Standaert, F.-X., Malkin, T.G., Yung, M.: A unified framework for the analysis of side-channel key recovery attacks. In: Joux, A. (ed.) EUROCRYPT 2009. LNCS, vol. 5479, pp. 443–461. Springer, Heidelberg (2009)
 42.Standaert, F.-X., Pereira, O., Yu, Y., Quisquater, J., Yung, M., Oswald, E.: Leakage resilient cryptography in practice. In: Sadeghi, A., Naccache, D. (eds.) Towards Hardware-Intrinsic Security: Foundations and Practice. Information Security and Cryptography, pp. 99–134. Springer, Heidelberg (2010)
 43.Yu, Y., Standaert, F.-X., Pereira, O., Yung, M.: Practical leakage-resilient pseudorandom generators. In: Al-Shaer, E., Keromytis, A.D., Shmatikov, V. (eds.) Proceedings of the 17th ACM Conference on Computer and Communications Security, CCS 2010, pp. 141–151. ACM (2010)