Fuzzy Password-Authenticated Key Exchange
Abstract
Consider key agreement by two parties who start out knowing a common secret (which we refer to as “pass-string”, a generalization of “password”), but face two complications: (1) the pass-string may come from a low-entropy distribution, and (2) the two parties’ copies of the pass-string may have some noise, and thus not match exactly. We provide the first efficient and general solutions to this problem that enable, for example, key agreement based on commonly used biometrics such as iris scans.
The problem of key agreement with each of these complications individually has been well studied in the literature. Key agreement from low-entropy shared pass-strings is achieved by password-authenticated key exchange (PAKE), and key agreement from noisy but high-entropy shared pass-strings is achieved by information-reconciliation protocols as long as the two secrets are “close enough.” However, the problem of key agreement from noisy low-entropy pass-strings has never been studied.
We introduce (universally composable) fuzzy password-authenticated key exchange (fPAKE), which solves exactly this problem. fPAKE does not have any entropy requirements for the pass-strings, and enables secure key agreement as long as the two pass-strings are “close” for some notion of closeness. We also give two constructions. The first construction achieves our fPAKE definition for any (efficiently computable) notion of closeness, including those that could not be handled before even in the high-entropy setting. It uses Yao’s garbled circuits in a way that is only two times more costly than their use against semi-honest adversaries, but that guarantees security against malicious adversaries. The second construction is more efficient, but achieves our fPAKE definition only for pass-strings with low Hamming distance. It builds on very simple primitives: robust secret sharing and PAKE.
Keywords
Authenticated key exchange · PAKE · Hamming distance · Error-correcting codes · Yao’s garbled circuits

1 Introduction
Consider key agreement by two parties who start out knowing a common secret (which we refer to as “pass-string”, a generalization of “password”). These parties may face several complications: (1) the pass-string may come from a non-uniform, low-entropy distribution, and (2) the two parties’ copies of the pass-string may have some noise, and thus not match exactly. The use of such pass-strings for security has been extensively studied; examples include biometrics and other human-generated data [15, 23, 29, 39, 46, 49, 66], physically unclonable functions (PUFs) [30, 52, 57, 58, 64], noisy channels [61], quantum information [9], and sensor readings of a common environment [32, 33].
The Noiseless Case. When the starting secret is not noisy (i.e., the same for both parties), existing approaches work quite well. The case of low-entropy secrets is covered by password-authenticated key exchange (PAKE) (a long line of work, with first formal models introduced in [7, 14]). A PAKE protocol allows two parties to agree on a shared high-entropy key if and only if they hold the same short password. Even though the password may have low entropy, PAKE ensures that offline dictionary attacks are impossible. Roughly speaking, an adversary has to participate in one online interaction for every attempted guess at the password. Because key agreement is not usually the final goal, PAKE protocols need to be composed with whatever protocols (such as authenticated encryption) use the output key. This composability has been achieved by universally composable (UC) PAKE defined by Canetti et al. [20] and implemented in several follow-up works.
In the case of high-entropy secrets, offline dictionary attacks are not a concern, which enables more efficient protocols. If the adversary is passive, randomness extractors [51] do the job. The case of active adversaries is covered by the literature on so-called robust extractors defined by Boyen et al. [13] and, more generally, by many papers on privacy amplification protocols secure against active adversaries, starting with the work of Maurer [45]. Composability for these protocols is less studied; in particular, most protocols leak information about the pass-string itself, in which case reusing the pass-string over multiple protocol executions may present problems [12] (with the exception of [19]).
The Noisy Case. When the pass-string is noisy (i.e., the two parties have slightly different versions of it), this problem has been studied only for the case of high-entropy pass-strings. A long series of works on information-reconciliation protocols started by Bennett et al. [9] and their one-message variants called fuzzy extractors (defined by Dodis et al. [26], further enhanced for active security starting with Renner and Wolf [54]) achieves key agreement when the pass-string has a lot of entropy and not too much noise. Unfortunately, these approaches do not extend to the low-entropy setting and are not designed to prevent offline dictionary attacks.
Constructions for the noisy case depend on the specific noise model. The case of binary Hamming distance—when the \(n\) pass-string characters held by the two parties are the same at all but \(\delta \) locations—is the best studied. Most existing constructions require, at a minimum, that the pass-string should have at least \(\delta \) bits of entropy. This requirement rules out using most kinds of biometric data as the pass-string—for example, estimates of entropy for iris scans (transformed into binary strings via wavelet transforms and projections) are considerably lower than the amount of errors that need to be tolerated [11, Sect. 5]. Even the PAKE-based construction of Boyen et al. [13] suffers from the same problem.
One notable exception is the construction of Canetti et al. [19], which does not have such a requirement, but places other stringent limitations on the probability distribution of pass-strings. In particular, because it is a one-message protocol, it cannot be secure against offline dictionary attacks.
1.1 Our Contributions

We provide protocols for key agreement from noisy pass-strings that:

- Resist offline dictionary attacks and thus can handle low-entropy pass-strings,
- Can handle a variety of noise types and have high error-tolerance, and
- Have well-specified composition properties via the UC framework [17].
Instead of imposing entropy requirements or other requirements on the distribution of pass-strings, our protocols are secure as long as the adversary cannot guess a pass-string value that is sufficiently close. There is no requirement, for example, that the amount of pass-string entropy is greater than the number of errors; in fact, one of our protocols is suitable for iris scans. Moreover, our protocols prevent offline attacks, so each adversarial attempt to get close to the correct pass-string requires an online interaction by the adversary. Thus, for example, our protocols can be meaningfully run with pass-strings whose entropy is only 30 bits—something not possible with any prior protocols for the noisy case.
New Models. Our security model is in the Universal Composability (UC) Framework of Canetti [17]. The advantage of this framework is that it comes with a composition theorem ensuring that the protocol stays secure even when running in arbitrary environments, including arbitrary parallel executions. Composability is particularly important for key agreement protocols, because key agreement is rarely the ultimate goal. The agreed-upon key is typically used for some subsequent protocol—for example, a secure channel. Further, this framework allows us to give a definition that is agnostic to how the initial pass-strings are generated. We have no entropy requirements or constraints on the pass-string distribution; rather, security is guaranteed as long as the adversary’s input to the protocol is not close enough to the correct pass-string.
As a starting point, we use the definition of UC security for PAKE from Canetti et al. [20]. The \(\textsf {PAKE} \) ideal functionality is defined as follows: the secret pass-strings (called “passwords” in PAKE) of the two parties are the inputs to the functionality, and two random keys, which are equal if and only if the two inputs are equal, are the outputs. The main change we make to PAKE is enhancing the functionality to give equal keys even if the two inputs are not equal, as long as they are close enough. We also relax the security requirement to allow one party to find out some information about the other party’s input—perhaps even the entire input—if the two inputs are close. This relaxation makes sense in our application: if the two parties are honest, then the differences between their inputs are a problem rather than a feature, and we would not mind if the inputs were in fact the same. The benefit of this relaxation is that it permits us to construct more efficient protocols. (We also make a few other minor changes which will be described in Sect. 2.) We call our new UC functionality “Fuzzy Password-Authenticated Key Exchange”, or fPAKE.
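To make the key-assignment rule concrete, the following is a minimal sketch (in Python, with function names of our own choosing) of how the ideal functionality hands out session keys to two honest parties. It ignores the adversarial interfaces entirely, and uses Hamming distance as a stand-in for the generic distance \(d\):

```python
import secrets

def fpake_ideal_keys(pw0, pw1, delta, key_len=32):
    """Ideal-world key assignment for fPAKE (honest parties only):
    if the pass-strings are close, both parties receive the same random
    key; otherwise they receive independent random keys."""
    d = sum(a != b for a, b in zip(pw0, pw1))  # Hamming distance
    if len(pw0) == len(pw1) and d <= delta:
        k = secrets.token_bytes(key_len)
        return k, k                            # equal keys
    return secrets.token_bytes(key_len), secrets.token_bytes(key_len)
```

Close inputs (here, within Hamming distance `delta`) yield matching keys; far inputs yield keys that match only with negligible probability.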
New Protocols. The only prior PAKE-based protocol for the noisy setting, by Boyen et al. [13], although more efficient than ours, does not satisfy our goal. In particular, it is not composable, because it reveals information about the secret pass-strings (we demonstrate this formally in the full version of this paper [28]). Because some information about the pass-strings is unconditionally revealed, high-entropy pass-strings are required. Thus, in order to realize our definition for arbitrary low-entropy pass-strings, we need to construct new protocols.
Realizing our fPAKE definition is easy using general two-party computation techniques for protocols with malicious adversaries and without authenticated channels [4]. However, we develop protocols that are considerably more efficient: our definitional relaxation allows us to build protocols that achieve security against malicious adversaries but cost just a little more than the generic two-party computation protocols that achieve security only against honest-but-curious adversaries (i.e., adversaries who do not deviate from the protocol, but merely try to infer information they are not supposed to know).
Our first construction uses Yao’s garbled circuits [6, 63] and oblivious transfer (see [21] and references therein). The use of these techniques is standard in two-party computation. However, by themselves they give protocols secure only against honest-but-curious adversaries. In order to prevent malicious behavior of the players, one usually applies the cut-and-choose technique [42], which is quite costly: to achieve an error probability of \(2^{-\lambda }\), the number of circuits that need to be garbled increases by a factor of \(\lambda \), and the number of oblivious transfers that need to be performed increases by a factor of \(\lambda / 2\). We show that for our special case, to achieve malicious security, it suffices to repeat the honest-but-curious protocol twice (once in each direction), incurring only a factor of 2 overhead over the semi-honest case.^{1} Mohassel and Franklin [48] and Huang et al. [34] suggest a similar technique (known as “dual execution”), but at the cost of leaking a bit of the adversary’s choice to the adversary. In contrast, our construction leaks nothing to the adversary at all (as long as the pass-strings are not close). This construction works regardless of what it means for the two inputs to be “close,” as long as the question of closeness can be evaluated by an efficient circuit.
Our second construction is for the Hamming case: the two \(n\)-character pass-strings have low Hamming distance if not too many characters of one party’s pass-string are different from the corresponding characters of the other’s. The two parties execute a PAKE protocol for each position in the string, obtaining \(n\) values each that agree or disagree depending on whether the characters of the pass-string agree or disagree in the corresponding positions. It is important that at this stage, agreement or disagreement at individual positions remains unknown to everyone; we therefore make use of a special variant of PAKE which we call implicit-only PAKE (we give a formal UC security definition of implicit-only PAKE and show that it is realized by the PAKE protocol from [1, 8]). This first step upgrades Hamming distance over a potentially small alphabet to Hamming distance over an exponentially large alphabet. We then secret-share the ultimate output key into \(n\) shares using a robust secret sharing scheme, and encrypt each share using the output of the corresponding \(\textsf {PAKE} \) protocol.
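The first step of this construction can be sketched as follows. Here `ideal_ipake` models the ideal-world behavior of one implicit-only PAKE instance (equal inputs yield a shared random pad; unequal inputs yield independent pads), and the subsequent robust-secret-sharing step is omitted; all names are ours:

```python
import secrets

def ideal_ipake(char_a, char_b):
    """Ideal-world view of a single implicit-only PAKE instance:
    matching characters produce a shared random 16-byte pad,
    mismatched characters produce independent pads."""
    if char_a == char_b:
        pad = secrets.token_bytes(16)
        return pad, pad
    return secrets.token_bytes(16), secrets.token_bytes(16)

def upgrade_alphabet(pw0, pw1):
    """Step 1 of the construction: run one iPAKE per position, turning
    Hamming distance over a small alphabet into Hamming distance over
    16-byte strings (an exponentially large alphabet)."""
    pads0, pads1 = zip(*(ideal_ipake(a, b) for a, b in zip(pw0, pw1)))
    return list(pads0), list(pads1)
```

In the full protocol, one party would then robustly secret-share the output key into \(n\) shares and one-time-pad-encrypt share \(j\) under its \(j\)-th pad, so the other party recovers exactly the shares at positions where the characters matched.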
The second construction is more efficient than the first in the number of rounds, communication, and computation. However, it works only for Hamming distance. Moreover, it has an intrinsic gap between functionality and security: if the honest parties need to be within distance \(\delta \) to agree, then the adversary may break security by guessing a secret within distance \(2\delta \). See Fig. 10 for a comparison between the two constructions.
The advantages of our protocols are similar to the advantages of UC PAKE: they provide composability, protection against offline attacks, the ability to use low-entropy inputs, and the ability to handle any distribution of secrets. And, of course, because we construct fuzzy PAKE, our protocols can handle noisy inputs—including many types of noisy inputs that could not be handled before. Our first protocol can handle any type of noise as long as the notion of “closeness” can be efficiently computed, whereas most prior work was for Hamming distance only. However, these advantages come at the price of efficiency. Our protocols require 2–5 rounds of interaction, as opposed to many single-message protocols in the literature [19, 25, 60]. They are also more computationally demanding than most existing protocols for the noisy case, requiring one public-key operation per input character. We emphasize, however, that our protocols are much less computationally demanding than protocols based on general two-party computation, as already discussed above, or general-purpose obfuscation, as discussed in [10, Sect. 4.3.4].
2 Security Model
We now present a security definition for fuzzy password-authenticated key exchange (fPAKE). We adapt the definition of PAKE from Canetti et al. [20] to work for pass-strings (a generalization of “passwords”) that are similar, but not necessarily equal. Our definition uses a measure of the distance \(d(\mathsf {pw}, \mathsf {pw}')\) between pass-strings \(\mathsf {pw}, \mathsf {pw}' \in \mathbb {F}_{p}^{n}\). In Sects. 3.3 and 4, Hamming distance is used, but in the generic construction of Sect. 3, any other notion of distance can be used instead. We say that \(\mathsf {pw}\) and \(\mathsf {pw}'\) are “similar enough” if \(d(\mathsf {pw}, \mathsf {pw}') \le \delta \) for a distance notion \(d\) and a threshold \(\delta \) that is hardcoded into the functionality.
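For the Hamming case, the distance measure and the closeness predicate are simply the following (a Python sketch):

```python
def hamming_distance(pw, pw_prime):
    """Number of positions at which two equal-length pass-strings differ."""
    assert len(pw) == len(pw_prime)
    return sum(a != b for a, b in zip(pw, pw_prime))

def similar_enough(pw, pw_prime, delta):
    """The closeness predicate d(pw, pw') <= delta used by the functionality."""
    return hamming_distance(pw, pw_prime) <= delta
```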
To model the possibility of dictionary attacks, the functionality allows the adversary to make one pass-string guess against each player (\(\mathcal {P}_{0}\) and \(\mathcal {P}_{1}\)). In the real world, if the adversary succeeds in guessing (a pass-string similar enough to) party \(\mathcal {P}_{i}\)’s pass-string, it can often choose (or at least bias) the session key computed by \(\mathcal {P}_{i}\). To model this, the functionality then allows the adversary to set the session key for \(\mathcal {P}_{i}\).
As usual in security notions for key exchange, the adversary also sets the session keys for corrupted players. In the definition of Canetti et al. [20], the adversary additionally sets \(\mathcal {P}_{i}\)’s key if \(\mathcal {P}_{1-i}\) is corrupted. However, contrary to the original definition, we do not allow the adversary to set \(\mathcal {P}_{i}\)’s key if \(\mathcal {P}_{1-i}\) is corrupted but did not guess \(\mathcal {P}_{i}\)’s pass-string. We make this change in order to protect an honest \(\mathcal {P}_{i}\) from, for instance, revealing sensitive information to an adversary who did not successfully guess her pass-string, but did corrupt her partner.
Another minor change we make is considering only two parties—\(\mathcal {P}_{0}\) and \(\mathcal {P}_{1}\)—in the functionality, instead of considering arbitrarily many parties and enforcing that only two of them engage the functionality. This is because universal composability takes care of ensuring that a twoparty functionality remains secure in a multiparty world.
By default, in the \(\textsf {fPAKE} \) functionality the \(\texttt {TestPwd} \) interface provides the adversary with one bit of information—whether the pass-string guess was correct or not. This definition can be strengthened by providing the adversary with no information at all, as in implicit-only PAKE (\(\mathcal {F}_\textsf {iPAKE} \), Fig. 7), or weakened by providing the adversary with extra information when the adversary’s guess is close enough.
 1. The strongest option is to provide no feedback at all to the adversary. We define \(\textsf {fPAKE} ^N\) to be the functionality described in Fig. 1, except that \(\texttt {TestPwd} \) is from Fig. 2 with
$$ L_{c}^N(\mathsf {pw}_{i}, \mathsf {pw}_{i}') = L_{m}^N(\mathsf {pw}_{i}, \mathsf {pw}_{i}') = L_{f}^N(\mathsf {pw}_{i}, \mathsf {pw}_{i}') = \bot \,.$$
 2. The default option, as described above, provides one bit of feedback: we let \(\textsf {fPAKE} \) denote the functionality described in Fig. 1, whose \(\texttt {TestPwd} \) interface tells the adversary only whether the pass-string guess was close enough or not.
 3. Assume the two pass-strings are strings of length \(n\) over some finite alphabet, with the jth character of the string \(\mathsf {pw}\) denoted by \(\mathsf {pw}[j]\). We define \(\textsf {fPAKE} ^M\) to be the functionality described in Fig. 1, except that \(\texttt {TestPwd} \) is from Fig. 2, with \(L_{c}\) and \(L_{m}\) that leak the indices at which the guessed pass-string differs from the actual one when the guess is close enough (we will call this leakage the mask of the pass-strings). That is,
$$ L_{c}^M(\mathsf {pw}_{i}, \mathsf {pw}_{i}') = (\text {“correct guess”}, \{j : \mathsf {pw}_{i}[j] \ne \mathsf {pw}_{i}'[j]\})\,,$$
$$ L_{m}^M(\mathsf {pw}_{i}, \mathsf {pw}_{i}') = (\text {“wrong guess”}, \{j : \mathsf {pw}_{i}[j] \ne \mathsf {pw}_{i}'[j]\})\,,$$
and \(L_{f}^M(\mathsf {pw}_{i}, \mathsf {pw}_{i}') = \text {“wrong guess”}\).
 4. The weakest definition—or the strongest leakage—reveals the entire actual pass-string to the adversary if the pass-string guess is close enough. We define \(\textsf {fPAKE} ^P\) to be the functionality described in Fig. 1, except that \(\texttt {TestPwd} \) is from Fig. 2, with
$$ L_{c}^P(\mathsf {pw}_{i}, \mathsf {pw}_{i}') = L_{m}^P(\mathsf {pw}_{i}, \mathsf {pw}_{i}') = \mathsf {pw}_{i} \quad \text {and} \quad L_{f}^P(\mathsf {pw}_{i}, \mathsf {pw}_{i}') = \text {“wrong guess”}\,.$$
Here, \(L_{c}^P\) and \(L_{m}^P\) do not need to include “correct guess” and “wrong guess”, respectively, because this is information that can be easily derived from \(\mathsf {pw}_{i}\) itself.
The first two functionalities are the strongest, but there are no known constructions that realize them, other than through generic two-party computation secure against malicious adversaries, which is an inefficient solution. The last two functionalities, though weaker, still provide meaningful security, especially when \(\gamma = \delta \). Intuitively, this is because strong leakage only occurs when an adversary guesses a “close” pass-string, which enables him to authenticate as though he knows the real pass-string anyway.
In Sect. 3, we present a construction satisfying \(\textsf {fPAKE} ^P\) for any efficiently computable notion of distance, with \(\gamma = \delta \) (which is the best possible). We present a construction for Hamming distance satisfying \(\textsf {fPAKE} ^M\) in Sect. 4, with \(\gamma = 2\delta \).
3 General Construction Using Garbled Circuits
In this section, we describe a protocol realizing \(\textsf {fPAKE} ^{P}\) that uses Yao’s garbled circuits [63]. We briefly introduce this primitive in Sect. 3.1 and refer to Yakoubov [62] for a more thorough introduction.
This construction has two advantages:
 1.
It is more flexible than other approaches; any notion of distance that can be efficiently computed by a circuit can be used. In Sect. 3.3, we describe a suitable circuit for Hamming distance. The total size of this circuit is \(O(n)\), where \(n\) is the length of the pass-strings used. Edit distance is slightly less efficient, and uses a circuit whose total size is \(O(n^2)\).
 2.
There is no gap between the distances required for functionality and security—that is, there is no leakage about the pass-strings used unless they are similar enough to agree on a key. In other words, \(\delta = \gamma \).
Informally, the construction involves the garbled evaluation of a circuit that takes in two pass-strings as input, and computes whether their distance is less than \(\delta \). Because Yao’s garbled circuits are only secure against semi-honest garblers, we cannot simply have one party do the garbling and the other party do the evaluation. A malicious garbler could provide a garbling of the wrong function—maybe even a constant function—which would result in successful key agreement even if the two pass-strings are very different. However, as suggested by Mohassel and Franklin [48] and Huang et al. [34], since a malicious evaluator (unlike a malicious garbler) cannot compromise the computation, by performing the protocol twice with each party playing each role once, we can protect against malicious behavior. They call this the dual execution protocol.
The dual execution protocol has the downside of allowing the adversary to specify and receive a single additional bit of leakage. It is important to note that because of this, dual execution cannot directly be used to instantiate fPAKE: a single bit of leakage can be too much when the entropy of the pass-strings is low to begin with—a few adversarial attempts will uncover the entire pass-string. Our construction is as efficient as those of Mohassel and Franklin and of Huang et al., while guaranteeing no leakage to a malicious adversary in the case that the pass-strings used are not close. We describe how we achieve this in Sect. 3.1.3.
3.1 Building Blocks
In Sect. 3.1.1, we briefly review oblivious transfer. In Sect. 3.1.2, we review Yao’s Garbled Circuits. In Sect. 3.1.3, we describe in more detail our take on the dual execution protocol, and how we avoid leakage to the adversary when the pass-strings used are dissimilar.
3.1.1 Oblivious Transfer (OT)
Informally, 1-out-of-2 Oblivious Transfer (see [21] and citations therein) enables one party (the sender) to transfer exactly one of two secrets to another party (the receiver). The receiver chooses (by index 0 or 1) which secret she wants. The security of the OT protocol guarantees that the sender does not learn this choice bit, and the receiver does not learn anything about the other secret.
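In ideal-functionality terms, the interface is simply the following (a trivial Python sketch; real OT protocols achieve this behavior cryptographically, without any trusted party):

```python
def ot_ideal(s0, s1, choice_bit):
    """Ideal 1-out-of-2 OT: the receiver obtains exactly the chosen
    secret. The sender learns nothing about choice_bit, and the receiver
    learns nothing about the other secret (modeled here by simply not
    returning it)."""
    return s1 if choice_bit else s0
```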
3.1.2 Yao’s Garbled Circuits (YGC)
Next, we give a brief introduction to Yao’s garbled circuits [63]. We refer to Yakoubov [62] for a more detailed description, as well as a summary of several optimizations of Yao’s garbled circuits [3, 5, 38, 40, 53, 65]. Informally, Yao’s garbled circuits are an asymmetric secure two-party computation scheme. They enable two parties with sensitive inputs (in our case, pass-strings) to compute a joint function of their inputs (in our case, an augmented version of similarity) without revealing any additional information about their inputs. One party “garbles” the function they wish to evaluate, and the other evaluates it in its garbled form.
Below, we summarize the garbling scheme formalization of Bellare et al. [6], which is a generalization of YGC.
 1.
\(\mathsf {Gb}(1^{\lambda }, f) \rightarrow (F, e, d)\). The garbling algorithm \(\mathsf {Gb}\) takes in the security parameter \(\lambda \) and a circuit \(f\), and returns a garbled circuit \(F\), encoding information \(e\), and decoding information \(d\).
 2.
\(\mathsf {En}(e, x) \rightarrow X\). The encoding algorithm \(\mathsf {En}\) takes in the encoding information \(e\) and an input \(x\), and returns a garbled input \(X\).
 3.
\(\mathsf {Ev}(F, X) \rightarrow Y\). The evaluation algorithm \(\mathsf {Ev}\) takes in the garbled circuit \(F\) and the garbled input \(X\), and returns a garbled output \(Y\).
 4.
\(\mathsf {De}(d, Y) \rightarrow y\). The decoding algorithm \(\mathsf {De}\) takes in the decoding information \(d\) and the garbled output \(Y\), and returns the plaintext output \(y\).
A garbling scheme \(\mathcal {G}= (\mathsf {Gb}, \mathsf {En}, \mathsf {Ev}, \mathsf {De})\) is projective if encoding information \(e\) consists of \(2 n\) wire labels (each of which is essentially a random string), where \(n\) is the number of input bits. Two wire labels are associated with each bit of the input; one wire label corresponds to the event of that bit being 0, and the other corresponds to the event of that bit being 1. The garbled input includes only the wire labels corresponding to the actual values of the input bits. In projective schemes, in order to give the evaluator the garbled input she needs for evaluation, the garbler can send her all of the wire labels corresponding to the garbler’s input. The evaluator can then use OT to retrieve the wire labels corresponding to her own input.
Similarly, we call a garbling scheme output-projective if decoding information \(d\) consists of two labels for each output bit, one corresponding to each possible value of that bit. The garbling schemes used in this paper are both projective and output-projective.
Correctness. Informally, a garbling scheme \((\mathsf {Gb}, \mathsf {En}, \mathsf {Ev}, \mathsf {De})\) is correct if it always holds that \(\mathsf {De}(d, \mathsf {Ev}(F, \mathsf {En}(e, x))) = f(x)\).
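To make the four algorithms and the correctness condition concrete, here is a toy projective and output-projective garbling of a single two-input gate, using SHA-256 as a stand-in for the row encryption. It is a pedagogical sketch only: no point-and-permute or other optimizations, and the zero-padding trick for recognizing the correct row is a simplification.

```python
import hashlib, secrets

def H(a, b):
    return hashlib.sha256(a + b).digest()          # 32-byte pad

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def Gb(f):
    """Garble a 2-input boolean gate f: returns the garbled table F,
    encoding info e (two 16-byte labels per input wire -- projective),
    and decoding info d (two labels for the output -- output-projective)."""
    e = [[secrets.token_bytes(16) for _ in (0, 1)] for _ in (0, 1)]
    d = [secrets.token_bytes(16), secrets.token_bytes(16)]
    F = [xor(H(e[0][a], e[1][b]), d[f(a, b)] + bytes(16))
         for a in (0, 1) for b in (0, 1)]
    secrets.SystemRandom().shuffle(F)              # hide the row order
    return F, e, d

def En(e, x):
    """Projective encoding: select the label matching each input bit."""
    return (e[0][x[0]], e[1][x[1]])

def Ev(F, X):
    """Evaluate: exactly one row decrypts to a label followed by zeros."""
    for row in F:
        pt = xor(H(X[0], X[1]), row)
        if pt[16:] == bytes(16):
            return pt[:16]                         # the garbled output Y
    raise ValueError("no valid row")

def De(d, Y):
    """Decode the garbled output label back to the plaintext bit."""
    return d.index(Y)
```

Correctness here is exactly \(\mathsf {De}(d, \mathsf {Ev}(F, \mathsf {En}(e, x))) = f(x)\); a wrong row passes the zero-padding check only with probability about \(2^{-128}\).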
Security. Bellare et al. [6] describe three security notions for garbling schemes: obliviousness, privacy and authenticity. Informally, a garbling scheme \(\mathcal {G}= (\mathsf {Gb}, \mathsf {En}, \mathsf {Ev}, \mathsf {De})\) is oblivious if a garbled function \(F\) and a garbled input \(X\) do not reveal anything about the input \(x\). It is private if additionally knowing the decoding information \(d\) reveals the output \(y\), but does not reveal anything more about the input \(x\). It is authentic if an adversary, given \(F\) and \(X\), cannot find a garbled output \(Y' \ne \mathsf {Ev}(F, X)\) which decodes without error.
In the full version of this paper [28], we define a new property of output-projective garbling schemes called garbled output randomness. Informally, it states that even given one of the output labels, the other should be indistinguishable from random.
3.1.3 Malicious Security: A New Take on Dual Execution with Privacy-Correctness Tradeoffs
While Yao’s garbled circuits are naturally secure against a malicious evaluator, they have the drawback of being insecure against a malicious garbler. A garbler can “misgarble” the function, either replacing it with a different function entirely or causing an error to occur in an informative way (this is known as “selective failure”).
Typically, malicious security is introduced to Yao’s garbled circuits by using the cut-and-choose transformation [35, 41, 43]. To achieve a \(2^{-\lambda }\) probability of cheating without detection, the parties need to exchange \(\lambda \) garbled circuits [41].^{2} Some of the garbled circuits are “checked”, and the rest of them are evaluated, their outputs checked against one another for consistency. Because of the factor of \(\lambda \) computational overhead, though, cut-and-choose is expensive, and too heavy a tool for \(\textsf {fPAKE} \). Other, more efficient transformations such as LEGO [50] and authenticated garbling [59] exist as well, but those rely heavily on preprocessing, which cannot be used in \(\textsf {fPAKE} \), since preprocessing requires interaction between the parties in advance.
Mohassel and Franklin [48] and Huang et al. [34] suggest an efficient transformation known as “dual execution”: each party plays each role (garbler and evaluator) once, and then the two perform a comparison step on their outputs in a secure fashion. Dual execution incurs only a factor of 2 overhead over semi-honest garbled circuits. However, it does not achieve fully malicious security. It guarantees correctness, but reduces the privacy guarantee by allowing a malicious garbler to learn one bit of information of her choice. Specifically, if a malicious garbler garbles a wrong circuit, she can use the comparison step to learn one bit about the output of this wrong circuit on the other party’s input. This one extra bit of information could be crucially important, violating the privacy of the evaluator’s input in a significant way.
We introduce a tradeoff between correctness and privacy for boolean functions. For one of the two possible outputs (without loss of generality, ‘0’), we restore full privacy at the cost of correctness. The new privacy guarantee is that if the correct output is ‘0’, then a malicious adversary cannot learn anything beyond this output, but if the correct output is ‘1’, then she can learn a single bit of her choice. The new correctness guarantee is that a malicious adversary can cause the computation that should output ‘1’ to output ‘0’ instead, but not the other way around.
The main idea of dual execution is to have the two parties independently evaluate one another’s circuits, learn the output values, and compare the output labels using a secure comparison protocol. In our construction, however, the parties need not learn the output values before the comparison. Instead, the parties can compare output labels assuming an output of ‘1’, and if the comparison fails, the output is determined to be ‘0’.
More formally, let \(d_{0}[0]\), \(d_{0}[1]\) be the two output labels corresponding to \(\mathcal {P}_{0}\)’s garbled circuit, and \(d_{1}[0]\), \(d_{1}[1]\) be the two output labels corresponding to \(\mathcal {P}_{1}\)’s circuit. Let \(Y_{0}\) be the output label learned by \(\mathcal {P}_{1}\) as a result of evaluation, and \(Y_{1}\) be the label learned by \(\mathcal {P}_{0}\). The two parties securely compare \((d_{0}[1], Y_{1})\) to \((Y_{0}, d_{1}[1])\); if the comparison succeeds, the output is “1”.
Our privacy–correctness tradeoff is perfect for \(\textsf {fPAKE} \). If the parties’ inputs are similar, learning a bit of information about each other’s inputs is not problematic, since arguably the small amount of noise in the inputs is a bug, not a feature. If the parties’ inputs are not similar, however, we are guaranteed to have no leakage at all. We pay for the lack of leakage by allowing a malicious party to force an authentication failure even when authentication should succeed. However, either party can do so anyway by providing an incorrect input.
In Sect. 3.2.2, we describe our Yao’s garbled circuit-based \(\textsf {fPAKE} \) protocol. Note that in this protocol, we omit the final comparison step; instead, we use the output labels (\((d_{0}[1], Y_{1})\) and \((Y_{0}, d_{1}[1])\)) to compute the agreed-upon key directly.
3.2 Construction
Our construction proceeds in three steps:
 1.
First, in Sect. 3.2.1, we define a randomized fuzzy equalitytesting functionality \(\mathcal {F}_\textsf {RFE} \), which is analogous to the randomized equalitytesting functionality of Canetti et al.
 2.
In Sect. 3.2.2, we build a protocol that securely realizes \(\mathcal {F}_\textsf {RFE} \) in the OThybrid model, assuming authenticated channels.
 3.
In Sect. 3.2.3, we apply the transformation of Barak et al. to our protocol. This results in a protocol that realizes the “split” version of functionality \(\mathcal {F}_\textsf {RFE} ^P\), which we show to be sufficient to implement \(\textsf {fPAKE} ^{P}\). Split functionalities, which were introduced by Barak et al., adapt functionalities which assume authenticated channels to an unauthenticated-channels setting. The only additional ability an adversary has in a split functionality is the ability to execute the protocol separately with the participating parties.
3.2.1 The Randomized Fuzzy Equality Functionality
Figure 3 shows the randomized fuzzy equality functionality \(\mathcal {F}_\textsf {RFE} ^{P}\), which is essentially what \(\mathcal {F}_\textsf {fPAKE} ^{P}\) would look like assuming authenticated channels. The primary difference between \(\mathcal {F}_\textsf {RFE} ^{P}\) and \(\mathcal {F}_\textsf {fPAKE} ^{P}\) is that the only pass-string guesses allowed by \(\mathcal {F}_\textsf {RFE} ^{P}\) are the ones actually used as protocol inputs; this limits the adversary to guessing by corrupting one of the participating parties, not through man-in-the-middle attacks. Like \(\mathcal {F}_\textsf {fPAKE} ^{P}\), if a pass-string guess is “similar enough”, the entire pass-string is leaked. This leakage could be replaced with any other leakage from Sect. 2; \(\mathcal {F}_\textsf {RFE} \) would leak the correctness of the guess, \(\mathcal {F}_\textsf {RFE} ^{M}\) would leak which characters are the same between the two pass-strings, etc.
3.2.2 A Randomized Fuzzy Equality Protocol
 1.
\(\mathcal {P}_{i}\) garbles the circuit \(f\) that takes in two passstrings \(\mathsf {pw}_{0}\) and \(\mathsf {pw}_{1}\), and returns '1' if \(d(\mathsf {pw}_{0}, \mathsf {pw}_{1}) \le \delta \) and '0' otherwise. Section 3.3 describes how \(f\) can be designed efficiently for Hamming distance. Instead of using the output of \(f\) ('0' or '1'), we will use the garbled output, also referred to as an output label in an output-projective garbling scheme. The possible output labels are two random strings: one corresponding to a '1' output (we call this label \(\mathsf {k}_{i, correct}\)), and one corresponding to a '0' output (we call this label \(\mathsf {k}_{i, wrong}\)).
 2.
\(\mathcal {P}_{i}\) uses OT to retrieve the input labels from \(\mathcal {P}_{1-i}\)'s garbling that correspond to \(\mathcal {P}_{i}\)'s passstring.
 3.
\(\mathcal {P}_{i}\) sends \(\mathcal {P}_{1-i}\) her garbled circuit, together with the input labels from her garbling that correspond to her own passstring. After this step, \(\mathcal {P}_{i}\) should have \(\mathcal {P}_{1-i}\)'s garbled circuit and a garbled input consisting of input labels corresponding to the bits of the two passstrings.
 4.
\(\mathcal {P}_{i}\) evaluates \(\mathcal {P}_{1-i}\)'s garbled circuit, and obtains an output label \(Y_{1-i}\).
 5.
\(\mathcal {P}_{i}\) outputs \(\mathsf {k}_{i} = \mathsf {k}_{i, correct} \oplus Y_{1-i}\).
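The final key-derivation step can be illustrated with a toy sketch (in Python, with hypothetical names of our choosing; the garbled evaluation is modeled as a black box that yields the other party's "correct" output label exactly when the passstrings are close):

```python
import secrets

def hamming(a: bytes, b: bytes) -> int:
    # Bitwise Hamming distance between two equal-length strings.
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def run_rfe(pw0: bytes, pw1: bytes, delta: int):
    # Output labels of each party's output-projective garbling (hypothetical).
    k0_correct, k0_wrong = secrets.token_bytes(16), secrets.token_bytes(16)
    k1_correct, k1_wrong = secrets.token_bytes(16), secrets.token_bytes(16)
    close = hamming(pw0, pw1) <= delta
    # Honest evaluation of the other party's circuit yields its
    # 'correct' label exactly when the passstrings are close.
    Y0 = k0_correct if close else k0_wrong   # learned by P1
    Y1 = k1_correct if close else k1_wrong   # learned by P0
    key0 = xor(k0_correct, Y1)               # P0's final output
    key1 = xor(k1_correct, Y0)               # P1's final output
    return key0, key1

k0, k1 = run_rfe(b"iris-scan-a", b"iris-scan-b", delta=2)  # 2 bits of noise
assert k0 == k1                       # close passstrings: shared key
k0, k1 = run_rfe(b"iris-scan-a", b"XXXXXXXXXXX", delta=2)
assert k0 != k1                       # distant passstrings: independent keys
```

When both evaluations are honest and the passstrings are close, both parties compute \(\mathsf {k}_{0, correct} \oplus \mathsf {k}_{1, correct}\); otherwise each key contains the other party's "wrong" label and the keys are independent.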
The natural question to ask is why \(\varPi _\textsf {RFE} \) only realizes \(\mathcal {F}_\textsf {RFE} ^{P}\), and not a stronger functionality with less leakage. We argue this assuming (without loss of generality) that \(\mathcal {P}_{1}\) is corrupted. \(\varPi _\textsf {RFE} \) cannot realize a functionality that leaks less than the full passstring \(\mathsf {pw}_{0}\) to \(\mathcal {P}_{1}\) if \(d(\mathsf {pw}_{0}, \mathsf {pw}_{1}) \le \delta \); intuitively, this is because if \(\mathcal {P}_{1}\) knows a passstring \(\mathsf {pw}_{1}\) such that \(d(\mathsf {pw}_{0}, \mathsf {pw}_{1}) \le \delta \), \(\mathcal {P}_{1}\) can extract the actual passstring \(\mathsf {pw}_{0}\), as follows. If \(\mathcal {P}_{1}\) plays the role of OT receiver and garbled circuit evaluator honestly, \(\mathcal {P}_{0}\) and \(\mathcal {P}_{1}\) will agree on \(\mathsf {k}_{0, correct}\). \(\mathcal {P}_{1}\) can then misgarble a circuit that returns \(\mathsf {k}_{1, correct}\) if the first bit of \(\mathsf {pw}_{0}\) is 0, and \(\mathsf {k}_{1, wrong}\) if the first bit of \(\mathsf {pw}_{0}\) is 1. By testing whether the resulting keys \(\mathsf {k}_{0}\) and \(\mathsf {k}_{1}\) match (which \(\mathcal {P}_{1}\) can do in subsequent protocols where the key is used), \(\mathcal {P}_{1}\) will be able to determine the actual first bit of \(\mathsf {pw}_{0}\). \(\mathcal {P}_{1}\) can then repeat this for the second bit, and so on, extracting the entire passstring \(\mathsf {pw}_{0}\). Of course, if \(\mathcal {P}_{1}\) does not know a sufficiently close \(\mathsf {pw}_{1}\), \(\mathcal {P}_{1}\) will not be able to perform these tests, because the keys will not match no matter what circuit \(\mathcal {P}_{1}\) garbles.
More formally, if \(\mathcal {P}_{1}\) knows a passstring \(\mathsf {pw}_{1}\) such that \(d(\mathsf {pw}_{0}, \mathsf {pw}_{1}) \le \delta \) and carries out the misgarbling attack described above, then in the real world, the keys produced by \(\mathcal {P}_{0}\) and \(\mathcal {P}_{1}\) either will or will not match based on some predicate p of \(\mathcal {P}_{1}\)’s choosing on the two passstrings \(\mathsf {pw}_{0}\) and \(\mathsf {pw}_{1}\). Therefore, in the ideal world, the keys should also match or not match based on \(p(\mathsf {pw}_{0}, \mathsf {pw}_{1})\); otherwise, the environment will be able to distinguish between the two worlds. In order to make that happen, since the simulator does not know the predicate p in question, the simulator must be able to recover the entire passstring \(\mathsf {pw}_{0}\) (given a sufficiently close \(\mathsf {pw}_{1}\)) through the \(\texttt {TestPwd} \) interface.
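The bit-by-bit extraction argument above can be sketched as a toy attack; the key-match test available to \(\mathcal {P}_{1}\) in subsequent protocols is modeled as an oracle, and all names are hypothetical:

```python
# key_match_oracle(pred) models whether the two parties' final keys match
# when the corrupted P1 garbles a circuit computing pred instead of f
# (which is the case whenever P1 holds a sufficiently close pw1).
def extract(n_bits, key_match_oracle):
    recovered = []
    for j in range(n_bits):
        # Misgarbled circuit: return the 'correct' label iff bit j of pw0 is 0.
        pred = lambda pw, j=j: pw[j] == 0
        recovered.append(0 if key_match_oracle(pred) else 1)
    return recovered

pw0 = [1, 0, 1, 1, 0, 0, 1, 0]            # P0's secret passstring
oracle = lambda pred: pred(pw0)           # keys match iff the predicate holds
assert extract(len(pw0), oracle) == pw0   # the full passstring is recovered
```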
Theorem 1
If \((\mathsf {Gb}, \mathsf {En}, \mathsf {Ev}, \mathsf {De})\) is a projective, outputprojective and garbledoutput random secure garbling scheme, then protocol \(\varPi _\textsf {RFE} \) with authenticated channels in the \(\mathcal {F}_\textsf {OT} \)hybrid model securely realizes \(\mathcal {F}_\textsf {RFE} ^{P}\) with respect to static corruptions for any threshold \(\delta \), as long as the passstring space and notion of distance are such that for any passstring \(\mathsf {pw}\), it is easy to compute another passstring \(\mathsf {pw}'\) such that \(d(\mathsf {pw}, \mathsf {pw}') > \delta \).
Proof
(Sketch). For every efficient adversary \(\mathcal {A} \), we describe a simulator \(\mathcal {S} _{\textsf {RFE}}\) such that no efficient environment can distinguish an execution with the real protocol \(\varPi _\textsf {RFE} \) and \(\mathcal {A} \) from an execution with the ideal functionality \(\mathcal {F}_\textsf {RFE} ^{P}\) and \(\mathcal {S} _{\textsf {RFE}}\). \(\mathcal {S} _{\textsf {RFE}}\) is described in the full version of this paper. We prove indistinguishability in a series of hybrid steps. First, we introduce the ideal functionality as a dummy node. Next, we allow the functionality to choose the parties' keys, and we prove the indistinguishability of this step from the previous one using the garbled-output randomness property of our garbling scheme. Next, we simulate an honest party's interaction with another honest party without using their passstring, and prove the indistinguishability of this step from the previous one using the obliviousness property of our garbling scheme. Finally, we simulate an honest party's interaction with a corrupted party without using the honest party's passstring, and prove the indistinguishability of this step from the previous one using the privacy property of our garbling scheme.
We give a more formal proof of Theorem 1 in the full version of this paper [28].
3.2.3 From Split Randomized Fuzzy Equality to fPAKE
The Randomized Fuzzy Equality (RFE) functionality \(\mathcal {F}_\textsf {RFE} ^{P}\) assumes authenticated channels, which an \(\textsf {fPAKE} \) protocol cannot assume. In order to adapt RFE to our setting, we use the split functionality transformation defined by Barak et al. [4]. Barak et al. provide a generic transformation from protocols that require authenticated channels to protocols that do not. In the "transformed" protocol, an adversary can engage in two separate instances of the protocol with the sender and receiver, and they will not realize that they are not talking to one another. However, the transformation guarantees that the adversary cannot do anything beyond this attack. In other words, it provides "session authentication", meaning that each party is guaranteed to carry out the entire protocol with the same partner, but not "entity authentication", meaning that the identity of the partner is not guaranteed.
Barak et al. achieve this transformation in three steps. First, the parties generate signing and verification keys, and send one another their verification keys. Next, each party signs the list of all verification keys it has received (which, in a two-party protocol, consists of only one key) and sends both the list and the signature to all other parties. Finally, the parties verify all of the signatures they have received. After this process—called "link initialization"—has been completed, the parties use the public keys they have exchanged to authenticate subsequent communication.
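The sign-the-key-list step only needs some unforgeable signature scheme. Purely as a stdlib-only illustration (not the scheme prescribed by Barak et al.), a hash-based Lamport one-time signature suffices for signing a single key list:

```python
import hashlib, secrets

H = lambda m: hashlib.sha256(m).digest()

def keygen():
    # 2 x 256 random preimages: sk[b][i] signs bit value b at bit position i.
    sk = [[secrets.token_bytes(32) for _ in range(256)] for _ in range(2)]
    vk = [[H(x) for x in row] for row in sk]        # verification key: hashes
    return sk, vk

def _bits(msg):
    d = H(msg)
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg):
    # Reveal one preimage per bit of the message digest.
    return [sk[b][i] for i, b in enumerate(_bits(msg))]

def verify(vk, msg, sig):
    return all(H(s) == vk[b][i]
               for i, (b, s) in enumerate(zip(_bits(msg), sig)))

# Link initialization, schematically: a party signs the list of verification
# keys it received (here a placeholder byte string), and the peer checks it.
sk_a, vk_a = keygen()
key_list = b"received-key-list"
sig = sign(sk_a, key_list)
assert verify(vk_a, key_list, sig)
assert not verify(vk_a, b"tampered-key-list", sig)
```

A one-time scheme only covers a single message per key pair; authenticating all subsequent communication, as the transformation requires, would need a many-time scheme or fresh keys per message.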
It turns out that \(s\mathcal {F}_\textsf {RFE} ^{P}\) is enough to realize \(\mathcal {F}_\textsf {fPAKE} ^{P}\). In fact, the protocol \(\varPi _\textsf {RFE} \) with the split functionality transformation directly realizes \(\mathcal {F}_\textsf {fPAKE} ^{P}\). In the full version of this paper [28], we prove that this is the case.
3.3 An Efficient Circuit \(f\) for Hamming Distance
 1.
First, \(f\) XORs corresponding (binary) passstring characters, resulting in a list of bits indicating the (in)equality of those characters.
 2.
Then, \(f\) feeds those bits into a threshold gate, which returns 1 if at least \(n-\delta \) of its inputs are 0, and returns 0 otherwise. \(f\) returns the output of that threshold gate, which is 1 if and only if at least \(n-\delta \) passstring characters match.
 1.
The input wire labels encode 0 or 1 modulo \(n+ 1\). However, instead of having those input wire labels encode the characters of the two passstrings directly, they encode the outputs of the comparisons of corresponding characters. If the jth character of \(\mathcal {P}_{i}\)'s passstring is 0, then \(\mathcal {P}_{i}\) puts the 0 label first; however, if the jth character of \(\mathcal {P}_{i}\)'s passstring is 1, then \(\mathcal {P}_{i}\) flips the labels. Then, when \(\mathcal {P}_{1-i}\) is using oblivious transfer to retrieve the label corresponding to her jth passstring character, she will retrieve the 0 label if the two characters are equal, and the 1 label otherwise. (Note that this preprocessing on the garbler's side eliminates the need to send \(X_{0,0}\) and \(X_{1,1}\) in Fig. 4.)
 2.
Compute an \(n\)-input threshold gate, as illustrated in Fig. 6 of Yakoubov [62]. This gate returns 0 if the sum of the inputs is above a certain threshold (that is, if more than \(\delta \) passstring characters differ), and 1 otherwise. This will require \(n\) ciphertexts.
Thus, a garbling of \(f\) consists of \(n\) ciphertexts. Since fPAKE requires two such garbled circuits (Fig. 4), \(2n\) ciphertexts will be exchanged.
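Before garbling, the circuit \(f\) for Hamming distance is just these two steps; a plain-Python sketch:

```python
def f(pw0, pw1, delta):
    # Step 1: XOR corresponding binary characters -> inequality bits.
    diffs = [a ^ b for a, b in zip(pw0, pw1)]
    # Step 2: threshold gate -> 1 iff at least n - delta inputs are 0,
    # i.e. at most delta characters differ.
    return 1 if sum(diffs) <= delta else 0

assert f([0, 1, 1, 0, 1], [0, 1, 0, 0, 1], delta=1) == 1   # one mismatch
assert f([0, 1, 1, 0, 1], [1, 0, 0, 1, 1], delta=1) == 0   # four mismatches
```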
 1.
Represent each character in terms of bits. Step 1 will then consist of XORing corresponding bits, and taking an OR of the resulting XORs for each character to get negated equality. This will take an additional \(n\log (p)\) ciphertexts for every passstring character.
 2.
Use garbled gadget labels from the outset. We will require a larger OT (1outof\(p\) instead of 1outof2), but nothing else will change.
4 Specialized Construction for Hamming Distance
In the full version of this paper [28], we show that it is not straightforward to build a secure fPAKE from primitives that are, by design, well-suited for correcting errors. However, PAKE protocols are appealingly efficient compared to the garbled circuits used in the prior construction. In this section, we ask whether the failed approach can be rescued in an efficient way, and answer this question in the affirmative.
4.1 Building Blocks
4.1.1 Robust Secret Sharing
We recall the definition of a robust secret sharing scheme, slightly simplified for our purposes from Cramer et al. [22]. For a vector \(c\in \mathbb {F}_q^n\) and a set \(A\subseteq [n]\), we denote with \(c_A\) the projection \(\mathbb {F}_q^n\rightarrow \mathbb {F}_q^{|A|}\), i.e., the subvector \((c_i)_{i\in A}\).
Definition 2

t-privacy: for any \(s,s'\in \mathbb {F}_q\) and any \(A \subset [n]\) with \(|A|\le t\), the projections \(c_A\) of \(c{\mathop {\leftarrow }\limits ^{{}_\$}}\mathsf {Share} (s)\) and \(c'_A\) of \(c'{\mathop {\leftarrow }\limits ^{{}_\$}}\mathsf {Share} (s')\) are identically distributed.

r-robustness: for any \(s\in \mathbb {F}_q\) and any \(A \subset [n]\) with \(|A|\ge r\), any c output by \(\mathsf {Share} (s)\), and any \(\tilde{c}\) such that \(c_A = \tilde{c}_A\), it holds that \(\mathsf {Reconstruct} (\tilde{c}) = s\).
In other words, an \((n, t, r)\)-\(\textsf {RSS}\) is able to reconstruct the shared secret even if the adversary has tampered with up to \(n-r\) shares, while each set of t shares is distributed independently of the shared secret s and thus reveals nothing about it. We note that we allow for a gap, i.e., \(r\ge t+1\). Schemes with \(r > t+1\) are called ramp \(\textsf {RSS}\) schemes.
4.1.2 Linear Codes
A linear qary code of length \(n\) and rank \(k\) is a subspace C with dimension \(k\) of the vector space \(\mathbb {F}_q^n\). The vectors in C are called codewords. The size of a code is the number of codewords it contains, and is thus equal to \(q^k\). The weight of a word \(w\in \mathbb {F}_q^n\) is the number of its nonzero components, and the distance between two words is the Hamming distance between them (equivalently, the weight of their difference). The minimal distance d of a linear code C is the minimum weight of its nonzero codewords, or equivalently, the minimum distance between any two distinct codewords.
A code for an alphabet of size q, of length \(n\), rank \(k\), and minimal distance d is called an \((n,k,d)_q\)-code. Such a code can be used to detect up to \(d-1\) errors (because if a codeword is sent and fewer than \(d\) errors occur, the result will not be another codeword), and to correct up to \(\lfloor (d-1)/2\rfloor \) errors (because for any received word, there is at most one codeword within distance \(\lfloor (d-1)/2\rfloor \)). For linear codes, the encoding of a (row vector) word \(W\in \mathbb {F}_q^k\) is performed by an algorithm \(C.\mathsf {Encode}:\mathbb {F}_q^k\rightarrow \mathbb {F}_q^n\), which is the multiplication of W by a so-called "generating matrix" \(G\in \mathbb {F}_q^{k\times n}\) (which defines an injective linear map). This leads to a row-vector codeword \(c\in C\subset \mathbb {F}_q^n\).
The Singleton bound states that for any linear code, \(k+d \le n+1\), and a maximum distance separable (or MDS) code satisfies \(k+d = n+1\). Hence, \(d = n-k+1\), and MDS codes are fully described by the parameters \((q,n,k)\). Such an \((n,k)_q\)-MDS code can correct up to \(\lfloor (n-k)/2\rfloor \) errors; it can detect errors whenever there are no more than \(n-k\) of them.
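As a concrete illustration of these bounds (toy parameters of our choosing), a Reed-Solomon code, the classic MDS family, attains \(d = n-k+1\):

```python
# Toy Reed-Solomon code over F_7 with k = 2, n = 4 (parameters of our choosing).
# RS codes are MDS, so d = n - k + 1 = 3: up to d - 1 = 2 errors are
# detectable and floor((n - k) / 2) = 1 error is correctable.
q, n, k = 7, 4, 2
points = [0, 1, 2, 3]                     # distinct evaluation points in F_7

def encode(W):
    # Codeword = evaluations of the degree-<k polynomial with coefficients W.
    return [sum(c * pow(x, i, q) for i, c in enumerate(W)) % q for x in points]

c1, c2 = encode([3, 5]), encode([3, 6])
dist = sum(a != b for a, b in zip(c1, c2))
assert dist >= n - k + 1    # distinct codewords differ in at least d positions
```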
For a thorough introduction to linear codes and proofs of all statements in this short overview, we refer the reader to [55].
Observe that a linear code, due to the linearity of its encoding algorithm, is not a primitive designed to hide anything about the encoded message. However, we show in the following lemma how to turn an MDS code into a \(\textsf {RSS}\) scheme.
Lemma 3

\(\mathsf {Share} (s)\) for \(s\in \mathbb {F}_q\) first chooses a random row vector \(W\in \mathbb {F}_q^k\) such that \(W \cdot L = s\), and outputs \(c \leftarrow C'.\mathsf {Encode} (W)\) (equivalently, we can say that \(\mathsf {Share} (s)\) chooses a uniformly random codeword of C whose last coordinate is s, and outputs the first \(n\) coordinates as c).

\(\mathsf {Reconstruct} (w)\) for \(w\in \mathbb {F}_q^n\) first runs \(C'.\mathsf {Decode} (w)\). If it gets a vector \(W'\), then output \(s = W' \cdot L\), otherwise output \(s{\mathop {\leftarrow }\limits ^{{}_\$}}\mathbb {F}_q\).
Then \(\mathsf {Share} \) and \(\mathsf {Reconstruct} \) form an \((n,t,r)\)-\(\textsf {RSS}\) for \(t=k-1\) and \(r = \lceil (n+k)/2\rceil \).
Proof
Let us consider the two properties from Definition 2.

t-privacy: Assume \(|A| = t\) (privacy for smaller A follows immediately by adding arbitrary coordinates to it to get to size t). Let \(J = A\cup \{n+1\}\); note that \(|J|=t+1=k\). Note that for the code C, any \(k\) coordinates of a codeword uniquely determine the input to \(\mathsf {Encode} \) that produces this codeword (otherwise, there would be two codewords that agreed on \(k\) elements and thus had distance at most \(n-k+1\), which is less than the minimum distance of C). Therefore, the mapping given by \(\mathsf {Encode} _J: \mathbb {F}_q^k\rightarrow \mathbb {F}_q^{|J|}\) is bijective; thus the coordinates in J are uniform when the input to \(\mathsf {Encode} \) is uniform. The algorithm \(\mathsf {Share} \) chooses the input to \(\mathsf {Encode} \) uniformly subject to fixing coordinate \(n+1\) of the output. Therefore, the remaining coordinates (i.e., the coordinates in A) are uniform.

r-robustness: Note that C has minimum distance \(n-k+2\), and therefore \(C'\) has minimum distance at least \(n-k+1\) (because dropping one coordinate reduces the distance by at most 1). Therefore, \(C'\) can correct \(\lfloor (n-k)/2\rfloor = n-r\) errors. Since \(c_A = \tilde{c}_A\) and \(|A| \ge r\), there are at most \(n-r\) errors in \(\tilde{c}\), so the call to \(C'.\mathsf {Decode} (\tilde{c})\) made by \(\mathsf {Reconstruct} (\tilde{c})\) will output \(W'=W\). Then \(\mathsf {Reconstruct} (\tilde{c})\) will output \(s = W'\cdot L = W\cdot L\).
Note that Shamir's secret sharing scheme is exactly the above construction with Reed-Solomon codes [47].
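A minimal sketch of that correspondence: \(\mathsf {Share} \) picks a random polynomial that fixes the secret at one evaluation point (here \(p(0)\), a common convention), and \(\mathsf {Reconstruct} \) interpolates. This sketch only recovers the secret from \(k\) correct shares; full r-robustness additionally needs an error-correcting decoder (e.g., Berlekamp-Welch), which is omitted:

```python
import secrets

q = 101                                   # a small prime field F_q

def shamir_share(s, n, k):
    # Random degree-<k polynomial p with p(0) = s; the shares are p(1..n).
    coeffs = [s] + [secrets.randbelow(q) for _ in range(k - 1)]
    return [sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q
            for x in range(1, n + 1)]

def reconstruct(shares_at):
    # Lagrange interpolation at 0 from >= k correct shares, given as {x: p(x)}.
    s = 0
    for x, y in shares_at.items():
        num = den = 1
        for x2 in shares_at:
            if x2 != x:
                num = num * (-x2) % q
                den = den * (x - x2) % q
        s = (s + y * num * pow(den, q - 2, q)) % q   # den^-1 via Fermat
    return s

shares = shamir_share(42, n=5, k=3)
assert reconstruct({1: shares[0], 3: shares[2], 5: shares[4]}) == 42
```

Any \(k\) shares determine the secret, while any \(t = k-1\) shares are uniform, matching the t-privacy of Lemma 3.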
4.1.3 Implicit-Only PAKE
PAKE protocols can have two types of authentication: implicit authentication, where at the end of the protocol the two parties share the same key if they used the same passstring and hold random independent keys otherwise; or explicit authentication, where, in addition, they actually know which of the two situations they are in. A PAKE protocol that only achieves implicit authentication can provide explicit authentication by adding key-confirmation flows [7].
The standard PAKE functionality \(\mathcal {F}_\textsf {pwKE} \) from [20] is designed with explicit authentication in mind, or at least considers that success or failure will later be detected by the adversary when he tries to use the key. Thus, it reveals to the adversary whether a passstring guess was successful or not. However, some applications may require a PAKE that does not provide any feedback, and thus does not reveal the situation before the keys are actually used. Observe that, with regard to honest players, \(\mathcal {F}_\textsf {pwKE} \) already features implicit authentication, since the players learn nothing but their own session key.
In terms of functionalities, there are two differences between \(\mathcal {F}_\textsf {pwKE} \) and \(\mathcal {F}_\textsf {iPAKE} \). First, the TestPwd query only silently updates the internal state of the record (from fresh to either compromised or interrupted); its outcome is not given to the adversary \(\mathcal {S}\). Second, the NewKey query is modified so that the adversary gets to choose the key for a non-corrupted party only if it uses the correct passstring (corruption of the other party is no longer enough), as discussed earlier. Without going too much into the details, it is intuitively clear that simulating an honest party is hard if the simulator does not know whether to proceed with a passstring extracted from a dictionary attack or not. Regarding the output, i.e., the question of whether the session keys computed by both parties should match or look random, the simulator thus gets help from our functionality through the modified NewKey queries.
We further alter this functionality to allow for public labels, as shown in the full version of this paper [28]. The resulting functionality \(\mathcal {F}_{\ell {\textsf {\text {}iPAKE}}} \) idealizes what we call labeled implicit-only PAKE (or \(\ell \text {-}\textsf {iPAKE} \) for short), resembling the notion of labeled public-key encryption as formalized in [56]. In a nutshell, labels are public authenticated strings that are chosen by each user individually for each execution of the protocol. Authenticated here means that tampering with the label can be efficiently detected. Such labels can be used to, e.g., distribute public information such as public keys reliably over unauthenticated channels.
Theorem 4
If the \(\textsf {CDH}\) assumption holds in \(\mathbb {G}\), the protocol EKE2 depicted in Fig. 8 securely realizes \(\mathcal {F}_{\ell {\textsf {\text {}iPAKE}}} \) in the \(\mathcal {F}_\textsf {RO},\mathcal {F}_\textsf {IC},\mathcal {F}_\textsf {CRS} \)hybrid model with respect to static corruptions.
We note that this result is not surprising, given that other variants of EKE2 have already been proven to UC-emulate \(\mathcal {F}_\textsf {pwKE} \). Intuitively, a protocol with only two flows that do not depend on each other does not leak the outcome to the adversary via the transcript, which explains why EKE2 is implicit-only. Hashing the transcript keeps the adversary from biasing the key unless he knows the correct passstring or breaks the ideal cipher. For completeness, we include the full proof in the full version of this paper [28].
4.2 Construction
 1.
In the first phase, the two parties aim at enhancing their passstrings to a vector of session keys with good entropy. For this, passstrings are viewed as vectors of characters. The parties repeatedly execute a PAKE on each of these characters separately. The PAKE will ensure that the key vectors held by the two parties match in all positions where their passstrings matched, and are uniformly random in all other positions.
 2.
In the second phase, the two parties exchange nonces of their choice, in such a way that the nonce reaches the other party only if enough of the key vector matches. This is done by applying an \(\textsf {RSS}\) to the nonce, and sending it to the other party using the key vector as a one time pad. Both parties do this symmetrically, each using half of the bits of the key vector. The robustness property of the \(\textsf {RSS}\) ensures that a few nonmatching passstring characters do not prevent both parties from recovering the other party’s nonce. The final key is then obtained by adding the nonces (again, as a onetime pad): this is a scalar in \(\mathbb {F}_q\).
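The two phases can be sketched end-to-end with toy stand-ins (an idealized per-character PAKE, and a repetition code with majority decoding in place of the \(\textsf {RSS}\); the repetition code has no privacy, so this illustrates only the data flow and error tolerance, not security):

```python
import secrets
from collections import Counter

q, n = 257, 9                              # field size and passstring length

def pake_phase(pw_a, pw_b):
    # Idealized per-character PAKE: key characters agree exactly where the
    # passstring characters agree, and are independently uniform elsewhere.
    Ka, Kb = [], []
    for a, b in zip(pw_a, pw_b):
        if a == b:
            r = secrets.randbelow(q); Ka.append(r); Kb.append(r)
        else:
            Ka.append(secrets.randbelow(q)); Kb.append(secrets.randbelow(q))
    return Ka, Kb

def share(s):        # toy stand-in for RSS.Share: repetition code
    return [s] * n

def reconstruct(c):  # toy stand-in for RSS.Reconstruct: majority vote
    return Counter(c).most_common(1)[0][0]

pw_a = [1, 0, 1, 1, 0, 0, 1, 0, 1]
pw_b = [1, 0, 0, 1, 0, 0, 1, 1, 1]               # two mismatching characters
Ka, Kb = pake_phase(pw_a, pw_b)
N = secrets.randbelow(q)                          # party A's nonce
E = [(u + k) % q for u, k in zip(share(N), Ka)]   # shares under A's pad
decrypted = [(e - k) % q for e, k in zip(E, Kb)]  # B removes its own pad
assert reconstruct(decrypted) == N   # few mismatches: B recovers A's nonce
```

In the real construction both parties do this symmetrically with an actual \(\textsf {RSS}\), and the final key is the sum of the two nonces.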
When using the \(\textsf {RSS}\) from MDS codes described in Lemma 3, the one-time-pad encryption of the shares (which form a codeword) can be viewed as the code-offset construction for information reconciliation (aka secure sketch) [27, 36] applied to the key vectors. While our presentation goes through \(\textsf {RSS}\) as a separate object, we could instead present this construction using information reconciliation. The syndrome construction of secure sketches can also be used here instead of the code-offset construction.
4.3 Security of \(\textsf {fPAKE} _\textsf {RSS} \)
We show that our protocol realizes functionality \(\mathcal {F}_\textsf {fPAKE} ^M\) in the \(\mathcal {F}_{\ell {\textsf {\text {}iPAKE}}} \)-hybrid model. In a nutshell, the idea is to simulate without the passstrings by adjusting the keys output by \(\mathcal {F}_{\ell {\textsf {\text {}iPAKE}}} \) to the mask of the passstrings, which is leaked by \(\mathcal {F}_\textsf {fPAKE} ^M\).
Theorem 5
If \((\mathsf {Share}:\mathbb {F}_q \rightarrow \mathbb {F}_q^n,\mathsf {Reconstruct}:\mathbb {F}_q^n\rightarrow \mathbb {F}_q)\) is an \((n,t,r)\)-\(\textsf {RSS}\) and \((\mathsf {SigGen},\mathsf {Sign},\mathsf {Vfy})\) is an EUF-CMA secure one-time signature scheme, protocol \(\textsf {fPAKE} _{RSS}\) securely realizes \(\mathcal {F}_\textsf {fPAKE} ^M\) with \(\gamma = n-t-1\) and \(\delta = n-r\) in the \(\mathcal {F}_{\ell {\textsf {\text {}iPAKE}}} \)-hybrid model with respect to static corruptions.
In particular, if we wish key agreement to succeed as long as there are at most \(\delta \) errors, we instantiate the \(\textsf {RSS}\) using the construction of Lemma 3 based on an \((n+1, k)_q\) MDS code with \(k = n-2\delta \). This gives \(r = \lceil (n+k)/2 \rceil = n-\delta \), so \(\delta \) will be equal to \(n-r\), as required. It also gives \(\gamma = n-t-1 = 2\delta \).
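These parameter relations can be checked mechanically (a small sanity check with names of our choosing):

```python
import math

def fpake_rss_params(n, delta):
    # RSS from Lemma 3 over an (n+1, k)_q MDS code with k = n - 2*delta.
    k = n - 2 * delta
    t = k - 1                      # privacy threshold of the RSS
    r = math.ceil((n + k) / 2)     # robustness threshold of the RSS
    return {"k": k, "gamma": n - t - 1, "delta": n - r}

p = fpake_rss_params(n=20, delta=3)
assert p == {"k": 14, "gamma": 6, "delta": 3}    # gamma = 2*delta, as claimed
```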
We thus obtain the following corollary:
Corollary 6
For any \(\delta \) and \(\gamma = 2\delta \), given an \((n+1,k)_q\)-MDS code for \(k = n-2\delta \) (with minimal distance \(d = n-k+2\)) and an EUF-CMA secure one-time signature scheme, protocol \(\textsf {fPAKE} _{RSS}\) securely realizes \(\mathcal {F}_\textsf {fPAKE} ^M\) in the \(\mathcal {F}_{\ell {\textsf {\text {}iPAKE}}} \)-hybrid model with respect to static corruptions.
Proof sketch of Theorem 5. We start with the real execution of the protocol and indistinguishably switch to an ideal execution with dummy parties relaying their inputs to, and obtaining their outputs from, \(\mathcal {F}_\textsf {fPAKE} ^M\). To preserve the view of the distinguisher, the environment \(\mathcal {Z}\), a simulator \(\mathcal {S}\) plays the role of the real-world adversary by controlling the communication between \(\mathcal {F}_\textsf {fPAKE} ^M\) and \(\mathcal {Z}\). During the proof, we build \(\mathcal {F}_\textsf {fPAKE} ^M\) and \(\mathcal {S}\) by successively randomizing passstrings (since the final simulation has to work without them) and session keys (since \(\mathcal {F}_\textsf {fPAKE} ^M\) hands out random session keys in certain cases). We have to tackle the following difficulties, which we describe in terms of attacks.

Passive attack: in this attack, \(\mathcal {Z}\) picks two passstrings and then observes the transcript and outputs of the protocol, without having access to any internal state of the parties. We show that \(\mathcal {Z}\) cannot distinguish between transcript and outputs that were produced using \(\mathcal {Z}\)'s passstrings and ones produced using random passstrings. Regarding the outputs, we argue that even in the real execution the session keys were chosen uniformly at random (with \(\mathcal {Z}\) not knowing the coins consumed by this choice) as long as the distance check is reliable. Using properties of the \(\textsf {RSS}\), we show that this is the case with overwhelming probability. Regarding the transcript, randomization is straightforward using properties of the one-time pad.

Man-in-the-middle attack: in this attack, \(\mathcal {Z}\) injects a malicious message into a session of two honest parties. There are several ways to secure protocols that have to run over unauthenticated channels and are prone to this attack. Essentially, all of them introduce methods to bind messages together to prevent the adversary from injecting malicious messages. To do this, we need the labeled version of our iPAKE and a one-time signature scheme^{3}. Unless \(\mathcal {Z}\) is able to break the one-time signature scheme, this attack always results in an abort.

Active attack: in this attack, \(\mathcal {Z}\) injects a malicious message into a session with one corrupted party, thereby knowing the internal state of this party. We show how to produce a transcript and outputs that look like those of a real execution, but without using the passstrings of the honest party. Since \(\mathcal {Z}\) can now actually decrypt the one-time pad, and the transcript therefore reveals the positions of the errors in the passstrings, \(\mathcal {S}\) has to rely on \(\mathcal {F}_\textsf {fPAKE} ^M\) revealing the mask of the passstrings used in the real execution. If, on the other hand, the passstrings are too far from each other, we show that the privacy property of the \(\textsf {RSS}\) hides the number and positions of the errors. In that case, \(\mathcal {S}\) can use a random passstring to produce the transcript.
One interesting subtlety that arises is the usage of the iPAKE. Observe that the UC security notion for a regular PAKE, as defined in [20] and recalled in the full version of this paper [28], provides an interface for the adversary to test a passstring once and learn whether it is right or wrong. Using this notion, our simulator would have to answer such queries from \(\mathcal {Z}\). Since this is not possible without \(\mathcal {F}_\textsf {fPAKE} ^M\) leaking the mask all the time, it is crucial to use the iPAKE variant that we introduced in Sect. 4.1.3. Under this stronger notion, the adversary is still allowed one passstring guess which may affect the output, but the adversary learns nothing more about the outcome of his guess than he can infer from whatever access he has to the outputs alone. Since our protocol uses the outputs of the PAKE as one-time-pad keys, it is intuitively clear that by preventing \(\mathcal {Z}\) from getting additional leakage about these keys, we protect the secrets of honest parties.
4.4 Further Discussion
4.4.1 Adaptive Corruptions
Adaptive security of our protocol is not achievable without relying on additional assumptions. To see this, consider the following attack: \(\mathcal {Z}\) starts the protocol with two equal passstrings and, without corrupting anyone, silently observes the transcript produced by \(\mathcal {S}\) using random passstrings. Afterwards, \(\mathcal {Z}\) corrupts both players to learn their internal state. \(\mathcal {S}\) may now choose a value K. This also fixes \(L'=K\), since the passstrings were equal. Now note that \(\mathcal {S}\) is committed to E, F since signatures are not equivocable. Since perfect shares are sparse in \(\mathbb {F}_q^n\), the probability that there exists a K such that \(E\oplus K\) and \(F\oplus K\) are both perfect shares is negligible. Thus, there do not exist plausible values \(U,V'\) that explain the transcript^{4}.
4.4.2 Removing Modeling Assumptions
All modeling assumptions of our protocol come from the realization of the ideal \(\mathcal {F}_{\ell {\textsf {\text {}iPAKE}}} \) functionality. For example, the \(\ell \text {-}\textsf {iPAKE} \) protocol from Sect. 4.1.3 requires a random oracle, an ideal cipher, and a CRS. We note that we can remove everything except the CRS by, e.g., using the PAKE protocol introduced in [37]. This protocol also securely realizes our \(\mathcal {F}_{\ell {\textsf {\text {}iPAKE}}} \) functionality^{5}. However, it is more costly than our \(\ell \text {-}\textsf {iPAKE} \) protocol, since each of its two messages contains a non-interactive zero-knowledge proof.
Since fPAKE implies a regular PAKE (simply set \(\delta = 0\)), [20] gives strong evidence that we cannot hope to realize \(\mathcal {F}_\textsf {fPAKE} \) without a CRS.
5 Comparison of fPAKE Protocols
Then, in Fig. 11, we describe the efficiency of the constructions when concrete primitives (OT and \(\ell \text {-}\textsf {iPAKE} \)) are used to instantiate them. \(\textsf {fPAKE} _\textsf {RSS} \) is instantiated as the construction in Fig. 9 with the \(\ell \text {-}\textsf {iPAKE} \) protocol in Fig. 8 and an \(\textsf {RSS}\). \(\textsf {fPAKE} _\textsf {YGC} \) is instantiated as the construction in Fig. 4 with the UC-secure oblivious transfer protocol of Chou and Orlandi [21], with the garbling scheme of Ball et al. [3], and with the split functionality transformation of Barak et al. [4]. Though \(\textsf {fPAKE} _\textsf {YGC} \) can handle any efficiently computable notion of distance, Fig. 11 assumes that both constructions use Hamming distance (and that, specifically, \(\textsf {fPAKE} _\textsf {YGC} \) uses the circuit described in Fig. 6). We describe efficiency in terms of suboperations (per-party, not in aggregate).
Note that these concrete primitives each have their own set of required assumptions. Specifically, the \(\ell \text {-}\textsf {iPAKE} \) protocol in Fig. 8 requires a random oracle (RO), an ideal cipher (IC), and a common reference string (CRS). The oblivious transfer protocol of Chou and Orlandi [21] requires a random oracle. The garbling scheme of Ball et al. [3] requires a mixed-modulus circular correlation robust hash function, which is a weakening of the random oracle assumption.
For \(\textsf {fPAKE} _\textsf {RSS} \), the factor of \(n\) arises from the \(n\) times EKE2 is executed. For \(\textsf {fPAKE} _\textsf {YGC} \), the factor of \(n\) comes from the garbled circuit. Additionally, in \(\textsf {fPAKE} _\textsf {YGC} \), three rounds of communication come from OT. The last of these is combined with sending the garbled circuits. Two additional rounds of communication come from the split functionality transformation. The need for signatures also arises from the split functionality transformation.
We can make several small efficiency improvements to the \(\textsf {fPAKE} _\textsf {YGC} \) construction which are not reflected in Fig. 11. First, instead of using the split functionality transformation of Barak et al. [4], we can use the split functionality of Camenisch et al. [16]. It uses a split key exchange functionality to establish symmetric keys, and then uses those to symmetrically encrypt and authenticate each flow. While this does not save any rounds, it does reduce the number of public-key operations needed. Second, if the passstrings are more than \(\lambda \) bits long (where \(\lambda \) is the security parameter), OT extensions that are secure against malicious adversaries [2] can be used. If the passstrings are fewer than \(\lambda \) bits long, then nothing is gained from using OT extensions, since OT extensions require \(\lambda \) "base OTs". However, if the passstrings are longer (say, a biometric measurement that is thousands of bits long), then OT extensions would save on the number of public-key operations, at the cost of an extra round of communication.
Footnotes
 1.
Gasti et al. [31] similarly use Yao’s garbled circuits for continuous biometric user authentication on a smartphone. Our approach can eliminate the third party in their application, at the cost of requiring two garbled circuits instead of one. As far as we know, ours is the first use of garbled circuits in the two-party fully malicious setting that does not call on an expensive transformation.
 2.
There are techniques [44] that improve this number in the amortized setting, when many computations are performed; however, this does not fit our setting.
 3.
Instead of labels and one-time signatures, one could simply sign all the messages, as would be done using the split functionality transformation [4], but this would be less efficient. This trade-off, using labels, is especially advantageous when we use a PAKE that admits adding labels essentially for free, as is the case with the special PAKE protocol we use.
 4.
We note that additional assumptions, such as assuming erasures, can enable an adaptive security proof.
 5.
In a nutshell, their protocol is implicit-only for the same reason as the PAKE protocol we use here: there are only two flows, which do not depend on each other, so the transcript cannot reveal the outcome of a guess without revealing the passstring. Regarding the session keys, the use of a hash function randomizes the session key in case of a failed dictionary attack. Furthermore, the protocol already implements labels. In more detail, looking at the proof in [37], the simulator does not use the answer of TestPwd to simulate any messages. The session key that an honest player receives in a corrupted session is chosen at random in the simulation (in Expt\(_3\)); letting this happen already in the functionality makes the simulation independent of the answer of TestPwd with respect to the computation of the session keys as well.
Acknowledgments
We thank Ran Canetti for guidance on the details of UC key agreement definitions, and Adam Smith for discussions on coding and information reconciliation.
This work was supported in part by the European Research Council under the European Community’s Seventh Framework Programme (FP7/2007-2013 Grant Agreement no. 339563 – CryptoCloud). Leonid Reyzin gratefully acknowledges the hospitality of École Normale Supérieure, where some of this work was performed. He was supported, in part, by US NSF grants 1012910, 1012798, and 1422965.
References
1.Abdalla, M., Catalano, D., Chevalier, C., Pointcheval, D.: Efficient two-party password-based key exchange protocols in the UC framework. In: Malkin, T. (ed.) CT-RSA 2008. LNCS, vol. 4964, pp. 335–351. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-79263-5_22
2.Afshar, A., Hu, Z., Mohassel, P., Rosulek, M.: How to efficiently evaluate RAM programs with malicious security. In: Oswald, E., Fischlin, M. (eds.) EUROCRYPT 2015, Part I. LNCS, vol. 9056, pp. 702–729. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-46800-5_27
3.Ball, M., Malkin, T., Rosulek, M.: Garbling gadgets for Boolean and arithmetic circuits. In: Weippl, E.R., Katzenbeisser, S., Kruegel, C., Myers, A.C., Halevi, S. (eds.) ACM CCS 2016, pp. 565–577. ACM Press, New York (2016)
4.Barak, B., Canetti, R., Lindell, Y., Pass, R., Rabin, T.: Secure computation without authentication. In: Shoup, V. (ed.) CRYPTO 2005. LNCS, vol. 3621, pp. 361–377. Springer, Heidelberg (2005). https://doi.org/10.1007/11535218_22
5.Beaver, D., Micali, S., Rogaway, P.: The round complexity of secure protocols (extended abstract). In: 22nd ACM STOC, pp. 503–513. ACM Press, May 1990
6.Bellare, M., Hoang, V.T., Rogaway, P.: Foundations of garbled circuits. In: Yu, T., Danezis, G., Gligor, V.D. (eds.) ACM CCS 2012, pp. 784–796. ACM Press, New York (2012)
7.Bellare, M., Pointcheval, D., Rogaway, P.: Authenticated key exchange secure against dictionary attacks. In: Preneel, B. (ed.) EUROCRYPT 2000. LNCS, vol. 1807, pp. 139–155. Springer, Heidelberg (2000). https://doi.org/10.1007/3-540-45539-6_11
8.Bellovin, S.M., Merritt, M.: Encrypted key exchange: password-based protocols secure against dictionary attacks. In: 1992 IEEE Symposium on Security and Privacy, pp. 72–84. IEEE Computer Society Press, May 1992
9.Bennett, C.H., Brassard, G., Robert, J.-M.: Privacy amplification by public discussion. SIAM J. Comput. 17(2), 210–229 (1988)
10.Bitansky, N., Canetti, R., Kalai, Y.T., Paneth, O.: On virtual grey box obfuscation for general circuits. In: Garay, J.A., Gennaro, R. (eds.) CRYPTO 2014, Part II. LNCS, vol. 8617, pp. 108–125. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-662-44381-1_7
11.Blanton, M., Hudelson, W.M.P.: Biometric-based non-transferable anonymous credentials. In: Qing, S., Mitchell, C.J., Wang, G. (eds.) ICICS 2009. LNCS, vol. 5927, pp. 165–180. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-11145-7_14
12.Boyen, X.: Reusable cryptographic fuzzy extractors. In: Atluri, V., Pfitzmann, B., McDaniel, P. (eds.) ACM CCS 2004, pp. 82–91. ACM Press, New York (2004)
13.Boyen, X., Dodis, Y., Katz, J., Ostrovsky, R., Smith, A.: Secure remote authentication using biometric data. In: Cramer, R. (ed.) EUROCRYPT 2005. LNCS, vol. 3494, pp. 147–163. Springer, Heidelberg (2005). https://doi.org/10.1007/11426639_9
14.Boyko, V., MacKenzie, P., Patel, S.: Provably secure password-authenticated key exchange using Diffie-Hellman. In: Preneel, B. (ed.) EUROCRYPT 2000. LNCS, vol. 1807, pp. 156–171. Springer, Heidelberg (2000). https://doi.org/10.1007/3-540-45539-6_12
15.Brostoff, S., Sasse, M.A.: Are passfaces more usable than passwords? A field trial investigation. In: McDonald, S., Waern, Y., Cockton, G. (eds.) People and Computers XIV – Usability or Else!, pp. 405–424. Springer, London (2000). https://doi.org/10.1007/978-1-4471-0515-2_27
16.Camenisch, J., Casati, N., Gross, T., Shoup, V.: Credential authenticated identification and key exchange. In: Rabin, T. (ed.) CRYPTO 2010. LNCS, vol. 6223, pp. 255–276. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-14623-7_14
17.Canetti, R.: Universally composable security: a new paradigm for cryptographic protocols. In: 42nd FOCS, pp. 136–145. IEEE Computer Society Press, October 2001
18.Canetti, R., Dachman-Soled, D., Vaikuntanathan, V., Wee, H.: Efficient password authenticated key exchange via oblivious transfer. In: Fischlin, M., Buchmann, J., Manulis, M. (eds.) PKC 2012. LNCS, vol. 7293, pp. 449–466. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-30057-8_27
19.Canetti, R., Fuller, B., Paneth, O., Reyzin, L., Smith, A.: Reusable fuzzy extractors for low-entropy distributions. In: Fischlin, M., Coron, J.-S. (eds.) EUROCRYPT 2016, Part I. LNCS, vol. 9665, pp. 117–146. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-49890-3_5
20.Canetti, R., Halevi, S., Katz, J., Lindell, Y., MacKenzie, P.: Universally composable password-based key exchange. In: Cramer, R. (ed.) EUROCRYPT 2005. LNCS, vol. 3494, pp. 404–421. Springer, Heidelberg (2005). https://doi.org/10.1007/11426639_24
21.Chou, T., Orlandi, C.: The simplest protocol for oblivious transfer. In: Lauter, K., Rodríguez-Henríquez, F. (eds.) LATINCRYPT 2015. LNCS, vol. 9230, pp. 40–58. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-22174-8_3
22.Cramer, R., Damgård, I.B., Döttling, N., Fehr, S., Spini, G.: Linear secret sharing schemes from error correcting codes and universal hash functions. In: Oswald, E., Fischlin, M. (eds.) EUROCRYPT 2015, Part II. LNCS, vol. 9057, pp. 313–336. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-46803-6_11
23.Daugman, J.: How iris recognition works. IEEE Trans. Circuits Syst. Video Technol. 14(1), 21–30 (2004)
24.Diffie, W., Hellman, M.E.: New directions in cryptography. IEEE Trans. Inf. Theory 22(6), 644–654 (1976)
25.Dodis, Y., Kanukurthi, B., Katz, J., Reyzin, L., Smith, A.: Robust fuzzy extractors and authenticated key agreement from close secrets. IEEE Trans. Inf. Theory 58(9), 6207–6222 (2012). https://doi.org/10.1109/TIT.2012.2200290
26.Dodis, Y., Ostrovsky, R., Reyzin, L., Smith, A.: Fuzzy extractors: how to generate strong keys from biometrics and other noisy data. SIAM J. Comput. 38(1), 97–139 (2008)
27.Dodis, Y., Reyzin, L., Smith, A.: Fuzzy extractors: how to generate strong keys from biometrics and other noisy data. In: Cachin, C., Camenisch, J.L. (eds.) EUROCRYPT 2004. LNCS, vol. 3027, pp. 523–540. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-24676-3_31
28.Dupont, P.-A., Hesse, J., Pointcheval, D., Reyzin, L., Yakoubov, S.: Fuzzy authenticated key exchange. Cryptology ePrint Archive, Report 2017/1111 (2017). https://eprint.iacr.org/2017/1111
29.Ellison, C., Hall, C., Milbert, R., Schneier, B.: Protecting secret keys with personal entropy. Future Gener. Comput. Syst. 16(4), 311–318 (2000)
30.Gassend, B., Clarke, D.E., van Dijk, M., Devadas, S.: Silicon physical random functions. In: Atluri, V. (ed.) ACM CCS 2002, pp. 148–160. ACM Press, New York (2002)
31.Gasti, P., Sedenka, J., Yang, Q., Zhou, G., Balagani, K.S.: Secure, fast, and energy-efficient outsourced authentication for smartphones. IEEE Trans. Inf. Forensics Secur. 11(11), 2556–2571 (2016). https://doi.org/10.1109/TIFS.2016.2585093
32.Han, J., Chung, A., Sinha, M.K., Harishankar, M., Pan, S., Noh, H.Y., Zhang, P., Tague, P.: Do you feel what I hear? Enabling autonomous IoT device pairing using different sensor types. In: IEEE Symposium on Security and Privacy (2018)
33.Han, J., Harishankar, M., Wang, X., Chung, A.J., Tague, P.: Convoy: physical context verification for vehicle platoon admission. In: 18th ACM International Workshop on Mobile Computing Systems and Applications (HotMobile) (2017)
34.Huang, Y., Katz, J., Evans, D.: Quid-Pro-Quo-tocols: strengthening semi-honest protocols with dual execution. In: 2012 IEEE Symposium on Security and Privacy, pp. 272–284. IEEE Computer Society Press, May 2012
35.Huang, Y., Katz, J., Evans, D.: Efficient secure two-party computation using symmetric cut-and-choose. In: Canetti, R., Garay, J.A. (eds.) CRYPTO 2013, Part II. LNCS, vol. 8043, pp. 18–35. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-40084-1_2
36.Juels, A., Wattenberg, M.: A fuzzy commitment scheme. In: ACM CCS 1999, pp. 28–36. ACM Press, November 1999
37.Katz, J., Vaikuntanathan, V.: Round-optimal password-based authenticated key exchange. In: Ishai, Y. (ed.) TCC 2011. LNCS, vol. 6597, pp. 293–310. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-19571-6_18
38.Kolesnikov, V., Mohassel, P., Rosulek, M.: FleXOR: flexible garbling for XOR gates that beats free-XOR. In: Garay, J.A., Gennaro, R. (eds.) CRYPTO 2014, Part II. LNCS, vol. 8617, pp. 440–457. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-662-44381-1_25
39.Kolesnikov, V., Rackoff, C.: Password mistyping in two-factor-authenticated key exchange. In: Aceto, L., Damgård, I., Goldberg, L.A., Halldórsson, M.M., Ingólfsdóttir, A., Walukiewicz, I. (eds.) ICALP 2008, Part II. LNCS, vol. 5126, pp. 702–714. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-70583-3_57
40.Kolesnikov, V., Schneider, T.: Improved garbled circuit: free XOR gates and applications. In: Aceto, L., Damgård, I., Goldberg, L.A., Halldórsson, M.M., Ingólfsdóttir, A., Walukiewicz, I. (eds.) ICALP 2008, Part II. LNCS, vol. 5126, pp. 486–498. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-70583-3_40
41.Lindell, Y.: Fast cut-and-choose based protocols for malicious and covert adversaries. In: Canetti, R., Garay, J.A. (eds.) CRYPTO 2013, Part II. LNCS, vol. 8043, pp. 1–17. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-40084-1_1
42.Lindell, Y., Pinkas, B.: Secure two-party computation via cut-and-choose oblivious transfer. In: Ishai, Y. (ed.) TCC 2011. LNCS, vol. 6597, pp. 329–346. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-19571-6_20
43.Lindell, Y., Pinkas, B.: An efficient protocol for secure two-party computation in the presence of malicious adversaries. J. Cryptol. 28(2), 312–350 (2015)
44.Lindell, Y., Riva, B.: Cut-and-choose Yao-based secure computation in the online/offline and batch settings. In: Garay, J.A., Gennaro, R. (eds.) CRYPTO 2014, Part II. LNCS, vol. 8617, pp. 476–494. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-662-44381-1_27
45.Maurer, U.: Information-theoretically secure secret-key agreement by NOT authenticated public discussion. In: Fumy, W. (ed.) EUROCRYPT 1997. LNCS, vol. 1233, pp. 209–225. Springer, Heidelberg (1997). https://doi.org/10.1007/3-540-69053-0_15
46.Mayrhofer, R., Gellersen, H.: Shake well before use: intuitive and secure pairing of mobile devices. IEEE Trans. Mob. Comput. 8(6), 792–806 (2009)
47.McEliece, R.J., Sarwate, D.V.: On sharing secrets and Reed-Solomon codes. Commun. ACM 24(9), 583–584 (1981). http://doi.acm.org/10.1145/358746.358762
48.Mohassel, P., Franklin, M.: Efficiency tradeoffs for malicious two-party computation. In: Yung, M., Dodis, Y., Kiayias, A., Malkin, T. (eds.) PKC 2006. LNCS, vol. 3958, pp. 458–473. Springer, Heidelberg (2006). https://doi.org/10.1007/11745853_30
49.Monrose, F., Reiter, M.K., Wetzel, S.: Password hardening based on keystroke dynamics. Int. J. Inf. Secur. 1(2), 69–83 (2002)
50.Nielsen, J.B., Orlandi, C.: LEGO for two-party secure computation. In: Reingold, O. (ed.) TCC 2009. LNCS, vol. 5444, pp. 368–386. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-00457-5_22
51.Nisan, N., Zuckerman, D.: More deterministic simulation in logspace. In: 25th ACM STOC, pp. 235–244. ACM Press, May 1993
52.Pappu, R., Recht, B., Taylor, J., Gershenfeld, N.: Physical one-way functions. Science 297(5589), 2026–2030 (2002)
53.Pinkas, B., Schneider, T., Smart, N.P., Williams, S.C.: Secure two-party computation is practical. In: Matsui, M. (ed.) ASIACRYPT 2009. LNCS, vol. 5912, pp. 250–267. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-10366-7_15
54.Renner, R., Wolf, S.: The exact price for unconditionally secure asymmetric cryptography. In: Cachin, C., Camenisch, J.L. (eds.) EUROCRYPT 2004. LNCS, vol. 3027, pp. 109–125. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-24676-3_7
55.Roth, R.: Introduction to Coding Theory. Cambridge University Press, New York (2006)
56.Shoup, V.: A proposal for an ISO standard for public key encryption. Cryptology ePrint Archive, Report 2001/112 (2001). http://eprint.iacr.org/2001/112
57.Suh, G.E., Devadas, S.: Physical unclonable functions for device authentication and secret key generation. In: Proceedings of the 44th Annual Design Automation Conference, pp. 9–14. ACM (2007)
58.Tuyls, P., Schrijen, G.-J., Škorić, B., van Geloven, J., Verhaegh, N., Wolters, R.: Read-proof hardware from protective coatings. In: Goubin, L., Matsui, M. (eds.) CHES 2006. LNCS, vol. 4249, pp. 369–383. Springer, Heidelberg (2006). https://doi.org/10.1007/11894063_29
59.Wang, X., Ranellucci, S., Katz, J.: Authenticated garbling and efficient maliciously secure two-party computation. In: Thuraisingham, B.M., Evans, D., Malkin, T., Xu, D. (eds.) ACM CCS 2017, pp. 21–37. ACM Press, New York (2017)
60.Woodage, J., Chatterjee, R., Dodis, Y., Juels, A., Ristenpart, T.: A new distribution-sensitive secure sketch and popularity-proportional hashing. In: Katz, J., Shacham, H. (eds.) CRYPTO 2017, Part III. LNCS, vol. 10403, pp. 682–710. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63697-9_23
61.Wyner, A.D.: The wiretap channel. Bell Syst. Tech. J. 54, 1355–1387 (1975)
62.Yakoubov, S.: A gentle introduction to Yao’s garbled circuits (2017). http://web.mit.edu/sonka89/www/papers/2017ygc.pdf
63.Yao, A.C.-C.: How to generate and exchange secrets (extended abstract). In: 27th FOCS, pp. 162–167. IEEE Computer Society Press, October 1986
64.Yu, M.-D.M., Devadas, S.: Secure and robust error correction for physical unclonable functions. IEEE Des. Test 27(1), 48–65 (2010)
65.Zahur, S., Rosulek, M., Evans, D.: Two halves make a whole: reducing data transfer in garbled circuits using half gates. In: Oswald, E., Fischlin, M. (eds.) EUROCRYPT 2015, Part II. LNCS, vol. 9057, pp. 220–250. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-46803-6_8
66.Zviran, M., Haga, W.J.: A comparison of password techniques for multilevel authentication mechanisms. Comput. J. 36(3), 227–237 (1993)