1 Introduction

Traditionally, password-based authentication has been the dominant approach for authenticating users on the Internet, by relying on “what users know”. However, this approach has its fair share of security and usability issues. It typically requires the servers to store a (salted) hash of all passwords, making them susceptible to offline dictionary attacks. Indeed, large-scale password breaches in the wild are extremely common [6, 8]. Passwords also pose challenging usability problems. High entropy passwords are hard for humans to remember, while low entropy passwords provide little security, and research has shown that introducing complex restrictions on password choices can backfire [39, Sec A.3].

There are major ongoing efforts in the industry to address some of these issues. For example, “unique” biometric features such as fingerprints [4], facial scans [1], and iris scans [9] are increasingly popular first- or second-factor authentication mechanisms for logging into devices and applications. Studies show that biometrics are much more user-friendly [2], particularly on mobile devices, as users do not have to remember or enter any secret information. At the same time, a (server-side) breach of biometric data is much more damaging because, unlike passwords, there is no easy way to change biometric information regularly.

Therefore, the industry is shifting away from transmitting or storing user secrets on the server-side. For example, biometric templates and measurements are stored and processed on the client devices where the matching also takes place. A successful match then unlocks a private signing key for a digital signature scheme which is used to generate a token on a fresh challenge. Instead of the user data, the token is transmitted to the server, who only stores a public verification key to verify the tokens. (Throughout the paper, we shall use the terms token and signature interchangeably.) Thus, a server breach does not lead to a loss of sensitive user data.

Most prominently, this is the approach taken by the FIDO Alliance [3], the world’s largest industry-wide effort to enable an interoperable ecosystem of hardware-, mobile- and biometric-based authenticators that can be used by enterprises and service providers. This framework is also widely adopted by major Internet players and incorporated into all major browsers in the form of the W3C Web Authentication API standard [10].

Hardware-Based Protection. With biometric data and private keys (for generating tokens) stored on client devices, a primary challenge is to securely protect them. As pointed out before, this is particularly crucial with biometrics since unlike passwords they are not replaceable. The most secure approach for doing so relies on hardware-based solutions such as secure enclaves [5] that provide physical separation between secrets and applications. However, secure hardware is not available on all devices, can be costly to support at scale, and provides very little programmability.

Software-Based Protection. Software-based solutions such as white-box cryptography are often based on ad-hoc techniques that are regularly broken [11]. The provably secure alternative, i.e. cryptographic obfuscation [13, 37], is not yet practical for real-world use cases.

A simple alternative approach is to apply “salt-and-hash” techniques, often used to protect passwords, to biometric templates before storing them on the client device. Here, naïve solutions fail because biometric matching is almost always a fuzzy match that checks whether two vectors are within a given distance threshold of each other, and hashing destroys this notion of closeness.

Using Fuzzy Extractors. It is tempting to think that a better way to implement the salt-and-hash approach for biometric data is through a cryptographic primitive known as a fuzzy extractor [21, 33]. However, as also discussed by Dupont et al. [34], this approach only works for high-entropy biometric data and is susceptible to offline dictionary attacks.

Distributed Cryptography to the Rescue. Our work is motivated by the fact that most users own and carry multiple devices (laptop, smart-phone, smart-watch, etc.) and have other IoT devices around when authenticating (smart TV, smart-home appliances, etc.). We introduce a new framework for client-side biometric-based authentication that securely distributes both the biometric template as well as the secret signing key among multiple devices. These devices can collectively perform biometric matching and token generation without ever reconstructing the template or the signing key on any one device. We refer to this framework as Biometric Enabled Threshold Authentication (BETA for short) and study it at length in this paper.

Before diving deeper into the details, we note that while our primary motivation stems from a client-side authentication mechanism, our framework is quite generic and can be used in other settings. For example, it can also be used to protect biometric information on the server-side by distributing it among multiple servers who perform the matching and token generation (e.g., for a single sign-on authentication token) in a fully distributed manner.

1.1 Our Contributions

To concretely instantiate our framework BETA, we formally introduce the notion of fuzzy threshold tokenizer (\(\text {FTT}\)). We provide a universally composable (UC) security definition for \(\text {FTT}\) and design several protocols that realize it. We first briefly describe the notion of a Fuzzy Threshold Tokenizer.

Fuzzy Threshold Tokenizer. Consider a set of n parties/devices, a distribution \(\mathcal {W}\) over vectors in \(\mathbb {Z}_q^\ell \), a threshold t on the number of parties, a distance predicate \(\mathsf {Dist}\) and an unforgeable threshold signature scheme \(\mathsf {TS}\). Initially, in a global setup phase, a user generates some public and secret parameters (in a trusted setting), and distributes them amongst the n devices she owns. Further, she also runs the setup of the scheme \(\mathsf {TS}\) and secret shares the signing key amongst the devices. In an enrollment phase, the user samples a biometric template \(\overrightarrow{\mathbf {w}}\in \mathbb {Z}_q^\ell \) according to \(\mathcal {W}\) and securely shares it amongst all the devices. Any set of t devices can, together, completely reconstruct the biometric template \(\overrightarrow{\mathbf {w}}\) and the signing key of the threshold signature scheme. Then, during an online sign on session, an initiating device \({P}\), with a candidate biometric measurement \(\overrightarrow{\mathbf {u}}\) as input, can interact in a protocol with a set \(S\) of \((t-1)\) other devices. At the end of this interaction, if \(\overrightarrow{\mathbf {u}}\) is “close enough” to the template \(\overrightarrow{\mathbf {w}}\) (with respect to distance predicate \(\mathsf {Dist}\)), the initiating device \({P}\) obtains a token (signature) on a message of its choice.
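The enrollment step, in which any t of the n devices can jointly reconstruct the template or the signing key, can be illustrated with textbook Shamir secret sharing over \(\mathbb {Z}_q\). The sketch below shows only the sharing and reconstruction steps; the toy modulus and parameters are our own illustrative choices, not fixed by the paper:

```python
import random

Q = 2**61 - 1  # toy prime modulus standing in for q

def share(secret, t, n):
    """Split `secret` into n Shamir shares with reconstruction threshold t."""
    coeffs = [secret] + [random.randrange(Q) for _ in range(t - 1)]
    # Share for device x is the degree-(t-1) polynomial evaluated at x.
    return [(x, sum(c * pow(x, j, Q) for j, c in enumerate(coeffs)) % Q)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange-interpolate the sharing polynomial at 0 to recover the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % Q
                den = den * (xi - xj) % Q
        secret = (secret + yi * num * pow(den, Q - 2, Q)) % Q  # den^-1 via Fermat
    return secret

# Share each coordinate of a toy template among n = 5 devices, threshold t = 3.
template = [12, 7, 99]
per_coord_shares = [share(w, t=3, n=5) for w in template]

# Any 3 devices recover the template; fewer shares reveal nothing.
recovered = [reconstruct(coord_shares[:3]) for coord_shares in per_coord_shares]
assert recovered == template
```

The signing key of \(\mathsf {TS}\) can be distributed the same way; a threshold signature scheme additionally lets devices sign under their shares without ever reconstructing the key.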

It is important to note that we do not allow the other participating \((t-1)\) devices to interact amongst themselves, and all communication goes through the initiating device \({P}\). This is a critical requirement on the communication model for \(\text {FTT}\) since in a typical usage scenario, one or two primary devices (e.g., a laptop or a smart-phone) play the role of the initiating device and all other devices are only paired/connected to the primary device. (These devices may not even be aware of the presence of other devices.) Indeed, this requirement makes the design of constant-round \(\text {FTT}\) protocols significantly more challenging. Further, in any round of communication, we only allow unidirectional exchange of messages, i.e., either \({P}\) sends a message to some subset of the other \((t-1)\) devices or vice versa.

Security Definition. Consider a probabilistic polynomial time adversary \(\mathcal {A} \) that corrupts a set T of devices where \(|T| < t\). Informally, the security properties that we wish to capture in an FTT scheme are as follows:

(i) Privacy of biometric template: From any sign on session initiated by a corrupt device, \(\mathcal {A} \) should not be able to learn any information about the biometric template \(\overrightarrow{\mathbf {w}}\) apart from just the output of the predicate \(\mathsf {Dist}(\overrightarrow{\mathbf {u}},\overrightarrow{\mathbf {w}})\) for its choice of measurement \(\overrightarrow{\mathbf {u}}\). If the sign on session was initiated by an honest device, \(\mathcal {A} \) should learn no information about \(\overrightarrow{\mathbf {w}}\). Crucially, we do not impose any restriction on the entropy of the distribution from which the template is picked.

(ii) Privacy of biometric measurement: For any sign on session initiated by an honest device, \(\mathcal {A} \) should learn no information whatsoever about the measurement \(\overrightarrow{\mathbf {u}}\).

(iii) Token unforgeability: \(\mathcal {A} \) should not be able to compute a valid token (that verifies according to the threshold signature scheme \(\mathsf {TS}\)) unless it initiated a sign on session on behalf of a corrupt party with a measurement \(\overrightarrow{\mathbf {u}}\) such that \(\mathsf {Dist}(\overrightarrow{\mathbf {u}},\overrightarrow{\mathbf {w}})=1\). Furthermore, \(\mathcal {A} \) should only be able to compute exactly one token from each such session.

Our first contribution is a formal modeling of the security requirements of a fuzzy threshold tokenizer via a real-ideal world security definition in the universal composability (UC) framework [26]. We refer the reader to Sect. 4 for the formal definition and a detailed discussion on its intricacies.

Our next contribution is the design of several protocols that realize this primitive.

Protocol-1 (\(\pi ^{\mathsf {\tiny mpc}}\)). Given any threshold signature scheme \(\mathsf {TS}\), for any distance measure \(\mathsf {Dist}\) and any \(n, t\), we construct a four-round UC-secure \(\text {FTT}\) protocol \(\pi ^{\mathsf {\tiny mpc}}\). Our construction is based on any two-round (over a broadcast channel) UC-secure multi-party computation (MPC) protocol [15, 38, 45, 48] in the CRS model that is secure against up to all but one corruption, along with other basic primitives. \(\pi ^{\mathsf {\tiny mpc}}\) tolerates up to \((t-1)\) malicious devices, which is maximal.

Protocol-2 (\(\pi ^{\mathsf {\tiny tfhe}}\)). Given any threshold signature scheme \(\mathsf {TS}\), for any distance measure \(\mathsf {Dist}\) and any \(n, t\), we construct a four-round UC-secure \(\text {FTT}\) protocol \(\pi ^{\mathsf {\tiny tfhe}}\). Our construction is based on any t out of n threshold fully homomorphic encryption scheme (TFHE) and other basic primitives. Like \(\pi ^{\mathsf {\tiny mpc}}\), this protocol is secure against \((t-1)\) malicious devices.

The above two feasibility results are based on two incomparable primitives (two-round MPC and threshold FHE). On the one hand, two-round MPC seems like a stronger notion than threshold FHE. But, on the other hand, two-round MPC is known from a variety of assumptions like LWE/DDH/Quadratic Residuosity, while threshold FHE is known only from LWE. Further, the two protocols use very different techniques, which may be of independent interest.

Protocol-3 (\(\pi ^{\mathsf {\tiny ip}}\)). We design the third protocol \(\pi ^{\mathsf {\tiny ip}}\) specifically for the cosine similarity distance metric, which has recently been shown to be quite effective for face recognition (CosFace [55], SphereFace [43], FaceNet [53]). We pick a threshold of three for this protocol since users today typically carry at least three devices (a laptop, a smart-phone and a smart-watch). \(\pi ^{\mathsf {\tiny ip}}\) is secure in the random oracle model as long as at most one of the devices is compromised. We use Paillier encryption, efficient NIZKs for specific languages, and a simple garbled circuit to build an efficient four-round protocol.

Efficiency analysis of \(\pi ^{\mathsf {\tiny ip}}\). Finally, we perform a concrete efficiency analysis of our third protocol \(\pi ^{\mathsf {\tiny ip}}\). We assume that biometric templates and measurements have \(\ell \) features (or elements) and every feature can be represented with m bits. Let \(\lambda \) denote the computational security parameter and s denote the statistical security parameter. In the protocol \(\pi ^{\mathsf {\tiny ip}}\), we use the Paillier encryption scheme to encrypt each feature of the measurement and its product with the shares of the template. The initiator device proves that the ciphertexts are well-formed and the features are of the right length. For Paillier encryption, such proofs can be done efficiently using only \(O(\ell m)\) group operations [30, 31].

The other devices use the homomorphic properties of Paillier encryption to compute ciphertexts for inner-product shares and some additional values. These are sent back to the initiator, but with a MAC on them. The other devices then generate a garbled circuit that takes the MAC information from them and the decrypted ciphertexts from the initiator to compute whether the cosine value exceeds a certain threshold. The garbled circuit constructed here performs only five multiplications on numbers of length \(O(m + \log \ell + s)\). Oblivious transfers can be preprocessed in the setup phase between every pair of parties so that the online phase is quite efficient (only symmetric-key operations). Furthermore, since only one of the two helping devices can be corrupt, only one device needs to transfer the garbled circuit [44], further reducing the communication overhead. (We have skipped several important details of the protocol here, but they do not affect the complexity analysis. See Sect. 2.3 for a complete overview of the protocol.)

An alternate design approach is to use the garbled circuit itself to compute the inner product. However, there are two disadvantages to this approach. First, it does not scale efficiently with the feature vector length. The number of multiplications to be done inside the garbled circuit would be linear in the number of features, i.e., the size of the circuit would be roughly \(O(m^2 \ell )\). This is an important concern because the number of features in a template can be very large (e.g., see Fig. 1 in the NISTIR draft on the Ongoing Face Recognition Vendor Test (FRVT) [7]). Second, the devices would have to prove in zero knowledge that the bits fed as input to the circuit match the secret shares of the template given to them in the enrollment phase. This incurs additional computational overhead.

1.2 Related Work

Fuzzy identity based encryption, introduced by Sahai and Waters [52], allows decrypting a ciphertext encrypted with respect to some identity \(\mathsf {id}\) if the decryptor possesses the secret key for an identity that almost matches \(\mathsf {id}\). However, unlike \(\text {FTT}\), the decryptor is required to know both identities and which positions match. Recall that one of our main goals is to distribute the biometric template across all devices so that no one device ever learns it.

Function secret sharing, introduced by Boyle et al. [22], enables sharing the computation of a function f amongst several users. Another interesting related primitive is homomorphic secret sharing [23]. However, neither of these notions quite fits our context because of the limitations of our communication model and the specific security requirements against a malicious adversary.

Secure multiparty computation protocols in the private simultaneous messages model [14, 36, 41] consider a scenario where there is a client and a set of servers that wish to securely compute a function f on their joint inputs wherein the communication model only involves interaction between the client and each individual server. However, in that model, the adversary can either corrupt the client or a subset of servers but not both.

The work of Dupont et al. [34] constructs a fuzzy password authenticated key exchange protocol where each of the two parties has a low-entropy password. At the end of the protocol, both parties learn the shared secret key only if the two passwords are “close enough” with respect to some distance measure. In our work, we consider the problem of generating signatures, and in a setting with multiple parties. Another crucial difference is that in their work, both parties hold a copy of the password, whereas in our case, the biometric template is distributed between parties and therefore is never exposed to any single party. There is also a lot of work on distributed password authenticated key exchange [16] (and the references within), but that setting considers passwords (and so, equality matching) and not biometrics.

There has been a lot of work in developing privacy-preserving ways to compare biometric data [17, 25, 32] but it has mostly focused on computing specific distance measures (like Hamming distance) in the two-party setting where each party holds a vector. There has also been some privacy-preserving work in the same communication model as ours [19, 29, 42] but it has mainly focused on private aggregation of sensitive user data.

Open Problems. We leave it as an open problem to define weaker game-based security definitions for FTT and to design more efficient protocols that satisfy those. We also leave it open to design FTT protocols that tolerate adaptive corruptions and/or support dynamic addition/deletion of parties and rotation of signature keys.

2 Technical Overview

2.1 MPC Based Protocol

Emulating General Purpose MPC. Our starting point is the observation that if all the parties could freely communicate, then any UC-secure MPC protocol against a malicious adversary in the presence of a broadcast channel would immediately be useful in the design of an FTT scheme, applied to the following functionality: the initiator \({P}^*\) has input \((\mathsf {msg},S,\overrightarrow{\mathbf {u}})\); every party \({P}_i \in S\) has input \((\mathsf {msg},S)\), along with their respective shares of the template \(\overrightarrow{\mathbf {w}}\) and the signing key. The functionality outputs a signature on \(\mathsf {msg}\) to party \({P}^*\) if \(\mathsf {Dist}(\overrightarrow{\mathbf {u}},\overrightarrow{\mathbf {w}})=1\) and \(|S| =t\). Recently, several works [15, 24, 38, 45, 49] have shown how to construct two-round UC-secure MPC protocols in the CRS model in the presence of a broadcast channel from standard cryptographic assumptions. However, the issue with following this intuitive approach is that the communication model of our FTT primitive does not allow all parties to interact with each other - in particular, the parties in the set \(S\) cannot directly talk to each other and all communication has to be routed through the initiator. Armed with this insight, our goal now is to emulate a two-round MPC protocol \(\pi \) in our setting.

For simplicity, let us first consider \(n=t=3\). That is, there are three parties: \({P}_1,{P}_2,{P}_3\). Consider the case when \({P}_1\) is the initiator. Now, in the first round of our FTT scheme, \({P}_1\) sends \(\mathsf {msg}\) to both parties. Then, in round 2, we have \({P}_2\) and \({P}_3\) send their round one messages of the MPC protocol \(\pi \). In round 3 of our FTT scheme, \({P}_1\) sends its own round one message of the MPC protocol to both parties. Along with this, \({P}_1\) also sends \({P}_2\)’s round one message to \({P}_3\) and vice versa. So now, at the end of round 3 of our FTT scheme, all parties have exchanged their first round messages of protocol \(\pi \).

Our next observation is that since we care only about \({P}_1\) getting output, in the underlying protocol \(\pi \), only party \({P}_1\) needs to receive everyone else’s messages in round 2. Therefore, in round 4 of our FTT scheme, \({P}_2\) and \({P}_3\) can compute their round two messages based on the transcript so far and just send them to \({P}_1\). This will enable \({P}_1\) to compute the output of protocol \(\pi \).

Challenges. Unfortunately, the above scheme is insecure. Note that in order to rely on the security of protocol \(\pi \), we crucially need that for any honest party \({P}_i\), every other honest party receives the same first round message on its behalf. Also, we require that all honest parties receive the same messages on behalf of the adversary. In our case, since the communication is being controlled and directed by \({P}_1\) instead of a broadcast channel, this need not be true if \({P}_1\) was corrupt and \({P}_2,{P}_3\) were honest. Specifically, one of the following two things could occur: (i) \({P}_1\) can forward an incorrect version of \({P}_3\)’s round one message of protocol \(\pi \) to \({P}_2\) and vice versa. (ii) \({P}_1\) could send different copies of its own round 1 message of protocol \(\pi \) to both \({P}_2\) and \({P}_3\).

Signatures to Solve Challenge 1. To solve the first problem, we simply enforce that \({P}_3\) sends a signed copy of its round 1 message of protocol \(\pi \), which is forwarded by \({P}_1\) to \({P}_2\). Then, \({P}_2\) accepts the message as valid if the signature verifies. In the setup phase, we can distribute a signing key to \({P}_3\) and a verification key to everyone, including \({P}_2\). Similarly, we can ensure that \({P}_2\)’s actual round 1 message of protocol \(\pi \) is the one forwarded by \({P}_1\) to \({P}_3\).

Pseudorandom Functions to Solve Challenge 2. Tackling the second problem is a bit trickier. Instead of enforcing that \({P}_1\) send the same round 1 message of protocol \(\pi \) to both parties, the idea is to ensure that \({P}_1\) learns their round 2 messages of protocol \(\pi \) only if it did indeed send the same round 1 message to both parties. We now describe how to implement this mechanism. Let \(\mathsf {msg}_2\) denote \({P}_1\)’s round 1 message of protocol \(\pi \) sent to \({P}_2\) and \(\mathsf {msg}_3\) (possibly different from \(\mathsf {msg}_2\)) the one sent to \({P}_3\). In the setup phase, we distribute two keys \(k_2,k_3\) of a pseudorandom function (\(\mathsf {PRF}\)) to both \({P}_2,{P}_3\). Now, in round 4 of our FTT scheme, \({P}_3\) does the following: instead of sending its round 2 message of protocol \(\pi \) in the clear, it encrypts this message using a secret key encryption scheme with the key \(\mathsf {PRF}(k_3,\mathsf {msg}_3)\). Then, in round 4, along with its actual message, \({P}_2\) also sends \(\mathsf {PRF}(k_3,\mathsf {msg}_2)\), which equals the key used by \({P}_3\) to encrypt its round 2 message of protocol \(\pi \) only if \(\mathsf {msg}_2=\mathsf {msg}_3\). Similarly, we use the key \(k_2\) to ensure that \({P}_2\)’s round 2 message of protocol \(\pi \) is revealed to \({P}_1\) only if \(\mathsf {msg}_2=\mathsf {msg}_3\).
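A minimal sketch of this release mechanism, with HMAC-SHA256 playing the role of the \(\mathsf {PRF}\) and a one-time XOR pad playing the role of the secret key encryption (both are illustrative stand-ins, not the instantiation in the paper):

```python
import hashlib
import hmac
import secrets

def prf(key, msg):
    # HMAC-SHA256 as a stand-in PRF with 32-byte outputs.
    return hmac.new(key, msg, hashlib.sha256).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

k3 = secrets.token_bytes(32)  # PRF key given to both P2 and P3 in the setup phase

# P1 relays (possibly different) copies of its round-1 message of pi.
msg_to_p2 = b"P1 round-1 message of pi"
msg_to_p3 = b"P1 round-1 message of pi"  # honest P1 sends identical copies

# P3 encrypts its round-2 message of pi under the key PRF(k3, msg_to_p3).
round2_p3 = secrets.token_bytes(32)
ct = xor(round2_p3, prf(k3, msg_to_p3))

# P2 reveals PRF(k3, msg_to_p2); it decrypts ct only if the two copies match.
assert xor(ct, prf(k3, msg_to_p2)) == round2_p3

# Had P1 equivocated, the revealed key would be useless.
assert xor(ct, prf(k3, b"some other round-1 message")) != round2_p3
```

The key \(k_2\) is used symmetrically to protect \({P}_2\)’s round 2 message.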

The above approach naturally extends to arbitrary \(n, t\) by sharing two PRF keys between every pair of parties. There, each party encrypts its round 2 message of protocol \(\pi \) with a secret key that is an XOR of all the PRF evaluations. There are additional subtle issues when we try to formally prove that the above protocol is UC-secure and we refer the reader to the full version [12] for more details about the proof.

2.2 Threshold FHE Based Protocol

The basic idea behind our second protocol is to use an FHE scheme to perform the distance predicate computation between the measurement \(\overrightarrow{\mathbf {u}}\) and the template \(\overrightarrow{\mathbf {w}}\). In particular, in the setup phase, we generate the public key \(\mathsf {pk}\) of an FHE scheme and then in the enrollment phase, each party is given an encryption \(\mathsf {ct}_{\overrightarrow{\mathbf {w}}}\) of the template. In the sign on phase, an initiator \({P}^*\) can compute a ciphertext \(\mathsf {ct}_{\overrightarrow{\mathbf {u}}}\) that encrypts the measurement and send it to all the parties in the set \(S\), allowing each of them to individually compute a ciphertext \(\mathsf {ct}^*\) that homomorphically evaluates \(\mathsf {Dist}(\overrightarrow{\mathbf {u}},\overrightarrow{\mathbf {w}})\). However, the first challenge is how to decrypt this ciphertext \(\mathsf {ct}^*\). In other words, who gets the secret key \(\mathsf {sk}\) of the FHE scheme in the setup? If \(\mathsf {sk}\) is given to all parties in \(S\), then they can, of course, decrypt \(\mathsf {ct}_{\overrightarrow{\mathbf {u}}}\), violating privacy of the measurement. On the other hand, if \(\mathsf {sk}\) is given only to \({P}^*\), that allows \({P}^*\) to decrypt \(\mathsf {ct}_{\overrightarrow{\mathbf {w}}}\), violating privacy of the template.

Threshold FHE. Observe that this issue can be overcome if the secret key is secret shared amongst all the parties in \(S\) in such a way that each of them, using their secret key share \(\mathsf {sk}_i\), can produce a partial decryption of \(\mathsf {ct}^*\), all of which can then be combined by \({P}^*\) to decrypt \(\mathsf {ct}^*\). In fact, this is exactly the guarantee of threshold FHE. This brings us to the next issue: if only \({P}^*\) learns whether \(\mathsf {Dist}(\overrightarrow{\mathbf {u}},\overrightarrow{\mathbf {w}})=1\), how do the parties in \(S\) successfully transfer the threshold signature shares (recall that the transfer should be conditioned upon \(\mathsf {Dist}\) evaluating to 1)? One natural option is, in the homomorphic evaluation that produces the ciphertext \(\mathsf {ct}^*\), apart from just checking whether \(\mathsf {Dist}(\overrightarrow{\mathbf {u}},\overrightarrow{\mathbf {w}})=1\), to have the circuit also compute the partial signatures with respect to the threshold signature scheme if the check succeeds. However, the problem then is that, for threshold decryption, there must be a common ciphertext available to each party. In this case, however, each party would generate a partial signature using its own signing key share, resulting in a different ciphertext and in turn preventing threshold decryption.

Partial Signatures. To overcome this obstacle, at the beginning of the sign-on phase, each party computes its partial signature \(\sigma _i\) and information-theoretically encrypts it via one-time pad with a uniformly sampled one-time key \(K_i\). The parties then transfer the partial signatures in the same round in an encrypted manner without worrying about the result of the decryption. Now, to complete the construction, we develop a mechanism such that:

  • Whenever the FHE decryption results in 1, \({P}^*\) learns the set of one-time secret keys \(\{K_i\}\) and hence reconstructs the set of partial signatures \(\{\sigma _i\}\).

  • Whenever the FHE decryption results in 0, \({P}^*\) fails to learn any of the one-time secret keys, which in turn ensures that each of the partial signatures remains hidden from \({P}^*\).

To achieve that, we do the following: each party additionally broadcasts \(\mathsf {ct}_{K_i}\), which is an FHE encryption of its one-time secret key \(K_i\), to every other party during the enrollment phase. Additionally, we use t copies of the FHE circuit being evaluated as follows: the \(i^{\text {th}}\) circuit outputs \(K_i\) if \(\mathsf {Dist}(\overrightarrow{\mathbf {u}},\overrightarrow{\mathbf {w}})=1\) – that is, this circuit is homomorphically evaluated using the FHE ciphertexts \(\mathsf {ct}_{\overrightarrow{\mathbf {u}}},\mathsf {ct}_{\overrightarrow{\mathbf {w}}},\mathsf {ct}_{K_i}\). Now, at the end of the decryption, if \(\mathsf {Dist}(\overrightarrow{\mathbf {u}},\overrightarrow{\mathbf {w}})\) was indeed equal to 1, \({P}^*\) learns the set of one-time keys \(\{K_i\}\) via homomorphic evaluation and uses these to recover the corresponding partial signatures.
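Ignoring the FHE layer, the gating of partial signatures by the one-time keys can be sketched as follows; the plain conditional below stands in for the homomorphic evaluation and threshold decryption that actually release each \(K_i\), and all values are illustrative:

```python
import secrets

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

t = 3
partial_sigs = [secrets.token_bytes(64) for _ in range(t)]   # sigma_i (toy values)
one_time_keys = [secrets.token_bytes(64) for _ in range(t)]  # K_i

# Each party sends its padded partial signature right away, without knowing
# whether the measurement will match the template.
padded = [xor(s, k) for s, k in zip(partial_sigs, one_time_keys)]

def released_keys(dist_result):
    # Stand-in for the t homomorphically evaluated circuits: the i-th circuit
    # outputs K_i only when Dist(u, w) = 1.
    return one_time_keys if dist_result == 1 else None

# Match: P* recovers every sigma_i and can combine them into a token.
keys = released_keys(1)
recovered = [xor(c, k) for c, k in zip(padded, keys)]
assert recovered == partial_sigs

# No match: no keys are released, and the one-time pads hide every sigma_i.
assert released_keys(0) is None
```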

Consider the case where the adversary \(\mathcal {A} \) initiates a session with a measurement \(\overrightarrow{\mathbf {u}}\) such that \(\mathsf {Dist}(\overrightarrow{\mathbf {u}},\overrightarrow{\mathbf {w}})=0\). Our security proof formally establishes that the adversary \(\mathcal {A} \) learns no information about each one-time key \(K_i\) of the honest parties and hence about the corresponding signature share. At a high level, we exploit the simulation and semantic security guarantees of the threshold FHE scheme to: (a) simulate the FHE partial decryptions to correctly output 0 and (b) to switch each \(\mathsf {ct}_{K_i}\) to be an encryption of 0. At this point, we can switch each \(K_i\) to be a uniformly random string and hence “unrecoverable” to \(\mathcal {A} \). We refer the reader to Sect. 6 for more details.

NIZKs. One key issue is that parties may not behave honestly - that is, in the first round, \({P}^*\) might not run the FHE encryption algorithm honestly and similarly, in the second round, each party might not run the FHE partial decryption algorithm honestly, which could lead to devastating attacks. To solve this, we require each party to prove honest behavior using a non-interactive zero knowledge argument (NIZK). Finally, as in the previous section, to ensure that \({P}^*\) sends the same message \(\mathsf {ct}_{\overrightarrow{\mathbf {u}}}\) to all parties, we use a signature-based verification strategy, which adds two rounds, resulting in a four-round protocol.

2.3 Cosine Similarity: Single Corruption

In this section, we build a protocol for a specific distance measure (Cosine Similarity). It is more efficient than our feasibility results. On the flip side, it tolerates only one corruption: that is, our protocol is UC-secure in the random oracle model against a malicious adversary that can corrupt only one party. For two vectors \(\overrightarrow{\mathbf {u}},\overrightarrow{\mathbf {w}}\), \(\mathsf {CS.Dist}(\overrightarrow{\mathbf {u}},\overrightarrow{\mathbf {w}}) = \frac{\langle \overrightarrow{\mathbf {u}},\overrightarrow{\mathbf {w}}\rangle }{||\overrightarrow{\mathbf {u}}|| \cdot ||\overrightarrow{\mathbf {w}}||}\) where \(||\overrightarrow{x}||\) denotes the \(L^2\)-norm of the vector. \(\mathsf {Dist}(\overrightarrow{\mathbf {u}},\overrightarrow{\mathbf {w}})=1\) if \(\mathsf {CS.Dist}(\overrightarrow{\mathbf {u}},\overrightarrow{\mathbf {w}}) \ge d\) where d is a threshold fixed by \(\mathsf {Dist}\). Without loss of generality, assume that distribution \(\mathcal {W}\) samples vectors \(\overrightarrow{\mathbf {w}}\) with \(||\overrightarrow{\mathbf {w}}|| = 1\). Then, we check if \(\langle \overrightarrow{\mathbf {u}},\overrightarrow{\mathbf {w}}\rangle ^2 \ge d^2 \cdot \langle \overrightarrow{\mathbf {u}},\overrightarrow{\mathbf {u}}\rangle \) (together with \(\langle \overrightarrow{\mathbf {u}},\overrightarrow{\mathbf {w}}\rangle \ge 0\)) instead of \(\mathsf {CS.Dist}(\overrightarrow{\mathbf {u}},\overrightarrow{\mathbf {w}}) \ge d\). This syntactic change avoids computing norms and square roots, so the check can be expressed entirely in terms of inner products.
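Concretely, with \(||\overrightarrow{\mathbf {w}}|| = 1\), the condition \(\mathsf {CS.Dist}(\overrightarrow{\mathbf {u}},\overrightarrow{\mathbf {w}}) \ge d\) is equivalent to \(\langle \overrightarrow{\mathbf {u}},\overrightarrow{\mathbf {w}}\rangle \ge d \cdot ||\overrightarrow{\mathbf {u}}||\), which after squaring (plus a sign check, since squaring loses the sign) involves only inner products. A small sanity-check sketch with exact rationals and illustrative values:

```python
from fractions import Fraction

def inner(u, w):
    return sum(a * b for a, b in zip(u, w))

def dist(u, w, d):
    """Dist(u, w) for cosine similarity, assuming ||w|| = 1.

    CS.Dist(u, w) >= d  <=>  <u, w> >= d * ||u||
                        <=>  <u, w> >= 0 and <u, w>^2 >= d^2 * <u, u>,
    so no square root or division is ever computed.
    """
    ip = inner(u, w)
    return ip >= 0 and ip * ip >= Fraction(d) ** 2 * inner(u, u)

w = [Fraction(3, 5), Fraction(4, 5)]           # unit-norm template
assert dist([6, 8], w, Fraction(9, 10))        # parallel to w: cosine is 1
assert not dist([8, -6], w, Fraction(9, 10))   # orthogonal to w: cosine is 0
```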

Distributed Garbling. Our starting point is the following. Suppose we had \(t=2\). Then, we can just directly use Yao’s [56] two party semi-honest secure computation protocol as a building block to construct a two round FTT scheme. In the enrollment phase, secret share \(\overrightarrow{\mathbf {w}}\) into \(\overrightarrow{\mathbf {w}}_1,\overrightarrow{\mathbf {w}}_2\) and give one share to each party. The initiator requests labels via oblivious transfer (OT) corresponding to his share of \(\overrightarrow{\mathbf {w}}\) and input \(\overrightarrow{\mathbf {u}}\), while the garbled circuit, which has the other share of \(\overrightarrow{\mathbf {w}}\) hardwired, reconstructs \(\overrightarrow{\mathbf {w}}\), checks if \(\langle \overrightarrow{\mathbf {u}},\overrightarrow{\mathbf {w}}\rangle ^2 \ge d^2 \cdot \langle \overrightarrow{\mathbf {u}},\overrightarrow{\mathbf {u}}\rangle \) and if so, outputs a signature. This protocol is secure against a malicious initiator, who only has to evaluate the garbled circuit, if we use an OT protocol that is maliciously secure in the CRS model. However, to achieve malicious security against the garbler, we would need expensive zero knowledge arguments that prove correctness of the garbled circuit. Now, in order to build an efficient protocol that achieves security against a malicious garbler and works with threshold \(t=3\), the idea is to distribute the garbling process between two parties.

Consider an initiator \({P}_1\) interacting with parties \({P}_2,{P}_3\). The process below is repeated for every initiator and every pair of parties it must interact with; for ease of exposition, we consider only \({P}_1,{P}_2,{P}_3\) in this section. Both \({P}_2\) and \({P}_3\) generate one garbled circuit each using shared randomness generated during setup, and the evaluator simply checks that the two circuits are identical. Further, both \({P}_2\) and \({P}_3\) receive the share \(\overrightarrow{\mathbf {w}}_2\) and a share of the signing key in the enrollment and setup phases respectively. Since the adversary can corrupt at most one party, this check guarantees that the evaluator learns whether the garbled circuit was honestly generated. To ensure that the evaluator does not evaluate the two garbled circuits on different inputs, we also require the garbled circuits to check that \({P}_1\)'s OT receiver queries to both parties were the same. This approach is inspired by the three-party secure computation protocol of Mohassel et al. [44].

However, the issue here is that \({P}_1\) needs a mechanism to prove in zero knowledge that it is indeed using the share \(\overrightarrow{\mathbf {w}}_1\) received in the setup phase as input to the garbled circuit. Moreover, even setting this issue aside, the protocol is computationally expensive. For cosine similarity, the garbled circuit has to perform many costly operations: for vectors of length \(\ell \), we would have to perform \(O(\ell )\) multiplications inside the garbled circuit. As mentioned in the introduction, the number of features \(\ell \) in a template can be very large for applications like face recognition, so our goal is to improve the efficiency and scalability of the above protocol by performing only a constant number of multiplications inside the garbled circuit.

Additive Homomorphic Encryption. Our strategy for building an efficient protocol is to use additional rounds of communication to offload the heavy computation outside the garbled circuit and, along the way, solve the issue of the initiator using the correct share \(\overrightarrow{\mathbf {w}}_1\). In particular, if we can perform the inner product computation outside the garbled circuit in the first phase of the protocol, the garbled circuit in the second phase has to perform only a constant number of operations. To do so, we leverage efficient additively homomorphic encryption schemes [35, 47]. In our new protocol, in round 1 the initiator \({P}_1\) sends an encryption of \(\overrightarrow{\mathbf {u}}\); note that \({P}_1\) can compute \(\langle \overrightarrow{\mathbf {u}}, \overrightarrow{\mathbf {w}}_1 \rangle \) by itself. Both \({P}_2\) and \({P}_3\) respond with encryptions of \(\langle \overrightarrow{\mathbf {u}}, \overrightarrow{\mathbf {w}}_2 \rangle \), computed homomorphically using the same shared randomness. \({P}_1\) can decrypt these to compute \(\langle \overrightarrow{\mathbf {u}},\overrightarrow{\mathbf {w}}\rangle \). The parties can then run the garbled circuit based protocol as above in rounds 3 and 4 of our FTT scheme: that is, \({P}_1\) requests labels corresponding to \(\langle \overrightarrow{\mathbf {u}},\overrightarrow{\mathbf {w}}\rangle \) and \(\langle \overrightarrow{\mathbf {u}},\overrightarrow{\mathbf {u}}\rangle \), and the garbled circuit performs the rest of the check as before. While this protocol is correct and efficient, several issues remain.

Leaking Inner Product. The first problem is that the inner product \(\langle \overrightarrow{\mathbf {u}},\overrightarrow{\mathbf {w}}\rangle \) is currently leaked to the initiator \({P}_1\), violating the privacy of the template \(\overrightarrow{\mathbf {w}}\). To prevent this, we need a mechanism where no party learns the inner product in the clear and yet the check still happens inside the garbled circuit. A natural approach is for \({P}_2\) and \({P}_3\) to homomorphically compute an encryption of the result \(\langle \overrightarrow{\mathbf {u}}, \overrightarrow{\mathbf {w}}_2 \rangle \) under a very efficient secret key encryption scheme; in our case, a one-time pad suffices. Now \({P}_1\) learns only an encryption of this value, so the inner product stays hidden, while the garbled circuit, with the secret key hardwired into it, can easily remove the one-time pad.
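A minimal sketch of this masking step follows. For clarity the padding is shown in the clear; in the protocol \(P_2,P_3\) would apply it homomorphically to a ciphertext, and the modulus and values below are invented for the example.

```python
import secrets

q = 2**61 - 1  # toy modulus standing in for the plaintext space
               # of the additively homomorphic encryption scheme

def pad_result(x: int, otp_key: int) -> int:
    # P2/P3 compute this homomorphically on Enc(x); shown in the clear.
    return (x + otp_key) % q

def circuit_unmask(c: int, otp_key: int) -> int:
    # Inside the garbled circuit, with otp_key hardwired.
    return (c - otp_key) % q

k = secrets.randbelow(q)         # one-time pad key shared by P2, P3
inner = 1234567                  # the inner product <u, w_2>
c = pad_result(inner, k)         # all that P1 ever sees
assert circuit_unmask(c, k) == inner
```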

Input Consistency. The second major challenge is to ensure that the input on which \({P}_1\) wishes to evaluate the garbled circuit is indeed the output of the decryption. If not, \({P}_1\) could request to evaluate the garbled circuit on suitably high inputs of its choice, thereby violating unforgeability! To prevent this attack, \({P}_2\) and \({P}_3\) homomorphically compute not just \(x = \langle \overrightarrow{\mathbf {u}}, \overrightarrow{\mathbf {w}}_2 \rangle \) but also a message authentication code (MAC) y on the value x, using shared randomness generated in the setup phase. We use a simple one-time MAC that is computed using only linear operations and hence can be evaluated under the additively homomorphic encryption scheme. The garbled circuit additionally checks that the MAC verifies correctly; by the security of the MAC, \({P}_1\) cannot change the input between the two stages. We also require \({P}_1\) to send encryptions of \(\langle \overrightarrow{\mathbf {u}}, \overrightarrow{\mathbf {u}}\rangle \) in round 1, so that \({P}_2,{P}_3\) can compute a MAC on this value as well, preventing \({P}_1\) from cheating on this part of the computation too.
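A sketch of such a one-time MAC, over an invented toy modulus: the tag \(y = a \cdot x + b \bmod q\) is linear in x, which is exactly what lets \(P_2,P_3\) evaluate it under additively homomorphic encryption of x.

```python
import secrets

q = 2**61 - 1  # toy prime modulus (assumption for the example)

def mac(x: int, a: int, b: int) -> int:
    # One-time MAC y = a*x + b (mod q): linear in x, so it can be
    # computed homomorphically on an additive encryption of x.
    return (a * x + b) % q

def mac_verify(x: int, y: int, a: int, b: int) -> bool:
    # Performed inside the garbled circuit, with (a, b) hardwired.
    return mac(x, a, b) == y

a = secrets.randbelow(q - 1) + 1   # nonzero key part
b = secrets.randbelow(q)
x = 424242                         # the masked inner product
y = mac(x, a, b)
assert mac_verify(x, y, a, b)
assert not mac_verify(x + 1, y, a, b)   # tampering with x breaks the tag
```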

Ciphertext Well-Formedness. Another important issue is ensuring that \({P}_1\) does indeed send well-formed encryptions. To do so, we rely on efficient zero knowledge arguments from the literature [30, 31] when instantiating the additively homomorphic encryption scheme with the Paillier encryption scheme [47]. For technical reasons, we also need the homomorphic encryption scheme to be circuit-private. We refer the reader to the full version [12] for more details. Observe that in our final protocol, the garbled circuit performs only a constant number of multiplications, which makes the protocol computationally efficient and scalable.
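For reference, here is a toy Paillier instance exhibiting the additive homomorphism the protocol relies on: multiplying ciphertexts adds plaintexts, and raising a ciphertext to a constant multiplies the plaintext by that constant. The 9-bit primes are for illustration only; real deployments use a modulus of roughly 2048 bits.

```python
import math
import random

# Toy Paillier parameters (do NOT use sizes like this in practice).
p, q_ = 293, 433
n = p * q_
n2 = n * n
lam = (p - 1) * (q_ - 1) // math.gcd(p - 1, q_ - 1)  # lcm(p-1, q-1)
g = n + 1

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)

def enc(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = enc(20), enc(22)
assert dec((c1 * c2) % n2) == 42   # Enc(a) * Enc(b) decrypts to a + b
assert dec(pow(c1, 3, n2)) == 60   # Enc(a)^k decrypts to k * a
```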

Optimizations. To further improve the efficiency of our protocol, as done in Mohassel et al. [44], we will require only one of the two parties \({P}_2,{P}_3\) to actually send the garbled circuit. The other party can just send a hash of the garbled circuit and the initiator can check that the hash values are equal. We refer to Sect. 7 for more details on this and other optimizations.

3 Preliminaries

Let \(\mathcal {P}_1,\ldots ,\mathcal {P}_n\) denote the n parties and \(\lambda \) the security parameter. Recall that the \(L^2\) norm of a vector \(\overrightarrow{\mathbf {x}}= (\overrightarrow{\mathbf {x}}_1,\ldots ,\overrightarrow{\mathbf {x}}_n)\) is defined as \(||\overrightarrow{\mathbf {x}}|| = \sqrt{\overrightarrow{\mathbf {x}}^2_1+\ldots + \overrightarrow{\mathbf {x}}^2_n}\). \(\langle \overrightarrow{\mathbf {u}}, \overrightarrow{\mathbf {w}}\rangle \) denotes the inner product between two vectors \(\overrightarrow{\mathbf {u}},\overrightarrow{\mathbf {w}}\).

Definition 1

(Cosine Similarity). For any two vectors \(\overrightarrow{\mathbf {u}},\overrightarrow{\mathbf {w}}\in \mathbb {Z}_q^\ell \), the Cosine Similarity between them is defined as follows:

$$ \mathsf {CS.Dist}(\overrightarrow{\mathbf {u}},\overrightarrow{\mathbf {w}}) = \frac{\langle \overrightarrow{\mathbf {u}}, \overrightarrow{\mathbf {w}}\rangle }{||\overrightarrow{\mathbf {u}}||\cdot ||\overrightarrow{\mathbf {w}}||} .$$

When using this distance measure, we say that \(\mathsf {Dist}(\overrightarrow{\mathbf {u}},\overrightarrow{\mathbf {w}})=1\) if and only if \(\mathsf {CS.Dist}(\overrightarrow{\mathbf {u}},\overrightarrow{\mathbf {w}}) \ge d\), where d is a parameter specified by \(\mathsf {Dist}(\cdot )\).

3.1 Threshold Signature

Definition 2

(Threshold Signature [18]). Let \(n,t\in \mathbb {N}\). A threshold signature scheme \(\mathsf {TS}\) is a tuple of four algorithms \((\mathsf {Gen},\mathsf {Sign}, \mathsf {Comb},\mathsf {Ver})\) that satisfy the correctness condition below.

  • \(\mathsf {Gen}(1^\lambda ,n,t)\rightarrow (\mathsf {pp},\mathsf {vk},{[\![\mathbf{sk}]\!]}_{n})\). A randomized algorithm that takes n, t and the security parameter \(\lambda \) as input, and generates public parameters \(\mathsf {pp}\), a verification key \(\mathsf {vk}\) and a shared signing key \({[\![\mathbf{\mathsf {sk}}]\!]}_{n}\).

  • \(\mathsf {Sign}(\mathsf {sk}_i,m) =:\sigma _i\). A deterministic algorithm that takes a message m and a signing key-share \(\mathsf {sk}_i\) as input and outputs a partial signature \(\sigma _i\).

  • \(\mathsf {Comb}(\{\sigma _i\}_{i\in S})=:\sigma /\bot \). A deterministic algorithm that takes a set of partial signatures \(\{\sigma _i\}_{i\in S}\) as input and outputs either a signature \(\sigma \) or \(\bot \), denoting failure.

  • \(\mathsf {Ver}(\mathsf {vk},(m,\sigma )) =:1/0\). A deterministic algorithm that takes a verification key \(\mathsf {vk}\) and a candidate message-signature pair \((m,\sigma )\) as input, and outputs 1 for a valid signature and 0 otherwise.

  • Correctness. For all \(\lambda \in \mathbb {N}\), any \(t, n \in \mathbb {N}\) such that \(t \le n\), all \((\mathsf {pp},\mathsf {vk},{[\![\mathbf{sk}]\!]}_{n})\) generated by \(\mathsf {Gen}(1^\lambda ,n,t)\), any message m, and any set \(S\subseteq [n]\) of size at least t, if \(\sigma _i = \mathsf {Sign}(\mathsf {sk}_i, m)\) for \(i \in S\), then \(\mathsf {Ver}(\mathsf {vk}, (m, \mathsf {Comb}(\{\sigma _i\}_{i \in S}))) = 1\).
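The syntax and correctness condition above can be illustrated with a deliberately insecure toy instantiation. Here \(\mathsf {vk} = \mathsf {sk}\), so the scheme is a symmetric object with no public verifiability or unforgeability; it exists only to exhibit the \((\mathsf {Gen},\mathsf {Sign},\mathsf {Comb},\mathsf {Ver})\) interface and the t-out-of-n combination structure. Real instantiations appear in [18, 20, 54].

```python
import hashlib
import random

P = 2**127 - 1  # prime modulus for Shamir sharing (toy parameter)

def H(m: bytes) -> int:
    return int.from_bytes(hashlib.sha256(m).digest(), "big") % P

def gen(n: int, t: int):
    # Shamir-share a signing key sk = f(0) with a degree-(t-1) polynomial.
    coeffs = [random.randrange(P) for _ in range(t)]
    shares = {i: sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P
              for i in range(1, n + 1)}
    return coeffs[0], shares          # (vk = sk: toy only!, [[sk]]_n)

def sign(i: int, sk_i: int, m: bytes):
    return (i, sk_i * H(m) % P)       # partial signature f(i) * H(m)

def comb(partials):
    # Lagrange-interpolate sk * H(m) at x = 0 from t partial signatures.
    xs = [i for i, _ in partials]
    out = 0
    for i, s in partials:
        l = 1
        for j in xs:
            if j != i:
                l = l * j % P * pow(j - i, -1, P) % P
        out = (out + s * l) % P
    return out

def ver(vk: int, m: bytes, sigma: int) -> bool:
    return sigma == vk * H(m) % P

vk, shares = gen(n=5, t=3)
m = b"login-challenge"
parts = [sign(i, shares[i], m) for i in (1, 3, 5)]   # any t parties
assert ver(vk, m, comb(parts))
```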

Definition 3 (Unforgeability)

A threshold signature scheme \(\mathsf {TS}= (\mathsf {Gen},\mathsf {Sign}, \mathsf {Comb},\mathsf {Ver})\) is unforgeable if for all \(n,t\in \mathbb {N}\) with \(t \le n\) and any PPT adversary \(\mathcal {A}\), the following game outputs 1 with negligible probability (in the security parameter \(\lambda \)).

  • Initialize. Run \((\mathsf {pp},\mathsf {vk},{[\![\mathbf{sk}]\!]}_{n})\leftarrow \mathsf {Gen}(1^\lambda ,n,t)\). Give \(\mathsf {pp},\mathsf {vk}\) to \(\mathcal {A}\). Receive the set of corrupt parties \(C\subset [n]\), of size at most \(t-1\), from \(\mathcal {A}\). Then give \({[\![\mathbf{\mathsf {sk}}]\!]}_{C}\) to \(\mathcal {A}\). Define \(\gamma :=t - |C|\). Initialize a list \(L:=\emptyset \).

  • Signing queries. On a query (m, i) for \(i\in [n] \setminus C\), return \(\sigma _i\leftarrow \mathsf {Sign}(\mathsf {sk}_i,m)\). This step may be run as many times as \(\mathcal {A}\) desires.

  • Building the list. If the number of signing queries of the form (m, i) is at least \(\gamma \), insert m into the list \(L\). (This captures that \(\mathcal {A}\) has enough information to compute a signature on m.)

  • Output. Eventually receive output \((m^\star ,\sigma ^\star )\) from \(\mathcal {A}\). Return 1 if and only if \(\mathsf {Ver}(\mathsf {vk},(m^\star ,\sigma ^\star )) = 1\) and \(m^\star \not \in L\), and 0 otherwise.

4 Formalizing Fuzzy Threshold Tokenizer (FTT)

In this section, we formally introduce the notion of a \(fuzzy \ threshold \ tokenizer\) (\(\text {FTT}\)) and give a UC-secure definition. We first describe the algorithms/protocols of the primitive, followed by the security definition in the next subsection.

Definition 4 (Fuzzy Threshold Tokenizer (FTT))

Given a security parameter \(\lambda \in \mathbb {N}\), a threshold signature scheme \(\mathsf {TS}= (\mathsf {TS.Gen},\mathsf {TS.Sign},\mathsf {TS.Combine}, \mathsf {TS.Verify})\), biometric space parameters \(q,\ell \in \mathbb {N}\), a distance predicate \(\mathsf {Dist}:\mathbb {Z}_q^\ell \times \mathbb {Z}_q^\ell \rightarrow \{0,1\}\), \(n\in \mathbb {N}\) parties \(\mathcal {P}_1,\ldots ,\mathcal {P}_n\) and a threshold of parties \(t\in [n]\), a FTT scheme/protocol consists of the following tuple \((\mathsf {Setup}, \mathsf {Enrollment},\mathsf {SignOn},\mathsf {Ver})\) of algorithms/protocols:

  • \(\mathsf {Setup}(1^\lambda ,n,t,\mathsf {TS}) \rightarrow (\mathsf {pp}_{\mathsf {setup}}, \{s_i,\mathsf {sk}^{\mathsf {TS}}_i\}_{i\in [n]},\mathsf {vk})\) : The \(\mathsf {Setup}\) algorithm is run by a trusted authority. It first runs the key-generation of the threshold signature scheme, \((\{\mathsf {sk}^{\mathsf {TS}}_i\}_{i\in [n]},\) \(\mathsf {vk})\leftarrow \mathsf {Gen}(1^\lambda ,n,t)\). It generates other public parameters \(\mathsf {pp}_{\mathsf {setup}}\) and secret values \(s_1,\ldots ,s_n\) for each party respectively. It outputs \((\mathsf {vk},\mathsf {pp}_{\mathsf {setup}})\) to every party and secrets \((\mathsf {sk}^{\mathsf {TS}}_i,s_i)\) to each party \(\mathcal {P}_i\). (\(\mathsf {pp}_{\mathsf {setup}}\) will be an implicit input in all the algorithms below.)

  • \(\mathsf {Enrollment}(n, t, q, \ell ,\mathsf {Dist})\rightarrow (\{a_i\}_{i\in [n]})\) : On input the parameters from any party, this algorithm is run by the trusted authority, which chooses a random sample \(\overrightarrow{\mathbf {w}}\leftarrow \mathcal {W}\). Then, each party \(\mathcal {P}_i\) receives some information \(a_i\).

  • \(\mathsf {SignOn}(\cdot )\) : \(\mathsf {SignOn}\) is a distributed protocol involving a party \(\mathcal {P}^*\) along with a set \(S\) of parties. Party \(\mathcal {P}^*\) has as input a measurement \(\overrightarrow{\mathbf {u}}\), a message \(\mathsf {msg}\) and its secret information \((s_*,\mathsf {sk}^{\mathsf {TS}}_*)\). Each party \(\mathcal {P}_i \in S\) has input \((s_i,\mathsf {sk}^{\mathsf {TS}}_i)\). At the end of the protocol, \(\mathcal {P}^*\) obtains a (private) token \(\mathsf {Token}\) (or \(\bot \), denoting failure) as output. Each party \(\mathcal {P}_i\in S\) gets output \((\mathsf {msg}, i, S)\). The trusted authority is not involved in this protocol.

  • \(\mathsf {Ver}(\mathsf {vk},\mathsf {msg},\mathsf {Token})\rightarrow \{0,1\}\) : \(\mathsf {Ver}\) is an algorithm which takes input verification key \(\mathsf {vk}\), message \(\mathsf {msg}\) and token \(\mathsf {Token}\), runs the verification algorithm of the threshold signature scheme \(b:=\mathsf {TS.Verify}(\mathsf {vk},(\mathsf {msg},\mathsf {Token}))\), and outputs \(b\in \{0,1\}\). This can be run locally by any party or even any external entity.

Communication Model. In the \(\mathsf {SignOn}(\cdot )\) protocol, only party \(\mathcal {P}^*\) can communicate directly with every party in the set \(S\). We stress that the other parties in \(S\) cannot interact directly with each other.

4.1 Security Definition

We formally define security via the universal composability (UC) framework [26]. Similar to the simplified UC framework [28], we assume the existence of a default authenticated channel in the real world. This simplifies the definition of our ideal functionality and can be removed easily by composing with an ideal authenticated channel functionality (e.g. [27]).

Consider n parties \({P}_1,\ldots ,{P}_n\). We consider a fixed number of parties in the system throughout the paper; that is, no new party can join the execution later. Let \(\pi ^\mathsf {TS}\) be an FTT scheme parameterized by a threshold signature scheme \(\mathsf {TS}\). Consider an adversarial environment \(\mathcal {Z}\). We consider a static corruption model where a fixed set of corrupt parties is decided a priori.Footnote 5 Informally, it is required that for every adversary \(\mathcal {A} \) that corrupts some subset of the parties and participates in the real execution of the protocol, there exists an ideal world adversary \(\mathsf {Sim}\) such that for all environments \(\mathcal {Z}\), the view of the environment is the same in both worlds. We describe this more formally below.

Real World. In the real execution, the FTT protocol \(\pi ^\mathsf {TS}\) is executed in the presence of an adversary \(\mathcal {A} \). The adversary \(\mathcal {A} \) takes as input the security parameter \(\lambda \) and corrupts a subset of parties. Initially, the \(\mathsf {Setup}\) algorithm is implemented by a trusted authority. The honest parties follow the instructions of \(\pi ^\mathsf {TS}\). That is, whenever they receive an “Enrollment” query from \(\mathcal {Z}\), they run the \(\mathsf {Enrollment}\) phase of \(\pi ^\mathsf {TS}\). Similarly, whenever they receive a “Sign on” query from \(\mathcal {Z}\) with input \((\mathsf {msg},\overrightarrow{\mathbf {u}},S)\), they initiate a \(\mathsf {SignOn}(\cdot )\) protocol with the parties in set \(S\) using input \((\mathsf {msg},\overrightarrow{\mathbf {u}},S,\mathsf {sk}^{\mathsf {TS}}_i)\). If a \(\mathsf {SignOn}(\cdot )\) protocol is initiated with them by any other party, they participate honestly using input \(\mathsf {sk}^{\mathsf {TS}}_i\). \(\mathcal {A} \) sends all messages of the protocol on behalf of the corrupt parties, following an arbitrary polynomial-time strategy. We assume that parties are connected by point-to-point secure and authenticated channels.

Ideal World. The ideal world is defined by a trusted ideal functionality \(\mathcal{F}_{\tiny {\textsc {ftt }}}^{\tiny \mathsf{TS}}\), described in Fig. 1, that interacts with n ideal dummy parties \(\mathcal {P}_1,\ldots ,\mathcal {P}_n\) and an ideal world adversary, a.k.a. the simulator \(\mathsf {Sim}\), via secure (and authenticated) channels. The simulator can corrupt a subset of the parties and may fully control them. We discuss the ideal functionality in more detail below.

The environment sets the inputs for all parties, including the adversaries, and obtains their outputs in both worlds. However, the environment does not observe any internal interaction. For example, in the ideal world such interactions take place between the ideal functionality and another entity (a dummy party, or the simulator); in the real world they take place among the real parties. Finally, once the execution is over, the environment outputs a bit denoting either the real or the ideal world. For an ideal functionality \(\mathcal {F}\), adversary \(\mathcal {A}\), simulator \(\mathsf {Sim}\), environment \(\mathcal {Z}\) and protocol \(\pi \), we formally denote the output of \(\mathcal {Z}\) by the random variable \(\text {IDEAL}_{\mathcal {F},\mathsf {Sim},\mathcal {Z}}\) in the ideal world and \(\text {REAL}_{\pi ,\mathcal {A},\mathcal {Z}}\) in the real world. We describe the ideal functionality for a FTT scheme in Fig. 1 and elaborate on it in the next subsection.

Definition 5 (UC-Realizing FTT)

Let \(\mathsf {TS}\) be a threshold signature scheme (Definition 2), \(\mathcal{F}_{\tiny {\textsc {ftt }}}^{\tiny \mathsf{TS}}\) be the ideal functionality described in Fig. 1 and \(\pi ^\mathsf {TS}\) be a FTT scheme. \(\pi ^\mathsf {TS}\) UC-realizes \(\mathcal{F}_{\tiny {\textsc {ftt }}}^{\tiny \mathsf{TS}}\) if for any real world PPT adversary \(\mathcal {A}\), there exists a PPT simulator \(\mathsf {Sim}\) such that for all environments \(\mathcal {Z}\),

$$\text {IDEAL}_{\mathcal{F}_{\tiny {\textsc {ftt }}}^{\tiny \mathsf{TS}},\mathsf {Sim},\mathcal {Z}}\approx _c\text {REAL}_{\pi ^\mathsf {TS},\mathcal {A},\mathcal {Z}}$$

Intuitively, for any adversary there should be a simulator that can simulate its behavior such that no environment can distinguish between the two worlds. Our definition can also capture setup assumptions such as random oracles by considering a \(\mathcal {G}\)-hybrid model with an ideal functionality \(\mathcal {G}\) for the setup.

Ideal Functionality \(\mathcal{F}_{\tiny {\textsc {ftt }}}^{\tiny \mathsf{TS}}\). The ideal functionality we consider is presented formally in Fig. 1; we provide an informal exposition here. Unlike most UC ideal functionalities, our ideal functionality \(\mathcal{F}_{\tiny {\textsc {ftt }}}^{\tiny \mathsf{TS}}\) is parameterized by a threshold signature scheme \(\mathsf {TS}\) \(=\) (\(\mathsf {TS.Gen}\), \(\mathsf {TS.Sign}\), \(\mathsf {TS.Combine}\), \(\mathsf {TS.Verify}\)) (see the discussion about this choice later in this section). The ideal functionality is parameterized by a distance predicate \(\mathsf {Dist}\), which takes two vectors, a template and a candidate measurement, and returns 1 if and only if the two vectors are “close”. Additionally, the functionality is parameterized by other standard parameters and a probability distribution over the biometric vectors.

The ideal functionality has an interface to handle queries from different parties. For a particular session, the first query it responds to is \(``\mathtt {Setup}"\) from \(\mathsf {Sim}\). In response, the functionality \(\mathcal{F}_{\tiny {\textsc {ftt }}}^{\tiny \mathsf{TS}}\) generates the key pairs of the given threshold signature scheme, gives control of the corrupt parties to the simulator and marks the session \(``\textsc {Live}"\). Then, an \(``\mathtt {Enroll}"\) query can be made by any party. \(\mathcal{F}_{\tiny {\textsc {ftt }}}^{\tiny \mathsf{TS}}\) chooses a template \(\overrightarrow{\mathbf {w}}\) at random from the distribution \(\mathcal {W}\), stores it and marks the session as \(``\textsc {Enrolled}"\).

For any \(``\textsc {Enrolled}"\) session, \(\mathcal{F}_{\tiny {\textsc {ftt }}}^{\tiny \mathsf{TS}}\) can receive many \(``\mathtt {SignOn}"\) queries (the previous two queries are allowed only once per session). This is ensured by not marking the session in response to any such query. A \(``\mathtt {SignOn}"\) query from a party \(\mathcal {P}_i\) contains a set \(S\) of parties (i.e. their identities), a message to be signed and a candidate measurement \(\overrightarrow{\mathbf {u}}\). If the set \(S\) contains any corrupt party, \(\mathcal{F}_{\tiny {\textsc {ftt }}}^{\tiny \mathsf{TS}}\) reaches out to the simulator for a response; this captures a corrupt party’s power to deny a request.

Fig. 1. The ideal functionality \(\mathcal{F}_{\tiny {\textsc {ftt }}}^{\tiny \mathsf{TS}}\).

Then, \(\mathcal{F}_{\tiny {\textsc {ftt }}}^{\tiny \mathsf{TS}}\) checks whether the measurement \(\overrightarrow{\mathbf {u}}\) is “close enough” by computing \(b:=\mathsf {Dist}(\overrightarrow{\mathbf {u}},\overrightarrow{\mathbf {w}})\). If b is 1, \(|S| = t\) and all parties in \(S\) send an agreement response, \(\mathcal{F}_{\tiny {\textsc {ftt }}}^{\tiny \mathsf{TS}}\) generates the partial signatures (tokens) on behalf of the parties in \(S\) and sends them only to the initiator \(\mathcal {P}_i\); otherwise, it sends \(\bot \), denoting failure, to \(\mathcal {P}_i\). Note that the signatures (and even the failure messages) are not sent to the simulator unless the initiator \(\mathcal {P}_i\) is corrupt. This is crucial for our definition: it ensures that if a \(``\mathtt {SignOn}"\) query is initiated by an honest party, the simulator does not obtain anything directly; only when there is a corrupt party in \(S\) does it learn that such a query has been made, and even then it learns only the tuple \((m, \mathcal {P}_i,S)\) corresponding to the query. In fact, no one except the initiator learns whether \(``\mathtt {SignOn}"\) was successful. Intuitively, a protocol realizing \(\mathcal{F}_{\tiny {\textsc {ftt }}}^{\tiny \mathsf{TS}}\) must guarantee that a corrupt party cannot compute a valid sign-on token (signature) just by participating in a session started by an honest party. In our definition of \(\mathcal{F}_{\tiny {\textsc {ftt }}}^{\tiny \mathsf{TS}}\), such a token would be considered a forgery. To the best of our knowledge, this feature has not been considered in prior works on threshold signatures.

We provide more discussions on our definition in the full version [12].

5 Any Distance Measure from MPC

In this section, we show how to construct a four round secure fuzzy threshold tokenizer using any two round malicious UC-secure MPC protocol over a broadcast channel as the main technical tool. Our tokenizer scheme satisfies Definition 4 for any n, t and any distance measure. Formally, we show the following theorem:

Theorem 1

Assuming unforgeable threshold signatures and a two round UC-secure MPC protocol in the CRS model over a broadcast channel, there exists a four round secure fuzzy threshold tokenizer protocol for any n, t and any distance predicate.

Such two round MPC protocols can be built assuming DDH/LWE/QR/\(N^{th}\) Residuosity [15, 38, 45, 49]. Threshold signatures can be built assuming LWE/Gap-DDH/RSA [18, 20, 54]. Instantiating both from LWE, we get the following corollary:

Corollary 1

Assuming LWE, there exists a four round secure \(\text {FTT}\) protocol for any n, t and any distance predicate.

We describe the construction below and defer the proof to the full version [12].

5.1 Construction

Notation. Let \(\pi \) be a two round UC-secure MPC protocol in the CRS model in the presence of a broadcast channel that is secure against a malicious adversary corrupting up to \((t-1)\) parties. Let \(\pi \mathsf {.Setup}\) denote the algorithm used to generate the CRS. Let \((\pi \mathsf {.Round}_{1},\pi \mathsf {.Round}_{2})\) denote the algorithms used by any party to compute the messages in each of the two rounds, and \(\pi \mathsf {.Out}\) the algorithm to compute the final output. Let \((\mathsf {TS.Gen},\mathsf {TS.Sign},\mathsf {TS.Combine},\mathsf {TS.Verify})\) be a threshold signature scheme, \((\mathsf {Gen},\mathsf {Sign},\mathsf {Verify})\) a digital signature scheme, \((\mathsf {SKE.Enc},\mathsf {SKE.Dec})\) a secret key encryption scheme, \((\mathsf {Share},\mathsf {Recon})\) a (t, n) threshold secret sharing scheme and \(\mathsf {PRF}\) a pseudorandom function. We now describe the construction of our four round secure fuzzy threshold tokenizer protocol \(\pi ^{\mathsf {Any}}\) for any n and t.

Setup: The following algorithm is executed by a trusted authority:

  • Generate \(\mathsf {crs}\leftarrow \pi \mathsf {.Setup}(1^\lambda )\).

  • For each \(i \in [n]\), compute \((\mathsf {sk}_i,\mathsf {vk}_i) \leftarrow \mathsf {Gen}(1^\lambda )\).

  • For every \(i,j \in [n]\), sample \((\mathsf {k}^{\mathsf {PRF}}_{i,j},\mathsf {k}^{\mathsf {PRF}}_{j,i})\) as uniformly random strings.

  • Compute \((\mathsf {pp}^{\mathsf {TS}},\mathsf {vk}^{\mathsf {TS}},\mathsf {sk}^{\mathsf {TS}}_1,\ldots ,\mathsf {sk}^{\mathsf {TS}}_n) \leftarrow \mathsf {TS.Gen}(1^\lambda , n, t)\).

  • For each \(i \in [n]\), give \((\mathsf {crs}, \mathsf {pp}^{\mathsf {TS}},\mathsf {vk}^{\mathsf {TS}},\mathsf {sk}^{\mathsf {TS}}_i, \mathsf {sk}_i, \{\mathsf {vk}_j\}_{j \in [n]}, \{\mathsf {k}^{\mathsf {PRF}}_{j,i},\mathsf {k}^{\mathsf {PRF}}_{i,j}\}_{j \in [n]} )\) to party \(\mathcal {P}_i\).

Enrollment: In this phase, any party \(\mathcal {P}_i\) that wishes to enroll queries the trusted authority which then does the following:

  • Sample a random vector \(\overrightarrow{\mathbf {w}}\) from the distribution \(\mathcal {W}\).

  • Compute \((\overrightarrow{\mathbf {w}}_1,\ldots ,\overrightarrow{\mathbf {w}}_n) \leftarrow \mathsf {Share}(1^\lambda ,\overrightarrow{\mathbf {w}},n,t)\).

  • For each \(i \in [n]\), give \((\overrightarrow{\mathbf {w}}_i)\) to party \(\mathcal {P}_i\).

SignOn Phase: In the SignOn phase, consider a party \(\mathcal {P}^*\) with an input vector \(\overrightarrow{\mathbf {u}}\) and a message \(\mathsf {msg}\) on which it wants a token. \(\mathcal {P}^*\) interacts with the other parties in the four round protocol below.

Party \(\mathcal {P}^*\) does the following:Footnote 6

  1. Pick a set \(S\) consisting of t parties amongst \(\mathcal {P}_1,\ldots ,\mathcal {P}_n\). For simplicity and without loss of generality, we assume that \(\mathcal {P}^*\) is also part of the set \(S\).

  2. To each party \(\mathcal {P}_i \in S\), send \((\mathsf {msg},S)\).

Each Party \(\mathcal {P}_i \in S\) (except \(\mathcal {P}^*\)) does the following:

  1. Participate in an execution of protocol \(\pi \) with the parties in set \(S\), using input \(\mathsf {y}_i = (\overrightarrow{\mathbf {w}}_i,\mathsf {sk}^{\mathsf {TS}}_i)\) and randomness \(\mathsf {r}_i\), to compute the circuit \(\mathcal {C}\) defined in Fig. 2. Compute the first round message \(\mathsf {msg}_{1,i} \leftarrow \pi \mathsf {.Round}_{1}(\mathsf {y}_i; \mathsf {r}_i)\).

  2. Compute \(\sigma _{1,i} = \mathsf {Sign}(\mathsf {sk}_i, \mathsf {msg}_{1,i})\).

  3. Send \((\mathsf {msg}_{1,i}, \sigma _{1,i})\) to party \(\mathcal {P}^*\).

Party \(\mathcal {P}^*\) does the following:

  1. Let \(\mathsf {Trans}_\mathsf {\text {fuzzy threshold tokenizer}}\) denote the set of messages received in round 2.

  2. Participate in an execution of protocol \(\pi \) with the parties in set \(S\), using input \(\mathsf {y}_* = (\overrightarrow{\mathbf {w}}_*,\mathsf {sk}^{\mathsf {TS}}_*,\overrightarrow{\mathbf {u}},\mathsf {msg})\) and randomness \(\mathsf {r}_*\), to compute the circuit \(\mathcal {C}\) defined in Fig. 2. Compute the first round message \(\mathsf {msg}_{1,*} \leftarrow \pi \mathsf {.Round}_{1}(\mathsf {y}_*; \mathsf {r}_*)\).

  3. To each party \(\mathcal {P}_i \in S\), send \((\mathsf {Trans}_\mathsf {\text {fuzzy threshold tokenizer}}, \mathsf {msg}_{1,*})\).

Each Party \(\mathcal {P}_i \in S\) (except \(\mathcal {P}^*\)) does the following:

  1. Let \(\mathsf {Trans}_\mathsf {\text {fuzzy threshold tokenizer}}\) consist of a set of messages of the form \((\mathsf {msg}_{1,j}, \sigma _{1,j})\) for all \(j \in S \setminus \{\mathcal {P}^*\}\). Output \(\bot \) if \(\mathsf {Verify}(\mathsf {vk}_j,\mathsf {msg}_{1,j}, \sigma _{1,j}) \ne 1\) for any j.

  2. Let \(\tau _1 = \{\mathsf {msg}_{1,j}\}_{j \in S}\) denote the transcript of protocol \(\pi \) after round 1. Compute the second round message \(\mathsf {msg}_{2,i} \leftarrow \pi \mathsf {.Round}_{2}(\mathsf {y}_i, \tau _1; \mathsf {r}_i)\).

  3. Let \((\mathsf {Trans}_\mathsf {\text {fuzzy threshold tokenizer}}, \mathsf {msg}_{1,*})\) denote the message received from \(\mathcal {P}^*\) in round 3. Compute \({\mathsf {ek}_i = \oplus _{j \in S} \mathsf {PRF}(\mathsf {k}^{\mathsf {PRF}}_{i,j} , \mathsf {msg}_{1,*})}\) and \(\mathsf {ct}_i = \mathsf {SKE.Enc}(\mathsf {ek}_i, \mathsf {msg}_{2,i})\).

  4. For each party \(\mathcal {P}_j \in S\), compute \(\mathsf {ek}_{j,i} = \mathsf {PRF}(\mathsf {k}^{\mathsf {PRF}}_{j,i},\mathsf {msg}_{1,*})\).

  5. Send \((\mathsf {ct}_i, \{\mathsf {ek}_{j,i}\}_{j \in S} )\) to \(\mathcal {P}^*\).
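The key-wrapping mechanism in steps 3-5 can be sketched as follows. The point is that \(\mathcal {P}^*\) can reassemble \(\mathsf {ek}_j\), and hence read \(\mathsf {msg}_{2,j}\), only after every party in \(S\) has replied, and the keys are bound to \(\mathsf {msg}_{1,*}\). SHA-256 stands in for the PRF, and the party indices and key derivation are invented for the example.

```python
import hashlib
from functools import reduce

def prf(key: bytes, x: bytes) -> bytes:
    return hashlib.sha256(key + x).digest()  # toy PRF (assumption)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

S = [2, 3, 4]                                # parties in the sign-on set
keys = {(i, j): hashlib.sha256(f"k{i},{j}".encode()).digest()
        for i in S for j in S}               # pairwise keys from setup
msg1 = b"round-1 message of P*"

# P_i wraps its round-2 message under ek_i = XOR_{j in S} PRF(k_{i,j}, msg1).
ek = {i: reduce(xor, (prf(keys[(i, j)], msg1) for j in S)) for i in S}

# Each P_j also sends P* the shares ek_{i,j} = PRF(k_{i,j}, msg1); P* can
# reassemble ek_i only once *every* party in S has replied.
shares = {i: [prf(keys[(i, j)], msg1) for j in S] for i in S}
assert all(reduce(xor, shares[i]) == ek[i] for i in S)
```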

Output Computation: Every party \(\mathcal {P}_j \in S\) outputs \((\mathsf {msg},\mathcal {P}^*,S)\). Additionally, party \(\mathcal {P}^*\) does the following to generate a token:

  1. For each party \(\mathcal {P}_j \in S\), compute \(\mathsf {ek}_j = \oplus _{i \in S}\mathsf {ek}_{j,i}\) and \(\mathsf {msg}_{2,j} = \mathsf {SKE.Dec}(\mathsf {ek}_j, \mathsf {ct}_j)\).

  2. Let \(\tau _2\) denote the transcript of protocol \(\pi \) after round 2. Compute the output of \(\pi \): \(\{\mathsf {Token}_{i}\}_{i \in S} \leftarrow \pi \mathsf {.Out}(\mathsf {y}_*, \tau _2 ; \mathsf {r}_*)\).

  3. Reconstruct the signature as \(\mathsf {Token}= \mathsf {TS.Combine}(\{\mathsf {Token}_{i}\}_{i \in S})\).

  4. If \(\mathsf {TS.Verify}(\mathsf {vk}^{\mathsf {TS}}, \mathsf {msg}, \mathsf {Token}) = 1\), output \(\{\mathsf {Token}_i\}_{i\in S}\). Else, output \(\bot \).

Fig. 2. Circuit \(\mathcal {C}\).

Token Verification: Given a verification key \(\mathsf {vk}^{\mathsf {TS}}\), message \(\mathsf {msg}\) and a token \(\{\mathsf {Token}_{i}\}_{i \in S}\), where \(|S| = t\), the token verification algorithm does the following:

  1. Compute \(\mathsf {Token}\leftarrow \mathsf {TS.Combine}(\{\mathsf {Token}_{i}\}_{i \in S})\).

  2. Output 1 if \(\mathsf {TS.Verify}(\mathsf {vk}^{\mathsf {TS}}, \mathsf {msg}, \mathsf {Token}) = 1\). Else, output 0.

6 Any Distance Measure Using Threshold FHE

In this section, we construct a \(\text {FTT}\) protocol for any distance measure using any fully homomorphic encryption (FHE) scheme with threshold decryption. Our token generation protocol satisfies the definition in Sect. 4 for any n, t, and works for any distance measure. Formally, we show the following theorem:

Theorem 2

Assuming threshold fully-homomorphic encryption, non-interactive zero-knowledge arguments of knowledge (NIZK) and unforgeable threshold signatures, there exists a four round secure \(\text {FTT}\) protocol for any n, t and any distance predicate.

Threshold FHE, NIZKs and unforgeable threshold signatures can all be built assuming LWE [20, 50]. Instantiating these primitives accordingly, we get the following corollary:

Corollary 2

Assuming LWE, there exists a four round secure \(\text {FTT}\) protocol for any n, t and any distance predicate.

6.1 Construction

Notation. Let (\(\mathsf {TFHE.Gen}, \mathsf {TFHE.Enc}, \mathsf {TFHE.PartialDec}, \mathsf {TFHE.Eval}, \mathsf {TFHE.Combine}\)) be a threshold FHE scheme and let \((\mathsf {TS.Gen},\mathsf {TS.Sign},\mathsf {TS.Combine},\mathsf {TS.Verify})\) be a threshold signature scheme. Let \((\mathsf {Prove},\mathsf {Verify})\) be a NIZK scheme, \((\mathsf {Gen},\mathsf {Sign},\mathsf {Verify})\) be a strongly-unforgeable digital signature scheme and \(\mathsf {Commit}\) be a non-interactive commitment scheme. We now describe the construction of our four round secure \(\text {FTT}\) protocol \(\pi ^{\mathsf {Any-TFHE}}\) for any n and t. We defer the proof to the full version [12].

Setup Phase: The following algorithm is executed by a trusted authority:

  • Generate \({(\mathsf {pk}^{\mathsf {TFHE}},\mathsf {sk}^{\mathsf {TFHE}}_1,\ldots ,\mathsf {sk}^{\mathsf {TFHE}}_n) \leftarrow \mathsf {TFHE.Gen}(1^\lambda , n, t)}\) and \((\mathsf {pp}^{\mathsf {TS}},\mathsf {vk}^{\mathsf {TS}},\mathsf {sk}^{\mathsf {TS}}_1,\ldots ,\) \(\mathsf {sk}^{\mathsf {TS}}_n)\) \(\leftarrow \mathsf {TS.Gen}(1^\lambda , n, t)\).

  • For each \(i \in [n]\), compute \(\mathsf {com}_i \leftarrow \mathsf {Commit}(\mathsf {sk}^{\mathsf {TFHE}}_i;\mathsf {r}^\mathsf {com}_i)\) and \((\mathsf {sk}_i,\mathsf {vk}_i) \leftarrow \mathsf {Gen}(1^\lambda )\).

  • For each \({i\in [n]}\), give the following to party \(\mathcal {P}_i\): \((\mathsf {pk}^{\mathsf {TFHE}},\mathsf {sk}^{\mathsf {TFHE}}_i,\mathsf {pp}^{\mathsf {TS}},\mathsf {vk}^{\mathsf {TS}},\mathsf {sk}^{\mathsf {TS}}_i, (\mathsf {vk}_1,\ldots \), \(\mathsf {vk}_n)\), \(\mathsf {sk}_i,(\mathsf {com}_1,\ldots ,\mathsf {com}_n),\mathsf {r}^\mathsf {com}_i)\).

Enrollment: In this phase, any party \(\mathcal {P}_i\) that wishes to register a fresh template queries the trusted authority, which then executes the following algorithm:

  • Sample a template \(\overrightarrow{\mathbf {w}}\) from the distribution \(\mathcal {W}\) over \(\{0,1\}^{\ell }\).

  • Compute and give \(\mathsf {ct}_{\overrightarrow{\mathbf {w}}}\) to each party \(\mathcal {P}_i\), where \(\mathsf {ct}_{\overrightarrow{\mathbf {w}}} = \mathsf {TFHE.Enc}(\mathsf {pk}^{\mathsf {TFHE}},\overrightarrow{\mathbf {w}})\).

SignOn Phase: In the SignOn phase, consider a party \(\mathcal {P}^*\) that uses an input vector \(\overrightarrow{\mathbf {u}}\in \{0,1\}^\ell \) and a message \(\mathsf {msg}\) on which it wants a token. \(\mathcal {P}^*\) interacts with the other parties in the following four round protocol.

  • Party \(\mathcal {P}^*\) does the following:

    1. Compute ciphertext \(\mathsf {ct}_{\overrightarrow{\mathbf {u}}} = \mathsf {TFHE.Enc}(\mathsf {pk}^{\mathsf {TFHE}},\overrightarrow{\mathbf {u}};\mathsf {r}_{\overrightarrow{\mathbf {u}}})\).

    2. Compute \(\pi _{\overrightarrow{\mathbf {u}}}\leftarrow \mathsf {Prove}(\mathsf {st}_{\overrightarrow{\mathbf {u}}},\mathsf {wit}_{\overrightarrow{\mathbf {u}}})\) for \(\mathsf {st}_{\overrightarrow{\mathbf {u}}} = (\mathsf {ct}_{\overrightarrow{\mathbf {u}}},\mathsf {pk}^{\mathsf {TFHE}}) \in L_1\) using witness \(\mathsf {wit}_{\overrightarrow{\mathbf {u}}} = ({\overrightarrow{\mathbf {u}}}, \mathsf {r}_{\overrightarrow{\mathbf {u}}})\) (language \(L_1\) is defined in Fig. 3).

    3. Pick a set \(S\) consisting of t parties amongst \(\mathcal {P}_1,\ldots ,\mathcal {P}_n\). For simplicity, and without loss of generality, we assume that \(\mathcal {P}^*\) is also part of set \(S\).

    4. To each party \(\mathcal {P}_i \in S\), send \((\mathsf {ct}_{\overrightarrow{\mathbf {u}}},\pi _{\overrightarrow{\mathbf {u}}})\).

  • Each party \(\mathcal {P}_i \in S\) (except \(\mathcal {P}^*\)) does the following:

    1. Abort and output \(\bot \) if \(\mathsf {Verify}(\pi _{\overrightarrow{\mathbf {u}}},\mathsf {st}_{\overrightarrow{\mathbf {u}}}) \ne 1\) for language \(L_1\), where the statement \(\mathsf {st}_{\overrightarrow{\mathbf {u}}} = (\mathsf {ct}_{\overrightarrow{\mathbf {u}}},\mathsf {pk}^{\mathsf {TFHE}})\).

    2. Sample a uniformly random one-time key \(K_i\leftarrow \{0,1\}^{\lambda }\) and compute \(\mathsf {ct}_{K_i} = \mathsf {TFHE.Enc}(\mathsf {pk}^{\mathsf {TFHE}},K_i;\mathsf {r}_{K_i})\).

    3. Compute \(\pi _{K_i} \leftarrow \mathsf {Prove}(\mathsf {st}_{K_i},\mathsf {wit}_{K_i})\) for \(\mathsf {st}_{K_i} = (\mathsf {ct}_{K_i},\mathsf {pk}^{\mathsf {TFHE}}) \in L_1\) using the witness \(\mathsf {wit}_{K_i} = (K_i, \mathsf {r}_{K_i})\) (language \(L_1\) is defined in Fig. 3).

    4. Compute signatures \(\sigma _{i,0} = \mathsf {Sign}(\mathsf {sk}_i,\mathsf {ct}_{\overrightarrow{\mathbf {u}}})\) and \(\sigma _{i,1} = \mathsf {Sign}(\mathsf {sk}_i,\mathsf {ct}_{K_i})\).

    5. Send the following to party \(\mathcal {P}^*\): \((\mathsf {ct}_{K_i},\pi _{K_i},\sigma _{i,0},\sigma _{i,1})\).

  • Party \(\mathcal {P}^*\) checks if there exists some party \(\mathcal {P}_i \in S\) such that \(\mathsf {Verify}(\pi _{K_i},\) \(\mathsf {st}_{K_i})\) \(\ne \) 1 for language \(L_1\) where \(\mathsf {st}_{K_i} = (\mathsf {ct}_{K_i},\mathsf {pk}^{\mathsf {TFHE}})\). If yes, it outputs \(\bot \) and aborts. Otherwise, it sends \(\{(\mathsf {ct}_{K_i},\pi _{K_i},\sigma _{i,0},\sigma _{i,1})\}_{\mathcal {P}_i\in S}\) to each party \(\mathcal {P}_i\in S\).

  • Each party \(\mathcal {P}_i \in S\) (except \(\mathcal {P}^*\)) does the following:

    1. If there exists some party \(\mathcal {P}_j \in S\) such that \(\mathsf {Verify}(\pi _{K_j},\mathsf {st}_{K_j}) \ne 1\) for language \(L_1\) where \(\mathsf {st}_{K_j} = (\mathsf {ct}_{K_j},\mathsf {pk}^{\mathsf {TFHE}})\), or \(\mathsf {Verify}(\mathsf {vk}_j,\mathsf {ct}_{\overrightarrow{\mathbf {u}}},\sigma _{j,0}) \ne 1\), or \(\mathsf {Verify}(\mathsf {vk}_j,\mathsf {ct}_{K_j},\sigma _{j,1}) \ne 1\), then output \(\bot \) and abort.

    2. Otherwise, for each \(\mathcal {P}_j \in S\), do the following:

      • Compute \(\mathsf {ct}_{\mathcal {C},j} = \mathsf {TFHE.Eval}(\mathsf {pk}^{\mathsf {TFHE}},\mathcal {C}_{\mathsf {Dist}},\mathsf {ct}_{\overrightarrow{\mathbf {w}}},\mathsf {ct}_{\overrightarrow{\mathbf {u}}},\mathsf {ct}_{K_j})\) using circuit \(\mathcal {C}\) (Fig. 4). Note that \(\mathsf {ct}_{\mathcal {C},j}\) is either an encryption of \(K_j\) or an encryption of \(0^{\lambda }\).

      • Compute a partial decryption: \(\mu _{i,j} = \mathsf {TFHE.PartialDec}(\mathsf {sk}^{\mathsf {TFHE}}_i,\mathsf {ct}_{\mathcal {C},j})\).

      • Compute \(\pi _{i,j} \leftarrow \mathsf {Prove}(\mathsf {st}_{i,j},\mathsf {wit}_{i})\) for \(\mathsf {st}_{i,j} = (\mathsf {ct}_{\mathcal {C},j},\mu _{i,j},\mathsf {com}_{i}) \in L_2\) using \(\mathsf {wit}_{i} = (\mathsf {sk}^{\mathsf {TFHE}}_i,\mathsf {r}^\mathsf {com}_{i})\) (language \(L_2\) is defined in Fig. 5).

    3. Compute partial signature \(\mathsf {Token}_i = \mathsf {TS.Sign}(\mathsf {sk}^{\mathsf {TS}}_i,\mathsf {msg})\) and ciphertext \(\mathsf {ct}_i = K_i \oplus \mathsf {Token}_i\).

    4. Send \((\mathsf {ct}_i,\{(\pi _{i,j},\mu _{i,j})\}_{\mathcal {P}_j \in S})\) to \(\mathcal {P}^*\).

  • Output Computation: Every party \(\mathcal {P}_i \in S\) outputs \((\mathsf {msg},\mathcal {P}^*,S)\). Additionally, party \(\mathcal {P}^*\) does the following to generate a token:

    1. For each \(\mathcal {P}_j\in S\), do the following:

      (a) For each \(\mathcal {P}_i\in S\), abort if \(\mathsf {Verify}(\pi _{i,j},\mathsf {st}_{i,j}) \ne 1\) for language \(L_2\) where \(\mathsf {st}_{i,j} = (\mathsf {ct}_{\mathcal {C},j},\mu _{i,j},\mathsf {com}_{i})\).

      (b) Set \(K_j = \mathsf {TFHE.Combine}(\{\mu _{i,j}\}_{\mathcal {P}_i\in S})\). If \(K_j=0^{\lambda }\), output \(\bot \).

      (c) Otherwise, recover partial signature \(\mathsf {Token}_j = K_j\oplus \mathsf {ct}_j\).

    2. Reconstruct the signature as \(\mathsf {Token}= \mathsf {TS.Combine}(\{\mathsf {Token}_{i}\}_{i \in S})\).

    3. If \(\mathsf {TS.Verify}(\mathsf {vk}^{\mathsf {TS}}, \mathsf {msg}, \mathsf {Token}) = 1\), then output \(\{\mathsf {Token}_i\}_{\mathcal {P}_i\in S}\). Else, output \(\bot \).
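The role of the one-time keys \(K_j\) above can be sketched as follows; a minimal Python illustration in which the homomorphic evaluation and threshold decryption of circuit \(\mathcal {C}\) are replaced by a stand-in function `release_key` (our simplification, not the actual TFHE machinery):

```python
import secrets

LAM = 32  # lambda, in bytes

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def release_key(K: bytes, matched: bool) -> bytes:
    # Stand-in for combining partial decryptions of TFHE.Eval(C, ct_w, ct_u, ct_K):
    # the circuit outputs K if Dist(u, w) = 1 and 0^lambda otherwise.
    return K if matched else bytes(LAM)

K_j = secrets.token_bytes(LAM)
token_j = secrets.token_bytes(LAM)   # partial signature, treated as opaque bytes
ct_j = xor(K_j, token_j)             # what P_j sends: the masked partial token

# Successful match: P* recovers the pad K_j and unmasks the partial token
pad = release_key(K_j, matched=True)
assert xor(pad, ct_j) == token_j

# Failed match: decryption yields 0^lambda and P* outputs bot
assert release_key(K_j, matched=False) == bytes(LAM)
```

The point of the mask is that \(\mathsf {ct}_j\) is sent in the clear, yet reveals \(\mathsf {Token}_j\) only when the distance check inside the FHE evaluation succeeds.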

Token Verification: Given a verification key \(\mathsf {vk}^{\mathsf {TS}}\), message \(\mathsf {msg}\) and a set of partial tokens \(\{\mathsf {Token}_i\}_{\mathcal {P}_i\in S}\), the token verification algorithm outputs 1 if \(\mathsf {TS.Verify}(\mathsf {vk}^{\mathsf {TS}}, \mathsf {msg}, \mathsf {Token}) = 1\), where \(\mathsf {Token}= \mathsf {TS.Combine}(\{\mathsf {Token}_{i}\}_{\mathcal {P}_i\in S})\).

Fig. 3. NP language \(L_{1}\)

Fig. 4. Circuit \(\mathcal {C}\)

Fig. 5. NP language \(L_{2}\)

7 Cosine Similarity: Single Corruption

In this section, we construct an efficient four round secure FTT protocol in the Random Oracle (RO) model for Euclidean Distance and Cosine Similarity. Our protocol satisfies Definition 1 for any n with threshold \(t=3\) and is secure against a malicious adversary that can corrupt any one party. The special case \(n=3\) corresponds to the widely studied three party honest majority setting. We first focus on the Cosine Similarity distance measure; in the full version, we explain how to extend our result to Euclidean Distance. Formally:
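For concreteness, the Cosine Similarity predicate underlying this section can be sketched in plain Python (no cryptography; the threshold value \(d\) and the sample vectors below are our own illustrative choices):

```python
import math

def cs_dist(u, w):
    # Cosine similarity: <u, w> / (||u||_2 * ||w||_2)
    dot = sum(a * b for a, b in zip(u, w))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in w)))

def dist_predicate(u, w, d):
    # Dist(u, w) = 1 iff CS.Dist(u, w) >= d
    return 1 if cs_dist(u, w) >= d else 0

u = [1.0, 2.0, 2.0]
assert dist_predicate(u, u, 0.9) == 1                 # identical vectors match
assert dist_predicate(u, [-1.0, 0.0, 0.0], 0.9) == 0  # dissimilar vectors do not
```

The protocol below evaluates exactly this threshold comparison, but on secret-shared and encrypted inner products rather than on the vectors in the clear.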

Theorem 3

Assuming unforgeable threshold signatures, two message OT in the CRS model, circuit-private additively homomorphic encryption and NIZKs for NP languages \(L_1, L_2\) defined below, there exists a four round secure fuzzy threshold tokenizer protocol for Cosine Similarity. The protocol works for any n, threshold \(t=3\) and is secure against a malicious adversary that can corrupt any one party.

We describe the construction below and defer the proof to the full version [12].

Paillier Encryption Scheme. The Paillier encryption scheme [47] is an example of a circuit-private additively homomorphic encryption scheme based on the \(N^{th}\) residuosity assumption. With respect to Paillier, we can also build NIZK arguments for the languages \(L_1\) and \(L_2\) defined below, in the RO model. Formally:

Imported Theorem 1

([31]). Assuming the hardness of the \(N^{th}\) residuosity assumption, there exists a NIZK for language \(L_1\), defined below, in the RO model.

Imported Theorem 2

([30]). Assuming the hardness of the \(N^{th}\) residuosity assumption, there exists a NIZK for language \(L_2\), defined below, in the RO model.

The above NIZKs are very efficient and only require a constant number of group operations for both prover and verifier. Two message OT in the CRS model can be built assuming DDH/LWE/Quadratic Residuosity/\(N^{th}\) residuosity [40, 46, 51]. Threshold signatures can be built assuming LWE/Gap-DDH/RSA [18, 20, 54]. Instantiating the primitives used in Theorem 3, we get the following corollary:

Corollary 3

Assuming the hardness of the \(N^{th}\) residuosity assumption and LWE, there exists a four round secure fuzzy threshold tokenizer protocol for Cosine Similarity in the RO model. The protocol works for any n, \(t=3\) and is secure against a malicious adversary that can corrupt any one party.

NP Languages.

Let \((\mathsf {AHE.Setup},\mathsf {AHE.Enc},\mathsf {AHE.Add},\mathsf {AHE.ConstMul},\mathsf {AHE.Dec})\) be an additively homomorphic encryption scheme. Let \((\mathsf {esk},\mathsf {epk}) \leftarrow \mathsf {AHE.Setup}(1^\lambda )\) and \(m = \mathsf {poly}(\lambda )\).

Language \(L_1\):

Statement: \(\mathsf {st}= (\mathsf {ct},\mathsf {epk})\).             Witness: \(\mathsf {wit}= (\mathsf {x},\mathsf {r})\).

Relation: \(\mathsf {R} _{1}(\mathsf {st},\mathsf {wit})=1 \text { if}\) \(\mathsf {ct}= \mathsf {AHE.Enc}(\mathsf {epk},\mathsf {x};\mathsf {r}) \text { AND } \mathsf {x}\in \{0,1\}^m\).

Language \(L_2\):

Statement: \(\mathsf {st}= (\mathsf {ct}_1,\mathsf {ct}_2,\mathsf {ct}_3,\mathsf {epk})\).             Witness: \(\mathsf {wit}= (\mathsf {x}_2,\mathsf {r}_2,\mathsf {r}_3)\).

Relation: \(\mathsf {R} _{2}(\mathsf {st},\mathsf {wit})=1 \text { if}\)

$$\begin{aligned} \mathsf {ct}_2 = \mathsf {AHE.Enc}(\mathsf {epk},\mathsf {x}_2; \mathsf {r}_2) \text { AND } \mathsf {ct}_3 = \mathsf {AHE.ConstMul}(\mathsf {epk},\mathsf {ct}_1,\mathsf {x}_2;\mathsf {r}_3). \end{aligned}$$

Construction. Let \(\mathsf {RO}\) denote a random oracle, d be the threshold value for Cosine Similarity. Recall that we denote \(\mathsf {Dist}(\overrightarrow{\mathbf {u}},\overrightarrow{\mathbf {w}})=1\) if \(\mathsf {CS.Dist}(\overrightarrow{\mathbf {u}},\overrightarrow{\mathbf {w}}) \ge d\). Let \((\mathsf {Share},\mathsf {Recon})\) be a (2, n) threshold secret sharing scheme, \(\mathsf {TS}= (\mathsf {TS.Gen},\mathsf {TS.Sign}, \mathsf {TS.Combine}, \mathsf {TS.Verify})\) be a threshold signature scheme, \((\mathsf {SKE.Enc},\mathsf {SKE.Dec})\) denote a secret key encryption scheme, \(\mathsf {PRF}\) denote a pseudorandom function, \((\mathsf {Garble},\mathsf {Eval})\) denote a garbling scheme for circuits, \((\mathsf {Prove},\mathsf {Verify})\) be a NIZK system in the RO model, \(\mathsf {AHE}= (\mathsf {AHE.Setup},\mathsf {AHE.Enc}, \mathsf {AHE.Add},\mathsf {AHE.ConstMul}, \mathsf {AHE.Dec})\) be a circuit-private additively homomorphic encryption scheme and \(\mathsf {OT}= (\mathsf {OT.Setup},\mathsf {OT}\mathsf {.Round}_{1},\mathsf {OT}\mathsf {.Round}_{2},\mathsf {OT.Output})\) be a two message oblivious transfer protocol in the CRS model. We now describe the construction of our four round secure fuzzy threshold tokenizer protocol \(\pi ^{\mathsf {CS}}\) for Cosine Similarity.

Setup: The trusted authority does the following:

  • Compute \((\mathsf {pp}^{\mathsf {TS}},\mathsf {vk}^{\mathsf {TS}},\mathsf {sk}^{\mathsf {TS}}_1,\ldots ,\mathsf {sk}^{\mathsf {TS}}_n) \leftarrow \mathsf {TS.Gen}(1^\lambda , n, t)\).

  • For \(i \in [n]\), generate \(\mathsf {crs}_i \leftarrow \mathsf {OT.Setup}(1^\lambda )\) and pick a random \(\mathsf {PRF}\) key \(\mathsf {k}_{i}\).

  • For \(i \in [n]\), give \((\mathsf {pp}^{\mathsf {TS}},\mathsf {vk}^{\mathsf {TS}},\mathsf {sk}^{\mathsf {TS}}_i, \{\mathsf {crs}_j\}_{j \in [n]}, \{\mathsf {k}_j\}_{j \in [n]\setminus i})\) to party \(\mathcal {P}_i\).

Enrollment: In this phase, any party \(\mathcal {P}_i\) that wishes to enroll queries the trusted authority, which then does the following:

  • Sample a random vector \(\overrightarrow{\mathbf {w}}\) from the distribution \(\mathcal {W}\). Without loss of generality, assume that the \(\mathsf {L2}\)-norm of \(\overrightarrow{\mathbf {w}}\) is 1.

  • For each \(i \in [n]\), do the following:

    • Compute \((\overrightarrow{\mathbf {w}}_i,\overrightarrow{\mathbf {v}}_i) \leftarrow \mathsf {Share}(1^\lambda ,\overrightarrow{\mathbf {w}},n,2)\).

    • Compute \((\mathsf {esk}_i,\mathsf {epk}_i) \leftarrow \mathsf {AHE.Setup}(1^\lambda )\).

    • Let \(\overrightarrow{\mathbf {w}}_i = (\mathsf {w}_{i,1},\ldots ,\mathsf {w}_{i,\ell })\). \(\forall j \in [\ell ]\), compute \([\![{ \mathsf {w}_{i,j} }]\!] = \mathsf {AHE.Enc}(\mathsf {epk}_i,\mathsf {w}_{i,j})\).

    • Give \((\overrightarrow{\mathbf {w}}_i, \mathsf {esk}_i, \mathsf {epk}_i,\{[\![{ \mathsf {w}_{i,j} }]\!]\}_{j \in [\ell ]})\) to party \(\mathcal {P}_i\) and \((\overrightarrow{\mathbf {v}}_i, \mathsf {epk}_i,\{[\![{ \mathsf {w}_{i,j} }]\!]\}_{j \in [\ell ]})\) to all the other parties.
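The way the two shares recombine under inner products can be sketched as follows; a minimal Python illustration that instantiates \((\mathsf {Share},\mathsf {Recon})\) with additive sharing over a toy prime field (our illustrative choice; it omits the blinding term \(\mathsf {r}_\mathsf {z}\) used later in the SignOn phase):

```python
import random

random.seed(1)
MOD = 2**61 - 1  # illustrative prime modulus

def share(w):
    # Additive 2-out-of-2 sharing of each coordinate: w = w_share + v_share (mod MOD)
    v = [random.randrange(MOD) for _ in w]
    ws = [(wi - vi) % MOD for wi, vi in zip(w, v)]
    return ws, v

def inner(u, x):
    return sum(a * b for a, b in zip(u, x)) % MOD

w = [3, 1, 4, 1, 5]
u = [2, 7, 1, 8, 2]
w_i, v_i = share(w)

# x1 is computed by the client from its share w_i, z1 by the servers from v_i;
# by linearity of the inner product, they sum to <u, w>
x1 = inner(u, w_i)
z1 = inner(u, v_i)
assert (x1 + z1) % MOD == inner(u, w)
```

Because the inner product is linear, neither share alone reveals anything about \(\overrightarrow{\mathbf {w}}\), yet the two partial inner products add up to \(\langle \overrightarrow{\mathbf {u}}, \overrightarrow{\mathbf {w}} \rangle \).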

SignOn Phase: In the SignOn phase, consider a party \(\mathcal {P}_i\) that uses an input vector \(\overrightarrow{\mathbf {u}}= (\mathsf {u}_1,\ldots ,\mathsf {u}_\ell )\) and a message \(\mathsf {msg}\) on which it wants a token. \(\mathcal {P}_i\) picks two other parties \(\mathcal {P}_j\) and \(\mathcal {P}_k\) and interacts with them in the below protocol.

Party \(\mathcal {P}_i\) does the following:

  1. Let \(S= (\mathcal {P}_j, \mathcal {P}_k)\) with \(j < k\).

  2. For each \(j \in [\ell ]\), compute the following:

    • \([\![{ \mathsf {u}_j }]\!] = \mathsf {AHE.Enc}(\mathsf {epk}_i,\mathsf {u}_j;\mathsf {r}_{1,j})\). \(\pi _{1,j} \leftarrow \mathsf {Prove}(\mathsf {st}_{1,j},\mathsf {wit}_{1,j})\) for \(\mathsf {st}_{1,j} = ([\![{ \mathsf {u}_j }]\!],\mathsf {epk}_i) \in L_1\) using \(\mathsf {wit}_{1,j} = (\mathsf {u}_{j}, \mathsf {r}_{1,j})\).

    • \([\![{ \mathsf {u}^2_j }]\!] = \mathsf {AHE.ConstMul}(\mathsf {epk}_i,[\![{ \mathsf {u}_j }]\!], \mathsf {u}_{j};\mathsf {r}_{2,j})\). \(\pi _{2,j} \leftarrow \mathsf {Prove}(\mathsf {st}_{2,j},\mathsf {wit}_{2,j})\) for \(\mathsf {st}_{2,j} = ([\![{ \mathsf {u}_j }]\!]\), \([\![{ \mathsf {u}_j }]\!]\), \([\![{ \mathsf {u}^2_j }]\!]\), \(\mathsf {epk}_i) \in L_2\) using \(\mathsf {wit}_{2,j} = (\mathsf {u}_{j}, \mathsf {r}_{1,j}, \mathsf {r}_{2,j})\).

    • \({[\![{ \mathsf {w}_{i,j}\cdot \mathsf {u}_j }]\!] = \mathsf {AHE.ConstMul}(\mathsf {epk}_i,[\![{ \mathsf {w}_{i,j} }]\!], \mathsf {u}_{j};\mathsf {r}_{3,j})}\). \(\pi _{3,j} \leftarrow \mathsf {Prove}(\mathsf {st}_{3,j},\mathsf {wit}_{3,j})\) for \(\mathsf {st}_{3,j} = ([\![{ \mathsf {w}_{i,j} }]\!],[\![{ \mathsf {u}_j }]\!], [\![{ \mathsf {w}_{i,j}\cdot \mathsf {u}_j }]\!],\mathsf {epk}_i) \in L_2\) using \(\mathsf {wit}_{3,j} = (\mathsf {u}_{j}, \mathsf {r}_{1,j}, \mathsf {r}_{3,j})\).

  3. To both parties in \(S\), send \(\mathsf {msg}_1 = (S,\mathsf {msg}, \{[\![{ \mathsf {u}_j }]\!],[\![{ \mathsf {u}^2_j }]\!],[\![{ \mathsf {w}_{i,j}\cdot \mathsf {u}_j }]\!], \pi _{1,j},\pi _{2,j},\pi _{3,j}\}_{j \in [\ell ]})\).
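To see how the additively homomorphic operations support these encrypted inner products, here is a toy Paillier sketch (tiny primes, for illustration only; a real deployment would use cryptographically sized moduli and the full protocol's proofs):

```python
import math
import random

random.seed(7)
p, q = 1000003, 1000033           # toy primes, far too small for real security
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)

def enc(m: int) -> int:
    # Paillier encryption with g = n + 1
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

def dec(c: int) -> int:
    L = (pow(c, lam, n2) - 1) // n
    return (L * pow(lam, -1, n)) % n

add = lambda c1, c2: (c1 * c2) % n2   # AHE.Add: Enc(a) * Enc(b) = Enc(a + b)
cmul = lambda c, k: pow(c, k, n2)     # AHE.ConstMul: Enc(a)^k = Enc(k * a)

# Servers combine ciphertexts of w_i coordinate-wise to get <u, w_i> under encryption
w_i = [3, 1, 4]
u = [2, 7, 1]
cts = [enc(w) for w in w_i]
ct_ip = enc(0)
for ct, uj in zip(cts, u):
    ct_ip = add(ct_ip, cmul(ct, uj))
assert dec(ct_ip) == sum(a * b for a, b in zip(u, w_i))  # 3*2 + 1*7 + 4*1 = 17
```

This is exactly the shape of computation the servers perform in the next round: additions and constant multiplications on ciphertexts, never decryption.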

Both parties \(\mathcal {P}_j\) and \(\mathcal {P}_k\) do the following:

  1. Abort if any of the proofs \(\{\pi _{1,j},\pi _{2,j},\pi _{3,j}\}_{j \in [\ell ]}\) fails to verify.

  2. Generate randomness \((\mathsf {a},\mathsf {b},\mathsf {e},\mathsf {f},\mathsf {p},\mathsf {q},\mathsf {r}_\mathsf {z}) \leftarrow \mathsf {PRF}(\mathsf {k}_{i},\mathsf {msg}_1)\).

  3. Using the algorithms of \(\mathsf {AHE}\), compute \([\![{ \mathsf {x}_1 }]\!],[\![{ \mathsf {x}_2 }]\!],[\![{ \mathsf {y}_1 }]\!],[\![{ \mathsf {y}_2 }]\!],[\![{ \mathsf {z}_1 }]\!],[\![{ \mathsf {z}_2 }]\!]\) as follows:

    • \(\mathsf {x}_1 = \langle \overrightarrow{\mathbf {u}}, \overrightarrow{\mathbf {w}}_i \rangle \), \(\mathsf {y}_1 = \langle \overrightarrow{\mathbf {u}}, \overrightarrow{\mathbf {u}}\rangle \), \(\mathsf {z}_1 = (\langle \overrightarrow{\mathbf {u}}, \overrightarrow{\mathbf {v}}_i \rangle + \mathsf {r}_\mathsf {z})\).

    • \(\mathsf {x}_2 = (\mathsf {a}\cdot \mathsf {x}_1 + \mathsf {b})\), \(\mathsf {y}_2 = (\mathsf {e}\cdot \mathsf {y}_1 + \mathsf {f})\), \(\mathsf {z}_2 = (\mathsf {p}\cdot \mathsf {z}_1 + \mathsf {q})\)

  4. Send \(([\![{ \mathsf {x}_2 }]\!],[\![{ \mathsf {y}_2 }]\!],[\![{ \mathsf {z}_1 }]\!],[\![{ \mathsf {z}_2 }]\!])\) to \(\mathcal {P}_i\).
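The affine masking in step 3 can be sketched as follows; a minimal Python illustration where the PRF-derived randomness is modeled with SHA-256 (our stand-in), so that both \(\mathcal {P}_j\) and \(\mathcal {P}_k\), holding the same key \(\mathsf {k}_i\), derive identical masks and therefore send identical tuples:

```python
import hashlib

MOD = 2**61 - 1  # illustrative modulus

def derive(key: bytes, msg: bytes, label: str) -> int:
    # Both P_j and P_k derive the same randomness from the shared PRF key k_i
    h = hashlib.sha256(key + msg + label.encode()).digest()
    return int.from_bytes(h, "big") % MOD

k_i = b"shared-prf-key"
msg1 = b"round-1 message"
a = derive(k_i, msg1, "a")
b = derive(k_i, msg1, "b")

x1 = 123456                # inner product <u, w_i>, known to P_i
x2 = (a * x1 + b) % MOD    # what the servers compute homomorphically on [[x1]]

# Since both servers use the same (a, b), P_i's equality check in round 2 passes,
# and the garbled circuit can later verify x2 = a*x1 + b without revealing x1.
assert x2 == (derive(k_i, msg1, "a") * x1 + derive(k_i, msg1, "b")) % MOD
```

The masks hide the true inner products from \(\mathcal {P}_i\), while the agreement check between the two servers' tuples catches a deviating server.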

Party \(\mathcal {P}_i\) does the following:

  1. Abort if the tuples sent by \(\mathcal {P}_j\) and \(\mathcal {P}_k\) in round 2 were not the same.

  2. Compute \(\mathsf {x}_1 = \langle \overrightarrow{\mathbf {u}}, \overrightarrow{\mathbf {w}}_i \rangle \), \(\mathsf {x}_2 = \mathsf {AHE.Dec}(\mathsf {esk}_i, [\![{ \mathsf {x}_2 }]\!])\).

  3. Compute \(\mathsf {y}_1 = \langle \overrightarrow{\mathbf {u}}, \overrightarrow{\mathbf {u}}\rangle \), \(\mathsf {y}_2 = \mathsf {AHE.Dec}(\mathsf {esk}_i, [\![{ \mathsf {y}_2 }]\!])\).

  4. Compute \(\mathsf {z}_1 = \mathsf {AHE.Dec}(\mathsf {esk}_i, [\![{ \mathsf {z}_1 }]\!])\), \(\mathsf {z}_2 = \mathsf {AHE.Dec}(\mathsf {esk}_i, [\![{ \mathsf {z}_2 }]\!])\).

  5. Generate and send \(\mathsf {msg}_3 = \{\mathsf {ot}_{\mathsf {s},\mathsf {t}}^\mathsf {rec}\leftarrow \mathsf {OT}\mathsf {.Round}_{1}(\mathsf {crs}_i, \mathsf {s}_\mathsf {t};\mathsf {r}^\mathsf {ot}_{\mathsf {s},\mathsf {t}})\}_{\mathsf {s}\in \{\mathsf {x},\mathsf {y},\mathsf {z}\}, \mathsf {t}\in \{1,2\}}\).

Party \(\mathcal {P}_j\) does the following:

  1. Compute \(\widetilde{\mathcal {C}}= \mathsf {Garble}(\mathcal {C})\) for the circuit \(\mathcal {C}\) described in Fig. 6.

  2. For each \(\mathsf {s}\in \{\mathsf {x},\mathsf {y},\mathsf {z}\}, \mathsf {t}\in \{1,2\}\), let \(\mathsf {lab}^0_{\mathsf {s},\mathsf {t}},\mathsf {lab}^1_{\mathsf {s},\mathsf {t}}\) denote the labels of the garbled circuit \(\widetilde{\mathcal {C}}\) corresponding to input wire \(\mathsf {s}_\mathsf {t}\). Generate \(\mathsf {ot}_{\mathsf {s},\mathsf {t}}^\mathsf {sen}= \mathsf {OT}\mathsf {.Round}_{2}(\mathsf {crs}_i, \mathsf {lab}^0_{\mathsf {s},\mathsf {t}}, \mathsf {lab}^1_{\mathsf {s},\mathsf {t}}, \mathsf {ot}_{\mathsf {s},\mathsf {t}}^\mathsf {rec})\). Let \(\mathsf {ot}^\mathsf {sen}= \{\mathsf {ot}_{\mathsf {s},\mathsf {t}}^\mathsf {sen}\}_{\mathsf {s}\in \{\mathsf {x},\mathsf {y},\mathsf {z}\}, \mathsf {t}\in \{1,2\}}\).

  3. Compute \(\mathsf {pad}= \mathsf {PRF}(\mathsf {k}_{i},\mathsf {msg}_3)\). Set \(\mathsf {ct}_j = \mathsf {SKE.Enc}(\mathsf {pad},\mathsf {TS.Sign}(\mathsf {sk}^{\mathsf {TS}}_j,\mathsf {msg}))\).

  4. Send \((\widetilde{\mathcal {C}}, \mathsf {ot}^\mathsf {sen}, \mathsf {ct}_j)\) to \(\mathcal {P}_i\).

Party \(\mathcal {P}_k\) does the following:

  1. Compute \((\widetilde{\mathcal {C}},\mathsf {ot}^\mathsf {sen},\mathsf {pad})\) exactly as done by \(\mathcal {P}_j\).

  2. Set \(\mathsf {ct}_k = \mathsf {SKE.Enc}(\mathsf {pad},\mathsf {TS.Sign}(\mathsf {sk}^{\mathsf {TS}}_k,\mathsf {msg}))\).

  3. Send \((\mathsf {RO}(\widetilde{\mathcal {C}}, \mathsf {ot}^\mathsf {sen}), \mathsf {ct}_k)\) to \(\mathcal {P}_i\).

Output Computation: Parties \(\mathcal {P}_j,\mathcal {P}_k\) output \((\mathsf {msg},\mathcal {P}_i,S)\). Party \(\mathcal {P}_i\) does:

  1. Let \((\widetilde{\mathcal {C}}, \mathsf {ot}^\mathsf {sen}, \mathsf {ct}_j)\) be the message received from \(\mathcal {P}_j\) and \((\mathsf {msg}_4, \mathsf {ct}_k)\) be the message received from \(\mathcal {P}_k\). Abort if \(\mathsf {RO}(\widetilde{\mathcal {C}}, \mathsf {ot}^\mathsf {sen}) \ne \mathsf {msg}_4\).

  2. For each \(\mathsf {s}\in \{\mathsf {x},\mathsf {y},\mathsf {z}\}, \mathsf {t}\in \{1,2\}\), compute \(\mathsf {lab}_{\mathsf {s},\mathsf {t}} = \mathsf {OT.Output}(\mathsf {ot}_{\mathsf {s},\mathsf {t}}^\mathsf {sen}, \mathsf {ot}_{\mathsf {s},\mathsf {t}}^\mathsf {rec}, \mathsf {r}^\mathsf {ot}_{\mathsf {s},\mathsf {t}})\). Let \(\mathsf {lab}= \{\mathsf {lab}_{\mathsf {s},\mathsf {t}}\}_{\mathsf {s}\in \{\mathsf {x},\mathsf {y},\mathsf {z}\}, \mathsf {t}\in \{1,2\}}\). Compute \(\mathsf {pad}= \mathsf {Eval}(\widetilde{\mathcal {C}},\mathsf {lab})\).

  3. Compute \(\mathsf {Token}_j = \mathsf {SKE.Dec}(\mathsf {pad}, \mathsf {ct}_j)\), \(\mathsf {Token}_k = \mathsf {SKE.Dec}(\mathsf {pad}, \mathsf {ct}_k)\), \(\mathsf {Token}_i = \mathsf {TS.Sign}(\mathsf {sk}^{\mathsf {TS}}_i, \mathsf {msg})\), and \(\mathsf {Token}\leftarrow \mathsf {TS.Combine}(\{\mathsf {Token}_{s}\}_{s \in \{i,j,k\}})\).

  4. Output \(\{\mathsf {Token}_{s}\}_{s \in \{i,j,k\}}\) if \(\mathsf {TS.Verify}(\mathsf {vk}^{\mathsf {TS}}, \mathsf {msg}, \mathsf {Token}) = 1\). Else, output \(\bot \).
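The consistency check in step 1 of the output computation can be sketched as follows, instantiating the random oracle \(\mathsf {RO}\) with SHA-256 (our illustrative choice): \(\mathcal {P}_k\) sends only a short digest, which \(\mathcal {P}_i\) compares against the full message received from \(\mathcal {P}_j\).

```python
import hashlib

def ro(*parts: bytes) -> bytes:
    # Random oracle instantiated (for illustration only) as SHA-256
    h = hashlib.sha256()
    for part in parts:
        h.update(part)
    return h.digest()

garbled_circuit = b"serialized garbled circuit"
ot_sen = b"OT sender messages"

# P_j sends the full (C~, ot_sen); P_k, having derived the same values,
# sends only msg4 = RO(C~, ot_sen)
msg4 = ro(garbled_circuit, ot_sen)

# P_i aborts unless the digest of P_j's message matches P_k's digest
assert ro(garbled_circuit, ot_sen) == msg4
# A tampered garbled circuit would be caught by the mismatch
assert ro(b"tampered circuit", ot_sen) != msg4
```

This saves \(\mathcal {P}_k\) from retransmitting the large garbled circuit while still binding both servers to the same \((\widetilde{\mathcal {C}}, \mathsf {ot}^\mathsf {sen})\).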

Fig. 6. Circuit \(\mathcal {C}\) to be garbled.

Token Verification: Given a verification key \(\mathsf {vk}^{\mathsf {TS}}\), message \(\mathsf {msg}\) and token \((\mathsf {Token}_i, \mathsf {Token}_j, \mathsf {Token}_k)\), the token verification algorithm does the following:

  1. Compute \(\mathsf {Token}\leftarrow \mathsf {TS.Combine}(\{\mathsf {Token}_{s}\}_{s \in \{i,j,k\}})\).

  2. Output 1 if \(\mathsf {TS.Verify}(\mathsf {vk}^{\mathsf {TS}}, \mathsf {msg}, \mathsf {Token})=1\). Else, output 0.
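The shape of this verification algorithm can be sketched with a toy linearly homomorphic stand-in for \(\mathsf {TS}\) (insecure and purely illustrative: partial signatures are additive shares of \(\mathsf {sk}\cdot H(\mathsf {msg})\), and the "verification key" is symmetric, unlike a real threshold signature):

```python
import hashlib
import random

P = 2**61 - 1

def h(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % P

# Toy setup: sk is additively shared among three parties
random.seed(0)
sk_shares = [random.randrange(P) for _ in range(3)]
sk = sum(sk_shares) % P
vk = sk  # toy symmetric verification key, NOT how real threshold signatures work

def ts_sign(sk_s: int, msg: bytes) -> int:   # partial token from one party
    return (sk_s * h(msg)) % P

def ts_combine(tokens) -> int:               # TS.Combine: sum the partial tokens
    return sum(tokens) % P

def ts_verify(vk: int, msg: bytes, token: int) -> bool:  # TS.Verify
    return token == (vk * h(msg)) % P

msg = b"challenge"
tokens = [ts_sign(s, msg) for s in sk_shares]
assert ts_verify(vk, msg, ts_combine(tokens))
```

The point is only the interface: verification first combines the partial tokens and then runs a single-signature check, exactly as in the algorithm above.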