1 Introduction

With the advance of computing and networking, cryptography has evolved from providing straightforward guarantees such as confidentiality, integrity, and authenticity to providing many complex features. In particular, much research has been devoted to performing various kinds of operations on encrypted data. The most well-known topics include fully homomorphic encryption and garbled circuits, whose practical realization would bring into the realm of possibility many applications that would have been deemed magical and unlikely a few decades ago.

A simpler but still useful problem is that of extracting information from a ciphertext. This can include testing a single bit of the plaintext, testing whether a ciphertext contains a particular value, or whether it includes this value as a substring. Searchable encryption already enables many practical uses, such as spam filtering or threat detection in encrypted traffic, and has the advantage over more generic cryptocomputing techniques of admitting much more efficient instantiations.

1.1 Motivation: Organ Donation

In this paper, we take the specific use case of organ donation as a motivation and derive our constraints from issues related to the actual realization of a solution. With this approach, we devise a method to solve the problem using a new kind of cryptographic primitive. This work is connected to the Kidner Project, which aims at providing a solution for kidney donation matching using a blockchain.

Organ Donation. Organ transplant requires the donor and the recipient to be compatible, so as to reduce the risks of a graft rejection. In practice, this means that they must be of similar age, have compatible antigens (blood type and human leukocyte antigen (HLA) system), etc. This also means that a list of donors and a list of recipients must be maintained: when a new donor (respectively recipient) enters the system, the new entry should be checked against existing recipients (respectively donors).

Donors are usually willing to give an organ only to a close relative; this means that the pool of potential matches is very restricted. To lift this constraint, the common approach is to pair a willing donor with a recipient, disregarding medical incompatibility. Then, the problem consists in finding two pairs that are mutually compatible (the donor of each pair is compatible with the recipient of the other pair), so that each donor agrees to give their organ to the other pair's recipient.

To enlarge the pool of potential matches further, we can consider the directed graph over donor-recipient pairs, and draw an edge from a donor to a compatible recipient. The general problem is that of finding simple cycles in this graph; the simplest version is a cycle of length 2, involving two donor-recipient pairs, but longer cycles allow more compatibility solutions.
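As an illustration, here is a toy sketch of the simplest case, the search for cycles of length 2; the `pairs` list and the `compatible` predicate are placeholder names of ours, not part of the paper's protocol.

```python
def find_two_cycles(pairs, compatible):
    """Return all couples of donor-recipient pairs forming a 2-cycle.

    pairs: list of (donor, recipient) tuples.
    compatible(d, r): assumed predicate, True when donor d suits recipient r.
    """
    matches = []
    for i, (d1, r1) in enumerate(pairs):
        for d2, r2 in pairs[i + 1:]:
            # a 2-cycle: each donor is compatible with the other recipient
            if compatible(d1, r2) and compatible(d2, r1):
                matches.append(((d1, r1), (d2, r2)))
    return matches
```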

Confidential information on donors and recipients is usually managed by non-profit entities (e.g., Organ Procurement Organizations) who exchange information in order to find matching pairs. Optimizing the number of matches requires merging information from as many sources as possible, but threatens the confidentiality of patient records.

In this paper, we explore a method to encrypt such records in order to obtain both data confidentiality and the ability to test for compatibility.

Compatibility Matching by Equality Check. We show how to reduce this problem to that of testing for equality.

Blood Type. First, consider compatibility on blood type alone. Donor and recipient can each be O, A, B or AB. The compatibilities are shown in Fig. 1. The recipient generates a record for each compatible donor type: for instance, a recipient of blood type A will generate a record whose Blood Type field is set to O, and another where this field is set to A.

Fig. 1. Blood compatibility (a recipient of type O can receive from O; A from O or A; B from O or B; AB from any blood type).

Remark 1

Records should not be linkable to one another; if they are, however, the number of compatible blood types can be hidden by padding with filler values, i.e., constant values that are incompatible between donors and recipients.

Age Group. Second, consider compatibility on age alone. This criterion is soft: individuals of similar ages should match, but we do not want to partition them into disjoint age groups. Instead, we use overlapping age groups by letting the recipient list the acceptable age groups for the donor. For instance, a 19-year-old recipient would list age groups \([12-19]\) and \([20-39]\).

Human Leukocyte Antigens (HLA). Each individual is associated with six variables HLA-{A,B,C,E,F,G}. Two individuals are considered HLA-compatible when at least three of these variables match. This time, the recipient and the donor each generate a record for each of the \(\binom{6}{3} = 20\) combinations of three HLAs.

All Together. By combining these brute-force-inspired solutions, we expect an individual to generate fewer than 200 records on average. The overhead is non-negligible, but it spares us more complex matching procedures.
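As a rough illustration of this expansion, the following sketch enumerates the cleartext records a recipient would generate; the field encoding, age-group boundaries, and HLA values are ours and purely illustrative.

```python
from itertools import combinations, product

# recipient blood type -> compatible donor blood types (cf. Fig. 1)
BLOOD_DONORS = {"O": ["O"], "A": ["O", "A"], "B": ["O", "B"],
                "AB": ["O", "A", "B", "AB"]}

def recipient_records(blood, age_groups, hla):
    """Yield one record per (compatible blood type, age group, HLA triple)."""
    for b, a, loci in product(BLOOD_DONORS[blood], age_groups,
                              combinations(sorted(hla), 3)):
        yield (b, a, tuple((locus, hla[locus]) for locus in loci))

# A 19-year-old type-A recipient: 2 blood types x 2 age groups x 20 HLA
# triples = 80 records, consistent with the sub-200 estimate above.
hla = {"A": 1, "B": 2, "C": 3, "E": 4, "F": 5, "G": 6}   # illustrative values
records = list(recipient_records("A", ["[12-19]", "[20-39]"], hla))
assert len(records) == 2 * 2 * 20
```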

We now consider an encrypted version of an individual’s record, called a “fingerprint”, and we want to test whether two fingerprints are equal or not.

1.2 Related Work

Testing whether two ciphertexts hold the same value can be easily done in some contexts. For instance, when all the relevant ciphertexts are encrypted under the same key, and when a deterministic encryption scheme is used, it is sufficient to compare the two outputs [BBO06]. It is also possible to allow testing a ciphertext against a plaintext value by simply using deterministic public key encryption or a one-way function (optionally along with a classical encryption of the message, if decryption is needed).

Public Key Encryption with Equality Test (PKEET) allows testing plaintext-equality between two ciphertexts [YTHW10], while encryption can be done by anyone. One may think of deterministic public key encryption schemes as a subset of PKEET schemes.

In other contexts, more elaborate schemes are needed. In searchable encryption [SWP00, BDOP04], each keyword w from the input message m is encrypted separately as \(s = \mathsf {PEKS}(w)\). The PEKS scheme then allows other participants to search for a keyword \(w'\) in m simply by testing whether \(w = w'\), given s and a trapdoor value \(T_{w'}\). Another variant allows testing between a ciphertext and a plaintext, without a trapdoor [CFGL12]. In different settings, it is possible to use interactive protocols, such as for private matching and set intersection [FNP04].

1.3 Our Contribution

Fingerprinting and Testing Keys. It is important to note that PKEETs rely on the high min-entropy of the message distribution [LZL13]. Indeed, since encryption is public, an attacker may test a target ciphertext against arbitrary messages and thereby conduct a brute-force search over the message space. We thus have to exclude this approach.

We introduce the notion of fingerprint, a kind of probabilistic ciphertext that allows plaintext-equality testing. Private fingerprinting (generation of fingerprints) allows us to provide semantic security [GM84] (a.k.a. indistinguishability or polynomial security). Alternatively (or additionally), it is possible to make the testing private. We consider all scenarios in our generic model but, since legitimate users need to run many more plaintext-equality tests than fingerprint generations (contrary to an adversary), we are mostly interested in a mechanism where testing is public and fingerprinting is private.

Finally, we would prefer non-interactive matching protocols: in our motivation, searching for a cycle requires many compatibility tests; using an interactive protocol would incur an important communication cost and make the system less robust to network hazards.

Blind and Threshold Fingerprinting. We could entrust the fingerprinting key to a trusted third party (TTP) to control the number of queries, but we also want the fingerprints to be generated without the TTP seeing the input messages. Thus, we will use blind fingerprinting, which is similar to blind signing [Cha82], although we do not require unlinkability.

This is not enough for privacy: the fingerprinter can still generate as many fingerprints as they want and test them for plaintext-equality (assuming public testing). To actually achieve a decent level of privacy, we therefore split the fingerprinter into several parties: without a collusion above some threshold, no information is leaked about the input message (blindness) and no brute-force attack is possible.

1.4 Organization

In Sect. 2, we first draw a generic model for fingerprinting under the constraints listed above; in particular, we take into consideration both public and private fingerprinting and both public and private testing. Then, in Sect. 3, we introduce the two assumptions our construction relies on, one of which is new to this paper and shown to hold in the generic bilinear group model. Finally, we propose a construction for our new scheme in Sect. 4, show its security, and present the blind and threshold variants which extend the privacy of the user.

The proof for the new assumption in the generic bilinear group model is postponed to Appendix A, and the proofs of security of our construction can be found in Subsect. 4.3.

2 Fingerprinting Scheme

In this section, we first define a more general fingerprinting mechanism, where the generation of fingerprints and the testing algorithm require keys that can be either private or public. We will later focus on our concrete scenario, with private fingerprint generation and public testing.

2.1 Description

We consider three kinds of players:

  • the fingerprinter who generates the fingerprints of messages using the fingerprinting key. We consider this operation in the honest-but-curious framework, since we eventually split the fingerprinter into several parties, each holding a share of the fingerprinting key;

  • the tester who checks whether two fingerprints correspond to the same message or not, using the testing key;

  • the users who have access to the list of fingerprints, and who may query for new fingerprints (through the fingerprinter) and compare fingerprints (through the tester).

We stress however that the fingerprinting and testing keys may be either public or private. When a key is secret, the users have to interact with the owner of the key to benefit from the corresponding functionality; when it is public, the users can act on behalf of the fingerprinter or the tester. The choice of publishing a key or keeping it private will depend on the scenario under consideration.

Finally, we choose to take advantage of the asymmetric nature of our use case: bipartite matching between people from two different groups (donors and recipients). We will thus manipulate two kinds of fingerprints, called "left" and "right" fingerprints in this generic specification.

We thus define four protocols:

  • \(\mathrm {\textsf {KeyGen}}(1^k)\) creates the global parameters and the left and right fingerprinting keys \(\mathrm {\textsf {lk}}\) and \(\mathrm {\textsf {rk}}\) as well as the testing key \(\mathrm {\textsf {tk}}\), for security parameter k;

  • \(\mathrm {\textsf {LFingerprint}}(\mathrm {\textsf {lk}}, m)\), given a left-fingerprinting key \(\mathrm {\textsf {lk}}\) and a message m, outputs a left-fingerprint \(f_L\);

  • \(\mathrm {\textsf {RFingerprint}}(\mathrm {\textsf {rk}}, m)\), given a right-fingerprinting key \(\mathrm {\textsf {rk}}\) and a message m, outputs a right-fingerprint \(f_R\);

  • \(\mathrm {\textsf {Test}}(\mathrm {\textsf {tk}}, f_L, f_R)\), given a testing key \(\mathrm {\textsf {tk}}\), a left-fingerprint \(f_L\) and a right-fingerprint \(f_R\), reveals whether they correspond to the same message or not.

As already noted above, these procedures can be either private or public, and they can be algorithms to be run offline, or interactive protocols. Various situations can be envisioned according to the secrecy of the fingerprinting and testing keys.

  • Testing and fingerprinting keys public: security relies solely on the high entropy of the inputs (message-locked encryption, as in PKEETs);

  • Fingerprinting keys private only: our use case, where we want to limit the generation of fingerprints, but allow anyone to test freely for compatibility;

  • Testing key private only: this can be relevant if the message space is very constrained, when even a few tests could leak too much information;

  • Testing and fingerprinting keys private: this has the highest security guarantee, but is usually impractical unless performing very few queries is enough.

Remark 2

We can choose to have one of the fingerprinting keys private, and the other public. This setup can give some flexibility for specific use cases.

2.2 Security Model

Let us now make more precise the security notions we want to achieve. Since secret information can include the fingerprinting keys \(\mathrm {\textsf {lk}}\) and \(\mathrm {\textsf {rk}}\), the testing key \(\mathrm {\textsf {tk}}\), and the users’ input messages, we consider the following security properties:

  1. unforgeability of fingerprinting (even against the tester);

  2. one-more indistinguishability of testing (even against the fingerprinter);

  3. privacy of the user w.r.t. the tester;

  4. privacy of the user w.r.t. the fingerprinter.

Authentication of the Fingerprinter. The raison d’être of the fingerprinter is to generate fingerprints, so unforgeability guarantees that no one else can do so: even a collusion between the tester (access to the testing key) and users (queries to the fingerprinter) should be unable to generate a valid fingerprint that was not provided by the fingerprinter. This implies that the fingerprinting key is not leaked during this game. We formally define Fingerprint-Unforgeability (\(\mathrm {\textsf {FP}{-}\textsf {UF}}\)).

Definition 1

(FP-UF). Let \(\varPi = (\mathrm {\textsf {KeyGen}},\mathrm {\textsf {LFingerprint}},\mathrm {\textsf {RFingerprint}},\mathrm {\textsf {Test}})\) be the scheme presented above, and let \(\mathcal {A}\) be a polynomial-time adversary. Let

$$ \mathrm {\textsf {Adv}}^\mathrm {\textsf {FP}{-}\textsf {UF}}_{\varPi ,L}(\mathcal {A}) =\Pr \left( \begin{aligned}&(\mathrm {\textsf {lk}}, \mathrm {\textsf {rk}}, \mathrm {\textsf {tk}}) \overset{{}_\$}{\leftarrow }\mathrm {\textsf {KeyGen}}(1^k), (m^\star , f_L^\star ) \leftarrow \mathcal {A}^\mathcal {L}(\mathrm {\textsf {rk}}, \mathrm {\textsf {tk}}),\\&f_R \leftarrow \mathrm {\textsf {RFingerprint}}(\mathrm {\textsf {rk}}, m^\star ) : \mathrm {\textsf {Test}}(\mathrm {\textsf {tk}}, f_L^\star , f_R) = 1 \end{aligned}\right) $$

where \(\mathcal {L}\) refers to the left-fingerprinting oracle, which answers queries on a message \(m_i\) with \(f_{L,i} = \mathrm {\textsf {LFingerprint}}(\mathrm {\textsf {lk}}, m_i)\). We insist that \(m^\star \) be distinct from any queried \(m_i\).

We similarly define \(\mathrm {\textsf {Adv}}^\mathrm {\textsf {FP}{-}\textsf {UF}}_{\varPi ,R}\), with the left-fingerprinting key but access to the right-fingerprinting oracle. We say that \(\varPi \) is \((t,\varepsilon )-\mathrm {\textsf {FP}{-}\textsf {UF}}\)-secure when both \(\mathrm {\textsf {Adv}}^\mathrm {\textsf {FP}{-}\textsf {UF}}_{\varPi ,L}(\mathcal {A}) \le \varepsilon \) and \(\mathrm {\textsf {Adv}}^\mathrm {\textsf {FP}{-}\textsf {UF}}_{\varPi ,R}(\mathcal {A}) \le \varepsilon \) for any \(\mathcal {A}\) running within time t.

Authentication of the Tester. The purpose of the tester is to help the user test plaintext-equality between fingerprints. But even a collusion between the fingerprinter (access to the fingerprinting keys) and users (queries to the tester) should be unable to guess the result of another test. This implies that the testing key is not leaked. We formally define Testing-Indistinguishability (\(\mathrm {\textsf {T}{-}\textsf {IND}}\)).

Definition 2

(T-IND). Let \(\varPi = (\mathrm {\textsf {KeyGen}},\mathrm {\textsf {LFingerprint}},\mathrm {\textsf {RFingerprint}},\mathrm {\textsf {Test}})\) be the scheme presented above, and \(\mathcal {A}=(\mathcal {A}_1,\mathcal {A}_2)\) a polynomial-time adversary. Let

$$ \mathrm {\textsf {Adv}}^\mathrm {\textsf {T}{-}\textsf {IND}}_{\varPi ,L}(\mathcal {A}) \!= \left| \Pr \left( \begin{aligned}&(\mathrm {\textsf {lk}}, \mathrm {\textsf {rk}}, \mathrm {\textsf {tk}}) \overset{{}_\$}{\leftarrow }\mathrm {\textsf {KeyGen}}(1^k), \\&(m_0, m_1, s) \leftarrow \mathcal {A}_1^\mathcal {T}(\mathrm {\textsf {lk}}, \mathrm {\textsf {rk}}), f_L \leftarrow \mathrm {\textsf {LFingerprint}}(\mathrm {\textsf {lk}}, m_0), \\&b \overset{{}_\$}{\leftarrow }\{0, 1\}, f_R \leftarrow \mathrm {\textsf {RFingerprint}}(\mathrm {\textsf {rk}}, m_b), \\&b' \leftarrow \mathcal {A}_2^\mathcal {T}(s, f_L, f_R) : b' = b \end{aligned}\right) - \frac{1}{2} \right| $$

where \(\mathcal {T}\) refers to the testing oracle, which answers queries on fingerprints \(f_L,f_R\) with \(\mathcal {T}(f_L,f_R) = \mathrm {\textsf {Test}}(\mathrm {\textsf {tk}}, f_L,f_R)\). We require that the attacker does not submit the challenge fingerprint \(f_R\) to the testing oracle.

We define \(\mathrm {\textsf {Adv}}^\mathrm {\textsf {T}{-}\textsf {IND}}_{\varPi ,R}(\mathcal {A})\) in a similar fashion. We say that \(\varPi \) is \((t,\varepsilon )-\mathrm {\textsf {T}{-}\textsf {IND}}\)-secure if both \(\mathrm {\textsf {Adv}}^\mathrm {\textsf {T}{-}\textsf {IND}}_{\varPi ,L}(\mathcal {A})\le \varepsilon \) and \(\mathrm {\textsf {Adv}}^\mathrm {\textsf {T}{-}\textsf {IND}}_{\varPi ,R}(\mathcal {A})\le \varepsilon \) for any adversary \(\mathcal {A}\) running within time t.

One can note that for such a strong notion of indistinguishability, which only excludes the challenge fingerprints from being queried to the testing-oracle, the fingerprints must be non-malleable.

Privacy of the User. This security notion adapts semantic security to our scheme: even a collusion between the tester (access to the testing key) and users (queries to the fingerprinter) should not be able to distinguish a fingerprint of a message \(m_0\) from a fingerprint of a message \(m_1\) (unless they already know a fingerprint of \(m_0\) or of \(m_1\)). Furthermore, the collusion may include one of the two fingerprinting keys (but not both): we give out the left-fingerprinting key when proving the semantic security of left-fingerprinting, and the right-fingerprinting key when proving the semantic security of right-fingerprinting. We formally define Fingerprint-Indistinguishability (\(\mathrm {\textsf {FP}{-}\textsf {IND}}\)).

Definition 3

(FP-IND). Let \(\varPi = (\mathrm {\textsf {KeyGen}},\mathrm {\textsf {LFingerprint}},\mathrm {\textsf {RFingerprint}},\mathrm {\textsf {Test}})\) be the scheme presented above, and let \(\mathcal {A}=(\mathcal {A}_1,\mathcal {A}_2)\) be a polynomial-time adversary. Let

$$ \mathrm {\textsf {Adv}}^\mathrm {\textsf {FP}{-}\textsf {IND}}_{\varPi ,L}(\mathcal {A}) \!= \left| \Pr \left( \begin{aligned}&(\mathrm {\textsf {lk}}, \mathrm {\textsf {rk}}, \mathrm {\textsf {tk}}) \overset{{}_\$}{\leftarrow }\mathrm {\textsf {KeyGen}}(1^k), (m_0, m_1, s) \leftarrow \mathcal {A}_1^\mathcal {R}(\mathrm {\textsf {lk}}, \mathrm {\textsf {tk}}), \\&b \overset{{}_\$}{\leftarrow }\{0, 1\}, f_L \leftarrow \mathrm {\textsf {LFingerprint}}(\mathrm {\textsf {lk}}, m_b), \\&b' \leftarrow \mathcal {A}_2^\mathcal {R}(s, f_L) : b' = b \end{aligned}\right) \! - \frac{1}{2} \right| $$

where \(\mathcal {R}\) refers to the right-fingerprinting oracle, which answers queries on a message \(m'_i\) with \(\mathcal {R}(m'_i) = \mathrm {\textsf {RFingerprint}}(\mathrm {\textsf {rk}}, m'_i)\). We insist that \(m'_i \notin \{m_0,m_1\}\) for all queries to \(\mathcal {R}\).

We define \(\mathrm {\textsf {Adv}}^\mathrm {\textsf {FP}{-}\textsf {IND}}_{\varPi ,R}(\mathcal {A})\) similarly. We say that \(\varPi \) is \((t,\varepsilon )-\mathrm {\textsf {FP}{-}\textsf {IND}}\)-secure if both \(\mathrm {\textsf {Adv}}^\mathrm {\textsf {FP}{-}\textsf {IND}}_{\varPi ,L}(\mathcal {A})\le \varepsilon \) and \(\mathrm {\textsf {Adv}}^\mathrm {\textsf {FP}{-}\textsf {IND}}_{\varPi ,R}(\mathcal {A})\le \varepsilon \) for any adversary \(\mathcal {A}\) running within time t.

Note that fingerprint generation itself should not reveal anything about the message being fingerprinted: the view of the fingerprinter should be the same regardless of the message. As with blind signatures [Cha82], no adversary playing the role of the fingerprinter should be able to distinguish a fingerprinting of \(m_0\) from a fingerprinting of \(m_1\). However, if the fingerprinter sees the resulting fingerprint and locally generates a fingerprint for \(m_0\), they can easily distinguish between the two cases. To avoid this, the fingerprinter should be split into several parties that must cooperate to create a new fingerprint. This last security notion thus suggests the use of a blind protocol and a threshold scheme.

Remark 3

Contrary to blind signatures, user anonymity is not required; in our use-case, contact information must be joined with the final published fingerprint.

3 Assumptions

Our construction adapts the randomizable signature proposed by Pointcheval and Sanders [PS16], which relies on the q-\({\textsf {MSDH}}\)-1 assumption [PS18]. Our scheme additionally requires indistinguishability, which calls for another assumption; for this, we introduce q-\({\textsf {DMSDH}}\)-1, a decisional variant of q-\({\textsf {MSDH}}\)-1, and prove that it holds in the generic bilinear group model.

Definition 4

(q-\({{\mathbf {\mathsf{{MSDH}}}}}\)-1). Let \((p, \mathbb {G}_1, \mathbb {G}_2, \mathbb {G}_T, e)\) be a bilinear group setting of type 3, with g (respectively \(\tilde{g}\)) a generator of \(\mathbb {G}_1\) (respectively \(\mathbb {G}_2\)). Given \({(g^{x^i}, {\tilde{g}}^{x^i})}_{0 \le i \le q}\) along with \((g^a, {\tilde{g}}^a, {\tilde{g}}^{a \cdot x})\) for \(a, x \overset{{}_\$}{\leftarrow }\mathbb {Z}_p^*\), no adversary can output a tuple \((w, P, h^{\frac{1}{x + w}}, h^{\frac{a}{P(x)}})\) for some \(h \in \mathbb {G}_1^*\) where P is a polynomial of degree at most q and w is a scalar such that \((X + w)\) and P(X) are relatively prime.

Definition 5

(q-\({{\mathbf {\mathsf{{DMSDH}}}}}\)-1). Let \((p, \mathbb {G}_1, \mathbb {G}_2, \mathbb {G}_T, e)\) be a bilinear group setting of type 3, with g (respectively \(\tilde{g}\)) a generator of \(\mathbb {G}_1\) (respectively \(\mathbb {G}_2\)). Given \({(g^{x^i}, {\tilde{g}}^{x^i})}_{0 \le i < q}\) along with \((g^a, g^{a \cdot x}, {\tilde{g}}^a)\) for \(a, x \overset{{}_\$}{\leftarrow }\mathbb {Z}_p^*\), and for any (wP) where P is a polynomial of degree at most q and w is a scalar such that \((X + w)\) and P(X) are relatively prime, no adversary can distinguish \((h^{\frac{1}{x + w}}, h^{\frac{a}{P(x)}})\) for some \(h \in \mathbb {G}_1^*\) from a random pair of elements of \(\mathbb {G}_1\).

Theorem 1

q-\({\textsf {DMSDH}}\)-1 holds in the generic bilinear group model.

Proof

The computational assumption q-\({\textsf {MSDH}}\)-1 from [PS18] gives \({\tilde{g}}^{a \cdot x} \in \mathbb {G}_2\) and expects the forged pair in \(\mathbb {G}_1\), whereas the decisional version q-\({\textsf {DMSDH}}\)-1 gives \(g^{a \cdot x} \in \mathbb {G}_1\) and the challenge pair in \(\mathbb {G}_1\). So, the group the pair belongs to determines what security guarantee we obtain (either unforgeability from q-\({\textsf {MSDH}}\)-1 or indistinguishability from q-\({\textsf {DMSDH}}\)-1). Thus, the reasoning for q-\({\textsf {DMSDH}}\)-1 is very similar to that for q-\({\textsf {MSDH}}\)-1. The full proof can be found in Appendix A.

4 Fingerprinting from Pointcheval-Sanders Signatures

In the following, we focus on our initial scenario with secret fingerprinting and public testing of plaintext-equality, for low-entropy messages. Our construction is heavily influenced by the assumption that it is possible to efficiently enumerate all the valid messages.

4.1 The Pointcheval-Sanders Signature Scheme

Our construction derives from Pointcheval-Sanders signatures [PS16, PS18]. We reproduce here the definition for the single-message version. Let \(e: \mathbb {G}_1 \times \mathbb {G}_2 \rightarrow \mathbb {G}_T\) be a type-3 pairing with \(\mathbb {G}_1,\mathbb {G}_2,\mathbb {G}_T\) of prime order p, and \(\mathbb {G}_1^* = \mathbb {G}_1 \backslash \{1_{\mathbb {G}_1}\}\). Then, we define the following procedures.

  • \(\mathrm {\textsf {KeyGen}}(1^k)\): \((\tilde{g}, x, y) \overset{{}_\$}{\leftarrow }\mathbb {G}_2 \times \mathbb {Z}_p^2\), \(\mathrm {\textsf {sk}}= (x, y)\) and \(\mathrm {\textsf {pk}}= (\tilde{g}, \tilde{X} = {\tilde{g}}^x, \tilde{Y} = {\tilde{g}}^y)\).

  • \(\mathrm {\textsf {Sign}}(\mathrm {\textsf {sk}}, m)\): draw \(h \overset{{}_\$}{\leftarrow }\mathbb {G}_1^*\) and return \(\sigma = (h, h^{x+y m})\).

  • \(\mathrm {\textsf {Verify}}(\mathrm {\textsf {pk}}, m, \sigma )\): return 1 if \(\sigma _1 \ne 1_{\mathbb {G}_1}\) and \(e(\sigma _1, \tilde{X} {\tilde{Y}}^m) = e(\sigma _2, \tilde{g})\), else 0.

This signature scheme has been shown unforgeable in the sense of EUFCMA under the interactive PS assumption [PS16], and in the sense of EUFwCMA (non-adaptive queries) under the q-\({\textsf {MSDH}}\)-1 assumption, where q bounds the number of signing requests, all made before the setup [PS18]. The same levels of security are achieved when the elements \(g \in \mathbb {G}_1\) and \(Y = g^y\) are included in the public key \(\mathrm {\textsf {pk}}\), as long as \(X = g^x\) is kept private.
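For concreteness, here is a minimal sketch of this single-message scheme; py_ecc's bn128 (BN254) pairing stands in for the abstract type-3 setting, h is drawn as a random power of the fixed generator, and all function names are ours. This is illustrative, not a hardened implementation.

```python
from secrets import randbelow
from py_ecc.bn128 import G1, G2, add, multiply, pairing, curve_order as p

def keygen():
    x, y = 1 + randbelow(p - 1), 1 + randbelow(p - 1)
    sk = (x, y)
    pk = (G2, multiply(G2, x), multiply(G2, y))    # (g~, X~ = g~^x, Y~ = g~^y)
    return sk, pk

def sign(sk, m):
    x, y = sk
    h = multiply(G1, 1 + randbelow(p - 1))         # random h in G1*
    return (h, multiply(h, (x + y * m) % p))       # sigma = (h, h^{x+ym})

def verify(pk, m, sig):
    g2, Xt, Yt = pk
    s1, s2 = sig
    if s1 is None:                                 # reject the identity element
        return False
    # e(sigma_1, X~ Y~^m) == e(sigma_2, g~); py_ecc's pairing takes (G2, G1)
    return pairing(add(Xt, multiply(Yt, m % p)), s1) == pairing(g2, s2)

sk, pk = keygen()
assert verify(pk, 42, sign(sk, 42)) and not verify(pk, 43, sign(sk, 42))
```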

4.2 Fingerprinting Scheme with Public Plaintext-Equality Testing

Let \(\mathcal {H}\) be a random oracle. We now propose a fingerprinting scheme where the fingerprinting procedure requires a secret key \(\mathrm {\textsf {lk}}\) or \(\mathrm {\textsf {rk}}\), while testing is a public process (there is no testing key \(\mathrm {\textsf {tk}}\), or alternatively it is public).

  • \(\mathrm {\textsf {KeyGen}}(1^k)\): randomly draw \((g, \tilde{g}, x, y) \overset{{}_\$}{\leftarrow }\mathbb {G}_1 \times \mathbb {G}_2 \times \mathbb {Z}_p^2\), set \((X,\tilde{X},Y,\tilde{Y}) \leftarrow (g^x,{\tilde{g}}^x,g^y,{\tilde{g}}^y)\), return \(\mathrm {\textsf {lk}}= X\), \(\mathrm {\textsf {rk}}= \tilde{X}\), and \(\mathrm {\textsf {pk}}= (g,Y,\tilde{g},\tilde{Y})\).

  • \(\mathrm {\textsf {LFingerprint}}(\mathrm {\textsf {lk}}, m)\): draw \(u \overset{{}_\$}{\leftarrow }\mathbb {Z}^*_p\), return \(f_L = (g^u, {(X Y^{\mathcal {H}(m)})}^u)\).

  • \(\mathrm {\textsf {RFingerprint}}(\mathrm {\textsf {rk}}, m)\): draw \(u \overset{{}_\$}{\leftarrow }\mathbb {Z}^*_p\), return \(f_R = ({\tilde{g}}^u, {(\tilde{X} {\tilde{Y}}^{\mathcal {H}(m)})}^u)\).

  • \(\mathrm {\textsf {Test}}(f_L, f_R)\): return 1 if \(f_{L,1} \ne 1_{\mathbb {G}_1}\), \(f_{R,1} \ne 1_{\mathbb {G}_2}\), and \(e(f_{L,1}, f_{R,2}) = e(f_{L,2}, f_{R,1})\), else 0.
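A minimal sketch of this scheme follows, under the same py_ecc/BN254 stand-in, with SHA-256 reduced modulo p playing the role of the random oracle \(\mathcal {H}\); again, all names are ours.

```python
import hashlib
from secrets import randbelow
from py_ecc.bn128 import G1, G2, add, multiply, pairing, curve_order as p

def H(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % p

def keygen():
    x, y = 1 + randbelow(p - 1), 1 + randbelow(p - 1)
    lk, rk = multiply(G1, x), multiply(G2, x)            # X = g^x, X~ = g~^x
    pk = (G1, multiply(G1, y), G2, multiply(G2, y))      # (g, Y, g~, Y~)
    return lk, rk, pk

def lfingerprint(lk, pk, msg):
    g, Y, _, _ = pk
    u = 1 + randbelow(p - 1)
    # f_L = (g^u, (X Y^{H(m)})^u)
    return (multiply(g, u), multiply(add(lk, multiply(Y, H(msg))), u))

def rfingerprint(rk, pk, msg):
    _, _, gt, Yt = pk
    u = 1 + randbelow(p - 1)
    # f_R = (g~^u, (X~ Y~^{H(m)})^u)
    return (multiply(gt, u), multiply(add(rk, multiply(Yt, H(msg))), u))

def test(fL, fR):
    if fL[0] is None or fR[0] is None:                   # reject identities
        return False
    # e(f_{L,1}, f_{R,2}) == e(f_{L,2}, f_{R,1})
    return pairing(fR[1], fL[0]) == pairing(fR[0], fL[1])

lk, rk, pk = keygen()
assert test(lfingerprint(lk, pk, b"O|[20-39]"), rfingerprint(rk, pk, b"O|[20-39]"))
assert not test(lfingerprint(lk, pk, b"A|[20-39]"), rfingerprint(rk, pk, b"O|[20-39]"))
```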

4.3 Security of the Basic Scheme

Theorem 2

Our fingerprinting scheme is \(\mathrm {\textsf {FP}{-}\textsf {UF}}\) under q-\({\textsf {MSDH}}\text {-}1\) in the random oracle model, where q corresponds to the number of queries to the random oracle or to the fingerprinting oracles.

Proof

We define the extended Pointcheval-Sanders signature scheme (EPS) as a variant of the PS signature scheme where \(\mathrm {\textsf {pk}}\) includes Y, i.e. \(\mathrm {\textsf {pk}}= (Y, \tilde{g}, \tilde{X}, \tilde{Y})\). We argue that EPS is EUFwCMA secure under q-\({\textsf {MSDH}}\)-1 in Lemma 1, and reduce the \(\mathrm {\textsf {FP}{-}\textsf {UF}}\) security of our fingerprinting scheme to the EUFwCMA security of EPS in Lemma 2.

Lemma 1

If q-\({\textsf {MSDH}}\text {-}1\) holds, then EPS is EUFwCMA where q is the number of queries to the signing oracle.

Proof

We refer to the proof of Theorem 10 from [PS18, Sect. 5.1, p. 330], where the challenger is given \({(g^{x^i})}_i\), \({({\tilde{g}}^{x^i})}_i\) and \((g^a, {\tilde{g}}^a, {\tilde{g}}^{a \cdot x})\), and feeds the adversary with \({\tilde{Y}}_1 \leftarrow {\tilde{g}}^a\) and \({\tilde{Y}}_i \leftarrow {\tilde{Y}}_1^{u_i}\). To prove the EUFwCMA security of the signature scheme when \(\mathrm {\textsf {pk}}\) includes \({(Y_i)}_i\), it suffices to have the challenger also offer \(Y_1 \leftarrow g^a\) and \(Y_i \leftarrow Y_1^{u_i}\).

Lemma 2

If EPS is EUFwCMA, then our fingerprinting scheme is \(\mathrm {\textsf {FP}{-}\textsf {UF}}\). The number of queries to the signing oracle in EPS maps to the number of queries to the random oracle or to the fingerprinting oracles in our scheme.

Proof

Let \(\mathcal {A}\) be an adversary that breaks the \(\mathrm {\textsf {FP}{-}\textsf {UF}}\) security of our scheme. Then, we create an adversary \(\mathcal {B}\) that breaks the EUFwCMA security of EPS. By symmetry of the left and right games, we assume without loss of generality that \(\mathrm {\textsf {Adv}}^\mathrm {\textsf {FP}{-}\textsf {UF}}_{\varPi ,L}(\mathcal {A})\) is non-negligible.

We will use \(\mathcal {H}\) to “redirect” the queries from \(\mathcal {A}\) towards predetermined values: \(\mathcal {B}\) first draws \({(m_i)}_i \overset{{}_\$}{\leftarrow }\mathbb {Z}_p^q\), submits the list of messages \({(m_i)}_i\) to the signing challenger, and will answer to the i-th original query to \(\mathcal {H}\) (for some message \(M_i\)) with \(m_i\).

In return, our adversary \(\mathcal {B}\) is given \(\mathrm {\textsf {pk}}= (Y, \tilde{g}, \tilde{X}, \tilde{Y})\) as well as signatures \({(\sigma _i)}_i\) for \({(m_i)}_i\), i.e. values such that \(e(\sigma _{i,1}, \tilde{X} {\tilde{Y}}^{m_i}) = e(\sigma _{i,2}, \tilde{g})\). We need to output \((m^\star , \sigma ^\star )\) such that \(e(\sigma ^\star _1, \tilde{X} {\tilde{Y}}^{m^\star }) = e(\sigma ^\star _2, \tilde{g})\) where \(m^\star \) is distinct from any queried \(m_i\).

For this, we simulate the \(\mathrm {\textsf {FP}{-}\textsf {UF}}\) game for \(\mathcal {A}\) with \(\mathrm {\textsf {pk}}' \leftarrow (g, Y, \tilde{g}, \tilde{Y})\) and \(\mathrm {\textsf {rk}}\leftarrow \tilde{X}\), as well as access to an oracle \(\mathcal {L}\) which answers queries \(M_i\) with \(\sigma _i\). Then, \(\mathcal {A}\) should output \((M^\star , f_L^\star )\) where \(M^\star \) is distinct from any queried \(M_i\), and such that \(\mathrm {\textsf {Test}}(\mathrm {\textsf {tk}}, f_L^\star , f_R) = 1\) for \(f_R \leftarrow \mathrm {\textsf {RFingerprint}}(\mathrm {\textsf {rk}}, M^\star )\), i.e. such that \(f_{L,1}^\star \ne 1_{\mathbb {G}_1}\) and:

$$ e\left( f_{L,1}^\star , {\left( \tilde{X} \tilde{Y}^{\mathcal {H}(M^\star )}\right) }^u\right) = e\left( f_{L,2}^\star , {\tilde{g}}^u\right) $$

for some u. Thus, \(\sigma ^\star = f_L^\star \) is a valid PS signature for \(m^\star = \mathcal {H}(M^\star )\) with \(m^\star \) distinct from any queried \(m_i\).

Theorem 3

Our fingerprinting scheme is \(\mathrm {\textsf {FP}{-}\textsf {IND}}\) under q-\({\textsf {DMSDH}}\text {-}1\) in the random oracle model, where q corresponds to the number of queries to the random oracle or to the fingerprinting oracles.

Proof

Let \(\mathcal {A}\) be an adversary against \(\mathrm {\textsf {FP}{-}\textsf {IND}}\); we then provide an adversary \(\mathcal {B}\) against q-\({\textsf {DMSDH}}\)-1. We assume that \(\mathrm {\textsf {Adv}}^\mathrm {\textsf {FP}{-}\textsf {IND}}_{\varPi ,L}(\mathcal {A})\) is non-negligible; since the roles of \(\mathbb {G}_1\) and \(\mathbb {G}_2\) are symmetric, the same reasoning applies when \(\mathrm {\textsf {Adv}}^\mathrm {\textsf {FP}{-}\textsf {IND}}_{\varPi ,R}(\mathcal {A})\) is non-negligible.

First, \(\mathcal {B}\) is given \({(g^{x^i}, {\tilde{g}}^{x^i})}_i\) and \((g^a, g^{a \cdot x}, {\tilde{g}}^a)\). Then, it draws \({(m_i)}_i \overset{{}_\$}{\leftarrow }\mathbb {Z}_p^q\) and \(m \overset{{}_\$}{\leftarrow }\mathbb {Z}_p\), sets \(P = \prod _i (X + m_i)\), and submits (m, P) to the challenger, which answers with a pair \(\sigma \) that is either random or of the form \((h^{\frac{1}{x+m}}, h^{\frac{a}{P(x)}})\) for some \(h \in \mathbb {G}_1\).

Now, \(\mathcal {B}\) should be able to distinguish between these two cases. For this, \(\mathcal {B}\) will run \(\mathcal {A}\) while simulating the \(\mathrm {\textsf {FP}{-}\textsf {IND}}\) game, by setting \(g' \leftarrow g^{\prod _i (x + m_i)}\) and \(\tilde{g'} \leftarrow {\tilde{g}}^{\prod _i (x + m_i)}\), using \({(g^{x^i})}_i\) and \({({\tilde{g}}^{x^i})}_i\), as well as \(X \leftarrow g^{a \cdot x}\), \(Y \leftarrow g^a\), and \({\tilde{Y}} \leftarrow {\tilde{g}}^a\), to define the public key \(\mathrm {\textsf {pk}}= (g', Y, \tilde{g'}, \tilde{Y})\) and the left-fingerprinting key \(\mathrm {\textsf {lk}}= X\). This implicitly sets \(x' = \frac{a \cdot x}{\prod _i (x + m_i)}\) and \(y' = \frac{a}{\prod _i (x + m_i)}\).

To answer the q fingerprinting queries, \(\mathcal {B}\) sets the random oracle \(\mathcal {H}\) to map the j-th original query \(M_j\) to \(m_j\), and the right-fingerprinting oracle \(\mathcal {R}\) to return \(({({\tilde{g'}}^{\prod _{i \ne j} (x + m_i)})}^{u_j}, {({\tilde{g'}}^a)}^{u_j})\) for a random \(u_j \overset{{}_\$}{\leftarrow }\mathbb {Z}_p^*\). One may verify that this is a valid right-fingerprint for \(M_j\).

Finally, \(\mathcal {A}\) outputs \((M'_0, M'_1)\), and \(\mathcal {B}\) draws \(b \leftarrow \{0, 1\}\). We would now like to set \(\mathcal {H}(M'_b)\) to m, but \(\mathcal {A}\) may have queried the random oracle on this value before. Thus, on any query \(M_j\), \(\mathcal {H}\) will additionally guess with probability \(\frac{1}{q}\) that \(M_j = {M'}_b\) and accordingly set \(\mathcal {H}(M_j)\) to m instead of \(m_j\). \(\mathcal {B}\) can then check its guess when \(\mathcal {A}\) outputs \((M'_0, M'_1)\), and abort if it was incorrect; this implies a penalty of a factor q to the probability that \(\mathcal {B}\) wins the q-\({\textsf {DMSDH}}\)-1 game.

Now, since \(\mathcal {H}(M'_b) = m\), if \(\sigma \) is of the form \((h^{\frac{1}{x+m}}, h^{\frac{a}{P(x)}})\), then it is a valid left-fingerprint for \(M'_b\); otherwise, it provides no information about b to the adversary. Thus, \(\mathcal {B}\) answers the final request of \(\mathcal {A}\) with \(\sigma \); if \(\mathcal {A}\) guesses b correctly, then \(\mathcal {B}\) guesses that \(\sigma \) is of the form \((h^{\frac{1}{x+m}}, h^{\frac{a}{P(x)}})\), and otherwise that it is a random pair.

4.4 Improving the Privacy of the User

Since the left and right fingerprintings work in similar ways, we will only present the protocols for left fingerprinting.

Against the Fingerprinter Without the Final Fingerprint. In the naive construction above, the user sends the message in the clear to get back the fingerprint. In order to extend user privacy to the fingerprinters, we propose a blinded version, as in [PS16]:

  1. the user draws \(r \overset{{}_\$}{\leftarrow }\mathbb {Z}_p\) and sends \(C \leftarrow Y^m g^r\);

  2. the user runs a Zero-Knowledge Proof of Knowledge (ZKPoK) of m, r such that \(C = Y^m g^r\);

  3. the fingerprinter draws \(u \overset{{}_\$}{\leftarrow }\mathbb {Z}_p\) and sends back \(\alpha \leftarrow (g^u, {(XC)}^u)\);

  4. the user sets \(f_1 \leftarrow \alpha _1\), \(f_2 \leftarrow \alpha _2 \cdot \alpha _1^{-r}\).

This protocol is perfectly blind to the fingerprinter, since their view consists of a perfectly hiding Pedersen commitment [Ped92] and a ZK protocol, neither of which leaks any information about m; hence the privacy of the user. With an extractable ZKPoK, it is possible to prove the security of this blinded version, as in [PS16].
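A minimal sketch of this blinding round trip follows, with the ZKPoK omitted; the message m is assumed already hashed into \(\mathbb {Z}_p\), the py_ecc/BN254 stand-in is as before, and the function names are ours.

```python
from secrets import randbelow
from py_ecc.bn128 import G1, add, multiply, neg, curve_order as p

def user_blind(Y, m):
    r = randbelow(p)
    return add(multiply(Y, m), multiply(G1, r)), r     # C = Y^m g^r, keep r

def fingerprinter_respond(X, C):
    u = 1 + randbelow(p - 1)
    return (multiply(G1, u), multiply(add(X, C), u))   # alpha = (g^u, (XC)^u)

def user_unblind(alpha, r):
    a1, a2 = alpha
    # f = (alpha_1, alpha_2 * alpha_1^{-r}) = (g^u, (X Y^m)^u)
    return (a1, add(a2, neg(multiply(a1, r))))

# Round trip, reusing keygen, H, rfingerprint, and test from the sketch above:
lk, rk, pk = keygen()
C, r = user_blind(pk[1], H(b"O|[20-39]"))              # pk[1] is Y
fL = user_unblind(fingerprinter_respond(lk, C), r)
assert test(fL, rfingerprint(rk, pk, b"O|[20-39]"))
```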

Against the Fingerprinter with the Final Fingerprint. With the protocol presented above, if the fingerprinter gains access to the final fingerprint f, their ability to create fingerprints for arbitrary messages, together with the public testing key, lets them retrieve the message by exhaustive search. To block such searches, we amend the protocol by splitting the fingerprinter into n parties, using a secret sharing of the fingerprinting key. For some threshold k, no collusion of fewer than k parties can generate fingerprints; equivalently, having access to up to \(k-1\) shares of the fingerprinting key does not give more power than being a common user.

The threshold version makes use of Shamir's secret sharing [Sha79] to split the secret scalar x into n shares \(x_i\) (one for each sub-fingerprinter \(\textsf {F}_i\)), and we write \(X_i = g^{x_i}\). This way, for any qualified subset of the \(\textsf {F}_i\) (with at least k shares), there are public Lagrange coefficients \(\lambda _i\) such that \(x = \sum \lambda _i x_i\), and hence \(\prod X_i^{\lambda _i} = X\). A group of k sub-fingerprinters interacts as follows with the user:

  1. the user draws \(r \overset{{}_\$}{\leftarrow }\mathbb {Z}_p\) and broadcasts \(C \leftarrow Y^m g^r\);

  2. the user sends a NIZKPoK of m, r such that \(C = Y^m g^r\);

  3. each \(\textsf {F}_i\) draws \(u_i \overset{{}_\$}{\leftarrow }\mathbb {Z}_p\) and broadcasts \(\alpha _{i,1} \leftarrow g^{u_i}\);

  4. each \(\textsf {F}_i\) computes \(G \leftarrow \prod \alpha _{j,1}^{\lambda _j}\), and sends back \(\alpha _{i,2} \leftarrow G^{x_i} C^{u_i}\);

  5. the user sets

     $$\begin{aligned} f_1&\leftarrow G = g^u&f_2&\leftarrow G^{-r} \prod \alpha _{i,2}^{\lambda _i} = {(X Y^m)}^u \end{aligned}$$

     which implicitly defines \(u = \sum \lambda _i u_i\).

First, one can easily see that this still preserves the privacy of the user since, as before, neither C nor the NIZK contains any information about m. The final fingerprint f traces back to the user, but anonymity is not required: in our use case, it must be possible to contact the appropriate hospital when a match is found. The important property is the privacy of m: no subset of fewer than k sub-fingerprinters can conduct an exhaustive search.
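A minimal sketch (ours) of the underlying secret sharing and of the Lagrange recombination in the exponent, again under the py_ecc/BN254 stand-in:

```python
from secrets import randbelow
from py_ecc.bn128 import G1, add, multiply, curve_order as p

def share(x, k, n):
    """Split x into n Shamir shares (i, f(i)) with threshold k."""
    coeffs = [x] + [randbelow(p) for _ in range(k - 1)]
    return [(i, sum(c * pow(i, j, p) for j, c in enumerate(coeffs)) % p)
            for i in range(1, n + 1)]

def lagrange_at_zero(indices):
    """Lagrange coefficients lambda_i interpolating the polynomial at 0."""
    lam = {}
    for i in indices:
        num, den = 1, 1
        for j in indices:
            if j != i:
                num = num * (-j) % p
                den = den * (i - j) % p
        lam[i] = num * pow(den, -1, p) % p
    return lam

x = randbelow(p)
shares = share(x, 3, 5)[:3]                      # any 3 of the 5 shares suffice
lam = lagrange_at_zero([i for i, _ in shares])
X = None                                         # identity element in py_ecc
for i, xi in shares:
    X = add(X, multiply(G1, xi * lam[i] % p))    # prod X_i^{lambda_i}
assert X == multiply(G1, x)                      # equals X = g^x
```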

Of course, we have to prove that this still preserves fingerprinter privacy, or more precisely that it does not leak private information of honest sub-fingerprinters to corrupted ones. To this aim, we show that the view of any (static) subset of corrupted sub-fingerprinters can be simulated from the same information \(\alpha = (\alpha _1,\alpha _2)\) as the one output by the fingerprinter in the centralized protocol.

Let us assume that the corrupted sub-fingerprinters are \(\textsf {F}_1,\ldots ,\textsf {F}_c\), and the honest ones are \(\textsf {F}_{c+1},\ldots ,\textsf {F}_k\) (where \(c<k\)), and the simulator has drawn \(v_i \overset{{}_\$}{\leftarrow }\mathbb {Z}_p\) for \(i=c+1,\ldots ,k\): the corrupted players send \(\alpha _{i,1}\) for \(i=1,\ldots ,c\), and the simulator draws \(u_i \overset{{}_\$}{\leftarrow }\mathbb {Z}_p\) and generates \(\alpha _{i,1} \leftarrow g^{u_i}\) for \(i=c+1,\ldots , k-1\), while \(\alpha _{k,1}\leftarrow {(\alpha _1/\prod _{i=1}^{k-1} \alpha _{i,1}^{\lambda _i})}^{1/\lambda _k}\). The simulator also sets \(G \leftarrow \alpha _1\), and computes \(\alpha _{i,2} \leftarrow G^{v_i} C^{u_i}\) for \(i=c+1,\ldots , k-1\), while \(\alpha _{k,2}\leftarrow {(\alpha _2/\prod _{i=1}^{k-1} \alpha _{i,2}^{\lambda _i})}^{1/\lambda _k}\).

Since no information is known about the actual secret values \(x_i\), and the values \(v_i\) are indistinguishable from the real secret, all the simulated elements are perfectly indistinguishable from a real execution, under the condition that the corrupted sub-fingerprinters are honest-but-curious (and the subset of honest players remains the same: static corruptions). Indeed, in this protocol, no verifiability is required about the sub-fingerprinters: they are trusted to compute correctly, even if they try to learn more information.

4.5 Verifiability

If one wants to consider malicious sub-fingerprinters, verifiability is required for the user (private verifiability). An additional improvement can be reached: one could prevent malicious users from posting fake fingerprints or copies of fingerprints in the database, by using a proof of knowledge of the fingerprinted message (public verifiability). To achieve this, the sub-fingerprinters first need to publish a commitment \(C_i = g^{x_i} Y^{t_i}\) to their secret share \(x_i\) during key generation. This commitment is perfectly hiding, and its binding property relies on the secrecy of the discrete logarithm of Y in base g.

Private Verifiability. In the former case, verifiability can be enforced during the original fingerprinting process, with the additional verification of a NIZK Proof of Existence of \(u_i\) and a NIZK Proof of Knowledge of \(x_i\) and \(t_i\) such that \(\alpha _{i,1} = g^{u_i}\) and \(\alpha _{i,2} = G^{x_i} C^{u_i}\). These proofs can be done efficiently with Schnorr-like proofs.

Public Verifiability. In order to avoid fake fingerprints or copies, the user should prove their validity (against fake ones) and freshness (against copies). A non-malleable NIZK solves this: in addition to \(f=(f_1,f_2)\) and the NIZKs provided by the sub-fingerprinters, the user sends a NIZK Proof of Knowledge of m and r such that \(\alpha _2/f_2 = \alpha _1^r\) and \(C = Y^m g^r\). In order to guarantee non-malleability and prevent replay attacks, the user includes their own identity in the challenge computation (a signature of knowledge).

4.6 Full Protocol

Let us now fully describe the resulting protocol, with an optimized NIZK for public verifiability: the sub-fingerprinters \(\textsf {F}_i\), for \(i=1,\ldots ,n\), jointly generate a Shamir secret sharing of a random secret scalar x. They each own a share \(x_i\) and publish a commitment \(C_i = g^{x_i} Y^{t_i}\), for a random scalar \(t_i\). In order to get a fingerprint on a message m, the user (with identity \(\mathsf {Id}\)) contacts a subset of k sub-fingerprinters:

  1. the user draws \(r \overset{{}_\$}{\leftarrow }\mathbb {Z}_p\) and broadcasts \(C \leftarrow Y^m g^r\);

  2. the user sends an (extractable) NIZKPoK of m, r such that \(C = Y^m g^r\);

  3. each \(\textsf {F}_i\) draws \(u_i \overset{{}_\$}{\leftarrow }\mathbb {Z}_p\) and sends back \(\alpha _{i,1} \leftarrow g^{u_i}\);

  4. each \(\textsf {F}_i\) computes \(G \leftarrow \prod \alpha _{j,1}^{\lambda _j}\), and sends back \(\alpha _{i,2} \leftarrow G^{x_i} C^{u_i}\);

  5. each \(\textsf {F}_i\) starts a NIZK for \(u_i\), \(x_i\) and \(t_i\) such that

     $$\begin{aligned} \alpha _{i,1}&= g^{u_i}&\alpha _{i,2}&= G^{x_i} C^{u_i}. \end{aligned}$$

     More precisely, it draws \(u'_i,x'_i,t'_i \overset{{}_\$}{\leftarrow }\mathbb {Z}_p\) and sends

     $$\begin{aligned} A_{i,1}&= g^{u'_i}&A_{i,2}&= G^{x'_i} C^{u'_i}; \end{aligned}$$

  6. the user generates

     $$\begin{aligned} \alpha _1&\leftarrow G = \prod \alpha _{i,1}^{\lambda _i} = g^u&\alpha _2&\leftarrow \prod \alpha _{i,2}^{\lambda _i} = {(X C)}^u \\ A_1&\leftarrow \prod A_{i,1}^{\lambda _i} = g^{u'}&A_2&\leftarrow \prod A_{i,2}^{\lambda _i} = {(X C)}^{u'}, \end{aligned}$$

     where \(u = \sum \lambda _i u_i\) and \(u' = \sum \lambda _i u'_i\), as well as

     $$\begin{aligned} f_1&\leftarrow \alpha _1&f_2&\leftarrow G^{-r} \alpha _2 \end{aligned}$$

     and starts the NIZK for m and r such that \(\alpha _2/f_2 = \alpha _1^r\) and \(C = Y^m g^r\), with random \(r'\) and \(m'\):

     $$\begin{aligned} B_1&\leftarrow \alpha _1^{r'}&B_2&\leftarrow Y^{m'} g^{r'} \end{aligned}$$

     and publishes the challenge \(e = \mathcal {H}(\mathsf {Id}, C, f_1,f_2,A_1,A_2,B_1,B_2)\);

  7. each \(\textsf {F}_i\) completes the NIZK with

     $$\begin{aligned} u''_i&\leftarrow u'_i - e u_i&x''_i&\leftarrow x'_i - e x_i \end{aligned}$$

  8. the user sets

     $$\begin{aligned} u''&\leftarrow \sum \lambda _i u''_i&x''&\leftarrow \sum \lambda _i x''_i \end{aligned}$$

     which satisfy

     $$\begin{aligned} g^{u''}&= A_1 \alpha _1^{-e}&G^{x''}C^{u''}&= A_2 \alpha _2^{-e} \end{aligned}$$

     and completes their own NIZK with

     $$\begin{aligned} m''&\leftarrow m' - e m&r''&\leftarrow r' - e r \end{aligned}$$

     which satisfy

     $$\begin{aligned} \alpha _1^{r''}&= B_1 (\alpha _2/f_2)^{-e}&Y^{m''} g^{r''}&= B_2 C^{-e}. \end{aligned}$$

The final fingerprint \(f=(f_1,f_2)\) is published along with the intermediate values \((\alpha _2, C)\), the challenge e and the exponents \((u'',x'',m'',r'')\), which constitute a proof that can be verified by checking that \(e = \mathcal {H}(\mathsf {Id}, C, f_1,f_2,A_1,A_2,B_1,B_2)\), where the missing elements can be recomputed as

$$\begin{aligned} A_1&\leftarrow g^{u''} f_1^e&A_2&\leftarrow f_1^{x''}C^{u''} \alpha _2^e \\ B_1&\leftarrow f_1^{r''} (\alpha _2/f_2)^e&B_2&\leftarrow Y^{m''} g^{r''} C^e \end{aligned}$$

This is just an optimization of the Fiat-Shamir heuristic applied to Schnorr-like proofs.
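A minimal sketch (ours) of this public verification, under the same stand-ins as before; SHA-256 over point representations plays the role of \(\mathcal {H}\).

```python
import hashlib
from py_ecc.bn128 import G1, add, multiply, neg, curve_order as p

def challenge(identity: bytes, points) -> int:
    h = hashlib.sha256(identity)
    for pt in points:
        h.update(repr(pt).encode())        # stand-in for the random oracle H
    return int.from_bytes(h.digest(), "big") % p

def verify_fingerprint(Y, identity, C, f1, f2, alpha2, e, u2, x2, m2, r2):
    """Check the published proof (e, u'', x'', m'', r'') for f = (f1, f2)."""
    A1 = add(multiply(G1, u2), multiply(f1, e))                   # g^{u''} f_1^e
    A2 = add(add(multiply(f1, x2), multiply(C, u2)),
             multiply(alpha2, e))                                 # f_1^{x''} C^{u''} alpha_2^e
    quot = add(alpha2, neg(f2))                                   # alpha_2 / f_2
    B1 = add(multiply(f1, r2), multiply(quot, e))                 # f_1^{r''} (alpha_2/f_2)^e
    B2 = add(add(multiply(Y, m2), multiply(G1, r2)),
             multiply(C, e))                                      # Y^{m''} g^{r''} C^e
    return e == challenge(identity, [C, f1, f2, A1, A2, B1, B2])
```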

5 Conclusion

With this construction, we propose a new kind of scheme that lets us derive testable, privacy-preserving fingerprints from arbitrary messages. This gives a solution to our initial motivation of organ donation, where the requirement of encrypting low-entropy messages in such a way that they can be publicly tested against each other seemed to imply highly interactive protocols. In contrast, our construction allows plaintext-equality tests between fingerprints to be performed fully offline, while only their generation requires an interactive process, to guarantee some level of confidentiality despite the low min-entropy.

We hope that this solution will prove useful in practical applications and allow cryptography to be used in a wider range of situations. It might also be feasible to design systems that require fewer interactions, or rely on more mainstream assumptions.