Abstract
The Snowden revelations kickstarted a community-wide effort to develop cryptographic tools against mass surveillance. In this work, we propose to add another primitive to that toolbox: Fail-Stop Signatures (FSS) [49]. FSS are digital signatures enhanced with a forgery-detection mechanism that can protect a computationally bounded signer from more powerful attackers. Despite the fascinating concept, research in this area stalled after the ’90s. However, the ongoing transition to post-quantum cryptography, with its hiccups due to the novelty of underlying assumptions, has become the perfect use case for FSS. This paper aims to reboot research on FSS with practical use in mind: Our framework for FSS includes “fine-grained” security definitions (that assume a powerful, but bounded adversary, e.g., one that can break 128 bits of security, but not 256 bits). As an application, we show new FSS constructions for the post-quantum setting. We show that FSS are equivalent to standard, provably secure digital signatures that do not require rewinding or programming random oracles, and that this implies lattice-based FSS. Our main construction is an FSS version of \(\textsf{SPHINCS}^+\), which required building FSS versions of all its building blocks: \(\textsf{WOTS}^+\), \(\textsf{XMSS}\), and \(\textsf{FORS}\). In the process, we identify and provide generic solutions for two fundamental issues arising when deriving a large number of private keys from a single seed, and when building FSS for Hash-and-Sign-based signatures.
1 Introduction
At Asiacrypt 2015, Phillip Rogaway gave a thought-provoking IACR Distinguished Lecture titled “The Moral Character of Cryptographic Work” [37]. In light of the Snowden revelations, Rogaway called for a “community-wide effort to develop more effective means to resist mass surveillance”. Threat actors such as nation-states have many possible approaches when trying to build such mass surveillance capabilities. Of these, the most sought-after is arguably discovering and using a secret cryptographic attack that is not publicly known. More specifically, the ability to forge digital signatures, and thus subvert trust mechanisms such as Public Key Infrastructures (PKIs), can lead to devastating results. In this work, we explore the use of Fail-Stop Signatures (FSS) [49] to resist such a mass surveillance attempt. By using FSS, honest signers can prove that a powerful adversary has forged a signature. In fact, the signer can prove to the world that a previously unknown cryptographic attack has indeed been exploited, and provably pinpoint the security assumption that was broken. Thanks to this proof, schemes based on the broken security assumption can be promptly deprecated and replaced with secure ones.
Digital Signatures. In recent years, digital signatures have received significant attention from the academic community due to the possible threat of quantum computers. Today, all widely deployed digital signature schemes are vulnerable to quantum attacks. This motivates research into new, post-quantum secure signature schemes (e.g., the NIST post-quantum standardization efforts [2, 32]). Researchers are trying to develop new schemes and gain a better understanding of both their classical and post-quantum security. This leads to a fast-paced research cycle of proposed schemes and attacks that either completely break the underlying assumptions [8], or show that larger security parameters are needed [30]. Adding to the uncertainty is the fact that the academic community might not be aware of advances in both quantum computers and cryptanalysis achieved by nation-states and kept secret to be weaponized (e.g., the exploitation of MD5 collisions for Flame [51], on which we elaborate later in the section). This leads to the following natural question:
Can we build a mechanism to expose a previously unknown attack on a digital signature scheme that has been exploited in practice?
Although such a mechanism should not replace ongoing research on the security of the different schemes, it can serve as a “canary in the coal mine”, and alert us if a scheme is practically broken and should be deprecated immediately. This can also act as a major deterrent against wide usage of exploits: such attacks are usually very costly to both develop and implement, so it would be a major loss (for the attacker) if one is detected and the affected scheme deprecated.
“Believably Hard” Cryptographic Assumptions. In cryptography, all practical (and most impractical) schemes are built based on one or more security assumptions. The trust placed in security assumptions is based on the fact that the cryptographic community has not yet been able to break them. However, there is no guarantee that these assumptions will remain secure in the future, as numerous schemes have been broken over time (e.g., the recent key-recovery attacks on the Rainbow signature scheme [8] and SIDH [11, 28]). The case of security assumptions is no different, as seen with SHA-1, whose first theoretical attack appeared in 2005 [50] and became practical in 2017 [41]. Additionally, the advent of quantum computing presents a potential scenario in which many security assumptions (e.g., factoring [40]) will no longer hold. Finally, expecting new attacks to be published is not a foolproof method, as there could be parties who choose to keep their exploits private (e.g., nation-states).
Furthermore, in modern cryptography the security of a scheme is typically based on the assumption that a chosen security parameter is strong enough. History shows that it is crucial to consider the possibility that the security parameter may not be sufficient in practice: in 1986, Fiat and Shamir assumed that a 512-bit factoring challenge would be hard enough for virtually any application, while in 2020 an 829-bit factoring challenge was broken [10, 15]. This raises the question of how to determine whether the chosen security parameter is adequate in the face of ever-evolving attack methods.
The implications of a broken assumption can be severe, as seen in the Logjam attack, where parameters previously deemed strong enough (512-bit) were found to be insecure due to advancements in computing power [1]. The attack broke a version of Diffie-Hellman that at the time was used by most instantiations of the TLS protocol, thus requiring an immediate, extensive update of the protocol [1]. As we cannot get rid of cryptographic assumptions, it is fair to wonder which mitigation measures can be put in place to reduce the impact of a (possibly secret) exploit, and how much they would cost. The ideal solution would be a “magic box” that could alert us when a security assumption has been broken. Such a mechanism might require compromises, e.g., in terms of signature length. Moreover, it might not be a one-size-fits-all kind of countermeasure, but would require a clearly defined adversarial scenario (e.g., assuming that some assumptions are harder to break than others). In the context of digital signatures, it would be enough if one could detect forged signatures, and convince a verifier that a contested signature is in fact a forgery. Thus, we ask:
Given a digital signature secure under some cryptographic assumption, is there a way to prove that a (maliciously generated) signature could only have been generated by an entity that has broken the assumption?
Fail-Stop Signatures achieve exactly this.
Fail-Stop Signatures. A Fail-Stop Signature (FSS) is a signature scheme that incorporates this kind of “canary in the coal mine” mechanism. It allows a computationally bounded signer to prove whether an unbounded adversary managed to forge signatures [49]. Hence the origin of “Fail-Stop”: in case the signature scheme fails (i.e., there exists an adversary who can break the scheme), the honest signer can produce a publicly verifiable proof that the scheme is insecure, signaling to the world to stop using it. Surprisingly, only a handful of works on the topic exist [5, 14, 38, 47, 48], which do not go much further than laying the foundations. The original model [33, 35] includes a (potentially malicious) authority that interacts with the signer during key generation. The rest of the protocol is essentially a standard signature enhanced with a “forgery detection” procedure. Security enhances standard unforgeability (which holds even against a malicious authority) with two additional properties: security for the signer and security for the recipient. Security for the signer guarantees that if an unbounded attacker can generate a valid forgery, then the signer can generate a publicly verifiable “proof of forgery” showing that the hardness assumption underlying the scheme’s security has been broken. Security for the recipient prevents a malicious signer from signing a message and subsequently disowning it by producing a proof of non-authorship. We note that if the adversary can break the scheme but does nothing actively, nothing can be proven (but nothing is gained by the adversary either). However, one forgery is enough to prove to the world that the scheme is broken. Thus we believe that the “forgery detection” procedure may deter wide exploitation of unknown vulnerabilities.
Can We Trust the Authority? FSS for the Real World. Parameters of real-life implementations of cryptographic protocols have to be set by an authority (e.g., NIST). This implies a certain level of trust: even when choosing the hash function, there are cases (e.g., SNARKs) that require different hashes [6] than the tried and tested SHA-256. The role of such a party is usually limited, and in many cases one assumes it exists without even formally modeling it. As our work focuses on real-world uses of FSS, we adapt the FSS framework to explicitly include this and similar “common practices”. In particular, we require the key generation to be non-interactive, and we assume that the authority producing the common reference string for the proofs of forgery is trusted. One could argue that the latter contradicts the goal of protecting against powerful adversaries: why would we trust, e.g., a standardization body, if we do not trust nation-states? The answer is that the contribution of the trusted authority is limited to the setup/key generation phase, and can in fact be distributed, e.g., using MPC. Non-interactive key generation, when coupled with a trusted setup, eliminates the need for real-time communication between parties, thus eradicating the possibility of attackers exploiting the communication channel, e.g., with network attacks, timing attacks, and other forms of interception. Finally, while in the original model the adversary is computationally unbounded, we allow for fine-grained security definitions: we assume that the adversary might be more powerful than the signer, but will still be computationally bounded. For example, the adversary might be powerful enough to break 128 bits of security, but not 256 bits of security. All of these framework changes allow us to construct practical post-quantum FSS.
Can Forgeries Be Caught by an Honest Signer in the Real World? Checking signature logs for possible forgeries is already a common approach in cyberattack forensics. For example, in the notorious Flame attack [51], signatures on Microsoft software were forged by exploiting collisions in MD5. The forged signatures were recovered from an analysis of the malware and compared to the benign ones. As Microsoft logs all such signatures (as is required for our log-based solution for the FSS augmentation of Hash-and-Sign schemes described in Sect. 6), it was possible to discover that the attacker did not compromise Microsoft, but that an advanced cryptographic attack against MD5 was exploited in the wild. This is in contrast, for example, to the DigiNotar attack in 2011, where an attacker with internal access to the system generated fraudulent certificates. The result of attributing the forged signatures to an advanced cryptographic attack was an acceleration of the MD5 deprecation process. FSS can allow similar attribution in the future in case the signature scheme itself is broken. Our proposed application to the case of post-quantum schemes is particularly timely [13]. We cannot be sure that the next breakthrough will be published and not kept secret for exploitation by nation-state actors. Having a fail-stop mechanism allows us to detect if such an attack has been exploited in the wild.
Related Works. The first attempt to deal with accountability in digital signature schemes was in the electronic cash protocol of Chaum, Fiat and Naor, which allowed the bank to prove double-spending of coins [12]. Later, Pfitzmann and Waidner suggested the notion of fail-stop signatures [33, 35, 49]. Several constructions of FSS exist in the literature, based on various assumptions such as the factoring assumption [29, 44, 52], the RSA assumption [43, 46], and a generic construction based on one-way functions [14]. A good overview can be found in [33]. There are works on making the scheme more efficient in terms of the length of a single signature [39, 42, 45, 48], and on how to efficiently compress many FSS signatures [5]. Lower bounds on the length of public keys and signatures appear in [47].
1.1 Our Results
The aim of this work is to lay the foundations for future research on practical FSS. Hence, our first contribution is a restriction of the FSS framework [33] that is conducive to practical schemes. The key differences are a non-interactive key-generation protocol, fine-grained security definitions, and that the authority is now trusted. All come from observing how these kinds of protocols are implemented in practice. For completeness, we also recap and extend existing results on the minimal assumptions needed to construct FSS, and we prove the equivalence of FSS with (a subset of) digital signatures (due to space constraints, the proofs are deferred to the full version [9]). The latter extends a result by Pedersen and Pfitzmann [33], who only proved that FSS imply signatures. Finally, we show two FSS instantiations from post-quantum computational assumptions (from lattices and from hash functions). The first is a theoretical lattice-based FSS construction that follows from our equivalence result (deferred to the full version [9]). The second is a practical FSS augmentation of \(\textsf{SPHINCS}^+\), a hash-based signature chosen for standardization by NIST. As stepping stones towards the latter, we include three independent FSS variants of three other signatures, \(\textsf{WOTS}^+\), \(\textsf{FORS}\), and \(\textsf{XMSS}\), which are the building blocks of \(\textsf{SPHINCS}^+\). In doing so we highlight and overcome some difficulties inherent in adding a fail-stop mechanism to any Hash-and-Sign signature. We expect our approach to be applicable to a wider range of signatures.
Solidifying the Foundations of FSS. The original FSS definition seems to imply a non-interactive key generation. Accordingly, all known non-black-box constructions of FSS do not require interaction between the signer and a trusted authority. However, the only known black-box construction of FSS from one-way functions relies on statistically hiding commitments [14, 19], which inherently require multiple rounds of interaction between the signer and the authority [18]. We propose a classification of FSS schemes into three classes, depending on the level of participation of the authority in the key generation, and review the minimal assumptions needed for each of the three types (see Table 2). Details are deferred to the full version [9] due to length constraints. While constructing FSS from OWF remains a non-trivial open problem, we make a first step in that direction by showing an equivalence result between FSS and digital signatures under some constraints. When an FSS is constructed from a signature, the security reduction of the signature is turned into a proof-of-forgery algorithm, which can be done only with some restrictions on the security reduction. For example, the process of extracting a witness for the (computationally hard) language should not require performing actions that are not possible in the real world, such as rewinding the adversary or programming random oracles. Moreover, the secret key should have high entropy even when conditioned on the signature (e.g., many private keys can generate the same signature). While the lattice-based signature satisfies these requirements, \(\textsf{SPHINCS}^+\) does not (e.g., the hash is not compressing; thus, an unbounded adversary would be able to brute-force its input and recover the secret key). Our new compiler from a signature to an FSS allows us to build two post-quantum FSS from lattices in the standard model.
As these schemes are the least practical, we defer them to the full version [9], and focus on our main result: \(\mathsf {FSS.SPHINCS}\).
Augmenting \(\boldsymbol{\textsf{SPHINCS}}^+\) to an FSS. Our main contribution is augmenting \(\textsf{SPHINCS}^+\) to an FSS. \(\textsf{SPHINCS}^+\) [7] is a stateless hash-based signature scheme, and the only post-quantum signature standardized by NIST [22] that is not based on lattices. Adding a fail-stop mechanism to \(\textsf{SPHINCS}^+\) [7] turned out to be quite complicated, and yielded techniques that will have broader impact on future instantiations of FSS. In fact, \(\textsf{SPHINCS}^+\) has three main components: \(\textsf{WOTS}^+\) (a one-time signature), \(\textsf{XMSS}\) (i.e., multiple \(\textsf{WOTS}^+\) instances compressed with a Merkle tree), and \(\textsf{FORS}\), a “few-time” signature scheme based on binary trees similar to \(\textsf{XMSS}\). As the details of the construction are quite intricate, we opt for building the FSS variants of all three of them, and then show how to combine them to obtain \(\mathsf {FSS.SPHINCS}\). In doing so, we identify (and propose mitigations for) two interesting issues that arise when building FSS in the presence of pseudorandom functions (PRFs) and of the Hash-and-Sign paradigm. The result is a very specific fine-grained security model, where we increase the size of the security parameters only for specific primitives (such as PRFs), while keeping the security parameters of other primitives (such as hash functions) the same as in the original scheme. The security proof relies on the reductions for \(\textsf{SPHINCS}^+\) included in the NIST specification [3]. In terms of efficiency, the main impact of adding a fail-stop mechanism is on the signature size, which increases additively by a term linear in a constant c (as a direct result of the fail-stop mechanism). However, Table 1 shows that augmenting to \(\mathsf {FSS.SPHINCS}\) still results in a smaller signature compared to simply using \(\textsf{SPHINCS}^+\) with a higher security level.
Indeed, consider \(\mathsf {FSS.SPHINCS}\) with \(\lambda _r=128\) (that is, guaranteeing 128 bits of security for standard unforgeability and security for the recipient) and \(\lambda _s=256\) (that is, guaranteeing 256 bits of security for the signer). Table 1 shows the size comparison with \(\textsf{SPHINCS}^+\) for the two security levels 128 and 256. \(\mathsf {FSS.SPHINCS}\) provides a significantly shorter signature, even in its less efficient variant, while the signature size of \(\textsf{SPHINCS}^+\) increases by a factor of 4 when going from 128-bit security to 256-bit security. Further details on the computations can be found in Sect. 8.1.
Hash-and-Sign Paradigm. In most practical digital signatures, signing requires first hashing the message to a fixed length, and then signing the hash value (e.g., RSA, DSA, ECDSA, and \(\textsf{SPHINCS}^+\)). Unfortunately, naïvely augmenting such signatures with a fail-stop mechanism yields a vulnerability: an unbounded adversary can find a collision on the hash, that is, two messages m and \(m'\) such that \(h(m)=h(m')\). In this case, the adversary can query a signature \(\sigma \) on m and use it to authenticate \(m'\). The honest signer cannot generate a proof that \((m',\sigma )\) is a forgery, because the adversary did not break the scheme, but the hash function. Section 6 illustrates two workarounds: keeping a log, or parallel-signing.
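The attack structure can be sketched in a few lines of Python. This is an illustration only: `sign_digest` and `verify_digest` are hypothetical stand-ins for any underlying signature on fixed-length digests.

```python
import hashlib

def hash_and_sign(sign_digest, msg: bytes):
    """Hash-and-Sign: sign a fixed-length digest of an arbitrary-length message."""
    return sign_digest(hashlib.sha256(msg).digest())

def verify_hash_and_sign(verify_digest, msg: bytes, sig) -> bool:
    return verify_digest(hashlib.sha256(msg).digest(), sig)

# If sha256(m') == sha256(m), a signature queried on m verifies for m' as
# well -- and the signer holds no collision to present as a proof of forgery.
```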
1.2 Open Problems and Conclusions
Our systematic study of the notions of FSS lays the foundations to reopen an exciting line of research. In fact, several natural questions stem from our work. From a more theoretical point of view, an exciting direction is to investigate whether OWF are enough to construct all types of FSS. Regarding applications, the field is even more open. Can we generalize our construction to a generic compiler for hash-based signatures? Can we augment Schnorr signatures to an FSS? Can we extend more NIST PQC signature candidates with a fail-stop mechanism (e.g., extend our approach to lattice-based FSS to Falcon [36])? Can we better handle the Hash-and-Sign issue? Finally, it is natural to ask whether \(\mathsf {FSS.SPHINCS}\) can be proved secure in a weaker fine-grained security model, and whether fine-grained assumptions are inherently necessary for hash-based FSS.
2 Technical Overview
We start by explaining the definition of FSS through a toy example, and then move to \(\mathsf {FSS.WOTS}, \mathsf {FSS.XMSS}, \mathsf {FSS.FORS} \), and \(\mathsf {FSS.SPHINCS} \).
2.1 WarmUp: A Toy Example
The intuition behind FSS is that any forgery generated without knowing the secret key \(\textsf{sk}\) contains information that could not have been generated by a polynomial-time machine (i.e., the signer), but that can be extracted using \(\textsf{sk}\). This information constitutes the “proof of forgery”. Thus, as long as the public key \(\textsf{pk}\) does not correspond to a single \(\textsf{sk}\), but allows for multiple valid \(\textsf{sk}\) ’s, even an unbounded adversary has to generate forgeries containing such information. To give an intuition behind our choice of practical model, we start from a toy example: augmenting the Lamport signature [25] on 1-bit messages to an FSS. The secret key is composed of two random 128-bit strings \(\textsf{sk} =(r_0,r_1)\), and the public key consists of their hashes \(\textsf{pk} =(h(r_0),h(r_1))\). A signature on \(b\in \{0,1\}\) is simply the string \(r_b\). Security relies on the hardness of finding collisions for the underlying hash function \(h: \{0,1\}^{128} \rightarrow \{0,1\}^{128}\). Observe that if \(h(r_b)\) has only one preimage, an unbounded adversary can recover the private key \(r_b\) and generate the same signature as the honest signer, so no proof of forgery can be generated. This leads us to a necessary condition for FSS: the secret key must have enough entropy [47]. To augment the Lamport signature to an FSS, one has to change the assumption on the hash function, which is now required to be a compressing collision-resistant hash function \(h:\{0,1\}^{128+c} \rightarrow \{0,1\}^{128}\), where c is such that \(\Pr [|h^{-1}(h(r))| > 1]\) is high [23]. In this case an unbounded adversary cannot tell which of the possible preimages of the public keys \((h(r_0),h(r_1))\) is the secret key. Thus, it cannot do better than guessing. In case of a forgery, the honest signer can present the two signatures, the real one \(r_b\) and the forgery \(r' \in h^{-1}(h(r_b))\), as a proof of forgery: a collision for h.
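The role of the compression factor c can be checked empirically at a toy scale. The sketch below (our illustration, with toy parameters: 8-bit inputs, 4-bit outputs, i.e., c = 4) counts how many inputs share their digest with another input; for such inputs, even an unbounded adversary can only guess among the candidate preimages.

```python
import hashlib
from collections import Counter

def toy_h(x: int) -> int:
    # Toy compressing hash: 8-bit input -> 4-bit output (c = 4).
    return hashlib.sha256(bytes([x])).digest()[0] % 16

# How many inputs are in each digest's preimage set?
counts = Counter(toy_h(x) for x in range(256))

# Inputs whose digest has more than one preimage: for these, the secret
# key cannot be pinned down even by brute force.
ambiguous = sum(1 for x in range(256) if counts[toy_h(x)] > 1)
```

Since at most 16 of the 256 inputs can sit alone in their digest's preimage set, at least 240 inputs are ambiguous; with a larger c the singleton fraction vanishes exponentially.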
Accordingly, security for the signer follows from compression: if there exist at least two preimages of \(h(r_b)\), even an unbounded adversary cannot find \(r_b\) with probability greater than 1/2. Security for the recipient follows from (computational) collision-resistance: a probabilistic polynomial-time (PPT) malicious signer can find a collision for h only with small probability. In case of widespread attacks, a constant success probability is enough to catch w.h.p. that the scheme is broken (and an attack has occurred).
Observe that every public key of a computationally secure digital signature reveals some information about the secret key. This implies that an unbounded adversary can find the real secret key with some probability \(\epsilon \). Hence we model security for the signer with respect to an “information-theoretic” parameter \(\epsilon \). Finally, the notion of FSS requires a trusted party in the setup phase. Intuitively, the reason is to prevent the signer (or the adversary) from placing a trapdoor in the public parameters. For example, in the Lamport signature, if the signer could choose the public CRH by itself, it could choose a function with a hard-coded collision, and use such a collision to disown its own signature as a forgery. Nevertheless, our model requires putting very little trust in the “trusted authority”, as its contribution is limited to the setup/key generation phase. For example, in \(\mathsf {FSS.SPHINCS}\) the authority only chooses the hash function (SHA-256). In real life, this authority can be NIST (or a comparable standardization body).
2.2 Augmenting \(\textsf{SPHINCS}^+\) to \(\mathsf {FSS.SPHINCS}\)
In the following we give an informal description of \(\textsf{SPHINCS}^+\), and we show the main issues that we encountered when converting it (and its building blocks) to an FSS. Throughout this work, we assume the reader is familiar with \(\textsf{SPHINCS}^+\); for more information we refer to the full version [9].
Overview of \(\textsf{SPHINCS}^+\). \(\textsf{SPHINCS}^+\) is a hash-based signature obtained as an interesting mixture of Goldreich’s binary authentication tree of one-time signatures and Merkle signatures [16, 31]. Its core structure is a hypertree, that is, a modified Merkle tree where every leaf can be extended into a tree itself, allowing one to sign a very large number of messages with short signing keys. The “glue” between the base tree and the next tree layer is a signature: each leaf of the base tree is in fact a public key of a one-time signature (OTS) called \(\textsf{WOTS}^+\). To add another tree, one can generate fresh \(\textsf{WOTS}^+\) public keys, use them as the leaves of the new tree, and sign its root with the \(\textsf{WOTS}^+\) instance contained in the leaf of the base tree. The signature resulting from extending the one-time signature \(\textsf{WOTS}^+\) to a multiple-time signature through a Merkle tree is called \(\textsf{XMSS}\). However, \(\textsf{SPHINCS}^+\) is not just a hypertree of \(\textsf{XMSS}\) instances: for efficiency reasons, the very last of the trees in the hypertree is connected to a similar tree-based few-time signature called \(\textsf{FORS}\), which is used to actually sign the message digest.
The signing protocol is of the Hash-and-Sign kind: a message \(\textsf{msg}\) is first hashed to obtain a leaf index \(\textsf{idx}\) and a message digest \(\textsf{MD}\). Then, the signer generates only the trees on the path from the root \(\textsf{pk} \) to the leaf \(\textsf{idx}\). The \(\textsf{WOTS}^+\) \(\textsf{pk}\) contained in the last leaf \(\textsf{idx}\) of the path is used to sign a \(\textsf{FORS}\) \(\textsf{pk}\), which is in turn used to sign the message digest \(\textsf{MD}\). To recap, a signature on \(\textsf{msg}\) contains the authentication path from the root \(\textsf{pk} \) to the leaf \(\textsf{idx}\) (including the \(\textsf{WOTS}^+\) signatures that connect the tree layers), the \(\textsf{FORS}\) signature on \(\textsf{MD}\), and the randomness used to generate the message digest.
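The authentication-path mechanics can be sketched as follows. This is a generic Merkle tree over a power-of-two number of leaves, not the exact \(\textsf{SPHINCS}^+\) construction (which uses tweakable hash functions); the function names are ours.

```python
import hashlib

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(left + right).digest()

def merkle_root(leaves):
    """Root of a Merkle tree over a power-of-two list of leaves."""
    level = list(leaves)
    while len(level) > 1:
        level = [node_hash(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def auth_path(leaves, idx):
    """Sibling nodes needed to recompute the root from leaves[idx]."""
    path, level = [], list(leaves)
    while len(level) > 1:
        path.append(level[idx ^ 1])          # sibling at the current level
        level = [node_hash(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return path

def verify_path(leaf, idx, path, root):
    node = leaf
    for sib in path:
        node = node_hash(node, sib) if idx % 2 == 0 else node_hash(sib, node)
        idx //= 2
    return node == root
```

The signature only needs the log-size `path`, not the whole tree, which is why revealing one leaf (a \(\textsf{WOTS}^+\) public key) stays cheap.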
It remains to explain how \(\textsf{WOTS}^+\) and \(\textsf{FORS}\) work. \(\textsf{FORS}\) utilizes k trees of depth d, where the leaves in each tree are hashes of random strings (the \(\textsf{sk}\) ’s). The roots of the k trees form the public key. To sign a message \(\textsf{msg} ^*\), the message is split into k blocks, each signed individually. The ith block is treated as a leaf index \(\textsf{idx} _i \in [0,2^d-1]\). Signing the ith block requires revealing the authentication path from the root of the ith tree to its \(\textsf{idx} _i\)th leaf, and a preimage of that leaf, that is, one of the secret keys. \(\textsf{WOTS}^+\) is a one-time signature that relies on \(\ell \) hash chains. The starting point of each chain is a block of the secret key (a random bit string), and each subsequent chain element is obtained by hashing the previous one. The public key consists of the outputs of the last hash evaluations. Signing a message implies revealing an intermediate value in each chain, where the level depends on the message. To avoid easy forgeries, a signature is generated on both the message and its checksum. Verification requires iterating the chaining function on the signature until one obtains the public key. Observe that \(\textsf{WOTS}^+\) is deterministic: in \(\textsf{SPHINCS}^+\) this ensures the uniqueness of the entire hypertree structure given the first level. This implies that the key generation is efficient, as the signer only has to generate the first level of the hypertree.
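The chain mechanics for a single \(\textsf{WOTS}^+\) block can be sketched as follows. This is a simplification under our own naming: real \(\textsf{WOTS}^+\) uses tweakable hashes and covers all blocks plus the checksum, both omitted here.

```python
import hashlib

W = 16  # Winternitz parameter: length of each hash chain (illustrative)

def chain(x: bytes, steps: int) -> bytes:
    """Apply the chaining hash `steps` times."""
    for _ in range(steps):
        x = hashlib.sha256(x).digest()
    return x

def wots_keygen_block(sk_block: bytes) -> bytes:
    return chain(sk_block, W)                   # public key = end of the chain

def wots_sign_block(sk_block: bytes, digit: int) -> bytes:
    return chain(sk_block, digit)               # reveal an intermediate value

def wots_verify_block(pk_block: bytes, digit: int, sig: bytes) -> bool:
    return chain(sig, W - digit) == pk_block    # iterate on to the chain's end
```

Without the checksum, anyone could turn a signature on digit i into one on digit i + 1 by hashing once more; the checksum chains make that trade-off impossible.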
Before we describe our challenges in the construction, we state our result.
Theorem 1 (Informal).

1. Assuming \(\textsf{SPHINCS}^+\) is unforgeable under adaptive CMA, and \(\textsf{PRF}\), \(\textsf{PRF} _\textsf{msg} \) are secure PRFs, then \(\mathsf {FSS.SPHINCS}\) is unforgeable under adaptive CMA.
2. Assuming \(\textsf{SPHINCS}^+\) is unforgeable under adaptive CMA, and \(\textsf{PRF}\), \(\textsf{PRF} _\textsf{msg} \) are secure PRFs, then \(\mathsf {FSS.SPHINCS}\) is secure for the signer against an adversary \(\mathcal {A} \) with running time at most \(2^{c_s\lambda _s/2}\) (in the QROM).
3. Assuming \(\textsf{SPHINCS}^+\) is unforgeable under adaptive CMA, then \(\mathsf {FSS.SPHINCS}\) is secure for the recipient.
Hiding the sk: Compressing Hash Chains. Adding a fail-stop mechanism to \(\textsf{WOTS}^+\) can be done with an approach similar to the one used by Kiktenko et al. [24]. In that work the authors identify the possibility of producing a proof of forgery in some special cases, in particular, as long as the adversary cannot guess the correct preimage within the set of possible ones. Intuitively, \(\textsf{WOTS}^+\) is based on a chaining function \(c^{i}\) that applies a hash function i times to a (randomly chosen, secret) input x. If the adversary is not able to correctly guess one of the hidden values in the chain, a forged signature implies that the adversary has found a collision somewhere in the chain, which can be easily recovered by an honest signer and presented as a proof of forgery. For this to happen, the chaining function has to be compressing (so that points can have multiple preimages), and behave like a random oracle (so that the probability that a random evaluation of the function has many preimages is high). The latter assumption is not new, as the current instantiation of \(\textsf{SPHINCS}^+\) implicitly relies on it too. However, the security analysis in [24] is not complete: what is proved is that w.h.p. the adversary cannot guess the preimage of one of the chain’s elements. This is not enough in a scenario where the adversary is unbounded: as long as there is even one element in the chain with exactly one preimage, the adversary can find it (through brute force) and produce the exact same signature as the signer would have. Moreover, [24] lacks the FSS framework, and as a result does not introduce a trusted authority, nor prove security for the recipient, meaning that their scheme does not include any guarantee in case the signer is dishonest. Thus our analysis extends and improves [24].
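The signer's extraction of a collision from a forged chain value can be sketched generically. The hash is left as a parameter so the merge logic is visible; in the real scheme it would be the (compressing) chaining hash, and the function name is ours.

```python
import hashlib

def sha_chain_step(x: bytes) -> bytes:
    # Stand-in for one application of the chaining hash.
    return hashlib.sha256(x).digest()

def extract_collision(H, honest: bytes, forged: bytes, steps: int):
    """Walk two chain values known to reach the same public key within
    `steps` applications of H. The first point where distinct inputs hash to
    equal outputs is a collision -- the proof of forgery. If the two values
    were already equal (the adversary guessed the preimage), return None."""
    a, b = honest, forged
    for _ in range(steps):
        if a != b and H(a) == H(b):
            return (a, b)
        a, b = H(a), H(b)
    return None
```

This is exactly why a chain element with a unique preimage is fatal: an unbounded adversary can then make `forged` equal to `honest`, and no collision exists to extract.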
Dealing with PRFs: Fine-Grained Assumption. When implementing a signature it is common to generate the random strings that compose the secret key by evaluating a PRF on a counter. This way the signer has to store only the seed of the PRF, which is much shorter than the collection of secret random strings. This is done in \(\textsf{XMSS}\) and \(\textsf{FORS}\) too, as they require an exponential number of secret keys. Observe that the hypertree is essentially public, as it is completely revealed after a number of signatures have been generated. Thus, information-theoretically the tree leaks the secret \(\textsf{Seed} \): an unbounded adversary could recover it completely and break our FSS requirements. The trivial solution to this challenge is to increase the security parameters. However, even this solution is not always “trivial”: increasing the security parameters of a hash function requires years of cryptanalysis. In addition, this would significantly increase the length of the signature, which is a heavy price to pay. Ideally, an elegant solution would achieve succinct secret keys and signatures. Unfortunately, succinct secret keys cannot yield secure FSS: van Heijst et al. [47] show that the size of the secret key has to be at least linear in the number of messages to be signed. To work around this lower bound and achieve succinctness for both the secret key and the signatures, we extend the original FSS framework to allow for a more “fine-grained” adversarial model. We assume that the adversary is much more powerful than the signer, to the point that it can forge the signature, but it is still somewhat bounded: despite its ability to forge, some computational tasks remain out of its reach. In particular, we assume that, if we increase only the size of the seed \(\textsf{Seed}\) of the PRF, the adversary cannot break its security. In practice, expanding the size of the seed \(\textsf{Seed}\) does not affect the length of the signature.
To motivate our assumption, observe that while the size of the PRF key increases only linearly, the running time of a generic brute-force attack increases exponentially, even for quantum attacks such as Grover's algorithm [17]. Thus, in our \(\mathsf {FSS.XMSS}\) and \(\mathsf {FSS.FORS}\) constructions, we expand the seed of the PRF by a factor \(c_s\), and we assume that an adversary that successfully returns a forgery is still not able to break the pseudorandomness of the PRF. There is still only one possible PRF seed \(\textsf{Seed}\), but enumerating over all possible values of \(\textsf{Seed}\) is a much harder task for the powerful but bounded adversary. A more detailed explanation can be found in Sect. 5.1. Observe that we do not need this trick to hide the preimages of the nodes of the Merkle trees. By construction, \(\textsf{SPHINCS}^+\) does not hide the (hyper)tree nodes, as the only undetectable way to break the scheme is to find all the correct preimages both for the tree and for the chains. Thus, hiding the preimages of the first element of the chains (and of the leaves of \(\textsf{FORS}\)) is enough to prevent an undetectable forgery, even if the adversary can reconstruct the whole tree.
The Hash-and-Sign Workaround. The final step to construct \(\mathsf {FSS.SPHINCS}\) (and \(\mathsf {FSS.FORS}\)) requires handling a generic forgery attack that applies to any digital signature based on the Hash-and-Sign paradigm. Recall that in Hash-and-Sign signatures, the message \(\textsf{msg}\) can be of arbitrary length and is first hashed into a fixed-length digest that is then signed. Therefore, instead of targeting the signature itself, an unbounded adversary can find another message \(\textsf{msg} ^*\) such that \(HASH(\textsf{msg} ^*) = HASH(\textsf{msg})\), where \(\textsf{msg}\) is a previously signed message: an honest signature for \(\textsf{msg}\) can then be reused as a signature on \(\textsf{msg} ^*\), and no proof of forgery can be generated. To the best of our knowledge, no previous FSS solution addressed this specific problem. We propose two possible solutions. The first one is to rely on a log file storing all the signed messages, so that the signer can produce the collision \((\textsf{msg},\textsf{msg} ^*)\) for HASH as a proof of forgery in case such an attack is performed. We remark that adding the log file does not make the signature stateful, as the log does not have to be secret, and it is only used when generating a proof of forgery. As such, the loss/publication of the state does not impact the original security of the signature (the standard unforgeability) against PPT adversaries. In addition, when several agents/servers want to sign with the same secret key using a stateful signature, they usually need to keep syncing their state for security to hold. This is not needed in our case, where the signers just have to combine their logs when a proof of forgery has to be generated. The second solution is based on a fine-grained assumption on the adversary, and uses multiple signatures on multiple unique hashes of the message to prevent the possibility of generating a collision. More details on the attack can be found in Sect. 6.
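The log-file solution can be sketched in a few lines of Python; SHA-256 and the in-memory list below are toy stand-ins for the hash HASH and the public log, not the paper's concrete instantiation.

```python
import hashlib

log: list = []                   # public log of all messages ever signed

def digest(msg: bytes) -> bytes:
    return hashlib.sha256(msg).digest()

def sign_and_log(msg: bytes) -> bytes:
    """Append the message to the log, then sign its digest
    (the digest itself stands in for the actual signature here)."""
    log.append(msg)
    return digest(msg)

def collision_proof(forged_msg: bytes):
    """If a 'new' message reuses the digest of a logged one, the pair
    itself is a hash collision, i.e., a proof of forgery."""
    d = digest(forged_msg)
    for m in log:
        if m != forged_msg and digest(m) == d:
            return (m, forged_msg)
    return None
```

Note that the log holds only previously published messages, so leaking it reveals nothing secret, matching the remark above that the scheme does not become stateful in the harmful sense.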
Section 8 explains how to leverage our solutions to transform \(\textsf{SPHINCS}^+\) into \(\mathsf {FSS.SPHINCS}\).
3 Definition of FSS in the Non-interactive Model
A Fail-Stop Signature (FSS) is essentially a standard signature enhanced with a “forgery detection” procedure: a trusted authority computes the public parameters so that, in case of a disputed signature, a PPT signer can convince the authority that generating such a signature from the public parameters required solving a task impossible for a bounded-time machine.
Definition 1
A Fail-Stop signature (FSS) in the non-interactive model consists of six PPT algorithms \((\textsf{GenCh},\textsf{GenKey}, \textsf{Sign}, \textsf{VrfySig }, \textsf{PoF }, \textsf{VrfyPoF })\) such that:

\(\textsf{ch }\leftarrow \textsf{GenCh} (1^{\lambda _r}, 1^{\varepsilon })\): the challenge-generation algorithm takes as input the security parameters \((1^{\lambda _r}, 1^{\varepsilon })\), and outputs a challenge \(\textsf{ch }\).

\((\textsf{sk},\textsf{pk})\leftarrow \textsf{GenKey} (\textsf{ch })\) : the key-generation algorithm takes as input the challenge, and outputs a (secret) signing key \(\textsf{sk}\) and a (public) verification key \(\textsf{pk}\).

\(\sigma \leftarrow \textsf{Sign} _{\textsf{sk}}(m)\) : the signing algorithm takes as input a signing key \(\textsf{sk} \) and a message m from the message space \(\mathcal {M} \), and returns a signature \(\sigma \).

\(b\leftarrow \textsf{VrfySig }_{\textsf{pk}}(m, \sigma )\) : the verification algorithm takes as input a public key \(\textsf{pk} \), and a message-signature pair \((m,\sigma )\), and returns 1 if \(\sigma \) is valid, 0 otherwise.

\(\pi \leftarrow \textsf{PoF }_{\textsf{sk}}((m,\sigma ),\textsf{ch })\) : the proof-of-forgery algorithm takes as input the secret key \(\textsf{sk} \), a message m, a signature \(\sigma \), and a challenge \(\textsf{ch }\). If \(\textsf{VrfySig }_{\textsf{pk}}(m, \sigma )=0\), it aborts (i.e., it returns \(\bot \)). Otherwise, it returns a proof \(\pi \).

\(b \leftarrow \textsf{VrfyPoF }(\textsf{ch }, \pi )\) : the proof of forgery verification algorithm takes as input a challenge \(\textsf{ch }\), and a proof \(\pi \). It outputs 1 if \(\pi \) is valid, and 0 otherwise.
We remark that a trusted party runs the \(\textsf{GenCh} \) algorithm, and the challenge \(\textsf{ch }\) (the output of the \(\textsf{GenCh} \) algorithm) is public (in many cases this challenge consists of the public parameters of the scheme).
Correctness is defined analogously as for digital signatures, so we omit it here.
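To make the six-algorithm syntax concrete, here is a toy one-bit Lamport-style instantiation in Python. It illustrates only the interface of Definition 1; no fail-stop security is claimed for it, and the hash, key sizes, and proof format are illustrative assumptions.

```python
import hashlib, os

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

class ToyFSS:
    """Toy one-bit scheme matching the (GenCh, GenKey, Sign, VrfySig,
    PoF, VrfyPoF) syntax. A forgery that opens a different preimage
    than the signer's would yield a collision under H(ch + .)."""

    @staticmethod
    def gen_ch(lam_r: int = 128) -> bytes:          # GenCh
        return os.urandom(lam_r // 8)               # public challenge

    @staticmethod
    def gen_key(ch: bytes):                         # GenKey
        sk = (os.urandom(32), os.urandom(32))       # one preimage per bit value
        pk = (ch, H(ch + sk[0]), H(ch + sk[1]))
        return sk, pk

    @staticmethod
    def sign(sk, bit: int) -> bytes:                # Sign
        return sk[bit]                              # reveal the chosen preimage

    @staticmethod
    def vrfy_sig(pk, bit: int, sig: bytes) -> bool:  # VrfySig
        return H(pk[0] + sig) == pk[1 + bit]

    @staticmethod
    def pof(sk, pk, bit: int, sig: bytes):          # PoF
        if not ToyFSS.vrfy_sig(pk, bit, sig):
            return None                             # abort on invalid signatures
        if sig == sk[bit]:
            return None                             # own signature: nothing to prove
        return (sk[bit], sig)                       # a collision under H(ch + .)

    @staticmethod
    def vrfy_pof(ch: bytes, proof) -> bool:         # VrfyPoF
        x, y = proof
        return x != y and H(ch + x) == H(ch + y)
```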
At a high level, an FSS is secure if it is secure for the recipient, that is, if it guarantees to a verifier that a signer cannot repudiate its own signature, and secure for the signer, i.e., it ensures that a signer can explain forgeries, even if generated by unbounded adversaries. However, security for the signer is meaningful only if unbounded adversaries are the only threat to the scheme [14]. This seems counterintuitive, as the notion of unforgeability by a PPT adversary is weaker than security for the signer: if a signature can be successfully forged by a PPT adversary, then it can also be forged by an unbounded one. At the same time, security for the signer only guarantees that forgeries can be disputed: without unforgeability it could still happen that every PPT adversary could generate a forgery, thus exposing the signer to the risk of having to constantly generate proofs of forgery. Essentially, unforgeability gives the additional guarantee that the need for a proof of forgery only arises in exceptional cases, i.e., when the corrupted recipient is not a PPT machine, but an unbounded one.
In light of these observations, the security of an FSS has two security parameters: a “computational” one, \(\lambda _r\), that bounds the probability that a PPT adversary breaks security (be it a signer trying to disavow its own signature, or a PPT malicious verifier attempting an impersonation attack), and an “information-theoretic” one, \(\varepsilon \), that bounds the probability that an unbounded adversary can produce a forgery for which no proof of forgery can be generated. To be able to easily integrate our fine-grained assumption on the adversary in the FSS framework, we define security for the signer a bit differently than in the literature: we only require that the success probability of \(\mathcal {A}\) is bounded by some \(\varepsilon \), without requiring it to be negligible.
Definition 2
(Security for the Signer). An FSS \(\varSigma \) is \(\varepsilon \)-secure for the signer for \(0<\varepsilon \le 1\) if, for fixed \(\lambda _r >0\) and for all unbounded adversaries \(\mathcal {A} \), it holds:
where the security experiment is Experiment 1 that returns 1 if the signer fails to provide a valid proof of forgery. This probability is over the random coins of \(\textsf{GenKey} \), \(\mathcal {A} \) and \(\textsf{PoF }\).
Remark 1
Definition 2 only allows a signer to prove that a forgery has occurred. However, in practice one might want to be able to prove that a specific signature was forged, e.g., to avoid liabilities due to a forged signature on a contract. One way to do this is for the signer to reveal the secret key alongside the disputed signature, to show that this specific signature allows generating a proof of forgery from the secret key. As a valid proof of forgery implies there is a successful attack against the signature, publishing the secret key is not an issue for the honest signer, since in any case the scheme must no longer be used. Every secure FSS allows for this; in the rest of the paper we focus on the more basic task of proving that a forgery has occurred.
Definition 3
(Security for the Recipient). An FSS scheme is secure for the recipient iff for all \(\lambda _r > 0\) and all PPT adversaries \(\mathcal {A} \) there exists a negligible function \(\textsf{negl}\) such that:
The probability is computed on the random coins of \(\textsf{GenCh} \), and \(\mathcal {A} \).
Definition 4
(Unforgeability). An FSS scheme \(\varSigma \) is existentially unforgeable under adaptive chosen-message attacks if, for all \(\lambda _r >0\), all \(0< \varepsilon \le 1\), and all PPT adversaries \(\mathcal {A} \), there is a negligible function \(\textsf{negl} \) such that:
where the security experiment is Experiment 2.
Van Heijst et al. [47] identified the following necessary condition for a secure FSS, which we include as it constitutes an easy “rule of thumb” to check whether a candidate FSS is trivially broken. Let \(\textsf{H} \) be the Shannon entropy.
Lemma 1
(Necessary Condition for Security of FSS). Every FSS that satisfies Definition 3 and Definition 2 for \(\varepsilon =\textsf{negl} (\lambda )\) also satisfies the following property:
where Hist is the list of the first N signatures generated by the honest signer (cf. [47, Lemma 1]). If signing is deterministic, the requirement reduces to \(\textsf{H} (\textsf{sk} \mid \textsf{pk}) \ge (N+1)(\min \{\lambda _r,\lambda \}-1)\) (cf. [47, Theorem 5]).
Lemma 1 is information-theoretic, in the sense that it is a necessary condition for security against an unbounded \(\mathcal {A}\). However, in practice it is not always possible to guarantee such a high level of security while maintaining the usability of a primitive. This can be seen for theoretical results too: we can show that FSSs are equivalent to signatures, but only assuming that Eq. (1) holds for the signature too. Thus, we introduce a relaxation of Lemma 1 to allow for a more fine-grained security model. Instead of assuming that \(\mathcal {A}\) is unbounded, we fix a third parameter \(\lambda _s \) and assume that \(\mathcal {A}\) is more powerful than PPT, but still somewhat bounded in \(\lambda _s\): it can break unforgeability for security parameter \(\lambda _r\), but it cannot extract information about the secret key from the public key and signatures if the secret key is large enough. This implies that when dealing with fine-grained assumptions on the adversary, we need to substitute Lemma 1 with the following assumption.
Definition 5
(Fine-Grained Necessary Condition for Security of FSS). Let \(\lambda _r,~\lambda _s,c_s\in \mathbb {N}\), \(0<\varepsilon \le 1\). Let \(\mathcal {A}\) be an adversary that breaks unforgeability of the FSS with security parameter \(\lambda _r\), and consider an FSS that is secure for the recipient and \((\varepsilon +\textsf{negl} (\lambda _s))\)-secure for the signer against \(\mathcal {A}\). For such an FSS there exists a function \(f:\{0,1\}^{c_s\cdot \lambda _r}\times \{0,1\}^{\lambda _r} \rightarrow \{0,1\}^{\lambda _r}\) such that for all \(x\):
where \(\mathcal {O} _{\textsf{sk}}\) is the signing oracle.
The requirement on the security for the signer changes as well, since the adversary now has probability \(\le \textsf{negl} (\lambda _s)\) of recovering the secret key. In Sect. 5.1 we show how to use this threat model in the case of \(\textsf{XMSS}\), \(\textsf{FORS}\), and \(\textsf{SPHINCS}^+\).
Interaction in \(\textsf{GenKey} \) and Minimal Assumptions for FSS. One could define three types of FSS depending on the level of interaction in the key-generation phase: many rounds (type 1), one-time (type 2), or none (type 3). Table 2 summarizes the minimal assumptions required for each of them. The black-box constructions of Type 2 FSS from OWF and of Type 3 FSS from CIH are original contributions of this work, and can be found in the full version [9]. While FSS with (somewhat) interactive \(\textsf{GenKey}\) can be built from one-way functions (OWF), the same cannot be said for FSS with non-interactive \(\textsf{GenKey}\). The question of whether one can build Type 3 FSS black-box from OWF remains, to the best of our knowledge, open. Nevertheless, we shed more light in that direction by showing an equivalence result between FSS and (a subset of) standard digital signatures. The proof is an extension of a result by Pedersen and Pfitzmann [33, Thm 3.1]. We defer all technical details to the full version [9].
4 One-Time FSS (or from \(\textsf{WOTS}^+\) to an FSS)
In this section, we show how to build a one-time hash-based FSS. We first present the high-level idea of \(\textsf{WOTS}^+\), and then show how to augment it to an FSS.
\(\textsf{WOTS}^+\) Structure. \(\textsf{WOTS}^+\) [21] is a hash-based one-time signature built on the Winternitz signature [31]. The latter is preferable to Lamport signatures [25], as it reduces the length of signatures and keys by signing the representation of a message \(m\in \{0,1\}^h\) in base w, for some \(w\in \mathbb {N}\) (\(\textsf{WOTS}^+\) with \(w=2\) is essentially Lamport signatures). The construction relies on a chaining function \(c^k\), that is, a function that applies a second-preimage-resistant, somewhat pseudorandom one-way function f \(w-1\) times to each secret key x:
where f is a fixed public function chosen at random from a family \(F_n:=\{f:\{0,1\}^n \rightarrow \{0,1\}^n\}\). The chaining function c takes as input some randomness too, but we ignore it here for simplicity. The construction yields \(\ell _1\) chains, where \(\ell _1=\lceil h/\log w\rceil \), i.e., one chain per component of the representation of m in base w (denoted by \([m]_w\) from now on). Let \(m_i\) be the \(i^{\text {th}}\) component of \([m]_w\): the \(i^{\text {th}}\) component of the signature would be \(c^{m_i}(\textsf{sk} _i)\) (Fig. 2).
This is not enough to guarantee unforgeability though. For example, by querying a signature on a message m such that \([m]_w = (0,\ldots ,0)\), the adversary gets all the secret keys \((\textsf{sk} _1,\ldots ,\textsf{sk} _{\ell _1})\), thus in this case an adversary can perfectly impersonate the signer. To avoid this, the message digest that is signed includes both the message m and a checksum \(C= \sum _{i=1}^{\ell _1} (w-1-m_i)\). This increases the length of keys and signature by \(\ell _2=\lfloor \log _w(\ell _1(w-1))\rfloor +1\), but now guarantees unforgeability under some special assumptions on f. The algorithms of \(\textsf{WOTS}^+\) are formally described in the full version [9].
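A minimal Python sketch of the base-w encoding and checksum; the parameters \(w=16\) and \(h=32\) (so \(\ell _1=8\)) are toy choices for illustration.

```python
W = 16        # Winternitz parameter w
ELL1 = 8      # ell_1 = ceil(h / log2(w)) for a toy h = 32

def base_w(m: int, length: int) -> list:
    """Base-w digits of m, most significant first."""
    digits = []
    for _ in range(length):
        digits.append(m % W)
        m //= W
    return digits[::-1]

def checksum(digits: list) -> int:
    """C = sum(w - 1 - m_i): lowering any message digit (which would let
    an attacker reuse chain values) raises the checksum, so some
    checksum chain would have to be inverted instead."""
    return sum(W - 1 - d for d in digits)
```

Increasing any message digit decreases the checksum, so no message's digit vector dominates another's: a forger always needs at least one chain value it was never shown.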
Augmenting \(\textsf{WOTS}^+\) to an FSS. Kiktenko et al. [24] observed that, if an adversary \(\mathcal {A}\) is not able to correctly guess one of the hidden values in the chain, a forged signature implies that \(\mathcal {A}\) has found a collision somewhere in the chain. An honest signer could easily recover the collision using its secret key, and use it as a proof that a more powerful machine generated the disputed signature: the signer, being a PPT machine, could not have broken collision resistance. A necessary but not sufficient condition to prevent an unbounded adversary from recovering a preimage in the chain is that an evaluation \(y=c^{k}(\textsf{sk} _i)\) statistically hides \(\textsf{sk} _i\) (cf. Lemma 1), and that multiple choices of \(\textsf{sk} _i\) can correspond to the same y. However, this is not true when \(\textsf{WOTS}^+\) is instantiated with a standard hash function. In [24], the authors propose as a workaround to change the chaining function to a compressing one-way function, and to assume it to be a random oracle^{Footnote 2} (so that the number of preimages is distributed in the same way as for random functions). Inspired by this approach, our \(\mathsf {FSS.WOTS}\) (formally described in Protocol 1) relies on compressing hash functions.
On the Use of Compressing, Tweakable Hash Functions (THF). In \(\mathsf {FSS.WOTS}\), hash chains are built using compressing hash functions, so that even if an adversary recovers a possible \(\textsf{sk} ^*\), there is still a chance that it is not the preimage chosen by the honest signer. This technique increases the size of \(\textsf{sk} \) by a compression factor^{Footnote 3} c, but the size is still linear in w. Modeling the security of this function requires some care. Usually, in the security experiment the adversary has oracle access to the function, thus it can query the evaluation of the function on any input x of its choice. This is not the case in the unforgeability game of \(\textsf{WOTS}^+\) (and, analogously, of \(\mathsf {FSS.WOTS}\)): here the adversary can only choose the position in the chain that will be opened, not its input (which is given by the iterative application of the function on the secret key). The authors of [7] model the index of the chain as a tweak of the function: in the security game the input of the function is uniformly chosen (and different for every chain), while the adversary can query the oracle on tweaks of its choice. To make sure that the public key hides the secret key, one has to assume undetectability of the THF (SMUD security): informally, it means that \(\textbf{Th} (p,t,x)\) is computationally indistinguishable from x, where x is uniformly sampled. Due to this “pseudorandomness”, a successful forgery \(\sigma '\) must contain at least one intermediate value x of a chain whose preimage is not the one computed during key generation.
Security requires second-preimage resistance of the THF (SMPRE security), that is, that the adversary is not able to find a second preimage for a target from the set of its queries Q, given oracle access to \(\textbf{Th} (p,\cdot ,x)\) for random x; and collision resistance (SMTCR security), i.e., that the adversary cannot find a collision for a target from the set of its queries Q, given oracle access to \(\textbf{Th} (p,\cdot ,\cdot )\). Definitions of THF and their security are recalled in the full version [9].
Chaining Function. Let \(w\in \mathbb {N}\) be a base, \(\ell \) the total number of chains, and n the security parameter (for the FSS constructions, \(n=\lambda _r \)). Let
be a family of tweakable hash functions with parameter set \(\mathcal {P}:=\{0,1\}^n\) and tweaks^{Footnote 4} \(\mathcal {T} =\{0,1\}^{256}\). The chaining function of \(\mathsf {FSS.WOTS}\) takes as inputs an iteration counter \(k\in \{0,\ldots ,w-1\}\), a start index \(j\in \{0,\ldots ,w-1\}\), a chain index \(i \in \{1,\dots ,\ell \}\), a message (whose length varies due to the ongoing compression in the chain), and a public parameter \(\textsf{Seed}\), and behaves as follows:
The indices j and k indicate that the chaining function assumes the input x to be the value of the chain after j iterations, and starts by computing \(\textbf{Th} _{j+1}\) on the input x, iterating k times (up to \(\textbf{Th} _{j+k}\)). Tweaks are defined as the output of a (deterministic) encoding function \(T(\textsf{typ},\textsf{adrs}) = T_{a,b}\) that associates a distinct tweak \(T_{a,b}\) with the \(b^{\text {th}}\) function call in the \(a^{\text {th}}\) chain generated with the THF \(\textbf{Th} _b(\textsf{Seed},\cdot ,\cdot )\). The values of \(\textsf{typ}\) and \(\textsf{adrs}\) are deterministically generated to uniquely identify the position of the call to \(\textbf{Th}\) in the \(\textsf{SPHINCS}^+\) structure: \(\textsf{typ}\) takes different values depending on what the THF is used for, and \(\textsf{adrs}\) is the “address” of the point where \(\textbf{Th}\) is called. For the purpose of the proofs, we just need to know that the encoding function T is injective. We refer the reader to [3, Section 2.7.3] for a formal definition.
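The chaining function can be sketched in Python as follows; the SHA-256-based `th` and the integer tweak encoding are illustrative stand-ins for the THF family and the injective encoding T.

```python
import hashlib

def th(seed: bytes, tweak: tuple, x: bytes, out_len: int = 16) -> bytes:
    """Toy tweakable hash Th(seed, T_{a,b}, x): the tweak (a = chain
    index, b = position in chain) is injectively encoded into the input."""
    a, b = tweak
    enc = a.to_bytes(4, "big") + b.to_bytes(4, "big")
    return hashlib.sha256(seed + enc + x).digest()[:out_len]

def chain(x: bytes, i: int, j: int, k: int, seed: bytes) -> bytes:
    """c^{j,k}(x, i, seed): x is taken to be the value of chain i after
    j iterations; apply Th_{j+1}, ..., Th_{j+k}."""
    for b in range(j + 1, j + k + 1):
        x = th(seed, (i, b), x)
    return x
```

The composition property \(c^{j+j',k}=c^{j+j',\,k}\circ c^{j,j'}\) used throughout the proofs holds by construction, since each position b gets its own tweak.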
Due to its use in \(\textsf{SPHINCS}^+\), it is enough to prove non-adaptive security of \(\mathsf {FSS.WOTS}\), both for unforgeability and for security for the signer.
Lemma 2
(Non-Adaptive Unforgeability). If \(\textbf{Th}\) is a family of THF that is \(\textsf {SMUD} \) secure, non-adaptively \(\textsf {SMTCR} \) secure, and \(\textsf {SMPRE} \) secure, then \(\mathsf {FSS.WOTS} \) is unforgeable under non-adaptive CMA.
The proof is similar to [23, Theorem 1], thus we defer it to the full version [9].
On the Adversarial Model. Relying on the FSS framework allows our model of the adversary (and our security analysis) to be more precise than the analysis by Kiktenko et al. [24]. In the FSS framework, only extremely powerful adversaries can forge, as the unforgeability property stops the attempts of PPT ones. Kiktenko et al. [24] implicitly assumed that the adversary cannot, on average, find the right preimage of a point in the chain. However, we assume that an unbounded adversary can enumerate all preimages of a given point: assuming that finding a preimage is a probabilistic algorithm, the adversary can simply repeat it a logarithmic number of times to find all of the preimages. This can be done for all the points in all of the chains. So, if there is even a single point in one chain with only one preimage, the adversary can find it and break the FSS. Thus, we need to make sure that the probability that any point in any chain has only a single preimage is negligible, not just to bound the expected probability of choosing the correct preimage over all of the points. Lemma 3 shows our analysis. Observe that even though the adversary can enumerate all the preimages in the chains, it cannot win with overwhelming probability as long as the output of \(\textbf{Th}\) does not leak information about the preimage used by the signer.
Lemma 3
(implicit in [24, Lemma 1]). Let \(f:\{0,1\}^{n+\delta }\rightarrow \{0,1\}^n\), \(n\gg 1\), \(\delta >0\) be chosen uniformly at random from the set of all functions from \(\{0,1\}^{n+\delta }\) to \(\{0,1\}^n\). Then, the probability that a point y in the image has strictly more than one preimage can be bounded as
The proof is similar to [24], and can be found in the full version [9].
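The counting argument behind Lemma 3 can be checked empirically on a small random-looking function; the sizes (n = 8, δ = 2) and the seeded SHA-256 construction below are toy choices for illustration only.

```python
import hashlib

def f(x: int) -> int:
    """Fixed 'random-looking' compressing function from 10-bit inputs
    to 8-bit outputs (n = 8, delta = 2)."""
    d = hashlib.sha256(b"toy-seed" + x.to_bytes(4, "big")).digest()
    return d[0]

domain = range(1 << 10)                 # 2^(n + delta) = 1024 inputs
counts = {}
for x in domain:
    counts[f(x)] = counts.get(f(x), 0) + 1

# Fraction of inputs that are the SOLE preimage of their image.
# For a truly random function this is heuristically about
# e^(-2^delta) = e^-4, i.e., compression makes such points rare.
unique = sum(1 for x in domain if counts[f(x)] == 1)
frac = unique / len(domain)
```

In the actual scheme, the compression factor c plays the role of δ, and a union bound over all \(\ell (w-1)\) chain points gives the overall failure probability.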
To use Lemma 3 in our analysis we still need the QROM assumption to ensure that the output of a tweakable hash function on a randomly sampled input is indistinguishable from random, even by a powerful adversary.
Theorem 2
(Security for the Signer). If \(\textbf{Th}\) is a family of compressing THF, then there exists a constant value for the compression factor c such that \(\mathsf {FSS.WOTS} \) is \(\varepsilon \)-secure for the signer in the QROM for \(\varepsilon =1/2\).
Proof
Let \(\mathcal {A} \) be an adversary in Experiment 1. Then its success probability is
i.e., the adversary wins if it has guessed correctly all the \(\ell \) preimages in \(\sigma ^*\). As we assume the adversary to be unbounded, if there is any unopened point in any chain that has exactly one preimage, then \(\mathcal {A}\) is always successful (as inverting the hash on one chain is enough to generate a forgery). Lemma 3 combined with the observation that the outputs of \(c^{0,j}(\textsf{sk} _i,i,\textsf{Seed})\) are uniformly random and independent (by the QROM assumption on the construction of the THF from hash functions) yields that this happens with probability
which for \(n=l=128\) and \(w=16\) (the usual choice for \(\textsf{SPHINCS}^+\)) is already less than \(2^{-81}\) for \(c=6\). Combining (2) and (3) yields that \(\mathcal {A}\) loses with probability
that is, the signer can generate a proof of forgery essentially \(50\%\) of the time. The probability of a correct guess by \(\mathcal {A}\) follows by observing that the THF behaves as a QROM, thus it does not leak any information about the input. Given that each evaluation \(\mathcal {A}\) sees has multiple preimages with overwhelming probability, the adversary cannot win with probability much larger than 1/2. Analogously, if c is chosen so that the probability of strictly more than 2 preimages is overwhelming, then the adversary's winning probability becomes \(1/3+\textsf{negl} (\lambda _s)\). \(\square \)
Second Preimage vs. Collisions. Observe that unforgeability requires \(\textsf {SMPRE}\) security, that is, something analogous to second-preimage resistance, but proving a forgery just requires breaking \(\textsf {SMTCR} \) security, a property similar to collision resistance. This is because breaking the latter is an easier task, implied by breaking the former: a forgery breaking \(\textsf {SMPRE}\) security yields in particular a collision, and thus breaks \(\textsf {SMTCR}\) security too, while the converse does not hold.
Theorem 3
(Security for the Recipient). If \(\textbf{Th}\) is a family of adaptively \(\textsf {SMTCR}\) secure THF, then \(\mathsf {FSS.WOTS} \) is secure for the recipient.
Proof
(sketch). We sketch the proof in the following. A formal reduction can be obtained through a series of hybrid games analogously to the proof of Lemma 2.
Assume that there exists a malicious PPT signer \(\mathcal {A} \) that breaks the security for the recipient of \(\mathsf {FSS.WOTS}\) with probability \(\varepsilon \). We show that one can construct a PPT algorithm \(\mathcal {B}\) that breaks the adaptive \(\textsf {SMTCR}\) security of an element of the family \(\textbf{Th} \). At the beginning of the adaptive \(\textsf {SMTCR}\) experiment, \(\mathcal {B} \) receives the public parameter \(\textsf{Seed} \) from the challenger. From this, it computes the rest of \(\textsf{ch }\), and runs \(\mathcal {A} \) on \(\textsf{ch }\). Upon receiving \(\pi ^*=(b_i, j, \textsf{sk} _i, \sigma _i)\) from \(\mathcal {A}\), \(\mathcal {B}\) finds \(k\in [j+1,w-1]\) for which it holds that \(c^{0,b_i+k}(\textsf{sk} _i,i,\textsf{Seed})\ne c^{b_i,k}(\sigma _i,i,\textsf{Seed})\) and \(c^{0,b_i+k+1}(\textsf{sk} _i,i,\textsf{Seed}) = c^{b_i,k+1}(\sigma _i,i,\textsf{Seed})\). Then it computes \(M_1=c^{0,b_i+k}(\textsf{sk} _i,i,\textsf{Seed})\) and \(M_2=c^{b_i,k}(\sigma _i,i,\textsf{Seed})\), the index \(\bar{i}=b_i+k+1\), and the tweak \(T=T_{i,\bar{i}}\), and returns \((M_1,M_2,\bar{i},T)\) to the game. As \(\mathcal {B}\) perfectly simulates the authority, its success probability is equal to the success probability of \(\mathcal {A}\) in breaking security for the recipient. \(\square \)
5 Augmenting \(\textsf{XMSS}\) to an FSS
Informally, \(\textsf{XMSS}\) is a set of N \(\textsf{WOTS}^+\) instances whose public keys are compressed using a Merkle tree. Each signature includes a \(\textsf{WOTS}^+\) signature and an authentication path that allows the verifier to recompute the root. As each \(\textsf{WOTS}^+\) key pair can only be used once, the state of the \(\textsf{XMSS}\) scheme needs to keep track of the used pairs; thus it is stateful^{Footnote 5}. To reduce the total size of the private key (every \(\textsf{WOTS}^+\) signature requires \(\ell \) independent secret keys), all unique private keys used in the \(\textsf{WOTS}^+\) signatures are generated from a single private \(\textsf{Seed} _\textsf{sk} \) using a PRF, so the \(i^{\textit{th}}\) secret key of the \(j^{\textit{th}}\) leaf (a \(\textsf{WOTS}^+\) signature) is \(\textsf{sk} _i^j = PRF(\textsf{Seed} _{\textsf{sk}},i,j)\). We augment \(\textsf{XMSS}\) to an FSS scheme we call \(\mathsf {FSS.XMSS}\), which is essentially \(\textsf{XMSS}\) where we replace \(\textsf{WOTS}^+\) with our new \(\mathsf {FSS.WOTS}\) and use a slightly larger private \(\textsf{Seed} _\textsf{sk} \).
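The seed-based key derivation \(\textsf{sk} _i^j = PRF(\textsf{Seed} _{\textsf{sk}},i,j)\) can be sketched as follows, with HMAC-SHA256 standing in (as an assumption, not the paper's instantiation) for the PRF, and toy sizes throughout.

```python
import hmac, hashlib

def prf(seed: bytes, i: int, j: int, out_len: int = 16) -> bytes:
    """sk_i^j = PRF(Seed_sk, i, j): derive the i-th chain secret of the
    j-th WOTS leaf on demand, instead of storing all of them."""
    msg = i.to_bytes(4, "big") + j.to_bytes(4, "big")
    return hmac.new(seed, msg, hashlib.sha256).digest()[:out_len]

seed_sk = b"\x01" * 32            # enlarged seed (c_s * lambda_r bits)
# All per-leaf chain secrets re-derivable from the single short seed:
leaf_keys = [[prf(seed_sk, i, j) for i in range(4)] for j in range(3)]
```

Note that enlarging `seed_sk` changes only the PRF key; the derived values (and hence the signature) keep their size, which is exactly the point of the fine-grained assumption above.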
5.1 Fine-Grained Assumption on Adversary Capabilities
Our \(\mathsf {FSS.WOTS}\) variant of the \(\textsf{WOTS}^+\) signature assumes an unbounded adversary that is able to break the one-wayness property of the hash function and find all possible preimages for some target value y. We show that, as long as w.h.p. all hash values in all the chains in the \(\textsf{WOTS}^+\) signature have more than one preimage, the honest signer can prove with some constant (non-zero) probability that the forged signature is indeed a forgery, even assuming that the adversary is unbounded. Our proof relies on the fact that, due to the compression parameter, there are many different private keys that will result in the same signature and public key. This means that even if an adversary can recover all possible private keys that are consistent with the signature, it will not be able to know which is the correct one. However, this is not the case for signature schemes such as \(\textsf{XMSS}\). Each \(\textsf{XMSS}\) instance includes an exponential number of \(\textsf{WOTS}^+\) signatures, and each \(\textsf{WOTS}^+\) signature requires its own unique set of private keys. To avoid storing an exponential-size secret, the private keys used as the start of each chain are computed using a PRF and a small private key. This means that, unless we allow an exponentially large private key (which is not practical), even a small fraction of the total number of signatures supported by the \(\textsf{XMSS}\) scheme will leave just a single possible private key consistent with the observed signatures. Thus an unbounded adversary can recover the correct private key and use it to forge signatures. In such a scenario the honest signer will not be able to prove a signature was forged.
Fortunately, all is not lost! Instead of assuming a completely unbounded adversary, we can use the more fine-grained but natural assumption of Definition 5. We assume a very powerful (exponential-time) adversary able to break the security assumptions of the \(\textsf{XMSS}\) scheme with security parameter \(\lambda _r\). Under this assumption, if \(\textsf{XMSS}\) uses a PRF with a key of size \(\lambda _r\) to generate the private keys, we assume that the attacker can recover the single possible \(\textsf{Seed} _\textsf{sk} \) after seeing enough outputs and perfectly impersonate an honest signer. However, we assume that if we increase the size of the PRF's key even by a small constant factor \(c_s > 1\), we make such a key-recovery attack exponentially harder. Following Definition 5 with our PRF as the function f, we assume that, when using a secret key of size \(c_s \cdot \lambda _r\), our adversary is unable to distinguish between the output of the PRF and random samples.
To motivate our assumption, consider the generic classical brute-force attack, where an attacker running in time \(O(2^{\lambda _r})\) can enumerate all possible keys and find the correct one. When we increase the key size to \(c_s \cdot \lambda _r \), the attacker is required to run in time \(O(2^{c_s \cdot \lambda _r})\). Even for a quantum adversary, the runtime of the generic attack using Grover's algorithm [17] increases exponentially, from \(O(2^\frac{\lambda _r}{2})\) to \(O(2^\frac{c_s \cdot \lambda _r}{2})\).^{Footnote 6} To conclude, we assume a powerful adversary, able to break at least one of the security assumptions of the \(\textsf{XMSS}\) scheme with non-negligible probability for security parameter \(\lambda _r \), but with only a negligible probability of breaking the security of our PRF when using the larger security parameter \(c_s \cdot \lambda _r \); \(c_s\) is treated as a third security parameter, alongside \(\lambda _r\) and \(\lambda _s\).
5.2 Construction of \(\mathsf {FSS.XMSS}\)
Let \(\textbf{H} =\{\textbf{H} _i\}_i\) be a family of (compressing) THF, where \(\textbf{H} _i:\mathcal {P} \times \mathcal {T} \times \{0,1\}^{\lambda _r \cdot i}\rightarrow \{0,1\}^{\lambda _r}\). Analogously to \(\textsf{XMSS}\), \(\mathsf {FSS.XMSS}\) allows signing \(N=2^{h}\) messages of l bits by combining N parallel instantiations of \(\mathsf {FSS.WOTS}\) in a binary tree. This tree is constructed using the THF \(H:=\textbf{H} _2\). An internal state allows the signer to keep track of which keys have already been used.
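The binary tree over the leaves can be sketched as follows; the truncated SHA-256 node hash and the integer address encoding are toy stand-ins for \(H=\textbf{H} _2\) and its tweaks.

```python
import hashlib

def h2(seed: bytes, adrs: int, left: bytes, right: bytes) -> bytes:
    """Toy 2-to-1 node hash standing in for H = Th_2; adrs plays the
    role of the injective tweak identifying the node position."""
    msg = seed + adrs.to_bytes(4, "big") + left + right
    return hashlib.sha256(msg).digest()[:16]

def merkle_root(seed: bytes, leaves: list) -> bytes:
    """Compress the leaves lf_i into a single root, level by level
    (the number of leaves is assumed to be a power of two)."""
    level, adrs = list(leaves), 0
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            nxt.append(h2(seed, adrs, level[i], level[i + 1]))
            adrs += 1
        level = nxt
    return level[0]
```

A collision anywhere in this tree (two different children hashing to the same node) is exactly the tree-based proof of forgery mentioned below.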
\(\mathsf {FSS.WOTS}\) with Public Key Compression. As the public key \(\textsf{pk}\) of \(\mathsf {FSS.WOTS}\) has a size linear in the length of the messages, in \(\textsf{XMSS}\) and \(\mathsf {FSS.XMSS}\) it is compressed using a THF \(F:=\textbf{H} _\ell \) into an n-bit-long string \(\textsf{lf} \). The \(\{\textsf{lf} _i\}_i\) constitute the leaves of the binary tree. The secret keys of the various \(\mathsf {FSS.WOTS}\) instances are generated using the PRF \(\textsf{PRF} _1:\{0,1\}^{c_s\lambda _r}\times \mathcal {T} \rightarrow \{0,1\}^{\lambda _r + c(w-1)}\) from a secret seed \(\textsf{Seed} _\textsf{sk} \) and from the address \(\textsf{adrs}\), and the chains are generated from the same seed \(\textsf{Seed} _\textsf{pk} \). We abuse notation and give \((\textsf{Seed} _\textsf{sk},\textsf{Seed} _\textsf{pk},\textsf{adrs})\) as input to the key generation of \(\mathsf {FSS.WOTS}\). Analogously, the \(\mathsf {FSS.WOTS}\) verification now also checks that the tops of the chains obtained from the signature hash to the correct value \(\textsf{lf} _i\). Two different public keys hashing to the same leaf constitute a valid proof of forgery for this modified \(\mathsf {FSS.WOTS} \).
As \(\textsf{XMSS}\) is a stateful signature, the syntax of the FSS is adapted accordingly. The proof of forgery can be derived either from a collision on the tree, or as in \(\mathsf {FSS.WOTS} \), thus \(\pi \) contains a bit b that specifies which case it is.
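The key-generation flow described above can be sketched in a few lines of Python. This is a hypothetical illustration only: SHA-256 with domain-separation prefixes stands in for \(\textsf{PRF} _1\) and the THFs H and F, the \(\mathsf {FSS.WOTS}\) chains are collapsed into a single hash, and all names and sizes are toy choices.

```python
# Minimal, hypothetical sketch of FSS.XMSS key generation: per-instance
# secret keys come from PRF_1(Seed_sk, adrs), each FSS.WOTS public key is
# compressed by F into a leaf lf_i, and the leaves form a Merkle tree
# (via H) whose root is published. SHA-256 with prefixes stands in for
# PRF_1 and the THFs; chains are omitted. Names/sizes are illustrative.
import hashlib

def prf1(seed_sk: bytes, adrs: int) -> bytes:
    # stand-in for PRF_1(Seed_sk, adrs) -> per-instance FSS.WOTS secret key
    return hashlib.sha256(b"PRF1" + seed_sk + adrs.to_bytes(4, "big")).digest()

def wots_pk(sk: bytes, seed_pk: bytes) -> bytes:
    # placeholder for the FSS.WOTS public key derived from sk (chains omitted)
    return hashlib.sha256(b"WOTS" + seed_pk + sk).digest()

def leaf(pk: bytes, seed_pk: bytes, adrs: int) -> bytes:
    # F compresses the (long) FSS.WOTS public key into a short leaf lf_i
    return hashlib.sha256(b"F" + seed_pk + adrs.to_bytes(4, "big") + pk).digest()

def merkle_root(leaves: list) -> bytes:
    # H hashes sibling pairs level by level until a single root remains
    level = list(leaves)
    while len(level) > 1:
        level = [hashlib.sha256(b"H" + a + b).digest()
                 for a, b in zip(level[::2], level[1::2])]
    return level[0]

h = 3  # toy tree height: N = 2^h FSS.WOTS instances
seed_sk, seed_pk = b"\x00" * 32, b"\x01" * 16
leaves = [leaf(wots_pk(prf1(seed_sk, i), seed_pk), seed_pk, i)
          for i in range(2 ** h)]
root = merkle_root(leaves)  # published (with Seed_pk) as the public key
assert len(root) == 32
```

Note how a single \(\textsf{Seed} _\textsf{sk}\) drives all N instances through the addressed PRF, which is exactly where the enlarged key of size \(c_s\lambda _r\) matters.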
Lemma 4
(Non-adaptive Unforgeability). If \(\mathsf {FSS.WOTS}\) is an unforgeable FSS, \(\textsf{PRF} _1\) is a PRF, and H and F are \(\textsf {SMTCR} \) secure THFs, then \(\mathsf {FSS.XMSS} \) is unforgeable under non-adaptive CMA.
Proof
(sketch). Consider the following sequence of hybrid games:

\(\mathcal {H} _1:\) this is the non-adaptive unforgeability experiment for \(\mathsf {FSS.XMSS}\).

\(\mathcal {H} _2:\) Same as \(\mathcal {H} _1\), but the \(\textsf{sk} \)’s are random strings instead of output by \(\textsf{PRF} _1\).

\(\mathcal {H} _3:\) Same as \(\mathcal {H} _2\), but the winning condition now excludes cases when the forgery contains a collision on the tree.

\(\mathcal {H} _4:\) Same as \(\mathcal {H} _3\), but the winning condition now excludes cases when the forgery contains a collision on a leaf.
Clearly, distinguishing \(\mathcal {H} _1\) from \(\mathcal {H} _2\) requires distinguishing outputs of the PRF from random. To distinguish \(\mathcal {H} _2\) from \(\mathcal {H} _3\) (resp., \(\mathcal {H} _3\) from \(\mathcal {H} _4\)), the adversary has to return a collision on the tree (resp., on a leaf). This happens only if \(\mathcal {A}\) returns a forgery \(\sigma ^*\) where the tree path is computed from a different leaf than the one used to generate the root (resp., where the leaf is obtained by hashing values different from the \(\mathsf {FSS.WOTS}\) key \(\textsf{pk} _i\)). This means that the \(\mathsf {FSS.WOTS}\) signature in \(\sigma ^*\) has to verify w.r.t. a key \(\textsf{pk}\) that was not among the ones generated by the challenger. Hence a successful distinguisher can be used to break the \(\textsf {SMTCR}\) security of H (resp., of F). Finally, to win \(\mathcal {H} _4\), \(\mathcal {A}\) has only one option: forging a signature using one of the \(\mathsf {FSS.WOTS}\) keys generated by the challenger. Thus winning \(\mathcal {H} _4\) is essentially equivalent to breaking the unforgeability of \(\mathsf {FSS.WOTS}\). A tighter proof can be obtained by not relying on the unforgeability of \(\mathsf {FSS.WOTS}\) as a black box, but by reducing to the security of \(\textbf{Th}\) following the same steps as in the proof of Lemma 2. The only difference is that now the security of \(\textbf{Th}\) has to hold for a larger number of queries (polynomial in \(\lambda _s\)).
Lemma 5
(Security for the Signer). If \(\mathsf {FSS.WOTS}\) is secure for the signer, then \(\mathsf {FSS.XMSS}\) is 1/2-secure for the signer against an adversary \(\mathcal {A} \) that satisfies Definition 5.
Proof
Consider the following sequence of hybrid games:

\(\mathcal {H} _1:\) this is Experiment 1 for \(\mathsf {FSS.XMSS}\).

\(\mathcal {H} _2:\) Same as \(\mathcal {H} _1\), but the \(\textsf{sk} \)’s are random strings instead of output by \(\textsf{PRF} _1\).

\(\mathcal {H} _3:\) Same as \(\mathcal {H} _2\), but the winning condition now excludes cases when the forgery contains a collision on the tree.

\(\mathcal {H} _4:\) Same as \(\mathcal {H} _3\), but the winning condition now excludes cases when the forgery contains a collision on a leaf.
Distinguishing \(\mathcal {H} _1\) from \(\mathcal {H} _2\) requires distinguishing the output of the PRF from uniformly sampled strings, which is not possible in polynomial time under our fine-grained assumption. Now, observe that the Merkle tree is generated with a public seed and a THF that does not guarantee the existence of more than one preimage per point. Thus, as the adversary runs in exponential time, \(\mathcal {A}\) is able to reconstruct the whole tree. Hence, \(\mathcal {A}\) has the same success probability in both \(\mathcal {H} _2\) and \(\mathcal {H} _3\). An analogous reasoning holds for F, thus \(\mathcal {A}\) has the same success probability also in \(\mathcal {H} _4\). Finally, the proof in case of \(\mathcal {H} _4\) is the same as for \(\mathsf {FSS.WOTS}\), but the success probability decreases by a factor N: the more instances of \(\mathsf {FSS.WOTS}\), the higher the probability that one of them contains a value with only one preimage. In detail, given \(N\cdot \ell (w-1)\) points chosen at random, the probability of all of them having more than one preimage is greater than \(1 - 2N\cdot \ell (w-1)\exp (-2^{c})\). Thus, the probability that an honest signer can generate a proof of forgery is greater than \(\left( 1 - 2N\ell (w-1)\exp (-2^{c}) \right) \cdot \frac{1}{2}\), which is approximately \(\frac{1}{2} - \textsf{negl} (\lambda _s)\) analogously to the \(\mathsf {FSS.WOTS}\) case; as in practice h is set to be smaller than 9, the probability is greater than \(1/2-2^{-73}\) for \(c=6\).
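The numeric claim at the end of the proof can be checked directly. The sketch below plugs in hypothetical \(\textsf{SPHINCS}^+\)-style parameters \(\ell = 35\), \(w = 16\), \(h = 9\) (so \(N = 2^h\)) and compression factor \(c = 6\):

```python
# Worked numeric check of the signer-security bound above, with the
# example parameters l = 35 chains, w = 16, tree height h = 9 and
# compression factor c = 6 (illustrative values).
import math

def failure_bound(N: int, l: int, w: int, c: int) -> float:
    # 2 * N * l * (w - 1) * exp(-2^c): upper bound on the probability
    # that some chain value has only one preimage
    return 2 * N * l * (w - 1) * math.exp(-(2 ** c))

N, l, w, c = 2 ** 9, 35, 16, 6
eps = failure_bound(N, l, w, c)

# eps is on the order of 2^-73, so the signer's proof-of-forgery
# probability (1 - eps)/2 is 1/2 minus a negligible term.
assert eps < 2 ** -72
assert (1 - eps) / 2 >= 0.5 - 2 ** -73
```

Increasing c shrinks the bound doubly exponentially, so even a very small compression factor suffices in practice.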
Lemma 6
(Security for the Recipient). If \(\mathsf {FSS.WOTS}\) is secure for the recipient, and H and F are \(\textsf {SMTCR}\) secure THFs, then \(\mathsf {FSS.XMSS}\) is secure for the recipient.
Proof
(sketch). The proof follows the same intuition as the proof of Lemma 4, in that a successful adversary \(\mathcal {A}\) has to return either a collision for H or F, or a valid proof of forgery for \(\mathsf {FSS.WOTS}\) for an honestly generated signature. As such, as long as both building blocks are secure, \(\mathsf {FSS.XMSS}\) is secure for the recipient.
5.3 Multiple Instances of \(\mathsf {FSS.XMSS}\): The Hypertree
As we have seen, \(\mathsf {FSS.XMSS}\) can be used to sign only a limited number of messages. The technique used in \(\textsf{SPHINCS}^+\) to extend it to larger message spaces is hypertrees: small-depth \(\mathsf {FSS.XMSS}\) trees connected to one another by \(\mathsf {FSS.WOTS}\) signatures. In detail, during key generation the signer has to generate a single \(\mathsf {FSS.XMSS}\) tree. To expand its signing capabilities, whenever signing a message it can generate a new \(\mathsf {FSS.XMSS}\) tree, and connect it to the previous one by signing the new root with one of the \(\textsf{pk}\) ’s of \(\mathsf {FSS.WOTS}\) contained in the leaves of the original tree. This can be iterated for many layers, depending on the length of the message; the tree in the last layer is used to sign the message as in standard \(\mathsf {FSS.XMSS}\). Analogously to what happens in \(\textsf{SPHINCS}^+\), security follows from the security of the building blocks: a forgery on the hypertree yields a forgery on one of the \(\mathsf {FSS.WOTS}\) signatures. Observe that this is still a stateful signature! To get rid of the state, \(\textsf{SPHINCS}^+\) relies on the Hash-and-Sign paradigm, which turns out to be quite problematic in the FSS framework (cf. Sect. 6).
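The chaining of trees described above can be sketched as follows. This is a hypothetical illustration: trees and \(\mathsf {FSS.WOTS}\) signatures are each abstracted into a single hash, and all names are invented for the sketch.

```python
# Minimal, hypothetical sketch of hypertree chaining: the root of each
# newly generated lower-layer tree is signed with an FSS.WOTS key sitting
# in a leaf of the tree one layer up. Trees and WOTS signatures are
# abstracted into single hashes; all names are illustrative.
import hashlib

def xmss_root(layer: int, index: int) -> bytes:
    # placeholder for the root of the (FSS.)XMSS tree at (layer, index)
    return hashlib.sha256(b"root" + bytes([layer]) + index.to_bytes(4, "big")).digest()

def wots_sign(leaf_key_tag: bytes, msg: bytes) -> bytes:
    # placeholder for an FSS.WOTS signature by a leaf key of the tree above
    return hashlib.sha256(b"wots" + leaf_key_tag + msg).digest()

layers = 3
chain = []
for layer in range(layers - 1, 0, -1):
    upper = xmss_root(layer, 0)            # tree one layer up
    lower = xmss_root(layer - 1, 0)        # freshly generated tree below
    chain.append(wots_sign(upper, lower))  # sign the new root from above

# A full signature carries these linking FSS.WOTS signatures plus the
# final message signature under the last-layer tree.
assert len(chain) == layers - 1
```

The key point is that only the top tree is fixed at key generation; lower trees are (re)derivable on demand, at the price of keeping a state.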
6 Augmenting Hash-and-Sign Schemes to an FSS
To support arbitrary-size messages, practical digital signature schemes usually follow the Hash-and-Sign paradigm. In Hash-and-Sign, the message \(m\in \{0,1\}^*\) is first hashed into a fixed-size digest \(d=HASH(m)\), which is signed as \(\sigma =SIGN_\textsf{sk} (d)=SIGN_\textsf{sk} (HASH(m))\). It is advisable for the signer to prepend a random value R to the message and compute \(HASH(R\Vert m)\), so that even an adversary who can choose the signed messages must find a second preimage, rather than a collision, to break the scheme.
When trying to augment any Hash-and-Sign-based signature to an FSS, we need to address the following generic forgery attack on the initial HASH phase. Our adversary can try to find a new message \(m^*\) and randomness \(R^*\) such that \(HASH(R^*\Vert m^*) = HASH(R\Vert m)\), where m is any of the messages previously signed by the honest signer, and R is the randomness used. As the digests of \(R\Vert m\) and \(R^*\Vert m^*\) are the same, \(\sigma \) will also be a valid signature for \(m^*\). Assuming that the size of the digest is approximately the security parameter \(\lambda \) used by the digital signature, we can assume that an adversary able to forge the digital signature can also find such values \(R^*\) and \(m^*\). We note that in some signature schemes such as \(\textsf{FORS}\) and \(\textsf{SPHINCS}^+\), the adversary can also target some “interleaved” combination of the previously signed hash values. The corresponding problem is called Interleaved Target Subset Resilience (\(\textsf {ITSR} \)) [7].
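The randomized Hash-and-Sign flow, and the generic attack against it, can be sketched as follows. HMAC with the secret key is only a stand-in for the underlying SIGN (it is a MAC, not a real signature), and all names are illustrative:

```python
# Minimal sketch of randomized Hash-and-Sign: the signer samples R and
# signs d = HASH(R || m). HMAC stands in for SIGN_sk; it is a MAC, not a
# real signature scheme, and all names here are illustrative.
import hashlib
import hmac
import os

def sign_randomized(sk: bytes, m: bytes):
    R = os.urandom(16)                                # per-signature randomness
    d = hashlib.sha256(R + m).digest()                # d = HASH(R || m)
    sigma = hmac.new(sk, d, hashlib.sha256).digest()  # stand-in for SIGN_sk(d)
    return R, sigma

def verify_randomized(sk: bytes, m: bytes, R: bytes, sigma: bytes) -> bool:
    d = hashlib.sha256(R + m).digest()
    return hmac.compare_digest(sigma, hmac.new(sk, d, hashlib.sha256).digest())

sk = b"\x02" * 32
R, sigma = sign_randomized(sk, b"hello")
assert verify_randomized(sk, b"hello", R, sigma)
# An adversary finding (R*, m*) with HASH(R* || m*) == HASH(R || m) could
# replay sigma as a signature on m* -- the generic attack described above.
```

Since the forged pair \((R^*, m^*)\) reuses the honest \(\sigma \) unchanged, no property of SIGN itself can detect it; this is why the workarounds below act on the hash phase.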
Augmenting Hash-and-Sign signatures to FSS is a very challenging task. We now present two possible solutions, and leave improving them as future work.
Saving a Log File: One possible solution is for the honest signer to keep a log of all previously signed messages. In this case, when a forged signature \((R^*, m^*, \sigma )\) is presented to the signer, it can search for the corresponding honest signature \((R, m, \sigma )\) in the log file and show that \(HASH(R^*\Vert m^*) = HASH(R\Vert m)\) as a proof of forgery. However, the use of a log file raises the following question:
Doesn’t using a log file means that the FSS augmentation results in a statefull signature scheme?
The answer is no. In a stateful signature scheme such as \(\textsf{XMSS}\), the state must be kept online, and if it is lost, this can lead to a complete compromise of the scheme. Moreover, if multiple servers share the same secret key, they must also share exact, up-to-date replicas of the state. In our case, however, the log can be stored offline, and multiple servers sharing the same secret key can store their logs separately without online synchronization. Most importantly, if any part of the log is lost, the only consequence is that the signer will not be able to prove forgeries targeting the lost messages. As the main goal of our FSS augmentation is to stop mass exploitation of a forgery attack, as long as enough log files are stored, we will still be able to detect some forgery attacks and show that the scheme is now insecure.
We note that already today, we have relevant use cases where a log of all signatures is kept. For example, Certificate Authority (CA) servers are trusted parties that issue digital certificates to authenticate the identities of individuals, organizations, or devices for secure communication over a network or the internet. For audit purposes, such CAs usually log all of the certificates they sign. Widely supported standards such as Certificate Transparency [26] already provide an open framework for publicly logging and monitoring the issuance of digital certificates, offering a comprehensive log file of all signed certificates for improved transparency and security. As mentioned above, for some signature schemes such as \(\textsf{FORS}\) and \(\textsf{SPHINCS}^+\), the adversary can target some “interleaved” combination of the previously signed digest values. The log file can also be used to prove forgery in this case, by providing the set of honest messages that were “interleaved” to match the target forged digest. In case we do not want to log the signed message (as it might contain sensitive information, or due to size constraints), we can slightly modify the hash phase of the signing process so that we only need to store an intermediate randomized digest value of the message. Based on our fine-grained assumption, we use an intermediate strong hash function \(HASH':\{0,1\}^*\rightarrow \{0,1\}^{c_s\cdot \lambda _r}\); this hash function has a larger digest, and we assume that even our strong adversary cannot break it. Our modified signing process will be \(\sigma =SIGN_\textsf{sk} (HASH(HASH'(R\Vert M))).\) In our log file we only need to store \(d'=HASH'(R\Vert M)\). Note that we assume that the adversary cannot find a collision on \(d'\), but it can find a collision on \(d=HASH(HASH'(R\Vert M))\).
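The privacy-friendly log variant can be sketched as follows, under stated stand-ins: SHA-512 plays the "strong" \(HASH'\) (larger digest, assumed unbreakable), SHA-256 truncated to 16 bytes plays the breakable HASH, and the underlying SIGN is omitted entirely; all names and sizes are illustrative.

```python
# Hypothetical sketch of the log-file solution with an intermediate
# digest: the signer signs sigma = SIGN_sk(HASH(HASH'(R || M))) and logs
# only d' = HASH'(R || M), never M itself. SHA-512 stands in for the
# strong HASH', truncated SHA-256 for the breakable HASH; SIGN is omitted.
import hashlib
import os

log = []  # offline log of intermediate digests d' (no messages stored)

def hash_strong(data: bytes) -> bytes:   # HASH': assumed collision-resistant
    return hashlib.sha512(data).digest()

def hash_weak(data: bytes) -> bytes:     # HASH: the adversary may collide it
    return hashlib.sha256(data).digest()[:16]

def sign_and_log(M: bytes) -> bytes:
    R = os.urandom(16)
    d_prime = hash_strong(R + M)
    log.append(d_prime)                  # only d' goes into the log
    d = hash_weak(d_prime)               # the digest that SIGN_sk would sign
    return R

def prove_forgery(R_star: bytes, M_star: bytes) -> bool:
    # a forged (R*, M*) collides with some logged d' under the weak HASH;
    # exhibiting that collision is the proof of forgery
    d_prime_star = hash_strong(R_star + M_star)
    d_star = hash_weak(d_prime_star)
    return any(hash_weak(dp) == d_star and dp != d_prime_star for dp in log)

R = sign_and_log(b"certificate data")
assert len(log) == 1 and len(log[0]) == 64
assert not prove_forgery(R, b"certificate data")  # honest signature: no proof
```

The log stores only the \(c_s\cdot \lambda _r\)-bit values \(d'\), so neither message contents nor signatures need to be retained.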
\(\boldsymbol{c}_{\boldsymbol{s}}\) Parallel Signatures: To avoid the log file requirement, we propose another solution that is based on the fine-grained assumption on the adversary’s capabilities presented in Sect. 5.1. Recall that we assume that our (exponential-time) adversary can break the security assumptions of our scheme (including \(\textsf {ITSR}\)) for some security parameter \(\lambda \). However, we assume this adversary is not powerful enough to break our security assumptions for a larger security parameter \(c_s \cdot \lambda \). To use this assumption, our FSS version now includes \(c_s\) signatures (or \(\lceil c_s \rceil \) if \(c_s\notin \mathbb {Z}\)). We use a variant of the method used in [4] to calculate \(c_s\) separate digests, which are then signed.
Similarly to the analysis in Sect. 5.1, finding a message \(m^*\) such that all of its \(c_s\) digests have been signed before becomes exponentially harder as \(c_s\) increases.
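A minimal sketch of the digest derivation may be helpful. The domain-separated construction below is a hypothetical stand-in for the key-combiner-style method of [4], and the underlying SIGN is abstracted away entirely:

```python
# Minimal sketch of the c_s-parallel-signatures workaround: derive
# ceil(c_s) domain-separated digests of (R || m) and sign each of them.
# The derivation is a hypothetical stand-in for the method of [4].
import hashlib
import math

def parallel_digests(R: bytes, m: bytes, c_s: float) -> list:
    n = math.ceil(c_s)  # ceil(c_s) digests if c_s is not an integer
    # domain-separate each digest so a collision on one does not transfer
    return [hashlib.sha256(i.to_bytes(1, "big") + R + m).digest()
            for i in range(n)]

digests = parallel_digests(b"\x00" * 16, b"msg", 2)
# each digest would then be signed independently: sigma_i = SIGN_sk(d_i)
assert len(digests) == 2 and len(set(digests)) == 2
```

A forger must now produce simultaneous collisions on all \(c_s\) independent digests, which is the source of the exponential hardening claimed above.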
The proposed solution increases the running time and size of the signature by a factor of \(c_s\). This raises the following question:
Why can’t we just use the original signatures scheme with larger parameters?
As we mentioned above, increasing the security parameter of the original scheme by the same factor \(c_s\) can lead to a much larger increase in size. Recall that increasing the security parameter of \(\textsf{SPHINCS}^+\) by a factor of 2, from 128 to 256 bits, results in a size increase by a factor of 3.8 for the small-size variant. This means that our solution can still result in a much smaller signature size.
7 Augmenting \(\textsf{FORS}\) to an FSS
\(\textsf{FORS}\) is the final building block needed for \(\textsf{SPHINCS}^+\). In itself, it is very simple: the public key is the hash \(G:=\textbf{H} _{k}\) (defined in Sect. 5.2) of the roots of k trees of depth d constructed using \(H:=\textbf{H} _2\), whose leaves are obtained by computing a THF \(T:=\textbf{Th} _{w-1}\) (from the family \(\textbf{Th}\) of compressing THFs described in Sect. 4) on the secret keys. The secret keys are output by a PRF \(\textsf{PRF} _2\) evaluated on a secret seed and on its address in the tree. Signing a message requires splitting it into k blocks of d bits, and interpreting the \(i^{\text {th}}\) block as the address of a leaf in the \(i^{\text {th}}\) tree: a signature on the \(i^{\text {th}}\) block is the preimage of such a leaf, together with its authentication path. The signature on the full message is the collection of all the signatures on the blocks. Finally, to avoid forgeries through recombination one needs to bind all the paths together, hence the digest to be signed is obtained as the hash of the message (with the public key, the seed of T, and some message-dependent randomness computed with a PRF \(\textsf{PRF} _\textsf{msg} \)) through a function \(H_\textsf{msg}:\{0,1\}^{\lambda _r} \times \{0,1\}^{\lambda _r} \times \mathcal {P} \times \mathcal {M} \rightarrow \{0,1\}^{dk}\). As the structure of \(\textsf{FORS}\) is extremely similar to \(\textsf{XMSS}\), the fail-stop mechanism is integrated analogously: leaves are generated using a (strongly) compressing THF so that the adversary cannot recover the same preimages used by the signer (with constant probability). Trees are generated as in \(\mathsf {FSS.XMSS}\), thus we assume that a powerful adversary can reconstruct the leaves from the public information. The formal description of the protocol can be found in the full version [9].
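The block-by-block signing just described can be sketched in Python. SHA-256 with domain-separation prefixes stands in for the THFs T and H and for \(\textsf{PRF} _2\); the parameters are toy-sized and all names are illustrative:

```python
# Hypothetical sketch of (FSS.)FORS signing: the d*k-bit digest is split
# into k blocks of d bits, block i selects a leaf of the i-th tree, and
# the signature reveals that leaf's secret preimage plus its
# authentication path. SHA-256 stands in for T, H, and PRF_2.
import hashlib

d, k = 3, 4                      # toy tree depth and number of trees
seed_sk = b"\x03" * 32

def prf2(adrs: bytes) -> bytes:  # secret key for one leaf
    return hashlib.sha256(b"PRF2" + seed_sk + adrs).digest()

def T(x: bytes) -> bytes:        # leaf THF
    return hashlib.sha256(b"T" + x).digest()

def H(left: bytes, right: bytes) -> bytes:  # node THF
    return hashlib.sha256(b"H" + left + right).digest()

def tree_levels(i: int) -> list:
    level = [T(prf2(bytes([i, j]))) for j in range(2 ** d)]
    levels = [level]
    while len(level) > 1:
        level = [H(a, b) for a, b in zip(level[::2], level[1::2])]
        levels.append(level)
    return levels                # levels[0] = leaves, levels[-1] = [root]

def sign_block(i: int, block: int):
    levels = tree_levels(i)
    sk_leaf = prf2(bytes([i, block]))  # the revealed preimage
    auth, idx = [], block
    for lvl in levels[:-1]:            # record the sibling on each level
        auth.append(lvl[idx ^ 1])
        idx //= 2
    return sk_leaf, auth, levels[-1][0]

digest_blocks = [5, 0, 7, 2]           # toy d-bit blocks of the digest
signature = [sign_block(i, b) for i, b in enumerate(digest_blocks)]
assert len(signature) == k
# Verification recomputes each root from (preimage, path) and would hash
# the k roots with G to compare against the public key.
```

Since only the leaf indexed by each block is opened, the remaining leaves stay secret, which is what the fail-stop compression of T exploits.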
On Hash-and-Sign, Fine-Grained Assumptions, and Adaptivity. As \(\mathsf {FSS.FORS}\) is a Hash-and-Sign style signature, its security requires either assuming \(\textsf {ITSR}\) security^{Footnote 7} of \(H_\textsf{msg} \), or using one of the workarounds presented in Sect. 6. For the sake of clarity we assume here that the adversary is not powerful enough to break \(\textsf {ITSR}\), and deal with the other cases separately (cf. Sect. 6). Observe that assuming \(\textsf {ITSR}\) and adaptive \(\textsf {SMTCR}\) allows proving adaptive unforgeability without complexity leveraging (analogously to \(\textsf{SPHINCS}^+\)).
Lemma 7
(Security of \(\mathsf {FSS.FORS}\) ).

If T, G, and H are adaptive \(\textsf {SMTCR} \) secure compressing THFs, \(\textsf{PRF} _2\) and \(\textsf{PRF} _\textsf{msg} \) are PRFs, and \(H_\textsf{msg} \) is an \(\textsf {ITSR}\) secure compressing THF, then \(\mathsf {FSS.FORS}\) is unforgeable under adaptive CMA.

Assume that \(\mathcal {A}\) cannot break the \(\textsf {ITSR}\) security of \(H_\textsf{msg} \) nor invert \(\textsf{PRF} _2\) and \(\textsf{PRF} _\textsf{msg} \). If T is a compressing THF, then \(\mathsf {FSS.FORS}\) is secure for the signer against an adversary \(\mathcal {A} \) with running time at most \(2^{c_s\lambda _s/2}\) (in the QROM).

If T, G, H are \(\textsf {SMTCR}\) secure THFs, \(\mathsf {FSS.FORS}\) is secure for the recipient.
Proof
(Sketch). The unforgeability proof follows closely the reduction in [23, Theorem 3]. Hence, we only sketch the hybrid games:

\(\mathcal {H} _1\) : this is the adaptive \(\mathsf {uf\text{-}cma}\) experiment.

\(\mathcal {H} _2\) : Same as \(\mathcal {H} _1\), but the \(\textsf{sk} \)’s are random strings instead of output by \(\textsf{PRF} _2\).

\(\mathcal {H} _3\) : Same as \(\mathcal {H} _2\), but the output of \(\textsf{PRF} _\textsf{msg} \) is replaced with random strings.

\(\mathcal {H} _4\) : Same as \(\mathcal {H} _3\), but now the adversary loses if the signature is obtained by combining k authentication paths that were already output by the signer during the querying phase.

\(\mathcal {H} _5\) : Same as \(\mathcal {H} _4\), but now \(\mathcal {A}\) loses if a \(\textsf{FORS}\) leaf in the forgery is different from the leaf that the signer would generate for that position.
Distinguishing \(\mathcal {H} _1\) and \(\mathcal {H} _2\) (resp., \(\mathcal {H} _2\) and \(\mathcal {H} _3\)) requires breaking the pseudorandomness of \(\textsf{PRF} _2\) (resp., \(\textsf{PRF} _\textsf{msg} \)). Distinguishing \(\mathcal {H} _3\) from \(\mathcal {H} _4\) requires breaking the \(\textsf {ITSR}\) property of \(H_\textsf{msg} \), and the proof is analogous to the proof of [7, Claim 21]. Distinguishing \(\mathcal {H} _4\) from \(\mathcal {H} _5\) requires breaking the \(\textsf {SMTCR}\) security of H. Finally, winning \(\mathcal {H} _5\) requires breaking the \(\textsf {SMTCR}\) security of either T or G.
Security for the signer and for the recipient can be proved analogously to \(\mathsf {FSS.XMSS}\).
8 Augmenting \(\textsf{SPHINCS}^+\) to an FSS
Augmenting \(\textsf{SPHINCS}^+\) to an FSS just requires keeping its structure as is, and replacing \(\textsf{XMSS}\) by \(\mathsf {FSS.XMSS} \) and \(\textsf{FORS}\) by \(\mathsf {FSS.FORS} \). We include a high-level description of signing for those who are not familiar with \(\textsf{SPHINCS}^+\). Essentially, (the hash of) a message is interpreted as the address \(\textsf{idx} \) of a leaf in the hypertree, and a message digest \(\textsf{MD} \). The \(\mathsf {FSS.WOTS}\) \(\textsf{pk}\) contained in such a leaf is used to sign an \(\mathsf {FSS.FORS}\) public key, which in turn is used to sign \(\textsf{MD}\). A signature on the message then includes the randomness used in \(H_\textsf{msg} \), the authentication path and \(\mathsf {FSS.WOTS}\) signatures needed to go from the public root to the \(\mathsf {FSS.FORS}\) key, and the \(\mathsf {FSS.FORS}\) signature on \(\textsf{MD}\). Differences arise if one does not want to use the fine-grained assumption that an adversary cannot break \(\textsf {ITSR}\) security. In such a case one has to use one of our generic augmentations of Hash-and-Sign to an FSS (cf. Sect. 6). Let \(\textsf{PRF}\) be the PRF used to generate the secret keys of \(\mathsf {FSS.WOTS}\) and \(\mathsf {FSS.FORS}\).
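The signing flow just described can be sketched at a very high level. Every component is abstracted into a hash, and all sizes and names below are toy choices, not the real parameters:

```python
# High-level, hypothetical sketch of the FSS.SPHINCS signing flow:
# H_msg maps (randomness, public seed, message) to a hypertree leaf index
# idx and a message digest MD; the hypertree authenticates an FSS.FORS
# key, which signs MD. All components are abstracted into hashes.
import hashlib
import os

H_BITS, MD_BITS = 16, 24  # toy sizes for idx and MD

def h_msg(R: bytes, pk_seed: bytes, m: bytes):
    out = hashlib.sha256(R + pk_seed + m).digest()
    idx = int.from_bytes(out[:H_BITS // 8], "big")  # hypertree leaf index
    MD = out[H_BITS // 8:(H_BITS + MD_BITS) // 8]   # message digest
    return idx, MD

def sign(sk_seed: bytes, pk_seed: bytes, m: bytes):
    R = os.urandom(16)
    idx, MD = h_msg(R, pk_seed, m)
    # FSS.FORS key sitting at leaf idx, authenticated by the hypertree
    fors_pk = hashlib.sha256(b"fors_pk" + sk_seed + idx.to_bytes(2, "big")).digest()
    ht_sig = hashlib.sha256(b"ht" + sk_seed + fors_pk).digest()  # WOTS sigs + paths
    fors_sig = hashlib.sha256(b"fors" + sk_seed + MD).digest()   # FORS sig on MD
    return R, ht_sig, fors_sig

R, ht_sig, fors_sig = sign(b"\x04" * 32, b"\x05" * 16, b"message")
assert len(ht_sig) == len(fors_sig) == 32
```

Because idx is derived from the (randomized) message hash, no state is needed: which \(\mathsf {FSS.FORS}\) instance signs is decided by \(H_\textsf{msg}\) at signing time.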
Lemma 8
(Security of \(\mathsf {FSS.SPHINCS}\) )

If \(\textbf{Th} =\{\textbf{Th} _i\}_i\) is a family of THFs that is \(\textsf {SMUD} \), \(\textsf {SMTCR} \), and \(\textsf {SMPRE} \) secure, \(\textsf{PRF}\) and \(\textsf{PRF} _\textsf{msg} \) are PRFs, \(\textbf{H} =\{\textbf{H} _i\}_i\) is a family of \(\textsf {SMTCR}\) secure THFs, and \(H_\textsf{msg} \) is an \(\textsf {ITSR}\) secure compressing THF, then \(\mathsf {FSS.SPHINCS}\) is unforgeable under adaptive CMA.

Assume that \(\mathcal {A} \) cannot break the \(\textsf {ITSR}\) security of \(H_\textsf{msg} \) nor invert \(\textsf{PRF} \) and \(\textsf{PRF} _\textsf{msg} \). If \(\textbf{Th} \) and \(\textbf{H} \) are families of compressing THFs, then \(\mathsf {FSS.SPHINCS} \) is secure for the signer against an adversary \(\mathcal {A} \) with running time at most \(2^{c_s\lambda _s/2}\) (in the QROM).

If \(\textbf{Th} \) and \(\textbf{H} \) are families of \(\textsf {SMTCR} \) secure THFs, then \(\mathsf {FSS.SPHINCS} \) is secure for the recipient.
Proving unforgeability can be done with the same reduction as in the NIST specifications (see the full version [9]). Security for the signer and for the recipient can be proved analogously to what is done for \(\mathsf {FSS.XMSS} \).
8.1 Parameters Choice for \(\mathsf {FSS.SPHINCS} \)
We now discuss the cost of the fail-stop mechanism, taking the small 128-bit security variant of \(\textsf{SPHINCS}^+ \) (\(\textsf{SPHINCS}^+ \)-128s) as our use case and computing the cost of augmenting it into an \(\mathsf {FSS.SPHINCS} \) with 128-bit security, and with 256-bit security for the signer. Table 1 summarizes the results. Our augmentation does not affect the size of the public key, which remains 256 bits (a 128-bit seed and the 128-bit Merkle tree root). The secret key size increases multiplicatively by the expansion factor \(c_s\). To get 256 bits of security for the PRF, it is enough to set \(c_s=256/\lambda _r=2\). The size of the secret key is then \(PK + 2\cdot c_s\cdot \lambda _r=768\) bits instead of the 512 bits of \(\textsf{SPHINCS}^+ \)-128s (but smaller than the secret key of \(\textsf{SPHINCS}^+ \)-256s). Hence we focus on computing the expansion in the signature size, which depends on the value of the compression factor.
Value of Compression Factor \(\boldsymbol{c}\) . Recall that in \(\textsf{SPHINCS}^+ \), the entire hypertree (including all \(\textsf{WOTS}^+ \) and \(\textsf{XMSS} \) signatures, and also all of the \(\textsf{FORS} \) signatures’ Merkle trees) is deterministically derived from the private keys and seed, and can be considered public information, as it might be revealed as part of the benign signing process. The messages (and optional randomness) determine which private leaves of the \(\textsf{FORS} \) trees will be opened as part of the signatures. To simplify our calculation, we use the worst-case assumption that the adversary learns the entire hypertree structure of the \(\textsf{SPHINCS}^+ \) signature, and all \(\textsf{FORS} \) signatures’ Merkle trees.^{Footnote 8} As another worst-case assumption, we assume that the adversary can forge a signature without being detected as long as there is at least a single published value in any chain in any \(\textsf{WOTS}^+\) signature,^{Footnote 9} or a single leaf in any \(\textsf{FORS}\) Merkle tree, that has only one preimage. We will now calculate the probability of such an event occurring in an \(\mathsf {FSS.SPHINCS} \)-128s instance (the augmentation of \(\textsf{SPHINCS}^+ \)-128s to an FSS) as a function of the compression factor c. Overall, the number of values \(N_{val}\) that we want to have more than one preimage is the sum of the number of leaves for \(\textsf{FORS} \) and the total number of chains in all of the \(\textsf{WOTS}^+ \) signatures,
where h is the height of the hypertree, d is the height of the \(\textsf{XMSS} \) tree, t is the number of leaves in a \(\textsf{FORS} \) tree, k is the number of trees in \(\textsf{FORS}\), and l is the number of chains in a \(\textsf{WOTS}^+ \) signature. For \(\textsf{SPHINCS}^+ \)-128s, \(N_{val}\approx 2^{63} \cdot 2^{12} \cdot 14 < 2^{79}\). The probability that at least one of these points has only one preimage can be bounded using Lemma 3 as \(\varepsilon :=N_{val}\cdot 2\exp (-2^{c})\). Hence, \(c=8\) is enough to have \(\varepsilon \approx 2^{-368}\ll 2^{-128}\). We can now bound the number of additional bits for \(\mathsf {FSS.SPHINCS}\) by \( c \cdot (k + d \cdot l \cdot w) \). For \(\mathsf {FSS.SPHINCS} \)-128s with \(c = 8\) this results in \(8 \cdot (14 + 7 \cdot 35 \cdot 16)=31472\) bits, i.e., 3934 bytes. In the case of the variant that returns parallel signatures, this number is doubled.
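The calculation above can be reproduced numerically. The constants below are the \(\textsf{SPHINCS}^+\)-128s-like values used in the text and should be treated as illustrative:

```python
# Numerical reproduction of the parameter calculation: h = 63, t = 2^12,
# k = 14, d = 7, l = 35, w = 16 and compression factor c = 8 are the
# example constants from the text, treated here as illustrative inputs.
import math

h, t, k, d, l, w, c = 63, 2 ** 12, 14, 7, 35, 16, 8

# N_val is dominated by the FORS leaves: about 2^h instances, each with
# k trees of t leaves.
N_val = 2 ** 63 * t * k
assert N_val < 2 ** 79

# Lemma 3 bound: eps = N_val * 2 * exp(-2^c), computed in log2 to avoid
# floating-point underflow. For c = 8 this is far below 2^-128.
log2_eps = math.log2(N_val) + 1 - (2 ** c) / math.log(2)
assert log2_eps < -128

# extra signature size: c bits per value that must remain hidden
extra_bits = c * (k + d * l * w)
assert extra_bits == 31472 and extra_bits // 8 == 3934  # 3934 extra bytes
```

Because the per-value failure probability decays as \(\exp (-2^{c})\), pushing c from 6 to 8 buys an enormous security margin for only a few kilobytes of signature growth.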
9 Discussion
In this work, we propose to use (the already existing) Fail-Stop Signatures as a tool to mitigate the security risk posed by quantum computers. Indeed, their inherent fail-stop mechanism would allow detecting breaks of post-quantum signatures and deprecating them without even knowing how an attack works. Our extension of the model to fine-grained security allows us to design practical FSS versions of existing constructions, in particular \(\mathsf {FSS.SPHINCS}\), the fail-stop version of \(\textsf{SPHINCS}^+\). This new look at FSS opens up many interesting research directions, from designing FSS versions of signatures relying on \(\varSigma \)-protocols (e.g., Dilithium [27]) to improving our solution for integrating the Hash-and-Sign mechanism into FSS.
Notes
 1.
The QROM assumption is necessary to construct practical, secure tweakable hash functions from known hash functions, cf. [7, Section 6 and Appendix F]. Our FSS modification involves nontrivial changes to \(\textsf{SPHINCS}^+\) (e.g., by using compressing hash functions of varying size), hindering a more “black-box” reduction. We based our proof on the QROM as in [7], anticipating that if a QROM-free proof for \(\textsf{SPHINCS}^+\) emerges, a similar technique will apply to \(\mathsf {FSS.SPHINCS}\).
 2.
This assumption is already present in \(\textsf{SPHINCS}^+\). Indeed, in [7] the authors propose three constructions of THFs from hash functions to instantiate \(\textsf{SPHINCS}^+\). Among these, the only one proved secure in the standard model (i.e., [7, Construction 5]) is not used in practice, as it would imply exponentially-sized public parameters. The other two rely on the QROM, thus \(\textsf{SPHINCS}^+\) implicitly assumes it too.
 3.
One can assume that the description of the function is public, and the signer only has to publish the parameters of the function it has selected.
 4.
This is the specific choice for \(\textsf{SPHINCS}^+\). In general, we need \(\mathcal {T} \ge w\ell \).
 5.
\(\textsf{SPHINCS}^+\) only uses \(\textsf{XMSS}\) to sign predetermined roots, while the actual message is signed with \(\textsf{FORS}\). Hence, neither \(\textsf{SPHINCS}^+\) nor \(\mathsf {FSS.SPHINCS}\) are stateful.
 6.
Note that in a real-world instantiation of the PRF, the function’s state size must be large enough to accommodate the entire \(\textsf{Seed} _\textsf{sk} \) to avoid exploits analogous to [34].
 7.
\(\textsf {ITSR}\) security requires that an adversary cannot find a second preimage for a target among the answers to its queries, when given an oracle access to a keyed hash function that samples a fresh random key for each query [7].
 8.
In practice, the number of hypertree leaves is close to the number of allowed signatures, thus a large fraction of the leaves will not be revealed.
 9.
Unlike the case of \(\textsf{WOTS}^+ \) and \(\textsf{XMSS} \), here \(\mathcal {A} \) cannot choose the messages signed with \(\textsf{WOTS}^+ \), so it can only target the single value in the chain that was opened.
References
Adrian, D., et al.: Imperfect forward secrecy: how DiffieHellman fails in practice. In: Ray, I., Li, N., Kruegel, C. (eds.) ACM CCS 2015, pp. 5–17. ACM Press, October 2015. https://doi.org/10.1145/2810103.2813707
Alagic, G., et al.: Status report on the third round of the NIST postquantum cryptography standardization process (2022). https://csrc.nist.gov/publications/detail/nistir/8413/final, Accessed 10 July 2022
Aumasson, J.P., et al.: \(\text{SPHINCS}^+\). submission to NIST’s postquantum crypto standardization project, v.3 (2022). http://sphincs.org/data/sphincs+round3specification.pdf. Accessed 5 Oct 2022
Aviram, N., Dowling, B., Komargodski, I., Paterson, K.G., Ronen, E., Yogev, E.: Practical (postquantum) key combiners from onewayness and applications to TLS. IACR Cryptol. ePrint Arch., p. 65 (2022). https://eprint.iacr.org/2022/065
Barić, N., Pfitzmann, B.: Collisionfree accumulators and failstop signature schemes without trees. In: Fumy, W. (ed.) EUROCRYPT 1997. LNCS, vol. 1233, pp. 480–494. Springer, Heidelberg (1997). https://doi.org/10.1007/3540690530_33
BenSasson, E., Goldberg, L., Levit, D.: STARK friendly hash – survey and recommendation. Cryptology ePrint Archive, Report 2020/948 (2020). https://eprint.iacr.org/2020/948
Bernstein, D.J., Hülsing, A., Kölbl, S., Niederhagen, R., Rijneveld, J., Schwabe, P.: The \(\text{ SPHINCS}^+\) signature framework. In: Cavallaro, L., Kinder, J., Wang, X., Katz, J. (eds.) ACM CCS 2019, pp. 2129–2146. ACM Press, November 2019. https://doi.org/10.1145/3319535.3363229
Beullens, W.: Breaking rainbow takes a weekend on a laptop. In: Dodis, Y., Shrimpton, T. (eds.) CRYPTO 2022, Part II. LNCS, vol. 13508, pp. 464–479. Springer, Heidelberg (2022). https://doi.org/10.1007/9783031159794_16
Boschini, C., Dahari, H., Naor, M., Ronen, E.: That’s not my signature! Failstop signatures for a postquantum world. Cryptology ePrint Archive, Paper 2023/1754 (2023). https://eprint.iacr.org/2023/1754
Boudot, F., Gaudry, P., Guillevic, A., Heninger, N., Thomé, E., Zimmermann, P.: Factorization of RSA250 (2020). https://sympa.inria.fr/sympa/arc/cadonfs/202002/msg00001.html
Castryck, W., Decru, T.: An efficient key recovery attack on SIDH (preliminary version). IACR Cryptol. ePrint Arch., p. 975 (2022). https://eprint.iacr.org/2022/975
Chaum, D., Fiat, A., Naor, M.: Untraceable electronic cash. In: Goldwasser, S. (ed.) CRYPTO 1988. LNCS, vol. 403, pp. 319–327. Springer, New York (1990). https://doi.org/10.1007/0387347992_25
Chen, Y.: Quantum algorithms for lattice problems. Cryptology ePrint Archive, Paper 2024/555 (2024). https://eprint.iacr.org/2024/555
Damgård, I.B., Pedersen, T.P., Pfitzmann, B.: On the existence of statistically hiding bit commitment schemes and failstop signatures. In: Stinson, D.R. (ed.) CRYPTO 1993. LNCS, vol. 773, pp. 250–265. Springer, Heidelberg (1994). https://doi.org/10.1007/3540483292_22
Fiat, A., Shamir, A.: How to prove yourself: practical solutions to identification and signature problems. In: Odlyzko, A.M. (ed.) CRYPTO 1986. LNCS, vol. 263, pp. 186–194. Springer, Heidelberg (1987). https://doi.org/10.1007/3540477217_12
Goldreich, O.: Foundations of Cryptography: Basic Applications, vol. 2. Cambridge University Press, Cambridge (2004)
Grover, L.K.: A fast quantum mechanical algorithm for database search. In: STOC, pp. 212–219. ACM (1996)
Haitner, I., Hoch, J.J., Reingold, O., Segev, G.: Finding collisions in interactive protocols – a tight lower bound on the round complexity of statistically-hiding commitments. In: 48th FOCS, pp. 669–679. IEEE Computer Society Press, October 2007. https://doi.org/10.1109/FOCS.2007.27
Haitner, I., Reingold, O.: Statistically-hiding commitment from any one-way function. In: Johnson, D.S., Feige, U. (eds.) 39th ACM STOC, pp. 1–10. ACM Press, June 2007. https://doi.org/10.1145/1250790.1250792
Halevi, S., Micali, S.: Practical and provably-secure commitment schemes from collision-free hashing. In: Koblitz, N. (ed.) CRYPTO 1996. LNCS, vol. 1109, pp. 201–215. Springer, Heidelberg (1996). https://doi.org/10.1007/3-540-68697-5_16
Hülsing, A.: W-OTS+ – shorter signatures for hash-based signature schemes. In: Youssef, A., Nitaj, A., Hassanien, A.E. (eds.) AFRICACRYPT 2013. LNCS, vol. 7918, pp. 173–188. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-38553-7_10
Hülsing, A., et al.: SPHINCS+. Technical report, National Institute of Standards and Technology (2022). https://csrc.nist.gov/Projects/post-quantum-cryptography/selected-algorithms-2022
Hülsing, A., Kudinov, M.: Recovering the tight security proof of \({SPHINCS}^{+}\). Cryptology ePrint Archive, Report 2022/346 (2022). https://eprint.iacr.org/2022/346
Kiktenko, E.O., Kudinov, M.A., Bulychev, A.A., Fedorov, A.K.: Proof-of-forgery for hash-based signatures. In: di Vimercati, S.D.C., Samarati, P. (eds.) Proceedings of the 18th International Conference on Security and Cryptography, SECRYPT 2021, 6–8 July 2021, pp. 333–342. SCITEPRESS (2021)
Lamport, L.: Constructing digital signatures from a one-way function. Technical report SRI-CSL-98, SRI International Computer Science Laboratory, October 1979
Laurie, B., Messeri, E., Stradling, R.: Certificate transparency version 2.0. RFC 9162, RFC Editor, December 2021
Lyubashevsky, V., et al.: CRYSTALS-DILITHIUM. Technical report, National Institute of Standards and Technology (2020). https://csrc.nist.gov/projects/post-quantum-cryptography/round-3-submissions
Maino, L., Martindale, C.: An attack on SIDH with arbitrary starting curve. Cryptology ePrint Archive, Paper 2022/1026 (2022). https://eprint.iacr.org/2022/1026
Mashatan, A., Ouafi, K.: Efficient fail-stop signatures from the factoring assumption. In: Lai, X., Zhou, J., Li, H. (eds.) ISC 2011. LNCS, vol. 7001, pp. 372–385. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-24861-0_25
MATZOV: Report on the security of LWE: improved dual lattice attack. https://zenodo.org/record/6493704#.Yz_qSUpBxkg. Accessed 7 Oct 2022
Merkle, R.C.: A certified digital signature. In: Brassard, G. (ed.) CRYPTO 1989. LNCS, vol. 435, pp. 218–238. Springer, New York (1990). https://doi.org/10.1007/0-387-34805-0_21
NIST: Call for additional digital signature schemes for the post-quantum cryptography standardization process (2022). https://csrc.nist.gov/projects/pqc-dig-sig/standardization/call-for-proposals. Accessed 7 Oct 2022
Pedersen, T.P., Pfitzmann, B.: Fail-stop signatures. SIAM J. Comput. 26(2), 291–330 (1997). https://doi.org/10.1137/S009753979324557X
Perlner, R., Kelsey, J., Cooper, D.: Breaking category five SPHINCS+ with SHA-256. Cryptology ePrint Archive, Paper 2022/1061 (2022). https://eprint.iacr.org/2022/1061
Pfitzmann, B.: Digital Signature Schemes. LNCS, vol. 1100. Springer, Heidelberg (1996). https://doi.org/10.1007/BFb0024619
Prest, T., et al.: FALCON. Technical report, National Institute of Standards and Technology (2020). https://csrc.nist.gov/projects/post-quantum-cryptography/round-3-submissions
Rogaway, P.: The moral character of cryptographic work. Cryptology ePrint Archive, Report 2015/1162 (2015). https://eprint.iacr.org/2015/1162
Safavi-Naini, R., Susilo, W., Wang, H.: Fail-stop signature for long messages (extended abstract). In: Roy, B., Okamoto, E. (eds.) INDOCRYPT 2000. LNCS, vol. 1977, pp. 165–177. Springer, Heidelberg (2000). https://doi.org/10.1007/3-540-44495-5_15
Safavi-Naini, R., Susilo, W., Wang, H.: An efficient construction for fail-stop signature for long messages. J. Inf. Sci. Eng. 17(6), 879–898 (2001). http://www.iis.sinica.edu.tw/page/jise/2001/200111_02.html
Shor, P.W.: Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM J. Comput. 26(5), 1484–1509 (1997). https://doi.org/10.1137/S0097539795293172
Stevens, M., Bursztein, E., Karpman, P., Albertini, A., Markov, Y.: The first collision for full SHA-1. In: Katz, J., Shacham, H. (eds.) CRYPTO 2017, Part I. LNCS, vol. 10401, pp. 570–596. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63688-7_19
Susilo, W.: Short fail-stop signature scheme based on factorization and discrete logarithm assumptions. Theor. Comput. Sci. 410(8–10), 736–744 (2009)
Susilo, W., Mu, Y.: Provably secure fail-stop signature schemes based on RSA. Int. J. Wirel. Mob. Comput. 1(1), 53–60 (2005). https://doi.org/10.1504/IJWMC.2005.008055
Susilo, W., Safavi-Naini, R.: An efficient fail-stop signature scheme based on factorization. In: Lee, P.J., Lim, C.H. (eds.) ICISC 2002. LNCS, vol. 2587, pp. 62–74. Springer, Heidelberg (2003). https://doi.org/10.1007/3-540-36552-4_5
Susilo, W., Safavi-Naini, R., Gysin, M., Seberry, J.: A new and efficient fail-stop signature scheme. Comput. J. 43(5), 430–437 (2000). https://doi.org/10.1093/comjnl/43.5.430
Susilo, W., Safavi-Naini, R., Pieprzyk, J.: RSA-based fail-stop signature schemes. In: Proceedings of the 1999 International Conference on Parallel Processing Workshops, ICPPW 1999, Wakamatsu, Japan, 21–24 September 1999, pp. 161–166. IEEE Computer Society (1999). https://doi.org/10.1109/ICPPW.1999.800056
van Heijst, E., Pedersen, T.P., Pfitzmann, B.: New constructions of fail-stop signatures and lower bounds. In: Brickell, E.F. (ed.) CRYPTO 1992. LNCS, vol. 740, pp. 15–30. Springer, Heidelberg (1993). https://doi.org/10.1007/3-540-48071-4_2
van Heyst, E., Pedersen, T.P.: How to make efficient fail-stop signatures. In: Rueppel, R.A. (ed.) EUROCRYPT 1992. LNCS, vol. 658, pp. 366–377. Springer, Heidelberg (1993). https://doi.org/10.1007/3-540-47555-9_30
Waidner, M., Pfitzmann, B.: The dining cryptographers in the disco: unconditional sender and recipient untraceability with computationally secure serviceability. In: Quisquater, J.J., Vandewalle, J. (eds.) EUROCRYPT 1989. LNCS, vol. 434, p. 690. Springer, Heidelberg (1990). https://doi.org/10.1007/3-540-46885-4_69
Wang, X., Yin, Y.L., Yu, H.: Finding collisions in the full SHA1. In: Shoup, V. (ed.) CRYPTO 2005. LNCS, vol. 3621, pp. 17–36. Springer, Heidelberg (2005). https://doi.org/10.1007/11535218_2
Wikipedia contributors: Flame (malware) — Wikipedia, the free encyclopedia (2022). https://en.wikipedia.org/w/index.php?title=Flame_(malware)&oldid=1121984346. Accessed 24 Jan 2023
Yamakawa, T., Kitajima, N., Nishide, T., Hanaoka, G., Okamoto, E.: A short fail-stop signature scheme from factoring. In: Chow, S.S.M., Liu, J.K., Hui, L.C.K., Yiu, S.M. (eds.) ProvSec 2014. LNCS, vol. 8782, pp. 309–316. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-12475-9_22
Acknowledgements
We thank Oded Goldreich, Yehuda Lindell, and Guy Rothblum for valuable discussions and insights. We also thank Noga Amit, Gal Arnon, and Matan Hamilis for reading a preliminary version of this document. Cecilia Boschini has been supported by the Università della Svizzera Italiana under the SNSF project no. 182452, and by the Postdoc.Mobility grant no. P500PT_203075. Hila Dahari is a fellow of the Ariane de Rothschild Women Doctoral Program and supported in part by grants from the Israel Science Foundation (no. 950/15 and 2686/20) and by the Simons Foundation Collaboration on the Theory of Algorithmic Fairness. Work done in part while a visitor at the FACT Research Center at Reichman University, supported in part by AFOSR Award FA9550-21-1-0046. Moni Naor is supported in part by grants from the Israel Science Foundation (no. 2686/20), by the Simons Foundation Collaboration on the Theory of Algorithmic Fairness and by the Israeli Council for Higher Education (CHE) via the Weizmann Data Science Research Center. Eyal Ronen is partly supported by ISF grant no. 1807/23 and by Len Blavatnik and the Blavatnik Family Foundation.
© 2024 International Association for Cryptologic Research
Boschini, C., Dahari, H., Naor, M., Ronen, E. (2024). That's Not My Signature! Fail-Stop Signatures for a Post-quantum World. In: Reyzin, L., Stebila, D. (eds) Advances in Cryptology – CRYPTO 2024. CRYPTO 2024. Lecture Notes in Computer Science, vol 14920. Springer, Cham. https://doi.org/10.1007/978-3-031-68376-3_4
Print ISBN: 978-3-031-68375-6
Online ISBN: 978-3-031-68376-3