Abstract
This work introduces the concept of flexible signatures. In a flexible signature scheme, the verification algorithm quantifies the validity of a signature based on the number of computations performed, such that the signature's validity (or confidence) level in [0, 1] improves as the algorithm performs more computations. Importantly, the definition of flexible signatures does not assume the resource restriction to be known in advance, a significant advantage when the verification process is hard-stopped by a system interrupt. Prominent traditional signature schemes such as RSA and (EC)DSA seem unsuitable for building flexible signatures because the rigid all-or-nothing guarantees offered by traditional cryptographic primitives are particularly unattractive in these unpredictably resource-constrained environments.
In this work, we find the use of the Lamport-Diffie one-time signature and Merkle authentication tree to be suitable for building flexible signatures. We present a flexible signature construction based on these hash-based primitives and prove its security with a concrete security analysis. We also perform a thorough validity-level analysis demonstrating an attractive computation-vs-validity trade-off offered by our construction: a security level of 80 bits can be ensured by performing only around \(\frac{2}{3}\) of the total hash computations for our flexible signature construction with a Merkle tree of height 20. Finally, we have implemented our constructions in a resource-constrained environment on a Raspberry Pi. Our analysis demonstrates that the proposed flexible signature design is comparable to other standard signature schemes in terms of running time while offering a quantified level of security at each step of the verification algorithm.
We see this work as the first step towards realizing flexible-security cryptographic primitives. Beyond flexible signatures, our flexible-security conceptualization offers an interesting opportunity to build similar primitives in the asymmetric as well as symmetric cryptographic domains.
Mahimna Kelkar—This research was completed at Purdue University.
1 Introduction
Security for embedded and real-time systems has become a greater concern as manufacturers increase the connectivity of these traditionally isolated control networks to the outside world. The computerization of hitherto purely mechanical elements in vehicular networks, such as connections to the brakes, throttle, and steering wheel, has led to a life-threatening increase in exploitation potential. If an attacker gains access to an embedded control network, safety-critical message traffic can be manipulated to induce catastrophic system failures. In recent years, numerous attacks have impressively demonstrated that the software running on embedded controllers can be successfully exploited, often even remotely [17, 24, 27]. With the rise of the Internet of Things (IoT), more non-traditional embedded devices have started to get integrated into personal and commercial computing infrastructures, and security will soon become a paramount issue for new-age embedded systems [10, 29].
Well-established authentication and integrity protection mechanisms such as digital signatures or MACs can effectively solve many of the security issues with embedded systems. However, industry is hesitant to adopt them, as most embedded devices pose severe resource constraints on the security architecture regarding memory, computational capacity, energy, and time. Given hard real-time deadlines, an embedded device might not be able to complete a verification by its deadline, rendering all verification effort useless.
Indeed, traditional cryptographic primitives are not designed for such uncertain settings with unpredictable resource constraints. Consider prominent digital signature schemes (such as RSA and ECDSA) that allow a signer who has created a pair of private and public keys to sign messages so that any verifier can later verify the signature with the signer’s public key. The verification algorithms of those signature schemes are deterministic and only return a binary answer for the validity of the signature (i.e., 0 or 1). Such verification mechanisms may be unsatisfactory for an embedded module with unpredictable computing resources or time to perform the verification: if the module can only partially complete the verification process due to resource constraints or some unplanned realtime system interrupt, there are no partial validity guarantees available.
This calls for a signature scheme that can quantify the validity of the signature based on the number of computations performed during the verification. In particular, for a signature scheme instantiation with 128bit security, we expect the verification process to be flexible enough to offer a validity (or confidence) level in [0, 1] based on the resources available during the verification process. We observe that none of the existing signature schemes offer such a tradeoff between the computation time/resource and the security level in a flexible manner.
Contribution. This paper initiates the study of cryptographic primitives with flexible security guarantees that can be of tremendous interest to realtime systems. In particular, we investigate the notion of a flexible signature scheme that offers partial security for an unpredictably partial verification.
As the first step, based on the standard definition of digital signatures, we propose a new definition of a signature scheme with a flexible verification algorithm. Here, instead of returning a binary answer, the verification algorithm returns a value \(\alpha \in [0,1]\cup \{\bot \}\) that quantifies the validity of the signature based on the number of computations performed.
Next, we provide a provably secure construction of a flexible signature scheme based on the Lamport-Diffie one-time signature construction [19] and the Merkle authentication tree [22]. The security of our signature relies on the difficulty of finding an \(\ell \)-near-collision pair for a collision-resistant hash function. Through our analysis, we demonstrate that our construction still offers a high security level against adaptive chosen-message attacks despite performing fewer computations during verification. For example, a security level of 80 bits requires performing only around \(\frac{2}{3}\) of the total required hash computations for a Merkle tree of height 20.
Finally, we prototype our constructions in a resourceconstrained environment by implementing those on a Raspberry Pi. We find that the performance of the proposed constructions is comparable to other prominent signature schemes in terms of running time while offering a flexible tradeoff between the security level and the number of computations. Importantly, neither the security level nor the number of computations has to be predetermined during verification.
Related Work. Fischlin [13] proposed a similar framework for progressively verifiable message authentication codes (MACs). In particular, the author presented two concrete constructions for progressively verifiable MACs that allow the verifier to spot errors or invalid tags after a reasonable number of computations. Also, the paper introduced the concept of detection probability to denote the probability that the verifier detects errors after verifying a certain number of blocks. In this work, we address the open problem of a progressively verifiable digital signature scheme, and we incorporate the detection probability concept into the security analysis of our schemes.
Bellare, Goldreich, and Goldwasser [3] introduced incremental signatures. Here, given a signature on a document, a signer can obtain a (new) signature on a similar document by partially updating the available signature. The incremental signature computation is more efficient than computing a signature from scratch and thus can offer some advantage to a resourceconstrained signer. However, it provides no benefit for a resourceconstrained verifier; the verifier still needs to perform a complete verification of the signature.
A signature scheme with batch verification [2, 8] is a cryptographic primitive that offers efficient verification: after receiving multiple signatures from different sources, a verifier can efficiently verify the entire set of signatures at once. Batch verification signature schemes and flexible signature schemes are similar in that both offer an efficient and flexible verification mechanism. However, while batch verification merely seeks to reduce the load on a busy server, the flexible signature focuses on a resource-constrained verifier who can tolerate a partial security guarantee from a signature.
Freitag et al. [14] proposed the concept of signatures with randomized verification. Here, the verification algorithm takes as input the public key along with some random coins to determine the validity of the signature. In those schemes, the attacker's advantage in forging a valid message-signature pair \((m^*,\sigma ^*)\) is determined by the fraction of coins that accept \((m^*,\sigma ^*)\). Freitag et al. constructed a signature scheme with randomized verification from identity-based encryption (IBE) schemes using Naor's transformation and showed that the security level of their signature scheme is fixed by the size of the underlying IBE scheme's identity space. While our work can be formally defined as a signature scheme with randomized verification, our scheme offers a more flexible verification in which the security level of the scheme can be efficiently computed based on the output of the verification algorithm.
Finally, Fan, Garay, and Mohassel [11] proposed the concept of short and adjustable signatures. They offered three variants, namely setup-adjustable, signing-adjustable, and verification-adjustable signatures, offering different trade-offs between the length and the security of the signature. The first two variants allow the signer to adjust the length of the signature, while the last variant allows the verifier to shorten the signature during the verification phase. They presented three constructions for each variant based on indistinguishability obfuscation (\(i \mathcal {O}\)), and one concrete construction only for the setup-adjustable variant based on the BLS signature scheme [5]. Unfortunately, none of those constructions is suitable for constructing flexible signatures tolerating unpredictable interrupts.
2 Preliminaries
Figure 1 presents prominent notational conventions that we use throughout this work. Our constructions employ the following standard properties of cryptographic hash functions. We use \(H:\mathcal {K} \times \mathcal {M} \rightarrow \{0,1\}^n\) to denote a family of hash functions that is parameterized by a key \(k \in \mathcal K\) and a message \(m \in \mathcal {M}\) and outputs a binary string of length n. For this work, we consider two security properties for hash functions from [26], preimage resistance and collision resistance, and one weaker security notion from [18, 21], \(\ell \)-near-collision resistance.
Preimage Resistance: We call a family H of hash functions \((t_{{ow}},\epsilon _{{ow}})\)-preimage resistant if for any \(\mathcal A\) that runs for at most time \(t_{ow}\), the adversary's advantage is:

$$\Pr \big [k \leftarrow \mathcal {K};\ x \leftarrow \mathcal {M};\ x' \leftarrow \mathcal {A}\big (k, H_k(x)\big ) : H_k(x') = H_k(x) \big ] \le \epsilon _{ow}.$$
Collision Resistance: We call a family H of hash functions \((t_{{cr}},\epsilon _{{cr}})\)-collision resistant if for any \(\mathcal A\) that runs for at most time \(t_{cr}\), the adversary's advantage is:

$$\Pr \big [k \leftarrow \mathcal {K};\ (x, x') \leftarrow \mathcal {A}(k) : x \ne x' \wedge H_k(x) = H_k(x') \big ] \le \epsilon _{cr}.$$
\(\ell \)-near-collision Resistance: We call a family H of hash functions \((t_{{\ell \text {-}ncr}},\epsilon _{{\ell \text {-}ncr}})\)-\(\ell \)-near-collision resistant if for any \(\mathcal A\) that runs for at most time \(t_{\ell \text {-}{ncr}}\) and any \(0\le \ell \le n\), the adversary's advantage is:

$$\Pr \big [k \leftarrow \mathcal {K};\ (x, x') \leftarrow \mathcal {A}(k, \ell ) : x \ne x' \wedge \varDelta \big (H_k(x), H_k(x')\big ) \le \ell \big ] \le \epsilon _{\ell \text {-}ncr},$$

where \(\varDelta (\cdot ,\cdot )\) denotes the Hamming distance between two bit strings.
Generic Attacks. Using exhaustive search, \(t_{ow} = 2^q\) hash evaluations are required to find a preimage with success probability \(\epsilon _{ow} = 1/2^{n-q}\). Due to the birthday paradox, however, only \(t_{cr} = 2^{n/2}\) evaluations are required to find a collision with a success probability of \(\epsilon _{cr}\approx 1/2\). Finally, Lamberger et al. showed in [18] that at least \(t_{\ell \text {-}ncr}=2^{n/2}/\sqrt{\sum _{i=0}^\ell {\left( {\begin{array}{c}n\\ i\end{array}}\right) }}\) evaluations are required to find an \(\ell \)-near-collision with a success probability of \(\epsilon _{\ell \text {-}ncr} \approx 1/2\).
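The generic-attack costs above are easy to tabulate. The following snippet (our own illustration, not from the paper) evaluates the \(\ell \)-near-collision bound of Lamberger et al. for a 256-bit digest:

```python
import math

def near_collision_cost(n: int, ell: int) -> float:
    """Generic birthday-style cost 2^(n/2) / sqrt(sum_{i<=ell} C(n, i))
    to find an ell-near-collision with probability ~1/2 (Lamberger et al.)."""
    ball = sum(math.comb(n, i) for i in range(ell + 1))
    return 2 ** (n / 2) / math.sqrt(ball)

# For n = 256: an exact collision (ell = 0) costs ~2^128 evaluations;
# allowing a few differing bits lowers the cost only modestly.
for ell in (0, 1, 4, 16):
    print(ell, math.log2(near_collision_cost(256, ell)))
```

This quantifies why near-collisions are only slightly cheaper than exact collisions for small \(\ell \), which is what keeps the flexible scheme's security loss manageable.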
Unkeyed Hash Functions. In practice, the key for standard hash functions is public; therefore, from this point, we refer to the cryptographic hash function H as a fixed function \(H: \mathcal {M} \rightarrow \{0,1\}^n\).
3 Security Definition
In this section, we define our flexible signature scheme. We adapt the standard definition of a signature scheme [16] to the flexible security setting. An instance of an interrupted flexible signature verification is expected to return a validity value, \(\alpha \), in the range [0, 1]. To model the notion of runtime interruptions in the signature definition, we introduce the concept of an interruption oracle \(\mathsf {iOracle}_{\varSigma }(1^n)\) for a signature scheme \(\varSigma \) and give the verification algorithm access to it. The interruption oracle outputs an interruption position r in the sequence of computation steps involved in the verification algorithm. For simplicity, if we denote by \(\mathsf {max}\) the maximum number of computations needed (e.g., clock cycles, number of hash computations, or modular exponentiations) for a signature verification, then \(\mathsf {iOracle}_\varSigma (1^n)\) outputs a value \(r \in \{0,\dots , \mathsf {max}\}\). The specification of the interruption position may vary depending on the choice of the signature scheme; e.g., in this work, we define the interruption position as the number of hash computations performed in the verification algorithm.
Definition 1
A flexible signature scheme, \(\varSigma =(\mathsf {Gen}, \mathsf {Sign},\mathsf {Ver})\), consists of three algorithms:

\(\mathsf {Gen} (1^n)\) is a probabilistic algorithm that takes a security parameter \(1^n\) as input and outputs a pair (pk, sk) of public key and secret key.

\(\mathsf {Sign} (sk,m)\) is a probabilistic algorithm that takes a private key sk and a message m from a message space \(\mathcal M\) as inputs and outputs a signature \(\sigma \) from signature space \(\mathcal S\).

\(\mathsf {Ver} (pk,m,\sigma ,r)\) is a probabilistic algorithm that takes as inputs a public key pk, a message m, a signature \(\sigma \), and an optional interruption position \(r \in \{0,\dots ,\mathsf {max}\}\). If r is not provided, the algorithm queries an interruption oracle, \(\mathsf {iOracle}_{\varSigma }(1^n)\), to determine \(r \in \{0,\dots ,\mathsf {max}\}\). The algorithm outputs a real value \(\alpha \in [0,1] \cup \{\bot \}\). The signature is invalid if \(\alpha = \bot \).
The following correctness condition must hold: for all \((pk,sk) \leftarrow \mathsf {Gen}(1^n)\), all \(m \in \mathcal {M}\), and all \(r \in \{0,\dots , \mathsf {max}\}\): \(\Pr [\mathsf {Ver}(pk, m, \mathsf {Sign} (sk,m), r) = \bot ] = 0\).
Remark 1
The interruption oracle only serves as a virtual party for definitional reasons. In practice, the verification algorithm does not receive the interruption position r as an input, and the algorithm continues to perform computations until it receives an interruption. To model runtime interruptions using the interruption oracle \(\mathsf {iOracle}_{\varSigma }(1^n)\), in this work, we expect the flow of the verification algorithm to not be affected/biased by the r value offered by \(\mathsf {iOracle}_{\varSigma }(1^n)\) at the beginning of the verification. Also, we note that depending on signature schemes, there can be more than one way to define the interruption position, r (e.g. clock cycles, number of hash computations, or modular exponentiations).
Extracting Function. We assume that for a flexible signature scheme, there exists an efficient function \(\mathsf {iExtract}_{\varSigma }(\cdot )\) that takes as input the validity \(\alpha \) of the signature and outputs the interruption position r. Intuitively, in the case of an unexpected interruption, the verifier need not know in advance when the verification algorithm will be interrupted. However, based on the validity output \(\alpha \), the verifier should be able to use \(\mathsf {iExtract}_{\varSigma }(\cdot )\) to learn the interruption position r. The definition of the extracting function depends on the specification of the interruption position and the signature scheme. We define \(\mathsf {iExtract}_{\varSigma }(\cdot )\) for each of our proposed constructions in Sects. 4 and 5.
Security of Flexible Signature Scheme. We present a definition corresponding to the existential unforgeability under adaptive chosen-message attack (EUF-CMA) experiment in order to prove the security of our scheme. For a given flexible signature scheme \(\varSigma = (\mathsf {Gen}, \mathsf {Sign}, \mathsf {Ver})\) and \(\alpha \in [0,1]\), the attack experiment is defined as follows:
Experiment \(\mathsf {FlexExp}_{\mathcal {A},\varSigma }(1^n,\alpha ):\)

1.
The challenger \(\mathcal C\) runs \(\mathsf {Gen} (1^n)\) to obtain (pk, sk) and \(\mathsf {iExtract}_{\varSigma }(\alpha )\) to obtain position r. \(\mathcal C\) sends (pk, r) to \(\mathcal A\).

2.
Attacker \(\mathcal A\) queries \(\mathcal C\) for signatures of its adaptively chosen messages. Let \(Q_{\mathcal {A}}^{\mathsf {Sign} (sk,\cdot )}\) \(=\{m_i\}_{i\in [q]}\) be the set of all messages that \(\mathcal A\) queries \(\mathcal C\) where the \(i^{th}\) query is a message \(m_i \in \mathcal M\). After receiving \(m_i\), \(\mathcal C\) computes \(\sigma _i \leftarrow \mathsf {Sign} (sk,m_i)\), and sends \(\sigma _{i}\) to \(\mathcal A\).

3.
Eventually, \(\mathcal {A}\) outputs a pair \((m^*,\sigma ^*) \in \mathcal {M} \times \mathcal {S}\), where message \(m^* \notin Q_{\mathcal {A}}^{\mathsf {Sign} (sk,\cdot )}\), and sends the pair to \(\mathcal C\).

4.
\(\mathcal C\) computes \(\alpha ^* \leftarrow \mathsf {Ver}(pk,m^*, \sigma ^*, r)\). If \((\alpha ^* \ne \bot )\) and \((\alpha ^* \ge \alpha )\), the experiment returns 1; else, it returns 0.
Definition 2
For the security parameter n and \(\alpha \in [0,1]\), a flexible signature scheme \(\varSigma \) is \(\big ( t,\epsilon ,q \big )\)-existentially unforgeable under adaptive chosen-message attack if for all efficient adversaries \(\mathcal {A}\) that run for at most time t and query \(\mathsf {Sign} (sk,\cdot )\) at most q times, the success probability is:

$$\Pr \big [\mathsf {FlexExp}_{\mathcal {A},\varSigma }(1^n,\alpha ) = 1 \big ] \le \epsilon .$$
Here, t and \(\epsilon \) are functions of \(\alpha \) and n, and \(q= \mathsf {poly}(n)\).
4 Flexible Lamport-Diffie One-Time Signature
In this section, we present our concrete construction of the flexible one-time signature scheme. This construction is based on the Lamport-Diffie one-time signature construction introduced in [19].
4.1 Construction
We show the concrete construction of the flexible Lamport-Diffie one-time signature in Fig. 2. Here, we use the same key generation and signing algorithms from the Lamport-Diffie signature and modify the verification algorithm.
Key Generation Algorithm. The key generation algorithm takes a parameter \(1^n\) as input and generates a private key by choosing 2n bit strings, each of length n, uniformly at random from \(\{0,1\}^n\), namely \(\mathsf {SK=}(sk_{i}[b])_{i\in [n], b\in \{0,1\}}\in \{0,1\}^{2n^2}\). The public key is obtained by evaluating the preimage-resistant hash function on each of the private key's 2n bit strings, such that \(\mathsf {PK}=(pk_{i}[b])_{i\in [n],b\in \{0,1\}}\), where \(pk_i[b]=F(sk_{i}[b])\) and \(F(\cdot )\) is the preimage-resistant hash function.
Signing Algorithm. The signing algorithm takes as input the message m and the private key \(\mathsf {SK}\). First, it computes the digest of the message \(d=G(m)=(d_i)_{i\in [n]}\) where \(d_i\in \{0,1\}\) and \(G(\cdot )\) is a collisionresistant hash function that outputs digests of length n. The signature is generated based on the digest d as \(\sigma = (sk_i[d_i])_{i \in [n]}\).
Flexible Verification Algorithm. This algorithm takes as input a message m, a public key \(\mathsf {PK}\), a signature \(\sigma \), and an optional interruption position, and outputs the validity \(\alpha \) of the signature. In this construction, we model the interruption position \(r \in \{0,1, \dots , n\}\) as the number of hash \(F(\cdot )\) computations performed during verification. As mentioned earlier in Sect. 3, to faithfully model the interruption process, the flow of the verification algorithm should not be biased by the r value in any intelligent manner. First, the verification algorithm queries the interruption oracle to determine the interruption position r. The algorithm then computes the digest of the message, \(d=G(m)=(d_i)_{i\in [n]}\). Now, instead of sequentially verifying the signature bits like the verification algorithm of the standard scheme, the flexible verification algorithm randomly selects a position i of the signature and checks whether \(F(\sigma _i[d_i])=pk_i[d_i]\). If any preimage is invalid, the verification aborts and returns \(\alpha = \bot \). Otherwise, once the interruption condition is met or all positions are verified, the algorithm returns the validity as the fraction of the number of positions that passed the verification check over the length of the signature. In this Lamport-Diffie construction, given the validity \(\alpha \) output by the verification algorithm, the verifier simply computes the interruption position as \(r = \mathsf {iExtract}_{\varSigma _{fots}}(\alpha ) = \alpha \cdot n\).
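To make the verification flow concrete, here is a minimal Python sketch of the flexible Lamport-Diffie scheme (our own illustration, not the paper's implementation), instantiating both \(F(\cdot )\) and \(G(\cdot )\) with SHA-256; all identifiers such as `flex_verify` are ours:

```python
import hashlib
import random
import secrets

n = 256  # digest length of SHA-256, used as the security parameter

def F(x): return hashlib.sha256(x).digest()  # stands in for the preimage-resistant hash
def G(m): return hashlib.sha256(m).digest()  # stands in for the collision-resistant hash

def bits(d):
    """Expand an n-bit digest into a list of n individual bits."""
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(n)]

def gen():
    sk = [[secrets.token_bytes(32), secrets.token_bytes(32)] for _ in range(n)]
    pk = [[F(s[0]), F(s[1])] for s in sk]
    return pk, sk

def sign(sk, m):
    return [sk[i][b] for i, b in enumerate(bits(G(m)))]

def flex_verify(pk, m, sig, r=None):
    """Check up to r randomly chosen signature positions; return the
    validity alpha = (#checked)/n, or None (i.e., bot) on any mismatch."""
    d = bits(G(m))
    order = random.sample(range(n), n)     # random verification order
    checked = 0
    for i in order:
        if r is not None and checked >= r: # interruption reached
            break
        if F(sig[i]) != pk[i][d[i]]:
            return None                    # invalid signature: alpha = bot
        checked += 1
    return checked / n                     # iExtract(alpha) = alpha * n

pk, sk = gen()
sig = sign(sk, b"hello")
print(flex_verify(pk, b"hello", sig, r=100))  # 0.390625 (= 100/256)
```

An uninterrupted call (`r=None`) checks all positions and returns validity 1.0, matching the standard Lamport-Diffie verification.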
4.2 Security Analysis
In the flexible Lamport-Diffie one-time signature setting, since the verification algorithm does not perform verification at every position of the signature, the adversary can increase its probability of winning by outputting two messages whose hash digests are close. This is equivalent to finding an \(\ell \)-near-collision pair where \(\ell \) is determined by the adversary. Theorem 1 offers the trade-off between computation time and success probability for the adversary.
Theorem 1
Let F be a \((t_{ow}, \epsilon _{ow})\)-preimage-resistant hash function, G be a \((t_{\ell \text {-}ncr}\), \(\epsilon _{\ell \text {-}ncr})\)-\(\ell \)-near-collision-resistant hash function, \(k_F,k_G\) be the number of times \(F(\cdot )\) and \(G(\cdot )\) are evaluated in the verification, respectively, d be the Hamming distance between the two message digests output by \(\mathcal A\), and \(t_{gen}, t_{sign}, t_{ver}\) be the time it takes to generate keys, sign the message, and verify the signature, respectively. With \(1\le k_F \le n\) and \(k_G=1\), the flexible Lamport-Diffie one-time signature \(\varSigma _{fots}\) is \((t_{fots},\epsilon _{fots}, 1)\)-EUF-CMA, where:
The proof of Theorem 1 is deferred to Appendix A.
Security Level. Towards making the security of flexible Lamport-Diffie one-time signatures more comprehensible, we adapt the security level computation from [7]. For any \((t,\epsilon )\) signature scheme, we define the security level of the scheme to be \(\log _2{(t/\epsilon )}\). Since, in the flexible setting, the value of the pair \((t,\epsilon )\) may vary as the adversary decides the Hamming distance \(\ell \), for each value of \(k_F \in \{0,\dots ,n\}\), we compute the adversarial advantage for all values \(0 \le \ell \le n-k_F\) and output the minimum value of \(\log _{2}{\big ({t_{fots}}/{\epsilon _{fots}}\big )}\) as the security level of our scheme. A detailed security level analysis for the Lamport-Diffie one-time signature is available in Sect. 6.1.
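To illustrate the shape of this minimization, the following simplified sketch (our own approximation, not the exact bound of Theorem 1) takes t to be the generic near-collision attack time, and for \(\epsilon \) multiplies the attack's success probability \(\approx 1/2\) by the chance that the \(k_F\) randomly checked positions all miss the \(d \approx \ell \) differing digest positions:

```python
import math

def avoid_prob(n: int, d: int, k_F: int) -> float:
    """Chance that k_F uniformly chosen check positions all miss the
    d positions where a near-collision forgery differs."""
    if d > n - k_F:
        return 0.0
    return math.comb(n - d, k_F) / math.comb(n, k_F)

def security_level(n: int, k_F: int) -> float:
    """Illustrative min over ell of log2(t / eps): t is the generic
    near-collision attack time, eps is (1/2) times the avoidance
    probability, assuming d ~ ell (a simplification of Theorem 1)."""
    best = float("inf")
    for ell in range(0, n - k_F + 1):
        ball = sum(math.comb(n, i) for i in range(ell + 1))
        t = 2 ** (n / 2) / math.sqrt(ball)
        eps = 0.5 * avoid_prob(n, ell, k_F)
        if eps > 0:
            best = min(best, math.log2(t / eps))
    return best

print(security_level(256, 256))  # 129.0: full verification recovers ~n/2 bits
```

The trend this sketch exhibits, i.e. the security level degrading gracefully as \(k_F\) shrinks, matches the computation-vs-validity trade-off discussed in Sect. 6.1.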
5 Flexible Merkle Tree Signature
We use the Merkle authentication tree [22] to convert the flexible Lamport-Diffie one-time signature scheme into a flexible many-time signature scheme.
5.1 Construction
In the Merkle tree signature scheme, in addition to verifying the validity of the signature, the verifier uses the authentication nodes provided by the signer to check the authenticity of the one-time public key. We are interested in quantifying both of these values under an interruption. To achieve this, we require the signer to provide additional nodes in the authentication path.
Key Generation Algorithm. Our key generation remains the same as the one proposed in the original Merkle tree signature scheme [22]. For a tree of height h, the generation algorithm generates \(2^h\) Lamport-Diffie one-time key pairs, \(\mathsf {(PK_i,SK_i)}_{i\in [2^{h}]}\). The leaves of the tree are digests of the one-time public keys, \(H(\mathsf {PK}_i)\), where \(H(\cdot )\) is a collision-resistant hash function. An inner node of the Merkle tree is the hash digest of the concatenation of its left and right children. Finally, the public key of the scheme is the root of the tree, and the secret key is the set of \(2^h\) one-time secret keys.
Modified Signing Algorithm. In the original Merkle signature scheme, a signature consists of four parts: the signature state s, a one-time signature \(\sigma _{s}\), a one-time public key \(\mathsf {PK}_s\), and a set of authentication nodes \(\mathsf {Auth_s}=(a_i)_{i\in [h]}\). The verifier can use \(\mathsf {PK}_{s}\) to verify the validity of \(\sigma _s\) and use the nodes in \(\mathsf {Auth_s}\) and the state s to efficiently verify the authenticity of \(\mathsf {PK_s}\). For our signing algorithm, along with the authentication nodes of the old construction, we require the signer to send the nodes that complete the direct authentication path from the one-time public key to the root. We call this set of nodes the complement authentication nodes, \(\mathsf {Auth}_s^c = (a'_i)_{i\in [h]}\). The reason for including additional authentication nodes is to allow the verifier to randomly verify any level of the tree. Moreover, with the additional authentication nodes, the verifier can verify different levels of the tree in parallel. Figure 3 describes an example of the new requirement for a tree of height three. The modified signature now consists of five parts: a state s, a Lamport-Diffie one-time signature \(\sigma _{s}\), a one-time public key \(\mathsf {PK}_{s}\), a set of authentication nodes \(\mathsf {Auth}_s\), and a set of complement authentication nodes \(\mathsf {Auth}_s^c\).
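A minimal Python sketch (our own, with hypothetical helper names) of a Merkle tree carrying both the sibling authentication nodes and the complement nodes, showing that each level of the path can then be checked in isolation:

```python
import hashlib

def H(x): return hashlib.sha256(x).digest()

def build_tree(leaves):
    """Return all levels of a Merkle tree; levels[0] = leaves, levels[-1] = [root]."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([H(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def auth_paths(levels, s):
    """Auth nodes (siblings) and complement nodes (the path itself) for leaf s."""
    auth, comp, idx = [], [], s
    for lvl in levels[:-1]:
        auth.append(lvl[idx ^ 1])  # sibling at this level
        comp.append(lvl[idx])      # node on the direct path to the root
        idx //= 2
    return auth, comp

def check_level(root, auth, comp, s, j):
    """With complement nodes, level j is checkable in isolation: hash
    comp[j] with its sibling auth[j], compare to comp[j+1] (or the root)."""
    idx = s >> j
    pair = comp[j] + auth[j] if idx % 2 == 0 else auth[j] + comp[j]
    expected = comp[j + 1] if j + 1 < len(comp) else root
    return H(pair) == expected

leaves = [H(bytes([i])) for i in range(8)]  # stand-ins for H(PK_i), h = 3
levels = build_tree(leaves)
root = levels[-1][0]
auth, comp = auth_paths(levels, s=5)
print(all(check_level(root, auth, comp, 5, j) for j in range(3)))  # True
```

Because every `check_level` call needs only the two nodes at its own level, the verifier can probe levels in any random order, or in parallel, exactly the property the complement authentication nodes are introduced for.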
Flexible Verification Algorithm. With the additional authentication nodes, the verification algorithm can verify the authenticity of the public key at arbitrary levels of the authentication tree as well as use the flexible verification described in Sect. 4 to partially verify the validity of the one-time signature. In the end, the verification returns \(\alpha = (\alpha _{v}, \alpha _{a})\), which contains both the validity of the signature and the authenticity of the public key. In this construction, we define the interruption position \(r \in \{0,1,\dots , n+h+1\}\) as the number of computations performed during the verification step.
In contrast to the verification performed in the one-time signature scheme, the security guarantee the verifier gains from the authenticity verification of the one-time public key increases only linearly with the number of computations performed on the authentication path: the adversary can always generate a new one-time key pair to sign the message that is not part of the one-time key pairs created by the generation algorithm. In the original Merkle scheme, such a key pair will fail the authenticity check with overwhelming probability because the verifier can use the authentication nodes to compute and verify the root. However, in the flexible setting, the verifier may not be able to complete the authenticity verification, and there is a non-negligible probability that an invalid one-time public key will be used to verify the validity of the signature. Therefore, as the number of computations increases, the verifier gains an exponential security guarantee about the validity of the one-time signature but only a linear guarantee about the authenticity of the public key.
To address this issue, the verification algorithm needs to balance the computations performed on the authentication path and the computations performed on the one-time signature. We define the confidence in the validity of the one-time signature as \(1-1/2^{k_F/2}\) and the confidence in the authenticity of the one-time public key as \(k_H/(h+1)\), where \(k_F\) is the number of computations performed on the one-time signature, \(k_H\) is the number of computations performed on the one-time public key, and h is the height of the Merkle tree. To balance the number of computations, the verifier needs to maintain \(1-1/2^{k_F/2} \approx k_H/(h+1)\). With the new signing and verification algorithms described above, we present a detailed construction of the flexible Merkle signature scheme in Fig. 4. In this Merkle signature construction, given the validity \(\alpha = (\alpha _{v}, \alpha _{a})\) output by the verification algorithm, the verifier can compute the interruption position as \(r = \alpha _v \cdot n + \alpha _a \cdot (h+1)\).
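The balancing rule can be made concrete with a small helper (ours, illustrative only), which returns the smallest \(k_H\) whose linear confidence matches the exponential signature confidence:

```python
import math

def balanced_kH(k_F: int, h: int) -> int:
    """Smallest number of authentication-path checks k_H whose linear
    confidence k_H/(h+1) matches the exponential signature confidence
    1 - 2^(-k_F/2)."""
    target = 1 - 2 ** (-k_F / 2)
    return math.ceil(target * (h + 1))

# For a tree of height h = 20: after only a handful of signature checks
# the signature confidence is already near 1, so almost the entire
# authentication path should be verified.
for k_F in (2, 8, 20, 64):
    print(k_F, balanced_kH(k_F, 20))  # e.g. k_F = 2 -> k_H = 11
```

This reflects the asymmetry discussed above: the signature side saturates exponentially fast, so the verifier's budget should quickly shift toward the authentication path.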
5.2 Security Analysis
Theorem 2 presents the tradeoff between computation time and success probability for the adversary \(\mathcal {A}\).
Theorem 2
Let F be a \((t_{ow}, \epsilon _{ow})\)-preimage-resistant hash function, G be a \((t_{\ell \text {-}ncr}\), \(\epsilon _{\ell \text {-}ncr})\)-\(\ell \)-near-collision-resistant hash function, H be a \((t_{cr},\epsilon _{cr})\)-collision-resistant hash function, \(k_{F},k_{G},k_H\) be the number of times \(F(\cdot ),G(\cdot ),H(\cdot )\) are evaluated, respectively, d be the smallest Hamming distance between the forged message digest and the other queried message digests, and \(t_{gen}, t_{sign}, t_{ver}\) be the time it takes to generate keys, sign the message, and verify the signature, respectively. With \(1\le k_F \le n\), \(0 \le k_H \le h+1\), and \(k_G = 1\), the flexible Merkle signature construction (\(\varSigma _{fms}\)) from the flexible Lamport-Diffie one-time signature scheme is \((t_{fms},\epsilon _{fms}, 2^h)\)-EUF-CMA, where
The proof of Theorem 2 is deferred to Appendix A. A more detailed version of the proof is included in the extended version [20].
5.3 Other Signature Schemes
Over the last few years, several optimized versions of the Merkle tree signature and one-time signature schemes have been proposed. These include XMSS [6] and SPHINCS [4] for the tree signatures, and HORS [23], BiBa [25], HORST [4], and Winternitz [22] for one-time signatures. While the security analysis for each scheme may vary, we can use the same technique described above to transform those schemes into signature schemes with a flexible verification. In this work, we choose to use Lamport-Diffie one-time signatures in our construction for two reasons. First, the number of hash evaluations in Lamport-Diffie signature verification is fixed for constant-size messages, which gives better and more precise security proofs. Second, the Lamport-Diffie one-time signature has better performance in terms of running time. Thus, according to our experiments and analysis, the Lamport-Diffie one-time signature scheme combined with the Merkle tree provides better speed and more concrete security proofs.
We also investigate number-theoretic signature schemes and observe that a similar verification technique can be applied to the Fiat-Shamir signature scheme [12], as its signature is partitioned into different verifiable sets. However, compared to hash function evaluations, the computation of modular exponentiations is significantly more expensive and thus may not be suitable for flexible-security application environments. On the other hand, lattice-based signature schemes such as GPV signatures [15] can be an interesting candidate for a flexible signature construction. For GPV signatures, a public key is a matrix output by a trapdoor sampling algorithm, and a signature is output by a preimage sampling algorithm. The signature verification is performed using a matrix-vector multiplication. The same randomized verification technique seems applicable here on different rows of the matrix. In the future, we plan to explore a flexible version of GPV signatures.
6 Evaluation, Performance Analysis, and Discussion
In this section, we evaluate the performance and the security level of the flexible Lamport-Diffie one-time signature and the flexible Merkle signature schemes. For both schemes, the validity value \(\alpha \) reflects the number of computations performed (i.e., \(k_H, k_F\)) during verification. Based on the value \(\alpha \), the verifier determines the security level achieved by the (interrupted) verification instance.
6.1 Security Level of Flexible Lamport-Diffie One-Time Signature
The security level of a flexible Lamport-Diffie signature depends on the actual Hamming distance between the two message digests output by the adversary, and the adversary can increase its advantage by spending more time to find a near-collision pair. However, it is unclear how to precisely measure the exact Hamming distance between those two digests. Therefore, we make two assumptions in order to estimate the value of \(\varDelta (G(m), G(m^*))\). First, using the generic attack for finding near-collision pairs [18], we assume that an adversary \(\mathcal A\) mounting a generic birthday attack can always output a pair \((m,m^*)\) such that \(\varDelta (G(m), G(m^*))\le \ell \) after spending time \(t_{\ell \text {-}ncr}=2^{n/2}/\sqrt{\sum _{i=0}^{\ell } {\left( {\begin{array}{c}n\\ i\end{array}}\right) }}\). Second, for a fixed value \(\ell \), if the adversary finds a pair \((m,m^*)\) such that \(\varDelta (G(m), G(m^*)) \le \ell \), we let \(d = \varDelta (G(m), G(m^*))\) equal the expected value of \(\varDelta (G(m), G(m^*))\). The intuition behind the second assumption is that as the Hamming distance d decreases by 1, the probability that \(\varDelta (G(m), G(m^*)) = d\) decreases roughly by a factor of n; therefore, the actual value of d should be closer to \(\ell \) than to 0.
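The generic birthday-attack cost for an \(\ell \)-near-collision pair can be evaluated numerically; the following sketch (our own illustration, with the formula taken directly from the assumption above) shows how relaxing \(\ell \) reduces the attack time:

```python
import math

def near_collision_time(n, l):
    """Generic birthday-attack cost t = 2^(n/2) / sqrt(sum_{i=0}^{l} C(n, i))
    for finding an l-near-collision pair of n-bit digests (cf. [18])."""
    ball = sum(math.comb(n, i) for i in range(l + 1))
    return 2 ** (n / 2) / math.sqrt(ball)

# l = 0 recovers the plain collision cost 2^(n/2); larger l is cheaper
print(math.log2(near_collision_time(256, 0)))  # 128.0
```

For n = 256 and \(\ell = 0\) this is the usual \(2^{128}\) collision cost, and the cost drops as the adversary accepts a larger Hamming distance.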
We define the set \(B_{\ell }(G(m))=\{x \mid x\in \{0,1\}^n \wedge \varDelta (x, G(m)) \le \ell \}\). If G(m) and \(G(m^*)\) are an \(\ell \)-near-collision pair, then \(G(m^*) \in B_{\ell }(G(m))\). If \(G(\cdot )\) behaves as a uniformly random function, then given \(\ell \), the expected value of \(\varDelta (G(m), G(m^*))\) is: \(\mathrm {E}[d] = \sum _{d=0}^{\ell } d\cdot {\left( {\begin{array}{c}n\\ d\end{array}}\right) } \big / \sum _{i=0}^{\ell } {\left( {\begin{array}{c}n\\ i\end{array}}\right) }\) (Eq. 1).
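Under the uniform-digest assumption above, the conditional expectation can be computed directly (a small numeric sketch of our own, weighting each distance d by the sphere size \(\binom{n}{d}\)):

```python
import math

def expected_distance(n, l):
    """E[ d | d <= l ] when the second digest is uniform over the Hamming
    ball B_l(G(m)): the sphere of radius d contributes C(n, d) points."""
    ball = sum(math.comb(n, i) for i in range(l + 1))
    return sum(d * math.comb(n, d) for d in range(l + 1)) / ball

# the expectation concentrates near l, matching the intuition above
print(expected_distance(256, 32))
```

For n = 256 and \(\ell = 32\) the expectation is close to 32, supporting the second assumption that d should be closer to \(\ell \) than to 0.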
For the case of the Lamport-Diffie one-time signature, we have \(t_{gen}=2n\) and \(t_{sign}=t_{ver}=n\). Combining Theorem 1 and Eq. 1, we have:
Finally, the adversary’s advantage varies depending on the value of \(\ell \). Therefore, for a fixed value \(k_F\), we compute the adversarial advantage for all values \(\ell \le n-k_F\) and output the minimum value of \(\log _{2}{\big ({t_{fots}}/{\epsilon _{fots}}\big )}\) as the security level of the scheme.
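This minimization can be sketched numerically. The sketch below is our simplified approximation, not the exact bound of Theorem 1: it drops the additive \(t_{gen}, t_{sign}, t_{ver}\) terms and the preimage branch, and approximates \(\epsilon _{fots}\) by the miss probability \(\binom{n-d}{k_F}/\binom{n}{k_F}\) with d the rounded expected distance:

```python
import math

def security_level(n, k_F):
    """Approximate security level of flexible Lamport-Diffie verification
    after k_F of n checks: minimize log2(t / eps) over l <= n - k_F, with
    t the near-collision cost and eps the probability that none of the
    (expected) d differing positions is among the k_F checked ones.
    Simplified sketch: additive t_gen/t_sign/t_ver terms and the preimage
    branch of Theorem 1 are ignored."""
    best = float("inf")
    for l in range(n - k_F + 1):
        ball = sum(math.comb(n, i) for i in range(l + 1))
        t = 2 ** (n / 2) / math.sqrt(ball)          # near-collision cost
        d = round(sum(x * math.comb(n, x) for x in range(l + 1)) / ball)
        eps = math.comb(n - d, k_F) / math.comb(n, k_F)  # miss probability
        best = min(best, math.log2(t / eps))
    return best

print(security_level(256, 256))  # 128.0: full verification
print(security_level(256, 128))  # partial verification, still ~90+ bits
```

With all 256 positions checked the sketch reproduces the 128-bit level; with half the checks it stays in the low-to-mid 90s, within a few bits of the 92-bit figure reported below.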
Figure 5 gives the tradeoff between the number of computations and the security level of the flexible Lamport-Diffie scheme. Compared to the original Lamport-Diffie scheme, our construction offers a reasonable security level despite a smaller number of computations. For example, while a complete verification requires 256 evaluations of \(F(\cdot )\) to achieve a 128-bit security level, with only 128 evaluations of \(F(\cdot )\), the scheme still offers around a 92-bit security level.
6.2 Security Level of Flexible Merkle Tree Signature
For the Merkle tree signature scheme, using the results from [9, 28], we have \(t_{gen} = 2^h\cdot 2n + 2^{h+1}-1\), \(t_{ver}=n+h+1\), and \(t_{sign}=(h+1)\cdot n\). There are two cases for the Merkle tree signature: (1) the authenticity check is complete, \(k_H=h+1\), and (2) the authenticity check is incomplete, \(k_H < h+1\).
When \(k_H < h+1\), the adversary’s probability of winning is non-negligible, and the time it needs to spend on the attack is constant; therefore, when the authenticity check is incomplete, we simply let \(t_{fms}=1\) and \(\epsilon _{fms}=1-k_H/(h+1)\). When the authenticity verification is complete, \(k_H = h+1\), using the equation described in Theorem 2, we obtain the following parameters for the flexible Merkle tree scheme:
Using those formulas, we compute the security level of the flexible Merkle signature as \(\log _2(t_{fms}/\epsilon _{fms})\). Figure 6 shows the tradeoff between the security level and the number of computations of the flexible Merkle tree signature with \(h=20\). Notice that, for a small number of computations, the security level of the Merkle tree construction does not increase. The reason is that if the authenticity of the public key is not completely checked, the probability that the adversary wins the forgery experiment is always the fraction of the authentication path left unverified, i.e., \(1-k_H/(h+1)\), and the forging time remains constant. Moreover, for a tree of height h, there are \(2^h\) instances of the flexible Lamport-Diffie one-time signature. Therefore, if \(F(\cdot )\) is evaluated only a small number of times, the cost of finding an \(\ell \)-near-collision pair (for \(\ell \le n-k_F\)) is cheap, and the probability that such a pair passes the one-time verification step in one of the \(2^h\) instances is high. This leads to an undesirable security level during the first few computations.
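The piecewise behavior described above can be sketched numerically. In this simplified sketch of ours, `complete_level` stands in for the full Theorem 2 bound (which also depends on the one-time component):

```python
import math

def merkle_flex_level(h, k_H, complete_level):
    """Security level of the flexible Merkle signature as sketched above:
    while the authentication path is only partially checked (k_H < h+1),
    t_fms = 1 and eps_fms = 1 - k_H/(h+1); once the path check completes,
    the Theorem 2 bound applies (passed in here as `complete_level`)."""
    if k_H < h + 1:
        return math.log2(1.0 / (1.0 - k_H / (h + 1)))
    return complete_level

# for h = 20 the level stays below ~4.4 bits until the path completes
print(merkle_flex_level(20, 10, 80.0))
print(merkle_flex_level(20, 21, 80.0))
```

This illustrates why Fig. 6 is flat at first: even with 20 of 21 path levels checked, the level is only \(\log_2 21 \approx 4.4\) bits, and it jumps once the path is fully authenticated.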
6.3 Implementation and Performance
We have implemented prototypes of our proposed constructions in C, using the \(\mathsf {SHA}\text {-}256\) implementation of OpenSSL. We evaluated their performance on a Raspberry Pi 3 Model B equipped with 1 GB of RAM.
Table 1 gives the performance and security levels of the flexible verification algorithms of both schemes compared to other standard signature schemes (i.e., RSA, DSA, ECDSA, and EdDSA), based on the percentage of computations \(p = 20\%, 40\%, 60\%,\) \(80\%,\) and \( 100\%\) for messages of size 256 bits. For the other signature schemes, we obtained their performance using the OpenSSL library. More specifically, for the elliptic-curve schemes, we used two standard curves: \(\mathsf {Ed25519}\) and \(\mathsf {nistp256}\). For the RSA signature scheme, we used the smallest recommended public exponent, \(2^{16}+1\), for the verification algorithm. For the security levels of the other signature schemes, we use the information from [1, 6]. As shown in Table 1, the performance of both flexible signature schemes is comparable to other standard schemes in terms of verification running time. More importantly, both constructions offer an increasing security level at each step of the algorithm, while the other signature schemes can only provide such information at the end of the verification algorithm; Table 1 demonstrates this in the form of (timing, security level) pairs. Also, notice that as the number of verification computations increases, the Lamport-Diffie OTS gives a higher security level than the approach of signing a shorter hash digest, which offers a security level equal to only half the digest length. The main reason is that the verification algorithm verifies the signature at random locations: while the adversary may learn the number of computations performed, it does not know which indices of the signature get verified. Thus, the adversary has to decide in advance how close the two digests should be to maximize its advantage. For the case of Merkle tree signatures, we do not see a large improvement in the verification time despite the smaller number of computations.
This is because the computations of \(H(\mathsf {PK_{fots}})\) and G(m) can be expensive owing to the use of the Merkle-Damgård transformation in the SHA-2 hash family: those computations require more calls to the compression function as the input size grows. Nevertheless, for real-time environments, we expect messages to be small.
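For concreteness, the randomized partial verification underlying these measurements can be sketched as follows. This is a hypothetical toy implementation over SHA-256 with a made-up key layout, not our measured C code:

```python
import hashlib
import random

def flex_verify(pk, sig, digest_bits, budget):
    """Toy flexible Lamport-Diffie verification: check signature values at
    uniformly random digest positions until the budget of F-evaluations
    (here F = SHA-256) is exhausted, then report validity alpha = k_F / n.
    pk[i] = (F(sk_i0), F(sk_i1)); sig[i] must be the preimage selected by
    digest bit i. Returns 0.0 as soon as any checked position fails."""
    n = len(digest_bits)
    order = random.sample(range(n), n)   # verifier's secret check order
    checked = 0
    for i in order[:budget]:
        if hashlib.sha256(sig[i]).digest() != pk[i][digest_bits[i]]:
            return 0.0                   # invalid: a checked position failed
        checked += 1
    return checked / n                   # validity level alpha in [0, 1]
```

An interrupt simply truncates `budget`; the verifier then reports \(\alpha = k_F/n\) together with the corresponding security level from Fig. 5. Because the check order is random and secret, the adversary cannot predict which positions escape verification.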
7 Conclusion
In this paper, we defined the concept of a signature scheme with a flexible verification algorithm. We presented two concrete constructions based on the Lamport-Diffie one-time signature scheme and the Merkle signature scheme and formally proved their security. We also implemented prototypes of our proposed constructions and showed that their running-time performance is comparable to other signature schemes in a resource-constrained environment. More importantly, compared to standard signature schemes with deterministic verification, our schemes allow the verifier to impose constraints on the verification algorithm spontaneously and still obtain a quantified security level. Our proposed signature scheme is one of the few cryptographic primitives that offer a tradeoff between security and resources, and it can be highly useful for cryptographic mechanisms in unpredictably resource-constrained environments such as real-time systems.
In the long run, significant research will be required in this challenging flexible-security area. We plan to explore similar ideas for confidentiality in (symmetric or asymmetric) encryption, integrity with MACs, and possibly beyond. We believe these cryptographic primitives will make security mechanisms more prevalent in real-time systems.
Notes
 1.
\(\alpha =0\) means that no operations are performed in the verification algorithm.
 2.
The higher validity implies a higher interruption position. Hence, the best strategy for the adversary is to use the initial position defined by the challenger.
References
Barker, E.: Recommendation for key management, part 1: General. https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-57pt1r4.pdf
Bellare, M., Garay, J.A., Rabin, T.: Fast batch verification for modular exponentiation and digital signatures. In: Nyberg, K. (ed.) EUROCRYPT 1998. LNCS, vol. 1403, pp. 236–250. Springer, Heidelberg (1998). https://doi.org/10.1007/BFb0054130
Bellare, M., Goldreich, O., Goldwasser, S.: Incremental cryptography: the case of hashing and signing. In: Desmedt, Y.G. (ed.) CRYPTO 1994. LNCS, vol. 839, pp. 216–233. Springer, Heidelberg (1994). https://doi.org/10.1007/3-540-48658-5_22
Bernstein, D.J., et al.: SPHINCS: practical stateless hash-based signatures. In: Oswald, E., Fischlin, M. (eds.) EUROCRYPT 2015. LNCS, vol. 9056, pp. 368–397. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-46800-5_15
Boneh, D., Lynn, B., Shacham, H.: Short signatures from the Weil pairing. J. Cryptol. 17(4), 297–319 (2004)
Buchmann, J., Dahmen, E., Hülsing, A.: XMSS: a practical forward secure signature scheme based on minimal security assumptions. In: Yang, B.Y. (ed.) PQCrypto 2011. LNCS, vol. 7071, pp. 117–129. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-25405-5_8
Buchmann, J., Dahmen, E., Szydlo, M.: Hash-based digital signature schemes. In: Bernstein, D.J., Buchmann, J., Dahmen, E. (eds.) Post-Quantum Cryptography. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-540-88702-7_3
Camenisch, J., Hohenberger, S., Pedersen, M.Ø.: Batch verification of short signatures. In: Naor, M. (ed.) EUROCRYPT 2007. LNCS, vol. 4515, pp. 246–263. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-72540-4_14
Dahmen, E., Okeya, K., Takagi, T., Vuillaume, C.: Digital signatures out of second-preimage resistant hash functions. In: Buchmann, J., Ding, J. (eds.) PQCrypto 2008. LNCS, vol. 5299, pp. 109–123. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-88403-3_8
Denning, T., Kohno, T., Levy, H.M.: Computer security and the modern home. Commun. ACM 1, 94–103 (2013)
Fan, X., Garay, J., Mohassel, P.: Short and adjustable signatures. Cryptology ePrint Archive, Report 2016/549 (2016)
Fiat, A., Shamir, A.: How to prove yourself: practical solutions to identification and signature problems. In: Odlyzko, A.M. (ed.) CRYPTO 1986. LNCS, vol. 263, pp. 186–194. Springer, Heidelberg (1987). https://doi.org/10.1007/3-540-47721-7_12
Fischlin, M.: Progressive verification: the case of message authentication. In: Johansson, T., Maitra, S. (eds.) INDOCRYPT 2003. LNCS, vol. 2904, pp. 416–429. Springer, Heidelberg (2003). https://doi.org/10.1007/978-3-540-24582-7_31
Freitag, C., et al.: Signature schemes with randomized verification. In: Gollmann, D., Miyaji, A., Kikuchi, H. (eds.) ACNS 2017. LNCS, vol. 10355, pp. 373–389. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-61204-1_19
Gentry, C., Peikert, C., Vaikuntanathan, V.: Trapdoors for hard lattices and new cryptographic constructions. In: STOC 2008, pp. 197–206 (2008)
Katz, J., Lindell, Y.: Introduction to Modern Cryptography, chap. 12, pp. 442–443 (2007)
Koscher, K., et al.: Experimental security analysis of a modern automobile. In: IEEE S&P 2010, pp. 447–462 (2010)
Lamberger, M., Teufl, E.: Memoryless near-collisions, revisited. CoRR (2012)
Lamport, L.: Constructing digital signatures from a one-way function. Technical report CSL-98, SRI International (1979)
Le, D.V., Kelkar, M., Kate, A.: Flexible signatures: towards making authentication suitable for real-time environments. Cryptology ePrint Archive, Report 2018/343 (2018)
Menezes, A.J., van Oorschot, P.C., Vanstone, S.A.: Handbook of Applied Cryptography, 1st edn. CRC Press, Inc., Boca Raton (1996)
Merkle, R.C.: A certified digital signature. In: Brassard, G. (ed.) CRYPTO 1989. LNCS, vol. 435, pp. 218–238. Springer, New York (1990). https://doi.org/10.1007/0-387-34805-0_21
Perrig, A.: The BiBa one-time signature and broadcast authentication protocol. In: CCS 2001, pp. 28–37 (2001)
Petit, J., Stottelaar, B., Feiri, M., Kargl, F.: Remote attacks on automated vehicles sensors: experiments on camera and LiDAR. In: Black Hat Europe, November 2015
Reyzin, L., Reyzin, N.: Better than BiBa: short one-time signatures with fast signing and verifying. In: Batten, L., Seberry, J. (eds.) ACISP 2002. LNCS, vol. 2384, pp. 144–153. Springer, Heidelberg (2002). https://doi.org/10.1007/3-540-45450-0_11
Rogaway, P., Shrimpton, T.: Cryptographic hash-function basics: definitions, implications, and separations for preimage resistance, second-preimage resistance, and collision resistance. In: Roy, B., Meier, W. (eds.) FSE 2004. LNCS, vol. 3017, pp. 371–388. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-25937-4_24
Sadeghi, A.R., Wachsmann, C., Waidner, M.: Security and privacy challenges in industrial internet of things. In: DAC 2015, pp. 1–6 (2015)
Szydlo, M.: Merkle tree traversal in log space and time. In: Cachin, C., Camenisch, J.L. (eds.) EUROCRYPT 2004. LNCS, vol. 3027, pp. 541–554. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-24676-3_32
Yu, T., Sekar, V., Seshan, S., Agarwal, Y., Xu, C.: Handling a trillion (unfixable) flaws on a billion devices: rethinking network security for the internet-of-things. In: HotNets XIV, pp. 5:1–5:7 (2015)
Acknowledgment
We thank Mikhail Atallah, Dominique Schröder, and the anonymous reviewers for encouraging discussions and suggestions.
A Proofs
In this section, we provide the formal proofs of the two stated theorems.
Proof of Theorem 1. Let m be the message asked by \(\mathcal A\) during the experiment \(\mathsf {FlexExp}_{\varSigma , \mathcal A}(1^n,\alpha )\), and let \((m^*, \sigma ^*)\) be the forgery pair. We define the distance \(d = \varDelta (G(m), G(m^*))\). We notice that for a pair \((m,m^*)\) output by the adversary during the forgery experiment, if \(\varDelta (G(m), G(m^*)) > n-k_F\), then by the pigeonhole principle, at least one of the differing positions will be checked. Therefore, in order to maximize its success probability, the adversary has to choose \(\ell \le n-k_F\) and find an \(\ell \)-near-collision pair, i.e., a pair where the Hamming distance of G(m) and \(G(m^*)\) is at most \(\ell \). In order to output such a near-collision pair, \(\mathcal A\) requires time at least \(t=t_{\ell \text {-}ncr}=2^{n/2}/\sqrt{\sum _{i=0}^{\ell } {\left( {\begin{array}{c}n\\ i\end{array}}\right) }}\). On the other hand, \(\mathcal A\) may win the forgery experiment by spending \(t_{ow}\) to break the underlying preimage-resistant hash function. Thus, subtracting the running times of the key generation, signing, and verification algorithms, we have: \(t_{fots} = \min \{t_{ow}, t_{\ell \text {-}ncr}\} - t_{sign} - t_{gen} - t_{ver} \text { where } 0\le \ell \le n-k_F\). For the success probability, we let \(\mathsf {Miss}\) be the event that no differing bit gets verified. Since d is the Hamming distance between the two message digests, either none of those differing positions were checked, or some of those positions passed the check (i.e., a preimage was found). Thus, we rewrite \(\mathcal A\)’s advantage for the forging experiment as follows: \(\Pr [\mathsf {FlexExp}_{\mathcal {A},\varSigma }(1^n,\alpha )=1] \le \Pr [\mathsf {Miss}] + \Pr [\mathsf {FlexExp}_{\mathcal {A},\varSigma }(1^n,\alpha )=1\wedge \overline{\mathsf {Miss}}]\).
The event \((\mathsf {FlexExp}_{\mathcal {A},\varSigma }(1^n,\alpha )=1 \wedge \overline{\mathsf {Miss}})\ \) implies that \(\mathcal A\) wins the forgery experiment by providing a preimage of \(F(\cdot )\). Therefore, we can use \(\mathcal A\) to construct a preimage finder \(\mathcal {B}\). The reduction is presented in [7]. One can show:
Finally, \(\Pr [\mathsf {Miss}]\) covers the case where the adversary wins the forging experiment because the challenger does not perform verification on any of the differing bits. Since d is the number of differing bits between the two digests, the probability that the challenger does not check any of those positions is: \(\Pr [\mathsf {Miss}] = {\left( {\begin{array}{c}n-d\\ k_F\end{array}}\right) } \big / {\left( {\begin{array}{c}n\\ k_F\end{array}}\right) }\).
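Under the natural reading that the verifier samples \(k_F\) of the n positions without replacement, this miss probability is hypergeometric and can be checked numerically (our own illustrative sketch):

```python
import math

def miss_probability(n, d, k_F):
    """Pr[Miss]: probability that none of the d differing digest positions
    falls among the k_F positions the verifier samples without replacement
    from the n positions (hypergeometric tail)."""
    if d > n - k_F:
        return 0.0  # pigeonhole: some differing position is always checked
    return math.comb(n - d, k_F) / math.comb(n, k_F)

# each differing bit roughly halves the miss chance when k_F = n/2
print(miss_probability(256, 4, 128))
```

For d = 1 and \(k_F = n/2\) the probability is exactly 1/2, and it decays roughly as \(2^{-d}\), which is why a larger Hamming distance d sharply reduces the adversary's chance of escaping detection.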
From Eqs. (2) and (3), we have:
which completes the proof. \(\blacksquare \)
Proof of Theorem 2. Intuitively, if adversary \(\mathcal {A}\) provides an invalid one-time public key, the verification must fail for at least one level of the tree; otherwise, \(\mathcal {A}\) has successfully found a collision of H. However, in our scheme, since not every level of the tree may be verified, there is a possibility that the forged level is not checked. We formalize the intuition as follows: we let \(\mathsf {InvalidOPK}\) be the event that \(\mathcal A\) provides an invalid one-time public key. Consider the Merkle tree construction based on the one-time signature construction.
The event \(\mathsf {FlexExp}_{\mathcal {A},\varSigma }(1^n,\alpha )=1 \wedge \mathsf {InvalidOPK}\) implies that \(\mathcal A\) provided an invalid one-time public key but won the forgery experiment. Thus, either the verifier failed to check a “bad” level of the tree or \(\mathcal A\) found a collision of \(H(\cdot )\). For a tree of height h, there are \(h+1\) levels that one needs to verify for the complete authentication. Since \(k_{H}\) is the number of times \(H(\cdot )\) is evaluated, using a union bound, we have:
If \(\mathcal A\) found a collision of \(H(\cdot )\), then we can construct a collision finder [7].
The event \(\mathsf {FlexExp}_{\mathcal {A},\varSigma }(1^n,\alpha )=1 \wedge \overline{\mathsf {InvalidOPK}}\) implies that \(\mathcal A\) won the flexible forgery experiment for the one-time signature scheme. Since we defined \(k_F\) to be the number of times \(F(\cdot )\) is evaluated, the underlying flexible one-time signature scheme is \((t_{fots},\epsilon _{fots},1)\)-secure. Therefore, using Theorem 1, we get:
Since there are \(2^h\) instances of the flexible Lamport-Diffie one-time signature, for \(0 \le d \le \ell \le n-k_F\), \(\mathcal A\) wins the forgery game with probability:
From Eqs. (4), (5) and (6), for \(0 \le d \le \ell \le n-k_F\), we have:
When \(k_H < h+1\), we simply let \(t_{fms} = \mathcal O(1)\) because \(\mathcal A\) will win the forgery experiment with probability \(1-k_H/(h+1)\). When \(k_H = h+1\), we have:
and using [7, Theorem 5], we have \(t_{fms} = \min \{t_{cr}, t_{fots}\} - 2^h\cdot t_{sign} - t_{ver} - t_{gen}\). Now, using Theorem 1, we get: \(t_{fms} = \min \{t_{ow}, t_{\ell \text {-}ncr},t_{cr}\} - 2^h\cdot t_{sign} - t_{ver} - t_{gen} \text { where } 0\le \ell \le n-k_F\).
This completes the proof. \(\blacksquare \)
Copyright information
© 2019 Springer Nature Switzerland AG
Le, D.V., Kelkar, M., Kate, A. (2019). Flexible Signatures: Making Authentication Suitable for Real-Time Environments. In: Sako, K., Schneider, S., Ryan, P. (eds.) Computer Security: ESORICS 2019. Lecture Notes in Computer Science, vol. 11735. Springer, Cham. https://doi.org/10.1007/978-3-030-29959-0_9