Abstract
Aggregate signature schemes allow for the creation of a short aggregate of multiple signatures. This feature leads to significant reductions in bandwidth and storage space in sensor networks, secure routing protocols, certificate chains, software authentication, and secure logging mechanisms. Unfortunately, in all prior schemes, adding a single invalid signature to a valid aggregate renders the whole aggregate invalid. Verifying such an invalid aggregate provides no information on the validity of any individual signature. Hence, adding a single faulty signature destroys the proof of integrity and authenticity for a possibly large amount of data. This is a serious drawback in a range of scenarios, e.g. secure logging, where a single tampered log entry would render the aggregate signature of all log entries invalid.
In this paper, we introduce the notion of fault-tolerant aggregate signature schemes. In such a scheme, the verification algorithm is able to determine the subset of all messages belonging to an aggregate that were signed correctly, provided that the number of aggregated faulty signatures does not exceed a certain bound.
We give a generic construction of fault-tolerant aggregate signatures from ordinary aggregate signatures based on cover-free families. A signature in our scheme is a small vector of aggregated signatures of the underlying scheme. Our scheme is bounded, i.e. the number of signatures that can be aggregated into one signature must be fixed in advance. However, the length of an aggregate signature is logarithmic in this number. We also present an unbounded construction, where the size of the aggregate signature grows linearly in the number of aggregated messages, but the factor in this linear function can be made arbitrarily small.
The additional information encoded in our signatures can also be used to speed up verification (compared to ordinary aggregate signatures) in cases where one is only interested in verifying the validity of a single message in an aggregate, a feature beyond fault-tolerance that might be of independent interest. For concreteness, we give an instantiation using a suitable cover-free family.
Keywords
 Aggregate signatures
 Fault-tolerance
 Cover-free family
This work was supported by the German Federal Ministry of Education and Research within the framework of the project KASTEL_IoE in the Competence Center for Applied Security Technology (KASTEL).
1 Introduction
Aggregate signature schemes allow anyone to aggregate multiple signatures by different signers into a single combined signature, which is considerably smaller than the combined size of the individual signatures. This type of digital signature scheme was first proposed and instantiated by Boneh, Gentry, Lynn, and Shacham [Bon+03], and has since evolved into a diverse and active research area.
Applications of Aggregate Signatures. The main motivation for aggregate signature schemes is to save bandwidth and storage space, and their applications are accordingly manifold [AGH10].
A well-known field of application is sensor networks, which consist of many small sensors that each measure some aspect of their physical environment and send their findings to a central base station. Digital signatures ensure the integrity and authenticity of the measurements in transit from the sensors to the base station. With a conventional digital signature scheme, the verifying base station would need to receive each signature separately, which is bandwidth-intensive. If the signatures are aggregated beforehand using an aggregate signature scheme, however, the bandwidth consumption on the side of the base station is reduced drastically. Moreover, verifying an aggregate signature is typically considerably faster than verifying all individual signatures.
Another application is secure logging. Log files are used to record events like user actions, system errors, and failed login attempts, as well as general information, and play an important role in computer security by providing, for example, accountability and a basis for intrusion detection. Log files are usually kept for very long periods of time, which means that thousands or even millions of log entries need to be stored. Digital signatures are used to ensure the integrity of the log data. With an aggregate signature scheme, it is sufficient to store a single aggregate signature over all log entries, instead of an individual signature per log entry as with an ordinary digital signature scheme. Whenever a new log entry is added to the log file, one simply computes a signature for the new entry and aggregates it into the existing aggregate signature.
Aggregate signatures can also be useful for authenticating software. To ensure the validity of software libraries and programs, it has become common to sign their code and/or compiled binaries. Mobile operating systems often only allow signed programs to be executed. Again, it is advantageous to use an aggregate signature to save download bandwidth and verification overhead upon execution, e.g. if all programs are verified at boot time. As in the logging scenario, aggregate signatures allow new applications to be installed without having to store the individual signatures of all installed programs.
Problem Statement. In all known aggregate signature schemes, an aggregate signature is invalid (i.e., verification fails) if just one invalid message–signature pair is contained in the aggregate. Note that either this pair was already invalid (i.e., the individual signature was not valid for this particular message) when the aggregate was created, or a “wrong” message is included for verification. In either case, the verification algorithm gives no information about which message–signature pair caused the failure, or whether the other message–signature pairs are valid. This essentially renders the aggregate useless once an invalid signature is added, even though the majority of the messages may have been signed correctly.
For sensor networks, this means that the measurements of all sensors are lost even if only a single sensor sends an invalid signature, for example because of computation glitches or transmission errors. It is usually infeasible for computationally weak sensors to check the validity of their signatures before sending them, since many signature schemes use expensive operations like pairings for verification. An aggregator could check the validity of the individual signatures before aggregation, but this would negate one of the main advantages of the aggregate signature scheme.
The same problem occurs in the logging scenario: if one log entry is not correctly signed, is tampered with, or is lost (for example through hard disk errors or crashes), the signature for the whole log file becomes invalid. In [MT09], Ma and Tsudik state that this is one of the reasons why they still need to store an individual signature for every log entry, although they use an aggregate signature for the complete log. This is undesirable, since one of the motivations for using aggregate signature schemes for logging is to save storage space.
This problem also affects the software authentication scenario. If a new program is installed and its signature is invalid, or the code of an already installed program gets changed (through hard disk problems, etc.), the whole aggregate signature used for authenticating the software becomes invalid. In the worst case, this would mean that no program can be executed anymore, because the operating system may block every unauthenticated program.
Contribution. To solve the problems mentioned above, we introduce the concept of fault-tolerant aggregate signature schemes, which can tolerate a certain number of invalid (or faulty) signatures during aggregation. In such a scheme, the verification algorithm does not output a boolean verdict like “valid” or “invalid”, but instead outputs the list of validly signed messages, omitting all messages whose signatures are invalid.
Note that in contrast to ordinary aggregate signatures, fault-tolerant aggregate signatures cannot offer an aggregate signature size that is independent of the number of individual signatures to be aggregated. In other words, we cannot hope to aggregate an unlimited number of individual signatures into a constant-size aggregate. This follows easily from an information-theoretic argument: Assume we fix the size of an aggregate signature to l bits. This l-bit string then needs to be used by the verification algorithm (as its only “source of information”) to determine which of its input messages are valid. Hence, based on the l-bit string, the algorithm can distinguish at most \(2^l\) different outputs. However, considering n messages and corresponding individual signatures, d of which are invalid, there are \(\binom{n}{n-d}\) possible subsets of valid messages (and thus outputs) which must be distinguishable by the verification algorithm based on this string. Hence n is bounded by the requirement \(\binom{n}{n-d} \le 2^l\). (For a more formal argument using the notation of fault-tolerant aggregate signatures introduced later, refer to Appendix A.)
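As a concrete illustration of this counting argument, one can compute the minimum aggregate length enforced by the bound, using \(\binom{n}{n-d} = \binom{n}{d}\). The following is a small sketch, not part of the scheme itself:

```python
from math import comb, ceil, log2

def min_aggregate_bits(n, d):
    """Information-theoretic lower bound on the aggregate length l:
    the verifier must distinguish C(n, n-d) = C(n, d) possible subsets
    of valid messages, so we need 2**l >= C(n, d)."""
    if d == 0:
        return 0  # only one possible outcome: all messages valid
    return ceil(log2(comb(n, d)))
```

For instance, tolerating a single fault among six signatures already forces aggregates of at least \(\lceil \log_2 6 \rceil = 3\) bits.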
Besides a formal framework for fault-tolerant aggregate signatures, we also present a generic construction which can be used to turn any aggregate signature scheme into a fault-tolerant one. This construction makes use of cover-free families [KS64] to provide fault-tolerance and comes with a tight security reduction to the underlying signature scheme. For concreteness, we explicitly describe how to instantiate our scheme with a cover-free family based on polynomials over a finite field [KRS99], which has a compact representation. (We generalize the known family to multivariate polynomials in Appendix B.) This leads to an instantiation featuring short aggregate signatures relative to the number n of individual signatures that are aggregated (provided that the maximal number of faults the scheme should tolerate is small compared to n).
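To illustrate the polynomial-based family of [KRS99], the following sketch (assuming a prime modulus q, so that arithmetic mod q is a field) builds the incidence matrix with \(m = q^2\) rows, one per point \((x, y) \in \mathbb {F}_q \times \mathbb {F}_q\), and \(n = q^{k+1}\) columns, one per polynomial of degree at most k. Since two distinct such polynomials agree on at most k points, the family is d-cover-free whenever \(dk < q\):

```python
from itertools import product

def poly_cff(q, k):
    """Incidence matrix of the polynomial cover-free family:
    rows = points (x, y) in F_q x F_q, columns = polynomials of degree <= k,
    with a 1 wherever the point lies on the polynomial's graph.
    Assumes q prime, so that Z/qZ is a field."""
    points = list(product(range(q), repeat=2))
    cols = []
    for coeffs in product(range(q), repeat=k + 1):
        graph = {(x, sum(c * x**i for i, c in enumerate(coeffs)) % q)
                 for x in range(q)}
        cols.append([1 if p in graph else 0 for p in points])
    # transpose column list into an m x n matrix
    m, n = q * q, q**(k + 1)
    return [[cols[j][i] for j in range(n)] for i in range(m)]
```

For q = 3 and k = 1 this yields a \(9 \times 9\) matrix that is 2-cover-free, matching the bound \(d \le \lfloor (q-1)/k \rfloor \).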
As an additional feature, our construction allows the verification of an individual signature in a fashion that is more efficient (e.g., saving a number of costly pairing operations in the case of pairing-based aggregate signatures) than verifying the complete aggregate. This provides a level of flexibility to the signature scheme as demanded by certain applications such as secure logging [MT09].
As a limitation of our scheme, we must assume that aggregates contain at most a previously fixed number d of invalid individual signatures. If this bound is exceeded for some reason, the faulty signatures may affect the verifiability of other messages, as is the case for ordinary aggregate signatures. This is analogous to error-correcting codes (which are related to cover-free families), where only a bounded number of errors can be located.
Basic Idea of Our Construction. To give a first glimpse of our generic construction of a fault-tolerant aggregate signature scheme, we now informally illustrate the basic idea. Let an ordinary aggregate signature scheme (e.g., BGLS) and n individual signatures \(\sigma _1, \ldots , \sigma _n\) generated using this scheme be given. Our goal is to tolerate \(d=1\) faulty individual signature. To achieve this, our approach is to choose m subsets \(T_1, \ldots , T_m \subset \{\sigma _1, \ldots , \sigma _n\}\) of individual signatures and aggregate the signatures within each subset, thereby yielding aggregate signatures \({\tau }_1, \ldots , {\tau }_m\), such that

1. m is (significantly) smaller than n, and

2. even if one of the individual signatures is faulty and the aggregate signatures \({\tau }_i\) containing it become invalid, every other individual signature \(\sigma _j\) is aggregated into at least one valid aggregate signature \({\tau }_k\).
For example, consider the following binary \(4 \times 6\) matrix A, for which \(n = 6\) and \(m=4\):

\(A = \begin{pmatrix} 1 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 \\ 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1 \\ 0 &{} 1 &{} 1 &{} 1 &{} 0 &{} 0 \\ 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 1 \end{pmatrix}\)
A describes a solution to the problem mentioned above as follows: the 1-entries in column j indicate in which subsets \(T_i\) the individual signature \(\sigma _j\) is contained. Conversely, the 1-entries in row i indicate which \(\sigma _j\) are contained in \(T_i\). More precisely, \(T_i := \left\{ \sigma _j:a_{i,j}=1 \right\} \) and \({\tau }_i\) is the aggregate of all \(\sigma _j \in T_i\). Observe that while it is usually unnecessary for verification to know the order in which claims were aggregated, in our fault-tolerant scheme, specifying the correct order is indispensable.
For the matrix above, an aggregate signature \({\tau }= ({\tau }_1, {\tau }_2, {\tau }_3, {\tau }_4)\) for individual signatures \(\sigma _1, \dots , \sigma _6\) would be formed in the following manner:

\({\tau }_1 = \mathsf {Agg}(\sigma _1, \sigma _2, \sigma _5), \quad {\tau }_2 = \mathsf {Agg}(\sigma _1, \sigma _4, \sigma _6), \quad {\tau }_3 = \mathsf {Agg}(\sigma _2, \sigma _3, \sigma _4), \quad {\tau }_4 = \mathsf {Agg}(\sigma _3, \sigma _5, \sigma _6),\)

where \(\mathsf {Agg}\) informally denotes the aggregation function of the underlying aggregate signature scheme.
Let us assume that only one signature \(\sigma _j\) is faulty. Then all \({\tau }_i\) with \(a_{i,j} = 1\) are invalid. However, because every other \(\sigma _k\) is also aggregated into at least one other \({\tau }_i\), we can still derive the validity of \(\sigma _k\).
For a concrete example, suppose \(\sigma _1\) is faulty. Then \({\tau }_1\) and \({\tau }_2\) will be invalid, whereas \({\tau }_3\) and \({\tau }_4\) are valid. We see that \(\sigma _2, \sigma _3\) and \(\sigma _4\) occur in \({\tau }_3\), and \(\sigma _5, \sigma _6\) occur in \({\tau }_4\), so we can be sure that the corresponding messages were signed correctly.
The matrix A defined above can thus tolerate one faulty signature: if just one signature is faulty, all other messages can still be verified. Unfortunately, this is no longer possible if two or more faulty signatures are aggregated. Let us assume that \(\sigma _1\) and \(\sigma _2\) are faulty. In this case, \({\tau }_1, {\tau }_2\) and \({\tau }_3\) become invalid and \({\tau }_4\) is the only valid signature. We can still derive the validity of \(\sigma _3, \sigma _5, \sigma _6\), because \({\tau }_4\) is valid. However, the validity of \(\sigma _1, \sigma _2\) and \(\sigma _4\) can no longer be verified, since they were never aggregated into \({\tau }_4\).
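The recovery logic of this example can be traced in a short Python sketch. The incidence matrix below is a 1-cover-free \(4 \times 6\) matrix consistent with the worked example (rows correspond to the \({\tau }_i\), columns to the \(\sigma _j\)); the sketch is for illustration only:

```python
# Rows = aggregates tau_1..tau_4, columns = individual signatures sigma_1..sigma_6.
A = [
    [1, 1, 0, 0, 1, 0],  # tau_1 aggregates sigma_1, sigma_2, sigma_5
    [1, 0, 0, 1, 0, 1],  # tau_2 aggregates sigma_1, sigma_4, sigma_6
    [0, 1, 1, 1, 0, 0],  # tau_3 aggregates sigma_2, sigma_3, sigma_4
    [0, 0, 1, 0, 1, 1],  # tau_4 aggregates sigma_3, sigma_5, sigma_6
]

def recoverable(A, faulty):
    """0-based indices of individual signatures whose validity can still be
    derived: a row tau_i is valid iff it contains no faulty sigma_j, and
    every sigma_j occurring in some valid row is certified."""
    valid_rows = [row for row in A if not any(row[j] for j in faulty)]
    return {j for row in valid_rows for j in range(len(row)) if row[j]}
```

With \(\sigma _1\) faulty (index 0), all of \(\sigma _2, \ldots , \sigma _6\) remain certified; with \(\sigma _1\) and \(\sigma _2\) faulty, only \(\sigma _3, \sigma _5, \sigma _6\) survive, exactly as in the discussion above.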
Note that our scheme does not support fully flexible aggregation: each column of A can only be used to hold one individual signature, as can be seen in the example above. Two aggregate signatures in which the same column is used cannot be aggregated further without losing the guarantee of fault-tolerance. However, our scheme still supports a notion of aggregation that is only slightly restricted: individual signatures can always be aggregated, while aggregate signatures can only be aggregated if no column is used in both. As long as this requirement is met, signatures can be aggregated in any order. This notion is sufficient for many use cases; we discuss this further in Sect. 3.
The construction of matrices that can tolerate \(d > 1\) faulty signatures is more intricate, but incidence matrices of d-cover-free families turn out to have the desired property. Informally speaking, in such a matrix the “superposition” \(\varvec{s}\) of up to d arbitrary column vectors \(\varvec{a_{i_1}}, \ldots , \varvec{a_{i_d}}\), i.e., the vector \(\varvec{s}\) which has a 1 at position \(\ell \) if at least one of the vectors \(\varvec{a_{i_1}}, \ldots , \varvec{a_{i_d}}\) has a 1 at this position, does not “cover” any other column vector \(\varvec{a_j}\) (\(j \not \in \{i_1, \ldots , i_d\}\)). In other words, there is at least one position \(\ell \) such that \(\varvec{a_{j}}\) has a 1 at this position while \(\varvec{s}\) has a 0. This implies that if at most d individual signatures (each belonging to one column) are invalid, then every other individual signature is contained in at least one valid aggregate signature, and the corresponding message can therefore be trusted. Hence, applying such a matrix as sketched above guarantees that any subset of up to d faulty individual signatures does not compromise the trustworthiness of any other message. There are several constructions of d-cover-free families for unbounded d and n (where these parameters need to satisfy certain conditions depending on the family) featuring \(m \ll n\) for choices of the parameters n, d with \(d \ll n\).
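For small parameters, the cover-free property itself can be verified by exhaustive search. The sketch below interprets each column as the set of its 1-positions and checks that no superposition of d columns covers any other column:

```python
from itertools import combinations

def is_d_cover_free(M, d):
    """Check whether the binary matrix M (given as a list of rows) is the
    incidence matrix of a d-cover-free family: the union ("superposition")
    of any d columns must not cover any further column."""
    n = len(M[0])
    cols = [{i for i in range(len(M)) if M[i][j]} for j in range(n)]
    for chosen in combinations(range(n), d):
        union = set().union(*(cols[j] for j in chosen))
        if any(cols[j] <= union for j in range(n) if j not in chosen):
            return False
    return True
```

Applied to the \(4 \times 6\) example matrix above, this confirms that it is 1-cover-free but not 2-cover-free, matching the two worked examples.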
Related Work. The first full aggregate signature scheme was constructed by Boneh et al. [Bon+03] in the random oracle model. Full aggregate schemes allow any user to aggregate signatures of different signers, i.e., aggregation is a public operation. Furthermore, it is possible to aggregate individual signatures as well as already aggregated signatures in any order. In [HSW13], Hohenberger, Sahai and Waters give the first construction of such a scheme in the standard model using multilinear maps. Recently, Hohenberger, Koppula, and Waters [HKW15] have constructed a “universal signature aggregator” based on indistinguishability obfuscation. A universal signature aggregator can aggregate signatures from any set of signing algorithms, even if they use different algebraic settings. In [ZS11], Zaverucha and Stinson construct an aggregate one-time signature scheme.
Since it has proven difficult to construct full aggregate schemes in the standard model, a lot of research has focused on signature schemes with some form of restricted aggregation. One major type of restricted aggregation is sequential aggregation, as proposed by Lysyanskaya et al. [Lys+04]. In these schemes, the aggregate is sequentially passed from signer to signer, and each signer can add new information to the aggregate. Multiple constructions are known, both in the random oracle model [Lys+04, Nev08, Bol+07, Ger+12] and in the standard model [Lu+06, Sch11, LLY15]. Another type of aggregation is synchronized aggregation, as proposed by Gentry and Ramzan [GR06]. Here, a special piece of synchronization information, such as the current time period, is used while signing. All signatures sharing the same synchronization information behave like signatures of a full aggregate scheme, i.e., both individual and aggregated signatures can be aggregated in any order. Again, schemes in the random oracle model [AGH10, GR06] and in the standard model [AGH10] are known. Other authors have considered aggregate signature schemes that require interaction between the signers [BN07, BJ10] or can only partially aggregate the signatures [Her06, BGR14].
Our construction is based on cover-free families, a combinatorial structure first introduced by Kautz and Singleton [KS64] in the language of coding theory. They have several applications in cryptography, for example group testing [STW97], multi-receiver authentication codes [SW99], encryption [Cra+07, Dod+02, HK04] and traitor tracing [TS06]. There are multiple constructions of signature schemes using cover-free families. Hofheinz, Jager, and Kiltz [HJK11] use cover-free families to construct an (m, 1)-programmable hash function, which they then use to construct conventional digital signature schemes from weak assumptions. Zaverucha and Stinson [ZS11] construct an aggregate one-time signature scheme using cover-free families.
Outline. Section 2 introduces notation, conventions, and preliminary definitions. Section 3 presents a general definition of fault-tolerant aggregate signature schemes and some of their properties, including the security definition. Our construction is presented and analyzed in Sect. 4. Finally, we discuss an instantiation of our scheme with a specific class of cover-free families in Sect. 5.
2 Preliminaries
Let \([n] \,{:}{=}\{1, \ldots , n\}\). The multiplicity of an element m in a multiset M is the number of occurrences of m in M. For two multisets \(M_1, M_2\), the union \(M_1 \cup M_2\) is defined as the multiset where the multiplicity of each element is the sum of the multiplicities in \(M_1\), \(M_2\).
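In Python terms, for instance, such multisets and their union are modeled by `collections.Counter`, whose `+` operator adds multiplicities (an aside for illustration only):

```python
from collections import Counter

# Multiset union as defined above: multiplicities add up.
M1 = Counter({'a': 2, 'b': 1})
M2 = Counter({'b': 1, 'c': 3})
union = M1 + M2  # the multiplicity of 'b' becomes 1 + 1 = 2
```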
If v is a vector or a tuple, \(v[i]\) refers to the ith entry of v. If M is a matrix, \({{\mathrm{rows}}}(M)\) and \({{\mathrm{cols}}}(M)\) denote the number of rows and columns of M, respectively. For \(i \in [{{\mathrm{rows}}}(M)], j \in [{{\mathrm{cols}}}(M)]\), \(M[i,j]\) is the entry in the ith row and jth column of M.
Throughout the paper, \(\kappa \in {\mathbb {N}}\) is the security parameter. We say an algorithm A is probabilistic polynomial time (PPT) if the running time of A is polynomial in \(\kappa \) and A is a probabilistic algorithm. All algorithms are implicitly given \(1^\kappa \) as input, even when not noted explicitly.
In this work, \(\sigma \) usually refers to signatures of standard aggregate signature schemes, whereas \({\tau }\) mostly refers to signatures of a fault-tolerant aggregate signature scheme.
2.1 Aggregate Signatures
Let us quickly review the definition of aggregate signature schemes and the associated security notion, as defined in [Bon+03]. An aggregate signature scheme is a tuple of four PPT algorithms:

\({{\mathrm{\mathsf {KeyGen}}}}(1^\kappa )\) creates a key pair \((\mathsf {pk}, \mathsf {sk})\).

\(\mathsf {Sign}(\mathsf {sk}, m)\) creates a signature for message m under secret key \(\mathsf {sk}\).

\(\mathsf {Agg}(C_1, C_2, \sigma _1, \sigma _2)\) takes as input two multisets of public-key and message pairs \(C_1\) and \(C_2\) and corresponding signatures \(\sigma _1\) and \(\sigma _2\), and creates an aggregate signature \(\sigma \), certifying the validity of the messages in \(C_1 \cup C_2\) under the corresponding public keys.

\(\mathsf {Verify}(C, \sigma )\) takes as input a multiset of public-key and message pairs \(C\) and an aggregate signature \(\sigma \) for \(C\), and outputs 1 if the signature is valid, and 0 otherwise.
For correctness, we require that any signature generated by the signature scheme through applications of \(\mathsf {Sign}\) and \(\mathsf {Agg}\) using key pairs of the scheme is valid, i.e. that \(\mathsf {Verify}\) outputs 1.
Security Notion for Aggregate Signatures. The security experiment for aggregate signatures consists of three phases [Bon+03]:

Setup Phase. The challenger generates a pair of keys \((\mathsf {pk}, \mathsf {sk}) :={{\mathrm{\mathsf {KeyGen}}}}(1^\kappa )\) and gives the public key \(\mathsf {pk}\) to the adversary.

Query Phase. The adversary \(\mathcal {A}\) may (adaptively) issue signature queries \(m_i\) to the challenger, who responds with \(\sigma _i :=\mathsf {Sign}(\mathsf {sk}, m_i)\).

Forgery Phase. Finally, \(\mathcal {A}\) outputs a multiset of public-key and message pairs \({C}^*\) and a signature \(\sigma ^*\).
The adversary wins the experiment iff there is a message \(m^*\) such that \(c^* = (\mathsf {pk}, m^*)\) is in \({C}^*\), \(\mathsf {Verify}({C}^*, \sigma ^*) = 1\), and \(m^*\) has never been submitted to the signature oracle.
An aggregate signature scheme \(\varSigma \) is \({(t, q, \varepsilon )}\)secure if there is no adversary \(\mathcal {A}\) running in time at most t, making at most q queries to the signature oracle and winning in the above experiment with probability at least \(\varepsilon \).
2.2 Cover-Free Families
For our construction of a fault-tolerant aggregate signature scheme in Sect. 4, we need a d-cover-free family, which allows us to detect up to d invalid individual signatures in an aggregate signature.
Definition 1
A d-cover-free family \(\mathcal {F}=(\mathcal {S}, \mathcal {B})\) (denoted by d-CFF) consists of a set \(\mathcal {S}\) of \(m\) elements and a set \(\mathcal {B}\) of n subsets of \(\mathcal {S}\), where \(d<m<n\), such that for any d subsets \(B_{i_1}, \dots , B_{i_d}\in \mathcal {B}\) and all \(B\in \mathcal {B}\setminus \{B_{i_1}, \dots , B_{i_d}\}\), it holds that

\(B \not \subseteq \bigcup _{k=1}^{d} B_{i_k}.\)
So, it is not possible to cover any single subset with the union of up to d other subsets. To obtain a more convenient representation of a d-CFF and to simplify its handling, we will use a matrix in the following way:
Definition 2
For a d-CFF \(\mathcal {F}=(\mathcal {S}, \mathcal {B})\), where the elements of \(\mathcal {S}\) and \(\mathcal {B}\) are ordered such that we can write \(\mathcal {S}=\{s_1, \dots , s_m\}\) and \(\mathcal {B}=\{B_1, \dots , B_n\}\), we define its incidence matrix \(\mathcal {M}\in \{0,1\}^{m \times n}\) by

\(\mathcal {M}[i,j] = 1 \; :\Longleftrightarrow \; s_i \in B_j.\)
The ith row of \(\mathcal {M}\) is denoted by \(\mathcal {M}_{i} \in \{0,1\}^n\), for \(i \in [m]\).
So, \(s_i\in \mathcal {S}\) corresponds to row i and \(B_j\in \mathcal {B}\) corresponds to column j, i.e. \(\mathcal {M}\) has m rows and n columns.
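For illustration, the incidence matrix of Definition 2 can be computed directly from an ordered set system (a small sketch; rows correspond to elements, columns to subsets):

```python
def incidence_matrix(S, B):
    """Incidence matrix of a set system: entry [i][j] is 1 iff s_i is in B_j,
    so the matrix has m = len(S) rows and n = len(B) columns."""
    return [[1 if s in Bj else 0 for Bj in B] for s in S]
```

For example, `incidence_matrix([1, 2, 3], [{1, 2}, {2, 3}])` yields a \(3 \times 2\) matrix whose column j marks the members of \(B_j\).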
3 Fault-Tolerant Aggregate Signatures
Claims and Claim Sequences. As a notational convenience, we introduce the concept of claims. A claim c is simply a pair \((\mathsf {pk}, m)\) of a public key and a message, conveying the meaning that the owner of \(\mathsf {pk}\) has authenticated the message m. In this sense, a signature \(\sigma \) for m that is valid under \(\mathsf {pk}\) is a proof for the claim c. This definition allows for a more compact representation of our algorithms.
The signature scheme we introduce in Sect. 4 critically requires an order among the claims. While the actual order is arbitrary, it must be maintained by the aggregation and verification algorithms. We therefore define fault-tolerant signature schemes based on sequences of claims instead of multisets.
More precisely, when an individual signature \({\tau }'\) for a claim c is first aggregated into an aggregate signature \({\tau }\), one must assign a unique “position” j to c. To verify \({\tau }\), one must call \(\mathsf {Verify}\) with a sequence of claims \({C}\) that has c at its jth position, i.e. \({C}[j] = c\). Therefore, two aggregate signatures \({\tau }_1, {\tau }_2\) for two claim sequences \({C}_1, {C}_2\) cannot be aggregated if \({C}_1[j] \ne {C}_2[j]\) for some j.
Thus, our scheme does not support fully flexible, arbitrary aggregation. However, if the signers agree in advance on the positions j of their claims, they can aggregate all their signatures into a single combined signature \({\tau }\). This prerequisite can easily be fulfilled in many applications. In wireless sensor networks, for example, one only has to configure each sensor to use a different position j. Moreover, it is always possible to use our scheme as a sequential aggregate signature scheme, since the position j of a claim only needs to be determined when it is first aggregated. Our scheme is therefore also suitable for all applications where sequential aggregate signatures suffice, such as secure logging [MT09].
For the general aggregation setting, we have to deal with “incomplete” claim sequences, i.e. claim sequences that do not yet contain a claim at some position j. We therefore assume the existence of a claim placeholder \(\bot \) that may be contained in claim sequences. When aggregating the signatures of two such incomplete claim sequences \({C}_1, {C}_2\), the claim sequences are merged, meaning that claim placeholders in \({C}_1\) are replaced by actual claims from \({C}_2\) for each position j where \({C}_1[j] = \bot \) and \({C}_2[j] \ne \bot \), and vice versa. (This merging operation replaces the multiset union used by common aggregate signature schemes.)
For technical reasons, we also require that there is no position where \({C}_1\) and \({C}_2\) both contain a claim, even if the claims are identical. As a consequence, if a signature \({\tau }\) is aggregated into two different aggregate signatures \({\tau }_1, {\tau }_2\) using the same position j, then \({\tau }_1\) and \({\tau }_2\) cannot be aggregated later. Note, however, that this does not preclude aggregating \({\tau }\) into \({\tau }_1\) and \({\tau }_2\) at different positions.
We now move to the formal definition. A claim sequence is a tuple of claims and claim placeholders \(\bot \). The multiset of elements of a claim sequence \({C}\), excluding \(\bot \), is denoted by \(\mathsf {elem}\!\left( {C}\right) \). Two claim sequences \({C}_1\), \({C}_2\) are mergeable if for all \(i \in [\min (|{C}_1|, |{C}_2|)]\) it holds that \({C}_1[i] = \bot \) or \({C}_2[i] = \bot \) or \({C}_1[i] = {C}_2[i]\). \({C}_1, {C}_2\) are called exclusively mergeable if for all such i it holds that \({C}_1[i] = \bot \) or \({C}_2[i] = \bot \). (In particular, two exclusively mergeable sequences are mergeable.) For example, for distinct claims \(c_1, c_2, c_3\), define \({C}_1 = (\bot , c_2, c_3)\), \({C}_2 = (c_1, \bot , \bot )\), \({C}_2' = (c_1, c_2, \bot )\), \({C}_2'' = (c_1, c_3, c_2)\). Then \({C}_1, {C}_2\) are exclusively mergeable, \({C}_1, {C}_2'\) are mergeable but not exclusively mergeable, and \({C}_1, {C}_2''\) are not mergeable.
Let \({C}_1\) and \({C}_2\) be two mergeable claim sequences of length k and l, respectively. Without loss of generality, assume \(k \ge l\). Then the merged claim sequence \({C}_1 \mathop {\sqcup }{C}_2\) is \((c_1, \ldots , c_k)\), where

\(c_i = {\left\{ \begin{array}{ll} {C}_2[i], &{} \text {if } i \le l \text { and } {C}_1[i] = \bot , \\ {C}_1[i], &{} \text {otherwise.} \end{array}\right. }\)
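As an illustration, the merging operation and the two mergeability notions can be sketched in Python, with `None` standing in for the placeholder \(\bot \) (a sketch, not part of the scheme):

```python
BOT = None  # stands in for the claim placeholder "bot"

def mergeable(C1, C2):
    # At every shared position, at most one claim, or two equal claims.
    return all(a is BOT or b is BOT or a == b for a, b in zip(C1, C2))

def exclusively_mergeable(C1, C2):
    # At every shared position, at most one of the two sequences has a claim.
    return all(a is BOT or b is BOT for a, b in zip(C1, C2))

def merge(C1, C2):
    """Merged claim sequence C1 | | C2: the longer sequence contributes its
    overhang; at shared positions, a non-placeholder entry wins."""
    if len(C1) < len(C2):
        C1, C2 = C2, C1
    padded = list(C2) + [BOT] * (len(C1) - len(C2))
    return tuple(b if a is BOT else a for a, b in zip(C1, padded))
```

Running this on the example above reproduces it: \((\bot , c_2, c_3)\) and \((c_1, \bot , \bot )\) merge to \((c_1, c_2, c_3)\).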
The empty signature \(\lambda \) is a signature valid for exactly the claim sequences containing only \(\bot \) and the empty claim sequence.
Subsequences. Let \({C}= (c_1, \ldots , c_n)\) be a tuple and \(b \in \{0, 1\}^n\) be a bit sequence specifying a selection of indices. Then \({{C}}[{b}]\) is the subsequence of \({C}\) containing exactly the elements \(c_j\) where \(b[j] = 1\), replacing all other claims by \(\bot \). In particular, if \(\mathcal {M}\) is an incidence matrix of a coverfree family, then \({{C}}[{\mathcal {M}_{i}}]\) is the subsequence containing all \(c_j\) where \(\mathcal {M}[i,j] = 1\) and \(\bot \) at all other positions.
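This selection operation, with `None` again standing in for \(\bot \), amounts to a one-line sketch:

```python
def subsequence(C, b):
    """C[b]: keep c_j where b[j] = 1 and replace every other position by
    None (standing in for the placeholder bot)."""
    return tuple(c if bit else None for c, bit in zip(C, b))
```

For an incidence matrix row \(\mathcal {M}_{i}\), `subsequence(C, M[i])` thus yields the claims aggregated into the ith aggregate.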
Syntax of Fault-Tolerant Signature Schemes. We are now ready to define fault-tolerant aggregate signature schemes. The intuitive difference between such a scheme and an ordinary aggregate signature scheme is that its verification algorithm does not merely output a boolean value \(1\) or \(0\), indicating whether all claims are valid or at least one claim is invalid; instead, it gives (some) information on which claims in \({C}\) are valid. In particular, it outputs the set of valid claims. If the signature contains more errors than the scheme can cope with, \(\mathsf {Verify}\) may output just a subset of the valid claims. Other claims may be clearly false or just not certainly true. (The verification algorithm ought to be conservative and reject a claim in case of uncertainty.)
The aggregation algorithm is called with two claim sequences, hence, before aggregating, a single claim c must be converted to a claim sequence \({C}= (\bot , \ldots , \bot , c)\) by assigning a position to c.
Definition 3
An aggregate signature scheme with list verification ^{Footnote 1} is a tuple of four PPT algorithms \(\varSigma = ({{\mathrm{\mathsf {KeyGen}}}}, \mathsf {Sign}, \mathsf {Agg}, \mathsf {Verify})\), where

\({{\mathrm{\mathsf {KeyGen}}}}(1^\kappa )\) creates a key pair \((\mathsf {pk}, \mathsf {sk})\).

\(\mathsf {Sign}(\mathsf {sk}, m)\) creates a signature for message m under secret key \(\mathsf {sk}\).

\(\mathsf {Agg}({C}_1, {C}_2, {\tau }_1, {\tau }_2)\) takes as input two exclusively mergeable claim sequences \({C}_1\) and \({C}_2\) and corresponding signatures \({\tau }_1\) and \({\tau }_2\) and creates an aggregate signature \({\tau }\), certifying the validity of the claim sequence \({C}_1 \mathop {\sqcup }{C}_2\).

\(\mathsf {Verify}({C}, {\tau })\) takes as input a claim sequence \({C}\) and an aggregate signature \({\tau }\) for \({C}\) and outputs a multiset of claims \({C}_\mathrm{{valid}}\subseteq \mathsf {elem}\!\left( {C}\right) \) specifying the valid claims in \({\tau }\). Note that this may be a proper subset of \(\mathsf {elem}\!\left( {C}\right) \), or even empty, if none of the claims can be derived from \({\tau }\) (for certain). Again, here, \({C}\) may contain \(\bot \) as a claim placeholder.
\(\varSigma \) is required to be correct as defined in the following paragraphs.
Regular Signatures. Informally, a signature is regular if it is created by running the algorithms of \(\varSigma \). More formally, let \({C}\) be a claim sequence and \({\tau }\) be a signature. We recursively define what it means for \({\tau }\) to be regular for \({C}\):

If \((\mathsf {pk}, \mathsf {sk})\) is in the image of \({{\mathrm{\mathsf {KeyGen}}}}(1^\kappa )\) and \({C}= ((\mathsf {pk}, m))\) for a message m, and if \({\tau }\) is in the image of \(\mathsf {Sign}(\mathsf {sk}, m)\), then \({\tau }\) is said to be regular for \({C}\) and for any claim sequence obtained by prepending any number of \(\bot \) symbols to \({C}\).

If \({\tau }_1\) is regular for a claim sequence \({C}_1\), \({\tau }_2\) is regular for another claim sequence \({C}_2\), and \({C}_1, {C}_2\) are exclusively mergeable, then \({\tau }\) is regular for \({C}_1 \mathop {\sqcup }{C}_2\) if \({\tau }\) is in the image of \(\mathsf {Agg}({C}_1, {C}_2, {\tau }_1, {\tau }_2)\).

The empty signature \(\lambda \) is regular for the claim sequences containing only \(\bot \) and the empty claim sequence ().
If a signature \({\tau }\) is not regular for a claim sequence \({C}\), it is called irregular for \({C}\).
Fault Tolerance. Let \(M = {\{}(c_1, {\tau }_1), \ldots , (c_n, {\tau }_n){\}}\) be a multiset of claim and signature pairs, which is partitioned into two multisets \(M_\mathrm{{irreg}}\) and \(M_\mathrm{{reg}}\), containing the pairs for which \({\tau }_i\) is irregular for \({C}= (c_i)\) and regular for \({C}\), respectively.^{Footnote 2}
Then the multiset M contains d errors if \(|M_\mathrm{{irreg}}| = d\). An aggregate signature scheme \(\varSigma \) with list verification is tolerant against d errors if, for any such multiset M containing at most d errors, for any signature \({\tau }\) that was aggregated from the signatures in M (in arbitrary order) and the corresponding claim sequence \({C}\), which may additionally contain any number of claim placeholders \(\bot \), we have
$$\begin{aligned} R \subseteq \mathsf {Verify}({C}, {\tau }), \end{aligned}$$
where R is the multiset of all the claims (i.e. the first components of the pairs) in \(M_\mathrm{{reg}}\). In other words, \(\mathsf {Verify}\) outputs at least all claims of regular signatures.^{Footnote 3}
A d-fault-tolerant aggregate signature scheme is an aggregate signature scheme with list verification that is tolerant against d errors. A fault-tolerant aggregate signature scheme is a scheme that is d-fault-tolerant for some \(d>0\).
Correctness. Observe that 0-fault-tolerance means that if M contains only regularly created signatures, then \(\mathsf {Verify}\) must output all claims in M (or \({C}\), respectively). This is analogous to the common definition of correctness for aggregate signature schemes. We therefore call an aggregate signature scheme with list verification correct, if it is tolerant against 0 errors.
Errors During Aggregation. Our definitions above assume that aggregation is always done correctly. This is a necessary assumption, since it is impossible to give guarantees for arbitrary errors that happen during aggregation. Consider for example a faulty aggregation algorithm that ignores its input and just outputs a random string. It is an interesting open question to find a fault-tolerant signature scheme that can tolerate certain types of aggregation errors, too.
Compression Ratio. Denote by \(\mathsf {size}(\sigma )\) the size of a signature \(\sigma \). Let \({C}\) be a claim sequence of length n, and \(\sigma ^*\) an aggregate signature of maximum size^{Footnote 4} which is regular for \({C}\). We say that an aggregate signature scheme has compression ratio \(\rho (n)\) iff
$$\begin{aligned} \mathsf {size}(\sigma ^*) \in O\!\left( \frac{n}{\rho (n)}\right) \!. \end{aligned}$$
Note that if \(\mathsf {size}(\sigma ^*)\) is upper bounded by a constant, then the compression ratio is \(\rho (n) = n\), which is optimal for common aggregate signature schemes. As argued in the introduction, this is not possible for fault-tolerant aggregate signatures, cf. Appendix A.
Security Experiment. The security experiment for aggregate signatures with list verification, which is a direct adaptation of the standard security experiment of [Bon+03], consists of three phases:

Setup Phase. The challenger generates a pair of keys \((\mathsf {pk}, \mathsf {sk}) :={{\mathrm{\mathsf {KeyGen}}}}(1^\kappa )\) and gives the public key \(\mathsf {pk}\) to the adversary.

Query Phase. The adversary \(\mathcal {A}\) may (adaptively) issue signature queries \(m_i\) to the challenger, who responds with \({\tau }_i :=\mathsf {Sign}(\mathsf {sk}, m_i)\).

Forgery Phase. Finally, \(\mathcal {A}\) outputs a claim sequence \({C}^*\) and a signature \({\tau }^*\).
The adversary wins the experiment iff there is a message \(m^*\) such that \(c^* = (\mathsf {pk}, m^*) \in \mathsf {Verify}({C}^*, {\tau }^*)\), and \(m^*\) has never been submitted to the signature oracle.
Definition 4
An aggregate signature scheme with list verification is \((t, q, \varepsilon )\)-secure if there is no adversary \(\mathcal {A}\) running in time at most t, making at most q queries to the signature oracle and winning in the above experiment with probability at least \(\varepsilon \).
4 Generic Construction of Fault-Tolerant Aggregate Signatures
In this section, we present our generic construction of fault-tolerant aggregate signature schemes. It is based on an arbitrary aggregate signature scheme \(\varSigma \), which is used as a black box, and a cover-free family. Our scheme inherits its security from \(\varSigma \), and can tolerate d faults if it uses a d-cover-free family.
Our Construction. In the following we describe a generic construction of our fault-tolerant aggregate signature scheme. For this let \(\varSigma \) be an ordinary aggregate signature scheme. Moreover, let \(\mathcal {M}\) be the incidence matrix of a d-cover-free family \(\mathcal {F}= (\mathcal {S}, \mathcal {B})\), as defined in Sect. 2.2. For the sake of presentation, we first show our bounded construction. In this version of our construction, the maximum number of signatures that can be aggregated is \({{\mathrm{cols}}}(\mathcal {M})\). We discuss in Sect. 4.1 how to remove this restriction.
In our scheme, signatures for just one claim are simply signatures of the underlying scheme \(\varSigma \), whereas aggregate signatures are short vectors of signatures of \(\varSigma \). We identify each element of the universe \(\mathcal {S}\) with a position in this vector, and each subset \(B \in \mathcal {B}\) with an individual signature of the underlying scheme \(\varSigma \).
Here, we also require that the underlying scheme \(\varSigma \) supports claim sequences and claim placeholders as an input to \(\mathsf {Agg}\) and \(\mathsf {Verify}\), rather than just multisets, as in the definition of Sect. 2.1. Moreover, we assume that \(\varSigma \) supports the empty signature \(\lambda \) as an input to \(\mathsf {Agg}\) and \(\mathsf {Verify}\). However, these are not essential restrictions, as any normal aggregate scheme may easily be adapted to the modified syntax, by ignoring any order and claim placeholders, i.e. applying \(\mathsf {elem}\!\left( \cdot \right) \) to the claim sequences before they are passed to the \(\mathsf {Agg}\) and \(\mathsf {Verify}\) algorithms.

\({{\mathrm{\mathsf {KeyGen}}}}(1^\kappa )\) creates a key pair \((\mathsf {pk}, \mathsf {sk})\) by using the \({{\mathrm{\mathsf {KeyGen}}}}\) algorithm of \(\varSigma \).

\(\mathsf {Sign}(\mathsf {sk}, m)\) takes as input a secret key \(\mathsf {sk}\) and a message m and outputs the signature as given by \(\varSigma .\mathsf {Sign}(\mathsf {sk}, m)\).

\(\mathsf {Agg}({C}_1, {C}_2, {\tau }_1, {\tau }_2)\) takes as input two exclusively mergeable claim sequences \({C}_1\) and \({C}_2\) and corresponding signatures \({\tau }_1\) and \({\tau }_2\). It proceeds as follows:

1.
If one or both of the claim sequences \({C}_k\) (\(k \in \{1, 2\}\)) contain only one (proper) claim c, i.e. \({\tau }_k\) is an individual signature, then \(\sigma _k\) is initialized as \({\tau }_k\), the corresponding signature given to \(\mathsf {Agg}\). Then \({\tau }_k\) is expanded to a vector, by setting
$$\begin{aligned} {\tau }_k[i] \,{:}{=}{\left\{ \begin{array}{ll} \sigma _k, &{} \text {if } \mathcal {M}[i, j] = 1, \\ \lambda , &{} \text {otherwise,} \end{array}\right. } \quad \text { for } i = 1, \ldots , {{\mathrm{rows}}}(\mathcal {M}), \end{aligned}$$where j is the index of c in the claim sequence.

2.
Then the signatures \({\tau }_1, {\tau }_2\), which are both vectors now, are aggregated componentwise, i.e.
$$\begin{aligned} {\tau }[i] = \varSigma .\mathsf {Agg}( {{C}_1}[{\mathcal {M}_{i}}], {{C}_2}[{\mathcal {M}_{i}}], {\tau }_1[i], {\tau }_2[i] ). \end{aligned}$$
Finally, \(\mathsf {Agg}\) outputs \({\tau }\).


\(\mathsf {Verify}({C}, {\tau })\) takes as input a claim sequence \({C}\) and an aggregate signature \({\tau }\) for \({C}\). For each component \({\tau }[i]\) of \({\tau }\) it computes \(b_i \,{:}{=}\varSigma .\mathsf {Verify}({{C}}[{\mathcal {M}_{i}}], {\tau }[i])\) and outputs the multiset of valid claims
$$\begin{aligned} {C}_\mathrm{{valid}} := \mathsf {elem}\!\left( \bigsqcup _{{i\in [{{\mathrm{rows}}}(\mathcal {M})], b_i = 1}} {{C}}[{\mathcal {M}_{i}}] \right) . \end{aligned}$$(1)
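The construction above can be sketched as a minimal executable toy. The underlying aggregate scheme below is a stand-in, purely to model the interface (tags are hashes of the key and message, aggregation is multiset union, and the secret key doubles as the public key, so it is not secure); a real instantiation would use an actual aggregate signature scheme such as BLS. The cover-free family is the 1-CFF of all 2-element subsets of a 4-element set (a Sperner family), giving up to 6 claims with tolerance against \(d = 1\) error.

```python
import hashlib
from itertools import combinations

# Toy stand-in for the underlying aggregate scheme Sigma: a signature is a
# sorted tuple of per-claim tags, Agg is multiset union, and Verify
# recomputes the expected tags.  NOT secure (sk == pk); it only models the
# interface of a real aggregate scheme such as BLS.
def sign(sk, m):
    return (hashlib.sha256(f"{sk}|{m}".encode()).hexdigest(),)

def agg(tau1, tau2):
    return tuple(sorted(tau1 + tau2))

def verify(claims, tau):  # claims: list of (pk, m) pairs
    expected = tuple(sorted(t for pk, m in claims for t in sign(pk, m)))
    return expected == tau

# 1-cover-free family: all 2-subsets of a 4-set (a Sperner family).
# Column j of the incidence matrix has its 1-entries in the rows BLOCKS[j].
ROWS = 4
BLOCKS = list(combinations(range(ROWS), 2))  # 6 columns -> up to 6 claims

def ft_aggregate(claim_sigs):
    """claim_sigs[j] = (claim, tau); claim j is assigned column j."""
    vec = [() for _ in range(ROWS)]          # one Sigma-aggregate per row
    cls = [[] for _ in range(ROWS)]          # claims aggregated into each row
    for j, (claim, tau) in enumerate(claim_sigs):
        for i in BLOCKS[j]:                  # rows with M[i, j] = 1
            vec[i] = agg(vec[i], tau)
            cls[i].append(claim)
    return cls, vec

def ft_verify(cls, vec):
    """Output the union of the claims of all rows that verify."""
    valid = set()
    for i in range(ROWS):
        if verify(cls[i], vec[i]):
            valid.update(cls[i])
    return valid
```

Aggregating three correctly signed claims where one signature has been replaced by garbage, the corrupted signature only poisons the rows of its own column, and `ft_verify` still returns the two intact claims.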
We now prove the security of our scheme.
Theorem 1
If \(\varSigma \) is a \((t, q, \varepsilon )\)-secure aggregate signature scheme, then the scheme defined above is a \((t', q, \varepsilon )\)-secure aggregate signature scheme with list verification, where \(t'\) is approximately the same as t.
Proof
Let \(\varSigma '\) be the scheme described above. The following argument is rather direct. Assume that \(\mathcal {A}\) is an adversary breaking the \((t', q, \varepsilon )\)security of \(\varSigma '\). We construct an attacker \(\mathcal {B}\) breaking the \((t, q, \varepsilon )\)security of \(\varSigma \).
\(\mathcal {B}\) simulates \(\mathcal {A}\) as follows. In the setup phase \(\mathcal {B}\) starts executing \(\mathcal {A}\) and passes its own input \(\mathsf {pk}\) on to \(\mathcal {A}\). Whenever \(\mathcal {A}\) makes a signature query for a message m, \(\mathcal {B}\) obtains the signature \(\sigma \) by forwarding m to the challenger. \(\mathcal {B}\) then passes \(\sigma \) to \(\mathcal {A}\) and continues the simulation. When \(\mathcal {A}\) outputs a claim sequence \({C}^*\) and a signature \({\tau }^*\), \(\mathcal {B}\) checks if there is a claim \(c^* = (\mathsf {pk}, m^*)\) in \({C}^*\), such that \(m^*\) was never queried by \(\mathcal {A}\).
If this is not the case, then \(\mathcal {B}\) outputs \(\bot \) and terminates. Otherwise, by definition of \(\varSigma '.\mathsf {Verify}\), there must be an index i such that
$$\begin{aligned} \varSigma .\mathsf {Verify}({{C}^*}[{\mathcal {M}_{i}}], {\tau }^*[i]) = 1 \end{aligned}$$(2)
and \(c^* \in \mathsf {elem}\!\left( {{C}^*}[{\mathcal {M}_{i}}]\right) \). \(\mathcal {B}\) outputs \({{C}^*}[{\mathcal {M}_{i}}]\) and \({\tau }^*[i]\). This is a valid signature because of (2).
Note that \(\mathcal {B}\)’s queries are exactly the same as \(\mathcal {A}\)’s. Therefore, if \(\mathcal {A}\) did not query \(m^*\), then neither did \(\mathcal {B}\). Thus, \(\mathcal {B}\) wins iff \(\mathcal {A}\) wins, and therefore \(\mathcal {B}\) also has success probability \(\varepsilon \). We also see that \(\mathcal {B}\) makes at most q queries. Finally, it is easy to verify that the running time of \(\mathcal {B}\) is approximately the same as that of \(\mathcal {A}\). \(\square \)
We now turn to proving the fault-tolerance of our scheme.
Theorem 2
Let \(\varSigma \) be the aggregate signature scheme with list verification defined above. If \(\varSigma \) is based on a d-CFF, then it is tolerant against d errors, and in particular, it is correct.
Proof
Let \(M = {\{}(c_1, {\tau }_1), \ldots , (c_n, {\tau }_n){\}}\) be a multiset of claim and signature pairs, which is partitioned into two multisets \(M_\mathrm{{irreg}}\) and \(M_\mathrm{{reg}}\), containing the pairs for which \({\tau }_i\) is irregular for \({C}_i=(c_i)\) or regular for \({C}_i\), respectively. Let M contain at most d errors, i.e., \(|M_\mathrm{{irreg}}|\le d\). Moreover, let \({\tau }\) be a signature that was aggregated from the signatures in M (in arbitrary order) and \({C}\) the corresponding claim sequence. To simplify the proof, we assume without loss of generality that \({C}= (c_1, \ldots , c_n)\), i.e., the order in the claim sequence is the same as in the indexing of the signatures in M and it does not include any claim placeholders \(\bot \). Finally, let \(\mathcal {F}= (\mathcal {S}, \mathcal {B})\) be the d-cover-free family used by the scheme above, where \(\mathcal {S}= \{s_1, \ldots , s_m\}\) and \(\mathcal {B}= \{B_1, \ldots , B_n\}\).
We need to show that \(R \subseteq \varSigma .\mathsf {Verify}({C}, {\tau }) \, {=}{:}\,V\), where R is the multiset of all the claims in \(M_\mathrm{{reg}}\). Recall that \({{\mathrm{rows}}}(\mathcal {M})\) and \({{\mathrm{cols}}}(\mathcal {M})\) denote the number of rows and columns of \(\mathcal {M}\), respectively. Let \(b_i \,{:}{=}\varSigma '.\mathsf {Verify}({{C}}[{\mathcal {M}_{i}}], {\tau }[i])\) for all \(i \in [{{\mathrm{rows}}}(\mathcal {M})]\).
Assume for a contradiction that there is a claim \(c^*\) that is contained strictly more often in R than in V. Then there exists an index \(j^*\) such that \({C}[j^*] = c^*\) and \(b_i = 0\) for all \(i \in [{{\mathrm{rows}}}(\mathcal {M})]\) with \(\mathcal {M}[i,j^*] = 1\).
In the following, let \(I := \left\{ i \in [{{\mathrm{rows}}}(\mathcal {M})]:\mathcal {M}[i,j^*] = 1 \right\} \) be the set of these indices I, and observe that these are the indices of all rows where the signature for \(c^*\) is aggregated into \({\tau }[i]\).
We now try to obtain a contradiction by showing that the set \(B_{j^*}\), which corresponds to the column \(j^*\) of \(\mathcal {M}\), is covered by the sets \(B_k\), corresponding to the columns of the claims with irregular signatures.
For each \(i \in I\), since \(b_i = 0\) and using the correctness of \(\varSigma '\), there must be some \(k \in [n]\) such that \((c_k, \sigma _k) \in M_\mathrm{{irreg}}\) and \(\mathcal {M}[i,k] = 1\). Since M contains at most d errors, there are at most d such indices k in total. Let K denote the set of these indices. Note that \(j^* \notin K\), since \((c^*, \sigma ^*) \in M_\text {reg}\), according to our assumption.
We have now established that for each \(i \in I\), there exists a \(k \in K\) with \((c_k, \sigma _k) \in M_\text {irreg}\) and \(\mathcal {M}[i,k] = 1\), where \(|K| \le d\). Recall that by definition of the incidence matrix \(\mathcal {M}\), we have for all \(i \in [{{\mathrm{rows}}}(\mathcal {M})]\) and \(j \in [{{\mathrm{cols}}}(\mathcal {M})]\):
$$\begin{aligned} \mathcal {M}[i,j] = 1 \iff s_i \in B_j. \end{aligned}$$
Restating the fact from the above paragraph using this equivalence yields that for all i with \(s_i \in B_{j^*}\), there exists a k with \(s_i \in B_k\), where there are at most d distinct indices \(k \in K\) in total. But this means that \(B_{j^*} \subseteq \bigcup _{k \in K} B_k\), where the union is over at most d different subsets \(B_k\) of \(\mathcal {S}\). This is a direct contradiction to the d-cover-freeness of \(\mathcal {F}\), so our assumption must be false, and we must therefore have \(R \subseteq V\). \(\square \)
Compression Ratio. Let C be a claim sequence of length \(n \in {\mathbb {N}}\), and \({\tau }\) an aggregate signature regular for C. We assume in the following that the length of all signatures of the underlying scheme \(\varSigma '\) is bounded by a constant s and is at least 1. Then the compression ratio of our scheme is \(\rho (n) = \frac{n}{{{\mathrm{rows}}}(\mathcal {M})}\), since
$$\begin{aligned} \mathsf {size}({\tau }) \le {{\mathrm{rows}}}(\mathcal {M}) \cdot s \in O\!\left( \frac{n}{\rho (n)}\right) \!. \end{aligned}$$(3)
Clearly, the compression ratio \(\rho (n)\) of our scheme is less than 1 if \(n < {{\mathrm{rows}}}(\mathcal {M})\), and the resulting aggregate signature is larger than the sum of the individual signature sizes when only few signatures have been aggregated so far. Our scheme can be easily adapted to fix this behavior, by simply storing all individual signatures instead of immediately aggregating them, until \(n = {{\mathrm{rows}}}(\mathcal {M})\). When the \(n+1\)st signature is added, the individual signatures are aggregated using the aggregation algorithm defined above. When further signatures are added, the size of the aggregate signature remains bounded by \({{\mathrm{rows}}}(\mathcal {M})\cdot s\).
4.1 Achieving Unbounded Aggregation
In order to achieve unbounded aggregation, we need not just one cover-free family, but a sequence of cover-free families increasing in size, such that we can jump to the next larger one as soon as we exceed the capacity for the number of aggregatable signatures. To work with our scheme, this sequence needs to exhibit a monotonicity property, which we define next.
Definition 5
We consider a family \((\mathcal {M}^{(l)})_l\) of incidence matrices of corresponding d-cover-free families \((\mathcal {F}_l)_l:=(\mathcal {S}_l, \mathcal {B}_l)_l\), where \({{\mathrm{rows}}}(l)\) denotes the number of rows and \({{\mathrm{cols}}}(l)\) the number of columns of \(\mathcal {M}^{(l)}\). \((\mathcal {M}^{(l)})_l\) is a monotone family of incidence matrices of \((\mathcal {F}_l)_l\), if \(\mathcal {S}_l\subseteq \mathcal {S}_{l+1}\) and \(\mathcal {B}_l\subseteq \mathcal {B}_{l+1}\) for all \(l\ge 1\), s.t. \(\mathcal {S}_{l+1}=\{s_1, \dots , s_{{{\mathrm{rows}}}(l)}, s_{{{\mathrm{rows}}}(l)+1}, \dots , s_{{{\mathrm{rows}}}(l+1)}\}\) and \(\mathcal {B}_{l+1}=\{B_1, \dots , B_{{{\mathrm{cols}}}(l)}, B_{{{\mathrm{cols}}}(l)+1},\dots , B_{{{\mathrm{cols}}}(l+1)}\}\), where \(\mathcal {S}_l=\{s_1, \dots , s_{{{\mathrm{rows}}}(l)}\}\) and \(\mathcal {B}_l=\{B_1, \dots , B_{{{\mathrm{cols}}}(l)}\}\).
Note that Definition 5 implies that
$$\begin{aligned} \mathcal {M}^{(l+1)} = \left( \begin{array}{cc} \mathcal {M}^{(l)} &{} A \\ 0 &{} B \end{array}\right) \!, \end{aligned}$$
where for \(i=1, \dots , {{\mathrm{rows}}}(l)\), \(j={{\mathrm{cols}}}(l)+1, \dots , {{\mathrm{cols}}}(l+1)\)
$$\begin{aligned} \mathcal {M}^{(l+1)}[i,j] = 1 \iff s_i \in B_j, \end{aligned}$$
and for \(i={{\mathrm{rows}}}(l)+1, \dots , {{\mathrm{rows}}}(l+1)\), \(j={{\mathrm{cols}}}(l)+1, \dots , {{\mathrm{cols}}}(l+1)\)
$$\begin{aligned} \mathcal {M}^{(l+1)}[i,j] = 1 \iff s_i \in B_j. \end{aligned}$$
So, each \(\mathcal {M}^{(l)}\) contains all previous \(\mathcal {M}^{(1)}, \dots , \mathcal {M}^{(l-1)}\) as submatrices.
Now we are able to achieve unbounded aggregation, i.e. our construction is able to aggregate an arbitrary number of signatures, by replacing the fixed incidence matrix \(\mathcal {M}\) of a d-CFF in our construction with a monotone family of incidence matrices \((\mathcal {M}^{(l)})_l\). For this, a run of our aggregation algorithm \(\mathsf {Agg}\) on inputs \({C}_1, {C}_2, {\tau }_1, {\tau }_2\) first has to determine the smallest l such that \({{\mathrm{cols}}}(l)\ge \max (|{C}_1|, |{C}_2|)\) and then proceeds with the corresponding incidence matrix \(\mathcal {M}^{(l)}\). Analogously, our verification algorithm \(\mathsf {Verify}\) on inputs \({C}, {\tau }\) first determines the smallest l such that \({{\mathrm{cols}}}(l)\ge |{C}|\).
Compression Ratio. The compression ratio of our unbounded scheme is \(\rho (n) = n/\!{{\mathrm{rows}}}(l)\), where l is the minimum index such that \({{\mathrm{cols}}}(l) \ge n\).
4.2 Additional Features of Our Construction
Selective Verification. Let \({\tau }\) be a regular signature with corresponding claim sequence \({C}= (c_1, \ldots , c_n)\). Assume we want to know whether a signature for a specific claim \(c^*\) was aggregated into \({\tau }\), but we want to avoid verifying all the claims in \({C}\) to save verification time, especially if \({C}\) is large. It is a unique feature of our fault-tolerant aggregate signature scheme that there is an additional algorithm \(\mathsf {SelectiveVerify}({C}, {\tau }, c^*)\) that outputs the number of occurrences of \(c^*\) in \({C}\) that have a valid signature in \({\tau }\), i.e., the number of occurrences of \(c^*\) in \(\mathsf {Verify}({C}, {\tau })\), while being faster than actually calling \(\mathsf {Verify}({C}, {\tau })\).
Let \(\varSigma \) be the aggregate signature scheme with list verification defined above and \(\varSigma '\) be the underlying aggregate signature scheme. Then \(\mathsf {SelectiveVerify}\) works as follows. First, it determines the set J of indices j where \(c^*\) occurs in C, i.e. \(c_j = c^*\). Then it determines the set \(I \,{:}{=}\left\{ i \in [{{\mathrm{rows}}}(\mathcal {M})]:\mathcal {M}[i,j] = 1 \text { for a } j \in J \right\} \), i.e. the set of indices of all rows where an individual signature for \(c^*\) should have been aggregated. Then, it initializes \(M := ()\) and iterates over all \(i \in I\), checking if \(b_i := \varSigma '.\mathsf {Verify}({{C}}[{\mathcal {M}_{i}}], {\tau }[i]) = 1\). If this is the case for an i, it sets
$$\begin{aligned} M \,{:}{=}M \mathop {\sqcup }{{C}}[{\mathcal {M}_{i}}]. \end{aligned}$$
As soon as M contains \(|J|\) occurrences of \(c^*\), \(\mathsf {SelectiveVerify}\) skips all remaining \(i \in I\). After the loop is done, \(\mathsf {SelectiveVerify}\) outputs the number of occurrences of \(c^*\) in M.
Since \(\varSigma .\mathsf {Verify}\) returns all claims that are contained in a subsequence \({{C}}[{\mathcal {M}_{i}}]\) with \(b_i = 1\), the output of \(\mathsf {SelectiveVerify}\) is exactly the number of occurrences of \(c^*\) in \(\varSigma .\mathsf {Verify}({C}, {\tau })\). \(\mathsf {SelectiveVerify}\) therefore inherits the fault-tolerance and security properties already proven for \(\varSigma .\mathsf {Verify}\).
In the best case, \(\mathsf {SelectiveVerify}\) requires only one call to the underlying verification algorithm \(\varSigma '.\mathsf {Verify}\). In the worst case, it still only requires \(|I| \le \sum _{j \in J} |B_j|\) calls to \(\varSigma '.\mathsf {Verify}\), where \(B_j\) is the set from the cover-free family corresponding to column j.
Going a little further, it is even possible to create a “subsignature” for \(c^*\) that allows everyone to check that \(c^*\) has a valid signature without requiring the complete claim sequence \({C}\) and the complete signature \({\tau }\): It is sufficient to give \(C' \,{:}{=}\bigsqcup _{i \in I} {{C}}[{\mathcal {M}_{i}}]\) and the signatures \({\tau }[i]\) for \(i \in I\) to the verifier.
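The selective verification procedure can be sketched generically. In the sketch below, the incidence matrix is a plain 0/1 list of rows and the underlying \(\varSigma '.\mathsf {Verify}\) is passed in as a callback named `verify`; both representations are illustrative assumptions, not part of the scheme's definition.

```python
def selective_verify(M, claims, tau_vec, target, verify):
    """Count how many occurrences of `target` in `claims` carry a valid
    signature, touching only the rows whose column has a 1-entry for some
    occurrence of `target`.
    M: 0/1 incidence matrix (list of rows), tau_vec: per-row aggregates,
    verify(sub_claims, tau) -> bool: stands in for Sigma'.Verify."""
    J = [j for j, c in enumerate(claims) if c == target]
    I = {i for i in range(len(M)) if any(M[i][j] for j in J)}
    confirmed = set()                        # positions of target proven valid
    for i in I:
        sub = [claims[j] for j in range(len(claims)) if M[i][j]]
        if verify(sub, tau_vec[i]):
            confirmed.update(j for j in J if M[i][j])
        if len(confirmed) == len(J):         # every occurrence confirmed: stop
            break
    return len(confirmed)
```

With the small Sperner matrix `[[1,1,0],[1,0,1],[0,1,1]]` and a stub `verify` that just reads a per-row validity flag, a single verified row can already confirm both occurrences of a repeated claim, illustrating the early exit.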
5 A Concrete Instantiation of Our Scheme
In this section, we consider a concrete construction of a d-CFF which can be used to instantiate our generic d-fault-tolerant aggregate signature scheme. There are several d-CFF constructions in the literature, for instance, constructions based on concatenated codes [LVY01, DMR00b, DMR00a], polynomials, algebraic-geometric Goppa codes, as well as randomized constructions [KRS99]. The following theorem gives a lower bound on the number of rows of the incidence matrix in terms of the parameter d and the number of columns. Proofs can be found in [DR82, Fur96, Rus94].
Theorem 3
For a d-CFF \(\mathcal {F}=(\mathcal {S}, \mathcal {B})\), where \(|\mathcal {S}|=m\) and \(|\mathcal {B}|=n\), it holds that
$$\begin{aligned} m \ge c \, \frac{d^2}{\log d} \, \log n \end{aligned}$$
for some constant \(c\in (0,1)\).
In the following construction we use for concreteness only a single incidence matrix, but the next lemma by [LVW06] shows a generic construction to get a monotone family of incidence matrices.
Lemma 1
If \(\mathcal {F}=(\mathcal {S}, \mathcal {B})\) and \(\mathcal {F}'=(\mathcal {S}', \mathcal {B}')\) are d-CFFs, then there exists a d-CFF \(\mathcal {F}^*=(\mathcal {S}^*, \mathcal {B}^*)\) with \(|\mathcal {S}^*|=|\mathcal {S}|+|\mathcal {S}'|\) and \(|\mathcal {B}^*|=|\mathcal {B}|+|\mathcal {B}'|\).
Proof
Suppose \(\mathcal {M}\) and \(\mathcal {M}'\) are the incidence matrices of d-CFFs \(\mathcal {F}=(\mathcal {S}, \mathcal {B})\) and \(\mathcal {F}'=(\mathcal {S}', \mathcal {B}')\), respectively. Then
$$\begin{aligned} \mathcal {M}^* = \left( \begin{array}{cc} \mathcal {M}&{} 0 \\ 0 &{} \mathcal {M}' \end{array}\right) \end{aligned}$$
is an incidence matrix for a d-CFF \(\mathcal {F}^*=(\mathcal {S}^*, \mathcal {B}^*)\) with \(|\mathcal {S}^*|=|\mathcal {S}|+|\mathcal {S}'|\) and \(|\mathcal {B}^*|=|\mathcal {B}|+|\mathcal {B}'|\). \(\square \)
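The lemma's composition can be checked mechanically on small instances. The sketch below assumes a block-diagonal combined matrix (new sets live on fresh rows), which is one way to realize the lemma while preserving the monotonicity of Definition 5, and verifies d-cover-freeness by brute force.

```python
from itertools import combinations

def block_diag(M1, M2):
    """Incidence matrix of the combined family: columns of M1 keep their
    rows, columns of M2 are moved to fresh rows (zeros elsewhere)."""
    c1, c2 = len(M1[0]), len(M2[0])
    return [row + [0] * c2 for row in M1] + [[0] * c1 + row for row in M2]

def is_d_cff(M, d):
    """Brute-force d-cover-freeness check (small instances only): no
    column's set may be covered by the union of d other columns' sets."""
    cols = [{i for i in range(len(M)) if M[i][j]} for j in range(len(M[0]))]
    for j, B in enumerate(cols):
        others = cols[:j] + cols[j + 1:]
        for combo in combinations(others, d):
            if B <= set().union(*combo):
                return False
    return True

# A 1-CFF: all 2-subsets of a 4-set (Sperner family), as a 4 x 6 matrix.
SPERNER = [[1 if r in pair else 0 for pair in combinations(range(4), 2)]
           for r in range(4)]
```

Both `SPERNER` and `block_diag(SPERNER, SPERNER)` pass `is_d_cff(..., 1)`, matching the lemma's parameters \(|\mathcal {S}^*| = 8\) and \(|\mathcal {B}^*| = 12\).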
For our approach we can use a deterministic construction of a d-CFF based on polynomials, as introduced by [KRS99] and described in the following; we propose a generalization to the multivariate case in Appendix B.
For our d-CFF \(\mathcal {F}=(\mathcal {S},\mathcal {B})\), let \(\mathbb {F}_q=\{x_1, \ldots , x_q\}\) be a finite field and
$$\begin{aligned} \mathcal {S}\,{:}{=}\mathbb {F}_q\times \mathbb {F}_q, \quad \text {so } |\mathcal {S}| = q^2. \end{aligned}$$
For ease of presentation, we assume that q is a prime (as opposed to a prime power), so we may write \(\mathbb {F}_q=\{0,\ldots ,q-1\}\). We consider the set of all univariate polynomials \(f\in \mathbb {F}_q[X]\) of degree at most k, denoted by \(\mathbb {F}_{q}[X]_{\le k}\). So,
$$\begin{aligned} \mathbb {F}_{q}[X]_{\le k} = \{a_k X^k + \cdots + a_1 X + a_0 : a_k, \ldots , a_0 \in \mathbb {F}_q\}. \end{aligned}$$
We have \(|\mathbb {F}_{q}[X]_{\le k}|=q^{k+1}\). Now, for every \(f\in \mathbb {F}_{q}[X]_{\le k}\), we consider the subsets
$$\begin{aligned} B_f \,{:}{=}\{(x, f(x)) : x \in \mathbb {F}_q\} \subseteq \mathcal {S}, \end{aligned}$$
consisting of all tuples \((x,y)\in \mathcal {S}\) which lie on the graph of \(f\in \mathbb {F}_{q}[X]_{\le k}\), i.e. for which \(f(x)=y\). From this we obtain
$$\begin{aligned} \mathcal {B}\,{:}{=}\{B_f : f \in \mathbb {F}_{q}[X]_{\le k}\}, \quad \text {so } |\mathcal {B}| = q^{k+1} \text { and } |B_f| = q \text { for all } f. \end{aligned}$$
For any distinct \(B_f, B_{f_1}, \ldots , B_{f_d}\in \mathcal {B}\) it holds that
$$\begin{aligned} |B_f \cap B_{f_i}| \le k \quad \text {for } i = 1, \ldots , d, \end{aligned}$$
since the degree of each polynomial \(g_i:=f-f_i\) is at most k, and hence each \(g_i\) has at most k zeros. Thus, we have
$$\begin{aligned} \left| B_f \setminus \bigcup _{i=1}^{d} B_{f_i}\right| \ge q - d\cdot k. \end{aligned}$$
To achieve a d-CFF with this construction, \(q\ge d\cdot k +1\) must therefore be fulfilled, as then \(B_f \not \subseteq \bigcup _{i=1}^{d} B_{f_i}\).
Now, we consider the incidence matrix \(\mathcal {M}\) of this d-CFF, which consists of \(|\mathcal {S}|\) rows and \(|\mathcal {B}|\) columns. Each row corresponds to an element of \(\mathcal {S}\) and each column to an element of \(\mathcal {B}\). In the construction above each row corresponds to a tuple \((x,y)\in \mathbb {F}_q^2\), where the order is \((0,0), (0,1), \ldots , (q-1,q-1)\). In the following, let \((x_i, y_i)\) denote the corresponding tuple for row i, \(i=0, \ldots , q^2-1\). We start counting from 0 for simplicity, hence,
$$\begin{aligned} i = x_i \cdot q + y_i. \end{aligned}$$
Each column of the incidence matrix \(\mathcal {M}\) corresponds to a polynomial of degree at most k, where we decide to start with the constant polynomials and end with the polynomials of degree k, i.e. \(f_0:=0\), \(f_1:= 1\), \(f_2:=2\), \(\ldots ,\) \(f_q:= X\), \(f_{q+1}:= X+1\), \(f_{q+2}:= X+2\), \(\ldots ,\) \(f_{2q}:= 2X\), \(f_{2q+1}:= 2X+1\), \(\ldots ,\) \(f_{q^{k+1}-1}:=(q-1)X^k+(q-1)X^{k-1}+\cdots +(q-1)X+(q-1)\).
By \(f_j\) we denote the corresponding polynomial for column j, for \(j=0,\ldots , q^{k+1}-1\), again starting from 0. Now, the incidence matrix is built as
$$\begin{aligned} \mathcal {M}[i, j] \,{:}{=}{\left\{ \begin{array}{ll} 1, &{} \text {if } f_j(x_i) = y_i, \\ 0, &{} \text {otherwise.} \end{array}\right. } \end{aligned}$$
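This incidence matrix admits a direct implementation, sketched below; q is assumed prime so that arithmetic mod q is field arithmetic, and columns are ordered by the base-q digits of j, exactly as above.

```python
from itertools import product

def poly_cff_matrix(q, k):
    """Incidence matrix of the polynomial CFF: row i is the point
    (x_i, y_i) = (i // q, i % q), column j is the polynomial whose
    coefficient vector (a_k, ..., a_0) is the base-q representation of j,
    and M[i][j] = 1 iff f_j(x_i) = y_i.  Requires q prime."""
    points = [(x, y) for x in range(q) for y in range(q)]
    M = [[0] * q ** (k + 1) for _ in points]
    # product(..., repeat=k+1) varies the last digit fastest, so the
    # enumeration order of coefficient vectors matches j = a_k*q^k+...+a_0.
    for j, coeffs in enumerate(product(range(q), repeat=k + 1)):
        for x in range(q):
            y = sum(a * pow(x, k - t, q) for t, a in enumerate(coeffs)) % q
            M[x * q + y][j] = 1              # (x, f_j(x)) lies on the graph
    return M
```

For \(q=3\), \(k=1\) this yields a \(9\times 9\) matrix whose columns pairwise intersect in at most \(k=1\) rows; since \(q \ge 2k+1\), it is already a 2-CFF. For \(q=5\), \(k=2\) one obtains the \(25\times 125\) matrix of the example below.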
Example. For \(q=5\), \(d=2\) and \(k=2\) we have a 2-CFF with
$$\begin{aligned} \mathcal {S}= \mathbb {F}_5 \times \mathbb {F}_5, \quad |\mathcal {S}| = 25. \end{aligned}$$
We have
$$\begin{aligned} |\mathcal {B}| = 125, \end{aligned}$$
since \(|\mathbb {F}_{5}[X]_{\le 2}|=5^3=125\), where
$$\begin{aligned} \mathbb {F}_{5}[X]_{\le 2} = \{a_2 X^2 + a_1 X + a_0 : a_2, a_1, a_0 \in \mathbb {F}_5\} \end{aligned}$$
and \(f_0:=0\), \(f_1:=1\), \(\ldots ,\) \(f_{124}:=4X^2+4X+4\). Thus, we obtain a \(25 \times 125\) incidence matrix \(\mathcal {M}\).
Remark. With this univariate polynomial-based construction of a d-CFF it is very easy to generate our incidence matrix, or only some parts of it, as needed by our verification algorithm or if we want to check some information separately.
If, for example, one is interested in verifying the validity of only one single claim–signature pair \((c_j, \sigma _j)\) in an aggregate signature, it is not necessary to generate the whole matrix, but only the rows where the related column j has 1-entries. For this, one has to know which polynomial corresponds to column j.
Here we can use the fact that each number \(n=0, \ldots , q^{k+1}-1\) can be written as \(a_k\cdot q^k + a_{k-1}\cdot q^{k-1} + \cdots + a_0\), where \(a_k, \dots , a_0\in \{0,\dots ,q-1\}\). So, each n corresponds to a \((k+1)\)-tuple denoted by \((a_k^{(n)},\ldots , a_0^{(n)})\). For the sake of convenience, we start counting the rows and columns of our matrix from 0, as before. Thus, to column \(j=0, \ldots , q^{k+1}-1\) we assign the polynomial \(f_j=a_k^{(j)}X^k + \cdots + a_0^{(j)}\).
Analogously, to each row \(i=0, \ldots , q^2-1\), we assign the tuple \((b_1^{(i)}, b_0^{(i)})\in \mathbb {F}_q^2\), where \(i=b_1^{(i)}\cdot q+b_0^{(i)}\). Let \(I_j'\subset \{0, \ldots , q^2-1\}\) be the subset of all rows \(i'\) with \(f_j(b_1^{(i')})=b_0^{(i')}\). It suffices to generate only the rows \(i'\in I_j'\) to verify the validity of \(\sigma _j\). To get the 1-entries of these rows, one has to check for each \(i'\in I_j'\) which polynomials \(f\in \mathbb {F}_{q}[X]_{\le k}\) fulfill \(f(b_1^{(i')})=b_0^{(i')}\): for arbitrary but fixed values \(a_k, \ldots , a_1\in \{0, \ldots , q-1\}\), there is exactly one appropriate \(a_0\). This results in \(q^k\) polynomials, and accordingly columns, per row. If the coefficients of the appropriate polynomials are known, then we can use them to compute the numbers of the corresponding columns with 1-entries.
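This single-column computation can be sketched as follows; it recovers \(f_j\) from the base-q digits of j and lists the rows \(I_j'\) directly, without materialising the matrix.

```python
def rows_of_column(j, q, k):
    """Rows i with M[i, j] = 1, computed without building the matrix:
    recover the coefficients (a_k, ..., a_0) of f_j from the base-q digits
    of j, then return the rows x*q + f_j(x) for all x in F_q."""
    coeffs = []
    for _ in range(k + 1):
        coeffs.append(j % q)                 # least significant digit first
        j //= q
    coeffs.reverse()                         # now (a_k, ..., a_0)
    f = lambda x: sum(a * pow(x, k - t, q) for t, a in enumerate(coeffs)) % q
    return sorted(x * q + f(x) for x in range(q))
```

For example, with \(q=5\), \(k=2\), column 124 corresponds to \(f_{124}=4X^2+4X+4\) and touches exactly \(q = 5\) of the 25 rows.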
Compression Ratio of Our Bounded Scheme. If our bounded scheme is instantiated with this CFF, and we assume that the length of signatures of the underlying scheme \(\varSigma '\) is bounded by a constant s, then, as shown in (3), the compression ratio is
$$\begin{aligned} \rho (n) = \frac{n}{|\mathcal {S}|} = \frac{n}{q^2}. \end{aligned}$$
For \(n = |\mathcal {B}|\), we therefore have
$$\begin{aligned} \rho (n) = \frac{|\mathcal {B}|}{|\mathcal {S}|} = \frac{q^{k+1}}{q^2} = q^{k-1}. \end{aligned}$$
Since \(q \ge dk + 1\), we have that \(|\mathcal {B}|\) grows exponentially in k, whereas \(|\mathcal {S}|\) grows only quadratically in k. Hence, \(|\mathcal {B}|\) is exponential in \(|\mathcal {S}|\), or, stated differently, \(|\mathcal {S}|\) is logarithmic in \(|\mathcal {B}|\).
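These growth rates are easy to tabulate. The parameter triples in the loop below are illustrative choices (not necessarily those of Table 1).

```python
def cff_params(q, k, d):
    """Parameters of the polynomial d-CFF: m = |S| = q^2 aggregate
    components suffice for up to n = |B| = q^(k+1) claims, i.e. a
    compression ratio of n/m at full capacity.  Needs q >= d*k + 1."""
    assert q >= d * k + 1, "not d-cover-free for these parameters"
    m, n = q * q, q ** (k + 1)
    return m, n, n // m                      # n/m = q^(k-1), an integer

# Illustrative parameter choices (hypothetical, not Table 1 of the paper).
for q, k, d in [(5, 2, 2), (7, 3, 2), (11, 2, 5)]:
    m, n, rho = cff_params(q, k, d)
    print(f"q={q}, k={k}, d={d}: n={n} claims -> m={m} rows, ratio {rho}")
```

The middle row, for instance, compresses signatures for up to 2401 claims into 49 aggregate components while tolerating 2 faulty signatures.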
Compression Ratio of Our Unbounded Scheme. When our unbounded scheme is instantiated with the monotone family of CFFs obtained by fixing an incidence matrix \(\mathcal {M}\) and repeatedly applying Lemma 1 to \(\mathcal {M}\), then the asymptotic compression ratio is \(\rho (n) = 1\), since
$$\begin{aligned} \frac{n}{{{\mathrm{rows}}}(l)} = \frac{n}{l \cdot {{\mathrm{rows}}}(\mathcal {M})} \le \frac{{{\mathrm{cols}}}(\mathcal {M})}{{{\mathrm{rows}}}(\mathcal {M})}, \end{aligned}$$
which is constant. Therefore, the size of an aggregate signature is linear in the length of the claim sequence.
However, if we assume that all signatures of the underlying scheme \(\varSigma '\) have a size bounded by s, then the concrete size of an aggregate signature is at most
$$\begin{aligned} {{\mathrm{rows}}}(l) \cdot s \le \left( \frac{n}{{{\mathrm{cols}}}(\mathcal {M})} + 1\right) \cdot {{\mathrm{rows}}}(\mathcal {M}) \cdot s, \end{aligned}$$
since \({{\mathrm{rows}}}(l) = l \cdot {{\mathrm{rows}}}(\mathcal {M})\) for this construction of the monotone family of CFFs, and \(l = \left\lceil n/{{\mathrm{cols}}}(\mathcal {M})\right\rceil \le n/{{\mathrm{cols}}}(\mathcal {M}) + 1\).
Therefore we see that the length of the aggregate signature is linear in n, but the factor \({{\mathrm{rows}}}(\mathcal {M})/\!{{\mathrm{cols}}}(\mathcal {M})\) can be made arbitrarily small by choosing a proper CFF, such as the one described above.
It is an interesting open problem to construct an unbounded fault-tolerant scheme with a better compression ratio, for example by finding a better monotone family of CFFs. A generalization of the above construction to multivariate polynomials, which might be advantageous in some scenarios, is given in Appendix B.
Example Instantiations. Table 1 shows parameters of several cover-free families based on the construction described in this section. For each of the rows given there, there is an instance of our fault-tolerant signature scheme that can compress signatures for up to n claims into a vector of m aggregates, while tolerating up to d errors. (The numbers q and k are needed for the instantiation of the CFF, but do not immediately reflect a property of our fault-tolerant aggregate signature scheme.) Of course, our scheme can be instantiated with different parameters and completely different constructions of CFFs as well.
Notes
 1.
The name “list verification” is chosen to indicate the changes in syntax, in particular that the verification algorithm outputs a multiset (list) instead of just 1 or 0.
 2.
While there may be schemes with valid signatures which are not regularly generated, as in the usual correctness properties, our guarantees only concern regular signatures.
 3.
Intuitively, one would expect \(R = \varSigma .\mathsf {Verify}({C}, {\tau })\). However, this is not achievable in general, as the aggregation of multiple irregular signatures may contain a new valid claim \(c_i\) corresponding to an irregular signature \(\sigma _i\). This does not contradict security, as crafting such irregular signatures may be hard if one does not know \(\sigma _i\).
 4.
The size of an aggregated signature might depend on the aggregation order.
References
Ahn, J.H., Green, M., Hohenberger, S.: Synchronized aggregate signatures: new definitions, constructions and applications. In: Al-Shaer, E., Keromytis, A.D., Shmatikov, V. (eds.) CCS 2010, pp. 473–484. ACM Press, October 2010
Brogle, K., Goldberg, S., Reyzin, L.: Sequential aggregate signatures with lazy verification from trapdoor permutations. Inf. Comput. 239, 356–376 (2014). doi:10.1016/j.ic.2014.07.001
Bagherzandi, A., Jarecki, S.: Identity-based aggregate and multi-signature schemes based on RSA. In: Nguyen, P.Q., Pointcheval, D. (eds.) PKC 2010. LNCS, vol. 6056, pp. 480–498. Springer, Heidelberg (2010)
Bellare, M., Neven, G.: Identity-based multi-signatures from RSA. In: Abe, M. (ed.) CT-RSA 2007. LNCS, vol. 4377, pp. 145–162. Springer, Heidelberg (2006)
Boldyreva, A., Gentry, C., O’Neill, A., Yum, D.H.: Ordered multisignatures and identity-based sequential aggregate signatures, with applications to secure routing. In: Ning, P., di Vimercati, S.D.C., Syverson, P.F. (eds.) CCS 2007, pp. 276–285. ACM Press, October 2007
Boneh, D., Gentry, C., Lynn, B., Shacham, H.: Aggregate and verifiably encrypted signatures from bilinear maps. In: Biham, E. (ed.) EUROCRYPT 2003, vol. 2656. LNCS, pp. 416–432. Springer, Heidelberg (2003)
Cramer, R., Hanaoka, G., Hofheinz, D., Imai, H., Kiltz, E., Pass, R., Shelat, A., Vaikuntanathan, V.: Bounded CCA2secure encryption. In: Kurosawa, K. (ed.) ASIACRYPT 2007. LNCS, vol. 4833, pp. 502–518. Springer, Heidelberg (2007)
Dyachkov, A.G., Macula, A.J., Rykov, V.V.: New applications and results of superimposed code theory arising from the potentialities of molecular biology. In: Althöfer, I., Cai, N., Dueck, G., Khachatrian, L., Pinsker, M.S., Sárközy, A., Wegener, I., Zhang, Z. (eds.) Numbers, Information and Complexity, pp. 265–282. Springer, Heidelberg (2000)
Dyachkov, A.G., Macula, A.J., Rykov, V.V.: New constructions of superimposed codes. IEEE Trans. Inf. Theory 46(1), 284–290 (2000). doi:10.1109/18.817530
Dodis, Y., Katz, J., Xu, S., Yung, M.: Keyinsulated public key cryptosystems. In: Knudsen, L.R. (ed.) EUROCRYPT 2002. LNCS, vol. 2332, pp. 65–82. Springer, Heidelberg (2002)
Dyachkov, A.G., Rykov, V.V.: Bounds on the length of disjunctive codes. Problemy Peredachi Informatsii 18(3), 7–13 (1982)
Füredi, Z.: On \(r\)coverfree families. J. Comb. Theory, Ser. A 73(1), 172–173 (1996). doi:10.1006/jcta.1996.0012
Gerbush, M., Lewko, A., O’Neill, A., Waters, B.: Dual form signatures: an approach for proving security from static assumptions. In: Wang, X., Sako, K. (eds.) ASIACRYPT 2012. LNCS, vol. 7658, pp. 25–42. Springer, Heidelberg (2012). doi:10.1007/9783642349614_4
Gentry, C., Ramzan, Z.: Identitybased aggregate signatures. In: Yung, M., Dodis, Y., Kiayias, A., Malkin, T. (eds.) PKC 2006. LNCS, vol. 3958, pp. 257–273. Springer, Heidelberg (2006)
Herranz, J.: Deterministic identitybased signatures for partial aggregation. Comput. J. 49(3), 322–330 (2006). doi:10.1093/comjnl/bxh153
Hofheinz, D., Jager, T., Kiltz, E.: Short signatures from weaker assumptions. In: Lee, D.H., Wang, X. (eds.) ASIACRYPT 2011. LNCS, vol. 7073, pp. 647–666. Springer, Heidelberg (2011)
Heng, S.H., Kurosawa, K.: kresilient identitybased encryption in the standard model. In: Okamoto, T. (ed.) CTRSA 2004. LNCS, vol. 2964, pp. 67–80. Springer, Heidelberg (2004)
Hohenberger, S., Koppula, V., Waters, B.: Universal signature aggregators. In: Oswald, E., Fischlin, M. (eds.) EUROCRYPT 2015. LNCS, vol. 9057, pp. 3–34. Springer, Heidelberg (2015). doi:10.1007/9783662468036_1
Hohenberger, S., Sahai, A., Waters, B.: Full domain hash from (leveled) multilinear maps and identitybased aggregate signatures. In: Canetti, R., Garay, J.A. (eds.) CRYPTO 2013, Part I. LNCS, vol. 8042, pp. 494–512. Springer, Heidelberg (2013). doi:10.1007/9783642400414_27
Kumar, R., Rajagopalan, S., Sahai, A.: Coding constructions for blacklisting problems without computational assumptions. In: Wiener, M. (ed.) CRYPTO 1999. LNCS, vol. 1666, pp. 609–623. Springer, Heidelberg (1999)
Kautz, W.H., Singleton, R.C.: Nonrandom binary superimposed codes. IEEE Trans. Inf. Theory 10(4), 363–377 (1964). doi:10.1109/TIT.1964.1053689
Lee, K., Lee, D.H., Yung, M.: Sequential aggregate signatures with short public keys without random oracles. Theor. Comput. Sci. 579, 100–125 (2015). doi:10.1016/j.tcs.2015.02.01923
Lu, S., Ostrovsky, R., Sahai, A., Shacham, H., Waters, B.: Sequential aggregate signatures and multisignatures without random oracles. In: Vaudenay, S. (ed.) EUROCRYPT 2006. LNCS, vol. 4004, pp. 465–485. Springer, Heidelberg (2006)
Li, P., Van Rees, G., Wei, R.: Constructions of 2coverfree families and related separating hash families. J. Comb. Des. 14(6), 423–440 (2006)
Lebedev, V., Vilenkin, P., Yekhanin, S.: Coverfree families and superimposed codes: constructions, bounds, and applications to cryptography and group testing. In: IEEE International Symposium on Information Theory (2001)
Lysyanskaya, A., Micali, S., Reyzin, L., Shacham, H.: Sequential aggregate signatures from trapdoor permutations. In: Cachin, C., Camenisch, J.L. (eds.) EUROCRYPT 2004. LNCS, vol. 3027, pp. 74–90. Springer, Heidelberg (2004)
Ma, D., Tsudik, G.: A new approach to secure logging. TOS 5(1) (2009). doi:10.1145/1502777.1502779
Neven, G.: Efficient sequential aggregate signed data. In: Smart, N.P. (ed.) EUROCRYPT 2008. LNCS, vol. 4965, pp. 52–69. Springer, Heidelberg (2008)
Ruszinkó, M.: On the upper bound of the size of the \(r\)coverfree families. J. Comb. Theory, Ser. A 66(2), 302–310 (1994). doi:10.1016/00973165(94)900671
Schröder, D.: How to aggregate the CL signature scheme. In: Atluri, V., Diaz, C. (eds.) ESORICS 2011. LNCS, vol. 6879, pp. 298–314. Springer, Heidelberg (2011). doi:10.1007/9783642238222_17
Stinson, D.R., Trung, T.V., Wei, R.: Secure frameproof codes, key distribution patterns, group testing algorithms and related structures. J. Stat. Plann. Infer. 86, 595–617 (1997)
SafaviNaini, R., Wang, H.: Multireceiver authentication codes: models, bounds, constructions, and extensions. Inf. Comput. 151(1–2), 148–172 (1999). doi:10.1006/inco.1998.2769
Tonien, D., SafaviNaini, R.: An efficient singlekey pirates tracing scheme using coverfree families. In: Zhou, J., Yung, M., Bao, F. (eds.) ACNS 2006. LNCS, vol. 3989, pp. 82–97. Springer, Heidelberg (2006)
Zaverucha, G.M., Stinson, D.R.: Short onetime signatures. Adv. Math. Comm. 5(3), 473–488 (2011). doi:10.3934/amc.2011.5.473
Acknowledgements
We wish to thank our colleague and friend Julia Hesse for raising the initial research question that led to this work. We would also like to thank the anonymous reviewers for their helpful comments.
Appendices
A Discussion on Signature Size
A typical requirement for aggregate signatures is that the length of an aggregate signature is the same as that of any of the individual signatures [HSW13]. Also, the number of signatures that can be aggregated into a single signature should be unbounded.
We show that these goals are mutually exclusive for an “ideal” fault-tolerant aggregate signature scheme if one wishes to maintain a constant \(d \ge 1\).
Proposition 1
Let \(n,d \in {\mathbb {N}}\), and \(\varSigma \) be a d-fault-tolerant signature scheme. Assume that \(\varSigma .\mathsf {Verify}({C}, {\tau }) = R\) for all claim sequences \({C}\) and corresponding signatures \({\tau }\) constructed from an arbitrary multiset \(M = {\{}(c_1, {\tau }_1), \ldots , (c_n, {\tau }_n){\}}\) of n claim–signature pairs and containing at most d errors, where R is the multiset of all claims \(c_i\) accompanied by a regular signature \({\tau }_i\) in M. Then we have \(|{\tau }| \ge \varOmega (\log _2 n)\) as a function of n, where d is considered constant, and \(|{\tau }|\) is the length of the signature \({\tau }\) in bits.
Proof
Call an output O of \(\varSigma .\mathsf {Verify}\) in accordance with \({C}\), if O is a submultiset of \(\mathsf {elem}\!\left( {C}\right) \) and \(|O| \ge |\mathsf {elem}\!\left( {C}\right)| - d\).
Now, let \(n,d, \varSigma , {C}, {\tau }, M, R\) be as in the theorem statement. Clearly, since we assumed that \(\mathsf {Verify}\) always outputs R, \(\varSigma .\mathsf {Verify}\)’s output must be in accordance with \({C}\). For a fixed number of errors \(i \in \left\{ 0, \ldots , d\right\} \), there are \(\left( {\begin{array}{c}n\\ i\end{array}}\right) \) distinct outputs in accordance with \({C}\). Thus, for up to d errors, there are up to
$$ s(n) := \sum _{i=0}^{d} \left( {\begin{array}{c}n\\ i\end{array}}\right) $$
distinct outputs in accordance with \({C}\).
\(\varSigma .\mathsf {Verify}\) must use \({\tau }\) to determine the correct output R among the set of outputs in accordance with \({C}\). If the signature size \(|{\tau }|\) is at most \(l \in {\mathbb {N}}\) bits, then \(\varSigma .\mathsf {Verify}\) can distinguish at most \(2^l\) cases based on \({\tau }\). Thus, we must have \( 2^l \ge s(n)\), or, equivalently,
$$ l \ge \log _2 s(n), $$
which is \(\varOmega (\log _2 n)\) for constant d, since \(s(n) \ge \left( {\begin{array}{c}n\\ d\end{array}}\right) \). This concludes the proof. \(\square \)
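As a quick numeric sketch of the counting argument, the helper `s` below (an illustrative name, not part of the paper's notation) computes the number of outputs in accordance with a claim sequence, \(\sum _{i=0}^{d} \binom{n}{i}\), and the resulting lower bound on the signature length in bits:

```python
from math import comb, log2

def s(n, d):
    """Number of outputs "in accordance" with a length-n claim sequence
    when at most d errors occur: sum of binomial(n, i) for i = 0..d."""
    return sum(comb(n, i) for i in range(d + 1))

# The proof requires 2**l >= s(n), i.e. a signature length of at least
# log2(s(n)) bits; for constant d this grows like Omega(log n).
d = 2
for n in (16, 256, 4096):
    print(n, s(n, d), round(log2(s(n, d)), 1))
```

For d = 2, quadrupling n roughly adds \(2\log _2 4 = 4\) bits to the lower bound, matching the logarithmic growth stated in the proposition.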
Note that the assumption that \(\varSigma .\mathsf {Verify}({C}, {\tau }) = R\) is somewhat artificial: we assume an ideal d-fault-tolerant signature scheme, where \(\varSigma .\mathsf {Verify}\) always “magically” outputs the correct multiset R when called with a claim sequence containing n claims.
On the one hand, \(\varSigma .\mathsf {Verify}({C}, {\tau }) \supseteq R\) is required by the d-fault-tolerance of \(\varSigma \). Intuitively, one would expect the other direction, \(\varSigma .\mathsf {Verify}({C}, {\tau }) \subseteq R\), to follow from the security of \(\varSigma \). However, this does not appear to follow in general, for two reasons:
The first reason is that security is only required against adversaries that have running time polynomial in \(\kappa \), i.e. adversaries that can create at most a polynomial number of claims.
The second reason is that if, for two fixed \({C}, {\tau }\), there is a claim \(c = (\mathsf {pk}, m)\) in \(\varSigma .\mathsf {Verify}({C}, {\tau })\) that is not in R, then this violates the security definition only if the challenge public key randomly drawn by the security experiment happens to equal \(\mathsf {pk}\).
B Cover-Free Families Using Multivariate Polynomials
For our polynomial-based construction, we can also use multivariate polynomials \(f\in \mathbb {F}_q[X_1, \ldots , X_t]\), \(t\in \mathbb {N}\), of degree at most k. Each multivariate polynomial f of degree \(\le k\) is a sum of monomials of the form \(a_{i_1,\ldots ,i_t}X_1^{i_1}\cdots X_t^{i_t}\), where \(a_{i_1,\ldots ,i_t}\in \mathbb {F}_q\) and \(i_1+\cdots +i_t\le k\). We denote by \(\mathbb {F}_{q}[X_1, \ldots , X_t]_{\le k}\) the set of all multivariate polynomials \(f\in \mathbb {F}_q[X_1,\ldots ,X_t]\) of degree at most k, i.e.
$$ \mathbb {F}_{q}[X_1, \ldots , X_t]_{\le k} := \left\{ f \in \mathbb {F}_q[X_1,\ldots ,X_t] : \deg f \le k \right\} . $$
For the maximal number of monomials of degree exactly k, we obtain \({t+k-1 \atopwithdelims ()k}\). Hence, for degree at most k, we have
$$ \left| \mathbb {F}_{q}[X_1, \ldots , X_t]_{\le k} \right| = q^{t+k \atopwithdelims ()k}. $$
We can now define
$$ \mathcal {X} := \mathbb {F}_q^t \times \mathbb {F}_q $$
and
$$ B_f := \left\{ (x, f(x)) : x \in \mathbb {F}_q^t \right\} \quad \text {for each } f \in \mathbb {F}_{q}[X_1, \ldots , X_t]_{\le k}. $$
Now, we set
$$ \mathcal {B} := \left\{ B_f : f \in \mathbb {F}_{q}[X_1, \ldots , X_t]_{\le k} \right\} . $$
The number of zeros of a nonzero polynomial of degree at most k is at most \(k\cdot q^{t-1}\) and thus, for pairwise different \(B_f, B_{f_1}, \ldots , B_{f_d}\in \mathcal {B}\), it holds that
$$ \left| B_f \setminus \left( B_{f_1} \cup \cdots \cup B_{f_d} \right) \right| \ge q^t - d\cdot k\cdot q^{t-1}. $$
To achieve a d-CFF with this construction, \(q^t\ge d\cdot k\cdot q^{t-1} +1\) must be fulfilled.
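The parameter relations of this construction can be tabulated with a short helper. This is an illustrative sketch under stated assumptions: it takes \(m = |\mathcal {X}| = q^{t+1}\), \(n = |\mathcal {B}| = q^{\binom{t+k}{k}}\) (one block per polynomial of degree at most k), and the largest d satisfying \(q^t \ge d\cdot k\cdot q^{t-1}+1\); the sample values q = 8, t = 2, k = 2 are chosen only for demonstration.

```python
from math import comb

def multivariate_cff_params(q, t, k):
    """Parameters of the multivariate-polynomial d-CFF sketched above.

    Assumed relations (from the construction in this appendix):
      ground set size  m = q**(t+1),
      number of blocks n = q**comb(t+k, k)  (one block per polynomial),
      max tolerated d from q**t >= d*k*q**(t-1) + 1.
    """
    m = q ** (t + 1)
    n = q ** comb(t + k, k)
    d = (q ** t - 1) // (k * q ** (t - 1))
    return n, m, d

n, m, d = multivariate_cff_params(q=8, t=2, k=2)
print(n, m, d)  # → 262144 512 3: n = 8**6 blocks, m = 512, up to d = 3 errors
```

Note how n grows with the exponent \(\binom{t+k}{k}\) while m grows only as \(q^{t+1}\), which is what makes the compression worthwhile.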
Compression Ratio of Our Bounded Scheme. If our bounded scheme is instantiated with this multivariate CFF, and we assume for simplicity that the size of signatures of the underlying scheme \(\varSigma '\) is bounded by a constant, then, as shown in (3), the compression ratio is
For \(n = |\mathcal {B}|\), we therefore have
Compression Ratio of Our Unbounded Scheme. By using Lemma 1 on \(\mathcal {M}\) we can also obtain a monotone CFF based on multivariate polynomials and use it to instantiate our unbounded scheme. The discussion about the compression ratio of the unbounded scheme in Sect. 5 also applies to this instantiation.
Copyright information
© 2016 International Association for Cryptologic Research
Cite this paper
Hartung, G., Kaidel, B., Koch, A., Koch, J., Rupp, A. (2016). Fault-Tolerant Aggregate Signatures. In: Cheng, C.-M., Chung, K.-M., Persiano, G., Yang, B.-Y. (eds.) Public-Key Cryptography – PKC 2016. Lecture Notes in Computer Science, vol. 9614. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-49384-7_13
DOI: https://doi.org/10.1007/978-3-662-49384-7_13
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-662-49383-0
Online ISBN: 978-3-662-49384-7