A Panorama of Post-quantum Cryptography
Abstract
In 1994, Peter Shor published a quantum algorithm capable of factoring large integers and computing discrete logarithms in Abelian groups in polynomial time. Since these computational problems provide the security basis of conventional asymmetric cryptosystems (e.g., RSA, ECC), information encrypted under such schemes today may well become insecure in a future scenario where quantum computers are a technological reality. Fortunately, certain classical cryptosystems based on entirely different intractability assumptions appear to resist Shor’s attack, as well as other known attacks based on quantum computing. The security of these schemes, which are dubbed post-quantum cryptosystems, stems from hard problems on lattices, error-correcting codes, multivariate quadratic systems, and hash functions. Here we introduce the essential notions related to each of these schemes and explore the state of the art on practical aspects of their adoption and deployment, like key sizes and cryptogram/signature bandwidth overhead.
1 Introduction
In the 1990s, Peter Shor introduced new concerns to cryptography. He discovered a quantum algorithm able to factor large integers and compute discrete logarithms in finite fields in polynomial time, more precisely O(log^{3}N) [84]. These concerns stem from the fact that the security of conventional techniques used in asymmetric cryptography is based precisely on these or related problems (e.g., RSA, ECC) [59, 81].
An even more immediate threat is the recent discovery of classical algorithms for solving certain discrete logarithm problems used in asymmetric encryption [6].
Fortunately, there exist cryptographic schemes based on different computational problems that resist the known attacks with quantum computers. They became known as post-quantum cryptosystems: this is the case of cryptosystems based on lattices [42], error-correcting codes [55, 64], multivariate quadratic systems (\(\mathcal{M}\mathcal{Q}\)) [28, 48], and hash functions [28, 29], not counting symmetric constructions in general.
The new Internet of Things paradigm allows virtually any object to connect to the Internet. A side effect of this interconnectivity is the potential vulnerability of such embedded systems. Attacks that were primarily aimed at PCs can now be launched against cars, cell phones, e-tickets, and RFID tags. In this scenario, devices are typically characterized by a limited energy supply (via battery), limited processing power and storage, and often low-bandwidth communication channels (e.g., SMS).
Since embedded systems are typically deployed on a large scale, cost becomes the designer’s main concern. Security solutions for embedded systems must therefore be low cost, which can be achieved with tools that minimize transmission overhead, processing, and memory occupation. In this sense, symmetric encryption techniques already meet the required metrics, and asymmetric encryption is the bottleneck in most cases.
Asymmetric cryptographic primitives for encryption and digital signatures are essential in a modern security framework. However, conventional techniques are not efficient enough in some aspects, which makes them unsuitable for embedded platforms, especially highly resource-constrained ones. In this context, the absence of costly operations (operations on large integers, especially modular exponentiations) makes post-quantum techniques more attractive in such scenarios, as described previously.
The objective of this chapter is to introduce the basics of the main lines of post-quantum cryptography research (hash-based signatures, \(\mathcal{M}\mathcal{Q}\) systems, error-correcting codes, and lattices), as well as presenting the latest research focusing on improvements regarding key sizes and signature/cryptogram overheads of these schemes.
2 Hash-Based Digital Signature Schemes
Hash-based digital signature schemes became popular after Ralph Merkle’s work [56] in 1979. The scheme proposed by Merkle (MSS) is inspired by Lamport and Diffie’s one-time signature scheme [49]. The security of these signature schemes depends on the collision and preimage resistance of the hash function used. MSS is considered practical and, although no formal proof exists, is believed to be resistant against quantum computers. The disadvantage of one-time signature schemes is that a key pair can only be used for one signature, although this signature can be verified an arbitrary number of times.
2.1 Hash Function
Cryptographic hash functions are used in security applications such as digital signatures, identification data and key derivation, among others. Formally, a hash function \(h:\{ 0,1\}^{{\ast}}\rightarrow \{ 0,1\}^{n}\) takes as input an arbitrarily long string m and returns a fixed string r of size n, the hash value (i.e., r = h(m)).
Since the image {0, 1}^{n} of h is much smaller than its domain {0, 1}^{∗}, more than one message is necessarily mapped to the same hash value (or digest). Some applications require that it be computationally infeasible for an attacker to find two messages that generate the same digest; one such example is that of digital signatures, in which the hashes of messages are signed, not the messages themselves.
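As a concrete illustration of this fixed-size compression (using SHA-256 via Python’s standard hashlib; the choice of function is ours, not the text’s):

```python
import hashlib

# Inputs of very different lengths map to digests of the same fixed size n,
# so collisions must exist; security only requires that they be infeasible
# to find.
d1 = hashlib.sha256(b"short message").hexdigest()
d2 = hashlib.sha256(b"x" * 1_000_000).hexdigest()
assert len(d1) == len(d2) == 64   # n = 256 bits = 64 hex characters
```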
2.2 Properties
Preimage resistance: Let r be a known digest. Then it should be infeasible to find a value m such that h(m) = r.
Second preimage resistance: Let m be a known message. Then it should be infeasible to find m^{′} such that m^{′} ≠ m and h(m^{′}) = h(m).
Collision resistance: It should be infeasible to find a pair \(m,m^{{\prime}}\in \{ 0,1\}^{{\ast}}\) such that m^{′} ≠ m and h(m^{′}) = h(m).
Another desirable property for practical applications is that the hash function be efficient (speed, memory, energy, etc.) when implemented on various computing platforms (hardware and/or software). It is easy to see that a collision-resistant function is also second preimage resistant, but the converse is not necessarily true.
2.3 Construction of Hash Functions
The design of hash functions has been based on various techniques, such as block ciphers [54, 78, 93], the iterative Merkle-Damgård method [56], the sponge construction [16], and arithmetic primitives [25].
Standards based on these functions have evolved, mainly due to successive attacks advertised in the literature and specialized events. Recently, the National Institute of Standards and Technology (NIST) selected the Keccak [15] algorithm as the winner of a 5-year competition to create a new Secure Hash Algorithm standard (SHA-3).
2.4 Signature Schemes
The key generation algorithm GEN receives as input a security parameter 1^{n} and produces a key pair (X, Y ), where X is the private key and Y is the public key.
The signature generation algorithm SIG takes as input a message M ∈ { 0, 1}^{∗} and a private key X and produces a signature Sig, denoted by \(\mathit{Sig} \leftarrow \mathit{SIG}_{X}(M)\).
The signature verification algorithm VER takes as input a message M, a signature Sig of M, and a public key Y and produces a bit b, where b = 1 means that the signature is valid and b = 0 indicates that the signature is not valid.
2.5 One-Time Signature Schemes
One-time signature schemes first appeared in the work of Lamport [49] and Rabin [27, Chapter “Digitalized signatures” by Michael O. Rabin]. Merkle [56] proposed a technique to transform a one-time signature scheme into a scheme with an arbitrary but fixed number of signatures. The following describes the Lamport [49] and Winternitz [57] schemes.
2.5.1 Lamport-Diffie One-Time Signature Scheme
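A minimal sketch of the Lamport-Diffie scheme follows (a toy illustration; SHA-256 and all names and parameters are our assumptions): the private key consists of 2n random strings, the public key of their hashes, and a signature reveals one preimage per bit of the message digest.

```python
import hashlib
import os

H = lambda x: hashlib.sha256(x).digest()
n = 256  # bit length of the digest being signed

def keygen():
    # Private key: 2n random strings; public key: their hash values.
    X = [[os.urandom(32) for _ in range(n)] for _ in range(2)]
    Y = [[H(x) for x in row] for row in X]
    return X, Y

def bits(msg):
    d = H(msg)
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(n)]

def sign(X, msg):
    # Reveal one preimage per digest bit: X[b][i] for bit value b at position i.
    return [X[b][i] for i, b in enumerate(bits(msg))]

def verify(Y, msg, sig):
    return all(H(s) == Y[b][i] for (i, b), s in zip(enumerate(bits(msg)), sig))
```

Note that a second signature under the same key would reveal further preimages, which is precisely why the key pair must be used only once.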
2.5.2 Winternitz One-Time Signature Scheme
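The Winternitz scheme signs several bits at once by iterating the hash along chains, trading signature size for computation. A minimal sketch follows (a toy illustration; SHA-256, w = 8, and the checksum layout are our assumptions):

```python
import hashlib
import math
import os

H = lambda x: hashlib.sha256(x).digest()

def chain(x, e):
    # e-fold iterated hash of x
    for _ in range(e):
        x = H(x)
    return x

w = 8                                     # Winternitz parameter: bits per block
t1 = 256 // w                             # blocks covering the 256-bit digest
t2 = math.ceil(math.log2(t1 * (2**w - 1) + 1) / w)   # checksum blocks
t = t1 + t2

def blocks(msg):
    b = list(H(msg))                      # with w = 8, each block is one byte
    c = sum((2**w - 1) - bi for bi in b)  # checksum prevents advancing chains
    for _ in range(t2):
        b.append(c % 2**w)
        c //= 2**w
    return b

def keygen():
    X = [os.urandom(32) for _ in range(t)]        # private: t random strings
    Y = [chain(x, 2**w - 1) for x in X]           # public: ends of the chains
    return X, Y

def sign(X, msg):
    return [chain(x, b) for x, b in zip(X, blocks(msg))]

def verify(Y, msg, sig):
    # completing each chain must land exactly on the public key value
    return all(chain(s, (2**w - 1) - b) == y
               for s, b, y in zip(sig, blocks(msg), Y))
```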
2.6 Merkle Digital Signature Scheme
In the Merkle digital signature scheme described below, the one-time signing key and the verification key are the leaves of the tree, and the public key is the root. A tree with height H and 2^{H} leaves will have 2^{H} one-time key pairs (public and private).
2.6.1 Merkle Key Generation
For the generation of the Merkle public key (pub), which corresponds to the root of the Merkle tree, one must first generate 2^{H} one-time key pairs (public and private), for each leaf of the Merkle tree.
One-Time Key Pair Generation A one-time signature algorithm generates private keys X[u] and public keys Y [u], for each leaf of the Merkle tree, \(u = 0,\ldots,2^{H} - 1\). Algorithm 2.1 describes the process of one-time key pair generation.
Algorithm 2.1 Winternitz one-time key pair generation (Leafcalc) [56]
Merkle Public Key Generation (Pub) Algorithm 2.2 generates the Merkle tree public key pub. The input values are the initial leaf leafIni and tree height H. Each leaf node node[u] of the tree receives the corresponding verification key Y [u]. The inner nodes of the Merkle tree node[parent] contain the hash value of the concatenation of their left and right children, node[left] and node[right], respectively. Each time a leaf u is calculated and stacked in stackNode, the algorithm checks if the nodes at the top of the stackNode have the same height. If the nodes have the same height, the two nodes will be unstacked and the hash value of their concatenation will be pushed into stackNode. The algorithm terminates when the root of the tree is found.
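The stack-based procedure described above can be sketched as follows (a toy illustration assuming SHA-256; the function names are ours):

```python
import hashlib

H = lambda left, right: hashlib.sha256(left + right).digest()

def calc_root(leaves):
    # Stack-based Merkle root computation: push each leaf at height 0;
    # whenever the two top nodes have equal height, pop them and push
    # the hash of their concatenation one level up.
    stack = []                      # entries are (height, node_value)
    for leaf in leaves:
        node = (0, leaf)
        while stack and stack[-1][0] == node[0]:
            height, left = stack.pop()
            node = (height + 1, H(left, node[1]))
        stack.append(node)
    return stack[-1][1]             # the root, for 2^H leaves
```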
Algorithm 2.2 Merkle public key generation (CalcRoot) [56]
2.6.2 MSS Signature Generation
The MSS scheme allows the generation of 2^{H} signatures for a tree of height H. Suppose we want to sign messages M[u], for \(u = 0,\ldots,2^{H} - 1\). Each message M[u] is signed with the one-time signature key X[u], resulting in a signature Sig[u].
An authentication path Aut is used to store the nodes in the path needed to authenticate leaf Y [u], eliminating the need for sending the whole tree to the receiver.
Output and Update Phases Algorithm 2.3 shows the steps for producing the authentication path for the next leaf u in the tree. The algorithm starts by signing leaf u = 0; then, the leaf is updated in one unit, and the next authentication path is computed efficiently since only the nodes that change in the path will be updated.
Algorithm 2.3 The Path Regeneration Algorithm [56]
2.6.3 MSS Signature Verification
2.7 CMSS: An Improved Merkle Signature Scheme
The CMSS scheme [20] is a variation of the MSS scheme which allows the increase of the number of signatures from 2^{20} to 2^{40}. In addition, CMSS reduces key pair generation time, signature generation time, and private key size. In [20] it was demonstrated that CMSS is competitive in practice, by presenting a highly efficient implementation within the Java Cryptographic Service Provider FlexiProvider and showing that the implementation can be used to sign messages within Microsoft Outlook.
In the CMSS scheme, two MSS authentication trees are used, a subtree and a main tree, each one with 2^{h} leaves, where \(h = H/2\). Thus, we increase the number of signatures in relation to MSS. Note that MSS becomes impractical for H > 25 since private keys are too large and the key pair generation takes too much time. For example, to generate 2^{20} signature keys, two trees with 2^{10} leaves are generated with CMSS, while with MSS, a single tree with 2^{20} leaves is constructed. Therefore, key generation time is reduced.
In order to improve signature generation time, CMSS uses Szydlo’s algorithm [89], which is more efficient for constructing authentication paths. This algorithm was implemented in [24], in which the purpose is to balance the number of calculated leaves in each authentication path.
As for reducing the private key size, a pseudorandom number generator (PRNG) [65] is used, of which only the seed is stored. Using an n-bit hash function and t Winternitz key strings, the signature key has (t ⋅ n) bits; thus, one needs to store only an n-bit seed.
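The seed-based key derivation can be sketched as follows (an illustrative construction assuming SHA-256 with domain separation as the PRNG, not the exact PRNG of [65]):

```python
import hashlib

def prng(seed):
    # Illustrative forward-secure PRNG: each call yields one output value
    # and the next seed (SHA-256 with domain separation is our assumption).
    out = hashlib.sha256(b"out" + seed).digest()
    nxt = hashlib.sha256(b"seed" + seed).digest()
    return out, nxt

def derive_ots_keys(seed, t):
    # The t signature key strings are regenerated on demand from the seed,
    # so only the n-bit seed must be stored as the private key.
    keys = []
    for _ in range(t):
        k, seed = prng(seed)
        keys.append(k)
    return keys
```

Regenerating from the same stored seed always reproduces the same signature key, so the full (t ⋅ n)-bit key never needs to be kept in memory.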
The CMSS public key is the root of the main tree. The messages are signed using the leaves of the subtree. After the first 2^{h} signatures have been generated, a new subtree is constructed and used to generate the next 2^{h} signatures.
CMSS Key Generation For key pair generation, the MSS key pair generation is called twice. The subtree and its first authentication path are generated. Then, the main tree and its first authentication path are computed.
CMSS Signature Generation CMSS signature generation is carried out in several parts. First, the one-time signature of the message is computed using a leaf of the subtree. Then, the one-time signature of the root of the subtree is computed using a leaf of the main tree. This signature is recomputed for the next signature only if all the leaves of the current subtree have already been used. Next, the authentication paths of both trees (main and subtree) are appended to the signature, and the next authentication paths are computed. Thus, the next subtree is partially constructed, and the CMSS private key is updated.
CMSS Verification To verify a CMSS signature, one must check both one-time signatures and the roots of both trees (subtree and main tree).
2.8 GMSS: Merkle Signatures with Virtually Unlimited Signature Capacity
The GMSS scheme was published in 2007 [23]. It is another variation of the Merkle signature scheme, which allows a virtually unlimited number of messages (2^{80}) to be signed with one key pair. The basic construction of GMSS consists of a tree with T layers (subtrees). Subtrees in different layers may have different heights. To reduce the cost of signature time, GMSS distributes the cost of one signature generation across previous signatures and key generation. Thus, this scheme allows for the choice of different Winternitz parameters w in different subtrees, in order to produce smaller signatures.
GMSS Key Generation For each subtree, the one-time key generation algorithm computes the signature keys and Algorithm 2.2 calculates the roots. The first authentication path of each subtree is stored during generation of the root. Then, the signatures Sig_{τ} of Merkle subtrees, for τ = 2, …, T, will be computed to be used in the first signature. Since those signature values change less frequently for the upper layers, the precomputation can be distributed over many stages, resulting in a significant improvement of the signing speed. To ensure small size of private keys, only the seed of the PRNG needs to be stored.
GMSS Signature Generation The root of a subtree is signed with the one-time signature key corresponding to the parent tree. Root_{τ} denotes the root of the tree τ. Sig_{τ} denotes the one-time signature of Root_{τ}, which is generated using the leaf l of parent τ. The message digest d is signed using the leaves on the deepest layer T.
A GMSS signature contains:
- the index of the leaf s;
- the one-time signatures Sig_{d} and \(\mathit{Sig}_{\tau _{i,j_{ i}}}\) for i = 2, …, T, \(j = 0,\ldots,2^{h_{1}+\ldots +h_{i-1}} - 1\);
- the authentication paths \(\mathit{Aut}[\tau _{i,j_{i}},l_{i}]\) of leaves l_{i}, for i = 1, …, T, \(j = 0,\ldots,2^{h_{1}+\ldots +h_{i-1}} - 1\).
During the signature generation roots \(\mathit{Root}_{\tau _{ i,1}}\) are also calculated, as are the authentication paths \(\mathit{Aut}[\tau _{i,1},0]\) of trees τ_{i, 1}, for i = 2, …, T. The signature generation is split into two parts. The first, online part, computes Sig_{d}. The second, offline part, precomputes the authentication paths and one-time signatures of the roots required for upcoming signatures.
GMSS Verification The GMSS signature verification is essentially the same as that of schemes MSS and CMSS: the verifier checks the one-time signatures Sig_{d} and \(\mathit{Sig}_{\tau _{ i,j_{i}}}\) for i = 2, …, T and \(j = 0,\ldots,2^{h_{1}+\ldots +h_{i-1}} - 1\). Therefore, she verifies the roots Root_{τ} for τ = 2, …, T, and the public key using the corresponding authentication path.
2.9 XMSS: eXtended Merkle Signature Scheme
In XMSS, the one-time signature key x is used to construct the one-time verification key y by applying the function family F_{n}. In [22] the function family used was \(f_{K}(x) = g(\mathit{Pad}(K)\vert \vert \mathit{Pad}(x))\), for a key K ∈ { 0, 1}^{n} and x ∈ { 0, 1}^{n}, with \(\mathit{Pad}(z) = (z\vert \vert 10^{b-\vert z\vert -1})\) for | z | < b, where b is the block size of the hash function.
The XMSS scheme uses a slightly modified version of the WOTS proposed in [21]. This modification makes collision resistance unnecessary: the iterated evaluation of a hash function is replaced by a random walk through the function family F_{n}, as follows: for K, x ∈ { 0, 1}^{n}, e ∈ \(\mathbb{N}\), and f_{K} ∈ F_{n}, define f_{K}^{0}(x) = K and, for e > 0, \(f_{K}^{e}(x) = f_{K^{{\prime}}}(x)\), where \(K^{{\prime}} = f_{K}^{e-1}(x)\).
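This definition transcribes directly into code (with an illustrative SHA-256-based instantiation of the family F_n, which is our assumption, not the instantiation of [21]):

```python
import hashlib

def f(K, x):
    # one member f_K of the function family F_n (SHA-256-based; our assumption)
    return hashlib.sha256(K + x).digest()

def f_iter(K, x, e):
    # f_K^0(x) = K, and f_K^e(x) = f_{K'}(x) with K' = f_K^{e-1}(x):
    # the key is walked through the family while the input x stays fixed.
    if e == 0:
        return K
    return f(f_iter(K, x, e - 1), x)
```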
To generate a leaf of the XMSS tree, an L-tree is used. The one-time public verification keys \((y_{1},\ldots,y_{l})\) are the first l leaves of an L-tree. If l is not a power of 2, then there are not sufficiently many leaves. A node that has no right sibling is lifted to a higher level of the L-tree until it becomes the right sibling of another node. The hash function uses new bitmasks (bitmaskLtree). The bitmaskLtree are the same for each of those trees. The XMSS public key contains the bitmaskTree, bitmaskLtree, and the root of the XMSS tree.
2.10 Security of the Hash-Based Digital Signature Schemes
In this section we present the main results known about the security of hash-based digital signature schemes.
In [12, Chapter “Hash-based Digital Signature Schemes” by J. Buchmann, E. Dahmen and M. Szydlo], it was proved that the Lamport-Diffie one-time signature scheme has existential unforgeability under an adaptive chosen message attack (CMA-secure), assuming that the underlying one-way function is preimage resistant. In the same work, it was also proved that the Merkle signature scheme has existential unforgeability under the assumption that the hash function is collision resistant and the underlying one-time signature scheme has existential unforgeability.
On the security of XMSS, the following result was proved in [21]: if H_{n} is a second preimage-resistant hash function family and F_{n} a pseudorandom function family, then XMSS is existentially unforgeable under chosen message attacks. In addition, in the same paper, it was shown that XMSS is forward secure under some modifications of the key generation process.
Hülsing [44] showed that W-OTS is existentially unforgeable under adaptive chosen message attacks. In the same work it was also shown that the XMSS^{MT} scheme is secure; more specifically, the following result is proved: if H_{n} is a second-preimage-resistant hash function family and F_{n} is a pseudorandom function family, then XMSS^{MT} is a forward secure signature scheme.
2.11 Implementation Results
In this section we present a summary of recent works on the implementation of variants of the Merkle signature scheme.
CMSS scheme [20]: software implementation on a Pentium M 1.73 GHz with 1 GB of RAM running Microsoft Windows XP, for 2^{40} signatures and w = 3;
GMSS scheme [23]: software implementation on a dual-core Pentium 1.8 GHz, for 2^{40} signatures with w_{1} = 9 and w_{2} = 3 (390 min key generation, 10.7 ms signing, and 10.7 ms verification);
XMSS scheme [22]: software implementation on an Intel Core i5 M540 2.53 GHz computer with Infineon technology;
CMSS scheme [85]: hardware implementation of a novel architecture on an FPGA platform (Virtex-5);
XMSS scheme [66]: software implementation on an Intel Core i7-2670QM CPU, 2.20 GHz, with 6 GB of RAM.
Table 1 Implementation results

Scheme | Hash | H | w | t_{key} | t_{sig} (ms) | t_{ver} (ms)
---|---|---|---|---|---|---
CMSS [20] | SHA-2 | 40 | (3,3) | 120.7 min | 40.9 | 3.7
GMSS [23] | SHA-1 | 40 | (9,3) | 390 min | 10.7 | 10.7
XMSS [22] | SHA-2 | 20 | 4 | 408.6 s | 6.3 | 0.51
CMSS [85] | SHA-2 | 30 | 4 | 820 ms | 2.7 | 1.7
XMSS [66] | SHA-2 | 20 | 4 | 553 s | 2.7 | 0.31
In Table 1 the size of all public keys is 32 bytes, except for the XMSS scheme, which also has to store the bitmasks. The private key and signature are smaller in the XMSS scheme, since the other schemes must store information about more than one tree. The XMSS scheme presented the best timings for signing and verification in a software implementation, given that only one authentication path needs to be updated and checked per signature. However, XMSS is only recommended for applications requiring up to 2^{20} signature keys, since generating more keys is too time consuming. Multi Tree XMSS (XMSS^{MT}) [79], based on the CMSS and GMSS algorithms, is recommended for applications that require a large number of signatures.
3 Multivariate Schemes
Multivariate public key cryptosystems (MPKC) constitute one of the main public key families considered potentially resistant against the powerful quantum computers. The security of MPKC schemes is based upon the difficulty of solving nonlinear systems of equations over finite fields. In particular, most schemes are based upon multivariate systems of quadratic equations because of computational advantages. This problem is known as the multivariate quadratic problem, or \(\mathcal{M}\mathcal{Q}\) Problem, and it was shown to be NP-complete [69]. MPKC has been developed intensively in the last two decades. It was shown that, in general, the encryption schemes were not as secure as believed, while the signature constructions can be considered viable.
A formal definition for the \(\mathcal{M}\mathcal{Q}\) Problem is given as follows.
Definition 1 (\(\mathcal{M}\mathcal{Q}\) Problem)
Solve the random system \(p_{1}(\mathbf{x}) = p_{2}(\mathbf{x}) = \cdots = p_{m}(\mathbf{x}) = 0\), where each p_{i} is quadratic in variables \(\mathbf{x} = (x_{1},\ \ldots,\ x_{n})\). All coefficients and variables are in \(K = \mathbb{F}_{q}\), the field with q elements.
In other words, the target of the \(\mathcal{M}\mathcal{Q}\) Problem is to find a solution x of the quadratic map \(\mathcal{P} = (p_{1},\ldots,p_{m})\). In 1979, Garey and Johnson proved [33, page 251] that the decision variant of the \(\mathcal{M}\mathcal{Q}\) Problem over binary finite fields is NP-complete.
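A toy instance makes the problem concrete: over \(\mathbb{F}_{2}\) with very few variables, a random quadratic system can still be solved by exhaustive search, which is exactly what becomes infeasible as n grows (all parameters below are illustrative):

```python
import itertools
import random

random.seed(1)
n, m = 4, 4    # toy instance over F_2; real parameters are far larger

def rand_quadratic(n):
    # p(x) = sum_{i,j} a_ij x_i x_j + sum_i b_i x_i + c, coefficients in F_2
    A = [[random.randrange(2) for _ in range(n)] for _ in range(n)]
    b = [random.randrange(2) for _ in range(n)]
    c = random.randrange(2)
    return A, b, c

def evaluate(p, x):
    A, b, c = p
    v = c
    for i in range(len(x)):
        v ^= b[i] & x[i]
        for j in range(len(x)):
            v ^= A[i][j] & x[i] & x[j]
    return v

system = [rand_quadratic(n) for _ in range(m)]
# Exhaustive search over all 2^n assignments works only because n is tiny;
# for cryptographic sizes this search is precisely the hard MQ Problem.
solutions = [x for x in itertools.product((0, 1), repeat=n)
             if all(evaluate(p, x) == 0 for p in system)]
```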
On the other hand, the \(\mathcal{M}\mathcal{Q}\) signature schemes proposed in the literature do not base their security solely on the original \(\mathcal{M}\mathcal{Q}\) Problem. To invert the trapdoor one-way function, i.e., to find the original private system (or an equivalent one), it is necessary to solve a related problem called the Isomorphism of Polynomials Problem, or IP Problem, proposed by Patarin [70].
Definition 2 (Isomorphism of Polynomials Problem)
Let \(m,n \in \mathbb{N}\) be arbitrarily fixed. Further, let \(\mathcal{P},\mathcal{Q}: \mathbb{F}_{q}^{n} \rightarrow \mathbb{F}_{q}^{m}\) denote two multivariate quadratic maps and \(\mathbb{T} \in \mathbb{F}_{q}^{m\times m}\), \(\mathbb{S} \in \mathbb{F}_{q}^{n\times n}\) two bijective linear maps, such that \(\mathcal{P} = \mathbb{T} \circ \mathcal{Q}\circ \mathbb{S}\). Given \(\mathcal{P}\) and \(\mathcal{Q}\), find \(\mathbb{T}\) and \(\mathbb{S}\).
In other words, the goal of the IP Problem is to find \(\mathbb{T}\) and \(\mathbb{S}\) for a given pair \((\mathcal{P},\mathcal{Q})\). Note that, originally, \(\mathbb{S}\) was defined as an affine instead of linear transformation [71]. However, Braeken et al. [18, Sec. 3.1] noticed that the constant part is not important for the security of certain \(\mathcal{M}\mathcal{Q}\) schemes and can thus be omitted.
3.1 Construction of \(\mathcal{M}\mathcal{Q}\) Keys
Generically, a typical \(\mathcal{M}\mathcal{Q}\) private key consists of two linear transformations \(\mathbb{T}\) and \(\mathbb{S}\) along with a quadratic transformation \(\mathcal{Q}\). Note that \(\mathcal{Q}\) presents a particular trapdoor structure. We will present two distinct trapdoor structures in Sect. 3.2, for the UOV and Rainbow signature schemes. The trapdoor structure allows the signer to easily solve the public \(\mathcal{M}\mathcal{Q}\) system in order to generate valid signatures. The public key is simply given by the composition \(\mathcal{P} = \mathbb{T} \circ \mathcal{Q}\circ \mathbb{S}\). For some signature schemes it is not necessary to explicitly use the map \(\mathbb{T}\), since it is reduced to the identity [12, Chapter 6].
The main difference among distinct \(\mathcal{M}\mathcal{Q}\) signature schemes lies in the trapdoor structure of \(\mathcal{Q}\). Since public keys have the same structure in most schemes, verifying a signature follows the same procedure, in other words, checking whether a given signature x is a solution of the public quadratic system \(p_{k}(\mathbf{x}) = h_{k},1 \leq k \leq m\). For other trapdoor constructions, the reader can see, for example, [94].
3.2 UOV and Rainbow \(\mathcal{M}\mathcal{Q}\) Signatures
One of the main \(\mathcal{M}\mathcal{Q}\) signature families still considered secure is the Unbalanced Oil and Vinegar (UOV) construction [48]. The name Oil and Vinegar comes from the fact that the variables (x_{1}, ⋯ , x_{n}) of a certain quadratic private system are separated into two subsets \(O = (x_{1},\cdots \,,x_{m})\) and \(V = (x_{m+1},\cdots \,,x_{n})\), in such a way that variables of the first set are never mixed together in a term of the quadratic system.
In order to generate a signature \(\mathbf{x} \in \mathbb{F}_{q}^{n}\) of a given message, or rather of its hash \(h \in \mathbb{F}_{q}^{m}\), the signer has to invert the map \(P(\mathbf{x}) = Q(S(\mathbf{x})) = h\). Defining \(\mathbf{x^{{\prime}}} = \mathbf{x} \cdot S\), one first solves the multivariate system \(\mathbf{x^{{\prime}}}Q^{(k)}\mathbf{x^{{\prime}}}^{\mathop{\mathrm{T}}\nolimits } = h_{k}\), 1 ≤ k ≤ m, finding x^{′}. Finally, the signature \(\mathbf{x} = \mathbf{x^{{\prime}}}S^{-1}\) is computed.
As explained before, the structure of the matrices Q^{(k)} allows the \(\mathcal{M}\mathcal{Q}\) system to be solved efficiently: choose the v vinegar variables at random and then solve the resulting linear system for the remaining m oil variables. If the linear system has no solution, repeat the process with freshly chosen vinegar variables until a valid solution is found.
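A minimal sketch of this signing step follows, under toy assumptions: a tiny prime field and tiny parameters of our choosing, and with the hiding map \(\mathbb{S}\) omitted so that only the central map is inverted (a real scheme must compose \(\mathcal{Q}\) with \(\mathbb{S}\) as described above):

```python
import random

random.seed(7)
q = 31                  # toy prime field (illustrative choice)
m, v = 2, 4             # number of oil and vinegar variables
n = m + v               # oils are x_0..x_{m-1}, vinegars x_m..x_{n-1}

def central_map():
    # Each Q^(k) is an upper-triangular n x n matrix whose oil-oil block is
    # zero, so oil variables are never multiplied with each other.
    Qs = []
    for _ in range(m):
        Q = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i, n):
                if not (i < m and j < m):   # skip forbidden oil x oil terms
                    Q[i][j] = random.randrange(q)
        Qs.append(Q)
    return Qs

def eval_quadratic(Q, x):
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n)) % q

def solve_mod_q(A, b):
    # Gauss-Jordan elimination over F_q; returns None if A is singular.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(m):
        piv = next((r for r in range(col, m) if M[r][col]), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], q - 2, q)          # Fermat inverse, q prime
        M[col] = [a * inv % q for a in M[col]]
        for r in range(m):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(a - f * p) % q for a, p in zip(M[r], M[col])]
    return [M[r][m] for r in range(m)]

def sign(Qs, h):
    while True:
        xv = [random.randrange(q) for _ in range(v)]   # random vinegars
        A, b = [], []
        for Q, hk in zip(Qs, h):
            # with vinegars fixed, each equation is linear in the oils
            row = [sum((Q[i][j] + Q[j][i]) * xv[j - m]
                       for j in range(m, n)) % q for i in range(m)]
            const = sum(Q[i][j] * xv[i - m] * xv[j - m]
                        for i in range(m, n) for j in range(m, n)) % q
            A.append(row)
            b.append((hk - const) % q)
        xo = solve_mod_q(A, b)
        if xo is not None:        # singular system: retry with new vinegars
            return xo + xv

Qs = central_map()
h = [5, 17]                       # toy "message digest" in F_q^m
x = sign(Qs, h)
assert all(eval_quadratic(Q, x) == hk for Q, hk in zip(Qs, h))
```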
In order to hide the trapdoor structure of the polynomials f_{k}, an invertible linear transformation \(\mathbb{S} \in \mathbb{F}_{q}^{n\times n}\) is applied to the right of \(\mathcal{Q}\), so the resulting public map is \(\mathcal{P} = \mathcal{Q}\circ \mathbb{S}\). The private key is the pair \(\mathit{sk}:= (\mathcal{Q}, \mathbb{S})\), and the public key is composed of the polynomials \(\mathcal{P}:= p(x_{1},\cdots \,,x_{n}) =\{ p_{1}(x_{1},\cdots \,,x_{n}),\cdots \,,p_{m}(x_{1},\cdots \,,x_{n})\}\). It thus becomes clear that the security of the system is not directly based on the \(\mathcal{M}\mathcal{Q}\) Problem: recovering the private key amounts to decomposing \(\mathcal{P}\) into \(\mathcal{Q}\) and \(\mathbb{S}\), in other words, to solving the IP Problem.
An important variant of the UOV scheme is the Rainbow signature [28], proposed by Ding and Schmidt. Its main advantage is the shorter signature footprint attained compared to UOV [91, Section 3].
The basic idea of the Rainbow signature is to separate the m private UOV equations into smaller bands, partitioning the variables accordingly; in other words, each band has its own oils and vinegars. After a band is processed, all of its variables become the vinegars for the next band, and so on until the last band is processed.
Typically the central map is divided into only two bands, since this configuration has been shown to be the most suitable in the sense that it avoids certain structural attacks and keeps the signatures reasonably short [91].
The signature procedure is similar to the UOV one: vinegars are chosen at random for the first band in order to compute its oils, as is done in UOV. Then, these computed variables (vinegars plus oils) are used as vinegars for the next band.
3.3 The Cyclic UOV Signature
An interesting step towards the reduction of UOV/Rainbow key sizes was made by means of the Cyclic UOV/Rainbow constructions [74, 77]. Petzoldt et al. noticed the existence of a linear relation between part of the public quadratic map and the private quadratic map. That relation was exploited to construct key pairs in an unusual way, making it possible to reduce the size of the public key. The idea is to first generate the quadratic part of the public key with the desired compact structure and then compute the quadratic part of the private key using the linear relation.
Therefore, it is possible to obtain shorter public keys while the private ones remain random looking and without any apparent structure. The structure suggested by Petzoldt et al. uses circulant matrices, which is the origin of the name Cyclic UOV [77]. Circulant matrices are very compact, since they can be represented by their first row alone. Thus, the public key can be stored efficiently, besides enabling processing advantages such as Karatsuba multiplication and fast Fourier transform (FFT) techniques.
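For illustration, a circulant matrix can be reconstructed from its first row alone (a generic sketch, not tied to the parameters of [77]):

```python
def circulant(first_row):
    # An L x L circulant matrix is determined by its first row: every
    # subsequent row is a cyclic right shift of the previous one, so only
    # L entries (instead of L^2) need to be stored.
    L = len(first_row)
    return [first_row[L - i:] + first_row[:L - i] for i in range(L)]

C = circulant([1, 2, 3, 4])
# C[1] is [4, 1, 2, 3]: the first row shifted right by one position.
```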
Blocks B of M_{P} and F of M_{F} obey the relation B: = F ⋅ A_{UOV}(S). Thus, for key generation, one may first generate the matrix M_{P} with B in circulant form and then compute \(F:= B \cdot A_{\mathrm{UOV}}^{-1}(S)\). This methodology reduced the UOV public key size by a factor of about 6 at the 80-bit security level.
As mentioned above, \(\mathcal{M}\mathcal{Q}\) signatures have been developed intensively in the last two decades. Many constructions have been proposed toward key size reduction, which is their main disadvantage today. Table 2 shows some of them.
Table 2 \(\mathcal{M}\mathcal{Q}\) signatures evolution

Construction | sk size | pk size | hash (bits) | sig (bits) | Ref.
---|---|---|---|---|---
Rainbow(\(\mathbb{F}_{2^{4}}\), 30, 29, 29) | 75.8 KiB | 113.4 KiB | 232 | 352 | [75]
Rainbow(\(\mathbb{F}_{31}\), 25, 24, 24) | 59.0 KiB | 77.7 KiB | 232 | 392 | [75]
CyclicUOV(\(\mathbb{F}_{2^{8}}\), 26, 52) | 14.5 KiB | 76.1 KiB | 208 | 624 | [74]
NC-Rainbow(\(\mathbb{F}_{2^{8}}\), 17, 13, 13) | 25.5 KiB | 66.7 KiB | 384 | 672 | [95]
Rainbow(\(\mathbb{F}_{2^{8}}\), 29, 20, 20) | 42.0 KiB | 58.2 KiB | 272 | 456 | [75]
CyclicLRS(\(\mathbb{F}_{2^{8}}\), 26, 52) | 71.3 KiB | 13.6 KiB | 208 | 624 | [76]
UOVLRS(\(\mathbb{F}_{2^{8}}\), 26, 52, 26) | 71.3 KiB | 11.0 KiB | 208 | 624 | [76]
CyclicRainbow(\(\mathbb{F}_{2^{8}}\), 17, 13, 13) | 19.1 KiB | 10.2 KiB | 208 | 344 | [74]
4 Code-Based Schemes
In this section we will discuss the theory and practice of cryptosystems based on error-correcting codes.
Coding theory aims at ensuring that, when a collection of data is transmitted over a channel subject to noise (i.e., perturbations of the data), the recipient of the transmission can recover the original message. For this, one must find efficient ways to add redundant information to the original message such that, if the message reaches the recipient containing errors (i.e., some bits flipped, in the case of binary messages), the receiver can still decode it.
In the cryptographic context, the primitive adds errors to a word of an error-correcting code and computes a syndrome relative to the parity-check matrix of this code.
The first construction was a public key encryption scheme proposed by Robert J. McEliece in 1978 [55]. The private key is a random binary irreducible Goppa code (which will be reviewed in Sect. 4.1.1), and the public key is a random generator matrix of a permuted version of this code. The ciphertext is a codeword to which some errors were added, and only the owner of the private key can correct these errors (and thus decrypt the message). A few years later some parameter adjustments were necessary to keep the security level high, but the scheme remains unbroken to this day.
4.1 Linear Codes
For a better technical understanding of this section, we first explain some basic concepts used within the code-based cryptography.
Matrix and vector indices are numbered from 0 throughout this section, unless otherwise stated. Let p be a prime, and let q = p^{m} for some integer m > 0. \(\mathbb{F}_{q}\) denotes the finite field with q elements. The degree of a polynomial \(g \in \mathbb{F}_{q}[x]\) is denoted by \(\deg (g)\). We also define the notions of Hamming weight and Hamming distance:
Definition 3
The Hamming weight of a vector \(u \in \mathcal{C}\subseteq \mathbb{F}_{q}^{n}\) is the number of nonzero coordinates in it, i.e., \(\mathop{wt}\nolimits (u) = \#\{i,\;0\leqslant i < n\mid u_{i}\neq 0\}\). The Hamming distance between two vectors \(u,v \in \mathcal{C}\subseteq \mathbb{F}_{q}^{n}\) is the number \(\mathop{dist}\nolimits (u,v)\) of coordinates in which these vectors differ, i.e., \(\mathop{dist}\nolimits (u,v):=\mathop{ wt}\nolimits (u - v)\).
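These two definitions translate directly into code. A minimal Python sketch (vectors represented as lists of field elements, an assumption made purely for illustration):

```python
def wt(u):
    """Hamming weight: number of nonzero coordinates of u."""
    return sum(1 for x in u if x != 0)

def dist(u, v):
    """Hamming distance: number of coordinates where u and v differ,
    which equals wt(u - v)."""
    return sum(1 for a, b in zip(u, v) if a != b)

# example over F_2: wt([1, 0, 1, 1, 0]) == 3 and dist([1, 0, 1], [1, 1, 0]) == 2
```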
Now we introduce some concepts useful for the task of encoding messages. The first is the linear code, which can be defined as follows:
Definition 4
A (binary) linear [n, k] error-correcting code \(\mathcal{C}\) is a subspace of \(\mathbb{F}_{2}^{n}\) of dimension k.
A vector \(u \in \mathcal{C}\) is also called codeword (or, briefly, a word) of \(\mathcal{C}\).
A generator matrix G of \(\mathcal{C}\) is a matrix over \(\mathbb{F}_{q}\) such that \(\mathcal{C} = \left \langle G\right \rangle\), where \(\left \langle G\right \rangle\) indicates the vector space generated by the rows of G. Normally the rows of G are independent and the matrix has dimension k × n; in other words, \(\exists G \in \mathbb{F}_{q}^{k\times n}: \mathcal{C} =\{ uG \in \mathbb{F}_{q}^{n}\mid u \in \mathbb{F}_{q}^{k}\}\).
We say that a generator matrix G is in the systematic form if its first k columns form the identity matrix.
The so-called dual code \(\mathcal{C}^{\perp }\) is the code orthogonal to \(\mathcal{C}\) with respect to the scalar product over \(\mathbb{F}_{q}\) and is a linear code of dimension n − k over \(\mathbb{F}_{q}\).
A parity matrix H over \(\mathcal{C}\) is a generator matrix of \(\mathcal{C}^{\perp }\). In other words, \(\exists H \in \mathbb{F}_{q}^{r\times n}: \mathcal{C} =\{ v \in \mathbb{F}_{q}^{n}\mid Hv^{\mathop{\mathrm{T}}\nolimits } = 0^{r} \in \mathbb{F}_{q}^{r}\}\), where \(r = n - k\) is the codimension of \(\mathcal{C}\) (i.e., the dimension of the orthogonal space \(\mathcal{C}^{\perp }\)).
It is easy to see that G and H, although not uniquely defined (because there is no single basis for \(\mathcal{C}\) or for \(\mathcal{C}^{\perp }\)), are related by \(HG^{\mathop{\mathrm{T}}\nolimits } = 0 \in \mathbb{F}_{q}^{r\times k}\).
The linear transformation defined by a parity matrix is called syndrome function of the code. The value of this transformation over any vector \(u \in \mathbb{F}_{q}^{n}\) is called syndrome of this vector. Clearly, the syndrome of any codeword is always null.
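These relations are easy to verify concretely. The Python sketch below builds a systematic generator matrix G = [I_4 | A] and parity matrix H = [A^T | I_3] for the binary [7, 4] Hamming code (an arbitrary small example, not a code used by the cryptosystems discussed later) and checks that every codeword has a null syndrome:

```python
from itertools import product

# binary [7,4] Hamming code: G = [I_4 | A], H = [A^T | I_3], arithmetic mod 2
A = [[1, 1, 0],
     [1, 0, 1],
     [0, 1, 1],
     [1, 1, 1]]
G = [[int(i == j) for j in range(4)] + A[i] for i in range(4)]          # 4 x 7
H = [[A[i][j] for i in range(4)] + [int(j == k) for k in range(3)]
     for j in range(3)]                                                  # 3 x 7

def encode(u):
    """Codeword uG over F_2."""
    return [sum(u[i] * G[i][k] for i in range(4)) % 2 for k in range(7)]

def syndrome(v):
    """Syndrome Hv^T over F_2; null exactly for codewords."""
    return [sum(H[i][k] * v[k] for k in range(7)) % 2 for i in range(3)]

# every one of the 2^4 codewords has null syndrome
assert all(syndrome(encode(list(u))) == [0, 0, 0] for u in product([0, 1], repeat=4))
```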
Definition 5
The distance (or minimum distance) of a code \(\mathcal{C}\subseteq \mathbb{F}_{q}^{n}\) is the minimum Hamming distance between distinct words of \(\mathcal{C}\), which for a linear code equals the minimum weight of a nonzero codeword, i.e., \(\mathop{dist}\nolimits (\mathcal{C}) =\min \{\mathop{ wt}\nolimits (u)\mid u \in \mathcal{C},u\neq 0\}\).
We write [n, k, d] for an [n, k] code whose minimum distance is (at least) d. If d ⩾ 2t + 1, the code is said to be capable of correcting at least t errors, in the sense that there is no more than one codeword within Hamming distance t of any vector of \(\mathbb{F}_{q}^{n}\).
Several computational problems involving codes are intractable, starting with the actual determination of the minimum distance of a code. The following problems are important for code-based cryptography:
Definition 6 (General Decoding)
Let \(\mathbb{F}_{q}\) be a finite field, and let (G, w, c) be a triple consisting of a matrix \(G \in \mathbb{F}_{q}^{k\times n}\), an integer w < n, and a vector \(c \in \mathbb{F}_{q}^{n}\). The general decoding problem (GDP) asks whether there is a vector \(m \in \mathbb{F}_{q}^{k}\) such that \(e = c - mG\) has Hamming weight \(\mathop{wt}\nolimits (e)\leqslant w\).
The search problem associated with the GDP is to calculate the vector m given the word with errors c.
Definition 7 (Syndrome Decoding)
Let \(\mathbb{F}_{q}\) be a finite field, and let (H, w, s) be a triple consisting of a matrix \(H \in \mathbb{F}_{q}^{r\times n}\), an integer w < n, and a vector \(s \in \mathbb{F}_{q}^{r}\). The syndrome decoding problem (SDP) asks whether there is a vector \(e \in \mathbb{F}_{q}^{n}\) with Hamming weight \(\mathop{wt}\nolimits (e)\leqslant w\) such that \(He^{\mathop{\mathrm{T}}\nolimits } = s^{\mathop{\mathrm{T}}\nolimits }\).
The problem associated with the SDP consists in computing the error pattern e given its syndrome \(s_{e}:= eH^{\mathop{\mathrm{T}}\nolimits }\).
Both the general decoding problem and the problem of syndrome decoding for linear codes are NP-complete [9].
In contrast to these general results, knowledge of the structure of certain codes makes the GDP and SDP solvable in polynomial time. A basic strategy for defining code-based cryptosystems is therefore to keep the information about the structure of the code secret and to publish an associated code without any apparent structure (hence, by hypothesis, hard to decode).
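The search version of the SDP can be made concrete by exhaustive search over low-weight error patterns. In the Python sketch below, the parity-check matrix is an arbitrary toy example; the point is that without structural knowledge, the work grows with the number of candidate patterns, \(\sum _{i\leqslant w}\binom{n}{i}\), which is exactly what code-based cryptography relies on:

```python
from itertools import combinations

def sdp_search(H, s, w):
    """Brute-force syndrome decoding over F_2: find e with wt(e) <= w
    and He^T = s, or None if no such pattern exists."""
    r, n = len(H), len(H[0])
    for weight in range(w + 1):
        for support in combinations(range(n), weight):
            e = [1 if j in support else 0 for j in range(n)]
            if [sum(H[i][j] * e[j] for j in range(n)) % 2 for i in range(r)] == s:
                return e
    return None

# toy 4 x 8 parity-check matrix (arbitrary example values)
H = [[1, 0, 1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0, 1, 0],
     [1, 1, 0, 0, 1, 1, 0, 1],
     [0, 0, 1, 1, 1, 0, 1, 1]]
e = [0, 1, 0, 0, 0, 0, 1, 0]                      # secret error pattern, weight 2
s = [sum(H[i][j] * e[j] for j in range(8)) % 2 for i in range(4)]
assert sdp_search(H, s, 2) is not None            # a valid low-weight solution exists
```

For cryptographic sizes (n in the thousands, w in the dozens) this enumeration is utterly infeasible, and the best known generic algorithms remain exponential.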
4.1.1 Goppa Codes
One of the most important families of linear error-correcting codes for cryptographic purposes is the Goppa codes:
Definition 8
Given a prime number p, q = p^{m} for some m > 0, a sequence \(L = (L_{0},\ldots,L_{n-1}) \in \mathbb{F}_{q}^{n}\) of distinct elements, and a monic polynomial \(g(x) \in \mathbb{F}_{q}[x]\) of degree t (called the generator polynomial) such that g(L_{i}) ≠ 0 for 0⩽ i < n, the Goppa code Γ(L, g) is the \(\mathbb{F}_{p}\)-alternant code corresponding to \(\mathop{GRS}\nolimits _{t}(L,D)\) over \(\mathbb{F}_{q}\), where \(D = (g(L_{0})^{-1},\ldots,g(L_{n-1})^{-1})\).
The distance of an irreducible binary Goppa code is at least 2t + 1 [43], and therefore a Goppa code can correct up to t errors (using, e.g., Patterson’s algorithm [72]), sometimes a little more [11]. Appropriate decoding algorithms can still correct t errors when the generator g(x) is not irreducible but square-free. For example, one can equivalently view a binary Goppa code as an alternant code defined by the generator polynomial g^{2}(x), in which case any alternant decoder is able to correct t errors. So-called wild codes extend this result under certain circumstances [14]. In all other cases, there is no known decoding method capable of correcting more than t∕2 errors.
Equivalently, we can define Goppa codes in terms of their syndrome function:
Definition 9
The syndrome of a vector \(e \in \mathbb{F}_{p}^{n}\) relative to Γ(L, g) is the polynomial \(s_{e}(x):=\sum _{i=0}^{n-1}e_{i}/(x - L_{i})\mod g(x)\) (Eq. 5). The syndrome is a linear function of e. We also present an alternative definition for Goppa codes:
Definition 10
The Goppa code [n, n − mt] over \(\mathbb{F}_{p}\) with support L and generator polynomial g is the kernel of the syndrome function (Eq. 5), i.e., the set \(\varGamma (L,g):=\{ e \in \mathbb{F}_{p}^{n}\mid s_{e} \equiv 0\mod g\}\).
Thus, H = TVD, where T is a t × t lower-triangular Toeplitz matrix built from the coefficients of the generator polynomial g, V is the t × n Vandermonde matrix with entries \(V_{ij} = L_{j}^{i}\), and D is the n × n diagonal matrix \(D =\mathop{ diag}\nolimits (g(L_{0})^{-1},\ldots,g(L_{n-1})^{-1})\).
4.2 Decodability
This is called the Gilbert-Varshamov (GV) bound. Random binary codes attain the GV bound, in the sense that the above inequality is very close to an equality [53]. However, no family of binary codes is known that can be decoded in subexponential time up to the GV bound, nor is any subexponential algorithm known for decoding general codes up to the GV bound.
A particularly good case is therefore δ ⩾ 1∕t!, which occurs when \(p^{c}/(p - 1)\leqslant 1\), i.e., c ⩽ log_{p}(p − 1) or \(n\geqslant p^{m}/(p - 1)\). Unfortunately this also means that, for binary codes, the highest densities are reached only by codes of maximum or nearly maximum length; otherwise the density is reduced by a factor of 2^{ct}. For codes of maximum length (n = p^{m} and hence c = 0), the density simplifies to \(\delta \approx (p - 1)^{t}/t!\), which attains the relative minimum δ ≈ 1∕t! for binary codes.
We will also be interested in the particular case of error patterns in which one magnitude prevails over the others, and especially in the case where all the error magnitudes are equal. In this case, the density of decodable syndromes is \(\delta \approx (p - 1)/t!\), which again reaches the minimum δ ≈ 1∕t! for binary codes.
4.3 Code-Based Cryptosystems
The original McEliece [55] and Niederreiter [64] schemes, despite historically (but inaccurately) being called cryptosystems, are better described as trapdoor one-way functions than as full encryption methods in themselves. Functions of this nature can be transformed into proper cryptosystems in various ways, for example, via the Fujisaki-Okamoto transform.
Interestingly, McEliece and Niederreiter commonly show a substantial speed advantage over traditional schemes. For example, encryption and decryption with a code of length n have time complexity O(n^{2}), while Diffie-Hellman/DSA systems, as well as RSA private-exponent operations, have time complexity O(n^{3}) for keys of n bits.
For simplicity, the descriptions of the McEliece and Niederreiter schemes below assume that the patterns of correctable errors are binary vectors of weight t, but variants with broader error patterns are possible, depending on the decoding capability of the underlying code. Simple and effective criteria for choosing parameters are provided in Sect. 4.3.3. Each encryption scheme consists of three algorithms: MakeKeyPair, Encrypt, and Decrypt.
4.3.1 McEliece
MakeKeyPair. Given the desired level of security λ, choose a prime p (commonly p = 2), a finite field \(\mathbb{F}_{q}\) with q = p^{m} for some m > 0, and a Goppa code Γ(L, g) with support \(L = (L_{0},\ldots,L_{n-1}) \in (\mathbb{F}_{q})^{n}\) (with distinct elements) and generator polynomial \(g \in \mathbb{F}_{q}[x]\) of degree t and free of squares satisfying g(L_{j}) ≠ 0, 0⩽ j < n. Let \(k = n -\mathit{mt}\). The choice is guided so that the cost of decoding a code [n, k, 2t + 1] is at least 2^{λ} steps. Compute a systematic generator matrix \(G \in \mathbb{F}_{p}^{k\times n}\) for Γ(L, g), i.e., \(G = [I_{k}\mid - M^{T}]\) for any matrix \(M \in \mathbb{F}_{p}^{\mathit{mt}\times k}\) and I_{k} an identity matrix of order k. The private key is sk: = (L, g) and the public key is pk: = (M, t).
Encrypt. To encrypt a plaintext \(d \in \mathbb{F}_{p}^{k}\), we choose a vector \(e\stackrel{\;_{\$}}{\leftarrow }\{0,1\}^{n} \subseteq \mathbb{F}_{p}^{n}\) with weight \(\mathop{wt}\nolimits (e)\leqslant t\), and compute the encrypted text \(c \leftarrow \mathit{dG} + e \in \mathbb{F}_{p}^{n}\).
Decrypt. To decrypt a ciphertext \(c \in \mathbb{F}_{p}^{n}\) with the knowledge of L and g, we compute the syndrome of c, apply a decoder to it to determine the error vector e, and recover the plaintext d from the first k coordinates of c − e.
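The mechanics of Encrypt/Decrypt can be demonstrated end to end with a toy instance. The Python sketch below substitutes the binary [7, 4] Hamming code, correcting t = 1 error, for a Goppa code, and omits the hiding of the code structure entirely, so it illustrates only the encoding arithmetic, not the security of McEliece:

```python
import random

# systematic [7,4] Hamming code: G = [I_4 | A], H = [A^T | I_3]; corrects t = 1 error
A = [[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]]
G = [[int(i == j) for j in range(4)] + A[i] for i in range(4)]
H = [[A[i][j] for i in range(4)] + [int(j == k) for k in range(3)] for j in range(3)]

def encrypt(d, rng):
    """c = dG + e with wt(e) <= t = 1."""
    c = [sum(d[i] * G[i][k] for i in range(4)) % 2 for k in range(7)]
    c[rng.randrange(7)] ^= 1                    # add one random error
    return c

def decrypt(c):
    """Compute the syndrome, locate the single error, and read d off c - e."""
    s = [sum(H[i][k] * c[k] for k in range(7)) % 2 for i in range(3)]
    if any(s):
        # for one error, the syndrome equals the column of H at the error position
        pos = [[H[i][j] for i in range(3)] for j in range(7)].index(s)
        c = c[:]
        c[pos] ^= 1
    return c[:4]          # G is systematic: the plaintext is the first k coordinates

rng = random.Random(0)
d = [1, 0, 1, 1]
assert decrypt(encrypt(d, rng)) == d
```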
4.3.2 Niederreiter
MakeKeyPair. Given the desired level of security λ, choose a prime p (commonly p = 2), a finite field \(\mathbb{F}_{q}\) with q = p^{m} for some m > 0, and a Goppa code Γ(L, g) with support \(L = (L_{0},\ldots,L_{n-1}) \in (\mathbb{F}_{q})^{n}\) (of distinct elements) and a generator polynomial \(g \in \mathbb{F}_{q}[x]\) of degree t and free of squares satisfying g(L_{j}) ≠ 0, 0⩽ j < n. Let \(k = n -\mathit{mt}\). The choice is guided so that the cost of decoding a code [n, k, 2t + 1] is at least 2^{λ} steps. Compute a systematic parity matrix \(H \in \mathbb{F}_{p}^{\mathit{mt}\times n}\) for Γ(L, g), i.e., \(H = [M\mid I_{\mathit{mt}}]\) for some matrix \(M \in \mathbb{F}_{p}^{\mathit{mt}\times k}\) and I_{mt} the identity matrix of order mt. Finally, choose as public parameter a rank (unranking) permutation function \(\phi: \mathcal{B}(n,t) \rightarrow \mathbb{Z}/\binom{n}{t}\mathbb{Z}\). The private key is sk: = (L, g) and the public key is pk: = (M, t, ϕ).
Encrypt. To encrypt a plaintext \(d \in \mathbb{Z}/\binom{n}{t}\mathbb{Z}\), represent d as an error pattern \(e \leftarrow \phi ^{-1}(d) \in \{ 0,1\}^{n} \subseteq \mathbb{F}_{p}^{n}\) of weight \(\mathop{wt}\nolimits (e) = t\), and compute as ciphertext the syndrome \(s \leftarrow \mathit{eH}^{\mathop{\mathrm{T}}\nolimits } \in \mathbb{F}_{p}^{\mathit{mt}}\).
Decrypt. To decrypt a ciphertext \(s \in \mathbb{F}_{p}^{\mathit{mt}}\) with the knowledge of L and g, transform this syndrome into a decodable one, apply a decoder to the result to determine the error vector e, and from it recover the plaintext \(d \leftarrow \phi (e)\).
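The rank permutation function φ and its inverse can be realized with the combinatorial number system (colexicographic ranking of weight-t words). The Python sketch below is one standard way to implement such a bijection, not the specific function of any published parameter set:

```python
from math import comb

def phi_inv(d, n, t):
    """Unrank d in Z/C(n,t)Z into a binary error pattern of weight exactly t
    (colexicographic combinatorial number system)."""
    e = [0] * n
    for i in range(t, 0, -1):
        c = i - 1
        while comb(c + 1, i) <= d:      # largest c with C(c, i) <= d
            c += 1
        e[c] = 1
        d -= comb(c, i)
    return e

def phi(e):
    """Rank a weight-t pattern back to an integer in Z/C(n,t)Z."""
    support = [j for j, bit in enumerate(e) if bit]
    return sum(comb(c, i + 1) for i, c in enumerate(support))

n, t = 10, 3
# phi and phi_inv are mutually inverse over the whole range
assert all(phi(phi_inv(d, n, t)) == d for d in range(comb(n, t)))
```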
4.3.3 Parameters for Code-Based Cryptosystems
The classical schemes of McEliece and Niederreiter, implemented on the class of Goppa codes, remain safe until the present date, in contrast to implementations on many other families of codes proposed [41, 67]. Indeed, Goppa codes have weathered well the intense attempts of cryptanalysis, and despite considerable progress in the area [10] (see also [12] for review), they remain essentially intact for cryptographic purposes that have been suggested in the pioneering work of McEliece [55].
Table 3 suggests parameters for the codes underlying cryptosystems such as McEliece or Niederreiter, together with the size | pk | in bits of the resulting public key. Only generic irreducible Goppa codes are considered.
Parameters for McEliece/Niederreiter using generic binary Goppa codes
m | n | k | t | lg WF | |pk| (bits)
---|---|---|---|---|---
11 | 1893 | 1431 | 42 | 80.025 | 661122
12 | 2887 | 2191 | 58 | 112.002 | 1524936
12 | 3307 | 2515 | 66 | 128.007 | 1991880
13 | 5397 | 4136 | 97 | 192.003 | 5215496
13 | 7150 | 5447 | 131 | 256.002 | 9276241
We notice that in this generic Goppa code scenario, these schemes suffer from very large keys compared to their conventional counterparts. Hence the importance of seeking ways to reduce key sizes while keeping the associated security level intact.
The first steps toward reducing key sizes without reducing the security level of post-quantum cryptosystems were taken by Monico et al. using low-density parity-check (LDPC) codes [61], then by Gaborit with quasi-cyclic codes [31], and by Baldi and Chiaraluce through a combination of both [4].
4.4 LDPC and QC-LDPC Codes
The graph representation is analogous to the matrix representation via the adjacency matrix of the graph: H is a binary r × n matrix whose entry (i, j) is 1 if and only if the ith check node is connected to the jth message node in the graph. The LDPC code defined by the graph is then the set of vectors \(c = (c_{1},\ldots,c_{n})\) such that \(H \cdot c^{\mathop{\mathrm{T}}\nolimits } = 0\). The matrix H is called the parity-check matrix of the code. Conversely, any binary r × n matrix gives rise to a bipartite graph between n message nodes and r check nodes, and the code defined as the null space of H is precisely the code associated with that graph. Therefore, any linear code has a representation as a code associated with a bipartite graph (note that this graph is not uniquely defined by the code). However, not every binary linear code has a representation as a sparse bipartite graph. If one exists, the code is called a low-density parity-check code.
An important subclass of LDPC codes, with advantages over the other codes in this class, is that of quasi-cyclic low-density parity-check (QC-LDPC) codes [90]. In general, an [n, k] QC-LDPC code satisfies n = n_{0}b and k = k_{0}b (and thus also r = r_{0}b) for some b, n_{0}, k_{0} (and r_{0}) and admits a parity-check matrix consisting of \(r_{0} \times n_{0}\) blocks of sparse circulant b × b submatrices. A particularly important case is b = r (and r_{0} = 1 and \(k_{0} = n_{0} - 1\)), since a systematic parity-check matrix for such a code is fully defined by the first row of each r × r block. The parity-check matrix is then said to be in circulant form.
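The compactness gain of the circulant form is easy to see in code: each b × b block is fully reconstructed from its first row. A small Python sketch (block size and first rows chosen arbitrarily for illustration):

```python
def circulant(first_row):
    """b x b circulant matrix: row i is the first row cyclically shifted i places."""
    b = len(first_row)
    return [[first_row[(j - i) % b] for j in range(b)] for i in range(b)]

def qc_parity_matrix(first_rows):
    """Quasi-cyclic parity-check matrix [circ(h_0) | circ(h_1) | ...] with r_0 = 1:
    fully defined by n_0 first rows of length b each."""
    blocks = [circulant(h) for h in first_rows]
    b = len(first_rows[0])
    return [sum((blk[i] for blk in blocks), []) for i in range(b)]

# toy example: n_0 = 2 sparse blocks of size b = 5
H = qc_parity_matrix([[1, 1, 0, 0, 0], [1, 0, 0, 1, 0]])
# storing H requires only n_0 * b = 10 bits instead of b * (n_0 * b) = 50
```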
However, it was shown that all these proposals contain vulnerabilities that make them unsuitable for cryptographic purposes [67]. Indeed, in these methods the trapdoor was essentially protected by no mechanism other than a private permutation of the underlying code. The attack strategy in this scenario is to obtain a solvable system of linear equations that the components of the permutation matrix must satisfy; it succeeded because of the overly restrictive nature of the secret permutation (since it needs to preserve the quasi-cyclic structure of the result) and the fact that the secret code is a subcode of a very particular public code.
- 1.
Extract public keys shortened by blocks from very long private codes, exploiting a theorem due to Wieschebrink on the NP-completeness of distinguishing shortened codes [92];
- 2.
Work with subfield subcodes over an intermediate field between the base field and the extension field of the original GRS code adopted in the construction.
These techniques have been applied with some success to quasi-cyclic codes. However, almost all of this family of codes was subsequently broken due to a structural security weakness, more precisely a relationship between the secret structure and certain multivariate quadratic equation systems [30].
Historical experience therefore suggests restricting the search for more efficient parameters of code-based cryptosystems to the class of Goppa codes.
4.5 MDPC and QC-MDPC Codes
An interesting subclass of the LDPC codes consists of moderate density parity check codes (MDPC) and their quasi-cyclic variant (QC-MDPC) [60].
These codes, introduced by Misoczki et al., have densities low enough to enable decoding by simple (and arguably more efficient) methods such as belief propagation and Gallager’s bit flipping. Yet the densities are high enough to prevent attacks based on the presence of very sparse words in the dual code, as in Stern’s attack [87] and its variants, without ruining the error-correction capability, while also keeping information-set decoding attacks [10, 13] infeasible.
Moreover, to prevent structural attacks such as those proposed by Faugère et al. [30] and Leander and Gauthier [36], codes intended for encryption should be kept as free of structure as possible, except for the secret trapdoor that allows private decryption and, in the case of quasi-cyclic codes, the external symmetries that allow an efficient implementation. Finally, the circulant symmetry can introduce security weaknesses, as pointed out by Sendrier [83], but with respect to attack performance it induces only a polynomial (specifically linear) gain, and a small adjustment in the parameters completely eliminates this problem. Typical densities in this case are in the range from 0.4 to 0.9 % of the code length, an order of magnitude above LDPC codes yet still low enough for Gallager-style decoding; the code length, in turn, is much higher than typical LDPC values. The construction is otherwise as random as possible, maintaining only the desired density and the circulant geometry.
4.6 Method for Gallager’s Hard Decision Decoding (Bit Flipping)
In this section we describe Gallager’s hard decision decoding algorithm, or more simply bit flipping, following the concise and clear description of Huffman and Pless [46]. This algorithm is necessary to recover the original message from the encrypted codeword with errors.
- 1.
Compute \(\mathit{cH}^{\mathop{\mathrm{T}}\nolimits }\) and determine the unsatisfied parity checks (namely, the parity checks where the components of \(\mathit{cH}^{\mathop{\mathrm{T}}\nolimits }\) equal 1).
- 2.
For each of the n bits, compute the number of unsatisfied parity checks involving that bit.
- 3.
Flip the bits of c that are involved in a number of unsatisfied parity-check equations exceeding some threshold.
- 4.
Repeat steps 1, 2, and 3 until either \(\mathit{cH}^{\mathop{\mathrm{T}}\nolimits } = 0\), in which case c has been successfully decoded, or until a certain bound in the number of iterations is reached, in which case decoding of the received vector has failed.
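The four steps above translate almost directly into code. The Python sketch below implements the parallel variant with a simple strict-majority threshold (flip a bit when more than half of its parity checks are unsatisfied); the tiny chain-structured parity-check matrix used in the example is an arbitrary toy choice, far from a real MDPC code, and real deployments tune the threshold per iteration:

```python
def bit_flip_decode(c, H, max_iter=50):
    """Gallager hard-decision (bit-flipping) decoding over F_2.
    Returns the corrected word, or None if the iteration bound is reached."""
    c = c[:]
    r, n = len(H), len(H[0])
    checks_of = [[i for i in range(r) if H[i][j]] for j in range(n)]
    for _ in range(max_iter):
        # step 1: syndrome; syn[i] == 1 marks an unsatisfied parity check
        syn = [sum(H[i][j] * c[j] for j in range(n)) % 2 for i in range(r)]
        if not any(syn):
            return c                            # successfully decoded
        # steps 2-3: count unsatisfied checks per bit, flip past the threshold
        for j in range(n):
            unsat = sum(syn[i] for i in checks_of[j])
            if 2 * unsat > len(checks_of[j]):   # strict majority as threshold
                c[j] ^= 1
    return None                                 # step 4: decoding failure

# toy sparse parity-check matrix: checks c_i + c_{i+1} = 0 (a chain), n = 7
H = [[1 if j in (i, i + 1) else 0 for j in range(7)] for i in range(6)]
noisy = [0, 0, 0, 1, 0, 0, 0]                   # all-zero codeword plus one flipped bit
assert bit_flip_decode(noisy, H) == [0] * 7
```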
The bit-flipping algorithm is not the best decoding method for LDPC codes; indeed, the belief propagation technique [32, 46] is known to exceed it in error-correction capability. However, belief propagation decoders compute an increasingly refined probability that each bit of the received word c contains an error, incurring floating-point arithmetic, high-precision approximations, and computationally expensive algorithms. In a scenario where the number of errors is fixed and known in advance, as is the case in cryptographic applications, parameters can be adjusted so that complex and expensive decoding methods such as belief propagation are not needed.
4.7 Digital Signatures with Error Correcting Codes
After unsuccessful attempts to create a digital signature scheme based on error-correcting codes [2, 88], in 2001, Courtois, Finiasz, and Sendrier proposed a promising scheme [26].
4.7.1 CFS
- Make Key Pair:
- 1.
Choose a Goppa code G(L, g(X));
- 2.
Compute a corresponding (n − k) × n parity check matrix H;
- 3.
Compute V = SHP, where S is a random binary invertible matrix \((n - k) \times (n - k)\) and P is a random permutation matrix n × n.
The private key is G, and the public key is (V, t).
- Signature:
- 1.
Find the smallest \(i \in \mathbb{N}\) such that, for c = h(m, i) and \(c^{{\prime}} = S^{-1}c\), c^{′} is a decodable syndrome of G.
- 2.
Using the decoding algorithm of G, compute the error vector e^{′}, whose syndrome is c^{′}, i.e., \(c^{{\prime}} = H(e^{{\prime}})^{t}\).
- 3.
Compute \(e^{t} = P^{-1}(e^{{\prime}})^{t}\).
Therefore, the signature is the pair (e, i).
- Signature Verification:
- 1.
Compute c = Ve^{t}.
- 2.
Accept iff c = h(m, i).
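The dominant cost of CFS signing is the expected number of hash attempts in the first step: a random syndrome is decodable with probability approximately 1∕t!, so on average about t! counter values i must be tried. A quick Python check of this estimate, using illustrative parameters rather than a proposed parameter set:

```python
from math import comb, factorial

def expected_cfs_attempts(m, t):
    """Ratio of all syndromes (2^(mt)) to decodable ones (sum of C(n, i), i <= t)
    for a binary Goppa code of full length n = 2^m; this approximates t!."""
    n = 2 ** m
    decodable = sum(comb(n, i) for i in range(t + 1))
    return 2 ** (m * t) / decodable

# small illustrative case: m = 4, t = 3  ->  about 3! = 6 attempts per signature
assert abs(expected_cfs_attempts(4, 3) - factorial(3)) < 1
# for CFS-like parameters such as m = 16, t = 9, the count is about 9! = 362880
# decoding attempts per signature, which is why signing is so expensive
```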
Although CFS remains a secure signature scheme after extensive cryptanalysis, it is not suitable for the standard applications commonly used today: besides the size of the public keys, the signing cost is too large for any reliable set of parameters.
5 Lattice-Based Schemes
From the mathematical point of view, lattices have been studied since the 18th century by mathematicians such as Lagrange and Gauss. Interest in them for cryptography, however, is more recent, starting with Ajtai’s work, which proves the existence of one-way functions based on the hardness of the shortest vector problem (SVP). The versatility and flexibility of lattice-based cryptography, in terms of possible cryptographic features and the simplicity of its basic operations, make it one of the most promising lines of research in cryptography. Moreover, some lattice schemes are supported by security proofs that rely on the worst-case hardness of certain problems.
Lattice-based cryptography can be divided into two categories: (i) schemes with a security proof, as is the case for Ajtai’s construction or for cryptosystems based on the LWE problem, whose encryption and decryption are quadratic or even cubic algorithms involving the manipulation of a matrix A associated with the public key, which is not efficient compared to conventional cryptography; and (ii) schemes without a security proof but with efficient implementations, for example the NTRU cryptosystem. A recent result [86] reduces the security of NTRU-like cryptosystems to worst-case problems over ideal lattices. Although problems that are hard over general lattices may conceivably be easier over ideal lattices, no polynomial-time algorithm is known to solve them, even allowing a polynomial approximation factor or the use of quantum computation.
5.1 Basic Definitions
Definition 14

Given n linearly independent vectors \(b_{1},\ldots,b_{n} \in \mathbb{R}^{m}\), the lattice generated by them is the set of all their integer linear combinations, \(\mathcal{L} =\{\sum _{i=1}^{n}x_{i}b_{i}\mid x_{i} \in \mathbb{Z}\}\); the vectors \(b_{1},\ldots,b_{n}\) form a basis of the lattice.

Alternatively, it is possible to define lattices using matrix notation. Let \(B \in \mathbb{R}^{m\times n}\) be the matrix whose columns are the basis vectors; the lattice generated by B is defined as \(\mathcal{L} = \left \{Bx\mid x \in \mathbb{Z}^{n}\right \}\). The determinant \(\det (B)\) is independent of the basis choice and corresponds geometrically to the inverse of the lattice point density in \(\mathbb{Z}^{m}\).
Definition 15

The fundamental parallelepiped of a basis \(B =\{ b_{1},\ldots,b_{n}\}\) is the set \(\mathcal{P}(B) =\{\sum _{i=1}^{n}x_{i}b_{i}\mid 0\leqslant x_{i} < 1\}\).
Theorem 1
The volume of the fundamental parallelepiped is related to the determinant of B and is given by \(\mathop{Vol}\nolimits (\mathcal{P}(B)) = \vert \mathop{det}\nolimits (B)\vert \). Given two bases \(B =\{ b_{1},\ldots,b_{n}\}\) and \(B^{{\prime}} =\{ b_{1}^{{\prime}},\ldots,b_{n}^{{\prime}}\}\) of a lattice \(\mathcal{L}\), we have \(\mathop{det}\nolimits (B) = \pm \mathop{det}\nolimits (B^{{\prime}})\).
The most important computational problem on lattices is the shortest vector problem (SVP), defined as follows: given a lattice \(\mathcal{L}(B)\), find a nonzero lattice vector of minimum norm. In practice, an approximation factor γ(n) is used, so that we look for a nonzero vector whose norm is at most γ(n) times the minimum. Two other important problems are the following:
Closest vector problem (CVP). Given a lattice \(\mathcal{L}(B)\) and a vector \(t \in \mathbb{R}^{m}\), the goal is to find the vector \(v \in \mathcal{L}(B)\) closest to t;
Shortest independent vectors problem (SIVP). Given a basis \(B \in \mathbb{Z}^{m\times n}\), we must find n linearly independent lattice vectors \((v_{1},\ldots,v_{n})\) such that the maximum norm among them is minimized.
Definition 16

The Hadamard ratio of a basis \(B =\{ b_{1},\ldots,b_{n}\}\) is \(\mathcal{H}(B) = \left (\vert \mathop{det}\nolimits (B)\vert /\prod _{i=1}^{n}\vert \vert b_{i}\vert \vert \right )^{1/n}\).

It is easy to show that, for any basis B, we have \(0 \leq \mathcal{H}(B) \leq 1\). Furthermore, the closer this ratio is to 1, the “more orthogonal” the basis is.
A particularly important class of lattices is that of q-ary lattices, denoted Λ_{q}. Given an integer q, the vector coordinates are restricted to elements of \(\mathbb{Z}_{q}\). Given a matrix \(A \in \mathbb{Z}_{q}^{n\times m}\), the q-ary lattice Λ_{q}(A) is determined by the rows of A, instead of the columns: it is formed by the vectors \(y = A^{T}s\pmod q\) for \(s \in \mathbb{Z}^{n}\). The orthogonal q-ary lattice \(\varLambda _{q}^{\perp }(A)\) corresponding to the matrix A is given by the vectors y such that \(Ay = 0\pmod q\). Given a lattice \(\mathcal{L}\), the dual lattice \(\mathcal{L}^{{\ast}}\) is formed by the vectors y such that \(\langle x,y\rangle \in \mathbb{Z}\) for all \(x \in \mathcal{L}\). In particular, the orthogonal q-ary lattice \(\varLambda _{q}^{\perp }(A)\) is the same as q Λ_{q}(A)^{∗}.
5.1.1 LLL Algorithm
The LLL algorithm is important in lattice-based cryptography because practical security analyses are in general based on it. In fact, LLL can be used to tackle the SVP and related problems, as we will see later. In this section we describe the LLL algorithm. Given a lattice and a basis for it, LLL computes a new basis with Hadamard ratio closer to 1. In other words, the LLL algorithm performs a basis reduction: the computed basis has lower norms and greater orthogonality than the original one.
In a vector space with basis \((v_{1},\cdots \,,v_{n})\), an orthogonal basis can easily be obtained using the Gram-Schmidt algorithm. In lattices we can apply a similar approach using Gauss reduction. The idea is the same as in the Gram-Schmidt algorithm, where \(\mu _{\mathit{ij}} = v_{i}v_{j}^{{\ast}}/\vert \vert v_{j}^{{\ast}}\vert \vert ^{2}\), but since the values μ_{ij} are not necessarily integers, Gauss reduction uses the nearest integers \(\lfloor \mu _{\mathit{ij}}\rceil \) instead. The algorithm ends when these nearest integers are zero, a condition that is sufficient to prove that the shortest vector has been found only in dimension 2.
Algorithm 5.1 Gauss reduction
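Following the description above, a minimal Python sketch of Gauss reduction in dimension 2 (swap so that v1 is the shorter vector, then subtract the rounded projection until the coefficient vanishes):

```python
def gauss_reduce(v1, v2):
    """Lagrange-Gauss reduction of a 2D lattice basis.
    Returns a basis whose first vector is a shortest nonzero lattice vector."""
    dot = lambda u, v: u[0] * v[0] + u[1] * v[1]
    while True:
        if dot(v2, v2) < dot(v1, v1):
            v1, v2 = v2, v1                       # keep v1 the shorter vector
        mu = round(dot(v1, v2) / dot(v1, v1))     # nearest integer to the projection
        if mu == 0:
            return v1, v2
        v2 = [v2[0] - mu * v1[0], v2[1] - mu * v1[1]]

# the basis (7, 2), (4, 1) generates Z^2 (determinant -1); reduction
# recovers vectors of length 1
u, v = gauss_reduce([7, 2], [4, 1])
assert abs(u[0] * v[1] - u[1] * v[0]) == 1        # determinant preserved
```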
Definition 17
- Norm condition:
\(\vert \mu _{i,j}\vert = \frac{v_{i}.v_{j}^{{\ast}}} {\vert \vert v_{j}^{{\ast}}\vert \vert ^{2}} \leq \frac{1} {2}\) for all 1 ≤ j < i ≤ n.
- Lovász condition:
\(\vert \vert v_{i}^{{\ast}}\vert \vert ^{2} \geq (\frac{3} {4} -\mu _{i,i-1}^{2})\vert \vert v_{ i-1}^{{\ast}}\vert \vert ^{2}\) for all 1 < i ≤ n.
Theorem 2
Let B be an LLL-reduced basis for a lattice \(\mathcal{L}\); then B solves the SVP problem with approximation factor \(2^{(n-1)/2}\).
It is important to justify the choice of the value 3∕4. If this value were replaced by 1, we would have a Gauss reduction; however, there would be no proof that the algorithm ends in polynomial time. In fact, any value strictly less than 1 would suffice. Cryptosystems based on SVP and CVP must therefore have their parameters chosen carefully in order to resist attacks based on the LLL algorithm.
Algorithm 5.2 LLL
In general, given a basis \((v_{1},\ldots,v_{n})\), it is possible to obtain a new basis satisfying the norm condition just by subtracting suitable multiples of \(v_{1},\ldots,v_{k-1}\) from v_{k}, in order to reduce the corresponding coefficients in absolute value. Once the norm condition is satisfied, we verify whether the Lovász condition also holds; if not, the vectors are reordered and the procedure is repeated, executing the norm reduction again.
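A direct (inefficient but readable) Python transcription of the algorithm, recomputing the Gram-Schmidt data at each step; the 3-dimensional input basis below is a standard textbook example, used here only to exercise the code:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(b):
    """Orthogonalized vectors b* and coefficients mu (over the reals)."""
    bstar, mu = [], [[0.0] * len(b) for _ in b]
    for i, v in enumerate(b):
        w = [float(x) for x in v]
        for j in range(i):
            mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
            w = [wk - mu[i][j] * bk for wk, bk in zip(w, bstar[j])]
        bstar.append(w)
    return bstar, mu

def lll(b, delta=0.75):
    b = [list(v) for v in b]
    k, n = 1, len(b)
    while k < n:
        _, mu = gram_schmidt(b)
        for j in range(k - 1, -1, -1):            # norm (size) reduction
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                _, mu = gram_schmidt(b)
        bstar, mu = gram_schmidt(b)
        # Lovasz condition with delta = 3/4
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1
        else:                                     # reorder and restart one step back
            b[k], b[k - 1] = b[k - 1], b[k]
            k = max(k - 1, 1)
    return b

B = lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]])
assert dot(B[0], B[0]) <= 4        # first vector within the 2^{(n-1)/2} SVP bound
```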
5.2 Lattice Based Hash
The first lattice-based cryptosystem was proposed by Ajtai [1]. This work is very important because it presented a worst-case reduction, in the sense that an attack on the cryptosystem leads to solutions of hard instances of lattice problems. In particular, inverting the hash function has, on average, the same complexity as the SVP problem on dual lattices in the worst case.
Specifically, given integers n, m, d, q, we build a cryptographic hash family, \(f_{A}:\{ 0,\ldots,d - 1\}^{m} \rightarrow \mathbb{Z}_{q}^{n}\), indexed by matrix \(A \in \mathbb{Z}_{q}^{n\times m}\). Given a vector y, we have that \(f_{A}(y) = Ay\pmod q\). Algorithm 5.3 describes these operations. A possible parameter choice is \(d = 2,q = n^{2},m \approx 2n\log q/\log d\), such that we obtain a compression factor of 2.
The scheme’s security follows from the fact that if one is able to find a collision \(f_{A}(y) = f_{A}(y^{{\prime}})\), then it is possible to compute a short vector, y − y^{′} in \(\mathcal{L}_{q}^{{\ast}}(A)\).
Algorithm 5.3 Ajtai’s hash
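A minimal sketch of this construction in Python; the parameter sizes below are toy values for illustration only, much smaller than what the parameter choice above would dictate:

```python
import random

def keygen(n, m, q, seed=0):
    """Sample the public matrix A in Z_q^{n x m} that indexes f_A."""
    rng = random.Random(seed)
    return [[rng.randrange(q) for _ in range(m)] for _ in range(n)]

def f(A, y, q):
    """Ajtai's hash f_A(y) = A y (mod q), for inputs y with small entries
    (binary here, i.e., d = 2)."""
    return [sum(a * b for a, b in zip(row, y)) % q for row in A]

# toy parameters: d = 2, n = 4, q = n^2 = 16, m = 2 n log2(q) = 32,
# so 32 input bits are compressed to 4 coordinates mod 16 (16 bits)
n, q, m = 4, 16, 32
A = keygen(n, m, q)
y = [random.Random(1).randrange(2) for _ in range(m)]
digest = f(A, y, q)
# a collision f_A(y) = f_A(y') would reveal the short lattice vector y - y'
```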
This proposal is really simple and can be implemented efficiently; in practice, however, hash functions are designed in an ad hoc way, without the theoretical guarantees provided by a security proof, so that they are faster than Ajtai’s construction. Moreover, if an attacker has access to sufficiently many hash values, then it is possible to recover the fundamental domain of \(\mathcal{L}_{q}^{{\ast}}(A)\), allowing the attacker to compute collisions easily.
In 2011, Stehlé and Steinfeld [86] proposed a collision-resistant hash function family with better algorithms, whose construction will be important for digital signature schemes, as we will show in Sect. 5.4.
5.3 Lattice-Based Encryption
5.3.1 GGH
The GGH cryptosystem [42] allows us to easily understand the use of lattices in public-key cryptography. This cryptosystem uses the orthogonality of bases in the key pair definition. The private key is defined as a basis \(B_{\mathrm{priv}}\) formed by nearly orthogonal vectors, namely, vectors with Hadamard ratio close to 1.
The encryption algorithm adds a noise vector \(r \in \mathbb{R}^{n}\) to the plaintext \(m \in \mathcal{L}\), obtaining the ciphertext \(c = m + r\).
The decryption algorithm must be able to remove the inserted noise; equivalently, it must solve an instance of the CVP problem.
In lattices of high dimension, if the Hadamard ratio of the basis is close to zero, then the CVP problem becomes harder. Thus, we can define the public key as a basis B_{pub} such that \(\mathcal{H}(B_{\mathrm{pub}})\) is close to zero. Furthermore, if we know the private key B_{priv}, then it is possible to use Babai’s algorithm [3], described below, to recover the plaintext.
Algorithm 5.4 Babai’s algorithm
The general idea of Babai’s algorithm is to represent the vector c in the private basis \(B_{\mathrm{priv}}\) by solving a linear system of n equations. As \(c \in \mathbb{R}^{n}\), to obtain a point of the lattice \(\mathcal{L}\), each coefficient \(t_{i} \in \mathbb{R}\) must be rounded to the nearest integer a_{i}, an operation denoted \(a_{i} \leftarrow \lfloor t_{i}\rceil \). This procedure is simple and works very well provided the basis B_{priv} is sufficiently orthogonal, which keeps the rounding errors small.
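The procedure just described can be sketched as follows in Python: solve the linear system c = Σ t_i b_i by Gaussian elimination, round each coefficient, and map back to the lattice. The 2-dimensional basis in the example is an arbitrary illustrative choice:

```python
def babai_round(B, c):
    """Babai's rounding algorithm: write c = sum_i t_i * B[i] over the reals,
    round each t_i to the nearest integer a_i, and return sum_i a_i * B[i]."""
    n = len(B)
    # augmented matrix of the system  sum_i t_i * B[i][k] = c[k]
    M = [[B[i][k] for i in range(n)] + [c[k]] for k in range(n)]
    for col in range(n):                              # Gauss-Jordan elimination
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                factor = M[r][col] / M[col][col]
                M[r] = [a - factor * b for a, b in zip(M[r], M[col])]
    t = [M[i][n] / M[i][i] for i in range(n)]
    a = [round(ti) for ti in t]                       # nearest-integer coefficients
    return [sum(a[i] * B[i][k] for i in range(n)) for k in range(n)]

# with a fairly orthogonal basis, the returned lattice point is close to the target
B_priv = [[3, 1], [1, 2]]
assert babai_round(B_priv, [5.3, 7.8]) == [7, 9]
```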
One way to attack the cryptosystem is to try to reduce the basis B_{pub} in order to obtain shorter vectors with Hadamard ratio close to 1. In dimension two the problem can be easily solved using Gauss reduction (Algorithm 5.1). For higher dimensions the problem is considered hard, although 1982 saw a great advance with the invention of the LLL algorithm [50]. Thus, the cryptosystem parameters must be designed to resist LLL basis reduction.
5.3.2 NTRU
The NTRU cryptosystem [45] is originally formulated over polynomial rings, but it can also be described in terms of lattices, because its underlying problem can be interpreted as instances of SVP and CVP. Hence, solving these problems would yield an attack on the cryptosystem if its parameters are not carefully chosen.
The cryptosystem uses the following polynomial rings: \(R = \mathbb{Z}[x]/(x^{N} - 1)\), \(R_{p} = (\mathbb{Z}/p\mathbb{Z})[x]/(x^{N} - 1)\) and \(R_{q} = (\mathbb{Z}/q\mathbb{Z})[x]/(x^{N} - 1)\), where N, p, q are positive integers.
Definition 18
Given positive integers d_{1} and d_{2}, we define \(\mathcal{T} (d_{1},d_{2})\) as the class of polynomials that have d_{1} coefficients equal to 1, d_{2} coefficients equal to − 1 and the remaining coefficients equal to zero. These polynomials are called ternary polynomials.
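Sampling from \(\mathcal{T} (d_{1},d_{2})\) amounts to shuffling a fixed multiset of coefficients; a minimal sketch (parameters and seed are arbitrary):

```python
import random

rng = random.Random(42)  # fixed seed so the sketch is reproducible

def sample_ternary(N, d1, d2):
    """Sample from T(d1, d2): d1 coefficients equal to 1, d2 equal to -1,
    and the remaining N - d1 - d2 equal to 0 (coefficient list, low to high)."""
    coeffs = [1] * d1 + [-1] * d2 + [0] * (N - d1 - d2)
    rng.shuffle(coeffs)
    return coeffs

f = sample_ternary(11, 4, 3)  # a random element of T(4, 3) with N = 11
assert f.count(1) == 4 and f.count(-1) == 3 and len(f) == 11
```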
The parameters are given by (N, p, q, d), where N and p are prime numbers, \(\gcd (p,q) =\gcd (N,q) = 1\) and q > (6d + 1)p. The private key corresponds to the polynomials \(f(x) \in \mathcal{T} (d + 1,d)\) and \(g(x) \in \mathcal{T} (d,d)\). The public key is the polynomial h(x) ≡ F_{q}(x). g(x) (mod q), where F_{q}(x) is the multiplicative inverse of f(x) in R_{q}.
Given a message m(x) ∈ R with coefficients in the interval \([-p/2,p/2]\), a polynomial r(x) is chosen at random and the ciphertext is computed as \(e(x) \equiv ph(x).r(x) + m(x)\pmod q\). In more detail:
KeyGen. Choose \(f \in \mathcal{T} (d + 1,d)\) such that f is invertible in R_{q} and in R_{p}, and choose also \(g \in \mathcal{T} (d,d)\). Compute F_{q}, the inverse of f in R_{q}, and analogously F_{p}, the inverse of f in R_{p}. The public key is given by h = F_{q}. g.
Encrypt. Given plaintext m ∈ R_{p}, choose randomly \(r \in \mathcal{T} (d,d)\) and compute \(e \equiv pr.h + m\pmod q\), where h is the public key.
Decrypt. Compute \(a = \lfloor f.e\rceil _{q} \equiv \lfloor pg.r + f.m\rceil _{q}\), i.e., f. e mod q with coefficients center-lifted to the interval \((-q/2,q/2]\). Finally, the message can be obtained by computing \(m \equiv F_{p}.a\pmod p\).
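The whole toy scheme fits in a short script. The sketch below uses deliberately tiny, insecure parameters: N = 7, p = 3, d = 2, and a prime q = 41 > (6d + 1)p = 39, chosen prime purely so that both ring inverses can be computed with the extended Euclidean algorithm (typical NTRU parameter sets instead take q a power of two):

```python
import random

# Toy NTRU over R_q = (Z/q)[x]/(x^N - 1); polynomials are coefficient
# lists, low degree first. Both p and q are prime here (see lead-in).
N, p, q, d = 7, 3, 41, 2
rng = random.Random(1)

def trim(a):
    a = list(a)
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def pmul(a, b, P):                      # plain polynomial product over GF(P)
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return trim(out)

def psub(a, b, P):                      # a - b over GF(P)
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return trim([(x - y) % P for x, y in zip(a, b)])

def pdivmod(a, b, P):                   # quotient and remainder over GF(P)
    a = a[:]
    qc = [0] * max(1, len(a) - len(b) + 1)
    inv = pow(b[-1], P - 2, P)          # P must be prime
    for i in range(len(a) - len(b), -1, -1):
        c = a[i + len(b) - 1] * inv % P
        qc[i] = c
        for j, bj in enumerate(b):
            a[i + j] = (a[i + j] - c * bj) % P
    return trim(qc), trim(a)

def inverse(f, P):
    """Inverse of f in GF(P)[x]/(x^N - 1) by extended Euclid, or None."""
    mod = [P - 1] + [0] * (N - 1) + [1]          # x^N - 1
    r0, r1 = mod, trim([c % P for c in f])
    s0, s1 = [0], [1]                  # invariant: s_i * f == r_i (mod x^N - 1)
    while r1 != [0]:
        quo, rem = pdivmod(r0, r1, P)
        r0, r1 = r1, rem
        s0, s1 = s1, psub(s0, pmul(quo, s1, P), P)
    if len(r0) != 1:
        return None                    # gcd(f, x^N - 1) is not a unit
    c = pow(r0[0], P - 2, P)
    return ([x * c % P for x in s0] + [0] * N)[:N]

def conv(a, b, P):                      # product in (Z/P)[x]/(x^N - 1)
    out = [0] * N
    for i in range(N):
        for j in range(N):
            out[(i + j) % N] = (out[(i + j) % N] + a[i] * b[j]) % P
    return out

def center(a, P):                       # center-lift coefficients into (-P/2, P/2]
    return [c - P if c > P // 2 else c for c in (x % P for x in a)]

def ternary(d1, d2):                    # a random element of T(d1, d2)
    c = [1] * d1 + [-1] * d2 + [0] * (N - d1 - d2)
    rng.shuffle(c)
    return c

# KeyGen: resample f until it is invertible in both R_q and R_p.
while True:
    f = ternary(d + 1, d)
    Fq, Fp = inverse(f, q), inverse(f, p)
    if Fq is not None and Fp is not None:
        break
g = ternary(d, d)
h = conv(Fq, [x % q for x in g], q)     # public key h = F_q . g

# Encrypt m (coefficients in {-1, 0, 1}) with random r: e = p.h.r + m mod q.
m = [1, 0, -1, 1, 0, 0, -1]
r = ternary(d, d)
t = conv(h, [x % q for x in r], q)
e = [(p * t[i] + m[i]) % q for i in range(N)]

# Decrypt: the center-lift of f.e mod q equals p.g.r + f.m exactly (all
# coefficients are below q/2 because q > (6d+1)p), so reducing mod p and
# multiplying by F_p recovers m.
a = center(conv([x % q for x in f], e, q), q)
recovered = center(conv(Fp, [x % p for x in a], p), p)
assert recovered == m
```

The resampling loop in KeyGen mirrors the condition in the scheme that f be invertible in both rings; with these parameters almost every ternary f qualifies.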
5.3.3 LWE (Learning with Errors)-Based Encryption
In this section, we present a cryptosystem based on the LWE problem, namely an efficient proposal with a security proof based on worst-case problems over lattices [80]. This proof was a quantum reduction; in other words, it shows that a vulnerability in the cryptosystem implies the existence of a quantum algorithm that solves hard problems over lattices. In 2009, Peikert gave a classical reduction for the security proof [73].
Definition 19
The LWE problem consists in finding the vector \(s \in \mathbb{Z}_{q}^{n}\), given a collection of noisy equations \(\langle s,a_{i}\rangle + e_{i} = b_{i}\pmod q\), for 1 ≤ i ≤ m. The values e_{i} are small errors inserted according to a distribution \(\mathcal{D}\), generally taken to be a (discretized) Gaussian distribution.
KeyGen. Choose a ∈ R_{q} uniformly at random and generate s and e in R using the distribution \(\mathcal{D}\). The private key is given by s, while the public key is given by \((a,b = a.s + e)\). (Note that the scheme actually uses the ring variant of the problem, ring-LWE [52]: vectors over \(\mathbb{Z}_{q}\) are replaced by elements of the polynomial ring R_{q}.)
Encrypt. To encrypt d bits, interpret them as the coefficients of a polynomial \(z \in R\). The encryption algorithm then chooses \(r,e_{1},e_{2} \in R\), using the same distribution \(\mathcal{D}\), and computes (u, v) in the following way:$$\displaystyle\begin{array}{rcl} u& =& a.r + e_{1}\pmod q\text{,} {}\\ v& =& b.r + e_{2} + \lfloor q/2\rfloor.z\pmod q. {}\\ \end{array}$$
Decrypt. To decrypt, the algorithm computes$$\displaystyle{v - u.s = (r.e - s.e_{1} + e_{2}) + \lfloor q/2\rfloor.z\pmod q.}$$
With a suitable choice of parameters, every coefficient of \((r.e - s.e_{1} + e_{2})\) has magnitude at most q∕4, so each plaintext bit can be recovered by inspecting the corresponding coefficient of the result: if the coefficient is closer to 0 than to q∕2, the bit is 0; otherwise it is 1.
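A toy version of the plain (non-ring) scheme, in Regev’s subset-sum style, illustrates the rounding step; all parameters below are illustrative and far too small to be secure:

```python
import random

n, m, q = 8, 16, 97                 # toy parameters; errors stay below q/4
rng = random.Random(7)

# KeyGen: secret s and m noisy samples b_i = <a_i, s> + e_i (mod q),
# with errors drawn from the tiny set {-1, 0, 1}.
s = [rng.randrange(q) for _ in range(n)]
A = [[rng.randrange(q) for _ in range(n)] for _ in range(m)]
b = [(sum(ai * si for ai, si in zip(row, s)) + rng.choice([-1, 0, 1])) % q
     for row in A]

def encrypt(bit):
    """Sum a random subset of the samples; add bit * floor(q/2) to the b-part."""
    subset = [i for i in range(m) if rng.random() < 0.5]
    u = [sum(A[i][j] for i in subset) % q for j in range(n)]
    v = (sum(b[i] for i in subset) + bit * (q // 2)) % q
    return u, v

def decrypt(u, v):
    """v - <u, s> = (sum of small errors) + bit * floor(q/2); round it."""
    x = (v - sum(uj * sj for uj, sj in zip(u, s))) % q
    return 0 if min(x, q - x) < q // 4 else 1

assert all(decrypt(*encrypt(bit)) == bit for bit in (0, 1) for _ in range(20))
```

Since at most m = 16 errors of magnitude 1 accumulate and q∕4 = 24, decryption here is always correct; real parameters instead make decryption failure merely negligible.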
Some concepts in this section, for example, the cyclotomic polynomial ring and the Gaussian distribution \(\mathcal{D}\), were recently incorporated into the NTRU scheme, allowing the construction of a semantically secure and efficient scheme for lattice-based encryption [86], whose keys and whose encryption and decryption algorithms have complexity \(\tilde{O}(\lambda )\), where λ is the security parameter.
5.3.4 Homomorphic Encryption
In 2009, Gentry proposed the first fully homomorphic encryption scheme [37], solving a problem open since 1978, when Rivest, Adleman, and Dertouzos conjectured the existence of privacy homomorphisms [62], i.e., encryption functions that are also algebraic homomorphisms. In other words, it is possible to add and multiply ciphertexts so that, upon decryption, we obtain the result of the corresponding operations applied to the underlying plaintexts.
If the plaintext space is { 0, 1 }, then bit addition corresponds to logical exclusive disjunction (XOR), while multiplication corresponds to logical conjunction (AND). Hence it is possible to compute any Boolean circuit over encrypted data, which implies that any algorithm can be evaluated homomorphically on encrypted arguments, yielding an encrypted output.
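This correspondence is easy to check in the clear: with XOR as addition mod 2 and AND as multiplication mod 2, a full adder (and hence arbitrary circuits) can be written using only the two homomorphic operations. A plaintext sketch:

```python
# XOR is addition mod 2 and AND is multiplication mod 2, so any gate -- and
# hence any Boolean circuit -- reduces to the two homomorphic operations.
# A full adder built only from add/mul mod 2:

def xor(a, b):
    return (a + b) % 2          # homomorphic addition of bits

def and_(a, b):
    return (a * b) % 2          # homomorphic multiplication of bits

def full_adder(a, b, cin):
    s = xor(xor(a, b), cin)                      # sum bit
    cout = xor(and_(a, b), and_(cin, xor(a, b)))  # carry bit
    return s, cout

# Exhaustive check: the circuit really adds three bits.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            s, cout = full_adder(a, b, c)
            assert a + b + c == s + 2 * cout
```

In an actual FHE scheme the same circuit is evaluated on ciphertexts, with the server never seeing the bits themselves.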
Using homomorphic encryption, it is possible to delegate a computation to a server while keeping its inputs confidential. This is particularly interesting for cloud computing, because it allows the construction of applications such as encrypted databases, encrypted disks, and encrypted search engines.
The computational cost of homomorphic encryption is, nevertheless, still a hindrance to its practical utilization. Recently, Brakerski and Vaikuntanathan proposed using the LWE problem to construct fully homomorphic encryption [19], reducing the algorithms’ complexity and achieving polylogarithmic overhead per operation. They introduced a new way to manage noise growth, which allows a greater number of multiplications to be executed: a modulus-reduction technique that implicitly lowers the noise growth rate, and a dimension-reduction algorithm that replaces the bootstrapping procedure by a new method (similar in many respects), leading to better parameters. Nevertheless, even considering recent optimizations, homomorphic encryption remains impractical.
5.4 Digital Signatures
GGH and NTRU cryptosystems can be converted into digital signature schemes [12]. However, such proposals lack a security proof and, in fact, there are attacks that recover the private key given a sufficiently large number of signatures [63].
In 2008, Gentry, Peikert, and Vaikuntanathan [39] created a new kind of trapdoor function f with an extra property: an efficient algorithm that, using the trapdoor, samples elements from the preimage of f. A composition of Gaussians is used to obtain a point close to a lattice vector; this distribution has standard deviation larger than the maximum norm of the basis vectors, so that its reduction modulo the fundamental parallelepiped is indistinguishable from the uniform distribution. Furthermore, the construction does not reveal the lattice geometry, because the Gaussian distribution is spherical. Given a message m and a hash function H that maps plaintexts to the range of f, we compute the point y = H(m); the signature is \(\delta = f^{-1}(y)\), and it is verified by checking that f(δ) = H(m). This kind of construction was proposed by Bellare and Rogaway [7], using trapdoor permutations and modeling H as a random oracle; it yields a digital signature scheme with existential unforgeability under adaptive chosen-message attacks. A Gaussian is used to generate the noise e such that f(e) = y and \(y = v + e\), for a point v chosen uniformly in the lattice. Thus, the construction has a security proof based on worst-case lattice problems.
This construction can be viewed in terms of two functions: \(f_{A}(x) = Ax\pmod q\), from Ajtai’s construction, and \(g_{A}(s,e) = A^{T}s + e\), from the LWE problem, where the first function is surjective and the second is injective. In 2012, Micciancio and Peikert [58] showed a simple, secure, and efficient way to invert g_{A} and to sample from the preimage of f_{A}, allowing the construction of an efficient digital signature scheme. In their proposal the Gaussian composition can be parallelized (in the earlier work [39], and in subsequent proposals [86], it was inherently sequential), leading to a concrete improvement. These optimizations apply to any application based on the function g_{A} or on sampling from the preimage of f_{A}; hence they are important not only for digital signatures but also for constructing encryption secure against adaptive chosen-ciphertext attacks.
5.5 Other Applications
Lattice-based cryptography is interesting not only because it resists quantum attacks but also because it is a flexible tool for the construction of cryptosystems. In particular, the ring-LWE problem has become more and more important, as it allows us to construct stronger trapdoor functions, with better parameters for both security and performance [58].
- multilinear maps. Bilinear pairings can be used in different contexts, for example in identity-based encryption. The generalization of this concept, called multilinear maps, is very useful and, although no construction was known for a while, many applications had already been suggested. Using the concept of noise, also present in homomorphic encryption, Garg, Gentry, and Halevi achieved the first candidate construction of multilinear maps [34];
- identity-based encryption. For some time, identity-based encryption was only achievable through bilinear pairings. Using lattices, many proposals were put forward [17, 39], built upon the dual scheme \(\mathcal{E}\), composed of the algorithms \(\{\mathop{DualKeyGen}\nolimits,\mathop{DualEnc}\nolimits,\mathop{DualDec}\nolimits \}\), as pointed out in Sect. 5.3.3. Specifically, \(\mathop{DualKeyGen}\nolimits\) computes the private key as the error e, chosen using the Gaussian distribution, while the public key is given by u = f_{A}(e). To encrypt a bit b, the algorithm \(\mathop{DualEnc}\nolimits\) randomly chooses s, chooses x and e^{′} according to the Gaussian, and computes \(c_{1} = g_{A}(s,x)\) and \(c_{2} = u^{T}s + e^{{\prime}} + b.\lfloor q/2\rfloor \). The ciphertext is \(\langle c_{1},c_{2}\rangle\). Finally, \(\mathop{DualDec}\nolimits\) computes \(c_{2} - e^{T}c_{1}\) and recovers b by checking whether the result is closer to 0 or to \(\lfloor q/2\rfloor \). Then, given a hash function H, modeled as a random oracle mapping identities to public keys of the dual cryptosystem, the identity-based encryption scheme is constructed as follows:
Setup. Choose the public key \(A \in \mathbb{Z}_{q}^{n\times m}\) and the master key as the trapdoor s, according to the description in Sect. 5.4;
Extraction. Given the identity \(\mathrm{id}\), we compute u = H(id) and the decryption key \(e = f^{-1}(u)\), using the trapdoor preimage sampling algorithm with trapdoor s;
Encrypt. Given bit b, return \(\langle c_{1},c_{2}\rangle =\mathop{ DualEnc}\nolimits (u,b)\);
Decrypt. Return \(\mathop{DualDec}\nolimits (e,\langle c_{1},c_{2}\rangle )\).
- functional encryption. Functional encryption is a new cryptographic primitive that opens new horizons [51]. In this system, a public function f(x, y) determines what a user holding the key for y can infer from a ciphertext c_{x}, produced according to parameter x. Within this model, whoever encrypts a message m can choose beforehand what kind of information is obtained after decryption. Moreover, a trusted party is responsible for generating the key s_{y}, which can be used to decrypt c_{x}, returning f(x, y) as output, without necessarily revealing further information about m. With this approach it is possible to define identity-based encryption as a special case of functional encryption, with x = (m, id) and f(x, y) = m if and only if y = id. A recent result [35] proposes a functional encryption scheme based on lattices that is able to handle any polynomial-size Boolean circuit;
- attribute-based encryption. This is a special case of functional encryption with x = (m, ϕ) and f(x, y) = m if and only if ϕ(y) = 1. Namely, decryption works when y, the decryptor’s attribute, satisfies the predicate ϕ, so that the encryptor can impose an access control policy (the predicate ϕ) on the ciphertext. There are proposals achieving this kind of operation from the LWE problem [40], and the multilinear-map construction mentioned above has been used by Sahai and Waters [82] to propose an attribute-based scheme for any Boolean circuit, showing once more the versatility of lattice-based cryptography;
- obfuscation. There is a negative result proving that obfuscation is impossible in a certain security model. However, the construction of indistinguishability obfuscation, which in a different security model is provably the best possible approach, was recently proposed. The LWE problem was used to construct this kind of primitive [35], as part of a functional encryption construction. Such schemes, therefore, although versatile, are relevant mostly for their theoretical importance rather than for their practical applications.
Concluding Remarks
As we have seen, not all is lost for the deployment of efficient and flexible cryptosystems in a scenario where large quantum computers are a technological reality. Many proposals have already attained a fairly good level of maturity, and one can even discern patterns among schemes based on different underlying security assumptions, in the sense that strikingly similar schemes exist based on codes, lattices, \(\mathcal{M}\mathcal{Q}\) systems, and sometimes even hash functions. Determining how far these analogies go (and why) is an interesting line for future investigation.
At the same time, practical considerations are ever more often being addressed in the literature, as they are as important as theoretical ones in a truly post-quantum scenario where conventional systems would have to be replaced.
The fact that post-quantum schemes can also provide functionalities not available elsewhere has already been, and is likely to continue to be, a strong additional motivation for further research in the area.
Acknowledgements
Paulo S. L. M. Barreto, Ricardo Dahab and Julio López acknowledge support by the Brazilian National Council for Scientific and Technological Development (CNPq) research productivity grants 306935/2012-0, 311530/2011-7, and 309258/2011-1, respectively.
References
- 1.M. Ajtai, Generating hard instances of lattice problems (extended abstract), in Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, STOC ‘96 (ACM, New York, 1996), pp. 99–108Google Scholar
- 2.M. Alabbadi, S.B. Wicker, A digital signature scheme based on linear error-correcting block codes, in Advances in Cryptology – Asiacrypt ‘94, vol. 917 of Lecture Notes in Computer Science (Springer, New York, 1994), pp. 238–348Google Scholar
- 3.L Babai, On lovsz lattice reduction and the nearest lattice point problem. Combinatorica 6(1), 1–13 (1986)CrossRefMATHMathSciNetGoogle Scholar
- 4.M. Baldi, F. Chiaraluce, Cryptanalysis of a new instance of McEliece cryptosystem based on QC-LDPC code, in IEEE International Symposium on Information Theory – ISIT 2007 (IEEE, Nice, 2007), pp. 2591–2595Google Scholar
- 5.M. Baldi, F. Chiaraluce, M. Bodrato, A new analysis of the McEliece cryptosystem based on QC-LDPC codes, in Security and Cryptography for Networks – SCN 2008, vol. 5229 of Lecture Notes in Computer Science (Springer, Amalfi, 2008), pp. 246–262Google Scholar
- 6.R. Barbulescu, P. Gaudry, A. Joux, E. Thomé, A quasi-polynomial algorithm for discrete logarithm in finite fields of small characteristic. HAL-INRIA technical report, http://hal.inria.fr/hal-00835446/ (2013)
- 7.M. Bellare, P. Rogaway, Random oracles are practical: A paradigm for designing efficient protocols, in Proceedings of the 1st ACM conference on Computer and communications security (ACM, 1993), pp. 62–73Google Scholar
- 8.T.P. Berger, P.-L. Cayrel, P. Gaborit, A. Otmani, Reducing key length of the McEliece cryptosystem, in Progress in Cryptology – Africacrypt 2009, Lecture Notes in Computer Science (Springer, Gammarth, 2009), pp. 77–97Google Scholar
- 9.E. Berlekamp, R. McEliece, H. van Tilborg, On the inherent intractability of certain coding problems. IEEE Trans. Inf. Theory 24(3), 384–386 (1978)CrossRefMATHGoogle Scholar
- 10.D. Bernstein, T. Lange, C. Peters, Smaller decoding exponents: ball-collision decoding, in Advances in Cryptology – Crypto 2011, vol. 6841 of Lecture Notes in Computer Science (Springer, Santa Barbara, 2011), pp. 743–760Google Scholar
- 11.D.J. Bernstein, List decoding for binary Goppa codes, in Coding and Cryptology—Third International Workshop, IWCC 2011, Lecture Notes in Computer Science (Springer, Qingdao, 2011), pp. 62–80Google Scholar
- 12.D.J. Bernstein, J. Buchmann, E. Dahmen, Post-Quantum Cryptography (Springer, Heidelberg, 2008)Google Scholar
- 13.D.J. Bernstein, T. Lange, C. Peters, Attacking and defending the McEliece cryptosystem, in Post-Quantum Cryptography – PQCrypto 2008, vol. 5299 of Lecture Notes in Computer Science (Springer, New York, 2008), pp. 31–46. http://www.springerlink.com/content/68v69185x478p53g
- 14.D.J. Bernstein, T. Lange, C. Peters, Wild McEliece, in Selected Areas in Cryptography – SAC 2010, vol. 6544 of Lecture Notes in Computer Science (Springer, Waterloo, 2010), pp. 143–158Google Scholar
- 15.G. Bertoni, J. Daemen, M. Peeters, G. Van Assche, Keccak specifications. Submission to NIST (2010). http://keccak.noekeon.org/Keccak-specifications.pdf
- 16.G. Bertoni, J. Daemen, M. Peeters, G. Van Assche, Sponge functions. ECRYPT Hash Workshop 2007 (2007). Also available as public comment to NIST from http://www.csrc.nist.gov/pki/HashWorkshop/Public_Comments/2007_May.html
- 17.D. Boneh, C. Gentry, M. Hamburg, Space-efficient identity based encryption without pairings, in FOCS, pp. 647–657 (2007)Google Scholar
- 18.A. Braeken, C. Wolf, B. Preneel, A study of the security of unbalanced oil and vinegar signature schemes, in Topics in Cryptology – CT-RSA 2005, vol. 3376 of Lecture Notes in Computer Science (Springer, New York, 2005), pp. 29–43Google Scholar
- 19.Z. Brakerski, V. Vaikuntanathan, Efficient fully homomorphic encryption from (standard) lwe. Electron. Colloq. Comput. Complex. 18, 109 (2011)Google Scholar
- 20.J. Buchmann, C. Coronado, E. Dahmen, M. Dring, E. Klintsevich, CMSS – an improved merkle signature scheme, in Progress in Cryptology INDOCRYPT 2006, vol. 4329 of Lecture Notes in Computer Science (Springer, New York, 2006), pp. 349–363Google Scholar
- 21.J. Buchmann, E. Dahmen, S. Ereth, A. Hlsing, M. Rckert, On the security of the Winternitz one-time signature scheme, in Progress in Cryptology – AFRICACRYPT 2011, vol. 6737 of Lecture Notes in Computer Science (Springer, New York, 2011), pp. 363–378Google Scholar
- 22.J. Buchmann, E. Dahmen, A. Hlsing, XMSS-a practical secure signature scheme based on minimal security assumptions, in Cryptology ePrint Archive - Report 2011/484. ePrint (2011)Google Scholar
- 23.J. Buchmann, E. Dahmen, E. Klintsevich, K. Okeya, C. Vuillaume, Merkle signatures with virtually unlimited signature capacity, in Applied Cryptography and Network Security – ACNS 2007, vol. 4521 of Lecture Notes in Computer Science (Springer, New York, 2007), pp. 31–45Google Scholar
- 24.J. Buchmann, E. Dahmen, M. Schneider, Merkle tree traversal revisited, in Post-Quantum Cryptography – PQCrypto 2008, vol. 5299 of Lecture Notes in Computer Science (Springer, New York, 2008), pp. 63–78Google Scholar
- 25.S. Contini, A.K. Lenstra, R. Steinfeld, VSH, an Efficient and Provable Collision Resistant Hash Function. Cryptology ePrint Archive, Report 2005/193 (2005). http://eprint.iacr.org/
- 26.N. Courtois, M. Finiasz, N. Sendrier, How to achieve a McEliece-based digital signature scheme, in Advances in Cryptology – Asiacrypt 2001, vol. 2248 of Lecture Notes in Computer Science (Springer, Gold Coast, 2001), pp. 157–174Google Scholar
- 27.R.A. DeMillo, D.P. Dobkin, A.K. Jones, R.J. Lipton, Foundations of Secure Computation (Academic Press, New York, 1978)MATHGoogle Scholar
- 28.J. Ding, D. Schmidt, Rainbow, a new multivariable polynomial signature scheme, in International Conference on Applied Cryptography and Network Security – ACNS 2005, vol. 3531 of Lecture Notes in Computer Science (Springer, New York, 2005), pp. 164–175Google Scholar
- 29.C. Dods, N. Smart, M. Stam, Hash based digital signature schemes, in Cryptography and Coding, vol. 3796 of Lecture Notes in Computer Science (Springer, New York, 2005), pp. 96–115Google Scholar
- 30.J.-C. Faugère, A. Otmani, L. Perret, J.-P. Tilllich, Algebraic cryptanalysis of McEliece variants with compact keys, in Advances in Cryptology – Eurocrypt 2010, vol. 6110 of Lecture Notes in Computer Science (Springer, Nice, 2010), pp. 279–298Google Scholar
- 31.P. Gaborit, Shorter keys for code based cryptography, in International Workshop on Coding and Cryptography – WCC 2005 (ACM Press, Bergen, 2005), pp. 81–91Google Scholar
- 32.R.G. Gallager, Low-density parity-check codes. Information Theory, IRE Transactions on 8(1), 21–28 (1962)CrossRefMATHMathSciNetGoogle Scholar
- 33.M.R. Garey, D.S. Johnson, Computers and Intractability – A Guide to the Theory of NP-Completeness (W. H. Freeman and Company, New York, 1979)MATHGoogle Scholar
- 34.S. Garg, C. Gentry, S. Halevi, Candidate multilinear maps from ideal lattices, in Advances in Cryptology – EUROCRYPT 2013, pp. 1–17 (2013)Google Scholar
- 35.S. Garg, C. Gentry, S. Halevi, M. Raykova, A. Sahai, B. Waters, Candidate indistinguishability obfuscation and functional encryption for all circuits, IACR Cryptology ePrint Archive 2013, 451 (2013)Google Scholar
- 36.V. Gauthier, G. Leander, Practical key recovery attacks on two McEliece variants, in International Conference on Symbolic Computation and Cryptography – SCC 2010 (Springer, Egham, 2010)Google Scholar
- 37.C. Gentry, A fully homomorphic encryption scheme. PhD thesis, Stanford University, 2009. crypto.stanford.edu/craig
- 38.C. Gentry, Encrypted messages from the heights of cryptomania, in TCC, pp. 120–121 (2013)Google Scholar
- 39.C. Gentry, C. Peikert, V. Vaikuntanathan, Trapdoors for hard lattices and new cryptographic constructions, in Proceedings of the 40th Annual ACM Symposium on Theory of Computing, STOC ‘08 (ACM, New York, 2008), pp. 197–206Google Scholar
- 40.C. Gentry, A. Sahai, B. Waters, Homomorphic encryption from learning with errors: Conceptually-simpler, asymptotically-faster, attribute-based, in Advances in Cryptology – CRYPTO ‘89, vol. 8042 of Lecture Notes in Computer Science (Springer, New York, 2013), pp. 75–92Google Scholar
- 41.J.K. Gibson, The security of the Gabidulin public key cryptosystem, in Advances in Cryptology – Eurocrypt ‘96, vol. 1070 of Lecture Notes in Computer Science (Springer, Zaragoza, 1996), pp. 212–223Google Scholar
- 42.O. Goldreich, S. Goldwasser, S. Halevi, Public-key cryptosystems from lattice reduction problems, in Advances in Cryptology – CRYPTO ‘97, vol. 1294 of Lecture Notes in Computer Science (Springer, New York, 1997), pp. 112–131Google Scholar
- 43.V.D. Goppa, A new class of linear error correcting codes. Problemy Peredachi Informatsii 6, 24–30 (1970)MATHMathSciNetGoogle Scholar
- 44.A. Hülsing, Practical forward secure signatures using minimal security assumptions. PhD thesis, TU Darmstadt, 2013Google Scholar
- 45.J. Hoffstein, J. Pipher, J.H. Silverman, Ntru: A ring-based public key cryptosystem, in Lecture Notes in Computer Science (Springer, New York, 1998), pp. 267–288Google Scholar
- 46.W.C. Huffman, V. Pless, Fundamentals of Error-Correcting Codes (Cambridge University Press, Cambridge, 2003)CrossRefMATHGoogle Scholar
- 47.A. Kipnis, A. Shamir, Cryptanalysis of the oil and vinegar signature scheme, in ed. by H. Krawczyk. Advances in Cryptology – Crypto 1998, vol. 1462 of Lecture Notes in Computer Science (Springer, New York, 1998), pp. 257–266Google Scholar
- 48.A. Kipnis, J. Patarin, L. Goubin, Unbalanced oil and vinegar signature schemes, in ed. by J. Stern. Advances in Cryptology – EUROCRYPT ‘99, vol. 1592 of Lecture Notes in Computer Science (Springer, New York, 1999), pp. 206–222Google Scholar
- 49.L. Lamport, Constructing digital signatures from a one way function, in SRI International. CSL-98 (1979)Google Scholar
- 50.A.K. Lenstra, H.W. Lenstra, L. Lovsz, Factoring polynomials with rational coefficients. Math. Ann. 261(4), 515–534 (1982)CrossRefMATHMathSciNetGoogle Scholar
- 51.A. Lewko, T. Okamoto, A. Sahai, K. Takashima, B. Waters, Fully secure functional encryption: Attribute-based encryption and (hierarchical) inner product encryption, in H. Gilbert. Advances in Cryptology – EUROCRYPT 2010, vol. 6110 of Lecture Notes in Computer Science (Springer, Berlin/Heidelberg, 2010), pp. 62–91Google Scholar
- 52.V. Lyubashevsky, C. Peikert, O. Regev, On ideal lattices and learning with errors over rings. Adv. Cryptology EUROCRYPT 2010 6110/2010(015848), 1–23 (2010)Google Scholar
- 53.F.J. MacWilliams, N.J.A. Sloane, The Theory of Error-Correcting Codes, vol. 16 (North-Holland Mathematical Library, Amsterdam, 1977)MATHGoogle Scholar
- 54.S.M. Matyas, C.H. Meyer, J. Oseas, Generating strong one-way functions with cryptographic algorithm, IBM Techn. Disclosure Bull., 1985Google Scholar
- 55.R. McEliece, A public-key cryptosystem based on algebraic coding theory. The Deep Space Network Progress Report, DSN PR 42–44, 1978. http://ipnpr.jpl.nasa.gov/progressreport2/42-44/44N.PDF. Acesso em:.
- 56.R.C. Merkle, Secrecy, Authentication, and Public Key Systems. Stanford Ph.D. thesis, 1979Google Scholar
- 57.R.C. Merkle, A digital signature based on a conventional encryption function, in Advances in Cryptology – CRYPTO’87, vol. 435 of Lecture Notes in Computer Science (Springer, New York, 1987), pp. 369–378Google Scholar
- 58.D. Micciancio, C. Peikert, Trapdoors for lattices: Simpler, tighter, faster, smaller, in ed. by D. Pointcheval, T. Johansson. Advances in Cryptology EUROCRYPT 2012, vol. 7237 of Lecture Notes in Computer Science (Springer, Berlin/Heidelberg, 2012), pp. 700–718Google Scholar
- 59.V.S. Miller, Use of elliptic curves in cryptography, in Advances in Cryptology — Crypto ‘85 (Springer, New York, 1986), pp. 417–426Google Scholar
- 60.R. Misoczki, N. Sendrier, J.-P. Tilllich, P.S.L.M. Barreto, MDPC-McEliece: New McEliece variants from moderate density parity-check codes. Cryptology ePrint Archive, Report 2012/409, 2012. http://eprint.iacr.org/2012/409
- 61.C. Monico, J. Rosenthal, A. Shokrollahi, Using low density parity check codes in the McEliece cryptosystem, in IEEE International Symposium on Information Theory – ISIT 2000 (IEEE, Sorrento, 2000), p. 215Google Scholar
- 62.E.M. Morais, R. Dahab, Encriptao homomrfica, in XII Simpsio Brasileiro em Segurana da Informao e de Sistemas Computacionais: Minicursos, SBSeg (2012)Google Scholar
- 63.P. Nguyen, O. Regev, Learning a parallelepiped: Cryptanalysis of ggh and ntru signatures, in S. Vaudenay. Advances in Cryptology - EUROCRYPT 2006, vol. 4004 of Lecture Notes in Computer Science (Springer, Berlin/Heidelberg, 2006), pp. 271–288Google Scholar
- 64.H. Niederreiter, Knapsack-type cryptosystems and algebraic coding theory. Prob. Control Inf. Theory 15(2), 159–166 (1986)MATHMathSciNetGoogle Scholar
- 65.NIST, Federal Information Processing Standard FIPS 186-3 – Digital Signature Standard (DSS) – 6. The Elliptic Curve Digital Signature Algorithm (ECDSA) (National Institute of Standards and Technology (NIST), Gaithersburg, 2012). http://csrc.nist.gov/publications/fips/fips186-3/fips_186-3.pdf
- 66.A. K. D. S. Oliveira, J. López. Implementação em software do Esquema de Assinatura Digital de Merkle e suas variantes, in Brazilian Symposium on Information and Computer Systems Security – SBSeg 2013 (SBC, 2013)Google Scholar
- 67.A. Otmani, J.-P. Tillich, L. Dallot, Cryptanalysis of two McEliece cryptosystems based on quasi-cyclic codes. Math. Comput. Sci. 3(2), 129–140 (2010)CrossRefMATHMathSciNetGoogle Scholar
- 68.J. Patarin, The oil and vinegar signature scheme, in Dagstuhl Workshop on Cryptography (1997). TransparenciesGoogle Scholar
- 69.J. Patarin, L. Goubin, Trapdoor one-way permutations and multivariate polynomials, in ICICS’97, vol. 1334 of Lecture Notes in Computer Science (Springer, New York, 1997), pp. 356–368Google Scholar
- 70.J. Patarin, Hidden fields equations (hfe) and isomorphisms of polynomials (ip): Two new families of asymmetric algorithms, in ed. by U. Maurer. Advances in Cryptology – EUROCRYPT ‘96, vol. 1070 of Lecture Notes in Computer Science (Springer, Berlin/Heidelberg, 1996), pp. 33–48Google Scholar
- 71.J. Patarin, L. Goubin, N. Courtois, Improved algorithms for isomorphisms of polynomials, in Advances in Cryptology – EUROCRYPT ‘98 (Springer, New York, 1998), pp. 184–200CrossRefGoogle Scholar
- 72.N.J. Patterson, The algebraic decoding of Goppa codes. IEEE Trans. Inf. Theory 21(2), 203–207 (1975)CrossRefMATHGoogle Scholar
- 73.C. Peikert, Public-key cryptosystems from the worst-case shortest vector problem: extended abstract, in Proceedings of the 41st Annual ACM Symposium on Theory of Computing, STOC ‘09 (ACM, New York, 2009), pp. 333–342Google Scholar
- 74.A. Petzoldt, S. Bulygin, J. Buchmann, CyclicRainbow – a multivariate signature scheme with a partially cyclic public key, in ed. by G. Gong, K. Gupta. Progress in Cryptology – Indocrypt 2010, vol. 6498 of Lecture Notes in Computer Science (Springer, Berlin/Heidelberg, 2010), pp. 33–48Google Scholar
- 75.A. Petzoldt, S. Bulygin, J. Buchmann, Selecting parameters for the Rainbow signature scheme, in ed. by N. Sendrier Post-Quantum Cryptography – PQCrypto 2010, vol. 6061 of Lecture Notes in Computer Science (Springer, Berlin/Heidelberg, 2010), pp. 218–240. Extended Version: http://eprint.iacr.org/2010/437
- 76.A. Petzoldt, S. Bulygin, J. Buchmann, Linear recurring sequences for the UOV key generation, in International Conference on Practice and Theory in Public Key Cryptography – PKC 2011, vol. 6571 of Lecture Notes in Computer Science (Springer, Berlin/Heidelberg, 2011), pp. 335–350Google Scholar
- 77.A. Petzoldt, S. Bulygin, J. Buchmann, Cyclicrainbow - a multivariate signature scheme with a partially cyclic public key, in ed. by G. Gong, K.C. Gupta. INDOCRYPT, volume 6498 of Lecture Notes in Computer Science (Springer, New York, 2010), pp. 33–48Google Scholar
- 78.B. Preneel, Analysis and design of cryptographic hash functions. PhD thesis, Katholieke Universiteit Leuven, 1983Google Scholar
- 79. L. Rausch, A. Hülsing, J. Buchmann, Optimal parameters for \(\mathrm{XMSS}^{\mathrm{MT}}\), in CD-ARES 2013, vol. 8128 of Lecture Notes in Computer Science (Springer, New York, 2013), pp. 194–208
- 80. O. Regev, The learning with errors problem (invited survey), in IEEE Conference on Computational Complexity (IEEE Computer Society, Washington, DC, 2010), pp. 191–204
- 81. R.L. Rivest, A. Shamir, L. Adleman, A method for obtaining digital signatures and public-key cryptosystems. Commun. ACM 21, 120–126 (1978)
- 82. A. Sahai, B. Waters, Attribute-based encryption for circuits from multilinear maps. CoRR, abs/1210.5287 (2012)
- 83. N. Sendrier, Decoding one out of many, in Post-Quantum Cryptography – PQCrypto 2011, ed. by B.-Y. Yang, vol. 7071 of Lecture Notes in Computer Science (Springer, Berlin/Heidelberg, 2011), pp. 51–67. doi: 10.1007/978-3-642-25405-5-4
- 84. P.W. Shor, Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM J. Comput. 26, 1484–1509 (1997)
- 85. A. Shoufan, N. Huber, H. Molter, A novel cryptoprocessor architecture for chained Merkle signature scheme, in Microprocessors and Microsystems (Elsevier, Amsterdam, 2011), pp. 34–47
- 86. D. Stehlé, R. Steinfeld, Making NTRU as secure as worst-case problems over ideal lattices, in Proceedings of the 30th Annual International Conference on Theory and Applications of Cryptographic Techniques: Advances in Cryptology, EUROCRYPT’11 (Springer, Berlin/Heidelberg, 2011), pp. 27–47
- 87. J. Stern, A method for finding codewords of small weight. Coding Theory Appl. 388, 106–133 (1989)
- 88. J. Stern, Can one design a signature scheme based on error-correcting codes? in Advances in Cryptology – ASIACRYPT’94, vol. 917 of Lecture Notes in Computer Science (Springer, New York, 1994), pp. 426–428
- 89. M. Szydlo, Merkle tree traversal in log space and time, in Advances in Cryptology – Eurocrypt 2004, vol. 3027 of Lecture Notes in Computer Science (Springer, New York, 2004), pp. 541–554
- 90. R.M. Tanner, Spectral graphs for quasi-cyclic LDPC codes, in IEEE International Symposium on Information Theory – ISIT 2001 (IEEE, Washington, DC, 2001), p. 226
- 91.E. Thomae, A generalization of the Rainbow band separation attack and its applications to multivariate schemes. Cryptology ePrint Archive, Report 2012/223, 2012. http://eprint.iacr.org/2012/223.
- 92. C. Wieschebrink, Two NP-complete problems in coding theory with an application in code based cryptography, in IEEE International Symposium on Information Theory – ISIT 2006 (IEEE, Seattle, 2006), pp. 1733–1737
- 93. R.S. Winternitz, Producing a one-way hash function from DES, in Advances in Cryptology – CRYPTO ’83 (Springer, New York, 1983), pp. 203–207
- 94. C. Wolf, B. Preneel, Taxonomy of public key schemes based on the problem of multivariate quadratic equations. IACR Cryptology ePrint Archive 2005, 77 (2005)
- 95. T. Yasuda, K. Sakurai, T. Takagi, Reducing the key size of Rainbow using non-commutative rings, in Topics in Cryptology – CT-RSA 2012, vol. 7178 of Lecture Notes in Computer Science (Springer, New York, 2012), pp. 68–83